Cal Newport, writing at the New Yorker:

Technological transitions often stumble when we expect them to sprint. In 1989, the Stanford economist Paul David wanted to understand why so many companies were so slow to adopt computer technology; for historical perspective, he turned to the history of the electric dynamo, which had been invented around a hundred years before, and which, before it transformed industrial production, had also been adopted slowly. In his paper “The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox,” published in the American Economic Review, David explained that, at the turn of the century, most factories were powered by massive central steam engines. The engines turned overhead shafts, which were connected by an intricate array of belts and pulleys to close-packed machinery. When electric motors were first introduced, factory owners tried to integrate them into their existing setups; often, they’d simply replace the hulking steam engine with a giant electric dynamo. This introduced some conveniences — no one had to shovel coal — but also created complexities. It was hard to keep all the electrical components working; many factory owners opted to stay with steam.
It took decades for factory owners to figure out how to make the most of electric power. Eventually, they discovered that the best approach was to put a small motor on each individual piece of machinery. Since a factory no longer needed to draw power from a central engine, its equipment could be spread out. This, in turn, changed the nature of industrial architecture. Buildings that no longer required reinforced ceilings to house shafts, belts, and pulleys could incorporate windows and skylights, of the sort we know today from urban loft buildings. Inertia, David found, had been part of the problem. Factory owners who had spent a lot of money and time building physical plants organized around central-drive trains were reluctant to commit to complex, expensive overhauls. There were imaginative obstacles: powering each machine with its own individual motor may seem like an obvious idea now, but in fact it represented a sharp break from the centralized-power model that had dominated for the previous hundred and fifty years. Finally, technological barriers stood in the way — small issues, compared to the invention of electricity, but persistent and important ones nonetheless. Someone, for instance, had to figure out how to construct a building-wide power grid capable of handling the massively variable load created by many voltage-hungry mini-motors being turned off and on unpredictably. Until that happened, it was central power or bust.
In some respects, we may be in an electric-dynamo moment for remote work. In theory, we have the technology we need to make remote work workable. And yet most companies that have tried to graft it onto their existing setups have found only mixed success. In response, many have stuck with what they know. Now the coronavirus pandemic has changed the equation. Whole workplaces have gone remote; steam engines have been outlawed. The question is whether, having been forced to embrace this new technology, we can solve the long-standing problems that have thwarted its adoption in the past. Some useful innovation is possible on an individual level. As a newly minted remote worker, you may find that demands on your attention are actually more incessant and intrusive than they used to be — a natural consequence when a workplace depends more than ever on phone calls, e-mails, and video conferences. You might respond by consolidating all of your appointments into a given half of the day — say, between 1 p.m. and 5 p.m. — preserving the other hours, by default, for actually working on the items discussed.