1
An Overnight Revolution, Fifty Years in the Making
Lessons of the EHR Era
In Ernest Hemingway's classic 1926 novel The Sun Also Rises, a character named Mike Campbell loses all his money due to a series of reckless financial choices. "How did you go bankrupt?" his friend Bill asks. "Two ways," Campbell replies. "Gradually and then suddenly."
When it comes to the digital transformation of healthcare, we have the "gradually" part down pat-no industry has been slower than healthcare in disrupting the status quo with digital tools. Could the introduction of generative AI be our "suddenly" moment, when a breathtaking new technology crashes into a healthcare system in desperate need of change, igniting true transformation? Answering this question requires some appreciation of the history of healthcare digitization.
Before 2008, very few US hospitals and physicians' offices recorded their data-lab tests, X-ray results, and the observations and treatment plans of physicians and other health professionals-in an electronic database. Back then, after I took a history from and examined a patient, I would scribble my observations, along with my "differential diagnosis" (the list of conditions I was considering, such as "probable pneumonia, rule out congestive heart failure"), and my diagnostic and treatment plans on a piece of paper; this would be stored-along with dozens of other observations by medical consultants, nurses, physical therapists, social workers, and more-in a large binder. I'd then write my "Doctor's Orders" on another piece of paper, filed in a different section of the binder. Finally, I'd place the binder on a lazy Susan in the nursing station of the hospital or clinic and rotate a color-coded wheel on the outside of the chart, cueing the clerk that there were orders to be carried out.
Having healthcare data stored as physical artifacts-paper notes, X-rays, EKG tracings-created a host of problems. If I referred my patient to see a specialist, I prayed that the chart and X-rays would end up at the specialist's office in time for the appointment. They often didn't. And, of course, when the record was a collection of papers and films, giving patients access to their own medical information was impossible.
While an analog health record created numerous challenges when it came to the care of individual patients, there was an even greater consequence: The healthcare system could never undergo the kind of transformation that might make it better, safer, more satisfying, and less expensive until our records were digitized.
You'd think that hospitals and medical practices would have invested in digitization without much prompting, but they did not. This was partly because of a vicious circle. Since so few healthcare systems were buying electronic health records in the 1990s and 2000s, there was very little business case for investors to put their money into EHR companies. This, in turn, meant that the few EHRs that existed at the time were profoundly flawed, having failed to benefit from either the iterative improvement cycles needed to refine any piece of software or the investments required to create a world-class company. Many of the digital giants-Google, Microsoft, IBM, GE-took a stab at building an electronic medical record of one sort or another. Every one of these flamed out, as the companies learned that healthcare, with its high stakes, overwhelming complexity, narrow profit margins, and copious regulations, was a tough nut to crack.
The result was that medicine remained a paper-based industry decades after most other industries had gone digital. This, in turn, meant that the digital transformation of healthcare, facilitated by advanced data analytics, continued to be elusive.
Things changed in 2009. That year, the US government scrambled to infuse billions of dollars into a sputtering economy in the wake of the Great Recession. The mantra of the architects of the stimulus package, you might remember, was that the feds would invest in "shovel-ready projects"-ones that injected money directly into the veins of the American economy to vanquish the recession.
Normally, such money would go to building bridges, repairing roads, and other major infrastructure projects. But a group of health policy wonks in the Obama administration saw a once-in-a-lifetime chance to turbocharge the use of electronic health records by positioning EHR adoption as just another shovel-ready project. A major talking point was a 2005 study by the RAND Corporation predicting that the widespread implementation of EHRs would cut US healthcare costs by $81 billion per year. Using both political persuasion and adroit sleight of hand, the Obama team managed to slip $30 billion into the $830 billion stimulus package to offer incentives to doctors and hospitals that implemented EHRs, while threatening cuts in reimbursement for those that failed to do so in the coming years.
The gambit worked. In 2008, fewer than one in ten US hospitals had an EHR. A decade later, fewer than one in ten did not. Uptake was similarly rapid in physicians' offices. After a generation of dawdling, the American healthcare system had finally gone digital.
The electronic health record improved many things. It became possible for clinicians in multiple locations to view a patient's record simultaneously. Some basic decision support-such as alerts that warned us that we were about to prescribe a drug to which the patient was allergic-materialized, and healthcare systems could now look for gaps in care, such as women overdue for a mammogram. Electronic prescriptions became the norm and were far less error-prone than paper ones. Virtual care became feasible, not only because of improvements in videoconferencing technology but also because EHRs allowed physicians to view a patient's medical history without depending on the uncertain availability of a paper chart.
Yet the first decade of our new digital healthcare system was chock-full of unanticipated consequences, mostly unhappy ones, particularly for doctors. Since physicians could now be prompted to enter stuff into the EHR, everybody who had an interest in influencing the doctor's actions suddenly had the electronic means to do so. To appreciate why this matters, one needs to understand the role of clinicians in healthcare's ecosystem.
In the pre-digital era, it was often said that "the most expensive piece of technology in a hospital is the doctor's pen," because while physicians' salaries only account for 8 percent of healthcare costs, the costs that emanate from our clinical decisions represent about 80 percent. Our choice to order a particular drug, scan, or procedure can easily amount to many thousands of dollars; one extra day in a hospital costs about $10,000. Add a visit to the OR or a stay in the ICU and that number mushrooms.
Once we substituted keyboards for pens, hospital administrators, regulators, and payers spied an opportunity to shape what the doctor did in real time. Armed with this power, they used it, which was utterly unsurprising. The result was that the EHR turned doctors into high-priced box-checkers. One click to document you had queried the patient about her family's medical history, another to indicate you had examined at least nine body parts (no, you didn't get a point for each extremity), one more to show that you counseled the patient about wearing seatbelts, and a bolded option to ensure that you asked if the patient felt safe at home. The premium on billing led to a game of Name the Right Diagnosis, such that an elderly patient who was weak and confused was now said to be suffering from "functional quadriplegia" (ICD-10-CM code R53.2), verbiage that paid the hospital far more than "weak and confused" (ICD-10-CM code R41.82, perhaps with an R53.1 chaser). An entire industry sprang up that reviewed physicians' digital notes and, in an impressive feat of bureaucratic doublespeak, helped ensure "Clinical Documentation Integrity," prompting the doctor to record a slightly different term that would lead to higher reimbursement.
In short, the job of being a physician was transformed by the electronic health record-and not for the better. Doctors found that they were spending half their day staring at their EHR and clicking through screens, nearly double the time they spent with their patients. Physician burnout reached alarming levels in 2022, with more than half of American doctors experiencing symptoms of exhaustion and detachment. EHR documentation was a key factor, significantly diminishing both well-being and career satisfaction.
When I wrote The Digital Doctor in 2015, documentation burden was physicians' predominant complaint about their electronic health records. Alas, there was worse to come. Beginning around then, in what felt like a positive development, patients were given their own version of the EHR: a "patient portal." The most popular of these is Epic's MyChart, which currently has nearly two hundred million patient-users in the US. On their portal, patients could now view their basic medical information and interact with the health system to request refills, make appointments, and the like. A 2016 US law even gave patients real-time access to their doctor's notes and laboratory and X-ray results. This democratization of care, health policy experts predicted, would allow patients to become more discerning consumers-creating a competitive marketplace that would lead to higher quality and lower costs. At least, that was the hope.
But there were problems with the EHR's patient portal. The fact that patients could read their clinicians' notes meant that they could now see lots of medical jargon and incomprehensible test results, with virtually no accompanying explanations. In a 2024 article, a Colorado physician named Benjamin Vipler recounted a conversation with his mother after she received a result through her portal:
Mom: Hey Ben, I got my colonoscopy results back in my app. The polyp wasn't colon cancer.
Son: Oh? That's great news!
Mom: Yeah. What's lymphoma?
There was more. When I'm interacting with Bank of America or Delta Air Lines via their app or website, I can solve 99 percent of my problems online without having to speak to a human. But healthcare's version of digital democratization hit a hellish sweet spot: It gave patients just enough information to confuse them and just enough digital access to create the illusion that they could accomplish key transactions online, while providing few of the tools that would allow them to do these things.
In response to the portal's relative unhelpfulness in meeting patients' needs, EHR companies added a little button to the portal, one that seemed innocent enough at first: Click here to send a message to your doctor. And that's exactly what people did. A river of electronic messages began to flow; that river became a tsunami in early 2020 when the Covid-19 pandemic forced most doctors' offices to close except for emergencies. If we physicians were as clever as lawyers and accountants, we would have figured out a way to charge thirty dollars for every six minutes we spent answering these queries, but, at least at first, all of this was free to the patients-and their insurance companies.
In essence, the EHR patient portal-particularly the "send a message" button-created an expectation of 24-7-365 access to the physician and team. This sounds ideal, of course-what patient wouldn't want that kind of access to their doctor and healthcare system? But no one had considered the consequences of opening this floodgate of messages, so there was no workforce, workflow, or business model to sustain it. The average family physician was soon spending an hour and a half after dinner each day (we called it "pajama time") dealing with inbox messages.
All this time spent documenting in the EHR and trying to keep up with inbox messages might have been tolerable to clinicians if the EHR was helping us do our jobs. But mostly, it wasn't. Sure, we'd periodically receive an alert warning us not to prescribe a medicine to which the patient was allergic, but the preponderance of alerts were false alarms. Early predictive tools, such as ones to signal that a patient might have sepsis, were clunky, distracting, and mostly wrong.
Very few physicians are Luddites. We're glad the electronic health record is there and would never want to return to paper. And virtually all clinicians favor giving patients digital tools to help them manage their own health and healthcare. But most of us find the EHR to be surprisingly unhelpful in our efforts to provide higher-quality, safer, and less expensive care, and surprisingly harmful to the goal of having a satisfying and sustainable clinical practice. A 2013 report by the RAND Corporation-yes, the same think tank that projected that EHRs would save the American healthcare system $81 billion per year-concluded that the actual savings were essentially zero.
Clearly something had gone very wrong with healthcare's transition from paper to digital. When I asked physicians and nurses about it in the mid-2010s, their answers tended to be versions of "electronic health records are expensive billing machines that were sold to the chief financial officer with no clinical input whatsoever." A few added, "We should have waited five years until the EHRs were better."
These responses had the ring of partial truth-I was reminded of the parable of the blind man feeling the elephant's leg and thinking he'd come upon a tree. But they didn't seem like the whole story, which led me to write The Digital Doctor. The answers I found taught me that healthcare's bumpy path to digital transformation was completely predictable, hewing closely to the experience of digitization in other industries. This history is central to understanding and predicting the next phase of healthcare's digital evolution, this one fueled by AI.
The Productivity Paradox
In 1993, economist Erik Brynjolfsson, then at MIT and now at Stanford, coined the term "the Productivity Paradox of Information Technology," citing repeated examples in diverse fields in which vaunted technologies initially failed-sometimes for decades-to deliver on their promise of improving productivity. In 1987, after noticing that modern factories and Wall Street trading floors were peppered with new computers but that the promised productivity gains had not materialized, Nobel Prize-winning economist Robert Solow quipped, "You can see the computer age everywhere but in the productivity statistics."
The bad news about the Productivity Paradox is that it seems to be universal-the benefits of so-called general-purpose technologies (technologies that transform our work and lives across a range of activities) never live up to the hype, at least in their first few years of widespread use.
The good news is that-for those general-purpose technologies that are destined to be keepers, like electricity, automobiles, and the internet-the paradox eventually sorts itself out. (Not all heralded general-purpose technologies turn out to be winners, of course-if you doubt me, just don your Google Glass and use it to locate your nuclear-fusion-powered Segway.) After a delay of several-and sometimes many-years, those general-purpose technologies that are destined to change the world begin to demonstrate their superpowers.
Research by Brynjolfsson and others has shown that there are two main reasons for the lag in productivity and quality gains. The first is that the early versions of the tools are invariably clunky; they only get better after many use-feedback-improvement cycles. If you're of a certain age, you probably remember your first experience using a dial-up modem or the AltaVista search engine, examples of technologies that needed to mature before becoming transformative.