THE END OF PRIMARY CARE
The patient shivered as she waited for the doctor to climb the stairs to the second floor. She was 6 years old, and a fever had kept her home for the past few days. Her mother, half of a working couple with four other children, called the doctor that afternoon. The child listened as her mother’s voice grew closer, punctuated by an occasional low-pitched question. Fear overcame malaise, and the girl slipped quietly under the bed. The doctor, a tall, thin man with reddish-blond hair and freckles, glanced around the room and then somewhat awkwardly lowered himself onto the floor. He smiled at the girl. “Hello, Lisa,” he said to me. And then we talked — right there, underneath the bed, as if it were the most natural thing in the world.
I don’t remember what we talked about or how long the conversation lasted. But that morning and for years after, I trusted the man with the freckled face to care for me — whenever care was needed. It’s that bond that drew me and others to the practice of medicine. We longed to be the person entrusted with the care of another. For both doctor and patient, there exists in that relationship the possibility of profound satisfaction. The patient feels content, knowing that his physical health has an active, knowledgeable champion. The doctor is rewarded with a similar gratification watching a lifetime of education and experience redeemed in practice. The mutual satisfaction of good doctoring is difficult to discuss in an era so focused on quantifiable results. Yet patients look for it. And med students anticipate it as they enter their training. That connection is certainly why I knew when I decided to become a doctor that I wanted to be like my own Dr. James.
I arrived at medical school as part of the class of 1996. That year, a third of the medical students in the United States planned, like me, to practice primary care medicine. Even here at Yale, long the bastion of superspecialization, primary care medicine had made its mark: there were the Primary Care Club and the Primary Care dinners, where we heard speakers talk about the future of primary care medicine. I felt at home among these doctors who struggled with the issue of caring for the whole patient in addition to the pharmacology, physiology and technology of medicine. We would be the very linchpin of 21st-century medicine. Even in the ivory tower, the complexity of taking care of a human being over time — what one physician recently dubbed “big doctoring” — and the important role it was to play in the entire architecture of America’s health care created its own kind of excitement. When I finished medical school, I entered a residency program devoted to teaching new doctors the skills essential to those on the front lines of care. After training, I joined the faculty of Yale’s primary care program.
Over the past few years, though, I have witnessed a troubling change: the future of primary care doctoring is in danger. Applications to this program and to programs like it have plummeted. The year I applied to Yale, I was among nearly a thousand graduates vying for one of its 30 positions. Last year that number dropped to just over 500. Yale’s program is one of the largest and most influential in the country but, like others, has recently struggled to attract the most talented graduates. Finally, after watching the applicant pool dwindle, after years of discussion, we cut the size of this year’s incoming class by a third. The changing marketplace has led to some smart innovations in what we teach and how we teach it. But it has also driven us to consider more disconcerting changes, like ridding ourselves of the “primary care” label we once wore so proudly.
We certainly aren’t alone: according to the National Resident Matching Program, the organization that coordinates placement for postgraduate medical training, the number of primary care residency programs has dropped by more than a third over the past decade. And most of those remaining are now smaller. During the same period, the number of doctors applying for postgraduate training overall has increased. “When students come to medical school, being somebody’s doctor is really what they want to do. Somehow, by the time they leave, we’ve changed their minds; they’d rather do just about anything but that.” That’s the assessment of Dr. Allan Goroll, an early advocate of primary care training and the author of the most widely used textbook in primary care medicine. What happened? Why don’t medical-school graduates want to be “somebody’s doctor” anymore? And it’s not just doctors: patients, even those with insurance, are voting with their feet, increasingly choosing to visit emergency rooms and specialists directly. Is this just the marketplace speaking? Or could it be that the idea of the personal physician is out of date and that the time has come to retire the picture of Marcus Welby?
The whole concept of a specialty devoted to primary care is both very old and pretty new. At the turn of the last century, there were no specialists. Doctors hung out a shingle and performed every conceivable kind of medicine. While individual doctors may have been well known for their skills in surgery, or in the delivery room, their training was no different. They were all just somebody’s doctor. Then, early in the 20th century, it became clear that surgery required a different type of training than the other forms of doctoring, and the American College of Surgeons was born in 1913. Until World War II, there were just two kinds of doctors — physicians and surgeons.
The discovery of penicillin and the rapid development of medical technology moved medicine out of the home in the 50’s. In hospitals and in new laboratories, specialty medicine flourished. The sheer number of sick patients of all types gave doctors the opportunity to develop a comprehensive knowledge of a single disease or field. These newly trained specialists led the charge against heart disease, cancer, pneumonia and diabetes. Newspaper headlines trumpeted their success as these diseases began to give up their secrets to this generation of physicians armed with new training and newer technology.
Of course, the general practitioner had neither: he was just a doc with four years of medical school plus a yearlong internship under his belt when he started practicing. He had gotten most of his training on the job. How could he compete with these new specialists? Graduating doctors flocked to hospital-based training programs to enter the rarefied air of specialty medicine. In 1940, three-quarters of doctors were generalists. Three decades later, only 20 percent were. Around that time, though, it became clear that in trading up from the general practitioner to the specialist, some things had been lost.
In 1961, The New England Journal of Medicine published a paper that used the new science of epidemiology to show that specialists, with all their fancy training, were actually rather poorly prepared for the patients they saw once they went into practice. These doctors learned medicine by treating the sickest patients, yet many ended up caring for a relatively healthy population. Most of their patients were far too well to be in the hospital and wanted their doctors to help them stay that way. Hospital-based training left these doctors singularly unprepared for that task. As a result, specialty doctors, like the old G.P.’s, got the training they really needed on the job.
The author of that paper, Kerr White, argued for a new ”specialty” in medicine: one that specifically prepared doctors to take care of patients before they got sick enough to need hospitalization. Suddenly, the generalist physician was redefined from the doctor who had no specialty training to one who was specially educated to care for the whole patient; a doctor who specialized in the breadth of medical care rather than the deep knowledge of a single disease or organ system.
Family practice was the first of these generalist specialties. Created in 1969, it trained physicians to deal with the broad scope of medical problems faced by the old G.P.: everything from childbirth to surgery, from mumps to hypertension. In 1973, Harvard took the lead and created the first internal-medicine residency dedicated to primary care. The idea spread rapidly, and by 1990 there were dozens of primary care residency programs. Yale was a relative latecomer: its program started in the late 80’s.
By the time I applied for residency training in 1996, primary care was hot. More than half of all doctors graduating from medical school that year entered a primary care residency: either family practice, pediatrics or internal medicine. The enthusiasm was fueled by three decades of research showing that primary care doctors delivered care that was, in many ways, better than that provided by specialists. A widely touted 1996 report issued by the Institute of Medicine described primary care as the future of sound medical care. “Primary care improves the quality and efficiency of care and expands access to appropriate services,” the authors summarized. “It also forms an important bridge between personal health care and public health, to the advantage of both.”
The report cited research showing that people treated by primary care physicians spent less time in the hospital, had fewer visits to the E.R. and had fewer procedures and tests. Yet they were healthier and happier with their care than those without these doctors. “Big doctoring,” it turns out, was also good doctoring, and now we had the data to prove it. The success of primary care moved the generalists, who had long played second fiddle to the specialists, to center stage. There was even talk of a National Primary Care Day to encourage medical students to go into this specialty in order to meet the expected demand. And then, just as quickly as it began, it all came apart. The trend peaked the next year, teetered and then crashed.
The very quality of primary care that made it so attractive is what led to its downfall. Legislators, insurance companies, even physicians themselves began to look for ways to harness the expertise of primary care doctors to expand care and limit cost. But no one seemed to recognize that the basis for these economies was the bond between patient and doctor. And without that trust, the economies of primary care were lost.
The initial and most serious blow came when H.M.O.’s persuaded primary care doctors that they should take on the role of gatekeeper. Research indicated that care provided by primary care physicians was more cost-effective than that delivered by specialists. From the insurance companies’ perspective, if these doctors were already curtailing costs by getting rid of unnecessary referrals and testing, then providing them with incentives to cut costs would make the savings even greater. What could be better?
The appeal of this system for doctors was more complicated, said Dr. Steve Schroeder, a self-proclaimed card-carrying generalist and the former head of the Robert Wood Johnson Foundation. It flattered primary care physicians by placing them right where they felt they should be: deciding the best, most cost-effective options for their patients. And directing them to a specialist, if need be. That was the theory.
The generalist, formerly a second-class citizen in a world that valued specialists, would finally get the respect he deserved. Money, too, was part of it. Insurance companies promised to address the inequities built into their payment structure. Medical insurance had been developed to pay for the huge expenses of modern surgery. Office visits were an afterthought. In this system, simply performing procedures was much more highly paid than figuring out whether, or which, procedures were needed. The promise that this would be redressed made this new system very appealing to generalists.
The gatekeeper idea was presented to doctors in a 1985 paper published in The Annals of Internal Medicine. The author of that paper, John Eisenberg, an internist at the University of Pennsylvania, was an early supporter. “The gatekeeper approach,” he wrote, “sanctifies the internist’s role as primary care physician and captain of the patient’s ship.” He also recognized its potential flaws and pointed out that there was no evidence that this type of system would actually work. One study of the gatekeeper system, published in the late 70’s, did show a reduction in hospital days and specialist referrals in one plan. But not long after that study was published, hospital days and referrals rebounded, and two years later the plan went out of business. Another study showed that although a gatekeeper plan did cut costs, the physicians were uncomfortable with their role at the turnstile.
Moreover, many of the tools necessary for doctors to make cost-effective medical decisions weren’t available. While many of the therapies available to doctors were supported by evidence of their effectiveness, there was very little to help them decide which was more effective, much less which was more cost-effective. Without this data, how could a doctor be expected to routinely provide the most cost-effective treatment?
The gatekeeper system had worked well in England and Canada, where specialists were in short supply; a primary care physician was essential to move you up the line when specialty care was needed. But here, where specialists are plentiful, the gatekeeper kept people away from otherwise available specialists. It was a job despised by doctors and loathed by patients.
Gatekeeping, and the financial incentives that came with it, made patients worry that their doctors weren’t acting in their best interests. Doctors worried, too. This possible conflict of interest gnawed at the connection between doctor and patient. And in doing that, it erased much of the value of primary care medicine. Moments of trust, like mine with Dr. James under my childhood bed, are impossible if patient and doctor are worried about the costs of such care. If there is no physician whom you trust, why not go directly to a specialist or an E.R.?
The money incentive changes everything. I have never worked under a gatekeeper system, yet I am frequently in the position of talking patients out of care they may want but that won’t help them and may even harm them. Recently I saw a patient who suffers from chronic low-back pain because of her mild scoliosis. The pain eases when she is active and worsens when weather or circumstances limit her activity. Years ago she had a small fatty tumor removed from her back. She said that its removal alleviated her pain. At our last appointment she complained about her back pain and asked for a referral to the surgeon to have another operation. The exam showed that she had no evidence of a recurrence of her benign tumor, and her back pain, even by her own assessment, was no different than usual. I felt that her surgery’s apparent efficacy was probably coincidental. But only after a careful history and thorough exam could I convince her that stretching and exercise were more likely to treat the pain than surgery. If she thought that I had a stake in denying her care, I’m not sure I could have kept her off the operating table.
And the promise of pay equity has never panned out, either. Reimbursement to primary care doctors has not even kept up with inflation. Doctors, in response to lower revenue and higher costs, have tried to maintain incomes by seeing more patients. Squeezing more patients in made doctors less able to develop the relationships that were a chief reward of primary care medicine. From a patient’s point of view, this simply made getting an appointment more difficult, and when you finally saw the doctor, he seemed tired, distracted, hurried.
Technology may rescue primary care doctors as they struggle to be efficient enough to spend the time they need with their patients. What’s more, the machinery of the gatekeeping system has been put in check in recent years, first with laws and court decisions granting patients the right to a second opinion and then by the pressure of consumer choice. Will this be enough to restore the trust between a patient and his doctor? Or have physicians become so overwhelmed by the deterioration of trust and the circumstances of practice that they’ve lost sight of the values that drew them to medicine? These questions remain to be answered.
Some doctors and other critics say that primary care is just a remnant of an earlier time in medicine and that we are simply witnessing the end of a type of doctoring, like bloodletting or cupping, whose time is over. Maybe. But there is some evidence that this is not the case. In this country, people are willing to pay for what they really want, if they have the means, and a recent trend suggests that what people really want is my old Dr. James. Patients can now pay an annual fee — anywhere from $1,000 to $10,000 — and in return get a doctor who can see them when they need to be seen. A doctor who knows them and whom they trust. Sound familiar? It’s Marcus Welby repackaged as “luxury primary care.”
This week I’ll be seeing my own roster of patients, none of whom would be able to afford the expensive version of primary care, and as long as I, and others in my line of doctoring, are able to work, they won’t have to. But what happens after that? If we don’t work to balance the benefits of the primary care system with those of the specialist system, if we don’t figure out how to incorporate “big doctoring” into the system, then all we have done is postponed the real restructuring for another generation. “If we didn’t already have primary care medicine,” Allan Goroll said, “we’d just have to invent it. It’s the way we want to be cared for.”