In the summer of 1796, a British country doctor named Edward Jenner invented the modern concept of vaccination and changed the course of medical history through an experiment that most researchers today would consider an almost criminal lapse of ethics.
Jenner, then 47, had noticed that farmhands who contracted harmless cowpox while milking cows seemed immune to the far more serious and often lethal disease of smallpox. He tried to duplicate that response by scraping the pus from a cowpox sore on a milkmaid’s arm and placing it in a cut on the arm of an 8-year-old boy named James Phipps.
Two months later, Jenner took the next fateful step. He scraped a smallpox sore and inoculated his patient with the deadly virus, then sat back to watch the results.
Luckily for young James, the test did not kill him–in fact, he had gained immunity to smallpox. Jenner went on to develop the technique he called vaccination, from the Latin word “vacca,” meaning cow.
Although Jenner’s blatant, intentional endangerment of James Phipps’ life would be unacceptable in our time, the kind of problem he encountered is typical of medical advances throughout the centuries, even to this day. His failing was not that of a stereotypical mad scientist delving into forces better left untouched, nor even simply that of a gifted man who did questionable things.
The difficult legacy of such transgressions is that sometimes, as in Jenner’s case, they have been inseparably linked to genuine leaps of knowledge and improvements in human welfare. That leaves an enduring temptation for medical researchers to put some individual patients at higher risk for the greater public good.
In fact, our society often condones such behavior, according to Arthur Caplan, director of the Center for Bioethics at the University of Pennsylvania. Few historical accounts stress the dubious aspects of Jenner’s experiments, a sign of society’s willingness to give researchers a pass when bad choices yield great achievements.
“History says the people who got the answers get the rewards,” Caplan said. “The important thing is what you discover, not how you get there. That’s been a recurring problem in research ethics.”
Louis Pasteur, the other great figure in vaccination, also had real ethical shortcomings, Caplan said. Although Pasteur created vaccines for rabies and anthrax and showed that germs cause disease, he was sometimes surprisingly willing to give children vaccines he knew to be unproven.
Yet nearly all such concerns are dwarfed by the fact that in 1980, less than two centuries after Jenner and little more than a century after Britain first required smallpox vaccination for infants, world health authorities declared smallpox eradicated as a living disease. The sole remaining viral samples sit in freezers in the United States and Russia, witnesses to one of humanity’s greatest medical triumphs.
It would be reassuring if recent trends gave clear proof that such discoveries can flower without the taint of moral compromise. The record, however, is decidedly mixed.
One fundamental conflict may spring from what is now considered the gold standard for any medical discovery: the randomized clinical trial. Some experts worry that such methods, by their very nature, may have the unintended effect of neglecting the interests of individual patients in favor of a larger scientific goal.
Before the 20th Century, medical discoveries often flowed from a mere handful of observations.
Paul Broca, the French surgeon who is credited with demonstrating in 1861 that the human-language faculty is localized in the left side of the brain, based his theory at first on the study of a single patient. Broca gave a brilliant anatomical report, detailing the story of a man who lost the power of speech after extensive injury to his brain’s left hemisphere. The patient, a man named Leborgne, went by the nickname of “Tan”–the only sound he could make when asked his name.
Yet Broca’s bold conclusions from just one case would not have withstood the scrutiny of a modern medical journal. New statistical approaches perfected in the 20th Century demanded larger sample sizes, with study participants placed into different treatment groups at random to ensure a fair test of competing drugs or medical theories.
That standard took permanent hold only in the early 1960s, when the National Institutes of Health adopted randomized clinical trials for its investigators. A huge proliferation of trials followed, assessing treatments for cancer, heart disease and a wide range of other ailments. In fact, clinical trials have grown so common that they now rank as the treatment of choice for hundreds of thousands of patients who either lack the money for conventional care or are desperate to get innovative therapies for their dire conditions.
But despite the advantages of such new methods, some experts fear the approach may also jeopardize the traditional covenant between doctor and patient, which dates back more than 2,000 years to the time of Hippocrates.
The problem is this: As care-givers, doctors are duty-bound to give their patients the best possible treatment available, even if that judgment is based on nothing more than a strong hunch. But as medical researchers, those same doctors must gather new knowledge, and the accepted way to do that is by assigning patients at random to treatment groups or to receive no treatment at all. A physician’s personal beliefs about which therapy will prove best cannot be allowed to interfere with the study’s outcome.
The result is that in order for medical science to take its next leap, some patients may be asked to stick with a treatment their physicians would not otherwise recommend, said Dr. Samuel Hellman, a radiation oncologist at the University of Chicago and former dean of the medical school there. Such an approach created a storm among AIDS patients starting in the late 1980s, when activists rejected the notion that dying patients would be given placebos.
“Putting the doctor in the role of investigator creates a conflict of interest,” Hellman said. “You risk putting the utilitarian goals of society in conflict with the rights of individual patients.”
Sometimes, Hellman said, the best course may be unconventional studies that preserve the doctor-patient relationship, though perhaps at the cost of some scientific rigor. For example, patients in a study might receive different therapies according to the hunches of their individual doctors, instead of being thrown randomly into treatment groups.
To be sure, Hellman and most other doctors believe the advent of rigorous trials has been a key to the revolutionary improvements in medical care over the last 100 years. In contrast to previous ages, when therapies had their foundation in a physician’s spiritual strength or semi-magical prowess, the last century saw the rise of “evidence-based medicine,” in which nearly every diagnostic test or surgical procedure in common use has its roots in years of methodical scientific testing.
Some of the dilemmas arising from the new medical ideal may simply be a product of growing pains.
As the 1900s dawned, a typical doctor would not even have attended medical school as we now know it. Most sick patients received treatment at the hands of physicians and surgeons whose training still resembled a medieval apprenticeship, with little formal schooling in basic chemistry, physiology or experimentally proven therapies.
One of the loudest calls for change came from Abraham Flexner, an educator who in 1910 authored an influential report for the Carnegie Foundation on the state of American medical schools. The Flexner Report, which cited an urgent need for more instruction in the basic sciences, changed the face of American medical training. Flexner himself helped plan the University of Chicago medical school, which opened in 1927.
For all the phenomenal medical discoveries of the 20th Century, touching everything from the mechanisms of brain function to the structure of DNA, the detailed ethical rules that now govern American medical research date largely to the aftermath of 1972, when the public learned of the now-infamous Tuskegee experiment.
In that study, begun in 1932 by the U.S. Public Health Service, researchers withheld treatment for syphilis from about 400 African-American men, even decades after treatment with penicillin became available. Outcry over the deep betrayal by the federal government led to more research safeguards, especially the universal requirement of obtaining informed consent from every study participant.
Of course such rules have not put an end to wrongdoing in medical research, and the story of Edward Jenner may offer one explanation that goes beyond the eternal hunt for research dollars. It’s likely that even the most misguided investigators believe they are doing the right thing, and that if their breakthroughs require acts of reckless daring, history will be kind.
Such attitudes will take time to change, if they can be changed at all. The shift in views on questions of medical ethics since 1972 may be just the first sign of a transformation as sweeping as that begun by the physician Andreas Vesalius in the 1540s, when medicine emerged from the Dark Ages.
Vesalius, a native Belgian who taught at the University of Padua in Italy, was the first medical scholar to seriously question the teachings of Galen, the Greek physician whose writings dominated Roman medicine in the 2nd Century A.D. Although Galen had done groundbreaking anatomical work, his studies were largely limited to animals because Roman law prohibited the dissection of human cadavers. Some of the blood vessels and brain structures he described do not even exist in people.
It fell to Vesalius to do the methodical human dissections that overturned more than 1,000 years of misguided assumptions based on Galen’s work. Vesalius published his findings in 1543, the same year Nicolaus Copernicus published his treatise detailing the radical notion that the earth revolves around the sun, not vice versa.
The way Vesalius described his new insight offers a good lesson for medical insurgents intent on dethroning old idols. In the end, he said, “I had to put my own hand into the business.”