The reason we do clinical trials…is because we didn’t used to.
There was a lot of observational science happening, but those kinds of studies don’t exactly give us the body of evidence we’re used to when it comes to proving something actually works.
One of the most famous, and earliest, examples of something close to a randomized controlled trial is the case of James Lind, the man often credited with discovering the connection between vitamin C deficiency and scurvy, way back in 1747.
Lind knew that scurvy—a disease characterized by anemia, swollen and bleeding gums, loose teeth, lethargy, and bleeding under the skin—killed a huge portion of sailors each year.
He took it upon himself to look after a group of 12 men afflicted with the illness, dividing them into groups of two and giving each pair a different remedy, ranging from cider to sea water to vinegar.
Of the two who got oranges and lemons, one recovered in under a week and the other improved enough to help care for the other sick sailors.
Though it was one of the first versions of a controlled clinical trial, the results were not rigorous enough to change even Lind’s own mind about the cure for scurvy.
He expressed doubts about the treatment himself, and the British navy didn’t start supplying ships with citrus for another few decades.
Though Lind’s experiment laid the groundwork to start early clinical trials, progress toward a scientific method for drug discovery was slow.
By the 1900s, scientific breakthroughs such as pasteurization and the discovery of microbes had begun to inspire others to extend the scientific process to medications.
But a lack of standardization or rules meant that the majority of drugs being advertised had little to no proof their claims were true or their ingredients even useful.
In 1905, the American Medical Association, frustrated with the lack of movement on the part of the federal government to evaluate medical products, decided to do something about it.
They created the Council on Pharmacy and Chemistry, which charged drug manufacturers with testing their concoctions for quality and safety. Proof of a drug’s efficacy, however, was still limited to recommendations by trusted members of the medical community.
A seal of approval was necessary to be able to advertise in the Journal of the American Medical Association (or JAMA), the leading medical journal at the time, which still exists today.
This, of course, only worked on drugs from manufacturers that cared about advertising in JAMA; other drugs still came to market without any safety testing.
One such drug was sulfanilamide.
It was one of the first “wonder drugs” to come along, an antibiotic that was a game changer for treating streptococcus infections like strep throat.
The manufacturers of the raspberry-flavored treatment had recently added a new compound to the mixture: a type of antifreeze, diethylene glycol, that turned out to be poisonous.
And because they didn’t test it in animals or humans beforehand, no one knew there was a safety issue until it was too late.
The tragedy inspired the Federal Food, Drug and Cosmetic Act of 1938.
The act required that drug makers submit safety data to the FDA (an agency whose origins date back to 1906) and that all new drugs be tested on animals before humans.
To this day, animal models are an essential part of the clinical trial process. In what are typically called “pre-clinical trials,” animals or microorganisms are used to establish whether an intervention is even worth attempting to examine in humans.
And for my tender-hearted listeners out there, fear not: there is actually a huge effort to move away from using animals in medical testing, not only because of the emotional discomfort of it, but also because animals are not humans and there’s only so much information you can glean from using them.
Instead, increasing evidence is showing that artificial intelligence-based algorithms can do a pretty good job of predicting toxicity, and lab-grown pseudo-human cells can act as a closer stand-in for pre-clinical studies.
If you’re someone who is anti-animal testing, know there’s a pretty metal name for your kind: antivivisectionists.
In the late 19th and early 20th centuries, the antivivisectionist movement also opposed the use of humans in medical experiments. It was the playwright George Bernard Shaw who came up with the term “human guinea pig.”
So at this point in history we’ve established the need for animal testing, but how did we get humans into the mix? Really through a combination of triumph and tragedy.
I often say, “all things in infectious disease come back to tuberculosis,” and the origins of clinical trials are no different.
Streptomycin was a new and promising treatment for tuberculosis.
After the Second World War, the United States was using it to great success in patients but lacked the rigorous trial data needed to prove its efficacy.
A British epidemiologist and statistician by the name of A. Bradford Hill wanted to get that trial data, but in post-war England the drug was less available and more expensive. So, to maximize the existing supply, Hill decided to *randomly* assign patients to control groups and trial groups.
Prior to this, doctors tended to assign healthier patients to the trial groups, and sicker patients to the control group, which could exaggerate the benefits of a treatment.
Hill’s approach gave us the first true “randomized” controlled trial, and he eventually suggested, with the help of colleagues, the addition of things like double blinding.
This is where neither the participant nor the researcher knows who’s getting the medication and who’s getting the placebo (a substance that has no therapeutic value).
Double blinding cuts down on potential bias if, say, you as a participant know you are getting a drug and that alone makes you feel a little better.
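If it helps to picture how that works mechanically, here’s a tiny illustrative sketch in Python, with made-up participant IDs and kit codes (it’s not modeled on any real trial’s system), showing how random assignment and blinded labels fit together:

```python
import random

# Purely illustrative: shuffle some hypothetical participants, split them
# evenly into a treatment arm and a placebo arm, then hand everything out
# under neutral kit codes so neither participants nor researchers can tell
# who is getting the drug until the allocation key is unsealed at the end.
participants = [f"participant_{i:02d}" for i in range(1, 21)]
random.seed(0)  # fixed seed only so the sketch is reproducible

shuffled = participants[:]
random.shuffle(shuffled)
half = len(shuffled) // 2

# The allocation key would be held by someone outside the day-to-day team.
allocation_key = {p: "drug" for p in shuffled[:half]}
allocation_key.update({p: "placebo" for p in shuffled[half:]})

# Everyone involved in dosing and data collection sees only these codes.
kit_codes = {p: f"KIT-{i:04d}" for i, p in enumerate(shuffled, start=1)}

print(sum(arm == "drug" for arm in allocation_key.values()), "assigned to drug,",
      sum(arm == "placebo" for arm in allocation_key.values()), "assigned to placebo")
```

Real trials use fancier schemes than a simple shuffle (block and stratified randomization, independent randomization services), but the core idea is the same: chance, not the doctor, decides who ends up in which group.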
Even with these new and innovative standards for trials, it was estimated that in 1951, about half of trials still lacked a control group.
And then came thalidomide.
A popular drug in Europe that was often used to combat morning sickness, thalidomide was found to cause severe birth defects when used in the early stages of pregnancy, including infants born without arms or legs.
At this point in time, the FDA required a certain level of safety data for drugs to be approved, and an FDA reviewer by the name of Dr. Frances Kelsey had concerns about the drug and did not approve it for sale in the U.S.
Nevertheless, sponsors of the drug had already sent samples to thousands of doctors to prescribe to patients, a practice that was common at a time when doctor anecdotes were taken as evidence of efficacy.
As a result, there were cases in the U.S. of doctors prescribing the drug, and pregnant people taking it, without either of them knowing it was experimental, and about a dozen babies were born with the physical deformities.
The outrage was immediate, and so was the change.
In 1962, Congress passed the Kefauver-Harris Drug Amendments, which required drug makers to prove both safety and efficacy through well-controlled studies, mandated an institutional review board (or IRB) approve the ethics of a study, made approval a requirement before new drugs could be marketed, and established the requirement for informed consent in clinical trials.
Informed consent and IRB approval for clinical trials are particularly important aspects of the process.
After World War II, the horrific abuses carried out by Nazis on captive people brought medical ethics to the forefront of a lot of people’s minds.
Of course, World War II wasn’t the first time unethical and immoral experimentation was undertaken. The United States has its own long history of using marginalized bodies to “advance medicine” without consent.
One of the most famous examples is the Tuskegee Syphilis Study.
The “study” began in the 1930s and followed hundreds of Black men with syphilis infections to see how the disease would progress.
The men were told they were being enrolled to help treat “bad blood,” which was a catch-all term at the time for a variety of general maladies.
However, the men were not only NOT treated for syphilis, but had new treatments such as penicillin actively withheld from them over the course of their lifetimes.
This was particularly heinous considering we already had a pretty good idea of what untreated syphilis did to a person at that time.
What happened in Tuskegee is only one example of many horrifying violations in the history of scientific progress, which is why it is so important to make sure we keep examining and revising the ethics of research.
During the early days of COVID vaccine clinical trials I had a lot of people ask, “If we need a high volume of participants quickly to learn if the vaccines work, why not just offer something to prisoners?”
And the answer is not only is that illegal, it’s super duper unethical!
People who are incarcerated are part of what’s known as “vulnerable” research populations. Their lack of freedom means that they are at extremely high risk for potential exploitation and abuse when it comes to medical research.
The only studies you can really do involving incarcerated persons are ones that cannot be done without their involvement, so studies that have to do with the environment of incarceration.
And while it’s really important to consider these vulnerable groups, you have to strike a fine balance between protecting and excluding.
Take, for example, pregnant people. No one would want to repeat the tragedy of thalidomide, but for pregnant people with chronic illnesses, the fact that many of their essential drugs have never been tested in pregnancy often means choosing between discontinuing a medication essential to their health, or potentially putting their unborn child at risk.
It’s a difficult balance to strike, but it needs to be struck with shared input from the people most affected. Baby steps (no pun intended) are being made every day.
So what does a modern clinical trial actually look like? It typically unfolds in four phases.
Phase 1: This phase is primarily to establish the safety of a drug and what dose should be given. It enrolls a small number of generally healthy people to gauge their reaction to the medication (after the pre-clinical animal or cellular testing, of course).
I think a great example of this comes from the COVID vaccine clinical trials. You may remember (though it feels like ages ago now) the story of a man in the Moderna phase 1 trial who had to go to urgent care with an extremely high fever after his second dose of the vaccine.
During this trial, Moderna was testing doses of 25 mcg, 100 mcg, and 250 mcg.
This man got the 250 mcg dose and had a serious reaction. When researchers looked at the antibody response, his levels were not significantly higher than those of the people in the 100 mcg dose group, who did not have serious reactions, so the larger dose was no longer used.
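Just to make that dose-selection logic concrete, here’s a toy sketch in Python with completely invented numbers (not Moderna’s actual data) of the kind of comparison a study team might make across dose groups:

```python
# Hypothetical phase 1 summary for three dose groups: an average antibody
# measure and a count of serious reactions. The numbers are made up purely
# to illustrate the logic of keeping the lowest dose that performs well.
dose_groups = {
    "25 mcg":  {"mean_antibody_titer": 300, "serious_reactions": 0},
    "100 mcg": {"mean_antibody_titer": 650, "serious_reactions": 0},
    "250 mcg": {"mean_antibody_titer": 680, "serious_reactions": 1},
}

# If a higher dose doesn't buy a meaningfully better immune response but
# does cause more serious reactions, there's no reason to carry it forward.
chosen = None
for dose, result in dose_groups.items():
    if result["serious_reactions"] > 0:
        continue  # drop doses with safety signals in this toy example
    if chosen is None or result["mean_antibody_titer"] > dose_groups[chosen]["mean_antibody_titer"]:
        chosen = dose

print("Dose carried into later phases (toy example):", chosen)
```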
Phase 2: This phase starts to expand the pool of participants. It’s about including more people for more safety data and starting to gauge whether the intervention is having the desired effect.
Phase 3: This is where you need to enroll enough people to show that whatever results you get aren’t due to chance, and to divide everyone into intervention and control groups (while, of course, still monitoring safety).
You might be enrolling a general population (if what you’re testing is something preventative) or you might enroll people with a certain medical condition (if what you’re testing is specific to that condition).
If you’re testing an intervention that’s the first of its kind, you’ll likely test it against a placebo. But if it’s something to treat or prevent conditions for which medications already exist, then you might test your new thingy against the current standard of care.
This is because, with certain interventions, it would be totally unethical to just not give people the thing we know works, in order to prove our new thing works better.
For example, a new medication for epilepsy. You wouldn’t enroll thousands of people who are used to one type of antiseizure medication and just stop half of them from getting any medication so you can compare their outcomes to the people getting the new medication.
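To give a flavor of what “enough people that the results aren’t due to chance” means in numbers, here’s a rough back-of-the-envelope sample-size sketch in Python. It uses the classic two-proportion formula with made-up response rates; real trial statisticians account for far more than this:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_group(p_control, p_treatment, alpha=0.05, power=0.80):
    """Rough participants-per-arm estimate for comparing two proportions
    (say, the fraction of patients who improve on the new drug versus the
    comparator), using the standard normal-approximation formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p_control + p_treatment) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_control * (1 - p_control)
                                 + p_treatment * (1 - p_treatment))) ** 2
    return ceil(numerator / (p_control - p_treatment) ** 2)

# Invented example: 50% of patients improve on the current standard of care,
# and we hope the new treatment pushes that to 60%.
print(sample_size_per_group(0.50, 0.60), "participants per arm, roughly")
```

That works out to nearly 800 people in total just to reliably detect a 10-percentage-point improvement, and the smaller the true difference, the larger the trial has to be.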
Phase 4: The phase of the study that happens after a drug has been approved and taken to market. It’s extremely important that we continue to monitor new drugs in real-world situations to learn more about their safety and efficacy, while also allowing more people access to potentially lifesaving drugs.
Even if phases 1 through 3 of a clinical trial enroll tens of thousands of people, they won’t be able to pick up adverse events that only happen at a rate of, say, one in hundreds of thousands of people.
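To put some rough numbers on that, here’s a quick illustrative calculation in Python (the side-effect rate and trial size are made up, not figures from any particular study):

```python
# If a side effect strikes roughly 1 in 100,000 people, even a large trial
# will probably never see a single case of it.
rate = 1 / 100_000
trial_size = 30_000

p_at_least_one = 1 - (1 - rate) ** trial_size
print(f"Chance the trial sees at least one case: {p_at_least_one:.0%}")

# Compare that with rolling the drug out to millions of people post-approval.
population = 10_000_000
print(f"Expected cases once {population:,} people take it: about {rate * population:.0f}")
```

In other words, a trial of that size would most likely miss the side effect entirely, which is exactly why post-approval monitoring matters.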
It’s also important to keep in mind that, while these phases often do happen sequentially, sometimes they are able to happen with overlap too.
Typically, clinical trials can take years to decades from start to finish.
Each phase must recruit, screen, and enroll an appropriate number of participants. Those participants then have to receive the intervention and reach the endpoints set for that phase; after that, the data have to be analyzed, submitted, reviewed, and approved before the trial can move on to the next phase.
This can get really hard for trials involving rare diseases or rare outcomes. You have to enroll enough people who have a rare disease and/or wait for enough people to experience a rare endpoint.
The development of the COVID vaccines occurred at a truly unusual point in history, which allowed us to move much faster than we do with normal vaccine development. So much of the delay in vaccine development comes from a lack of willing participants, bureaucratic processes, and the rarity of study endpoints.
But in the case of COVID vaccines we had so many incredible volunteers step up that recruitment and enrollment went much faster than anything we normally see.
And because cases were so prevalent, it took very little time to reach the number of infections that statisticians had determined would be enough to tell us whether the vaccines were protective or not.
Plus, there was huge political will to make sure the FDA and NIH reviewed the study protocols and results quickly so they could be turned around to start the next phase of the trial.
Not only were the COVID vaccines not rushed, but they are also probably among the most studied vaccines in history.