What If We Could Do Clinical Trials Before We Do Clinical Trials?

I’m a doctor, but I hate wearing my car seat belt. And it’s not for the reason you might think. Rather than comfortably crossing over my shoulder as it’s intended, the seat belt agonizingly crosses right over the most tender spot in my neck. It wasn’t until a few years ago that I finally grasped why — no matter how I adjust my seat — I am doomed to drive with a seat belt noose around my neck. It’s because cars are designed for the 6-foot, 180-pound man. They are not made for me, at 5 feet, 3 inches tall.

This led me to an “aha” moment in medicine. Medical research has historically embraced a one-size-fits-all concept. In the not-too-distant past, clinical trials mostly enrolled Caucasian males on the assumption that they represent the entire human spectrum. But research has shown that this assumption is wrong.

Genetic Factors Influence Patient Responses To Investigational Drugs, Yet Many Groups Are Still Underrepresented In Clinical Research

Groups underrepresented in clinical research often have distinct disease presentations or other circumstances that affect how they respond to investigational therapies. For example, men are more likely to respond to tricyclic antidepressants, and women to selective serotonin reuptake inhibitors, as treatment for depression.1 In another example, reduced renal and hepatic clearance in older adults puts them at increased risk of harm from many drugs, such as anticoagulants and psychotropic agents. Although warfarin is an important anticoagulant that reduces the risk of blood clots, too much can cause serious internal bleeding, and there is a 20-fold interpatient variability in therapeutic warfarin dose requirements.2

Nearly half of the variability in patient response to warfarin can be explained by genetic variants, which differ across populations. Populations with greater genetic African ancestry are more likely to require higher daily doses of warfarin, whereas populations with greater genetic Asian ancestry tend to require lower doses. However, because most of the early studies of warfarin were conducted in populations with predominantly European ancestry, dosing algorithms failed to generalize to other populations. It was not until 2013 that genotype-guided dosing was used to tailor warfarin doses to different populations.
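To make the idea of genotype-guided dosing concrete, here is a deliberately simplified Python sketch. The structure mirrors how published algorithms work — a baseline dose adjusted downward for age and for variant alleles in the VKORC1 and CYP2C9 genes, the two genes most strongly associated with warfarin requirements — but every coefficient below is a hypothetical placeholder, not a clinical value, and this function must not be used for any real dosing decision.

```python
def predicted_weekly_dose_mg(age_decades: float,
                             vkorc1_variant_alleles: int,
                             cyp2c9_variant_alleles: int,
                             baseline: float = 35.0) -> float:
    """Toy genotype-guided warfarin dose estimate (illustrative only).

    All coefficients are invented placeholders chosen to show the shape
    of such an algorithm, not to approximate any validated model.
    """
    dose = baseline
    dose -= 2.0 * age_decades              # older patients tend to need less
    dose -= 7.0 * vkorc1_variant_alleles   # VKORC1 variants lower requirements
    dose -= 5.0 * cyp2c9_variant_alleles   # CYP2C9 variants slow metabolism
    return max(dose, 5.0)                  # floor at a minimal weekly dose
```

The point of the sketch is the dependence on genotype: two patients of the same age can receive very different predicted doses purely because of which variant alleles they carry — exactly the variation that early, ancestry-homogeneous study populations hid.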

Despite the updated dosing guidance, warfarin remains one of the leading causes of adverse drug events, and incorrect dosing can lead to increased risk of bleeding, hospitalization, and death. That’s because variability within the human population is not limited to genetic factors alone; it arises from a combination of genetic and non-genetic influences. Race and ethnicity confer life experiences that themselves may result in specific physiological expressions that are not genetic in origin. For example, lower socioeconomic status and decreased access to healthcare are associated with higher blood pressure and risk of cardiovascular disease, which in turn can influence response to therapeutics. Other non-genetic factors such as diet, exercise, and other lifestyle habits also play a significant role.

Despite growing recognition of this inter-human variability in therapeutic responses, for decades the medical scientific field maintained a glaring blind spot in the practice of clinical trials. It has often taken demands from activist groups to increase the inclusion of diverse groups in clinical trials. For example, it was not until the mid-1980s, when HIV activists protested the exclusion of women from clinical trials, that policies were enacted to encourage researchers to include women in studies.

Today, researchers must strive to include not only women but also people of different ethnicities, races, and ages, among other factors. The reason is simple: there is great diversity within the human species, and it produces differing reactions to medications and other therapies.

If Variability Exists Even Among Humans, Why Are We Still Conducting Animal Testing?

But increasing representation in clinical trials is just the start of moving away from a one-size-fits-all model. Back up in the process and you’ll find today’s blind spot: before human trials begin. Why, when we know that the biology of a 40-year-old Caucasian male does not reflect the biology of an 80-year-old Asian woman, do we continue to believe that somehow the biology of a rat, a dog, or a monkey will accurately predict human results?

The FDA generally requires all new drugs and vaccines to be tested in two different species for safety and efficacy. Although this preclinical phase also includes in vitro, computer simulation, and other types of testing, it is predominantly the results from animal testing that determine whether a drug or vaccine moves into human clinical trials. But there is a big flaw with this process.

It is now widely understood that 90%-95% of all drugs and vaccines found safe and effective in animal tests fail in humans, mostly due to toxicity or because they just don’t work in humans. Despite this immense failure rate, the medical field views animal testing as the “gold standard” and continues to treat me, you, and everyone else based on results from animal testing. In other words, we are all being treated like overgrown rats.

The good news is that there is a better way. Human clinical trials are, of course, the true gold standard. But we need some kind of preclinical testing regime that best models human biology before conducting live human trials. So, what if, instead of animal testing, we could conduct clinical trials in the lab?

Alternatives To Animal Testing Offer Better Insight Into Efficacy And Safety In Humans

Advanced, innovative medical research tools are being developed that can not only replace animal testing but be far more effective for investigating human diseases and predicting which therapies will be safe and effective in humans. These techniques are human-relevant in that they use human cells and tissues to create three-dimensional architectures of living organs and biological systems, or use AI to mine the wealth of human data that already exists. They include human body-on-a-chip models, digital (virtual) twins, organoids, and bioprinting.

Human chip models are already gaining recognition as the next revolution in medical science. Chip models contain engineered or natural tissues derived from different organs that are grown inside miniaturized fluid channels molded into a chip made of silicon, glass, or other substances. These living, three-dimensional architectures of human organs provide a window into their inner workings and the effects that drugs can have on them — all without using other animals or live humans. Recently, researchers found that liver chip models outperformed animal testing by a wide margin in predicting drug-induced liver injury, one of the main causes of drug failures due to safety problems. There are now chip models for numerous organs, including the heart, gut, kidney, skin, lung, and even mini brains. Next, human body-on-a-chip models are being created that link multiple organs into networks to recreate complex bodily systems.

These techniques are not only proving to be more effective than animal testing; they also herald a completely new way of investigating new treatments and determining which treatments to select for a given patient. Using these new techniques, medical researchers will be able to actually run “clinical trials” before testing a drug in live humans. Cells can be captured from different segments of human populations to create human body-on-a-chip models that represent human diversity. Trials can then be run using these models to determine which drugs work best in which populations, and which are more likely to cause safety problems. Furthermore, human chip and other technologies can be linked with electronic health record (EHR) data to create hybrid tissue-digital twin models. With these advanced models, not only can we run trials that represent human diversity, but we can also screen for the more infrequent but serious adverse events (SAEs) that are often discovered only after a drug is on the market.
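The workflow described above — screening a candidate drug across chip models derived from donors of different populations, then comparing response and safety signals per population — can be sketched in a few lines of Python. Everything here is conceptual: the population names, the per-population response means, and the adverse-event rates are hypothetical inputs, and the randomly simulated values stand in for what would, in practice, be measurements from the chips themselves.

```python
import random

def run_chip_trial(populations, n_chips=100, seed=0):
    """Conceptual in-vitro 'trial': simulate one drug across chip models.

    populations maps a (hypothetical) population label to a pair of
    (mean_response, adverse_event_rate); real values would come from assays.
    """
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    results = {}
    for pop, (mean_resp, adverse_rate) in populations.items():
        # Simulated per-chip drug responses for this donor population
        responses = [rng.gauss(mean_resp, 0.1) for _ in range(n_chips)]
        # Simulated count of chips showing an adverse signal
        adverse = sum(rng.random() < adverse_rate for _ in range(n_chips))
        results[pop] = {
            "mean_response": sum(responses) / n_chips,
            "adverse_events": adverse,
        }
    return results

# Hypothetical inputs: population "B" responds less and has more adverse events
trial = run_chip_trial({"A": (0.7, 0.02), "B": (0.5, 0.10)})
```

Even at this toy level, the output makes the article’s point: a single aggregate result would hide that the drug performs differently — and carries different risk — in different populations.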

I strongly believe we can realize this better — and kinder — medical science. It is for this reason that I helped draft the language for the FDA Modernization Act 2.0, which President Biden signed into law in December 2022. This law removes a Depression-era mandate that required animal testing for new drugs and allows human chip models and other more advanced testing methods to be used instead. While this new law will not change the drug development process overnight, it does incentivize the use of more modern and effective techniques such as human chip models. The FDA Modernization Act 2.0 is just the first step, however, toward transforming medical science. What’s needed is a dedicated governmental commitment, with resources channeled into further developing chip models and other innovative, human-relevant techniques. At the Center for Contemporary Sciences, we are creating a movement to support the discovery, development, and use of more effective, human-relevant testing methods.



  1. Baca, E., M. Garcia-Garcia, and A. Porras-Chavarino. 2004. Gender differences in treatment response to sertraline versus imipramine in patients with nonmelancholic depressive disorders. Progress in Neuro-Psychopharmacology and Biological Psychiatry 28(1):57–65.
  2. National Academies of Sciences, Engineering, and Medicine. 2022. Improving Representation in Clinical Trials and Research: Building Research Equity for Women and Underrepresented Groups. Washington, DC: The National Academies Press.

