Military Health System


Types of Bias in Randomized Controlled Trials: A Refresher for Military Mental Health Providers

By Erin Beech, M.A. and Alexandra Kelly, Ph.D.
Feb. 4, 2019

(U.S. Air Force photo/Airman 1st Class Katrina M. Brisbin)

It's important for mental health providers and health system administrators to be active and informed consumers of the research literature in their areas of practice. Keeping abreast of new treatments and delivery models – and being able to critically evaluate their merits – enables providers to continually incorporate emerging evidence-based practices into their work with service members. For example, recent studies exploring the effectiveness of group versus individual cognitive processing therapy and the efficacy of compressed treatment schedules for prolonged exposure have implications for selecting the best treatment modality, and potentially accelerating return to duty, for service members with posttraumatic stress disorder.

Most of us learn in our academic training that randomized controlled trials (RCTs), which compare one or more experimental groups to a control group, are the gold-standard research design for answering questions about treatment effectiveness. But not all RCTs are created equal. In fact, the methodological rigor of RCTs can vary widely, and it is vital to critically evaluate research studies for sources of bias in order to determine how much confidence one can have in their results.

This is the first post in a two-part blog series about strategies for evaluating the risk of bias in RCTs. Today we'll provide an overview of various types of bias that one might find in clinical trials. Next week, we'll follow up with a post that introduces a strategy for systematically evaluating the impact of each of these sources of bias on a given RCT.

Common sources of bias in research studies can be broken into six domains:

Selection bias occurs when there are systematic differences between groups. For example, if groups are not comparable on key demographic factors, then between-group differences in treatment outcomes cannot necessarily be attributed solely to the study intervention. RCTs attempt to address selection bias by randomly assigning participants to groups – but it is still important to assess whether randomization was done well enough to eliminate the influence of confounding variables.
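The random assignment described above can be sketched in a few lines. This is a minimal illustration of simple randomization with invented participant IDs; real trials often use blocked or stratified randomization to keep arms balanced on key factors:

```python
import random

def randomize(participant_ids, seed=None):
    """Simple randomization: shuffle participants, then split into two arms.

    Illustrative sketch only -- it does not guarantee balance on
    demographic or clinical variables, which is why randomization
    quality still needs to be assessed when reading a trial.
    """
    rng = random.Random(seed)  # seeded for reproducibility of the example
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]

# Hypothetical trial of 20 participants split into two equal arms
treatment_arm, control_arm = randomize(range(20), seed=42)
```

Because chance alone can still produce unbalanced groups, especially in small samples, published RCTs typically report a baseline-characteristics table so readers can judge whether randomization worked.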

Performance bias refers to systematic differences between groups that occur during the study. For example, if participants know that they are in the active treatment rather than the control condition, this could create positive expectations that have an impact on treatment outcome beyond that of the intervention itself. Ideally, participants and investigators should remain unaware of which group participants are assigned to. Of note, this is more easily achieved in medication trials, where the medication and the placebo appear identical, than in psychotherapy trials.

Detection bias refers to systematic differences in the way outcomes are determined. For example, if providers in a psychotherapy trial are aware of the investigators' hypotheses, this knowledge could unconsciously influence the way they rate participants' progress. Psychotherapy RCTs should address this by using independent outcome assessors who are blind to participants' assigned treatment groups and to the investigators' expectations.

Attrition bias occurs when there are systematic differences between groups in withdrawals from a study. It's common for participants to drop out of a trial before or in the middle of treatment, and researchers who only include those who completed the protocol in their final analyses are not presenting the full picture. Analyses should include all participants who were randomized into the study (intention to treat analysis), and not only participants who completed some or all of the intervention.
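The difference between a completers-only analysis and an intention-to-treat analysis can be sketched with a toy example. The data below are invented, and the zero-change imputation for dropouts is a deliberately crude "no improvement" assumption used only for illustration; real ITT analyses use more principled methods for handling missing outcomes:

```python
# Hypothetical symptom-change scores; None marks a participant who dropped out
trial = [
    {"group": "treatment", "change": 10},
    {"group": "treatment", "change": 8},
    {"group": "treatment", "change": None},  # dropped out
    {"group": "treatment", "change": None},  # dropped out
    {"group": "control",   "change": 3},
    {"group": "control",   "change": 5},
    {"group": "control",   "change": 4},
    {"group": "control",   "change": 2},
]

def mean_change(records, itt=False):
    """Average symptom change for one group.

    Completers-only: silently drop participants with missing outcomes.
    ITT (as sketched here): keep every randomized participant, imputing
    zero change for dropouts.
    """
    if itt:
        values = [r["change"] if r["change"] is not None else 0 for r in records]
    else:
        values = [r["change"] for r in records if r["change"] is not None]
    return sum(values) / len(values)

treated = [r for r in trial if r["group"] == "treatment"]
control = [r for r in trial if r["group"] == "control"]

print(mean_change(treated))            # completers-only: 9.0
print(mean_change(treated, itt=True))  # ITT: 4.5
print(mean_change(control, itt=True))  # ITT: 3.5
```

Notice how excluding the two treatment-arm dropouts doubles the apparent treatment effect: the completers-only mean (9.0) makes the intervention look far stronger than the ITT mean (4.5) does, which is exactly the distortion attrition bias produces.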

Reporting bias refers to systematic differences between reported and unreported data. One example is publication bias, which occurs because studies with positive results are more likely to be published, and tend to be published more quickly, than studies with findings supporting the null hypothesis. At the investigator level, outcome reporting bias can also occur when researchers only write about study outcomes that were in line with their hypotheses. Efforts to address this include requirements that RCT protocols be published in journals or on trial registry websites, which allows for confirmation that all primary outcomes are reported in study publications.

Other bias is a catch-all category that includes specific situations not covered by the above domains. This includes bias that can occur when study interventions are not delivered with fidelity by therapists, or when there is "contamination" between experimental and control interventions within a study (for example, participants in different treatment conditions discussing the interventions they are receiving with each other).

Stay tuned for next week's blog, where we'll introduce a tool for systematically evaluating potential bias and walk you through a sample trial related to panic disorder treatment.

Ms. Beech is a senior research associate at the Psychological Health Center of Excellence. She has expertise in evidence synthesis, and is responsible for drafting the Psych Health Evidence Briefs.

Dr. Kelly is a contracted psychological health subject matter expert at the Psychological Health Center of Excellence. She has a master's degree in counseling and psychological services and a doctorate in counseling psychology. She specializes in trauma, vocational psychology, and multicultural counseling.

Last Updated: September 14, 2023