Learn more about our approach
We conduct umbrella reviews of meta-analyses following the U-REACH framework and assess certainty using the international GRADE standards. Every piece of evidence is systematically evaluated to ensure clinical reliability.
Umbrella reviews
An umbrella review is a "review of reviews"—a systematic synthesis that evaluates multiple meta-analyses on the same topic to provide the highest level of evidence overview.
Why umbrella reviews?
Individual studies
Single randomized controlled trials (RCTs) testing specific interventions
Meta-analyses
Statistical combination of multiple RCTs to estimate overall treatment effects (a minimal pooling sketch follows this overview)
Umbrella reviews (our approach)
Comprehensive synthesis of all available meta-analyses to identify the most reliable evidence
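Purely as an illustration of what "statistical combination" means, here is a minimal Python sketch of fixed-effect inverse-variance pooling using made-up trial numbers. It is not the model behind any specific meta-analysis on this platform; real meta-analyses usually rely on random-effects models and dedicated software.

```python
# Illustrative only: fixed-effect (inverse-variance) pooling of hypothetical
# effect estimates from three RCTs.

# Hypothetical per-trial effect sizes (e.g., standardized mean differences)
# and their standard errors.
trials = [
    {"effect": -0.45, "se": 0.15},
    {"effect": -0.30, "se": 0.10},
    {"effect": -0.55, "se": 0.20},
]

# Each trial is weighted by the inverse of its variance (1 / se^2), so larger,
# more precise trials contribute more to the pooled estimate.
weights = [1 / t["se"] ** 2 for t in trials]
pooled_effect = sum(w * t["effect"] for w, t in zip(weights, trials)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled_effect:.2f} (SE {pooled_se:.2f})")
```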
Evidence certainty (GRADE)
A key component of our platform is to present not only the estimated effect of each intervention but also how confident we are that the true effect lies close to that estimate. We rate the certainty of every effect estimate from very low to high using an adapted version of the GRADE (Grading of Recommendations Assessment, Development and Evaluation) framework.
In the dashboard, each cell displays the GRADE rating as three dots indicating our level of certainty:
High certainty: Very confident that the true effect lies close to the estimated effect.
Moderate certainty: Moderately confident in the effect estimate; the true effect is likely close but could differ.
Low certainty: Limited confidence; the true effect may be substantially different from the estimate.
Very low certainty: Very little confidence; the true effect is likely to be substantially different from the estimate.
We start from the assumption that we have high certainty in all effect estimates, and then reappraise this judgement based on potential problems. The certainty level decreases when any of the following issues are present (a simplified sketch of this downgrading logic follows the list):
Study quality issues
Studies have design flaws or are poorly conducted (e.g., the people assessing outcomes knew which intervention each participant received)
Inconsistent results
Different studies show very different effects for the same intervention
Imprecise estimates
Too few participants or very wide uncertainty ranges make it hard to pinpoint the true effect
Indirect evidence
Studies tested populations or outcomes that differ from the ones we are interested in
Publication bias
Positive studies are more likely to be published, potentially skewing our understanding
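The Python sketch below shows the general shape of this downgrading logic, under the simplifying assumption that each flagged domain costs one level of certainty. It is purely illustrative; the platform's adapted GRADE criteria are more nuanced than this.

```python
# Simplified, illustrative sketch of GRADE-style downgrading, assuming one
# level is deducted for each domain with a serious concern. This is not the
# platform's exact implementation.

LEVELS = ["Very low", "Low", "Moderate", "High"]

def rate_certainty(concerns: dict) -> str:
    """Start at High certainty and move down one level per flagged concern."""
    level = len(LEVELS) - 1  # start at "High"
    for has_concern in concerns.values():
        if has_concern:
            level = max(level - 1, 0)  # never drop below "Very low"
    return LEVELS[level]

# Hypothetical appraisal of one body of evidence: two concerns -> "Low".
print(rate_certainty({
    "study_quality": True,      # design flaws / risk of bias
    "inconsistency": False,     # results across studies were consistent
    "imprecision": True,        # wide uncertainty ranges
    "indirectness": False,      # populations and outcomes match our question
    "publication_bias": False,  # no strong signs of selective publication
}))
```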
Limitations and interpretation
Because this platform is an umbrella review (a "summary of summaries"), it has specific limitations that are important to keep in mind when interpreting results clinically.
Data lag and "summary of summaries"
Because we synthesize meta-analyses, there is a time lag. Recent RCTs published after the most recent meta-analysis in our database will not yet be visible here.
Medication lumping
Meta-analyses often pool similar drugs to increase statistical power. For example, "amphetamines" may combine Adderall and Vyvanse data.
Drug vs. non-drug comparisons
Direct comparisons are difficult. Drug trials usually blind participants with a placebo, while non-drug therapy trials often cannot be blinded, which may inflate non-drug effect sizes.
Short-term focus
Most high-certainty evidence comes from short-term trials (around 12 weeks); long-term efficacy and safety data (>52 weeks) remain sparse.