The quality of evidence is the confidence in the veracity of the information or data, and depends on the source, design and quality of each study or piece of information. In contrast with evidence-based medicine (EBM), where randomised controlled trials are ranked highest and observational studies lowest, in rapid risk assessment the evidence may be limited, and there may therefore be greater reliance on observational studies, including case reports and specialist expert knowledge. For most infectious disease threats only observational data are available.

Certain factors affect the quality of evidence. Factors that may increase the quality include: the method of generating data and the study design (e.g. analytical versus descriptive epidemiology), the strength of association, evidence of a dose response, and consistency with other studies/expert opinion. Factors that may decrease the quality include: reporting bias, inconsistency, and conflicting evidence/opinion.

Ideally, a rapid risk assessment should not rely on a single study or piece of evidence. A cautious approach should be taken to the interpretation of information if only one research group reports on an infection or disease association across multiple publications. Poor evidence or information should not be used for the rapid risk assessment unless these are the only data available, in which case any uncertainties should be documented in the information table.

Triangulation is a technique widely used in qualitative research to address internal validity by using more than one method of data collection to answer a research question. The body of evidence should be considered as a whole, and the triangulation of evidence should confirm (or refute) the internal validity of findings. Triangulation of evidence, including specialist expert knowledge, may be important to reach a consensus. Ensure a minimum of two to three data sources and agreement between these (e.g. two experts, or an expert and the literature). The sources of evidence and the agreement between them (or lack thereof) should be clearly stated in the information table. Based on the consistency, relevance and external validity of the available and relevant information, the quality of evidence is graded as good, satisfactory, or unsatisfactory (definitions and examples are given in Checklist 3).

Checklist 3: Evaluating the quality of evidence (for information tables)

Quality of evidence = confidence in information; design, quality and other factors assessed and judged on consistency, relevance and validity. Grade: good, satisfactory, unsatisfactory

Examples of types of information/evidence

Good

Further research unlikely to change confidence in information.

  • Peer-reviewed published studies where design and analysis reduce bias, e.g. systematic reviews, randomised controlled trials, outbreak reports using analytical epidemiology
  • Textbooks regarded as definitive sources
  • Expert group risk assessments, or specialised expert knowledge, or consensus opinion of experts

Satisfactory

Further research likely to have an impact on confidence in the information and may change the assessment.

  • Non-peer-reviewed published studies/reports 
  • Observational studies/surveillance reports/outbreak reports
  • Individual (expert) opinion

Unsatisfactory 

Further research very likely to have an impact on confidence in the information and likely to change the assessment.

  • Individual case reports
  • Grey literature
  • Individual (non-expert) opinion
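The grading scheme in Checklist 3 can be sketched as a simple lookup, combined with the triangulation rule of at least two agreeing sources. A minimal illustration follows; the category labels and function names are hypothetical and chosen for this sketch, not part of the checklist itself:

```python
# Hypothetical encoding of Checklist 3: evidence type -> grade.
# Category labels are illustrative paraphrases of the checklist bullets.
EVIDENCE_GRADES = {
    # Good: further research unlikely to change confidence
    "systematic review": "good",
    "randomised controlled trial": "good",
    "outbreak report (analytical epidemiology)": "good",
    "definitive textbook": "good",
    "expert group risk assessment": "good",
    "expert consensus": "good",
    # Satisfactory: further research likely to have an impact
    "non-peer-reviewed study": "satisfactory",
    "observational study": "satisfactory",
    "surveillance report": "satisfactory",
    "individual expert opinion": "satisfactory",
    # Unsatisfactory: further research very likely to change the assessment
    "individual case report": "unsatisfactory",
    "grey literature": "unsatisfactory",
    "individual non-expert opinion": "unsatisfactory",
}

def grade_evidence(evidence_type: str) -> str:
    """Return the Checklist 3 grade for one piece of evidence."""
    return EVIDENCE_GRADES[evidence_type.lower()]

def triangulated(sources: list[str], sources_agree: bool) -> bool:
    """Triangulation requires a minimum of two sources that agree
    (e.g. two experts, or an expert and the literature)."""
    return len(sources) >= 2 and sources_agree
```

For example, `grade_evidence("Systematic review")` returns `"good"`, while a single expert opinion without a corroborating source would fail the `triangulated` check and this should be noted in the information table.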