
Evidence-Based Practice for Nursing: Evaluating the Evidence

Evaluating Evidence: Questions to Ask When Reading a Research Article or Report

For guidance on the process of reading a research book or article, see Paul N. Edwards's paper, How to Read a Book (2014). When reading an article, report, or other summary of a research study, there are two principal questions to keep in mind:

1. Is this relevant to my patient or the problem?

  • Once you begin reading an article, you may find that the study population isn't representative of the patient or problem you are treating or addressing. Research abstracts alone do not always make this apparent.
  • You may also find that while a study population or problem matches that of your patient, the study did not focus on the aspect of the problem you are interested in. For example, a study may examine oral administration of an antibiotic before a surgical procedure but not address the timing of that administration.
  • The question of relevance is primary when assessing an article--if the article or report is not relevant, then the validity of the article won't matter (Slawson & Shaughnessy, 1997).

2. Is the evidence in this study valid?

  • Validity is the extent to which the methods and conclusions of a study accurately reflect or represent the truth. Validity in a research article or report has two parts: 1) Internal validity--do the results of the study mean what they are presented as meaning (e.g., were bias and/or confounding factors present)?; and 2) External validity--are the study results generalizable (e.g., can the results be applied outside of the study setting and population(s))?
  • Determining validity can be a complex and nuanced task, but there are a few criteria and questions that can be used to assist in determining research validity. The set of questions, as well as an overview of levels of evidence, are below.

For a checklist that can help you evaluate a research article or report, use our checklist for Critically Evaluating a Research Article.

How to Read a Paper--Assessing the Value of Medical Research

Evaluating the evidence from medical studies can be a complex process, involving an understanding of study methodologies, reliability, and validity, as well as how these apply to specific study types. While this can seem daunting, in a series of articles in the BMJ, Trisha Greenhalgh introduces methods for evaluating the evidence from medical studies in language that is understandable even for non-experts. Although these articles date from 1997, the methods she describes remain relevant.

Levels of Evidence

In some journals, you will see a 'level of evidence' assigned to a research article. Levels of evidence are assigned to studies based on the methodological quality of their design, validity, and applicability to patient care. The combination of these attributes gives the level of evidence for a study. Many systems for assigning levels of evidence exist. A frequently used system in medicine is from the Oxford Centre for Evidence-Based Medicine. In nursing, the system for assigning levels of evidence is often from Melnyk & Fineout-Overholt's 2011 book, Evidence-Based Practice in Nursing and Healthcare: A Guide to Best Practice. The levels of evidence below are adapted from Melnyk & Fineout-Overholt's (2011) model.

[Figure: Melnyk & Fineout-Overholt's Levels of Evidence model]

Uses of Levels of Evidence: Levels of evidence from one or more studies provide the "grade (or strength) of recommendation" for a particular treatment, test, or practice. Levels of evidence are reported for studies published in some medical and nursing journals. Levels of Evidence are most visible in Practice Guidelines, where the level of evidence is used to indicate how strong a recommendation for a particular practice is. This allows health care professionals to quickly ascertain the weight or importance of the recommendation in any given guideline. In some cases, levels of evidence in guidelines are accompanied by a Strength of Recommendation.

About Levels of Evidence and the Hierarchy of Evidence: While levels of evidence correlate roughly with the hierarchy of evidence (discussed elsewhere on this page), they do not always match the categories in the hierarchy, reflecting the fact that study design alone doesn't guarantee good evidence. For example, systematic reviews and meta-analyses of randomized controlled trials (RCTs) sit at the top of the evidence pyramid and are typically assigned the highest level of evidence, because their design reduces the probability of bias (Melnyk, 2011), whereas the weakest level of evidence is the opinion of authorities and/or reports of expert committees. However, a systematic review may report very weak evidence for a particular practice, in which case the level of evidence behind a recommendation may be lower than the position of the study type on the pyramid/hierarchy of evidence.

About Levels of Evidence and Strength of Recommendation: The fact that a study is located lower on the Hierarchy of Evidence does not necessarily mean that the strength of recommendation made from that and other studies is low--if evidence is consistent across studies on a topic and/or very compelling, strong recommendations can be made from evidence found in studies with lower levels of evidence, and study types located at the bottom of the Hierarchy of Evidence. In other words, strong recommendations can be made from lower levels of evidence.

For example: in a 1961 case series, physicians noted a high incidence (approximately 20%) of birth defects among children born to mothers taking thalidomide. This resulted in very strong recommendations against the prescription of thalidomide and, eventually, against its manufacture and marketing. In other words, as a result of the case series, a strong recommendation was made from a study type that occupies one of the lowest positions on the hierarchy of evidence.

Hierarchy of Evidence for Quantitative Questions

The pyramid below represents the hierarchy of evidence, which illustrates the strength of study types; the higher the study type on the pyramid, the more likely it is that the research is valid. The pyramid is meant to assist researchers in prioritizing studies they have located to answer a clinical or practice question. 

[Figure: Evidence pyramid]

For clinical questions, you should try to find articles with the highest quality of evidence. Systematic Reviews and Meta-Analyses are considered the highest quality of evidence for clinical decision-making and should be used above other study types, whenever available, provided the Systematic Review or Meta-Analysis is fairly recent. 

As you move up the pyramid, fewer studies are available, because the study designs become increasingly expensive for researchers to perform. It is important to recognize that high levels of evidence may not exist for your clinical question, due to both the costs of the research and the type of question you have. If the highest levels of study design from the evidence pyramid are unavailable for your question, you'll need to move down the pyramid.

While the pyramid of evidence can be helpful, individual studies--no matter the study type--must be assessed to determine their validity.

Hierarchy of Evidence for Qualitative Studies

Qualitative studies are not included in the hierarchy of evidence above. Because qualitative studies provide valuable evidence about patients' experiences and values, they are important--even critically necessary--for evidence-based nursing. Like quantitative studies, however, not all qualitative studies are created equal. The pyramid below shows a hierarchy of evidence for qualitative studies.

[Figure: Hierarchy of evidence for qualitative studies. Adapted from Daly et al. (2007)]

References

Daly, J., Willis, K., Small, R., Green, J., Welch, N., Kealy, M., & Hughes, E. (2007). A hierarchy of evidence for assessing qualitative health research. Journal of Clinical Epidemiology, 60(1), 43–49. doi:10.1016/j.jclinepi.2006.03.014
McBride, W. G. (1961). Thalidomide and congenital abnormalities [Letter to the editor]. The Lancet, 2, 1358.
Melnyk, B. M. (2011). Evidence-based practice in nursing & healthcare: A guide to best practice (2nd ed.). Philadelphia: Wolters Kluwer/Lippincott Williams & Wilkins.
Slawson, D. C., & Shaughnessy, A. F. (1997). Obtaining useful information from expert based sources. BMJ (Clinical Research Ed.), 314(7085), 947–949.