1: Systematic Review as a Basis for Evidence-Based Healthcare


DOI: 10.1891/9780826152268.0001

Authors

  • Salmond, Susan
  • Holly, Cheryl

Abstract

Evidence-based healthcare (EBHC) is a model of problem-solving or decision-making that combines the art and science of practice within the context of patient values to deliver quality, cost-sensitive care. EBHC can be practiced at the individual and population/organizational levels. The components of EBHC require best evidence to be integrated with patient values, the clinical context, and clinical judgment/expertise. This chapter presents an overview of the emergence of EBHC as the driving paradigm of healthcare today, highlighting the knowledge and skill that are required to be successful. Through this overview, the central role of up-to-date systematic reviews and other syntheses of research findings as the link between research and clinical decision-making will be apparent. The chapter presents the skills that nurses need for EBHC.

OBJECTIVES

At the end of this chapter, the reader will be able to:

  • Differentiate between expert-driven healthcare and evidence-based healthcare (EBHC).

  • Define the components of EBHC.

  • Discern the process of EBHC and the value of systematic reviews (SRs) as a quality source of evidence.

  • Define filtered evidence and unfiltered evidence.

CHAPTER HIGHLIGHTS

  • High-quality evidence, provided by a systematic review (SR), yields a more reliable foundation to guide clinical practice and healthcare decisions.

  • The evidence-based care paradigm calls for the integration of best research evidence along with clinical expertise, clinical context, and the opinions and values of patients and their families as a component in clinical decision-making.

  • The evidence-based process includes asking a question, acquiring evidence to support the question, appraising the evidence, applying the evidence to an individual or population, acting to put the evidence to use for patients/groups, and assessing whether the evidence leads to desired patient outcomes.

  • SRs are at the top of the evidence hierarchy, as they provide a summary of research findings that are available on a particular topic or clinical question.

  • Because the SR process uses an explicit, rigorous approach to comprehensively identify, critically appraise, and synthesize relevant studies, findings from an SR have greater validity than those from a single research study and are consequently more valuable for informing practice.

The move to evidence-based medicine (EBM), initiated in the early 1990s, was considered a significant paradigm shift: a move away from traditional decision-making that relied primarily on the experienced practitioner as the source of knowledge, referred to as expert-driven knowledge. In this old paradigm, expert opinion and intuition, tradition, experience, and pathophysiologic rationale were the primary influencers of practice and clinical decision-making (Swanson et al., 2010). The shift toward EBM initially focused on enhancing the empirical practice of medicine and the use of research findings in decision-making (Djulbegovic & Guyatt, 2017). Sackett et al. (1996) offered the classic definition of EBM: the conscientious, explicit, and judicious use of current best evidence in making decisions about individual patients. The perspective on EBM has since evolved beyond evidence alone to stress clinical expertise and patient values, and it has branched out from a medicine-specific approach to include other disciplines (e.g., evidence-based nursing and evidence-based public health) and to capture a multidisciplinary approach through the terms evidence-based practice and evidence-based healthcare (EBHC). Throughout this book, we will use the term EBHC.

In an effort to promote safety and quality patient care, in 2009 the Institute of Medicine convened a roundtable on EBM and set the goal that by 2020, 90% of clinical decisions would be supported by accurate, timely, and up-to-date clinical information and would reflect the best available evidence (Institute of Medicine, 2009). Their aim was

  • to develop learning healthcare systems designed to generate and apply best evidence for the collaborative healthcare choices of each patient and provider;

  • to drive the process of discovery as a natural outgrowth of patient care; and,

  • to ensure innovation, quality, safety, and value in healthcare.

Unfortunately, we have not met that goal, and a significant evidence–practice gap continues (Grimshaw et al., 2012; Leach & Tucker, 2018; Melnyk et al., 2018). Practices with proven effectiveness are often underused, with fewer than one in five evidence-based practices adopted routinely in healthcare settings (Kilbourne et al., 2019; Pagliaro, 2016). Other practices are overused despite a lack of evidence, often leading to unnecessary exposure to iatrogenic harms (Grimshaw et al., 2012). With medical error estimated to be the third leading cause of death in the United States (Makary & Daniel, 2016), there is a need for more reliable healthcare practices and systems and for accurate, timely, and up-to-date clinical evidence, safety evidence, and implementation science evidence. EBHC continues to be seen as crucial to closing the quality chasm and improving patient and quality outcomes.

This chapter presents an overview of the emergence of EBHC as the driving paradigm of healthcare today, highlighting the knowledge and skill that are required to be successful. Through this overview, the central role of up-to-date systematic reviews (SRs) or other syntheses of research findings as the central link between research and clinical decision-making (Grimshaw et al., 2012; Institute of Medicine, 2001) will be apparent.

EVIDENCE-BASED HEALTHCARE

EBHC is a model of problem-solving or decision-making that combines the art and science of practice within the context of patient values to deliver quality, cost-sensitive care (Prentiss & Butler, 2018). EBHC can be practiced at the individual and population/organizational levels. Although definitions vary slightly, there is consensus that it is an integration of the following components:

  1. The explicit and judicious use of current best evidence in making decisions about individual patients (the science of practice);

  2. Clinical expertise of the clinician (the art of practice);

  3. Patient preferences and values.

These three components interface with the clinical context—the feasibility of a specific intervention or approach based on the organizational and patient community context. Figure 1.1 (inner circle) shows the components of the EBHC process and the constant interaction among them that is anticipated to lead to improved quality outcomes. There is no rule for which component is the most important; rather, the weight given to each component varies according to the clinical situation (Melnyk et al., 2009), as evidence, expertise, context, preferences, and values inform each other in positive and different ways (Fineout-Overholt et al., 2005). Figure 1.1 also shows the complementary nature of EBHC and quality improvement. The two intersect in their shared focus on evaluating outcomes and using the results to improve processes and approaches to care. EBHC involves integration of evidence, clinical expertise, and patient values at the individual patient level; quality improvement is translational—it is a dynamic and ongoing process that involves putting evidence into practice within the healthcare system (Banerjee et al., 2012).

FIGURE 1.1
Complementary nature of evidence-based healthcare and quality improvement.
9780826152268_fig1_1

COMPONENTS OF EVIDENCE-BASED HEALTHCARE

EVIDENCE

Best evidence is current, up-to-date, relevant, valid, and grounded in research about the effects of a treatment, the potential for harm from exposure to particular agents, the accuracy of diagnostic tests, and the predictive power of prognostic factors (Cochrane, 1972). Although Archie Cochrane wrote this definition almost 50 years ago, it remains relevant to the practice of EBHC. Point-of-care clinicians and other decision makers need accurate, high-quality information to make important decisions.

Practice decisions require valid information about prevention, diagnosis, prognosis, treatment, and experience of care. The evidence available in any clinical decision-making can be arranged in an order of strength based on its likelihood of freedom from error. Many evidence pyramids have been proposed that rank the degree of bias by study design. As shown in a traditional hierarchy (Figure 1.2A), as one moves higher up the pyramid, the quality of the evidence is likely to improve. There are limitations to this hierarchical model, however.

The first limitation is that quality varies by study design and implementation approach. Consequently, there can be a blurring of quality differences across levels of the hierarchies in the pyramid. The Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) approach addresses this by providing a framework in which the quality of a study, or the certainty of the evidence at the outcome level, is judged across domains such as risk of bias, imprecision, inconsistency, indirectness, and publication bias (Brozek, 2009). On this basis, the quality of the evidence can be rated up or down. Consequently, Murad et al. (2016) recommend a modified pyramid, shown in Figure 1.2B, in which the straight lines are replaced with wavy lines to reflect that the evidence at each level of the pyramid can be rated up or down based on these quality domains.
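As a rough illustration of this rating-up-and-down logic, the sketch below treats the certainty levels as an ordered scale. This is a deliberate simplification of GRADE, whose judgments are qualitative rather than arithmetic, and the function and names here are our own, not part of any published tool:

```python
# A simplified, arithmetic sketch of GRADE-style rating (hypothetical helper;
# real GRADE certainty judgments are qualitative, not numeric).
LEVELS = ["very low", "low", "moderate", "high"]

def rate_certainty(study_design: str, downgrades: int = 0, upgrades: int = 0) -> str:
    """Start randomized trials at "high" and observational studies at "low",
    then move the rating down for concerns (e.g., imprecision, inconsistency,
    indirectness, publication bias) or up (e.g., large, consistent effects)."""
    start = 3 if study_design == "rct" else 1
    score = max(0, min(len(LEVELS) - 1, start - downgrades + upgrades))
    return LEVELS[score]

print(rate_certainty("rct", downgrades=1))  # one serious concern drops an RCT to "moderate"
```

The wavy lines in Figure 1.2B express the same idea: a study's starting level by design is only a starting point.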

A second problem with the hierarchies shown in Figure 1.2A and B is that SRs sit at the top of the pyramid. Although SRs are the pillar on which EBHC rests (Munn et al., 2018), an SR may synthesize any of the research designs on the pyramid, and consequently not all SRs would be considered equal. Thus, Murad et al. recommend, as shown in Figure 1.2C, that SRs not be placed at the top of the pyramid but instead serve as the lens through which the other types of studies are viewed.

The results of SRs provide the most valid evidence base to inform clinical decision-making. The type of SR, that is, the lens through which one examines the question, varies depending on the type of clinical question to be answered. Pearson et al. (2012) have long argued the need to gather evidence beyond effectiveness research, as the questions asked by decision-makers include not only effectiveness but also the feasibility, appropriateness, and meaningfulness of health practices and delivery methods. Answering these questions requires a diverse range of research methodologies to generate appropriate evidence (Pearson, 2004). More information on the process of EBHC and searching for the best available evidence is provided later in the chapter.

CLINICAL EXPERTISE

Best research evidence by itself is insufficient to direct practice. Clinical expertise, or the clinician’s accumulated experience, knowledge, and clinical skills, is also a necessary element of evidence-based decision-making. Clinical expertise incorporates two types of evidence—experiential and physical.

FIGURE 1.2
Evidence pyramid.
9780826152268_fig1_2

Source: Reprinted with permission from Murad, M. H., Asi, N., Alsawas, M., & Alahdab, F. (2016). New evidence pyramid. BMJ Evidence-Based Medicine, 21(4), 125–127.

  • Experiential evidence is based on the clinical practice insight, skill, and expertise of the healthcare provider and is often referred to as intuitive, craft, or tacit knowledge.

  • Physical evidence is any tangible object that may play a role in decision-making, such as diagnostic reports, signs, or symptoms. The clinician’s proficiency and judgment in interpreting the presenting symptoms are acquired through clinical experience and clinical practice (Sackett, 1998).

Practitioners use their professional craft knowledge, the proficiency and judgment acquired through clinical experience, to determine whether best evidence applies to a particular patient or a group and whether the evidence should be integrated into the clinical decision. The aim is not to universally apply best evidence but to individualize the evidence based on practitioner assessment. This tacit, unspoken knowledge is used to assess the course and effects of implemented interventions. By using clinical skills and experience, the expert clinician rapidly identifies “each patient’s unique health state and diagnosis, their individual risks and benefits of potential interventions, and their personal circumstances and expectations” (Straus et al., 2000, p. 1).

An important piece of evidence integral to clinical expertise includes internal evidence generated from quality improvement data, patient assessment and patient perception data, and evaluation—what sometimes is called practice-based evidence. By measuring what we do and the outcomes for all patients, there is a rich source of evidence as to perceived and experienced effectiveness for different groups of patients and families. This type of knowledge guides the skilled practitioner in taking evidence and making decisions regarding the appropriateness for the individual patient.

Once new practices based on best evidence are implemented, the clinician assesses the course and effects of the intervention and uses their clinical acumen to make necessary adjustments (Shah & Chung, 2009). This dynamic balance between evidence and expertise is captured by Sackett and colleagues, as they describe the dangers in practice guided only by clinical expertise or only by best evidence: Without clinical expertise, practice risks becoming tyrannized by evidence, for even excellent external evidence may be inapplicable to or inappropriate for an individual patient. Without current best evidence, practice risks becoming rapidly out of date to the detriment of patients (Sackett et al., 1996). As this text focuses on the SR process, it will consequently emphasize best practice using empirical research; however, it is essential to recognize that the tacit knowledge of clinical expertise is a valuable component of EBHC.

PATIENT/FAMILY PREFERENCES AND VALUES

It is insufficient to simply blend expertise and evidence, for at the heart of EBHC is the patient with unique physical, emotional, psychological, environmental, and cultural perspectives. True EBHC requires the clinician to practice in a patient-centered model incorporating patient-specific evidence in decision-making and including the patient and family as co-decision makers in the selection of interventions or approaches for the patient’s improved health. Failure to consider these patient preferences and practicing from a medical model leads to unintentional bias toward a professional’s view of the world. Recognizing that there may be a monocultural bias in evidence, healthcare providers must be prepared to make “real-time” adjustments to their approach to care based on patient and family perspectives.

Making the patient central to the decision-making process involves the following:

  1. Developing a relationship with the patient.

  2. Listening to the patient’s expectations, concerns, and beliefs.

  3. Learning about the patient’s experiences in managing their illness or treatment regimen.

  4. Discussing (two-way communication) the best available evidence, one’s clinical assessment/judgment regarding that evidence, and the patient perspective/preferences evidence.

  5. Using a shared decision-making approach incorporating all evidence sources.

Shared decision-making begins with finding out what matters to the patient. Being aware of the ethnocultural beliefs and traditions of patient populations enhances the clinician’s openness to individual and family perspectives. With mutual understanding and respect, both the professional’s perspective as a healthcare provider and the patient’s preferences can be weighed together in arriving at the treatment plan for the individual patient. In shared decision-making, there is a bidirectional, respectful, collaborative relationship in which the clinician contributes technical expertise (evidence and clinical expertise) while the patient is the expert on their own needs, situations, and preferences (Truglio-Londrigan & Slyer, 2018). Bringing the two together advances the goal of matching care with patient preferences and shifts the locus of decision-making from the clinician alone to one inclusive of the patient (Johnson et al., 2010), which is more likely to result in care that is meaningful and valuable to the patient.

Bastemeijer et al.’s (2017) qualitative SR examined what patients say they value in healthcare; their results are presented as a taxonomy that can guide the patient–professional interaction. Figure 1.3 illustrates the seven interrelated themes that fall within the patient–professional interaction. The authors suggest that there is a relevance to the sequence of these elements. Recognition by professionals of the patient’s uniqueness and autonomy leads to the professional behaving with compassion, professionalism, and responsiveness, thereby creating interaction based on partnership and empowerment. Table 1.1 provides a description of each of the seven elements.

Attending to the patient perspective goes beyond clinical practice and is a priority in clinical research as well. Including the perspectives of the patient as an end user of the research enhances the relevance of research to actual health and perhaps will even decrease the evidence–practice gap. The Patient-Centered Outcomes Research Institute (PCORI) is mandated to improve the quality and relevance of evidence available to help patients, caregivers, clinicians, employers, insurers, and policy makers to make informed health decisions (www.pcori.org). To meet this mandate, PCORI involves patients, caregivers, clinicians, and other healthcare stakeholders, including payers and researchers, throughout the process.

FIGURE 1.3
Taxonomy of patient values and preferences.
9780826152268_fig1_3

Source: Reprinted with permission from Bastemeijer, C. M., Voogt, L., van Ewijk, J. P., & Hazelzet, J. A. (2017). What do patient values and preferences mean? A taxonomy based on a systematic review of qualitative papers. Patient Education and Counseling, 100(5), 871–881.

TABLE 1.1
DESCRIPTION OF THE SEVEN ELEMENTS IN THE TAXONOMY OF PATIENT VALUES
Patient uniqueness: Encapsulates the need of the patient to be seen as a unique individual. The patient brings to the health situation a personal history and membership in one or more social and ethnocultural groups, which shapes their life experience, preferences, and knowledge as a patient. A key to a practice approach that recognizes patient uniqueness is to appreciate that the health problem represents only a small component of who the person is as a whole.
Patient autonomy: Refers to respecting the patient’s ability to participate in making their own decisions on treatment and care. In attending to patient autonomy, the patient may opt to be a co-decision-maker or may opt to have the clinician be the decision-maker. This option may vary based on the differing presentation of the health condition and should not be treated as a universal preference.
Compassion: A professional approach characterized by empathy for the person. A compassionate response means being attentive to the person as a unique individual demonstrating understanding, caring, and honesty.
Professionalism: The competence (knowledge and skill) and attitude in behavior and communication with the patient and other professionals and an openness to discuss alternatives.
Responsiveness: A coordinated, caring approach to the implementation of treatment that respects uniqueness and autonomy and is timely, safe, and appropriately responsive to the management of symptoms, especially pain.
Partnership: The preferred relationship between the patient and the professional. Interactions that are based on partnership facilitate open dialogue and mutual respect.
Empowerment: Professionals enabling patients to have control of their own situation and to trust in themselves and the patient–provider interaction. Requires support and education to help the patient and family learn to manage the condition and treatment with a goal of self-management and prevention.

CLINICAL CONTEXT

Context is any circumstance in which something happens. Contextual evidence is based on local factors that are specific to a community or setting and helps determine the feasibility of a specific intervention or approach. Clinical care takes place within many differing contexts; those contexts dictate the nature of that care and the evidence available to assist in decision-making (Dieppe et al., 2002). Sources of evidence in the clinical context may include the following:

  • Audit and performance data

  • Patient stories and narratives

  • Knowledge about the culture (norms) and politics of the organization and the individuals within it

  • Social and professional networks

  • Information from stakeholder evaluation

  • Local and national policies (Rycroft-Malone et al., 2004)

These sources of data can be used to guide practice decisions and practice changes and to inform the need for research-based evidence. Adapting evidence-based practices to the local context requires comparing the pieces of evidence for similarities or differences in context, determining whether those differences or similarities matter, and ultimately adapting the intervention to be more congruent with the local context. To guide clinicians in adapting high-quality evidence and clinical practice guidelines (CPGs) to accommodate the local context, the ADAPTE framework and toolkit, available through the Guidelines International Network (GIN), can serve as a guide. This framework is discussed further in Chapter 12.

EVIDENCE-BASED HEALTHCARE PROCESS

Figure 1.4 links the EBHC process to the six “A” steps model developed by Sackett et al. (2000) and by Straus et al. (2000). The process and its components (ask, acquire, appraise, apply, act, and assess) are defined and explained in depth here.

FIGURE 1.4
Evidence-based healthcare process: the “A” steps model.
9780826152268_fig1_4

Practicing from an evidence-based paradigm calls for clinicians to adopt a mind-set of informed skepticism. Instead of simply accepting tradition, hierarchy, and expert opinion, the EBHC clinician questions “why” things are being done as they are, “whether” there is a better way to do them, and “what” the evidence suggests may be best in the specific clinical situation (Salmond, 2007). This stance of clinical inquiry, along with concerns generated from evidence at the practice level (clinical expertise, patient values, and contextual issues), leads the clinician to recognize the need to ask for further information. At its core, EBHC is a lifelong, self-directed learning process of clinical questioning, searching for and appraising evidence, and incorporating relevant information into daily practice.

ASK

There is both an art and a science to asking clinical questions that efficiently obtain the information needed to make informed clinical decisions about patients. Information needs from practice are converted into focused, structured, and searchable questions relevant to the clinical issue by using the PICO (population, intervention, comparison, outcome) mnemonic or similar approaches. PICO provides a systematic way to identify the components of the clinical issue and structures the question in a way that guides the search for evidence (Stillwell et al., 2010). These four components of a good clinical question can be thought of as data fields that aid in the search for evidence and answers. How the question is framed, and whether it will include all components of PICO (such as a “C,” or comparison), depends on the type of question. One benefit of developing a thoughtful and careful question is that it makes the search for evidence easier and more efficient: a focused clinical question makes the keywords and phrases needed for searching readily apparent.
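Although question formulation is a clinical skill rather than a computational one, the way PICO turns an information need into search-ready data fields can be sketched in a few lines of Python. The class and method names below are our own, purely for demonstration, and the example question echoes the low-back pain discussion later in the chapter:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PicoQuestion:
    population: str
    intervention: str
    comparison: Optional[str]  # some question types have no "C"
    outcome: str

    def search_terms(self) -> List[str]:
        """Collect the non-empty PICO components as candidate search keywords."""
        parts = [self.population, self.intervention, self.comparison, self.outcome]
        return [p for p in parts if p]

# Hypothetical example question (see the bed-rest vs. staying-active evidence
# discussed later in this chapter).
question = PicoQuestion(
    population="adults with acute low-back pain",
    intervention="advice to stay active",
    comparison="advice for bed rest",
    outcome="pain relief and functional improvement",
)
print(question.search_terms())
```

Each field maps directly onto the keywords and phrases one would enter into a database search, which is the practical payoff of a well-built question.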

ACQUIRE

After the question is framed, the next step in the process is to acquire the evidence. Practitioners should first search sites where research has already been critically reviewed, summarized, and deemed of sufficient quality to guide clinical practice. These resources are called filtered resources. The 6 “S” model (updated over time from the 4 “S” and 5 “S” models as evidence summary services have grown) describes six layers of evidence sources for answering a clinical question (DiCenso et al., 2009). Using this top-down approach to obtain the best evidence most efficiently, searching should begin at the highest level of the pyramid, recognizing that high-level evidence resources may not be available for all questions. Figure 1.5 illustrates the 6 “S” model.

The highest level of filtered resources features evidence-based clinical evidence integrated into computerized decision support systems built into the electronic health record (EHR). Although not a reality yet in most EHRs, this would allow patient data to be matched with evidence to generate patient-specific recommendations.

Summaries are the next level to draw from. This source of evidence includes clinical practice guidelines (CPGs) that draw evidence from SRs as well as electronic evidence-based textbooks. Clinical guidelines are generally developed by professional groups (with representation from government agencies and specialty practice organizations) through a rigorous process of gathering, appraising, and combining evidence of varying levels. Guidelines provide actionable recommendations (with grading of the evidence) that inform practice decisions. Examples of sources for evidence-based clinical guidelines include the Agency for Healthcare Research and Quality (AHRQ), the U.S. Preventive Services Task Force (USPSTF) recommendations, GIN, the Emergency Care Research Institute (ECRI) Guidelines Trust, the Registered Nurses’ Association of Ontario (RNAO), the American Cancer Society, the World Health Organization, and the Oncology Nursing Society. Electronic textbooks such as DynaMed, UpToDate, EBSCOhost, and Natural Medicines provide a body of evidence at a topic level. Topic-level summaries, drawn from SRs where possible, provide a full range of evidence concerning management options for a given topic or health problem (e.g., congestive heart failure). Summaries are kept up to date and provide a mechanism for passive decision support when linked to individual patient problems.

FIGURE 1.5
The 6 “S” levels of organization of evidence from healthcare research.
9780826152268_fig1_5

Synopses of syntheses provide a summary and critical appraisal of SRs and usually a discussion regarding the strength of the evidence and the clinical applicability of its findings. As such, the clinician can more confidently examine whether it is applicable to their question and population of interest. Many synopses also provide additional discussion on the practice implications of the study and a rating of the evidence. Synopses of SRs can be found in evidence-based journals such as Evidence-Based Nursing, BMJ Evidence-Based Medicine, and the ACP Journal Club.

Syntheses are SRs in which the authors have synthesized research on a defined clinical question. Because SRs use a rigorous approach to comprehensively search and appraise studies and provide a summary of the pooled findings, syntheses provide a valuable level of evidence for informing clinical decision-making and are considered more valid than a single study. The search for SRs should initially concentrate on sites that focus on this type of evidence. These include the following:

The clinician must, however, critically appraise the SR for its rigor and relevance.

Synopses of single studies provide limited information specific to one study, but the study has been reviewed by an expert, with output consisting of an overview and appraisal of the study. The general standard is that a practice change should not be based on a single study but requires higher-level evidence sources such as SRs or synopses of these reviews. Sources of synopses of studies include evidence-based journals such as Evidence-Based Nursing and BMJ Evidence-Based Medicine.

The base of the pyramid of evidence resources draws from unfiltered resources—originally published single studies retrieved from common databases such as PubMed and CINAHL. In searching, the components of the PICO question can be used as search terms. One can further limit the search by stipulating the preferred research design for the question being asked (the design that yields the strongest evidence), thus narrowing the results to the “best” available evidence. The reader needs to critically appraise the article in order to determine the balance of its strengths and weaknesses.
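The top-down search through these layers follows a simple rule: start at the highest level of the 6 “S” pyramid and stop at the first layer where evidence exists. For illustration only, that logic can be sketched in Python (the level labels and function are our own shorthand, not part of the published model):

```python
from typing import Optional, Set

# The six layers of the 6 "S" model, ordered from most to least refined.
SIX_S_LEVELS = [
    "systems",                # decision support built into the EHR
    "summaries",              # guidelines and evidence-based textbooks
    "synopses of syntheses",  # appraised summaries of systematic reviews
    "syntheses",              # systematic reviews
    "synopses of studies",    # appraised summaries of single studies
    "studies",                # unfiltered single studies
]

def best_available(available: Set[str]) -> Optional[str]:
    """Return the highest 6 "S" layer for which evidence exists, if any."""
    for level in SIX_S_LEVELS:
        if level in available:
            return level
    return None

print(best_available({"syntheses", "studies"}))  # the SR outranks the single study
```

The point of the sketch is the ordering: a clinician searching this way never settles for an unfiltered single study when a synthesis or summary already answers the question.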

APPRAISE

All evidence needs some form of appraisal. If the evidence has been pre-appraised, it still needs to be examined for

  • the recommendations,

  • levels of evidence,

  • whether the guideline or study is up to date or current, and

  • the relevance of the evidence to the population of interest.

Chapa et al. (2013) provide a decision guide to appraising and using pre-appraised evidence.

If the retrieved evidence did not come from a synopsis (whether of an SR or an individual study), then it is important to appraise the retrieved studies for quality, for confidence in the trustworthiness of the data, and for clinical usefulness. The appraisal process differs depending on whether it involves primary research (an individual study) or secondary research (a systematic review) and on the type of research design used. An SR by Katrak et al. (2004) examined 127 critical appraisal tools and concluded that there was no gold-standard tool for a specific design and no generic tool that could be used across study types. More recently, Aromataris and Munn (2017) emphasized the importance of using a validated critical appraisal tool yet acknowledged that there is no gold-standard tool. A discussion of the most commonly used, validated, and reliable tools is found in Chapter 8, “Choosing the Right Critical Appraisal Tool.”

APPLY

After high-quality evidence/studies have been selected from the appraisal process, the next step is to determine whether there is applicability to one’s own context and patient population. The decision to apply results in real-time clinical practice is based on the magnitude of the findings, their applicability to different populations, and the strength of the evidence.

In considering the magnitude of the findings or the clinical significance, the practitioner must ask, “Is the size of the benefit (effect size) likely to help my patient?” This requires agreement between the patient and the practitioner on the outcome that is important to the patient. The practitioner can provide the evidence to the patient as to the likelihood of benefit or harm that is specific to the intervention, comparison interventions, and the desired outcome in plain language so that the patient can make an informed decision.

In research that examines whether interventions work or not, such as randomized controlled trials (RCTs), the intervention is often tested in a carefully defined population under tightly controlled situations that do not simulate real-world settings. The practitioner must determine whether the settings and patient populations from the evidence-based studies are sufficiently similar to those from their own routine practice and whether the interventions used can be duplicated and are acceptable to their patients. Thus, in the application stage, one is questioning the applicability of the findings to one’s own context and one’s own patient population. It is important to recognize that although the data may be objective, the meanings have intrinsically subjective values that are dependent on the audience and may differ among nurses, physicians, patients, and administrators (Manchikanti et al., 2007).

A final factor to consider in examining applicability is the strength of the evidence. It is not unusual for studies to be of poor quality, and frequently the recommendations of SRs emphasize the need for more high-quality trials. The progression of recommendations on bed rest for low-back pain illustrates this. Recommendations from “experience” called for bed rest during episodes of acute back pain and sciatica. Early recommendations, based on lower levels and lower quality of evidence, found bed rest to be effective in alleviating low-back pain. Subsequent higher-quality clinical trials found different results. In a 2010 SR on whether to advise patients with acute low-back pain and sciatica to rest in bed versus stay active, moderate-quality evidence showed that patients with acute low-back pain may experience small benefits in pain relief and functional improvement when advised to stay active as compared with recommendations for bed rest. There was little or no difference between the two approaches in patients with sciatica (Dahm et al., 2010). In other words, activity was recommended. However, it must be pointed out that the hierarchy of evidence is not absolute. Observational studies with sufficiently large and consistent treatment effects may be more compelling than small RCTs (Manchikanti et al., 2007).

ACT AND ASSESS

If the practitioner identifies that the evidence can be applied to practice, the final steps are to act (put it into practice) and to assess whether the expected outcomes are achieved. This continuous monitoring and review provide ongoing practice-based data on efficacy and effectiveness.

SUMMARY

This chapter highlighted the skills that nurses need for EBHC. The components of EBHC require best evidence to be integrated with patient values, the clinical context, and clinical judgment/expertise. The process for incorporating the new paradigm (i.e., the paradigm in which the best research evidence is integrated with clinical expertise, clinical context, and the opinions and values of patients and their families in clinical decision-making) requires an ongoing sense of inquiry, in which the practitioner

  • asks questions that challenge the way things are and whether practice is based on the best evidence,

  • has the skills to acquire and appraise the evidence,

  • makes decisions about whether to apply the evidence, and finally

  • acts by implementing the new practice and assessing the outcomes of the change.

PRACTICE ACTIVITIES
1. Think about an aspect of your professional practice that you routinely undertake (e.g., depression screening, vital signs every four hours). What is the evidence base for this aspect of your practice? If you do not know, do you know where you could find an evidence base for this aspect of your practice?

2. Identify any areas of care in your current practice situation that you feel would benefit from evidence that is more robust. Ask those you work with to do the same. Are there any areas of commonality?

3. Look at the policy or procedure of a routine in which you are frequently engaged. What were the sources of evidence used to develop the policy or procedure? How old are they? Is the policy still relevant, up to date, and valid?

SUGGESTED READING

  1. Bastemeijer, C. M., Voogt, L., van Ewijk, J. P., & Hazelzet, J. A. (2017). What do patient values and preferences mean? A taxonomy based on a systematic review of qualitative papers. Patient Education and Counseling, 100(5), 871–881. https://doi.org/10.1016/j.pec.2016.12.019
  2. Chapa, D., Hartung, M. K., Mayberry, L. J., & Pintz, C. (2013). Using pre-appraised evidence sources to guide practice decisions. Journal of the American Association of Nurse Practitioners, 25(5), 234–243. https://doi.org/10.1111/j.1745-7599.2012.00787.x
  3. Gupta, S., Rajiah, P., Middlebrooks, E. H., Baruah, D., Carter, B. W., Burton, K. R., Chatterjee, A. R., & Miller, M. M. (2018). Systematic review of the literature: Best practices. Academic Radiology, 25(11), 1481–1490. https://doi.org/10.1016/j.acra.2018.04.025
  4. Pollock, A., Campbell, P., Struthers, C., Synnot, A., Nunn, J., Hill, S., Goodare, H., Morris, J., Watts, C., & Morley, R. (2018). Stakeholder involvement in systematic reviews: A scoping review. Systematic Reviews, 7(1), 208. https://doi.org/10.1186/s13643-018-0852-0
  5. Siddaway, A. P., Wood, A. M., & Hedges, L. V. (2019). How to do a systematic review: A best practice guide for conducting and reporting narrative reviews, meta-analyses, and meta-syntheses. Annual Review of Psychology, 70, 747–770. https://doi.org/10.1146/annurev-psych-010418-1028

QUESTIONS FOR DISCUSSION

1. In your own words, describe EBHC and how it relates to your practice situation.

2. What are empirically supported practices that you engage in? How do you know that they are empirically supported?

3. Discuss how nursing and other health professionals obtain their knowledge base.

4. What are the main differences between knowledge and beliefs and between intuition and professional judgment?

REFERENCES

  1. Agency for Healthcare Research and Quality. (2004). Systems to rate the strength of scientific evidence (Evidence Report/Technology Assessment No. 47, Publication No. 02-E016). Rockville, MD.
  2. Aromataris, E., & Munn, Z. (Eds.). (2017). Joanna Briggs Institute reviewer’s manual. The Joanna Briggs Institute. https://reviewersmanual.joannabriggs.org/
  3. Banerjee, A., Stanton, E., Lemer, C., & Marshall, M. (2012). What can quality improvement learn from evidence-based medicine? Journal of the Royal Society of Medicine, 105(2), 55–59.
  4. Bastemeijer, C. M., Voogt, L., van Ewijk, J. P., & Hazelzet, J. A. (2017). What do patient values and preferences mean? A taxonomy based on a systematic review of qualitative papers. Patient Education and Counseling, 100(5), 871–881.
  5. Brozek, J. L., Akl, E. A., Alonso-Coello, P., Lang, D., Jaeschke, R., Williams, J. W., Phillips, B., Lelgemann, M., Lethaby, A., Bousquet, J., Guyatt, G. H., Schünemann, H. J., & GRADE Working Group. (2009, May). Grading quality of evidence and strength of recommendations in clinical practice guidelines. Part 1 of 3. An overview of the GRADE approach and grading quality of evidence about interventions. Allergy, 64(5), 669–677. https://doi.org/10.1111/j.1398-9995.2009.01973.x. PMID: 19210357.
  6. Chapa, D., Hartung, M. K., Mayberry, L. J., & Pintz, C. (2013, May). Using preappraised evidence sources to guide practice decisions. Journal of the American Association of Nurse Practitioners, 25(5), 234–243. https://doi.org/10.1111/j.1745-7599.2012.00787.x. Epub 2012 Sep 28. PMID: 24170565.
  7. Cochrane, A. (1972). Effectiveness and efficiency: Random reflections on health services. Nuffield Provincial Hospitals Trust (Reprinted in 1989 in association with the BMJ. Reprinted in 1999 for Nuffield Trust by the Royal Society of Medicine Press, London).
  8. Dahm, K. T., Brurberg, K. G., Jamtvedt, G., & Hagen, K. B. (2010). Advice to rest in bed versus advice to stay active for acute low-back pain and sciatica. Cochrane Database of Systematic Reviews, 2010(6), CD007612. https://doi.org/10.1002/14651858.CD007612.pub2
  9. Dieppe, P., Rafferty, A., & Kitson, A. (2002). The clinical encounter—The focal point of patient-centered care. Health Expectations, 5(4), 279–281.
  10. DiCenso, A., Bayley, L., & Haynes, R. (2009). Accessing pre-appraised evidence: Fine-tuning the 5S model into a 6S model. Evidence Based Nursing, 12(4), 99–101. https://doi.org/10.1136/ebn.12.4.99-b
  11. Djulbegovic, B., & Guyatt, G. H. (2017, July 22). Progress in evidence-based medicine: A quarter century on. Lancet, 390(10092), 415–423. https://doi.org/10.1016/S0140-6736(16)31592-6. Epub 2017 Feb 17. PMID: 28215660.
  12. Fineout-Overholt, E., Melnyk, B. M., & Schultz, A. (2005, November–December). Transforming health care from the inside out: Advancing evidence-based practice in the 21st century. Journal of Professional Nursing, 21(6), 335–344. https://doi.org/10.1016/j.profnurs.2005.10.005. PMID: 16311228.
  13. Grimshaw, J. M., Eccles, M. P., Lavis, J. N., Hill, S. J., & Squires, J. E. (2012, May 31). Knowledge translation of research findings. Implementation Science, 7, 50. https://doi.org/10.1186/1748-5908-7-50. PMID: 22651257; PMCID: PMC3462671.
  14. Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. National Academies Press.
  15. Institute of Medicine (US) Roundtable on Evidence-Based Medicine. (2009). Leadership commitments to improve value in healthcare: Finding common ground: Workshop summary. National Academies Press (US). https://www.ncbi.nlm.nih.gov/books/NBK52847/
  16. Johnson, S. L., Kim, Y. M., & Church, K. (2010). Towards client-centered counseling: Development and testing of the WHO decision-making tool. Patient Education and Counseling, 81(3), 355–361.
  17. Katrak, P., Bialocerkowski, A. E., Massy-Westropp, N., Kumar, S., & Grimmer, K. A. (2004). A systematic review of the content of critical appraisal tools. BMC Medical Research Methodology, 4, 22. https://doi.org/10.1186/1471-2288-4-22
  18. Kilbourne, A. M., Goodrich, D. E., Miake-Lye, I., Braganza, M. Z., & Bowersox, N. W. (2019). Quality enhancement research initiative implementation roadmap: Toward sustainability of evidence-based practices in a learning health system. Medical Care, 57(10 Suppl 3), S286–S293.
  19. Leach, M. J., & Tucker, B. (2018). Current understandings of the research-practice gap in nursing: A mixed-methods study. Collegian, 25(2), 171–179.
  20. Makary, M. A., & Daniel, M. (2016, May 3). Medical error—The third leading cause of death in the US. British Medical Journal, 353, i2139. https://doi.org/10.1136/bmj.i2139. PMID: 27143499.
  21. Manchikanti, L., Boswell, M. V., & Giordano, J. (2007). Evidence-based interventional pain management: Principles, problems, potential and applications. Pain Physician, 10(2), 329–356.
  22. Melnyk, B. M., Fineout-Overholt, E., Stillwell, S. B., & Williamson, K. M. (2009). Igniting a spirit of inquiry: An essential foundation for evidence-based practice. American Journal of Nursing, 109(11), 49–52.
  23. Melnyk, B. M., Gallagher-Ford, L., Zellefrow, C., Tucker, S., Thomas, B., Sinnott, L. T., & Tan, A. (2018). The first US study on nurses’ evidence-based practice competencies indicates major deficits that threaten healthcare quality, safety, and patient outcomes. Worldviews on Evidence-Based Nursing, 15(1), 16–25.
  24. Munn, Z., Stern, C., Aromataris, E., Lockwood, C., & Jordan, Z. (2018). What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Medical Research Methodology, 18(1), 19.
  25. Murad, M. H., Asi, N., Alsawas, M., & Alahdab, F. (2016). New evidence pyramid. Evidence Based Medicine, 21(4), 125–127. https://doi.org/10.1136/ebmed-2016-110401
  26. Pagliaro, L. (2016). Essay: Evidence-based medicine (EBM): New paradigm or integration? Digestive and Liver Disease, 48(1), 2–3.
  27. Pearson, A. (2004). Balancing the evidence: Incorporating the synthesis of qualitative data into systematic reviews. JBI Reports, 2(2), 45–63.
  28. Pearson, A., Jordan, Z., & Munn, Z. (2012). Translational science and evidence-based healthcare: A clarification and reconceptualization of how knowledge is generated and used in healthcare. Nursing Research and Practice, 2012, 792519.
  29. Prentiss, A., & Butler, E. (2018). What’s in a name: Performance improvement, evidence-based practice, and research? Nursing & Health Sciences Research Journal, 1(1), 40–45.
  30. Rycroft-Malone, J., Seers, K., Titchen, A., Harvey, G., Kitson, A., & McCormack, B. (2004). What counts as evidence in evidence-based practice? Journal of Advanced Nursing, 47(1), 81–90.
  31. Sackett, D. L. (1998). Evidence-based medicine. Spine, 23(10), 1085–1086.
  32. Sackett, D. L., Rosenberg, W. M., Muir Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996, January 13). Evidence based medicine: What it is and what it isn’t. British Medical Journal, 312(7023), 71–72. https://doi.org/10.1136/bmj.312.7023.71. PMID: 8555924; PMCID: PMC2349778.
  33. Sackett, D. L., Straus, S. E., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (2000). Evidence based medicine: How to practice and teach EBM (2nd ed.). Churchill Livingstone.
  34. Salmond, S. (2007). Advancing evidence-based practice: A primer. Orthopaedic Nursing, 26(2), 114123.
  35. Shah, H. M., & Chung, K. C. (2009). Archie Cochrane and his vision for evidence-based medicine. Plastic and Reconstructive Surgery, 124(3), 982988.
  36. Stillwell, S. B., Fineout-Overholt, E., Melnyk, B. M., & Williamson, K. M. (2010). Evidence-based practice, step by step: Asking the clinical question: A key step in evidence-based practice. American Journal of Nursing, 110(3), 58–61.
  37. Straus, S. E., Richardson, W. S., Glasziou, P., & Haynes, R. B. (2000). Evidence-based medicine. How to practice and teach EBM. (3rd ed., p. 1). Churchill Livingstone Elsevier.
  38. Swanson, J. A., Schmitz, D., & Chung, K. C. (2010). How to practice evidence-based medicine. Plastic & Reconstructive Surgery, 126(1), 286294.
  39. Truglio-Londrigan, M., & Slyer, J. T. (2018, January 22). Shared decision-making for nursing practice: An integrative review. The Open Nursing Journal, 12, 114. https://doi.org/10.2174/1874434601812010001. PMID: 29456779; PMCID: PMC580620.