J Manipulative Physiol Ther. 2019 (Jun); 42 (5): 319–326 ~ FULL TEXT
Ian D. Coulter, PhD, Gursel R. Aliyev, MA, Margaret D. Whitley, MPH, Lisa S. Kraus, MSPPM,
Praise O. Iyiewuare, MPH, Gery W. Ryan, PhD, Lara G. Hilton, PhD, Patricia M. Herman, PhD
University of California Los Angeles,
Southern California Health Sciences,
Santa Monica, California.
OBJECTIVES: This paper focuses on the methods of a single study, incorporating data from chiropractic clinics into an evidence-based investigation of the appropriateness of manipulation for chronic back pain.
METHODS: A cluster sample of clinics (125) from 6 sites across the United States was chosen for this observational study. Patients with chronic low back and neck pain were recruited using iPads, completed a series of online questionnaires, and gave permission for their patient records to be scanned. Patient records for a random sample of patients were also obtained. RAND staff and clinic personnel collected the record data.
RESULTS: We obtained survey data from 2,024 patients with chronic low back pain, chronic neck pain, or both. We obtained patient record data from 114 of 125 clinics. These included the records of 1,475 of the individuals who had completed surveys (prospective sample) and a random sample of 2,128 patients. Of the 114 clinics, 22% had patient records that were fully electronic, 32% had paper files, and 46% used a combination. About 47% of the clinics scanned the records themselves with training from RAND. We obtained a total of 3,603 scanned records. The patient survey data were collected from June 2016 to February 2017, the provider surveys from June 2016 to March 2017, and the chart pull from April 2017 to December 2017.
CONCLUSIONS: Clinics can be successfully recruited for practice-based studies, and patients can be recruited using iPads. Obtaining patient records presents considerable challenges, and clinics varied in whether they had electronic files, nonelectronic records, or a mixture. Clinic staff can be trained to select and scan samples of charts to comply with randomization and data protection protocols in transferring records for research purposes.
KEYWORDS: Chiropractic; Chronic Pain; Complementary Therapies; Low Back Pain; Neck Pain; Pain; Surveys and Questionnaires
Appropriateness-of-care decisions have been based on the published literature on safety and efficacy and the judgments of experts (both clinical and scientific), but the voices of patients and data from real clinics have been missing.
The Center of Excellence for Research in Complementary and Integrative Health (CERC) was established at RAND specifically to develop a method for appropriateness studies in complementary and integrative health (CIH) that included patient preferences and costs. We also wished to study the appropriateness of the manipulation and mobilization used in chiropractic clinics and to relate that to the outcomes from care. Achieving both these objectives required the recruitment of clinics and patients. In the last decade, considerable attention has been given to evidence-based practice (EBP) as being the most appropriate care, [4–7] but to evaluate the efficacy and effectiveness of therapies and to understand patients’ experiences and beliefs in chiropractic and other areas of CIH, researchers need to collect data from real practices.
This article describes how chiropractic clinics were recruited, trained, and incorporated along with their patients in our study to ensure real-world patient data about preferences and experience are included in appropriateness decisions. In the broader context, the study was based on the premise that EBP requires practice-based evidence, that is, evidence from real clinical practice needs to inform EBP.
Within EBP, a hierarchy of evidence has been established that places randomized controlled trials (RCTs) at the pinnacle and expert opinion at the bottom, with such things as observational studies in between. The result is hierarchies of evidence based on which body of evidence is considered superior (the gold standard). Usually this is based on methodological criteria, so that a double-blinded RCT is considered better than an observational study. The problem here, however, is that it depends on what you want the evidence for. If you are interested in efficacy and causal inference, then the RCT is the superior research design, but if your interest is in effectiveness (what works in real practice), it may not be the most relevant design. Many RCTs exclude patients with comorbidities, but in the real world patients come to the clinic with numerous comorbidities.
The strongest evidence comes from systematic reviews, particularly when a series of RCTs is included in a meta-analysis, which can only be done if the studies are reasonably homogeneous. But basing EBP on systematic reviews ignores the fact that whole areas of health care have insufficient reviews, if any, to systematize or use to develop a meta-analysis. This is especially true for CIH. [9–11] Examples of misleading meta-analyses have already been documented in the literature. Furthermore, studies with negative results are less likely to be published. This itself has a tremendous impact on the “evidence.” [13, 14]
We are left with a dilemma here. True efficacy studies are not based on normal practice but on trials that in many important features do not resemble practice. But studies that are truly based on practice cannot determine efficacy; therefore, because EBP is primarily based on efficacy trials, EBP is not strictly based on true practice. Randomized controlled trials usually enroll new low back pain or neck pain patients who have not already been treated, whereas most patients in a practice have been receiving treatment for a long time, and these chiropractic patients are a self-selected group. Pragmatic trials, such as comparative effectiveness trials, have tried to overcome this problem by making normal practice the focus and simply measuring the outcomes. [15, 16]
Part of the dilemma of trying to create EBP is that practice is essentially case based. Case studies rank lowest as evidence within EBP and have been critiqued within such fields as ethics as being potentially very misleading. But as Godlee noted, from the point of view of practice the “research literature is poorly organized, largely of poor quality and irrelevant to clinical practice, often conflicting, and often not there at all” (p 1621). Even when it does exist, it may be based on a patient sample quite dissimilar from the one treated by the provider.
A strong case, therefore, can be made that practice-based research is also required for true EBP. It requires a rethinking of the methodological approaches to what evidence should count in EBP, and a rethinking of the “house of evidence” into “houses of evidence.” [19, 20]
In summary, the problem is: How do you make the evidence practice based in a way that ensures rigorous methods are applied and valid and reliable data are collected? It is this challenge that the Center of Excellence for Research in Complementary and Integrative Health confronted regarding appropriateness.
Although it might appear self-evident that EBP should rest on practice-based evidence, implementing it in a project requires a whole range of strategies, from recruiting chiropractors and clinics to training clinic staff and collecting data from the clinics. Each step presented distinct challenges in this study. Some of these might be specific to this study and its research question about appropriateness, but many of the solutions could be relevant to other chiropractic studies and to other health professions.
The success of enrolling the clinics, the staff, and the patients in this study was very encouraging. In many ways, this study could have been seen by the chiropractors as threatening. It is, after all, a study about the appropriateness of care. But in both this study and the previous one done by RAND on acute low back pain, [22] RAND has been able to enroll a high percentage of the clinics approached. However, this was not achieved primarily through cold calls or emails; snowballing and personal contacts worked best. In this study, we were also able to enroll a large number of patients. Our experience was that if the chiropractor was supportive of the study, then the staff was as well. If both were supportive, patients seemed not only willing to participate but also enthusiastic to do so.
A major logistical challenge in this study was how to collect data from sites with a mix of electronic and paper records. Data collection at mixed-record clinics took more time and effort. Chiropractic is clearly in transition toward electronic medical files; as these become more widespread, sampling records for studies will be easier. Working with clinics that had recently changed to electronic systems also proved difficult because some of their patient files had not yet been transferred to the new system. Paper-only files were the most time-intensive and involved dealing with staples, tape, carbon-copy paper, shelving and reshelving files, and moving across various rooms or storage units, because providers often kept only the active files in the front office; the rest could be in storage, the basement, or a variety of other locations.
The other challenge came from the way chiropractors kept records. Sometimes there were legibility issues, some chiropractors used their own cryptic code, and some kept various parts of their patient records in different files, so piecing that information together was difficult. The research team often had to decipher a provider’s unique annotation style. Electronic (or even hybrid) files did not, for the most part, present this type of problem.
This is only the second study done on appropriateness in chiropractic. The previous one was also conducted by RAND but was done over 20 years ago, so there are no comparative data other than that earlier study. We can say that the participation rate of chiropractors and patients in this study was considerably higher. But this study had features the earlier one did not: it focused on chronic back pain, and so dealt with patients with a much longer experience of chiropractic and pain, and it followed patients over time, collecting data at numerous points in the patients’ care. Getting providers to participate in a study that comes into their practice and measures the extent to which the care provided is appropriate is itself a significant achievement, because such a study could be seen as very threatening to a provider. So our results bode well for future studies in chiropractic. But as we have tried to show in this paper, it takes a lot of effort. We hope that by sharing how it was done, and the results, both chiropractic researchers and chiropractors will be encouraged to continue doing practice-based research.
One limitation was that we did not use random samples, which makes generalizing from the data collected problematic. But this was a limitation of the results, not of the methods themselves. Although resources were not a limitation for this study, the resources required do pose a problem for replicating the study, at least on the same scale. The methods, however, are applicable even to much smaller-scale projects. We were able to try numerous approaches to recruiting; future studies could choose the most successful.
In this paper, we have outlined how evidence can be made practice based in a way that ensures rigorous methods are applied and valid and reliable data are collected. Through this process, we learned that clinic staff are essential. The study demonstrated that, at least in chiropractic, and we think in CIH generally, there is a strong desire among practitioners to be involved in research and therefore a good basis for putting the “P” into EBP. If the chiropractor supported the study, so did the staff, and if the staff and chiropractor supported the study, so did the patients. Another lesson from this study was the amount of effort needed to obtain a substantial and empowered sample. The RAND Corporation was helped by its earlier studies in chiropractic and its positive reputation in the chiropractic community, but it was also helped by the responsiveness of the profession in engaging in research.
This study describes approaches that can be used to make sure that the practice, and the patients, are part of EBP.
These approaches were successfully applied in a national study of chiropractic patients with chronic pain.
The findings will be of interest to researchers and clinicians in the complementary and integrative health professions who want to collect data about patient preferences, experiences and beliefs, and patient records.
Funding Sources and Conflicts of Interest
This study was funded by the National Institutes of Health’s National Center for Complementary and Integrative Health Grant No: 1U19AT007912-01. All authors report that they were funded by a grant from the National Center for Complementary and Integrative Health during the study. No conflicts of interest were reported for this study.
Concept development (provided idea for the research): I.D.C., G.R.A.
Design (planned the methods to generate the results): I.D.C., G.R.A., M.D.W., L.S.K., P.O.I., L.G.H.
Supervision (provided oversight, responsible for organization and implementation, writing of the manuscript): I.D.C., G.W.R., P.M.H.
Data collection/processing (responsible for experiments, patient management, organization, or reporting data): G.R.A., M.D.W., L.S.K., P.O.I., L.G.H.
Analysis/interpretation (responsible for statistical analysis, evaluation, and presentation of the results): I.D.C., G.R.A., M.D.W., L.S.K., P.O.I., L.G.H.
Literature search (performed the literature search): I.D.C.
Writing (responsible for writing a substantive part of the manuscript): I.D.C., G.R.A., M.D.W.
Critical review (revised manuscript for intellectual content, this does not relate to spelling and grammar checking): I.D.C., G.R.A., M.D.W., L.S.K., P.O.I., G.W.R., L.G.H., P.M.H.
Coulter, ID, Herman, PM, Ryan, GW, Hays, RD, Hilton, LG, and Whitley, MD.
Researching the Appropriateness of Care in the Complementary and Integrative Health Professions: Part I
J Manipulative Physiol Ther. 2018 (Nov); 41 (9): 800–806
Putting the practice into evidence-based dentistry.
J Calif Dent Assoc. 2007; 35: 45–49
Herman P, Hilton L, Sorbero ME, et al
Characteristics of Chiropractic Patients Being Treated for Chronic Low Back and Neck Pain
J Manipulative Physiol Ther. 2018; 41: 445–455
Appropriateness: the next frontier.
Br Med J. 1994; 308: 218–219
Evidenced-based practice and appropriateness of care studies.
J Evid Based Dent Pract. 2001; 1: 222–226
Expert panels and evidence: the RAND alternative.
J Evid Based Dent Pract. 2001; 1: 142–148
Evidence-based medicine (editorial).
Spine. 1998; 23: 1085–1086
Evidence based complementary and alternative medicine: promises and problems.
Forsch Komplementarmed. 2007; 14: 102–108
Linde, K and Coulter, ID.
Systematic reviews and meta-analyses.
in: G Lewith, W Jonas, H Walach (Eds.) Clinical Research in Complementary Therapies. 2nd ed.
Elsevier, Oxford, England; 2011: 119–134
Evidence summaries and synthesis: necessary but insufficient approach for determining
clinical practice of integrated medicine?
Integrative Cancer Therapies. 2006; 5: 282
Bland, CJ, Meurer, LN, and Maldonado, GA.
Systematic approach to conducting a non-statistical meta-analysis of research literature.
Acad Med. 1995; 70: 642–653
Egger, M and Smith, GD.
Misleading meta-analysis (editorial).
Br Med J. 1995; 310: 752–754
Eskinazi, D and Muehsam, D.
Is the scientific publishing of complementary and alternative medicine objective?
J Altern Complement Med. 1999; 5: 587–594
Easterbrook, PJ, Berlin, JA, Gopalan, R, and Matthews, DR.
Publication bias in clinical research.
Lancet. 1991; 337: 867–872
Coulter, ID, Khorsan, R, Crawford, C, and Hsiao, AF.
Integrative health care under review: an emerging field.
J Manipulative Physiol Ther. 2010; 33: 690–710
Comparative effectiveness research: does the emperor have clothes?
Altern Ther Health Med. 2011; 17: 8–15
Glasziou, P, Guyatt, GH, Dans, AL, Dans, LF, Straus, S, and Sackett, DL.
Applying the results of trials and systematic reviews to individual patients. Editorial.
Am Coll Physicians J Club. 1998; 129: A15–A16
Godlee, F. Applying research to individual patients.
Evidence based case reports will help. Editorial.
Br Med J. 1998; 316: 1621–1622
Coulter, I, Elfenbaum, P, Jain, S, and Jonas, W.
SEaRCH™ expert panel process: streamlining the link between evidence and practice.
BMC Res Notes. 2016; 9: 16
The Evidence House: How to Build an Inclusive Base for Complementary Medicine
Western Journal of Medicine 2001 (Aug); 175 (2): 79–80
Whitley, MD, Coulter, ID, Ryan, GW, Hays, RD, Sherbourne, C, and Herman, PM.
Researching the Appropriateness of Care in the Complementary and Integrative Health Professions Part 3:
Designing Instruments With Patient Input
J Manipulative Physiol Ther. 2019 (Jun); 42 (5): 307–318
Iyiewuare, P, Coulter, ID, Whitley, MD, and Herman, PM.
Researching the Appropriateness of Care in the Complementary and Integrative Health Professions Part 2:
What Every Researcher and Practitioner Should Know About the Health Insurance Portability
and Accountability Act and Practice-based Research in the United States
J Manipulative Physiol Ther. 2018 (Nov); 41 (9): 807–813
Shekelle PG, Coulter I, Hurwitz EL, Genovese B, Adams AH.
Congruence Between Decisions To Initiate Chiropractic Spinal Manipulation
for Low Back Pain and Appropriateness Criteria in North America
Annals of Internal Medicine 1998 (Jul 1); 129 (1): 9–17
Roth, CP, Coulter, ID, Kraus, LS et al.
Researching the Appropriateness of Care in the Complementary and Integrative Health Professions Part 5:
Using Patient Records: Selection, Protection, and Abstraction
J Manipulative Physiol Ther. 2019 (Jun); 42 (5): 327–334