In the hierarchy of evidence, randomized controlled trials (RCTs) are considered the highest-quality design for evaluating an intervention. They are most valuable when conducted with precision and reported with sufficient detail. Since the term “evidence-based medicine” was coined in the 1990s,1 there has been regular debate about the perceived deficiencies of the evidence-based practice model in surgical specialties. Until recently, trials of surgical interventions have been uncommon, and much surgical decision-making has been based primarily on experience gained from poorly designed, low-quality studies such as case series. Several review articles have concluded that RCTs are not commonly reported in the surgical literature2–5 and that the quality of surgical trials is inferior to that of trials of medical interventions.3,5–7
As the number of published RCTs in the field of surgery increases, surgeons can use more high-quality evidence in their daily decision-making. With this increased focus on RCTs, we are recognizing the spectrum of possible trials in surgery, including the comparison of new surgical procedures with conventional procedures (type I), the comparison of surgical techniques with laparoscopic methods or with medical treatments (type II) and the evaluation of medical treatments (or drug trials) involving surgical patients (type III).8
One major difference between drug trials and surgical trials is that surgical trials require skills and training to administer the surgical procedure. Even for a fully trained surgeon, there is a learning process to become an expert at a new procedure. As well, there is inherent variation in the quality of performance of a given procedure by different surgeons. Drug trials do not require any additional skills to administer an active medication (versus a placebo) to patients; surgical trials, by contrast, are prone to differential “expertise” bias. Given this and the other challenges in surgical trials, including randomization, allocation concealment, blinding and patient recruitment, improvements are needed in both the conduct of surgical research and the quality of its reporting.
To complement the Canadian Journal of Surgery’s focus on evidence-based practice, this new series of short papers will provide “Practical Tips for Surgical Research.” Our goals are simple: to highlight the concepts of evidence-based surgery, to discuss the methodological problems of conducting high-quality research in surgery and the biases these problems can introduce, and to provide surgeons and surgical researchers with practical tips to avoid or minimize these biases. We will cover a broad range of concepts and methods useful for producing high-quality evidence. Our choice of topics was based on a review of the literature; an evaluation of the reporting quality of published studies in different areas of surgery, highlighting the weaknesses of these reports and their impact on the quality of published evidence; and the group’s experience in conducting research involving surgical patients.6,8–14 We hope that surgical communities will find this evidence-based series informative, and we, in turn, welcome feedback from readers about how these tips might be further improved or adapted for surgical research settings.
Footnotes
Contributors: Drs. Farrokhyar and Bhandari initiated and cochaired this series of articles. They edited all manuscripts for accuracy and consistency of the concepts, content and format, as well as advised on all aspects of the project’s development. They approved this article for publication.
Competing interests: No funding was received for the preparation of this paper. Dr. Bhandari is funded, in part, by a Canada Research Chair at McMaster University.
Accepted January 27, 2009.