Blast from the Past: Don't always believe what you read…

This post was originally published on FTPT on February 27, 2012.  The concepts described in this article are still very relevant and important for us to understand as we assess how research is meaningful to clinical practice.  To view more of Dr. Cook’s work, check him out here. Enjoy!!!
By: Chad Cook PT, PhD, MBA, FAAOMPT
Last year, Charles Sheets and I published a paper titled “Clinical equipoise and personal equipoise: two necessary ingredients for reducing bias in manual therapy trials” in the Journal of Manual and Manipulative Therapy. The focus of the paper was to inform the reader about the perils of failing to control for a lack of personal equipoise in randomized controlled trials and, in a related sense, the hazards of an inappropriate study design that lacks clinical equipoise (in other words, a study set up for one intervention to succeed over another). A true state of equipoise exists when one has no good basis for choosing between two or more care options. Violations of equipoise can occur in many forms, and I’m thankful for the opportunity from Dr. Joe Brence to further discuss this important issue. My comments reflect my own concerns about this issue and do not represent Charles Sheets’ thoughts, although his are very welcome.
Believability is a concept associated with face validity that allows one to determine, at “face” value, whether the results of a study truly have clinical merit. If a surgical team compared “conservative care” versus surgery but provided a conservative intervention that was poorly defined, was not supported by clinical guidelines, or, worse yet, was called something such as exercise, manual therapy, or physical therapy yet was not provided by experts in the application of these interventions, most conservative-based clinicians would cry foul and would suggest the intervention was biased and lacked equipoise. If a team of McKenzie-based clinicians provided a McKenzie (or MDT) approach versus a generic application of unsophisticated manual therapy care, one might also suggest bias. Further, because of assumed bias, sponsored interventions from device companies, equipment suppliers, or others with a financial interest in the outcome are often very difficult to publish. The authors of these studies are frequently required to report their vested interest in one side of the intervention. A recent paper published in JBJS on which I was senior author (Nunley et al. JBJS 2012) is a characteristic example of this challenge, since the primary author was also a paid consultant for the STAR total ankle replacement device company. We (the authors) were required to disclose our personal interests in the outcome of the trial, which indeed did favor the STAR device. I was able to disclose that I had no personal interests. Others were not.
The Committee on Publication Ethics (COPE) guidelines require this disclosure for all publications, and when it is absent, that is considered a strong enough reason to request retraction of the publication. A retraction is a “pulling” of the article from publication because of new information or because of ethics violations that were not disclosed at the time of publication. As stated in the COPE guidelines document, “Retractions are also used to alert readers to cases of redundant publication (i.e. when authors present the same data in several publications), plagiarism, and/or failure to disclose a major competing interest likely to influence interpretations or recommendations”.
Recently, there have been a number of manual therapy studies designed by clinicians who have a personal interest in the success of one intervention over another. These clinicians have a vested interest in the applications that are part of the interventional model, either because they provide instruction in these techniques in continuing education courses from which they profit (although it may seem minimal, it is not), or because the tools are part of a philosophical approach or a decision tool that was designed through their own efforts or the efforts of those with whom they are affiliated. In nearly all cases, the bias is unintentional and certainly not malicious. Nonetheless, in most cases the comparative intervention (comparator) is designed in such a manner that it does not adequately represent clinical practice, and in some cases the same comparator is used despite the fact that it has been demonstrated to be ineffective in past clinical trials. It is my impression that these studies lack clinical and personal equipoise and require the disclosure of conflict of interest that is outlined within the COPE guidelines.
This concerns me greatly for a number of reasons. First, the findings are not representative, and, with the continued emphasis on “evidence based medicine” and the overt focus on publications as a source mechanism for ‘evidence’, we have a tendency to bias our future and advocate findings that reflect slanted papers. Second, I am concerned that new clinicians, passionate followers of selected manual therapy approaches, and, on occasion, seasoned clinicians will be misled because they lack expansive, formal research training in study methodology. Certainly, once information is advocated within the clinical population, it takes years for its use to fade, even after its findings are acknowledged to be erroneous.
Since this is a blog, and I’ve seen blogs used quite effectively in the past to sway the masses, it’s my hope that you “don’t always believe what you read”. And while I am not personally naming any papers or authors with respect to the concerns I’ve outlined, it’s very likely that if you suspect something screwy in a published paper, your suspicions are accurate.
