All-Star Posts

Don’t always believe what you read…

By: Chad Cook PT, PhD, MBA, FAAOMPT

Last year, Charles Sheets and I published a paper titled "Clinical equipoise and personal equipoise: two necessary ingredients for reducing bias in manual therapy trials" in the Journal of Manual and Manipulative Therapy. The focus of the paper was to inform the reader about the perils of failing to control for a lack of personal equipoise in randomized controlled trials and, in a related sense, the hazards of an inappropriate study design that lacks clinical equipoise (in other words, a study set up for one intervention to succeed over another). A true state of equipoise exists when one has no good basis for a choice between two or more care options. Violations of equipoise can occur in many forms, and I'm thankful for the opportunity from Dr. Joe Brence to further discuss this important issue. My comments reflect my own concerns about this issue and do not represent Charles Sheets' thoughts, although his are very welcome.

Believability is a concept associated with face validity that allows one to determine, at "face" value, whether the results of a study truly have clinical merit. If a surgical team compared "conservative care" versus surgery, but provided a conservative intervention that was poorly defined, was not supported by clinical guidelines, or, worse yet, was called something such as exercise, manual therapy, or physical therapy but was not provided by experts in the application of these interventions, most conservative-based clinicians would cry foul and suggest the intervention was biased and lacked equipoise. If a team of McKenzie-based clinicians provided a McKenzie (or MDT) approach versus a generic application of unsophisticated manual therapy care, one might also suggest bias. Further, because of assumed bias, studies of sponsored interventions from device companies, equipment suppliers, or others with a financial interest in the outcome are often very difficult to publish. The authors of these studies are frequently required to report their vested interest in one side of the intervention. A recent paper published in JBJS on which I was senior author (Nunley et al. JBJS 2012) is a characteristic example of this challenge, since the primary author was also a paid consultant for the STAR total ankle replacement device company. We (the authors) were required to disclose our personal interests in the outcome of the trial, which indeed did favor the STAR device. I was able to disclose that I had no personal interests. Others were not.

The Committee on Publication Ethics (COPE) guidelines actually require this disclosure for all publications, and when it is absent, that is considered a strong enough reason to request retraction of the publication. A retraction is a "pulling" of the article from publication because of new information or because of ethics violations that were not disclosed at the time of publication. As stated in the COPE guidelines document, "Retractions are also used to alert readers to cases of redundant publication (i.e. when authors present the same data in several publications), plagiarism, and/or failure to disclose a major competing interest likely to influence interpretations or recommendations".

Recently, there have been a number of manual therapy studies designed by clinicians who have a personal interest in the success of one intervention over another. These clinicians have a vested interest in the applications that are part of the interventional model, either because they provide instruction in these techniques in continuing education courses from which they profit (although it may seem minimal, it is not), or because the tools are part of a philosophical approach or a decision tool designed through their efforts or the efforts of those with whom they are affiliated. In nearly all cases, the bias is unintentional and certainly not malicious. Nonetheless, in most cases the comparative intervention (comparator) is designed in such a manner that it does not adequately represent clinical practice, and in some cases the same comparator is used despite the fact that it has been demonstrated to be ineffective in past clinical trials. It is my impression that these studies lack clinical and personal equipoise and require the disclosure of conflict of interest that is outlined within the COPE guidelines.

This concerns me greatly for a number of reasons. First, because the findings are not representative; with the continued emphasis on "evidence based medicine" and the overt focus on publications as a source mechanism for 'evidence', we risk biasing our future by advocating findings that reflect slanted papers. Second, I am concerned that new clinicians, passionate followers of selected manual therapy approaches, and on some occasions seasoned clinicians will be misled because they lack expansive/formal research training with respect to study methodology. Certainly, once information is advocated within the clinical population, it takes years for its use to diffuse out of practice, even after its findings are acknowledged to be erroneous.

Since this is a blog, and I've seen blogs used quite effectively in the past to sway the masses, it's my hope that you "don't always believe what you read". And while I am not personally naming any papers or authors with respect to the concerns I've outlined, it's very likely that if you suspect something screwy in a published paper, you are probably accurate in your suspicions.


Categories: All-Star Posts, research


19 replies

  1. I won’t mention any names either, but sadly when “screwy” stuff is brought up online in a blog discussion, the pedestal some researchers are put on remains – even in light of the “screwy” stuff. Readers have a bias too – they have researchers they like and it doesn’t appear to me readers are easily swayed by “screwy” and accept all sorts of excuses a researcher provides. A reader’s argument is that peer-review is gold – it’s all good; it got published in a peer-reviewed journal. It went through the rigor, it’s gotta be good; it’s gotta be true.

    Our American Physical Therapy Association perpetuates this notion of "experts" and who can be believed… look at how PTNow is structured. Apparently we can only trust experts; sadly, that's a fail for the reasons you share.

    You know how some patients have learned helplessness? Our profession perpetuates that type of attitude when it comes to research… research learned helplessness. It’d be nice if all of us clinicians, whether we have authored a paper or not, could not only believe and trust we DO have the smarts to critique what we read, but also realize we DO have to put on our thinking caps because we truly can’t trust the experts 100% of the time.

  2. Dr. Cook,
    First of all, thanks for a fantastic post. Second, I agree that when there is a personal bias, anyone can design a study which will result in their favor. A study which comes to mind was one I read a few months ago in the Clinical Journal of Pain, which compared the effects of using "leeches" vs. "cream" to control lateral epicondylitis symptoms. When designing this study, the authors had to know that "the more impressive a treatment is, the larger the unspecific effects are…" The so-called "control" was barely a control or comparative sham. This likely influenced the results of the participants: leeches are the exciting "in" thing. I see this a lot in studies which compare an exciting tool or technique with "usual care". Is the "usual care" truly reflective of what clinicians do?

  3. We are neurobiologically wired for bias. Otherwise, for most of us, our heads would be spinning as we try to make sense of what is going on in the world. However, the next time I talk with an expert and ask, "Where is the evidence?" I won't add, "Well, if there is no evidence, why aren't you doing a study?"

  4. I won’t give out names either, although I did on my personnal blog in French. Journals share an important responsability in my mind in sorting out these kind of papers where authors have a vested interest in one of the interventions. That is why they should be very cautious when deciding when an article is deemed for publication. And also why I cannot understand why some journals let some papers with a very questionnble methodology make it in their pages. That is especially true when there are relatively few papers on the topic or question asked by the paper as this will give the questionnable paper a relatively important weight among the currently published evidence about the topic studied. Thus, its favored/biased intervention will be given an unwarranted advantage over a competitive one in the currently published evidence base. I think this applies really well in the manips vs mobs current debate.

  5. What TherapyGirl5 said.

    Here’s what I’ve written on my blog regarding this topic:
    http://www.therextras.com/therextras/reading-research.html

    Also, agreeing with Arthur in this recent post:
    http://www.therextras.com/therextras/2011/12/perception-and-perspective.html

    Not naming names either, but other therapists who tweet media reports with inaccurate interpretations of a single journal article are promoting some of that learned research helplessness.

  6. Hey Chad, interesting read, and I’d take it a step further and say, ‘never believe what you read’. Belief to me means that one has reconciled or accepted that something is true, whereas for the most part we live in a world of probabilities, where truth is a dynamic mystery. As soon as ‘belief’ enters into the picture, so does ‘trust’.

    As a wannabe brilliant epidemiologist, it seems to me that the more I dig, the more complexity is revealed. We can’t believe individual papers, because one paper is no paper. We can’t believe systematic reviews, because they may simply lack a definitive future study that swings the conclusions the other way. We can’t believe meta-analyses, because some of the statistical understanding required is beyond even those statisticians who don’t specialise in meta-analyses – let alone the statistical wannabes.

    The subtleties and nuances of statistical interpretation both impress and overwhelm me, and I’ve hung out with enough of these brilliant people, who perform meta-analyses on a daily basis across massive datasets, to know that I don’t know an awful lot – despite my years of study.

    Which bootstrap methodology was used to calculate the 95% confidence intervals, and was it the correct one? What are the assumptions of using said bootstrap method, and were they met? Would an above-average clinician, well versed in statistics, be able to critique a meta-analysis with that depth of knowledge, understanding, and expertise?
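    To make that point concrete, here is a minimal sketch in Python, using entirely made-up data, of how two common bootstrap variants can yield different 95% confidence intervals from the very same resamples. Nothing here refers to any published trial; it only illustrates that the choice of method is a real decision a reader rarely gets to see:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical outcome scores for two treatment groups (made-up data,
    # purely for illustration -- not from any published trial).
    group_a = rng.normal(loc=52.0, scale=14.0, size=30)
    group_b = rng.normal(loc=45.0, scale=14.0, size=30)

    observed_diff = group_a.mean() - group_b.mean()

    # Resample each group with replacement and recompute the mean difference.
    n_boot = 10_000
    boot_diffs = np.empty(n_boot)
    for i in range(n_boot):
        resample_a = rng.choice(group_a, size=group_a.size, replace=True)
        resample_b = rng.choice(group_b, size=group_b.size, replace=True)
        boot_diffs[i] = resample_a.mean() - resample_b.mean()

    # Percentile method: read the 2.5th and 97.5th percentiles directly
    # off the bootstrap distribution.
    pct_lo, pct_hi = np.percentile(boot_diffs, [2.5, 97.5])

    # Basic (reverse-percentile) method: reflect those percentiles around
    # the observed statistic; it assumes the bootstrap distribution
    # approximates the sampling distribution of (estimate - truth).
    basic_lo = 2 * observed_diff - pct_hi
    basic_hi = 2 * observed_diff - pct_lo

    print(f"Observed mean difference: {observed_diff:.2f}")
    print(f"Percentile 95% CI: ({pct_lo:.2f}, {pct_hi:.2f})")
    print(f"Basic 95% CI:      ({basic_lo:.2f}, {basic_hi:.2f})")
    ```

    For a roughly symmetric statistic like a mean difference, the two intervals land close together; for skewed statistics (ratios, correlations), they can diverge noticeably, and neither adjusts for bias and skew the way BCa does. That methodological choice is exactly the kind of detail most readers never get to interrogate.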

    In many circumstances, we either have to 'trust' or we have to remain in a state of 'non-belief'. I find myself in the latter category more often than not, and am comfortable nowadays with the uncertainty of it all.

    And then there’s the issue of applicability. How many clinicians spend time with a patient and really contemplate whether the ‘research’ knowledge they have is applicable to the patient in front of them?

    And if that patient is not similar to those in whom your ‘research’ knowledge was obtained, then as soon as you say in your mind ‘close enough’, you have started to ‘trust’. If I were a fly on the wall of an EBM-type physical therapist, and I had my clipboard out taking notes during their assessment, treatment and advice, how many times would I find a logical fallacy, or a point during the diagnosis when the test results were ambiguous, but ignored?

    I feel confident that I wouldn't have to scratch very hard to find an EBM-type therapist move beyond the limits of the evidence. I base this confidence on personal experience first, and then on my direct observation of many therapists. And I reckon that's just human, and is encapsulated by the phrase 'evidence-based medicine', rather than 'evidence-only medicine'.

  7. Very interesting thoughts posted by all. As a relative newcomer to the PT blog scene, I am continually impressed by the passion and debate that is ongoing in cyberspace.
    As a new graduate I was inundated with the importance of EBM in graduate school and essentially developed a blind faith that all research was developed for the benefit of our patients. Unfortunately, as I have continued to grow as a clinician and my knowledge has started to catch up with my inquisitive nature, I have come to realize that, sadly, ulterior motives exist for the publication of research.
    I share some of the same concerns that others do regarding research design and bias and I am simply left in a state of frustration. I do not have nearly the amount of research experience as others on this post and thus am left hoping that the research I encounter is performed for a just cause.
    As I read more and more articles and debates about EBM, more and more flags are raised cautioning me not to always believe what I read. Whether it is two seemingly well-constructed studies providing contrasting results, or researcher bias as stated above, I find it difficult to weed through the amount of material to find the information which is truly applicable and valuable to my patients.
    Furthermore, I am very concerned about the number of clinicians who base their decisions and practice philosophy on supposedly keystone articles which may or may not support their pre-existing bias. How can they be certain there is not a fatal flaw in the research design that they are overlooking due to psychosocial factors they are unaware of? I will continue to be a skeptic of research even though I desperately want to believe and find the "truth".
    I acknowledge and understand the importance of EBM, but how, as a novice in the research realm, can I find the truth for my patients? Any thoughts from those with a greater understanding?

    • Mr. Maiers,
      It is very simple. In order for you to gain a greater understanding you must, as Dr. Cook has so eloquently stated, gain "expansive/formal research training with respect to study methodology." Take a course on research statistics and methodology to gain an understanding of the various complexities of research design. If you cannot find a course, obtaining Foundations of Clinical Research: Applications to Practice by Portney & Watkins is a great place to start (although it is a very dense read). Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime.

  8. More important, perhaps, is the need for a specific research outcome to be reproduced in multiple locations, by multiple researchers, over an extended period of time. Think resveratrol, glucosamine, chondroitin, fish oil…

    I think we do a pretty fair job of controlling for bias as PT researchers, since the results of most RCTs will likely lead to another project or allow an academician to continue a research agenda. We also should disclose financial incentives, so those who publish using methods they profit from will need to disclose that, so that we, as readers, are aware of possible "financial" bias.

  9. Dr. Cook, thanks for an excellent post providing some food for thought for those of us trying to grasp EBM and how to provide the best care for our patients on a daily basis.

    I think your point regarding competing interests is an important one. As you stated, many times the author may list none, but it is hard not to suspect there are some. Getting the study published might help promote a continuing education course they are involved with, or help with getting tenure at their university; many possible conflicts of interest exist. It is hard to imagine that, while most papers state the authors have no competing interests, they really have none.

    Nic, thanks for pointing out that it is okay to be comfortable with a little uncertainty. I think we search for things that may or may not be there just to try and create certainty for ourselves with some patients.

  10. I think being comfortable with uncertainty is very important in being an effective physical therapist. And it is often what we are asking of our patients as we reassure them that a bioanatomical cause of their pain may never be found and in fact is not necessary to relieve their pain and restore their function.

  11. Great points. I think there is sufficient reason to believe there is bias in all research, so I read all of it with some cynicism. I would like some actual references to a paper or two here; I wonder why the hesitancy to provide an example when it is published research, making it fair game for lively discussion? Post-publication peer review is a great way to generate some critical thinking. There should not be anyone "beyond questioning" in physical therapy (or anywhere, really).

    I need to be able to defend my clinical assumptions and to do that I have to read the literature. I appreciate links to research on twitter and would like to see more discussions like this, especially perhaps of those studies that look great at first glance and have become “the new thing”. I think everything we do in the clinic should be fair game for fresh reviews.

    So, toss up some examples?

  12. I really enjoyed this editorial follow-up by Dr. Cook and am glad to see a follow-up here on a blog. I would love to have follow-up from authors on my site too. I think this drives a significant amount of discussion and provides a learning environment for all, especially from experts in our field. Possibly the responses to editorials and studies published in JMMT or JOSPT could be published through AAOMPT.

    I agree with Sandy that I would like to know more details about which articles are being discussed here. I have a good idea, but feel somewhat out of the loop.

    Harrison
