Evidence-based PT: a crisis in movement


  • amacs
    replied
    Late reply on a re-reading.

    I am afraid to say it does, Keith. But at least it is coherent!

    A.



  • Keith
    replied
    Originally posted by amacs View Post
    So where do we go, Keith? How do we formulate therapeutic interventions if the evidence base is that corrupt?
    I have been trying to think of what my answer would be to this question (which is, obviously, beyond my pay grade), but I offer this:

    All clinicians working with patients with painful complaints MUST have:

    (1) a premise
    (2) a process (i.e. test - intervention - retest)
    (3) a non-iatrogenic narrative
    (4) humility

    ...each of which needs to be informed by the best evidence available with an understanding of the limits of that evidence.

    The downside is that there is A LOT of room here for interpretation, which is why therapy is not a commodity, especially when considering that the other two pillars of EBP include the expertise (bias) of the clinician and the expectations of the patient (informed by their culture).

    I suspect that this answer feels insufficient though, doesn't it?

    Respectfully,
    Keith
    Last edited by Keith; 06-10-2014, 02:50 AM.



  • amacs
    replied
    Ioannidis's 6 corollaries help when trying to assess 'best available' evidence
    http://www.plosmedicine.org/article/...l.pmed.0020124
    I think Prof Kerry is suggesting something that exceeds those corollaries, or would seek to make them redundant. I haven't wrapped my head around it yet.

    Andy



  • Evanthis Raftopoulos
    replied
    Also, philosophy of practice and value system are often used to interpret evidence, so everything kind of blends together in a way that "makes sense" to the individual. I'm willing to bet that we tend to dismiss the evidence that does not support our already established philosophies and value systems.

    My apologies if I'm talking excessively here.



  • Evanthis Raftopoulos
    replied
    An example of philosophy of practice would be

    " I avoid relatively aggressive approaches to a pain problem, because they seem counterintuitive in the context of treating pain"

    A conflicting example would be

    " I embrace relatively aggressive approaches to a pain problem, because it seems reasonable that they can help downregulate threat response systems"

    A couple more examples of philosophy of practice (these may also be part of a value system for some):

    " I understand that you wish to have [massage, or that much pressure], but I don't provide that type of service".

    or

    "I understand that manipulation (hvlat) was helpful in the past, but I don't provide that".


    An example of practicing based on a value system would be

    "I really enjoy working with this individual, so I don't mind seeing him/her for a few more sessions"

    or

    "I really enjoy working with this individual, but I don't think I should continue seeing him/her".



  • Evanthis Raftopoulos
    replied
    I could be wrong, but it seems to me that most of us practice based on expertise (which IMO includes our own interpretation of the literature), philosophy of practice, and value system.

    Ioannidis's 6 corollaries help when trying to assess 'best available' evidence
    http://www.plosmedicine.org/article/...l.pmed.0020124
    Last edited by Evanthis Raftopoulos; 05-10-2014, 05:55 PM.



  • amacs
    replied
    So where do we go, Keith? How do we formulate therapeutic interventions if the evidence base is that corrupt?

    Andy



  • Keith
    replied
    Originally posted by keithp View Post
    And herein lie the inherent problems with the 'less-wrong' EBM, compared to anecdote and empiricism. We are on the right path, but continue to fall short of practice that is informed by science to the extent that we all desire...

    ...So, we are going to have ANOTHER study to assess the difference between a passive placebo and a movement-based therapy for chronic pain? And when the MDT group improves more than placebo by a statistically significant margin? What claims will be made?
    Originally posted by venerek View Post
    I don't see a big issue with the above RCT protocol...
    Originally posted by John W View Post
    ...As Keith suggests, the very comparison of a passive with an active intervention for patients with persistent pain is loaded with variables that could lead to specious conclusions...

    ...I can already predict the outcome of this study. There'll be a short-term effect in favor of the active McKenzie intervention, which will level off in the long-term and ultimately show small effect sizes, if any.
    Then there is the issue of what such a study actually tells us. According to this review by Kerry published last year, fewer than half of the scientific studies that get published are scientifically true.

    Sometimes, when I read of research studies, it feels like we are simply tossing a weighted coin.
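
    As a rough illustration of that weighted coin, here is a minimal sketch of the positive-predictive-value argument from the Ioannidis paper linked earlier in this thread; the inputs are assumptions chosen for the example, not figures from any study discussed here:

        # Sketch of the positive predictive value (PPV) of a "significant" finding,
        # following the structure of Ioannidis's argument. All inputs are assumed.
        def ppv(prior_odds, power=0.8, alpha=0.05, bias=0.0):
            """Chance that a statistically significant result reflects a true effect.

            prior_odds: pre-study odds that the tested relationship is real
            power:      probability of detecting a true effect (1 - beta)
            alpha:      significance threshold
            bias:       fraction of analyses distorted towards a positive finding
            """
            true_positives = (power + bias * (1 - power)) * prior_odds
            false_positives = alpha + bias * (1 - alpha)
            return true_positives / (true_positives + false_positives)

        # If only about 1 in 5 tested hypotheses is really true and there is modest bias,
        # a "significant" finding is close to a coin flip:
        print(round(ppv(prior_odds=0.25, power=0.8, alpha=0.05, bias=0.2), 2))  # ~0.47

    With those assumed inputs the sketch comes out a little under one half, which is about as "weighted" as the coin feels.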

    Respectfully,
    Keith



  • amacs
    replied
    Originally posted by Evanthis Raftopoulos View Post
    This is Roger’s definition of pain science https://twitter.com/RogerKerry1/stat...95899639898112

    It’s weird to me trying to discuss things on Twitter; I don’t know how people do it.
    Beyond me, Evan; too fragmented, and it feels like it is all about the 'sound bite' rather than having room to look at the content.

    Andy



  • Diane
    replied
    A Facebook thread:
    Dearest Physio colleagues, friends, and critics,
    Following numerous requests, and for the purposes of recent discussions, I am defining 'Pain Science' as:
    "That body of scientific knowledge which de-emphasises a biological component to a person's painful experience and prioritises education as an interventional strategy"
    Please tell me if this is inaccurate / mis-represents the notion of Pain Science as you see it, for you are much more cleverer than what I am.
    Another:
    Thanks to the marvellous Sigurd Mikkelsen
    I think Kerry has done a good job of provoking conversation, and of letting some fun be poked at him in the process without taking it personally.



  • Evanthis Raftopoulos
    replied
    This is Roger’s definition of pain science https://twitter.com/RogerKerry1/stat...95899639898112

    It’s weird to me trying to discuss things on Twitter; I don’t know how people do it.



  • John W
    replied
    Kenny,
    When people think of "placebo" in the way it's being delivered in this study, they are comparing that definition to what's used in drug trials. In placebo-controlled drug trials, the non-specific effects of the interaction between the investigator and subjects are essentially a wash across both groups. This isn't the case in a trial of this nature. As Keith suggests, the very comparison of a passive with an active intervention for patients with persistent pain is loaded with variables that could lead to specious conclusions.

    For instance, are they going to control for the "placebo" group participants' expectations of the detuned US and shortwave therapy? What if members of the control group have a strong negative expectation of this intervention based on past experience? Owing to the known effects of negative expectation on pain, that could inflate the mean differences between the groups, resulting in statistical significance and a conclusion that McKenzie has been shown to be efficacious in a placebo-controlled trial, when all that will actually have been shown is that expectations of passive placebo interventions in patients with chronic LBP are highly variable.
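
    As a rough sketch of that concern (the group sizes, pain-score changes, and expectation penalty below are made-up numbers for illustration, not values from the protocol), a toy simulation in which the "active" arm adds nothing specific but the sham arm's improvement is dampened by negative expectation still produces a between-group difference:

        # Toy simulation: no specific treatment effect, only an assumed expectation gap.
        import random, statistics

        random.seed(0)
        N = 100                      # hypothetical participants per arm
        natural_change = 15          # average improvement everyone gets anyway
        nocebo_penalty = 8           # assumed dampening from negative expectation

        def arm(mean_improvement, sd=10):
            return [random.gauss(mean_improvement, sd) for _ in range(N)]

        mckenzie = arm(natural_change)                   # nothing specific added
        sham = arm(natural_change - nocebo_penalty)      # same, minus the nocebo penalty

        diff = statistics.mean(mckenzie) - statistics.mean(sham)
        print(f"Mean between-group difference: {diff:.1f} points")

    The entire "effect" in that sketch is the expectation gap, yet it would read as McKenzie beating placebo.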

    There's also the issue of how the time is spent with the clinician in the control group. Surely those in the McKenzie group will get lots of verbal interaction with the therapist. How will this compare to the interaction that is received by the subjects getting fake US and diathermy? Won't this increased verbal interaction have the potential to improve therapeutic alliance and response to treatment?

    I can already predict the outcome of this study. There'll be a short-term effect in favor of the active McKenzie intervention, which will level off in the long-term and ultimately show small effect sizes, if any.

    This study will cost a lot of money. Awards will be won, perhaps a Bronze Lady. Speeches will be made at international symposia...



  • Kyle Ridgeway
    replied
    I found Roger's post thought-provoking and appreciated many of his insights. I posted the following response:

    Roger, thank you for the thought-provoking post. I agree with many of the insights and frustrations you elucidate, as well as with your point that evidence-based practice is not really being understood and, subsequently, not utilized correctly. Much work to be done...

    Your post seems to inadvertently create a few false dichotomies:
    Utilizing Bio-Psycho-Social OR Biomedical/Biomechanical. Similar issues are present in the proper application of the BPS model. The BPS model is meant to FULLY incorporate the biomedical and biomechanical model (that's the "bio" part), but it also recognizes the importance of psychological and social constructs. The BPS model is, quite simply, the biomedical model expanded and broadened to assess more than just a person's anatomy and physiology.

    Missing an aortic aneurysm is not a failure of the BPS model nor "pain science" per se, but rather a failure of the clinician to properly screen medical conditions and rule out occult medical conditions.

    Pain Science OR Biomedical. Similar thoughts here. The application and integration of the science of pain into the treatment of patients should never ignore biomedicine, red flags, or proper medical screening. It's inherent, and should be assumed, that as a professional you are charged with proper screening, ruling out, and evaluation.

    As you mentioned, those researching pain and the resulting studies illustrate to us just how darn complex the individualized, lived pain experience really is, and how many factors affect "pain." It's about more than brains, but helping us recognize that PT is not just from C1 down is quite important. Yet, instead of recognizing this complexity and working to integrate that understanding into practice, as you recognized, we instead make up complex treatment paradigms, classification systems, and sub-groups of responders in ways that are likely not quite valid. PT as a profession also loves to attempt classification of clinical syndromes into made-up nominal pain diagnoses. Impingement, patellofemoral pain syndrome, and other clinical syndromes/diagnoses come to mind. As you mentioned, I'm not sure these nominal diagnoses, or at times complex diagnostic constructs, help us any...

    As usual, Jason Silvernail summarizes the issue with keen insight, so I will link to this must-read post: http://www.evidenceinmotion.com/abou...p-deep-models/

    Some of my general thoughts on integrating evidence and research into practice:
    http://ptthinktank.com/2014/01/06/me...ce-dptstudent/
    http://ptthinktank.com/2014/05/15/da...n-garbage-out/
    http://ptthinktank.com/2014/05/04/dp...al-experience/

    We absolutely must learn from the humanities, psychology, and the other sciences.
    Other relevant posts:
    http://ptthinktank.com/2014/05/07/pr...omment-page-1/
    http://physiologicalpt.com/2014/08/1...an-deceive-us/
    http://physiologicalpt.com/2014/09/0...effectiveness/

    Some discussion happening here: http://www.somasimple.com/forums/showthread.php?t=19074

    Thanks again for your honest, straightforward critiques of what we do and the importance of modeling the WHY.



  • venerek
    replied
    I don't see a big issue with the above RCT protocol. It looks to me like it will be a valuable addition to the literature. The protocol is structured to determine whether the McKenzie method has efficacy in treating CNSLBP. To determine efficacy, a no-treatment or placebo control group should be used.

    We might assume that, since the McKenzie approach is active, it will likely show a greater effect than placebo, but by how much? Will it be statistically significant? Or, more importantly, clinically significant? I think these are important questions that can provide us with information to help determine whether the McKenzie method is a useful approach for treating CNSLBP. Some well-designed pragmatic trials would also be necessary to give a clearer picture of the benefits (or lack thereof) of the McKenzie method.
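
    To make the distinction concrete, here is a minimal sketch comparing a confidence interval against a minimal clinically important difference; the group size, pain-scale difference, spread, and MCID are assumptions for illustration, not numbers from the protocol:

        # Statistical vs clinical significance: compare the 95% CI to an assumed MCID.
        import math

        n = 150                     # hypothetical participants per arm
        mean_diff = 1.1             # assumed between-group difference on a 0-10 pain scale
        sd = 2.5                    # assumed pooled standard deviation
        mcid = 2.0                  # assumed minimal clinically important difference

        se = sd * math.sqrt(2 / n)  # standard error of the difference in means
        ci_low, ci_high = mean_diff - 1.96 * se, mean_diff + 1.96 * se

        statistically_significant = ci_low > 0     # CI excludes zero
        clinically_meaningful = ci_low >= mcid     # CI excludes differences below the MCID

        print(f"95% CI: {ci_low:.2f} to {ci_high:.2f}")
        print("Statistically significant:", statistically_significant)
        print("Clinically meaningful:", clinically_meaningful)

    With those assumed numbers the difference is statistically significant yet falls well short of the MCID, which is exactly the gap between the two questions.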

    More on efficacy vs effectiveness here -- http://physiologicalpt.com/2014/09/0...effectiveness/



  • John W
    replied
    Yeah, Keith, the PTJ editors need to bone up on the research on non-specific effects and patient expectation. These passive "placebo" studies are passé (pardon the pun) given that we are attempting to establish a therapeutic relationship with a person suffering persistent pain. We know that treating pain requires entering the "third space", and the extent to which an active intervention is more likely than a passive one to achieve this will determine the statistical significance of a study like this.

    As an aside, I'm going to address a point that I agreed with in Kerry's piece, and it's relevant to the issue of research studies like this one Keith referenced. I've seen some less than well-designed trials published in PTJ and JOSPT over recent months from research groups in South America. The dry needling trial by Mejuto-Vazquez et al (JOSPT, Sept 2014), in which kappa values from the referenced reliability studies were mis-cited in the introduction, comes to mind. This study never should have passed peer review. Aside from the mis-citations of previous research, the review of the reliability literature for the diagnosis of trigger points (a total of one short paragraph) was far too cursory for a trial that hinges on the existence of these things.

    In his blog, Kerry implies that journal editors are too worried about impact factor at the expense of "facilitat[ing] the dissemination of thought and knowledge." It seems that some studies are getting published from areas of the world that are not as well represented in the higher-profile journals. I'm all for increasing intellectual diversity, but this shouldn't occur at the expense of rigorous peer-review standards. I wonder if impact-factor concerns are driving this.
