Trials


  • Ref Trials

    http://www.trialsjournal.com/?utm_ca...ource=Teradata

    Aims & scope
    Trials is an open access, peer-reviewed, online journal that encompasses all aspects of the performance and findings of randomized controlled trials in health. We publish articles on general trial methodology as well as protocols, commentaries and traditional results papers - regardless of outcome or significance of findings.

    Trials aims to experiment with, and refine, innovative approaches to improving communication about trials. We are keen to move beyond publishing traditional trial results articles (although these are included). We believe this journal represents an exciting opportunity to advance the science and reporting of trials.

    Making all its content open access and not retaining copyright, Trials offers a way to make data both freely available and highly visible to trialists worldwide; this will benefit the impact of your publication among peers and societies. The journal has unrestricted space and takes advantage of all the technical possibilities available for electronic publishing.

    To date, journals have focused on reporting the results of trials, with very little coverage of why and how they are conducted. Reports of trials have been restricted by both authors and editors: the former often select only a subset of the outcomes measured, while the latter often impose word limits on the articles published, making it difficult to communicate the lessons learnt from conducting the trial, let alone include adequate details of how the trial was conducted.

    The Internet offers both unlimited space and interactivity, and we are keen to harness these attributes. For instance, trialists are able to provide the detail required to be a true scientific record and do more to make the article's message comprehensible to a variety of reader groups. They are able to communicate not only all outcome measures, as well as varying analyses and interpretations, but also in-depth descriptions of what they did and what they learnt. This sharing of direct experience is fundamental to improving the quality and conduct of trials worldwide.

    Prior to 2006, Trials was published as Current Controlled Trials in Cardiovascular Medicine (CCTCVM). All published CCTCVM articles are available via the Trials website and citations to CCTCVM article URLs will continue to be supported.
    via @SimonGandevia




    New clues to why a French drug trial went horribly wrong

    http://www.sciencemag.org/news/2017/...et_cid=1375976

    Scientists are one step closer to understanding how a clinical trial in France killed one volunteer and led to the hospitalization of five others in January 2016. A new study shows that the compound tested in the study, BIA 10-2474, has effects on many other enzymes in addition to the one it was supposed to inhibit. These “off-target” effects might explain why the drug caused side effects ranging from headaches to irreversible brain damage.

    “We suspected that BIA 10-2474 was a bad compound—now we know for sure,” says neuropharmacologist Daniele Piomelli from the University of California, Irvine, who was not involved in the new study.
    Update 10/06/2017
    Last edited by Jo Bowyer; 10-06-2017, 10:45 PM.
    Jo Bowyer
    Chartered Physiotherapist Registered Osteopath.
    "Out beyond ideas of wrongdoing and rightdoing,there is a field. I'll meet you there." Rumi

  • #2
    The Changing Nature of Scientific Sharing and Withholding in Academic Life Sciences Research.

    http://www.sciencedaily.com/releases...1217130601.htm

    Measures instituted in recent years to encourage the sharing of scientific information appear to have reduced the overall level of withholding of data and materials among academic life science researchers. In their follow up to an earlier study that documented the extent of data withholding in 2000, a multi-institutional research team describes the results of a 2013 survey of investigators at top research institutions. Their report has been published online in Academic Medicine.

    "Our study showed a dramatic change in the ways scientists share information and materials since the first study," says Darren Zinner, PhD, of the Heller School for Social Policy and Management, Brandeis University, lead and corresponding author of the paper. "The good news is that we are seeing more exchanges of information, making it easier for new research to build on existing findings. But we also found that, since most of these exchanges are happening through third parties -- online journal supplements or data repositories -- we are witnessing fewer person-to-person collaborations among scientists."

    The authors note that open disclosure of study methods and results is essential to scientific progress, and many funding organizations require open sharing of research data and materials. But the fact that career advancement in science usually depends on the quality and quantity of published papers and on being the first to publish novel information establishes competing incentives for secrecy. The 2000 study, led by investigators at the Mongan Institute for Health Policy at Massachusetts General Hospital (MGH) and published in the Jan. 23, 2002, issue of JAMA, found that 10 percent of responding scientists had requests for additional information related to published papers denied, and 12 percent admitted denying requests from other investigators.

    Since that study's publication, new policies designed to encourage and sometimes require data sharing have been put into place. The National Institutes of Health (NIH) requires that all grant applications include data sharing plans and that the data and materials be made available to other researchers. Most major journals require study authors to include detailed online data and methodology supplements; third-party repositories for data and biomaterials have been established, and online forums and other technologies have been created to further increase communication. The current study was designed to examine whether and how these policies have affected data sharing and withholding among academic life science researchers.

    The methodology of the current study was essentially unchanged from that of the 2000 study, with surveys sent to life science researchers at the 100 U.S. universities that receive the most NIH funding. As in the previous study, special emphasis was placed on researchers in genetics, a field that is generating massive amounts of data and for which many repositories were specifically established. The only change to the survey was the addition of three questions specifically asking about the effects on sharing of journal policies regarding online supplements and third-party repositories. The surveys were mailed between January and June of 2013.

    Out of 3,000 surveyed investigators, 1,165 responded, compared with 1,849 in 2000. The percentage of respondents who reported making or receiving requests during the preceding three years, the percentage who indicated that they had denied a request, and the percentage of requests that had been denied were essentially unchanged. But there was an overall drop in the total number of person-to-person requests made or received. Whereas the 2000 survey indicated that respondents whose research was supported by industry funding or who were involved in commercial activities, such as licensing patents on their discoveries, were significantly more likely to keep their findings secret, the 2013 survey found that those with industry support were no more likely to withhold data than those without such funding. Request denial continued to be more common among respondents engaged in commercial activities.

    Responses to the questions about new requirements and methods for data sharing indicated that 44 percent had been required by journals to submit detailed data and method supplements and 25 percent were required to place data or biomaterials within third-party repositories. Almost 30 percent of respondents had submitted requests to repositories in the preceding three years -- among those, 11 percent experienced at least one denial, 24 percent experienced a significant delay and 6 percent believed the response they received was 'misleading or inaccurate.' But almost 40 percent of respondents and 62 percent of geneticists indicated that repositories had helped their research.

    The total number of requests made both to other investigators and to repositories increased significantly, particularly among geneticists. Similar percentages of respondents to both surveys reported being 'scooped' by another researcher who had beaten them to publication or that sharing data had compromised the ability of a junior member of their team to publish. Respondents to the 2013 survey were significantly less likely to report that sharing with other researchers resulted in new collaborations, and they were less likely to believe that sharing was helpful towards innovation.

    The authors indicate that the increased availability of data and materials from third parties may explain the overall decline in person-to-person data requests. Total requests made to all sources averaged 8.4 per respondent in 2000 and increased to an average of 15 per respondent, only 6.6 of which were to other scientists, in 2013. Although the percentage of requests that respondents denied was unchanged, the overall number of requests that were honored increased significantly when the new sharing methods were included.

    "A primary finding is that we've seen a change in the way information, data and materials are shared in the scientific community," says Eric Campbell, PhD, of the Mongan Institute for Health Policy at MGH, senior author of the current study and lead author of the 2002 report. "Scientists used to be the gatekeepers of their data, and increasingly that responsibility has transitioned to third-party repositories. The key question now is what impacts -- both positive and negative -- does this shift have on individual scientists, research groups, scientific fields and science as a whole."
    Jo Bowyer
    Chartered Physiotherapist Registered Osteopath.
    "Out beyond ideas of wrongdoing and rightdoing,there is a field. I'll meet you there." Rumi

    Comment


    • #3
      Impact of medical academic genealogy on publication patterns: An analysis of the literature for surgical resection in brain tumor patients

      http://onlinelibrary.wiley.com/doi/1...24569/abstract

      “Academic genealogy” refers to the linking of scientists and scholars based on their dissertation supervisors. We propose that this concept can be applied to medical training and that this “medical academic genealogy” may influence the landscape of the peer-reviewed literature. We performed a comprehensive PubMed search to identify US authors who have contributed peer-reviewed articles on a neurosurgery topic that remains controversial: the value of maximal resection for high-grade gliomas (HGGs). Training information for each key author (defined as the first or last author of an article) was collected (eg, author's medical school, residency, and fellowship training). Authors were recursively linked to faculty mentors to form genealogies. Correlations between genealogy and publication result were examined. Our search identified 108 articles with 160 unique key authors. Authors who were members of 2 genealogies (14% of key authors) contributed to 38% of all articles. If an article contained an authorship contribution from the first genealogy, its results were more likely to support maximal resection (log odds ratio = 2.74, p < 0.028) relative to articles without such contribution. In contrast, if an article contained an authorship contribution from the second genealogy, it was less likely to support maximal resection (log odds ratio = −1.74, p < 0.026). We conclude that the literature on surgical resection for HGGs is influenced by medical academic genealogies, and that articles contributed by authors of select genealogies share common results. These findings have important implications for the interpretation of scientific literature, design of medical training, and health care policy. Ann Neurol 2016
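
      The effect sizes above are reported as log odds ratios; exponentiating converts them back to plain odds ratios, which are easier to read. The snippet below is a minimal sketch of that conversion (assuming natural logarithms, the usual convention); the two values are simply the figures quoted in the abstract, not a reanalysis of the study data.

import math

# Log odds ratios quoted in the abstract (assumed to be on the natural-log scale).
reported = {
    "first genealogy": 2.74,    # articles linked to the first genealogy
    "second genealogy": -1.74,  # articles linked to the second genealogy
}

for label, log_or in reported.items():
    odds_ratio = math.exp(log_or)
    print(f"{label}: log OR = {log_or:+.2f} -> OR ~= {odds_ratio:.2f}")

# Roughly OR ~= 15.5 for the first genealogy (far higher odds of a
# pro-resection conclusion) and OR ~= 0.18 for the second (far lower odds).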
      Jo Bowyer
      Chartered Physiotherapist Registered Osteopath.
      "Out beyond ideas of wrongdoing and rightdoing,there is a field. I'll meet you there." Rumi

      Comment


      • #4
        You are not alone: selecting your group members and leading an outstanding research team

        http://onlinelibrary.wiley.com/enhan...341839ad8f1472

        Being hired in a faculty position is the pot of gold at the end of the scientific training rainbow. After years of education and postdoctoral training, this first faculty job is a thrilling progression in a scientific career that allows us to develop our own research program and pursue the questions that most interest us, but starting a lab from scratch comes with a unique set of pressures and struggles. Luckily, Principal Investigators (PIs) don't have to walk alone, as most build teams to work with. Thus, one of the first and most important things to do at this critical career stage is to recruit team members. Subsequently, the PI leads and guides the group both in terms of the scientific projects and facilitating the career progression of the team members. While scientists are generally very well trained in designing and running experiments, most of us do not receive much, if any, training in the organizational skills needed for managing a group of people and inspiring them to do great work and plan for the future. This is a problem because proper selection, training, and mentoring of team members are essential to achieve scientific aims.

        The Federation of European Neuroscience Societies (FENS) and the Kavli Foundation have established a new group called the FENS-Kavli Network of Excellence (http://www.fens.org/Outreach/FENS-Ka...of-Excellence/) to provide peer support for early career neuroscientists and to provide a voice for people at this career stage in shaping the future of neuroscience. Part of this support is this series of Opinion Articles in EJN to provide advice about different aspects of career progression in neuroscience. The first in the series was a piece about getting hired and negotiating a group leader position (Karadottir et al., 2015). Here we will discuss the next stage of the process: building and effectively leading a research team. There is a glaring omission in this article, the elephant in the lab, which is how to get funding, but don't worry, the next in the series of Opinion Articles will be entirely dedicated to funding. For now, we will focus on recruitment, leadership, mentoring, and handling problems in the team. Unfortunately there is no magic recipe for cooking up the perfect research group, but we hope that our experiences will at least help avoid some of the common pitfalls and provide some tips that have helped us along the way.
        This is not a path down which I am tempted to go, but I love to watch them work and am fascinated by what goes on. I have patients who are researchers, I am a patient in a large trial, and I occasionally have the opportunity to go and look at stuff as a clinician or to be interviewed by those doing qualitative research. My own work has benefited greatly from having access to papers.

        This was flagged up on Mick Thacker's twitter.
        Jo Bowyer
        Chartered Physiotherapist Registered Osteopath.
        "Out beyond ideas of wrongdoing and rightdoing,there is a field. I'll meet you there." Rumi

        Comment


        • #5
          Personalized medicine: Time for one-person trials

          http://www.nature.com/news/personali...trials-1.17411

          Every day, millions of people are taking medications that will not help them. The top ten highest-grossing drugs in the United States help between 1 in 25 and 1 in 4 of the people who take them (see 'Imprecision medicine'). For some drugs, such as statins — routinely used to lower cholesterol — as few as 1 in 50 may benefit1. There are even drugs that are harmful to certain ethnic groups because of the bias towards white Western participants in classical clinical trials2.

          Recognition that physicians need to take individual variability into account is driving huge interest in 'precision' medicine. In January, US President Barack Obama announced a US$215-million national Precision Medicine Initiative. This includes, among other things, the establishment of a national database of the genetic and other data of one million people in the United States.

          Classical clinical trials harvest a handful of measurements from thousands of people. Precision medicine requires different ways of testing interventions. Researchers need to probe the myriad factors — genetic and environmental, among others — that shape a person's response to a particular treatment.

          Studies that focus on a single person — known as N-of-1 trials — will be a crucial part of the mix. Physicians have long done these in an ad hoc way. For instance, a doctor may prescribe one drug for hypertension and monitor its effect on a person's blood pressure before trying a different one. But few clinicians or researchers have formalized this approach into well-designed trials — usually just a handful of measurements are taken, and only during treatment.

          If enough data are collected over a sufficiently long time, and appropriate control interventions are used, the trial participant can be confidently identified as a responder or non-responder to a treatment. Aggregated results of many N-of-1 trials (all carried out in the same way) will offer information about how to better treat subsets of the population or even the population at large.
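
          As a toy illustration of the design described above, the sketch below simulates a single patient in an ABAB crossover (alternating off-treatment and on-treatment blocks) and compares the mean blood-pressure readings in the two conditions. All numbers are invented for illustration; a real N-of-1 trial would add blinding, washout periods and a prespecified decision rule.

import random
import statistics

random.seed(1)

# Hypothetical single patient: four alternating two-week blocks,
# A = control/placebo, B = antihypertensive drug, one reading per day.
blocks = ["A", "B", "A", "B"]
assumed_drug_effect = -8.0  # mmHg, purely illustrative
readings = {"A": [], "B": []}

for block in blocks:
    for _day in range(14):
        value = 150 + random.gauss(0, 6)  # day-to-day variability
        if block == "B":
            value += assumed_drug_effect
        readings[block].append(value)

mean_off = statistics.mean(readings["A"])
mean_on = statistics.mean(readings["B"])
print(f"mean systolic BP off treatment: {mean_off:.1f} mmHg")
print(f"mean systolic BP on treatment:  {mean_on:.1f} mmHg")
print(f"estimated within-patient effect: {mean_on - mean_off:+.1f} mmHg")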
          It will be interesting to see the results of the Conformis knee replacement multicentre trials when they become available.
          Last edited by Jo Bowyer; 17-01-2016, 08:45 PM.
          Jo Bowyer
          Chartered Physiotherapist Registered Osteopath.
          "Out beyond ideas of wrongdoing and rightdoing,there is a field. I'll meet you there." Rumi

          Comment


          • #6
            If we want medicine to be evidence-based, what should we think when the evidence doesn’t agree?

            https://theconversation.com/if-we-wa...nt-agree-53152

            To understand if a new treatment for an illness is really better than older treatments, doctors and researchers look to the best available evidence. Health professionals want a “last word” in evidence to settle questions about what the best modes of treatment are.

            But not all medical evidence is created equal. And there is a clear hierarchy of evidence: expert opinion and case reports about individual events are at the lowest tier, and well-conducted randomized controlled trials are near the top. At the very top of this hierarchy are meta-analyses – studies that combine the results from multiple studies that asked the same question. And the very, very top of this hierarchy are meta-analyses performed by a group called the Cochrane Collaboration.

            To be a member of the Cochrane Collaboration, individual researchers or research groups are required to adhere to very strict guidelines about how meta-analyses are to be reported and conducted. That’s why Cochrane reviews are generally considered to be the best meta-analyses.

            However, no one has ever asked whether the results of meta-analyses performed by the Cochrane Collaboration differ from those of meta-analyses from other sources. In theory, if you compared a Cochrane and a non-Cochrane meta-analysis, both published within a similar time frame, you’d tend to expect that they’d have chosen the same studies to analyze, and that their results and interpretation would more or less match up.

            Our team at Boston University’s School of Public Health decided to find out. And surprisingly, that’s not what we found.
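
            For readers unfamiliar with how a meta-analysis "combines the results from multiple studies", the sketch below shows the simplest version: a fixed-effect, inverse-variance pooled estimate with a 95% confidence interval. The three study results are invented numbers for illustration; real Cochrane reviews use considerably more machinery (random-effects models, heterogeneity statistics, risk-of-bias assessment).

import math

# Hypothetical studies: (effect estimate, standard error) on the same outcome.
studies = [(-0.30, 0.15), (-0.10, 0.10), (-0.25, 0.20)]

# Fixed-effect inverse-variance pooling: weight each study by 1 / SE^2.
weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect: {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")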




            The crisis of expertise

            https://aeon.co/essays/its-time-to-r...b451c-69418129

            In the 1970s, the top nutritional scientists in the US told the government that eggs, among many other foods, might be lethal. There could be no simpler application of Occam’s Razor, with a trail leading from the barnyard to the morgue. Eggs contain a lot of cholesterol, cholesterol clogs arteries, clogged arteries cause heart attacks, and heart attacks kill people. The conclusion was obvious: Americans need to get all that cholesterol out of their diet. And so they did. Then something unexpected happened: Americans gained a lot of weight and started dying of other things.

            The egg scare was based on a cascade of flawed studies, some going back almost a half century. People who want to avoid eggs can still do so, of course. In fact, there are now studies that suggest that skipping breakfast entirely – which scientists have also long been warning against – isn’t as bad as anyone thought either.

            Experts get things wrong all the time. The effects of such errors range from mild embarrassment to wasted time and money; in rarer cases, they can result in death, and even lead to international catastrophe. And yet experts regularly ask citizens to trust expert judgment and to have confidence not only that mistakes will be rare, but that the experts will identify those mistakes and learn from them.

            Day to day, laypeople have no choice but to trust experts. We live our lives embedded in a web of social and governmental institutions meant to ensure that professionals are in fact who they say they are, and can in fact do what they say they do. Universities, accreditation organisations, licensing boards, certification authorities, state inspectors and other institutions exist to maintain those standards.

            This daily trust in professionals is a prosaic matter of necessity. It is in much the same way that we trust everyone else in our daily lives, including the bus driver we assume isn’t drunk or the restaurant worker we assume has washed her hands. This is not the same thing as trusting professionals when it comes to matters of public policy: to say that we trust our doctors to write us the correct prescription is not the same thing as saying that we trust all medical professionals about whether the US should have a system of national healthcare. To say that we trust a college professor to teach our sons and daughters the history of the Second World War is not the same as saying that we therefore trust all academic historians to advise the president of the US on matters of war and peace.

            For these larger decisions, there are no licences or certificates. There are no fines or suspensions if things go wrong. Indeed, there is very little direct accountability at all, which is why laypeople understandably fear the influence of experts.

            How do experts go wrong? There are several kinds of expert failure. The most innocent and most common are what we might think of as the ordinary failures of science. Individuals, or even entire professions, get important questions wrong because of error or because of the limitations of a field itself. They observe a phenomenon or examine a problem, come up with theories and solutions, and then test them. Sometimes they’re right, and sometimes they’re wrong.

            Science is learning by doing. Laypeople are uncomfortable with ambiguity, and they prefer answers rather than caveats. But science is a process, not a conclusion. Science subjects itself to constant testing by a set of careful rules under which theories can be displaced only by other theories. Laypeople cannot expect experts to never be wrong; if they were capable of such accuracy, they wouldn’t need to do research and run experiments in the first place. If policy experts were clairvoyant or omniscient, governments would never run deficits, and wars would break out only at the instigation of madmen.

            The goal of expert advice and prediction is not to win a coin toss; it is to help guide decisions about possible futures. Professionals must own their mistakes, air them publicly, and show the steps they are taking to correct them. Laypeople must exercise more caution in asking experts to prognosticate, and they must educate themselves about the difference between failure and fraud.

            If laypeople refuse to take their duties as citizens seriously, and do not educate themselves about issues important to them, democracy will mutate into technocracy. The rule of experts, so feared by laypeople, will grow by default.

            Democracy cannot function when every citizen is an expert. Yes, it is unbridled ego for experts to believe they can run a democracy while ignoring its voters; it is also, however, ignorant narcissism for laypeople to believe that they can maintain a large and advanced nation without listening to the voices of those more educated and experienced than themselves.
            I love this!

            I am one of those asked to give "expert opinion" on a regular basis. I have a great deal of mileage... and grey hair, but I am not a scientist and I don't have a degree. An opinion is, when it comes down to it, an opinion.

            06/06/2017




            Trust me, I’m an… expert
            By timcocks


            https://noijam.com/2017/06/13/trust-me-im-an-expert/

            Update 13/06/2017
            Last edited by Jo Bowyer; 13-06-2017, 01:48 PM.
            Jo Bowyer
            Chartered Physiotherapist Registered Osteopath.
            "Out beyond ideas of wrongdoing and rightdoing,there is a field. I'll meet you there." Rumi

            Comment


            • #7
              Research integrity: Don't let transparency damage science

              http://www.nature.com/news/research-...TWT_NatureNews

              Transparency has hit the headlines. In the wake of evidence that many research findings are not reproducible1, the scientific community has launched initiatives to increase data sharing, transparency and open critique. As with any new development, there are unintended consequences. Many measures that can improve science2 — shared data, post-publication peer review and public engagement on social media — can be turned against scientists.
              Jo Bowyer
              Chartered Physiotherapist Registered Osteopath.
              "Out beyond ideas of wrongdoing and rightdoing,there is a field. I'll meet you there." Rumi

              Comment


              • #8
                Interview: Professor David Vaux talks about Statistics and Publishing in Big Journals

                https://motorimpairment.neura.edu.au...dr-david-vaux/
                Jo Bowyer
                Chartered Physiotherapist Registered Osteopath.
                "Out beyond ideas of wrongdoing and rightdoing,there is a field. I'll meet you there." Rumi

                Comment


                • #9
                  Evaluation of a self-management patient education program for patients with fibromyalgia syndrome: study protocol of a cluster randomized controlled trial

                  http://bmcmusculoskeletdisord.biomed...891-016-0903-4

                  Abstract

                  Background
                  Fibromyalgia syndrome (FMS) is a complex chronic condition that makes high demands on patients’ self-management skills. Thus, patient education is considered an important component of multimodal therapy, although evidence regarding its effectiveness is scarce. The main objective of this study is to assess the effectiveness of an advanced self-management patient education program for patients with FMS as compared to usual care in the context of inpatient rehabilitation.

                  Methods/Design
                  We conducted a multicenter cluster randomized controlled trial in 3 rehabilitation clinics. Clusters are groups of patients with FMS consecutively recruited within one week after admission. Patients of the intervention group receive the advanced multidisciplinary self-management patient education program (considering new knowledge on FMS, with a focus on transfer into everyday life), whereas patients in the control group receive standard patient education programs including information on FMS and coping with pain. A total of 566 patients are assessed at admission, at discharge and after 6 and 12 months, using patient reported questionnaires. Primary outcomes are patients’ disease- and treatment-specific knowledge at discharge and self-management skills after 6 months. Secondary outcomes include satisfaction, attitudes and coping competences, health-promoting behavior, psychological distress, health impairment and participation. Treatment effects between groups are evaluated using multilevel regression analysis adjusting for baseline values.

                  Discussion
                  The study evaluates the effectiveness of a self-management patient education program for patients with FMS in the context of inpatient rehabilitation in a cluster randomized trial. Study results will show whether self-management patient education is beneficial for this group of patients.
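
                  The analysis plan above ("multilevel regression analysis adjusting for baseline values") can be sketched in a few lines with statsmodels: a random intercept for each cluster, plus fixed effects for treatment group and baseline score. The data frame and column names below (knowledge_6m, baseline_knowledge, group, cluster) are simulated placeholders, not the trial's data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_clusters, per_cluster = 30, 10

# Simulated cluster-randomised data: whole clusters assigned to the advanced
# programme (group=1) or usual care (group=0).
cluster = np.repeat(np.arange(n_clusters), per_cluster)
group = np.repeat(rng.integers(0, 2, n_clusters), per_cluster)
baseline = rng.normal(50, 10, n_clusters * per_cluster)
cluster_effect = np.repeat(rng.normal(0, 3, n_clusters), per_cluster)
outcome = 5 + 0.6 * baseline + 4 * group + cluster_effect + rng.normal(0, 8, len(cluster))

df = pd.DataFrame({"knowledge_6m": outcome, "baseline_knowledge": baseline,
                   "group": group, "cluster": cluster})

# Random intercept per cluster; fixed effects for group and baseline value.
model = smf.mixedlm("knowledge_6m ~ group + baseline_knowledge", df, groups=df["cluster"])
print(model.fit().summary())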
                  Jo Bowyer
                  Chartered Physiotherapist Registered Osteopath.
                  "Out beyond ideas of wrongdoing and rightdoing,there is a field. I'll meet you there." Rumi

                  Comment


                  • #10
                    How often are outcomes switched in clinical trials? And why does it matter?

                    http://compare-trials.org/blog/are-y...omes-switched/
                    Jo Bowyer
                    Chartered Physiotherapist Registered Osteopath.
                    "Out beyond ideas of wrongdoing and rightdoing,there is a field. I'll meet you there." Rumi

                    Comment


                    • #11
                      Surgically modifiable factors measured by computer-navigation together with patient-specific factors predict knee society score after total knee arthroplasty

                      http://bmcmusculoskeletdisord.biomed...891-016-0929-7

                      Abstract

                      Background
                      The purpose was to investigate whether patient-specific factors (PSF) and surgically modifiable factors (SMF), measured by means of a computer-assisted navigation system, can predict the Knee Society Scores (KSS) after total knee arthroplasty (TKA).

                      Methods
                      Data from 99 patients collected during a randomized clinical trial were used for this secondary data analysis. The KSS scores of the patients were measured preoperatively and at 4-years follow-up. Multiple regression analyses were performed to investigate which combination of variables would be the best to predict the 4-years KSS scores.

                      Results
                      When considering SMF alone the combination of four of them significantly predicted the 4-years KSS-F score (p = 0.009), explaining 18 % of its variation. When considering only PSF the combination of age and body weight significantly predicted the 4-years KSS-F (p = 0.008), explaining 11 % of its variation. When considering both groups of predictors simultaneously the combination of three PSF and two SMF significantly predicted the 4-years KSS-F (p = 0.007), explaining 20 % of its variation.

                      Conclusions
                      Younger age, better preoperative KSS-F scores and lower BMI before surgery, a positive tibial component slope and small changes in femoral offset were predictors of better KSS-F scores at 4-years.
                      Keywords: Total knee replacement; Computer-assisted surgery; Prognosis; Outcome assessment
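
                      The "explaining 20 % of its variation" figures above are R² values from multiple regression. As a rough sketch of that kind of analysis, the snippet below regresses a simulated 4-year KSS-F score on a mix of patient-specific and surgically modifiable predictors and reports R². All variable names and values are invented for illustration, not the study's dataset.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 99  # same order of magnitude as the secondary analysis described above

# Invented predictors: patient-specific (age, BMI, preoperative KSS-F) and
# surgically modifiable (tibial component slope, change in femoral offset).
X = np.column_stack([
    rng.normal(68, 8, n),    # age (years)
    rng.normal(29, 4, n),    # BMI
    rng.normal(55, 12, n),   # preoperative KSS-F
    rng.normal(3, 2, n),     # tibial component slope (degrees)
    rng.normal(0, 3, n),     # change in femoral offset (mm)
])
kss_f_4y = 40 - 0.3 * X[:, 0] - 0.5 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(0, 10, n)

model = sm.OLS(kss_f_4y, sm.add_constant(X)).fit()
print(f"R^2 = {model.rsquared:.2f}  (share of outcome variation explained)")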
                      Jo Bowyer
                      Chartered Physiotherapist Registered Osteopath.
                      "Out beyond ideas of wrongdoing and rightdoing,there is a field. I'll meet you there." Rumi

                      Comment


                      • #12
                        Personalized medical education: Reappraising clinician-scientist training

                        http://stm.sciencemag.org/content/8/321/321fs2.full

                        Abstract
                        Revitalizing the Oslerian ideal of the clinician-scientist-teacher may help in the training of the next generation of translational researchers.

                        William Osler’s ideal of the clinician-scientist-teacher not only set standards for medical education on both sides of the Atlantic more than a century ago, it also holds solutions for the training of the next generation of translational researchers today. Having pioneered modern bedside teaching in Canada and the United States in the late 1800s, Osler was appointed Regius Professor of Medicine at the University of Oxford in 1905. Upon his arrival from Johns Hopkins to Oxford, he discovered that, in England, research and preclinical medical education in universities were dissociated from clinical practice and postgraduate training in hospitals. Because Osler was convinced that future advances in medical education and patient care would come from research, he challenged the English medical establishment to integrate research into medical education and patient care under the auspices of a university professor: “The Professor has three duties—to see that the patients are well treated, to investigate disease, and to teach students and nurses” (1). He argued that great scientific discoveries came from “the pursuit of knowledge for its own sake” and that already in the early 20th century, the hallmark of such discoveries was their translatability into practical applications (1). Thus, the enduring challenge in modern medicine is not only scientific innovation but also translation of scientific discoveries into new therapies for the benefit of humanity.
                        Jo Bowyer
                        Chartered Physiotherapist Registered Osteopath.
                        "Out beyond ideas of wrongdoing and rightdoing,there is a field. I'll meet you there." Rumi

                        Comment


                        • #13
                          The Completeness of Intervention Descriptions in Randomised Trials of Supervised Exercise Training in Peripheral Arterial Disease

                          http://journals.plos.org/plosone/art...l.pone.0150869

                          Abstract

                          Research supports the use of supervised exercise training as a primary therapy for improving the functional status of people with peripheral arterial disease (PAD). Several reviews have focused on reporting the outcomes of exercise interventions, but none have critically examined the quality of intervention reporting.

                          Adequate reporting of the exercise protocols used in randomised controlled trials (RCTs) is central to interpreting study findings and translating effective interventions into practice.

                          The purpose of this review was to evaluate the completeness of intervention descriptions in RCTs of supervised exercise training in people with PAD. A systematic search strategy was used to identify relevant trials published until June 2015. Intervention description completeness in the main trial publication was assessed using the Template for Intervention Description and Replication checklist. Missing intervention details were then sought from additional published material and by emailing authors. Fifty-eight trials were included, reporting on 76 interventions.

                          Within publications, none of the interventions were sufficiently described for all of the items required for replication; this increased to 24 (32%) after contacting authors. Although programme duration, and session frequency and duration were well-reported in publications, complete descriptions of the equipment used, intervention provider, and number of participants per session were missing for three quarters or more of interventions (missing for 75%, 93% and 80% of interventions, respectively). Furthermore, 20%, 24% and 26% of interventions were not sufficiently described for the mode of exercise, intensity of exercise, and tailoring/progression, respectively. Information on intervention adherence/fidelity was also frequently missing: attendance rates were adequately described for 29 (38%) interventions, whereas sufficient detail about the intensity of exercise performed was presented for only 8 (11%) interventions. Important intervention details are commonly missing for supervised exercise programmes in the PAD trial literature. This has implications for the interpretation of outcome data, the investigation of dose-response effects, and the replication of protocols in future studies and clinical practice.

                          Researchers should be mindful of intervention reporting guidelines when attempting to publish information about supervised exercise programmes, regardless of the population being studied.
                          my italics
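
                          The completeness figures quoted above boil down to a simple tally: score each intervention against each checklist item and report the proportion adequately described. The sketch below does that for a made-up completeness table; the items and scores are placeholders, not the review's data (the review itself used the full Template for Intervention Description and Replication checklist).

# Hypothetical completeness table: one dict per intervention,
# True = item adequately described in the main publication.
items = ["equipment", "provider", "group size", "mode", "intensity", "progression"]
interventions = [
    {"equipment": False, "provider": False, "group size": True,
     "mode": True, "intensity": True, "progression": False},
    {"equipment": True, "provider": False, "group size": False,
     "mode": True, "intensity": False, "progression": True},
    {"equipment": False, "provider": False, "group size": False,
     "mode": True, "intensity": True, "progression": True},
]

for item in items:
    reported = sum(1 for iv in interventions if iv[item])
    pct = 100 * reported / len(interventions)
    print(f"{item:<12} adequately described for {reported}/{len(interventions)} ({pct:.0f}%)")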
                          Jo Bowyer
                          Chartered Physiotherapist Registered Osteopath.
                          "Out beyond ideas of wrongdoing and rightdoing,there is a field. I'll meet you there." Rumi

                          Comment


                          • #14
                            Research Updates

                            I went to a meeting last night, which is not unusual; I go to at least one a week. This was similar to most: there were three presentations, a chance to get something to eat and add to my pen stash, and a goody bag to go home with.

                            This was slightly unusual in that I knew the presenters reasonably well but didn't know anyone else at my table. The questions from the floor were excellent, and most of the audience were in their seventies.

                            We were all there as participants in trials, interested to hear how what is happening with us may change the delivery of care for our fellow patients who are not in trials.
                            Last edited by Jo Bowyer; 08-03-2016, 10:19 AM.
                            Jo Bowyer
                            Chartered Physiotherapist Registered Osteopath.
                            "Out beyond ideas of wrongdoing and rightdoing,there is a field. I'll meet you there." Rumi

                            Comment


                            • #15
                              Focal cartilage defects in the knee – a randomized controlled trial comparing autologous chondrocyte implantation with arthroscopic debridement

                              http://bmcmusculoskeletdisord.biomed...891-016-0969-z

                              Abstract

                              Background
                              Focal cartilage injuries in the knee might have a devastating effect owing to the predisposition to early-onset osteoarthritis. Various surgical treatment options are available; however, no statistically significant differences have been found between the different surgical treatments. This supports the suggestion that the improvement might be a result of the post-operative rehabilitation rather than the surgery itself. Autologous chondrocyte implantation (ACI) has become a recognized treatment option for larger cartilage lesions in the knee. Although ACI has been compared to other surgical treatments such as microfracture and mosaicplasty, it has never been directly compared to simple arthroscopic debridement and rehabilitation alone. In this study we want to increase clinical and economic knowledge about autologous chondrocyte implantation compared to arthroscopic debridement and physical rehabilitation in the short and long run.

                              Methods/Design
                              We will conduct a randomized controlled trial to compare ACI with simple arthroscopic debridement (AD) and physiotherapy for the treatment of cartilage lesions in the knee. The study will include a total of 82 patients, both men and non-pregnant women, with a full thickness cartilage defect in the weight bearing area of the femoral condyles or trochlea larger than 2 cm2. The lesion must be symptomatic, with a Lysholm score less than 75.

                              The two treatment groups will receive an identical rehabilitation protocol according to a modification of Wondrasch et al., which is an active rehabilitation and education program divided into 3 phases: accommodation, rehabilitation and return to activity. The patients will be followed for 24 months, with additional late follow-ups at 5 and 10 years to monitor the potential onset of osteoarthritis.

                              The primary outcome measure will be the difference in the KOOS knee-related quality of life (QoL) subscore in the ACI group compared to the AD group at 2 years. A combination of self-explanatory questionnaires, clinical parameters, clinical hop tests and radiographs and Magnetic Resonance Imaging (MRI) will be used as secondary endpoints.

                              Discussion
                              This is the first study with a high level of evidence to compare ACI with simple debridement and physiotherapy for the treatment of isolated symptomatic full thickness lesions of the knee.
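
                              Protocols like this typically justify the planned sample (82 patients here) with a power calculation on the primary outcome. The snippet below shows what such a calculation looks like using statsmodels; the standardised effect size, power and dropout allowance are assumptions chosen for illustration, not figures taken from the protocol.

import math
from statsmodels.stats.power import TTestIndPower

# Assumed inputs, for illustration only: detect a standardised difference of 0.7
# in the KOOS QoL subscore at 2 years, with 80% power and two-sided alpha 0.05.
n_per_group = TTestIndPower().solve_power(effect_size=0.7, alpha=0.05,
                                          power=0.80, alternative="two-sided")
n_total = 2 * math.ceil(n_per_group)
print(f"patients per group: {math.ceil(n_per_group)}, total: {n_total}")

# Allowing for ~20% dropout inflates the required total accordingly.
print(f"total allowing 20% dropout: {math.ceil(n_total / 0.8)}")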
                              Jo Bowyer
                              Chartered Physiotherapist Registered Osteopath.
                              "Out beyond ideas of wrongdoing and rightdoing,there is a field. I'll meet you there." Rumi

                              Comment
