Trials


  • Jo Bowyer
    started a topic Ref Trials

    Trials

    http://www.trialsjournal.com/?utm_ca...ource=Teradata

    Aims & scope
    Trials is an open access, peer-reviewed, online journal that encompasses all aspects of the performance and findings of randomized controlled trials in health. We publish articles on general trial methodology as well as protocols, commentaries and traditional results papers - regardless of outcome or significance of findings.

    Trials aims to experiment with, and refine, innovative approaches to improving communication about trials. We are keen to move beyond publishing traditional trial results articles (although these are included). We believe this journal represents an exciting opportunity to advance the science and reporting of trials.

    Making all its content open access and not retaining copyright, Trials offers a way to make data both freely available and highly visible to trialists worldwide; this will benefit the impact of your publication among peers and societies. The journal has unrestricted space and takes advantage of all the technical possibilities available for electronic publishing.

    To date, journals have focused on reporting the results of trials, with very little coverage of why and how they are conducted. Reports of trials have been restricted by both authors and editors: the former often select only a subset of the outcomes measured, while the latter often impose word limits on the articles published, making it difficult to communicate the lessons learnt from conducting the trial, let alone include adequate details of how the trial was conducted.

    The Internet offers both unlimited space and interactivity, and we are keen to harness these attributes. For instance, trialists are able to provide the detail required for a true scientific record and do more to make the article's message comprehensible to a variety of reader groups. They are able to communicate not only all outcome measures, with varying analyses and interpretations, but also in-depth descriptions of what they did and what they learnt. This sharing of direct experience is fundamental to improving the quality and conduct of trials worldwide.

    Prior to 2006, Trials was published as Current Controlled Trials in Cardiovascular Medicine (CCTCVM). All published CCTCVM articles are available via the Trials website and citations to CCTCVM article URLs will continue to be supported.
    via @SimonGandevia

    New clues to why a French drug trial went horribly wrong

    http://www.sciencemag.org/news/2017/...et_cid=1375976

    Scientists are one step closer to understanding how a clinical trial in France killed one volunteer and led to the hospitalization of five others in January 2016. A new study shows that the compound tested in the study, BIA 10-2474, has effects on many other enzymes in addition to the one it was supposed to inhibit. These “off-target” effects might explain why the drug caused side effects ranging from headaches to irreversible brain damage.

    “We suspected that BIA 10-2474 was a bad compound—now we know for sure,” says neuropharmacologist Daniele Piomelli from the University of California, Irvine, who was not involved in the new study.
    Update 10/06/2017

  • Jo Bowyer
    replied
    How often do authors with retractions for misconduct continue to publish?

    http://retractionwatch.com/2019/05/0...ue-to-publish/



  • Jo Bowyer
    replied
    Statisticians want to abandon science’s standard measure of ‘significance’

    https://www.sciencenews.org/article/...test_Headlines

    The P value itself is only a statistical test, and no one is trying to get rid of it. Instead, the signers of the Nature manifesto are against the idea of statistical significance, where P is less than or equal to 0.05. That limit gives a false sense of certainty about results, McShane says. “Statistics is often wrongly perceived to be a way to get rid of uncertainty,” he says. But it’s really “about quantifying the degree of uncertainty.”

    Embracing that uncertainty would change how science is communicated to the public. People expect clear yes-or-no answers from science, or want to know that an experiment “found” something, though that’s never truly the case, Haaf says. There is always uncertainty in scientific results. But right now scientists and nonscientists alike have bought into the false certainty of statistical significance.

    Those teaching or communicating science — and those learning and listening — would need to understand and embrace uncertainty right along with the scientific community. “I’m not sure how we do that,” says Haaf. “What people want from science is answers, and sometimes the way we report data should show [that] we don’t have a clear answer; it’s messier than you think.”
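
    A small simulation makes the manifesto's point concrete. This is a minimal sketch in Python (assuming numpy and scipy are available; the effect size, sample size, and run count are illustrative choices, not figures from the article): rerunning the identical experiment on the same true effect scatters P values on both sides of 0.05, so the threshold draws a hard line through what is really a continuum of uncertainty.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Illustrative (assumed) setup: a real but modest effect (d = 0.3),
        # n = 50 per group, the identical experiment rerun 1,000 times.
        n, effect, runs = 50, 0.3, 1000
        p_values = np.empty(runs)
        for i in range(runs):
            control = rng.normal(0.0, 1.0, n)
            treated = rng.normal(effect, 1.0, n)
            p_values[i] = stats.ttest_ind(treated, control).pvalue

        # The same true effect is "significant" in some runs and not in
        # others; a hard 0.05 cutoff hides that spread.
        print(f"runs with p <= 0.05: {np.mean(p_values <= 0.05):.0%}")
        print(f"runs with p >  0.05: {np.mean(p_values >  0.05):.0%}")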



  • Jo Bowyer
    commented on a reply
    Maybe combining red wine and tea doesn’t kill tumors after all
    https://retractionwatch.com/2019/04/...ors-after-all/

  • Jo Bowyer
    replied
    How to Review an Article

    https://thesciencept.com/how-to-review-an-article/

    One of the most common questions I get is along the lines of, “How should I go about reading and critically analyzing a research article?” Many clinicians and students are starting a journal club and want to make sure that they don’t fall victim to the trappings of “bad science”. After replying to enough of these, I have decided to just put it into a blog post. If you just want to know how to stay current with the literature, that is a separate post that I wrote several years ago.

    Why do we need to critically analyze research papers? Why can’t we just take articles at their face value? Well, a study by the Center for Open Science that attempted to replicate 100 previously conducted studies showed that we had a problem. 97% of the original studies showed an effect. When the same studies were run a second time EXACTLY as before, only 36% showed an effect – and those remaining effects were much smaller than originally described. Why is this? We’ll get to that. For now just know that the evidence you thought you had supporting what you do may not be “real”.
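
    One standard explanation for that shrinkage is selection on significance, sketched below in Python (assuming numpy and scipy; the effect size, sample size, and publication filter are assumed for illustration, and this is not the Center for Open Science's data or necessarily the blog's own argument): if only "significant" results get published, the published effect sizes are inflated, and exact replications regress back toward the smaller true effect.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # Assumed numbers: a small true effect (d = 0.2), n = 30 per group,
        # and a filter that only "publishes" results with p <= 0.05.
        n, true_effect = 30, 0.2
        published, replicated = [], []
        for _ in range(5000):
            a = rng.normal(0.0, 1.0, n)
            b = rng.normal(true_effect, 1.0, n)
            if stats.ttest_ind(b, a).pvalue <= 0.05:  # publication filter
                published.append(b.mean() - a.mean())
                # An exact replication of the "published" study, no filter.
                a2 = rng.normal(0.0, 1.0, n)
                b2 = rng.normal(true_effect, 1.0, n)
                replicated.append(b2.mean() - a2.mean())

        print(f"true effect:             {true_effect:.2f}")
        print(f"mean published effect:   {np.mean(published):.2f}")
        print(f"mean replication effect: {np.mean(replicated):.2f}")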



  • Jo Bowyer
    replied
    Evaluation of an E-Learning Training Program to Support Implementation of a Group-Based, Theory-Driven, Self-Management Intervention For Osteoarthritis and Low-Back Pain: Pre-Post Study

    https://www.jmir.org/2019/3/e11123/



  • Jo Bowyer
    replied
    Lumbar mechanical traction: a biomechanical assessment of change at the lumbar spine

    https://bmcmusculoskeletdisord.biome...891-019-2545-9



  • Jo Bowyer
    replied
    A study of reproducibility of kinesiology tape applications: review, reliability and validity

    https://bmcmusculoskeletdisord.biome...891-019-2533-0



  • Jo Bowyer
    replied
    Impact of compression stockings on leg swelling after arthroscopy – a prospective randomised pilot study

    https://bmcmusculoskeletdisord.biome...891-019-2540-1



  • Jo Bowyer
    replied
    Living Science: Love writing

    https://elifesciences.org/articles/4...19-elife-alert



  • Jo Bowyer
    replied
    Point of View: Data science for the scientific life cycle

    https://elifesciences.org/articles/4...19-elife-alert

    A key tenet of the scientific method is that we learn from previous work. In principle we observe something about the world and generate a hypothesis. We then design an experiment to test that hypothesis, set up the experiment, collect the data and analyse the results. And when we report our results and interpretation of them in a paper, we make it possible for other researchers to build on our work.

    In practice, there are impediments at every step of the process. In particular, our work depends on published research that often does not contain all the information required to reproduce what was reported. There are too many possible experimental parameters to test under our time and budget constraints, so we make decisions that affect how we interpret the outcomes of our experiments. As researchers, we should not be complacent about these obstacles: rather, we should always look towards new technologies, such as data science, to help us improve the quality and efficiency of scientific research.



  • Jo Bowyer
    replied
    Unilateral laminectomy for bilateral decompression improves low back pain while standing equally on both sides in patients with lumbar canal stenosis: analysis using a detailed visual analogue scale

    https://bmcmusculoskeletdisord.biome...891-019-2475-6



  • Jo Bowyer
    replied
    Do the humanities need a replication drive? A debate rages on

    https://retractionwatch.com/2019/02/...bate-rages-on/



  • Jo Bowyer
    replied
    Research pushes back on benefits of compounded topical pain creams

    https://www.sciencedaily.com/release...0205102542.htm



  • Jo Bowyer
    replied
    Should journals credit eagle-eyed readers by name in retraction notices?

    https://retractionwatch.com/2019/02/...ction-notices/

