
Investigating 'Spin' In Scientific Journals

Discussion in 'General Discussion' started by Mahmoud Abudeif, Aug 11, 2019.

  1. Mahmoud Abudeif


    A recent study investigated "spin" in psychiatry and psychology research papers. The study authors found spin in more than half of the abstracts they analyzed. What impact might this have on doctors' decisions?

    As news and media outlets compete for views, they can sometimes exaggerate headlines and content to lure the reader in.

    Although many believe scientific journals to be some of the most reliable sources of information, they are not immune to the desire to be read and shared.

    A recent study set out to assess how much "spin" authors used in the abstracts of research papers published in psychology and psychiatry journals.

    The researchers chose to look at abstracts because these summarize the entire paper, and doctors often use them to help inform medical decisions.

    What is spin?

    In this study, the authors outline their definition of spin as follows:

    "[T]he use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome, or to distract the reader from statistically nonsignificant results."

    The authors published their findings in the journal BMJ Evidence-Based Medicine. They looked at papers published in the top six psychiatry and psychology journals between 2012 and 2017.

    The journals included JAMA Psychiatry, the American Journal of Psychiatry, and the British Journal of Psychiatry.

    Specifically, the researchers focused on randomized controlled trials with "nonsignificant primary endpoints." The primary endpoint of a study is its main result, and "nonsignificant" in this context means that the difference observed for that endpoint was not large enough to rule out chance, so the trial did not provide statistical evidence that the treatment worked.
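
    As a minimal sketch, using simulated data rather than anything from the study, the following shows what a statistically nonsignificant primary endpoint looks like in practice: a two-sample t-test comparing a hypothetical treatment arm with a control arm whose true difference is tiny, so the p-value typically stays above the conventional 0.05 threshold.

    ```python
    # Hypothetical illustration only -- simulated data, not from the study.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=42)
    control = rng.normal(loc=50.0, scale=10.0, size=80)    # control arm scores
    treatment = rng.normal(loc=51.0, scale=10.0, size=80)  # tiny true difference

    # Primary endpoint: difference in mean score between the two arms.
    t_stat, p_value = stats.ttest_ind(treatment, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

    # When p_value exceeds the conventional 0.05 threshold, the trial cannot
    # claim statistically significant benefit for this endpoint -- the situation
    # described as a "nonsignificant primary endpoint."
    ```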

    Spin comes in many forms, including:
    • selectively reporting outcomes, wherein the authors only mention certain results
    • P-hacking, wherein researchers run a series of statistical tests but only publish the figures from tests that produce significant results (a minimal simulation of this appears just after the list)
    • inappropriate or misleading use of statistical measures
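
    To make the P-hacking strategy above concrete, here is a short, purely hypothetical simulation (not code or data from the study): many outcomes are tested even though no real effect exists, and on average roughly 1 in 20 crosses p < 0.05 by chance, so reporting only those "hits" paints a misleadingly positive picture.

    ```python
    # Hypothetical sketch of p-hacking -- not code or data from the study.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=0)
    n_outcomes = 20
    p_values = []
    for _ in range(n_outcomes):
        # Both arms come from the same distribution: there is no true effect.
        control = rng.normal(size=100)
        treatment = rng.normal(size=100)
        _, p = stats.ttest_ind(treatment, control)
        p_values.append(p)

    chance_hits = [p for p in p_values if p < 0.05]
    print(f"{len(chance_hits)} of {n_outcomes} outcomes 'significant' by chance alone")

    # Publishing only the outcomes behind chance_hits, and omitting the rest,
    # is the selective reporting the study classifies as spin.
    ```
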
    How common is spin?

    In total, the researchers analyzed the abstracts of 116 papers. Of these, 56% showed evidence of spin: 2% in titles, 21% in the results sections of the abstracts, and 49% in the conclusion sections. In 15% of the papers, spin appeared in both the results and conclusion sections of the abstract.

    The researchers also investigated if industry funding was associated with spin. Perhaps surprisingly, they found no evidence that having financial backing from industry increased the likelihood of spin.

    The findings are concerning. Although spin in news media in general is worrying in itself, doctors use research papers to help steer clinical decisions. As the authors write:

    "Researchers have an ethical obligation to honestly and clearly report the results of their research." However, in the abstract section, authors can pick and choose the details that they include. The authors of the current study have concerns about what this might mean for doctors:

    "Adding spin to the abstract of an article may mislead physicians who are attempting to draw conclusions about a treatment for patients. Most physicians read only the article abstract the majority of the time."

    The implications

    Although researchers have not investigated the effects of spin in great depth, the authors cite one study that hammers the point home.

    In it, scientists collected abstracts from the field of cancer research. All were randomized controlled trials with a statistically nonsignificant primary outcome. All abstracts included spin.

    The researchers created second versions of these abstracts with the spin removed. They then recruited 300 oncologists as participants, gave half of them an original abstract containing spin, and gave the other half the rewritten abstract without it.

    Worryingly, the doctors who read the abstracts with spin rated the intervention covered in the paper as more beneficial.

    As the authors of the recent study paper write: "Those who write clinical trial manuscripts know that they have a limited amount of time and space in which to capture the attention of the reader. Positive results are more likely to be published, and many manuscript authors have turned to questionable reporting practices in order to beautify their results."

    Another study, published in 2016, extends the scope of this issue. Its authors investigated how peer reviewers, the experienced scientists who scrutinize papers before publication, influence spin. They found that in 15% of cases, the peer reviewer asked the authors to add spin.

    The current study does have some limitations. For example, these findings might not apply to other journals or fields of research. The authors also note that identifying spin is a subjective endeavor, and although they employed two independent data extractors, there is still room for error.

    The exact size of the spin issue in medical research remains to be seen, but the authors conclude that "[a]uthors, journal editors, and peer reviewers should continue to be vigilant for spin to reduce the risk of biased reporting of trial results."
