Is Most Published Research Wrong?

5,757,368 views
Published 2016-08-11
Mounting evidence suggests a lot of published research is false.
Check out Audible: bit.ly/AudibleVe
Support Veritasium on Patreon: bit.ly/VePatreon

Patreon supporters:
Bryan Baker, Donal Botkin, Tony Fadell, Jason Buster, Saeed Alghamdi

More information on this topic: wke.lt/w/s/z0wmO

The Preregistration Challenge: cos.io/prereg/

Resources used in the making of this video:

Why Most Published Research Findings Are False:
journals.plos.org/plosmedicine/article?id=10.1371/…

Trouble at the Lab:
www.economist.com/news/briefing/21588057-scientist…

Science isn't broken:
fivethirtyeight.com/features/science-isnt-broken/#…

Visual effects by Gustavo Rosa

All Comments (21)
  • @raznaot8399
    As the famous statistical saying goes, "If you torture data long enough, it will confess to anything"
  • @MrMakae90
    For people freaking out in the comments: we don't need to change the scientific method, we need to change the publication strategies that incentivize this behavior.
  • @ModernGolfer
    As a very wise man once stated, "It's not the figures lyin'. It's the liars figurin'". Very true.
  • This happens because of the "publish or perish" mentality. I hate writing scientific papers because it is too much of a hassle; I love the clinic work and reading those papers, not writing them. In this day and age it is almost an obligation that EVERYBODY HAS TO PUBLISH. If you force everyone to write manuscripts, a flood of trash is inevitable. Only people who are genuinely motivated should do this kind of work; it should not be forced upon everyone.
  • @Campusanis
    The most shocking thing to me in this video was the fact that some journals outright refuse to publish replication studies.
  • @qwerty9170x
    I really think undergrads should be replicating constantly. They don't need to publish or perish, step-by-step replication is great for learning, and any disproving of a result by an undergrad can be rewarded (honors, graduate school admissions, etc.) far more easily than publication incentives can be changed.
  • @etanben-ami8305
    When I was in grad school for applied psychology, my supervising professor wrote the discussion section of a paper before all the data had been gathered. He told me to do whatever I needed to do in order to get those results. The paper was delivered at the Midwestern Psychology Conference. I left grad school, stressed to the max by overwork and conscience.
  • The problem is that people are supposed to be able to replicate the results by doing the experiment over again. If I can't find multiple replications of a study, it's hard for me not to be skeptical.
  • @GiRR007
    "There is no cost to getting things wrong, the cost is not getting them published" It's a shame this also applies to news media as well.
  • @josephmoya5098
    As a former grad student, I think the real issue is the pressure universities put on their professors to publish. When my dad got his PhD, he said being published 5 times in his graduate career was considered top notch. He was practically guaranteed to get a tenure-track position. Now I have my Masters and will be published twice. No one would consider giving you a postdoc position without being published 5-10 times, and you are unlikely to get a tenure-track position without being published 30 or so times. And speaking as a grad student who worked on a couple of major projects, it is impossible to be published thirty times in your life and have meaningful data. The modern scientific process takes years. It takes months of proposal writing, followed by months of modeling, followed by months or years of experimentation, followed by months of poring over massive data sets. To be published thirty times before you get your first tenure-track position means your name is on somewhere between 25 and 28 meaningless papers. You'll be lucky to have one significant one.
  • @callumc9426
    As someone who studies theoretical statistics and data science, this really resonates with me. I see students in other science disciplines such as psychology or biology taking a single, compulsory (and quite basic) statistics paper, who are then expected to undertake the statistical analysis for all their research without really knowing what they're doing. Statistics is so important, but it can also be extremely deceiving: to the untrained eye a good p-value = correct hypothesis, when in reality all results need scrutiny (see the multiple-comparisons sketch after the comments for how easily a "good" p-value turns up by chance). Statistics education in higher education and research is clearly lacking; making it a more fundamental part of the scientific method would make research much more reliable and accurate.
  • @karldavis7392
    This has influenced my thinking more than any other video I have ever seen; it's literally #1. I always wondered how the news could have one "surprising study" result after another, often contradicting one another, and why experts and professionals didn't change their practices in response to recent studies. Now I understand.
  • @Vathorst2
    Research shows lots of research is actually wrong spoopy
  • @psychalogy
    It’s almost impossible to publish negative results. This majorly screws with the top tier of evidence, the meta-analysis. Meta-analyses can only include information contained in studies that have actually been published. This bias to preferentially publish only the new and positive skews scientific understanding enormously. I’ve been an author on several replication studies that came up negative. Reviewers sometimes went to quite silly lengths to avoid recommending publication. Just last week a paper was rejected because it (1) didn’t add anything new to the field, and (2) disagreed with previous research in the area. These two things cannot simultaneously be true.
  • @-30h-work-week
    Sabine Hossenfelder: "Most science websites just repeat press releases. The press releases are written by people who get paid to make their institution look good, and who for the most part don't understand the content of the paper. They're usually informed by the authors of the paper, but the authors have an interest in making their institution happy. The result is that almost all science headlines vastly exaggerate the novelty and relevance of the research they report on."
  • @kunk8789
    “p<0.05” is the scientific equivalent of “SHOCKING!!” in the media
  • @Deupey445
    Gotta love when a published research article states that most published research findings are false
  • @2ndEarth
    My favorite BAD EXPERIMENT is when mainstream news began claiming that OATMEAL gives you CANCER. The study was so poorly constructed that they didn't account for the confounding variable: old people both eat oatmeal more often and tend to have a higher incidence of cancer (nodding and slapping my head as I type this; see the confounding sketch after the comments).
  • @LincolnDWard
    Science isn't the initial idea, it's the dozens of people who come along and test the idea afterwards
  • @StructEdOrg
    This is huge in my field, Structural Engineering, as people get way too lax about sample size. Because testing things like full-sized bridge girders is incredibly expensive, sample sizes of 1-3 have become all too common, and no one does replication studies... Then that mentality bleeds over to things like anchor bolts that can be had for $5 apiece at any big box hardware store. It's getting dangerous out there!
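
Several of the comments above (the statistics student's and the "p<0.05" quip) hinge on the same point: significance alone proves little when many comparisons are made. The Python sketch below is a minimal illustration, not anything taken from the video or the linked papers; the group sizes, number of tests, and the normal-approximation p-value are assumptions chosen for brevity. Every comparison here is pure noise, yet a few still clear p < 0.05.

import math
import random
import statistics

random.seed(42)

def two_sample_p_value(a, b):
    # Rough two-sided p-value from a two-sample z statistic
    # (Welch-style standard error, normal approximation; fine for n = 50 per group).
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (mean_a - mean_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail of a standard normal

n_tests = 100
false_positives = 0
for _ in range(n_tests):
    # "Control" and "treatment" are drawn from the same distribution,
    # so every null hypothesis is true and every "discovery" is spurious.
    control = [random.gauss(0, 1) for _ in range(50)]
    treatment = [random.gauss(0, 1) for _ in range(50)]
    if two_sample_p_value(control, treatment) < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} noise-only comparisons hit p < 0.05")

Running it turns up roughly 5 "significant" results out of 100, which is exactly the false-positive rate the 0.05 threshold allows; the trouble starts when only those few get written up and published.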
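
The oatmeal comment describes confounding, which is just as easy to reproduce in simulation. The sketch below is again an illustrative assumption, not the actual study the commenter has in mind: age drives both oatmeal eating and cancer risk, oatmeal itself does nothing, and yet the crude comparison makes oatmeal eaters look roughly twice as likely to have cancer. Stratifying by age makes the effect vanish.

import random

random.seed(0)

# Simulated population: age drives both oatmeal eating and cancer risk;
# oatmeal itself has no effect at all (probabilities are made up for illustration).
people = []
for _ in range(100_000):
    old = random.random() < 0.5
    eats_oatmeal = random.random() < (0.6 if old else 0.2)  # old people eat more oatmeal
    cancer = random.random() < (0.10 if old else 0.01)      # risk depends only on age
    people.append((old, eats_oatmeal, cancer))

def cancer_rate(group):
    return sum(c for _, _, c in group) / len(group)

eaters = [p for p in people if p[1]]
non_eaters = [p for p in people if not p[1]]
print(f"crude rate, oatmeal eaters: {cancer_rate(eaters):.3f}")
print(f"crude rate, non-eaters:     {cancer_rate(non_eaters):.3f}")

# Stratify by the confounder: within each age group, oatmeal makes no difference.
for old in (False, True):
    stratum_eaters = [p for p in people if p[0] == old and p[1]]
    stratum_non = [p for p in people if p[0] == old and not p[1]]
    label = "old" if old else "young"
    print(f"{label}: eaters {cancer_rate(stratum_eaters):.3f} vs non-eaters {cancer_rate(stratum_non):.3f}")

The crude gap comes entirely from the fact that the simulated oatmeal eaters skew old; any study that fails to adjust for that kind of variable can "find" an effect that is not there.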