• AnonStoleMyPants@sopuli.xyz · 7 months ago

    I don’t think this is a problem at all. Are they saying fake data is hard to do? I don’t get it. Why would it be hard to fake data? Get real results and shift the values by some number. That’s it. I mean, obviously if you shift too much then you will have problems, but enough to be credible? Easy.

    Sure, it is slightly harder to make the data from scratch, but let’s be real here: a TON of the data is just a CSV file a machine pops out. Why on earth would it be hard to fake?

    Now, human trials are a bit different from some measurement data, but I fail to see why this would be hard, assuming you are an expert in the field.

    A much more prominent problem in science is cherry-picking data. It is very common for someone to make 50 new devices, measure them all, and conveniently leave out half of the measurements. Happens alllll the time.
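To make the cherry-picking point concrete, here is a toy simulation (all numbers are illustrative, not from any real study): every measurement is honest, yet simply dropping the worst half of the results inflates the reported mean.

```python
# Toy illustration of cherry-picking: nothing is fabricated,
# but only the best half of honest measurements gets reported.
import random
import statistics

random.seed(1)
# 50 "devices", each measurement = true value 1.0 plus Gaussian noise
measurements = [1.0 + random.gauss(0, 0.2) for _ in range(50)]

honest_mean = statistics.mean(measurements)
kept = sorted(measurements, reverse=True)[:25]  # quietly drop the worst half
reported_mean = statistics.mean(kept)

print(f"all 50 devices: {honest_mean:.3f}")
print(f"best 25 only:   {reported_mean:.3f}")
```

The reported mean is biased upward even though every individual data point is real, which is why this kind of selective reporting is so hard to catch after the fact.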

    • appel@whiskers.bim.boats · 7 months ago

      There are some statistical tests and methods you can use to spot fake data quite easily, from what I remember (the name escapes me, sorry), e.g. to check whether it came from an RNG, or whether the results are too positive given the sample size. But you are right in that it is often enough to fool the review board and get something published. Often the data is only scrutinized thoroughly with these methods after it has been published.
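One commonly cited family of such checks (assumed here as an example; the commenter does not name a specific test) is Benford's law: in many naturally occurring datasets the leading digit d appears with probability log10(1 + 1/d), while numbers drawn uniformly from an RNG deviate sharply. A minimal chi-square screen against Benford's distribution might look like this:

```python
# Hedged sketch: screening leading-digit frequencies against Benford's law.
# This is one illustrative forensic check, not the specific method
# referenced in the comment above.
import math
import random

def benford_chi2(values):
    """Chi-square statistic of leading-digit counts vs. Benford's law."""
    counts = [0] * 9
    for v in values:
        first = str(abs(v)).lstrip("0.")[0]  # first significant digit
        counts[int(first) - 1] += 1
    n = sum(counts)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)  # Benford probability of digit d
        chi2 += (counts[d - 1] - expected) ** 2 / expected
    return chi2

random.seed(0)
# Geometric growth spans many orders of magnitude -> close to Benford.
plausible = [1.05 ** random.randint(1, 500) for _ in range(1000)]
# Naive fakes drawn uniformly have uniform leading digits -> far from Benford.
fake = [random.uniform(100, 999) for _ in range(1000)]

CRIT = 15.507  # chi-square critical value, df = 8, alpha = 0.05
print(f"plausible data: chi2 = {benford_chi2(plausible):.1f}")
print(f"uniform fakes:  chi2 = {benford_chi2(fake):.1f}")
```

The uniform "fakes" blow far past the critical value, which is the kind of red flag these screens look for; real forensic work combines several such tests, since Benford's law only applies to data spanning multiple orders of magnitude.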

      • AbouBenAdhem@lemmy.world · 7 months ago

        There was an infamous case a few years ago that came to light because the researcher forgot to delete the fake-data-generating formula from the Excel file.