• Carrolade@lemmy.world

      This almost makes me think they’re trying to fully automate their publishing process. So, no editor in that case.

      Editors are expensive.

      • YAMAPIKARIYA@lemmyfi.com

        If they really want to do it, they can just run a local language model trained to proofread stuff like this. It would be way better.
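
        A rough sketch of what that could look like, using the Hugging Face transformers chat pipeline; the model name is a placeholder for whatever small local instruct checkpoint you have on hand:

        ```python
        # Sketch: a local instruction-tuned model acting as a proofreader,
        # so the draft never leaves your machine. Model name is a placeholder.
        from transformers import pipeline

        proofreader = pipeline("text-generation", model="Qwen/Qwen2.5-1.5B-Instruct")

        draft = "In summary, the managment of bilateral iatrogenic injurys is complex."

        messages = [
            {"role": "system",
             "content": "You are a copy editor. Fix grammar and spelling only. "
                        "Do not add or remove content. Return the corrected text."},
            {"role": "user", "content": draft},
        ]

        out = proofreader(messages, max_new_tokens=256)
        print(out[0]["generated_text"][-1]["content"])  # the corrected draft
        ```

        Constraining it to “fix only, add nothing” is the point; a model allowed to rewrite freely is exactly how sentences like the one in the meme get in.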

          • YAMAPIKARIYA@lemmyfi.com

            I don’t think so. They are using AI from a 3rd party. If they train their own specialized version, things will be better.

            • FiniteBanjo@lemmy.today

              Here is a better idea: have some academic integrity and actually do the work instead of using incompetent machine learning to flood the industry with inaccurate trash papers whose only real impact is getting in the way of real research.

              • YAMAPIKARIYA@lemmyfi.com

                There is nothing wrong with using AI to proofread a paper. It’s just a grammar checker but better.

                • BearGun@ttrpg.network

                  Proofreading involves more than just checking grammar, and AIs aren’t perfect. I would never put my name on something published publicly like this without reading it through at least once myself.

                • FiniteBanjo@lemmy.today

                  You can literally use tools to check grammar perfectly without using AI. What an LLM does is predict what word comes next in a sequence, and if the AI is wrong, as it often is, then you’ve just attempted to publish a paper full of hallucinations, wasting the time and effort of so many people because you’re greedy and lazy.
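
                  For what it’s worth, that prediction step is easy to see directly. A minimal sketch with GPT-2 as a stand-in (any causal LM behaves the same way):

                  ```python
                  # Minimal sketch of next-token prediction with a small causal LM.
                  # The model scores every candidate next token; no step anywhere
                  # checks the continuation against reality.
                  import torch
                  from transformers import AutoModelForCausalLM, AutoTokenizer

                  tok = AutoTokenizer.from_pretrained("gpt2")
                  model = AutoModelForCausalLM.from_pretrained("gpt2")

                  ids = tok("The patient presented with", return_tensors="pt").input_ids
                  with torch.no_grad():
                      next_token_logits = model(ids).logits[0, -1]

                  top5 = torch.topk(next_token_logits, 5).indices
                  print([tok.decode(t) for t in top5])  # five most likely continuations
                  ```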

            • alehc@slrpnk.net

              That’s not necessarily true. General-purpose third-party models (ChatGPT, llama3-70b, etc.) perform surprisingly well on very specific tasks. While training or fine-tuning your own specialized model should indeed give better results, the enormous computational resources and specialized manpower needed to accomplish it make it infeasible and impractical in many applications. If you can get away with an occasional “as an AI model…”, you are better off using existing models.

    • TheFarm@lemmy.world

      This is what baffles me about these papers. Assuming the authors are actually real people, these AI-generated mistakes in publications should be pretty easy to catch and edit.

      It does make you wonder how many people are successfully putting AI-generated garbage out there if they’re careful enough to remove obviously AI-generated sentences.
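
      The careless cases really are trivially catchable. A naive screen might look like the sketch below; the phrase list is illustrative, and flag_obvious_ai_text is a made-up helper:

      ```python
      # Naive screen for obvious chatbot boilerplate in a manuscript.
      # It only catches careless cases; strip these strings and nothing flags.
      TELLTALE_PHRASES = [
          "as an ai language model",
          "i'm very sorry, but i don't have access to",
          "certainly, here is a",
          "as of my last knowledge update",
      ]

      def flag_obvious_ai_text(manuscript: str) -> list[str]:
          """Return the telltale phrases found in the manuscript, if any."""
          lower = manuscript.lower()
          return [p for p in TELLTALE_PHRASES if p in lower]

      print(flag_obvious_ai_text(
          "In summary, I'm very sorry, but I don't have access to real-time data."
      ))
      ```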

      • BluJay320@lemmy.blahaj.zone

        I definitely utilize AI to assist me in writing papers/essays, but never to just write the whole thing.

        Mainly use it for structuring or rewording sections to flow better or sound more professional, and always go back to proofread and ensure that any information stays correct.

        Basically, I provide any data/research and get a rough layout down, and then use AI to speed up the refining process.

        EDIT: I should note that I am not writing scientific papers using this method, and doing so is probably a bad idea.

          • BluJay320@lemmy.blahaj.zone

            Yeah, same. I’m good at getting my info together and putting my main points down, but structuring everything in a way that flows well just isn’t my strong suit, and I struggle to sit there for long periods of time writing something I could just explain in a few short points, especially if there’s an expectation for a certain length.

            AI tools help me to get all that done whilst still keeping any core information my own.

      • MBM@lemmings.world

        I’ve heard the word “delve” has suddenly become a lot more popular in some fields.
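
        That claim is easy to test on any corpus of abstracts. A sketch, where abstracts_by_year is a toy stand-in for real data (e.g. pulled from PubMed or arXiv dumps):

        ```python
        # Sketch: occurrences of delve/delves/delved/delving per 10,000 words,
        # by year. abstracts_by_year is a toy stand-in for a real corpus.
        import re

        DELVE = re.compile(r"\bdelv(?:e|es|ed|ing)\b")

        def delve_rate(abstracts: list[str]) -> float:
            words = sum(len(a.split()) for a in abstracts)
            hits = sum(len(DELVE.findall(a.lower())) for a in abstracts)
            return 10_000 * hits / max(words, 1)

        abstracts_by_year = {
            2021: ["We measure the effect of X on Y in a cohort of 40 patients."],
            2023: ["This study delves into the intricate interplay of X and Y."],
        }
        for year, abstracts in sorted(abstracts_by_year.items()):
            print(year, round(delve_rate(abstracts), 1))
        ```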

  • shadowtofu@discuss.tchncs.de

    This article has been removed at the request of the Editors-in-Chief and the authors because informed patient consent was not obtained by the authors in accordance with journal policy prior to publication. The authors sincerely apologize for this oversight.

    In addition, the authors have used a generative AI source in the writing process of the paper without disclosure, which, although not being the reason for the article removal, is a breach of journal policy. The journal regrets that this issue was not detected during the manuscript screening and evaluation process and apologies are offered to readers of the journal.

    The journal regrets – Sure, the journal. Nobody assuming responsibility …

    • Taako_Tuesday@lemmy.ca

      What, nobody read it before it was published? Whenever I’ve tried to publish anything, it gets picked over with a fine-toothed comb. But somehow they missed an entire paragraph of the AI equivalent of that joke from Parks and Rec: “I googled your symptoms and it looks like you have ‘network connectivity issues’”

      • magic_lobster_party@kbin.run

        Nobody would read it even after it was published. No scientist has time to read others’ papers; they’re too busy writing their own. This mistake probably got the paper more readers than 99% of all other scientific papers.

      • FiniteBanjo@lemmy.today

        I think part of the issue is sheer volume. You submit a few papers a year; an AI can in theory submit a few per minute. Even if you filter out 98% of them, mistakes will happen (rough numbers sketched below).

        That said, this particular error in the meme is egregious.
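
        The back-of-the-envelope numbers, taking “a few per minute” at face value and assuming the filter really does stop 98%:

        ```python
        # Back-of-the-envelope: machine-speed submissions vs. a 98% filter.
        per_minute = 3                    # "a few per minute"
        catch_rate = 0.98                 # filter stops 98% of the junk

        submitted_per_day = per_minute * 60 * 24
        slipped_per_day = submitted_per_day * (1 - catch_rate)
        print(submitted_per_day, slipped_per_day)  # 4320 submitted, ~86 get through
        ```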

    • Patrizsche@lemmy.ca

      Daaaaamn they didn’t even get consent from the patient😱😱😱 that’s even worse

      • Frenchy@aussie.zone

        I mean holy shit you’re right, the lack of patient consent is a much bigger issue than getting lazy writing the discussion.

      • exscape@kbin.social

        The entire abstract is AI. Even without the explicit mention in one sentence, the rest of the text should’ve been rejected as nonspecific nonsense.

        • canihasaccount@lemmy.world

          That’s not actually the abstract; it’s a piece from the discussion that someone pasted nicely with the first page in order to name and shame the authors. I looked at it in depth when I saw this circulate a little while ago.

          • exscape@kbin.social

            Ah, that makes more sense. I looked up the original abstract, and indeed it looks more like what you’d expect (hard to comprehend for someone who’s not in the field).

            Though to clarify (for others reading this) they still did use generative AI to (help?) write the paper, which is only part of why it was withdrawn.

    • hydroptic@sopuli.xyz

      It’s Elsevier, so this probably isn’t even the lowest quality article they’ve published

      • Optional@lemmy.world

        Yep. And AI will totally help.

        Ooh I mean not help. It’ll make it much worse. Particularly with the social sciences. Which were already pretty fuX0r3d anyway due to the whole “your emotions equal this number” thing.

    • Cornelius_Wangenheim@lemmy.world

      Many journals are absolute garbage that will accept anything. Keep that in mind the next time someone links a study to prove a point. You have to actually read the thing and judge the methodology to know if their conclusions have any merit.

      • clearedtoland@lemmy.world

        Full disclosure: I don’t intend to be condescending.

        Research Methods during my graduate studies forever changed the way I interpret just about any claim, fact, or statement. I’m obnoxiously skeptical and probably cynical, to be honest. It annoys the hell out of my wife, but it beats buying into sensationalist headlines and miracle research. Then you get into the real world and see how data gets massaged and thrown around haphazardly… Believe very little of what you see.

        • Adalast@lemmy.world

          I have this problem too. My wife gets so annoyed because I question things I notice as biases or statistical irregularities instead of just accepting that they knew what they were doing. I have tried to explain it to her: skepticism is not dismissal, and it is not saying I am smarter than them; it is recognizing that they are human, and that I may be more proficient in the one spot where they made a mistake than they were.

          I will acknowledge that laypeople need to stop trying to argue with scientists because “they did their own research”, but the actually informed and educated need to do a better job of calling each other out.

    • dustyData@lemmy.world

      We are in top dystopia mode right now. Students have AI write articles that are proofread and edited by AI, submitted to automated systems that are AI-vetted for publishing, then posted to platforms where no one ever reads them, but AI is used to scrape them to find answers or to train all the other AIs.

      • VeganPizza69 Ⓥ@lemmy.world

        How generative AI is clouding the future of Google search

        The search giant doesn’t just face new competition from ChatGPT and other upstarts. It also has to keep AI-powered SEO from damaging its results.

        More or less the same phenomenon of signal pollution:

        “Google is shifting its responsibility for maintaining the quality of results to moderators on Reddit, which is dangerous,” says Ray of Amsive. Search for “kidney stone pain” and you’ll see Quora and Reddit ranking in the top three positions alongside sites like the Mayo Clinic and the National Kidney Foundation. Quora and Reddit use community moderators to manually remove link spam. But with Reddit’s traffic growing exponentially, is a human line of defense sustainable against a generative AI bot army?

        We’ll end up using the year 2022 as a threshold for reference criteria. Maybe not entirely blocked, but like a ratio… say, you must have 90% pre-2022 and 10% post-2022 (sketched below).

        Perhaps this will spur some culture shift to publish all the data, all the notes, everything - which will be great to train more AI on. Or we’ll get to some type of anti-AI or anti-crawler medium.
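
        A sketch of the ratio check floated above; the 2022 cutoff and the 90/10 split are this comment’s numbers, not any established standard:

        ```python
        # Sketch of a "reference threshold": most cited sources must predate
        # the generative-AI flood. Cutoff and ratio are illustrative only.
        CUTOFF_YEAR = 2022
        MIN_PRE_CUTOFF = 0.9

        def passes_reference_check(citation_years: list[int]) -> bool:
            if not citation_years:
                return False
            pre = sum(1 for y in citation_years if y < CUTOFF_YEAR)
            return pre / len(citation_years) >= MIN_PRE_CUTOFF

        years = [2016, 2018, 2019, 2019, 2020, 2020, 2021, 2021, 2021, 2023]
        print(passes_reference_check(years))  # True: 9 of 10 predate the cutoff
        ```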

  • magnetosphere@fedia.io

    To me, this is a major ethical issue. If any actual humans submitted this “paper”, they should be severely disciplined by their ethics board.

  • repungnant_canary@lemmy.world

    Maybe, if reviewers were paid for their work, they could actually focus on reading the paper and these things wouldn’t slide. But then Elsevier shareholders could only buy one yacht a year instead of two, and that would be a nightmare…

    • adenoid@lemmy.world

      Elsevier pays its reviewers very well! In fact, in exchange for my last review, I received a free month of ScienceDirect and Scopus…

      … Which my institution already pays for. Honestly it’s almost more insulting than getting nothing.

      I try to provide thorough reviews for about twice as many articles as I publish in an effort to sort of repay the scientific community for taking the time to review my own articles, but in academia reviewing is rewarded far less than publishing. Paid reviews sound good but I’d be concerned that some would abuse this system for easy cash and review quality would decrease (not that it helped in this case). If full open access publishing is not available across the board (it should be), I would love it if I could earn open access credits for my publications in exchange for providing reviews.

      • Ragdoll X@lemmy.world

        I’ve always wondered if some sort of decentralized, community-led system would be better than the current peer review process.

        That is, someone can submit their paper and it’s publicly available for all to read, then people with expertise in fields relevant to that paper could review and rate its quality.

        Now that I think about it it’s conceptually similar to Twitter’s community notes, where anyone with enough reputation can write a note and if others rate it as helpful it’s shown to everyone. Though unlike Twitter there would obviously need to be some kind of vetting process so that it’s not just random people submitting and rating papers.
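
        As a toy sketch of that mechanic (fields and numbers purely illustrative): reviews are public, and a paper’s standing is a reputation-weighted average of expert ratings.

        ```python
        # Toy sketch of community-notes-style review: public ratings weighted
        # by the reviewer's earned reputation. Not a worked-out protocol.
        from dataclasses import dataclass

        @dataclass
        class Review:
            reviewer: str
            reputation: float  # earned when past reviews were rated helpful
            rating: float      # 0.0 (unsound) .. 1.0 (sound)

        def paper_score(reviews: list[Review]) -> float:
            total = sum(r.reputation for r in reviews)
            return sum(r.reputation * r.rating for r in reviews) / total if total else 0.0

        print(paper_score([
            Review("alice", reputation=9.0, rating=0.8),
            Review("bob", reputation=2.0, rating=0.3),
        ]))  # ~0.71: the stronger track record dominates
        ```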

          • fossilesqueOPM

            I feel like I’ve seen this model before; I know I’ve heard it. There are better ways to do it than your suggestion, but it’s there in spirit. Science is a conversation, and it would be a really cool idea to make room for things like this. In the meantime, check out PubPeer; it has extensions for browsers. Super useful, and you have to attach your ORCID to be verified. Everyone can read it, though.

      • bananabenana@lemmy.world

        Open access credits are a fantastic idea. Unfortunately, it goes against the business model of these parasites. Ultimately, these businesses provide little to no actual value except siphoning taxpayer money. I really prefer eLife’s current model, but it would be great if it were cheaper. arXiv and bioRxiv provide a better service than most journals, IMO.

        Also, I agree with reviewing seriously, and twice as often as you publish. Many people leave academia, so reviewing more can cover for them.

    • Match!!@pawb.social

      Fuck that; they should pay special bounty hunters to expose LLM garbage. I’d take that job instantly.

  • Lissa@beehaw.org

    It is astounding to me that this happened. A complete failure of peer review, of the editors, and OF COURSE of the authors. Just absolutely bonkers that this made it to publication. Completely clown shoes.

    • BakerBagel@midwest.social

      It keeps happening across all fields. I think we are about to witness a complete overhaul of the publishing model.

      • maegul (he/they)@lemmy.ml

        I’ve been saying it to everyone who’ll listen …

        the journals should be run by universities as non-profits with close ties to the local research community (i.e., editors from local faculty and as much of the staff as possible drawn from the student/PhD/postdoc body). It’s really an obvious idea. In legal research, there’s a long tradition of having students run journals (Barack Obama, if you recall, was editor of the Harvard Law Review … that was as a student). I personally did it too … it’s a great experience for a student to see how the sausage is made.

          • maegul (he/they)@lemmy.ml

            You don’t need one at each university; that wouldn’t scale. There’d be natural specialisations. And journals could even move from university to university as academic personnel change over time.

            The main point is that they’re non-profit and run by researchers for researchers.

  • Nobody@lemmy.world

    In Elsevier’s defense, reading is hard and they have so much money to count.

  • Diabolo96@lemmy.dbzer0.com

    They mistakenly sent the “final final paper.docx” file instead of “final final final paper v3.docx”. It could’ve happened to any of us.

  • soloner@lemmy.world

    Guys, it’s simple: they just need to automate an AI to read these papers for them and catch whether AI language was used. They can automate the entire peer review process. /s

  • blackstampede@sh.itjust.works

    I started a business with a friend to automatically identify things like this, fraud like what happened with Alzheimer’s research, and mistakes like missing citations. If anyone is interested, has contacts or expertise in relevant domains or just wants to talk about it, hit me up.

      • aqwxcvbnji [none/use name]@hexbear.net

        Fun fact! In the Netherlands, Elsevier publishes a weekly magazine about politics, which is basically the written version of Fox News for that country. Very nice that those people control like 50% of all academic publishing.

        • fossilesqueOPM

          Military Industrial Publishing Complex. It isn’t tinfoil.