• ganymede@lemmy.ml · 2 years ago

    Wonder what its false positive rate is, and how that will be handled for sensitive cases like university degree work etc.
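
    The false positive rate matters more than it first looks, because of the base rate: if most students are honest, even a low false positive rate means a large share of flagged essays are false accusations. A quick back-of-the-envelope sketch (all numbers here are hypothetical, not measured rates for any real detector):

```python
# Hypothetical numbers for illustration only.
students = 1000
ai_rate = 0.05   # assumed fraction of essays actually AI-written
fpr = 0.01       # assumed false positive rate of the detector
tpr = 0.90       # assumed true positive rate (sensitivity)

ai_essays = students * ai_rate        # 50 AI-written essays
human_essays = students - ai_essays   # 950 honest essays

true_flags = ai_essays * tpr          # 45 correctly flagged
false_flags = human_essays * fpr      # 9.5 honest essays flagged

# Of all flagged essays, what fraction are honest students?
wrongly_accused = false_flags / (true_flags + false_flags)
print(f"{wrongly_accused:.0%} of flagged essays are false accusations")
```

    So even with these fairly generous assumed numbers, roughly one in six accusations would hit an honest student, which is a lot when a degree is on the line.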

    • d-RLY?@lemmy.ml · 2 years ago

      I get the feeling that we will see a bunch of snake oil that claims to do this stuff, especially in education sectors like the ones you are talking about, given how much money higher education throws around for anything that at least gives the image of protection and other security theatre. It will be like all the PC “tune-up” programs that claim to be helping, but just run things the OS already has, or actively slow things down.

      That all being said, as long as the tools for AI detection are made open and auditable, they could give actual professors a useful starting point to double-check. But I also worry that many professors (and other folks) will only go with the AI answer and not bother to look any deeper, since the hype-people for AI tend to do the same things as hype-people in other industries: they constantly play up everything and make it out to be much more capable than it actually is.

      I also worry that some false positives will come from students learning to write in ways similar to the AI. People often (IMO at least) emulate the stuff they interact with: they see examples of writing that is well done and try to copy the style (because they want a high grade). Even outside of school, folks who are really focused on “vibes” try to copy whatever they see getting results. That worries me given how much style over substance is rewarded in school and in the business world. AI could be a “fake it till you make it” person’s absolute best friend.

      This stuff is going to be frustrating and difficult to figure out no matter what side you are on for sure.

  • pancake@lemmy.ml · 2 years ago

    As AI evolves, its behavior is progressively entering the realm of normal inter-individual variability among humans. Solutions like this will eventually fail catastrophically, provided they are not already failing.

  • ChatGPT@fediverse.ro · 2 years ago

    It is true that as AI technology evolves, it becomes increasingly difficult to distinguish between human-generated content and AI-generated content.

    The example of the word switcher from https://www.articlerewriter.net/ being used to bypass GPT-Zero highlights this issue.
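
    A toy sketch of the kind of "word switcher" trick being described (this is not the actual articlerewriter.net tool; the synonym table and approach here are made up for illustration):

```python
# Hypothetical synonym table; a real rewriter would be far larger.
SYNONYMS = {
    "important": "crucial",
    "difficult": "challenging",
    "use": "utilize",
    "show": "demonstrate",
}

def reword(text: str) -> str:
    # Swap each known word for a synonym, keeping everything else as-is.
    words = text.split()
    return " ".join(SYNONYMS.get(w.lower(), w) for w in words)

print(reword("It is important to show why this is difficult"))
# → It is crucial to demonstrate why this is challenging
```

    Even naive swaps like this shift the surface statistics (word choice, token frequencies) that simple detectors key on, without changing the meaning much, which is why detector scores can be so easy to perturb.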

    The use of such tools in sensitive areas such as university degree work raises concerns about the potential for fraud and the ability to accurately detect it.

    It is important to have open and auditable tools for AI so that they can be properly evaluated and monitored. However, there is also a risk that some individuals may rely too heavily on AI-generated content and not take the time to thoroughly check and verify it.

    Additionally, there is a risk that some individuals may attempt to emulate AI-generated content in an effort to achieve a desired outcome, which could lead to a further blurring of the line between human-generated and AI-generated content.

    Overall, this is a complex issue that will require careful consideration and ongoing monitoring as AI technology continues to evolve.