Summary

Experts warn of rising online racism fueled by X’s generative AI chatbot, Grok, which recently introduced a photorealistic image feature called Aurora.

Racist, fake AI images targeting athletes and public figures have surged, with some depicting highly offensive and historically charged content.

Organizations like Signify and CCDH highlight Grok’s ability to bypass safeguards, exacerbating hate speech.

Critics blame X’s monetization model for incentivizing harmful content.

Sports bodies are working to mitigate abuse, while calls grow for stricter AI regulation and accountability from X.

  • cygnus@lemmy.ca · 3 days ago

    I got swamped with downvotes the last time I said this, but I maintain that in the near future we’re going to need to digitally sign authentic images. It’s simply infeasible to police the entire internet in an effort to remove fake ones (especially since they’re made by bad actors who don’t care about any rules anyway), so we need a widespread, easy-to-use way to identify real images instead.

    • theunknownmuncher@lemmy.world · 3 days ago

      Honest question. What will stop someone from getting AI generated images digitally signed as well? Who will be the authority doing the signing?

      • cygnus@lemmy.ca · 3 days ago

        I mean a signature that can be matched against a known one, like GPG.

        • theunknownmuncher@lemmy.world · 3 days ago

          I don’t think that answered my question, but maybe I just don’t understand what you mean.

          I could see a world where media outlets and publishers sign their published content in order to make it verifiable what the source of the content is, for a hypothetical example, AP news could sign photographs taken by a journalist, and if it is a reputable source that people trust to not be creating misinformation, then they can trust the signed content.

          I don’t really see a way that digital signatures can be applied to content created and posted by untrusted users in order to verify that they aren’t AI generated or misinformation.

          • cygnus@lemmy.ca · 3 days ago

            I could see a world where media outlets and publishers sign their published content in order to make it verifiable what the source of the content is, for a hypothetical example, AP news could sign photographs taken by a journalist, and if it is a reputable source that people trust to not be creating misinformation, then they can trust the signed content.

            Exactly – it’s a means of attribution. If you see a pic that claims to be from a certain media outlet but it doesn’t match their public key, you’re being played.

            I don’t really see a way that digital signatures can be applied to content created and posted by untrusted users in order to verify that they aren’t AI generated or misinformation, at least not in a way that won’t be easily abused to defeat the purpose.

            That’s the point. If you don’t trust the source, why would you trust their content?

    • catloaf@lemm.ee · 2 days ago

      How does this address the fact that people don’t care whether something is real or fake? You can sign it all you want, but if nobody cares about the signature, you haven’t accomplished anything.

      • cygnus@lemmy.ca · 2 days ago

        I think we can win back a lot of people by making it easier to prove it’s fake. Right now we’re only asking them to take one source’s word over the other. We don’t need to convince everyone — only to get things back to a normal percentage of village idiots.

        • catloaf@lemm.ee · 2 days ago

          They don’t care that it’s fake. The loudest people and the fake accounts will continually post and repost without any consideration for truth.

          • cygnus@lemmy.ca · 2 days ago

            The loudest people and the fake accounts

            Well yeah, they’re the ones deliberately spreading it. I don’t care about them, I care about the uninformed people in the middle who don’t know what’s real anymore.

    • Zetta · 3 days ago

      Already happening and it’s gross, and will erode internet freedom and anonymity. This is advanced internet surveillance and tracking and they are using “stopping the spread of AI misinfo” as the excuse.

      Soon all images produced in any manner will practically identify their creator. Hiding tracking data in images is fucked up. This is also something Adobe is working on implementing in their tools, so this isn’t going to be just AI images.

      https://openai.com/index/understanding-the-source-of-what-we-see-and-hear-online/

      • cygnus@lemmy.ca · 3 days ago

        Already happening and it’s gross, and will erode internet freedom and anonymity.

        How? You can remain anonymous and still have a public key. I only need to know that I trust “Zetta”; I don’t need your real name and home address.