Summary
Experts warn of rising online racism fueled by X’s generative AI chatbot, Grok, which recently introduced a photorealistic image feature called Aurora.
Racist, fake AI images targeting athletes and public figures have surged, with some depicting highly offensive and historically charged content.
Organizations such as Signify and the CCDH highlight how easily Grok’s safeguards can be bypassed, exacerbating hate speech.
Critics blame X’s monetization model for incentivizing harmful content.
Sports bodies are working to mitigate abuse, while calls grow for stricter AI regulation and accountability from X.
I don’t think that answered my question, but maybe I just don’t understand what you mean.
I could see a world where media outlets and publishers sign their published content so that its source is verifiable. As a hypothetical example, AP News could sign photographs taken by its journalists, and if the outlet is a reputable source that people trust not to create misinformation, then readers can trust the signed content.
What I don’t see is a way to apply digital signatures to content created and posted by untrusted users in order to verify that it isn’t AI-generated or misinformation.
Exactly – it’s a means of attribution. If you see a pic that claims to be from a certain media outlet but it doesn’t match their public key, you’re being played.
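To make that concrete, here’s a minimal sketch of the sign-and-verify flow in Python, assuming the `cryptography` package and Ed25519 keys. The outlet, the photo bytes, and the `verify_attribution` helper are all hypothetical, just to illustrate the idea:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# --- Publisher side (e.g., a hypothetical AP News signing service) ---
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # published so anyone can verify

photo_bytes = b"raw image bytes straight from the journalist's camera"
signature = private_key.sign(photo_bytes)

# --- Reader side: check the claimed source against its public key ---
def verify_attribution(pub: Ed25519PublicKey, content: bytes, sig: bytes) -> bool:
    """True only if `sig` was made over `content` by the holder of the
    private key matching `pub`; any tampering makes verification fail."""
    try:
        pub.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(verify_attribution(public_key, photo_bytes, signature))            # True
print(verify_attribution(public_key, b"swapped-in AI image", signature))  # False
```

In practice the signature and key identity would travel with the image as embedded provenance metadata (C2PA-style manifests are one existing approach) rather than as separate files, but the trust model is the same: the signature proves who published the content, not whether it’s true.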
That’s the point. If you don’t trust the source, why would you trust their content?
Ah okay, we are just describing the same thing 👍 I agree, this will be our future