Never talk morals with libs, because these are the things they spend their time questioning: whether or not it’s okay to use AI to detect CP. Real thinker there

https://lemm.ee/post/12171882

  • LanyrdSkynrd [comrade/them, any]@hexbear.net · 1 year ago

    Google already has an ML model that detects this stuff, and they use it to scan everyone’s private Google photos.

    https://www.eff.org/deeplinks/2022/08/googles-scans-private-photos-led-false-accusations-child-abuse

    They must have classified and used a bunch of child porn to train the model, and I have no problem with that; it’s not generating new CP or abusing anyone. I’m more uncomfortable with them running all our photos through an AI model, sending the results to the US government, and not telling the public.
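
    For context, a minimal sketch of what “running photos through an AI model” could look like on the server side, assuming a generic pretrained classifier. Google’s actual model and labels are proprietary, so a stock torchvision network and a hypothetical “harmful content” score stand in purely for illustration:

    ```python
    # Hedged sketch: scan a stored photo library with a pretrained image
    # classifier and flag high-confidence hits for human review. The model,
    # threshold, and single "score" are stand-ins; the real classifier,
    # its training data, and its labels are not public.
    from pathlib import Path

    import torch
    from PIL import Image
    from torchvision import models, transforms

    # Stock ImageNet classifier as a placeholder for the proprietary model.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    FLAG_THRESHOLD = 0.9  # hypothetical confidence cutoff

    def scan_library(photo_dir: str) -> list[tuple[str, float]]:
        """Score every JPEG in photo_dir; return (path, score) for flagged files."""
        flagged = []
        for path in sorted(Path(photo_dir).glob("*.jpg")):
            batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            with torch.no_grad():
                probs = torch.softmax(model(batch), dim=1)
            score = probs.max().item()  # placeholder "harmful content" score
            if score > FLAG_THRESHOLD:
                flagged.append((str(path), score))
        return flagged
    ```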

    • WayeeCool [comrade/them]@hexbear.net · 1 year ago (edited)

      They just run it on photos stored on their servers; Microsoft, Apple, Amazon, and Dropbox do the same. There are also employees in their security departments with the fkd up job of having to verify anything flagged and then alert law enforcement.

      Everyone always forgets that “cloud storage” means files are stored on someone else’s machine. I don’t think anyone, even soulless companies like Google or Microsoft, wants to be hosting CSAM. So it’s understandable that they scan the contents of Google Photos or Microsoft OneDrive; even if they didn’t have a legal obligation, there would be a moral one.
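
      For known images, hosts like Microsoft are documented to use PhotoDNA-style perceptual hash matching alongside classifiers: hash every upload, compare against a database of hashes of known CSAM, and queue near-matches for the human review described above. Here is a rough sketch of that matching step using the open-source imagehash library; the hash value and distance threshold below are invented, since the real hash databases (e.g. NCMEC’s) are restricted:

      ```python
      # Hedged sketch of PhotoDNA-style matching: flag an upload whose
      # perceptual hash is within a small Hamming distance of a known-bad
      # hash. The "known" hash here is fabricated for illustration.
      import imagehash
      from PIL import Image

      # Hypothetical stand-in for the restricted known-CSAM hash database.
      KNOWN_BAD_HASHES = [imagehash.hex_to_hash("f0e1d2c3b4a59687")]
      MAX_DISTANCE = 5  # Hamming-distance tolerance for near-duplicates

      def needs_human_review(upload_path: str) -> bool:
          """True if the upload's perceptual hash is close to a known-bad hash."""
          h = imagehash.phash(Image.open(upload_path))
          return any(h - bad <= MAX_DISTANCE for bad in KNOWN_BAD_HASHES)
      ```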