• leisesprecher@feddit.org · 1 month ago

    The real problem is implicit bias: the kind of discrimination that a reasonable user of a system can’t even see. How are you supposed to know that applicants from “bad” neighborhoods are rejected at a higher rate if the system is presented to you as objective? And since AI models don’t really explain how they arrive at a decision, you can’t even audit them.
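
    To make the point concrete: this is roughly what such an audit would look like *if* you had both the system’s decisions and the sensitive attribute, which in practice you usually don’t. A minimal sketch in Python; the data, the “neighborhood” field, and the 80% threshold (the “four-fifths rule” used in US disparate-impact analysis) are all illustrative assumptions, not anything from a real system.

    ```python
    # Hypothetical audit log: (neighborhood, accepted?).
    # In reality this pairing is exactly the data you rarely get to see.
    from collections import defaultdict

    applications = [
        ("riverside", True), ("riverside", True), ("riverside", False),
        ("northgate", False), ("northgate", False), ("northgate", True),
    ]

    totals, accepts = defaultdict(int), defaultdict(int)
    for hood, accepted in applications:
        totals[hood] += 1
        accepts[hood] += accepted

    rates = {hood: accepts[hood] / totals[hood] for hood in totals}
    best = max(rates.values())
    for hood, rate in sorted(rates.items()):
        # Four-fifths rule: flag groups accepted at < 80% of the top group's rate.
        flag = "  <- possible disparate impact" if rate < 0.8 * best else ""
        print(f"{hood}: accept rate {rate:.0%}{flag}")
    ```

    The catch, as above: without the protected attribute and the decision log, this check is impossible, and the model itself won’t tell you.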

    • ℍ𝕂-𝟞𝟝@sopuli.xyz · 1 month ago

      I have a feeling that’s the point with a lot of these use cases, like RealPage’s rent-setting software.

      It’s not a criminal act when an AI does it! (Except it is, and it should be treated as one.)