Large language models (LLMs) are more likely to criminalise users that use African American English, the results of a new Cornell University study show.

  • RobotToaster
    link
    fedilink
    English
    arrow-up
    40
    arrow-down
    1
    ·
    9 months ago

    > Train AI on humans

    > It acts like humans

  • Godric@lemmy.world
    link
    fedilink
    English
    arrow-up
    24
    ·
    9 months ago

    The regular way of teaching LLMs new patterns of retrieving information, by giving human feedback, doesn’t help counter covert racial bias, the study showed.

    Instead, it found that it could teach language models to “superficially conceal the racism they maintain on a deeper level”.

    Wow AI is speedrunning American conservatism, it took decades to figure out they gotta put a smoke machine in front of the racism

  • Amerikan Pharaoh@lemmygrad.ml
    link
    fedilink
    English
    arrow-up
    6
    arrow-down
    1
    ·
    edit-2
    9 months ago

    Computers and their apps only do exactly what they were told to do. Some settler programmer’s bias infected LLM-0; and now we have the same kind of problem that has rendered facial recognition technology useless, but is still certain to end up weaponized and abused by the settler regime. Just like facial recognition. Thanks for that, techbros. Jim Crow II is directly on your hands and shoulders.