A new law went into effect in New York City on Wednesday that requires any business using A.I. in its hiring to submit that software to an audit to prove that it does not result in racist or sexist…

  • donuts@kbin.social · 1 year ago

    AI shouldn’t be involved in hiring at all. How hard is it to look at a resume?

    All the biggest companies have massive HR departments and tons of recruiters already. AI shouldn’t be making huge life-changing decisions for human beings.

    • elscallr@kbin.social · 1 year ago

      Any open position immediately gets a massive influx of resumes. Usually these are bullshit that get auto-submitted by bots and, for the most part, can be discarded. But something has to do the work of separating the wheat from the chaff, and this is basically the perfect use case for a trained model.
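
      Purely as illustration (nothing from the article), here is a minimal sketch of that kind of first-pass screen: a tiny trained text classifier that flags obvious auto-submitted noise. The training examples, labels, and skills are all made up, and a real system would need far more data plus the bias audit this law requires.

      ```python
      # Toy first-pass resume screen: a small trained classifier.
      # All training data below is invented for illustration.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      resumes = [
          "Senior DevOps engineer: Kubernetes, Terraform, on-call ownership",
          "Built CI/CD pipelines and infrastructure as code for 8 years",
          "GUARANTEED interview!! click here, best candidate for any job",
          "I am applying to every job posted today, please hire me",
      ]
      labels = [1, 1, 0, 0]  # 1 = plausibly relevant, 0 = auto-submitted noise

      model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
      model.fit(resumes, labels)

      # On this toy data, these should come back as relevant / noise respectively.
      print(model.predict(["Kubernetes admin, Terraform modules, CI/CD pipelines"]))
      print(model.predict(["guaranteed interview, click here"]))
      ```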

      • Machinist3359@kbin.social · 1 year ago

        Hell no, speaking as someone who has actually done a lot of hiring. It is very easy to find the top 20 candidates or so based on CVs. The hard part is actually sorting those folks out, which AI cannot do.

        AI introduces unknowable bias and unbounded potential for discrimination without consequences. This rubber stamp from NYC is a disaster for civil rights.

        AI should not be touching these sorts of decisions; all algorithms need to be fully auditable and replicable.

        • elscallr@kbin.social · 1 year ago

          I’m not talking about the people who might be good for the job. I’m talking about the 600 dime-a-dozen script kiddies who apply for high-level DevOps or senior engineering roles. A model can reject those just fine, and it takes the load off the humans who would otherwise have to screen them.

          • Machinist3359@kbin.social · 1 year ago

            That’s simply not how hiring works at most institutions.

            For high-traffic, lower-level positions, hiring managers resent being handed these AI tools. You wind up with the candidates who are best at manipulating the AI, not the most qualified. Their previous method, basic sorting and taking the first acceptable worker (rather than hunting for the absolute best), is a much more efficient use of their time.

            For higher-level positions, networking plays a much more significant role. Since it’s a much more significant decision, companies are also less likely to entrust it to an AI.

            Screening out unserious applicants is easier than you think, and it can be done without a black box of potential lawsuits; a rough sketch follows below.
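
            The sketch below (with made-up field names, purely illustrative) shows the “first acceptable worker” approach described above: plain, inspectable checks instead of a black-box model.

            ```python
            # Sketch of "take the first acceptable candidate" screening with
            # fully visible rules. Field names are hypothetical.
            from dataclasses import dataclass
            from typing import Optional

            @dataclass
            class Applicant:
                name: str
                years_experience: int
                answered_screening_questions: bool

            def is_acceptable(a: Applicant) -> bool:
                # Every rule is explicit, so the screen is auditable and replicable.
                return a.years_experience >= 2 and a.answered_screening_questions

            def first_acceptable(applicants: list) -> Optional[Applicant]:
                # Satisficing: stop at the first applicant who clears the bar
                # instead of ranking the whole pool.
                return next((a for a in applicants if is_acceptable(a)), None)

            pool = [
                Applicant("auto-submitted noise", 0, False),
                Applicant("candidate A", 4, True),
                Applicant("candidate B", 7, True),
            ]
            print(first_acceptable(pool))  # candidate A is selected first
            ```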

  • discosage@kbin.social · 1 year ago

    Isn’t this impossible without ensuring a completely “not racist” sample (which itself would be impossible)? The article, for example, pointed to a health app that did not explicitly identify race but still acted in a racist manner because of implicit bias in whatever the AI was trained on.

    I mean, I guess the point is effectively to ban AI in hiring, which isn’t a bad idea.

    • AI is more than just machine learning. A hand-crafted AI can be shown to be free of illegal biases, for example. The border between traditional AI and nested if statements is pretty vague in that sense (a rough sketch is at the end of this comment).

      For automatically trained AI, things are harder to prove. For neural networks like ChatGPT, the technology to train AI is decades ahead of the technology to reason about the state of the trained AI.

      This law doesn’t ban all AI, but it does ban most of the implementations out there, and for good reason.
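
      As a concrete, made-up illustration of the hand-crafted case mentioned above: every feature and weight is written down, so an auditor can see exactly which inputs are allowed to influence the score.

      ```python
      # Hypothetical hand-crafted screening score: the whole "model" is a
      # short, reviewable rule set. Features and weights are invented.
      ALLOWED_FEATURES = {"years_experience", "has_required_cert", "skills_matched"}
      WEIGHTS = {"years_experience": 1.0, "has_required_cert": 3.0, "skills_matched": 2.0}

      def score(candidate: dict) -> float:
          # Reject any input the rule set was never reviewed for, so a protected
          # attribute (or an unreviewed proxy) can't quietly enter the decision.
          unexpected = set(candidate) - ALLOWED_FEATURES
          if unexpected:
              raise ValueError(f"unreviewed features: {unexpected}")
          return sum(WEIGHTS[k] * float(candidate[k]) for k in candidate)

      print(score({"years_experience": 5, "has_required_cert": True, "skills_matched": 3}))
      ```

      (The reply below points out that even a rule set like this can still lean on proxy variables.)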

      • conciselyverbose@kbin.social · 1 year ago

        “A hand-crafted AI can be shown to be free of illegal biases, for example.”

        Not really. It has exactly the same issues with potential proxy identifiers that black boxes have.
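
        A toy illustration of what such a proxy can look like (all data invented): a rule that never reads a protected attribute can still track one if an allowed field correlates with it.

        ```python
        # Invented data: a seemingly neutral, hand-written rule that acts as a proxy.
        applicants = [
            # (employment_gap_years, group) -- "group" is never shown to the rule
            (0, "A"), (0, "A"), (1, "A"), (0, "A"),
            (3, "B"), (2, "B"), (0, "B"), (4, "B"),
        ]

        def rule(gap_years: int) -> bool:
            # Looks neutral and fully auditable on its face.
            return gap_years <= 1

        for group in ("A", "B"):
            gaps = [gap for gap, g in applicants if g == group]
            pass_rate = sum(rule(gap) for gap in gaps) / len(gaps)
            print(group, pass_rate)  # A: 1.0, B: 0.25 -- the rule tracks group anyway
        ```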

      • admiralteal@kbin.social · 1 year ago (edited)

        This gets at the real thing more people need to understand: none of these “AIs” are intelligent at all, and calling them artificial “intelligence” is misleading in a way that leads people to make very dumb assumptions about how they work.

        “Traditional” AI is just writing models to quantify things and then weighting decision-making based on those metrics. It’s just playing Guess Who? with outcomes. “Oh, we can get rid of all the candidates we don’t want by asking if they have a mustache and filtering out the ones who don’t.” Decisions like that, over and over. As you said, nested if statements.

        Neural networks are doing the same thing, but without the authors actually making decisions about those intermediate questions. You feed in the outcomes along with lists of covariates and let an algorithm stumble and guess until it arrives at the relative weighting itself. This means you don’t KNOW whether it is including discrimination in its process.

        Which is why laws like this are necessary. If you don’t know how the “AI” model is arriving at its outcomes, you don’t know whether it is discriminating. You have to be able to audit it. In my “traditional” AI example you can see it’s likely discriminating against women, but with all the hidden layers and complex relationships in the NN you might not find out it did the same thing until decades later.
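
        For a sense of what “auditing” can mean in practice, here is a rough sketch (invented numbers, not the law’s actual methodology): compare selection rates across groups and flag large gaps, in the spirit of the classic four-fifths rule.

        ```python
        # Outcome audit sketch: selection rates and impact ratios by group.
        # Records are invented for illustration.
        from collections import defaultdict

        def selection_rates(records):
            # records: iterable of (group_label, was_selected) pairs
            totals, selected = defaultdict(int), defaultdict(int)
            for group, hired in records:
                totals[group] += 1
                selected[group] += int(hired)
            return {g: selected[g] / totals[g] for g in totals}

        def impact_ratios(rates):
            best = max(rates.values())
            return {g: r / best for g, r in rates.items()}

        records = (
            [("group_a", True)] * 40 + [("group_a", False)] * 60
            + [("group_b", True)] * 20 + [("group_b", False)] * 80
        )

        rates = selection_rates(records)
        print(rates)                 # {'group_a': 0.4, 'group_b': 0.2}
        print(impact_ratios(rates))  # group_b at 0.5, well under 0.8 -> flag it
        ```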

  • NotTheOnlyGamer@kbin.social · 1 year ago

    I was a recruiter previously; I’ve since thankfully gotten out of the profession. But ATS systems, and any training data they throw off for an AI to model, are built on job-order and job-description keywords and on how CVs match up to them. That data will be biased, just as an ATS is biased, toward people who write more keyword-dense documents with less outside content.

    If one race or one sex happens to be better at writing focused, keyword-dense content, that’s not something the software should be blamed for. But if the AI is looking at something beyond keyword matches against the BFOQs (bona fide occupational qualifications) listed on the job order, then I take serious issue.
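
    To make the keyword point concrete, here is a rough sketch (with invented keywords) of the kind of matching an ATS does: the score is purely the overlap between the CV and the job description’s terms, so whoever writes the most keyword-dense document wins.

    ```python
    # Toy ATS-style keyword match: score = fraction of job-description
    # keywords that appear in the CV. Keywords below are invented.
    import re

    def keyword_score(cv_text: str, jd_keywords: set) -> float:
        words = set(re.findall(r"[a-z+#./-]+", cv_text.lower()))
        return len(words & jd_keywords) / len(jd_keywords)

    jd_keywords = {"recruiting", "sourcing", "ats", "pipeline", "offers"}
    cv = "Full-cycle recruiting: sourcing, pipeline reviews, and closing offers"
    print(keyword_score(cv, jd_keywords))  # 0.8 -- four of five keywords present
    ```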