• astronaut_sloth · 5 days ago

    > A third point is that, as someone else mentioned, the cars are now trained, not ‘programmed’ with instructions to follow.

    > As an addendum to that third point, the training data is us, quite literally.

    Yeah, that makes sense. I was in SF a few months ago, and I was impressed with how the Waymos drove: not so much the driving quality (which seemed remarkably average) as how lifelike the driving was. They still seemed generally safer than the human-driven cars.

    > Improving the ‘miles per collision’ is best at the big things.

    Given the nature of reinforcement learning algorithms, this attitude actually works pretty well. Obviously it’s not perfect, and the company should really program in some guardrails to override the decision algorithm when it makes an egregiously poor decision (like, y’know, not stopping at crosswalks for pedestrians), but it’s actually not as bad or ghoulish as it sounds.
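    A guardrail layer like that is often sketched as a rule-based filter that runs before the learned policy's decision is accepted. Here's a minimal, purely illustrative version; all the names (`Observation`, `learned_policy`, the rules themselves) are hypothetical and have nothing to do with Waymo's actual stack:

```python
# Hypothetical sketch: hard-coded safety rules that override a learned
# driving policy when it would make an egregiously unsafe choice.
# All names and thresholds are illustrative, not any real vendor's API.
from dataclasses import dataclass

@dataclass
class Observation:
    pedestrian_in_crosswalk: bool
    speed_mph: float

def learned_policy(obs: Observation) -> str:
    # Stand-in for whatever the trained model would output.
    return "proceed"

def guarded_policy(obs: Observation) -> str:
    # Guardrails run first: explicit rules trump the learned decision.
    if obs.pedestrian_in_crosswalk:
        return "stop"
    if obs.speed_mph > 65.0:
        return "slow_down"
    return learned_policy(obs)

print(guarded_policy(Observation(pedestrian_in_crosswalk=True, speed_mph=20.0)))   # stop
print(guarded_policy(Observation(pedestrian_in_crosswalk=False, speed_mph=20.0)))  # proceed
```

    The point of the design is that the override rules are auditable and deterministic, so a regulator (or the company) can verify them even when the learned policy itself is opaque.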

    • Kitathalla@lemy.lol · 5 days ago

      > but it’s actually not as bad or ghoulish as it sounds

      We’ll have to agree to disagree on that one. I think decisions made solely to keep the company’s costs as low as possible, while actively choosing not to care about low-probability but devastating failures (we’ve all seen Fight Club, right? If A > B, where B = cost of paying out * chance of occurrence and A = cost of a recall, then no recall), are ghoulish.
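      For what it's worth, the Fight Club formula works out as plain arithmetic. A sketch with made-up numbers (every figure here is invented for illustration):

```python
# The recall formula as quoted in Fight Club: recall only if
# A (cost of recall) <= B (cost of paying out * chance of occurrence).
# All dollar amounts and probabilities below are made up.

def should_recall(recall_cost: float, payout_cost: float, p_failure: float) -> bool:
    expected_payouts = payout_cost * p_failure  # this is B
    return recall_cost <= expected_payouts      # recall_cost is A

# A = $50M recall vs. B = $1B payouts * 1% chance = $10M expected: no recall.
print(should_recall(recall_cost=50_000_000, payout_cost=1_000_000_000, p_failure=0.01))  # False

# A = $5M recall vs. the same $10M expected payout: recall.
print(should_recall(recall_cost=5_000_000, payout_cost=1_000_000_000, p_failure=0.01))   # True
```

      Which is exactly the ghoulish part: the formula prices the harm only as expected payout, so a catastrophic-but-rare failure can rationally go unfixed.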