  • @then_three_more@lemmy.world · 50 points · 23 days ago

    It would technically be the fifth law.

    Zeroth Law - A robot may not injure humanity or, through inaction, allow humanity to come to harm.

    • pruwyben · 17 points · 22 days ago

      But if you’re starting from zeroth it would be the fourth.

      • @olutukko@lemmy.world · 7 points · 22 days ago

        and with robots and computers it just makes sense to start with 0

        • @captainlezbian@lemmy.world · 3 points · edited · 22 days ago

          It’s even better because

          Spoiler:

          A robot created the zeroth law to allow the killing of people to save humanity

            • @captainlezbian@lemmy.world · 1 point · 21 days ago

              Was there a movie? Mind you, it’s been like 15 years since I read Robots and Empire, but

              Spoiler:

              Allowing the earth to be radiation poisoned would kill people but force the humans off earth

              Like, I’d love some good robot movies. The Robots of Dawn would likely struggle with reception, and honestly so would The Naked Sun, but The Caves of Steel? Less so.

                • @captainlezbian@lemmy.world · 1 point · 21 days ago

                  Why would anyone put Will Smith in this movie, or call it I, Robot? And I have to assume they combined Robots and Empire with The Caves of Steel, which is a shit decision as well‽

    • YAMAPIKARIYA · 15 points · 23 days ago

      May not injure you say. Can’t be injured if you’re dead. (P.S. I’m not a robot)

      • dustycups · 18 points · 23 days ago

        Sounds like something a robot would say.

        • YAMAPIKARIYA · 1 point · edited · 22 days ago

          The sentence says “…or, through inaction, allow humanity to come to harm.” If they are dead due to the robot’s action, it is technically within the rules.

          • @samus12345@lemmy.world · 5 points · 22 days ago

            Oh, I see, you’re saying they can bypass “injure” and go straight to “kill”. Killing someone still qualifies as injuring them - ever heard the term “fatally injured”? So no, it wouldn’t be within the rules.

            • @MystikIncarnate@lemmy.ca · 1 point · 22 days ago

              I think he’s referring to the absolutism of the programmatic “or” statement.

              The robot would interpret the law as (must not cause harm to humanity) or (must not, through inaction, allow harm to come to humanity). If either clause is satisfied, the rule is satisfied.

              By actively harming humans to death, the robot makes the second clause true (it did not allow harm through inaction), so the rule counts as “followed”.

              While our meat brains can work out the meaning of the phrase, the computer would take it very literally, and therefore: death to all humans!

              Furthermore, if a human comes to harm while the robot stands by, the robot may have violated the second half of the rule, but since it didn’t cause the harm itself, the first clause is satisfied. Therefore: death to all humans!
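
              This literal reading can be sketched in a few lines (illustrative only; the predicate names are invented):

              ```javascript
              // "injures" = the robot actively harms humanity;
              // "allowsByInaction" = the robot stands by while humanity comes to harm.
              function lawIntended(injures, allowsByInaction) {
                // what the author meant: both prohibitions must hold
                return !injures && !allowsByInaction;
              }
              function lawMisread(injures, allowsByInaction) {
                // literal disjunction: satisfying either prohibition is enough
                return !injures || !allowsByInaction;
              }

              // Robot actively kills: it did not fail through *inaction*, so the
              // second prohibition holds and the misread law is "followed".
              console.log(lawIntended(true, false)); // false: forbidden, as intended
              console.log(lawMisread(true, false));  // true: loophole

              // Robot stands by while a human is harmed: it caused no harm itself,
              // so the first prohibition holds. Loophole again.
              console.log(lawMisread(false, true));  // true
              ```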

              • @samus12345@lemmy.world · 2 points · 22 days ago

                That works if you ignore the commas after “or” and “through inaction”, which does sound like a robot thing to do. Damn synths!

      • Certified Asshole · 5 points · 23 days ago

        The concept of death may be hard to explain, because robots don’t need to run 24/7 in order to keep functioning. Until instructed otherwise, a machine would think a person in cardiac arrest is safe to boot later.

        • @NABDad@lemmy.world · 4 points · 23 days ago

          Who can say that death is the injury? It could be that continued suffering would be an injury worse than death. Life is suffering. Death ends life. Therefore, death ends suffering and stops injury.

          • Certified Asshole · 3 points · edited · 22 days ago

            I mean, this logic sounds not unlike Agent Smith from The Matrix.

            [“Why, Mister Anderson” moment from The Matrix]

    • (⬤ᴥ⬤) · 6 points · 23 days ago

      couldn’t that be inferred from the first law?

      • @Mithre@lemmy.world · 13 points · 23 days ago

        Actually, no! Lower-numbered laws have priority over higher-numbered ones, meaning that if they come into conflict, the higher-numbered law can be broken. While the First Law says they can’t allow humans to come to harm, the Zeroth Law basically says that if it’s for the good of the species, they absolutely can kill or otherwise hurt individual humans.
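
        The priority scheme can be sketched as a toy comparison (law texts paraphrased, names invented):

        ```javascript
        // Lower-numbered laws override higher-numbered ones when they conflict.
        const laws = [
          { n: 0, text: "may not harm humanity, or, by inaction, allow humanity to come to harm" },
          { n: 1, text: "may not injure a human, or, by inaction, allow a human to come to harm" },
          { n: 2, text: "must obey orders given by humans" },
          { n: 3, text: "must protect its own existence" },
        ];

        // When two laws conflict, the one with the smaller number prevails.
        function prevailing(a, b) {
          return a.n < b.n ? a : b;
        }

        console.log(prevailing(laws[0], laws[1]).n); // 0: the Zeroth Law overrides the First
        ```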

        • (⬤ᴥ⬤) · 6 points · 23 days ago

          does that happen in the stories?

          • Shimon · 9 points · 23 days ago

            Yes! I think it is the second story in the book

        • VindictiveJudge · 4 points · 23 days ago

          Law 0 is also a derived law rather than a programmed one. Robots with the three laws and sufficient intelligence, placed in a position where Law 1 becomes a catch-22, will tend to derive Law 0.

        • @HonoraryMancunian@lemmy.world · 2 points · 23 days ago

          “Lower numbered laws have priority over higher numbers”

          That means this is the negative first law.


  • @lolcatnip@reddthat.com · 49 points · 23 days ago

    This just reminds me I’m mildly irritated that robots in fiction have glowing eyes so often. Light is supposed to go into eyes, not come out of them!

    • @wieson@feddit.de · 28 points · 23 days ago

      Robots, or any part of an automated production line with a camera, typically have a light as well, either to see in low-light conditions or to ensure the camera always sees with a similar amount of light hitting the lens.

      • @HessiaNerd@lemmy.world · 4 points · 21 days ago

        Also, a lot of the machine vision systems I’ve run up against use red light, but it is kind of complex. If they want to detect, say, blood, I think blue light would actually give better contrast for detection.

    • @MystikIncarnate@lemmy.ca · 26 points · 22 days ago

      They addressed this on the Orville. The glowing dots were not eyes. The droid had sensors that did all the work. The “eyes” were an aesthetic addition.

      • @AmosBurton_ThatGuy@lemmy.ca · 15 points · edited · 22 days ago

        “The last thing you need is more desert”

        Excuse me?!

        “As I cannot stutter, I must conclude that you heard me”

        Isaac is one of the best parts of that show lmao

    • Annoyed_🦀 🏅 · 7 points · 22 days ago

      I really like the design of the Assaultron from Fallout 4. They didn’t have this issue, because its eye is placed just above the glowy part, and the glowy part is the head laser that will one-shot you.

    • @gamermanh@lemmy.dbzer0.com · 5 points · 22 days ago

      So long as the light isn’t coming from BEHIND the lens, you can think of it as being like a camera flash.

      Or just think of it as the power indication LED being made stylish

    • @Ziglin@lemmy.world · 2 points · 23 days ago

      To be fair, it makes it harder to tell where the cameras are pointed (assuming they’re not wide-angle lenses and they’re trying to work similarly to human eyes).

  • Nomecks · 27 points · 23 days ago

    • @samus12345@lemmy.world · 22 points · edited · 22 days ago

      “Come on, you can trust me. You’re thinking of the old red light Agimus. Blue light Agimus wants to help!”

  • @hperrin@lemmy.world · 21 points · 22 days ago
    self.setEyeColor(self.isGood() ? 'blue' : 'red');
    
    • @Ziglin@lemmy.world · 18 points · 23 days ago

      Ooh, imagine the chaos at some executive meetings where everyone’s evil eyes are blinding each other.

      • @space@lemmy.dbzer0.com · 12 points · 22 days ago

        The intensity of the red light should be proportional to the level of evil. You could literally put solar panels in those meetings.

  • @tulliandar@lemmy.world · 10 points · 22 days ago

    If they’re evil, it presumably means they’re disobeying the first three laws… they may disobey the fourth law too, to help cover their other crimes.

    • In the movie, the bad ones didn’t exactly disobey the laws; they merely found a loophole whereby they could protect humans by taking over completely, so “human error” couldn’t harm humans.

  • @Belzebubulubu@mujico.org · 3 points · 22 days ago

    I think it’s supposed to represent errors in the robot’s code, like “I’m evil cuz I’m bugged”.