The Pentagon has its eye on the leading AI company, which this week softened its ban on military use.

  • funkforager@sh.itjust.works · +172 · 9 months ago

    Remember when OpenAI was a nonprofit first and foremost, and we were supposed to trust they would make AI for good and not evil? Feels like it was only Thanksgiving…

    • Dave@lemmy.nz · +80 · 9 months ago

      I mean, there was all that drama where the board, formed to prevent exactly this, kicked out the CEO who was pushing for it; then the board itself got booted and replaced with a new board, and that CEO guy was brought back. So this was pretty much going to happen.

      • hoshikarakitaridia@sh.itjust.works · +38 · 9 months ago

        And some people pointed it out even back then. There were signs that the employees were very loyal to Altman, but Altman didn’t address the safety concerns of the board. So stuff like this was just a matter of time.

      • Sasha@lemmy.blahaj.zone · +24 / −1 · 9 months ago

        Effective altruism is just capitalism camouflage, and it’s also just really bad at being camouflage.

        • iAvicenna@lemmy.world · +10 · 9 months ago

          It helps you get a lot of community support and publicity during startup, and then you don’t have to give a damn about them once you take off.

        • Dave@lemmy.nz · +6 · 9 months ago

          This summary article says the board stated:

          “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” OpenAI’s post said. “The board no longer has confidence in his ability to continue leading OpenAI.”

          The article also says:

          Rumors and speculation swirled on social media, with tech industry heads, reporters, and onlookers trying to make sense of the situation based on what little information was provided in the board’s announcement. Tech journalist Kara Swisher quickly reported that based on what information she had from sources, there was a “misalignment” between OpenAI’s for-profit side, represented by Altman, and the nonprofit side, which is controlled by the board.

          As far as I know the exact issue was not made public, but basically the board is there to make sure the company puts ethics over profits. Altman was hiding stuff from the board (presumably because they would consider it in conflict with their goal), and so the board fired him. But then there was an uproar from the investors, Microsoft almost ended up hiring half the company as they threatened to resign in droves, and in the end the board resigned and was replaced.

          Does that answer the question?

            • Spedwell@lemmy.world · +2 · 9 months ago

              I seriously doubt it had anything to do with his wedding. I don’t think the sexuality of a CEO is that big an issue in this day (see: Tim Cook).

              Especially considering how Altman has steered OpenAI versus the board’s stated mission, it seems much more likely that his temporary ousting had to do with company direction than with his sexuality.

              • afraid_of_zombies@lemmy.world · +2 · 9 months ago

                And when I hear about a minority being pushed out of a position with no obvious cause, I wonder. Homophobia does exist. He announces his gay wedding, gets fired, and no one can come up with a clear reason why. Yeah.

                • Spedwell@lemmy.world · +2 · edited · 9 months ago

                  I mean, their press release said “not consistently candid”, which is about as close to calling someone a liar as corporate speak will get. Altman ended up back in the captain’s chair, and we haven’t heard anything further.

                  If the original reason for firing made Altman look bad, we would expect this silence.

                  If the original reason was a homophobic response from the board, we might expect OpenAI to come out and spin a vague statement on how the former board had a personal gripe with Altman unrelated to his performance as CEO, and that after replacing the board everything is back to the business of delivering value etc. etc.

                  I’m not saying it isn’t possible, but given all we know, I don’t think the fact that Altman is gay (now a fairly digestible fact for a public figure) is the reason he was ousted. Especially if you follow journalism about TESCREAL/Silicon Valley philosophies, it is clear to see: this was the board trying to preserve the original altruistic mission of OpenAI, and the commercial branch finally shedding the dead weight.

    • guacupado@lemmy.world · +32 / −3 · 9 months ago

      I stopped having faith in nonprofits after seeing how much the successful ones pay their CEOs. They’re just businesses riding the low-tax train until they’re rich enough to not care anymore.

      • camelCaseGuy@lemmy.world · +4 / −21 · 9 months ago

        I don’t understand that point of view? Why would they pay their CEOs less than any other company? If they did, then they would either not be able to hire CEOs, have the shittiest CEOs or have CEOs that wouldn’t give a crap. People don’t live on welfare, especially highly connected, highly educated people like CEOs.

        • grepe@lemmy.world · +22 / −1 · edited · 9 months ago

          Why do you think a lower-paid CEO must be shitty? There turns out to be very little link between CEO pay and company performance… they are only paid a lot because they are in a position of power to directly influence their own salary.

          • uranibaba@lemmy.world · +1 / −3 · 9 months ago

            they are only paid a lot cause they are in the position of power to directly influence their salary.

            And not because they have a much higher responsibility? As a CEO, it is your job to make sure a company makes a profit (unless you are a nonprofit, in which case I guess you have some other goal to achieve). That is what you pay a CEO to do. I assume you would pay more for someone who is able to turn a higher profit.

    • Moira_Mayhem@lemmy.blahaj.zone · +19 · 9 months ago

      It seems to be a trend that any service that claims not to be evil is just waiting for the right moment to drop that pretense.

      • Hamartiogonic@sopuli.xyz · +6 · edited · 9 months ago

        “In 1882 I was in Vienna, where I met an American whom I had known in the States. He said: ‘Hang your chemistry and electricity! If you want to make a pile of money, invent something that will enable these Europeans to cut each others’ throats with greater facility.'”

        Hiram Maxim

        I wonder if something similar happened with OpenAI.

        Forget about NFTs and marketing. Invent something that will enable these Europeans to cut each other’s throats more efficiently.

    • wooki@lemmynsfw.com · +4 / −4 · edited · 9 months ago

      I wouldn’t be too worried; they’ve just made an overglorified word predictor and a blender of people’s art.

        • afraid_of_zombies@lemmy.world · +2 / −2 · 9 months ago

          ChatGPT would be a terrible propaganda tool. Also, why do you need a better one? The existing ones work pretty well: Fox/Sky News and the internet troll army out of Russia.

        • wooki@lemmynsfw.com · +1 / −3 · 9 months ago

          Propaganda isn’t new. Sure, it’s more widely available now, but it’s not new.

          • pinkdrunkenelephants@lemmy.world · +5 / −1 · 9 months ago

            And that totally justifies having a robot that does it so efficiently that it lets people produce deepfakes that are hard to debunk, robbing people of their ability to discern what is reality and what is not.

            • wooki@lemmynsfw.com · +1 / −1 · edited · 9 months ago

              Again, not new; stop grandstanding it as a new effect. Media outlets have been doing this since the dawn of journalism. The scientific process was created to combat it, political standards help reduce it, and laws make it financially unattractive. The fact remains: it’s not new.

              The only thing that is new is the financial gain from the hype of abusing the word “AI”, and the media not calling it out. But hey, here we are back at the start. It’s not new.

              • pinkdrunkenelephants@lemmy.world · +1 / −4 · 9 months ago

                And that totally makes it okay for you to use an LLM to do so far more effectively and far more efficiently, destroying humanity’s ability to discern reality

                • wooki@lemmynsfw.com · +1 / −1 · 9 months ago

                  The fact that you think people need an LLM to create garbage is just weird. They can do it without one just fine. Better get some tinfoil; I hear putting it on your head stops the artificial word predictor from copying your thoughts.

              • pinkdrunkenelephants@lemmy.world · +5 / −2 · 9 months ago

                Nope, not deepfakes that convincing.

                Keep lying to yourself though. Keep convincing yourself it’s worthwhile to destroy the world you claim to love just so you can keep your shiny new toy. Keep trying to tell yourself it’s not going to harm everyone else around you and that you’re still a good person.

                • afraid_of_zombies@lemmy.world · +2 / −3 · 9 months ago

                  Right, all those people eating fucking horse dewormer were perfectly rational before.

                  Oh noes AI is going to destroy us all.

  • SGG@lemmy.world · +51 / −5 · 9 months ago

    War, huh, yeah

    What is it good for?

    Massive quarterly profits, uhh

    War, huh, yeah

    What is it good for?

    Massive quarterly profits

    Say it again, y’all

    War, huh (good God)

    What is it good for?

    Massive quarterly profits, listen to me, oh

  • assassinatedbyCIA@lemmy.world · +36 / −1 · 9 months ago

    Capitalism gotta capital. AI has the potential to be revolutionary for humanity, but because of the way the world works it’s going to end up being a nightmare. There is no future under capitalism.

  • kromem@lemmy.world · +33 / −4 · edited · 9 months ago

    Literally no one is reading the article.

    The terms still prohibit use to cause harm.

    The change is that a general ban on military use has been removed in favor of a generalized ban on harm.

    So for example, the Army could use it to do their accounting, but not to generate a disinformation campaign against a hostile nation.

    If anyone actually really read the article, we could have a productive conversation around whether any military usage is truly harmless, the nuances of the usefulness of a military ban in a world where so much military labor is outsourced to private corporations which could ‘launder’ terms compliance, or the general inability of terms to preemptively prevent harmful use at all.

    Instead, we have people taking the headline only and discussing AI being put in charge of nukes.

    Lemmy seems to care a lot more about debating straw men arguments about how terrible AI is than engaging with reality.

    • diffusive@lemmy.world · +2 · 9 months ago

      Sure, it’s less bad. It’s not good though.

      If I did accounting (or even just cooking, really) for the Mafia, it would be less bad than actually going out with a gun to threaten or kill people, but it would still be bad.

      Why? Because it still helps an organisation whose core mission is hurting people.

      And it’s purely out of greed, because OpenAI doesn’t desperately need this application to avoid going bankrupt.

    • Snapz@lemmy.world · +3 / −1 · 9 months ago

      The point is that it’s a purposeful slow walk. The entire “non-profit” framing and these “limitations” are a very calculated marketing play to soften the justified fears of unregulated, for-profit (i.e. endless-growth) AI development. It will find its way to full evil with 1000 small cuts, and with folks like you arguing for them at every step along the way, “IT’S JUST A SMALL CUT!!!”

      • kromem@lemmy.world · +2 · 9 months ago

        It will find its way to full evil with 1000 small cuts, and with folks like you arguing for them at every step along the way, “IT’S JUST A SMALL CUT!!!”

        While I do think AI development isn’t going to go in the direction you think it is, if you read it carefully you’ll notice that I’m actually not saying anything about whether it’s “a small cut” or not; I’m simply laying out the key nuance of the article that no one is reading.

        My point isn’t “OpenAI changing the scope of their military ban is a good thing” it’s “people should read the fucking article before commenting if we want to have productive discussion.”

        • NeatNit@discuss.tchncs.de · +2 · 9 months ago

          I guess, but I never got hooked on any of the big social media sites, and on the few I did use (Reddit, mostly) I limited myself to rather non-political subjects like jokes and specific kinds of content. I’m new to Lemmy, and this is most of what I’ve been seeing, which is why I said that.

          Obviously I know that this is what all social media looks like these days. I hoped Lemmy would have at least some noticeable vocal minority of balanced people, but nah.

  • ArmokGoB@lemmy.dbzer0.com · +27 / −1 · 9 months ago

    Finally, I can have it generate a picture of a flamethrower without it lecturing me like I’m a child making finger guns at school.

  • mechoman444@lemmy.world · +25 / −1 · 9 months ago

    If you guys think that AI hasn’t already been in use in various militaries, including America’s, y’all are living in la-la land.

  • Alto@kbin.social · +10 / −1 · edited · 9 months ago

    So while this is obviously bad, did any of you actually think for a moment that this was stopping anything? If the military wants to use ChatGPT, they’re going to find a way whether or not OpenAI likes it. In their minds they may as well get paid for it.

    • bean@lemmy.world · +5 · 9 months ago

      I can see them having their own GPT, using the model with their own data, and not using the public tool to send secret info ‘out’ and back into their own system.

      • LemmyIsFantastic@lemmy.world · +2 / −4 · 9 months ago

        The DoD is happy to use commercial services as long as the security meets their needs.

        They likely have a private version running on gov cloud high though.

    • kromem@lemmy.world · +5 · 9 months ago

      That would count as harm and be disallowed by the current policy.

      But a military application of using GPT to identify and filter misinformation would not be harm; it would have been prevented by the previous policy prohibiting any military use, but is allowed under the current policy.

      Of course, it gets murkier if the military application of identifying misinformation later ends up with a drone strike on the misinformer. In theory they could submit a usage description of “identify misinformation” which appears to do no harm, but then take the identifications to cause harm.

      Which is part of why a broad ban on military use may have been more prudent than a ban only on harmful military usage.

  • AutoTL;DR@lemmings.world (bot) · +5 / −1 · 9 months ago

    This is the best summary I could come up with:


    OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used.

    “We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs,” OpenAI spokesperson Niko Felix said in an email to The Intercept.

    Suchman and Myers West both pointed to OpenAI’s close partnership with Microsoft, a major defense contractor, which has invested $13 billion in the LLM maker to date and resells the company’s software tools.

    The changes come as militaries around the world are eager to incorporate machine learning techniques to gain an advantage; the Pentagon is still tentatively exploring how it might use ChatGPT or other large-language models, a type of software tool that can rapidly and dextrously generate sophisticated text outputs.

    While some within U.S. military leadership have expressed concern about the tendency of LLMs to insert glaring factual errors or other distortions, as well as security risks that might come with using ChatGPT to analyze classified or otherwise sensitive data, the Pentagon remains generally eager to adopt artificial intelligence tools.

    Last year, Kimberly Sablon, the Pentagon’s principal director for trusted AI and autonomy, told a conference in Hawaii that “[t]here’s a lot of good there in terms of how we can utilize large-language models like [ChatGPT] to disrupt critical functions across the department.”


    The original article contains 1,196 words, the summary contains 254 words. Saved 79%. I’m a bot and I’m open source!

  • AquaTofana@lemmy.world · +3 · 9 months ago

    I’m honestly kind of shocked at this. I know for our annual evaluations this year, people were using ChatGPT to write their statements.

    I thought for sure someone with a secret squirrel type job was going to use it for that innocuous purpose, end up inputting top secret information, and then the DoD would ban the practice completely.