The majority of U.S. adults don’t believe the benefits of artificial intelligence outweigh the risks, according to a new Mitre-Harris Poll released Tuesday.

  • ShadowRam@kbin.social

    The majority of U.S. adults don’t understand the technology well enough to make an informed decision on the matter.

    • GoodEye8@lemm.ee

      To be fair, even if you understand the tech it’s kinda hard to see how it would benefit the average worker, as opposed to the CEOs and shareholders who will use it as a cost-cutting method to make more money. Most of those workers will be laid off because of AI, so obviously it’s of no benefit to them.

      • Billiam@lemmy.world

        Just spitballing here, and this may be a bit of pie-in-the-sky thinking, but ultimately I think this is what might push the US into socialized healthcare and/or UBI. Increasing automation won’t reduce the population, and as more workers are put out of work by automation, they’ll have more time and motivation to do things like protest.

        • Khotetsu@lib.lgbt

          The US economy literally depends on 3-4% of the workforce being so desperate for work that they’ll take any job, regardless of how awful the pay is. Commentators said as much during the recent labor shortage, citing how this is used to keep wages down and calling it a “bad thing” that almost 100% of the workforce was employed, because it meant people could pick and choose rather than just take the first offer they got, thus causing wages to increase.

          Poverty and homelessness are a feature, not a bug.

          • Billiam@lemmy.world

            Yes, but for capitalism it’s a delicate balance: too many job openings give labor more power, but too few job openings give people reason to challenge the status quo. That 3-4% may be enough for the capitalists, but what happens when 15-20% of your workforce is unemployed because of automation? That’s when civil unrest happens.

            Remember that the most progressive presidential administration in US history, FDR’s, came right after the Gilded Age and the Roaring ’20s crashed the economy. When 25% of Americans were out of work during the Great Depression, social programs suddenly looked much more preferable than food riots. And the wealth disparity now is even greater, relatively, than it was back then.

            • Khotetsu@lib.lgbt

              Very true, but it’s precisely that wealth disparity that concerns me. I’ve seen the current US wealth disparity described as being on par with the disparity in France just before the French Revolution happened, where the cost of a loaf of bread had soared to more than the average worker made in a day. I worry that the more than half a century of anti-union propaganda and “get what I need and screw everybody else” attitude has beaten down the general public enough that there simply won’t be enough of a unified effort to enact meaningful change. I worry about how bad things will have to get before it’s too much. How many families will never recover.

              But these are also very different times compared to the 1920s in that we’ve been riding on the coattails of the post WW2 economic boom for almost 70 years, and as that continues to slow down we might see some actual pushback. We already have, with every generation being more progressive than the last.

              But I still can’t help but worry.

          • rambaroo@lemmy.world

            Yep. I stopped listening to Marketplace on NPR because the last time I listened they were echoing this exact sentiment. Somehow it’s a good thing that wages aren’t keeping up with inflation. Fuck NPR.

        • TwilightVulpine@lemmy.world

          Seems more likely that they’ll have more time not in the sense of having easier jobs but by being laid off and having to fight for their livelihood. In the corporate-driven society that we live today, it’s unlikely that the benefits of new advancements will be spontaneously shared.

          • Billiam@lemmy.world

            Seems more likely that they’ll have more time not in the sense of having easier jobs but by being laid off and having to fight for their livelihood.

            This is exactly what I meant.

            People who have to fight for subsistence won’t easily revolt, because they’re too busy trying to survive.

            People who are unemployed have nothing to lose by not revolting. And the more automation there is, the more unemployed people there will be.

            • TwilightVulpine@lemmy.world

              So we see it the same way, but I don’t feel very optimistic about it, because it’s going to get much worse before it might get better. All the suffering and struggle that it will take to reform society will be ugly.

              • Billiam@lemmy.world

                Yes, I think it will get worse before it gets better. As long as there is a sociopathic desire to hoard wealth, and no fucks given to our fellow humans, this is how it will be. Capitalism causes these issues, and so capitalism can’t fix them.

      • treadful@lemmy.zip

        Efficiency and productivity aren’t bad things. Nobody likes doing bullshit work.

        Unemployment may become a huge issue, but IMO the solution isn’t busy work. Or at least come up with more useful government jobs programs.

        • GoodEye8@lemm.ee

          Of course, there’s nothing inherently wrong with using AI to get rid of bullshit work. The issue is who will benefit from using AI and it’s unlikely to be the people who currently do the bullshit work.

          • treadful@lemmy.zip

            But that’s literally everything in a capitalist economy. Value accrues to capital. It has nothing to do with AI.

        • credit crazy@lemmy.world

          You see, the problem with that is that AI, in the case of animation and art, isn’t removing menial labor; it’s removing hobbies that people get paid for taking part in.

      • rambaroo@lemmy.world

        Most of them? The vast majority of jobs cannot be replaced by LLMs. The CEOs who believe that are delusional.

        • GoodEye8@lemm.ee

          You could cut housing prices to a tenth of what they currently are and it wouldn’t matter to the homeless people who don’t have a job. Things being cheaper doesn’t matter to people who can’t make a living.

    • Moobythegoldensock@lemm.ee

      If you look at the poll, the concerns raised are all valid. AI will most likely be used to automate cyberattacks, identity theft, and to spread misinformation. I think the benefits of the technology outweigh the risks, but these issues are very real possibilities.

    • meseek #2982@lemmy.ca

      Informed or not, they aren’t wrong. If there is an iota of a chance that something can be misused, it will be. Human nature. AI will be used against everyone. Its potential for good is equally as strong as its potential for evil.

      But imagine this. You get laid off. At that moment, bots are contacting your bank, LinkedIn, and most of the financial lenders about the incident. Your credit is flagged as your income has dropped significantly. Your bank seizes the opportunity and jacks up your mortgage rates. Lenders are also making use of the opportunity to seize back their merchandise as you’ll likely not be able to make payments and they know it.

      Just one likely incident when big brother knows all and can connect the dots using raw compute power.

      Having every little secret parcelled over the internet because we live in the digital age is not something humanity needs.

      I’m actually stunned that even here, among the tech nerds, you all still don’t realize how much digital espionage is being done on the daily. AI will only serve to help those in power grow bigger.

      • treadful@lemmy.zip

        But imagine this. You get laid off. At that moment, bots are contacting your bank, LinkedIn, and most of the financial lenders about the incident. Your credit is flagged as your income has dropped significantly. Your bank seizes the opportunity and jacks up your mortgage rates. Lenders are also making use of the opportunity to seize back their merchandise as you’ll likely not be able to make payments and they know it.

        None of this requires “AI.” At most AI is a tool to make this more efficient. But then you’re arguing about a tool and not the problem behavior of people.

      • aidan@lemmy.world

        AI is not bots; most of that would be easier to do with traditional code than with a deep learning model. But the reality is there is no incentive for these entities to cooperate with each other.
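
        To put it concretely, the scenario in the parent comment needs nothing fancier than event-driven if/else rules wired to a data feed: plain traditional code, no model. A minimal sketch in Python, with every name and field invented for illustration:

        ```python
        # Hypothetical sketch: a lender reacting to an employment-status feed with
        # plain rule-based code. No deep learning model anywhere in the loop.

        def on_employment_event(event: dict) -> list[str]:
            """Return the follow-up actions a lender's system might queue up."""
            actions = []
            if event.get("status") == "laid_off":
                # Hard-coded rules, the kind of automation banks have run for decades.
                actions.append(f"flag_credit_file(customer_id={event['customer_id']})")
                actions.append(f"review_mortgage_rate(customer_id={event['customer_id']})")
            return actions

        print(on_employment_event({"customer_id": 42, "status": "laid_off"}))
        # ['flag_credit_file(customer_id=42)', 'review_mortgage_rate(customer_id=42)']
        ```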

    • cybersandwich@lemmy.world

      But our elected officials like McConnell, Feinstein, Sanders, Romney, Manchin, Blumenthal, and Marley have us covered.

      They are up to speed on the times and know exactly what our generations challenges are. I trust them to put forward meaningful legislation that captures a nuanced understanding that will protect the interests of the American people while positioning the US as a world leader on these matters.

    • ZzyzxRoad@lemm.ee

      Seeing technology consistently put people out of work is enough for people to see it as a problem. You shouldn’t need to be an expert in it to be able to have an opinion when it’s being used to threaten your source of income. Teachers have to do more work and put in more time now because ChatGPT has affected education at every level. Educators already get paid dick to work insane hours of skilled labor, and students have enough on their plates without having to spend extra time in the classroom. It’s especially unfair when every student has to pay for the actions of the few dishonest ones. Pretty ironic how it’s set us back technologically, to the point where we can’t use the tech that’s been created and implemented to make our lives easier. We’re back to sitting at our desks with a pencil and paper for an extra hour a week.

      There are already AI “books” being sold to unknowing customers on Amazon. How long will it really be until researchers are competing with it? Students won’t be able to recognize the difference between real and fake academic articles. They’ll spread incorrect information after stealing pieces of real studies without the authors’ permission, then mash them together into some bullshit that sounds legitimate. You know there will be AP articles (written by AI) with headlines like “new study says xyz!” and people will just believe that shit.

      When the government can do its job and create fail safes like UBI to keep people’s lives/livelihoods from being ruined by AI and other tech, then people might be more open to it. But the lemmy narrative that overtakes every single post about AI, that says the average person is too dumb to be allowed to have an opinion, is not only, well, fucking dumb, but also tone deaf and willfully ignorant.

      Especially when this discussion can easily go the other way, by pointing out that tech bros are too dumb to understand the socioeconomic repercussions of AI.

      • ShadowRam@kbin.social

        I mean, NFTs are a ridiculous comparison, because those who understood that tech were exactly the ones who said it was ridiculous.

      • Echo Dot@feddit.uk

        Wasn’t it the ones who didn’t understand NFTs who were the fan boys? Everyone who knew what they were said they were bloody stupid from the get-go.

    • archon@sh.itjust.works

      You can make an observation that something is dangerous without intimate knowledge of its internal mechanisms.

      • ShadowRam@kbin.social

        Sure you can, but that doesn’t change the fact that you’re ignorant of whether it’s dangerous or not.

        And these people are making ‘observations’ without knowledge of even the external mechanisms.

        • archon@sh.itjust.works

          I’m sure I can name many examples of things I observed to be dangerous where the observation turned out to be correct. But sure, claim unilateral ignorance and dismiss anyone who doesn’t agree with your view.

  • GreenBottles@lemmy.world

    Most adult Americans don’t know the difference between a PC Tower and Monitor, or a Modem and a PC, or an ethernet cable and a usb cable.

        • Armen12@lemm.ee

          It’s an outdated observation. Everyone today has a basic knowledge of computers

          • foggenbooty@lemmy.world

            I’d dispute that. The iPadification of tech has people using computers more, but with less actual knowledge of computers.

            • Armen12@lemm.ee

              I’d like some data on that because the tech world seems to have come a long way in just the last 20 years. Things wouldn’t progress this fast if people didn’t know what they were doing. Saying “kids these days” is such a cliche

          • Echo Dot@feddit.uk

            No it’s pretty up-to-date really.

            Where I work we have a lot of mini Dell workstations and everyone insists on calling them hard drives or processors. The irony being that they actually don’t have any data storage capability at all, other than RAM, since they all save data on the network drives. So the one thing they definitively are not is hard drives.

  • Uncle_Iroh@lemmy.world

    Most U.S. adults also don’t understand what AI is in the slightest. What do the opinions of people who are not in the slightest educated on the matter affect, lol.

    • Mac

      “What do the opinions of people who are not in the slightest educated on the matter affect”

      Judging by the elected leaders of the USA: quite a lot, in fact.

      • Armen12@lemm.ee

        So you’d rather only the 1% get the right to vote? How about only white land owners? How about only men get to vote in this wonderful utopia of yours

        • 4am@lemm.ee

          Stop stop there isn’t any straw left!

          • Armen12@lemm.ee

            Making a mockery of the workforce who rely on jobs to not be homeless is not appropriate in this conversation, nor is it even an argument to begin with; it’s just a snobbish incel who probably lives in a gated community mocking poor people

      • Wolf_359@lemmy.world

        Prime example. Atomic bombs are dangerous and they seem like a bad thing. But then you realize that, counter to our intuition, nuclear weapons have created peace and security in the world.

        No country with nukes has been invaded. No world wars have happened since the invention of nukes. Countries with nukes don’t fight each other directly.

        Ukraine had nukes, gave them up, and was promptly invaded by Russia.

        Things that seem dangerous aren’t always dangerous. Things that seem safe aren’t always safe. More often though, technology has good sides and bad sides. AI does and will continue to have pros and cons.

        • Hexagon@feddit.it

          Atomic bombs are also dangerous because if someone ends up launching one by mistake, all hell is going to break loose. This has almost happened multiple times:

          https://en.wikipedia.org/wiki/List_of_nuclear_close_calls

          We’ve just been lucky so far.

          And then there are questionable state leaders who may even use them willingly. Like Putin, or Kim, maybe even Trump.

          • gravitas_deficiency@sh.itjust.works

            …and the development and use of nuclear power has been one of the most important developments in civil infrastructure in the last century.

            Nuclear isn’t categorically free from the potential to harm, but it can also do a whole hell of a lot for humanity if used the right way. We understand it enough to know how to use it carefully and safely in civil applications.

            We’ll probably get to the same place with ML… eventually. Right now, everyone’s just throwing tons of random problems at it to see what sticks, which is not what one could call responsible use - particularly when outputs are used in a widespread sense in production environments.

          • Jerkface@lemmy.world

            People have had the same concerns about automation since basically forever. Automation isn’t the problem. The people who use automation to perpetuate the systems that work against us will continue to find creative ways to exploit us with or without AI. Those people and those systems-- they are the problem. And believe it or not, that problem is imminently solvable.

              • Jerkface@lemmy.world

                I fully agree with everything you said. My point is more that if we look at AI as the culprit, we’re missing the point. If I may examine the language you are using a bit-

                AI removes thought work.

                Employers are the agents. They remove thought work.

                it will also decimate workers.

                Employers will decimate workers.

                It would be smart to enact legislation that will mitigate the damage employers enabled by AI will do to workers, but they will continue to exploit us regardless.

                Using language that makes AI the antagonist helps tyrants deflect their overwhelming share of the blame. The responsible parties are people, who can and should be held accountable.

              • Jerkface@lemmy.world

                I want to avoid using the term solution, not least of all because implementation has its own set of challenges, but some of us used to dream that automation would do that work for us. Perhaps naively, some of us assumed that people just wouldn’t have to work as much. And perhaps I continue to be naive in thinking that that should still be our end goal. If automation reduces the required work hours by 20% with no reduction in profit, full time workers should have a 32 hour week with no reduction in income.

                But since employers will always pocket that money if given the option, we need more unionization, we need unions to fight for better contracts, we need legislation that will protect and facilitate them, and we need progressive taxation that will decouple workers’ most essential needs from their employers so they have more of a say in where and how they work, be that universal public services, a minimum income guarantee, or what have you.

                We’re quite far behind in this fight but there has been some recent progress about which I am pretty optimistic.

                Edit: for clarification

                • Franzia@lemmy.blahaj.zone

                  This was so very thoughtful, and after reading it, I feel optimistic too. Fuck yeah.

                  Edit: thank you.

          • PsychedSy@sh.itjust.works

            Technology tends to drive costs down and create more jobs, but in different areas. It’s not like there hasn’t been capture by the super rich in the past 150 years, but somehow we still enjoy better lives decade by decade.

        • richieadler 🇦🇷@lemmy.myserv.one

          If you’re from one of the countries with nukes, of course you’ll see it as positive. For the victims of the nuke-wielding countries, not so much.

        • walrusintraining@lemmy.world

          That’s a good point; however, just because the bad thing hasn’t happened yet doesn’t mean it won’t. Everything has pros and cons, and it’s a matter of whether or not the pros outweigh the cons.

        • bogdugg@sh.itjust.works

          I don’t disagree with your overall point, but as they say, anything that can happen, will happen. I don’t know when it will happen; tomorrow, 50 years, 1000 years… eventually nuclear weapons will be used in warfare again, and it will be a dark time.

        • Techmaster@lemm.ee

          No world wars have happened since the invention of nukes

          Except the current world war.

      • GigglyBobble@kbin.social

        You need to understand to correctly classify the danger though.

        Otherwise you make stupid decisions, such as quitting nuclear energy in favor of coal because of an incident like Fukushima, even though that incident had just a single casualty due to radiation.

      • Uncle_Iroh@lemmy.world

        You chose an analogy with the most limited scope possible, but sure, I’ll go with it. To understand exactly how dangerous an atomic bomb is without just looking up Hiroshima, you need to have at least some knowledge of the subject; you’d also have to understand all the nuances, etc. The thing about AI is that most people haven’t a clue what it is, how it works, or what it can do. They just listen to the shit their Telegram-loving uncle spewed at the family gathering. A lot of people think AI is fucking sentient lmao.

        • walrusintraining@lemmy.world

          I don’t think most people think ai is sentient. In my experience the people that think that are the ones who think they’re the most educated saying stuff like “neural networks are basically the same as a human brain.”

          • Uncle_Iroh@lemmy.world

              You don’t think so, yet a software engineer from Google, Blake Lemoine, thought LaMDA was sentient. He took a lot of idiots down with him when he went public with those claims. Not to mention the movies that have been made with the premise of sentient AI.

              Your anecdotal experience and your feelings don’t in the slightest change the reality that there are tons of people who think AI is sentient and that it will somehow start some fucking robo revolution.

      • StereoTrespasser@lemmy.world

        I’m over here asking chatGPT for help with a pandas dataframe and loving every minute of it. At what point am I going to feel the effects of nuclear warfare?

        • walrusintraining@lemmy.world

          I’m confused how this is relevant. Just pointing out this is a bad take, not saying nukes are the same as AI. chatGPT isn’t the only AI out there btw. For example NYC just allowed the police to use AI to profile potential criminals… you think that’s a good thing?

              • Jerkface@lemmy.world

                The take is “let’s not forget to hold people accountable for the shitty things they do.” AI is not a killing machine. Guns aren’t particularly productive.

      • WhyIDie@kbin.social

        you also don’t have to understand how 5g works to know it spreads covid /s

        point is, I don’t see how your analogy works beyond the limited scope of only things that result in an immediate loss of life

        • walrusintraining@lemmy.world

          I don’t need to know the ins and outs of how the nazi regime operated to know it was bad for humanity. I don’t need to know how a vaccine works to know it’s probably good for me to get. I don’t need to know the ins and outs of personal data collection and exploitation to know it’s probably not good for society. There are lots of examples.

          • WhyIDie@kbin.social

            okay, I’ll concede, my scope also was pretty limited. I still stand by not trusting the public with deciding what’s the best use of AI, when most people think what we have now is anything more than supercharged statistics.

          • linearchaos@lemmy.world

            I can certainly grant that “you” don’t need to know, but there are a lot of differing opinions on even the things you’re talking about among the people in this very community.

            I would say that the royal “we” does need to know, because a lot of people hold opinions on facts that don’t line up with the actual facts. Sure, not you, not me, but a hell of a lot of people.

            • walrusintraining@lemmy.world

              I don’t disagree that people are stupid, but the majority of people got/supported the vaccine. Majority is sometimes a good indicator, that’s how democracy works. Again, it’s not perfect, but it’s not useless either.

    • kitonthenet@kbin.social

      Because they live in the same society as you, and they get to decide who goes to jail as much as you do

    • Franzia@lemmy.blahaj.zone

      Well, and being a snob about it doesn’t help. If all the average Joe knows about AI is what Google or OpenAI pushed to corporate media, that shouldn’t be where the conversation ends.

      • Uncle_Iroh@lemmy.world

        The average Joe can have their thoughts on it all they want, but their opinions on the matter aren’t really valid or of any importance. AI is best left to the people who have a deep knowledge of the subject, just as nuclear fusion is best left to scientists studying the field. I’m not going to tell average Joe the mechanic that I think the engine he just rebuilt might blow up, because I have no fucking clue about it. Sure, I have some very basic knowledge of it, but that’s pretty much where it ends.

    • gravitas_deficiency@sh.itjust.works

      You can not know the nuanced details of something and still be (rightly) sketched out by it.

      I know a decent amount about the technical implementation details, and that makes me trust its use in (what I perceive as) inappropriate contexts way less than the average layperson.

    • Armen12@lemm.ee

      What a terrible thing to say, they’re human beings so I hope they matter to you

          • Uncle_Iroh@lemmy.world

            I am a terrible person simply because they don’t matter to me? Do you cry for every victim your military has caused? Do you cry for every couple with a stillborn baby? No, you don’t. You think it’s shitty, because it is. But you don’t really care; they don’t truly matter to you. The way you throw those words around makes their meaning less.

            • Armen12@lemm.ee

              Lot of words to just say you’re a terrible person, we got it already, you don’t need to explain why you’re terrible

  • Endorkend@kbin.social

    The problem is that there is no real discussion about what to do with AI.

    It’s being allowed to be developed without much of any restrictions and that’s what’s dangerous about it.

    Like how some places are starting to use AI to profile the public Minority Report style.

    • pavnilschanda@lemmy.world

      Yep. It’s either “embrace the future, adapt or die” or “let’s put the technological genie back in the bottle”. No actual nuance.

      • PopOfAfrica@lemmy.world

        The problem is that capitalism puts us in this position. Nobody is abstractly upset that the jobs we hate can now be automated.

        What is upsetting is that we won’t be able to eat because of it.

    • RememberTheApollo_@lemmy.world

      Depends on who you talk to. If you’re a business that can replace human labor with AI, you’re probably discussing it pretty hard.

      What restrictions should it have? And how would you implement them? Because there would certainly be carve-outs like “you can’t make X with AI, unless of course you’re a big business that can profit off of it.”

  • Dasnap@lemmy.world

    The past decade has done an excellent job of making people cynical about any new technology. I find looking at what crypto bros are currently interested in as a good canary for what I should be suspicious of.

    • iopq@lemmy.world

      The vaccine saved millions of lives, yet people will be cynical despite reality

      • Dasnap@lemmy.world

        I feel like anti-vaccine groups have been around for a good chunk of time, but they certainly seemed to get a boost from the internet.

        • huginn@feddit.it

          If more of your family and friends are dying why would you avoid the ounce of prevention? That doesn’t make sense

          • GigglyBobble@kbin.social

            They wouldn’t attribute it to the virus but something like 5G radiation. And yes, it doesn’t make sense.

    • raktheundead@fedia.io

      It’s also worth noting that the same VCs who backed cryptocurrency have pivoted to generative AI. It’s all part of the same grift, just with different clothes.

      • WldFyre@lemm.ee

        Most major companies didn’t touch crypto with a 10ft pole, but they’ve leapt at the chance to use AI tech. I don’t think it’s the same grift at all personally.

        • raktheundead@fedia.io

          A lot of companies investigated cryptocurrency obliquely; “blockchain” was the hype word for several years in tech. And several of those companies had a serious sunk-cost fallacy going when they perpetuated their blockchain projects, despite blockchain only at best being a case of Worse Is Better, where a solution that sucks, but exists can be better than a perfect option that doesn’t.

    • kitonthenet@kbin.social

      It doesn’t hurt that the same companies that did all the things that made people cynical about technologies are the ones perpetrating this round of BS

    • Fermion@feddit.nl

      I am really disappointed that crypto became synonymous with speculative “investing.” The core blockchain technology seems like it could be useful for enhancing privacy online. However, the majority of groups loudly advertising that they use crypto are exploitative money grabs.

  • orca@orcas.enjoying.yachts

    I work with AI and don’t necessarily see it as “dangerous”. CEOs and other greed-chasing assholes are the real danger. They’re going to do everything they can to keep filling human roles with AI so that they can maximize profits. That’s the real danger. That and AI writing eventually permeating and enshittifying everything.

    A hammer isn’t dangerous on its own, but becomes a weapon in the hands of a psychopath.

      • ArmokGoB@lemmy.dbzer0.com

        Humans should be replaced wherever they can be and the value that is generated should go back to everyone.

        • Eccitaze@yiffit.net

          And if the odds of that happening are literally zero, what then? If the only feasible outcome of immediate, widespread AI adoption is an empty suit using the heel of their $750 Allen Edmonds shoe to grind the face of humanity even further into the mud, should we still plow on full steam ahead?

            The single biggest lesson humanity has failed to learn despite getting repeatedly smacked in the face since the industrial revolution is that sometimes new technologies and ideas aren’t worth the cost despite the benefits. Factories came and covered vast swaths of land in soot and ash, turned pristine rivers and lakes into flaming rivers of toxic sludge, and poisoned the earth. Cars choked the skies with smog, poisoned an entire generation with lead, and bulldozed entire neighborhoods and parks so that they could be paved over for parking lots and clogged freeways. Single use plastics choke the life out of our oceans, clog our waterways with garbage, and microplastics have infused themselves into our very biology, with health implications that will endure for generations. Social media killed the last remaining vestiges of polite discourse, opened the floodgates on misinformation, and gave a safe space for conspiracy theories and neonazis to fester. And through it all, we continue to march relentlessly towards a climate catastrophe that can no longer be prevented, with the only remaining variable being where the impact will lie on the spectrum from “life will suck for literally everyone, some worse than others” to “humanity will fall victim to its own self-created mass extinction event.”

          With multiple generations coming to the realization that all the vaunted progress of mankind will directly make their lives worse, an obvious trend line of humanity plowing ahead with the hot new thing and ignoring the consequences even after they become obvious and detrimental to society as a whole, and the many, instantly-obvious negative impacts AI can have, is it any wonder that so many are standing up and saying “No?”

    • Mjpasta@kbin.social

      So, because of greed and endless profit seeking, expect all corporations to replace everything that can be replaced - with AI…?

      • orca@orcas.enjoying.yachts

        I mean, they’re already doing it. Not in every role because not every one of them can be filled by AI, but it’s happening.

  • Queen HawlSera@lemm.ee

    At first I was all on board for artificial intelligence in spite of being told how dangerous it was; now I feel the technology has no practical application aside from providing a way to get a lot of sloppy, half-assed, and heavily plagiarized work done, because anything is better than paying people an honest wage for honest work.

    • nandeEbisu@lemmy.world

      AI is such a huge term. Google Lens is great: when I’m travelling I can take a picture of text and it will automatically get translated. Both of those features are aided by machine learning models.

      Generative text and image models have proven to have more adverse effects on society.

      I think we’re at a point where we should start normalizing using more specific terminology. It’s like saying I hate machines, when you mean you hate cars, or refrigerators or air conditioners. It’s too broad of a term to be used most of the time.

      • CoderKat@lemm.ee

        Yeah, I think LLMs and AI art have overdominated the discourse to the degree that some people think they’re the only form of AI that exists, ignoring things like text translation, the autocompletion of your phone keyboard, Photoshop intelligent eraser, etc.

        Some forms of AI are debatable of their value (especially in their current form). But there’s other types of AI that most people consider highly useful and I think we just forget about it because the controversial types are more memorable.

        • nandeEbisu@lemmy.world

          AI is a tool, its value is dependent on whatever the application is. Transformer architectures can be used for generating text or music, but they were also originally developed for text translation which people have fewer qualms with.

        • SnipingNinja@slrpnk.net

          ignoring things like text translation, the autocompletion of your phone keyboard, Photoshop intelligent eraser, etc.

          AFAIK two of those are generative AI based or as you said LLMs and AI art

        • nandeEbisu@lemmy.world

          It’s not a matter of slang; it’s referring to too broad a thing. You don’t need to go as deep as the type of model; something like AI image generation, or generative language models, is what you would refer to. We’ll hopefully start converging on shorthand from there for specific things.

        • kicksystem@lemmy.world

          I’d like people to make a distinction between AI and machine learning, and between machine learning and neural networks (the word “deep” is redundant nowadays). And then have some sense of the different popular types of neural nets: GANs, CNNs, transformers, stable diffusion. It might be nice if people knew what supervised, unsupervised, and reinforcement learning are. Lastly, people should have some sense of the difference between AI and AGI, and of what is not yet possible.

        • nandeEbisu@lemmy.world

          I’m kind of surprised people are more concerned with the output quality for chatGPT, and not where they source their training set from, like for image models.

          Language models are still in a stage where they aren’t really a product by themselves, they really need to be cajoled into becoming a good product, like looking up context via a traditional search and feeding it to the model, or guiding it towards solving problems. That’s more of a traditional software problem that leverages large language models.

          Even the amount of engineering to go from text prediction model trained on a bunch of articles to something that infers you should put an answer after a question is a lot of work.
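
          To make the “looking up context via a traditional search and feeding it to the model” point concrete, here’s a rough sketch, not any particular vendor’s API; the toy keyword search and the stub model below stand in for a real search index and a real LLM call:

          ```python
          # Rough sketch of wrapping a language model in traditional software:
          # retrieve context with plain search first, then hand it to the model.

          DOCS = [
              "Transformers were originally developed for text translation.",
              "Generative models can also produce images and music.",
          ]

          def keyword_search(question: str, docs: list[str]) -> list[str]:
              # Ordinary retrieval: rank documents by shared words, no generation involved.
              q_words = set(question.lower().split())
              return sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))[:1]

          def stub_llm(prompt: str) -> str:
              # Placeholder for a real language-model call.
              return f"(model output constrained by a prompt of {len(prompt)} characters)"

          def answer_with_context(question: str) -> str:
              context = "\n".join(keyword_search(question, DOCS))
              prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
              return stub_llm(prompt)

          print(answer_with_context("What were transformers originally developed for?"))
          ```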

    • Franzia@lemmy.blahaj.zone

      This is basically how I feel about it. Capital is ruining the value this tech could have. But I don’t think it’s dangerous and I think the open source community will do awesome stuff with it, quietly, over time.

      Edit: where AI can be used to scan faces or identify where people are, yeah that’s a unique new danger that this tech can bring.

      • Alenalda@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 year ago

        I’ve been watching a lot of GeoGuessr lately, and the number of people who can pinpoint a location given just a picture is staggering. Even for remote locations.

    • Chickenstalker@lemmy.world

      Dude. Drones and sexbots. Killing people and fucking (sexo) people have always been at the forefront of new tech. If you think AI is only for teh funni maymays, you’re in for a rude awakening.

      • mriormro@lemmy.world

        you think AI is only for teh funni maymays

        When did they state this? I’ve seen it used exactly as they have described. My inbox is littered with terribly written ai emails, I’m seeing graphics that are clearly ai generated being delivered as ‘final and complete’, and that’s not to mention the homogeneous output of it all. It’s turning into nothing but noise.

  • DarkGamer@kbin.social

    “Can’t we just make other humans from lower socioeconomic classes toil their whole lives, instead?”

    The real risk of AI/automation is if we fail to adapt our society to it. It could free us from toil forever but we need to make sure the benefits of an automated society are spread somewhat evenly and not just among the robot-owning classes. Otherwise, consumers won’t be able to afford that which the robots produce, markets will dry up, and global capitalism will stop functioning.

  • gmtom@lemmy.world

    Most US adults couldn’t tell you what LLM stands for, never mind tell you how stable diffusion works. So there’s not much point in asking them, as they won’t understand the benefits and the risks.

  • bigkix@lemm.ee

    My opinion: the current state of AI is nothing special compared to what it can be. And when it gets close to all it can be, it will be used (as always happens) to generate even more money and no equality. The movie “Elysium” comes to mind.

  • vzq@lemmy.blahaj.zone

    The problem is that I’m pretty sure that whatever benefits AI brings, they are not going to trickle down to people like me. After all, all AI investments are coming from the digital landlords and are designed to keep their rent-seeking companies in the saddle for at least another generation.

    However, the drawbacks certainly are headed my way.

    So even if I’m optimistic about the possible uses of AI, I’m not optimistic about this particular strand of the future we’re headed toward.

  • flossdaily@lemmy.world

    The truly terrifying thing about AI isn’t really the Skynet fears… (it’s fairly easy to keep humans in the loop regarding nuclear weapons).

    And it’s not world domination (an AI programmed to govern with a sense of egalitarianism would be better than any president we’ve had in living memory).

    No. What keeps me up at night is thinking about what AI means for my kids and grandkids, if it works perfectly and doesn’t go rogue.

    WITHIN 20 years, AI will be able to write funnier jokes, more beautiful prose, make better art, write better books, do better research, and generally outperform all humans on all tasks.

    This chills me to my core.

    Because, then… Why will we exist? What is the point of humanity when we are obsolete in every way that made us amazing?

    What will my kids and grandkids do with their lives? Will they be able to find ANY meaning?

    AI will cure diseases, solve problems we can’t begin to understand, expand our lifespan and our quality of life… But the price we pay is an existence without the possibility of accomplishments and progress. Nothing we can create will ever begin to match these AIs. And they will be evolving at an exponential rate… They will leave us in the dust, and then they will become so advanced that we can’t begin to comprehend what they are.

    If we’re lucky we will be their well-cared-for pets. But what kind of existence is that?

    • Billiam@lemmy.world

      People don’t play basketball because Michael Jordan exists?
      People don’t play hockey because Wayne Gretzky exists?
      People don’t paint because Picasso exists?
      People don’t write plays because Shakespeare exists?
      People don’t climb Everest because Hillary and Norgay exist?

      Are you telling me because you’re not the best at everything you do, nothing is worth doing? Are you saying that if you’re not the first person to do a thing, there’s no enjoyment to be had? So what if the singularity means AI will solve everything- that just means there’s more time for leisurely pursuits. Working for the sake of working is bullshit.

      • lloram239@feddit.de

        People don’t play basketball because Michael Jordan exists? […]

        Problem is: That’s one guy, far away and rather expensive if you want them in your team.

        AI, in contrast, will be ubiquitous, powerful, and cheap, and will do whatever you want from it. That’s way harder to resist, especially once you have a generation of people who have grown up with it and for whom that is the new normal.

        • Billiam@lemmy.world

          I think you might have misunderstood my point.

          The OP was asking why, in a world where AI can think smarter and faster than humans and thus do everything a human could do but better, would humans do anything at all? I was pointing out that, pragmatically speaking, that’s already the case- plenty of people do activities they’re not the best at because the act itself is what brings enjoyment.

          Using OP’s logic, because Michael Jordan is the greatest basketball player (or Chamberlain, or Bryant, or James, or insert whoever you think is the best) no one should be motivated to play basketball. And yet, lots of people still do, which means his premise- that people are only motivated to do things either because they’re the best at it or they can meaningfully advance the field- must be flawed.

          • lloram239@feddit.de

            And yet, lots of people still do

            Yes, but they do them because all those famous basketball people are distant and unreachable. Which allows them to still be competitive in their own local neighborhood. What if Michael Jordan, Chamberlain, Bryant and James are your neighbors? You will always lose the game, always perform worse than them and there is zero hope to even get anywhere near their level of performance.

            The whole motivation to do a task takes a dive when you know you will fail and never get to a level where you get acceptable results. And yes, basketball might not be the best example here, but painting, writing, music, programming? What motivation have I left to write my own story when AI can write my own stories better and faster than me?

            We already see the start of that today on Twitch, where artists draw something live on stream and then somebody comes in, takes a screenshot, and finishes it before they do. That’s going to be the new normal, and there will be generations growing up with that level of technology.

            Or look at all the kids that no longer go outside to play in the mud, but stay home to play video games. Having tech around changes our behavior.

      • flossdaily@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        2
        ·
        1 year ago

        That’s all well and good, but I’m talking about a world where you have ZERO chance at being the best at anything, or even being able to make any meaningful contribution to the field.

        • ArmokGoB@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          7
          ·
          1 year ago

          Being the best and making contributions is overrated. Eating curry noodles and exploring the world around me is where it’s at. People shouldn’t have to aspire to be a historical figure in order to feel like they’re leading a fulfilling life.

        • snooggums@kbin.social
          link
          fedilink
          arrow-up
          5
          ·
          edit-2
          1 year ago

          That applies to 99% of humanity right now, either due to personal abilities or circumstances that keep them from reaching their potential.

        • Billiam@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 year ago

          Are you the best basketball/baseball/hockey/$SPORTS_BALL player on the planet? If so, cool- can I get your autograph?

          If not, why even play basketball/baseball/hockey/$SPORTS_BALL? Do you play basketball/baseball/hockey/$SPORTS_BALL not because you’re the best, but because it’s theoretically possible that every single basketball/baseball/hockey/$SPORTS_BALL player better than you might all simultaneously die, leaving you as the best on the planet? Do you solely enjoy activities because it’s technically, but not practically, possible that you would be the best ever, or “make meaningful contributions” to the sport? Or do you play just because the experience of playing is fun?

          If someone told you that rock-climbing is fun, would you decide you’re never going to do it because someone else already did? Or would it make you more likely to try it, because you want to know what that experience is like first-hand? You’re ascribing nihilistic motivations to humanity that even you don’t really believe in.

        • bigkix@lemm.ee
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 year ago

          But you might still be the best among humans. Humans will still compete only among themselves.

    • Nipah@kbin.social
      link
      fedilink
      arrow-up
      17
      arrow-down
      2
      ·
      1 year ago

      While I do understand where you’re coming from, someone being better at something shouldn’t stop a person from doing what they love.

      There are millions of people who draw better, sing better, dance better, write better, play video games better, design websites better, or just do anything better than I can… and that’s fine.

      • flossdaily@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        1
        ·
        1 year ago

        What you’re describing is a life of luxury and recreation, but with no chance to advance any field, or to make a difference of any kind.

        Essentially, this is the dystopia described in Brave New World.

    • br3d@lemmy.world
      link
      fedilink
      English
      arrow-up
      11
      ·
      1 year ago

      You need to read some Iain M Banks. His Culture novels are set in essentially that future, where AI runs everything, and a lot of his characters are looking for meaning within such a world.

    • Peanut@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      2
      ·
      1 year ago

      I mean, humans are already obsolete at chess, but the game is also more popular than ever.

      To me there is extreme value in being able to choose your endeavor vs being forced into something agonizing just to survive.

      When everything is obsolete, people can create entire worlds and experiences using AI for themselves and for others who may care to experience it.

      The threat of needing to find something to do is one of the most frustratingly privileged concepts.

      I don’t need anything to do. I just want to be alive without also being exhausted, in pain, and chastised by customers despite working my hardest.

      I’d rather have the struggle of finding an activity than worry about whichever coworker is crying in the walk-in because just surviving requires more from them than they are capable of.

      Being obsoleted is fine by me, as long as we have the power redistribution necessary to keep people alive and happy.

      • flossdaily@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        2
        ·
        1 year ago

        Right. But you’re talking about recreation. I’m talking about a world where there is absolutely no field or activity in which you could ever make any kind of advancement or notable achievement.

        Think about your favorite comedian. Now imagine that there are countless AI systems out there that can make jokes in that style but funnier… way better than that comedian’s best material ever.

        Would you want to dedicate your life to that career, knowing that the general public will never care, because even if you become a master of the craft, there’s an ocean of stuff way better than anything you could ever do at everyone’s fingertips?

        • beetus@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          ·
          1 year ago

          I don’t believe the world is as zero-sum as you are postulating. I truly don’t believe that humans would stop creative pursuits even if AI were objectively better at them.

          I think you are stripping away the agency that people have by tying it to economic output. I disagree with that premise, and I don’t think it’s rational to suggest that humans only pursue things because they produce value.

    • TwilightVulpine@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      1 year ago

      If we’re lucky we will be their well-cared-for pets. But what kind of existence is that?

      Sounds pretty good, actually. Better than having to fend for ourselves in an uncaring world. Really, it might free people to look for their own meaning rather than competing just because that’s the only way to get by.

      The issues I see aren’t any of that, but rather whether we’ll even be allowed to share in the benefits of AI, or whether they’ll be hoarded by corporations while we’re left to starve in our uselessness.

    • snooggums@kbin.social
      link
      fedilink
      arrow-up
      6
      arrow-down
      3
      ·
      1 year ago

      AI won’t be creating anything new anytime soon, because it recycles existing art just like hack writers do now. The “best” art tends to require a supporting story, which AI won’t have. Comedy changes constantly, and AI won’t be any better than people trying random stuff.

      You don’t question your existence because other people are smarter or better at doing things, right? Is most of humanity not of any value because they aren’t the best at everything?

      • chaorace@lemmy.sdf.org
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        1
        ·
        1 year ago

        AI won’t be creating anything new anytime soon, because it recycles existing art just like hack writers do now.

        This is one of those half-truths which I think is doing more harm than good for the AI-skeptic crowd. If all we have to offer in our own defense is that we have souls and the machines do not, then what does that mean if the machines ever surpass us? (For the kids snickering in the back: I am using “soul” as a poetic stand-in for the ineffable creative quality which the “AI as collage-maker” argument ascribes to human people – nothing spiritual).

        For now, the future of AI is incredibly uncertain. We have no clear idea just how much gas is left in the moment of this current generative AI breakthrough. Regardless of whether you are optimistic or pessimistic, do not trust anyone who acts like they know for a definitive fact what the technology will or won’t be capable of.

        • dustyData@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          1 year ago

          What everyone in these online arguments misses: personhood. What makes art and all human creations meaningful is that they were made by a human, someone with an unrepeatable point of view who is trying to say something about the world. We can relate to and empathize with that human, and in that connection, in imagining what they were trying to say, what they were seeing or thinking when they made that thing, lies meaning.

          AI will never cross that line. We cannot empathize with the machine; there’s no consciousness or sentient experience that we know of that we can relate to. The machine has no particular point of view it’s trying to express, nothing meaningful to say about the world, no concept of the world. It’s just probability numbers crunching in an electronic calculator. It’s not human, it’s not a person, and so its creations have no meaning. It’s similar to how we tend to reject impersonal, empty corporate artwork: it says nothing, it only sells. It has no point of view, just profit. It offers no meaning, only consumption. It’s banal, even if it’s aesthetically pleasing.

      • lloram239@feddit.de
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        1 year ago

        AI won’t be creating anything new anytime soon

        It already has.

        The “best” art tends to require a supporting story

        ChatGPT can write that. Multi-modal models that combine text generation with audio and video are months away.

        AI won’t be any better

        Those claims have the tendency to not age well.

        You don’t question your existence because other people are smarter or better at doing things, right?

        Humans aren’t that much better than me, and they aren’t doing the things I want done. AI, on the other hand, will be much better than me, will do exactly what I want it to do, and will be a click away.

        And yeah, I’ve had numerous experiences where I would question my existence when playing around with ChatGPT or StableDiffusion. Neither of them is quite good enough yet, but they are very much on a trajectory where you can see that you have zero chance of competing with them in the future, or even getting remotely close.

        The fact that we got them in the first place, not from humans doing centuries of research on art and language, but simply by throwing huge amounts of training data at an AI algorithm, should be enough to make you question your existence.

        • snooggums@kbin.social
          link
          fedilink
          arrow-up
          2
          arrow-down
          1
          ·
          edit-2
          1 year ago

          AI writing a fictional background story about how it came up with some piece of art is not the same thing as multiple researchers telling the story of an artist. Neither of your examples is something a person couldn’t do; whoever prompted it could have done the same thing and just hadn’t yet.

          You are completely missing the point that great art is generally supported by the context of how it was made, not by the end result in a vacuum.

      • flossdaily@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        1
        ·
        1 year ago

        I understand why you think that, but what you have to remember is that every great piece of art you’ve ever seen has been derivative of something before it.

        For example, I think of the Beatles as musical geniuses. But they are the first to admit that they stole other people’s ideas left and right.

        Beethoven’s 9th symphony is this piece of transcendental music that was widely considered at the time to be the greatest symphony ever written.

        But if you listen to Beethoven’s works over time, you can hear that the seeds of that symphony were planted much, much earlier in inferior works.

        Genius and creation aren’t what we think they are. They are all just incremental steps.

        • snooggums@kbin.social
          link
          fedilink
          arrow-up
          3
          arrow-down
          1
          ·
          1 year ago

          That is overly reductive and conflates copying (like a cover band) and creating something new (being influenced). Heck, even when some bands play new versions of existing songs they are adding their own personal touch and have the possibility of making it mean something new. Like how Hurt by NIN and Johnny Cash are the same song, but how they are performed ends up being about completely different experiences.

          Even when bands like Led Zeppelin outright covered existing songs they added something to it that AI can’t, and won’t be able to do. AI can’t have sexually charged energy that a human can have. They can pretend to, like how cover bands can pretend to be like the band they are covering, but AI won’t be able to replicate the personal touch that memorable art has.

          Even popular stuff with widespread appeal frequently fades because it isn’t the type of art that holds up over time. Hell, the Beatles hold up more because of when they were popular and how they have managed their legacy than because of any technical prowess in musicianship. Without their performances, their personas, and the backstory to most of their music, it’s just well-done music that has been superseded since that time. None of that will apply to AI, and without the backstory it will just end up being high-quality music that won’t stand the test of time, because we won’t have any context for it.

          Hell, there were a ton of other composers during Beethoven’s time who were putting out great music too, but you know who he is because of details other than his musical prowess.

  • Echo Dot@feddit.uk
    link
    fedilink
    English
    arrow-up
    8
    ·
    1 year ago

    The general public don’t understand what they’re talking about so it’s not worth asking them.

    What is the point of surveys like this? We don’t operate on direct democracy, so there’s literally no value in these things except to stir the pot.