• remotelove@lemmy.ca · 5 months ago

        It’s been around for a while. It’s the fluff and the parlor tricks that need to die. AI has never been magic, and it’s still a long way from being actually intelligent.

        • frog 🐸@beehaw.org · 5 months ago

          The other thing that needs to die is hoovering up all data to train AIs without the consent of, or compensation to, the owners of that data. Most of the more frivolous uses of AI would disappear at that point, because they would be financially non-viable.

            • frog 🐸@beehaw.org · 5 months ago

              I remember reading that a little while back. I definitely agree that the solution isn’t extending copyright, but extending labour laws on a sector-wide basis, because this is the ultimate problem with AI: the economic benefits go to a small handful, while everybody else loses out through increased financial and employment insecurity.

              So the question that comes to mind is exactly how, on a practical level, it would work to make sure that when a company scrapes data, trains an AI, and then makes billions of dollars, the thousands or millions of people who created the data all get a cut after the fact. Particularly in the creative sector, a lot of people are freelancers who don’t have a specific employer they can go after. From a purely practical perspective, paying artists before the data is used ensures all those freelancers get paid. Waiting until the company makes a profit, taxing it out of them, and then distributing it to artists doesn’t seem practical to me.

              • Even_Adder@lemmy.dbzer0.com · 5 months ago

                The point is that it’s not an activity you can force someone to pay for. Everyone who can run models on their own can benefit, and that group will expand over time as research makes it feasible on more devices. But that can never come to pass if we destroy the rights that allow us to make observations and analyze data.

                counting words and measuring pixels are not activities that you should need permission to perform, with or without a computer, even if the person whose words or pixels you’re counting doesn’t want you to. You should be able to look as hard as you want at the pixels in Kate Middleton’s family photos, or track the rise and fall of the Oxford comma, and you shouldn’t need anyone’s permission to do so.

                Creating an individual bargainable copyright over training will not improve the material conditions of artists’ lives – all it will do is change the relative shares of the value we create, shifting some of that value from tech companies that hate us and want us to starve to entertainment companies that hate us and want us to starve.

                • frog 🐸@beehaw.org · 5 months ago

                  Creating same-y pieces with AI will not improve the material conditions of artists’ lives either. All that does is drag everyone down in a race to the bottom over who can churn out the most dreck the fastest. “If we advance the technology enough, everybody can have it on their device and make as much AI-generated crap as they like” does not secure stable futures for artists.

      • DdCno1@beehaw.org · 5 months ago

        It could be regulated into oblivion, to the point that any commercial use of it (and even non-commercial publication of AI-generated material) becomes a massive legal liability, despite the fact that AI tools like Stable Diffusion cannot be taken away. It’s not entirely unlikely that some countries will try this in the future, especially places with strong privacy and IP laws as well as equally strong worker protections. Germany and France come to mind; together they could push the EU to come down hard on large AI services in particular. This could make the recently adopted EU AI Act look harmless by comparison.

  • Peter Bronez@hachyderm.io · 5 months ago

    @along_the_road

    “These were mostly family photos uploaded to personal and parenting blogs […] as well as stills from YouTube videos”

    So… people posted photos of their kids on public websites, common crawl scraped them, LAION-5B cleaned it up for training, and now there are models. This doesn’t seem evil to me… digital commons working as intended.
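    To make that pipeline concrete, here is a minimal sketch of checking whether a page was ever captured by Common Crawl, via its public CDX index API. The crawl ID and the example URL below are placeholders I picked for illustration, not anything taken from the article.

    ```python
    import json
    import requests

    # Placeholder crawl ID; the current list of crawls is published at
    # https://index.commoncrawl.org/
    CDX_INDEX = "https://index.commoncrawl.org/CC-MAIN-2024-10-index"

    def cc_captures(url_pattern: str) -> list[dict]:
        """Return Common Crawl capture records matching a URL pattern."""
        resp = requests.get(
            CDX_INDEX,
            params={"url": url_pattern, "output": "json"},
            timeout=30,
        )
        if resp.status_code == 404:
            return []  # the index answers 404 when there are no captures
        resp.raise_for_status()
        # The index returns one JSON object per line.
        return [json.loads(line) for line in resp.text.splitlines()]

    # Hypothetical example: see when (and whether) a parenting blog was crawled.
    for capture in cc_captures("example-parenting-blog.com/photos/*"):
        print(capture["timestamp"], capture["url"], capture.get("mime"))
    ```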

    If anyone is surprised, the fault lies with the UX around “private URL” sharing, not with devs using Common Crawl.

    #commoncrawl #AI #laiondatabase

    • wagoner@infosec.pub · 5 months ago

      Doesn’t “digital commons” mean common ownership? Family photos on a personal blog are inherently owned by the photographer, so they’re surely not commonly owned. I see this as problematic.

    • Peter Bronez@hachyderm.io · 5 months ago

      @along_the_road what’s the alternative scenario here?

      You could push to remove some public information from common crawl. How do you identify what public data is _unintentionally_ public?

      Assume we solve that problem. Now the open datasets, and the models developed on them, are weaker. They’re specifically weaker at identifying children as things that exist in the world. Do we want that? What if it reduces the performance of cars’ emergency braking systems? CSAM filters? Family photo organization?

      • kent_eh@lemmy.ca · 5 months ago

        what’s the alternative scenario here?

        Parents could not upload pictures of their kids everywhere in a vain attempt to attract attention to themselves?

        That would be good.

        • Peter Bronez@hachyderm.io · 5 months ago

          @kent_eh exactly.

          The alternative is “if you want your content to be private, share it privately.”

          If you transmit your content to anyone who sends you a GET request, you lose control of that content. The recipient has the bits.

          It would be nice to extend the core technology to better reflect your intent. Perhaps embedding license metadata in the images, the way LICENSE.txt travels with source code. That’s still quite weak, as we saw with Do Not Track.
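          As a rough illustration of that idea (not an established standard), one could stamp a licence or usage statement into an image’s EXIF Copyright field before publishing it; scrapers remain free to ignore it, which is the Do Not Track problem again. A sketch using Pillow, with made-up file names and licence text:

          ```python
          from PIL import Image

          SRC = "family_photo.jpg"          # hypothetical input file
          DST = "family_photo_tagged.jpg"   # hypothetical output file

          img = Image.open(SRC)
          exif = img.getexif()

          # 0x8298 is the standard EXIF/TIFF "Copyright" tag.
          exif[0x8298] = "Copyright (c) 2024 Jane Doe. Personal use only; no ML training."

          # Note: re-saving a JPEG with Pillow re-encodes the pixels; tools such as
          # exiftool or piexif can rewrite metadata without touching the image data.
          img.save(DST, exif=exif.tobytes())
          ```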

  • AutoTL;DR@lemmings.world (bot) · 5 months ago

    🤖 I’m a bot that provides automatic summaries for articles:

    Photos of Brazilian kids—sometimes spanning their entire childhood—have been used without their consent to power AI tools, including popular image generators like Stable Diffusion, Human Rights Watch (HRW) warned on Monday.

    The dataset does not contain the actual photos but includes image-text pairs derived from 5.85 billion images and captions posted online since 2008.

    HRW’s report warned that the removed links are “likely to be a significant undercount of the total amount of children’s personal data that exists in LAION-5B.”
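    To make the “image-text pairs” point concrete: LAION-style records are rows of metadata (a source URL plus its caption), not the photos themselves, so removal means filtering those rows. A minimal sketch; the field names follow the commonly published LAION-5B schema but should be treated as an assumption here, and the URLs and caption are invented:

    ```python
    # One LAION-style record: a link and a caption, not the image itself.
    record = {
        "URL": "https://example-blog.com.br/uploads/birthday.jpg",  # invented
        "TEXT": "a child blowing out birthday candles",             # invented
        "WIDTH": 1024,
        "HEIGHT": 768,
    }

    # Links flagged for removal (e.g. following a report like HRW's).
    removed_links = {
        "https://example-blog.com.br/uploads/birthday.jpg",
    }

    def keep(row: dict) -> bool:
        """Drop any row whose source URL has been flagged for removal."""
        return row["URL"] not in removed_links

    dataset = [record]
    cleaned = [row for row in dataset if keep(row)]
    print(len(dataset), "->", len(cleaned))  # 1 -> 0: the flagged row is removed
    ```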

    Han told Ars that “Common Crawl should stop scraping children’s personal data, given the privacy risks involved and the potential for new forms of misuse.”

    There is less risk that the Brazilian kids’ photos are currently powering AI tools since “all publicly available versions of LAION-5B were taken down” in December, Tyler told Ars.

    That decision came out of an “abundance of caution” after a Stanford University report “found links in the dataset pointing to illegal content on the public web,” Tyler said, including 3,226 suspected instances of child sexual abuse material.


    Saved 78% of original text.