I’ve recently noticed this opinion seems unpopular, at least on Lemmy.

There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people’s works (well, sometimes they do, unintentionally, and safeguards to prevent this are usually built in). The training data is generally much, much larger than the model itself, so it is generally not possible for a model to reconstruct arbitrary specific works. They are not creating derivative works in the legal sense, because they do not copy and modify the original works; they generate “new” content based on probabilities.
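The size argument can be made concrete with back-of-the-envelope arithmetic. The numbers below are illustrative assumptions, not measurements of any particular model:

```python
# Illustrative (assumed) figures: a large LLM trained on ~10 trillion
# tokens, storing ~100 billion 16-bit parameters.
training_tokens = 10e12   # tokens seen during training (assumption)
params = 100e9            # model parameters (assumption)
bits_per_param = 16       # fp16/bf16 weights

model_bits = params * bits_per_param
bits_per_token = model_bits / training_tokens
print(f"{bits_per_token:.2f} bits of model capacity per training token")
# → 0.16 bits of model capacity per training token
```

A fraction of a bit per token is far less than what is needed to store text verbatim, so under these assumptions the model can only retain statistical regularities, plus some heavily over-represented snippets, rather than the corpus itself.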

My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai

I understand the hate for companies using data you would reasonably expect to be private. I understand the hate for purposely over-fitting a model on someone’s data to reproduce their “likeness.” I understand the hate for AI-generated shit (because it is shit). I really don’t understand where all this hate for using public data to build a “statistical” model that “learns” general patterns is coming from.

I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don’t think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with stuff like background-removers, better autocomplete, etc), which might eliminate some jobs, but that’s really just a problem with capitalism, and productivity increases are generally considered good.

  • wewbull@feddit.uk · 3 months ago

You are equating training an LLM with a person learning, but an LLM is not a person. It is not given the same rights and privileges under the law. At best it is a computer program, and you can certainly infringe copyright by writing a program.

    • Specal@lemmy.world · 3 months ago

It’s not “at best it’s a computer program.” It is a computer program: one that computes the probability that its response should be X. The training data could be stolen, but its output isn’t.

    • Hamartiogonic@sopuli.xyz · 3 months ago

An LLM is not a legal entity, nor should it be. However, similar things happen in a human brain and in the network of an LLM, so the same laws could be applicable to some extent. Where do we draw the line? That’s a legal/political issue we haven’t figured out yet, but following these developments is going to be interesting.

      • wewbull@feddit.uk · 3 months ago

        Agreed it hasn’t been settled legally yet.

I also agree that an LLM isn’t and shouldn’t be a legal entity. Therefore an LLM is something that can be owned, sold, and profited from.

It is my opinion that the original author of the works should receive compensation when their work is used to make a profit, i.e. to make the LLM. I’d also say that the original intent of copyright law was to give authors protection from others making money from their work without permission.

Maybe current copyright law isn’t up to the job here, but benefiting off the back of others’ creative works is not socially acceptable in my opinion.

        • Hamartiogonic@sopuli.xyz · 3 months ago

I think of an LLM as a tool, just like a drill or a hammer. If you buy or rent these tools, you pay the tool company. If you use the tools to build something, your client pays you for that work.

          Similarly, OpenAI can charge me for extensive use of ChatGPT. I can use that tool to write a book, but it’s not 100% AI work. I need to spend several hours prompt crafting, structuring, reading and editing the book in order to make something acceptable. I don’t really act as a writer in this workflow, but more like an editor or a publisher. When I publish and sell my book, I’m entitled to some compensation for the time and effort that I put into it. Does that sound fair to you?

          • wewbull@feddit.uk · 3 months ago

Yes, of course you are.

…but do you agree that if you use an AI in that way, you are benefitting from another author’s work? You may even, unknowingly, violate the copyright of the original author. You can’t be held liable for that infringement because you did it unwittingly. OpenAI, or whoever, must bear responsibility for that possible outcome through the use of their tool.

            • Hamartiogonic@sopuli.xyz · 3 months ago

              Yes, it’s true that countless authors contributed to the development of this LLM, but they were not compensated for it in any way. Doesn’t sound fair.

              Can we compare this to some other situation where the legal status has already been determined?

              • wewbull@feddit.uk · 3 months ago

I was thinking about money laundering when I wrote my response, but I’m not sure it’s a good analogy. It still feels to me like constructing a generative model is a form of “copyright washing.”

                Fact is, the law has yet to be written.