OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling’s Harry Potter series: a new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.

  • kava@lemmy.world · 1 year ago

    and it doesn’t have the rights of a person.

    And we have determined that AI-created work cannot be copyrighted, because an AI is not a person. Nobody is trying to claim that AI somehow has the rights of a person.

    But reading a bunch of books and then creating new material using the knowledge gained from those books is not copyright infringement and should not be treated as such. I can take Andy Warhol’s style and create as many advertisements as I want with it. He doesn’t own the style; nobody does.

    Why should that be any different for a company using AI? Makes no sense to me.

    You have been duped into thinking copyright is protecting authors when really copyright primarily exists to protect companies like Disney.

    • warbond@lemmy.world · 1 year ago

      For clarity’s sake, the original intent behind copyright was definitely to protect authors and thereby foster creativity, but corporations like Disney have lobbied very successfully over the years to prevent original works from becoming public domain.

      Meanwhile, in classic fashion, those same companies have taken public domain works and turned them into ludicrously successful IPs!

      I’d argue that this reuse of the public domain is a positive aspect of capitalism, one our governments have unduly suppressed in favor of corporate sponsors (further entrenched by an increasing legal allowance for such sponsorship), and that we should return to a more reasonable timeframe for full exclusivity.

    • assassin_aragorn@lemmy.world · 1 year ago

      Well, copyright certainly isn’t protecting authors if big corporations can use their works without paying for them. That’s the whole point.

      • kava@lemmy.world · 1 year ago

        It is well established in law that you cannot copyright ideas, and that people are allowed to amalgamate pieces of different copyrighted works into something new that isn’t covered by the original copyrights.

        Think of it this way. Tarantino watched a lot of movies before he made Reservoir Dogs. The movie was inspired by Kansas City Confidential. He took a bunch of styles and inspirations from various different movies he had seen, put them together in a new way, and released the movie. This is the way creative work happens. Nothing is from scratch. Everything is built on everything else. The only difference I see is that this process is being done automatically by pattern-finding algorithms.

        • assassin_aragorn@lemmy.world · 1 year ago

          Sure, but then it’s all the more fair for these companies to pay up when training an AI. Schools don’t get to copy entire book transcripts off the Internet for lessons. They can’t pirate documentaries. And in higher education, the student pays tuition to learn the material. They can pirate textbooks, but that alone isn’t enough to learn a field of study.

          If we’re going to use human analogies for AI, then it should be limited in the same ways. The companies have to buy any books or media, or use material that is explicitly in the public domain under copyright law: the kind of material you could post a transcript of online in front of the most litigious lawyer, and nothing would happen.
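
          In a data pipeline, that rule would amount to a gate on provenance metadata. A minimal sketch in Python (the license tags and document structure here are made up for illustration):

          ```python
          # Hypothetical license gate for a training pipeline: only documents
          # tagged as purchased or public-domain make it into the corpus.
          ALLOWED_LICENSES = {"purchased", "public_domain"}

          def license_gate(documents: list[dict]) -> list[dict]:
              return [d for d in documents if d.get("license") in ALLOWED_LICENSES]

          docs = [
              {"title": "Frankenstein", "license": "public_domain"},
              {"title": "Harry Potter", "license": "copyrighted"},
          ]
          print(license_gate(docs))  # only Frankenstein passes the gate
          ```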

          • kava@lemmy.world · 1 year ago

            A couple of things. First, there is no way to prove they are using pirated content. There are web scrapers that crawl the internet and scrape everything: discussions, articles, blog posts, and video transcripts from many people talking about copyrighted content. Because of this, the AI can give you a reasonable analysis of a book without ever having read the book.
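
            A minimal sketch of that kind of scraper (the seed URL, page cap, and same-site rule are made up, and a real crawler would also respect robots.txt, rate limits, and deduplication):

            ```python
            # Sketch of a crawler: fetch pages, keep the visible text,
            # and follow links on the same site. Illustrative only.
            import urllib.parse

            import requests
            from bs4 import BeautifulSoup

            def scrape(seed_url: str, max_pages: int = 100) -> list[str]:
                queue, seen, corpus = [seed_url], set(), []
                while queue and len(seen) < max_pages:
                    url = queue.pop(0)
                    if url in seen:
                        continue
                    seen.add(url)
                    try:
                        html = requests.get(url, timeout=10).text
                    except requests.RequestException:
                        continue
                    soup = BeautifulSoup(html, "html.parser")
                    # Keep only the rendered text; this is the "discussion,
                    # articles, blog posts" layer, not the books themselves.
                    corpus.append(soup.get_text(separator=" ", strip=True))
                    for link in soup.find_all("a", href=True):
                        absolute = urllib.parse.urljoin(url, link["href"])
                        if absolute.startswith(seed_url):  # stay on one site
                            queue.append(absolute)
                return corpus
            ```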

            Everything posted openly online is public; you cannot force someone to pay for seeing your reddit post. Second, I bet they have large databases of all the books in the public domain, which is itself a very large corpus of text.
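
            Building that public-domain pile is close to trivial. A sketch using Project Gutenberg’s plain-text files (the book IDs are just examples: 1342 is Pride and Prejudice, 84 is Frankenstein, 2701 is Moby-Dick):

            ```python
            # Sketch of assembling a public-domain corpus from Project Gutenberg.
            # Assumes Gutenberg's usual plain-text URL layout; IDs are examples.
            import requests

            GUTENBERG_TXT = "https://www.gutenberg.org/cache/epub/{id}/pg{id}.txt"

            def fetch_public_domain_books(book_ids: list[int]) -> dict[int, str]:
                corpus = {}
                for book_id in book_ids:
                    resp = requests.get(GUTENBERG_TXT.format(id=book_id), timeout=30)
                    if resp.ok:  # skip IDs with no plain-text edition
                        corpus[book_id] = resp.text
                return corpus

            books = fetch_public_domain_books([1342, 84, 2701])
            print({k: len(v) for k, v in books.items()})  # characters per book
            ```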

            This alone is probably enough to train their AI. Beyond that, presumably, they could pay for books. Textbooks, fiction, biographies, etc. They could pay for these and pump them into the system.

            If I were them, personally, I would probably just find large torrents with all the books, or write some automated script to pull from libgen. But there’s no real way of proving how they did it, or what content the AI was trained on.

            • assassin_aragorn@lemmy.world · 1 year ago

              I would probably just find large torrents with all the books.

              It would create incredible legal liability for the company. If the authors and publishers caught wind of that, the AI companies would be sued into oblivion. Think about how intense media companies get about piracy when it’s just for pleasure or entertainment: if you’re using it to turn a profit, you’re legally fucked.

              Having no chain of custody for what the AI was trained on sounds like typical cost-cutting, until you realize it means they can’t detect or identify another AI’s output. Their models will quickly turn to garbage.

              • kava@lemmy.world · edited · 1 year ago

                Well, obviously it’s a massive legal liability. However… seemingly legitimate, serious companies with large legal departments have been known to do legally dangerous things before. Apple deliberately slowed down old iPhones with software updates, encouraging people to get new phones, and if I remember correctly had to pay out fines for it. Volkswagen faked its emissions tests, and people went to prison over it. I don’t put it past OpenAI to be doing illegal things for short-term benefit, to their long-term detriment.

                Not saying it’s happening, but I’m saying it’s possible and it’s hard for me or you to prove that it’s happening.

                Having no chain of custody for what the AI was trained on sounds like typical cost-cutting, until you realize it means they can’t detect or identify another AI’s output.

                I’m sure they have good internal controls over what goes into the model. I’m guessing that information is very tightly controlled, for the reasons above. I’m not sure what you mean by another AI’s output, though.

                • assassin_aragorn@lemmy.world · 1 year ago

                  I think they probably have criteria for what’s used to train it, but they don’t keep a list of what material was used. I believe they’ve said in the past they don’t have that information.

                  As for another AI’s output: these models fall apart after a few generations of being trained on AI-generated content. If they have no way of discerning whether content is AI-generated, they’re sitting on a ticking time bomb. At some point the models will degrade heavily in quality because of it. The question, I guess, is what percentage of the training material can be AI-generated before it causes problems.
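
                  You can see the effect in a toy setting (nothing like real LLM training; here the “model” is just a Gaussian re-fitted to its own samples each generation, with some fraction of fresh real data mixed back in):

                  ```python
                  # Toy model-collapse demo: each generation "trains" (fits a
                  # Gaussian) on a mix of synthetic data from the previous model
                  # and fresh real data, then samples from the fit. Training on
                  # purely synthetic data drifts toward a degenerate distribution;
                  # mixing in real data anchors it.
                  import numpy as np

                  rng = np.random.default_rng(0)
                  REAL = rng.normal(0.0, 1.0, size=100_000)  # stand-in for human content

                  def final_spread(synthetic_frac: float,
                                   generations: int = 100, n: int = 100) -> float:
                      data = rng.choice(REAL, size=n)
                      for _ in range(generations):
                          mu, sigma = data.mean(), data.std()            # "train"
                          n_syn = int(n * synthetic_frac)
                          synthetic = rng.normal(mu, sigma, size=n_syn)  # model output
                          fresh = rng.choice(REAL, size=n - n_syn)       # human content
                          data = np.concatenate([synthetic, fresh])
                      return data.std()

                  for frac in (0.0, 0.5, 0.9, 1.0):
                      print(f"{frac:.0%} synthetic -> "
                            f"spread after 100 generations: {final_spread(frac):.2f}")
                  ```

                  In this toy version only the nearly-all-synthetic runs visibly degenerate; real models reportedly degrade more subtly, losing the rare tails of the distribution first.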

                  This does mean, however, that AI-generated material can never become a substantial percentage of all the content out there. Whenever there’s too much of it, the models will fall apart, and they probably won’t recover until that content falls below some threshold again.

                  • kava@lemmy.world · 1 year ago

                    but they don’t keep a list of what material was used. I believe they’ve said in the past they don’t have that information.

                    I will look into this; I feel like that’s quite an oversight. Perhaps it’s easier to just tell the public otherwise because of legal questions like the ones we’re discussing. I would have kept everything in storage so the same data could be used to re-train updated models or what have you.
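
                    The record-keeping itself would be cheap. A sketch of a training-data manifest (the paths and file layout are made up):

                    ```python
                    # Sketch of a training-data manifest: hash every document that
                    # goes into a run, so "was book X in the training set?" has a
                    # checkable answer later. Paths here are hypothetical.
                    import hashlib
                    import json
                    from pathlib import Path

                    def build_manifest(corpus_dir: str,
                                       out_path: str = "training_manifest.json") -> None:
                        manifest = [
                            {
                                "file": str(doc),
                                "sha256": hashlib.sha256(doc.read_bytes()).hexdigest(),
                                "bytes": doc.stat().st_size,
                            }
                            for doc in sorted(Path(corpus_dir).rglob("*.txt"))
                        ]
                        Path(out_path).write_text(json.dumps(manifest, indent=2))

                    build_manifest("corpus/")  # hypothetical corpus directory
                    ```

                    Any later dispute over whether a given book was in the training set then reduces to checking its hash against the stored manifest.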

                    It’s an interesting point you bring up. There will be a sort of dividing line in the corpus of human works: pre-~2023 and post-~2023. All work from before that time will be more or less legitimate, usable as training data. Everything after will be tainted.

                    Honestly, the implications go further than that. For one, I no longer trust that there’s a human behind any comment I see online, especially on topics I feel are likely to be astroturfed, like politics.