The New York Times is suing OpenAI and Microsoft for copyright infringement, claiming the two companies built their AI models by “copying and using millions” of the publication’s articles and now “directly compete” with its content as a result.

As outlined in the lawsuit, the Times alleges OpenAI and Microsoft’s large language models (LLMs), which power ChatGPT and Copilot, “can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style.” This “undermine[s] and damage[s]” the Times’ relationship with readers, the outlet alleges, while also depriving it of “subscription, licensing, advertising, and affiliate revenue.”

The complaint also argues that these AI models “threaten high-quality journalism” by hurting the ability of news outlets to protect and monetize content. “Through Microsoft’s Bing Chat (recently rebranded as “Copilot”) and OpenAI’s ChatGPT, Defendants seek to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment,” the lawsuit states.

The full text of the lawsuit can be found here.

  • CJOtheReal@ani.social · 6 months ago

    Everyone accuses OpenAI of everything. In the end, most of what they do will not be illegal, for many reasons, mainly technical ones. You would need a database of every copyrighted work in existence to check anything against. The computing power required for this would be absurdly high.

    The demands are idiotic and ridiculous.

    And as said, they didn’t “train ChatGPT on a piracy site”; the scraping algorithm put some material from there into the training data. There is no person doing that.

    • HarkMahlberg@kbin.social · 6 months ago

      There is no person doing that.

      “No one’s responsible, the DAO did it. No humans are liable, just this amorphous, sentient carbon cloud.”

      I’ve heard many defenses of AI, some of which I agree with, but “strip mining content off the internet is fine because it’s automated” is easily one of the weakest. It doesn’t pass the sniff test.

      If you write a script that downloads every single image from every single website, no questions asked, and then reupload them to various websites at random, do you suppose the police shouldn’t charge you with (inevitably) possessing and distributing CSAM? “Oh no officer, your true culprit is the Dell in my living room! Arrest that box!”

      Everyone is, on some level, responsible for the things they create.

    • EvilMonkeySlayer@kbin.social · 6 months ago

      And as said, they didn’t “train ChatGPT on a piracy site”; the scraping algorithm put some material from there into the training data. There is no person doing that.

      “Your honour, my program, which I created to slurp up data from the internet using my paid-for internet connection, into my AI model that I own and control, happened to slurp up copyrighted data… I, um, it’s not my fault it slurped up copyrighted data, even though I put no checks in place for it to verify what it was slurping up, or from where.”

      That is the argument you are putting forth.

      Do you think any judge/court of law would view that favourably?