Skyrim VAs are speaking out about the spread of pornographic AI mods.

  • Correct me if I’m wrong, but I don’t believe voices can be copyrighted. After all, if a human can replicate someone else’s voice, they get booked as professional impersonators rather than sued into oblivion.

    The difference here is that the voice replication happens through AI now. Would we see the same outrage if the voices in these mods came from people who just sounded like the original voice actors?

    Copyright law needs to be fortified, or a lot of voice actors are about to get screwed over big time. AI voice replication by modders is only the beginning; once big companies find the output acceptable, these people may very well lose their jobs.

    • Rossel@sh.itjust.works · 1 year ago

      The legal grounds are that the AI is trained using voice lines that can indeed be copyrighted material. Not the voice itself, but the delivered lines.

      • Skull giver@popplesburger.hilciferous.nl · 1 year ago

        That’s a decent theoretical legal basis, but the voice lines are property of the game company rather than the voice actors.

        If this interpretation of copyright law on AI models turns out to be the outcome of the two (three?) big AI lawsuits related to Stable Diffusion, most AI companies will be completely fucked. Everything from Stable Diffusion to GPT-4 will instantly be in trouble.

      • FaceDeer@kbin.social · 1 year ago

        The problem with that approach is that the resulting AI doesn’t contain any identifiable “copies” of the material that was used to train it. No copying, no copyright. The AI model is not a legally recognizable derivative work.

        If future output from the model that happens to sound very similar to the original voice actor counts as a copyright violation, then human sound-alikes and impersonators would also be in violation, and things become a huge mess.

        • ChemicalRascal@kbin.social · 1 year ago

          The problem with that approach is that the resulting AI doesn’t contain any identifiable “copies” of the material that was used to train it. No copying, no copyright. The AI model is not a legally recognizable derivative work.

          That’s a HUGE assumption you’ve made, and certainly not something that has been tested in court, let alone found to be true.

          In the context of existing legal precedent, there’s an argument to be made that the resulting model is itself a derivative work of the copyright-protected works, even if it does not literally contain an identifiable copy, as it is a derivative of the work in the common meaning of the term.

          If future output from the model that happens to sound very similar to the original voice actor counts as a copyright violation, then human sound-alikes and impersonators would also be in violation, and things become a huge mess.

          A key distinction here is that a human brain is not a work, and in that sense, a human brain learning things is not a derivative work.

          • FaceDeer@kbin.social · 1 year ago

            That’s a HUGE assumption you’ve made

            No, I know how these neural nets are trained and how they’re structured. They really don’t contain any identifiable copies of the material used to train them.

            and certainly not something that has been tested in court

            Sure, this is brand new tech. It takes time for the court cases to churn their way through the system. If that’s going to be the ultimate arbiter, though, then what’s to discuss in the meantime?

            • IncognitoErgoSum@kbin.social · 1 year ago

              Also, neural network weights are just a bunch of numbers, and I’m pretty sure data can’t be copyrighted. And yes, images and sounds and video stored on a computer are numbers too, but those can be played back or viewed by a human in a meaningful way, and as such represent a work.

    • Drusas@kbin.social · 1 year ago

      Humans can’t entirely replicate one another’s voices. I recognize voices far better than faces, and I know I’m not the only one out there who does. There are a lot of good imitators out there, but they can’t fully replicate another person’s voice.