Skyrim VAs are speaking out about the spread of pornographic AI mods.

  • Correct me if I’m wrong, but I don’t believe voices can be copyrighted. After all, if a human can replicate someone else’s voice, they get booked as professional impersonators rather than sued into oblivion.

    The difference here is that the voice replication happens through AI now. Would we see the same outrage if the voices in these mods were just people who sounded like the original voice actors?

    Copyright law needs to be fortified, or a lot of voice actors are about to get screwed over big time. AI voice replication by modders is only the beginning; once big companies find the output acceptable, these people may very well lose their jobs.

    • Rossel@sh.itjust.works · 10 points · 1 year ago

      The legal grounds are that the AI is trained using voice lines that can indeed be copyrighted material. Not the voice itself, but the delivered lines.

      • Skull giver@popplesburger.hilciferous.nl · 7 points · 1 year ago

        That’s a decent theoretical legal basis, but the voice lines are property of the game company rather than the voice actors.

        If this interpretation of copyright law on AI models turns out to be the outcome of the two (three?) big AI lawsuits related to Stable Diffusion, most AI companies will be completely fucked. Everything from Stable Diffusion to ChatGPT 4 will instantly be in trouble.

      • FaceDeer@kbin.social · 4 points · 1 year ago

        The problem with that approach is that the resulting AI doesn’t contain any identifiable “copies” of the material that was used to train it. No copying, no copyright. The AI model is not a legally recognizable derivative work.

        If the future output of the model that happens to sound very similar to the original voice actor counts as a copyright violation, then human sound-alikes and impersonators would also be in violation and things become a huge mess.

        • ChemicalRascal@kbin.social · 2 points · 1 year ago

          The problem with that approach is that the resulting AI doesn’t contain any identifiable “copies” of the material that was used to train it. No copying, no copyright. The AI model is not a legally recognizable derivative work.

          That’s a HUGE assumption you’ve made, and certainly not something that has been tested in court, let alone found to be true.

          In the context of existing legal precedent, there’s an argument to be made that the resulting model is itself a derivative work of the copyright-protected works, even if it does not literally contain an identifiable copy, as it is a derivative of the work in the common meaning of the term.

          If the future output of the model that happens to sound very similar to the original voice actor counts as a copyright violation, then human sound-alikes and impersonators would also be in violation and things become a huge mess.

          A key distinction here is that a human brain is not a work, and in that sense, a human brain learning things is not a derivative work.

          • FaceDeer@kbin.social · 1 point · 1 year ago

            That’s a HUGE assumption you’ve made

            No, I know how these neural nets are trained and how they’re structured. They really don’t contain any identifiable copies of the material used to train them.

            and certainly not something that has been tested in court

            Sure, this is brand new tech. It takes time for the court cases to churn their way through the system. If that’s going to be the ultimate arbiter, though, then what’s to discuss in the meantime?

            • IncognitoErgoSum@kbin.social · 0 points · 1 year ago

              Also, neural network weights are just a bunch of numbers, and I’m pretty sure data can’t be copyrighted. And yes, images and sounds and video stored on a computer are numbers too, but those can be played back or viewed by a human in a meaningful way, and as such represent a work.
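
              A back-of-the-envelope sketch of the “just a bunch of numbers” point (every figure below is a made-up, illustrative assumption, not a measurement of any real voice model or mod):

```python
# Hypothetical sizes, purely illustrative: a trained voice model's weights
# are a fixed block of floating-point numbers, far smaller than the audio
# corpus it was trained on, so the weights cannot be storing verbatim
# copies of every delivered line.
training_audio_hours = 20       # assumed corpus for one game character
bytes_per_second = 44_100 * 2   # 16-bit mono PCM at 44.1 kHz
corpus_bytes = training_audio_hours * 3600 * bytes_per_second

model_params = 30_000_000       # assumed size of a small TTS model
model_bytes = model_params * 4  # parameters stored as 32-bit floats

print(f"training audio: {corpus_bytes / 1e9:.2f} GB")
print(f"model weights:  {model_bytes / 1e9:.2f} GB")
print(f"the weights are ~{corpus_bytes // model_bytes}x smaller than the audio")
```

              The size gap alone doesn’t settle the legal question (a compressed copy is still a copy), but it is why nobody can point at a span of weights and identify a particular recorded line inside it.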

    • Drusas@kbin.social · 1 point · 1 year ago

      Humans can’t entirely replicate one another’s voices. I recognize voices far better than faces, and I know I’m not the only one out there who does so. There are a lot of good imitators out there, but they can’t perfectly replicate another person’s voice.

    • fearout@kbin.social · 3 points · 1 year ago

      Who’s Joan? My Netflix only has a Madison_rogue is awful episode.

      But in all seriousness, the tech is here and it’s here to stay. Legal stuff can curb it for a bit, but you know the internet. It’s only going to grow.

      There should be some other way to deal with it besides “Forbid everything now!”

      I’ve seen several future copyright systems discussed that might provide a better way of doing things going forward. Like one where all copyright is basically waived, but royalties from any derivative work are collected and distributed to all participants. Want to make a Star Wars fan movie? You’re free to do so (and everyone is), but every actor you use and every writer whose part of the lore you’re using gets a small cut.

      Kinda like how sampling in music currently works. Could be a better system than what we currently have, with corporations owning hundreds of years of usage rights for one story or likeness.

      • TheDankHold@kbin.social · 3 points · 1 year ago

        Everyone has takes like this, as if we can’t just draw the line at making it illegal for a computer or algorithm to do this.

          • TheDankHold@kbin.social · 2 points · 1 year ago

            It’s not though. That’s you naively ignoring the aspects that are different. Computer generated imitation is easier to create, can be created at a scale far eclipsing human action, and can be finely tuned to make it harder to discern.

            You can look up impressionists and you’ll find it’s a rather small club when looking for the true greats. Computers remove this barrier and allow any asshole with an internet connection to create a video of you screaming racial epithets if your voice is easy enough to access.

            The vast difference in scale can’t be ignored.

            • FaceDeer@kbin.social · 2 points · 1 year ago

              Talking up the capabilities of AI voice acting is not really helping the case against it. If it’s really so good, and laws are enacted that forbid mixing human and AI voice acting, then I expect the straightforward optimal solution would be to entirely eliminate the human voice actors going forward.

              • TheDankHold@kbin.social · 1 point · 1 year ago

                You write laws for the future. Unless you think AI-generated content has plateaued. Which is, again, naive. Just because social media wasn’t popular at first doesn’t mean we should’ve waited on passing data privacy legislation like we have. It’s good to identify potential issues and attempt to mitigate them early, so we don’t end up in situations like our current climate crisis.

                • FaceDeer@kbin.social · 2 points · 1 year ago

                  I don’t think it’s plateaued, I think it’s going to get significantly better from here onward.

                  I’m not sure what laws you’re proposing at this point. Are you suggesting that AI should be forbidden from “mimicking” a human voice actor? That’s what I’m suggesting will lead even more quickly to AI-only projects that get rid of the human voice actors entirely, since having a human voice actor under laws like that would end up as a huge hindrance.

  • TheChurn@kbin.social · 4 points · 1 year ago

    The porn bit gets headlines, but it isn’t the core of the issue.

    All of these models retain a representation of the original training data in their parameters, which makes training a violation of copyright unless it was explicitly authorized. The law just hasn’t caught up yet, since it is easy to obfuscate this fact with model mumbo-jumbo in between feeding in voices and generating arbitrary output.

    The big AI players are betting that they will be able to entrench themselves with a massive data advantage before regulation locks down training and effectively kills any future competition. They will already have their models, and the worst case at that point is paying some royalties to people whose data was used in training.

    • LoafyLemon@kbin.social · 2 points · 1 year ago (edited)

      I’d like to know how you expect governments or even private institutions to enforce this, since most countries won’t care about foreign laws.

      • Ragnell@kbin.social · 0 points · 1 year ago

        They can forbid companies from using the AI to do business in their areas, like the EU is doing with privacy laws. Google not being able to use its chatbot search in the US would be a big deal.

        • LoafyLemon@kbin.social · 1 point · 1 year ago

          Sounds to me like you’d have to prove someone used AI in their work first, which makes it difficult to realistically enforce.

          • Ragnell@kbin.social · 0 points · 1 year ago

            Not hard when they’re advertising it right now. And if they do try to keep it secret all the government will have to do is subpoena a look at the backend.

            But honestly, since when do we just not have laws because something is hard to prove? It’s hard to prove someone INTENDS to murder someone, but that’s a really important legal distinction. It’s hard to prove someone’s faking a mental illness, but that’s another thing that’s got laws around it. It’s really hard to prove sexual assault, but that still needs to be outlawed too.

            Compared to that stuff? Proving someone used an AI is going to be a piece of cake with all the data that gets collected and the amount of work it would take to REMOVE the AI from a business process before the cops get there.

            • LoafyLemon@kbin.social · 1 point · 1 year ago

              Enforcing a potential AI ban in work environments is unrealistic right now because it’s challenging to prove that AI was actually used for work purposes and then enforce such a ban. Let’s break it down in simple terms.

              Firstly, proving that AI was used for work is not straightforward. Unlike physical objects or traditional software, AI systems often operate behind the scenes, making it difficult to detect their presence or quantify their impact. It’s like trying to catch an invisible culprit without any clear evidence.

              Secondly, even if someone suspects AI involvement, gathering concrete proof can be tricky. AI technologies leave less visible traces compared to conventional tools or processes. It’s akin to solving a mystery where the clues are scattered and cryptic.

              Assuming one manages to establish AI usage, the next hurdle is enforcing the ban effectively. AI systems are often complex and interconnected, making it challenging to untangle their influence from the overall work environment. It’s like trying to remove a specific ingredient from a dish without affecting its overall taste or texture.

              Moreover, AI can sometimes operate subtly or indirectly, making it difficult to draw clear boundaries for enforcement. It’s like dealing with a sneaky rule-breaker who knows how to skirt around the regulations, all you have to do is ask.

              Considering these challenges, implementing a ban on AI in work environments becomes an uphill battle. It’s not as simple as flipping a switch or putting up a sign. Instead, it requires navigating through a maze of complexity and uncertainty, which is no easy task.