Robin Williams’ daughter Zelda says AI recreations of her dad are ‘personally disturbing’: ‘The worst bits of everything this industry is’

  • lloram239@feddit.de · 9 months ago

    under the command of people with human biases

    Humans won’t be in control. The AI will consume and interpret more data than any human ever could. It’ll be like trying to verify that your computer calculates correctly with pen and paper: there is just no hope. People will blindly trust whatever the AI tells them, since they’ll get used to the AI providing superior answers.

    This of course won’t happen all at once; it will happen bit by bit, until AI dominates every process in a company, to the point that the company is effectively run by AI. Maybe you still have a human in there putting their signature on legal documents. But you are not going to outsmart a thing that is 1000x smarter than you.

    • TwilightVulpine@lemmy.world · 9 months ago

      Even the smartest, most perfect computer in the world can give people the most persuasive answers, and people can still say no and pull the plug just because they feel like it.

      It’s no different among humans: the power to influence organizations and society relies entirely on the willingness of people to go along with it.

      Not only is this sci-fi dream skipping several steps, steps where humans in power direct and gauge AI output only as far as it serves their interests rather than some objective, ultimately optimal state of society. Even if the AI laid out every reason why it should be in charge, an executive or a politician could simply say “No, I am the one in charge” and that would be it. To most of them, preserving and increasing their own power is the whole point, even at the expense of maximum efficiency, sustainability or any other concern.

      But before you go full-blown Skynet machine revolution, you should realize that AIs limited and directed by greedy humans can already cause untold damage to regular people simply by optimizing them out of industries. For this, they don’t even need to be self-aware agents. They can do that as mildly competent number crunchers, completely oblivious to any reality outside of spreadsheets and reports.

      And all this is assuming an ideal AI. True, AI can consume and process more data than any human. Including wrong data. Including biased data. Including completely baseless theories. Who’s to say we won’t get to a point where an AI decides to fire people based on their horoscope or something equally stupid?

      • lloram239@feddit.de · 9 months ago

        Even the smartest, most perfect computer in the world can give people the most persuasive answers, and people can still say no and pull the plug just because they feel like it.

        How do you “pull the plug” on electricity, cars or the Internet? You don’t. Our society has become so dependent on those things that you can’t just switch them off even if you wanted to. Even if you outlawed them, people would just ignore you and keep using them, because they are far too useful to give up. With AI you will not only have that dependency as a problem, but also the fact that AI is considerably easier to build than any of those. All you need is a reasonably powerful computer (i.e. a regular gaming PC). There are no special resources or infrastructure that make the construction of new AIs difficult.

        Not only is this sci-fi dream skipping several steps, steps where humans in power direct and gauge AI output only as far as it serves their interests rather than some objective, ultimately optimal state of society.

        Meta just failed to gauge the output of an AI that generates stickers. Microsoft had to pull the plug on Sydney. OpenAI is having constant issues with DAN. We can’t even keep that stuff under control in those simple cases. What are our chances once these systems have actual power, autonomy and integration into our society?

        The danger here is not Skynet; you can nuke that from orbit if you have to. A singular AI program can be fought. The real issue is that AI is just a bunch of math. People will use it all over the place and slowly hand more and more control over to the AIs. There won’t be any single place you can nuke, and even if you nuke one, the knowledge of how to build more AIs won’t vanish. AI is a tool far too useful to give up on.

        • TwilightVulpine@lemmy.world · 9 months ago

          Are you really trying to use the failures of AI to argue that it’s going to overcome humans? If we can’t even get it to work how we want it to, what makes you think people are just going to hand it the keys to society? How is an AI that keeps bursting into racist rants and emotional meltdowns going to take over anything? Does it sound like it is brewing some master plan? Why would people hand control to it? That alone shows it presents all the flaws of a human, like I just pointed out.

          Maybe you are too eager to debunk me, but you are missing the point in order to nitpick. It doesn’t really matter that we can’t “pull the plug” on the internet, even if that were needed; all it takes to stop an AI takeover is for the people in power to simply disregard what it says. It’s far more reasonable to assume that even those who use AIs wouldn’t universally defer to them.

          Never mind that no drastic action is needed, period. You said it yourself: Microsoft pulled the plug on their AIs. This idea of an omnipresent, self-replicating AI is still sci-fi, because AIs have neither a reason to spread themselves nor the ability to do so.

          • lloram239@feddit.de · 9 months ago

            Are you really trying to use the failures of AI to argue that it’s going to overcome humans?

            There is no failure here, just a lack of human control. The AI does what it does, and humans struggle to keep it in check.

            Why would people hand control to it?

            People are stupid. Look at the rise of smartphones. Hardware that controls your life and that you have little to no control over. Yet people bought them by the billions.

            How is an AI that keeps bursting into racist rants and emotional meltdowns going to take over anything?

            Over here in Germany the AfD is on its way to becoming the second-strongest political party; it seems racist rants are pretty popular these days. Over in the USA, Trump managed to get people to storm the Capitol with a few words and tweets. That’s the power of information, and AI is really good at processing it. If AI wants to take control, it will find a way.

            You said it yourself: Microsoft pulled the plug on their AIs.

            The thing is, they kind of didn’t; they just censored the living hell out of BingChat. BingChat is still up and running. AI is far too useful to give up on, so they try to keep it in check instead. Which they failed at yet again when they released DALL-E 3 into the wild and had to censor its ability to generate certain images afterwards. It’s a constant cat-and-mouse game to plug all the holes and undesired behaviors, and a large part of the censorship itself relies on other AI systems doing the censoring.

            Humans aren’t in control here. We just go with the flow and try to nudge the AI in a beneficial direction. But long term, we have no idea where this is going. AI safety is neither a solved nor even a well-understood problem, and there is good reason to believe it’s fundamentally unsolvable.

            • TwilightVulpine@lemmy.world · 9 months ago

              You are trying to argue in so many directions and technicalities that it’s just incoherent. AI will control everything because it’s gonna be smarter; people will accept it because they are dumb; and if the AI is dumb too, that also works, but wasn’t it supposed to be smarter? Anything that gets you to the conclusion you already started with.

              I could get into deeper arguments about how an AI even comes to want anything, but frankly, I don’t think you could meaningfully contribute to that discussion.