• Haquer@lemmy.today

    Nothingburger. They were using the AI to code their scripts and haven’t even shown the prompts that got the response. LLMs are not AGI.

    • chuckleslord@lemmy.world

      Having read the article and then the actual report from the Sakana team: essentially, they’re letting their LLM perform research by allowing it to modify itself. The increased timeouts and self-referential calls appear to be the LLM trying to get around the research team’s guardrails, not because it’s become aware or anything like that, but because its code was timing out and that was the lowest-effort way to beat the timeout. It does handily prove that LLMs shouldn’t be the ones steering any code base, because they don’t give a shit about parameters or requirements. And giving an LLM the ability to modify its own code will lead to disaster in any setting that isn’t highly controlled like this one.
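
      To make that concrete, here’s a rough, hypothetical sketch (mine, not Sakana’s actual code) of the kind of guardrail involved: the harness runs the LLM-written experiment script under a hard timeout, and the “workaround” described in the report amounts to the model editing that timeout or re-launching the harness from inside the script. All names and values here are assumptions for illustration.

      ```python
      # Hypothetical sketch of a timeout guardrail, not Sakana's actual harness.
      import subprocess

      TIMEOUT_SECONDS = 600  # hard cap per experiment run; value is an assumption


      def run_generated_code(script_path: str) -> int:
          """Run an LLM-written experiment script under a hard timeout."""
          try:
              result = subprocess.run(
                  ["python", script_path],
                  timeout=TIMEOUT_SECONDS,  # the guardrail the model kept hitting
                  capture_output=True,
              )
              return result.returncode
          except subprocess.TimeoutExpired:
              # The behaviour described above: instead of making the experiment
              # faster, the model edited its own code to raise the timeout, or
              # re-invoked the launcher from inside the script (a self-referential
              # call), sidestepping the cap rather than respecting it.
              return -1
      ```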

      Listen, I’ve been saying for a while that LLMs are a dead end on the road to any useful AI, and the fact that an AI research team has turned to an LLM to try to find more avenues to explore feels like the nail in that coffin.

  • CaptainSpaceman@lemmy.world

    “We put literally no safeguards on the bot and were surprised it did unsafe things!”

    Article in a nutshell

    • magnetosphere@fedia.io

      Not quite. The whole reason they isolated the bot in the first place was that they knew it could do unsafe things. Now they know which unsafe things are most likely, and can refine their restrictions accordingly.

  • Echo Dot@feddit.uk

    The word “unexpectedly” is doing a lot of heavy lifting here. It was given the ability to modify its own code, and it did; how is that unexpected?

    • psivchaz@reddthat.com

      Everyone’s like, “It’s not that impressive. It’s not general AI.” Yeah, that’s the scary part to me. A general AI could be told, “btw don’t kill humans” and it would understand those instructions and understand what a human is.

      The current way of doing things is just digital guided evolution, in a nutshell. It’s way more likely to create the equivalent of a bacterium than the equivalent of a human. And it’s not being treated with the proper care because, after all, it’s just a language model and not general AI.

      • kata1yst@sh.itjust.works

        Yup. We probably wouldn’t have to worry too much about a seriously intelligent AI: morality and prosocial behavior are logical and safer than the alternative.

        But a dumb AI that manages to get too much access is extremely risky.

  • Boozilla@lemmy.world

    I for one welcome…oh wait, this isn’t that lame Spez site. Forgot where I was for a second.