• Thorny_Thicket@sopuli.xyz

    This might apply to LLMs and such, but there's no reason a true AGI couldn't be completely unbiased, though it could also be biased in a way that benefits itself.

    • SkyeStarfall@lemmy.blahaj.zone

      How do you solve the problem of ethics? Is there even such a thing as objectively true ethics?

      You have to answer that question before you can even start saying that being unbiased is possible in the first place.

      • Thorny_Thicket@sopuli.xyz

        If we're speaking of an AGI, then I don't need to solve those issues; it's going to solve them for me. By definition, an AGI doesn't need a human to improve itself.

        • SkyeStarfall@lemmy.blahaj.zone

          How will you tell the AI what the proper ethics for humans are?

          After all, you want the AI to be in service of humans, of us… right? If not, what is going to stop the AI from just being entirely self-serving?

          • Thorny_Thicket@sopuli.xyz

            I think we have a very different view of what a true AGI will be like. I don't need to tell or teach it anything. It'll be a million times smarter than me and will hopefully teach me instead.

            Nothing stops it from being entirely self-serving. That’s why I expect it to destroy us.