• JackGreenEarth@lemm.ee
    1 year ago

    I understand they are just fancy text prediction algorithms, which is probably just as much as you do (if you are a machine learning expert, I do apologise). Still, the good ones that get their data from the internet rarely make mistakes.
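    The "fancy text prediction" intuition can be sketched with a toy bigram model, a vast simplification of how real LLMs actually work, but it shows the core idea: predict the next word from statistics of the training text, with no understanding involved.

    ```python
    from collections import Counter, defaultdict

    # Toy illustration (NOT a real LLM): a bigram model that "predicts"
    # the next word as the most frequent follower seen in training text.
    def train_bigrams(text):
        followers = defaultdict(Counter)
        words = text.split()
        for a, b in zip(words, words[1:]):
            followers[a][b] += 1
        return followers

    def predict_next(followers, word):
        # Return the most common word seen after `word`, or None.
        if word not in followers:
            return None
        return followers[word].most_common(1)[0][0]

    model = train_bigrams("the cat sat on the mat and the cat slept")
    print(predict_next(model, "the"))  # "cat" follows "the" most often here
    ```

    A real LLM replaces the frequency table with a neural network over long contexts, but the objective is the same: output a plausible next token, not a true statement.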

    • Khalic@kbin.social
      1 year ago

      I’m not an ML expert, but we’ve been using them for a while in neuroscience (I’m a software dev in bioinformatics). They are impressive, but they have no semantics and no logic. It’s just a fancy mirror. That’s why, for example, World of Warcraft players were able to trick those bots into writing an article about a game feature that doesn’t exist.

      Do you really want to waste your time reading a blob of text with no coherence?

      • whataboutshutup@discuss.online
        1 year ago

        Do you really want to waste your time reading a blob of text with no coherence?

        We are both on the internet, lol. And I mean it. LLMs are only slightly worse than the CEO-optimized, clickbaity word salad you get in most articles. Until you’ve figured out how and where to search for direct, correct answers, it would be just the same, or maybe worse. I find this skill a bit fascinating: we learn to read patterns and red flags without even opening a page. I doubt it’s possible to build a reliable model with that bullshit detector.