Sam Altman says ChatGPT should be ‘much less lazy now’::ChatGPT users previously complained that the chatbot was slacking off and refusing to complete some tasks.

  • otp@sh.itjust.works · 10 months ago

    Some users found inventive strategies to get around ChatGPT’s laziness, with one finding that the AI model would provide longer responses if they promised to tip it $200.

    Man this thing really IS just like a human!

    /joke

    • fidodo@lemmy.world · 10 months ago

      It’s trained on text produced by humans, so yes: it reproduces text that humans wrote, and therefore it acts like a human.

      • General_Effort@lemmy.world · 10 months ago

        It’s still weird. That reasoning implies there’s a correlation between promising money and long answers in the training data. Seems plausible at first blush, but where would that actually show up? It’s hardly ever seen on social media, where similar Q&A formats exist. It’s certainly not in textbooks, where the really good answers are. OTOH, tips are promised in plenty of completely different contexts.

        I’m not saying it’s wrong, but there is definitely a lot of cargo cult in prompting strategies.