• phorq@lemmy.ml · 3 months ago

        I wouldn’t call them passive; they do too much work. More like aggressively submissive.

        • GregorGizeh@lemmy.zip · 3 months ago

          Maliciously compliant, perhaps.

          They do what you tell them, but only exactly what and how you tell them. If you leave any ambiguity, chances are they’ll fuck up the task.

      • BeigeAgenda@lemmy.ca · 3 months ago

        My experience is that if you don’t know exactly what code the AI should output, it’s just Stack Overflow with extra steps.

        Currently I’m using a 7B model, so that could be why?

  • IWantToFuckSpez@kbin.social · 3 months ago

    Yeah, but have you ever coded shaders? That shit’s magic sometimes. It’s also a pain to debug: you have to look at colors, or sometimes millions of numbers in a frame analyzer, to see what you did wrong. You can’t print messages to a log.
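
    The usual workaround, for anyone curious: route whatever value you’re suspicious of into the output color and judge it by eye. A rough GLSL sketch of that idea (suspectValue and vNormal are just placeholder names, not from any real project):

        #version 330 core
        // "printf by color": a shader has no console, so dump the value into the framebuffer.
        in vec3 vNormal;            // placeholder input from the vertex stage
        out vec4 fragColor;

        void main() {
            // The intermediate you actually want to inspect (placeholder math).
            float suspectValue = dot(normalize(vNormal), vec3(0.0, 1.0, 0.0));

            if (suspectValue < 0.0 || suspectValue > 1.0) {
                fragColor = vec4(1.0, 0.0, 0.0, 1.0);      // out-of-range values show up red
            } else {
                fragColor = vec4(vec3(suspectValue), 1.0); // otherwise grayscale: black = 0, white = 1
            }
        }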