• CarbonatedPastaSauce@lemmy.world · +43 · 7 months ago

    I write automation code for devops stuff. I’ve tried to use ChatGPT several times for code, and it has never produced anything of even mild complexity that would work without modification. It loves to hallucinate functions, methods, and parameters that don’t exist.
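A cheap guard against that failure mode is to statically scan a generated snippet for attributes that don't actually exist on the module it claims to use. This is a minimal sketch, not anything from the comment; the `os.frobnicate` call is invented as an example of a hallucinated function:

```python
import ast
import importlib

def find_missing_attributes(snippet: str, module_name: str) -> list[str]:
    """Return names called as `module_name.<attr>` that the real module lacks.

    Only catches direct module-level attribute access; it won't validate
    argument lists or nested attributes like os.path.exists.
    """
    module = importlib.import_module(module_name)
    tree = ast.parse(snippet)
    missing = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Attribute)
                and isinstance(node.value, ast.Name)
                and node.value.id == module_name
                and not hasattr(module, node.attr)):
            missing.append(node.attr)
    return missing

snippet = "import os\nos.getcwd()\nos.frobnicate('/tmp')"
print(find_missing_attributes(snippet, "os"))  # ['frobnicate']
```

It's a shallow check, but it catches exactly the "method that doesn't exist" class of hallucination before anything runs.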

    It’s very good for helping point you in the right direction, especially for people just learning. But at the level it’s at now (and with all the articles saying we’re already seeing diminishing returns with LLMs), it won’t be replacing any but the worst coders out there any time soon.

    • QuadratureSurfer@lemmy.world · +9 · 7 months ago

      It’s great for pseudocode. But I prefer to use a local LLM that’s been fine-tuned for coding. It doesn’t seem to hallucinate functions/methods/parameters anywhere near as much as ChatGPT did… but admittedly I haven’t used ChatGPT for coding in a while.

      I don’t ask it to solve the entire problem; I mostly just work with it to come up with bits of code here and there. Basically, it can partially replace Stack Overflow. It can save time in some cases, for sure, but companies are severely overestimating LLMs if they think they can replace coders with them in their current state.

    • tal@lemmy.today · +6 · 7 months ago (edited)

      I can believe that they manage to get useful general code out of an AI, but I don’t think it’s gonna be as simple as just training an LLM on an English-to-code mapping. Like, part of the job is gonna be identifying edge conditions, and those can’t be derived from the English alone, or from a lot of other code. It has to have some kind of deep understanding of the subject matter it’s working on.

      Might be able to find limited-domain tasks where you can use an LLM.

      But I think that a general solution will require not just knowing the English task description and a lot of code. An AI has to independently know something about the problem space for which it’s writing code.
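To make the edge-condition point concrete: an English spec like “split a list into n chunks” says nothing about what to do with remainders, or when n exceeds the list length, yet the code has to decide both. A sketch of one reasonable reading (the function and its remainder policy are my invention, just to illustrate the hidden decisions):

```python
def chunk(items: list, n: int) -> list[list]:
    """Split `items` into n chunks, spreading any remainder over the first chunks.

    Edge conditions the English spec never mentioned:
    - n > len(items): trailing chunks come back empty rather than erroring.
    - len(items) not divisible by n: earlier chunks get one extra element.
    """
    if n <= 0:
        raise ValueError("n must be positive")
    size, extra = divmod(len(items), n)
    out, start = [], 0
    for i in range(n):
        end = start + size + (1 if i < extra else 0)
        out.append(items[start:end])
        start = end
    return out

print(chunk([1, 2, 3, 4, 5], 3))  # [[1, 2], [3, 4], [5]]
print(chunk([1], 3))              # [[1], [], []]
```

Every one of those policy choices is invisible in the one-line English description, which is exactly the gap being described.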

      • Cryan24@lemmy.world · +1 · 7 months ago

        It’s good for doing the boilerplate code for you, but that’s about it… you still need a human to do the thinking on the hard stuff.
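For what it’s worth, this is the kind of boilerplate that tends to come out fine: mechanical serialization glue with no real decisions in it. The class and field names below are made up purely for illustration:

```python
from dataclasses import dataclass, asdict

@dataclass
class ServerConfig:
    host: str
    port: int = 8080
    debug: bool = False

    def to_dict(self) -> dict:
        # Mechanical: just flatten the dataclass fields.
        return asdict(self)

    @classmethod
    def from_dict(cls, data: dict) -> "ServerConfig":
        # Mechanical: missing keys fall back to field defaults.
        return cls(**data)

cfg = ServerConfig.from_dict({"host": "example.org", "port": 443})
print(cfg.to_dict())  # {'host': 'example.org', 'port': 443, 'debug': False}
```

Nothing here requires judgment, which is why generated versions of it usually work; the “hard stuff” starts where the spec stops being mechanical.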

    • 7heo@lemmy.ml · +2 · 7 months ago

      The thing is, devops is pretty complex and pretty diverse. You’ve got at least 6 different solutions among the popular ones.

      Last time I checked only the list of available provisioning software, I counted 22.

      Sure, some like cdist are pretty niche, but still: when you apply to a company, even though it is going to be AWS (mostly), Azure, GCE, Oracle, or some run-of-the-mill VPS provider with extended cloud features (an S3 work-alike based on MinIO, “cloud LAN”, etc.), and you are likely going to use Terraform for host provisioning, the most relevant information to check is which software they use. Packer? Or dynamic provisioning like Chef? Puppet? Ansible? Salt? Or one of the “lesser” ones?
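To make the divergence concrete, here is the same trivial task (“make sure nginx is installed”) in two of the stacks mentioned. Both fragments are textbook forms, not taken from any particular setup:

```yaml
# Ansible task
- name: Ensure nginx is installed
  ansible.builtin.package:
    name: nginx
    state: present
```

```yaml
# Salt state (same intent, entirely different DSL)
nginx:
  pkg.installed
```

Same one-line intent, two incompatible dialects, and that’s before versioning and convention drift within each tool.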

      And the thing is, even across successive versions of compatible stacks, the DSL evolved and the way things are supposed to be done changed. For example, before Hiera, Puppet was an entirely different beast.

      And that’s not even throwing Docker (or rkt, appc) into the mix. Then you have k8s, podman, Helm, etc.

      The entire ecosystem has considerable overlap too.

      So, on one hand, you have pretty clean and usable code snippets on Stack Overflow, GitHub gists, etc. So much so that tools like that emerged… And then, the very second LLMs were able to produce any moderately usable output, they were trained on that data.

      And on the other hand, you have devops. An ecosystem with no clear boundaries, no clear organisation, not much maturity yet (in spite of the industry being more than a decade old), and so organic that keeping up with developments is a full time job on its own. There’s no chance in hell LLMs can be properly trained on that dataset before it cools down. Not a chance. Never gonna happen.

    • TimeSquirrel@kbin.social · +3/-1 · 7 months ago (edited)

      Context-aware AI is where it’s at: one that’s integrated into your IDE and can see your entire codebase, offering suggestions with functions and variables that actually match the ones in your libraries. GitHub Copilot does this.

      Once the codebase gets large enough, a lot of times you can just write out a comment and suddenly you’ll have a completely functional code block pop up underneath it, and you hit “tab” to accept it and move on. It’s a very sophisticated autocomplete. It removes tediousness and lets you focus on logic.
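The comment-driven flow looks roughly like this: you write the comment, and the tool proposes the body underneath it. This particular function is an invented example of a typical completion, not a real Copilot transcript:

```python
# Return the n most common words in `text`, lowercased, with their counts.
from collections import Counter

def top_words(text: str, n: int) -> list[tuple[str, int]]:
    words = text.lower().split()
    return Counter(words).most_common(n)

print(top_words("the cat and the hat and the bat", 2))  # [('the', 3), ('and', 2)]
```

For small, well-trodden functions like this, the completion is usually correct on the first try, which is what makes the tab-to-accept loop feel like sophisticated autocomplete rather than code generation.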

  • AdamEatsAss@lemmy.world · +32/-5 · 7 months ago

    Lol. Humans are just moving up the stack. I’m sure some people were upset about how we wouldn’t need electrical engineers anymore once digital circuits were invented. AI is a tool; without a trained user, a tool is almost useless.

    • abhibeckert@lemmy.world · +10/-4 · 7 months ago (edited)

      AI is a tool; without a trained user, a tool is almost useless.

      Exactly. This feels a bit like the invention of the wheel to me. Suddenly some things are a lot easier than they used to be and I’m sitting here thinking “holy crap half my job is so easy now” while watching other people harp on about all the things it doesn’t help with. Sure - they’re right, but who cares about that? Look at all the things this tool can do.

    • vanderbilt@lemmy.world · +6/-2 · 7 months ago

      I use Claude to write plenty of the code we use, but it comes with the huge caveat that you can’t blindly accept what it says. Ever hear newscasters talk about some hacker thing and wonder how they got it so wrong? It’s the same with AI code sometimes. If you can code, you can tell what it gets wrong.
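A classic example of code that looks right but isn’t, and that a reviewer who can actually code will catch on sight: Python’s mutable default argument. Both functions below are hypothetical illustrations, not anything Claude produced:

```python
# Plausible-looking suggestion with a classic bug: the default list is
# created once at definition time, so it is shared across all calls.
def collect_bad(item, bucket=[]):
    bucket.append(item)
    return bucket

# Fixed version: use None as the sentinel and build a fresh list per call.
def collect(item, bucket=None):
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(collect_bad(1), collect_bad(2))  # [1, 2] [1, 2]  (same shared list!)
print(collect(1), collect(2))          # [1] [2]
```

Both versions pass a casual read and even a single-call test; only someone who knows the language spots that the first one leaks state between callers.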

  • Admiral Patrick@dubvee.org · +24/-1 · 7 months ago (edited)

    Is that why Windows 11 sucks so much? Like, did they just turn their codebot loose on the repo?

  • tsonfeir@lemm.ee · +14 · 7 months ago

    Bugs. Bugs. Bugs.

    AI is fine as an assistant, or to brainstorm ideas, but don’t let it run wild, or take control.