• BroBot9000@lemmy.world
    2 days ago

    Really good until you stop double-checking it and it makes shit up. 🤦‍♂️

    Go take your AI apologist bullshit and feed it to the corporate simps.

    • Eheran@lemmy.world
      1 day ago

      The good thing is that in code, if it makes shit up, it simply does not work the way it is supposed to.

      Keep your hatred to yourself, along with the bullshit you make up.

      • AmbiguousProps@lemmy.today
        1 day ago

        Until it leaves a security issue that isn’t immediately visible and your users get pwned.

        Funny that you say “bullshit you make up”, when all LLMs do is hallucinate and sometimes, by coincidence, have a “correct” result.

        I use them when I’m stumped or hit “writer’s block”, but I certainly wouldn’t have them produce 500 lines and then assume that just because it works, it must be good to go.
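
        To make that concrete, here's a minimal, hypothetical sketch (made-up table and function names) of code that "just works" on friendly input while hiding exactly the kind of flaw that isn't immediately visible:

        ```python
        import sqlite3

        def find_user(conn, username):
            # BUG: string interpolation allows SQL injection, but every
            # normal test case returns the right rows, so a quick review
            # or smoke test passes.
            cur = conn.execute(f"SELECT id, name FROM users WHERE name = '{username}'")
            return cur.fetchall()

        def find_user_safe(conn, username):
            # Fix: parameterized query; the driver escapes the value.
            cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
            return cur.fetchall()

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
        conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

        # Both behave identically on friendly input...
        assert find_user(conn, "alice") == find_user_safe(conn, "alice")

        # ...but a crafted input dumps every row through the naive version.
        assert len(find_user(conn, "x' OR '1'='1")) == 2
        assert find_user_safe(conn, "x' OR '1'='1") == []
        ```

        "It ran and gave the right answer" is true of both functions above, which is why working output alone doesn't mean the code is good to go.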

        • Eheran@lemmy.world
          22 hours ago

          Calculations with bugs do not magically produce correct results and plot them correctly. Nor can such simple code change values that were read from a file or device.

          I do not care what you program or how bugs can sneak in there. I use it for data analysis, simulations, and the like, with exactly zero security implications and no interaction with anything outside the computer.

          The hostility here against anyone using LLMs/AI is absurd.

          • Holytimes@sh.itjust.works
            2 hours ago

            I dislike LLMs, but the only two fucking things this place seems to agree on are that communism is good and AI is bad.

            Basically no one has a nuanced take; people would rather demonize than have a reasonable discussion.

            Honestly, Lemmy at this point is exactly the same as Reddit was a few years ago, before the mods and admins went full Nazi and started banning people for anything and everything.

            At least here we can still actually voice both sides of the argument instead of one side getting banned.

            People are people no matter where you go.

          • AmbiguousProps@lemmy.today
            21 hours ago

            Then why do you bring up code reviews and 500 lines of code? We were not talking about your “simulations” or whatever else you bring up here. We’re talking about you saying it can create 500 lines of code, and that it’s okay to ship it if it “just works” and have someone review your slop.

            I have no idea what you're trying to say with your first paragraph. Are you trying to say it's impossible for it to coincidentally get a correct result? Because that's literally all it can do. LLMs do not think, reason, or understand; they are not capable of that. They are hallucinating all of the time, because that is how they work. That's why OpenAI had to admit they are unable to stop hallucinations: it's impossible, given how LLMs work.

          • AmbiguousProps@lemmy.today
            1 day ago

            “my coworkers should have to read the 500 lines of slop so I don’t have to”

            That also implies that code reviews are always thoroughly scrutinized. They aren’t, and if a whole team is vibecoding everything, they especially aren’t. Since you’ve got this mentality, you’ve definitely got some security issues you don’t know about. Maybe go find and fix them?

            • onslaught545@lemmy.zip
              17 hours ago

              If your QA process can let known security flaws into production, then you need to redesign your QA process.

              Also, no one ever said that the person generating 500 lines of code isn’t reviewing it themselves.