• Eheran@lemmy.world · 22 hours ago

    Calculations with bugs do not magically produce correct results and plot them correctly. Nor can such simple code silently alter values that were read from a file or device. Etc.
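
    For example (a made-up toy snippet; the readings and scale factor are invented purely for illustration, not taken from any specific script), a bug in a simple analysis step shows up as an obviously wrong number, not as a plausible-looking correct result:

    ```python
    import numpy as np

    # Pretend these readings came from a file or a device, in millivolts.
    readings_mV = np.array([512.0, 498.5, 503.2, 507.9])

    mean_V_buggy = (readings_mV / 100).mean()   # bug: wrong scale factor -> ~5.05 V
    mean_V = (readings_mV / 1000).mean()        # correct -> ~0.505 V

    print(mean_V_buggy, mean_V)

    # A trivial sanity check makes the bug impossible to miss for a ~0.5 V signal:
    assert 0 < mean_V < 1, "mean voltage outside expected range"
    ```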

    I do not care what you program or how bugs can sneak in there. I use it for data analysis, simulations, etc., with exactly zero security implications and no interaction with anything outside the computer.

    The hostility here against anyone using LLMs/AI is absurd.

    • Holytimes@sh.itjust.works · 2 hours ago

      I dislike LLMs, but basically the only two fucking things this place seems to agree on are that communism is good and AI is bad.

      Basically no one has a nuanced take; people would rather demonize than have a reasonable discussion.

      Honestly, at this point Lemmy is exactly the same as Reddit was a few years ago, before the mods and admins went full Nazi and started banning people for anything and everything.

      At least here we can still actually voice both sides of an argument instead of one side getting banned.

      People are people no matter where you go

    • AmbiguousProps@lemmy.today · 21 hours ago

      Then why do you bring up code reviews and 500 lines of code? We were not talking about your “simulations” or whatever else you’re bringing up here. We’re talking about you saying it can create 500 lines of code, and that it’s okay to ship it if it “just works” and have someone else review your slop.

      I have no idea what you’re trying to say with your first paragraph. Are you trying to say it’s impossible for it to coincidentally get a correct result? Because that’s literally all it can do. LLMs do not think, they do not reason, they do not understand; they are not capable of that. They are literally hallucinating all of the time, because that’s how they work. That’s why OpenAI had to admit that it cannot stop hallucinations: they are inherent to how LLMs work.