• WorldsDumbestMan@lemmy.today · +1 · 2 hours ago

    The AI actually solved psychological barriers I had (along with co-workers forcing me to open up); they were quite the wombo combo.

    Then I got far worse ones from work. I’m now basically an anti-pleasure monk trying to decouple happiness from success, just trying to accumulate power and money instead.

  • dogs0n@sh.itjust.works · +4 · 4 hours ago

    It doesn’t always help, but I’m unfortunately thankful it exists for the times when I feel like giving up and it gets me on the right track.

    It never gives me good code, but the text it returns can sometimes spark an idea that works.

  • Avicenna@lemmy.world · +8 · edited · 8 hours ago

    It doesn’t generally figure it out completely, but to be honest it does a much better job than Google at finding the relevant keywords, which can then be used for a more detailed search.

    • kadu@scribe.disroot.org · +3/−7 · 8 hours ago

      Not to be a dick, but this reveals more about your own limitations than it does about the power of LLMs…

            • Holytimes@sh.itjust.works · +1 · 7 minutes ago

              This is Lemmy; the average user is about as willing to use an LLM as they would be to drink bleach.

              They’d rather just shit on anything they don’t like, lie through their teeth to demonize it, and deny the reality around them.

              It’s the one constant Lemmy shares with Reddit. It’s always funny to see that no matter where you go, people are always the same.

      • CaptSatelliteJack@lemy.lol · +5/−1 · 7 hours ago

        Wow, you mean a random stranger on the internet isn’t as good at something as you?? Say it ain’t so!

        • kadu@scribe.disroot.org · +1/−5 · 6 hours ago

          That’s not what I said.

          LLMs are demonstrably bad at what they do, and what they do is just very basic writing, research and math.

          It’s not about things I know or don’t know. If you’re finding LLMs useful, you’re lacking in some foundational skills that everybody should practice and be capable of doing.

          • WorldsDumbestMan@lemmy.today · +2 · 2 hours ago

            Welcome to the real world. A lot of us are either disabled or have been left behind, then shoved into the workforce so we can’t resolve those issues, and drained of resources as well.

          • CaptSatelliteJack@lemy.lol · +4 · 5 hours ago

            “If you use LLMs you’re dumb and bad.” You’re literally a toxic gamer telling someone to “git gud.” Imagine calling someone stupid because they said they use a tool to help them solve a problem. Are people who use calculators brain-dead idiots, then?

            • kadu@scribe.disroot.org · +1/−4 · 5 hours ago

              Again, not what I said.

              But given how difficult interpreting this simple comment has been for you, and your extremely weird reply (toxic gamer? what?) I think you’re better off using the LLM to interpret and write for you after all. Good day though!

                • kadu@scribe.disroot.org · +1/−1 · 4 hours ago

                  Your joke would work better against the guy boasting about using LLMs, not the one actively avoiding them. Maybe ask an LLM for an explanation as to why.

  • 87Six@lemmy.zip · +11 · 20 hours ago

    Me yesterday, except I only thought it had figured it out; hours later I found out I had to revert to my workaround because it didn’t really work fully and was fragile as fuck.

    • markovs_gun@lemmy.world · +7 · 17 hours ago

      I don’t think it’s intentional, but I think the sheer quantity of AI slop and web crawlers trying to train new AI models is the main problem. Good websites are blocking access to search engines to try to slow crawler traffic, while shitty websites are being made at an unprecedented speed. I legitimately don’t know how you fix this as a search engine provider.

      • WorldsDumbestMan@lemmy.today · +1 · 2 hours ago

        C’mon. How many people used to say something wasn’t done intentionally, only to be proven wrong again and again? Remember the syphilis experiments the US government secretly ran on Black people, with doctors cooperating? The Panama Papers? The Phoebus cartel?

        Always assume they are doing it on purpose! At the very least, someone is aware they are sabotaging their search engine and plans to profit from it.

      • 87Six@lemmy.zip · +6 · edited · 20 hours ago

        So they were enshittifying search engines in advance, so what? AI wasn’t born yesterday.

      • Pennomi@lemmy.world · +32 · 1 day ago

        When search engine optimization becomes a target, content suffers. If Google changed their algorithm to only rank websites with high quality content instead of keyword-stuffed content, we’d see a great improvement in the quality of the internet.

        • Grenfur@pawb.social · +28 · 1 day ago

          The real kicker is deciding what quality even is. A one-line script that updates a driver may be a solution to your issue. A four-page walkthrough that rambles and gets you to your answer, but only after an hour, is still a solution; is it better quality? You can’t quantify quality. Even if you managed to for something like programming, you couldn’t apply that same logic to horticulture. Quality just isn’t something you can stick in an algorithm.

          • Pennomi@lemmy.world · +12 · 1 day ago

            Right, quality is not something that is easy to figure out algorithmically. But adding arbitrary rules like “content length” or “time on page” directly ruins quality by incentivizing content manipulation.

        • qarbone@lemmy.world · +5 · 1 day ago

          Then someone targets them for pushing their biases because they are deciding quality.

    • mushroommunk@lemmy.today · +17/−2 · 1 day ago

      I’ve never seen convincing arguments for that. However, if you think about it, Google wants you to stay scrolling through it forever. The more sponsored links and ads they can show, the more money they make. They didn’t need to make it worse for AI, they made it worse for profit

      • LoreSoong@startrek.website · +8 · 1 day ago

        I’m encountering a lot of AI-created websites that explain concepts like “side effects of X pill” (a recent example), and there were basically no real medical websites in the top results, just what was clearly AI using thousands of words to say nothing that I can’t trust.

        I was considering locally hosting a search engine to circumvent my need for them entirely. Search engine optimization seems like a nightmare, even if they were trying to give me useful results, so I’m not sure if that would be a spend-5-hours-to-save-5-minutes situation.

        As you and others said, it’s been getting worse for years, so it’s probably just a coincidence that it’s also profitable for AI.

    • Alloi@lemmy.world · +7/−6 · 10 hours ago

      No offense, I understand what you are trying to say here. I’m not a massive fan of the implications of things like AI and its effects on society.

      But oversimplifying and infantilizing your enemy won’t stop it from outperforming you.

      I can say “all AI does is put words on a screen based on a statistical prediction algorithm, using context and available training data; it’s only accurate 95% to 97% of the time, and it lies when it doesn’t know something or wants to save power for the sake of efficiency and cost reduction.”

      And it would still be far more likely than I am to give a comprehensive breakdown and step-by-step analysis of systems well beyond my personal understanding, way faster than I ever could.

      We can chalk it up to stolen info and guessing letters, but it’ll still outperform most people in most subjects, especially in terms of time versus results.

      Don’t get me wrong, I don’t think it’s intelligent in the way a human can be, or as nuanced as a human can be. But that doesn’t necessarily mean it can’t ever be. With the technology evolving across the board, seemingly faster and faster each day, with some plateaus here and there, it’s hard to imagine a world where we just say “well, we tried, it’s a dead end, oh well” and completely abandon it for the idea of human exceptionalism.

      Overall, humans, as smart as they are, are also pretty fucking dumb, which is why we are ignoring things like climate change for what are essentially IOUs made of 1s and 0s (money), and also succumbing to a global rise in fascist ideals even though we know from history what that entails and how it ends. And that’s partly due to the ability of AI to manipulate the masses, even in its current “primitive” state.

      I don’t like AI, but I’m not going to pretend it won’t be able to replace the output of most humans, automate most jobs, or be used to enslave and brainwash us further than it already has.

      The human mind simply cannot compete with the computational speed, and in some cases quality, of what is here and what is yet to come.

      Slop it may be, but if you cover the veritable feast of human creativity with enough slop, humanity will soon have no choice but to eat it or starve. Everything else will get drowned out in time.

      Something really fucking big would have to happen to change this outcome: WW3, nuclear war, a solar flare. Who the fuck knows.

      But what I do know is that those in power need the system to function as is, and in newer, more efficient ways, while they still need us, in order to have the highest chance of survival when it all comes crashing down at the end of this century. So we may just avoid total annihilation unless it’s deemed necessary for their survival. Let’s hope we rise up before they take that opportunity.

      • Holytimes@sh.itjust.works · +1 · 1 minute ago

        This rant made me realize people need to go work at a fucking gas station for a few weeks and find out how truly fucking stupid and uneducated the average person is.

        LLMs, even as they are right now, are so far beyond what a very sizeable part of the world is in terms of intelligence and education. It’s wild how stupid a lot of people are.

        And this isn’t even a recent thing; it’s been like this for all of human history. People are, for the most part, goddamn idiots. Some people are exceptional in one or two narrow fields, and barely anyone is good at more than a few.

  • Sanctus@lemmy.world · +54/−4 · edited · 1 day ago

    Literally never had this happen. Every time I have caved after exhausting all other options the LLM has just made it worse. I never go back anymore.

    • idunnololz@lemmy.world · +8/−1 · edited · 1 day ago

      They seem to be pretty good at language. One time I forgot the word “tact” and was trying to remember it. I even asked some people, and no one could think of the word even after I described approximately what it meant. But I asked AI and it got it in one go.

    • MentalEdge@sopuli.xyz · +28/−1 · edited · 1 day ago

      They’re by no means the end-all solution, and they usually aren’t my first choice.

      But when I’m out of ideas, prompting Gemini with a couple of sentences hyper-specifically describing a problem has often given me something actionable. I’ve had almost no success asking it for specific instructions without giving specific details about what I’m doing; that’s when it just makes shit up.

      A recent example: I was trying to re-install Windows on a Lenovo ARM laptop. Lenovo’s own docs were generic for all their laptops and intended for x86; you could not use just any Windows ISO. While I was able to figure out how to create the recovery image media for the specific device at hand, there were no instructions on how to actually use it, and entering the BIOS didn’t show any relevant entries.

      Writing half a dozen sentences describing this into Gemini instantly informed me that there is a tiny pin-hole button on the laptop that boots into a special separate menu that isn’t in the BIOS. And lo, that was it.

      Then again, if normal search still worked like it did a decade ago and didn’t give me a shitload of irrelevant crap, I wouldn’t have needed an LLM to “think” its way to this factoid. I could have found it myself.

      • Sanctus@lemmy.world · +3/−2 · 1 day ago

        I do use LLMs if I forget to plan one of my tabletop sessions. I will fully admit they are great at that. Love 'em for making encounters.

    • Farid@startrek.website · +5 · 1 day ago

      Happened to me yesterday. I have an old 4K TV, and every component I used to connect to it had HDMI 2.0+ capability. Neither my laptop nor my Steam Deck would output 4K60, only 4K30. I tried another cable and a hub, same result. And I knew my Chromecast outputs 4K60 to this TV, so I was extra confused. In my desperation, I asked GPT-5 what I was missing, and it plainly told me that those old Samsung TVs turn off HDMI 2.0 support unless you explicitly enable it in the TV settings under “UHD Color”. Apparently the Chromecast was doing chroma subsampling, but the computers refused and wanted full HDMI 2.0 bandwidth…

      • _g_be@lemmy.world · +1 · 9 hours ago

        That’s rather cool, glad to hear it worked. My experience with it is often:

        Me: Where can I find the setting to change for *this thing*?
        LLM: “Gladly! I know how frustrating this process can be! First, open the settings page, find the page that says *thing setting*, and change it there.”
        Me: There is no page like that.
        LLM: “You’re absolutely right!”

        • Farid@startrek.website · +1 · 6 hours ago

          True, that totally happens to me all the time, too. For example, yesterday it was repeatedly insisting that there’s a certain checkbox in qbittorrent settings, which wasn’t there. I gave it the screenshot of the setting page and it “realized” it’s named differently. So in the end, it helped me with something that I couldn’t google properly. It’s a supplementary tool for me.

    • TehBamski@lemmy.world · +1 · 1 day ago

      Context is highly important in this scenario. Ask it how many people live in [insert country and then province/state], and it’ll be accurate a high percentage of the time. Ask it [insert historical geo-political question], and it won’t be able to answer.

      Also, I have found it can depend on which LLM you ask. I have found Perplexity to be my go-to LLM of choice, as it acts like an LLM ‘server’ in selecting the best LLM for the task at hand. Here’s Perplexity’s Wikipedia page if you want to learn more.

    • Eheran@lemmy.world · +11/−14 · 1 day ago

      When was the last time you tried? GPT-5 Thinking is able to create 500 lines of code without a single error, repeatably, and add new features into it seamlessly too. Hours of work with older LLMs reduced to minutes; I really like how much it enables me to do with my limited spare time. Same with “actual” engineering: the numbers were all correct the last few times, even for things where it had to find a way to calculate, figure out some assumptions, and then do the math. Sometimes it gets the context wrong, and since it pretty much never asks questions back, the result was absurd for me but somewhat correct for a different context. Really good stuff.

      • BroBot9000@lemmy.world · +25/−15 · 1 day ago

        Really good until you stop double checking it and it makes shit up. 🤦‍♂️

        Go take your Ai apologist bullshit and feed it to the corporate simps.

        • Eheran@lemmy.world · +5/−8 · 1 day ago

          The good thing is that in code, if it makes shit up it simply does not work the way it is supposed to.

          You can keep your hatred to yourself, let alone the bullshit you make up.

          • AmbiguousProps@lemmy.today · +9 · edited · 1 day ago

            Until it leaves a security issue that isn’t immediately visible and your users get pwned.

            Funny that you say “bullshit you make up”, when all LLMs do is hallucinate and sometimes, by coincidence, have a “correct” result.

            I use them when I’m stumped or hit “writer’s block”, but I certainly wouldn’t have them produce 500 lines and then assume that just because it works, it must be good to go.

            • Eheran@lemmy.world · +2/−5 · 20 hours ago

              Calculations with bugs do not magically produce correct results and plot them correctly. Nor can such simple code change values that were read from a file or device. Etc.

              I do not care what you program or how bugs can sneak in there. I use it for data analysis, simulations, etc., with exactly zero security implications or, generally, interactions with anything outside the computer.

              The hostility here against anyone using LLMs/AI is absurd.

              • AmbiguousProps@lemmy.today · +5 · edited · 19 hours ago

                Then why do you bring up code reviews and 500 lines of code? We were not talking about your “simulations” or whatever else you bring up here. We’re talking about you saying it can create 500 lines of code, and that it’s okay to ship it if it “just works” and have someone review your slop.

                I have no idea what you’re trying to say with your first paragraph. Are you trying to say it’s impossible for it to coincidentally get a correct result? Because that’s literally all it can do. LLMs do not think, they do not reason, they do not understand. They are not capable of that. They are literally hallucinating all of the time, because that’s how they work. That’s why OpenAI had to admit that they are unable to stop hallucinations, because it’s impossible given that’s how LLMs work.

              • AmbiguousProps@lemmy.today · +5/−3 · edited · 1 day ago

                “my coworkers should have to read the 500 lines of slop so I don’t have to”

                That also implies that code reviews are always thoroughly scrutinized. They aren’t, and if a whole team is vibecoding everything, they especially aren’t. Since you’ve got this mentality, you’ve definitely got some security issues you don’t know about. Maybe go find and fix them?

                • onslaught545@lemmy.zip · +3/−1 · 15 hours ago

                  If your QA process can let known security flaws into production, then you need to redesign your QA process.

                  Also, no one ever said that the person generating 500 lines of code isn’t reviewing it themselves.

      • fuckwit_mcbumcrumble@lemmy.dbzer0.com · +5/−3 · 1 day ago

        Also did you adequately describe your problem? Treat it like a human who knows how to program, but has no idea what the fuck you’re talking about. Just like a human you have to sit it down and talk to it before you have it write code.

      • Donkter@lemmy.world · +7/−6 · 1 day ago

        I’ve come to realize that these crazed anti-AI people are just a product of history repeating itself. They would be the same leftists who were anti-GMO. When you dig into it, you understand that they’re against Monsanto, which is cool and good, but the whole thing is so conflated in their heads that you can’t discuss the merits of GMOs whatsoever, even though they’re purportedly progressive.

        It’s a pattern; their heads are in the right place for the most part, but the logic goes a little haywire as they buy into hysteria. It’ll probably take a few years as the generations cycle.

      • lectricleopard@lemmy.world · +5/−6 · 1 day ago

        It gave you the wrong answer, one you called absurd, and then you said “Really good stuff.”

        Not to get all dead-internet, but are you an LLM?

        I don’t understand how people think this is going to change the world. It’s like the C-suite folks think they can fire 90% of their company, feed their half-baked ideas for superhero sequels into an AI, and sell us tickets to the poop that falls out, 15 fingers and all.

        • Eheran@lemmy.world · +2/−3 · 1 day ago

          So you physically read what I said and then just went with “my bias against LLMs was proven” and wrote this reply? At no point did you actually try to understand what I said? Sorry but are you an LLM?

          But seriously. If you ask someone on the phone “is it raining” and the person says “not now but it did a moment ago”, do you think the person is a fucking idiot because obviously the sun has been and still is shining? Or perhaps the context is different (a different location)? Do you understand that now?

          • lectricleopard@lemmy.world · +5/−2 · 1 day ago

            You seem upset by my comment, which I don’t understand at all. I’m sorry if I’ve offended you. I don’t have a bias against LLMs. They’re good at talking, very convincing. I don’t need help creating text to communicate with people, though.

            Since you mention that this is helping you in your free time, you might not be aware how much less useful it is in a commercial setting for coding.

            I’ll also note, since you mentioned it in your initial comment, that LLMs don’t think. They can’t think. They never will think. That’s not what these things are designed to do, and there is no means by which they might start to think just by being bigger or faster. Talking about AI systems like they are people makes them appear more capable than they are to those who don’t understand how they work.

            • Eheran@lemmy.world · +1/−2 · 20 hours ago

              Can you define “thinking”? This is such a broad statement with so many implications. We have no idea how our brain functions.

              I do not use this tool for talking. I use it for data analysis, simulations, MCU programming, … Instead of having to write all of that code myself, it only takes 5 minutes now.

              • lectricleopard@lemmy.world · +1 · 12 hours ago

                Thinking is what humans do. We hold concepts in our working memory and use stored memories that are related to evaluate new data and determine a course of action.

                LLMs predict the next correct word in their sentence based on a statistical model. This model is developed by “training” with written data, often scraped from the internet. This creates many biases in the statistical model. People on the internet do not take the time to answer “i dont know” to questions they see. I see this as at least one source of what they call “hallucinations.” The model confidently answers incorrectly because that’s what it’s seen in training.

                The internet has many sites with reams of examples of code in many programming languages. If you are working on code that is of the same order of magnitude of these coding examples, then you are within the training data, and results will generally be good. Go outside of that training data, and it just flounders. It isn’t capable and has no means of reasoning beyond its internal statistical model.

              • Clent@lemmy.dbzer0.com · +1 · 18 hours ago

                We have no idea how our brain functions.

                This isn’t even remotely true.

                You should have asked your LLM about it before making such a ridiculous statement.

  • Owl@mander.xyz · +15 · 1 day ago

    They are great if you know what the right answer is but just don’t know how to get it right now.

  • DrDystopia@lemy.lol · +10/−12 · 1 day ago

    Ah, to live a life where one’s problems can be solved by an LLM. It sounds so… simple and pleasant. 🫀

    • craftrabbit@lemmy.zip · +3 · 1 day ago

      That’s the world we all dream of, right? We work on what we want to with the robots keeping the houses in check and taking care of the menial admin- and paperwork and in the evenings we all sit together by the campfire with the robots bringing us food and drink as we rejoice in talking to each other about the day’s experiences.

      That doesn’t seem to be the world that we’re moving towards though…

    • somerandomperson@lemmy.dbzer0.com · +7/−3 · 1 day ago

      …NOT!

      It’s just big tech selling convenience for the trillionth time, this time in another form. They are NOT doing it out of good will; they’re doing it to sell your data, to train their AI on it (alongside their pirated media), and to do other nefarious stuff with everything you have.

      • DrDystopia@lemy.lol · +5 · 1 day ago

        …NOT!

        I promise you, as someone overcome with sadness from watching the so-far unsolvable problems of mankind that will lead to the end of the world as we know it: living a life where one believes simulated intelligence could solve anything at all is a dream. Ignorance is bliss.

        It’s just big tech selling convenience for the trillionth time

        No, once more they’re selling the impression of convenience. I.e. having the entire backend exposed to hackers because it was so convenient to vibe-code access control is not a real convenience.

        They are NOT doing out of good will

        Only idiots argue for such an intention.

        they’re doing it to sell your data, to train their ai on it

        No, they’re doing it to harvest our data. This allows them to use machine learning on the datasets but more traditionally, build profiles on their users. Access to the profiles is what they’re selling, not direct access to log data.

        and do other nefarious stuff with everything you have

        Then they need to step up their game as I’m self-hosting everything on a home-server. But I know what you mean. They want to do downright evil stuff with everything they can get their dirty, sticky paws on.