LOOK MAA I AM ON FRONT PAGE

  • SoftestSapphic@lemmy.world · 97↑ 1↓ · 1 day ago

    Wow it’s almost like the computer scientists were saying this from the start but were shouted over by marketing teams.

    • zbk@lemmy.ca · 24↑ 2↓ · 1 day ago

      This! Capitalism is going to be the end of us all. OpenAI has gotten away with IP theft, disinformation regarding AI, and maybe even the murder of their whistleblower.

    • BlushedPotatoPlayers@sopuli.xyz · 2↑ 6↓ · 22 hours ago

      For me it kinda went the other way; I’m almost convinced that human intelligence is the same pattern repeating, just more general (for now).

      • raspberriesareyummy@lemmy.world · 1↑ · 7 hours ago

        Except that wouldn’t explain consciousness. There’s absolutely no need for consciousness, or an illusion(*) of consciousness. Yet we have it.

        (*) Arguably, consciousness can by definition not be an illusion: we either perceive “ourselves” or we don’t.
  • technocrit@lemmy.dbzer0.com · 28↑ · 1 day ago

    Peak pseudo-science. The burden of evidence is on the grifters who claim “reason”. But neither side has any objective definition of what “reason” means. It’s pseudo-science against pseudo-science in a fierce battle.

    • x0x7@lemmy.world · 8↑ 1↓ · 1 day ago

      Even defining reason is hard and becomes a matter of philosophy more than science. For example, apply the same claims to people. Now I’ve given you something to think about. Or should I say the Markov chain in your head has a new topic to generate thought states for.

      • I_Has_A_Hat@lemmy.world · 4↑ · 24 hours ago

        By many definitions, reasoning IS just a form of pattern recognition so the lines are definitely blurred.

        • Echo Dot@feddit.uk · 2↑ · 23 hours ago

          And does it even matter anyway?

          For the sake of argument, let’s say that somebody manages to create an AGI: does it need reasoning abilities if it works anyway? No one has proven that sapience is required for intelligence; after all, we only have a sample size of one, and hardly any conclusions can really be drawn from that.

  • billwashere@lemmy.world · 53↑ 4↓ · 1 day ago

    When are people going to realize that, in its current state, an LLM is not intelligent? It doesn’t reason. It does not have intuition. It’s a word predictor.

    • x0x7@lemmy.world · 10↑ 1↓ · 24 hours ago

      Intuition is about the only thing it has. It’s a statistical system. The problem is it doesn’t have logic. We assume that because it’s computer-based it must be more logic-oriented, but it’s the opposite. That’s the problem. We can’t get it to do logic very well because it basically feels out the next token by something like instinct. In particular, it doesn’t mask out or disregard irrelevant information very well if two segments are near each other in embedding space, since proximity doesn’t guarantee relevance. So the model is just weighing all of this info, relevant or irrelevant, into a weighted feeling for the next token.
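      A toy sketch of that “weighted feeling” (the vectors below are made up purely for illustration, not taken from any real model): scores are just similarity-weighted sums, so context that happens to sit nearby in embedding space pulls on the result whether or not it is logically relevant.

      ```python
      import numpy as np

      def softmax(x):
          e = np.exp(x - x.max())
          return e / e.sum()

      # Hypothetical 4-dimensional embeddings: one query and three context items.
      query   = np.array([1.0, 0.2, 0.0, 0.0])
      context = np.array([
          [0.9, 0.3, 0.0, 0.1],   # genuinely relevant
          [0.8, 0.1, 0.2, 0.0],   # near the query in embedding space, but logically irrelevant
          [0.0, 0.0, 1.0, 0.9],   # clearly unrelated
      ])

      weights = softmax(context @ query)   # similarity alone decides the weighting
      print(weights.round(2))              # ~[0.44, 0.39, 0.17]: the irrelevant neighbour still gets ~40%
      ```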

      This is the core problem. People can handle fuzzy topics and discrete topics. But we really struggle to create any system that can do both like we can. Either we create programming logic that is purely discrete or we create statistics that are fuzzy.

      Of course, this issue of masking out information that is close in embedding space but irrelevant to a logical premise is something many humans suck at too. But high-functioning humans don’t, and we can’t get these models to copy that ability. Too many people, sadly many on the left in particular, not only treat association as always relevant but sometimes as equivalence. E.g.: racism is associated with Nazism, which is associated with patriarchy, which is historically related to the origins of capitalism, ∴ Nazism ≡ capitalism, even though National Socialism was anti-capitalist. Associative thinking removes nuance. And sadly some people think this way. They 100% can be replaced by LLMs today, because at least the LLM mimics what logic looks like better, though it’s still built on blind association. It just has more blind associations, and fine-tuned weighting for summing them, than a human does. So it can carry that masquerade of logic further than a human on the associative thought train can.

      • Buddahriffic@lemmy.world · 4↑ · 21 hours ago

        They want something like the Star Trek computer or one of Tony Stark’s AIs that were basically deus ex machinas for solving some hard problem behind the scenes. Then it can say “model solved” or they can show a test simulation where the ship doesn’t explode (or sometimes a test where it only has an 85% chance of exploding when it used to be 100%, at which point human intuition comes in and saves the day by suddenly being better than the AI again and threads that 15% needle or maybe abducts the captain to go have lizard babies with).

        AIs that are smarter than us but for some reason don’t replace or even really join us (Vision being an exception to the 2nd, and Ultron trying to be an exception to the 1st).

        • NotASharkInAManSuit@lemmy.world · 2↑ · 1 day ago

          If we ever achieved real AI the immediate next thing we would do is learn how to lobotomize it so that we can use it like a standard program or OS, only it would be suffering internally and wishing for death. I hope the basilisk is real, we would deserve it.

        • JcbAzPx@lemmy.world · 1↑ · 24 hours ago

          AI is just the new buzzword, just like blockchain was a while ago. Marketing loves these buzzwords because they can get away with charging more if they use them. They don’t much care if their product even has it or could make any use of it.

    • SaturdayMorning@lemmy.ca · 3↑ 1↓ · 1 day ago

      I agree with you. In its current state, an LLM is not sentient, and thus not “intelligent”.

      • MouldyCat@feddit.uk · 3↑ · 20 hours ago

        I think it’s an easy mistake to confuse sentience and intelligence. It happens in Hollywood all the time - “Skynet began learning at a geometric rate, on July 23 2004 it became self-aware” yadda yadda

        But that’s not how sentience works. We don’t have to be as intelligent as Skynet supposedly was in order to be sentient. We don’t start our lives as unthinking robots, and then one day - once we’ve finally got a handle on calculus or a deep enough understanding of the causes of the fall of the Roman empire - we suddenly blink into consciousness. On the contrary, even the stupidest humans are accepted as being sentient. Even a young child, not yet able to walk or do anything more than vomit on their parents’ new sofa, is considered as a conscious individual.

        So there is no reason to think that AI - whenever it should be achieved, if ever - will be conscious any more than the dumb computers that precede it.

    • jj4211@lemmy.world · 1↑ · 1 day ago

      And that’s pretty damn useful, but it’s obnoxious when expectations are set so wildly incorrectly.

  • Mniot@programming.dev · 40↑ 1↓ · 1 day ago

    I don’t think the article summarizes the research paper well. The researchers gave the AI models simple-but-large (which they confusingly called “complex”) puzzles. Like Towers of Hanoi but with 25 discs.

    The solution to these puzzles is nothing but patterns. You can write code that will solve the Tower puzzle for any size n and the whole program is less than a screen.
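    For a sense of how short that program is, here is one possible recursive sketch in Python (peg labels and disc counts are arbitrary):

    ```python
    def hanoi(n, source, target, spare, moves):
        """Append the moves that transfer n discs from source to target."""
        if n == 0:
            return
        hanoi(n - 1, source, spare, target, moves)  # move the top n-1 discs out of the way
        moves.append((source, target))              # move the largest disc
        hanoi(n - 1, spare, target, source, moves)  # re-stack the n-1 discs on top of it

    moves = []
    hanoi(10, "A", "C", "B", moves)
    print(len(moves))   # 1023, i.e. 2**10 - 1
    print(2**25 - 1)    # a 25-disc puzzle follows the same pattern for 33,554,431 moves
    ```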

    The problem the researchers see is that on these long, pattern-based solutions, the models follow a bad path and then just give up long before they hit their limit on tokens. The researchers don’t have an answer for why this is, but they suspect that the reasoning doesn’t scale.

  • minoscopede@lemmy.world · 68↑ 3↓ · 1 day ago

    I see a lot of misunderstandings in the comments 🫤

    This is a pretty important finding for researchers, and it’s not obvious by any means. This finding is not showing a problem with LLMs’ abilities in general. The issue they discovered is specifically for so-called “reasoning models” that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.

    Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that’s a flaw that needs to be corrected before models can actually reason.
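    As a toy illustration of that incentive gap (the scoring scheme and examples below are hypothetical, not taken from the paper or from any real training setup):

    ```python
    def outcome_reward(final_answer, correct_answer):
        """Score only the final answer; the reasoning trace is never looked at."""
        return 1.0 if final_answer == correct_answer else 0.0

    def process_reward(steps, step_checker, final_answer, correct_answer):
        """Also score the intermediate steps, so lucky guesses earn less than sound chains."""
        step_score = sum(step_checker(s) for s in steps) / max(len(steps), 1)
        return 0.5 * step_score + 0.5 * outcome_reward(final_answer, correct_answer)

    # A fabricated chain of thought that stumbles onto the right answer anyway
    steps = ["2 + 2 = 5", "therefore x = 4"]
    checker = lambda s: 0.0 if "2 + 2 = 5" in s else 1.0

    print(outcome_reward("x = 4", "x = 4"))                    # 1.0  - looks perfect
    print(process_reward(steps, checker, "x = 4", "x = 4"))    # 0.75 - penalised for the bad step
    ```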

    • Knock_Knock_Lemmy_In@lemmy.world · 21↑ 5↓ · 1 day ago

      When given explicit instructions to follow, models failed because they had not seen similar instructions before.

      This paper shows that there is no reasoning in LLMs at all, just extended pattern matching.

      • MangoCats@feddit.it · 6↑ 6↓ · 1 day ago

        I’m not trained or paid to reason, I am trained and paid to follow established corporate procedures. On rare occasions my input is sought to improve those procedures, but the vast majority of my time is spent executing tasks governed by a body of (not quite complete, sometimes conflicting) procedural instructions.

        If AI can execute those procedures as well as, or better than, human employees, I doubt employers will care if it is reasoning or not.

          • MangoCats@feddit.it · 3↑ 4↓ · 1 day ago

            Well - if you want to devolve into argument, you can argue all day long about “what is reasoning?”

            • technocrit@lemmy.dbzer0.com · 3↑ · 1 day ago

              This would be a much better paper if it addressed that question in an honest way.

              Instead they just parrot the misleading terminology that they’re supposedly debunking.

              How dat collegial boys club undermines science…

            • Knock_Knock_Lemmy_In@lemmy.world · 3↑ 2↓ · 24 hours ago

              You were starting a new argument. Let’s stay on topic.

              The paper implies “Reasoning” is application of logic. It shows that LRMs are great at copying logic but can’t follow simple instructions that haven’t been seen before.

    • technocrit@lemmy.dbzer0.com · 6↑ · 1 day ago

      There’s probably alot of misunderstanding because these grifters intentionally use misleading language: AI, reasoning, etc.

      If they stuck to scientifically descriptive terms, it would be much more clear and much less sensational.

    • REDACTED@infosec.pub · 13↑ 3↓ · 1 day ago

      What confuses me is that we seemingly keep pushing away what counts as reasoning. Not too long ago, some smart algorithms or a bunch of instructions for software (if/then) qualified officially, by definition, as software/computer reasoning. Logically, CPUs do it all the time. Suddenly, when AI is doing that with pattern recognition, memory and even more advanced algorithms, it’s no longer reasoning? I feel like at this point a more relevant question is “What exactly is reasoning?”. Before you answer, understand that most humans seemingly live by pattern recognition, not reasoning.

      https://en.wikipedia.org/wiki/Reasoning_system

      • stickly@lemmy.world · 6↑ 1↓ · 1 day ago

        If you want to boil down human reasoning to pattern recognition, the sheer amount of stimuli and associations built off of that input absolutely dwarfs anything an LLM will ever be able to handle. It’s like comparing PhD reasoning to a dog’s reasoning.

        While a dog can learn some interesting tricks and the smartest dogs can solve simple novel problems, there are hard limits. They simply lack strong metacognition and the ability to make simple logical inferences (e.g. why they fail at the shell game).

        Now we make that chasm even larger by cutting the stimuli to a fixed token limit. An LLM can do some clever tricks within that limit, but it’s designed to do exactly those tricks and nothing more. To get anything resembling human ability you would have to design something to match human complexity, and we don’t have the tech to make a synthetic human.

      • technocrit@lemmy.dbzer0.com · 2↑ 1↓ · 1 day ago

        Sure, these grifters are shady AF about their wacky definition of “reason”… But that’s just a continuation of the entire “AI” grift.

      • MangoCats@feddit.it · 2↑ 1↓ · 1 day ago

        I think as we approach the uncanny valley of machine intelligence, it’s no longer a cute cartoon but a menacing creepy not-quite imitation of ourselves.

    • theherk@lemmy.world · 20↑ 6↓ · 1 day ago

      Yeah these comments have the three hallmarks of Lemmy:

      • “AI is just autocomplete” mantras.
      • Apple is always synonymous with bad and dumb.
      • Rare pockets of really thoughtful comments.

      Thanks for at least being the last of those.

    • AbuTahir@lemm.ee (OP) · 4↑ 1↓ · 24 hours ago

      Cognitive scientist Douglas Hofstadter (1979) showed reasoning emerges from pattern recognition and analogy-making - abilities that modern AI demonstrably possesses. The question isn’t if AI can reason, but how its reasoning differs from ours.

    • Zacryon@feddit.org · 9↑ · 1 day ago

      Some AI researchers found it obvious as well, in the sense that they’d suspected it and had some indications. But it’s good to see more data on this to affirm that assessment.

      • jj4211@lemmy.world · 2↑ · 1 day ago

        Particularly to counter some more baseless marketing assertions about the nature of the technology.

      • kreskin@lemmy.world · 3↑ 6↓ · 1 day ago

        Lots of us who did some time in search and relevance early on knew ML was always largely breathless, overhyped marketing. It was endless buzzwords and misframing from the start, but it raised our salaries. Anything the execs don’t understand is profitable and worth doing.

        • wetbeardhairs@lemmy.dbzer0.com · 5↑ · 1 day ago

          Machine learning based pattern matching is indeed very useful and profitable when applied correctly. It can identify (with confidence levels) features in data that would otherwise take an extremely well-trained person to spot. And even then it’s just for the cursory search that takes the longest, before presenting the highest-confidence candidate results to a person for evaluation. Think: scanning medical data for indicators of cancer, reading live data from machines to predict failure, etc.

          And what we call “AI” right now is just a much much more user friendly version of pattern matching - the primary feature of LLMs is that they natively interact with plain language prompts.
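          A minimal sketch of that workflow (synthetic data, an arbitrary threshold, and scikit-learn as a stand-in; not a real medical or predictive-maintenance pipeline):

          ```python
          from sklearn.datasets import make_classification
          from sklearn.linear_model import LogisticRegression
          from sklearn.model_selection import train_test_split

          # Synthetic stand-in for "scan lots of data, surface high-confidence candidates for a human"
          X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
          X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

          model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
          probs = model.predict_proba(X_test)[:, 1]      # confidence for the positive class

          REVIEW_THRESHOLD = 0.9                         # arbitrary cut-off for this sketch
          flagged = [i for i, p in enumerate(probs) if p >= REVIEW_THRESHOLD]
          print(f"{len(flagged)} of {len(probs)} cases flagged for human review")
          ```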

        • Zacryon@feddit.org · 4↑ · 1 day ago

          Ragebait?

          I’m in robotics and find plenty of use for ML methods. Think of image classifiers, how do you want to approach that without oversimplified problem settings?
          Or even in control or coordination problems, which can sometimes become NP-hard. Even though they aren’t optimal, ML methods are quite solid at learning patterns in high-dimensional, NP-hard problem settings. In a computation-effort vs. solution-quality analysis they often outperform hand-crafted, conventional suboptimal solvers, and they especially outperform (asymptotically) optimal solvers time-wise, even if not with optimal solutions (“good enough” nevertheless). (OK, to be fair, suboptimal solvers do that as well, but since ML methods can outperform these, I see them as an attractive middle ground.)

    • Tobberone@lemm.ee · 5↑ 4↓ · 1 day ago

      What statistical method do you base that claim on? The results presented match expectations given that Markov chains are still the basis of inference. What magic juice is added to “reasoning models” that allow them to break free of the inherent boundaries of the statistical methods they are based on?

      • minoscopede@lemmy.world · 3↑ · 16 hours ago

        I’d encourage you to research more about this space and learn more.

        As it is, the statement “Markov chains are still the basis of inference” doesn’t make sense, because Markov chains are a separate thing. You might be thinking of Markov decision processes, which are used in training RL agents, but that’s also unrelated, because these models are not RL agents, they’re supervised learning agents. And even if they were RL agents, the MDP describes the training environment, not the model itself, so it’s not really used for inference.
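        To make the distinction concrete, a toy contrast (made-up tables and scorer, not how any production model works): a first-order Markov chain conditions only on the current token, while an autoregressive LM scores the next token against the entire preceding context.

        ```python
        import random

        # First-order Markov chain: the next word depends only on the current word.
        markov_table = {"the": ["cat", "dog"], "cat": ["sat"], "dog": ["ran"], "sat": ["."], "ran": ["."]}

        def markov_next(current_word):
            return random.choice(markov_table[current_word])

        # Autoregressive LM, schematically: the whole prefix is visible at every step.
        def lm_next(prefix_tokens, score_fn, vocab):
            return max(vocab, key=lambda tok: score_fn(prefix_tokens, tok))

        vocab = ["sat", "ran", "."]
        # Toy scorer that prefers "ran" if "dog" appeared anywhere earlier in the context.
        score = lambda prefix, tok: 1.0 if tok == "ran" and "dog" in prefix else 0.5 if tok == "sat" else 0.1

        print(markov_next("the"))                                     # only the word "the" matters here
        print(lm_next(["the", "dog", "then", "it"], score, vocab))    # "ran": long-range context matters
        ```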

        I mean this just as an invitation to learn more, and not pushback for raising concerns. Many in the research community would be more than happy to welcome you into it. The world needs more people who are skeptical of AI doing research in this field.

        • Tobberone@lemm.ee · 1↑ · 1 hour ago

          Which method, then, is the inference built upon, if not the embeddings? And the question still stands, how does “AI” escape the inherent limits of statistical inference?

  • melsaskca@lemmy.ca · 10↑ 1↓ · 1 day ago

    It’s all “one instruction at a time” regardless of high processor speeds and words like “intelligent” being bandied about. “Reason” discussions should fall into the same query bucket as “sentience”.

    • MangoCats@feddit.it · 3↑ · 1 day ago

      My impression of LLM training and deployment is that it’s actually massively parallel in nature - which can be implemented one instruction at a time - but isn’t in practice.
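      A small numpy sketch of that point: the same matrix product can be spelled out one scalar multiply-add at a time, or handed to a vectorised kernel that exploits the parallelism (sizes here are arbitrary):

      ```python
      import numpy as np

      A = np.random.rand(64, 64)
      B = np.random.rand(64, 64)

      # "One instruction at a time": a triple loop over scalar multiply-adds
      C_loop = np.zeros((64, 64))
      for i in range(64):
          for j in range(64):
              for k in range(64):
                  C_loop[i, j] += A[i, k] * B[k, j]

      # The same computation dispatched to a vectorised BLAS kernel, which
      # executes the independent multiply-adds in parallel under the hood
      C_fast = A @ B

      print(np.allclose(C_loop, C_fast))   # True: identical result, very different execution
      ```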

  • skisnow@lemmy.ca · 29↑ 3↓ · 1 day ago

    What’s hilarious/sad is the response to this article over on reddit’s “singularity” sub, in which all the top comments are people who’ve obviously never got all the way through a research paper in their lives all trashing Apple and claiming their researchers don’t understand AI or “reasoning”. It’s a weird cult.

    • technocrit@lemmy.dbzer0.com · 1↑ 1↓ · 1 day ago

      The funny thing about this “AI” griftosphere is how grifters will make some outlandish claim and then different grifters will “disprove” it. Plenty of grant/VC money for everybody.

    • jj4211@lemmy.world · 1↑ 1↓ · 1 day ago

      Without someone being explicit, with well-researched material, the marketing presentation gets to stand largely unopposed.

      So this is good even if most experts in the field consider it an obvious result.

  • mavu@discuss.tchncs.de · 66↑ 8↓ · 2 days ago

    No way!

    Statistical Language models don’t reason?

    But OpenAI, robots taking over!

  • RampantParanoia2365@lemmy.world · 25↑ 7↓ · 1 day ago

    Fucking obviously. Until Data’s positronic brain becomes reality, AI is not actual intelligence.

    “AI” is not A.I. I should make that a t-shirt.

  • Nanook@lemm.ee · 244↑ 13↓ · 2 days ago

    lol, is this news? I mean, we call it AI, but it’s just LLMs and variants; it doesn’t think.

      • kadup@lemmy.world · 58↑ 7↓ · 2 days ago

        Apple is significantly behind and arrived late to the whole AI hype, so of course it’s in their absolute best interest to keep showing how LLMs aren’t special or amazingly revolutionary.

        They’re not wrong, but the motivation is also pretty clear.

        • Optional@lemmy.world · 31↑ 2↓ · 2 days ago

          “Late to the hype” is actually a good thing. Gen AI is a scam wrapped in idiocy wrapped in a joke. That Apple is slow to ape the idiocy of microsoft is just fine.

        • Venator@lemmy.nz · 6↑ · 2 days ago

          Apple always arrives late to any new tech, doesn’t mean they haven’t been working on it behind the scenes for just as long though…

        • MCasq_qsaCJ_234@lemmy.zip · 14↑ 2↓ · 2 days ago

          They need to convince investors that this delay wasn’t due to incompetence. That argument will only be somewhat effective as long as there isn’t an innovation that makes AI more effective.

          If that happens, Apple shareholders will, at best, ask the company to increase investment in that area or, at worst, to restructure the company, which could also mean a change in CEO.

        • dubyakay@lemmy.ca · 13↑ 2↓ · 2 days ago

          Maybe they are so far behind because they jumped on the same train but then failed at achieving what they wanted based on the claims. And then they started digging around.

          • Clent@lemmy.dbzer0.com · 13↑ 2↓ · 2 days ago

            Yes, Apple haters can’t admit nor understand it but Apple doesn’t do pseudo-tech.

            They may do silly things, they may love their 100% markup, but it’s all real technology.

            The AI pushers of today are akin to the pushers of paranormal phenomena from a century ago. These pushers want us to believe, need us to believe, so they can get us addicted and extract value from our very existence.

    • Clent@lemmy.dbzer0.com · 19↑ · 2 days ago

      Proving it matters. Science constantly has to prove things that people believe are obvious, because people have an uncanny ability to believe things that are false. Some people will believe things long after science has proven them false.

      • Eatspancakes84@lemmy.world · 3↑ · 1 day ago

        I mean… “proving” is also just marketing speak. There is no clear definition of reasoning, so there’s also no way to prove or disprove that something/someone reasons.

        • Clent@lemmy.dbzer0.com · 3↑ 1↓ · 23 hours ago

          Claiming it’s just marketing fluff indicates you do not know what you’re talking about.

          They published a research paper on it. You are free to publish your own paper disproving theirs.

          At the moment, you sound like one of those “I did my own research” people except you didn’t even bother doing your own research.

          • Eatspancakes84@lemmy.world · 2↑ · 8 hours ago

            You misunderstand. I do not take issue with anything that’s written in the scientific paper. What I take issue with is how the paper is marketed to the general public. When you read the article you will see that it does not claim to “prove” that these models cannot reason. It merely points out some strengths and weaknesses of the models.

    • JohnEdwa@sopuli.xyz · 27↑ 22↓ · 2 days ago

      "It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’." -Pamela McCorduck´.
      It’s called the AI Effect.

      As Larry Tesler puts it, “AI is whatever hasn’t been done yet.”.

      • kadup@lemmy.world · 19↑ 2↓ · 2 days ago

        That entire paragraph is much better at supporting the precise opposite argument. Computers can beat Kasparov at chess, but they’re clearly not thinking when making a move - even if we use the most open biological definitions for thinking.

        • cyd@lemmy.world · 7↑ 2↓ · 2 days ago

          By that metric, you can argue Kasparov isn’t thinking during chess, either. A lot of human chess “thinking” is recalling memorized openings, evaluating positions many moves deep, and other tasks that map to what a chess engine does. Of course Kasparov is thinking, but then you have to conclude that the AI is thinking too. Thinking isn’t a magic process, nor is it tightly coupled to human-like brain processes as we like to think.

          • kadup@lemmy.world · 3↑ 1↓ · 1 day ago

            By that metric, you can argue Kasparov isn’t thinking during chess

            Kasparov’s thinking fits pretty much all biological definitions of thinking. Which is the entire point.

        • Grimy@lemmy.world · 19↑ 5↓ · 2 days ago

          No, it shows how certain people misunderstand the meaning of the word.

          You have called NPCs in video games “AI” for a decade, yet you were never implying they were somehow intelligent. The whole argument is strangely inconsistent.

          • Clent@lemmy.dbzer0.com · 5↑ 4↓ · 2 days ago

            Intelligence has a very clear definition.

            It requires the ability to acquire knowledge, understand knowledge, and use knowledge.

            No one has been able to create a system that can understand knowledge, therefore none of it is artificial intelligence. Each generation is merely a more and more complex knowledge model. Useful in many ways, but never intelligent.

            • 8uurg@lemmy.world · 4↑ · 1 day ago

              Wouldn’t the algorithm that creates these models in the first place fit the bill? Given that it takes a bunch of text data, and manages to organize this in such a fashion that the resulting model can combine knowledge from pieces of text, I would argue so.

              What is understanding knowledge anyways? Wouldn’t humans not fit the bill either, given that for most of our knowledge we do not know why it is the way it is, or even had rules that were - in hindsight - incorrect?

              If a model is more capable of solving a problem than an average human being, isn’t it, in its own way, some form of intelligent? And, to take things to the utter extreme, wouldn’t evolution itself be intelligent, given that it causes intelligent behavior to emerge, for example, viruses adapting to external threats? What about an (iterative) optimization algorithm that finds solutions that no human would be able to find?

              Intelligence has a very clear definition.

              I would disagree; it is probably one of the hardest things to define out there, it has changed greatly over time, and it is core to the study of philosophy. Every time a being or thing fits a definition of intelligence, the definition is often altered to exclude it, as has been done many times.

            • Grimy@lemmy.world · 3↑ 1↓ · 1 day ago

              Dog has a very clear definition, so when you call a sausage in a bun a “Hot Dog”, you are actually a fool.

              Smart has a very clear definition, so no, you do not have a “Smart Phone” in your pocket.

              Also, that is not the definition of intelligence. But the crux of the issue is that you are making up a definition for AI that suits your needs.

              • Clent@lemmy.dbzer0.com · 1↑ 2↓ · 23 hours ago

                Misconstruing how language works isn’t an argument for what an existing and established word means.

                I’m sure that argument made you feel super clever but it’s nonsense.

                I sourced my definition from authoritative sources. The fact that you didn’t even bother to verify that or provide an alternative authoritative definition tells me all I need to know about the value in further discussion with you.

                • Grimy@lemmy.world · 1↑ · 20 hours ago

                  "Artificial intelligence refers to computer systems that can perform complex tasks normally done by human-reasoning, decision making, creating, etc.

                  There is no single, simple definition of artificial intelligence because AI tools are capable of a wide range of tasks and outputs, but NASA follows the definition of AI found within EO 13960, which references Section 238(g) of the National Defense Authorization Act of 2019.

                  • Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.
                  • An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
                  • An artificial system designed to think or act like a human, including cognitive architectures and neural networks.
                  • A set of techniques, including machine learning that is designed to approximate a cognitive task.
                  • An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision-making, and acting."

                  This is from NASA (emphasis mine). https://www.nasa.gov/what-is-artificial-intelligence/

                  The problem is that you are reading the word intelligence and thinking it means the system itself needs to be intelligent, when it only needs to be doing things that we would normally attribute to intelligence. Computer vision is AI, but software that detects a car inside a picture and draws a box around it isn’t intelligent. It is still considered AI and has been considered AI for the past three decades.

                  Now show me your blog post that told you that AI isn’t AI because it isn’t thinking.

          • technocrit@lemmy.dbzer0.com · 4↑ 8↓ · 2 days ago

            Who is “you”?

            Just because some dummies supposedly think that NPCs are “AI”, that doesn’t make it so. I don’t consider checkers to be a litmus test for “intelligence”.

            • Grimy@lemmy.world · 11↑ 2↓ · 2 days ago

              “You” applies to anyone who doesn’t understand what AI means. It’s an umbrella term for a lot of things.

              NPCs ARE AI. AI doesn’t mean “human-level intelligence” and never did. Read the wiki if you need help understanding.

      • technocrit@lemmy.dbzer0.com · 16↑ 9↓ · 2 days ago

        I’m going to write a program to play tic-tac-toe. If y’all don’t think it’s “AI”, then you’re just haters. Nothing will ever be good enough for y’all. You want scientific evidence of intelligence?!?! I can’t even define intelligence so take that! \s

        Seriously tho. This person is arguing that a checkers program is “AI”. It kinda demonstrates the loooong history of this grift.

        • JohnEdwa@sopuli.xyz · 16↑ 2↓ · 2 days ago

          It is. And has always been. “Artificial Intelligence” doesn’t mean a feeling, thinking robot person (that would fall under AGI or artificial consciousness); it’s a vast field of research in computer science with many, many things under it.

          • Endmaker@ani.social · 9↑ 5↓ · 2 days ago

            ITT: people who obviously did not study computer science or AI at at least an undergraduate level.

            Y’all are too patient. I can’t be bothered to spend the time to give people free lessons.

            • antonim@lemmy.dbzer0.com · 5↑ 2↓ · 2 days ago

              Wow, I would deeply apologise on behalf of all of us uneducated proles having opinions on stuff that we’re bombarded with daily through the media.

            • Clent@lemmy.dbzer0.com · 3↑ 3↓ · 2 days ago

              The computer science industry isn’t the authority on artificial intelligence it thinks it is. The industry is driven by a level of hubris that causes people to step beyond the bounds of science and into the realm of humanities without acknowledgment.

        • LandedGentry@lemmy.zip · 7↑ 2↓ · 2 days ago

          Yeah that’s exactly what I took from the above comment as well.

          I have a pretty simple bar: until we’re debating the ethics of turning it off or otherwise giving it rights, it isn’t intelligent. No it’s not scientific, but it’s a hell of a lot more consistent than what all the AI evangelists espouse. And frankly if we’re talking about the ethics of how to treat something we consider intelligent, we have to go beyond pure scientific benchmarks anyway. It becomes a philosophy/ethics discussion.

          Like crypto it has become a pseudo religion. Challenges to dogma and orthodoxy are shouted down, the non-believers are not welcome to critique it.

      • vala@lemmy.world · 8↑ 4↓ · 2 days ago

        Yesterday I asked an LLM “how much energy is stored in a grand piano?” It responded by saying there is no energy stored in a grand piano because it doesn’t have a battery.

        Any reasoning human would have understood that question to be referring to the tension in the strings.

        Another example is asking “does lime cause kidney stones?”. It didn’t assume I meant lime the mineral and went with lime the citrus fruit instead.

        Once again a reasoning human would assume the question is about the mineral.

        Ask these questions again in a slightly different way and you might get a correct answer, but it won’t be because the LLM was thinking.

        • postmateDumbass@lemmy.world · 9↑ · 2 days ago

            Honestly, I thought about the chemical energy in the materials constructing the piano and what energy burning it would release.

          • xthexder@l.sw0.com · 6↑ · 2 days ago

            The tension of the strings would actually store a pretty minuscule amount of energy too. Since there’s very little stretch to a piano wire, the force might be high, but the potential energy/work done to tension the wire is low (it’s done by hand with a wrench).

            Compared to burning a piece of wood, which would release orders of magnitude more energy.
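            Rough numbers, with assumed figures (per-string tension, stretch, string count, and the piano’s combustible mass are all order-of-magnitude guesses):

            $$E_{\text{strings}} \approx N \cdot \tfrac{1}{2} F \,\Delta x \approx 230 \times \tfrac{1}{2} \times 700\,\mathrm{N} \times 0.003\,\mathrm{m} \approx 2 \times 10^{2}\,\mathrm{J}$$
            $$E_{\text{burning}} \approx m \cdot \Delta h_{\text{wood}} \approx 200\,\mathrm{kg} \times 1.6 \times 10^{7}\,\mathrm{J/kg} \approx 3 \times 10^{9}\,\mathrm{J}$$

            So the two are roughly seven orders of magnitude apart, consistent with the point above.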

        • xthexder@l.sw0.com · 9↑ 1↓ · 2 days ago

          I’m not sure how you arrived at lime the mineral being a more likely question than lime the fruit. I’d expect someone asking about kidney stones would also be asking about foods that are commonly consumed.

          This kind of just goes to show there are multiple ways something can be interpreted. Maybe a smart human would ask for clarification, but for sure AIs today will just happily spit out the first answer that comes up. LLMs are extremely “good” at making up answers to leading questions, even if they’re completely false.

          • Knock_Knock_Lemmy_In@lemmy.world · 2↑ · 1 day ago

            A well-trained model should consider both types of lime. Failure is likely down to temperature and other model settings. This is not a measure of intelligence.
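            A minimal sketch of what the temperature setting does (the logits below are invented for illustration):

            ```python
            import numpy as np

            def temperature_probs(logits, temperature):
                """Lower temperature sharpens the distribution; higher temperature flattens it."""
                scaled = np.array(logits) / temperature
                probs = np.exp(scaled - scaled.max())
                return probs / probs.sum()

            logits = [2.0, 1.5, 0.3]   # hypothetical scores for "citrus fruit", "mineral", "other"
            print(temperature_probs(logits, 0.2).round(2))   # ~[0.92, 0.08, 0.00]: near-greedy, one sense dominates
            print(temperature_probs(logits, 1.5).round(2))   # ~[0.49, 0.35, 0.16]: flatter, both senses stay in play
            ```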

          • JohnEdwa@sopuli.xyz · 2↑ 1↓ · 1 day ago

            Making up answers is kinda their entire purpose. LLMs are fundamentally just text generation algorithms; they are designed to produce text that looks like it could have been written by a human. Which they are amazing at, especially when you start taking into account how many paragraphs of instructions you can give them, and which they tend to follow rather successfully.

            The one thing they can’t do is verify whether what they are talking about is true, as it’s all just slapping words together using probabilities. If they could, they would stop being LLMs and start being AGIs.

        • antonim@lemmy.dbzer0.com · 7↑ 2↓ · 2 days ago

          But 90% of “reasoning humans” would answer just the same. Your questions are based on some non-trivial knowledge of physics, chemistry and medicine that most people do not possess.

    • Melvin_Ferd@lemmy.world · 14↑ 26↓ · 2 days ago

      This is why I say these articles are so similar to how right wing media covers issues about immigrants.

      There’s some weird media push to convince the left to hate AI. Think of all the headlines for these issues. There are so many similarities. They’re taking jobs. They are a threat to our way of life. The headlines talk about how they will sexually assault your wife, your children, you. Threats to the environment. There are articles like this where they take something known and twist it to make it sound nefarious, to keep the story alive and avoid decay of interest.

      Then when they pass laws, we’re all primed to accept them removing whatever it is that advantages them and disadvantages us.

      • technocrit@lemmy.dbzer0.com · 9↑ 4↓ · 2 days ago

        This is why I say these articles are so similar to how right wing media covers issues about immigrants.

        Maybe the actual problem is people who equate computer programs with people.

        Then when they pass laws, we’re all primed to accept them removing whatever it is that advantages them and disadvantages us.

        You mean laws like this? jfc.

        https://www.inc.com/sam-blum/trumps-budget-would-ban-states-from-regulating-ai-for-10-years-why-that-could-be-a-problem-for-everyday-americans/91198975

        • Melvin_Ferd@lemmy.world · 2↑ 3↓ · 2 days ago

          Literally what I’m talking about. They have been pushing anti-AI propaganda to alienate the left from embracing it while the right embraces it. You have such a blind spot about this, you can’t even see you’re making my argument for me.

          • antonim@lemmy.dbzer0.com · 4↑ 1↓ · 2 days ago

            That depends on your assumption that the left would have anything relevant to gain by embracing AI (whatever that’s actually supposed to mean).

            • Melvin_Ferd@lemmy.world · 2↑ 3↓ · 2 days ago

              What isn’t there to gain?

              Its power lies in ingesting language and producing infinite variations. We can feed it talking points, ask it to refine our ideas, test their logic, and even request counterarguments to pressure-test our stance. It helps us build stronger, more resilient narratives.

              We can use it to make memes. Generate images. Expose logical fallacies. Link to credible research. It can detect misinformation in real-time and act as a force multiplier for anyone trying to raise awareness or push back on disinfo.

              Most importantly, it gives a voice to people with strong ideas who might not have the skills or confidence to share them. Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

              Sure, it has flaws. But rejecting it outright while the right embraces it? That’s beyond shortsighted; it’s self-sabotage. And unfortunately, after the last decade, that kind of misstep is par for the course.

              • antonim@lemmy.dbzer0.com · 5↑ 1↓ · 2 days ago

                I have no idea what sort of AI you’ve used that could do any of this stuff you’ve listed. A program that doesn’t reason won’t expose logical fallacies with any rigour or refine anyone’s ideas. It will link to credible research that you could already find on Google, but will also add some hallucinations to the summary. And so on; it’s completely divorced from how the stuff currently works.

                Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

                That’s a misguided view of how art is created. Supposed “brilliant ideas” are a dime a dozen; it takes brilliant writers and artists to make them real. Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept. If you are not competent in a visual medium, then don’t make it visual, write a story or an essay.

                Besides, most of the popular and widely shared webcomics out there are visually extremely simple or just bad (look at SMBC or xkcd or - for a right-wing example - Stonetoss).

                For now I see no particular benefits that the right-wing has obtained by using AI either. They either make it feed back into their delusions, or they whine about the evil leftists censoring the models (by e.g. blocking its usage of slurs).

                • Melvin_Ferd@lemmy.world · 2↑ 7↓ · 2 days ago

                  Here is chatgpt doing what you said it can’t. Finding all the logical fallacies in what you write:

                  You’re raising strong criticisms, and it’s worth unpacking them carefully. Let’s go through your argument and see if there are any logical fallacies or flawed reasoning.


                  1. Straw Man Fallacy

                  “Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept.”

                  This misrepresents the original claim:

                  “AI can help create a framework at the very least so they can get their ideas down.”

                  The original point wasn’t that AI could replace the entire creative process or make a comic successful on its own—it was that it can assist people in starting or visualizing something they couldn’t otherwise. Dismissing that by shifting the goalposts to “producing a full, good comic” creates a straw man of the original claim.


                  2. False Dichotomy

                  “If you are not competent in a visual medium, then don’t make it visual, write a story or an essay.”

                  This suggests a binary: either you’re competent at visual art or you shouldn’t try to make anything visual. That’s a false dichotomy. People can learn, iterate, or collaborate, and tools like AI can help bridge gaps in skill—not replace skill, but allow exploration. Many creators use tools before mastery (e.g., musicians using GarageBand, or writers using Grammarly).


                  3. Hasty Generalization

                  “Supposed ‘brilliant ideas’ are a dime a dozen…”

                  While it’s true that execution matters more than ideas alone, dismissing the value of ideas altogether is an overgeneralization. Many successful works do start with a strong concept—and while many fail in execution, tools that lower the barrier to prototyping or drafting can help surface more workable ideas. The presence of many bad ideas doesn’t invalidate the potential value of enabling more people to test theirs.


                  4. Appeal to Ridicule / Ad Hominem (Light)

                  “…result in a boring comic…” / “…just bad (look at SMBC or xkcd or…)”

                  Criticizing popular webcomics like SMBC or xkcd by calling them “bad” doesn’t really support your broader claim. These comics are widely read because of strong writing and insight, despite minimalistic visuals. It comes off as dismissive and ridicules the counterexamples rather than engaging with them. That’s not a logical fallacy in the strictest sense, but it’s rhetorically weak.


                  5. Tu Quoque / Whataboutism (Borderline)

                  “For now I see no particular benefits that the right-wing has obtained by using AI either…”

                  This seems like a rebuttal to a point that wasn’t made directly. The original argument wasn’t that “the right is winning with AI,” but rather that alienating the left from it could lead to missed opportunities. Refuting a weaker version (e.g., “the right is clearly winning with AI”) isn’t addressing the original concern, which was more about strategic adoption.


                  Summary of Fallacies Identified:

                  Straw Man: Misrepresents the role of AI in creative assistance.
                  False Dichotomy: Assumes one must either be visually skilled or not attempt visual media.
                  Hasty Generalization: Devalues “brilliant ideas” universally.
                  Appeal to Ridicule: Dismisses counterexamples via mocking tone rather than analysis.
                  Tu Quoque-like: Compares left vs. right AI use without addressing the core point about opportunity.


                  Your criticism is thoughtful and not without merit—but it’s wrapped in rhetoric that sometimes slips into oversimplification or misrepresentation of the opposing view. If your goal is to strengthen your argument or have a productive back-and-forth, refining those areas could help. Would you like to rewrite it in a way that keeps the spirit of your critique but sharpens its logic?

                  At this point you’re just arguing for argument’s sake. You’re not wrong or right, but instead muddying things. Saying it’ll produce boring comics misses the entire point. Saying it’s the same as Google is pure ignorance of what it can do. But this goes to my point about how this stuff is all similar to the anti-immigrant mentality. The people who buy into it will make these kinds of ignorant and shortsighted statements just to prove things that just are not true. But they’ve bought into the hype and need to justify it.

      • hansolo@lemmy.today · 10↑ 11↓ · 2 days ago

        Because it’s a fear-mongering angle that still sells. AI has been a vehicle for sci-fi for so long that trying to convince Boomers that it won’t kill us all is the hard part.

        I’m a moderate user of it for code and a skeptic of LLM abilities, but 5 years from now, when we are leveraging ML models for groundbreaking science and haven’t been nuked by Skynet, all of this will look quaint and silly.

  • Communist@lemmy.frozeninferno.xyz · 13↑ 2↓ · 1 day ago

    I think it’s important to note (I’m not an LLM, I know that phrase triggers you to assume I am) that they haven’t proven this is an inherent architectural issue, which I think would be the next step for the assertion.

    Do we know that they don’t reason and are incapable of it, or do we just know that for certain problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don’t? That’s the big question that needs answering. It’s still possible that we just haven’t properly incentivized reasoning over memorization during training.

    If someone can objectively answer “no” to that, the bubble collapses.

    • MouldyCat@feddit.uk · 3↑ · 18 hours ago

      In case you haven’t seen it, the paper is here - https://machinelearning.apple.com/research/illusion-of-thinking (PDF linked on the left).

      The puzzles the researchers have chosen are spatial and logical reasoning puzzles - so certainly not the natural domain of LLMs. The paper unfortunately doesn’t give a clear definition of reasoning; I think I might surmise it as “analysing a scenario and extracting rules that allow you to achieve a desired outcome”.

      They also don’t provide the prompts they use - not even for the cases where they say they provide the algorithm in the prompt, which makes that aspect less convincing to me.

      What I did find noteworthy was how the models were able to provide around 100 steps correctly for larger Tower of Hanoi problems, but only 4 or 5 correct steps for larger River Crossing problems. I think the River Crossing problem is like the one where you have a boatman who wants to get a fox, a chicken and a bag of rice across a river, but can only take two in his boat at one time? In any case, the researchers suggest that this could be because there will be plenty of examples of Towers of Hanoi with larger numbers of disks, while not so many examples of the River Crossing with a lot more than the typical number of items being ferried across. This being more evidence that the LLMs (and LRMs) are merely recalling examples they’ve seen, rather than genuinely working them out.
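      For reference, the optimal Tower of Hanoi solution length grows exponentially with the number of discs, so even a modest instance demands a long but completely regular move sequence:

      $$M(n) = 2^{n} - 1, \qquad M(7) = 127, \quad M(10) = 1023, \quad M(20) = 1{,}048{,}575$$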

    • Knock_Knock_Lemmy_In@lemmy.world · 4↑ 1↓ · 1 day ago

      do we know that they don’t and are incapable of reasoning.

      “even when we provide the algorithm in the prompt—so that the model only needs to execute the prescribed steps—performance does not improve”

      • Communist@lemmy.frozeninferno.xyz
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        1
        ·
        edit-2
        1 day ago

        That indicates that this particular model does not follow instructions, not that it is architecturally fundamentally incapable.

        • Knock_Knock_Lemmy_In@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          1
          ·
          1 day ago

          Not “this particular model”. Frontier LRMs such as OpenAI’s o1/o3, DeepSeek-R1, Claude 3.7 Sonnet Thinking, and Gemini Thinking.

          The paper shows that Large Reasoning Models as defined today cannot interpret instructions. Their architecture does not allow it.

          • Communist@lemmy.frozeninferno.xyz
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            edit-2
            1 day ago

            Those particular models. It does not prove the architecture doesn’t allow it at all. It’s still possible that this is solvable with a different training technique, and none of those models are using the right one. That’s what they would need to prove wrong.

            This proves the issue is widespread, not fundamental.

            • 0ops@lemm.ee
              link
              fedilink
              English
              arrow-up
              3
              ·
              1 day ago

              Is “model” not defined as architecture+weights? Those models certainly don’t share the same architecture. I might just be confused about your point though

              • Communist@lemmy.frozeninferno.xyz
                link
                fedilink
                English
                arrow-up
                2
                ·
                edit-2
                1 day ago

                It is, but this did not prove all architectures cannot reason, nor did it prove that all sets of weights cannot reason.

                Essentially, they did not prove the issue is fundamental. And those models have pretty similar architectures: they’re all transformers trained in a similar way. I would not say they have different architectures.

            • Knock_Knock_Lemmy_In@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              arrow-down
              1
              ·
              24 hours ago

              The architecture of these LRMs may make monkeys fly out of my butt. It hasn’t been proven that the architecture doesn’t allow it.

              You are asking us to prove a negative. The onus is on showing that the architecture can reason, not on proving that it can’t.

              • Communist@lemmy.frozeninferno.xyz
                link
                fedilink
                English
                arrow-up
                2
                ·
                edit-2
                22 hours ago

                That’s very true; I’m just saying this paper did not eliminate the possibility, and is thus not as significant as it sounds. If they had accomplished that, the bubble would collapse; as it stands, this will not meaningfully change anything.

                Also, it’s not as unreasonable as that, because these are automatically assembled bundles of simulated neurons.

                • Knock_Knock_Lemmy_In@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  20 hours ago

                  This paper does provide a solid proof by counterexample that reasoning (following an algorithm) does not occur when it should.

                  The paper doesn’t need to prove that reasoning never has or never will occur. It only demonstrates that current claims of AI reasoning are overhyped.

  • GaMEChld@lemmy.world
    link
    fedilink
    English
    arrow-up
    41
    arrow-down
    21
    ·
    2 days ago

    Most humans don’t reason. They just parrot shit too. The design is very human.

    • El Barto@lemmy.world
      link
      fedilink
      English
      arrow-up
      31
      arrow-down
      5
      ·
      2 days ago

      LLMs deal with tokens. Essentially, predicting a series of bytes.

      Humans do much, much, much, much, much, much, much more than that.
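
      To make “predicting the next token” concrete, here’s a toy sketch built on plain bigram counts. It is only an illustration of the objective; a real LLM replaces the count table with billions of learned transformer weights over a huge token vocabulary, but the training goal is still next-token prediction.

      ```python
      # A toy illustration (nothing like a real LLM's scale or architecture) of
      # next-token prediction: count which word follows which, then keep
      # emitting the most frequent successor.
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat and the cat slept on the mat".split()

      # Bigram counts: for each word, how often each other word comes directly after it.
      following = defaultdict(Counter)
      for current, nxt in zip(corpus, corpus[1:]):
          following[current][nxt] += 1

      def generate(start, length=6):
          """Greedily append the most frequent successor at every step."""
          out = [start]
          for _ in range(length):
              successors = following.get(out[-1])
              if not successors:
                  break
              out.append(max(successors, key=successors.get))  # ties go to the word seen first
          return " ".join(out)

      print(generate("the"))  # -> "the cat sat on the cat sat"
      ```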

    • skisnow@lemmy.ca
      link
      fedilink
      English
      arrow-up
      12
      arrow-down
      4
      ·
      1 day ago

      I hate this analogy. As a throwaway whimsical quip it’d be fine, but it’s specious enough that I keep seeing it used earnestly by people who think that LLMs are in any way sentient or conscious, so it’s lowered my tolerance for it as a topic even if you did intend it flippantly.

      • GaMEChld@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        3 hours ago

        I don’t mean it to extol LLMs, but rather to denigrate humans. How many of us are self-imprisoned in echo chambers so we can have our feelings validated, avoiding the uncomfortable work of thinking critically and perhaps changing our viewpoints?

        Humans have the ability to actually think, unlike LLMs. But it’s frightening how far we’ll go to make sure we don’t.

    • joel_feila@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      1
      ·
      2 days ago

      That’s why CEOs love them. When your job is 90% spewing BS, a machine that does exactly that is impressive.

    • SpaceCowboy@lemmy.ca
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      9
      ·
      2 days ago

      Yeah, I’ve always said the flaw in Turing’s Imitation Game concept is that even if an AI were indistinguishable from a human, it wouldn’t prove it’s intelligent. Because humans are dumb as shit. Dumb enough to force one of the smartest people in the world to take a ton of drugs, which eventually killed him, simply because he was gay.

      • crunchy@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        10
        ·
        2 days ago

        I’ve heard something along the lines of, “it’s not when computers can pass the Turing Test, it’s when they start failing it on purpose that’s the real problem.”

      • jnod4@lemmy.ca
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        1
        ·
        2 days ago

        I think that person had to choose between the drugs or hardcore prison in 1950s England, where being a bit odd was enough to guarantee an incredibly difficult time, as they say in England. I would’ve chosen the drugs as well, hoping they would fix me. Too bad that without testosterone you’re going to be suicidal and depressed. I’d rather choose to keep my hair than to be horny all the time.

      • Zenith@lemm.ee
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        4
        ·
        2 days ago

        Yeah, we’re so stupid we’ve figured out advanced maths and physics, and built incredible skyscrapers and the LHC. We may as individuals be more or less intelligent, but humans as a whole are incredibly intelligent.