• shnizmuffin@lemmy.inbutts.lol · ↑ 51 · 4 days ago

    If I asked a PhD, “How many Bs are there in the word ‘blueberry’?” they’d call an ambulance for my obvious, severe concussion. They wouldn’t answer, “There are three Bs in the word blueberry! I know, it’s super tricky!”

    • panda_abyss@lemmy.ca · ↑ 6 · 3 days ago (edited)

      I don’t feel this is a good example of why LLMs shouldn’t be treated like PhDs.

      My first interactions with GPT-5 have been pretty awful, and I’d test it, but it’s not available to me anymore.

      Edit: I am not having a stroke; I’m bad at typing and autocorrect hates me.

    • GissaMittJobb@lemmy.ml · ↑ 4 ↓ 10 · 4 days ago

      LLMs are fundamentally unsuitable for character counting on account of how they ‘see’ the world: as a sequence of tokens, which can split words in non-intuitive ways.
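
      As a rough illustration of that token-level view, here is a minimal sketch assuming OpenAI’s tiktoken library and its cl100k_base encoding (other models use different tokenizers, so the exact split will vary):

      ```python
      # Sketch: inspect how a BPE tokenizer segments a word.
      # Assumes `pip install tiktoken`; cl100k_base is just one example encoding.
      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")
      tokens = enc.encode("blueberry")

      # The model operates on these integer IDs, not on letters, so the word
      # may arrive as one token or as several sub-word chunks.
      print(tokens)
      print([enc.decode([t]) for t in tokens])
      ```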

      Regular programs already excel at counting characters in words, and LLMs can be used to generate such programs with ease.
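
      And for the counting itself, a plain program is trivially reliable. A minimal sketch (the function name is just an illustration, not any particular library’s API):

      ```python
      # Sketch: deterministic character counting, no LLM involved.
      def count_char(word: str, char: str) -> int:
          """Count case-insensitive occurrences of a single character."""
          return word.lower().count(char.lower())

      print(count_char("blueberry", "b"))  # 2
      ```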

        • GissaMittJobb@lemmy.ml · ↑ 9 · 4 days ago

          This is true. They do not think, because they are next-token predictors, not brains.

          With this in mind, you can still harness a few useful properties from them. Nothing like the kind of hype the techbros and VCs imagine, but a few moderately beneficial use cases exist.

          • itslilith@lemmy.blahaj.zone · ↑ 8 · 4 days ago

            Without a doubt. But PhD-level thinking requires a kind of introspection that LLMs (currently) just don’t have. And the letter-counting thing is a funny example of that inaccuracy.

      • chaos@beehaw.org · ↑ 3 · 3 days ago

        Tokenization is a low-level implementation detail; it shouldn’t affect an LLM’s ability to do basic reasoning. We don’t do arithmetic by counting how many neurons we can feel firing in our brains; we have higher-level concepts of numbers, and LLMs are supposed to have something similar. Plus, in the """thinking""" models, you’ll see them break words up into individual letters or even write them out in a numbered list, which should split the tokens into individual letters as well.
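
        That spell-it-out trick is essentially the same enumerate-then-count procedure you could write in a few lines of ordinary code; a sketch of the idea (not anything a model actually executes):

        ```python
        # Sketch: "write the letters out, then count" done explicitly,
        # mirroring what a "thinking" trace does in prose.
        word = "blueberry"
        for i, letter in enumerate(word, start=1):
            print(f"{i}. {letter}")

        print("b count:", sum(1 for letter in word if letter == "b"))  # 2
        ```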