ChatGPT has meltdown and starts sending alarming messages to users: the AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them

  • platypus_plumba@lemmy.world · 2 years ago (edited)

    Is an octopus intelligent? Can an octopus build an airplane?

    Why do you expect these models to have human skills if they are not humans?

    How can they build a house if they don’t even have vision or a physical body? Can a paralyzed human who can only hear and speak build a house? Is that human intelligent?

    This is clearly not human intelligence, and it clearly lacks human skills. Does that mean it isn’t intelligent and has no skills?

    • Coreidan@lemmy.world · 2 years ago (edited)

      Exactly. They are just “models”. There is nothing intelligent about them.

      Yes, octopuses are very intelligent. They can think their way out of a box without relying on curated data to train them.

      Logic, reasoning, and deduction. LLMs have zero ability to reject data based on their understanding of reality. Big diff.

      • platypus_plumba@lemmy.world · 2 years ago (edited)

        You keep asking questions like “can a model build a house?” while ignoring questions like “can an octopus build a house?” Then you ask “can a model learn in seconds how to escape from a complex enclosure?” while ignoring “can a newborn human baby do that?”

        Can an octopus write a poem? Can a baby write an essay? Can an adult human speak every human language, including fictional languages?

        Just because it isn’t as intelligent as a human doesn’t mean this isn’t some type of intelligence.

        Go and check what we call AI in videogames. Do you think that’s a simulated human? Go see what we’ve been calling AI in chess. Is that a simulated human being playing chess? No.

        We’ve been calling things that are waaaaaay dumber than GPTs “artificial intelligence” for decades, even in academia. Suddenly a group of people decided that “artificial intelligence must be equal to human intelligence”. Nope.

        Intelligence doesn’t need to be the same type as human intelligence.

      • QuaternionsRock@lemmy.world · 2 years ago

        LLMs have zero ability to reject data based on their understanding of reality.

        Me:

        I just found a proof that 2+2=5!

        ChatGPT:

        The statement “2 + 2 = 5” is a classic example of a mathematical fallacy or incorrect reasoning, often used for humorous or illustrative purposes. In standard arithmetic, of course, 2 + 2 = 4. Proofs claiming to show otherwise rely on some form of invalid manipulation of algebraic properties or deliberate arithmetic mistakes.

        A common “proof” of 2 + 2 = 5 might go something like this:

        1. Start with an incorrect assumption or manipulate the algebraic steps in a way that hides a division by zero or another illegal operation.
        2. Through these manipulations, arrive at the conclusion that 2 + 2 = 5.

        However, each step of a valid mathematical proof must adhere strictly to the rules and properties of mathematics. Any proof leading to 2 + 2 = 5 will contain an error or a misapplication of those rules.

        If you have a specific “proof” in mind, I can help identify the error in it.
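
The kind of manipulation ChatGPT alludes to is a well-known classroom fallacy. For illustration only (the variables a and b and the specific steps below are not from the original exchange), one common version runs roughly like this, with the hidden division by zero flagged in step 5:

1. Assume a = b.
2. Multiply both sides by a: a² = ab.
3. Subtract b² from both sides: a² − b² = ab − b².
4. Factor both sides: (a + b)(a − b) = b(a − b).
5. Divide both sides by (a − b) to get a + b = b. This step is invalid: since a = b, a − b = 0, and dividing by zero is not allowed.
6. With a = b = 1, the result reads 2 = 1; adding 3 to both sides gives 5 = 4, i.e. 2 + 2 = 5.

Every step except step 5 is legitimate arithmetic, which is exactly why such “proofs” can look convincing at a glance.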