• Tattorack@lemmy.world

    Sounds like you’re anthropomorphising. To you it might not have been the logical response based on its training data, but with the chaos you describe it sounds more like just a statistic.

    • kromem@lemmy.world

      You do realize the majority of the training data the models were trained on was anthropomorphic data, yes?

      And that there’s a long line of replicated and followed-up research, starting with Li et al.’s Emergent World Representations paper on Othello-GPT, showing that transformers build complex internal world models of things tangential to the actual training tokens?

      Because if you didn’t know what I just told you (or still don’t understand it), maybe it’s a bit more complicated than your simplified perspective can capture?