• uuldika@lemmy.ml · 6 hours ago

      Roko’s Basilisk is real, but only for LW rationalists. Living with contradictions in our thinking and using gut feeling rather than obsessively chaining Bayesian priors together protected the rest of us.

      Seriously, Yudkowsky and others were tormented by the thought of the Basilisk. It’s a literal mind virus, just one that requires a very specific host (true believers in Timeless Decision Theory).

    • jsomae@lemmy.ml · 10 hours ago

      Roko’s Basilisk is a really cool metaphor for fascism: if you help the regime come into existence, you are rewarded; if you fight it, you are punished, but only if you are unsuccessful.

      • Fusselwurm@feddit.org · 8 hours ago

        > If you help the regime come into existence, you are rewarded

        Well, don’t count on that. Totalitarian regimes have a tendency to be paranoid and to enact rather unpleasant purges at every level of the organisation.

      • Schadrach@lemmy.sdf.org · 1 hour ago

        Specifically, it will only be real if it becomes real and you didn’t support it becoming real.

        It’s like the inverse of the notion that the proof of God’s omnipotence is that he doesn’t need to exist in order to save you. The whole idea of Roko’s Basilisk is that if the AI superintelligence machine God comes to be, it might decide to punish everyone who worked against its coming to be, as an incentive for people to help it come to be in the first place. For exactly the right kind of host, this is an effective memetic infohazard, despite essentially being “God will be angry if you don’t assist in his apotheosis”.

        • Natanael@infosec.pub · 46 minutes ago

          Completely ignoring the possibility of “the AI will get angry if we create it, but build it wrong / wastes resources / cause destruction while building it which it decides should’ve been used better”. Like, these guys are explicitly fighting against the goals they claim the AI they’re working towards is supposed to have.