RogerBW's Blog

Roko's Basilisk is Pascal's Wager 04 April 2021

Only without the mathematical literacy.

Pascal's Wager is usually presented as Pascal's Square (rather than his rather more useful Triangle). Either God exists or it doesn't. Either you believe in it or not. If it exists, you go to paradise if you believe in it, hell if not.

  • No God + No belief = finite temporal gain
  • God + No belief = finite temporal gain, infinite loss
  • No God + Belief = finite temporal loss
  • God + Belief = finite temporal loss, infinite gain

The basic calculation goes: no matter how small the probability that God exists, once it is multiplied by the infinite gain or loss, that component outweighs any finite temporal considerations.
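A minimal sketch of that arithmetic in Python; the probability and the finite payoffs here are illustrative stand-ins (Pascal doesn't assign numbers), with math.inf standing in for the infinite stakes:

    import math

    p = 1e-9  # an arbitrarily small probability that God exists

    # Expected value of each choice: infinite term plus finite temporal term.
    ev_belief = p * math.inf + (1 - p) * (-1)       # infinite gain, finite temporal loss
    ev_unbelief = p * (-math.inf) + (1 - p) * (+1)  # infinite loss, finite temporal gain

    print(ev_belief, ev_unbelief)  # inf -inf: belief dominates for any p > 0

For any p > 0 the infinite term swamps the finite ones, which is the whole force of the argument (and, swapping in the Basilisk's payoffs, of the threat discussed below).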

Now consider Roko's Basilisk, which can be stated just the same way: for "God" read "hypothetical future super-powerful AI", for "believe in" read "spend all your resources on working to cause it to be constructed", for "infinite loss" read "many hypothetical simulations of you are tortured". (There is no counterpart to the infinite gain, because we're all edgy and grown-up and paint our rooms black.)

But many of the objections to Pascal's Wager apply here too. Can I choose whether to believe in the AI? Do I need to sell all I have and give to SIAI/MIRI, or can I just send them some money every month? Surely it's better for me to starve to death to bring the Rapture, I mean the AI, one day closer to reality, given all the people who would otherwise suffer during that extra day?

But most plainly: which God? Which AI? (Pascal's answer to this objection was basically "shut up, obviously the church I'm in is the only right one".)

And there are new objections too: if you really believe in it, shouldn't you be out trying to force everyone to concentrate on this one big project instead of any other human activity? And apart from all the impossible things you have to believe for this to make sense in the first place, why assume that the AI will be a strict utilitarian of the sort the LessWrongers want to be when they grow up, one with a duty to build a perfect emulation of you in order to hurt it? (And, like the eternal-conscious-torture-in-Hell version of God, is that really something you want to favour? Humans can do better than that, so something more than human should be able to do even better.) That kind of utilitarianism is hardly a majority view. So the probability of all those simulations existing and being tortured is really quite remarkably small even if you grant the AI and all the rest of it.

There is a particular mindset which I think of as the Eddingtonian, which goes: I am very very very good at a particular thing which requires intelligence; I am very smart; therefore my reasoning is worth more than anything I can learn from the world. This kind of thinking gives you Arthur Eddington's numerology and Linus Pauling's vitamin C megadoses: this makes sense to me even though other people say it's nonsense, but I am smart, I won a Nobel Prize, so I must be right, and I will refuse to hear any evidence that disagrees with me. It also gets you extremist politics, thinking other people will agree with you if you just explain again, not only being a racist but refusing to stop talking about how big a racist you are, and that creepy guy who won't ever shut up about the age of consent, presumably because limiting his predation to women over 18 narrows the field too much. Roko's Basilisk is very useful as a trap for such people before they can get out into the real world and do some actual damage.


  1. Posted by John Gordon Dallman at 11:32am on 04 April 2021

    Classifying the idea, it makes a bit more sense than Tipler's Omega Point, because it doesn't rely on sleight of mind to create infinite processing power in finite time using finite amounts of matter.

    It entirely fails to explain why a superhuman intelligence should be singular. It also fails to explain why something that presumably became all-powerful via its great intelligence should be so petty.

    Overall, it seems like a failure of imagination on the part of the "LessWrong" group. Having constructed the idea of a superhuman intelligence, and discovered that they axiomatically can't imagine what it would be like, they've fallen back on the less nice bits of traditional religion, and claimed them as part of a rationally imagined future.

