Only without the mathematical literacy.
Pascal's Wager
is usually presented as Pascal's Square (as opposed to his rather more
useful Triangle). Either God exists or it doesn't. Either you believe
in it or not. If it exists, you go to paradise if you believe in it,
hell if not.
- No God + No belief = finite temporal gain
- God + No belief = finite temporal gain, infinite loss
- No God + Belief = finite temporal loss
- God + Belief = finite temporal loss, infinite gain
The basic calculation goes: no matter how small the probability of
God's existence, once multiplied by the infinite gain or loss, that
component outweighs any finite temporal considerations.
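As a minimal sketch of that dominance argument (the probability and the
finite payoffs below are invented for illustration; only the signs and
the infinities matter):

```python
# Expected-value arithmetic behind the Wager: any nonzero probability
# of God, multiplied by an infinite payoff, swamps every finite term.
p = 0.001                                # illustrative, not from the argument
finite_gain, finite_loss = 10.0, -10.0   # arbitrary temporal payoffs
INF = float("inf")

e_believe = p * INF + (1 - p) * finite_loss         # inf, for any p > 0
e_not_believe = p * (-INF) + (1 - p) * finite_gain  # -inf, for any p > 0

print(e_believe, e_not_believe)  # inf -inf
```

The Basilisk borrows exactly this trick: put one infinite entry in the
payoff matrix, and every finite cost of compliance is supposed to
vanish.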
Now consider Roko's
Basilisk, which can
be stated just the same way: for "God" read "hypothetical future
super-powerful AI", for "believe in" read "spend all your resources on
working to cause it to be constructed", for "infinite loss" read "many
hypothetical simulations of you are tortured". (There is no
counterpart to the infinite gain, because we're all edgy and grown-up
and paint our rooms black.)
But many of the objections to Pascal's Wager apply here too. Can I
choose whether to believe in the AI? Do I need to sell all I have and
give to
SIAI/MIRI,
or can I just send them some money every month? Surely it's better for
me to starve to death to bring the Rapture, I mean the AI, one day
closer to reality, given all the people who would suffer in that one
day?
But most plainly: which God? Which AI? (Pascal's answer to this
objection was basically "shut up, obviously the church I'm in is the
only right one".)
And there are new objections too: if you really believe in it,
shouldn't you be out trying to force everyone to concentrate on this
one big project instead of any other human activity? And apart from
all the impossible things you have to believe for this to make sense
in the first place, why assume that the AI will be a strict
utilitarian of the sort the LessWrongers want to be when they grow up,
one which has a duty to build a perfect emulation of you in order to
hurt it? (And, like the eternal-conscious-torture-in-Hell version of
God, is that really something you want to favour? Humans can do
better than that, so something more than human should be able to do
even better.) That picture of the AI is hardly a majority view. So the
probability of all those simulations existing and being tortured is
really quite remarkably small, even if you grant the AI and all the
rest of it.
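To put toy numbers on "remarkably small" (every probability below is
made up purely for the sketch; the point is that long conjunctions of
assumptions shrink fast):

```python
# Toy conjunction: the Basilisk scenario needs all of these claims to
# hold at once. The individual probabilities are invented for illustration.
from math import prod

claims = {
    "a super-powerful AI gets built": 0.1,
    "it is a strict utilitarian": 0.1,
    "it bothers with acausal blackmail": 0.1,
    "it can rebuild a perfect emulation of you": 0.1,
    "torturing that emulation counts as torturing you": 0.1,
}

p_all = prod(claims.values())
print(f"{p_all:.0e}")  # 1e-05: five modest assumptions, one in a hundred thousand
```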
There is a particular mindset which I think of as the Eddingtonian,
which goes: I am very very very good at a particular thing which
requires intelligence. I am very smart. Therefore my reasoning is
worth more than anything I can learn from the world. This kind of
thinking gives you Arthur Eddington's numerology and Linus Pauling's
vitamin C megadoses: this makes sense to me, but other people say it's
nonsense; but I am smart, I won a Nobel Prize, so I must be right, and
I will refuse to hear any evidence that disagrees with me. It also
gets you extremist politics, thinking other people will agree with you
if you just explain again, not only being a racist but refusing to
stop talking about how big a racist you
are, and
that creepy guy who won't ever shut up about the age of
consent, presumably
because limiting his predation to women over 18 narrows the field
too much. Roko's Basilisk is very useful as a trap for such people
before they can get out into the real world and do some actual damage.