The Simulation Argument, in its popular form that asserts we have good reason to believe we are living in a computer simulation, is a fine example of pseudo-philosophy. It’s a good example because in order to understand why it is pseudo-philosophy, not real philosophy, you need to understand a bit about the key developments in the history of philosophy that informed our philosophy of science. That is, you need to understand enough philosophy to know what you are talking about. In doing so, you become someone who is able to tell the difference between real philosophy and its many counterfeit forms: that is, you become an actual philosopher (not a pseudo-philosopher).
This might have a stinky whiff of elitism, but we naturally draw similar distinctions in other fields. If I were to lay claim to some medical insight – say, that all minor ailments are a matter of balancing wetness and dryness, and therefore everything can be cured with the right combination of salt and Vaseline – I would rightly be dismissed as a quack who doesn’t know what they are talking about. You wouldn’t need to test my theory in order to make that judgement. In fact, even if you tested it and found it to ring true (which I think you would, by the way), you still wouldn’t need any more reason to dismiss me as a quack than the fact that I show no evidence of being someone who knows what they are talking about. I am not qualified: I have no medical degree, no experience of practising medicine, and no approval from peers in the medical community, because I am not in the medical community. It would be unwise to put your health (or money) in the hands of such pseudo-medics. It would be unwise to listen to what they have to say and take it seriously; it would be unwise to buy their books. When people do so it is because they have been tricked into doing so: they have been sold snake oil. It would be equally unwise to put your money, or the health of your soul, into the hands of pseudo-philosophers, but that doesn’t mean it doesn’t happen.
The Simulation Argument is attractive to young people, and to some very rich people, probably because young people (and possibly some very rich people) play a lot of computer games and spend much of their time in a virtual space. Many other pseudo-philosophical ‘insights’ (whether they take the form of so many rules or secrets for this or that or the other) are popular with young people (and some rich people), and because those people have tremendous power to amplify the reach of ideas in the virtual space, with very few checks to slow any momentum generated, these pseudo-philosophical ‘insights’ come to dominate the idea of what philosophy is. But these people are not in a position to know what they are talking about; they remain pseudo-philosophers, even at their best. They are imitating the game, but they are not yet part of the game.
There’s nothing mysterious about this; we see it everywhere. You need to learn how to play music before you can become a musician. In the process you might learn to play a few tunes, but this is imitating musicianship, not being it; it takes a fair while longer to become the thing you aspire to be. Becoming a musician trains your ear, enabling you to detect when a note is out of tune, amongst other things. A jazz musician improvising is real music; a child doodling on a keyboard is not. But to the untrained ear there might be little discernible difference. A similar thing happens with any learned language. Occasionally we encounter prodigies or whiz-kids whose raw natural talent overcomes any shortfall in training or knowledge, and it is an open debate whether there can be such things in philosophy; but it is clear that no exponent of the Simulation Argument is such a philosophical prodigy.
There are too many duff notes in the Simulation Argument for any real philosopher to take it seriously. It strikes the ear as scientific in character – it does not present itself as a mystical insight, for example – and yet none of its claims are empirically measurable. As with pseudo-medicine, this sets alarm bells ringing in suitably trained ears. The argument presents itself as a weighing up of probabilities, but any scientific understanding of probability requires an element of measurement if it is to avoid becoming simply guessing.
Paradigmatic examples of probability have this built in as a set of unspoken assumptions. Consider the rolling of a six-sided die. We understand the probability of rolling any one number to be one-in-six and calculate further probabilities on that basis. It’s too easy to forget that this relies on measurement and a very closely defined set of circumstances. We know that the probability is one-in-six because we have rolled dice, over and over again, across history, and the probability emerges as a result of that repeated measurement. We insist on a certain type of die and a certain set of circumstances – namely the ‘balanced’ die and the ‘random’ throw that delivers results that are consistent with our prior measurements – in order to consider it a legitimate and unbiased claim of probability. With these controls in place, it is the regularity with which the various numbers appear that justifies us in assigning a certain probability to any future outcome. If the numbers had ever appeared irregularly, we could not be so confident; if we do not know what kind of die is being thrown or the circumstances in which it is being thrown, we cannot be so confident.
But we forget that before we rolled any die in any circumstances, we could not know, for instance, whether we lived in a world that favoured the appearance of the number three. Perhaps three really is the magic number, resulting in a far greater-than-average frequency of threes from the roll of a six-sided die. It might sound obvious now, but it’s worth remembering that gamblers are still vulnerable to these kinds of fallacies of intuition about ‘lucky numbers’ and the like, even though we know about them: as the presence of a ‘previous spins’ board at any roulette table will attest. Regardless of our intuitions on the matter, until we roll the die, again and again, a lot, we cannot know the probability of any result, and we should not assign probabilities to things that we cannot know. Unless we are guessing, of course.
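The measurement point can be made concrete with a minimal sketch in Python. It is purely illustrative and not part of the argument itself: the ‘three-favouring’ die, its weighting, and the number of trials are assumptions invented for the example. The only thing it shows is that a figure like one-in-six appears after repeated rolls, not before them.

```python
import random

# Two hypothetical worlds: one with a balanced die, one in which threes are
# favoured. Before any rolls are made, nothing here tells us which world we are in.

def roll_fair():
    return random.randint(1, 6)

def roll_three_favouring():
    # Threes come up three times as often as any other face (an invented bias).
    return random.choices(range(1, 7), weights=[1, 1, 3, 1, 1, 1])[0]

def observed_frequency(roll, face=3, trials=100_000):
    # The 'probability' we report is just the frequency that emerges
    # from repeated measurement.
    return sum(roll() == face for _ in range(trials)) / trials

print(f"fair die, frequency of threes:            {observed_frequency(roll_fair):.3f}")             # ~0.167
print(f"three-favouring die, frequency of threes: {observed_frequency(roll_three_favouring):.3f}")  # ~0.375
```

Nothing in the sketch settles which world we inhabit until the rolling is done; the figures of roughly 0.167 and 0.375 only emerge from the repeated measurement, which is precisely what the Simulation Argument has no access to.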
The Simulation Argument relies on the weighing up of three probabilities: that the human species is likely to go extinct before we reach a ‘post-human’ stage; that a post-human civilisation is likely to run computer simulations of human history; that we live in such a simulation. As it stands, none of these are measurable probabilities, because none of them have happened and we have no significant data about their happening. Any reasoning that results has no claim to the name ‘probability’; it is only a guess. I may as well ask you to assign a probability to a certain number appearing from the roll of a die without telling you how many sides the die has or anything about the circumstances of the throw. You can speculate all you want about the answer, but anything that results will be pure speculation and guesswork.
To pass pure speculation and guesswork off as serious philosophy, let alone serious scientific philosophy, does a disservice to philosophy. The popular form of the Simulation Argument is little more than a child’s imagination game and can be dismissed as such. But if you want to do more, then come to understand Descartes, and Hume, and Kant, and G. E. Moore, and Wittgenstein, and you will find yourself in a position to see the Simulation Argument for what it is. The originator of the Simulation Argument, Nick Bostrom, who is a real philosopher, knows this perfectly well, and obviously does not think the argument gives us good reason to believe we live in a computer simulation. But I fear that he also knows it is not in his interests to downplay the impact of the argument too much.
