Another issue to consider is the conceivably deeper purpose of simulating a life-sustaining and life-evolving universe. Conceding the problem of anthropomorphizing the motives of our hypothetical simulation-designers, let’s nonetheless indulge and imagine ourselves in their position.
If your simulation includes evolving conscious entities that are allowed to develop an intellect (learning), and they have a recursive method to expand and explore that intellect (science), then over time, and after enough observations, those entities will inevitably bump into the “writing on the wall,” as it were.
In the case of our own universe, physicist Tom Campbell of NASA has argued that the constant speed of light, the observer effect, and the Big Bang—all matter, energy, and physical laws arriving simultaneously out of nowhere—are tells of just such a situation. Brian Whitworth has published several papers on how the physics we experience could be readily explained with computable analogs. Martin Rees’s book Just Six Numbers could be read as a whole set of tells. Max Tegmark summarizes the position in the PBS documentary The Great Math Mystery:
“If I were a character in a computer game that was so advanced that I were actually conscious, and I started exploring my video game world it would actually feel to me like it was made of real solid objects made of physical stuff. Yet if I started studying, as the curious physicist that I am, the properties of this stuff, the equations by which things move and the equations that gives the stuff its properties, I would discover eventually that all these properties were mathematical. The mathematical properties that the programmer had actually put into the software that describes everything.”
Via Tegmark’s thinking, we can assume that if the physics and/or nature of any given universe lends itself to description through mathematics, or exhibits mathematical constants, then that universe can be surmised to be analogous to, or a derivative of, a computer simulation—even by the entities within that simulation. In other words, if you can compute it, it’s likely the result of a computer itself.
In the case of our hypothetical evolving lifeforms, their science, if it is robust enough, should show that their universe is indeed logically the result of a computer simulation. Otherwise, what is the value of all their science?
We could call this the Simulated Intelligence Hypothesis. If you grow an evolving intelligence in a simulated environment, it should, given enough time, be able to deduce, infer, or observe that its environment is indeed the result of a computed simulation. If this is true, then it leads to an interesting circumstance: an evolving intelligence within a simulated environment cannot be occluded from the fact that its environment is a simulation, given enough time and a robust enough science. This we could call The Sims Situation—you cannot evolve an intelligent sample inside a simulation whilst keeping that simulation hidden indefinitely. Eventually their science will reveal their circumstance, unless of course there is some kind of outside intervention—the same kind of intervention that we should supposedly play dumb in an effort to avoid provoking. Nevertheless, let’s return to imagining the evolution of our simulated lifeforms.
If we have a simulated universe that provides a platform for intelligent lifeforms to evolve, we could break these lifeforms up into at least three categories: (1) Simple, (2) Complex, (3) Savvy.
- Simple: they can make decisions and engage meaningfully with their environment.
- Complex: they record history as well as develop sciences, cultures, artifacts, and arts.
- Savvy: they are conscious of the fact that they are in a simulated universe.
Once an intelligence moves from a Complex orientation to a Savvy orientation, it has crossed an ontological Rubicon that divides these two distinct viewpoints. We could call this divide the Edge Threshold. If we put any real weight into the computer running this intelligent-lifeform-evolving universe simulation, then we might in fact hope that it grows something slick enough to figure out what’s really going on. Not just for the sake of amusement either, but for an insight into our own motives and nature as simulation-designers. We would actually want a Savvy intelligence inside our simulated universe. The reason why is very simple: if we can only observe intelligent lifeforms that are restricted to not knowing that they are in a simulation, then our sample pool, and thus our knowledge base, will always be limited to intelligences that are out of the loop. Complex-level lifeforms (like human beings just prior to the computing revolution) would still be complex and interesting, but they would by definition always be operating in ontological ignorance of the true nature of their environment. They would be complex indeed, but far from savvy.