Meditations on simulation


A cellular automaton is a model of a system of “cell-like” objects with the following characteristics: the cells live on a grid, each cell has a state (1 or 0, on or off, alive or dead), and each cell has a neighborhood. (1) In 1970, John Conway, an English mathematician, designed the Game of Life, a cellular automaton whose evolution is determined solely by its initial state, requiring no further input. It is a zero-player game: after creating an initial configuration, one simply observes its evolution and the patterns it generates. The simulation consists of a grid that extends infinitely in all directions. Every square (cell) in this grid is either on or off, and the state of each cell depends on its neighborhood, that is, on what is happening in the eight squares around it. If a living cell has fewer than two living neighbors, it dies of loneliness. If a living cell has more than three living neighbors, it dies of overcrowding. However, if a dead cell has exactly three living neighbors, it comes alive; it is born. Once an initial state is set and the simulation started, this very simple set of rules determines the entire future of the system.

The results are astounding. As the system progresses, complex shapes emerge and disappear spontaneously. These shapes interact with one another; some even reproduce, just as life does. Even though the rules contain no notion of reproduction, movement or growth, they manage to produce complex properties and patterns. It becomes possible to imagine something like the Game of Life giving rise to highly complex systems, for example intelligence. Considering the number of cells in our brains, this analogy brings us much closer to the dynamical nonlinear systems present in nature.
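
To make these rules concrete, here is a minimal sketch of the Game of Life in Python. It uses a small grid with wrap-around edges as a stand-in for Conway's infinite plane; the grid size and the glider pattern used to seed it are illustrative choices, not part of the description above.

```python
# Minimal Game of Life sketch (finite toroidal grid standing in for the infinite plane).
def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count live cells among the eight neighbors, wrapping at the edges.
            neighbors = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            if grid[r][c] == 1:
                # A live cell survives only with two or three live neighbors.
                new[r][c] = 1 if neighbors in (2, 3) else 0
            else:
                # A dead cell with exactly three live neighbors is born.
                new[r][c] = 1 if neighbors == 3 else 0
    return new

# Seed a 10x10 grid with a "glider", one of the self-propagating patterns.
grid = [[0] * 10 for _ in range(10)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1

for _ in range(4):  # advance a few generations and print them
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid), "\n")
    grid = step(grid)
```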

In Western thought there has been, and still is, a strong tendency to think that there must be something fundamentally special about humans, in terms of the intelligence we exhibit and its level of complexity. In A New Kind of Science, Stephen Wolfram presents an empirical study of computational systems such as cellular automata and argues that the approach and philosophy of studying these “simple” programs are also highly relevant to other fields of science. Wolfram introduces the principle of computational equivalence: all processes that are not obviously simple can be viewed as computations of equivalent sophistication. (2) Beyond this, the principle says that systems found in the natural world can perform computations up to the universal (maximal) level of computational power, and hence that most systems are computationally equivalent. This means that, in the end, there is no difference between the level of computational sophistication achieved by humans and that achieved by other systems in nature. Whether it is a human brain, a fluid, the evolution of a weather system or a cellular automaton, the behavior a system exhibits corresponds to computation of equivalent sophistication. Computation is therefore “simply a question of translating inputs and outputs from one system to another”. (2) There certainly exist many systems in nature whose behaviors are complex enough that people attribute human features to them; animism and similar pantheistic beliefs are examples of this. Even when the underlying rules of a system are as simple as possible, abstract systems like cellular automata can still achieve exactly the same level of computational sophistication as anything else.
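
As a small illustration of how much behavior such minimal rules can carry, the sketch below steps an elementary cellular automaton, the kind of one-dimensional, two-state, three-neighbor system Wolfram studies. Rule 110, which has been shown to be computationally universal, is used here; the row width, step count, single-cell seed and wrap-around edges are arbitrary display choices.

```python
# Elementary cellular automaton sketch: each cell's next state depends only on
# itself and its two immediate neighbors, via an 8-entry rule table.
def eca_step(row, rule):
    # Bit i of `rule` gives the next state for neighborhood pattern i (0..7).
    table = [(rule >> i) & 1 for i in range(8)]
    n = len(row)
    return [
        table[(row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n]]
        for i in range(n)
    ]

# Rule 110: capable of universal computation despite its extreme simplicity.
width, steps = 64, 32
row = [0] * width
row[width // 2] = 1  # start from a single live cell

for _ in range(steps):
    print("".join("#" if cell else "." for cell in row))
    row = eca_step(row, 110)
```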


The central tenet of digital physics (also called digital ontology or digital philosophy) is that nature, and hence the universe, is describable by information and is therefore computable. According to this view, the universe can be conceived of as the output of a deterministic or probabilistic computer program, as a vast digital computation device, or as mathematically isomorphic to such a device. (1) Digital physics rests on the following premises: the physical world 1. is informational, 2. is computable, 3. can be described digitally, 4. is in essence digital, 5. is itself a computer, and 6. is the output of a simulated-reality exercise. The hypothesis that the universe is a digital computer was first put forward by Konrad Zuse, a German computer pioneer. In his book Rechnender Raum (Calculating Space), Zuse suggested that the universe is governed and computed by some sort of cellular automaton or similar discrete computing machinery, focusing on these “substrates” of computation and pointing out that the long-held view of entropy growth does not hold in deterministically computed universes. (3) One of the leading assertions here is that there exists, at least in principle, a program computing the evolution of the universe, run either on a huge cellular automaton or on a universal Turing machine. (1)
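
To make the notion of “a program computing the evolution of a universe” a little more tangible, here is a toy Turing machine simulator in Python. It is only a sketch: the machine shown merely increments a binary number, and the state names, tape alphabet and transition table are invented for illustration rather than taken from Zuse or Schmidhuber.

```python
# Toy Turing machine simulator: (state, symbol) -> (write, move, next_state).
def run_tm(tape, transitions, state, head, blank="_", max_steps=1000):
    tape = dict(enumerate(tape))          # sparse tape, unbounded in both directions
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip(blank)

# Example machine: increment a binary number, head starting on its last digit.
increment = {
    ("carry", "1"): ("0", "L", "carry"),  # 1 plus carry -> 0, propagate carry left
    ("carry", "0"): ("1", "L", "halt"),   # 0 plus carry -> 1, done
    ("carry", "_"): ("1", "L", "halt"),   # ran off the left edge: prepend a 1
}

print(run_tm("1011", increment, state="carry", head=3))  # -> "1100"
```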

In the paper “Are You Living in a Computer Simulation?”, Nick Bostrom argues that at least one of the following propositions is true: 1. the human species is very likely to go extinct before reaching a “posthuman” stage, 2. any posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history, and 3. we are almost certainly living in a computer simulation. (4) Interestingly, he offers statistical arguments to support his claims, reaching the conclusion that, unless we are living in a simulation right now, our descendants will almost certainly never run an ancestor simulation. A rough estimate of the computing power needed to emulate the human mind can be based on how computationally expensive it is to replicate the operations of a piece of nervous tissue that has already been replicated in silico; extrapolating from this yields ~10^14 operations per second for the entire human brain. (5) Another estimate can be made from the number of synapses and their firing frequencies, giving a figure of ~10^16-10^17 operations per second. (4) These figures could probably be reduced further, since the nervous system has a high degree of redundancy on the microscale, which compensates for the noisiness and unreliability of its neuronal components. Bostrom argues that one would therefore expect a substantial efficiency gain when using more reliable and versatile non-biological processors. Simulating the environment is also considered; the main point, however, is that a realistic simulation of human experience requires much less, in fact “only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities.” So, according to this simulation argument, it is highly likely that, at the present moment, all of us are living in a massive computer simulation. This simulation may be selective, that is, focused on a single individual consciousness, or it could be simulating billions of brains simultaneously.
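
As a quick sanity check on the second figure, the following back-of-envelope calculation multiplies neuron count, synapses per neuron and average firing rate. The specific numbers (~10^11 neurons, ~5×10^3 synapses per neuron, ~10^2 Hz) are assumed order-of-magnitude values in the spirit of Bostrom's estimate, not measurements taken from this text.

```python
# Back-of-envelope estimate of the brain's "operations per second",
# following the synapse-counting style of argument.
neurons = 1e11               # ~10^11 neurons in the human brain (order of magnitude)
synapses_per_neuron = 5e3    # ~5,000 synapses per neuron (assumed figure)
firing_rate_hz = 1e2         # ~100 Hz average signaling frequency (assumed figure)

ops_per_second = neurons * synapses_per_neuron * firing_rate_hz
print(f"~{ops_per_second:.0e} synaptic operations per second")  # ~5e+16, i.e. 10^16-10^17
```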

Yet, if we step away from the simulation argument and simply consider the mind itself and the external reality it is immersed in, we may conclude that the mind can only access that reality indirectly: “For, since the things the mind contemplates are none of them, besides itself, present to the understanding, it is necessary that something else, as a sign or representation of the thing it considers, should be present to it: and these are ideas”. (6) Moreover, many if not most mental representations are also, in essence, virtual. (7) For example, the binocular visual field is synthesized from two monocular sources of visual input; the human perception of the surrounding environment is thus “fused” together from two separate physical sources. Examples like these suggest that virtually all perceived reality is a construct, a simulation of sorts, and that the mind itself acts, in a way, like a virtual reality machine. (8) As the engineer Andy Fawkes pointed out in a talk at London’s Digital Shoreditch festival, “the human brain is one of the best simulators we’ve got”. (9) Simulation thus seems to take many forms, all of which point toward the same conclusion: representations of reality, whether “objective” and “external” or constructed within the mind’s own ecology, have to be simulated. Bostrom’s simulation argument is in line with some of the main principles of digital physics and philosophy. If all of nature, including human intelligence, is indeed computable, then living in a massive simulation, whether run by a posthuman civilization or by an automaton, does not seem so impossible.

 

References:

  1. Schmidhuber, J. (2000). “Computer Universes and an Algorithmic Theory of Everything”.
  2. Wolfram, S. (2002). A New Kind of Science.
  3. Zuse, K. (1970). Calculating Space.
  4. Bostrom, N. (2003). “Are You Living in a Computer Simulation?”. Philosophical Quarterly.
  5. Moravec, H. (1989). Mind Children. Harvard University Press.
  6. Locke, J. (1690). An Essay Concerning Human Understanding.
  7. Merker, B. (2007). “Consciousness without a cerebral cortex: a challenge for neuroscience and medicine”. Behavioral and Brain Sciences.
  8. Edelman, S. (2015). “Mind as a Virtual Reality Machine”. H Plus Magazine.
  9. Turk, V. (2015). “Simulated Worlds Will Soon Be Indistinguishable From Reality”. Motherboard.
  10. Shiffman, D. (2012). The Nature of Code.

 

 
