Are we but ghosts in the machine? Pt. 2

In part one we looked at the basic shape of the theory of Ancestor Simulation. In part two, we will take a closer look at philosopher Nick Bostrom’s argument and the framework of logic on which it is built.

We saw how Bostrom realised that the “brain in a vat” hypothesis was impractical, but that the idea of a simulated reality itself was not. Although that particular method was unfeasible, he began to see that the underlying concept held: an “Architect” could, for one reason or another, create a wholly virtual reality whose inhabitants believe that the reality they perceive is “really real”, and such a scenario would be very hard to refute.

But what led Bostrom to this conclusion?

Simply put, rather than looking at the “why” of Ancestor Simulation, he looked at the “how”. He began to theorise that, with certain assumptions about technological advancement and the path our descendants will take (assuming they are real), there are only three ways our future can pan out, and that these three paths will, more or less, determine whether we are, in fact, living in a simulation right now. But first we have to ask whether such a creation could even be possible, and if so, why anyone would make one. This gets mathematical from here on in, so bear with me!

The Skinny

Much like the scientist with the envatted brain, Bostrom’s theory has a protagonist: a sufficiently advanced future posthuman civilization with both the technological capability and the desire to simulate not only a conscious, thinking entity via artificial means, but the entire world in which that entity lives. Furthermore, he suggests that the motivation and effort involved in such a simulation mean the posthuman civilization would be more likely to produce not just one entity in a world full of “mindless bots” – which the target entity cannot distinguish from itself – but the entire sentient civilization in which that entity exists, and the universe around it, to a varying level of granularity.

He suggests that one possible reason for such an endeavor could be for the posthuman society to simulate and study its own history by recreating its ancestors, or (as it would have sufficient computing power) to simulate multiple or even infinite histories and outcomes, either to see “what would have happened if…?” or, more plausibly perhaps, simply to ensure greater success in recreating the one, true history. Further, it is not impossible that, given such technological prowess, the posthuman civilization could create such a simulated universe for simple entertainment – Truman Show style – or as a “game” into which the posthumans can inject themselves at will. There is nothing to say that such a posthuman society would have the same ethical mores or desires as ours. In all of this, Bostrom assumes that simulating the entire synaptic pathways of every living organism in the target civilization is within the means of the protagonist.

Of course, this is all dependent on two factors: the possibility of being able to harness such technology, and the ability to successfully and artificially simulate a thinking, conscious mind.

Wire The Sucker Up!

Our current understanding of physics and computing technology means that, right now, there is no way we could simulate a conscious mind. Some experts feel that, given the ever-increasing rate of technological advancement, such a feat may be only decades away. Others think the process may take hundreds or even thousands of years. Either way, most experts would agree that at some point we will reach that level of sophistication. Such a process is discussed in the section below. But for the moment, let us assume that we are destined to achieve such a level of computing power.

Bostrom explains in his 2003 paper “Are You Living in a Computer Simulation?” that the limits of current knowledge about physics and technology render us unable to make a confident estimate of the possible upper limits of future computing power. Since we currently lack a unified theory of everything, it is unclear what amazing discoveries in quantum physics may allow us to “transcend those constraints that in our current understanding impose theoretical limits on the information processing attainable in a given lump of matter.” Instead, we can only use our current knowledge to impose lower limits on future computing power. From the 2003 paper:

For example, Eric Drexler has outlined a design for a system the size of a sugar cube (excluding cooling and power supply) that would perform 10^21 instructions per second.[3] Another author gives a rough estimate of 10^42 operations per second for a computer with a mass on the order of a large planet.[4] (If we could create quantum computers, or learn to build computers out of nuclear matter or plasma, we could push closer to the theoretical limits. Seth Lloyd calculates an upper bound for a 1 kg computer of 5*10^50 logical operations per second carried out on ~10^31 bits.[5] However, it suffices for our purposes to use the more conservative estimate that presupposes only currently known design-principles.)

The amount of computing power needed to emulate a human mind can likewise be roughly estimated. One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we have already understood and whose functionality has been replicated in silico, contrast enhancement in the retina, yields a figure of ~10^14 operations per second for the entire human brain.[6] An alternative estimate, based on the number of synapses in the brain and their firing frequency, gives a figure of ~10^16-10^17 operations per second.[7] Conceivably, even more could be required if we want to simulate in detail the internal workings of synapses and dendritic trees. However, it is likely that the human central nervous system has a high degree of redundancy on the microscale to compensate for the unreliability and noisiness of its neuronal components. One would therefore expect a substantial efficiency gain when using more reliable and versatile non-biological processors.

So, in principle and entirely in theory – even imposing our primitive lower limits on computational development, and assuming no major and unexpected changes in quantum manipulation or the laws of physics – it is not infeasible to create a system with enough power and complexity to simulate a human brain, and so potentially a human mind. Bear in mind that the proposed computational power of the brain listed above is well within the means of the sugar-cube-sized system proposed by Drexler.
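To put those figures side by side, here is a quick back-of-envelope sketch (purely illustrative; the numbers are the rough, order-of-magnitude estimates quoted above, and the variable names are my own, not Bostrom’s):

```python
# Rough, order-of-magnitude estimates quoted above from Bostrom's 2003 paper.
DREXLER_SUGAR_CUBE_OPS = 1e21    # instructions/sec, Drexler's sugar-cube-sized system
PLANET_MASS_COMPUTER_OPS = 1e42  # operations/sec, a computer with the mass of a large planet
BRAIN_OPS_LOW = 1e14             # ops/sec per human brain (retina extrapolation)
BRAIN_OPS_HIGH = 1e17            # ops/sec per human brain (synapse-count estimate)

# How many human brains could each system emulate in real time?
for name, system_ops in [("Sugar-cube system", DREXLER_SUGAR_CUBE_OPS),
                         ("Planet-mass system", PLANET_MASS_COMPUTER_OPS)]:
    worst_case = system_ops / BRAIN_OPS_HIGH  # most demanding brain estimate
    best_case = system_ops / BRAIN_OPS_LOW    # cheapest brain estimate
    print(f"{name}: ~{worst_case:.0e} to ~{best_case:.0e} real-time brain simulations")
```

Even on the most demanding brain estimate, the sugar-cube device has roughly four orders of magnitude to spare beyond a single mind, and the planet-mass machine dwarfs both figures.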

So the first assumption is that, even with today’s limited understanding, a posthuman civilization could, in all likelihood, have at its disposal the computational wherewithal to replicate one, if not billions, of thinking, sentient brains. But even with the horsepower to do such a thing, is such a simulation actually feasible?

Mind Over Matter

Bostrom’s theory makes a second assumption: that of substrate independence. Simply put, he assumes that the process of conscious thought, and the physical processes that give rise to it, are not in any way dependent on the medium in which those processes occur. He posits that as long as the process reaches a sufficient and key level of complexity, the emergence of consciousness will inevitably occur, regardless of whether the medium (or substrate) in which the process occurs is carbon-based (as in the neural networks of the human brain) or silicon-based (as in the processors of a sufficiently powerful computer). It is the complexity that is key, not the material. However, he does not take this assumption lightly, and again in the 2003 paper explains:

The argument we shall present does not, however, depend on any very strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true (either analytically or metaphysically) – just that, in fact, a computer running a suitable program would be conscious. Moreover, we need not assume that in order to create a mind on a computer it would be sufficient to program it in such a way that it behaves like a human in all situations, including passing the Turing test etc. We need only the weaker assumption that it would suffice for the generation of subjective experiences that the computational processes of a human brain are structurally replicated in suitably fine-grained detail, such as on the level of individual synapses. This attenuated version of substrate-independence is quite widely accepted.

Neurotransmitters, nerve growth factors, and other chemicals that are smaller than a synapse clearly play a role in human cognition and learning. The substrate-independence thesis is not that the effects of these chemicals are small or irrelevant, but rather that they affect subjective experience only via their direct or indirect influence on computational activities. For example, if there can be no difference in subjective experience without there also being a difference in synaptic discharges, then the requisite detail of simulation is at the synaptic level (or higher).

So, we have the two key assumptions in place:

  • The computational power needed to create conscious, thinking simulations is attainable.
  • The creation of consciousness within an artificial system is also possible.

Now that we know why a posthuman society might create an Ancestor Simulation, and that it is not impossible that they would be able to, we have to look at exactly how likely this is, and what that means for what we perceive as “reality”.

In part 3 we will look at some utterly scary mathematical equations and break our entire existence into three simple and distinct possibilities. Hint: it’s not all good news.



3 Comments

  1. Posted August 12, 2008 at 6:02 pm

    I found your site on technorati and read a few of your other posts. Keep up the good work. I just added your RSS feed to my Google News Reader. Looking forward to reading more from you down the road!

  2. Posted September 13, 2008 at 5:26 pm

    Have you heard of IBM’s Blue Brain project?

  3. Posted September 13, 2008 at 6:11 pm

    Yes, it is a fascinating project. It is interesting that they do not aim to produce consciousness or a de facto thinking mind within their simulation, but it remains to be seen what actually emerges as the project develops. It’s a bit like recreating a car engine from the molecular level without expecting it to actually burn fuel or rotate a driveshaft – it seems inevitable!
