Computation

From qri

The concept of computation is notoriously difficult to pin down. Wikipedia defines it as "any type of arithmetic or non-arithmetic calculation that is well-defined"[1] (where a calculation is "a deliberate mathematical process that transforms one or more inputs into one or more outputs or results"[2]) and subsequently concedes the difficulty of specifying what it means to be well-defined. Conversely, the Stanford Encyclopedia of Philosophy opens the article on Computation in Physical Systems by merely gesturing at the concept informally.[3]

Despite these difficulties, computation plays a central role in the study of consciousness. On the one hand, functionalism is often understood to claim that consciousness is entirely about computation, in which case the concept is crucial for determining which qualia are exhibited by any one system. On the other hand, non-functionalist models might hold that certain computational systems cannot support consciousness at all, in which case one may wish to understand how such systems differ from the human brain. Thus, there are merits in providing clarity around the concept that go beyond intellectual curiosity.

Existing essays often focus on attempts to formalize computation, most notably through the use of mathematical models. Conversely, this article will identify such models as covering only a small subset of all computational systems, and conclude that the mathematical approach cannot yield a general definition of the concept. For these reasons, computation throughout this wiki is defined as any physical process that is used (by any system) to obtain information. Thus, the article endorses a weak form of pancomputationalism.

The Computational Spectrum

The following section will argue that all computational systems can be classified on a spectrum based on the complexity of the physical mechanisms they utilize for computation. We will look at an example first and then generalize the principles to arbitrary systems.

An Example Task: Integrating a Function

An abstract illustration of a mechanical integrator. Here, the second ball (that rests atop the first) indicates how the output can be transferred to a different component. The cam is not shown.
A plot of the function , with an integral (i.e., area under the curve) of . A cam corresponding to this function would have a relatively simple, approximately elliptic shape.

Consider the task of creating a system that can compute the integral of polynomial functions, such as the sample function in the second figure to the right. One such system is a mechanical integrator. Its core components are a physical disk that spins at a fixed rate and a ball atop the disk that rotates at a rate depending on its distance to the disk's center. The system's output – i.e., the value of the integral – corresponds to the number of rotations performed by the ball.[4] Thus, the disk's rotation corresponds to traversing the x-axis at a fixed speed. It follows that, to compute the integral correctly, the ball's distance from the center must correspond to the y-value of the function at every point in time, such that the integral grows faster when the y-value is high. This can be achieved using a mechanical arm that traces along a cam whose physical surface corresponds to the function. Since the arm is attached to the ball, it will automatically push the ball further away from the disk's center when the function's y-value is high. Finally, an additional mechanism may be employed to convert the output (i.e., the number of rotations performed by the ball) into a readable format.

If applied to our example function (with bounds and ), the ball would rotate a little over ten and a half times, corresponding to the solution , or .

A different approach would be to utilize a suitable program on a classical Von Neumann computer. This is done when, for example, a user computes the integral using the public platform WolframAlpha, or if they use a program or script run on their personal computers. In this case, the input is represented symbolically rather than physically, where the precise format depends on which program is used.
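On the standard-computation side, such an integration program is only a few lines long. The following is a minimal sketch of numerical integration via the trapezoidal rule; since the article's sample function appears only in a figure, a hypothetical polynomial f(x) = x² over [0, 3] stands in for it:

```python
def trapezoid(f, a, b, n=10_000):
    """Approximate the integral of f over [a, b] using n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))  # endpoints are weighted by one half
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Hypothetical stand-in for the article's sample function:
# the integral of x^2 over [0, 3] is exactly 9.
result = trapezoid(lambda x: x * x, 0.0, 3.0)
print(result)  # ≈ 9.0
```

Note the contrast with the mechanical integrator: every step here bottoms out in primitive bit-level operations, and the same code integrates any function passed to it.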

Physical Mechanisms in the Example Task

In the task discussed above, the mechanical integrator uses at least four physical mechanisms to perform the integration:

  1. a mechanism for rotating the ball such that the speed of rotation is proportional to the ball's position (the spinning disk)
  2. a mechanism for representing the function as a physical slope (the mechanical cam)
  3. a mechanism for moving the ball on the disk according to the slope (the mechanical arm connected to the ball that's tracing along the cam)
  4. a mechanism to convert the number of rotations into a readable format

Note that this list is imprecise (one could decompose the tasks differently) and possibly incomplete. It is given here only to illustrate a qualitative comparison, not to provide a serious analysis of real mechanical devices.

When a similar analysis is applied to the second approach, it becomes apparent that the mechanisms employed do not depend on the task. Any other math problem – indeed, any task of any kind carried out on a Von Neumann computer – utilizes a very similar set of mechanisms, with only minor differences that themselves depend on the system's hardware rather than the task:

  1. a mechanism to represent a bit, typically as one of two voltage levels on a physical wire
  2. a small set of mechanisms to represent logical arithmetic (e.g., NOT, AND, OR, NAND, NOR, XOR, XNOR)

Note that these mechanisms are highly primitive compared to those of the mechanical integrator – a bit is the smallest possible unit of information, and the logical arithmetic deals entirely with functions that have two bits as input (or one in the case of NOT) and one bit as output. Even in real computers, the bulk of computation is usually carried out by gates of these types, although marginally more complex units like multiplexers and decoders are occasionally implemented by physical components.[5] Conversely, the logical arrangement of these mechanisms is vastly more complex than that of a mechanical integrator, with a modern CPU typically containing billions of transistors[6] which are the building blocks for both logic gates and data storage.
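As a minimal illustration of how these primitive mechanisms combine into more complex logic, the following sketch builds the standard gates out of NAND alone and composes them into a one-bit full adder. The decomposition is illustrative, not a description of any particular hardware:

```python
# Every gate below is derived from NAND, mirroring how a single primitive
# physical mechanism can suffice for arbitrary logic.
def NAND(a, b): return 1 - (a & b)
def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    s1 = XOR(a, b)
    return XOR(s1, carry_in), OR(AND(a, b), AND(s1, carry_in))

print(full_adder(1, 1, 1))  # (1, 1), i.e., binary 11 = three
```

Chaining such adders yields multi-bit arithmetic; the complexity lives entirely in the logical arrangement, not in the gates themselves.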

Physical Mechanisms for General Computation

A visualization of the computational spectrum. The Ising machine is an example of a more complex physical mechanism that will not be discussed here, but see, e.g., this presentation.

Can all modes of computation be reduced to their underlying physical mechanisms? Assuming physicalism, the answer is yes: all causality is governed by the laws of physics, so any physical process relies on "physical mechanisms" in the broadest sense. That said, since the notion of "mechanism" is not precise, the same system might be broken down in several ways.

In addition to physical complexity, we can identify logical complexity as a measure of the number of ways in which the physical mechanisms of a system can be combined to obtain more complex effects. In a digital computer, logical complexity is a measure of both the spatial arrangement of logic gates and the complexity of their sequential usage. The graphic to the right depicts several example systems loosely grouped according to these two axes.

Since physical mechanisms are usually static and task-specific whereas logical arrangement is highly flexible, the generality of systems tends to go up as a function of how much they rely on logical complexity. In the case of computers, humans can control their utilization with high precision through the use of program code. When executed, such code is translated to machine code (sometimes with an intermediate step), which directly corresponds to operations executable by the processor. Such operations, in turn, bottom out in the usage of logical bits and gates. Conversely, the mechanisms used by the mechanical integrator are substantially more specific; depending on whether one considers the cam part of the integrator, it can either integrate a small class of functions or a single function only. Thus, generality can vary between two extremes: a system that can solve only a single task and a fully general system. This notion of full generality will be discussed more formally in the upcoming section.

Standard Computation

In the taxonomy suggested by our computational spectrum, Standard Computation refers to all computational processes that only use highly primitive physical mechanisms. As mentioned in the introduction, this type of computation has been analyzed most extensively in the literature.

Turing Machines

The Turing machine is an abstract mathematical object first proposed by Alan Turing in 1936. It is at the center of the theory of computation and the basis for many attempts to formalize the concept.

Technical Definition

An abstract illustration of a Turing machine starting on its input. The center box indicates the current symbol being read, the wheel indicates that the machine is in exactly one of its states, and the table symbolizes the transition function. Note that the state space (the wheel) and the transition function are static (they are part of the Turing machine's specification), whereas the current state (the position of the wheel) and the current symbol change frequently during execution.

The following section will provide a technical definition of the Turing machine; it may be skipped by readers uninterested in the details.

In its simplest form, the Turing Machine operates on an infinite discrete tape, wherein each cell contains either one stroke or no stroke. The initial state of the tape represents the input of the calculation and will generally affect how the machine behaves (in fact, the entire machine can be viewed as implementing a function that transforms the input into an output). The machine will operate in discrete time steps and either halt after a finite number of such steps or run indefinitely. If it halts, the state of the tape at that point is considered the machine's output; if not, the output is considered ill-defined or non-existent. It is generally assumed that both input and output consist of a single, undivided block of strokes, in which case data is represented as a unary number. (In this case, inputs and outputs that consist of several blocks of strokes with non-strokes in between would again be considered ill-defined.) Since any object with finite complexity can be represented by such a number, this format does not restrict the theoretical capabilities of the model. However, the numbers quickly get extremely large, which is why Turing machines are almost exclusively of theoretical interest, where results can be proved for arbitrarily large numbers.

As mentioned, the machine operates in discrete time steps. At any such step, it is positioned on exactly one square of the tape. Its only operations are (1) to draw or remove a stroke from the current position and (2) to move one square left or right. The Turing machine has an (arbitrarily large) set of internal states, and at any point in time, it is in exactly one of those states. Its action at each step depends on the current state and on whether the square at the current position does or doesn't contain a stroke. A specification of a Turing machine consists of a set of possible states, a starting state, an end state, and the transition function, which precisely determines its behavior for each non-end state.

An example of a complete specification for a Turing machine is as follows:

  • The state space is {A, B}.
  • The starting state is A.
  • The end state is B.
  • The transition function is: "in state A, if there is a stroke, make a stroke, go right, remain in state A; in state A, if there is no stroke, make a stroke, go right, switch to state B".

The resulting machine will travel to the end of the initial set of strokes, draw one additional stroke, and then halt. Thus, it implements the mathematical function that adds 1 to every input, i.e., the function f defined by f(n) = n + 1 (this holds even for n = 0). Note that drawing a stroke on a cell that already has a stroke has no effect, and that if the Turing machine had more than one non-end state, it would have to specify its behavior for each of those additional states.
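The machine specified above is small enough to simulate directly. The following sketch is an illustrative implementation; the sparse-tape representation (a set of stroke positions) and the function names are conveniences, not standard notation:

```python
def run_turing_machine(tape, transitions, state, end_state, head=0, max_steps=10_000):
    """Simulate a two-symbol Turing machine; the tape is a set of stroke positions."""
    tape = set(tape)
    for _ in range(max_steps):
        if state == end_state:
            return tape
        symbol = 1 if head in tape else 0
        write, move, state = transitions[(state, symbol)]
        if write:
            tape.add(head)
        else:
            tape.discard(head)
        head += 1 if move == "R" else -1
    raise RuntimeError("machine did not halt within max_steps")

# The successor machine from the text: state space {A, B}, start A, end B.
successor = {
    ("A", 1): (1, "R", "A"),  # stroke: keep it, move right, stay in A
    ("A", 0): (1, "R", "B"),  # blank: draw a stroke, move right, halt
}

# Unary input 3 (strokes at cells 0..2) yields unary output 4.
out = run_turing_machine({0, 1, 2}, successor, "A", "B")
print(len(out))  # 4
```

Running it on an empty tape produces a single stroke, confirming that f(0) = 1 as claimed above.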

The Turing Thesis

As originally proposed by Turing, the Turing thesis roughly states that every effective computation can be carried out by a Turing machine, where "effective" is loosely operationalized as operations that a mathematician could perform given suitable instructions and sufficient time. However, the thesis is so frequently misquoted as applying to any computable function that one can argue that this stronger claim has since become the meaning of the term "Turing thesis". Either way, the current article will assume both statements are true, which is also the mainstream position in the literature.

More formally, the thesis can be understood as only applying to functions that transform a natural number into another. As mentioned in the previous section, this framing is not restrictive since any finite object can be represented by such a number. (E.g., an image can be represented as a sequence of numbers by transforming each pixel into three color values, and it's a known result that finite sequences of numbers can be transformed into single numbers without loss of information.[7]) An equivalent way to put it is that, once one has settled on exactly how to represent inputs and outputs, then for any function that is computable within this format, there exists a Turing machine that implements this function.
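The lossless encoding of sequences into single numbers mentioned above can be illustrated with the Cantor pairing function, one standard choice among several; a longer sequence can then be encoded by folding the pairing over its elements:

```python
from math import isqrt

def pair(x, y):
    """Cantor pairing: a bijection from pairs of naturals to a single natural."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    """Invert the Cantor pairing function using exact integer arithmetic."""
    w = (isqrt(8 * z + 1) - 1) // 2  # index of the diagonal containing z
    y = z - w * (w + 1) // 2
    return w - y, y

z = pair(17, 42)
print(z, unpair(z))  # 1812 (17, 42) – the round trip loses no information
```

Since the mapping is a bijection, a Turing machine operating on the single number z has access to exactly the same information as one operating on the pair.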

The Turing thesis cannot be proven formally. A sufficient reason for this is that it relies on the concept of "computable function" (whether this means computable by any means or by a smart mathematician), which itself cannot be formalized. Nonetheless, the Turing thesis is widely accepted, primarily due to the lack of negative examples. In practice, the way it is applied is usually in reverse, i.e., the set of computable functions is defined as the set of functions implemented by a Turing machine. Consequently, the terms "computable" and "Turing computable" are often used interchangeably.

Further evidence for the Turing thesis comes from the development of other formal systems and the fact that many of them have been proven to implement the same set of functions as Turing machines (meaning that any function computable by one of them is computable by all of them). Such systems include entirely different constructions, such as recursive functions and λ-expressions, as well as different versions of the Turing machine itself. Because of these results, talking about "Turing machines" without specifying a formalism is generally unproblematic. The definition presented in this article is the simplest possible form; other versions may use a finite alphabet in place of the two-symbol alphabet and/or a set of end states rather than a single state, allowing the machine to encode properties of its output through the selection of a particular end state.

Turing Completeness

One of the main advantages of the theory surrounding the Turing thesis is that it provides a simple way to verify the generality of a computational system. Any system capable of emulating a Turing machine must be at least as powerful as Turing machines, which, if the Turing thesis is correct, makes it fully general. Such systems are also called Turing complete. Since the operations of the Turing machine are so primitive, this bar is easy to meet, and as a result, many different systems are Turing complete, including virtually all programming languages and the operation sets of all modern CPUs. In fact, the bar is so low that it's not unusual to find that systems that haven't been designed for computation are Turing complete anyway, such as Conway's Game of Life.
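As an illustration of how simple a Turing complete system can look, here is the complete update rule of Conway's Game of Life. The sketch only shows the rule itself; the system's Turing completeness is established by explicit gate-and-wire constructions in the literature, not by this code:

```python
from collections import Counter

def life_step(live):
    """One step of Conway's Game of Life on a sparse set of live cells."""
    # Count, for every cell, how many of its eight neighbors are alive.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 live neighbors,
    # or 2 live neighbors and is currently alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" (three cells in a row) oscillates with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
print(life_step(life_step(blinker)) == blinker)  # True
```

That a rule this short clears the bar for full generality underscores how low that bar is.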

Flavors of Standard Computation

While the current section has focused primarily on the Von Neumann architecture, it's worth noting that there are several variations that would still be considered standard, or almost standard, on the computational spectrum. Examples of these include:

  • n-ary computers, which differentiate between n logical levels as opposed to two (e.g., ternary computers operate using logical units called trits that can take on three different states)
  • analog computers, which utilize continuous voltage levels for computation rather than rounding them to discrete categories
  • computers utilizing neuromorphic architectures in which a large number of components are autonomously active (this feature differentiates such computers from the Von Neumann architecture, in which almost all components implement logical functions but cannot execute themselves)

Among these examples, analog computers may be considered to have the most complex physical mechanisms with their usage of continuous voltage levels. However, each such mechanism still represents a single number only, making it significantly less complex than the mechanisms used in devices such as mechanical integrators.

Nonstandard Computation

Whereas standard computation refers to the leftmost part of the spectrum, nonstandard computation refers to the rest of it. Thus, any process performing computation that utilizes physical mechanisms significantly more complex than single voltage levels counts as performing nonstandard computation.

It is immediate from this definition that the space of such systems is vast. Furthermore, and unlike with standard computation, different points on the spectrum may have significant differences (since complex mechanisms are more specialized, such systems might differ in what tasks they can solve efficiently or at all).

Despite various attempts to utilize nonstandard computation for commercial purposes, the potential of such systems is still vastly under-explored. Furthermore, there exists no mathematical theory for nonstandard computation analogous to that for standard computation – in fact, the language typically used in the literature ("theory of computation", "computational models", etc.) indicates that standard computation is viewed as synonymous with computation in general. Conversely, insofar as nonstandard computation is studied academically, the discussion tends to be narrowly focused on a single mechanism or system.

Due to these reasons, it is difficult to provide an analogous introduction to the topic; however, a number of relevant articles and videos are linked in the Resources section.

Computation in the Brain

In light of the preceding discussion, it is natural to ask which form of computation is employed by the human brain. Of central importance for this question is the widely accepted neuron doctrine, which can be roughly summarized as the claim that the neuron is the brain's central computational unit. Thus, to a first approximation, the neuron doctrine supports the idea that the brain utilizes standard computation, though the fact that the doctrine is not formally defined makes a precise statement impossible. Furthermore, it has since been accepted that the neuron doctrine is not absolute. For example, Wikipedia writes:[8]

While the neuron doctrine is a central tenet of modern neuroscience, recent studies suggest that there are notable exceptions and important additions to our knowledge about how neurons function.

Electrical synapses are more common in the central nervous system than previously thought. Thus, rather than functioning as individual units, in some parts of the brain large ensembles of neurons may be active simultaneously to process neural information. Electrical synapses are formed by gap junctions that allow molecules to directly pass between neurons, creating a cytoplasm-to-cytoplasm connection, known as a syncytium.

Furthermore, the phenomenon of cotransmission, in which more than one neurotransmitter is released from a single presynaptic terminal (contrary to Dale's law), contributes to the complexity of information transmission within the nervous system.

Nonetheless, it is still likely correct to describe the consensus as stating that the brain predominantly relies on standard computation, with exceptions playing modulatory roles only.

On the other hand, QRI holds that standard computation cannot support consciousness. Since much of the brain's processing is unconscious, this view does not imply that the entire brain relies on nonstandard computation, but it does imply that nonstandard computation is the basis of all human qualia, such as visual qualia. That said, note that the conscious and unconscious parts of the brain are likely in constant communication, which suggests that the construction of qualia is frequently influenced by unconscious results, which, in turn, may be computed via standard computation. For example, there is strong evidence for the existence of a complex, hierarchically structured feature detection pipeline throughout the cerebral cortex,[9] which likely employs exclusively standard computation yet is causally upstream of conscious object recognition.

Applications to Functionalism

The following section will sketch the relationship between functionalism and computation in light of the prior concepts discussed in this article. Note that this discussion assumes a realist point of view; if one views consciousness as an abstraction rather than as a fundamental and frame-invariant entity, the question of whether any given system exhibits consciousness becomes a matter of definition rather than fact.

Consciousness via Standard Computation

The xkcd comic "A Bunch of Rocks" illustrates an extreme example of an exotic Turing complete system that (arguably) would exhibit consciousness under functionalism.

Recall from the section on Standard Computation that any system which implements a Turing complete set of physical mechanisms is fully general, i.e., can perform arbitrary computations. In particular, any such system is capable of supporting human-level intelligence.

If one accepts functionalism, one is likely forced to hold that any such system also exhibits consciousness, regardless of its physical structure. This implication is not absolute – the most notable exception is Integrated Information Theory, which is a functionalist proposal according to the definition used in this wiki (if not necessarily according to the mainstream definition), yet implies that Von Neumann computers cannot support consciousness. More generally, the fact that functionalism has never been stated formally makes it impossible to draw any conclusion with certainty. Nonetheless, the claim that any Turing complete system is also consciousness-complete is at least suggested by functionalism, for reasons that will be outlined in the following.

As a first consideration, note that functionalism generally emphasizes the importance of "role", "behavior", or "function" over that of implementation. Thus, a coarse application of functionalism suggests that any two systems implementing the same program on an abstract level must also share the same consciousness. If one accepts this conclusion (and if at least one Turing complete system supports consciousness), then it logically follows that every Turing complete substrate supports consciousness. (Given any Turing complete system B, take a conscious program running on some system A and implement it on B; the aforementioned conclusion implies that the program running on B exhibits the same consciousness as that running on A.)

A similar but more complex argument relies on a hypothetical gradual transformation of one system into another. For example, consider a Turing complete system implementing binary logic gates with slow physical mechanisms, such as water or sand. Suppose we decide not to ascribe consciousness to this system due to its exotic nature, despite its ability to implement intelligent behavior. To derive a contradiction from this assumption, we will start with a classical digital computer running a program that exhibits consciousness. Then, we can imagine gradually increasing the spatial distribution of its hardware and lowering execution speed until the resulting system is spatially distributed over many miles and takes years to solve elementary problems. Since no step in this process will have changed the system's abstract description (or the "function" of its represented "mental objects"), functionalism presumably implies that the resulting system is equally conscious to the original. However, once the electronic system is both sufficiently slow and sufficiently spatially distributed, switching to hardware that implements elementary operations using water instead of electricity would arguably constitute only a minor change. (Alternatively, one could perform the transition separately for each logic gate.) Note also that a gradual degradation of consciousness throughout this process likely violates Dual-Aspect Monism since there is no corresponding change in the system's behavior.

For Integrated Information Theory in particular, the above thought experiment isn't applicable since the initial system (on the Von Neumann computer) is not conscious. Conversely, a computer using neuromorphic architecture starts off with a different algorithmic description, making the changes required for the transformation more substantial. It is arguable that the argument would then be more plausible given the conventional definition of functionalism vs. the one from this wiki; however, since the conventional definition is even less precise, this conclusion is again unclear.

In any case, recall that QRI rejects functionalism and is hence neither forced to accept that all Turing complete systems can exhibit consciousness, nor to find a rebuttal to the arguments discussed above.

Consciousness via Nonstandard Computation

Unlike for standard computation, there is no theorem guaranteeing that any one system utilizing complex mechanisms (i.e., nonstandard computation) is fully general. However, a hybrid system mixing both kinds of computation would, if properly set up, be at least as general as either component. Thus, if the component utilizing standard computation is Turing complete, the combined system would "inherit" Turing completeness while retaining the ability to utilize the computational capabilities of its nonstandard component whenever they are applicable.

The preceding paragraph may, to a first approximation, describe the human brain. But even if it does not, it remains logically possible to construct such a system, thus raising the question of whether functionalism would ascribe consciousness to hybrid systems as well.

At a first approximation, the answer appears to be yes since at least the first argument outlined in the previous section still holds. That is, if we have two systems A and B, where A is a human-level intelligence implemented on a Von Neumann computer and B a system with identical input-output behavior implemented as a hybrid system, then it seems against the spirit of functionalism to argue that A could be conscious even while B is not. This seems especially true for the conventional definition of functionalism since whatever is implemented via nonstandard computation in B arguably fulfills the same function as the corresponding component in A.

On the other hand, the more formal argument relying on the gradual transformation of one system into another no longer works since the two systems have fundamentally different architectures. However, it is unclear why this asymmetry should favor the uniform system; one would arguably have equal justification to claim that only hybrid systems are conscious. In fact, if functionalism is compatible with discriminations based on the type of computation, then even the claim that only components using nonstandard computation can be conscious seems defensible, which is widely considered the antithesis of functionalism.

Due to these considerations, it seems reasonable to conclude that, given an input/output level description corresponding to a human-level intelligence, any system that implements this description in a constructive way (i.e., by deriving results according to generalizable principles rather than via memorization) would be conscious under functionalism, regardless of whether it utilizes standard or nonstandard computation.

If one accepts this result, it follows that whether or not a physical process contributes to consciousness is not a property of the process itself but depends on whether the process is utilized by a larger system that implements intelligent behavior (or satisfies whatever other criterion is necessary for consciousness). If it is, one would consider the process to perform nonstandard computation and (due to the result mentioned above) ascribe consciousness to the entire system. Furthermore, even if the surrounding system didn't satisfy the requirements for consciousness, one would likely still describe the inner process as performing nonstandard computation if the surrounding system utilized its outputs in a systematic way.

Thus, we have derived a justification for the definition given in this article, according to which any process performs computation if its outputs are utilized (by any system) to obtain information. One might be less restrictive and claim that a process still performs computation if its results are not utilized (this position would be regular pancomputationalism); however, any conception that is more restrictive will result in situations wherein a (possibly conscious) system systematically uses a process for information gain, yet the inner process does not perform computation. Such situations seem to contradict the core intention of the concept, and because of this, a definition that precludes the existence of such cases is adopted in this wiki.

Functionalism vs. Frame-Invariance

The central result from the previous section can also be viewed as an argument against functionalism, or more broadly, against computation (particularly standard computation) as a valid lens through which to determine whether a given system is conscious. This type of argument (which is also the main point made in Mike Johnson's article on the topic) can be stated in four steps as follows:

  1. (Standard) computation appears to be frame-dependent – i.e., the question of what is being computed by any one process, and even whether the process computes anything at all, seems to depend on how the process is utilized by a different system, rather than on the process itself.
  2. Conversely, consciousness is frame-invariant (note again that this step – and consequently, the entire argument – relies on a realist conception of consciousness).
  3. Due to #2 and dual-aspect monism, the lens through which consciousness is analyzed must be frame-invariant as well.
  4. Thus, consciousness cannot be understood via (standard) computation.

Importantly, note that this argument does not imply that consciousness cannot be analyzed for its computational properties; it only implies that a computational analysis is unsuited to decide whether a system is conscious in the first place. For example, if the EM Hypothesis is correct, then consciousness is ultimately a physical phenomenon, which may be exhibited by systems that show no intelligent behavior. However, in the case of the human brain, evolution has recruited a physical architecture that segments the electromagnetic field in a way that is computationally relevant, and thus, human consciousness can be analyzed for its computational properties.

Once again, Integrated Information Theory provides the primary counter-example to this argument by utilizing a frame-invariant view of computation. (This is achieved by considering every possible frame and choosing the one that maximizes the amount of consciousness as quantified by its formalism.) However, note that while this solution does evade the argument sketched above, it comes at the cost of implications that themselves violate dual-aspect monism.

Finally, note that the argument does imply the frame-invariance of the (nonstandard) computation employed by the brain, i.e., that the nature of human qualia does not depend on their subsequent utilization by the unconscious part of the brain. Thus, different types of qualia are most accurately viewed as building blocks with inherent structure that can be strategically utilized for different purposes. For example – and contrary to what is often claimed in academic philosophy[10] – visual processing need not produce visual qualia, not even if it is conscious. Instead, the proper statement is that evolution has selected visual qualia for visual processing in humans, presumably because its computational properties are suited for the task. Furthermore, apparent exceptions to this rule ought to be taken at face value, or at any rate, do not present a logical contradiction or type error. It is even coherent to speculate about the information-to-qualia mapping employed by non-human animals, e.g., whether information obtained via echolocation is transformed into auditory or visual qualia in bat consciousness.

Resources

References

  1. Wikipedia contributors. (2023, December 1). Computation. In Wikipedia, The Free Encyclopedia. Retrieved 11:07, January 7, 2024, from https://en.wikipedia.org/w/index.php?title=Computation&oldid=1187871863
  2. Wikipedia contributors. (2023, October 22). Calculation. In Wikipedia, The Free Encyclopedia. Retrieved 11:06, January 7, 2024, from https://en.wikipedia.org/w/index.php?title=Calculation&oldid=1181407609
  3. Piccinini, G., & Maley, C. (2021). Computation in Physical Systems. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2021 ed.). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/sum2021/entries/computation-physicalsystems/
  4. Wikipedia contributors. (2023, July 4). Ball-and-disk integrator. In Wikipedia, The Free Encyclopedia. Retrieved 15:55, October 16, 2023, from https://en.wikipedia.org/w/index.php?title=Ball-and-disk_integrator&oldid=1163371333
  5. Harris, D., & Harris, S. (2010). Digital design and computer architecture. Morgan Kaufmann.
  6. Wikipedia contributors. (2023, October 13). Transistor count. In Wikipedia, The Free Encyclopedia. Retrieved 15:31, October 16, 2023, from https://en.wikipedia.org/w/index.php?title=Transistor_count&oldid=1179883986.
  7. Boolos, G. S., Burgess, J. P., & Jeffrey, R. C. (2002). Computability and logic (pp. 12-13). Cambridge University Press.
  8. Wikipedia contributors. (2023, November 16). Neuron doctrine. In Wikipedia, The Free Encyclopedia. Retrieved 16:25, January 15, 2024, from https://en.wikipedia.org/w/index.php?title=Neuron_doctrine&oldid=1185362846
  9. Grossberg, S. (2021). Conscious mind, resonant brain: How each brain makes a mind. Oxford University Press.
  10. E.g., Dennett, D. C. (1993). Consciousness explained (pp. 85-95). Penguin UK.