The Concept of Internal Representation in the QBIT Theory of Consciousness

The QBIT theory is an attempt to solve the problem of consciousness in the light of Quantum mechanics, Biology, Information theory, and Thermodynamics. “Internal representation” is a key concept in the QBIT theory of consciousness. An internal representation is defined as a pack of information (within a cognitive system) that represents an external stimulus. The QBIT theory suggests that when the robustness of an internal representation exceeds a certain threshold, a conscious experience (or a quale) is generated. In this paper, the concept of internal representation and its relation to consciousness are explored.


Introduction
The problem of consciousness is one of the most difficult problems in biology, and even in science as a whole. The QBIT theory is an attempt toward solving the puzzle of consciousness by putting together relevant pieces of evidence provided by Quantum mechanics, Biology, Information theory, and Thermodynamics.
To approach the problem of consciousness, the QBIT theory asks four fundamental questions and suggests preliminary answers for each of them: Question 1: How is a conscious experience (or a quale) generated? Answer: When the robustness of an internal representation exceeds a certain threshold, a conscious experience is generated.
Question 2: What is the nature of a conscious experience? Answer: A conscious experience is a dense pack of quantum information encoded in maximally entangled pure states.
Question 3: Why are conscious experiences (or qualia) subjective? Answer: Qualia are subjective because maximally entangled pure states are private and unshareable.
Question 4: How does a conscious experience acquire its particular quality or meaning? Answer: The quality or meaning of a quale is assigned by a series of computational operations that removes irrelevant information and adds new information to a representation.
These preliminary answers have been broadly discussed in a previous paper. 1 In the present paper, the concept of "internal representation" and its relation to consciousness are explored in more detail. The QBIT theory assumes that the concept of internal representation could play a key role in solving the problem of consciousness.
As Cyriel Pennartz 2 argues, many neuroscientists and cognitive scientists are convinced that consciousness requires the formation and transformation of internal representations by the nervous system. But what is an internal representation, and how is it related to consciousness? As noted above, an internal representation is a pack of information, within a cognitive system, that represents an external stimulus; the boundary between "internal" and "external" is drawn by the system that generates the representations. For example, in animals, it is the nervous system that is responsible for generating representations. Therefore, an event that occurs outside the nervous system is considered an external event, while an event that occurs inside the nervous system is an internal event. Even if an event occurs within the body but outside the nervous system, it is still an external event. For example, a sudden release of adrenaline into the bloodstream is an external event, while a sudden release of neurotransmitters into the synaptic cleft is an internal event.
When an external stimulus stimulates a sensory receptor, a series of internal representations of that stimulus is formed within the nervous system. These internal representations are organized in a hierarchical manner, ranging from low-level representations to mid-level and high-level ones. Higher-level representations are built upon lower-level ones and inherit their contents from them. 3 The formation and transformation of these representations are computational procedures that occur at different computational nodes of the nervous system. For example, in the visual system, the lowest-level representation is formed by computations within the retina. This representation is transferred to the next computational node (i.e., the lateral geniculate nucleus). Computations within the lateral geniculate nucleus transform this representation into a slightly higher-level representation. This representation is, in turn, transferred to the next computational node (i.e., the primary visual cortex), where it is transformed into another, higher-level representation. This process continues in an increasingly sophisticated manner at higher and higher computational nodes.
At successive stages of this hierarchy, the representation becomes more and more efficient through the selective loss of irrelevant information and the reduction of informational redundancy. Findings from an experiment on the auditory system of cats support the idea that producing internal representations with reduced redundancy in higher computational nodes is a universal organizational principle of sensory systems. 4 Removing redundant information amounts to a simplification of the big data entering a sensory system. Computational studies suggest that simplifying a representation by retaining salient information and discarding less salient information is an effective strategy for processing input data of high complexity using limited resources. 5 Furthermore, creating simpler representations has survival advantages because it consistently leads to better predictions and decisions. 6 Moreover, maximizing the simplicity of a representation maximizes its explanatory power.
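The strategy of retaining salient information while discarding the rest can be illustrated with a minimal sketch (an assumption for exposition, not part of the QBIT theory itself): keep only the k largest-magnitude values of a signal and drop everything else, yielding a sparser yet largely faithful representation.

```python
# Illustrative sketch: "simplification by retaining salient information"
# modeled as keeping only the k largest-magnitude entries of a signal.

def simplify(signal, k):
    """Return a copy of `signal` keeping its k most salient
    (largest-magnitude) entries; all other entries are set to 0."""
    order = sorted(range(len(signal)), key=lambda i: abs(signal[i]),
                   reverse=True)
    keep = set(order[:k])
    return [x if i in keep else 0.0 for i, x in enumerate(signal)]

signal = [9.0, 0.2, -7.5, 0.1, 0.3, 6.0, -0.2, 0.1]
sparse = simplify(signal, k=3)

# Most of the signal's "energy" survives even though 5 of 8 entries
# were discarded.
kept = sum(x * x for x in sparse) / sum(x * x for x in signal)
print(sparse)  # [9.0, 0.0, -7.5, 0.0, 0.0, 6.0, 0.0, 0.0]
```

The compressed version is shorter to describe (three values and their positions) yet preserves almost all of the variance of the input, which is the sense in which simplification and fidelity can coexist.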
Redundancy reduction is an essential part of the computations performed at each stage of the hierarchy. Another important computational operation (performed at some, but not all, stages) is the addition of new information to the representation. This new information (provided by recurrent feedback and top-down inputs) is integrated into the body of information already available in the representation. This progressive process of deletion and addition of information makes representations increasingly robust and abstract. When the robustness of a representation exceeds a certain threshold, a quale (or conscious experience) is generated.
According to the QBIT theory of consciousness, the robustness of a representation is determined by two factors: (1) the amount of information contained within that representation, and (2) the amount of information compression that the representation provides.
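The theory specifies these two factors but no formula. As a purely hypothetical operationalization, one could score a representation by multiplying the information it retains by the compression ratio it achieves and compare the score against a threshold; the scoring rule and all numbers below are invented for illustration.

```python
# Hypothetical operationalization (an assumption, not part of the
# QBIT theory): robustness as information content weighted by the
# compression ratio.

def robustness(info_bits, size_bits, raw_bits):
    """Toy score: information retained, weighted by the compression
    ratio (raw input size / representation size)."""
    compression = raw_bits / size_bits
    return info_bits * compression

# A high-level representation: 40 bits of relevant information packed
# into a 50-bit code derived from a 1000-bit raw input.
high_level = robustness(info_bits=40, size_bits=50, raw_bits=1000)

# A low-level representation: 30 bits of information, barely compressed.
low_level = robustness(info_bits=30, size_bits=800, raw_bits=1000)

THRESHOLD = 100.0  # invented threshold value
print(high_level > THRESHOLD)  # the high-level representation crosses it
print(low_level > THRESHOLD)   # the low-level one does not
```

Under this toy rule, a representation can raise its score either by carrying more information or by packing the same information into a smaller code, mirroring the theory's two factors.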

The hierarchy of representations
Inspired by the taxonomy proposed by Stanislas Dehaene and his colleagues 7 , the QBIT theory classifies internal representations into three distinct groups: subliminal, preconscious, and conscious. Subliminal representations are low-level representations, and the least robust ones. Preconscious representations (also called internal models) are mid-level representations. Conscious representations (also called qualia) are high-level representations, and the most robust ones.
The higher the level of a representation, the more robust it is. This means that a quale contains more information and provides more information compression than its associated internal model. The same is true when comparing an internal model with its preceding low-level representations.
As the level of a representation is elevated, it not only becomes more robust, but also becomes simpler and more meaningful. Therefore, a quale, sitting at the top of the hierarchy, is the simplest and the most meaningful representation that a conscious system can generate.
Both the increase in simplicity and the increase in meaningfulness are direct consequences of information compression. But how could information compression give rise to simplicity and meaningfulness? This question has been answered, to some extent, in a previous paper 1 that introduced the QBIT theory of consciousness. The focus of the present paper is on the concept of internal representation. The concept of information compression, and its consequences, will be explored in more detail in a future paper.
Representations in the visual system
In the visual system, the lowest-level computational node is the retina. Experimental evidence implies that the main goal of computations in the retina is to transform the visual input into statistically independent (or decorrelated) outputs as the first step in creating efficient representations in the visual system. 8 Nick Chater 6 argues that lateral inhibition in the retina could be viewed as a process of removing local correlations in the retinal input, thus providing a less redundant and hence more compressed representation of that input.
Creating statistically independent, redundancy-reduced representations is a desirable strategy for processing sensory information. For this reason, the nervous system attempts to transform its sensory inputs into statistically independent (and thus efficient) representations at the earliest stages of sensory processing. 9,10 Efficient representations allow the nervous system to obtain more information about its environment without the need to evolve to a larger size. Furthermore, efficient representations facilitate certain cognitive tasks, such as associative learning and pattern recognition. 11 To create efficient representations, the retina decomposes an image into distinct elements or features (such as motion, shape, edge, and color) that are statistically independent. This process is commonly known as "feature extraction". The retina thus extracts visual features from a visual stimulus by creating a redundancy-reduced (or compressed) representation for each of the extracted features. These are the first and lowest-level representations generated in the visual system. Upon formation, these retinal representations are transferred to the lateral geniculate nucleus (LGN) of the thalamus.
A growing body of evidence suggests that the LGN is not a simple relay station that passively transfers the retinal input to the visual cortex. On the contrary, it is an active computational node that optimizes the representations provided by the retina. 12 Evidence shows that, in the LGN, the representation of visual space is improved beyond the computational limits of the retina. 13 Martin Usrey and Henry Alitto 14 have described how visual representations generated by the retina are transformed within the LGN before being sent to the primary visual cortex (V1).
Representations received by V1 contain some irrelevant and redundant information that should be detected and removed as much as possible in order to generate more efficient representations. This is what actually happens in V1. Experimental evidence shows that, in the human brain, top-down expectations about the external world sharpen representations in V1. 15 This sharpening effect is achieved by suppressing neural activity, and hence removing information, that is inconsistent with current expectations. This expectation-induced suppression is in effect a kind of redundancy reduction and information compression. It is noteworthy that, whereas top-down expectation leads to suppressed responses in V1, it concurrently increases the amount of information contained in V1 representations. 15 Therefore, it is plausible to suggest that computations within V1 make representations more robust by increasing both the amount of information and the amount of compression.
In general, expectations (or prior information) are able to modify the contents of internal representations not only in V1 but also in other visual cortical areas. 16 This prior information about the external world allows the brain to quickly deduce plausible interpretations from (and attribute a kind of meaning to) visual inputs. In other words, the integration of prior information (or top-down expectations) with bottom-up sensory information could potentially give rise to the formation of a representation that is meaningful for the brain. This integration of information occurs in V1 as well as in other early visual areas, including V2 and V3. 16
Area V1 generates different kinds of low-level representations, including representations of color, shape, contour, and motion. These representations pass from V1 to V2 for further transformation. Area V2 is engaged in the analysis of several different visual features, including shape, size, color, and motion. The fact that V2 is involved in transforming such a broad spectrum of representations is consistent with its hierarchical position as the primary recipient of inputs from V1 and the source of projections to numerous visual areas in both the ventral and dorsal visual pathways. 5
Area V4 is a computational node in the ventral visual pathway that receives representations of color and contour from V2. There is evidence that V4 generates a representation of color that is tolerant not only to changes in luminance, but also to changes in saturation. 17,18 Furthermore, it has been demonstrated that representations of contour generated by V4 are compact and precise, and they form a good basis for detecting objects by shape. 19 With such a compact, efficient, and very accurate representation, algorithms can easily detect the existence of an object with a certain shape and locate its outline. Additionally, V4 performs data compression and generates compact representations of shape that are well suited for further computations. 20
Although the representations generated by V4 are very useful for performing some complex tasks as well as for further computation, they do not directly reflect subjective experiences. 21 Therefore, it seems plausible to suggest that representations generated at the level of V4 are mid-level representations, or internal models. In fact, in the hierarchy of visual cortical areas, V4 could be considered an intermediate stage that transforms low-level representations into mid-level ones. 19 These mid-level representations are transferred to the next computational node, where they are transformed into representations that more closely resemble subjective conscious experiences.
One cortical area that receives representations from area V4 is the inferior temporal cortex. Electrophysiological recordings in macaque monkeys imply that the inferior temporal cortex generates a collection of high-level representations of objects that can facilitate different object-related tasks, including the essential task of identifying the color of an object. 22 The inferior temporal cortex is considered to be the final stage in the ventral cortical visual pathway. Representations generated at this stage are transferred to a variety of brain areas, including the prefrontal cortex. 23 Some of these brain areas lie outside the chain of computational nodes responsible for the generation of conscious representations. These areas receive conscious or preconscious representations and use them for other cognitive functions, such as the control of behavior. The computational nodes responsible for conscious and preconscious representations are highly connected to, but distinct from, the computational nodes mediating goal-directed behavior and declarative memory. 2 It is not clear which computational nodes are responsible for the generation of conscious representations of color, shape, motion, and other visual features. However, evidence shows that the ventro-medial part of the occipito-temporal cortex generates representations that are relatively high-level and more closely related to subjective experience than those generated by the posterior-lateral part of the occipito-temporal cortex. 24

Functions of internal representations
Creating an internal representation of an external stimulus helps an animal (or a cognitive system) to behave appropriately with respect to that stimulus. In fact, a main function of internal representations is to provide guidance to a behaving agent. Agents without the ability to create internal representations are purely reactive agents whose decisions are based solely on sensory inputs. For this reason, their performance in a challenging environment may remain below an optimal level. 25 Internal representations enable complex and context-dependent behaviors that are necessary for survival in a challenging environment. 26 It has been demonstrated that when the environment, or the tasks that must be performed to survive in it, are complex enough, a cognitive system responds to this challenge by developing internal representations. Some tasks and behaviors require high-level (or conscious) representations. However, many tasks (even complex ones) can be performed using mid-level representations, or internal models. Therefore, the fact that an animal can generate internal representations and use them to perform complex tasks does not necessarily imply that the animal is conscious.
Even lower animals are able to generate internal representations and use them to solve complex problems and perform sophisticated tasks. Insects, for example, have the capacity to perceive polarized light. The plane of light polarization varies systematically across the blue sky, depending on the position of the Sun. For many insects, the polarization pattern of the blue sky serves as a cue for spatial navigation. Evidence shows that, in locusts, a specialized area of the brain (called the central complex) generates an internal representation of the polarization pattern of the sky. 27 To generate this internal representation, the central complex receives and computes spatial data coming from multiple other areas of the brain. The output of the central complex is compass-like information that stabilizes the direction of the moving insect relative to the external celestial cue. 28 This compass-like information is a kind of internal representation that endows locusts with remarkable navigational skills. With our current state of knowledge, we cannot determine how robust this compass-like representation is. Therefore, we cannot tell whether it is a conscious or an unconscious representation. However, Andrew Barron and Colin Klein 29 propose that insects have consciousness because this compass-like representation of the world from the animal's perspective is sufficient for subjective experience. According to the QBIT theory of consciousness, this proposal is not correct. What they call a "representation of the world from the animal's perspective" could be an internal model, not a conscious representation. It should be emphasized that an internal representation can guide the behavior of an animal even in the absence of consciousness.

Concluding remarks
In cognitive neuroscience, internal representations are considered information-bearing structures, and transformations are considered computational procedures that operate on those structures. 30 Consistent with this, the QBIT theory of consciousness defines an "internal representation" as a pack of information (more precisely, a pack of quantum information). Transformation of a representation refers to a series of computational operations by which some redundant or irrelevant information is removed and some relevant information is added.
A body of information that is received from the external world by a sensor is initially meaningless for the nervous system. A series of transformations puts this body of information into a context shaped by the organism's values and expectations, as well as by the requirements for survival. As Renato Ramos 31 argues, this contextualization could be considered part of the process responsible for ascribing meaning to information and representations. The QBIT theory of consciousness suggests that the meaning of a pack of information is not clear as soon as it is received by a sensor. Only when this pack of information is serially transformed by a hierarchy of computational nodes does it obtain a particular meaning.
A final remark should be made regarding the trace that a representation leaves in a computational node. Each time a representation is generated and transferred, it leaves a trace that acts as a kind of memory. A large body of evidence suggests that internal representations can be stored in a latent form within sensory cortices and reactivated at a later time. 32 This kind of memory is probably supported by a rapid rearrangement of synaptic weights within sensory cortices following initial activation. Reactivation of this reconfigured network, through either top-down attention or a bottom-up boost from a subsequent stimulus, would reveal this hidden structure and reactivate the latent representation. 32

References
1. Beshkar M.