In the context of “Kolmogorov consciousness”, the complexity of an object can be defined with respect to a computing system, typically a computer, as the number of bits in the minimal algorithm required to compute that object. In condensed matter systems under equilibrium and nonequilibrium conditions, macroscopic properties distinct from those of the elementary constituents can emerge from large numbers of particles. Such large system sizes allow system properties to change as control parameters change, producing phase transitions. Inspired by this, it is natural to consider that a similar state transition could occur in computing systems at some critical point of Kolmogorov complexity when the system is optimized. In this paper, we propose a computational model that realizes a functional differentiation of a random neural network into two subnetworks, one specialized for memory and the other for program execution, through an evolutionary change of the network. The model suggests a neural mechanism for a human-like thought process, described as inference based on extracting rules embedded in random input while storing those inputs in long- or short-term memory. Because the proposed evolutionary model with sufficiently large complexity is driven by constraints that lead to an optimized state, such as selecting minimal-complexity systems from among many high-complexity alternatives, the resulting network can be regarded as an optimized “descriptor” of memory and program. The computational results indicate that the seeds of consciousness emerge when the system’s Kolmogorov complexity decreases from that of random neural networks to that of organized networks configured as descriptors of fluctuating environments. We refer to this stage as algorithmic consciousness.
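Kolmogorov complexity itself is uncomputable, so computational studies typically substitute a computable upper-bound proxy. As a minimal sketch, separate from the paper’s model, the snippet below uses zlib-compressed size as such a proxy (an assumption of this illustration, not a method claimed by the paper): a regular object admits a short description, whereas a pseudo-random one of the same length does not.

```python
import random
import zlib


def complexity_proxy(data: bytes) -> int:
    """Upper-bound proxy for Kolmogorov complexity:
    length in bytes of a zlib-compressed encoding of the data."""
    return len(zlib.compress(data, 9))


# A highly regular string has a short minimal description...
regular = b"ab" * 5000

# ...while pseudo-random bytes of the same length resist compression.
rng = random.Random(0)
noisy = bytes(rng.getrandbits(8) for _ in range(10000))

print(complexity_proxy(regular) < complexity_proxy(noisy))  # → True
```

In this picture, the transition described in the abstract corresponds to the proxy value of the network’s description dropping from the incompressible (random) regime toward the compressible (organized) regime.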