Search for dynamical origin of social networks

The challenge of this work is to redefine the concept of an intelligent agent as a building block of social networks by presenting it as a physical particle with additional non-Newtonian properties. The proposed model of an intelligent agent, described by a system of ODEs coupled with its Liouville equation, is introduced and discussed. Following the Madelung equation, which belongs to this class, non-Newtonian properties such as superposition, entanglement, and probability interference, typical for quantum systems, are described. Special attention is paid to the capability to violate the second law of thermodynamics, which makes these systems neither Newtonian nor quantum. It is shown that the proposed model can be linked to mathematical models of living systems as well as to models of AI. The model is presented in two modifications. The first is illustrated by the discovery of a stochastic attractor approached by the social network; as an application, it is demonstrated that any statistics can be represented by an attractor of the solution to the corresponding system of ODEs coupled with its Liouville equation. It is emphasized that evolution toward the attractor reveals possible micro-mechanisms driving random events to the final distribution of the corresponding statistical law. Special attention is concentrated upon the power law and its dynamical interpretation: it is demonstrated that the underlying microdynamics supports the "violent reputation" of power-law statistics. The second modification, associated with a decision-making process, is applied to the solution of NP-complete problems, which are believed to be solvable by neither classical nor quantum algorithms. The approach is illustrated by solving a search of an unsorted database in polynomial time via resonance between an external force representing the address of the required item and the response representing its location.


Introduction.
All previous attempts to develop models for so-called active systems (i.e., systems possessing a certain degree of autonomy from the environment that allows them to perform motions not directly controlled from outside) have been based upon the principles of Newtonian and statistical mechanics. These models appear to be so general that they predict not only physical but also some biological, economic, and social patterns of behavior, exploiting such fundamental properties of nonlinear dynamics as attractors. Notwithstanding the indisputable successes of that approach (neural networks, distributed active systems, etc.), there is still a fundamental limitation that characterizes these models on a dynamical level of description: they draw no distinction between a solar system, a swarm of insects, and a stock market. Such phenomenological reductionism is incompatible with the first principle of progressive biological evolution associated with Darwin. According to this principle, the evolution of living systems is directed toward the highest levels of complexity, if complexity is measured by an irreducible number of different parts that interact in a well-regulated fashion (although in some particular cases deviations from this general tendency are possible). At the same time, the solutions to models based upon dissipative Newtonian dynamics eventually approach attractors where the evolution stops, while these attractors dwell on subspaces of lower dimensionality, and therefore of lower complexity (until a "master" reprograms the model). Therefore, such models fail to provide an autonomous progressive evolution of living systems (i.e., evolution leading to an increase of complexity). Let us now extend the dynamical picture to include thermal forces. That corresponds to the stochastic extension of Newtonian models, in which the Liouville equation extends to the Fokker-Planck equation that includes thermal-force effects through the diffusion term. Actually, it is a well-established fact that the evolution of life has a diffusion-based stochastic nature as a result of the multi-choice character of the behavior of living systems.
Such an extended thermodynamics-based approach is more relevant to modeling living systems, and it implies that the simplest living species must obey the second law of thermodynamics as physical particles do. However, then the evolution of living systems (during periods of their isolation) would be regressive, since their entropy would increase. Therefore, Newtonian physics is not sufficient for simulating the specific properties of intelligence. There is another argument in favor of a non-Newtonian approach to modeling intelligence. As pointed out by Penrose [1], Gödel's famous theorem has the clear implication that mathematical understanding cannot be reduced to a set of known computational rules. That means that no knowable set of purely computational procedures could lead to a computer-controlled robot that possesses genuine mathematical understanding. In other words, such privileged properties of intelligent systems as intuition or consciousness are non-computable within the framework of classical models. That is why a fundamentally new physics is needed to capture these "mysterious" aspects of intelligence, and in particular the decision-making process. The proposed dynamical model that captures the behavior of living systems is based upon an extension of the First Principles of classical physics to include the phenomenological behavior of living systems, i.e., to develop a new mathematical formalism within the framework of classical dynamics that would allow one to capture the specific properties of natural or artificial living systems, such as formation of a collective mind based upon abstract images of the selves and non-selves, exploitation of this collective mind for communications and for predictions of future expected characteristics of evolution, as well as for making decisions and implementing the corresponding corrections if the expected scenario differs from the originally planned one. The approach is based upon the assumption that even a primitive living species possesses additional non-Newtonian properties that are not included in the laws of Newtonian or statistical mechanics. These properties follow from a privileged ability of living systems to possess a self-image (a concept introduced in psychology) and to interact with it. The proposed mathematical formalism is quantum-inspired: it is based upon coupling the classical dynamical system representing the motor dynamics with the corresponding Liouville equation describing the evolution of initial uncertainties in terms of the probability density and representing the mental dynamics. The coupling is implemented by information-based supervising forces that can be associated with self-awareness. These forces fundamentally change the pattern of the probability evolution, thereby leading to a major departure of the behavior of living systems from the patterns of both Newtonian and statistical mechanics. Special attention is paid to the dynamical interpretation of the laws of statistics. Unlike a law in science, which is expressed in the form of an analytic statement with some constants determined empirically, a law of statistics represents a function all of whose values are determined empirically; in addition, the micro-mechanisms driving the underlying sequence of random events to their final probability distribution have remained beyond any description. As an example, the power law and its dynamical interpretation have been analyzed: it is demonstrated that the underlying dynamics supports the "violent reputation" of power-law statistics.
The proposed model has properties similar to those of the Madelung equation [2]. However, since the model is non-conservative, it can have attractors, and these attractors are the central point of this work.

General model of random ODE.
In this section, following [3,5,6,11], we introduce a general model of random ODE as a quantum-classical hybrid. Actually, our approach is based upon a modification of the Madelung equation, and in particular upon replacing the quantum potential with a different Liouville feedback, Fig. 1.
where v is the velocity vector. It describes the continuity of the probability density flow originated by the error distribution in the initial condition of ODE (3).
Let us rewrite Eq. (2) in the following form, where v is the velocity of a hypothetical particle. This is a fundamental step in our approach: in Newtonian dynamics, the probability never explicitly enters the equation of motion. In addition, the Liouville equation generated by Eq. (4) can be nonlinear with respect to the probability density ρ, and therefore the system (4), (5) departs from Newtonian dynamics. However, although it has the same topology as quantum mechanics (since now the equation of motion is coupled with the equation of continuity of the probability density), it does not belong to it either. Indeed, Eq. (4) is more general than the Hamilton-Jacobi equation: it is not necessarily conservative, and F is not necessarily the quantum potential, although further we will impose some restrictions upon it that link F to the concept of information. The topology of the system (4), (5) is illustrated in Fig. 1.

Remark.
Here and below we make a distinction between the random variable v(t) and its values V in probability space.
Prior to considering a specific form of the force F, we will make a comment concerning satisfaction of the normalization constraint over the volume V where Eqs. (4) and (5) are defined. Turning to Eq. (5) and integrating it over the volume V, one reduces the rate of change of the integral of ρ over V to a flux through the surface Φ bounding the volume V, and this flux vanishes where ρ vanishes.
Therefore, if the normalization constraint (6) is satisfied at t = 0, it is satisfied for all times.

Mathematical model.
In this section we consider a special feedback that does not belong to the family of feedbacks described in [3] but was considered in [5,6,11]. Here ρ* is a preset probability density satisfying the constraints (6), and ζ is a positive constant with dimensionality [1/sec]. As follows from Eq. (9), f has the dimensionality of force per unit mass; it depends upon the probability density ρ, and therefore it can be associated with the concept of information, so we will call it the information force. In this context, the coefficient ζ can be associated with the Planck constant, which relates Newtonian and information forces. But since we are planning to deal with objects that belong to the macro-world, ζ must be of the order of a viscous friction coefficient.
With the feedback (9), Eqs. (4) and (5) take the following forms, respectively. The last equation has the analytical solution, subject to an initial condition that satisfies the constraints (6). This solution converges to the preset stationary distribution. Rewriting Eq. (12) in the form shown, it follows that the solution of Eq. (11) has an attractor represented by the preset probability density ρ*. Substituting the solution (12) into Eq. (10), one arrives at the ODE that describes the stochastic process with the probability distribution (12). As will be shown below, the randomness of the solution of Eq. (16) is caused by an instability that is controlled by the corresponding Liouville equation.
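Since the displayed equations are unavailable here, the relaxation law stated above can be checked numerically. The sketch below assumes only that, per Eqs. (11)-(12), the density evolves as ρ(v, t) = ρ* + (ρ0 − ρ*) e^(−ζt); the grid, ζ, and the two particular densities are illustrative choices, not taken from the text.

```python
import numpy as np

# Numerical check of the relaxation described by Eqs. (11)-(12): the
# probability density is claimed to evolve as
#     rho(v, t) = rho* + (rho0 - rho*) * exp(-zeta * t),
# i.e., to relax exponentially to the preset density rho* while staying
# normalized. Gaussian initial density and Student (power-law) target
# are illustrative assumptions.

v = np.linspace(-50.0, 50.0, 4001)
dv = v[1] - v[0]
zeta = 1.0

rho0 = np.exp(-(v - 2.0) ** 2 / 2) / np.sqrt(2 * np.pi)             # initial density
rho_star = 2.0 / (np.sqrt(3.0) * np.pi) * (1 + v ** 2 / 3) ** -2    # Student t, 3 d.o.f.

def rho(t):
    """Closed-form solution (12) of the Liouville equation with feedback (9)."""
    return rho_star + (rho0 - rho_star) * np.exp(-zeta * t)
```

For this choice, the integral of ρ stays at unity for all t (normalization is preserved), and the gap to ρ* decays like e^(−ζt), so ρ* is indeed a static attractor in probability space.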
It is reasonable to assume that the solution (12) starts with a sharp initial condition (17). As a result of this assumption, all the randomness is generated only by the controlled instability of Eq. (16). Substitution of Eq. (17) into Eq. (16) leads to two different domains of v, v ≠ 0 and v = 0, where the solution has two different forms, respectively; Eq. (18) presents an implicit expression for v as a function of time, since ρ* is a known function.
Eq. (19) represents a singular solution, while Eq. (18) is a regular solution that includes an arbitrary constant C. The regular solution is unstable at t = 0, |v| → 0, where the Lipschitz condition is violated, and therefore an initial error always grows, generating randomness.
Let us analyze the behavior of the solution (18) in more detail. As follows from this solution, all the particular solutions for different values of C intersect at the same point v = 0 at t = 0, and that leads to non-uniqueness of the solution due to violation of the Lipschitz condition. Therefore, the same initial condition v = 0 at t = 0 yields an infinite number of different solutions forming the family (18); each solution of this family appears with a certain probability guided by the corresponding Liouville equation (11). For instance, in the cases plotted in Fig. 2, the most likely solution is the one that passes through the maximum of the probability density ρ*. However, with lower probabilities, other solutions of the same family can appear as well. Obviously, this is a non-classical effect. Qualitatively, this property is similar to that of quantum mechanics: the system keeps all the solutions simultaneously and displays each of them "by chance", while that chance is controlled by the evolution of the probability density (12).
Let us emphasize the connection between the solutions of Eqs. (10) and (11): the solution of Eq. (10) is a one-parameter family of trajectories (18), and each trajectory occurs with the probability described by the solution (15) of Eq. (11). The scenario of transition from determinism to randomness here is similar to that of quantum mechanics [2]: the combination of failure of the Lipschitz condition and emergence of the Hadamard instability at the same point t = 0 leads to a disturbance that can take any trajectory of the multivalued family, with the probability controlled by the Liouville equation (11). However, unlike classical chaos, here (as well as in quantum mechanics) the source of randomness is concentrated at one point, t = 0; beyond this point, the solution evolves along the "chosen" trajectory.
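The mechanism just described, a singular solution coexisting with a regular family that violates the Lipschitz condition at t = 0, can be illustrated with a minimal toy ODE; the cube-root right-hand side below is an assumption chosen for transparency, not the paper's Eq. (16).

```python
import numpy as np

# Toy illustration of randomness generated by failure of the Lipschitz
# condition: the ODE dv/dt = v**(1/3) has, from the initial condition
# v(0) = 0, the singular solution v = 0 together with the regular family
#     v(t) = +/-(2*t/3)**1.5,
# and an arbitrarily small disturbance selects one branch of that family.

def integrate(v0, dt=1e-4, steps=10_000):
    """Forward-Euler integration of dv/dt = sign(v)*|v|**(1/3) up to t = 1."""
    v = v0
    for _ in range(steps):
        v += dt * np.sign(v) * abs(v) ** (1.0 / 3.0)
    return float(v)

branch = (2.0 / 3.0) ** 1.5       # |v(1)| on the regular branch
up = integrate(+1e-12)            # tiny positive disturbance -> upper branch
down = integrate(-1e-12)          # tiny negative disturbance -> lower branch
frozen = integrate(0.0)           # exactly zero stays on the singular solution
```

However small the disturbance, the trajectory departs from v = 0 and lands on one of the two regular branches, mirroring the "choice made only once at t = 0" described above.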
The approach is generalized to the n-dimensional case simply by replacing v with a vector v = (v1, v2, ..., vn), since Eq. (11) does not include space derivatives.

Examples.
Let us start with the following normal distribution (24). As another example, let us choose the target density ρ* as Student's distribution, or the so-called power-law distribution (26), and consider the dynamics that drives random events to the power law.
Since our further analysis will be concentrated on power-law statistics, we take a closer look at the solution (27) and consider the trajectories of the underlying particle. The evolution of particular solutions at C = 1, 10, and 100 is plotted in Fig. 5. Here C is the parameter representing a particular random sample of the solution. This solution describes (in an implicit form) a one-parameter family of dynamical processes v = v(t, C). Each scenario (for a fixed C) occurs with the current probability (12) that asymptotically approaches the power-law distribution (26) at critical points. Those critical points that occur at large v can be associated with catastrophes.

Relations to chaos.
In this sub-section we outline similarities and differences between the dynamical attractor described above and classical chaos in Newtonian dynamics. Turning to [2], we observe that the origin of randomness in Newtonian mechanics is instability of ignorable variables, i.e., variables that do not contribute to the energy of the system. In the dynamics of the ODE described in [3] and in the previous section, as well as in quantum systems [2], the instability has a different nature: it is caused by the loss of uniqueness of the solution at a singular point due to failure of the Lipschitz condition at this point. In the context of terminal dynamics [4], this point represents a terminal repeller that is characterized by infinite divergence of trajectories. In other words, it is characterized by an unbounded positive Lyapunov exponent, and the power of the non-Lipschitz-originated instability is equivalent to the Hadamard, or blow-up, instability. As a result, this system makes a random choice of trajectory only once, at the beginning of the transition from determinism to randomness, while a Newtonian system may change trajectories continuously during its chaotic motion. However, as shown in [3], the random ODEs considered above belong neither to Newtonian nor to quantum mechanics, since they violate the second law of thermodynamics due to a capability to decrease entropy without external interaction. Actually, these systems can be considered as quantum-classical hybrids. Regardless of these differences, however, the attractors of the system (10), (11) have to be considered as a special type of chaos, since randomness there originates from dynamical instability rather than from an external random input.

A mystery of power-law statistics.
Random ODEs with chaotic attractors have been introduced and applied to demonstrate interference of probability in non-quantum systems [4] and to finding a global maximum [5]. The objective of this section is an application to the foundations of statistics. It was inspired by the mysterious power-law statistics that predicts social catastrophes: wars, terrorist attacks, market crashes, etc. Recent interest in the literature is concentrated on the half-century-old finding that the severity of interstate wars is power-law distributed, which belongs to the most striking empirical regularities in world politics. Surprisingly, similar catastrophes have been identified in physics (Ising systems, avalanches, earthquakes), and even in geometry (percolation). Although all these catastrophes have different origins, their similarity is based upon power-law statistics and, as a consequence, on scale invariance, self-similarity, and fractal dimensionality [6]. According to the theory of self-organized criticality, which explains the origin of this kind of catastrophe, each underlying dynamical system is attracted to a critical point separating two qualitatively different states (phases). This attraction is represented by a relaxation process of a slowly driven system. Transitions from one phase to another are accompanied by a sudden release of energy that can be associated with a catastrophe, and the severity of the catastrophe is power-law distributed. However, in order to overcome the critical point and enter a new phase, a slow input of external energy is required. The origin of this energy is well understood in physical systems, but not in social ones, since there are no well-established models of social dynamics. For that reason, we turn to the previous section and start with comparing the underlying dynamics of the normal and power-law distributions (see Figs. 3, 4, and 6). Let us recall that the normal distribution is commonly encountered in practice and is used throughout statistics, the natural sciences, and the social sciences as a simple model for complex phenomena. For example, the observational error in an experiment is usually assumed to follow a normal distribution, and the propagation of uncertainty is computed using this assumption. But statistical inference using a normal distribution is not robust to the presence of outliers (data unexpectedly far from the mean, due to exceptional circumstances, observational error, etc.). When outliers are expected, data may be better described by a heavy-tailed distribution such as the power-law distribution. As demonstrated in Fig. 5, the normal and power-law distributions have very close configurations excluding the tails. Despite that, however, the types of random events described by these statistics are fundamentally different. Indeed, processes described by normal distributions usually come from physics, chemistry, biology, etc., and they are characterized by a smooth evolution of the underlying dynamical events. On the contrary, the processes described by power laws originate from events driven by human decisions (wars, terrorist acts, market crashes), and therefore they are associated with catastrophes. Surprisingly, the 3D plots of Eqs. (25) and (27) (see Figs. 3 and 4), describing the dynamics that drives random events to the normal and the power-law distributions, respectively, demonstrate the same striking difference between these distributions, that is: a smooth evolution to the normal distribution, and a "violent" transition, full of densely distributed discontinuities (see Fig. 4), to the power-law distribution. Is this a coincidence? Indeed, the proposed random dynamics is based upon global assumptions, and it does not bear any specific information about a particular statistics as an attractor. However, the last statement should be slightly modified: actually, the model of random dynamics is tailored to describe the behavior of living systems, and in particular the decision-making process [7]. Is that why the random dynamics captures the "violent" properties of power-law statistics associated with the human touch? We will discuss possible answers to this question below. First of all, we have to consider the problem of uniqueness of a dynamical system whose solutions approach a preset attractor. In classical dynamics the answer to this problem is clear: there is an infinite number of different dynamical systems that approach the same attractor, and that is true for static, periodic, and chaotic attractors. Although in the random dynamics (10), (11) we are dealing with a special type of chaotic attractor that represents a preset statistics, the answer is similar. Indeed, let us turn to Eq. (10) and add arbitrary terms as follows, where A is an arbitrary constant and B(t) is an arbitrary function of time.
It is easily verifiable that this modification does not change Eq. (11), and therefore the attractor ρ* remains the same, despite the different solutions of Eq. (28).
That excludes the possibility of a rigorous proof that the random dynamics necessarily describes the same random events that constitute the corresponding statistics. But in order to explain at least some correlations between them, we will turn to a less rigorous approach known as Occam's Razor. After several modifications, a "scientific" version of this approach can be formulated as follows: one should proceed to simpler theories until simplicity can be traded for greater explanatory power. Obviously, application of this principle does not guarantee success; however, some encouraging results of its application are known. In science, Occam's Razor is used as a heuristic (rule of thumb) to guide scientists in the development of theoretical models rather than as an arbiter between published models. In physics, parsimony was an important heuristic in the formulation of special relativity by Albert Einstein, the development and application of the principle of least action by Pierre Louis Maupertuis and Leonhard Euler, and the development of quantum mechanics by Max Planck, Werner Heisenberg, and Louis de Broglie. There have been attempts to derive Occam's Razor from probability theory, notably by Harold Jeffreys and E. T. Jaynes. Using Bayesian reasoning, a simple theory is preferred to a complicated one because of its higher prior probability. Hence, application of Occam's Razor requires a definition of the concept of complexity/simplicity. However, in our case the simplest dynamical system is obvious: it follows from Eq. (28) when A = 0 and B(t) ≡ 0, i.e., the most likely scenario describing the dynamics that drives random events to the power-law distribution is still the dynamical system of Eqs. (10) and (11) discussed above. However, if additional information about the dynamics of the random events is available, it should be incorporated into the enlarged model (28) through a best-fit adjustment of the constant A and the parameterized function B(t), thereby trading simplicity for greater explanatory power. We have to emphasize that the enlarged dynamical system still has the same power-law statistics as an attractor.
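The contrast drawn above between smooth normal statistics and catastrophe-prone power-law statistics can be reproduced by a simple sampling experiment; the Student distribution with 1.5 degrees of freedom and the sample size are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Sampling illustration of the normal vs. power-law contrast: the bulks of
# the two samples look alike, but the power-law sample produces rare extreme
# excursions ("catastrophes") that never occur in the normal sample.

rng = np.random.default_rng(42)
n = 100_000
normal_sample = rng.normal(size=n)
power_sample = rng.standard_t(df=1.5, size=n)    # power-law (heavy) tails

extreme_normal = float(np.max(np.abs(normal_sample)))   # largest normal event
extreme_power = float(np.max(np.abs(power_sample)))     # largest power-law event
tail_normal = float(np.mean(np.abs(normal_sample) > 5.0))  # fraction beyond 5
tail_power = float(np.mean(np.abs(power_sample) > 5.0))
```

With a sample of this size the largest normal event stays below roughly five standard deviations, while the power-law sample routinely produces events orders of magnitude larger, the statistical counterpart of the "violent" trajectories in Fig. 4.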

General case.
Based upon the proposed model, a simple algorithm for finding a dynamical system that is attracted to a preset n-dimensional statistics can be formulated. The idea of the proposed algorithm is very simple: based upon the system (22), (23), introduce the probability density ρ*(v1, v2, ..., vn) representing the statistics to which the solution of Eq. (23) is to be attracted, and insert it into Eqs. (22) and (23). The solution of Eq. (22) will eventually approach a chaotic attractor whose probability density coincides with the preset statistics. As in the case of the one-dimensional power-law statistics, here the solution of Eq. (22) will present the most likely scenario describing the dynamics that drives random events to the preset statistics. Moreover, if additional information about the dynamics of the random events is available, it should be incorporated into the enlarged model (28) through a best-fit adjustment of the constants A_i, the parameterized functions B_i(t), as well as the constants T_ij that introduce zero-divergence terms for n > 1, thereby trading simplicity for greater explanatory power without changing the preset statistics, as in the one-dimensional case. The zero-divergence terms do not enter Eq. (23), and therefore they do not change the static attractor in probability space (which corresponds to the chaotic attractor in physical space). However, they may significantly change the configuration of the random trajectories in physical space, making the dynamics more sophisticated.
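A minimal sketch of this algorithm for n = 2: take a preset two-dimensional density and verify that the exponential relaxation law of Eq. (12), applied pointwise, drives an initial density to it. The two-peak Gaussian mixture target below is an illustrative assumption.

```python
import numpy as np

# Two-dimensional instance of the algorithm: insert a preset joint density
# rho*(v1, v2) and check that the density relaxes to it by the same
# exponential law as in the one-dimensional case, Eq. (12).

x = np.linspace(-6.0, 6.0, 241)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2

def gauss2(mx, my, s):
    """Isotropic 2D Gaussian density centered at (mx, my)."""
    return np.exp(-((X - mx) ** 2 + (Y - my) ** 2) / (2 * s ** 2)) / (2 * np.pi * s ** 2)

rho_star2 = 0.5 * gauss2(-2, -2, 0.7) + 0.5 * gauss2(2, 2, 0.7)  # preset 2D statistics
rho0_2 = gauss2(0, 0, 1.5)                                       # initial uncertainty

def rho2(t, zeta=1.0):
    """Pointwise relaxation of the joint density toward rho*."""
    return rho_star2 + (rho0_2 - rho_star2) * np.exp(-zeta * t)
```

Normalization is preserved at every instant, and the joint density converges to the preset mixture, i.e., the preset two-dimensional statistics plays the role of the attractor.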
It should be noted that the proposed approach imposes only a weak restriction upon the spatial structure of the function ρ*({v}): it need only be integrable, since there are no space derivatives in Eq. (23). This means that ρ*({v}) does not have to be differentiable. For instance, it can be represented by a nowhere-differentiable function of the Weierstrass type with 0 < a < 1 and ab > 1 + 1.5π.

Departure of social network from Newtonian dynamics.
We will start with the mathematical formulation of the n-dimensional versions of Eqs. (22) and (23).
The model is represented by a system of nonlinear ODEs (Eqs. (22)) and a linear ODE (Eq. (23)) coupled in a master-slave fashion: Eq. (23) is to be solved independently, prior to solving Eqs. (22).
The terms describing the contributions of the feedbacks in Eqs. (22) have the dimensionality of acceleration, and therefore they can be interpreted as forces per unit mass. In order to distinguish them from "physical" forces, which are associated with energy and which are not included in these equations of motion, we will call them information forces.
The major departure of the model of the intelligent agent from Newtonian systems is the transition from determinism to randomness and the formation of stochastic attractors. Unlike chaos in Newtonian systems, randomness in social network dynamics is fully controlled by the Liouville equation (23) via the information force (10). The Langevin version of Newtonian dynamics has a dynamical structure similar to that of the model of the intelligent agent; however, the fundamental difference between them is in the origin of randomness: in Langevin systems, randomness is due to a random external input, while in the proposed agent it is self-generated. In terms of the transition to randomness, this departure links to quantum mechanics, and that could be expected based upon the similarity between the dynamical structures of the intelligent agent and quantum systems: in both cases there is a feedback from the Liouville equation to the equation of motion, although these feedbacks are different. Indeed, the Madelung version of the Schrödinger equation has the feedback from the Liouville equation to the Hamilton-Jacobi equation in the form of the quantum potential [2], while in the model of the intelligent agent the feedback is represented by the information force (10). Another fundamental departure from Newtonian dynamics is a violation of the second law of thermodynamics through the capability of moving from disorder to order without help from outside. In order to demonstrate it, let us turn to Eq. (11) and select the initial density ρ0 and the preset (target) density ρ* such that the entropy of ρ* is the lower one. Then, expressing Eq. (11) in terms of entropy, one obtains that the entropy of the current probability density is monotonically decreasing. Therefore, the model of the intelligent agent does not necessarily comply with the second law of thermodynamics.
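The entropy decrease claimed above can be checked numerically under the relaxation law of Eq. (12); the particular Gaussian densities and ζ below are illustrative assumptions.

```python
import numpy as np

# Entropy S(t) = -integral of rho*ln(rho) for the relaxation of Eq. (12):
# a broad initial density ("disorder") contracting onto a narrow preset
# density ("order") has monotonically decreasing entropy, illustrating the
# claimed departure from the second law of thermodynamics.

v = np.linspace(-20.0, 20.0, 4001)
dv = v[1] - v[0]

def gauss(sigma):
    return np.exp(-v ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

rho0 = gauss(3.0)        # broad initial uncertainty
rho_star = gauss(0.5)    # narrow preset density

def entropy(t, zeta=1.0):
    rho = rho_star + (rho0 - rho_star) * np.exp(-zeta * t)
    integrand = np.where(rho > 0, rho * np.log(rho), 0.0)
    return float(-np.sum(integrand) * dv)

S = [entropy(t) for t in (0.0, 0.5, 1.0, 2.0, 4.0, 8.0)]
```

The entropy falls from that of the broad Gaussian (0.5 ln(2πe·9) ≈ 2.52) toward that of the narrow one (≈ 0.73) without any external interaction, exactly the disorder-to-order evolution described in the text.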
It should be emphasized that if the feedback (10) is de-activated, the agent becomes a simple Newtonian particle.

Similarity of intelligent agent and quantum particle.
In this sub-section we will demonstrate that an intelligent agent is similar, but not identical, to a quantum particle. Such a conclusion should be expected, since the dynamical structure of the agent differs from that of a quantum particle only by the type of the feedback from the Liouville equation (see Fig. 1). a. Superposition. Let us analyze the behavior of the intelligent agent. As follows from Eq. (18), all the particular solutions intersect at the same point v = 0 at t = 0, and that leads to non-uniqueness of the solution due to violation of the Lipschitz condition (see Eq. (20)). Therefore, the same initial condition v = 0 at t = 0 yields an infinite number of different solutions forming the family (18); each solution of this family appears with a certain probability guided by the Liouville equation (11). Obviously, this is a non-classical effect. Qualitatively, this property is similar to that of quantum mechanics: the system keeps all the solutions simultaneously and displays each of them "by chance", while that chance is controlled by the evolution of the probability density (12).
b. Uncertainty Principle. As follows from Eq. (18), after differentiating it over time and eliminating the arbitrary constant C, one finds that the combination (35) of the position and the velocity of the agent is constant along a "fixed" trajectory.
In particular, at t = 0, v and its time derivative cannot be defined separately. This is an analog of the uncertainty principle formulated by Heisenberg in quantum mechanics.

c. Wave-particle duality.
The "duality" follows from Eq. (10), which describes the "trajectories" of particles, while Eq. (11) represents the wave of probability that captures the particle "scattering". Thus, the similarity between the social network and quantum systems is due to a feedback from the Liouville equation to the equation of motion (which does not exist in Newtonian physics), while the difference between these models is due to the different types of feedback, i.e., between the quantum potential and the information force. d. Entanglement. Let us consider a two-dimensional case of the social network that follows from Eqs. (22) and (23). The solution of Eq. (38) has the same form as in the one-dimensional case. Following the same steps as in the one-dimensional case, one arrives at the solutions of Eqs. (40) and (41), respectively, that are similar to the solution (18). Since ρ*(v1, v2) is a known (preset) function, Eqs. (42) and (43) implicitly define v1 and v2 as functions of time. Eliminating time t and the arbitrary constants C1, C2, one obtains Eq. (44). Thus, the ratio (44) is deterministic, although both the numerator and the denominator are random (see Eqs. (42) and (43)). This is a fundamental non-classical effect representing a global constraint. Indeed, two random functions are considered statistically equal if they have the same statistical invariants; their point-to-point equality is not required (although it can happen, with a vanishingly small probability). As demonstrated above, the diversion of determinism into randomness via instability (due to the Liouville feedback), followed by the conversion of randomness into partial determinism (or coordinated randomness) via entanglement, is a fundamental non-classical paradigm.
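The coordinated-randomness effect expressed by the deterministic ratio (44) can be mimicked by a toy model built from the non-Lipschitz family discussed earlier; the branch dynamics below is an assumption for illustration, not Eqs. (42)-(43) themselves.

```python
import numpy as np

# Toy analogue of entanglement: two components leave v = 0 along randomly
# chosen branches of the non-Lipschitz families v_i(t) = s_i*(2*a_i*t/3)**1.5
# with random signs s_i = +/-1. Each component is random, yet the magnitude
# ratio |v1/v2| = (a1/a2)**1.5 is the same deterministic constant on every
# realization: coordinated randomness through a global constraint.

rng = np.random.default_rng(7)
a1, a2 = 2.0, 0.5
t = np.linspace(0.1, 5.0, 50)

ratios = []
for _ in range(200):                       # 200 random realizations
    s1, s2 = rng.choice([-1.0, 1.0], size=2)
    v1 = s1 * (2 * a1 * t / 3) ** 1.5
    v2 = s2 * (2 * a2 * t / 3) ** 1.5
    ratios.append(np.abs(v1 / v2))

ratios = np.array(ratios)
expected = (a1 / a2) ** 1.5                # deterministic ratio of magnitudes
spread = float(np.ptp(ratios))             # spread across realizations & times
```

Numerator and denominator differ from run to run, yet their magnitude ratio never fluctuates, a cartoon of the "statistically random but pointwise constrained" behavior of Eq. (44).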
Let us discuss a more general characteristic of entanglement in physics.

Criteria for non-local interactions.
Based upon an analysis of all the known interactions in the Universe, which are local, one can formulate the following criteria of non-local interactions: they are not mediated by another entity, such as a particle or field; their actions are not limited by the speed of light; the strength of the interactions does not drop off with distance. All of these criteria lead us to the concept of the global constraint as a starting point.

Global constraints in physics.
It should be recalled that the concept of a global constraint is one of the main attributes of Newtonian mechanics. It includes such idealizations as a rigid body, an incompressible fluid, an inextensible string and a membrane, a non-slip rolling of a rigid ball over a rigid body, etc. All of these idealizations introduce geometrical or kinematical restrictions on positions or velocities of particles and provide "instantaneous" speed of propagation of disturbances. Let us discuss the role of the reactions of these constraints. One should recall that in an incompressible fluid, the reaction of the global constraint ∇ ⋅ v ≥ 0 (expressing non-negative divergence of the velocity v) is a non-negative pressure p ≥ 0; in inextensible flexible (one- or two-dimensional) bodies, the reaction of the global constraint gij ≤ g0ij, i,j = 1,2 (expressing that the components of the metric tensor cannot exceed their initial values) is a non-negative stress tensor σij ≥ 0, i,j = 1,2. It should be noticed that all the known forces in physics (the gravitational, the electromagnetic, the strong and the weak nuclear forces) are local. However, the reactions of the global constraints listed above do not belong to any of these local forces, and therefore they are nonlocal. Although these reactions are successfully applied in engineering approximations of theoretical physics, one cannot relate them to the origin of entanglement, since they are the result of an idealization that ignores the discrete nature of matter. However, there is another type of global constraint in physics: the normalization constraint (see Eq. (6)). This constraint is fundamentally different from those listed above for two reasons. Firstly, it is not an idealization, and therefore it cannot be removed by taking into account more subtle properties of matter such as elasticity, compressibility, discrete structure, etc. Secondly, it imposes restrictions not upon the positions or velocities of particles, but upon the probabilities of their positions or velocities, and that is where the entanglement comes from. Indeed, if the Liouville equation is coupled with the equations of motion, as in quantum mechanics, the normalization condition imposes a global constraint upon the state variables, and that is the origin of quantum entanglement. In quantum physics, the reactions of the normalization constraints can be associated with the energy eigenvalues that play the role of the Lagrange multipliers in the conditional extremum formulation of the Schrödinger equation, [9]. In the intelligent agent models, the Liouville equation is also coupled with the equations of motion (although the feedback is different). And that is why the origin of entanglement in the social network model is the same as in quantum mechanics.
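To see why the normalization condition acts as a global constraint, it helps to check numerically that a probability-conserving evolution preserves the total probability exactly while redistributing the density. The sketch below is a generic finite-difference diffusion of a density on a periodic grid; it illustrates only the conservation property, not Eq. (6) or the Liouville feedback itself.

```python
def diffuse(rho, d=0.1):
    """One explicit step of rho_t = d * rho_xx on a periodic grid;
    the scheme conserves the normalization sum(rho) exactly."""
    n = len(rho)
    return [rho[i] + d * (rho[(i - 1) % n] - 2.0 * rho[i] + rho[(i + 1) % n])
            for i in range(n)]

rho = [0.0] * 100
rho[50] = 1.0            # normalized point distribution
for _ in range(200):
    rho = diffuse(rho)

# the density has spread out, but the global constraint sum(rho) = 1 holds
assert abs(sum(rho) - 1.0) < 1e-9
assert max(rho) < 0.1
```

Every local update is coupled to all the others through the conserved sum: no entry can change without the rest compensating, which is the discrete analog of the normalization constraint on ρ.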
3. Speed of action propagation. Further illumination of the concept of quantum entanglement follows from a comparison of quantum and Newtonian systems. Such a comparison is convenient to perform in terms of the Madelung version of the Schrödinger equation, [2]. As follows from these equations, Newtonian mechanics (ℏ = 0) is of a hyperbolic type, and therefore any discontinuity propagates with a finite speed, i.e., Newtonian systems do not have non-localities. But quantum mechanics (ℏ ≠ 0) is of a parabolic type. This means that any disturbance of the probability density in one point of space is instantaneously transmitted to the whole space, and this is the mathematical origin of non-locality. But is this a unique property of quantum evolution? Obviously, it is not. Any parabolic equation (such as the Navier-Stokes equations or the Fokker-Planck equation) has exactly the same non-local properties. However, the difference between the quantum and classical non-localities is in their physical interpretation. Indeed, the Navier-Stokes equations are derived from simple laws of Newtonian mechanics, and that is why the physical interpretation of their non-locality is very simple: if a fluid is incompressible, then the pressure plays the role of a reaction to the geometrical constraint of incompressibility, and it is transmitted instantaneously from one point to the whole space (the Pascal law). One can argue that the incompressible fluid is an idealization, and that is true. However, it does not change our point: such a model has a lot of engineering applications, and its non-locality is well understood. The situation is different in quantum mechanics, since the Schrödinger equation has never been derived from Newtonian mechanics: it has been postulated. In addition, the solutions of the Schrödinger equation are random, while the origin of the randomness does not follow from the Schrödinger formalism. That is why the physical origin of the same mathematical phenomenon cannot be reduced to simpler concepts such as "forces": it should be accepted as an attribute of the Schrödinger equation. Let us turn now to the L2-particle model. The formal difference between it and quantum systems is in the feedback from the Liouville equation to the equations of motion: the gradient of the quantum potential is replaced by the information forces, while the equations of motion are written in the form of the second Newton's law rather than in the Hamilton-Jacobi form. The corresponding Liouville equation becomes an ODE (see Eq. (4.2)) in which the state variables play the role of parameters; it can be easily seen from the solution (4.3) that all the changes in the state variables occur simultaneously. Thus, both quantum systems and the L2-particle model possess the same non-locality: instantaneous propagation of changes in the probability density, and this is due to the similar topology of their dynamical structure, in particular, due to the feedback from the Liouville equation.
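The parabolic-versus-hyperbolic distinction can be made tangible with the exact point-source solution of the one-dimensional diffusion equation (a standard heat-kernel computation, not specific to the Madelung equation): after an arbitrarily short time, the disturbance is already nonzero arbitrarily far from the source.

```python
import math

def heat_kernel(x, t, D=1.0):
    """Exact solution of u_t = D*u_xx for a unit point disturbance at x = 0."""
    return math.exp(-x * x / (4.0 * D * t)) / math.sqrt(4.0 * math.pi * D * t)

t = 1e-3                       # a very short time after the disturbance
u_far = heat_kernel(1.0, t)    # many diffusion lengths away from the source
print(u_far)                   # astronomically small, but strictly positive
assert u_far > 0.0
# a hyperbolic (wave) equation with speed c would give exactly zero
# at x = 1 for any t < 1/c: no instantaneous propagation there
```

The comment at the end states the contrast used in the text: for a hyperbolic equation the support of the disturbance expands at a finite speed, while for the parabolic case the support is the whole line for every t > 0.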

Origin of randomness in physics.
Since entanglement in quantum systems, as well as in social network models, is exposed via instantaneous propagation of changes in the probability density, it is relevant to ask what the origin of randomness in physics is. The concept of randomness has a long history. Its philosophical aspects were first raised by Aristotle, while the mathematical foundations were introduced and discussed much later by Henri Poincaré, who wrote: "A very slight cause, which escapes us, determines a considerable effect which we cannot help seeing, and then we say this effect is due to chance". Actually, Poincaré suggested that the origin of randomness in physics is dynamical instability, and this viewpoint has been corroborated by the theories of turbulence and chaos. However, the theory of dynamical stability developed by Poincaré and Lyapunov revealed the main flaw of physics: its fundamental laws do not discriminate between stable and unstable motions. But unstable motions cannot be realized and observed, and therefore a special mathematical analysis must be added to find out the existence and observability of the motion under consideration. However, then another question can be raised: why can turbulence, as a post-instability version of an underlying laminar flow, be observed and measured? In order to answer this question, we have to notice that the concept of stability is an attribute of mathematics rather than physics, and in mathematical formalism, stability must be referred to the corresponding class of functions. For example, a laminar motion with a sub-critical Reynolds number is stable in the class of deterministic functions. Similarly, a turbulent motion is stable in the class of random functions. Therefore, the same physical phenomenon can be unstable in one class of functions but stable in another, enlarged class of functions, [10]. Thus, we are now ready for the following conclusion: any stochastic process in Newtonian dynamics describes a physical phenomenon that is unstable in the class of deterministic functions. This elegant union of physics and mathematics was disturbed by the discovery of quantum mechanics, which complicated the situation: quantum physicists claim that quantum randomness is the "true" randomness, unlike the "deterministic" randomness of chaos and turbulence. Richard Feynman in his "Lectures on Physics" stated that randomness in quantum mechanics is postulated, and that closes any discussion about its origin. However, a recent result disproved the existence of the "true" randomness. Indeed, as shown in [2], the origin of randomness in quantum mechanics can be traced down to the instability generated by the quantum potential at the point of departure from a deterministic state, if for the dynamical analysis one transfers from the Schrödinger to the Madelung equation. As demonstrated there, the instability triggered by the failure of the Lipschitz condition splits the solution into a continuous set of random samples representing a "bridge" to the quantum world. Hence, we can now state that any stochastic process in physics describes a physical phenomenon that is unstable in the class of deterministic functions. Actually, this statement can be used as a definition of randomness in physics.
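The claimed mechanism, instability at a point where the Lipschitz condition fails, can be reproduced with the textbook example dv/dt = v^(1/3), used here as a generic stand-in (the quantum-potential-driven instability of [2] is not reproduced in this excerpt). The derivative of v^(1/3) is unbounded at v = 0, so solutions launched from practically indistinguishable initial data diverge to finite, opposite outcomes.

```python
def run(eps, dt=1e-3, steps=2000):
    """Euler integration of dv/dt = v**(1/3) (real cube root), which
    violates the Lipschitz condition at v = 0."""
    v = eps
    for _ in range(steps):
        v += dt * (abs(v) ** (1.0 / 3.0)) * (1.0 if v >= 0 else -1.0)
    return v

# two solutions launched from indistinguishable initial data near v = 0 ...
v_plus, v_minus = run(+1e-12), run(-1e-12)
print(v_plus, v_minus)
# ... are amplified into O(1) outcomes of opposite sign
assert v_plus > 0.1 and v_minus < -0.1
```

Replacing the fixed perturbations by vanishingly small random ones turns the single deterministic equation into a generator of a continuous set of random samples, which is the "bridge" described in the text.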

Communications and self-organization in social network.
In this sub-section we consider specific properties of social networks associated with the concepts of attractors, communications, self-organization, competition, etc., with emphasis on those aspects that cannot be captured by Newtonian dynamics.

a. Attractors.
Recent advances in non-linear dynamics have opened up a new direction in information processing based upon special properties of solutions to dynamical systems. In this new role, the dynamical system is not derived from the Lagrange or Hamilton principles; it is rather created to simulate the behavior of an observed object whose law of motion is not well understood. For instance, even the simplest living systems interact in a ''non-Newtonian'' way via flows of information that are produced and processed by a signaling system whose complexity on a bio-chemical level is enormous. In order to incorporate this kind of phenomena into the process of self-organization and pattern formation on the physical level of description, one has to find a dynamical equivalent that captures the phenomenology of the observed behavior. Such an equivalent can be associated with the concept of a static attractor, the most powerful modeling tool for the synthesis of complex patterns of behavior. An attractor is a stable dissipative structure that does not depend (at least, within a certain basin) upon the initial conditions. Due to this property, the whole history of evolution prior to attraction becomes irrelevant, and that represents a great advantage for information processing, in particular, for associative memory and pattern recognition. However, classical artificial neural networks are effective in a deterministic and repetitive world; faced with uncertainties and unpredictability, they fail. At the same time, many natural and social phenomena exhibit some degree of regularity only on a higher level of abstraction, i.e., in terms of some invariants. For instance, each particular realization of a stochastic process can be unpredictable in its details, but the whole ensemble of these realizations, i.e., "the big picture", preserves the probability invariants (expectation, moments, information, etc.) and is therefore predictable in terms of behavior "in general". Therefore, the next step in the expansion of the concept of the attractor is a stochastic attractor, which dwells in the probability space rather than the physical space. In order to illustrate both concepts of static and stochastic attractors, let us turn to Eq. (11). Obviously, this equation describes a static attractor in probability space: any initial probability density ρ0 eventually approaches a preset probability density ρ*. In physical space, the same process represents a stochastic attractor described by the limit set of the system (4.2): each trajectory of this set is random, but the statistical invariants of the whole system approach their static values corresponding to those in probability space. Moreover, as follows from Eq. (11), each statistical invariant of the initial density ρ0 approaches the corresponding statistical invariant of the preset attraction density ρ*. Obviously, the relationship between dynamics in physical and probability spaces is the following: if Eqs. (9) are run independently many times, then the collected statistical properties of the solutions obtained are described by Eq. (11). Thus, there is a correspondence between the static attractor in probability space and the stochastic attractor in physical space. However, this is not a one-to-one correspondence: as will be shown below, the same static attractor in probability space may have an infinite number of stochastic representations in physical space. Indeed, let us augment Eqs. (9) with neural-net-like "vortex" terms (see Eq. (45)). It is easily verifiable that the augmented terms do not affect the corresponding Liouville equation, and therefore they do not change the static attractor in the probability space described by Eq. (11). However, they may significantly change the configuration of the random trajectories in physical space, making their entanglement more sophisticated.
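The distinction between a random trajectory and a stochastic attractor can be sketched with an Ornstein-Uhlenbeck process, used here only as a generic surrogate for the system (9), (11) (the actual Liouville feedback is not reproduced in this excerpt): each path is random, but the ensemble statistics forget the initial condition and settle on the invariants of a preset stationary density.

```python
import random

def ou_path(x0, steps=2000, dt=0.01, theta=1.0, sigma=1.0):
    """Ornstein-Uhlenbeck path: dx = -theta*x*dt + sigma*dW.  The stationary
    density is Gaussian with mean 0 and variance sigma**2 / (2*theta)."""
    x = x0
    for _ in range(steps):
        x += -theta * x * dt + sigma * (dt ** 0.5) * random.gauss(0.0, 1.0)
    return x

random.seed(2)
# every path starts far from the attractor, and each path is random ...
ensemble = [ou_path(x0=10.0) for _ in range(500)]
mean = sum(ensemble) / len(ensemble)
var = sum((x - mean) ** 2 for x in ensemble) / len(ensemble)
# ... but the ensemble invariants match the preset stationary density
assert abs(mean) < 0.15           # preset mean: 0
assert abs(var - 0.5) < 0.15      # preset variance: sigma^2/(2*theta) = 0.5
```

The history prior to attraction (here, the initial value 10.0) becomes irrelevant, which is exactly the property the text exploits for information processing.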
Turning to the attractor in the form of Eqs. (9) and (11), one concludes that among the random solutions of Eq. (9), the solution that delivers the global maximum to the probability density (11) has the highest probability to appear. This property has been exploited for finding global maxima of preset functions, [6].

b. Entanglement-based active systems.
1. Prediction with uncertainty. The most natural application of the proposed dynamics is to the modeling of active systems. By an active system we will understand here a set of interacting intelligent agents capable of processing information, while an intelligent agent is an autonomous entity that observes and acts upon an environment and directs its activity towards achieving goals. The active system is not derivable from the Lagrange or Hamilton principles; it is rather created for information processing. One specific difference between active and physical systems is that the former are supposed to act under uncertainties originating from incompleteness of information. Indeed, an intelligent agent almost never has access to the whole truth about its environment. Uncertainty can also arise because of incompleteness and incorrectness in the agent's understanding of the properties of the environment. That is why the proposed model is well suited for the representation of active systems. In order to illustrate that, consider cooperating agents whose interaction is described by Eq. (45). Recall that here the vi are state variables identified with the agents' positions, and the Tij are constant coefficients providing prescribed properties of the cooperation. It should be emphasized that the i-th agent runs the corresponding i-th equation while updating state variables via communications with other agents. The central problem of the synthesis of dynamical networks for the purpose of information processing is to place a prescribed type of attractors (within the corresponding basins) at prescribed locations and to explicitly control these locations subject to changes of the objective of the performance. Assuming that each agent possesses the model of the probability density evolution (11), one concludes that the i-th agent can predict its own expected future position, as well as the expected position of any j-th agent. But these predictions come with uncertainties measured by variances and higher moments that are uniquely defined by Eq. (11). It should be noticed that similar effects exist in Langevin dynamics, although the proposed social network is autonomous: it does not need external forces to generate randomness. However, the next effect, to be discussed below, is unique: it exists neither in Newtonian nor in Langevin dynamics.

Instantaneous transmission of conditional information over remote distances.
Let us consider two observers, and assume that initially they run the system (36), (37), and (38) jointly up to the time t0 > 0. During this time, they can replace Eq. (38) as follows: each of them can run the system (36) and (37) many times over a short interval Δt, collect statistics for v1 and v2, compute the probability density ρ and its derivatives, and substitute them into Eqs. (36) and (37). Let us assume that at t = t0 the observers are separated, and each of them runs his own part: observer 1 runs Eq. (36), and observer 2 runs Eq. (37). Now each observer can collect statistics only for his own velocity, which is not sufficient for continuing the cooperation. However, they can exploit the entanglement effect! Indeed, let us turn to Eq. (11) and recall that in this equation, ρ* is a known (preset) function of its arguments. Therefore this equation takes the form (46), in which F1 and F2 are known random functions. Hence, if observer 1 knows the value v1 of his state variable at the instant t = t0, he can calculate the value v2 of the state variable of observer 2 at the same instant by using Eq. (46), despite the fact that both values are random. Observer 2 can perform the same calculation. As an illustration, let us now choose the preset density as the uniform distribution (47), where a and b are positive constants. Substituting Eq. (47) into Eq. (44), one obtains Eqs. (48)-(50). If the observers remember the values of their state variables at the instant t0, then, substituting Eqs. (51) into Eq. (50), they find the value of the constant (52); therefore, as follows from Eqs. (50), (51) and (52), if the first observer knows his own motion v1(t) at t > t0, then he can calculate from Eq. (50) the motion of the second observer v2(t) at t > t0.
Thus, due to the entanglement (44), the observers can continue their cooperation even after disconnection. However, this cooperation is non-classical: both functions in Eq. (50) are random, but at the same time they are point-to-point identical to the accuracy of a constant factor C̃. In other words, each observer cannot predict his own future motion, but as soon as he learns about his present state, he can predict the present state of the entangled observer.
Obviously, the transmission of this "knowledge" from one observer to another is instantaneous. In addition, the distance between the observers after their separation is irrelevant, since the x-coordinate does not enter the governing equations. Indeed, the message does not travel through space: it exists only at the locations of the observers. However, the Shannon information transmitted is zero, since the observers cannot control the outcomes of their measurements: they are random. In other words, the observers cannot transmit intentional messages. Nevertheless, based upon the transmitted knowledge, they can coordinate their actions through conditional information: if observer 1 knows his own measurements, he can fully determine the measurements of the other one. The performance is illustrated in Fig. 13. Several properties of the described correlation should be emphasized. First, there is no centralized source, or sender, of the signal, since each receiver can become a sender as well. Indeed, an observer receives a signal by performing certain measurements synchronized with the measurements of the others. Thereby the signal is uniformly and simultaneously distributed over the observers in a decentralized way. Second, the signals transmit no intentional information that would favor one agent over another. Third, the sequences of signals received by different observers are not only statistically equivalent, but also point-by-point identical. Fourth, it is important to assume that each agent knows that the other agents simultaneously receive the identical signals. Finally, the sequences of signals are truly random, so that no agent can predict the next step with a probability different from that described by the density (39). It turns out that under these quite general assumptions, the entangled observer-agents can perform non-trivial tasks that include the transmission of conditional information from one agent to another, simple paradigms of cooperation, etc.
It should be emphasized again that the origin of the entanglement of all the observers is the joint probability density that couples their actions; such a constraint does not exist in Newtonian mechanics. The problem of the behavior of intelligent agents correlated by identical random messages in a decentralized way has its own significance: it simulates the evolutionary behavior of biological and social systems correlated only via simultaneously sensing sequences of unexpected events. In order to justify the usefulness of the described correlation paradigm, consider an earthquake that is represented by some sequence of totally unpredictable jolts. All the "agents" (humans, animals) receive these unexpected signals simultaneously, and from that moment their activity becomes correlated and organized: they run to shelter, turn off pipelines, etc. It is interesting to investigate different parts of a crowd that received different messages: if the initial state of the crowd can be described as a Brownian motion, after these messages the crowd will become self-organized; its movements could include polarization, convergence to certain patterns of behavior, etc. Let us discuss a possible application of the entanglement introduced above to the security of communications. It is always a temptation to simulate any new quantum or quantum-inspired phenomenon by classical tools. In the case of entanglement, such a possibility was excluded from the very beginning, since this is a nonlocal phenomenon that does not have any classical equivalents. However, one can argue that the system under consideration actually becomes classical as soon as the message is received and interpreted by the agent; therefore, instead of entanglement-based correlations between the agents, one could generate a pool of samples of stochastic processes in advance, make copies, and distribute them over the agents, so that any two agents to be correlated would have identical records of "random" messages. However, there is a fundamental flaw in such an implementation: in that case the whole scenario of the agents' evolution is fully predetermined, and someone (for instance, those who generated, copied and distributed the messages) can know this scenario in advance. In principle, each agent can also find out his future messages, since the knowledge about this future already exists. The difference between the entangled and classical cases is similar to that between real-time and pre-recorded TV programs: in the first case, the future is unpredictable, while in the second case the "future" has already happened, although the viewer may not know about that. In a more practical sense, the difference between the entangled and classical implementations becomes important when the communications between the agents are supposed to be confidential: in the classical case, the confidential information is, in principle, available long before it is needed, and that makes such communications less secure. In order to illustrate the security aspect of the proposed algorithm, suppose that a sender possesses N different messages, which he can choose only at random with equal probability, and assume that any of these messages allows each receiver to achieve his goal as long as the secrecy of the message is preserved. (For instance, if a military attack can be conducted in many different ways, the most important thing is the secrecy of the selected strategy.) Then, from the viewpoint of Shannon information, the transmission of such a message is useless. However, if one asks what the chance is that the message can be decoded by a wild guess, the answer will be: 1/N. This means that the number of equally acceptable (but randomly chosen) messages is proportional to the degree of secrecy of the transmission, and that represents the value of this transmission. Actually, the sender coordinates and synchronizes the actions of the receivers (regardless of the origin of the message itself) and preserves the secrecy of the communications by
making the choice of his message random. It should be emphasized again that the whole procedure makes sense only under the condition that a receiver can use any of these messages to achieve the same objective, while nobody else must know what kind of message has been received.

Competing agents.
Eqs. (36), (37), and (38) have been applied above as a model for the cooperative behavior of two intelligent agents with the same collective objective: to approach a preset attractor, or to maximize a preset function. It should be emphasized that all the communications between the agents are due to entanglement, so that cooperating agents can predict each other's states even at a distance and without an actual exchange of information. In this subsection we address a more complex situation in which the agents are competing, i.e., they have different objectives. Eqs. (36), (37), and (38) can then be rewritten for the case of n competing agents
where ρk* is the preset density of the k-th agent, which can be considered as his objective, and ak is a constant weight of the k-th agent's effort to approach that objective.
Thus, each k-th agent is trying to establish his own static attractor ρk*, but due to entanglement, the whole system will approach a weighted average of these objectives. Substituting the solution (55) into Eqs. (53), one arrives at a coupled system of n ODEs with respect to the n state variables vi. Although a closed-form analytical solution of the system (53) and (55) is not available, its property of Lipschitz instability at t = 0 (see Eq. (20)) can be verified. This means that the solution of the system (53) and (55) is random, and if the system is run many times, the statistical properties of the whole ensemble will be described by Eq. (55). Obviously, those agents who have chosen a density with a sharp maximum are playing a riskier game.
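The weighted-average attractor reached by the entangled competitors can be written down directly. In the sketch below, the agents' objectives ρk* are hypothetical Gaussian densities (the actual preset densities of Eqs. (53)-(55) are not reproduced here); the collective density is their weighted mixture, which remains a normalized probability density.

```python
import math

def gaussian(x, mu, s=1.0):
    """Normalized Gaussian density with mean mu and standard deviation s."""
    return math.exp(-(x - mu) ** 2 / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))

# hypothetical objectives of three competing agents and their effort weights
objectives = [lambda x: gaussian(x, -2.0),
              lambda x: gaussian(x, 0.0),
              lambda x: gaussian(x, 3.0)]
weights = [1.0, 2.0, 1.0]

def collective(x):
    """Weighted average density the whole entangled system approaches."""
    return sum(a * rho(x) for a, rho in zip(weights, objectives)) / sum(weights)

# still a normalized probability density (Riemann check on a wide grid)
xs = [-10.0 + 0.01 * i for i in range(2001)]
total = sum(collective(x) for x in xs) * 0.01
assert abs(total - 1.0) < 1e-3
```

An agent with a sharp (small-s) objective concentrates his weight on a narrow region of the mixture, which is one way to read the "riskier game" remark above.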
Here we have assumed that the competing agents are still entangled, and therefore their information about each other is complete. The more complex situation, in which the agents are not entangled and the exchanged information is incomplete, is addressed in [11]. The simplest way to formalize the incompleteness of information possessed by competing agents is to include the "vortex" terms in Eqs. (53) (see Eq. (45)): these terms could change each particular trajectory of the agent's motion, but they do not change the statistical invariants that remain available to the competing agents.

a. Introduction.
Let us recall that a social network is a social structure made up of individuals who are connected by one or more specific types of interdependency, such as friendship, common interest, financial exchange, dislike, or relationships of beliefs, knowledge or prestige. Recent events demonstrate that social networks can inflict revolutionary changes upon the world's political and economic structure. The least predictable and, therefore, most dangerous are those social networks that emerge spontaneously, being triggered by random events such as an earthquake, a fire, the intrusion of a hacker into a communication system, etc. In this sub-section we will apply the proposed formalism to modeling the spontaneous formation of a social network, predicting its possible evolution, as well as suppressing undesirable consequences of that evolution. We will start with the assumptions about the random events (signals) that trigger the formation of a social network: 1. The signals transmit no intentional information that would favor one agent over another. 2. Each agent knows that the other agents simultaneously receive the identical signals. Here the probability density (59) represents the message received by all the agents as a result of a random event. It is reasonable to assume that the corresponding agents are weakly entangled unless they were involved in a joint initial performance. As noticed above (see Section 4.8), weakly entangled agents are not completely independent: they can make random decisions, but the probabilities of these decisions will be correlated via the joint probability (see Eq. (7.86)). As a result, the agents will be able to predict each other's expected decisions.
Let us assume that the random message (59) affects only those agents whose expected state variable value is below a certain level (see Eqs. (60) and (61)). Such a message selects a sub-group of agents that attain stronger correlations due to having the property (60) in common. In terms of social structure, this means that the agents get acquainted, obtain more information about each other, and start up some joint activity. From the viewpoint of the proposed dynamics, it means that these agents become entangled and can therefore predict each other's activities (see sub-section 3.8).
Let us fortify these arguments with a mathematical formalism. First of all, we have to rewrite Eqs. (57) and (58) in the form of Eqs. (62) and (63). Now Eqs. (62) and (63) distinguish the agents who received the message (61) and reacted to it from the agents who received, but did not react to, the same message.
The rule (64) can be implemented in a dynamical way. Indeed, suppose that ai is governed by Eq. (65). This equation has two equilibrium points, ai = 0 and ai = 1. When |Vi| < |V*|, the first point of Eq. (65) is a repeller and the second one is an attractor, i.e., ai = 1. Conversely, when |Vi| ≥ |V*|, the first point is an attractor and the second one is a repeller, i.e., ai = 0.
Therefore, the rule (64) is implemented regardless of the initial values of ai. It is implied that the dynamics of Eq. (65) must have a much smaller time scale than those of Eqs. (62) and (63), so that the agents' state variables can be treated as frozen during the transient dynamics of Eq. (65).
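Since Eq. (65) itself is not reproduced in this excerpt, the sketch below uses one hypothetical dynamics with the stated equilibrium structure: ai = 0 and ai = 1 are the only equilibria, and their stability swaps with the sign of |V*| - |Vi|. Any interior initial value is then driven to the correct selection.

```python
def select(v_i, v_star, a0=0.5, dt=0.01, steps=2000, gamma=5.0):
    """Hypothetical fast dynamics with equilibria a = 0 and a = 1:
    da/dt = gamma * a * (1 - a) * sign(|v_star| - |v_i|).
    For |v_i| < |v_star| the point a = 1 attracts; otherwise a = 0 does."""
    a = a0
    s = 1.0 if abs(v_i) < abs(v_star) else -1.0
    for _ in range(steps):
        a += dt * gamma * a * (1.0 - a) * s
        a = min(max(a, 0.0), 1.0)  # guard against Euler overshoot
    return a

# agents below the threshold join the sub-group (a -> 1), others drop out
assert round(select(v_i=0.2, v_star=1.0)) == 1
assert round(select(v_i=2.0, v_star=1.0)) == 0
```

The time-scale separation required by the text corresponds here to choosing gamma (and the integration horizon) large compared with the rate of change of the state variables, so that vi is effectively frozen while ai relaxes.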
Since the agents that belong to the emerged sub-structure are strongly entangled, their motions are identical, although they can be random.
If the condition (60) is generalized, one arrives at k sub-groups of entangled agents. Each of these groups can act as one entity, and instead of games between individual agents, the social network can experience games between the sub-groups.

Introduction.
In this section we depart from the model of a social network with an attractor and move to another model, a social network with diffusion, which belongs to the same class introduced in Section 2. The difference between these two models is in the different feedbacks from the Liouville equation: instead of the feedback of Eq. (9), we introduce a diffusion-type feedback. This model was studied in [3,8,11]. As shown there, it possesses the same properties of quantum-classical hybrids that were discussed in the previous section, namely: superposition, entanglement, and the capability to violate the second law of thermodynamics. The purpose of including such a modification of the social network in this paper is to demonstrate a unique capability of the proposed models: to solve NP-complete problems linked to the decision-making process, while NP-complete problems are known to be unsolvable by either Newtonian or quantum algorithms.

Link to decision-making process.
Decision-making is the cognitive process of selecting a course of action from among multiple alternatives. We will distinguish two types of decision-making processes. The first type is associated with the concept of a rational agent; models of this type are largely quantitative and are based on the assumptions of rationality and near-perfect knowledge. They are composed of the agent's beliefs, formed on the basis of evidence, followed by the construction and maximization of a utility function. The main limitation of these models is their exponential complexity: on the level of belief nets, the complexity is caused by the fact that encoding a joint probability as a function of n propositional variables requires a table with 2^n entries, [12]; the same rate of complexity occurs in rule-based decision trees. The second type of decision-making process is based upon psychological models; these models concentrate on psychological and cognitive aspects such as motivation, need reduction, and common sense. They are qualitative rather than quantitative, being built on sociological factors like cultural influences, personal experience, etc. The model proposed in this section is better structured for the first type of decision-making process; it exploits the interaction between motor and mental dynamics, in which the mental dynamics plays the role of a knowledge base replacing unavailable external information.
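The 2^n blow-up is easy to make concrete: a full joint distribution over n propositional variables assigns one probability to each truth assignment, so merely enumerating its entries is already exponential. A minimal illustration:

```python
from itertools import product

def joint_table_entries(n):
    """Number of entries in a full joint probability table over
    n propositional (binary) variables: one per truth assignment."""
    return sum(1 for _ in product([False, True], repeat=n))

assert joint_table_entries(3) == 8          # 2**3
assert joint_table_entries(20) == 2 ** 20   # already over a million entries
```

Counting by explicit enumeration (rather than computing 2**n directly) makes the point of the text: even listing the table, before filling in a single probability, costs exponential work.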

Combinatorial optimization.
Combinatorial problems are among the hardest in the theory of computation. They include a special class of so-called NP-complete problems, which are considered intractable by most theoretical computer scientists. A typical representative of this class is the famous traveling-salesman problem (TSP): determining the shortest closed tour that connects a given set of n points on the plane. As for any NP-complete problem, the algorithm for solution is very simple: enumerate all the tours, compute their lengths, and select the shortest one. However, the number of tours is proportional to n!, which leads to exponential growth of computational time as a function of the dimensionality n of the problem, and therefore to computational intractability. It should be noticed that, in contradistinction to continuous optimization problems, where the knowledge about the length of a trajectory is transferred to neighboring trajectories through the gradient, here the gradient does not exist, and there is no alternative to a simple enumeration of tours. The class of NP-complete problems has a very interesting property: if any single problem (including its worst case) can be solved in polynomial time, then every NP-complete problem can be solved in polynomial time as well. Despite that, there has been no progress so far in removing the curse of combinatorial explosion: it turns out that if one manages to achieve a polynomial time of computation, then the space or energy grows exponentially, i.e., the effect of combinatorial explosion stubbornly reappears. That is why the intractability of NP-complete problems is regarded as a fundamental principle of the theory of computation, which plays the same role as the second law of thermodynamics in physics.
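The enumeration algorithm just described can be sketched directly; the function below is an illustrative sketch of ours (with hypothetical names) that enumerates all closed tours through the given points, exhibiting the factorial growth that makes brute force intractable.

```python
from itertools import permutations
from math import dist

def shortest_tour(points):
    """Brute-force TSP: enumerate every closed tour through the points
    and return the shortest one.  Examines (n-1)! tours, hence the
    combinatorial explosion described in the text."""
    start, *rest = range(len(points))
    best_tour, best_len = None, float("inf")
    for perm in permutations(rest):
        tour = (start, *perm, start)  # closed tour back to the start
        length = sum(dist(points[tour[i]], points[tour[i + 1]])
                     for i in range(len(tour) - 1))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# Four corners of the unit square: the optimal tour is the perimeter.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
tour, length = shortest_tour(pts)  # length == 4.0
```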
At the same time, one has to recognize that the theory of computational complexity is an attribute of the digital approach to computation, which means that the monster of NP-completeness is a creature of the Turing machine. As an alternative, one can turn to an analog device, which replaces digital computations by physical simulations. Indeed, assume that one found a physical phenomenon whose mathematical description is equivalent to that of a particular NP-complete problem. Then, incorporating this phenomenon into an appropriate analog device, one can simulate the corresponding NP-complete problem. In this connection it is interesting to note that, at first sight, NP-complete problems are fundamentally different from natural phenomena: they look like man-made puzzles, and their formal mathematical framework is mapped into decision problems with yes/no solutions. However, one should recall that physical laws can also be stated in a "man-made" form: the least time (Fermat), the least action (in the modifications of Hamilton, Lagrange, or Jacobi), and the least constraints (Gauss). Moreover, the self-controlled systems under consideration have a direct link to models of livings. So maybe this is the answer to the problem?
Finally, let us quote a recent question posed in [13]: Can NP-complete problems be solved efficiently in the physical universe? The answer given by the author, Scott Aaronson, is negative. In our opinion, it could be positive if we complement the "physical world" with self-controlled systems capable of violating the second law of thermodynamics in order to find shortcuts to solutions of combinatorial problems.

The feedback Eq. (67) that leads to the diffusion modification of the social network changes the motor dynamics Eq. (10) to the following form. This equation should be complemented by the corresponding Liouville equation, which in this particular case takes the form of the Fokker-Planck equation [14], replacing Eq. (11). Here σ² stands for the constant diffusion coefficient.
The solution of Eq. (69) subject to the sharp initial condition replaces Eq. (12) and describes diffusion of the probability density; that is why the feedback (67) is called a diffusion feedback. Substituting this solution into Eq. (68) at V = v, one arrives at a differential equation with respect to v(t) and its solution, where C is an arbitrary constant. Since v = 0 at t = 0 for any value of C, the solution (72) is consistent with the sharp initial condition for the solution (70) of the corresponding Liouville equation (69). The solution (72) describes the simplest irreversible motion: it is characterized by a "beginning of time" where all the trajectories intersect (which results from the violation of the Lipschitz condition at t = 0, Fig. 8), while the backward motion obtained by replacement of t with (-t) leads to imaginary values of velocities. One can notice that the probability density (70) possesses the same properties.
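The diffusive spreading described above can be checked numerically. The sketch below is a hedged reconstruction: it assumes the Fokker-Planck equation (69) has the standard form dρ/dt = σ² d²ρ/dv², in which case the solution for a sharp (delta-function) initial condition is a Gaussian of variance 2σ²t; a finite-difference check confirms that this Gaussian satisfies the assumed PDE.

```python
from math import exp, sqrt, pi

# Assumed form of Eq. (69):  d(rho)/dt = sigma**2 * d2(rho)/dv2,
# with sigma**2 the constant diffusion coefficient from the text.
sigma = 0.7

def rho(v, t):
    """Gaussian solution for a sharp initial condition (assumption):
    variance spreads as 2 * sigma**2 * t."""
    var = 2.0 * sigma ** 2 * t
    return exp(-v ** 2 / (2.0 * var)) / sqrt(2.0 * pi * var)

# Verify the PDE residual at a test point via central finite differences.
v0, t0, h = 0.3, 1.5, 1e-3
d_rho_dt = (rho(v0, t0 + h) - rho(v0, t0 - h)) / (2.0 * h)
d2_rho_dv2 = (rho(v0 + h, t0) - 2.0 * rho(v0, t0) + rho(v0 - h, t0)) / h ** 2
residual = d_rho_dt - sigma ** 2 * d2_rho_dv2  # should be ~0
```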
It is easily verifiable that the solution (72) has the same structure as the solution Eq. (18) and its examples Eqs. (25) and (27). The explanation of such a "coincidence" is very simple: the system (68), (69) has the same dynamical topology as that of Eqs. (10) and (11), where the equation of conservation of probability is coupled with the equation of conservation of momentum. As shown in [3], the system (68), (69) describes a social network. Further analysis of the solution (72) demonstrates that it is unstable, and therefore an initial error always grows, generating randomness. Initially, at t = 0, this growth is of infinite rate, since the Lipschitz condition at this point is violated. This type of instability has been introduced and analyzed in [4]. The unstable equilibrium point (v = 0) has been called a terminal repeller, and the instability triggered by the violation of the Lipschitz condition has been called a non-Lipschitz instability. The basic property of the non-Lipschitz instability is the following: if the initial condition is infinitely close to the repeller, the transient solution will escape the repeller during a bounded time, while for a regular repeller the time would be unbounded. Indeed, an escape from the simplest regular repeller is described by an exponent, so that the time period is unbounded. On the contrary, the period of escape from the terminal repeller (72) is bounded (and even infinitesimal) if the initial condition is infinitely small (see Eq. (74)).
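The contrast between the bounded escape time of the terminal repeller and the unbounded escape time of a regular repeller can be illustrated as follows. This is a hedged sketch: it assumes the transient of (72) has the form v = C√t, which is consistent with the properties stated in the text (v = 0 at t = 0 for every C, non-Lipschitz at t = 0, imaginary velocities under time reversal).

```python
from math import log

L, C = 1.0, 1.0  # escape distance and trajectory constant (illustrative)

def regular_escape_time(eps):
    """Regular repeller v' = v, so v = eps * e**t: reaching L takes
    t = ln(L/eps), which is unbounded as eps -> 0."""
    return log(L / eps)

def terminal_escape_time(eps):
    """Terminal repeller transient v = C * sqrt(t): v = eps at
    t0 = (eps/C)**2 and v = L at t1 = (L/C)**2, so the escape time
    t1 - t0 stays bounded as eps -> 0."""
    return (L / C) ** 2 - (eps / C) ** 2

# The terminal escape time stays below (L/C)**2 for arbitrarily small eps,
# while the regular escape time grows without bound.
for eps in (1e-3, 1e-6, 1e-9):
    assert terminal_escape_time(eps) <= (L / C) ** 2
assert regular_escape_time(1e-9) > regular_escape_time(1e-3)
```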
Considering first Eq. (72) at fixed C as a sample of the underlying stochastic process (70), and then varying C, one arrives at the whole ensemble characterizing that process (see Fig. 8). One can verify that, as follows from Eq. (18) [14], the expectation and the variance of this process are given accordingly. The same results follow from the ensemble (72) at -∞ < C < ∞. Indeed, the first equality in (75) results from the symmetry of the ensemble with respect to v = 0; the second one follows from direct computation. It is interesting to notice that the stochastic process (72) is an alternative to the Langevin equation [14] that corresponds to the same Fokker-Planck equation (69). Here Γ(t) is the Langevin (random) force with zero mean and constant variance σ. Thus, the emergence of self-generated stochasticity is the first basic non-Newtonian property of dynamics with the Liouville feedback, and qualitatively it is the same as in the previous modification of the social network.
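The correspondence with the Langevin description can be checked by a Monte-Carlo sketch. Assuming, as above, the Fokker-Planck equation dρ/dt = σ² d²ρ/dv², the matching Langevin equation is pure diffusion, dv = √2 σ dW; an Euler-Maruyama simulation then reproduces the zero expectation and the linearly growing variance 2σ²t of the process (all numerical settings below are our own).

```python
import random
from math import sqrt

random.seed(0)

# Assumed Langevin counterpart of the diffusion equation (our convention):
#   dv = sqrt(2) * sigma * dW, simulated by the Euler-Maruyama scheme.
sigma, T, steps, paths = 0.5, 1.0, 50, 5000
dt = T / steps

final = []
for _ in range(paths):
    v = 0.0                                    # sharp initial condition
    for _ in range(steps):
        v += sqrt(2.0 * dt) * sigma * random.gauss(0.0, 1.0)
    final.append(v)

mean = sum(final) / paths
var = sum((x - mean) ** 2 for x in final) / paths
# Ensemble statistics approach <v> = 0 and Var(v) = 2 * sigma**2 * T = 0.5.
```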

Inhomogeneous version of the diffusion social network.
Following [15], let us introduce the inhomogeneous version of Eq. (68). Then the corresponding Liouville equation takes the form of an inhomogeneous parabolic equation subject to an aperiodic force. It should be noticed that the sums in Eqs. (78) and (79) are finite, and they do not represent even truncated Fourier expansions, while all the harmonic terms are equally powerful. Obviously this system is still self-supervising, but it is not isolated any more.
It is subject to boundary conditions and the normalization constraint. Before writing down the solution, we will verify satisfaction of the constraint (81). For that purpose, let us integrate Eq. (79) with respect to v. As follows from the boundary conditions in (80), the normalization constraint will be satisfied for all t ≥ 0. Exploiting the superposition principle for the linear equation (79), we will represent the solution as a sum of free and forced components. These components are given, respectively, below. Here we will be interested only in the case (85) that represents a resonance between two aperiodic terms, namely an exponentially decaying force and an exponentially decaying free motion. Indeed, the solution (85) has a well-pronounced maximum, while the solutions (46) and (47) decay monotonically.
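The aperiodic resonance can be illustrated on the simplest first-order analogue (a toy model of ours, not the actual Eqs. (83)-(85)): when a force decaying as e^(-λt) drives a mode whose free motion decays at the same rate λ, the forced response is t·e^(-λt), which rises to a well-pronounced maximum at t = 1/λ, while the free motion decays monotonically.

```python
from math import exp

lam = 2.0  # common decay rate of the force and the free motion (toy value)

def free(t):
    """Free motion: monotone exponential decay."""
    return exp(-lam * t)

def forced(t):
    """Resonant response to the force e**(-lam*t): x(t) = t * e**(-lam*t)
    solves x' + lam*x = e**(-lam*t) with x(0) = 0."""
    return t * exp(-lam * t)

ts = [i * 0.01 for i in range(1, 500)]
peak_t = max(ts, key=forced)
# The resonant response peaks near t = 1/lam; the free motion never rises.
assert abs(peak_t - 1.0 / lam) < 0.02
assert all(free(ts[i]) > free(ts[i + 1]) for i in range(len(ts) - 1))
```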
Let us now reaffirm the scenario of transition from a deterministic to a random state described by Eqs. (68), (69). For that purpose, rewrite Eq. (83) in a different but equivalent form (based upon reflections from the boundaries); the transition scenario then remains the same.
It should be noticed that prior to running Eq. (87), the analytical solution of Eq. (88), in the form of the sum of Eqs. (83), (84), and (87), is to be substituted for ρ.
Before moving to the n-dimensional case, we will discuss the basic properties of the solution to Eqs. (68), (69). Although there are many similarities to quantum systems, we will concentrate again on superposition, since it will be essential for the described approach.
In quantum mechanics, any observable quantity corresponds to an eigenstate of a Hermitian linear operator. The linear combination of two or more eigenstates results in a quantum superposition of two or more values of the quantity. If the quantity is measured, the projection postulate states that the state will randomly collapse onto one of the values in the superposition (with a probability proportional to the square of the amplitude of that eigenstate in the linear combination). Let us compare the behavior of the model under consideration from that viewpoint. As follows from Eq. (72), all the particular solutions intersect at the same point v = 0 at t = 0, and that leads to non-uniqueness of the solution due to violation of the Lipschitz condition. Therefore, the same initial condition v = 0 at t = 0 yields an infinite number of different solutions forming the family (72); each solution of this family appears with a certain probability guided by the corresponding Fokker-Planck equation, Fig. 9.

Figure 9. Resonance in the probability space
Turning to the n-dimensional case, let us now briefly review the procedure of the retrieval. Assume that the label of the item to be found is given, with constants C_j to be found from the initial conditions. The second step is to substitute the solution (97) into Eq. (89). The third step is to run the system (89) and measure the values of its state variables. It should be noticed that the capacity of the unsorted database is of order O(n^n), i.e., exponential with respect to its dimensionality n, while all the resources providing its implementation are of order O(n), i.e., polynomial, since the number of equations in the system (89) is n, and the number of terms in the analytical solution to Eq. (81) (to be substituted into Eqs. (80)) is of the order O(n) as well. Indeed, the infinite sum in Eq. (98) converges very fast to the equal distribution of the probability density, and practically only the forced component of the solution, represented by Eq. (99), is important; this component contains O(n²) terms.
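The contrast between exponential capacity and polynomial resources can be tabulated in a few lines (our own bookkeeping, with the resource count taken, as stated above, as n equations plus O(n²) solution terms):

```python
def capacity(n):
    """Distinct n-string labels over an alphabet of n symbols: n**n items."""
    return n ** n

def resources(n):
    """Rough resource count: n ODEs plus n**2 analytic solution terms."""
    return n + n * n

# The capacity/resources ratio explodes as n grows: exponential capacity
# versus polynomial implementation cost.
ratios = [capacity(n) / resources(n) for n in (5, 10, 20)]
assert ratios[0] < ratios[1] < ratios[2]
assert capacity(20) > 10 ** 20 and resources(20) == 420
```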

Comparison to quantum algorithms.
The challenge of this section is to relate a new model of intelligent agent, incorporated into a special modification of the social network, to the capability of solving NP-complete problems. The basic idea is to apply the dynamical structure of the social network to create an algorithm that would preserve the superposition of random solutions while allowing one to measure its state variables using classical methods. In other words, such a hybrid system would reinforce the advantages and minimize the limitations of both the quantum and the classical aspects. These systems have been analyzed in [11] and [15]. It has been shown there that, along with the preservation of superposition, such an important property of quantum systems as direct-product decomposability is lost in hybrids. Let us recall that the main advantage of this property in terms of quantum information is in blowing up an input of polynomial complexity into an output of exponential complexity, with no additional resources required, Fig. 11. The purpose of our approach in this section was to find a "replacement" for this fundamental property of the Schrödinger equation in quantum-classical hybrids. It turns out that the eigenvalues of a linear parabolic PDE possess a similar property. Indeed, consider a linear n-dimensional parabolic PDE subject to boundary conditions. Then the eigenvalues corresponding to each variable form a sequence of monotonically increasing positive numbers λ_i^(1), ..., λ_i^(n). However, each linear combination of these eigenvalues represents another eigenvalue of the solution, and that is the same "combinatorial explosion" illustrated in Fig. 11. Due to that property, for each label in the form of an n-string of numbers, one can find an excitation force that activates the corresponding eigenvalue. The second challenge was to satisfy a global (normalization) constraint imposed upon the probability density (in addition to boundary conditions). That was achieved via a special form of the excitation force. Finally, this work adds a positive comment to the question posed in [13]: Can NP-complete problems be solved efficiently in the physical universe? The answer given by the author, Scott Aaronson, is negative. In our opinion, it could be positive if we complement the "physical world" with self-supervised systems capable of violating the second law of thermodynamics in order to find shortcuts to solutions of combinatorial problems.
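The eigenvalue "combinatorial explosion" is easy to verify for a separable parabolic PDE (an illustrative sketch with made-up eigenvalue ladders): each product mode decays with a rate equal to the sum of per-dimension eigenvalues, so n·k base eigenvalues generate up to k^n distinct combinations.

```python
from itertools import product

n, k = 4, 3  # dimensions and eigenvalues kept per dimension (toy values)

# Per-dimension eigenvalue ladders; for a heat equation on an interval these
# would scale as m**2 (used here purely as an illustration).
ladders = [[m ** 2 * (d + 1) for m in range(1, k + 1)] for d in range(n)]

base_count = n * k                            # polynomial input: O(n*k)
combos = {sum(c) for c in product(*ladders)}  # eigenvalues of product modes

assert base_count == 12
assert len(combos) <= k ** n     # at most k**n = 81 combinations
assert len(combos) > base_count  # far more outputs than base eigenvalues
```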

Discussion and conclusion.
The challenge of this work is to re-define the concept of an intelligent agent as a building block of social networks by presenting it as a physical particle with additional non-Newtonian properties. The proposed model of an intelligent agent, described by a system of ODE coupled with their Liouville equation, has been introduced and discussed. Following the Madelung equation, which belongs to this class, non-Newtonian properties such as superposition, entanglement, and probability interference, typical for quantum systems, have been described. Special attention was paid to the capability to violate the second law of thermodynamics, which makes these systems neither Newtonian nor quantum. It has been shown that the proposed model can be linked to mathematical models of livings as well as to models of AI. The model of the social network is presented in two modifications. The first one is illustrated by the discovery of a stochastic attractor approached by the social network; as an application, it was demonstrated that any statistics can be represented by an attractor of the solution to the corresponding system of ODE coupled with its Liouville equation. It was emphasized that evolution to the attractor reveals possible micro-mechanisms driving random events to the final distribution of the corresponding statistical law. Special attention is concentrated upon the power law and its dynamical interpretation: it is demonstrated that the underlying micro-dynamics supports a "violent reputation" of the power-law statistics. Other applications of this model, including the algorithm for instantaneous transmission of conditional information as well as the concept of cooperating and competing active systems, have been introduced. It should be recalled that some applications of this model were considered in our previous publications: the algorithm for finding the global maximum of integrable functions [6], and the interference of probabilities applied to two processes performed by the human mind: the formation of language and the intuition-based decision-making process [5].
The second modification of the model of the social network is associated with a decision-making process and applied to the solution of NP-complete problems, known to be solvable neither by classical nor by quantum algorithms. The approach is illustrated by solving a search in an unsorted database in polynomial time via a resonance between an external force representing the address of a required item and the response representing the location of this item. Other applications of this model were introduced in [3]. Firstly, the model is extended to include nonlinear and dispersion terms that lead to the existence of a stochastic attractor that has the configuration of a soliton in probability space. Secondly, it has been demonstrated that the complexity of a social network, measured by the number of mental layers m, is given by Eq. (101). In this connection, it is interesting to pose the following problem. What is a more effective way for Livings to promote Life: through simple multiplication, i.e., through an increase of the number of "primitives" n, or through individual self-perfection, i.e., through an increase of the number m of the levels of abstraction ("What do you think I think you think...")? The solution of this problem may have fundamental social, economic, and geopolitical interpretations. But the answer immediately follows from Eq. (101), demonstrating that the complexity grows exponentially with the number of the levels of abstraction m, but only linearly with the dimensionality n of the original system. Thus, in contradistinction to Darwinism, a more effective way for Livings to promote Life is through higher individual complexity (due to mutually beneficial interactions) rather than through a simple multiplication of "primitives". This statement can be associated with a recent consensus among biologists that symbiosis, or collaboration of Livings, is an even more powerful factor in their progressive evolution than natural selection.

Figure 3. Dynamics driving random events to normal distribution.
Figure 4. Dynamics driving random events to power law.

Figure 6. Normal and power law distributions.

Figure 7. Secure instantaneous transmission of conditional information. a. The message to be sent is chosen randomly, and it is transmitted instantaneously to the receivers. b. Each receiver is aware of all possible messages in advance and knows what actions should be taken for each message. c. The coordination of the receivers' activity is absolutely secure, since nobody (including the sender and the receivers) knows what message will be sent prior to the actual transmission, while the decoding of the transmitted message can be performed only by those receivers that are entangled with the sender.

The first step is to write down the analytical solution to Eq. (90), which consists of free and forced motions, as in the one-dimensional case, with the address of the item in the form of a string of coordinates.