Preprint
Review


Mathematical Modelling of Physical Reality: From Numbers to Fractals, Quantum Mechanics and the Standard Model


Submitted: 09 September 2024
Posted: 11 September 2024


Abstract
In physics we construct idealized mathematical models in order to explain various phenomena which we observe or create in our laboratories. In this article, I recall how sophisticated mathematical models evolved from the concept of a number created thousands of years ago, and I discuss some challenges and open questions in quantum foundations and in the standard model. We liberated nuclear energy, landed on the Moon and built ‘quantum computers’. Encouraged by these successes, many believe that once we reconcile general relativity with quantum theory we will have the correct theory of everything. Perhaps we should be much more humble. Our perceptions of reality are biased by our senses and by our brain bending them to meet our priors and expectations. Our abstract mathematical models describe only in an approximate way different layers of physical reality. To describe the motion of a meteorite we can use the concept of a material point, but the point-like approximation breaks down completely when the meteorite hits the Earth. Similarly, the thermodynamic, chemical, molecular, atomic, nuclear and elementary-particle layers of physical reality are described using specific abstract mathematical models and approximations. In my opinion, the theory of everything does not exist.

1. Introduction

Gauss said: “mathematics is the queen of sciences, and arithmetic the queen of mathematics”, but physical reality is much more than the mathematical models we create to describe it.
As soon as we are born we learn that our and our parents’ actions have effects. If we cry we get fed, covered, cuddled or cleaned. If we open our eyes we see the external world. If we notice a toy, we have to move our hand to grasp it, or have to crawl or walk before getting it. This is how we acquire a basic notion of causality, by which one event contributes to the occurrence of another event. From early childhood we ask the question “Why…?” and we get answers “Because...”, but to any answer “Because...” there is immediately another question “Why...?”, and so on.
Causality is probably the most fundamental notion which any living organism had to understand in order to survive. Any action has a consequence, and what is happening around may have an immediate or subsequent impact on the organism’s well-being and fate.
We agree with Robb [1] and Whitehead [2], that the notion of causality is prior to the notions of time and space, because it is necessary for the interpretation of observations and empirical experiments.
In any place on the Earth, there are specific diurnal, monthly and yearly patterns: the Sun and the Moon are moving, seasons are changing, animals mate, give birth, migrate and die. Moreover, man has always been searching for answers to the following questions. How did the universe come about? What happens after death? Is there a plan of the solar system? What causes the light? [3].
There existed curious individuals, later called astronomers, philosophers, mathematicians and scientists, who believed that the observed periodic natural phenomena should be studied in more detail and that they reflect some intelligent design of the universe. Therefore, they observed and recorded how the sun, moon and planets were moving and searched for a rational explanation. Such an explanation became possible due to the study of the properties of numbers by the Pythagoreans in the 6th century BC, followed by the creation of arithmetic, logic and abstract geometry by the Greeks.
These efforts led to the highlights of Euclidean geometry, still taught in our schools, to Aristotelian principles of logical reasoning, still used in courts, and to Ptolemy’s quite precise but complicated model of the solar system, which survived 15 centuries before being replaced by the Copernican and Kepler’s heliocentric model.
Copernicus and Kepler were searching for a systematic, harmonious mathematical model which should please God the creator. Kepler, who was a mystic and an astrologer, after discovering his three laws governing the motion of planets on their elliptical orbits, concluded in Harmony of the World (1619): ‘The wisdom of the Lord is infinite; so also are His Glory and His power’. He believed that the different angular velocities of the planets are arranged to play music for God. In fact, this belief helped him to discover his laws [3].
Galileo, Newton, Leibniz, Euler, Gauss, Descartes, Spinoza, Kant, Darwin and Einstein rejected many religious dogmas, but strongly believed in the intelligent divine design of the universe. Darwin’s religious views evolved from Christian orthodoxy to an agnostic stance.
For Einstein the problem of God transcended limited human understanding; nevertheless, he admitted: “I believe in Spinoza’s God, who reveals himself in the orderly harmony of what exists, not in a God who concerns himself with the fates and actions of human beings.” He believed that “God does not play dice” and that quantum theory cannot be considered a complete theory of physical reality.
Our perceptions of reality are biased by our senses and by our brain bending them to meet our priors and expectations. This is why, several philosophers and scientists pointed out that our models describe the physical reality as we perceive it and not as it is.
Immanuel Kant strongly insisted that our knowledge is limited to the realm of empirical phenomena and that the nature of things as they are in themselves (i.e., beyond our perceptual experience) remains unknowable. Nevertheless, the human mind supplies the concepts and axioms building up reliable knowledge from the sensations it receives [3].
In 1878, von Helmholtz posed the following philosophical questions [4,5]: “What is true in our intuition and thought? In what sense do our representations correspond to actuality?” He criticized the objective conception of physical theory. In his Bild conception, a physical theory is only an intellectual construct of our brain: “Inasmuch as the quality of our sensation gives us a report of what is peculiar to the external influence by which it is excited, it may count as a symbol of it, but not as an image…” [4].
The Bild conception was further developed and promoted by Hertz [6,7], Boltzmann [8] and Schrödinger [9,10], and it was reviewed by Agostino [11] and Khrennikov [12].
Laplace believed that with classical mechanics and probability theory man is capable of explaining the causes and laws governing the universe. Many contemporary physicists also believe that, if we succeed in reconciling the general theory of relativity with quantum field theory, we will obtain the final theory of everything.
It is true that the successes of modern science and technology are impressive, but we should be much more humble. Our abstract mathematical models describe only in an approximate way different layers of Physical Reality. The theory of everything does not exist.
In this article, we review how different physical and mathematical concepts and models evolved during the centuries and how they have been used to describe the physical reality. We also discuss some challenges and open questions in the standard model and in the foundations of quantum mechanics.

2. A Short History of Numbers and Greek Geometry.

Homo sapiens evolved in Africa approximately 300,000 to 200,000 years ago from its early predecessors. The important capacity for language developed around 50,000 years ago or earlier. During the 4th millennium BC, Sumerians developed cuneiform writing on clay tablets to represent spoken language and Egyptians started to use hieroglyphs. Chinese writing developed around 1400 BC. The invention of writing marked an important turning point in human history because it allowed the transfer of culture, acquired skills and knowledge to the next generations.
Different animal species have different sensory organs to explore their environment. Migrating birds, fishes and whales, and even dogs walking with their owners, have different sensations, perceptions and a different “understanding” of the physical reality. As we mentioned in the introduction, in order to survive they had to acquire a rudimentary notion of causality. Birds construct complex nests and follow sophisticated mating rituals; chimps and gorillas make strategic plans, construct simple tools and carry them to the place where they need to use them.
We now know that a number of species such as gorillas, rhesus, capuchin and squirrel monkeys, lemurs, dolphins, elephants, black bears, birds, salamanders and fish have developed numerical abilities. Even 3-day-old domestic chicks differentiate between numbers [13]. When a chick sits in front of two small opaque screens, and one ball disappears behind the first screen, followed by four balls disappearing behind the second screen, the chick walks towards the screen that hides four balls. It is even more impressive that when two balls are moved from the second screen to the first screen, 80% of the time the chick decides to walk to the first screen, “evaluating” that now there are more balls behind the first screen than behind the second screen. Chimpanzees are able to quickly select the set of bowls containing the largest combined number of chocolate pieces by adding together the number of pieces in each individual bowl [13].
Recent research by Martin Muller and Rudiger Wehner demonstrated that Tunisian desert ants, in spite of the lack of visual landmarks and scent trails, are always able to compute their present location and to return to their nest by choosing the direct route rather than retracing their outbound trajectory [14]. It would therefore be surprising if the dinosaurs could not count.
Homo sapiens developed superior counting and reasoning skills quite early. The first numbers were used in the Middle East around 10,000 BC. Counting started with the number “1” and then evolved from using fingers and tally marks to sets of glyphs representing any conceivable number.
Babylonian mathematics is impressive [15]. They used accounting devices, such as bullae and tokens, already in the 5th millennium BC. The majority of recovered clay tablets date from 1800 to 1600 BC, and cover topics that include fractions, algebra, quadratic and cubic equations and the Pythagorean Theorem.
Babylonians used a sexagesimal (base 60) numeral system because “60” has 10 divisors other than 1 and itself (2, 3, 4, 5, 6, 10, 12, 15, 20 and 30), which is crucial in calculations with fractions; in comparison, “10” has only 2 such divisors. Moreover, they were probably the first to use positional notation, where digits written in the left column represented larger values. They also introduced written symbols for digits. We inherited from them the usage of 60, 360, 12 and 24.
The Babylonian clay tablet YBC 7289 (c. 1800–1600 BC) gives an approximation of √2 in four sexagesimal figures, 1;24,51,10, which is accurate to about six decimal digits [15]:
$$\sqrt{2} \approx 1;24,51,10 = 1 + \frac{24}{60} + \frac{51}{60^{2}} + \frac{10}{60^{3}} = \frac{30547}{21600} = 1.41421\overline{296}$$
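This arithmetic can be checked directly; a minimal sketch (an illustration, not part of the tablet analysis itself):

```python
from fractions import Fraction

# Babylonian sexagesimal approximation of sqrt(2): 1;24,51,10
approx = Fraction(1) + Fraction(24, 60) + Fraction(51, 60**2) + Fraction(10, 60**3)

print(approx)          # 30547/21600
print(float(approx))   # 1.4142129629629630
print(2 ** 0.5)        # 1.4142135623730951 -> agreement to about six decimal digits
```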
As well as arithmetical calculations, Babylonian mathematicians also developed methods of solving equations without using algebraic notation. These were based on pre-calculated tables. Babylonians measured perimeters, areas and volumes using correct rules. For example, they used 3, or later 25/8, to approximate π: the circle perimeter was equal to 3 diameters and the circle area was equal to 3 radii squared. They knew and applied the Pythagorean rule. Babylonian astronomers kept detailed records of the rising and setting of stars, the motion of the planets, and the solar and lunar eclipses, all of which required familiarity with angular distances measured on the celestial sphere [15].
Egyptian mathematics developed from around 3000 BC to 300 BC [16]. The ancient Egyptians utilized a numeral system for counting and solving written mathematical problems, often involving multiplication and fractions. Egyptians understood quadratic equations and concepts of geometry, such as determining the surface area and volume of three-dimensional shapes useful for architectural engineering.
Ancient Egyptian texts could be written on papyri in either hieroglyphs or in hieratic. The number system was always given in base 10. The number “1” was depicted by a simple stroke, the number “2” was represented by two strokes, etc. The numbers 10, 100, 1000, 10,000 and 100,000 had their own hieroglyphs: the number 1000 was represented by a lotus flower, the number 100,000 by a frog, etc.
The Egyptian number system was additive. Large numbers were represented by collections of the glyphs [16].
The impressive evidence of the use of the base 10 number system can be found on the Narmer Macehead [17] which depicts offerings of 400,000 oxen, 1,422,000 goats and 120,000 prisoners.
An interesting feature of ancient Egyptian mathematics is the use of unit fractions. With the exception of 1/2, 1/3 and 2/3, Egyptians used unit fractions of the form 1/n or sums of such unit fractions. Scribes used tables to rewrite any fraction as a sum of unit fractions [16].
Babylonians and Egyptians developed sophisticated mathematical tools to solve concrete, even complicated, problems in everyday life, accounting and architecture. They were also able to predict seasonal changes and astronomical events. More information may be found, for example, in the excellent articles on Wikipedia [15,16,17].
The abstract concepts of numbers, geometrical figures and solids were created only later and studied extensively by the Greeks. They can be considered the fathers of mathematics, which became the indispensable tool for modelling physical reality.
Pythagoras was a philosopher who came up with the idea of numbers as symbols instead of just numerals. He was born on the island of Samos around 570 BC and settled in Croton, where he established the first Pythagorean community, described as a secret society [18]. For Pythagoreans, whole numbers explained the true nature of the Universe. Not only did they describe important regularities and harmony in the world, but they also represented certain concepts and social relationships. The number one was identified with reason and being, two was identified with opinion, four represented justice, five signified marriage, seven was identified with health and eight with love and friendship [3,19,20].
Pythagoreans used pebbles to represent numbers in triangles, squares, rectangles and pentagons. This helped them to investigate the relationships between different numbers. They defined prime numbers, triangular and square numbers, and odd and even numbers. Particularly important was the sacred number “10” (called the Tetractys), represented by a triangular arrangement with 4 pebbles on each edge.
The geometrical representation of numbers allowed them to detect several regularities and to prove several theorems by induction. Since 1+3=4, 3+6=9 and 6+10=16, any square number can be represented as a sum of two consecutive triangular numbers.
Using Figure 5 we can derive another interesting theorem. We notice that 1+3=4, 1+3+5=9 and 1+3+5+7=16. We see also that 7=2×4−1 and 16=4², thus by induction we conclude:
$$1 + 3 + \cdots + (2n-1) = n^{2}$$
which is valid for all n ≥ 1.
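For completeness, the inductive step behind this geometric argument can be written out explicitly: adding the next odd number to a square gives the next square,
$$n^{2} + (2n+1) = (n+1)^{2},$$
which is exactly what the pebble picture shows when a new L-shaped layer (a gnomon) is added to an n × n square.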
The Pythagoreans defined a specific numerology, believing that a person’s date of birth corresponds to a specific combination of numbers which can be used to describe their psychological type [19]. Moreover, they associated numbers with letters, which is why later Greeks denoted numbers in their manuscripts and books by combinations of letters.
They also searched for perfect numbers, equal to the sum of their proper divisors, such as 6=1+2+3 and 28=1+2+4+7+14. Since the next perfect numbers were 496, 8128 and 33550336, Nicomachus concluded: “the good and beautiful are rare and easily counted, but the ugly and bad are prolific.”
The Pythagoreans discovered the Pythagorean theorem and proved other simple geometrical theorems, including “the sum of the angles of a triangle equals two right angles”. They also studied three regular solids: the tetrahedron, the cube and the dodecahedron. They demonstrated that in the pentagram each diagonal divides the two others at the golden ratio. When linear geometrical figures replaced the dots, the combination of Babylonian algebra and Pythagorean arithmetic provided the basis for Greek geometric algebra.
Pythagoreans, Aristotle and Plato believed that number is the essence of matter and that nature is composed of “fourness” [20,21]. The point, line, surface and solid are the only 4 possible dimensions of all forms. All matter is built out of four elements: earth, air, fire and water. Unlike most Greeks, the Pythagoreans believed that the Earth is in motion and that there should be 10 celestial bodies, because ten was the sacred number [20].
They believed that the planets produced sounds which varied with their distances from the Earth and that all these sounds were harmonized. Nearly 2000 years later, Kepler, searching for harmony in the music of the spheres, discovered his 3 important laws.
Philolaus of Croton proposed the following model of the universe: the Earth, the Moon, the Sun, the five remaining known planets, the sphere of stars and the Antichthon (an invisible Counter-Earth) were revolving around a fixed central fire.
As “10” was a sacred number, nature should be describable in terms of 10 pairs of categories, such as: odd and even, bounded and unbounded, right and left, one and many, male and female, good and evil. The natural science of the Pythagoreans was speculative and not satisfactory, but they recognized the importance of number underlying diverse natural phenomena.
Numbers and geometrical figures are suggested by physical objects, but Greek philosophers understood that they were abstract idealized concepts and undertook an extensive study of their properties. These studies were collected, extended and arranged by Euclid around 300 BC in the Elements, divided into 13 books. Starting from one set of 10 axioms, which seemed to be unquestionable, he rigorously deduced 467 interesting theorems and many corollaries. Axioms 1, 2, 3 and 5 assert the existence and uniqueness of certain geometric figures, and Euclid explains how they can be constructed with no more than a compass and a straightedge.
Abstract geometry not only helped to calculate distances and areas in everyday life, but, due to the contributions of Archimedes, Aristarchus of Samos, Eratosthenes, Apollonius of Perga and Hipparchus, it led to Ptolemy’s quite precise geocentric planetary model, which survived 1500 years until the Copernican revolution. In fact, Aristarchus of Samos was the first who proposed the heliocentric planetary system, and perhaps his idea inspired Copernicus.
The Ptolemaic system provided an accurate predictive model for celestial motions. In this system the Earth is stationary and at the center of the universe. The heavenly bodies move in uniform motion along the most “perfect” path, which was considered to be a circle. To explain the apparently irregular movements of the planets, Ptolemy assumed that they were a combination of several regular circular motions, called epicycles, seen in perspective from a stationary Earth. Namely, each planet revolves uniformly along a circular path called an epicycle, and the center of the epicycle itself revolves around the Earth along a larger circular path called the deferent. To explain the varying motion of the Sun through the zodiac, Ptolemy had to place the Earth away from the center of the deferent, a device called eccentricity.
The beauty and rigor of deductive reasoning in Euclid’s Elements has inspired philosophers and scientists to this day. Taught as an obligatory subject in schools for several centuries, it contributed significantly to the success of the technological and scientific revolution initiated by Galileo, Copernicus and Kepler in the 16th century.
As we mentioned above, the Pythagoreans recognized the importance of numbers, in particular whole numbers, and they made the first steps toward applying this concept to the study of nature. Over the centuries the concept of number has been extended, and efficient schemes for writing numbers and calculating with them have been created. Zero, negative numbers and negative decimal fractions were defined, but only in the 17th century did mathematicians generally accept their use in modern notation. Irrational numbers and negative numbers were often considered absurd, and even Descartes rejected negative solutions of algebraic equations.
Only in the 19th century did mathematicians accept complex numbers, separate irrationals into algebraic and transcendental numbers, and undertake the serious scientific study of irrationals, a topic which had remained almost dormant since Euclid. More information about the history of numbers may be found in [3,24,25].
It is impressive that the uses of numbers we make today for understanding and mastering our description of nature are similar to those made by the Pythagoreans. As Kronecker said: “God created the integers, all else is the work of man.” We will talk about this in subsequent sections.

3. Copernican Revolution and Newtonian Classical Mechanics.

Following the Fall of Rome, monasteries and convents remained bastions of scholarship in Western Europe, and clergymen were the leading scholars of the age, studying nature, mathematics, and the motion of the stars, largely for religious purposes [26]. The Council of Nicaea prescribed that Easter would fall on the first Sunday following the first full moon after the vernal equinox. Thus, it became necessary to predict the date of Easter with enough accuracy. This necessity fueled constant innovation and refinement of astronomical practice, as the solar and lunar years diverge over the centuries. In the 12th century, the church sponsored the translation into Latin of Arabic-language versions of Greek philosophical and mathematical texts. This was done to help astronomical study.
Aristotle put Earth in the center of the Cosmos and the Ptolemaic geocentric model seemed to reinforce the message of creation in the Bible and other Sacred Scriptures.
The Catholic Church has been an important patron of sciences, arts and architecture. It played a significant role in the foundation and funding of schools and hospitals. Some cathedral schools became the first universities. Catholic scientists, both religious and lay, have led scientific discovery in many fields, searching for a divine design of the World which might be considered an additional proof of the existence of God [26].
The Church also tolerated Aristotelian science, which was taught and venerated by scholars in the universities. Aristotle’s cosmos was a series of concentric spheres. The terrestrial sphere was composed of four elements: earth, air, fire, and water. These elements were subject to change and decay. The celestial spheres were made of unchangeable aether. Aristotle explained phenomena on Earth in terms of qualities or substances, e.g. hot and cold, wet and dry, solid and fluid. Objects made of earth and water tended to fall, and the speed of motion depended on their weights and on the density of the medium. To maintain a constant motion of a body, a force had to be constantly applied. Objects made from air and fire tended to rise. A vacuum could not exist because speeds would become infinite. Aristotle insisted on a causal explanation of any change and defined material, formal, efficient and final causes.
The conflict between the Church and science started when Nicolaus Copernicus constructed a precise heliocentric model of the planetary system in the book De Revolutionibus, published in 1543. According to this model the Earth lost its privileged place in the universe: it revolved around the Sun like the other planets and rotated around its axis. At the beginning, having realized that the Copernican model allowed more precise astronomical predictions, the Church considered it to be false but useful and did not declare it a heresy.
Copernicus’ theory lacked the evidence necessary to be universally accepted. There were several unanswered questions, such as how an object as heavy as the Earth can be kept in motion, or why the Earth’s rotation does not cause objects to fly away; thus the Copernican model was only a bold but questionable hypothesis. Nevertheless, when Galileo in his book Dialogue Concerning the Two Chief World Systems explicitly endorsed the Copernican model, breaking the agreement with Pope Urban VIII, he was forced to recant and was sentenced by the Inquisition to house arrest. The Copernican model was declared a dangerous heresy contrary to Holy Scripture. De Revolutionibus and Galileo’s Dialogue Concerning the Two Chief World Systems were only dropped from the Catholic Church’s Index of Prohibited Books in 1835 [3].
For Galileo faith and reason were complementary; this is why he endorsed and promoted the Copernican heliocentric model. He demonstrated that several Aristotelian views were wrong. He pointed out that one should not describe nature by qualities such as white or red, sounding or silent, but by measurable observables like shape, quantity and motion. He formalized the concept of experimentation and the recording of results. Using the lever law, he could measure the specific gravity of objects by immersing them in water and balancing weights. He used the telescope to observe Jupiter’s moons, sunspots and the phases of Venus, and challenged the idea of a perfect celestial sphere. He disproved Aristotelian dynamics and discovered that a falling object accelerates at the same rate regardless of its weight (in the absence of air resistance). He also showed that projectiles follow a parabolic path. His work on inertia contributed to the formulation of Newton’s first law.
Kepler improved the Copernican heliocentric system and discovered three fundamental laws that describe how planets move around the Sun:
1) Planets move in elliptical orbits, with the Sun at one of the foci.
2) A line joining the Sun and a planet sweeps out equal areas in equal times.
3) The square of a planet’s orbital period is proportional to the cube of its average distance from the Sun.
Kepler and Copernicus asked man to accept a theory that violated his sense impressions because it was a more satisfactory mathematical theory. They believed that reason and mathematics should be the determining factor in accepting what is true in nature [3]. Modern science follows this line of thought.
Reason and mathematics were also the fundamental methods of inquiry recommended by René Descartes. He said that in order to search for truth it is necessary, once in the course of one’s life, to doubt all things. In the Discourse on Method he constructed his philosophy by a deductive method based on axioms that seemed self-evident to him.
In his Geometry he connected the previously separate fields of geometry and algebra creating analytical geometry. The Cartesian coordinate system, which we commonly use today, was named after him. In this system, geometric points on the plane are uniquely specified by a pair of real numbers (coordinates) representing their distances from two fixed perpendicular lines (the coordinate axes). For the points in space one has to add an additional coordinate axis. Descartes demonstrated that to each curve there belongs an equation that describes the position of any point on the curve. Moreover, each equation relating x and y can be pictured as a curve on a plane. In this way all paths, curves and surfaces that occur in the physical world can be studied efficiently using the algebraic methods.
Newton’s contributions to mathematics and physics were vast, including his development of calculus, the laws of motion, and universal gravitation. Newtonian mechanics describes the motion of objects based on deterministic laws. If we know the initial conditions (positions and velocities) of all objects in the universe and the forces acting upon them, we can predict their future behavior precisely. Newton’s three laws of motion and his law of universal gravitation laid the foundation for classical physics, which remains valid for most everyday scenarios.
Newton’s three fundamental laws of motion, together with the law of universal gravitation, are:
1) An object at rest remains at rest, and an object in motion continues moving with constant velocity unless acted upon by an external force.
2) The acceleration of an object is directly proportional to the net force applied to it and inversely proportional to its mass (F = ma).
3) For every action, there is an equal and opposite reaction.
4) Every mass attracts every other mass with a force proportional to the product of their masses and inversely proportional to the square of the distance between them.
Newton introduced the important notion of a mass point as an idealization of material bodies which are far away. It allowed him to derive the motion of planets consistent with the heliocentric system and with Kepler’s laws. The gravitational force is defined between any two mass points, and if there are many mass points the force acting on a particular mass point is the sum of all the forces acting on it. Newton knew that planets are not points but spheres. However, massive solids can be described as rigidly connected material points or by assuming a continuous mass distribution and defining the mass density. This is probably why Newton waited 20 years and published his Mathematical Principles of Natural Philosophy only in 1687, after he had demonstrated that the gravitational force between two spheres can be calculated as if their total masses were located at their centers.
Using his law of gravitation he calculated the masses of the Sun and of all the planets, explained ocean tides, etc. His Principia inspired and guided subsequent generations of scientists. In the preface to the first edition he defined a program of research which has not lost its relevance today: “I offer this work as mathematical principles of philosophy [science]; for all the difficulty in philosophy seems to consist in this – from the phenomena of motions to investigate the forces of nature, and then from these forces to demonstrate other phenomena.”
Newton’s law of gravitation asserts that the force of gravitation acts between the Sun and the planets over huge distances. It was in conflict with general beliefs, because as Aristotle said, “For action and passion (in the proper sense of the terms) can only occur between things which are such as to touch one another” (Aristotle, De generatione et corruptione, I, 6, 322b 28–29; see also 323a 33–34; and Aristotle, Physics, VII, 2, 243a 32–35). The gravitational force was exerted locally on each planet, but it acted instantaneously and constantly through empty space and it could not be blocked. This is why Newton wrote: “I here design only to give a mathematical notion of these forces, without considering their physical causes and seats”.
In Einstein’s theory of gravity, which is another abstract mathematical model, one does not talk about forces. Objects move along geodesics in 4-dimensional curved space-time. The curvature represents gravity and depends on the relative positions of massive objects. When a planet orbits the Sun, it is essentially following the geodesic determined by the Sun’s mass and the curvature of space-time. In general relativity light follows different geodesics, and massive objects (like galaxies) bend light as it passes near them. This effect, called gravitational lensing, has been observed and confirmed. In general relativity, similarly to Newtonian mechanics, we do not answer the question “Why?” but only the question “How?”. We do not know the physical causes, and saying that massive objects warp the fabric of space-time around them like a heavy ball on a trampoline is simply misleading. Both the Newtonian and the Einsteinian theories are only abstract mathematical models of some aspects of physical reality.
Standing on the shoulders of the giants Copernicus, Kepler and Galileo, Newton provided a comprehensive, systematic and rationally connected account of terrestrial and celestial motions. He established the existence of universal mathematical laws, providing strong arguments in favor of the mathematical design of the Universe. This allowed sweeping away the last traces of mysticism [3].
During the next 200 years Newtonian mechanics was the inspiration for philosophers, physicists and mathematicians. Newton’s laws were used to describe solids, liquids and gases. In order to solve complicated physical problems, new mathematical concepts and methods were defined and studied, such as ordinary differential equations, partial differential equations, integral equations and the calculus of variations. One may say that it was a golden epoch of science due to the continuous “cross-fertilization” between physics and mathematics. In fact Euler, Lagrange, d’Alembert, Bernoulli, Laplace, Hamilton and several other scientists made equally important contributions to physics and mathematics.
Newton’s equations of motion contain, in contrast to the average velocity, the instantaneous velocity and acceleration. The position of a body at time t in a chosen Cartesian reference frame is described by a vector r(t) = (x(t), y(t), z(t)), and the instantaneous velocity and acceleration are defined as:
$$\mathbf{v}(t) = \dot{\mathbf{r}}(t) = \lim_{h \to 0} \frac{\mathbf{r}(t+h) - \mathbf{r}(t)}{h}; \qquad \mathbf{a}(t) = \ddot{\mathbf{r}}(t) = \dot{\mathbf{v}}(t)$$
If the initial position r(t0) and the velocity v(t0) are known, the future motion of a material point of mass m, in the absence of constraints is strictly predetermined by Newton’s second order differential equation:
$$m\,\ddot{\mathbf{r}} = \mathbf{F}(\mathbf{r}, \dot{\mathbf{r}}, t) \qquad (3)$$
where $\mathbf{F}(\mathbf{r}, \dot{\mathbf{r}}, t)$ are the external forces acting on the mass point. Equation (3) is a vector notation for a system of three second-order differential equations for the functions x(t), y(t) and z(t). For one mass point the most important cases were the gravitational force at the surface of the Earth, F = mg, and central forces F(r) = f(|r|)r, where |r| is the length of the vector r, in particular f(|r|) = k|r| and f(|r|) = c/|r|³, where k and c are constants. The work needed to move a material point in a field of central forces from a point P to a point Q does not depend on the path. The total angular momentum and the total energy, being the sum of kinetic and potential energy, are conserved.
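As an illustration of how equation (3) determines the motion once the initial position and velocity are given, here is a minimal numerical sketch for the inverse-square central force, with purely illustrative values c = m = 1 and initial conditions chosen to give a nearly circular orbit:

```python
import numpy as np

def acceleration(r, c=1.0, m=1.0):
    """Acceleration of a point mass in the central inverse-square field F = -c*r/|r|^3."""
    return -c * r / (m * np.linalg.norm(r) ** 3)

# the initial position and velocity fully determine the future motion (equation (3))
r = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
dt = 1e-3

for _ in range(10_000):                      # simple velocity-Verlet (leapfrog) integration
    a = acceleration(r)
    v_half = v + 0.5 * dt * a
    r = r + dt * v_half
    v = v_half + 0.5 * dt * acceleration(r)

print(r, np.linalg.norm(r))                  # |r| stays close to 1: a bound, nearly circular orbit
```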
To describe the motion of N material points in the presence of constraints, physicists had to introduce generalized coordinates and solve complicated differential equations. Often there was no exact solution, and only approximate solutions could be found. The equations of motion are nowadays derived using the Least (Stationary) Action Principle [3,27,28,29,30], which also plays a fundamental role in quantum electrodynamics and in quantum field theory. We briefly discuss this principle and the development of Hamiltonian mechanics in the Appendix.
The principle of least action can be generalized to various physical systems, including electromagnetism, relativity, and quantum mechanics. Its importance cannot be overstated, because Noether’s theorem [31] connects symmetries to conservation laws:
1) Space translation symmetry: if the action is invariant under translations in space (i.e., the laws of physics remain the same regardless of where we are in space), then linear momentum is conserved.
2) Time translation symmetry: if the action is invariant under translations in time (i.e., the laws of physics remain the same regardless of when we observe them), then energy is conserved.
3) Space rotation symmetry: if the action is invariant under rotations in space, then angular momentum is conserved.
Symmetry transformations play a crucial role in understanding the fundamental laws of physics. In particle physics several additional intrinsic discrete symmetries and corresponding conservation laws were discovered and helped physicists to construct the Standard Model [32,33].
It was difficult, and in practice impossible, to find the solutions of Newton’s equations for a system of many material points, but it was believed that, if one knew the general solution and the initial positions and velocities of all these points, then the future evolution of the universe could be predicted. As we explain in the next section, this belief was unfounded.

4. Three Body Problem, Strange Attractors and the Chaos Theory

Newtonian mechanics is a deterministic theory: if we know the initial conditions, the future of a physical system is completely determined. However, Newton’s equations become difficult to solve as the number of material points increases. This is why, in 1887, Oscar II, King of Sweden, established a prize for anyone who could find the solution to the n-body problem:
Given a system of arbitrarily many mass points that attract each other according to Newton’s law, under the assumption that no two points ever collide, try to find a representation of the coordinates of each point as a series in a variable that is some known function of time and for all of whose values the series converges uniformly.
In 1881–1882, Henri Poincaré showed that it is possible to derive important information about the behavior of a family of solutions of a differential equation without having to solve the equation (since this may not always be possible). He successfully used this approach to show that no general closed-form solution of the n-body problem exists and that even a deterministic system of three bodies can exhibit chaotic behavior strongly dependent on the initial conditions [34,35,36].
The Three-Body Problem (TBP) is a system of 9 second-order differential equations describing the possible motions of three point masses which attract each other through gravity. A general solution of these equations does not exist, and the motion of three bodies is chaotic for most initial conditions. Only if the mass of one body is much smaller than the other two masses may one find analytic solutions. Therefore, to determine how the positions change in time, computer simulations have to be used. In 2017, two scientists, XiaoMing Li and ShiJun Liao, using a supercomputer, determined 695 families of periodic orbits of the planar TBP [37,38]. In their simulation the gravitational constant is G = 1, all masses are equal to 1 and are placed at the corners of an isosceles triangle.
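A minimal sketch of how such planar three-body trajectories are integrated numerically is given below (G = 1 and equal unit masses as above; the initial positions and velocities are purely illustrative and are not those used in [37,38]):

```python
import numpy as np

G, masses = 1.0, np.array([1.0, 1.0, 1.0])     # units as in the text: G = 1, equal masses

def accelerations(pos):
    """Pairwise Newtonian gravitational accelerations for three planar bodies."""
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * masses[j] * d / np.linalg.norm(d) ** 3
    return acc

# illustrative initial conditions: bodies at the corners of a triangle, small velocities
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
vel = np.array([[0.01, 0.01], [-0.01, 0.0], [0.0, -0.01]])
dt = 1e-4

for _ in range(50_000):                         # velocity-Verlet integration of the planar equations
    a = accelerations(pos)
    vel += 0.5 * dt * a
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos)

print(pos)                                      # positions after t = 5 time units
```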
Detailed characteristic parameters (such as periods, scale-invariant averaged periods, initial velocities etc.), and the movies of the motions on these periodic orbits can be found in [38].
In the subsequent publications they found also several periodic and chaotic families for non-equal masses [39,40,41].
The TBP is inherently chaotic. No computer can predict the behavior of three bodies indefinitely for all possible initial conditions and chosen values of the 3 masses. The orbits become unpredictable, leading sometimes to cataclysmic events such as collisions or one planet leaving the system. Nevertheless, computer simulations allow one to discover some regular patterns, such as periodic orbits and attractors. Attractors are sets of points to which a system tends to evolve regardless of its initial conditions. A strange attractor is a specific type of attractor characterized by sensitive dependence on initial conditions.
A strange attractor is a set of points in phase space (the space of all possible system states) that describes how a chaotic system evolves. We cannot precisely predict where on the attractor the system will be at a given time. Small differences in initial conditions lead to vastly different trajectories on the attractor. Strange attractors have intricate shapes and are often characterized by fractal-like patterns.
A classic example is the Lorenz attractor, better known through the “butterfly effect” image. Edward Lorenz and collaborators used a set of 3 simple equations to model the Earth’s dry atmospheric convection and noticed that no reliable predictions could be made about the future behaviour of this deterministic system [42]. Nevertheless, some regularities were observed, and the possible motions of the system were limited to a region of space which is now called the Lorenz attractor [43].
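The Lorenz system consists of three coupled nonlinear equations, dx/dt = σ(y − x), dy/dt = x(ρ − z) − y, dz/dt = xy − βz. A minimal numerical sketch with the classical parameter values σ = 10, ρ = 28, β = 8/3 (step size and initial point chosen only for illustration) shows how a trajectory settles onto the bounded, butterfly-shaped attractor:

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations dx/dt, dy/dt, dz/dt."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    """One classical fourth-order Runge-Kutta step for ds/dt = f(s)."""
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 1.0, 1.0])
for _ in range(10_000):                  # integrate for 100 time units
    state = rk4_step(lorenz, state, 0.01)

print(state)                             # the trajectory stays in the bounded attractor region
```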
The discovery of chaotic behaviour in the TBP and in the Lorenz attractor contributed to the creation of chaos theory, an interdisciplinary branch of science and mathematics studying deterministic systems which are predictable for a while and then ‘appear’ to become random. Examples of chaotic systems include the double-rod pendulum, fluid dynamics, climate and weather processes, biological processes, heart arrhythmias, population dynamics, and stock market valuations [44].
The amount of time for which the behavior of a chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years.
In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. This means, in practice, that a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random. Since the Lyapunov time for the inner solar system is very long, the orbits of the Earth and the other nearby planets will remain stable on the human time-scale.
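Quantitatively, if $\lambda = 1/T_L$ denotes the largest Lyapunov exponent ($T_L$ being the Lyapunov time), an initial uncertainty $\delta_0$ grows roughly as
$$\delta(t) \approx \delta_0\, e^{\lambda t}, \qquad t_{\mathrm{pred}} \approx \frac{1}{\lambda} \ln\frac{\Delta}{\delta_0},$$
where $\Delta$ is the largest tolerated forecast error; because of the logarithm, improving the measurement precision $\delta_0$ extends the prediction horizon $t_{\mathrm{pred}}$ only marginally.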
Chaos theory states that within the apparent randomness of chaotic complex systems there are underlying patterns, repetition, self-similarity, fractals and self-organization. The “butterfly effect”, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state [45,46]. A metaphor for this behavior is that a butterfly flapping its wings in Brazil can cause a tornado in Texas.
Chaos has become applicable to geology, mathematics, biology, computer science, economics, engineering, finance, meteorology, philosophy, anthropology, physics, politics, population dynamics and robotics.
As we mentioned above, strange attractors sometimes have fractal structures. Fractals are mathematical objects characterized by self-similarity: patterns that repeat at smaller and smaller scales [40]. We are going to discuss them in the subsequent section.

5. The Fractal Geometry of Nature

The term “fractal” was popularized by Benoit Mandelbrot in the 1960s and 1970s, and fractals have been studied intensively ever since [47,48,49,50,51].
The solutions of differential equations are smooth curves or surfaces, which means that a tangent line or a tangent plane exists at every point. In nature we observe “roughness” (no tangent lines or planes exist), thus in order to describe this “roughness” and self-similar patterns we have to use mathematical concepts and descriptions different from those of Newtonian mechanics.
As Mandelbrot said: "Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line" [51,52].
A long time ago, British cartographers encountered a problem in measuring the length of the coast of Britain. The coastline measured on a large-scale map was approximately half the length of the coastline measured on a detailed map. It is obvious that measurements of the length depend on the precision (the size and units of the measuring rod). However, if the curve is smooth, measurements made with higher and higher precision converge to a constant value. If we have a rough object like a coastline, the measurements seem to diverge instead of converging. For fractals the Euclidean measure tends to infinity, thus mathematicians and Mandelbrot decided to characterize fractals by their fractional dimension D, which is consistent with the much more rigorously defined Hausdorff dimension.
The dimension describes how the measure of an object changes if we rescale the unit of length. For example, let us start with a linear segment of length 1. If we divide this length by S = 2 (the scaling factor), we obtain N = 2 line segments of length 1/2 and N × (1/S)¹ = 1. If we have a 1×1 square, the measure on the plane is not the length but the area; thus, if we divide each side by S = 2, we obtain N = 4 small squares, each having area 1/4, and now N × (1/S)² = 1. If we subdivide a unit cube into 8 small identical cubes, the measure in space is the volume and again N × (1/S)³ = 1. Generalizing this approach, the Hausdorff (similarity) dimension of a fractal may be defined as
$$D = \frac{\log N}{\log S},$$
where N is the number of self-similar pieces into which the geometric object is transformed after the first iteration and S is the scaling factor. As an example we will calculate the fractal dimension of the Koch Snowflake curve [52,53]:
To construct the Koch Snowflake, we start with an equilateral triangle with sides of length 1.
1) We divide each side into three equal segments of length 1/3.
2) On the middle segment of each side, we erect a new equilateral triangle one-third the size and erase its base; thus each side is replaced by 4 identical shorter segments.
3) We repeat this process ad infinitum.
The scaling factor is S = 3 and N = 4, thus the fractal dimension is D = log 4 / log 3 ≈ 1.26.
The dimensions of other fractals shown in panels a) and b) can be calculated in the same way. As the Pythagoreans anticipated, the first few natural numbers are important in nature.
Let us now calculate the length of the perimeter of the Koch Snowflake. At each iteration the length of each side is increased by a factor 4/3, thus after n iterations the perimeter is P_n = 3(4/3)^n, which tends to infinity as n increases. At the same time, the area remains smaller than the area of a circle drawn around the original triangle. That means that an infinitely long line surrounds a finite area. Similarly, the area of a fractal surface enclosing a finite volume may also be infinite. The Koch Snowflake resembles the coastline of a shore.
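The finite limiting area can in fact be computed explicitly: at the n-th iteration one adds 3·4^{n−1} small triangles, each with area (1/9)^n of the original area A₀, so
$$A = A_0\left(1 + \frac{1}{3}\sum_{k=0}^{\infty}\left(\frac{4}{9}\right)^{k}\right) = A_0\left(1 + \frac{1/3}{1-4/9}\right) = \frac{8}{5}\,A_0 .$$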
Various fractals can be constructed using similar algorithms. One can also construct higher-dimensional fractals, such as “rough” (nowhere smooth) surfaces having infinite area around a finite volume. Fractal dimension is a measure of the space-filling ability of curves and surfaces having irregular shapes. For irregular surfaces, one covers their shadow (projection) with a grid of squares and studies how the number N of squares intersecting the boundary of the shadow changes when the scaling factor S changes. Next, for many values of S, one plots N vs. S as points on a log-log graph. The approximate fractal dimension of the boundary, D_b, is the slope of the best-fit straight line through the points, and the approximate fractal dimension of the surface is D = D_b + 1 > 2.
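A minimal box-counting sketch of this procedure (illustrative only; real analyses use more careful binning and fitting) might look as follows:

```python
import numpy as np

def box_counting_dimension(points, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting dimension of a set of 2-D points lying in [0, 1] x [0, 1]."""
    counts = []
    for s in scales:                              # grid of s x s boxes (scaling factor S = s)
        occupied = {(int(x * s), int(y * s)) for x, y in points}
        counts.append(len(occupied))              # N(S) = number of occupied boxes
    # the slope of log N versus log S is the estimated dimension D
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

# sanity check: points on a straight segment should give D roughly equal to 1
segment = [(t, t) for t in np.linspace(0, 1, 5_000)]
print(box_counting_dimension(segment))
```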
Many fractal patterns are found in nature.
The Koch Snowflake and the Sierpinski gasket are examples of so-called Iterated Function System (IFS) fractals, created by iterating simple plane transformations: scaling, translation and rotation of the plane axes.
Each point on the plane can be represented by a complex number z = x + iy. Displacements of points on the plane can be described by successive iterations of complex-valued functions defined by a recurrence equation z_{n+1} = f(z_n).
To construct the Mandelbrot set M [47,52,55], we choose a constant complex number c, set z₀ = 0, and iterate the second-order polynomial map:
$$z_{n} = z_{n-1}^{2} + c \qquad (6)$$
M is defined as the set of all complex numbers c such that the sequences of points generated by repeatedly applying the quadratic map (6), called orbits, remain bounded. M is a compact, connected fractal set; it is closed and contained in the closed disk of radius 2 around the origin.
1) A point inside M remains inside this set during all iterations of the map (6).
2) Points far from M rapidly move towards infinity.
3) Points close to M slowly escape to infinity.
M may be depicted as a colorful image where each pixel corresponds to a complex number and its color depends on how many iterations were required to determine that the number lies outside the Mandelbrot set.
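A minimal escape-time sketch of this coloring scheme (the grid size and the iteration limit are arbitrary illustrative choices):

```python
import numpy as np

def mandelbrot_escape(c, max_iter=100):
    """Iterations of z -> z^2 + c, starting from z = 0, before |z| exceeds 2."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n               # escaped: c lies outside the Mandelbrot set
    return max_iter                # still bounded: c is (approximately) inside the set

# coarse escape-time image of the set on the rectangle [-2, 1] x [-1.5, 1.5]
xs = np.linspace(-2.0, 1.0, 80)
ys = np.linspace(-1.5, 1.5, 40)
image = np.array([[mandelbrot_escape(complex(x, y)) for x in xs] for y in ys])
print((image == 100).sum(), "grid points classified as inside M")
```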
Another important family of fractals are the Julia sets [47,52,56]. A Julia set associated with a specific polynomial map is the set of initial points whose orbits exhibit a certain behavior, where an orbit is the sequence of points generated by repeatedly applying the map to an initial point. If the orbit remains bounded, the point belongs to the filled Julia set. If the orbit escapes to infinity, the point belongs to the basin of infinity.
Julia sets for the quadratic complex map (6) are closely related to the Mandelbrot set, but now c is treated as a constant complex parameter, and for each c we have a different, unique filled Julia set of all points satisfying the specific criteria. The quadratic complex map is defined, as in (6), by the function f_c(z) = z² + c. The filled Julia set for a given c is constructed in the following steps:
1) We choose an initial point z₀ = x + iy from a rectangular grid on the complex plane such that {(x, y) | a ≤ x ≤ b, c ≤ y ≤ d}.
2) We iterate the map, z_{n+1} = z_n² + c.
3) If the magnitude of z_n exceeds 2, we say that z_n escapes to infinity. Otherwise, we continue iterating until either the escape criterion is met or a maximum number of iterations is reached.
4) If z₀ escapes, its color is based on the number of iterations before the escape (this creates the intricate patterns). If z₀ remains bounded, its color is usually black.
5) We repeat this process for all points in the grid.
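The same escape-time idea as for the Mandelbrot set applies here, with the roles of c and z₀ exchanged (c is fixed, z₀ is scanned over the grid); a minimal sketch with the purely illustrative choice c = −0.8 + 0.156i:

```python
import numpy as np

def julia_escape(z0, c=complex(-0.8, 0.156), max_iter=100):
    """Iterations of z -> z^2 + c from z0 before |z| exceeds 2 (max_iter if it stays bounded)."""
    z = z0
    for n in range(max_iter):
        if abs(z) > 2:
            return n               # z0 belongs to the basin of infinity
        z = z * z + c
    return max_iter                # z0 belongs to the filled Julia set

xs = np.linspace(-1.6, 1.6, 80)
ys = np.linspace(-1.0, 1.0, 40)
escape_counts = np.array([[julia_escape(complex(x, y)) for x in xs] for y in ys])
print((escape_counts == 100).sum(), "grid points in the filled Julia set")
```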
The parameter space for the Julia sets is the whole complex plane. When the parameter c crosses the boundary of the Mandelbrot set, the Julia set changes abruptly, passing from a disconnected to a connected set or vice versa. Phenomena in which smooth changes of parameter values cause, at so-called bifurcation points, a sudden “qualitative” or topological change of behavior are studied by catastrophe theory, created by René Thom. Using this definition, the boundary of the Mandelbrot set can be defined as the bifurcation locus of this quadratic family of mappings.
Catastrophe theory [57,58] is a part of bifurcation theory, which studies and classifies phenomena characterized by sudden shifts in behavior due to small changes in circumstances. It analyzes degenerate critical points of a potential function. For some values of certain parameters describing a nonlinear system, called bifurcation points, equilibria can appear or disappear, leading to large and sudden changes in system behavior. Catastrophe theory has been applied to various fields, including physics, biology, and social sciences. It can help to explain phenomena like earthquakes, phase transitions, and biological shifts.
Chaos theory studies the behavior of dynamic systems that are highly sensitive to initial conditions. These systems exhibit unpredictable and complex behavior, even though their underlying rules are deterministic. Bifurcations play a crucial role in chaos theory, as they lead to chaotic behavior [59,60]. Fractals are geometric shapes that exhibit self-similarity at different scales. As we saw fractals are found in nature (coastlines, clouds, snowflakes) and are essential in chaos theory because they represent complex, infinitely detailed structures.
In summary, chaos theory, catastrophe theory, bifurcations, and fractals all contribute to our understanding of complex systems, their behavior, and the underlying mathematical principles. They reveal the beauty and intricacy of natural phenomena, from weather patterns to seashells.
They are sophisticated tools to model, often in a qualitative way, complicated nonlinear phenomena observed in nature which cannot be described quantitatively by Newtonian mechanics.

6. From Democritus to Mendeleev.

In this section, we review how a belief in the existence of quantitative laws of nature led scientists to sophisticated mathematical descriptions of various levels of physical reality, consistent with numerous experimental data.
The Greeks not only developed the abstract concepts of number and geometry. Already around 400 BC, Democritus created the first atomistic theory which, after being criticized by Aristotle, was rediscovered after the Copernican revolution and led to the development of the modern atomistic theory. Inspired probably by Pythagorean pebbles and numerology, Democritus believed that all matter is made up of tiny, indivisible particles called atoms. Atoms varied in size, shape, and weight. They were constantly in motion and could combine to form different substances. He believed that atoms are unchangeable and eternal, which was disproved only during the last 200 years.
The creation of Newtonian mechanics and the discovery of electromagnetic phenomena and electric currents were followed by the development of modern chemistry.
The history of chemistry reflects humanity’s quest to understand the composition of matter and its transformations, from ancient fire-making to cutting-edge scientific discoveries. Gold, silver, copper, tin, and meteoric iron were among the earliest metals used by humans. The Varna culture in Bulgaria (around 4600 BC) practiced gold metallurgy.
As astrology led to modern astronomy, so alchemy, which emerged during the Middle Ages, laid the groundwork for modern chemistry. Alchemists sought to transform base metals into gold and to discover the elixir of life. The 17th and 18th centuries marked the transition from alchemy to modern chemistry. Scientists like Robert Boyle, Antoine Lavoisier, and Joseph Priestley made significant contributions [61].
Antoine Lavoisier established the law of conservation of mass during chemical reactions. He also coauthored the modern system for naming chemical substances, discovered that water is a compound of hydrogen and oxygen, that sulfur is an element and that diamond is a form of carbon.
Around 1794, Joseph Louis Proust established the law of definite proportions, stating that a chemical compound always contains its constituent elements in fixed proportions by mass, regardless of its source. In particular, he verified that water always has a fixed ratio of hydrogen to oxygen.
John Dalton extended Proust’s work and converted the ancient Greek atomic philosophy into a scientific theory. His book, A New System of Chemical Philosophy [63,64], was the first application of atomic theory to chemistry. Dalton proposed that atoms are not infinite in variety; each element has a unique kind of atom. Proposing that all the atoms of a given element have the same fixed mass, he concluded that elements react in definite proportions to form compounds because their constituent atoms react in definite proportion to produce compounds. He then tried to figure out the masses for well-known compounds.
In 1809, in his memoir [65,66] Joseph-Louis Gay-Lussac discovered that at constant temperature and pressure, gases always combine in simple numerical proportions by volume. He wrote: Thus it appears evident to me that gases always combine in the simplest proportions when they act on one another; and we have seen in reality in all the preceding examples that the ratio of combination is 1 to 1, 1 to 2 or 1 to 3…
Gay-Lussac’s work raised the question of whether atoms differ from molecules and, if so, how many atoms and molecules are in a given volume of gas.
Avogadro, building on Dalton’s efforts, solved the puzzle, but his work was ignored for 50 years. He proposed that the atoms of elementary gases form molecules rather than existing as separate atoms, as Dalton believed, and that equal volumes of gases contain equal numbers of molecules under the same conditions. This hypothesis proved useful in determining atomic and molecular weights, led to the concept of the mole and explained why only half a volume of oxygen is necessary to combine with a volume of carbon monoxide to form carbon dioxide. Each oxygen molecule has two atoms, and each atom of oxygen joins one molecule of carbon monoxide: $2\,\mathrm{CO} + \mathrm{O_2} = 2\,\mathrm{CO_2}$.
The mole was initially defined as the weight in grams equal to the molecular weight of the substance in atomic units. It was used for quantitatively describing the composition of substances and performing calculations involving mass and number of particles. In 2019 the mole was redefined as the amount of substance containing exactly N_A elementary entities, where N_A = 6.02214076 × 10²³ is the Avogadro number.
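For example, the number of molecules in 18 g of water (molar mass about 18 g/mol) is
$$N = \frac{m}{M}\,N_A = \frac{18\ \mathrm{g}}{18\ \mathrm{g/mol}} \times 6.022 \times 10^{23}\ \mathrm{mol}^{-1} \approx 6.0 \times 10^{23}.$$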
A balanced chemical equation represents a chemical reaction. Elements are represented using their element symbols and the same number and type of atoms are present on both sides of the reaction. For example:
$$4\,\mathrm{FeS} + 7\,\mathrm{O_2} \rightarrow 2\,\mathrm{Fe_2O_3} + 4\,\mathrm{SO_2} \qquad (7)$$
$$3\,\mathrm{CaCl_2} + 2\,\mathrm{Na_3PO_4} \rightarrow \mathrm{Ca_3(PO_4)_2} + 6\,\mathrm{NaCl} \qquad (8)$$
$$6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \rightarrow \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \qquad (9)$$
where (7) describes iron sulfide combustion, (8) calcium phosphate precipitation and (9) photosynthesis.
The equations (7)-(9) illustrate the important concept of valence, introduced in 1868. It determines the number of other atoms with which an atom of an element can combine. The valence of hydrogen and sodium is 1, the valence of calcium is 2, of iron 3, of carbon 4 and of phosphorus 5. Later, the theory of valence was reformulated in terms of electronic structures. In various compounds the atoms can exchange or share electrons in order to form stable valence shells with 2 or 8 electrons. Therefore, the elements in different compounds may have a variable positive or negative valence. For example, in reaction (7) sulphur exhibits the valence −2 (in FeS) and +4 (in SO₂).
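A balanced equation can be checked mechanically by counting the atoms of each element on both sides. The short Python sketch below (formulas entered directly as element counts, to avoid writing a formula parser) verifies the atom balance of reaction (7).

```python
from collections import Counter

# Minimal sketch: verify that reaction (7), 4 FeS + 7 O2 -> 2 Fe2O3 + 4 SO2,
# contains the same number of atoms of each element on both sides.
# Formulas are entered directly as element-count dictionaries.
def count_atoms(side):
    """side: list of (stoichiometric coefficient, {element: count}) pairs."""
    total = Counter()
    for coefficient, formula in side:
        for element, n in formula.items():
            total[element] += coefficient * n
    return total

left  = [(4, {"Fe": 1, "S": 1}), (7, {"O": 2})]
right = [(2, {"Fe": 2, "O": 3}), (4, {"S": 1, "O": 2})]

print(count_atoms(left) == count_atoms(right))   # True: 4 Fe, 4 S and 14 O per side
```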
Phosphorus, which has an atomic number of 15, has fifteen electrons: two in the first energy level (1s²), eight in the second energy level (2s² and 2p⁶), and five in the third energy level (3s² and 3p³). Phosphorus is very reactive and can have a different valence in different compounds. It can use single bonds (sharing a pair of valence electrons) or double bonds (sharing 4 valence electrons). Such bonds are represented by lines on Lewis diagrams [66] and dots represent the valence electrons not used to create a bond. In nature one finds white phosphorus, whose molecular formula is P₄.
An important argument in favor of the atomistic theory of nature was given by Dmitri Mendeleev [66,68]. He organized the elements in a table based on atomic weight and similar chemical properties such as valence. He left gaps in places where he believed unknown elements would eventually find their place. Remarkably, he even predicted the likely properties of three of these potential elements. The subsequent confirmation of many of his predictions during his lifetime brought him fame as the founder of the periodic law.
His work laid the foundation for our modern understanding of the periodic table, which now orders elements by increasing atomic number. Mendeleev’s groundbreaking work significantly advanced the field of chemistry.
In chemistry and in the kinetic theory of gases, atoms and ions were used as indivisible units. In 1865, Joseph Loschmidt [66] used various available rough experimental data to estimate that the diameter of an atom was approximately 10⁻⁸ cm. His estimate of the Avogadro constant was close to the presently accepted value.

6. From Faraday to Quantum Mechanics.

Scientists did not know the nature of the forces binding atoms together in a molecule. Faraday [69] discovered that electrical forces exist inside the molecule. By passing an electric current through a solution between the electrodes of a voltaic cell, he produced chemical reactions. No matter what solution or electrode material he used, a fixed quantity of current sent through an electrolyte always caused a specific amount of material to form on an electrode of the electrolytic cell. Faraday concluded that each ion of a given chemical compound has exactly the same charge and that the ionic charges are integral multiples of a single unit of charge, never fractions. The unit of charge that releases one gram-equivalent weight of a simple ion is called the faraday in his honor. For example, one faraday of charge passing through water releases one gram of hydrogen and eight grams of oxygen.
By far the richest clues about the structure of the atom came from spectral line series.
Isaac Newton had already allowed sunlight to pass through a small circular hole and fall on a prism, which produced a rainbow of colors that he called a spectrum. He explained that light consists of different rays, some more refrangible than others. Joseph von Fraunhofer made a significant leap forward in the early 1800s. Mounting a particularly fine diffraction grating on a telescope, he discovered hundreds of dark lines in the spectrum of the Sun. He labeled the most prominent of these lines with the letters A through G. They are now called Fraunhofer lines. Stars emit light from their photospheres. When this light passes through the outer atmosphere (chromosphere), certain atoms absorb specific wavelengths. These absorbed wavelengths correspond to the energy levels of electrons in the atoms, which gives information about the composition of the star [66].
Around 1860, Gustav Kirchhoff heated different elements to incandescence in order to study the different colored vapors. Observing these vapors through a spectroscope, he discovered that each element has a unique and characteristic pattern of spectral lines. Each element produces the same set of identifying lines, even when it is combined chemically with other elements [66].
In 1865, Maxwell [70] unified the laws of electricity and magnetism and concluded that light is an electromagnetic wave. Maxwell’s theory failed to describe spectral lines and the fact that atoms do not lose all their energy when they radiate light.
In 1853, Anders Ångström had measured the four visible spectral lines of hydrogen to have wavelengths 656.21, 486.07, 434.01 and 410.12 nm.
In 1885, Johann Balmer, a Swiss secondary-school mathematics teacher, found a constant relation between the wavelengths of the element’s four visible lines [71]:
$$\lambda_m = b\,\frac{m^2}{m^2-4}$$
where b= 364.56 nm and m=3,4,5,6. He predicted that other lines existed in the ultraviolet that corresponded to m ≥7 and some of them had been discovered. The Balmer formula is a special case of a more general formula discovered by Johannes Rydberg in 1890:
$$\frac{1}{\lambda} = R_H\left(\frac{1}{n_1^{\,2}} - \frac{1}{n_2^{\,2}}\right)$$
where RH = 1.09737 × 10⁷ m⁻¹ is the Rydberg constant and n₂ > n₁ are integers.
The value of n₁ defines a particular series of spectral lines. For the Lyman series n₁ = 1, for the Balmer series n₁ = 2, for the Paschen series n₁ = 3, etc.
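As a simple numerical check, the sketch below evaluates the Rydberg formula for the Balmer series (n₁ = 2) with the value of RH quoted above; the computed wavelengths reproduce Ångström's measurements to within about 0.1 nm.

```python
# Sketch: hydrogen spectral lines from the Rydberg formula 1/lambda = R_H (1/n1^2 - 1/n2^2).
R_H = 1.09737e7   # m^-1, value quoted in the text

def wavelength_nm(n1, n2):
    inverse_lambda = R_H * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e9 / inverse_lambda

for n2 in range(3, 7):                       # the four visible Balmer lines
    print(n2, round(wavelength_nm(2, n2), 2), "nm")
# -> about 656.1, 486.0, 433.9 and 410.1 nm, close to Angstrom's measurements
```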
In 1897, J. J. Thomson discovered the electron as the carrier of electricity in cathode rays and found that the mass of the electron was very small, merely 1/1,836 that of a hydrogen ion; scientists then understood how electric current could flow through copper wires. In deriving the mass-to-charge ratio, Thomson had calculated the electron’s velocity. It was 1/10 the speed of light, thus amounting to roughly 30,000 km (18,000 miles) per second. The electron was the first subatomic particle identified, the smallest and the fastest bit of matter known at the time. In 1909 the American physicist Robert Andrews Millikan measured directly the charge of the electron: 1.602 × 10⁻¹⁹ coulomb [66].
Wilhelm Conrad Röntgen had discovered X-rays in 1895. Like Thomson’s discovery of the electron, the discovery of radioactivity in uranium by the French physicist Henri Becquerel in 1896 forced scientists to radically change their ideas about atomic structure. Radioactivity demonstrated that the atom was neither indivisible nor immutable. In 1898 Pierre and Marie Curie discovered the strongly radioactive elements polonium and radium, which occur naturally in uranium minerals. In 1899, Ernest Rutherford showed that radioactive substances emit more than one kind of radiation: beta rays are beams of electrons and alpha rays are beams of positively charged helium ions. A third kind of radiation was identified and called gamma rays; it was not deflected by magnets and was much more penetrating than alpha particles. Gamma rays were later shown to be a form of electromagnetic radiation, similar to light or X-rays, but with much shorter wavelengths [66].
In 1902, Rutherford and the English chemist Frederick Soddy discovered that radioactivity was associated with changes inside the atom that transformed thorium into a different element. They found that thorium continually generates a chemically different substance that is intensely radioactive and gradually disappears. Watching the process, they discovered the exponential law of radioactive decay, which states that a fixed fraction of the element decays in each unit of time. For example, half of the thorium product decays in four days, half of the remaining sample in the next four days, and so on.
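The decay law can be written as N(t) = N₀(1/2)^(t/T½). The short sketch below, using the four-day half-life quoted above purely as an illustrative input, tabulates the remaining fraction of the sample.

```python
# Sketch of the exponential decay law N(t) = N0 * (1/2)**(t / T_half).
# The 4-day half-life of the thorium decay product is taken from the text
# and used here only to illustrate the law itself.
T_half = 4.0   # days

def remaining_fraction(t_days):
    return 0.5 ** (t_days / T_half)

for t in (0, 4, 8, 12, 16):
    print(f"after {t:2d} days: {remaining_fraction(t):.4f} of the sample is left")
# -> 1.0000, 0.5000, 0.2500, 0.1250, 0.0625
```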
In his gold foil experiments Rutherford observed that only very few of the alpha particles in his beam were scattered by large angles after striking the gold foil, while most passed completely through. He concluded that the gold atom’s mass must be concentrated in a tiny dense nucleus and proposed a model of the atom as a miniature solar system, with electrons orbiting around a massive nucleus consisting only of protons, and as mostly empty space, with the nucleus occupying only a very small part of the atom. However, according to classical electrodynamics the model was unstable, because the electron would gradually lose energy and spiral into the nucleus. No electron could thus remain in any particular orbit indefinitely. Besides, the model disagreed with the Mendeleev table because the neutron had not yet been discovered [66].
In 1905, Einstein discovered that the exchanges of energy between light and matter are quantized. In other words, a monochromatic light of frequency ν behaves like a beam of photons carrying energy E = hν and linear momentum p = hk (with k = 1/λ), and thus the energy of the electron in an atom can change only by multiples of hν, where h = 6.6 × 10⁻³⁴ J·s is Planck’s constant. Planck had introduced this constant in 1900 in a formula explaining the light radiation emitted by heated bodies. He postulated that energy can only be emitted or absorbed in discrete amounts hν, which he called quanta.
In 1913, Henry Moseley found that each element radiates X-rays of a different and characteristic wavelength. The wavelength and frequency vary in a regular pattern according to the charge on the nucleus, which he called the atomic number. His results, the Balmer and Rydberg spectral series, and Planck’s and Einstein’s quantized exchanges of energy between light and matter inspired Bohr to postulate the first successful model of the hydrogen atom.
In 1913, Niels Bohr modified the Rutherford model by requiring that electrons move in orbits of fixed size and energy. The energy of an electron depends on the size of the orbit and is lower for smaller orbits. Radiation can occur only when the electron jumps from one orbit to another. The atom will be completely stable in the state with the smallest orbit, since there is no orbit of lower energy into which the electron can jump.
Bohr assumed that the angular momentum of the electron is quantized, i.e., it can have only discrete values, and that electrons obey the laws of classical mechanics by traveling around the nucleus in circular orbits. Because of the quantization, the electron orbits have fixed sizes and energies. The energy of an electron in the n-th shell is given by E(n) = −13.6/n² eV. The energy of the emitted photon hν = ΔE = E(n₂) − E(n₁) agrees completely with the Balmer-Rydberg formula (11), and Bohr was able to calculate the value of the Rydberg constant [72]. Bohr’s model does not work for systems with more than one electron.
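A minimal sketch of this consistency check: compute the Bohr energies E(n) = −13.6/n² eV, take the photon energy of the Balmer transitions and convert it to a wavelength (standard approximate values of h and c are assumed).

```python
# Sketch: Bohr energy levels of hydrogen and the resulting Balmer wavelengths.
# Standard approximate constants are assumed as inputs.
E0 = 13.6          # eV, hydrogen ground-state binding energy
h  = 4.135667e-15  # eV*s, Planck constant
c  = 2.997925e8    # m/s, speed of light

def bohr_energy(n):
    return -E0 / n**2                       # energy of the n-th shell in eV

def transition_wavelength_nm(n1, n2):
    delta_E = bohr_energy(n2) - bohr_energy(n1)   # photon energy in eV
    return h * c / delta_E * 1e9

for n2 in range(3, 7):
    print(f"n = {n2} -> 2 : {transition_wavelength_nm(2, n2):.1f} nm")
# -> about 656, 486, 434 and 410 nm, in agreement with the Balmer series
```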
At about the same time, J. J. Thomson found that a beam of neon atoms subjected to electric and magnetic forces split into two parabolas instead of one on a photographic plate. Chemists had assumed the atomic weight of neon was 20.2, but the traces on Thomson’s photographic plate suggested atomic weights of 20.0 and 22.0, with the former parabola much stronger than the latter. He concluded that neon consisted of two stable isotopes: primarily neon-20, with a small percentage of neon-22. Eventually a third isotope, neon-21, was discovered in very small quantities. Dalton’s assumptions that all atoms of an element have an identical mass and that the atomic weight of an element is its mass were thus disproved. Today the atomic weight of an element is recognized as the weighted average of the masses of its isotopes.
As we explained above, light, which was initially thought to be a wave, was found to have particle-like properties. In 1924, Louis de Broglie proposed the wave nature of electrons and suggested that all matter has wave properties, with the de Broglie wavelength λB = h/p, where p is the particle momentum and h is Planck’s constant. For example, a beam of electrons can be diffracted just like a beam of light or a water wave. The wave-like behavior of matter has been demonstrated experimentally, first for electrons in 1927 and later for neutrons, neutral atoms and molecules in numerous experiments. This concept is known as wave–particle duality and inspired Erwin Schrödinger in his formulation of wave mechanics, which evolved into modern quantum mechanics. Wave–particle duality is sometimes incorrectly interpreted as meaning that a particle is at the same time a wave and a particle and that an electron can be here and a meter away at the same time.
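To get a feeling for the scales involved, the sketch below (standard constants, non-relativistic kinematics assumed) computes the de Broglie wavelength of an electron accelerated through 100 V; the result, about 0.12 nm, is comparable to atomic spacings, which is why electron diffraction on crystals could be observed in 1927.

```python
import math

# Sketch: de Broglie wavelength lambda = h / p of an electron accelerated
# through a potential difference U (non-relativistic approximation assumed).
h   = 6.62607e-34   # J*s
m_e = 9.10938e-31   # kg
q_e = 1.60218e-19   # C

def de_broglie_wavelength(U_volts):
    p = math.sqrt(2.0 * m_e * q_e * U_volts)   # momentum from kinetic energy eU
    return h / p

print(de_broglie_wavelength(100.0))   # ~1.2e-10 m, i.e. about 0.12 nm
```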
The Bohr atom and wave mechanics were the last attempts to explain atomic and subatomic physics using semi-classical models. Classical mechanics was created as an abstraction from our everyday observations. Objects had attributive properties which could be measured with increasing (in principle unlimited) precision. Similarly, during their motion in the absolute Newtonian space, at each moment of the absolute time they had precise positions, energies, linear and angular momenta in a chosen inertial reference frame. Of course the measurement of a distance was only direct when a measuring stick, rod or tape could be used; other distances could only be determined using Euclidean geometry and triangulation. Nevertheless, measurements by definition were noninvasive; that is, they did not change the value of the physical observable they were meant to measure.
According to the law of universal gravitational attraction, distant masses should influence each other’s motions instantaneously across empty space, which was contrary to everyday experience and to Aristotelian physics. Leibniz and Huygens called it an unacceptable action at a distance. Newton insisted that his model was an abstract mathematical model consistent with the observations, and that this was sufficient. With the discovery of electromagnetism and the contributions of Faraday and Maxwell, it became clear that space is not empty and that electromagnetic waves carry energy and linear momentum and can mediate the interaction between distant bodies. As Planck and Einstein demonstrated, exchanges of energy between waves and matter are quantized.

7. From Quantum Mechanics to the Standard Model

Quantum mechanics is an abstract mathematical theory that allows making probabilistic predictions about observed phenomena and outcomes of various experiments. There are different interpretations of quantum mechanics. For me, the most consistent is the statistical contextual interpretation [73,74,75,76,77,78,79]. An ensemble of identically prepared physical systems is described by a state vector (wave function) ψ in a Hilbert space H. A measured physical observable A is represented by a self-adjoint operator $\hat{A}$ acting in H, whose eigenvalues $\lambda_i$ are the only possible outcomes of the measurement, and the expectation value is $E(A) = \sum_i \lambda_i\, p(\lambda_i) = \langle\psi|\hat{A}|\psi\rangle$. In contrast to classical mechanics, there exist incompatible physical observables which cannot be measured with arbitrary precision at the same time and are represented by non-commuting operators; e.g., for the position and the corresponding linear momentum component we have $[\hat{x},\hat{p}_x] = i\hbar$, where $\hbar = h/2\pi$.
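A minimal finite-dimensional illustration (a spin-1/2 system with the Pauli matrix σz as the observable, chosen here only as an example): the eigenvalues ±1 are the only possible outcomes, and the expectation value ⟨ψ|σz|ψ⟩ coincides with the probability-weighted sum of the eigenvalues.

```python
import numpy as np

# Sketch: measurement outcomes and expectation value in a two-dimensional
# Hilbert space. The observable (the Pauli matrix sigma_z) and the state
# are illustrative choices, not tied to any particular experiment.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([np.sqrt(0.7), np.sqrt(0.3)], dtype=complex)   # normalized state

eigenvalues, eigenvectors = np.linalg.eigh(sigma_z)           # possible outcomes: -1, +1
probabilities = np.abs(eigenvectors.conj().T @ psi) ** 2      # Born rule p(lambda_i)

expectation_from_spectrum = np.sum(eigenvalues * probabilities)
expectation_direct = np.real(psi.conj() @ sigma_z @ psi)

print(eigenvalues, probabilities)                     # eigenvalues -1, +1 with probabilities 0.3, 0.7
print(expectation_from_spectrum, expectation_direct)  # both equal 0.4
```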
The measurement outcomes in quantum mechanics are not preexisting values of physical observables recorded with errors by the measuring instruments. The measurement outcomes are created in the interaction of the measuring instrument with the physical systems. This is why the precise measurement of the position of a quantum system is impossible. Since the speed of light is a universal constant, in special relativity the coordinates of an event are determined using a radar method; for example, in one spatial dimension we have:
$$x = c\,(t_2 - t_1)/2, \qquad t = (t_2 + t_1)/2$$
where t₁ and t₂ are the respective times of sending and receiving the reflected light signal. In order to measure the position precisely using this method, we have to use light signals with shorter and shorter wavelength and higher and higher energy of “photons”. The collision of these photons with an atom not only changes its position immediately, but can destroy it. The collision of a photon with an electron can produce additional photons and particle-antiparticle pairs. This is why linear momentum, spin and some additional quantum numbers are the only valid observables in relativistic quantum electrodynamics (QED) and quantum field theory (QFT), which are sophisticated mathematical theories created in order to reconcile quantum mechanics with special relativity and to describe processes in which particles may be created and annihilated.
As we can read in the article in the Stanford Encyclopedia of Philosophy [80]: “Quantum Field Theory (QFT) is the mathematical and conceptual framework for contemporary elementary particle physics. It is also a framework used in other areas of theoretical physics, such as condensed matter physics and statistical mechanics. In a rather informal sense QFT is the extension of quantum mechanics (QM), dealing with particles, over to fields, i.e., systems with an infinite number of degrees of freedom.”
QFT is a complicated mathematical model [80,81]. Its equations cannot be solved exactly, and to explain experimental data one constructs various semi-empirical models inspired by QFT. We explain below, in a simplified way, how QFT and the Standard Model are used to make quantitative predictions in particle physics.
A quantum field is an operator-valued distribution defined at each point of the four-dimensional Minkowski space-time. With each free quantum field is associated a specific particle (excitation). The states of the quantum field are n-particle states (with n ranging from 1 to infinity). If one has k different interacting quantum fields, the theory can only describe how the collision of two particles changes their linear momenta and energies, and which other particles described by these k fields can be created as the effect of the interaction. In general, at a given initial total energy several possible final states may be created and observed. The probability of observing a particular final state f from the initial state i is given by $P_{i\to f} = |\langle f|\hat{S}|i\rangle|^2$, where $\hat{S}$ is a unitary operator, a complicated nonlinear function of the interacting fields and their partial derivatives. If $\hat{S}$ depends on a small parameter g, called a coupling constant, one replaces $\hat{S}(g)$ by an infinite series in powers of g with coefficients which are complicated analytical expressions and products of creation and annihilation operators. Finally one uses only the first one or two non-trivial terms of this series to calculate an approximate value $P_{i\to f}(g,\ldots) \approx |\sum_m f_m(g,\ldots)|^2$, where the $f_m(g,\ldots)$ are complex-valued functions of the coupling constant and of the quantum numbers describing the initial state i. These functions are graphically represented by Feynman graphs and are often incorrectly interpreted as images of the physical processes happening during the interaction [80,81,82,83].
In QED we have a fermionic field corresponding to electrons and positrons and a bosonic field corresponding to photons (γ).
Several integrals in the perturbative expansion of the transition probabilities discussed above are divergent, and specific renormalization and regularization procedures are used [83] to extract meaningful quantitative predictions to be compared with experimental data. Having said all that, it is surprising how well these predictions agree with the data. The infinities arise because the fields are defined on a continuous space-time and we are dealing with point-like charges and masses. It would be much more elegant to construct a theory which does not require any renormalization. This was the opinion of Dirac, who at the end of his book wrote: “the difficulties being of a profound character can be removed only by some drastic change in the foundations of the theory, probably a change as drastic as the passage from Bohr’s orbit theory to the present quantum mechanics” [84]. Feynman was also dissatisfied with the renormalization/regularization procedures [82].
The neutron was only discovered by James Chadwick in 1932: when beryllium was bombarded with α particles (helium ions), neutrons were created: ⁹Be + α → ¹²C + n. Also in 1932, the positron (the anti-electron predicted by Dirac) was discovered by Carl David Anderson in experiments with cosmic rays in a Wilson cloud chamber. Charged particles moving across a cloud chamber leave visible traces. The Lorentz force F acting on a charged particle is F = q(E + v × B), where q is the charge of the particle in coulombs (C), E is the electric field vector in V/m, v is the velocity vector of the particle in m/s and B is the magnetic field vector in tesla (T). By applying external magnetic and electric fields to a charged particle moving across the cloud chamber one may determine its mass and charge.
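In practice the momentum of a charged particle is obtained from the curvature of its track in a known magnetic field, since for motion perpendicular to B the radius of curvature is r = p/(|q|B). The sketch below inverts this relation for illustrative values of B and r.

```python
# Sketch: momentum from the curvature of a track, p = |q| * B * r, for a singly
# charged particle moving perpendicular to the magnetic field.
# B and r below are illustrative values, not data from a real chamber.
q = 1.602e-19    # C, elementary charge
B = 1.5          # T, assumed magnetic field
r = 0.10         # m, assumed radius of curvature of the track

p_SI = q * B * r                                # momentum in kg*m/s
p_MeV_over_c = p_SI * 2.998e8 / 1.602e-13       # converted to MeV/c
print(f"p = {p_MeV_over_c:.0f} MeV/c")          # ~45 MeV/c
```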
Cosmic rays are high-energy particles that move through space at nearly the speed of light. They originate from various sources: the Sun, supernova explosions, distant galaxies, etc. When cosmic rays hit Earth’s atmosphere, they produce showers of secondary particles, some of which reach the surface. In 1932 one could think that all the building blocks of matter had been discovered. This was not true. The discovery of the muon in 1937 was followed by the discovery of pions, kaons and many other particles and resonances in cosmic rays and in high-energy scattering experiments, made possible by the construction of various particle accelerators and colliders.
More and more precise particle detectors were developed: bubble chambers, wire chambers, spark chambers, wire proportional chambers, drift chambers, silicon detectors and various calorimeters. Calorimeters measure the energy of particles. Particles enter the calorimeter and initiate a particle shower in which their energy is deposited and measured. It is the most practical way to detect and measure neutral particles from an interaction. Calorimeters also allow calculating the “missing energy” which can be attributed to particles that rarely interact with matter and escape the detector, such as neutrinos.
In the 1950s, “strange particles” were discovered in the interactions of pions and nucleons in the atmosphere: the kaon (K), the lambda (Λ) and the sigma (Σ), which exhibited unusual properties in their production and decay. Another peculiar feature was that they were always produced in pairs. To explain this, a new conserved quantum number, strangeness s, was introduced. Strange particles are produced by the strong interactions at a high rate, but they decay slowly, only via the weak interactions [85]. Their half-lives are in the range 10⁻¹⁰ s to 10⁻⁸ s and they can be studied using bubble chamber photographs.
For example, in a bubble chamber photograph one can see the production of K⁰ and Λ⁰ particles followed by their successive decays into charged particles leaving visible traces:
$$\pi^- + p \rightarrow K^0 + \Lambda^0, \qquad \Lambda^0 \rightarrow \pi^- + p, \qquad K^0 \rightarrow \pi^+ + \mu^- + \bar{\nu}_\mu$$
Elementary particles and resonances have a wide range of lifetimes, depending on their specific properties. The lifetimes range from that of the neutron, of the order of 10³ s, down to 10⁻²³ s. If the lifetime of a particle is of the order of 10⁻²³ s, then, traveling at the speed of light, this particle could only travel about 10⁻¹⁵ meters, or about the diameter of a proton, before decaying.
Therefore, such lifetimes are typically determined using the energy-time uncertainty principle
$$\Delta E\, \Delta t \geq \frac{\hbar}{2}$$
which suggests that for particles with extremely short lifetimes there will be a significant uncertainty in the measured energy. The measurement of the mass-energy of the decay products of an unstable particle yields a distribution of energies called a Lorentzian or a Breit-Wigner distribution [86]. The width of this distribution at half-maximum is labeled Γ = 2ΔE. For example, in the collisions of electrons with protons:
$$e^- + p \rightarrow e^- + \Delta^+ \rightarrow e^- + \pi^+ + n$$
we detect only the electron, the π⁺ and the neutron. We discover that the π⁺ and the neutron are decay products of the Δ⁺ by studying the distribution of their total invariant mass Z:
$$Z = \left[(E_\pi + E_n)^2 - (\mathbf{p}_\pi + \mathbf{p}_n)^2 c^2\right]^{1/2}$$
The histogram of the values of Z for all observed collision events allows one to estimate the mass and the lifetime of the unstable particle Δ⁺. The broad background (dashed curve in the original figure) is produced by direct events in which no Δ⁺ was created. The sharp peak at Z = 1232 MeV corresponds to the events in which a Δ⁺ was formed and decayed. Its lifetime is extremely short: $\Delta t \approx \hbar/(2\Delta E) = \hbar/\Gamma = 5.7 \times 10^{-24}$ s [85].
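The two steps can be illustrated with a short sketch (natural units with c = 1; the four-momenta are illustrative values corresponding to a Δ⁺ decaying at rest, and Γ ≈ 117 MeV is the commonly quoted width of the Δ resonance): the invariant mass of the π⁺n pair is computed from the formula above, and the lifetime is estimated as Δt ≈ ħ/Γ.

```python
import math

# Sketch (natural units, c = 1, energies and momenta in MeV): invariant mass of
# a pi+ n pair and a lifetime estimate from the resonance width. The two
# four-momenta are illustrative values for a Delta+ decaying at rest, and
# Gamma ~ 117 MeV is the commonly quoted width of the Delta resonance.
hbar = 6.582e-22   # MeV*s

def invariant_mass(p1, p2):
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(E**2 - (px**2 + py**2 + pz**2))

pion    = (265.6,  226.0, 0.0, 0.0)   # (E, px, py, pz)
neutron = (966.4, -226.0, 0.0, 0.0)

print(f"Z ~ {invariant_mass(pion, neutron):.0f} MeV")    # ~1232 MeV
print(f"lifetime ~ {hbar / 117.0:.1e} s")                # ~5.6e-24 s
```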
Hundreds of new particles and resonances were identified. Following the Pythagoreans, Aristotle, Democritus and Mendeleev, physicists succeeded in reducing the number of “elementary building blocks of matter” to a relatively small number in the so-called Standard Model, which we briefly review below [87,88,89,90,91].
The Pythagoreans believed that natural numbers played an important role in nature. By chance or not, they also play an important role in the Standard Model (SM). In the SM we have:
4 fundamental forces: strong, weak, electromagnetic, gravitational;
6 leptons, 6 quarks in 3 colors, 4 gauge bosons, 1 Higgs boson (“God’s particle”);
white (color-neutral) baryons (3 quarks): p = uud, n = udd, …; mesons (quark-antiquark);
symmetry groups: SU(3), SU(6), …; triplets, octets, decuplets, …
Fermions are fundamental particles with no measurable internal structure. They include quarks (which make up protons and neutrons) and leptons (such as electrons and neutrinos). Fermions have half-integer spin. Quarks are the building blocks of hadrons (protons, neutrons and mesons). They interact via the strong force and come in six flavors: up, down, charm, strange, top, and bottom. Bosons mediate forces. The Higgs boson (discovered in 2012) gives masses to other particles. Baryons consist of three quarks, while mesons have one quark and one antiquark.
Similarly to Mendeleev, who regrouped elements according to their properties, physicists regrouped the discovered elementary particles into specific “families” and “multiplets”. Particles are sorted into groups as mesons or baryons. Within each group, they are further separated by their spin angular momentum.
Symmetrical patterns appear when these groups of particles have their strangeness plotted against their electric charge. This is the most common way to make these plots today, but originally physicists used an equivalent pair of properties called hypercharge and isotopic spin, the latter of which is now known as isospin. The symmetry in these patterns is a hint of the underlying symmetry of the strong interaction between the particles themselves. This led to the discovery of the SU(3) and SU(6) symmetries, and to successive quark models [88,89,90].
In such plots, points representing particles that lie along the same horizontal line share the same strangeness, s, while those on the same left-leaning diagonals share the same electric charge, q (given as multiples of the elementary charge).
Pythagoreans would be happy to see their sacred number 10 represented by Tetractys in baryon and anti- baryon spin 3/2 decuplets and 4 fundamental forces of Nature.
We talk about “building blocks of matter” and draw nice diagrams, but in fact we are not allowed to make any mental pictures. The SM is a complicated, abstract and semi-empirical mathematical model containing 26 free parameters. It contains algorithmic “recipes” for how to make calculations and how to compare them with data gathered by different counters and detectors. Nevertheless, the SM allows us to explain several regularities in these experimental data and to make verifiable predictions confirmed by later experiments.
Free stable quarks do not exist in nature. By 1977, physicists had identified five of the six quarks in the laboratory (up, down, strange, charm and bottom), but it was not until 1995 that researchers at the Fermi National Accelerator Laboratory (Fermilab) in Illinois “found” the final quark, the top quark. Searching for it had been as intense as the later hunt for the Higgs boson. The top quark was so hard to produce because it is tens of thousands of times heavier than the up quark, meaning it required much more energy to create in particle accelerators.
We explain below in some detail how a hadron-hadron strong collision is described in the Standard Model. Quantum Chromodynamics (QCD) [90,91] is the theory of the strong interactions between quarks and gluons and is a generalization of QED. If $|i\rangle$ is the initial state vector of n free quarks, the probability of finding a final state $|f\rangle$ of m free quarks is defined as $P_{i\to f} = |\langle f|\hat{S}|i\rangle|^2$. The S matrix is replaced by a perturbative series and only the first few terms of this series are evaluated and used as an approximation of $P_{i\to f}$:
$$P_{i\to f}(s, t, \text{quantum numbers}, \text{parameters}) \approx |\,\text{products of Feynman graphs}\,|^2 \qquad (17)$$
All Feynman graphs are built from the elementary vertices of the theory [90].
Colliding hadrons are represented by free quark states via universal semi-empirical parton distribution functions (PDFs) [92]. PDFs describe the probability distributions of quarks and gluons (collectively called partons) inside a hadron. They provide information about the momentum fraction carried by each parton at a given energy scale. PDFs are universal, meaning they are process-independent and apply to all high-energy interactions involving hadrons.
PDFs are used in collider experiments (e.g., LHC) to predict cross sections for various processes. Uncertainties in PDFs directly affect the predicted cross sections. PDFs have associated uncertainties due to experimental data limitations and theoretical assumptions.
These uncertainties are quantified using error bands. Collider observables (e.g., Higgs boson production) depend on PDFs.
Then, using (17), various probabilities are calculated. Hadronization, i.e. how at the end free quarks recombine to form final particles and resonances, cannot be described rigorously in the SM. No exact theory of hadronization is known, but two empirical models for its parameterization are used within event generators which simulate particle physics events [92].
SM falls short of being a complete theory: it doesn’t explain baryon asymmetry, gravity (as described by general relativity), or dark energy. It lacks a viable dark matter particle and doesn’t account for neutrino oscillations and their masses. Moreover, estimates of the values of quark masses depend on the version of QCD used to describe quark interactions. Quarks are always confined in an envelope of gluons that confer vastly greater mass to the mesons and baryons where quarks occur, so values for quark masses cannot be measured directly. Since their masses are so small compared to the effective mass of the surrounding gluons, slight differences in the calculation make large differences in the masses.
In LHC experiments millions of collision events are produced, and completely different methods have to be used in order to extract meaningful information about the created particles, quarks and their lifetimes. These methods are based on the interplay of semi-empirical theoretical models, sophisticated computer data processing and simulations. Experiments use trigger systems to select interesting events for further analysis. Only a fraction of the data is stored, reducing the volume significantly. Experiments rely on powerful computing clusters to process and analyze data. Algorithms compress data without losing essential information; lossless compression techniques are used.
Several event generators [94] simulate interesting events, such as the creation of the Higgs boson, as a function of semi-empirical and theoretical inputs and experimental data. Then dedicated computer graphics software creates “event images” for scientists and for the general public.
As we see, the Standard Model description of high-energy collisions is quite far from the picture of planets playing harmonious music to please the Creator. Therefore we should perhaps be much more humble.

8. Bild Conception of the Physical Theory and the Modern Neuroscience

As we mentioned in the introduction Helmholtz, Hertz, Boltzmann and Schrodinger insisted that our models of physical reality, based on our sensorial sensations, are only intellectual constructs of our brain unable to describe nature as it is.
Helmholtz [4,5] had no doubts that the laws in nature really existed but the laws presented in scientific theories were only mental representations of these laws. They were only “parallel” to natural laws, not identical, since our mind operates not with precise images of real objects but only with symbols assigned to them [12].
Hertz believed that Helmholtz’s parallelism of laws was impossible, if theory were limited to describing observable quantities, because the manifold of the actual universe is greater than the manifold of the universe which is directly revealed to us by our senses.
Only by introducing hidden quantities (concepts that correspond to no perceptions) can Helmholtz’s parallelism of laws become a general principle in physical theory. Such theory should be constrained by causality and simplicity. Namely, if our images are well adapted to the things, the actual relations of the things must be represented by simple relations between the images….Even a “good model” does not describe reality as it is; it provides just a mathematical symbolic representation involving a variety of elements having no direct relation with the observational quantities [6,7,12]. This conception was further developed and promoted by Boltzmann [8] and Schrodinger [9,10].
Recent studies in neuroscience [95], which we summarize briefly below, provide additional arguments in favor of the Bild conception, because physical reality, as we perceive it, is in fact created by our brain. Patrick Cavanagh (Glendon College): “We’re seeing a story that’s being created for us… Most of the time, the story our brains generate matches the real world, but not always”. A detailed explanation and several examples of visual illusions may be found in [95,96,97]. Our brains unconsciously bend our perception of reality to meet our desires or expectations. They fill in gaps using our past experiences, creating visual illusions.
The visual cortex is at the back of our brain; the frontal lobes are the higher-level thinking area dedicated to anticipation and decision-making. Sam Schwarzkopf, a vision scientist at the University of Auckland, says: “we’re not trying to measure wavelengths, we’re trying to tell something about the color and the color is an illusion created by our brain.”[95].
Susana Martinez-Conde (SUNY): “We’re not seeing reality. Our vision runs 100 milliseconds behind the real world. Why are we seeing a story… It’s actually an adaptation. We don’t have the necessary machinery to process carefully all the information that we’re constantly bombarded with.”
Adam Hantman, a neuroscientist at Howard Hughes Medical Institute’s Janelia Research Campus : “Our brains like to predict as much as possible, then use our senses to correct, when the predictions go wrong. This is true not only for our perception of motion but also for so much of our conscious experience”. The stories our brain tells us about physical reality are often misleading and are influenced by our life experience.
Pascal Wallisch, a Clinical Associate Professor at New York University: “When an image, event, or some other stimulus isn’t perfectly clear, we fill in the gaps with our priors, or presumptions. Neuroscience is deeply humbling. We should cultivate a habit of seeking out perspectives that are not our own”. Political partisans perceive the facts of current events differently depending on their political beliefs. The illusions and political thinking don’t involve the same brain processes, but they follow the same overarching way the brain works [95].
Progress in model building in science follows a self-improving epistemological cycle. We define physical observables, design and perform experiments to measure their values. Analyzing experimental data, we discover empirical laws and construct observational models (OMs), which are not constrained by causality. Next we guess and construct causal theoretical models (CTMs), from which we deduce “fundamental” laws, define new observables and predict outcomes of new experiments and observations. On the basis of these observations and new experimental outcomes we improve our initial OMs, modify or replace our old CTMs, make new experiments and gather new observations [12]. During this epistemological cycle we construct new measuring instruments, the precision of our observations increases and we explore new layers of physical reality.
We should not forget that our OMs and CTMs are only mental constructions providing symbolic mathematical descriptions of natural phenomena. Epistemological questions refer to knowledge and information gathering by human beings. From the Bild perspective, it is totally meaningless even to refer to the structure and behavior of a system as such [12].

9. Conclusions

Physical Reality is a subtle notion. All our science is built on the assumption that there exists an external world governed by some laws of Nature which we want to discover and harness. In physics, we construct idealized mathematical models in order to explain, in a qualitative and quantitative way, various phenomena which we observe or create in our laboratories.
The Pythagoreans, playing with their pebbles, understood that numbers were an important abstract notion and believed that the laws of nature could be expressed using them. In particular, by experimenting with strings of different lengths, they discovered that musical harmony is related to simple whole-number ratios 1:2, 2:3, 3:4… Now we also know that simple fractions describe the symmetry and proportions of a human face and body: 1:3, 1:4, 1:6, 1:8, and 1:10.
As we saw in previous sections, there was a long way from the Pythagoreans’ pebbles to quantum mechanics and quarks, but the sacred Pythagorean symbol Tetractys representing the number “10” can be easily recognized in the baryon decuplets of the Standard Model. In the binary positional system all numbers are represented using two digits: “0” and “1”. Computational bases in quantum computing are n-dimensional unit vectors.
From Galileo to Einstein, scientists and philosophers were searching for the intelligent design of the universe and constructed sophisticated mathematical models. Einstein asked: “How can it be that mathematics, a product of human thought independent of experience, is so admirably adapted to the objects of reality?” Probably it is less surprising than it seems. Man has learned to reason by studying what happens in nature; this is why his reasoning yields results that accord with nature.
In spite of what some contemporary physicists believe, the law of contradiction appears to be inescapable: objects do not possess contradictory qualities at the same time. The successes of science were achieved by following this and other Aristotelian principles of reasoning. Moreover, man “has more means at his disposal to make his mathematics fit the physical world. If his “theorems/models” do not fit, he is free to change his axioms/assumptions.” [3].
In Mathematics and the Physical World [3], Morris Kline concluded: “Mathematics provides the supreme plan for the understanding and mastery of nature. Mathematics may be the queen of the sciences and therefore entitled to royal prerogatives, but the queen who loses touch with her subjects may lose support and even be deprived of her realm.
Mathematicians may like to rise to the clouds of abstract thought, but they should, and indeed they must, return to earth for nourishing food or else die from mental starvation. They are on safer and saner grounds, if they stay close to nature”.
Similar advice can be given to some physicists and philosophers who claim that quantum mechanics proves that an electron can be here and a meter away at the same time, that two perfectly random events in distant locations can be perfectly correlated, that there are millions of parallel worlds or that nature operates according to retro-causality.
Our perceptions are limited and biased by our senses, by the instruments we construct and by our brains bending our perception of reality to meet our priors, desires or expectations. The stories our brain tells us are influenced by our whole life experience. It is surprising that we succeeded not only in describing and predicting various phenomena but also created new materials, liberated nuclear energy, landed on the Moon and built ‘quantum computers’.
To explain the invisible world of atoms and elementary particles we succeeded in creating quantum mechanics, quantum electrodynamics and quantum field theory (QFT), which allowed us to provide a quantitative description of many physical phenomena. Quantum theories are complicated mathematical models which do not contain intuitive images or explanations of why observed phenomena and individual experimental outcomes, registered by macroscopic instruments, are produced.
Encouraged by these successes, several scientists believe that when we reconcile general relativity with quantum theory we will have the correct quantum theory of everything. In my opinion, we should be much more humble. There is no quantum wave function of the universe and the theory of everything does not exist. Our abstract mathematical models describe, only in an approximate way, different layers of Physical Reality.
Mathematics is a rigorous theory, but often exact solutions of mathematical equations cannot be found. We encountered this problem when trying to solve Newton’s equations of motion, Schrödinger equations, interacting quantum field equations, etc. Several macroscopic phenomena can only be studied using chaos theory and catastrophe theory.
QFT requires renormalization and is unable to describe exactly the scattering of bound states. Therefore semi-empirical models containing several adjustable parameters are added to the theory in order to explain various phenomena in particle physics. In particular, the comparison of the Standard Model with experimental data is a difficult task requiring many free parameters, various phenomenological inputs and Monte Carlo simulation of events [77,98,99]. The Standard Model also faces serious challenges related to dark matter, massive neutrinos, tetraquarks and pentaquarks.
Bohr correctly emphasized that there is no quantum world but only an abstract quantum physical description, and that the knowledge presents itself within a conceptual framework adapted to previous experience and . . . any such frame may prove too narrow to comprehend new experience. Nevertheless, in the phenomena which we observe and create there should be something behind the scenes which is responsible for their occurrence. In our opinion quantum probabilities neither correspond to irreducible propensities of individual physical systems nor to the beliefs of some human agents; they are properties of quantum phenomena and experiments as a whole.
Contrary to Bohr, Einstein believed that there should be some more detailed explanation of quantum probabilities. In spite of what is often believed, the Bohr-Einstein quantum debate cannot be closed [74,75,76]. The loophole-free Bell Tests gave additional arguments in favor of Bohr’s contextuality/complementarity, but they proved neither the completeness of quantum mechanics nor its nonlocality [78,79,100,101,102,103,104,105,106,107,108,109,110,111,112]. In fact, we do not even know whether quantum mechanics is predictably complete for the phenomena it wants to describe [74,76,77,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117].
In Bell Tests we can only assess the plausibility of particular probabilistic models/couplings, and it is true that we may reject the so-called local hidden variable model based on the Bell locality assumption (an assumption which should rather be called non-contextuality) [78,107,108]. It does not mean that the long-range correlations in Bell Tests are due to spooky influences. Bell Tests cannot reject a contextual probabilistic model in which individual binary outcomes in distant laboratories are produced locally in a deterministic way. Moreover, contrary to what many believe, closing the freedom-of-choice loophole in Bell Tests does not close the theoretical contextuality loophole [78,102,103]. The true resources for quantum information are entanglement and contextuality [118,119].
Only if an experiment outputs triples or quadruplets of outcomes in each trial do the Bell and CHSH inequalities hold for any finite sample. Therefore, if one sticks only to the experimental data and avoids any metaphysical conclusions, then the violation of the Bell and CHSH inequalities by the data gathered in physics and in the social sciences proves only that the corresponding two-column data spreadsheets cannot be reshuffled to form triplets or quadruplets [116,117].
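This is a purely arithmetic fact which can be checked directly: if each trial yields a quadruplet (a, a′, b, b′) of ±1 outcomes, then ab + ab′ + a′b − a′b′ = ±2 in every trial, so the sample average can never exceed 2 in absolute value. A small sketch with random data (no physics assumed):

```python
import random

# Sketch: for quadruplet data (a, a', b, b') with values +/-1 the CHSH combination
# a*b + a*b' + a'*b - a'*b' equals +/-2 in every single trial, so its sample
# average can never exceed 2 in absolute value, whatever the data are.
random.seed(0)
trials = [[random.choice([-1, 1]) for _ in range(4)] for _ in range(100000)]

per_trial = {a*b + a*bp + ap*b - ap*bp for a, ap, b, bp in trials}
S = sum(a*b + a*bp + ap*b - ap*bp for a, ap, b, bp in trials) / len(trials)

print(per_trial)      # {-2, 2} (order may vary)
print(abs(S) <= 2.0)  # True
```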
In spite of the fact that QM and QFT are abstract mathematical models, we should not abandon analyzing their metaphysical implications. An interesting recent discussion of these implications may be found in [120,121].
As we explained in this article, our successes in harnessing the forces of nature were due to the assumption that behind our imperfect sensorial observations there is an intelligent design to be discovered. Assuming that there is nothing behind the scenes and invoking magic to explain some quantum phenomena is not only unjustified but counter-productive.
Appendix. Lagrangian and Hamiltonian Mechanics
The motion of a planet around the Sun is obtained by solving Newton’s equation for two material points of masses m and M:
$$m\ddot{\mathbf{r}}_1 = \frac{GmM}{|\mathbf{r}_2-\mathbf{r}_1|^3}(\mathbf{r}_2-\mathbf{r}_1), \qquad M\ddot{\mathbf{r}}_2 = -\frac{GmM}{|\mathbf{r}_2-\mathbf{r}_1|^3}(\mathbf{r}_2-\mathbf{r}_1) \qquad (A1)$$
where G is the universal gravitational constant and the $\mathbf{r}_i$ denote three-dimensional position vectors. Adding the two equations in (A1) we obtain $m\ddot{\mathbf{r}}_1 + M\ddot{\mathbf{r}}_2 = 0$ and we find that the total linear momentum $\mathbf{P} = m\mathbf{v}_1 + M\mathbf{v}_2$ is conserved. Next we define the center of mass position vector $\mathbf{R} = (m\mathbf{r}_1 + M\mathbf{r}_2)/(m+M)$. The center of mass moves with the constant velocity $\mathbf{P}/(m+M)$. The position vectors of the planet and the Sun can be determined using $\mathbf{R}$ and the relative position of the planet with respect to the Sun, $\mathbf{r} = \mathbf{r}_1 - \mathbf{r}_2$. By subtracting the second equation from the first in (A1) we obtain a simple equation allowing us to determine $\mathbf{r}$:
$$\ddot{\mathbf{r}} = -\frac{G(M+m)}{|\mathbf{r}|^3}\,\mathbf{r} \qquad (A2)$$
The equations (A1) and (A2) determine completely the motion of the planet. Using them one can also easily demonstrate the conservation of the total energy E:
$$E = K + U = \frac{m\dot{\mathbf{r}}_1^{\,2} + M\dot{\mathbf{r}}_2^{\,2}}{2} - \frac{GMm}{|\mathbf{r}_1-\mathbf{r}_2|} \qquad (A3)$$
where K is the kinetic energy and U the potential energy. If we choose the origin of the coordinate frame at the center of mass, then equation (A3) can be rewritten as:
$$E = \frac{\mu\dot{\mathbf{r}}^{\,2}}{2} - \frac{GMm}{|\mathbf{r}|} \qquad (A4)$$
where the reduced mass $\mu = \frac{mM}{m+M}$.
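For arbitrary initial data equation (A2) is usually integrated numerically. The sketch below (Sun-Earth-like parameters and a simple velocity-Verlet step, chosen only for illustration) integrates the relative motion for one year and checks that the energy (A4) stays essentially constant.

```python
import math

# Sketch: numerical integration of the relative motion (A2) with a velocity-Verlet
# step, using Sun-Earth-like values purely as an illustration, and a check that
# the energy (A4) stays essentially constant.
G, M, m = 6.674e-11, 1.989e30, 5.972e24   # SI units
mu = m * M / (m + M)                      # reduced mass
GM_total = G * (M + m)

def acceleration(r):
    d = math.hypot(r[0], r[1])
    return [-GM_total * r[0] / d**3, -GM_total * r[1] / d**3]

def energy(r, v):
    return 0.5 * mu * (v[0]**2 + v[1]**2) - G * M * m / math.hypot(r[0], r[1])

r = [1.496e11, 0.0]        # initial relative position (m)
v = [0.0, 2.978e4]         # initial relative velocity (m/s)
dt = 3600.0                # one-hour time step
E0 = energy(r, v)

a = acceleration(r)
for _ in range(24 * 365):  # roughly one orbital period
    v = [v[i] + 0.5 * dt * a[i] for i in range(2)]
    r = [r[i] + dt * v[i] for i in range(2)]
    a = acceleration(r)
    v = [v[i] + 0.5 * dt * a[i] for i in range(2)]

print(abs(energy(r, v) - E0) / abs(E0))   # tiny relative drift: the step is symplectic
```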
Another important law is the conservation of the total angular momentum L:
$$\mathbf{L} = \mathbf{r}_1 \times \mathbf{p}_1 + \mathbf{r}_2 \times \mathbf{p}_2 \qquad (A5)$$
where $\mathbf{p}_i = m_i\mathbf{v}_i$ are the corresponding individual linear momenta and “×” denotes the vector product.
It is easy to show that the total energy, total linear momentum and total angular momentum are conserved for any isolated system of N mass points $m_n$, $n = 1,\ldots,N$, evolving under the influence of conservative forces $\mathbf{F}_n = -\nabla_{\mathbf{r}_n} U$, where the total potential energy $U = U(\mathbf{r}_1,\ldots,\mathbf{r}_N)$.
The motion of a system of N material points can be represented as the motion of one point in a configuration space $\mathbb{R}^{3N}$: $(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N) = x = (x_1, \ldots, x_{3N})$. If we do not impose any constraints on the motion of the N material points, the system has 3N degrees of freedom. In most practical cases constraints are imposed. An object has one degree of freedom if it can only slide inside a curved tube in the gravitational field, or two degrees of freedom if it can slide on an inclined plane. Similarly, a simple pendulum (a suspended small mass m) has one degree of freedom and its motion is completely determined by one generalized coordinate: an angle θ.
In general, if we impose several constraints, a system has s degrees of freedom and its time evolution can be completely described by generalized coordinates $q = (q_1, \ldots, q_s)$ describing a hypersurface in the configuration space. If the forces depend on time, then this hypersurface moves inside the configuration space. Thus, after expressing the position vectors and their derivatives in terms of q and $\dot{q}$, the kinetic energy is $K = K(q, \dot{q})$ and the potential energy is $U = U(q)$. To find q(t) for given initial conditions $q(t_0)$ and $\dot{q}(t_0)$ one has to solve the Euler-Lagrange (E-L) equations [27,28,29]:
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0 \qquad (A6)$$
where
$$L = K - U = L(q_1, \ldots, q_s, \dot{q}_1, \ldots, \dot{q}_s, t) \qquad (A7)$$
For the one-dimensional harmonic oscillator $L = \frac{m\dot{x}^2}{2} - \frac{kx^2}{2}$ and:
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = m\ddot{x} + kx = 0 \qquad (A8)$$
which is exactly Newton’s equation.
Since antiquity, Man has wanted to maximize the area bounded by a curve with a given perimeter or to maximize the volume bounded by a surface with a given area. These and similar problems can be solved using the calculus of variations developed by Johann Bernoulli, Euler and Lagrange [3,29]. In the 17th century Pierre de Fermat demonstrated the Principle of Least Time, according to which light travelling between two points P and Q takes the path requiring the shortest (extremal) time. This suggested that perhaps the principle could be generalized to include other natural phenomena. In 1744, Pierre de Maupertuis announced that nature always behaves so as to minimize a certain integral called the action. From this principle he deduced Newton’s equations of motion and the optical phenomena. He thought that his principle was the scientific proof of the existence of God, for it was “so wise a principle as to be worthy only of the Supreme Being” [3]. The Principle of Least Action was rephrased and generalized by Lagrange, Jacobi and Hamilton [28,29,30]. It can be summarized as follows.
If a system evolves from a point q1 =q(t1) to another point q2 =q(t2) under the influence of conservative forces following the path parametrized by q(t) which is the solution of E-L equations (A6), then a certain integral S called action remains stationary (δS = 0 ) for small arbitrary independent changes in the path from q(t) to q(t)+ δq(t) such that δq(t1)= δq(t2)=0. The action S is usually defined as:
$$S[q] = \int_{t_1}^{t_2} L(q_1, \ldots, q_s, \dot{q}_1, \ldots, \dot{q}_s, t)\, dt \qquad (A9)$$
and the variation δS as the difference in S up to first order in δq and $\delta\dot{q}$:
$$\delta S = S[q + \delta q] - S[q] \approx \int_{t_1}^{t_2}\left(\frac{\partial L}{\partial q}\,\delta q + \frac{\partial L}{\partial \dot{q}}\,\delta\dot{q}\right)dt = 0 \qquad (A10)$$
The action remains stationary for the motion in the configuration space between any two points q₁ and q₂ and is least for close points (a short “path”). It can also be proven that by adding to the original Lagrangian the total time derivative of an arbitrary function f(q, t) we obtain the same solution for the stationary path [28,29].
The mathematical condition δS = 0 chooses, from the infinity of possible “evolutions” of the system, the evolution consistent with Newton’s equations. Since L = K − U, one can correctly conclude that physical systems, in a field of conservative forces, follow paths in the configuration space in such a way that the time average of the difference between the kinetic and potential energy on each segment of the path remains minimal (extremal).
One should not forget that the “equivalence” between (A6) and (A10 ) is the equivalence of two mathematical descriptions and it does not justify teleological speculations.
Nevertheless, the Least Action principle allows an easy derivation of the Hamilton-Jacobi equations and of Hamilton’s equations of motion, which are first-order differential equations in the new coordinates $(q, p) = (q_1, \ldots, q_s, p_1, \ldots, p_s)$, where the generalized momentum is $p_i = \frac{\partial L}{\partial \dot{q}_i}$.
By introduction of generalized momenta all information about system evolution is contained in a curve (q(t), p(t)) in the 2s dimensional phase space F.
The important function called the Hamiltonian is defined as [27,30]:
$$H(q, p, t) = \sum_{i=1}^{s} p_i\dot{q}_i - L(q_1, \ldots, q_s, \dot{q}_1, \ldots, \dot{q}_s, t) \qquad (A11)$$
where $\dot{q}_i = \dot{q}_i(q, p)$. Using (A6), (A11) and the definition of the generalized momenta we immediately obtain Hamilton’s equations of motion:
$$\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial H}{\partial q_i} \qquad (A12)$$
For the one-dimensional oscillator (A8) we obtain $\frac{\partial L}{\partial \dot{x}} = m\dot{x} = p$ and:
$$H = p\dot{x} - \frac{m\dot{x}^2}{2} + \frac{kx^2}{2} = \frac{p^2}{m} - \frac{p^2}{2m} + \frac{kx^2}{2} = \frac{p^2}{2m} + \frac{kx^2}{2} = E \qquad (A13)$$
where E is the constant energy of the system (because its Lagrangian does not depend on time). Hamilton’s equations of motion are again equivalent to Newton’s equation:
$$\dot{x} = \frac{\partial H}{\partial p} = \frac{p}{m}, \qquad \dot{p} = -\frac{\partial H}{\partial x} = -kx \quad\Rightarrow\quad m\ddot{x} = -kx \qquad (A14)$$
The trajectory in the phase space of the system is in general an ellipse (see the energy conservation equation) and one can see an animation of this motion, for example, at https://en.wikipedia.org/wiki/Phase_space.
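A minimal numerical sketch of (A14) (unit mass and spring constant chosen only for simplicity, a symplectic Euler step): the integrated phase-space point (x, p) stays close to the energy ellipse p²/2m + kx²/2 = E and returns near its starting point after one period.

```python
import math

# Sketch: Hamilton's equations (A14) for the harmonic oscillator integrated with
# a symplectic Euler step; m = k = 1 are chosen only for simplicity.
m, k, dt = 1.0, 1.0, 0.01
x, p = 1.0, 0.0
E0 = p**2 / (2 * m) + k * x**2 / 2

for _ in range(int(2 * math.pi / dt)):   # roughly one period
    p -= dt * k * x                      # dp/dt = -dH/dx = -k x
    x += dt * p / m                      # dx/dt =  dH/dp =  p / m

E = p**2 / (2 * m) + k * x**2 / 2
print(x, p)           # close to the initial point (1, 0)
print(abs(E - E0))    # small: the orbit stays close to the ellipse E = E0
```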
The Hamiltonian plays an important role in different domains of science, including chaos theory, quantum mechanics, quantum field theory and the Standard Model. Canonical quantization consists in replacing coordinates and momenta by operators, and the Poisson brackets (https://en.wikipedia.org/wiki/Poisson_bracket) by commutators.

References

  1. Robb, A.A. Optical Geometry of Motion: A New View of the Theory of Relativity; Kessinger Publishing: Whitefish, MT, USA, 1911. [Google Scholar]
  2. Whitehead, A.N. Process and Reality; An Essay in Cosmology; Cambridge University Press: Cambridge, UK, 1929; Gifford Lectures Delivered in the University of Edinburgh During the Session 1927–1928. [Google Scholar]
  3. Kline, M. Mathematics and the Physical World; Thomas Y. Crowell: New York, NY, USA, 1959.
  4. von Helmholtz, H. Die Thatsachen in der Wahrnehmung. In Vorträge und Reden, FünfteAuflage, Zweiter Band; Friedrich Vieweg und Sohn: Braunschweig, Germany, 1903; pp. 215–247. [Google Scholar]
  5. von Helmholtz, H. The facts in perception. In Epistemological Writings: The Paul Hertz/Moritz Schlick Centenary Edition of 1921, with Notes and Commentary by the Editors; Boston Studies in the Philosophy of Science, 37; Cohen, R., Elkana, Y., Lowe, M., Eds.; Springer: Dordrecht, The Netherlands, 1977; pp. 115–185, Talk first given in 1878. [Google Scholar]
  6. Hertz, H. Untersuchungen über die Ausbreitung der elektrischen Kraft; J.A. Barth: Leipzig, Germany, 1892. [Google Scholar]
  7. Hertz, H. Die Prinzipien der Mechanik in neuem Zusammenhange dargestellt; J.A. Barth: Leipzig, Germany, 1894. [Google Scholar]
  8. Boltzmann, L. On the development of the methods of theoretical physics in recent times. In Theoretical Physics and Philosophical Problems; Vienna Circle Collection; McGuinness, B., Ed.; Springer: Dordrecht, The Netherlands, 1974; Volume 5. [Google Scholar]
  9. Schrödinger, E. Science Theory and Man; Dover: New York, NY, USA, 1957. [Google Scholar]
  10. Schrödinger, E. Mind and Matter; Cambridge University Press: Cambridge, UK, 1958. [Google Scholar]
  11. D’Agostino, S. Boltzmann and Hertz on the Bild conception of physical theory. Hist. Sci. 1990, 28, 380–398. [Google Scholar] [CrossRef]
  12. Khrennikov, A. Bild Conception of Scientific Theory Structuring in Classical and Quantum Physics: From Hertz and Boltzmann to Schrödinger and De Broglie. Entropy 2023, 25, 1565. [Google Scholar] [CrossRef] [PubMed]
  13. www.scienceworld.ca/stories/chickens-can-do-math/.
  14. Müller, M.; Wehner, R. Path integration in desert ants, Cataglyphis fortis. PNAS 1988, 85 (14), 5287–5290. [CrossRef]
  15. https://en.wikipedia.org/wiki/Babylonian_mathematics.
  16. https://en.wikipedia.org/wiki/Ancient_Egyptian_mathematics.
  17. https://en.wikipedia.org/wiki/Narmer_Macehead.
  18. https://en.wikipedia.org/wiki/Pythagoras.
  19. https://numerologynamecalculator.com/pythagorean/.
  20. https://en.wikipedia.org/wiki/Pythagoreanism.
  21. https://en.wikipedia.org/wiki/Aristotle.
  22. https://www.researchgate.net/publication/228537232_The_Heliocentric_System_from_the_Orphic_Hymns_and_the_Pythagoreans_to_the_Emperor_Julian.
  23. https://en.wikipedia.org/wiki/Ancient_Greek_astronomy.
  24. Ifrah, G. The Universal History of Numbers: From Prehistory to the Invention of the Computer; John Wiley & Sons: New York, NY, USA, 1981; ISBN 0-471-37568-3.
  25. Weyl, H. Philosophy of Mathematics aand Natural Science, Atheneum, New York. 1963.
  26. https://en.wikipedia.org/wiki/Science_and_the_Catholic_Church.
  27. Landau, L.D.; Lifshitz, E.M. (1972). Course of Theoretical Physics, Vol. 1 – Mechanics. Franklin Book Company. ISBN 978-0-08-016739-8.
  28. https://en.wikipedia.org/wiki/Lagrangian_mechanics.
  29. https://en.wikipedia.org/wiki/History_of_variational_principles_in_physics.
  30. https://en.wikipedia.org/wiki/Hamiltonian_mechanics.
  31. https://en.wikipedia.org/wiki/Noether%27s_theorem#Informal_statement_of_the_theorem.
  32. https://en.wikipedia.org/wiki/Symmetry_(physics).
  33. https://en.wikipedia.org/wiki/Standard_Model.
  34. Poincaré, J.H. Sur le problème des trois corps et les équations de la dynamique. Acta Mathematica 1890, 13, 1–271.
  35. https://en.wikipedia.org/wiki/Three-body_problem.
  36. https://en.wikipedia.org/wiki/Henri_Poincar%C3%A9.
  37. https://phys.org/news/2017-10-scientists-periodic-orbits-famous-three-body.html#google_vignette.
  38. Li, X.; Liao, S. More than six hundred new families of Newtonian periodic planar collisionless three-body orbits. Science China Physics, Mechanics & Astronomy 2017, 60, 129511; arXiv:1705.00527.
  39. Li, X.; Liao, S. Collisionless periodic orbits in the free-fall three-body problem. New Astronomy 2019, 70, 22–26; arXiv:1805.07980.
  40. Li, X.; Liao, S. One family of 13315 stable periodic orbits of non-hierarchical unequal-mass triple systems. Science China Physics, Mechanics & Astronomy 2021, 64, 219511; arXiv:2007.10184.
  41. Liao, S.; Li, X.; Yang, Y. Three-body problem - from Newton to supercomputer plus machine learning. New Astronomy 2022, 96, 101850; arXiv:2106.11010.
  42. Lorenz, E.N. Deterministic non-periodic flow. Journal of the Atmospheric Sciences 1963, 20, 130–141.
  43. https://en.wikipedia.org/wiki/Lorenz_system.
  44. https://en.wikipedia.org/wiki/Chaos_theory.
  45. Shen, B.-W.; Pielke, R., Sr.; Zeng, X. The 50th Anniversary of the Metaphorical Butterfly Effect since Lorenz (1972): Multistability, Multiscale Predictability, and Sensitivity in Numerical Models. Atmosphere 2023, 14, 1279.
  46. Shen, B.-W. A Review of Lorenz’s Models from 1960 to 2008. International Journal of Bifurcation and Chaos 2023, 33, 2330024.
  47. https://en.wikipedia.org/wiki/Fractal.
  48. Mandelbrot, B. (1977). The Fractal Geometry of Nature. New York: Freeman. p. 248.
  49. Mandelbrot, B.; Hudson, R. The (Mis)behavior of Markets: A Fractal View of Risk, Ruin, and Reward; Basic Books: New York, NY, USA, 2004; p. 201; ISBN 9780465043552.
  50. Mandelbrot, B. How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension. Science 1967, 156, 636–638.
  51. Mandelbrot, B. (1982). The Fractal Geometry of Nature. New York: Macmillan. ISBN 978-0716711865.
  52. Patrzalek, E. Fractals: Useful Beauty (General Introduction to Fractal Geometry); IPO, Centre for User-System Interaction, Eindhoven University of Technology; https://www.fractal.org/Bewustzijns-Besturings-Model/Fractals-Useful-Beauty.htm.
  53. https://en.wikipedia.org/wiki/Koch_snowflake.
  54. https://en.wikipedia.org/wiki/Sierpi%C5%84ski_triangle.
  55. https://en.wikipedia.org/wiki/Mandelbrot_set.
  56. https://en.wikipedia.org/wiki/Julia_set.
  57. Thom, René (1989) Structural Stability and Morphogenesis: An Outline of a General Theory of Models, Reading, MA: Addison-Wesley ISBN 0-201-09419-3.
  58. Ekeland, I. Le calcul, l’imprévu. Les figures du temps de Kepler à Thom; Éditions du Seuil: Paris, France, 1984.
  59. https://en.wikipedia.org/wiki/Catastrophe_theory.
  60. https://en.wikipedia.org/wiki/Bifurcation_theory.
  61. https://en.wikipedia.org/wiki/History_of_chemistry.
  62. https://en.wikipedia.org/wiki/Atomism.
  63. https://en.wikipedia.org/wiki/John_Dalton.
  64. Dalton, J. A New System of Chemical Philosophy. Cambridge University Press; 2010.
  65. Gay-Lussac, J.L. On the combination of gaseous substances (1809); online and analyzed on BibNum (for English, click ’à télécharger’).
  66. https://www.britannica.com/science/atom/The-beginnings-of-modern-atomic-theory.
  67. https://en.wikipedia.org/wiki/Lewis_structure.
  68. https://en.wikipedia.org/wiki/Dmitri_Mendeleev.
  69. https://en.wikipedia.org/wiki/Michael_Faraday.
  70. https://en.wikipedia.org/wiki/James_Clerk_Maxwell.
  71. https://en.wikipedia.org/wiki/Balmer_series.
  72. https://en.wikipedia.org/wiki/Bohr_model.
  73. Ballentine, L.E. Quantum Mechanics: A Modern Development; World Scientific: Singapore, 1998.
  74. Kupczynski, M. Seventy years of the EPR paradox. AIP Conf. Proc. 2006, 861, 516–523.
  75. Khrennikov, A. Contextual Approach to Quantum Formalism; Springer: Dordrecht, The Netherlands, 2009.
  76. Kupczynski, M. Can we close the Bohr-Einstein quantum debate? Phil. Trans. R. Soc. A 2017, 375, 20160392.
  77. Kupczynski, M. Quantum mechanics and modeling of physical reality. Phys. Scr. 2018, 93, 123001.
  78. Kupczynski, M. Quantum Nonlocality: How Does Nature Do It? Entropy 2024, 26, 191.
  79. Khrennikov, A. Contextuality, Complementarity, Signaling, and Bell Tests. Entropy 2022, 24, 1380.
  80. Kuhlmann, M. Quantum Field Theory. In The Stanford Encyclopedia of Philosophy, Summer 2023 ed.; Zalta, E.N., Nodelman, U., Eds.
  81. https://en.wikipedia.org/wiki/Quantum_field_theory.
  82. https://en.wikipedia.org/wiki/Quantum_electrodynamics.
  83. https://en.wikipedia.org/wiki/Renormalization.
  84. Dirac, P.A.M. The Principles of Quantum Mechanics, 4th ed.; Clarendon Press: Oxford, UK, 1958.
  85. Serway, R.A.; Moses, C.J.; Moyer, C.A. Modern Physics, 2nd ed.; Harcourt Brace: Orlando, FL, USA, 1989.
  86. https://en.wikipedia.org/wiki/Relativistic_Breit%E2%80%93Wigner_distribution.
  87. https://en.wikipedia.org/wiki/Elementary_particle.
  88. https://en.wikipedia.org/wiki/Quark_model.
  89. https://en.wikipedia.org/wiki/Eightfold_way_(physics).
  90. https://en.wikipedia.org/wiki/Standard_Model.
  91. https://en.wikipedia.org/wiki/Quantum_chromodynamics.
  92. https://en.wikipedia.org/wiki/Parton_(particle_physics).
  93. https://en.wikipedia.org/wiki/Hadronization.
  94. https://en.wikipedia.org/wiki/Event_generator.
  95. https://neuroscience.stanford.edu/news/reality-constructed-your-brain-here-s-what-means-and-why-it-matters (2020).
  96. https://en.wikipedia.org/wiki/Optical_illusion.
  97. https://en.wikipedia.org/wiki/Visual_perception.
  98. Belitsky, A.V.; Radyushkin, A.V. Unraveling hadron structure with generalized parton distributions. Phys. Rep. 2005, 418, 1–387.
  99. Pancheri, G.; Srivastava, Y.N. Introduction to the physics of the total cross section at LHC. The European Physical Journal C 2017, 77, 150.
  100. Kupczynski, M. Is quantum theory predictably complete? Phys. Scr. 2009, T135, 014005.
  101. Kupczynski, M. Time series, stochastic processes and completeness of quantum theory. AIP Conf. Proc. 2011, 1327, 394–400.
  102. Nieuwenhuizen, T.M. Is the contextuality loophole fatal for the derivation of Bell inequalities? Found. Phys. 2011, 41, 580–591.
  103. Nieuwenhuizen, T.M.; Kupczynski, M. The contextuality loophole is fatal for derivation of Bell inequalities: Reply to a Comment by I. Schmelzer. Found. Phys. 2017, 47, 316–319.
  104. Kupczynski, M. Closing the Door on Quantum Nonlocality. Entropy 2018, 20, 877.
  105. Kupczynski, M. Is the Moon there when nobody looks: Bell inequalities and physical reality. Front. Phys. 2020, 8, 273.
  106. Kupczynski, M. Contextuality-by-Default Description of Bell Tests: Contextuality as the Rule and Not as an Exception. Entropy 2021, 23, 1104.
  107. Kupczynski, M. Contextuality or nonlocality; what would John Bell choose today? Entropy 2023, 25, 280.
  108. Kupczynski, M. My Discussions of Quantum Foundations with John Stewart Bell. Found. Sci. 2024.
  109. Khrennikov, A. Get rid of nonlocality from quantum physics. Entropy 2019, 21, 806.
  110. Khrennikov, A. Two faced Janus of quantum nonlocality. Entropy 2020, 22, 303.
  111. Jung, K. Violation of Bell’s inequality: must the Einstein locality really be abandoned? J. Phys. Conf. Ser.
  112. Dzhafarov, E.N. Assumption-Free Derivation of the Bell-Type Criteria of Contextuality/Nonlocality. Entropy 2021, 23, 1543.
  113. Boughn, S. There Is No Spooky Action at a Distance in Quantum Mechanics. Entropy 2022, 24, 560.
  114. Hance, J.R.; Hossenfelder, S. Bell’s theorem allows local theories of quantum mechanics. Nat. Phys. 2022, 18, 1382.
  115. Hess, K. A Critical Review of Works Pertinent to the Einstein-Bohr Debate and Bell’s Theorem. Symmetry 2022, 14, 163.
  116. De Raedt, H.; et al. Einstein–Podolsky–Rosen–Bohm experiments: A discrete data driven approach. Annals of Physics 2023, 453, 169314.
  117. De Raedt, H.; et al. Can foreign exchange rates violate Bell inequalities? 2024.
  118. Raussendorf, R. Contextuality in measurement-based quantum computation. Phys. Rev. A 2013, 88, 022322.
  119. Howard, M.; Wallman, J.; Veitch, V.; Emerson, J. Contextuality supplies the ‘magic’ for quantum computation. Nature 2014, 510, 351–355.
  120. Jaeger, G. The Ontology of Haag’s Local Quantum Physics. Entropy 2024, 26, 33.
  121. Plotnitsky, A. In Our Mind’s Eye: Thinkable and Unthinkable, and Classical and Quantum in Fundamental Physics, with Schrödinger’s Cat Experiment. Entropy 2024, 26, 418.
Figure 1. Hieroglyphs used for Egyptian numerals. Compound numbers were formed by addition; for example, writing from right to left, 23 was depicted as 111 followed by two signs for ten.
Figure 2. Glyphs copied from decorated mace-head which depicts a ceremony where captives and other gifts are presented to Pharaoh Narmer c. 3100 BC, who is enthroned beneath a canopy on a stepped platform.
Figure 3. The fraction 1/2 was represented by a glyph that may have depicted a piece of linen folded in two. The fraction 2/3 was represented by the glyph for a mouth with 2 (different sized) strokes. The rest of the fractions were always represented by a mouth super-imposed over a number.
Figure 4. The first six triangular numbers.
Figure 5. We easily notice that 3² + 2×3 + 1 = 4², etc. The number 2n+1 was called a gnomon.
Figure 6. Greek numerals represented by letters.
Figure 7. The incomplete diagram of the model of the universe proposed by Philolaus of Croton. Missing are the Moon between the Earth and the Sun, the five more distant known planets and the celestial sphere of the stars. The existence of the Antichthon (counter-Earth) helped to explain the diurnal cycle [22].
Figure 8. Early printed version of the Ptolemaic system (Christian Aristotelian cosmos), from Peter Apian, Cosmographia, 1524.
Figure 9. God the Geometer — Gothic frontispiece of the Bible moralized, representing God’s act of Creation. France, mid-13th century.
Figure 10. Six families of periodic orbits discovered recently by two Chinese scientists.
Figure 11. Two examples of periodic orbits for equal masses.
Figure 12. The relatively periodic BHH satellite orbits of the three-body system with various masses, in a rotating frame of reference. Blue line: body-1; red line: body-2; black line: body-3.
Figure 13. The Lorenz strange attractor and the butterfly effect.
Figure 14. The first four iterations of the algorithm constructing the Koch snowflake curve.
Figure 15. a) Snowflake dendrite [53]; b) the first and the fourth iterations of the Sierpinski gasket [54].
Figure 16. Three examples of the fractal structures in nature.
Figure 17. Fractal art inspired by nature. Colours at different points depend on how these points are transformed in the successive iterations. Of course, the final choice is motivated by the artistic effect one wants to obtain [51,52].
Figure 18. The Mandelbrot set. A system starting at a black initial point remains inside the set. Colours indicate how fast a system starting at these points escapes to infinity.
Figure 19. Details of the Mandelbrot set.
Figure 20. A connected and a disconnected Julia set.
Figure 21. One mole of carbon C-12.
Figure 22. Phosphorus electronic structure, Lewis diagram and the tetrahedral P4 molecule.
Figure 23. The periodic table of 1869 and the modern table, in which the atomic number instead of the atomic mass is used.
Figure 24. The visible solar spectrum, ranging from the shortest visible wavelengths (violet light, at 400 nm) to the longest (red light, at 700 nm). Shown in the diagram are prominent Fraunhofer lines, representing wavelengths at which light is absorbed by elements present in the atmosphere of the Sun.
Figure 25. Balmer series of the visible hydrogen spectral lines.
Figure 26. Full hydrogen spectrum including infrared and ultraviolet.
Figure 27. Bohr model of the atom. Maximum number of electrons: 2 in the first shell, 8 in the second and 18 in the third.
Figure 28. Feynman graphs as mnemonic tools to account for the important mathematical terms to be included in QED calculations.
Figure 29. The bubble-chamber photograph shows many events after a high-energy collision of a π with a proton (12); the inset is a drawing of the identified tracks [85].
Figure 30. The histogram of the invariant mass proving the existence of the elementary particle Δ+ [85].
Figure 31. Building blocks of matter according to the Standard Model.
Figure 32. Meson nonets, baryon octet and decuplet.
Figure 33. Interactions in the Standard Model: all Feynman diagrams in the model are built from combinations of these vertices; q is any quark, g is a gluon, X is any charged particle, γ is a photon, f is any fermion, mB is any boson with mass. In diagrams with multiple particle labels separated by / one particle label is chosen. In diagrams with particle labels separated by | the labels must be chosen in the same order. For example, in the four-boson electroweak case the valid diagrams are WWWW, WWZZ, WWγγ, WWZγ. The conjugate of each listed vertex (reversing the direction of arrows) is also allowed [90].
Figure 34. Simulation showing the production of the Higgs boson in the collision of two protons at the Large Hadron Collider. The Higgs boson quickly decays into four muons, which are a type of heavy electron that is not absorbed by the detector. The tracks of the muons are shown in yellow. (Image credit: Lucas Taylor/CMS).
Figure 35. The Kanizsa triangle: the Pac-Man-like shapes give the impression of a triangle in our minds. It seems like a triangle because we are used to seeing triangles.
Figure 36. We see a horse’s head or a seal depending on our previous life experience.
Figure 37. In reality the Crocs are pink; the pixels in the strawberries are only grey and cyan. Courtesy of Pascal Wallisch.
Figure 38. Epistemological cycle: using a theoretical model CTM, observables are chosen and an experiment is designed and performed. Regularities in the experimental data are discovered and an observational model OM is postulated and tested. An improved CTM is constructed, additional observables are defined and new experiments are designed and performed.
Figure A1. A simple pendulum with one degree of freedom and one generalized coordinate θ.
Figure A2. The action S is greater on path 2 than on the path chosen by a material point in the gravitational field of the Earth.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.