ARTICLE | doi:10.20944/preprints201709.0158.v1
Subject: Engineering, Automotive Engineering Keywords: variational mode decomposition; Euclidean Distance; diesel engine; vibration signal; denoising algorithm
Online: 29 September 2017 (14:53:38 CEST)
Variational mode decomposition (VMD) is a recently introduced adaptive signal decomposition algorithm with a solid theoretical foundation and better noise robustness than empirical mode decomposition (EMD). Diesel engine vibration signals contain a large amount of background noise. To address this problem, a denoising algorithm based on VMD and Euclidean distance is proposed. First, a multi-component, non-Gaussian, noisy simulation signal is constructed and decomposed by VMD into a given number K of band-limited intrinsic mode functions. Then the Euclidean distance between the probability density function of each mode and that of the simulation signal is calculated. The signal is reconstructed from the relevant modes, which are selected on the basis of noticeable similarity between the probability density function of each mode and that of the simulation signal. Finally, vibration signals of diesel engine connecting rod bearing faults are analyzed by the proposed method. The results show that, compared with other denoising algorithms, the proposed method achieves a better denoising effect and effectively enhances the fault characteristics of diesel engine connecting rod bearing vibration signals.
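The mode-selection step described above can be sketched in a few lines; here the VMD decomposition itself is skipped and two synthetic "modes" stand in for its output (an illustration, not the paper's implementation):

```python
import numpy as np

def pdf(x, bins, span):
    # Histogram-based probability density estimate on a shared grid.
    h, _ = np.histogram(x, bins=bins, range=span, density=True)
    return h

def select_modes(signal, modes, keep, bins=50):
    # Rank modes by the Euclidean distance between each mode's PDF
    # and the PDF of the raw signal; the closest ones are "relevant".
    lo = min(signal.min(), *(m.min() for m in modes))
    hi = max(signal.max(), *(m.max() for m in modes))
    p_sig = pdf(signal, bins, (lo, hi))
    dists = [float(np.linalg.norm(pdf(m, bins, (lo, hi)) - p_sig))
             for m in modes]
    order = np.argsort(dists)          # most similar first
    return order[:keep], dists

# Synthetic stand-ins for VMD output: a harmonic mode and a noise-like mode.
rs = np.random.RandomState(0)
t = np.linspace(0.0, 1.0, 2000)
mode_signal = np.sin(2 * np.pi * 5 * t)
mode_noise = 0.05 * rs.randn(t.size)
noisy = mode_signal + mode_noise
chosen, dists = select_modes(noisy, [mode_signal, mode_noise], keep=1)
denoised = mode_signal if chosen[0] == 0 else mode_noise
```

The harmonic mode's PDF is far closer to the noisy signal's PDF than the Gaussian noise PDF is, so it is retained for reconstruction.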
ARTICLE | doi:10.20944/preprints201710.0104.v1
Subject: Computer Science And Mathematics, Applied Mathematics Keywords: evolutionary dynamics; quantitative trait; Manhattan norm; Euclidean norm; Chebyshev norm; parasitism; exploitation; egalitarianism
Online: 16 October 2017 (07:35:46 CEST)
Various distance metrics and their induced norms are employed in the quantitative modeling of evolutionary dynamics. Minimization of these distance metrics, when applied to evolutionary optimization, is hypothesized to result in different outcomes. Here, we apply the different distance metrics to the evolutionary trait dynamics brought about by the interaction between two competing species infected by parasites (exploiters). We present deterministic cases showing the distinctive selection outcomes under the Manhattan, Euclidean, and Chebyshev norms. Specifically, we show how they differ in the time of convergence to the desired optima (e.g., no disease) and in the egalitarian sharing of carrying capacity between the competing species. However, when randomness is introduced into the population dynamics of the parasites and the trait dynamics of the competing species, the distinctive characteristics of the outcomes under the three norms become indistinguishable. Our results provide theoretical cases in which evolutionary dynamics using different distance metrics exhibit similar outcomes.
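The three norms in question order any displacement vector the same way (Chebyshev ≤ Euclidean ≤ Manhattan), which is easy to verify numerically (the trait-displacement values here are hypothetical):

```python
import numpy as np

# Displacement of two species' traits from the optimum (hypothetical values).
d = np.array([0.6, -0.3])

manhattan = np.linalg.norm(d, 1)       # |0.6| + |0.3| = 0.9
euclidean = np.linalg.norm(d)          # sqrt(0.36 + 0.09) ~ 0.671
chebyshev = np.linalg.norm(d, np.inf)  # max(|0.6|, |0.3|) = 0.6

# For any vector: Chebyshev <= Euclidean <= Manhattan.
assert chebyshev <= euclidean <= manhattan
```

Minimizing each norm penalizes the components of the displacement differently, which is the source of the distinct deterministic outcomes the abstract reports.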
BRIEF REPORT | doi:10.20944/preprints202303.0490.v1
Subject: Computer Science And Mathematics, Geometry And Topology Keywords: Triangles; Geometry; Euclidean Geometry
Online: 28 March 2023 (13:50:44 CEST)
This research paper proposes an algorithm for establishing congruence criteria between two convex polygons in Euclidean geometry. It begins with a review of triangles, then extends to quadrilaterals, and eventually generalizes the case to n-sided polygons. The algorithm is proved using induction together with a case-by-case analysis, and a corollary to the algorithm is stated.
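As a concrete companion to the abstract, one natural congruence check for convex polygons — matching the cyclic sequence of side lengths and vertex angles up to rotation and reflection — can be sketched as follows (this is an illustration, not necessarily the paper's algorithm):

```python
import math

def signature(pts):
    # Cyclic sequence of (outgoing side length, interior-angle cosine)
    # at each vertex of the polygon.
    n = len(pts)
    sig = []
    for i in range(n):
        ax, ay = pts[i - 1]; bx, by = pts[i]; cx, cy = pts[(i + 1) % n]
        u = (ax - bx, ay - by); v = (cx - bx, cy - by)
        side = math.hypot(cx - bx, cy - by)
        cosang = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
        sig.append((round(side, 9), round(cosang, 9)))
    return sig

def congruent(p, q):
    # True if q's signature is a rotation (or reversed rotation) of p's.
    if len(p) != len(q):
        return False
    s, t = signature(p), signature(q)
    for c in (s, list(reversed(s))):
        if any(c[i:] + c[:i] == t for i in range(len(c))):
            return True
    return False

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
moved = [(2, 3), (2, 4), (1, 4), (1, 3)]   # same square, moved and reflected
rect = [(0, 0), (2, 0), (2, 1), (0, 1)]
```

The reflection handling here is simplified; a full treatment would re-pair sides and angles carefully under orientation reversal.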
ARTICLE | doi:10.20944/preprints202207.0399.v18
Subject: Physical Sciences, Quantum Science And Technology Keywords: Euclidean relativity; cosmology; Hubble constant; quantum mechanics; wave–particle duality; quantum entanglement
Online: 3 January 2023 (06:44:05 CET)
Today's concept of time traces back to Albert Einstein's theories of special (SR) and general relativity (GR). In SR, uniformly moving clocks are slow with respect to my clocks. In GR, clocks in a more curved spacetime are slow with respect to my clocks. Many physicists anticipate that GR has an issue as it isn't compatible with quantum mechanics. Here we show: "Einstein time" (Einstein's concept of time) has an issue because it takes the proper time of an observer as the fourth coordinate of all objects in the universe. We replace Einstein time with "Euclidean time", which takes the proper time of an object as its fourth coordinate. SR and GR work very well as long as we describe the world on or close to Earth. Only then does time flow in one 4D direction for all objects. In all other cases, we must take a 4D vector "flow of time" into account. Unlike other models of Euclidean relativity (ER), we claim that reality is formed by projecting 4D Euclidean spacetime to an observer's 3D space. We prove: The Lorentz factor is recovered in ER; gravitational time dilation is also recovered in ER; ER is compatible with quantum mechanics. We solve 14 mysteries of physics, such as time's arrow, the c² in mc², dark energy, the wave–particle duality, and quantum entanglement. Our theory is supported by experimental data: ER empowers us to match the two competing values of the Hubble constant by adjusting redshift measurements of celestial objects.
ARTICLE | doi:10.20944/preprints202311.0799.v1
Subject: Computer Science And Mathematics, Geometry And Topology Keywords: Euclidean space; Hypersurface; Killing vector field
Online: 13 November 2023 (10:21:04 CET)
An odd-dimensional sphere admits a Killing vector field, induced by applying the complex structure of the ambient Euclidean space to the unit normal. In this paper, we study orientable hypersurfaces in a Euclidean space that admit a unit Killing vector field and find two characterizations of odd-dimensional spheres. In the first result, we show that a complete and simply connected hypersurface of the Euclidean space R^{n+1}, n > 1, admitting a unit Killing vector field ξ that leaves the shape operator S invariant and whose sectional curvatures of plane sections containing ξ are positive, satisfies S(ξ) = αξ, with α the mean curvature, if and only if n = 2m−1, α is constant, and the hypersurface is isometric to the sphere S^{2m−1}(α²). Similarly, we find another characterization of the sphere S²(α²) using the smooth function σ = g(S(ξ), ξ) on the hypersurface.
ARTICLE | doi:10.20944/preprints202209.0226.v1
Subject: Computer Science And Mathematics, Data Structures, Algorithms And Complexity Keywords: Graph Neural Networks, Non-Euclidean Data, Bioinformation
Online: 15 September 2022 (08:49:51 CEST)
With the development of data science, more and more machine learning techniques have been designed to solve complicated, challenging real-world tasks involving large volumes of data, and many significant real-world datasets take the form of networks or graphs. Graph Neural Networks (GNNs) are powerful machine learning tools well suited to processing large amounts of non-Euclidean data. Because most data in bioinformatics lies in a non-Euclidean domain, GNNs can be applied directly to problems in bioinformatics. Much research has been done on GNNs, and several surveys cover GNNs and their applications, but little work has focused on GNNs in bioinformatics specifically. We believe GNNs can be better utilized in the field of biology, so this literature review takes a comprehensive look at GNNs and their applications in bioinformatics. We first introduce state-of-the-art GNN models, then present their applications in bioinformatics, and finally provide future directions for GNNs in the field.
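For readers new to the area, the core GNN operation the survey builds on — neighborhood aggregation — fits in a few lines of numpy (a hypothetical mean-aggregation layer, not any specific model from the survey):

```python
import numpy as np

def gnn_layer(A, H, W):
    # One message-passing layer: mean-aggregate neighbor features,
    # then apply a linear transform and ReLU.
    # A: adjacency (n, n); H: node features (n, d); W: weights (d, d_out).
    A_hat = A + np.eye(A.shape[0])     # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    H_agg = (A_hat @ H) / deg          # mean over each node's neighborhood
    return np.maximum(H_agg @ W, 0.0)  # ReLU

# Toy graph: 3 nodes in a path 0-1-2, with 2-dim features (hypothetical).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.array([[1., 0.], [0., 1.], [1., 1.]])
W = np.eye(2)
H1 = gnn_layer(A, H, W)
```

Stacking such layers lets each node's representation depend on progressively larger graph neighborhoods, which is what makes GNNs natural for molecular and interaction-network data.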
ARTICLE | doi:10.20944/preprints201912.0389.v1
Subject: Computer Science And Mathematics, Analysis Keywords: Euclidean operator radius; numerical radius; self-adjoint operator
Online: 30 December 2019 (03:46:07 CET)
There are many ways to generalize the concept of the numerical radius; one of the most interesting recent generalizations is the so-called generalized Euclidean operator radius, which is, simply, the numerical radius of a tuple of operators. In this work, several new inequalities, refinements, and generalizations are established for this kind of numerical radius.
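For orientation, one standard formulation of the Euclidean operator radius of an operator tuple (Popescu's definition; the paper may work with a further generalization) is:

```latex
% Euclidean operator radius of a tuple (T_1, ..., T_n) of bounded
% operators on a Hilbert space H (one standard formulation):
w_e(T_1, \ldots, T_n) \;=\; \sup_{\|x\| = 1}
  \left( \sum_{i=1}^{n} \bigl| \langle T_i x,\, x \rangle \bigr|^2 \right)^{1/2}.
% For n = 1 this reduces to the usual numerical radius
% w(T) = \sup_{\|x\| = 1} |\langle T x, x \rangle|.
```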
ARTICLE | doi:10.20944/preprints202305.0191.v2
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: Slope Filter; point cloud; triangular grid; KD-Tree-Based Euclidean Clustering
Online: 30 May 2023 (05:44:25 CEST)
High-precision ground point cloud data has a wide range of applications in various fields, and the separation of ground points from non-ground points is a crucial preprocessing step. Designing an efficient, accurate, and stable ground extraction algorithm is therefore of great significance for improving the processing efficiency and analysis accuracy of point cloud data. The study area in this article was a park in Guilin, Guangxi, China, and the point cloud was obtained from a UAV platform. To improve the stability and accuracy of filtering, this article proposes a triangular grid filter built on a slope filter: violation points are found through the spatial relationships among the points of the triangulation network, and an improved KD-tree-based Euclidean clustering is applied to non-ground point extraction; this method has good accuracy and stability and achieves good results in separating ground points from non-ground points. First, a slope filter removes some non-ground points, reducing the error of classifying ground points as non-ground points. Second, a triangular grid is established from the triangular relationships among the points; violation triangles are determined through the grid, and the corresponding violation points are found within them. Third, regular points are extracted by the three-point collinearity method, and these points are used to extract regular landmarks with KD-tree-based Euclidean clustering and a convex hull algorithm. Finally, scattered points and irregular landmarks are removed by a clustering algorithm. To confirm the superiority of this algorithm, this article compares the filtering effects of various algorithms on the study area and filters the 15 sample datasets provided by ISPRS, obtaining an average error of 3.46%.
The results show that the proposed algorithm has high processing efficiency and accuracy, and can greatly improve the processing efficiency of point cloud data in practical applications.
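The Euclidean-clustering step can be sketched as follows; this toy version replaces the KD-tree neighbor queries with a brute-force distance matrix and omits the triangular-grid and convex-hull stages:

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, radius):
    # Group points whose Euclidean distances chain within `radius`.
    # A production version would query neighbors through a KD-tree;
    # a brute-force distance matrix keeps this sketch short.
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    labels = -np.ones(n, dtype=int)
    cluster = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        queue = deque([seed])
        labels[seed] = cluster
        while queue:                   # breadth-first region growing
            i = queue.popleft()
            for j in np.where((d[i] <= radius) & (labels == -1))[0]:
                labels[j] = cluster
                queue.append(j)
        cluster += 1
    return labels

# Two well-separated point groups (hypothetical coordinates).
pts = np.array([[0, 0], [0.2, 0.1], [0.1, 0.3],
                [5, 5], [5.1, 4.9]], dtype=float)
labels = euclidean_cluster(pts, radius=0.5)
```

Replacing the distance matrix with KD-tree range queries reduces neighbor lookup from O(n) to roughly O(log n) per point, which is the efficiency gain the article's improved clustering targets.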
ARTICLE | doi:10.20944/preprints201909.0279.v1
Subject: Physical Sciences, Quantum Science And Technology Keywords: general relativity; euclidean quantum gravity; path integral; lattice field theory; metropolis algorithm
Online: 25 September 2019 (08:56:59 CEST)
We numerically generate on a lattice an ensemble of stationary, spherically symmetric metrics whose Einstein action satisfies S_E ≪ ħ. This is obtained through a Metropolis algorithm with weight exp(−β²S_E²) and β ≫ ħ⁻¹. The squared action in the exponential allows us to circumvent the problem of the non-positivity of S_E. The discretized metrics obtained exhibit a spontaneous polarization into regions of positive and negative scalar curvature. We compare this ensemble with a class of continuous metrics found previously, which satisfy the condition S_E = 0 exactly, or in certain cases even the stronger condition R(x) = 0 for all x. All these gravitational field configurations are of considerable interest in quantum gravity, because they represent possible vacuum fluctuations and are markedly different from Wheeler's ''spacetime foam''.
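The sampling trick described — Metropolis updates with the squared action in the weight — can be illustrated on a toy one-dimensional "action" that, like S_E, is not bounded below (the action, parameter values, and variable names here are illustrative, not the paper's lattice setup):

```python
import numpy as np

def metropolis_squared_action(S, x0, beta, steps, step_size, rng):
    # Sample configurations with weight exp(-beta**2 * S(x)**2); squaring
    # keeps the weight bounded even when the action S is unbounded below.
    x, samples = x0, []
    logw = lambda y: -beta**2 * S(y)**2
    for _ in range(steps):
        y = x + step_size * rng.standard_normal()   # random-walk proposal
        if np.log(rng.uniform()) < logw(y) - logw(x):
            x = y                                   # Metropolis accept
        samples.append(x)
    return np.array(samples)

# Toy sign-indefinite "action", loosely analogous to the Euclidean
# Einstein action's non-positivity.
S = lambda x: x**3 - x
rng = np.random.default_rng(1)
samples = metropolis_squared_action(S, x0=0.0, beta=5.0, steps=5000,
                                    step_size=0.3, rng=rng)
# The chain concentrates near the zero set of S, i.e. x in {-1, 0, 1}.
```

Because the weight peaks wherever S vanishes rather than where S is most negative, the chain explores near-zero-action configurations, which is the behavior the abstract exploits.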
ARTICLE | doi:10.20944/preprints202304.0140.v1
Subject: Social Sciences, Psychology Keywords: atypicality bias; Euclidean geometry; face perception; face recognition; face space; other-race effect; Riemannian geometry
Online: 10 April 2023 (04:09:28 CEST)
We employed the Riemannian Face Manifold (RFM) as an alternative approach to the conventional linear Euclidean space for explaining the atypicality bias in face likeliness judgments. The RFM posits that the mental representation of faces is better captured as a manifold of stable states, accounting for the nonlinearity in the physical properties of faces. To examine the relationship between physical and psychological distance of morph and typical/atypical faces, we manipulated the parameter k and incorporated a weight function in the Riemannian metric. Our results indicate that the psychological distance between the morph and typical face was longer than that between the morph and atypical face, consistent with prior research on the atypicality bias in perceptual similarity. The RFM approach provides mathematical support and is a powerful tool for studying face perception and recognition, offering potential implications for explaining the other-race effect. Future research can utilize the RFM to investigate individual differences in face recognition abilities and expertise, and how they impact psychological distance and face processing.
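The weighted Riemannian path length underlying the RFM analysis can be illustrated numerically; this sketch uses a hypothetical weight function that grows with atypicality, not the paper's fitted metric or its parameter k:

```python
import numpy as np

def psychological_distance(path, w):
    # Length of a sampled trajectory in face space under a conformally
    # weighted metric: sum of w(segment midpoint) * Euclidean segment length.
    path = np.asarray(path, dtype=float)
    seg = np.diff(path, axis=0)
    mid = (path[1:] + path[:-1]) / 2
    return float(np.sum(w(mid) * np.linalg.norm(seg, axis=1)))

# Morph trajectory from a typical face (origin) outward; the weight grows
# with distance from the typical face, one way an atypicality bias could
# stretch psychological distance (hypothetical weight function).
path = np.linspace([0.0, 0.0], [1.0, 0.0], 50)
w = lambda x: 1.0 + 2.0 * np.linalg.norm(x, axis=1)
d_weighted = psychological_distance(path, w)
d_euclid = psychological_distance(path, lambda x: np.ones(len(x)))
```

With this weight the same physical morph path measures twice as long psychologically as its Euclidean length, mirroring the asymmetry between morph-to-typical and morph-to-atypical distances the study reports.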
ARTICLE | doi:10.20944/preprints202107.0570.v1
Subject: Computer Science And Mathematics, Applied Mathematics Keywords: Face Detection; Euclidean Distance; Fast Fourier Transformation; Discrete Cosine Transformation; Facial Parts Detection; Frequency domain; Spatial domain
Online: 26 July 2021 (11:47:11 CEST)
Face detection is an important task in computer vision. Due to chromosomal disorders, a human face sometimes suffers from different abnormalities: for example, one eye is bigger than the other, a cleft face, differing chin lengths, variation in nose length, or differing lip length or width. Detecting normal and abnormal faces and facial parts in an input image is currently a challenging task for computer vision. In this research paper, a method is proposed that can detect normal or abnormal faces in a frontal input image. The method uses the Fast Fourier Transform (FFT) and the Discrete Cosine Transform (DCT), combining frequency-domain and spatial-domain analysis, to detect those faces.
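The abstract does not specify the exact pipeline, but a minimal sketch of the kind of frequency-domain comparison it describes — FFT/DCT features compared by Euclidean distance across the face's left and right halves — might look like this (the symmetry heuristic and all names are assumptions, not the paper's method):

```python
import numpy as np

def dct2(block):
    # Unnormalized 2-D DCT-II built directly from its definition,
    # to avoid depending on an external DCT routine.
    def cmat(N):
        n = np.arange(N)
        return np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    return cmat(block.shape[0]) @ block @ cmat(block.shape[1]).T

def symmetry_score(face):
    # Euclidean distance between frequency-domain features of the left
    # half and the mirrored right half; larger scores suggest asymmetry.
    h = face.shape[1] // 2
    left = face[:, :h]
    right = face[:, ::-1][:, :h]       # mirrored right half
    feat = lambda b: np.concatenate([np.abs(np.fft.fft2(b)).ravel(),
                                     dct2(b).ravel()])
    return float(np.linalg.norm(feat(left) - feat(right)))

# A perfectly mirror-symmetric toy "face" (hypothetical 8x8 image) scores 0.
rng = np.random.default_rng(0)
half = rng.random((8, 4))
symmetric = np.hstack([half, half[:, ::-1]])
asymmetric = np.hstack([half, rng.random((8, 4))])
```

A real detector would of course operate on localized facial-part regions rather than whole-image halves, but the distance-between-spectra idea is the same.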
ARTICLE | doi:10.20944/preprints202101.0504.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: z-transform; time-varying systems; time-varying difference equations; skew polynomial rings; extended Euclidean algorithm; fraction decomposition
Online: 25 January 2021 (14:50:14 CET)
A transform approach based on a variable initial time (VIT) formulation is developed for discrete-time signals and linear time-varying discrete-time systems or digital filters. The VIT transform is a formal power series in z^(-1) which converts functions given by linear time-varying difference equations into left polynomial fractions with variable coefficients, and with initial conditions incorporated into the framework. It is shown that the transform satisfies a number of properties that are analogous to those of the ordinary z-transform, and that it is possible to do scaling of z^(-i) by time functions which results in left-fraction forms for the transform of a large class of functions including sinusoids with general time-varying amplitudes and frequencies. Using the extended right Euclidean algorithm in a skew polynomial ring with time-varying coefficients, it is shown that a sum of left polynomial fractions can be written as a single fraction, which results in linear time-varying recursions for the inverse transform of the combined fraction. The extraction of a first-order term from a given polynomial fraction is carried out in terms of the evaluation of z^(i) at time functions. In the application to linear time-varying systems, it is proved that the VIT transform of the system output is equal to the product of the VIT transform of the input and the VIT transform of the unit-pulse response function. For systems given by a time-varying moving average or an autoregressive model, the transform framework is used to determine the steady-state output response resulting from various signal inputs such as the step and cosine functions.
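The central tool here, the extended (right) Euclidean algorithm, is easiest to see in the commutative case. Below is a minimal sketch over ordinary rational-coefficient polynomials; the paper's version works in a skew polynomial ring with time-varying coefficients, which this toy does not capture:

```python
from fractions import Fraction

# Polynomials as coefficient lists (lowest degree first) over the rationals.
def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def sub(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return trim([a - b for a, b in zip(p, q)])

def mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return trim(r)

def divmod_poly(p, q):
    # Long division: returns (quotient, remainder).
    p = trim(list(p))
    quo = [Fraction(0)] * max(1, len(p) - len(q) + 1)
    while len(p) >= len(q) and p != [Fraction(0)]:
        shift = len(p) - len(q)
        c = p[-1] / q[-1]
        quo[shift] = c
        p = sub(p, mul([Fraction(0)] * shift + [c], q))
    return trim(quo), p

def extended_euclid(a, b):
    # Returns (g, s, t) with s*a + t*b = g = gcd(a, b) (up to a scalar).
    r0, r1 = trim(list(a)), trim(list(b))
    s0, s1 = [Fraction(1)], [Fraction(0)]
    t0, t1 = [Fraction(0)], [Fraction(1)]
    while r1 != [Fraction(0)]:
        q, r = divmod_poly(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, sub(s0, mul(q, s1))
        t0, t1 = t1, sub(t0, mul(q, t1))
    return r0, s0, t0

a = [Fraction(-1), Fraction(0), Fraction(1)]   # x^2 - 1
b = [Fraction(-1), Fraction(1)]                # x - 1
g, s, t = extended_euclid(a, b)                # expect g proportional to x - 1
```

In the skew-ring setting the multiplication above becomes non-commutative (z·a(k) = a(k+1)·z), but the same quotient-remainder recursion is what lets the paper combine left fractions into a single fraction.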
ARTICLE | doi:10.20944/preprints202102.0392.v1
Subject: Physical Sciences, Particle And Field Physics Keywords: Quantized magnetic monopoles; origins of the physical constants; Photon space fluctuations; 3-D quantized space model; 4-D Euclidean space
Online: 17 February 2021 (13:07:10 CET)
Origins of time, mass, electric charges, and magnetic monopoles are explained. The energies, electric charges, and magnetic charges of the particles are defined as E = cΔtΔV, |q| = cΔt = E/ΔV, and |q_m| = c²Δt, respectively, for the 3-D quantized spaces warped along the time-axis direction of ct in the 4-D Euclidean space. The energy (or mass) and the charges (electric charges and magnetic charges) are vectors along the time axis of ct in the 4-D Euclidean space. This new concept is closely connected to the wave function of quantum mechanics. The electric charges, magnetic charges, and energies have the property of space-direction independence. Magnetic monopoles (charges) are the force-carrying bosons with an inside electric-field time loop. Electric monopoles are the elementary fermions. Photon space fluctuations are explained with the quantized magnetic charges.
ARTICLE | doi:10.20944/preprints202008.0726.v1
Subject: Physical Sciences, Particle And Field Physics Keywords: Big bang; Super unification theory; 4-D Euclidean space; CPT symmetric universe; Origins of the energy (mass); charge and time; Magnetic monopoles
Online: 31 August 2020 (16:28:30 CEST)
Space-time evolution is briefly explained using the 3-dimensional quantized space model (TQSM) based on the 4-dimensional (4-D) Euclidean space. The energy (mass), charges, and absolute time are newly defined based on the 4-D Euclidean space. Photons and gravitons are understood as 2-D space fluctuations along the space axes and 1-D time fluctuations along the time axis, respectively. This indicates that electromagnetic (EM) and gravitational (G) waves can be unified into the so-called gravity-electromagnetic (GEM) wave, which has both space and time fluctuations. The photon with zero force fields is the 2EM wave, because an EM wave corresponds to a charged fermion. The dark matter bosons, weak force bosons, and strong force bosons are the 1-D time fluctuations, which can be expressed by the G waves. This indicates that all five forces are unified by the GEM wave, called the super unification theory in the present work. The force-carrying bosons and mesons are, for the first time, proposed as possible candidates for the magnetic monopoles, just as the fermions and baryons are the electric monopoles. The signs of the magnetic charges, quantized as q_m = cq, are newly defined. The big bang is understood through the space-time evolution of the 4-D Euclidean space, not through a sudden 4-D Minkowski space-time creation. The big bang process created the matter universe with positive energy and the partner anti-matter universe with negative energy from the CPT symmetry.
ARTICLE | doi:10.20944/preprints202210.0203.v1
Subject: Physical Sciences, Particle And Field Physics Keywords: Absolute time; Length expansion; Absolute time simultaneity; Twin paradox; Quantum entanglement; 4-D Euclidean space; Pauli exclusion principle; Higgs boson; Time clicking; Quantum base
Online: 14 October 2022 (03:52:31 CEST)
The absolute time and relative time are defined in terms of the 4-D Euclidean space. Our universe is the 3-D x1x2x3 quantized photon space, which follows absolute time simultaneity as the universe moves along the absolute time axis ct. The length expansion Δx = γΔx₀ is derived under the condition of absolute time (ct) simultaneity. From the similarity between this length expansion Δx = γΔx₀ and the energy increase E = γE₀, it is assumed that the energy is proportional to the particle size Δx. Extending this assumption to the 4-D Euclidean space gives the new definition that the particle energy (E) is the 4-D volume. The particle mass energy is then defined as E = mc² = cΔtΔV = γE₀. The masses of the elementary particles originate from the 4-D warped volume of the photon space, because the particle is the warped photon space with velocity v < c. Therefore, the Higgs boson concept of the standard model (SM) is not needed in the present 3-D quantized space model (TQSM). The scalar boson with spin 0, the photon with spin 1, and the graviton with spin 2 are two-boson states. Therefore, the observed Higgs boson is reinterpreted as the two-boson state of the spin-0 scalar boson. The cosmic muon observation and the twin paradox are explained using the absolute time and relative time; the relative time is the observed time in both cases. In the twin paradox, the person who travels the long distance is older than the person on Earth in terms of the relative-time age, because of the space-and-time conversion effect (STCE) of the moving space distance (x); in terms of the absolute time, without the STCE, the twins are the same age. The fast-moving cosmic muon has an expanded half-life from the time expansion of the relative time. The quantum entanglement and the Pauli exclusion principle are also explained.
The quantum base of the photon space line connects two entangled particles. The system of two particles and the quantum base fluctuates along the absolute time axis by time clicking; when one particle is measured, the other particle is instantly selected by the time clicking. This is called quantum entanglement. The photons, which are the flat photon space with the constant speed c along the space axis and the absolute time axis, have the 4-D photon velocity c_eff = 2^0.5 c. A total 10-D Euclidean space, including three 3-D quantized spaces and one absolute time axis, is required for the electric charges (EC), lepton charges (LC), and color charges (CC) of the elementary particles. The 3-D photon space is very stiff along the absolute time (ct) axis and very soft along the space axes. The Coulomb force between two electrons, carried by the photons (2EM waves) of the space fluctuations, is much stronger than the gravitational force carried by the gravitons (G waves) of the time fluctuations.
ARTICLE | doi:10.20944/preprints202111.0431.v1
Subject: Physical Sciences, Particle And Field Physics Keywords: Dark energy; Photon flat space; Charged black holes; Big bang and inflation; Universe evolution; Vacuum energy crisis; Hubble’s constant puzzle; 4-D Euclidean space
Online: 23 November 2021 (15:04:41 CET)
The space-time evolution of our universe is explained using the 3-dimensional quantized space model (TQSM) based on the 4-dimensional (4-D) Euclidean space. The energy (E = cΔtΔV), the charges and energy density (|q| = ρ = cΔt), and the absolute time (ct) are newly defined based on the 4-D Euclidean space. The photon flat space with the constant energy density ρ = cΔt_q is proposed as the dark energy (DE). The dark energy is separated into the ν DE and the photon DE, which create new photon spaces with the constant energy density ρ = cΔt_q. The ν DE comes from ν pair production by the CPT symmetry, and the photon DE comes from photon-space pair production by the T symmetry. The vacuum energy crisis and the Hubble constant puzzle are explained by the photon space with the ν DE and photon DE. The big bang and inflation of the primary black hole are connected to the accelerated space expansion and big collapse of the photon space through the evolution of the universe. The big bang from nothing is the pair production of the matter universe with positive energy and the partner anti-matter universe with negative energy from the CPT symmetry. Our universe is the matter universe with the negative charges of electric charge (EC), lepton charge (LC), and color charge (CC). This first universe is made of dark matter, lepton, and quark primary black holes with huge negative charges, which cause Coulomb repulsive forces much bigger than the gravitational forces. The huge Coulomb forces induce the inflation of the primary black holes, which decay into super-massive black holes and particles.
ARTICLE | doi:10.20944/preprints202109.0467.v2
Subject: Physical Sciences, Particle And Field Physics Keywords: Elementary particles; Galaxy structures; Charged black hole decay; Big bang and inflation; Super-massive black holes; Coulomb forces; Proton spin crisis; Dark matters; 4-D Euclidean space.
Online: 11 October 2021 (11:46:24 CEST)
Space-time evolution is briefly explained using the 3-dimensional quantized space model (TQSM) based on the 4-dimensional (4-D) Euclidean space. The energy (E = cΔtΔV), the charges (|q| = cΔt), and the absolute time (ct) are newly defined based on the 4-D Euclidean space. The big bang is understood through the space-time evolution of the 4-D Euclidean space, not through a sudden 4-D Minkowski space-time creation. The big bang process created the matter universe with positive energy and the partner anti-matter universe with negative energy from the CPT symmetry. Our universe is the matter universe with the negative charges of electric charge (EC), lepton charge (LC), and color charge (CC). This first universe is made of three kinds of primary black holes — dark matter, lepton, and quark — with huge negative charges, which cause Coulomb repulsive forces much bigger than the gravitational forces. The huge Coulomb forces induce the inflation of the primary black holes, which decay into super-massive black holes. The dark matter super-massive black holes, surrounded by normal matter and dark matter, make up the galaxies and galaxy clusters. The spiral arms of galaxies are closely related to the decay of the 3-D charged normal-matter black holes into the 1-D charged normal-matter black holes. The elementary leptons and quarks are created by the decay of the charged normal-matter black holes, caused by Coulomb forces much stronger than the gravitational forces. The Coulomb forces are very weak, with very small Coulomb constants (k1(EC) = kdd(EC)), for the dark matter, and very strong, with very big Coulomb constants (k2(EC) = knn(EC)), for the normal matter, because photons do not communicate between the dark matter and normal matter. The photons are charge dependent and mass independent.
But the dark matter and normal matter feel similar and very weak gravitational forces, because gravitons do communicate between the dark matter and normal matter. The gravitons are charge independent and mass dependent. Note that three kinds of charges (EC, LC, and CC) and one kind of mass (m) exist in our matter universe. The dark matter particles, leptons, and quarks have the charge configurations (EC), (EC, LC), and (EC, LC, CC), respectively. Partial masses of the elementary fermions are calculated, and the proton spin crisis is explained. The charged black holes are not singularities.