ARTICLE | doi:10.20944/preprints202103.0215.v1
Subject: Engineering, Automotive Engineering Keywords: Indoor Localization; Wi-Fi Fingerprint; Denoising Auto-encoder; JLGBMLoc
Online: 8 March 2021 (12:23:36 CET)
Wi-Fi based localization has become one of the most practical methods for mobile users in location-based services. However, due to multipath interference and the high-dimensional sparsity of fingerprint data, localization systems based on received signal strength (RSS) struggle to achieve high accuracy. In this paper, we propose a novel indoor positioning method, named JLGBMLoc (Joint denoising auto-encoder with LightGBM Localization). Firstly, because noise and outliers may influence dimensionality reduction on high-dimensional sparse fingerprint data, we propose a novel feature extraction algorithm, named joint denoising auto-encoder (JDAE), which reconstructs the sparse fingerprint data for a better feature representation and restores the fingerprint data. Then, LightGBM is introduced to Wi-Fi localization by binning the processed fingerprint data into histograms and growing decision trees leaf-wise with a depth limit. Finally, we evaluated the proposed JLGBMLoc on the UJIIndoorLoc and Tampere datasets; experimental results show that the proposed model improves positioning accuracy dramatically compared with other existing methods.
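As a rough illustration of the histogram step described above, the sketch below bins continuous RSS readings into integer bin indices, the kind of discretization LightGBM performs internally. The bin count and the toy RSS values are illustrative assumptions, not the paper's data or LightGBM's actual binning code.

```python
# Hypothetical sketch of histogram binning for RSS fingerprint features
# (equal-width bins); LightGBM's real binning is more sophisticated.
def histogram_bin(values, n_bins=4):
    """Map continuous RSS readings (dBm) to integer bin indices."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0          # guard against zero range
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

rss = [-92.0, -71.5, -88.0, -60.0, -75.0]      # toy RSS fingerprint (dBm)
print(histogram_bin(rss))                      # one bin index per reading
```

Discretizing features this way is what lets leaf-wise tree growth scan split candidates over a small number of bins rather than over all raw values.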
ARTICLE | doi:10.20944/preprints201911.0019.v1
Subject: Engineering, General Engineering Keywords: community detection; social network; convolutional neural network; auto-encoder
Online: 3 November 2019 (15:51:34 CET)
With the fast development of the mobile Internet, online social-network platforms have grown rapidly for making friends, sharing information, etc. On these platforms, users related to each other form social networks. The literature has shown that social networks have community structure. Through the study of community structure, the characteristics and functions of a network's structure and its dynamical evolution mechanism can be used to predict user behaviour and control information dissemination. Therefore, this study proposes a deep community detection method which includes (1) a matrix reconstruction method, (2) a spatial feature extraction method and (3) a community detection method. The original adjacency matrix of the social network is reconstructed based on opinion leaders and nearest neighbours to obtain a spatial proximity matrix. The spatial eigenvectors of the reconstructed adjacency matrix are extracted by an auto-encoder based on a convolutional neural network to improve modularity. In experiments, four open datasets of real social networks were selected to evaluate the proposed method, and the experimental results show that the proposed deep community detection method obtained higher modularity than other methods. The proposed method can therefore effectively detect high-quality communities in social networks.
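Modularity, the quality measure the abstract optimizes for, can be computed directly from an adjacency matrix and a community assignment. The sketch below implements Newman's standard formula on a toy graph; the graph and split are illustrative, not from the paper's datasets.

```python
# Newman modularity Q = (1/2m) * sum_ij [A_ij - k_i*k_j/2m] * delta(c_i, c_j)
def modularity(adj, communities):
    """Q for an undirected graph given as a symmetric 0/1 adjacency matrix."""
    n = len(adj)
    two_m = sum(sum(row) for row in adj)       # 2m: each edge counted twice
    deg = [sum(row) for row in adj]
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += adj[i][j] - deg[i] * deg[j] / two_m
    return q / two_m

# Two triangles joined by a single edge; the natural split scores highly.
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
print(round(modularity(adj, [0, 0, 0, 1, 1, 1]), 3))
```

Higher Q means more intra-community edges than expected under a random degree-preserving rewiring, which is the sense in which the paper's method "improves modularity".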
ARTICLE | doi:10.20944/preprints202208.0201.v1
Subject: Life Sciences, Genetics Keywords: auto-encoder; high sparse binary data; feature extraction; SNV integration
Online: 10 August 2022 (10:27:32 CEST)
Genomics, involving tens of thousands of genes, is a complex system determining phenotype. An interesting and vital issue is how to integrate highly sparse genomic data carrying a mass of minor effects into a prediction model to improve prediction power. We find that deep learning can extract features well by transforming highly sparse dichotomous data into lower-dimensional continuous data in a non-linear way. This idea may benefit risk prediction based on genome-wide data, e.g., by integrating most of the information in the genotype data. Hence, we developed a multi-stage strategy to extract information from highly sparse binary genotype data and applied it to risk prediction. Specifically, we first reduced the number of biomarkers to a moderate size via a univariable regression model. Then a trainable auto-encoder was used to extract compact representations from the reduced data. Next, we solved a LASSO problem over a grid of tuning-parameter values to select the optimal combination of extracted features. Finally, we applied this feature combination to two prognostic models and evaluated their predictive performance. The results of simulation studies and real-data applications indicated that these highly compressed features improved predictive performance without easily leading to over-fitting.
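The first stage of the strategy above, univariable screening of binary markers, can be sketched as ranking markers by a simple per-marker association score. The score (absolute case/control frequency difference), the marker names, and the toy data are hypothetical stand-ins; the paper's actual univariable regression model is not reproduced here.

```python
# Hypothetical sketch of univariable marker screening: keep the k markers
# whose 0/1 genotype frequencies differ most between cases and controls.
def screen_markers(genotypes, phenotype, k):
    """genotypes: dict marker -> list of 0/1 calls; phenotype: list of 0/1."""
    def score(calls):
        cases = [c for c, y in zip(calls, phenotype) if y == 1]
        ctrls = [c for c, y in zip(calls, phenotype) if y == 0]
        return abs(sum(cases) / len(cases) - sum(ctrls) / len(ctrls))
    return sorted(genotypes, key=lambda m: score(genotypes[m]), reverse=True)[:k]

y = [1, 1, 0, 0]                               # toy case/control labels
g = {"snv1": [1, 1, 0, 0],                     # perfectly associated
     "snv2": [0, 1, 0, 1],                     # no association
     "snv3": [1, 0, 0, 0]}                     # weakly associated
print(screen_markers(g, y, 2))
```

The surviving markers would then feed the auto-encoder stage, which compresses them into continuous features for the downstream LASSO selection.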
ARTICLE | doi:10.20944/preprints202103.0408.v1
Subject: Engineering, Automotive Engineering Keywords: Auto encoder; IoT; Image encryption; Artificial Neural Network; Machine Learning
Online: 16 March 2021 (09:32:11 CET)
Machine learning has completely transformed the health care system, which now transmits medical data through IoT sensors, so it is very important to encrypt these data to protect patients. Encrypting medical images is time-consuming from a performance perspective; hence the use of an auto-encoder is essential. In this work, an auto-encoder is used to compress the image into a vector prior to the encryption process; the digital image then passes through a decryption function and a decoder to recover the image. Various experiments were carried out on the hyperparameters to achieve the best classification outcome. The findings demonstrate that the combination of mean squared logarithmic error as the loss function, AdaGrad as the optimizer, two layers for the encoder (and the reverse for the decoder), and ReLU as the activation function generates the best auto-encoder results. The combination of mean squared error (loss function), RMSProp (optimizer), three layers for the encoder (and the reverse for the decoder), and ReLU (activation function) gives the best classification result. All experiments with different hyperparameter settings ran very close to each other, even when the number of layers was changed. The running time is between 9 and 16 seconds per epoch.
ARTICLE | doi:10.20944/preprints202007.0209.v1
Subject: Engineering, Other Keywords: Deep learning; Head Related Transfer Function (HRTF); Restoration; Ambisonics; Spatial Audio; Spherical harmonic; Audio signal processing; Denoising; Auto-Encoder; Neural Network
Online: 10 July 2020 (08:58:11 CEST)
Spherical harmonic (SH) interpolation is a commonly used method to spatially up-sample sparse Head Related Transfer Function (HRTF) datasets to denser HRTF datasets. However, depending on the number of sparse HRTF measurements and SH order, this process can introduce distortions in high frequency representation of the HRTFs. This paper investigates whether it is possible to restore some of the distorted high frequency HRTF components using machine learning algorithms. A combination of Convolutional Auto-Encoder (CAE) and Denoising Auto-Encoder (DAE) models is proposed to restore the high frequency distortion in SH interpolated HRTFs. Results are evaluated using both Perceptual Spectral Difference (PSD) and localisation prediction models, both of which demonstrate significant improvement after the restoration process.
ARTICLE | doi:10.20944/preprints202203.0206.v1
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Ratemaking; Machine Learning; Explainability; Auto Insurance
Online: 15 March 2022 (10:59:32 CET)
This paper explores the tuning and results of two-part models on rich datasets provided through the Casualty Actuarial Society (CAS). These datasets include BI (bodily injury), PD (property damage) and COLL (collision) coverage, each documenting policy characteristics and claims across a four-year period. The datasets are explored, including summaries of all variables, then the methods for modeling are set forth. Models are tuned and the tuning results are displayed, after which we train the final models and seek to explain select predictions. All of the code will be made available on GitHub. The data were provided by a private insurance carrier to the CAS after anonymization, and are available to actuarial researchers for well-defined research projects that have universal benefit to the insurance industry and the public. Our hope is that the methods demonstrated here can be a good foundation for future ratemaking models to be developed and tested more efficiently.
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Amharic script; Attention mechanism; OCR; Encoder-decoder; Text-image
Online: 15 October 2020 (13:42:28 CEST)
At present, the growth of digitization and worldwide communication makes OCR systems for exotic languages a very important task. In this paper, we develop an OCR system for one such language with a unique script, Amharic. Motivated by the recent success of the attention mechanism in Neural Machine Translation (NMT), we extend the attention mechanism to Amharic text-image recognition. The proposed model consists of CNNs and attention-embedded recurrent encoder-decoder networks, integrated following the configuration of the seq2seq framework. The attention-network parameters are trained end-to-end, and the context vector is injected, together with the previously predicted output, at each time step of decoding. Unlike existing OCR models that minimize the CTC objective function, the new model minimizes the categorical cross-entropy loss. The proposed attention-based model is evaluated on the test set of the ADOCR database, which consists of both printed and synthetically generated Amharic text-line images, and achieves promising results with CERs of 1.54% and 1.17%, respectively.
ARTICLE | doi:10.20944/preprints201804.0258.v2
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: audio classification; multi-resolution analysis; LSTM; auto-ml
Online: 19 July 2018 (05:53:20 CEST)
We describe a multi-resolution approach for audio classification and illustrate its application to the open data set for environmental sound classification. The proposed approach utilizes a multi-resolution based ensemble consisting of targeted feature extraction of approximation (coarse scale) and detail (fine scale) portions of the signal under the action of multiple transforms. This is paired with an automatic machine learning engine for algorithm and parameter selection and the LSTM algorithm, capable of mapping several sequences of features to a predicted class membership probability distribution. Initial results show an improvement in multi-class classification accuracy.
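The coarse/fine (approximation/detail) split described above can be illustrated with a single level of a Haar-style analysis, one of the simplest multi-resolution transforms. This is a generic stand-in, not the specific transforms used in the paper's ensemble, and the signal values are toy data.

```python
# One level of a Haar-style analysis: pairwise averages give the coarse-scale
# approximation, pairwise half-differences give the fine-scale detail.
def haar_step(signal):
    """signal: even-length list of floats -> (approximation, detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

a, d = haar_step([4.0, 2.0, 5.0, 9.0])
print(a, d)                                    # coarse trend vs. local change
```

Applying such a step recursively to the approximation yields the multi-scale feature bands that targeted extractors (and then the LSTM) can consume separately.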
ARTICLE | doi:10.20944/preprints202104.0630.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Paraphrase Identification; Paraphrase Generation; Natural Language Generation; Language Model; Encoder Decoder; Transformer
Online: 23 April 2021 (10:35:20 CEST)
Paraphrase Generation is one of the most important and challenging tasks in the field of Natural Language Generation. Paraphrasing techniques help to identify or generate phrases/sentences conveying a similar meaning. The paraphrasing task can be bifurcated into two sub-tasks, namely Paraphrase Identification (PI) and Paraphrase Generation (PG). Most existing state-of-the-art systems can solve only one of these problems at a time. This paper proposes a light-weight unified model that can simultaneously classify whether a given pair of sentences are paraphrases of each other and generate multiple paraphrases for an input sentence. The Paraphrase Generation module aims to generate fluent and semantically similar paraphrases, while the Paraphrase Identification system aims to classify whether a sentence pair are paraphrases of each other or not. The proposed approach combines careful data sampling (data variety) with a granularly fine-tuned Text-To-Text Transfer Transformer (T5) model. The highlight of this study is that the same light-weight model, trained with the objective of Paraphrase Generation, can also be used to solve the Paraphrase Identification task. Hence, the proposed system is light-weight in terms of both the model's size and the data used to train it, which facilitates quick learning without compromising results.
The proposed system is evaluated using popular evaluation metrics such as BLEU (BiLingual Evaluation Understudy), ROUGE (Recall-Oriented Understudy for Gisting Evaluation), METEOR, WER (Word Error Rate), and GLEU (Google-BLEU) for Paraphrase Generation, and classification metrics such as accuracy, precision, recall and F1-score for Paraphrase Identification. The proposed model achieves state-of-the-art results on both the Paraphrase Identification and Paraphrase Generation tasks.
ARTICLE | doi:10.20944/preprints202007.0474.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Convolutional Neural Network; Encoder-Decoder Architecture; Semantic Segmentation; Feature Silencing; Crack Detection
Online: 21 July 2020 (13:54:13 CEST)
An autonomous concrete crack inspection system is necessary for preventing hazardous incidents arising from deteriorated concrete surfaces. In this paper, we present a concrete crack detection framework to aid the process of automated inspection. The proposed approach employs a deep convolutional neural network architecture for crack segmentation from concrete images. The proposed network alleviates the gradient-vanishing problem present in deep neural network architectures. A feature silencing module is incorporated into the crack detection framework to eliminate unnecessary feature maps from the network, significantly improving overall performance. Experimental results support the benefit of incorporating feature silencing within a convolutional neural network architecture for improving the network's robustness, sensitivity, and specificity. An added benefit of the proposed architecture is its ability to accommodate the trade-off between sensitivity (positive-class detection accuracy) and specificity (negative-class detection accuracy) with respect to the target application. Furthermore, the proposed framework achieves higher precision and faster processing times than crack detection architectures in the literature.
ARTICLE | doi:10.20944/preprints201805.0414.v1
Subject: Engineering, Mechanical Engineering Keywords: auto-alignment; intelligent stereo camera; stereo film; three-dimensional
Online: 28 May 2018 (15:57:25 CEST)
This study presents the implementation of an instant preview and analysis system for intelligent stereo cameras (ISCs). A parameter-optimization prototype that adopts the instant preview and analysis system achieves automatic alignment and produces optimal stereo films by adjusting the gap and angle between the dual cameras. The instant preview and analysis system with parameter optimization can effectively enhance the quality of stereo films, reduce filming errors, and save retouching cost and time in harsh filming environments.
ARTICLE | doi:10.20944/preprints202006.0343.v1
Subject: Engineering, Control & Systems Engineering Keywords: Microphone; Nonlinear auto regressive moving average-L2; Model predictive control
Online: 28 June 2020 (19:38:35 CEST)
In this paper, a capacitor microphone system is presented to improve the conversion of mechanical energy to electrical energy, using nonlinear auto-regressive moving average-L2 (NARMA-L2) and model predictive control (MPC) controllers to analyse the open-loop and closed-loop systems. The open-loop system response shows that the output voltage signal needs to be improved. The closed-loop system with the proposed controllers has been analysed, and promising results were obtained using MATLAB/Simulink.
ARTICLE | doi:10.20944/preprints202201.0392.v1
Subject: Medicine & Pharmacology, Ophthalmology Keywords: Pterygium; Pterygium surgery; Amniotic membrane; Conjunctival auto-graft; Polish Caucasian population
Online: 26 January 2022 (11:54:17 CET)
This study compares the efficacy of the two most commonly used surgical methods for pterygium in the Polish population, conjunctival autograft versus amniotic membrane transplantation, and evaluates the postoperative recurrence rate. We retrospectively analysed the medical records of 65 patients who underwent surgery for primary or recurrent pterygium at an ophthalmology clinic in Bialystok, Poland between 2016 and 2020. Surgical success (no regrowth) occurred in almost half of the amniotic membrane patients (44%) and in most of the conjunctival autograft patients (79%); this relationship was significant. The odds of successful surgery were 79% lower for subjects with amniotic membranes than for those with conjunctival autografts (OR = 0.21, 95% CI 0.05-0.94; p = 0.045). Our study confirms that in the Polish Caucasian population the success rate favours the conjunctival autograft procedure over amniotic membrane transplantation.
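An odds ratio of the kind reported above is computed from a 2x2 success/failure table. The sketch below shows the standard calculation with a Wald confidence interval on the log scale; the cell counts are a hypothetical reconstruction consistent with the quoted percentages, not the study's actual table.

```python
import math

# Odds ratio and 95% CI for a 2x2 table: a,b = success/failure in group 1
# (amniotic membrane), c,d = success/failure in group 2 (autograft).
def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 12/27 amniotic successes (~44%), 30/38 autograft (~79%).
or_, lo, hi = odds_ratio_ci(12, 15, 30, 8)
print(round(or_, 2))
```

An OR below 1 here means lower odds of success with amniotic membrane, matching the abstract's "79% lower odds" reading of OR = 0.21.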
ARTICLE | doi:10.20944/preprints202005.0288.v1
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: COVID-19; Accelerated Failure Time; Proportional Hazard Model; Bayesian; Auto-Regression
Online: 17 May 2020 (08:50:22 CEST)
The constant news about the coronavirus is frightening, and cancer treatment cannot be separated from COVID-19. An effective treatment-comparison strategy is needed, along with a handy tool for understanding cancer progression in this unprecedented scenario. Linking different events of cancer progression is the need of the hour, and it poses a methodological challenge. We provide solutions for handling the interval between two consecutive events in motivating head and neck cancer (HNC) data.
ARTICLE | doi:10.20944/preprints201808.0251.v1
Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: potts network; attractor neural networks; auto-associative memory; cortex; semantic memory
Online: 14 August 2018 (12:33:52 CEST)
A statistical analysis of semantic memory should reflect the complex, multifactorial structure of the relations among its items. Still, a dominant paradigm in the study of semantic memory has been the idea that the mental representation of concepts is structured along a simple branching tree spanned by superordinate and subordinate categories. We propose a generative model of item representation with correlations that overcomes the limitations of a tree structure. The items are generated through "factors" that represent semantic features or real-world attributes. The correlation between items has its source in the extent to which items share such factors and the strength of such factors: if many factors are balanced, correlations are overall low; whereas if a few factors dominate, they become strong. Our model allows for correlations that are neither trivial nor hierarchical, but may reproduce the general spectrum of correlations present in a data-set of nouns. We provide an estimate of the number of concepts that can be stored and retrieved by a large-scale cortical network, the Potts network, which is perhaps approximately 10^7 with human cortical parameters. When this storage capacity is exceeded, however, retrieval fails completely only for balanced factors; above a critical degree of imbalance, a phase transition leads to a regime where the network still extracts considerable information about the cued item, even if not recovering its detailed representation: partial categorization seems to emerge spontaneously as a consequence of the dominance of particular factors, rather than being imposed ad hoc. We argue this to be a relevant model of semantic memory resilience in Tulving’s remember/know paradigms.
ARTICLE | doi:10.20944/preprints201806.0247.v1
Subject: Mathematics & Computer Science, Other Keywords: data mining; association rule learning; policyholder lapse; auto insurance; market inefficiency
Online: 15 June 2018 (09:01:03 CEST)
For automobile insurance, it has long been implied that when a policyholder made at least one claim in the prior year, the subsequent premium is likely to increase. When this happens, the policyholder may seek to switch to another insurance company to possibly avoid paying a higher premium. In such situations, insurers may face the challenge of policyholder retention by keeping premiums low in the face of competition. In this paper, we seek empirical evidence of a possible association between policyholder switching after a claim and the associated change in premium. To accomplish this goal, we employ association rule learning, a data mining technique that has its origins in marketing for analyzing and understanding consumer purchase behavior. We apply this technique in two stages. In the first stage, we identify policyholder and vehicle characteristics that affect the size of the claim and the resulting change in premium regardless of policy switch. In the second stage, together with policyholder and vehicle characteristics, we identify the association among the size of the claim, the level of premium increase and policy switch. This empirical process is often challenging to insurers because they are unable to observe the new premium for those policyholders who switched. However, we used nine years of claims data for the entire Singapore automobile insurance market, which allowed us to track information before and after the switch. Our results provide evidence of a strong association among the size of the claim, the level of premium increase and policy switch. We attribute this to possible inefficiency of the insurance market arising from the lack of sharing and exchange of claims history among the companies.
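The support/confidence counting at the heart of association rule learning can be sketched in a few lines. The transactions below are toy stand-ins for per-policy attribute sets (claim size, premium change, switch), not the Singapore data, and only single-antecedent rules in one direction are mined for brevity.

```python
from itertools import combinations

# Minimal association-rule sketch: frequent 1- and 2-itemsets by support,
# then rules a -> b filtered by confidence (antecedent = first item only).
def rules(transactions, min_support=0.4, min_conf=0.7):
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    support = {}
    for size in (1, 2):
        for combo in combinations(items, size):
            cnt = sum(1 for t in transactions if set(combo) <= t)
            if cnt / n >= min_support:
                support[combo] = cnt / n
    out = []
    for (a, b) in (c for c in support if len(c) == 2):
        conf = support[(a, b)] / support[(a,)]
        if conf >= min_conf:
            out.append((a, b, round(conf, 2)))
    return out

tx = [{"large_claim", "premium_up", "switch"},
      {"large_claim", "premium_up", "switch"},
      {"small_claim", "premium_same"},
      {"large_claim", "premium_up"},
      {"small_claim", "premium_same", "switch"}]
print(rules(tx))
```

A full Apriori implementation prunes candidate itemsets level by level; this sketch enumerates pairs directly, which is adequate at toy scale.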
ARTICLE | doi:10.20944/preprints202002.0273.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: linguistic knowledge; neural machine translation model; machine translation tasks; knowledge-based encoder; model representation ability
Online: 19 February 2020 (10:51:41 CET)
Incorporating source-side linguistic knowledge into the neural machine translation (NMT) model has recently achieved impressive performance on machine translation tasks. One popular method is to generalize the word embedding layer of the encoder to encode each word and its linguistic features. Another is to change the architecture of the encoder to encode syntactic information. However, the former cannot explicitly balance the contributions of a word and its linguistic features, while the latter cannot flexibly utilize various types of linguistic information. Focusing on these issues, this paper proposes a novel NMT approach that models the words in parallel to the linguistic knowledge by using two separate encoders. Compared with the single-encoder NMT model, the proposed approach additionally employs a knowledge-based encoder to specially encode linguistic features. Moreover, it shares parameters across encoders to enhance the model's representation ability for the source-side language. Extensive experiments show that the approach achieves significant improvements of up to 2.4 and 1.1 BLEU points on Turkish→English and English→Turkish machine translation tasks, respectively, which indicates that it is capable of better utilizing external linguistic knowledge and effectively improving machine translation quality.
ARTICLE | doi:10.20944/preprints202201.0331.v1
Subject: Earth Sciences, Geoinformatics Keywords: Concentration field; Spatial auto-correlation; Association rules; Apriori algorithm; Element co-occurrence
Online: 21 January 2022 (13:42:44 CET)
The spatial distribution of elements can be regarded as a numerical field of concentration values with continuous spatial coverage. An active area of research is discovering geologically meaningful relationships among elements from their spatial distribution. To solve this problem, we propose an association rule mining method based on clustered events of spatial auto-correlation and apply it to the polymetallic deposits of the Chahanwusu River area, Qinghai Province, China. The elemental data for stream sediments were first clustered into HH (high-high), LL (low-low), HL (high-low), and LH (low-high) groups using the local Moran's I clustering map (LMIC). Then the Apriori algorithm was used to mine association rules among different elements in these clusters. More than 86% of the mined rule points are located within 1000 m of faults and near known ore occurrences, and occur in the upper reaches of the stream and catchment areas. In addition, we found that the Indosinian granodiorite is enriched in sulfophile elements, e.g., Zn, Ag and Cd, and the Variscan granite quartz diorite (P1γδο) coexists with Cu and associated elements. Therefore, the proposed algorithm is an effective method for mining co-existence patterns of elements and provides insight into their enrichment mechanisms.
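Local Moran's I, the statistic behind the HH/LL/HL/LH clustering described above, can be computed directly from concentration values and a spatial weight matrix. The 1-D chain neighbourhood and the values below are illustrative assumptions, not the stream-sediment data.

```python
# Local Moran's I: I_i = (z_i / m2) * sum_j w_ij * z_j, where z is the
# mean-centred value and m2 is the (biased) sample variance.
def local_morans_i(x, w):
    """x: concentration values; w: row-standardized spatial weight matrix."""
    n = len(x)
    mean = sum(x) / n
    z = [v - mean for v in x]
    m2 = sum(v * v for v in z) / n
    return [z[i] / m2 * sum(w[i][j] * z[j] for j in range(n)) for i in range(n)]

x = [9.0, 8.0, 1.0, 2.0]                       # two highs next to two lows
w = [[0, 1, 0, 0],                             # chain neighbours, rows sum to 1
     [0.5, 0, 0.5, 0],
     [0, 0.5, 0, 0.5],
     [0, 0, 1, 0]]
vals = local_morans_i(x, w)
print([round(v, 2) for v in vals])
```

Positive I_i with a high value marks an HH point and with a low value an LL point; negative I_i marks the HL/LH outlier quadrants used to build the LMIC.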
ARTICLE | doi:10.20944/preprints201804.0057.v1
Subject: Arts & Humanities, Archaeology Keywords: auto-extraction; remote sensing archaeology; tuntian; LATTICs; GF-1; Silk Road; Miran
Online: 4 April 2018 (11:56:47 CEST)
This paper describes the use of Chinese Gaofen-1 (GF-1) satellite imagery to automatically extract tertiary Linear Archaeological Traces of Tuntian Irrigation Canals (LATTICs) located in the Miran site. The site is adjacent to the ancient Loulan Kingdom at the eastern margin of the Taklimakan Desert in western China. GF-1 data were processed following atmospheric and geometric correction, and spectral analyses were carried out on the multispectral data. The low values produced by SSI indicate that it is difficult to distinguish buried tertiary LATTICs from similar backgrounds using spectral signatures. Thus, based on the textural characteristics of high-resolution GF-1 panchromatic data, this paper proposes an automatic approach that combines joint morphological bottom- and top-hat transformation with a Canny edge operator. The operator was improved by adding stages of geometric filtering and gradient vector direction analysis. Finally, the detected edges of tertiary LATTICs were extracted using the GIS-based draw tool and converted into shapefiles for archaeological mapping within a GIS environment. The proposed automatic approach was verified with an average accuracy of 95.76% for 754 tertiary LATTICs across the entire Miran site and compared with previous manual interpretation results. The results indicate that GF-1 VHR PAN imagery can successfully uncover the ancient tuntian agricultural landscape. Moreover, the proposed method can be generalized and applied to extract linear archaeological traces such as soil and crop marks in other geographic locations.
ARTICLE | doi:10.20944/preprints201708.0019.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Bioimpedance Spectroscopy; Field Programmable Gate Array; Digital Auto Balance Bridge; Multichannel data acquisition;
Online: 4 August 2017 (16:05:29 CEST)
This paper presents the design and implementation of a multichannel bio-impedance spectroscopy system on field programmable gate arrays (FPGA). The proposed system is capable of acquiring signals from multiple bio-impedance sensors, processing the data on the FPGA, and storing the final data in on-board memory. The system employs the Digital Automatic Balance Bridge (DABB) method to acquire data from biosensors. The DABB measures initial data for a known impedance to extrapolate the impedance value of the device under test. This method offers a simpler design because the balancing of the circuit is done digitally in the FPGA rather than using an external circuit. Calculations of the impedance values for the device under test were done in the processor, and the final data are sent to an on-board flash memory for later access. The control unit handles the interfacing and scheduling between these modules (processor, flash memory) as well as interfacing to multiple balance bridges and multiple biosensors. The system has been simulated successfully and has comparable performance to other FPGA-based solutions. The system has a robust design capable of handling and interfacing input from multiple biosensors, and data processing and storage are performed with minimal resources on the FPGA.
ARTICLE | doi:10.20944/preprints201608.0168.v1
Subject: Social Sciences, Economics Keywords: natural capital; human capital; economic growth; small economies; Vector Auto regression; natural resource curse
Online: 18 August 2016 (05:13:21 CEST)
The question of the relevance of human and natural capital, as well as the potential adverse effect of natural capital on economic growth, has gained increased attention in development economics. The aim of this paper is to theoretically and empirically assess the relevance of several forms of capital on economic growth in small economies that are dependent upon tourism or natural resources. The empirical framework is based on Impulse Response Functions obtained from Vector Autoregressive models in which we focus on the model where economic growth is the dependent variable for ten small economies that are dependent upon either tourism or natural resources. We find that there is evidence of the “natural resource curse”, especially in the economies that have a strong dependence on resources that are easily substitutable and whose prices constantly fluctuate. We further find that in the majority of observed cases the type of capital these small economies are most dependent on for their economic growth causes negative impulses in the majority of the observed periods. The main policy recommendation should be to assure that even these small economies should strive towards further diversification and avoid dependence on only one segment of their economy.
ARTICLE | doi:10.20944/preprints201912.0207.v1
Subject: Engineering, Mechanical Engineering Keywords: press bending; orbital auto welding; steel-tube correction; STKN540B; high-strength steel tube; manufacturing process
Online: 16 December 2019 (06:20:58 CET)
The purpose of this study is to propose a consecutive manufacturing process system to secure the productivity of STKN540B steel tube, in terms of economy and safety, as a supporting material for mega-structures such as buildings, bridges and ships. The components of the consecutive manufacturing process are press bending, orbital auto welding and steel-tube correction. Using STKN540B, a high-strength steel with a low yield point that requires a special manufacturing process unlike other steels, actual tube manufacturing is carried out at each stage in this experimental study. With this, the quality of the steel tube and the efficiency of the manufacturing process are analysed to identify points for future improvement.
ARTICLE | doi:10.20944/preprints202201.0209.v1
Subject: Social Sciences, Economics Keywords: Economic Growth; Gross Fixed Capital Formation; Government Expenditure; Government Deficit; Vector Auto-Regression and South Africa
Online: 14 January 2022 (11:36:07 CET)
The study uses annual time-series data from the South African Reserve Bank (SARB) from 1980 to 2020 to examine the effectiveness of fiscal policy on economic growth in South Africa. The Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) unit root tests, as well as the Johansen cointegration test, Granger causality test, and Vector Auto-Regression (VAR) method, were used. Real GDP per capita (RGDP) is used as a proxy for economic growth, and gross fixed capital formation (GFCF), government expenditure (GEXP) and government deficit (GOVD) as proxies for fiscal policy. The ADF test results show that all variables are stationary at the first difference, except GFCF and GEXP, which are stationary at I(0), whereas the PP test results show that all variables are stationary at I(1), except GEXP, which is stationary at I(0). At the maximum eigenvalue, the four variables are not cointegrated. The Granger causality test demonstrated unidirectional causation from GOVD to RGDP, as well as bidirectional causality between RGDP and both GFCF and GEXP. The error correction model estimated using VAR shows that GFCF and GEXP have a positive effect on RGDP, whereas GOVD has a negative effect on RGDP in the short run. The findings also show that the VAR residuals are homoscedastic, normally distributed and free of serial correlation.
ARTICLE | doi:10.20944/preprints202109.0112.v1
Subject: Engineering, Marine Engineering Keywords: 3D Point Cloud Classification; 3D Point Cloud Shape Completion; Auto-Encoders; Contrastive Learning; Self-Supervised Learning
Online: 6 September 2021 (18:00:28 CEST)
In this paper, we present the idea of self-supervised learning for the shape completion and classification of point clouds. Most 3D shape completion pipelines utilize auto-encoders to extract features from point clouds for downstream tasks such as classification, segmentation, detection, and related applications. Our idea is to add contrastive learning to auto-encoders so that they learn both global and local feature representations of point clouds, using a combination of triplet loss and Chamfer distance. To evaluate the performance of the embeddings for classification, we utilize the PointNet classifier. We also extend the number of evaluation classes from 4 to 10 to show the generalization ability of the learned features. Based on our results, embeddings generated by the contrastive auto-encoder improve shape completion and classification performance from 84.2% to 84.9%, achieving state-of-the-art results with 10 classes.
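The two loss terms the abstract combines can be illustrated compactly. The sketch below is a minimal pure-Python version (function names and parameter values are ours, not the paper's): Chamfer distance is the local reconstruction term between point clouds, and triplet loss is the global embedding term.

```python
import math

def chamfer_distance(cloud_a, cloud_b):
    """Symmetric Chamfer distance between two point clouds.

    For each point in one cloud, take the squared distance to its nearest
    neighbour in the other cloud; average over points and sum both directions.
    """
    def one_way(src, dst):
        return sum(min(sum((p - q) ** 2 for p, q in zip(a, b)) for b in dst)
                   for a in src) / len(src)
    return one_way(cloud_a, cloud_b) + one_way(cloud_b, cloud_a)

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Global term: pull the anchor embedding towards the positive,
    push it away from the negative, up to a fixed margin."""
    d_pos = math.dist(anchor, positive)
    d_neg = math.dist(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)

# Identical clouds reconstruct perfectly: zero Chamfer distance.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
assert chamfer_distance(square, square) == 0.0

# A well-separated triplet incurs no loss once the margin is satisfied.
assert triplet_loss((0.0, 0.0), (0.1, 0.0), (5.0, 0.0)) == 0.0
```

In a real pipeline both terms operate on batched tensors with nearest-neighbour search on the GPU; here the quadratic scan is only meant to make the definitions concrete.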
ARTICLE | doi:10.20944/preprints202111.0430.v1
Subject: Chemistry, Analytical Chemistry Keywords: Ab initio DFT/B3LYP/6-311G/Auto calculation; hydrazine; Hirshfeld surface; hydrogen interactions; single-crystal X-ray study
Online: 23 November 2021 (15:03:30 CET)
The crystals C11H4BrF5N2S, (I), 1-((4-bromothiophen-2-yl)methylene)-2-(perfluorophenyl)hydrazine, and C12H6BrF5N2S, (II), 1-((4-bromo-5-methylthiophen-2-yl)methylene)-2-(perfluorophenyl)hydrazine, are molecules with two rings and a hydrazone moiety at the centre of the molecule. The compounds have been synthesized and characterized by elemental and spectroscopic (1H-NMR) analyses. The crystal structures of the solid phase were determined by the single-crystal X-ray diffraction method. They crystallize in the monoclinic space group with Z = 4 and Z = 2 molecules per unit cell. Compound (I) crystallizes as a racemate in a centrosymmetric space group, and compound (II) crystallizes as a non-racemate in a non-centrosymmetric space group. The “absolute configuration and conformation for bond values” were derived from the anomalous dispersion (ad) for (II). The crystal structures reveal diverse non-covalent interactions, such as intra- and inter-molecular hydrogen bonding, π-ring···π-ring, and C-H···π-ring contacts, which were investigated. The expected stereochemistry of the hydrazone atoms C7, N2, and N1 was confirmed for (I) and (II). The whole molecule of (I) and (II) adopts “a boat conformation” like a 6-membered ring. The results of the single-crystal studies are supported by a Hirshfeld surface study and Gaussian software calculations.
REVIEW | doi:10.20944/preprints202010.0179.v1
Subject: Life Sciences, Biochemistry Keywords: Non Michaelis-Menten Kinetics; High-throughput screening; allostery; cooperativity; processive kinetics; distributive kinetics; single-molecule; auto-catalytic; drug discovery
Online: 8 October 2020 (13:34:16 CEST)
Biological systems are highly regulated. They are also highly resistant to sudden perturbations, enabling them to maintain the dynamic equilibrium essential for the sustenance of life. This robustness is conferred by regulatory mechanisms that influence the activity of enzymes/proteins within their cellular context, allowing adaptation to changing environmental conditions. However, the initial rules governing the study of enzyme kinetics were tested and implemented mostly for cytosolic enzyme systems that were easy to isolate and/or recombinantly express; moreover, these enzymes lacked complex regulatory modalities. Now, with academic labs and pharmaceutical companies turning their attention to more complex systems (for instance, multi-protein complexes, oligomeric assemblies, membrane proteins, and post-translationally modified proteins), the initial axioms defined by Michaelis-Menten (MM) kinetics are rendered inadequate, and a new kind of kinetic analysis is required to study these systems. The current review presents an overview of enzyme kinetic mechanisms that are atypical and, oftentimes, do not conform to classical MM kinetics. Further, it presents initial ideas on the design and analysis of early drug-discovery experiments for such systems, to enable effective screening and characterisation of small-molecule inhibitors with desirable physiological outcomes.
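The simplest textbook departure from MM kinetics that the review covers, cooperativity, can be made concrete with the Hill equation; a minimal sketch (parameter values are illustrative, not drawn from the review):

```python
def hill_rate(s, vmax, k_half, n):
    """Reaction rate under the Hill equation: v = Vmax * S^n / (K^n + S^n).
    The Hill coefficient n = 1 recovers classical Michaelis-Menten kinetics;
    n > 1 models positive cooperativity (a sigmoidal rate curve)."""
    return vmax * s ** n / (k_half ** n + s ** n)

vmax, k_half = 10.0, 2.0

# With n = 1 this is classical MM: the rate is half-maximal at S = K.
assert hill_rate(2.0, vmax, k_half, 1) == vmax / 2

# With n = 4 (strong positive cooperativity) the rate below K is
# suppressed relative to MM, producing the sigmoidal response...
assert hill_rate(1.0, vmax, k_half, 4) < hill_rate(1.0, vmax, k_half, 1)
# ...while the half-maximal point remains at S = K, with a steeper curve.
assert hill_rate(2.0, vmax, k_half, 4) == vmax / 2
```

Fitting n from rate-versus-substrate data is one common first diagnostic for whether a screening target obeys MM assumptions at all.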
ARTICLE | doi:10.20944/preprints202009.0647.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Lung condition; COVID-19; Machine learning; Custom Vision; Core ML; Auto ML; AI; Pneumonia; Smartphone application; Real-time diagnosis
Online: 26 September 2020 (16:14:39 CEST)
AI is transforming all aspects of life, and medical services are no exception, especially in the field of medical image processing and diagnosis. Big IT and biotechnology companies are investing millions of dollars in medical and AI research. The recent outbreak of SARS-CoV-2 gave us a unique opportunity to study a non-interventional and sustainable AI solution. Lung disease remains a major healthcare challenge with high morbidity and mortality worldwide. Until recently, the predominant lung disease was lung cancer; then the world witnessed the global pandemic of COVID-19, the novel coronavirus outbreak, and experienced how viral infection of the lungs and heart claimed thousands of lives worldwide. With the unprecedented advancement of artificial intelligence in recent years, machine learning can be used to detect and classify medical imagery easily. It is much faster and, most of the time, more accurate than human radiologists, and once implemented it is more cost-effective and time-saving. In our study, we evaluated the efficacy of Microsoft Cognitive Services in distinguishing COVID-19-induced pneumonia from other viral/bacterial pneumonias based on X-ray and CT images. We wanted to assess the implications and accuracy of an automated ML-based Rapid Application Development (RAD) environment in the field of medical image diagnosis. This study will better equip us to respond with an ML-based diagnostic Decision Support System (DSS) for a pandemic situation like COVID-19. After optimization, the trained network achieved 96.8% average precision and was deployed as a web application. However, the same trained network did not perform as well when ported to a smartphone for real-time inference, which was our main interest of study; the authors believe there is scope for further work on this issue. One of the main goals of this study was to develop and evaluate the performance of AI-powered smartphone-based real-time applications, facilitating primary diagnostic services in less-equipped and understaffed rural healthcare centers of the world with unreliable internet service.
ARTICLE | doi:10.20944/preprints202208.0233.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Smart Families; Smart Homes; Sustainable Societies; Smart Cities; Deep Learning; Natural Language Processing (NLP); Social Sustainability; Environmental Sustainability; Economic Sustainability; Bidirectional Encoder Representations from Transformers (BERT); Triple Bottom Line (TBL); Internet of Things (IoT)
Online: 12 August 2022 (10:22:17 CEST)
Technological advancements and innovations have profoundly changed people's lives, giving rise to smart environments, cities, and societies. As homes are the building blocks of cities and societies, smart homes are critical to establishing smart living and are expected to play a key role in enabling smart cities and societies. The current academic literature and commercial advancements on smart homes have mainly focused on developing and providing smart functions for homes, such as security and ambiance management, to facilitate residents in their various activities. Homes, however, are much more than physical structures, buildings, appliances, operational machines, and systems. Homes are composed of families and are inherently complex phenomena underlined by humans and their relationships with each other, subject to individual, intragroup, intergroup, and intercommunity goals. There is a clear need to understand, define, and consolidate existing research and to actualize the overarching roles of smart homes in serving the needs of future smart cities and societies. This paper introduces our data-driven parameter discovery methodology and uses it to provide, for the first time, an extensive and fairly comprehensive analysis of the families and homes landscape as seen through the eyes of academics and the public, using over a hundred thousand research papers and nearly a million tweets. We develop a methodology using deep learning, natural language processing (NLP), and big data analytics, and apply it to automatically discover parameters that capture a comprehensive knowledge and design space of smart families and homes comprising social, political, economic, environmental, and other dimensions. The 66 discovered parameters and the knowledge space comprising hundreds of dimensions are explained by reviewing and referencing over 300 articles from the academic literature and tweets. The knowledge and parameters discovered in this paper can be used to develop a holistic understanding of matters related to families and homes, facilitating the development of better, community-specific policies, technologies, solutions, and industries for families and homes, leading to stronger families and homes and, in turn, empowering sustainable societies across the globe.
HYPOTHESIS | doi:10.20944/preprints202109.0372.v1
Subject: Life Sciences, Other Keywords: Endometriosis; microchimerism; maternal microchimerism; reproduction; gynaecology; etiology; auto-immune; immune response; hormonal; vascular; genetic; hereditary; male; fetal; fetus; stem cells; pregnancy; Müllerianosis; embryology; ROS; apoptosis; disease; endometrium; basalis; menstruation; post-menopausal; neurogenesis
Online: 22 September 2021 (10:27:52 CEST)
Endometriosis is an oestrogen-dependent reproductive disease with genetic, vascular, neural, inflammatory, and auto-immune characteristics. There are many theories about the etiology of endometriosis; however, all of these theories have limitations and do not explain all the locations in which endometriosis is found or all types of patients with endometriosis. The objective of this paper is to postulate the hypothesis that endometriosis is caused by maternal microchimerism, the presence of maternal cells in the fetus. A literature review was conducted, analysing the characteristics and current etiological theories of endometriosis, the limitations of those theories, and the relationship between maternal microchimerism and endometriosis. At the time of writing, there was no literature on maternal microchimerism and endometriosis. These results suggest that maternal microchimerism could be a cause of endometriosis. This could account for the genetic and auto-immune characteristics seen in people with endometriosis, inducing a micro-environment for vascular, neural, and epigenetic changes. It could also account for endometriosis seen in non-menstruating patients, such as men, fetuses, and post-menopausal women, and for endometriosis found in non-peritoneal locations. If the maternal microchimerism hypothesis is correct, endometriosis could be considered a pregnancy-related disease that could affect all humans, changing the accepted demographics of patients and potentially leading to new diagnostic techniques and treatment options for patients with endometriosis. Further studies are needed to test this hypothesis.