ARTICLE | doi:10.20944/preprints202308.0140.v1
Subject: Engineering, Architecture, Building And Construction Keywords: sheep face recognition; large benchmark; deep learning; convolutional neural networks; dataset
Online: 2 August 2023 (11:26:54 CEST)
The mutton sheep breeding industry has transformed significantly in recent years, from traditional grassland free-range farming to a more intelligent approach. As a result, automated sheep face recognition systems have become vital to modern breeding practices and have gradually replaced ear tagging and other manual tracking techniques. Although sheep face datasets have been introduced in previous studies, they have often involved pose or background restrictions (e.g., fixing of the subject’s head, cleaning of the face), which restrict data collection and limit the size of available sample sets. As a result, a comprehensive benchmark designed exclusively for the evaluation of individual sheep recognition algorithms is lacking. To address this issue, this study develops a large-scale benchmark dataset, Sheepface-107, comprising 5,350 images acquired from 107 different subjects. Images were collected from each sheep at multiple angles, including front and back views, in a diverse collection that provides a more comprehensive representation of facial features. In addition to the dataset, an assessment protocol is developed by applying multiple evaluation metrics to the results produced by three different deep learning models: VGG16, GoogLeNet, and ResNet50, which achieved F1-scores of 83.79%, 89.11%, and 93.44%, respectively. A statistical analysis of each algorithm suggested that accuracy and the number of parameters were the most informative metrics for use in evaluating recognition performance.
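As a rough illustration of the evaluation protocol described above, the following sketch fine-tunes a pretrained ResNet50 for closed-set sheep identification and scores it with a macro F1. The dataset paths, single-epoch loop, and preprocessing are assumptions for illustration, not the authors' pipeline.

# Minimal sketch (not the authors' code): fine-tune a pretrained CNN for
# closed-set identification and compute a macro F1 on a held-out split,
# assuming a folder-per-identity layout (ImageNet normalisation omitted).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader
from sklearn.metrics import f1_score

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train = datasets.ImageFolder("sheepface107/train", tfm)  # hypothetical path
test = datasets.ImageFolder("sheepface107/test", tfm)    # hypothetical path

model = models.resnet50(weights="IMAGENET1K_V2")
model.fc = nn.Linear(model.fc.in_features, len(train.classes))  # 107 identities

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for x, y in DataLoader(train, batch_size=32, shuffle=True):  # one epoch shown
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

model.eval()
preds, labels = [], []
with torch.no_grad():
    for x, y in DataLoader(test, batch_size=32):
        preds += model(x).argmax(1).tolist()
        labels += y.tolist()
print("macro F1:", f1_score(labels, preds, average="macro"))

Swapping models.resnet50 for models.vgg16 or models.googlenet would reproduce the three-way comparison reported above.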
ARTICLE | doi:10.20944/preprints202303.0046.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Privacy Policies; NLP; benchmark; general language understanding; domain specialization and generalization
Online: 3 March 2023 (01:10:56 CET)
Benchmarks for general language understanding have been developing rapidly in recent years of NLP research, particularly because of their utility in choosing strong-performing models for practical downstream applications. While benchmarks have been proposed in the legal language domain, virtually no such benchmarks exist for privacy policies despite their increasing importance in modern digital life. This could be explained by privacy policies falling under the legal language domain, but we find evidence to the contrary that motivates a separate benchmark for privacy policies. Consequently, we propose PrivacyGLUE as the first comprehensive benchmark of relevant and high-quality privacy tasks for measuring general language understanding in the privacy language domain. Furthermore, we release performance results from multiple transformer language models and perform model-pair agreement analysis to detect tasks where models benefited from domain specialization. Our findings show the importance of in-domain pretraining for privacy policies. We believe PrivacyGLUE can accelerate NLP research and improve general language understanding for humans and AI algorithms in the privacy language domain, thus supporting the adoption and acceptance of solutions built on it.
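A minimal sketch of the model-pair agreement idea mentioned above, assuming per-example predictions from each model on a shared test set are available; Cohen's kappa is our choice of agreement statistic and may differ from the paper's exact formulation, and the model names and labels below are invented.

# Pairwise agreement between models on the same test examples: pairs with
# unusually low agreement flag tasks where domain specialization mattered.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

predictions = {  # hypothetical per-model label predictions, aligned by example
    "bert-base": [0, 1, 1, 0, 2],
    "legal-bert": [0, 1, 0, 0, 2],
    "privbert": [0, 1, 1, 0, 1],
}

for a, b in combinations(predictions, 2):
    kappa = cohen_kappa_score(predictions[a], predictions[b])
    print(f"{a} vs {b}: kappa = {kappa:.3f}")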
ARTICLE | doi:10.20944/preprints202105.0444.v1
Subject: Engineering, Other Keywords: Spectral Unmixing; Imaging Spectrometer; Hyperspectral; Benchmark Dataset; Dimensionality Estimation; Endmember Extraction; Abundance Estimation; HySpex.
Online: 19 May 2021 (13:25:39 CEST)
Spectral unmixing represents both an application per se and a pre-processing step for several applications involving data acquired by imaging spectrometers. However, there is still a lack of publicly available reference data sets suitable for the validation and comparison of different spectral unmixing methods. In this paper, we introduce the DLR HyperSpectral Unmixing (DLR HySU) benchmark dataset, acquired over German Aerospace Center (DLR) premises in Oberpfaffenhofen. The dataset includes airborne hyperspectral and RGB imagery of targets of different materials and sizes, complemented by simultaneous ground-based reflectance measurements. The DLR HySU benchmark allows a separate assessment of all main spectral unmixing steps: dimensionality estimation, endmember extraction (with and without pure pixel assumption), and abundance estimation. Results obtained with traditional algorithms for each of these steps are reported. To the best of our knowledge, this is the first time that real imaging spectrometer data with accurately measured targets are made available for hyperspectral unmixing experiments. The DLR HySU benchmark dataset is openly available online and the community is welcome to use it for spectral unmixing and other applications.
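To make the abundance-estimation step concrete, here is a minimal sketch under the standard linear mixing model x = Ea + n, solving for nonnegative abundances with SciPy's NNLS; the sum-to-one constraint is handled by a crude renormalisation, and all shapes and data are illustrative rather than taken from DLR HySU.

# Abundance estimation for one mixed pixel: given endmember spectra (columns
# of E), recover nonnegative abundance fractions by nonnegative least squares.
import numpy as np
from scipy.optimize import nnls

bands, n_endmembers = 100, 3
E = np.random.rand(bands, n_endmembers)           # endmember matrix (columns)
a_true = np.array([0.6, 0.3, 0.1])
x = E @ a_true + 0.01 * np.random.randn(bands)    # one mixed pixel spectrum

a_hat, residual = nnls(E, x)
a_hat /= a_hat.sum()                              # crude sum-to-one renormalisation
print("estimated abundances:", np.round(a_hat, 3))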
ARTICLE | doi:10.20944/preprints202008.0504.v1
Subject: Computer Science And Mathematics, Information Systems Keywords: geospatial; computation; spatial benchmark; cybergis
Online: 24 August 2020 (03:12:06 CEST)
Technologies around the world produce and interact with geospatial data instantaneously, from mobile web applications to satellite imagery that is collected and processed across the globe daily. Big raster data allows researchers to integrate and uncover new knowledge about geospatial patterns and processes. However, we are also at a critical moment, as we have an ever-growing number of big data platforms that are being co-opted to support spatial analysis. A gap in the literature is the lack of a robust framework to assess the capabilities of geospatial analysis on big data platforms. This research begins to address this issue by establishing a geospatial benchmark that employs freely accessible datasets to provide a comprehensive comparison across big data platforms. The benchmark is critical for evaluating the performance of spatial operations on big data platforms. It provides a common framework to compare existing platforms as well as to evaluate new platforms. The benchmark is applied to three big data platforms and reports computing times and performance bottlenecks so that GIScientists can make informed choices regarding the performance of each platform. Each platform is evaluated for five raster operations: pixel count, reclassification, raster add, focal averaging, and zonal statistics, using three different datasets.
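The five raster operations have simple per-pixel semantics, sketched below in NumPy/SciPy; the benchmark itself runs them on tiled, distributed rasters across platforms, but the definitions are the same, and the arrays and thresholds here are illustrative.

# Reference semantics for the five benchmarked raster operations.
import numpy as np
from scipy.ndimage import uniform_filter

raster = np.random.randint(0, 255, (1000, 1000))
zones = np.random.randint(0, 10, (1000, 1000))

pixel_count = (raster > 128).sum()                          # pixel count
reclass = np.where(raster > 128, 1, 0)                      # reclassification
added = raster + reclass                                    # raster add
focal_mean = uniform_filter(raster.astype(float), size=3)   # 3x3 focal averaging
zonal_mean = {z: raster[zones == z].mean() for z in np.unique(zones)}  # zonal stats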
ARTICLE | doi:10.20944/preprints201907.0157.v1
Subject: Chemistry And Materials Science, Analytical Chemistry Keywords: acrylamide; biscuits; mitigation measures; benchmark levels; contaminant
Online: 11 July 2019 (11:05:40 CEST)
Acrylamide (AA), a molecule which potentially increases the risk of developing cancer, is easily formed in food rich in carbohydrates, such as biscuits, wafers and breakfast cereals, at temperatures above 120 °C. There is therefore a pressing need to detect and quantify the AA content in processed foodstuffs, in order to delineate limits and mitigation strategies. This work reports the development and validation of a high-resolution mass spectrometry-based methodology for identification and quantification of AA in specific food matrices of biscuits, using LC-MSn with electrospray ionization and an Orbitrap mass analyser. The developed analytical method showed good repeatability (RSDr 11.1%), with 3.55 μg kg-1 and 11.8 μg kg-1 as limit of detection (LOD) and limit of quantification (LOQ), respectively. The choice of multiplexed targeted-SIM mode (t-SIM) for AA and AA-d3 isolated ions provided enhanced detection sensitivity, as demonstrated in this work. The AA concentrations obtained varied between 323.7 and 2056.1 μg kg-1. An increase in AA concentration was observed during baking, as well as differences between samples taken from different areas of the baking oven. Statistical processing of the data was performed to compare AA levels with several production parameters, such as cooking time/temperature, placement on the cooking conveyor belt, color and moisture for different biscuits. The composition of the raw materials was statistically the factor most correlated with AA content when all samples were considered. The statistical treatment presented herein enables an important prediction of the factors influencing AA formation in biscuits, contributing to the implementation of effective mitigation strategies.
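For orientation, LOD and LOQ values such as those above are conventionally estimated from calibration data as

LOD = 3.3 σ / S,    LOQ = 10 σ / S,

where σ is the standard deviation of the blank (or calibration intercept) response and S is the calibration slope; whether the authors used this estimator or a signal-to-noise criterion is not stated in the abstract.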
ARTICLE | doi:10.20944/preprints202304.0663.v1
Subject: Computer Science And Mathematics, Discrete Mathematics And Combinatorics Keywords: Orienteering problems; local-search metaheuristics; parallelism; competition; evolution; Benchmark
Online: 21 April 2023 (03:17:49 CEST)
A number of challenging combinatorial optimization problems in logistics, transportation, aeronautics, and astronautics can be modeled as orienteering problems (OPs). To address the classic OP and its real-world variants, a parallel adaptive local-search algorithm based on competition and evolution (Palace) is proposed in this paper. In this algorithm, the parallelism runs appropriate local-search metaheuristics and operators to obtain the population in each generation; the competition then grades those metaheuristics and operators, promoting the outperforming ones and eliminating the underperforming ones; and the evolution explores a large solution space and reproduces the best solutions for the next generation. In this manner, parallelism, competition, and evolution are organized in an easy-to-use algorithm and provide extensibility, adaptivity, and exploration ability, respectively. Palace is examined on classic and real-world benchmarks covering the OP, the time-dependent/independent OP with time windows, and unmanned aerial vehicle and agile earth observation satellite planning. The results show that Palace performs well in applicability and effectiveness in comparison with state-of-the-art algorithms.
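A high-level sketch of the parallelism/competition/evolution loop described above; the operator sampling, reward rule, and truncation selection shown here are illustrative stand-ins for the authors' design, not their actual implementation.

# One generation: sample operators by weight (parallelism), reward operators
# whose offspring score well (competition), keep the best solutions (evolution).
import random

def palace(pop, operators, score, generations=100):
    weights = {op.__name__: 1.0 for op in operators}
    for _ in range(generations):
        # parallelism: in the paper, each improvement below runs concurrently
        results = []
        for sol in pop:
            op = random.choices(operators,
                                [weights[o.__name__] for o in operators])[0]
            results.append((op, op(sol)))       # operator returns a new solution
        # competition: up-weight operators whose offspring score well
        for op, child in results:
            weights[op.__name__] = 0.9 * weights[op.__name__] + 0.1 * score(child)
        # evolution: the best solutions seed the next generation
        pop = sorted((c for _, c in results), key=score, reverse=True)[:len(pop)]
    return max(pop, key=score)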
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Face detection; CSEM; Deep learning; GPU; CPU; Benchmark; Regression
Online: 27 July 2020 (14:54:15 CEST)
Face recognition is a valuable forensic tool for criminal investigators, since it helps identify individuals in scenarios of criminal activity such as fugitive tracking or child sexual abuse. It is, however, a very challenging task, as it must handle low-quality images of real-world settings and fulfill real-time requirements. Deep learning approaches for face detection have proven very successful, but they require large computation power and processing time. In this work, we evaluate the speed-accuracy tradeoff of three popular deep-learning-based face detectors on the WIDER Face and UFDD data sets on several CPUs and GPUs. We also develop a regression model capable of estimating performance, both in terms of processing time and accuracy. We expect this to become a useful tool for end users in forensic laboratories to estimate the performance of different face detection options. Experimental results showed that the best speed-accuracy tradeoff is achieved with images resized to 50% of the original size on GPUs and to 25% of the original size on CPUs. Moreover, performance can be estimated using multiple linear regression models with a Mean Absolute Error (MAE) of 0.113, which is very promising for the forensic field.
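A minimal sketch of the regression idea: predicting per-image processing time from simple device and input features with multiple linear regression and reporting the MAE. The feature set and timing values below are invented for illustration and are not the paper's measurements.

# Fit a multiple linear regression mapping (scale, megapixels, device) to
# processing time, then report the in-sample mean absolute error.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# columns: resize scale (0.25/0.5/1.0), input megapixels, is_gpu (0/1)
X = np.array([[0.25, 2.0, 0], [0.5, 2.0, 0], [1.0, 2.0, 0],
              [0.25, 2.0, 1], [0.5, 2.0, 1], [1.0, 2.0, 1]])
y = np.array([0.41, 0.92, 2.10, 0.05, 0.11, 0.24])  # seconds/image (made up)

model = LinearRegression().fit(X, y)
print("MAE:", mean_absolute_error(y, model.predict(X)))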
REVIEW | doi:10.20944/preprints202007.0093.v1
Subject: Biology And Life Sciences, Ecology, Evolution, Behavior And Systematics Keywords: Conservation; restoration; reference state; benchmark; vegetation; composition; structure; conceptual framework
Online: 6 July 2020 (04:04:07 CEST)
Measuring the status and trends of biodiversity is critical for making informed decisions about the conservation, management or restoration of species, habitats and ecosystems. Defining the reference state against which status and change are measured is essential. Typically, reference states describe historical conditions, yet historical conditions are challenging to quantify, may be difficult to falsify, and may no longer be an attainable target in a contemporary ecosystem. We have constructed a conceptual framework to help inform thinking and discussion around the philosophical underpinnings of reference states and guide their application. We characterise currently recognised historical reference states and describe them as Pre-Human, Indigenous Cultural, Pre-Intensification and Hybrid-Historical. We extend the conceptual framework to include contemporary reference states as an alternative theoretical perspective. The contemporary reference state framework is a major conceptual shift that focuses on current ecological patterns and identifies areas with higher biodiversity values, regardless of the disturbance history. The specific context for which we design the contemporary conceptual frame is underpinned by an overarching goal—to maximise biodiversity conservation and restoration outcomes in existing ecosystems. The contemporary reference state framework can account for the inherent differences in the diversity of biodiversity values (e.g., species richness, habitat complexity) across spatial scales, communities and ecosystems. In contrast to historical reference states, contemporary reference states are measurable and falsifiable. This ‘road map of reference states’ offers the perspective needed to define and assess the status and trends in biodiversity and habitats. Our framework for contemporary reference states provides a tractable way for policy-makers and practitioners to navigate biodiversity assessments to maximise conservation and restoration outcomes in contemporary ecosystems. We illustrate how to define a contemporary reference state using an example from south-eastern Australia.
ARTICLE | doi:10.20944/preprints202311.1184.v1
Subject: Engineering, Control And Systems Engineering Keywords: vision-centric perception benchmark; online assessment; streaming inputs; two-dimensional entropy
Online: 21 November 2023 (10:34:50 CET)
In recent years, vision-centric perception has played a crucial role in autonomous driving tasks, encompassing functions such as 3D detection, map construction, and motion forecasting. However, the deployment of vision-centric approaches in practical scenarios is hindered by substantial latency, often deviating significantly from the outcomes achieved through offline training. This disparity arises from the fact that conventional benchmarks for autonomous driving perception predominantly conduct offline evaluations, thereby largely overlooking the latency concerns prevalent in real-world deployment. While a few benchmarks have been proposed to address this limitation by introducing effective evaluation methods for online perception, they do not adequately consider the intricacies introduced by the complexity of input information streams. To address this gap, we propose the Autonomous-driving Streaming I/O (ASIO) benchmark, aiming to assess the streaming input characteristics and online performance of vision-centric perception in autonomous driving. To facilitate this evaluation across diverse streaming inputs, we initially establish a dataset based on the CARLA Leaderboard. In alignment with real-world deployment considerations, we further develop evaluation metrics based on information complexity specifically tailored for streaming inputs and streaming performance. Experimental results indicate significant variations in model performance and ranking under different major camera deployments, underscoring the necessity of thoroughly accounting for the influence of model latency and streaming input characteristics during real-world deployment. To enhance streaming performance consistently across distinct streaming input features, we introduce a backbone switcher based on the identified streaming input characteristics. Experimental validation demonstrates its efficacy in improving streaming performance across varying streaming input features.
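The "two-dimensional entropy" keyword suggests an input-complexity measure of the following kind, built from the joint histogram of pixel value and local-mean value. The switching rule shown is illustrative only; the paper's switcher is driven by its identified streaming input characteristics, and the threshold and backbone names are assumptions.

# 2D image entropy as a streaming-input complexity proxy, plus a toy switch.
import numpy as np
from scipy.ndimage import uniform_filter

def entropy_2d(gray, bins=16):
    local_mean = uniform_filter(gray.astype(float), size=3)
    hist, _, _ = np.histogram2d(gray.ravel(), local_mean.ravel(),
                                bins=bins, range=[[0, 255], [0, 255]])
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def pick_backbone(frame, threshold=6.0):
    # illustrative rule only: spend the latency budget on a heavier backbone
    # when the measured input complexity permits it
    return "resnet101" if entropy_2d(frame) < threshold else "resnet18"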
ARTICLE | doi:10.20944/preprints202010.0319.v1
Subject: Physical Sciences, Acoustics Keywords: function optimization; benchmark function; Whale Optimization (WO); Sine Cosine (SC) Algorithm
Online: 15 October 2020 (11:33:27 CEST)
We developed a novel hybrid approach, based on coupling the Whale Optimizer and Sine Cosine algorithms via a surrogate model, for solving global optimization problems in computer science, biomedical, and engineering real-life applications. The whale optimizer component is used to balance the exploitation and exploration processes in the proposed method. Established techniques exist for finding approximate optimal solutions, but our algorithm further guarantees that such numerical and statistical solutions satisfy the physical bounds of standard and real-life functions. Our experiments with benchmark, biomedical, computer science, and engineering real-life problems illustrate the advantages of the new hybrid approach based on mixing the Whale Optimizer and Sine Cosine algorithms. It holds considerable potential for reducing execution time when solving standard and real-life problems while improving the quality of the solution.
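A toy sketch of one position update mixing the standard Whale Optimization moves (encircling and spiral) with the standard Sine Cosine moves. The 50/50 mixing rule is our assumption for illustration; the paper couples the two algorithms via a surrogate model, which is not reproduced here.

# One hybrid WOA/SCA position update for a candidate x given the current best.
import numpy as np

def hybrid_step(x, best, t, T):
    a = 2 * (1 - t / T)                        # shared amplitude: decays 2 -> 0
    r1, r2, r3 = np.random.rand(3)
    if np.random.rand() < 0.5:                 # WOA branch (illustrative mix)
        A, C = 2 * a * r1 - a, 2 * r2
        if abs(A) < 1:
            return best - A * np.abs(C * best - x)                  # encircling
        l = np.random.uniform(-1, 1)
        return np.abs(best - x) * np.exp(l) * np.cos(2 * np.pi * l) + best  # spiral
    if r3 < 0.5:                               # SCA branch
        return x + a * np.sin(2 * np.pi * r2) * np.abs(r3 * best - x)
    return x + a * np.cos(2 * np.pi * r2) * np.abs(r3 * best - x)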
ARTICLE | doi:10.20944/preprints201811.0611.v1
Subject: Environmental And Earth Sciences, Environmental Science Keywords: water resources; natural resources; resource security; SDGs; goal; target; benchmark; standard
Online: 28 November 2018 (14:03:52 CET)
The 2030 Agenda for Sustainable Development and its SDGs are high on the agenda for most countries of the world. In its publication of the SDGs, the UN has provided the goals and target descriptions that, if implemented at a country level, would lead towards a sustainable future. The IAEG (Inter-Agency Expert Group on the SDGs) was tasked with disseminating indicators and methods to countries that can be used to gather data describing the global progress towards sustainability. However, the 2030 Agenda leaves it to countries to adopt the targets, with each government setting its own national targets guided by the global level of ambition but taking into account national circumstances. At present, guidance on how to go about this is scant, but it is clear that the responsibility to implement lies with countries and that it is actions at a country level that will determine the success of the SDGs. SDG reporting by countries takes two forms: 1) global reporting using prescribed indicator methods and data; and 2) National Voluntary Reviews, where a country reports on its own progress in more detail but is also able to present data that are more appropriate for the country. For the latter, countries need to be able to adapt the global indicators to fit national priorities and context; thus the global description of an indicator could be reduced to describe only what is relevant to the country. Countries may also, for the National Voluntary Review, use indicators that are unique to the country but nevertheless contribute to measurement of progress towards the global SDG target. Importantly, for those indicators that relate to the security of natural resources (e.g. water), there are no prescribed numerical targets, standards or benchmarks. Rather, countries will need to set their own benchmarks or standards against which performance can be evaluated. This paper presents a procedure that would enable a country to describe national targets with associated benchmarks that are appropriate for the country. The procedure focusses on those SDG targets that are natural-resource-security focussed, e.g. extent of water-related ecosystems (6.6) and desertification (15.3), because the selection of indicator methods and benchmarks is based on the location of natural resources, their use and present state, and how they fit into national strategies.
ARTICLE | doi:10.20944/preprints202305.1658.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Virtual screening; Bioactivity prediction; Equivariant graph neural network; Multiple instance learning; Molecular conformation; Benchmark dataset
Online: 23 May 2023 (13:05:48 CEST)
Ligand-based virtual screening (LBVS) is a promising approach for rapid and low-cost screening of potentially bioactive molecules in the early stage of drug discovery. Compared with traditional similarity-based machine learning methods, deep learning frameworks for LBVS can more effectively extract high-order molecule structure representations from molecular fingerprints or structures. However, the 3D conformation of a molecule largely influences its bioactivity and physical properties, and has rarely been considered in previous deep learning-based LBVS methods. Moreover, a corresponding bioactivity benchmark dataset is still lacking. To address these issues, we introduce a novel end-to-end deep learning architecture trained from molecular conformers for LBVS. We first extracted molecule conformers from multiple public molecular bioactivity data sources and consolidated them into a large-scale bioactivity benchmark dataset, which in total includes millions of endpoints and molecules corresponding to 954 targets. Then, we devised a deep learning-based LBVS method called EquiVS to learn molecule representations from conformers for bioactivity prediction. Specifically, a graph convolutional network (GCN) and an equivariant graph neural network (EGNN) are sequentially stacked to learn high-order molecule-level and conformer-level representations, followed by attention-based deep multiple-instance learning (MIL) to aggregate these representations and predict the potential bioactivity of the query molecule on a given target. We conducted various experiments to validate the data quality of our benchmark dataset, and confirmed that EquiVS achieves better performance than 10 traditional machine learning or deep learning-based LBVS methods. Further ablation studies demonstrate the significant contribution of molecular conformation to bioactivity prediction, as well as the soundness and non-redundancy of the deep learning architecture in EquiVS. Finally, a model interpretation case study on CDK2 shows the potential of EquiVS in optimal conformer discovery. Overall, the study shows that our proposed benchmark dataset and the EquiVS method have promising prospects in virtual screening applications.
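The attention-based MIL aggregation step can be sketched as follows: a bag of per-conformer embeddings is pooled into a single molecule representation via learned attention weights. Dimensions are illustrative, and the GCN/EGNN encoders that would produce the embeddings (and the final prediction head) are omitted.

# Attention-based multiple-instance pooling over a molecule's conformer bag.
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    def __init__(self, dim=128, hidden=64):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))

    def forward(self, bag):                              # bag: (n_conformers, dim)
        weights = torch.softmax(self.attn(bag), dim=0)   # (n_conformers, 1)
        return (weights * bag).sum(dim=0)                # pooled: (dim,)

pool = AttentionMILPooling()
conformer_embeddings = torch.randn(10, 128)  # 10 conformers of one molecule
molecule_repr = pool(conformer_embeddings)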
ARTICLE | doi:10.20944/preprints202210.0477.v1
Subject: Computer Science And Mathematics, Analysis Keywords: High Throughput Plant Phenotyping; Deep Neural Network; Flower Detection; Temporal Phenotypes; Benchmark Dataset; Flower Status Report
Online: 31 October 2022 (10:00:24 CET)
A phenotype is the composite of an observable expression of a genome for traits in a given environment. The trajectories of phenotypes computed from an image sequence, and the timing of important events in a plant’s life cycle, can be viewed as temporal phenotypes indicative of the plant’s growth pattern and vigor. In this paper, we introduce a novel method called FlowerPhenoNet, which uses deep neural networks to detect flowers from multiview image sequences for high-throughput temporal plant phenotyping analysis. Following flower detection, a set of novel flower-based phenotypes is computed, e.g., the day of emergence of the first flower in a plant’s life cycle, the total number of flowers present in the plant at a given time, the highest number of flowers bloomed in the plant, the growth trajectory of a flower, and the blooming trajectory of a plant. To develop a new algorithm and facilitate performance evaluation based on experimental analysis, a benchmark dataset is indispensable. Thus, we introduce a benchmark dataset called FlowerPheno, which comprises image sequences of three flowering plant species, namely sunflower, coleus, and canna, captured by a visible-light camera in a high-throughput plant phenotyping platform from multiple view angles. The experimental analyses on the FlowerPheno dataset demonstrate the efficacy of FlowerPhenoNet.
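The flower-based temporal phenotypes named above reduce to simple statistics over a per-imaging-day flower count (e.g., the number of detections a detector such as FlowerPhenoNet returns per day); a sketch with an invented series:

# Temporal phenotypes from a per-day flower count (days and counts made up).
counts_by_day = {1: 0, 3: 0, 5: 1, 7: 3, 9: 6, 11: 5, 13: 2}

first_flower_day = min(d for d, c in counts_by_day.items() if c > 0)
peak_count = max(counts_by_day.values())                 # most flowers bloomed
blooming_trajectory = sorted(counts_by_day.items())      # (day, count) curve

print(first_flower_day, peak_count, blooming_trajectory)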
ARTICLE | doi:10.20944/preprints202208.0232.v3
Subject: Medicine And Pharmacology, Pharmacology And Toxicology Keywords: food safety; risk assessment; Cannabis sativa; tetrahydrocannabinol; food supplements; cannabidiol; benchmark dose; health-based guidance value (HBGV); liver toxicity
Online: 24 November 2022 (02:53:03 CET)
In the European Union (EU), cannabidiol products require pre-marketing authorisation under the novel food regulation. Currently, 19 CBD applications are under assessment at the European Food Safety Authority (EFSA). During the initial assessment of the application files, the EFSA Panel on Nutrition, Novel Foods and Food Allergens (NDA) identified several knowledge gaps in its 7 June 2022 statement on the safety of cannabidiol as a novel food that need to be addressed before the evaluation of CBD can be concluded. Namely, the effect of CBD on the liver, gastrointestinal tract, endocrine system, nervous system, psychological function, and reproductive system needs to be clarified. Nevertheless, the available literature allows benchmark dose (BMD) response modelling of several bioassays, resulting in a BMD lower confidence limit (BMDL) of 20 mg/kg bw/day for liver toxicity in rats. Human data from healthy volunteers showed increases in the liver enzymes alanine aminotransferase (ALT) and aspartate aminotransferase (AST) in a study at 4.3 mg/kg bw/day, which was defined by the EFSA NDA panel as a lowest observed adverse effect level (LOAEL). The EFSA NDA panel concluded that the safety of CBD as a novel food cannot currently be evaluated, leading to a so-called clock stop of the applications until the applicants provide the required data. Meanwhile, the authors suggest that CBD products still available as food supplements on the EU market despite the lack of authorisation must be considered “unsafe”. Products exceeding a health-based guidance value of 10 mg/day must be considered “unfit for consumption” (Article 14(1) and (2)(b) of Regulation No 178/2002), while those exceeding the human LOAEL must be considered “injurious to health” (Article 14(1) and (2)(a) of Regulation No 178/2002).
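For orientation only: a health-based guidance value is conventionally derived by dividing a point of departure by a composite uncertainty factor and scaling by body weight. Assuming a factor of 30 and a 70 kg adult, which are our assumptions rather than the authors' stated derivation, the human LOAEL is consistent with a value near the 10 mg/day cited above:

HBGV ≈ LOAEL × m_bw / UF = (4.3 mg kg-1 bw day-1 × 70 kg) / 30 ≈ 10 mg day-1.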
ARTICLE | doi:10.20944/preprints202107.0462.v1
Subject: Chemistry And Materials Science, Analytical Chemistry Keywords: London dispersion; ketone complexes; density functional theory; hydrogen bonds; molecular recognition; vibrational spectroscopy; gas phase; benchmark
Online: 20 July 2021 (16:06:47 CEST)
Phenol is added to acetophenone (methyl phenyl ketone) and to six of its halogenated derivatives in a supersonic jet expansion to determine the hydrogen bonding preference of the cold and isolated 1:1 complexes by linear infrared spectroscopy. Halogenation is found to have a pronounced effect on the docking site in this intermolecular ketone balance experiment. The spectra unambiguously decide between competing variants of phenyl group stacking due to their differences in hydrogen bond strength. Structures where the phenyl group interaction strongly distorts the hydrogen bond are more difficult to quantify in the experiment. For unsubstituted acetophenone, phenol clearly prefers the methyl side despite a predicted sub-kJ/mol advantage which is nearly independent of zero point vibrational energy, turning this complex into a challenging benchmark system for electronic structure methods which include long range dispersion interactions in some way.
ARTICLE | doi:10.20944/preprints202203.0241.v1
Subject: Engineering, Control And Systems Engineering Keywords: Adaptive Constrained Control; Barrier Lyapunov Function; Fault-Tolerant Control; Nussbaum-type function; power regulation; wind turbine benchmark
Online: 17 March 2022 (03:01:31 CET)
Motivated by the goal of improving the efficiency and reliability of wind turbine energy conversion, this paper presents an advanced control design that enhances power regulation efficiency and reliability. The constrained behaviour of the wind turbine is taken into account by using a barrier Lyapunov function in the analysis of the Lyapunov direct method. This, consequently, guarantees that the generated power remains within the desired bounds to satisfy the grid power demand. Moreover, a Nussbaum-type function is utilized in the control scheme to cope with the unpredictable wind speed. This eliminates the need for accurate wind speed measurement or estimation. Furthermore, via properly designed adaptive laws, a robust actuator fault-tolerant capability is integrated into the scheme, handling the model uncertainty. Numerical simulations are performed on a high-fidelity wind turbine benchmark model, under different fault scenarios, to verify the effectiveness of the developed design. A Monte Carlo analysis is also exploited to evaluate the reliability and robustness characteristics against model-reality mismatch, measurement errors and disturbance effects.
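A representative log-type barrier Lyapunov function for an output constraint |e| < k_b, of the kind the abstract refers to (the paper's exact choice may differ), is

V_b(e) = (1/2) ln( k_b^2 / (k_b^2 − e^2) ),    |e| < k_b,

which grows unbounded as the tracking error e approaches the bound k_b; any controller that keeps V_b bounded therefore keeps the regulated power within the desired limits.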
ARTICLE | doi:10.20944/preprints202009.0643.v1
Subject: Chemistry And Materials Science, Analytical Chemistry Keywords: dispersion; ketone-alcohol complexes; density functional theory; hydrogen bonds; molecular recognition; vibrational spectroscopy; gas phase; benchmark; pinacolone
Online: 26 September 2020 (14:57:33 CEST)
The influence of distant London dispersion forces on the docking preference of alcohols of different size between the two lone electron pairs of the carbonyl group in pinacolone is explored by infrared spectroscopy of the OH stretching fundamental in supersonic jet expansions of 1:1 solvate complexes. Experimentally, no pronounced tendency of the alcohol to switch from the methyl to the bulkier tert-butyl side with increasing size is found. In all cases, methyl docking dominates by at least a factor of two, whereas DFT-optimized structures suggest a very close balance for the larger alcohols, once corrected by CCSD(T) relative electronic energies. Together with inconsistencies when switching from a C4 to a C5 alcohol, this points to deficiencies of the investigated B3LYP and in particular TPSS functionals even after dispersion correction, which cannot be blamed on zero-point energy effects. The search for density functionals which describe the harmonic frequency shift, the structural change and the energy difference between the docking isomers of larger alcohols to unsymmetric ketones in a satisfactory way remains open.
ARTICLE | doi:10.20944/preprints202307.0444.v1
Subject: Computer Science And Mathematics, Computer Vision And Graphics Keywords: monocular 3D reconstruction; monocular SLAM comparison; monocular VO comparison; monocular benchmark; 3D reconstruction classification; pure visual 3D reconstruction
Online: 7 July 2023 (10:12:55 CEST)
Pure monocular 3D reconstruction is an ill-posed problem that has attracted the research community's interest due to the affordability and availability of RGB sensors. SLAM, VO, and SFM are disciplines formulated to solve the 3D reconstruction problem and estimate the camera’s ego-motion, and many methods have been proposed. However, most of these methods were not evaluated on large datasets or under various motion patterns, were not tested under the same metrics, and mostly were not evaluated following a taxonomy, making their comparison and selection difficult. In this research, we compared ten publicly available SLAM and VO methods following a taxonomy, including one method for each category of the primary taxonomy, three machine learning-based methods, and two updates of the best methods, to identify the advantages and limitations of each category of the taxonomy and to test whether the addition of machine learning or the updates made to those methods improved them significantly. We evaluated each algorithm on the TUM-Mono benchmark and performed an inferential statistical analysis to identify significant differences through its metrics. Results determined that sparse-direct methods significantly outperformed the rest of the taxonomy, and that fusing them with machine learning techniques significantly improves the performance of geometric-based methods from different perspectives.
ARTICLE | doi:10.20944/preprints201705.0004.v1
Subject: Engineering, Control And Systems Engineering Keywords: fault diagnosis; analytical redundancy; fuzzy logic; neural networks; data-driven approaches; nonlinear geometric approach; wind farm benchmark simulator
Online: 1 May 2017 (07:53:08 CEST)
The fault diagnosis of wind farms has been proven to be a challenging task and motivates the research activities carried out through this work. Therefore, this paper deals with the fault diagnosis of a wind park benchmark model, and it considers viable solutions to the problem of earlier fault detection and isolation. The design of the fault indicator involves data-driven approaches, as they can represent effective tools for coping with poor analytical knowledge of the system dynamics, noise, uncertainty and disturbances. In particular, the proposed data-driven solutions rely on fuzzy models and neural networks that are used to describe the strongly nonlinear relationships between measurements and faults. The chosen architectures rely on nonlinear autoregressive with exogenous input models, as they can represent the dynamic evolution of the system along time. The developed fault diagnosis schemes are tested by means of a high-fidelity benchmark model that simulates the normal and the faulty behaviour of a wind farm installation. The achieved performances are also compared with those of a model-based approach relying on nonlinear differential geometry tools. Finally, a Monte Carlo analysis validates the robustness and the reliability of the proposed solutions against typical parameter uncertainties and disturbances.
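The nonlinear autoregressive with exogenous input (NARX) structure mentioned above takes the general form

ŷ(k) = f( y(k−1), …, y(k−n_y), u(k−1), …, u(k−n_u) ),

where u and y are measured inputs and outputs and f is realised by the fuzzy model or the neural network. A standard way to use such a model for diagnosis, consistent with the fault-indicator design described above, is to monitor the residual r(k) = y(k) − ŷ(k), which should depart from zero when a fault occurs.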
ARTICLE | doi:10.20944/preprints202311.0109.v1
Subject: Engineering, Energy And Fuel Technology Keywords: clean hydrogen; water electrolysis; energy transition; electrolysis technologies; energy analysis; benchmark data; performance measurement; electrolysers’ scaling up; renewable energy sources integration
Online: 1 November 2023 (17:21:42 CET)
This paper explores the latest developments in electrolysis technology, a key player in the transition to sustainable energy systems. Electrolysis, despite currently contributing a small share to global hydrogen production, holds immense potential for producing green hydrogen. The study delves into the efficiency of electrolysis systems, emphasizing ongoing efforts to enhance energy conversion rates. It investigates the impact of high-temperature electrolysis on reducing electricity consumption, thus making the process more efficient. The paper discusses the various challenges in the research on water electrolysis and underscores the critical role electrolysis plays in integrating renewable energy sources. The study emphasizes the need for continuous advancements in electrolysis technology to bridge existing gaps, making a compelling case for its pivotal role in the green hydrogen revolution.
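One common energy-analysis figure of merit for the conversion rates discussed above (our illustration, not a number from the paper) is the higher-heating-value efficiency

η_HHV = 39.4 kWh kg-1 / E_spec,

where 39.4 kWh kg-1 is the HHV of hydrogen and E_spec is the electrical energy consumed per kilogram of H2; for example, a stack drawing 50 kWh kg-1 operates at about 79%. High-temperature electrolysis raises this figure by supplying part of the reaction enthalpy as heat rather than electricity.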
ARTICLE | doi:10.20944/preprints202003.0381.v1
Subject: Computer Science And Mathematics, Data Structures, Algorithms And Complexity Keywords: algorithmic design; metaheuristic optimisation; evolutionary computation; swarm intelligence; memetic computing; parameter tuning; fitness trend; Wilcoxon Rank-Sum; Holm-Bonferroni; benchmark suite
Online: 26 March 2020 (04:03:41 CET)
The Stochastic Optimisation Software (SOS) is a Java platform facilitating the algorithmic design process and the evaluation of metaheuristic optimisation algorithms. It reduces the burden of coding miscellaneous methods for dealing with several bothersome and time-demanding tasks such as parameter tuning, implementation of comparison algorithms and testbed problems, collecting and processing data to display results, measuring algorithmic overhead, etc. SOS provides numerous off-the-shelf methods including 1) customised implementations of statistical tests, such as the Wilcoxon Rank-Sum test and the Holm-Bonferroni procedure, for comparing the performances of optimisation algorithms and automatically generating result tables in PDF and LaTeX formats; 2) the implementation of an original advanced statistical routine for accurately comparing pairs of stochastic optimisation algorithms; 3) the implementation of a novel testbed suite for continuous optimisation, derived from the IEEE CEC 2014 benchmark, allowing for controlled activation of the rotation operator on each testbed function. Moreover, this article comments on the current state of the literature in stochastic optimisation and highlights similarities shared by modern metaheuristics inspired by nature. It is argued that the vast majority of these algorithms are simply a reformulation of the same methods and that metaheuristics for optimisation should be simply treated as stochastic processes with less emphasis on the inspiring metaphor behind them.
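A minimal SciPy sketch (not SOS's own Java routines) of the statistical pipeline named in point 1: pairwise Wilcoxon rank-sum tests on final fitness samples, followed by the step-down Holm-Bonferroni correction. The algorithm names and run data are invented.

# Pairwise Wilcoxon rank-sum tests with Holm-Bonferroni step-down correction.
from itertools import combinations
import numpy as np
from scipy.stats import ranksums

runs = {  # final fitness over 30 independent runs per algorithm (invented)
    "DE": np.random.normal(0.10, 0.02, 30),
    "PSO": np.random.normal(0.12, 0.02, 30),
    "CMA-ES": np.random.normal(0.08, 0.02, 30),
}

pairs = list(combinations(runs, 2))
pvals = [ranksums(runs[a], runs[b]).pvalue for a, b in pairs]

# Holm: compare the i-th smallest p-value against alpha / (m - i)
order = np.argsort(pvals)
m, alpha = len(pvals), 0.05
for rank, idx in enumerate(order):
    a, b = pairs[idx]
    reject = pvals[idx] < alpha / (m - rank)
    print(f"{a} vs {b}: p={pvals[idx]:.4f}, reject={reject}")
    if not reject:
        break  # step-down procedure: stop at the first non-rejection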
ARTICLE | doi:10.20944/preprints201703.0227.v1
Subject: Medicine And Pharmacology, Dermatology Keywords: biodiversity; skin allergy; benchmark skin health values; effect of synthetic cosmetics on skin; 21st century skin ailments; measure skin health; healthy skin ecosystem; healthy skin bacteria; damaged skin bacteria; perfect skin
Online: 31 March 2017 (08:52:14 CEST)
There is a skin allergy epidemic in the western world, and the rate of deterioration has increased significantly in the past 5-10 years. It is probable that there are many contributing environmental factors, yet some studies have linked the epidemic primarily to the rise in the use of synthetic chemical ingredients in modern cosmetics. Our challenge, therefore, was to find a mechanism to determine the effect these substances have on skin health, and whether they really are a primary cause of long-term damage to the skin. The first problem is the lack of any definitive way to measure skin health. Motivated by the overwhelming evidence for a link between deficient gut flora and ill health, we decided to look at whether our skin microbiota could similarly be used as an indicator of skin health. Our research illustrates that microbiota diversity alone can predict whether skin is healthy, after we revealed a complete lack of conclusive findings linking the presence or abundance of particular species of microbe to skin problems. This phenomenon is replicated throughout nature, where high biodiversity always leads to healthy ecosystems. ‘Caveman’ skin, untouched by modern civilisation, was far different from ‘western’ skin and displayed unprecedented levels of bacterial diversity. The less exposed communities were to western practices, the higher the skin diversity, which is clear evidence of an environmental factor in the developed world damaging skin. For the first time, we propose benchmark values of diversity against which we can measure skin to determine how healthy it is. This gives us the ability to predict which people are more likely to be prone to skin ailments, and to start to test whether cosmetic ingredients and products are a main cause of the skin allergy epidemic.
Subject: Biology And Life Sciences, Immunology And Microbiology Keywords: skin microbiome; skin microbiome biodiversity; biodiversity; skin ecosystem; skin allergy epidemic; benchmark skin health values; skin bacteria; 21st century skin ailments; measure skin health; healthy skin ecosystem; healthy skin bacteria; damaged skin bacteria
Online: 18 June 2020 (12:40:57 CEST)
A catastrophic loss of microbial biodiversity on the skin has led to an alarming increase in the prevalence of allergies and long-term damage to the skin, which could also have damaging knock-on effects on overall health. This study uses 50 human participants to obtain an average (benchmark) value for the biodiversity of ‘healthy’ western skin, which is crucial for updating our 2017 skin health measuring mechanism to use standardised methodology. Previous work with a larger sample size was unsatisfactory for use as a benchmark due to its use of different and outdated diversity indices. We also investigated the effect of age and sex, two factors known to affect the skin microbiome. Although no statistical significance is seen for age- and sex-related changes in diversity, there appear to be age-related changes, which elaborates on previous work that used larger, more general age ranges. Our study indicates that adults aged 28-37 have the highest diversity and those aged 48-57 the lowest. Crucially, because of this study we are now able to update the skin health measuring mechanism from our 2017 work. This will aid diagnostic assessment of susceptibility to cutaneous conditions or diseases, and their treatment. Testing any human subject will be rapidly improved by obtaining future benchmark diversity values for any age, sex, body site and area of residence, to which they can be compared. This improvement means we can also more accurately investigate the ultimate question: what factors in the western world are a main cause of the skin allergy epidemic? This could lead to future restriction of certain synthetic chemicals or products found to be particularly harmful to the skin.
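A sketch of a standard diversity index computation (Shannon H') from per-taxon abundance counts; the abstract does not state which standardised index the updated methodology adopts, so treat the choice and the counts below as illustrative.

# Shannon diversity index from OTU/ASV read counts for one skin sample.
import numpy as np

def shannon_index(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()           # relative abundances of observed taxa
    return -(p * np.log(p)).sum()

sample = [120, 80, 40, 10, 5, 1]     # invented per-taxon read counts
print(f"H' = {shannon_index(sample):.3f}")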