ARTICLE | doi:10.20944/preprints201703.0167.v1
Online: 21 March 2017 (04:23:40 CET)
Diagnosing melanocytic lesions is among the most challenging problems in the practice of pathology. Melanin pigment physically masks cellular detail, and its color is similar to that of commonly used chromogens, which often complicates examination of the cytomorphology and immunohistochemical staining results of tumor cells. Melanin bleaching can be very helpful for the histopathological diagnosis of heavily pigmented melanocytic lesions. Although various depigmentation methods have been reported, no standardized method has been developed. This study developed a fully automated platform that incorporates hydrogen peroxide-based melanin depigmentation into automated immunohistochemical analysis. The utility of the method was tested in one cell block of malignant melanoma cells in pleural effusion, ten ocular melanoma tissue samples, and ten cutaneous melanoma tissue samples. Our results demonstrate that the proposed method, which can be performed in only 3 hours, effectively preserves cell cytomorphology and immunoreactivity. The method is particularly effective for removing melanin pigment to facilitate histopathological examination of cytomorphology and for obtaining an unmasked tissue section for immunohistochemical analysis.
ARTICLE | doi:10.20944/preprints202208.0005.v1
Subject: Engineering, Civil Engineering Keywords: Durability; Reinforced concrete; Automated visualization; Risk
Online: 1 August 2022 (05:09:38 CEST)
Reinforced concrete (RC) durability is a crucial feature for estimating long-term quality and structural performance. Since life span estimation is vital for maintenance resource planning, a degradation model of an RC component is extracted by updating the status of structures and trending the components' state over time in terms of durability. Surface erosion, spalling, cracks, and other exposed defects on RC components increase the factors that adversely affect concrete durability in structures. This research presents an approach based on automated visualization for extracting quantitative indexes beside or instead of visual inspection, without subjective human intervention or probable human errors during the inspection. The durability index (D_i) is extracted based on damage probability and its growth, in order to quantify the severity of failure and risk. Measurements made by the automated software were double-checked with manual measurement tools, and data were verified randomly. The results show damage growth in this load-bearing component of 24% over the defined time period. According to the degradation models, this component may pass the relevant thresholds, i.e., the limit state between operation and failure. This significant difference between the expected time and the design time yields a D_i of 5 out of 10.
Subject: Engineering, Automotive Engineering Keywords: ADAS simulation; scenario generation; automated driving; Testing; innovation in mobility; self-driving cars; transportation
Online: 7 December 2020 (11:24:16 CET)
The increasingly used approach of combining different simulation software in the testing of automated driving systems (ADS) increases the need for suitable and convenient software designs. Recently developed co-simulation platforms (CSP) make it possible to cover the high demand for test kilometers for ADS by combining vehicle simulation software (VSS) with traffic flow simulation software (TFSS) environments. Meeting the demand for test kilometers alone, however, is not enough to choose a suitable CSP. The complexity level of the vehicle, object, sensor, and environment models used is essential for valid and representative simulation results. Choosing a suitable CSP raises the question of how the test procedures should be defined and constructed and what the relevant test scenarios are. Parameters of the ADS, the environments, objects, and sensors in VSS, as well as traffic parameters in TFSS, can be used to define and generate test scenarios. In order to generate a large number of scenarios in a systematic and automated way, suitable and appropriate software designs are required. In this paper we present a software design for CSP based on the Model-View-Controller (MVC) design pattern and the implementation of a complex CSP for virtual testing of ADS. Based on this design, an implementation of a CSP is presented using the VSS from IPG Automotive called CarMaker and the TFSS from PTV Group called Vissim. The results show that the presented CSP design and the implementation of the co-simulation can be used to generate relevant scenarios for testing ADS.
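As a rough illustration of the MVC split described above, here is a minimal Python sketch of a CSP skeleton; all class and method names are illustrative assumptions, not the paper's actual CarMaker/Vissim interfaces.

```python
# Minimal sketch of an MVC-style co-simulation platform (CSP) skeleton.
# All class and method names are illustrative assumptions, not the paper's API.

class ScenarioModel:
    """Model: holds scenario parameters for VSS (vehicle) and TFSS (traffic)."""
    def __init__(self, ego_speed_kmh: float, traffic_density: float):
        self.ego_speed_kmh = ego_speed_kmh
        self.traffic_density = traffic_density

class SimulationView:
    """View: reports simulation state; a real CSP would drive GUIs or logs."""
    def show(self, step: int, state: dict) -> None:
        print(f"step {step}: {state}")

class CoSimulationController:
    """Controller: advances both simulators in lockstep and syncs objects."""
    def __init__(self, model: ScenarioModel, view: SimulationView):
        self.model, self.view = model, view

    def run(self, steps: int) -> None:
        for step in range(steps):
            # Placeholder for the VSS/TFSS exchange (e.g., ego state out,
            # traffic objects in) that a real co-simulation would perform.
            state = {"ego_speed_kmh": self.model.ego_speed_kmh,
                     "traffic_density": self.model.traffic_density}
            self.view.show(step, state)

# Systematic scenario generation: sweep the model parameters.
for speed in (30.0, 50.0, 80.0):
    CoSimulationController(ScenarioModel(speed, 0.6), SimulationView()).run(2)
```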
ARTICLE | doi:10.20944/preprints202209.0408.v1
Subject: Engineering, Automotive Engineering Keywords: Public transportation; Automated vehicles; economic viability; business model
Online: 27 September 2022 (03:37:23 CEST)
During the past few years many projects and initiatives have been undertaken to deploy and test automated vehicles for public transportation and logistics. However, in spite of their ambition, all of these deployments stayed at the level of elaborate experimentation, deploying no more than 4, maximum 5, AVs at rather small sites (a few kilometres of roads), and never really reached the level of large-scale "commercial" deployment of transport services. The reasons for this are many, but the most important are the lack of economically viable and commercially realistic models, the lack of scalability of the business and operating models, and the lack of inclusive citizen/user-centric services required for large-scale end-user acceptance and adoption of the solutions. In this paper, based on the experience gained in the H2020 AVENUE project, we present the missing pieces of the puzzle, which will be addressed in the Horizon Europe project ULTIMO.
REVIEW | doi:10.20944/preprints202008.0494.v1
Subject: Social Sciences, Business And Administrative Sciences Keywords: internet; Society 5.0; sustainable development; automated content analysis
Online: 22 August 2020 (09:57:13 CEST)
(1) Background: The importance of this article lies in analyzing the technological developments in the field of the Internet and Internet technologies and determining their significance for sustainable development, which will result in the emergence of Society 5.0; (2) Methods: The authors used automated content analysis to analyze 552 articles published in 306 scientific journals indexed by SSCI and/or SCI-EXPANDED (Web of Science (WOS) platform) between 1996 and April 2020. The goal of the research was to present the relationship between the internet and sustainable development. (3) Results: The results of the analysis show that the top four most important themes in the selected journals were “development”, “information”, “data”, and “business and services”. (4) Conclusions: Our research approach emphasizes the importance of the culmination of scientific innovation with the conceptual, technological and contextual frameworks of internet and internet technology usage and its impact on sustainable development and the emergence of Society 5.0.
ARTICLE | doi:10.20944/preprints201907.0115.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Automated Weeding; Mobile Convolutional Neural Networks; Semantic Segmentation
Online: 8 July 2019 (12:29:21 CEST)
Automated weeding is an important research area in agrorobotics. Weeds can be removed mechanically or with the precise application of herbicides. Deep learning techniques have achieved state-of-the-art results in many computer vision tasks; however, their deployment on low-cost mobile computers is still challenging. This paper presents an advanced version of a previously presented system. The described system contains several novelties compared both with its previous version and with related work. It is part of an automatic weeding machine project developed by Warsaw University of Technology and MCMS Warka Ltd. The obtained model reaches satisfying accuracy at over 10 FPS on the Raspberry Pi 3B+ computer. It was tested on four different plant species at different growth stages and lighting conditions. The system performs semantic segmentation and is based on Convolutional Neural Networks. Its custom architecture mixes U-Net, MobileNets, DenseNet and ResNet concepts. The amount of manual ground-truth labels needed was significantly decreased by a knowledge distillation process, in which the final model learns to mimic an ensemble of complex models on a large database of unlabeled data. A further decrease in inference time was obtained by two custom modifications: the use of separable convolutions in the DenseNet blocks and an adjusted number of channels in each layer. In the authors’ opinion, the described novelties can easily be transferred to other agrorobotics tasks.
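As a rough illustration of one of the two speed-ups mentioned above, the following PyTorch sketch shows a DenseNet-style block built from depthwise-separable convolutions; the layer sizes and growth rate are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of one DenseNet-style block using depthwise-separable convolutions;
# channel counts are illustrative, not the paper's exact settings.
import torch
import torch.nn as nn

class SeparableConv(nn.Module):
    """3x3 depthwise conv followed by 1x1 pointwise conv (MobileNet-style)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return torch.relu(self.bn(self.pointwise(self.depthwise(x))))

class DenseBlock(nn.Module):
    """Each layer sees the concatenation of all previous feature maps."""
    def __init__(self, in_ch: int, growth: int = 8, layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList(
            SeparableConv(in_ch + i * growth, growth) for i in range(layers))

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)  # dense connectivity
        return x

x = torch.randn(1, 16, 64, 64)   # dummy feature map
print(DenseBlock(16)(x).shape)   # torch.Size([1, 40, 64, 64])
```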
ARTICLE | doi:10.20944/preprints202107.0386.v1
Online: 16 July 2021 (16:17:02 CEST)
Graphically rich applications such as games are ubiquitous, with the attractive visual effects of the Graphical User Interface (GUI) offering a bridge between software applications and end-users. However, various types of graphical glitches may arise from such GUI complexity and have become one of the main components of software compatibility issues. Our study of bug reports from game development teams at NetEase Inc. indicates that graphical glitches frequently occur during GUI rendering and severely degrade the quality of graphically rich applications such as video games. Existing automated testing techniques for such applications focus mainly on generating various GUI test sequences and checking whether the test sequences can cause crashes. These techniques require constant human attention to capture non-crashing bugs such as bugs causing graphical glitches. In this paper, we present the first step in automating the test oracle for detecting non-crashing bugs in graphically rich applications. Specifically, we propose GLIB, based on a code-based data augmentation technique, to detect game GUI glitches. We evaluate GLIB on 20 real-world game apps (with bug reports available) and the results show that GLIB can achieve 100% precision and 99.5% recall in detecting non-crashing bugs such as game GUI glitches. Practical application of GLIB on another 14 real-world games (without bug reports) further demonstrates that GLIB can effectively uncover GUI glitches, with 48 of the 53 bugs reported by GLIB having been confirmed and fixed so far.
ARTICLE | doi:10.20944/preprints202104.0678.v1
Subject: Earth Sciences, Atmospheric Science Keywords: supervised machine learning; automated landscape mapping; digital elevation model
Online: 26 April 2021 (14:44:24 CEST)
Landscapes evolve due to climatic conditions, tectonic activity, geological features, biological activity, and sedimentary dynamics. These processes link geological processes at depth to surface features. Consequently, the study of landscapes can reveal essential information about the geochemical footprint of ore deposits at depth. Advances in satellite imaging and computing power have enabled the creation of large geospatial datasets, the sheer size of which necessitates automated processing. We describe a methodology to enable the automated mapping of landscape pattern domains using machine learning (ML) algorithms. From a freely available Digital Elevation Model, derived data, and sample landclass boundaries provided by domain experts, our algorithm produces a dense map of the model region in Western Australia. Both random forest and support vector machine classification achieve about 98% classification accuracy with a reasonable runtime of 48 minutes on a single core. We discuss computational resources and study the effect of grid resolution. Larger tiles result in a more contiguous map, while smaller tiles result in a more detailed and, at some point, noisy map. The diversity and distribution of landscapes mapped in this study support previous results. In addition, our results are consistent with the geological trends and main basement features in the region.
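A minimal sketch of the supervised classification step described above, assuming per-pixel DEM-derived features; the feature names and data are illustrative stand-ins, not the study's dataset.

```python
# Classify per-pixel DEM-derived features (e.g., elevation, slope, curvature)
# into landclasses with a random forest; data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))                 # columns: elevation, slope, curvature
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy expert-labelled landclasses

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
# The dense map is then produced by predicting every grid cell / tile.
```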
ARTICLE | doi:10.20944/preprints201804.0184.v1
Subject: Medicine & Pharmacology, General Medical Research Keywords: pharmacy; patient communication; pharmacy communications; interpersonal communications; automated telemarketing telephone calls; telephone messages; automated messages; communication theory; customer relation management; CRM; pharmacy practice
Online: 16 April 2018 (04:31:24 CEST)
Pharmacy personnel often answer telephones to respond to pharmacy customers (subjects) who have received messages from automated systems. This research examines the communication process in terms of how users interact and engage with pharmacies after receiving automated messages. No study has directly addressed automated telephone calls and subjects’ interactions. The purpose of this study is to test the interpersonal communication (IC) process of uncertainty in subjects in receipt of automated telephone calls from pharmacies. Subjects completed a survey of validated scales for Satisfaction (S), Relevance (R), Quality (Q), and Need for Cognitive Closure (NFC). Relationships between S, R, Q, NFC, and subject preference for an automated telephone call (ATC) were analyzed to determine whether subjects contacting pharmacies display information-seeking behavior. This research demonstrates that information seeking occurs if subjects: are dissatisfied with the content of the ATC; perceive that the Q of the ATC is high; perceive that the Q of the ATC is high and like receiving the ATC; or have high NFC and do not like receiving ATCs. Other interactions presented complexities amongst uncertainty and tolerance of NFC within the IC process.
REVIEW | doi:10.20944/preprints202209.0067.v1
Subject: Engineering, Automotive Engineering Keywords: Connected & Automated Vehicles; Navigation; High Definition (HD) Map; Map Representation
Online: 5 September 2022 (13:46:43 CEST)
Many studies in the field of robot navigation have focused on environment representation and localization. The goal of map representation is to summarize spatial information in topological and geometrical abstracts. By providing strong priors, maps improve the performance and reliability of automated robots. Due to the transition to fully automated driving in recent years, there has been a constant effort to design methods and technologies to improve the precision of information about road participants and the environment. Among these efforts is the High Definition (HD) map concept. Making HD maps requires accuracy, completeness, verifiability, and extensibility. Because of the complexity of HD mapping, it is currently expensive and difficult to implement, particularly in an urban environment. In an urban traffic system, the road model is at least a map with sets of roads, lanes, and lane markers. While more research is being dedicated to mapping and localization, a comprehensive review of the various types of map representation is still required. This paper presents a brief overview of map representation, followed by a detailed literature review of HD maps for automated vehicles. The current state of AV mapping is encouraging: the field has matured to a point where detailed maps of complex environments are built in real time and have proved useful. Many existing techniques are robust to noise and can cope with a large range of environments. Nevertheless, there are still open problems for future research. AV mapping will continue to be a highly active research area essential to the goal of achieving full autonomy.
REVIEW | doi:10.20944/preprints202207.0448.v1
Subject: Engineering, Other Keywords: automated vehicles; land use; potential implication; urban mobility; use cases
Online: 29 July 2022 (04:31:50 CEST)
Automated vehicles (AVs), which are expected to enter the market in the near future, represent the current frontier in mobility and urban planning. AVs are assumed to bring substantial benefits to cities in many respects. The present study investigates this broad assumption by conducting a literature review on the possible implications of AVs in cities and by synthesizing the current state of practice of AV pilots to detect trends in their deployment. In this paper, literature findings on AVs’ implications for vehicle ownership, mobility, and land use, as well as thirteen use cases, were synthesized to capture the big picture of AVs in cities. The findings showed that, in the AV pilots, the operation of AVs is limited to routes stretching less than 3.5 km and operating speeds of less than 18 km/h; low speed has been one of the main concerns of participating passengers about using them for daily trips. The results also revealed that although shared AVs are expected in urban mobility, private ownership will stay competitive, since vehicle ownership has been a socio-cultural identity marker throughout the history of the automobile. The findings also underlined that the potential influence of AVs on active mobility is still unclear, as AVs have not been introduced on a larger scale. Regarding AVs’ impact on land use, their introduction results in more effective use of space, but they may cause suburbanization in the long term.
REVIEW | doi:10.20944/preprints202111.0039.v1
Subject: Social Sciences, Business And Administrative Sciences Keywords: sustainable hospitality; sustainable tourism; holistic sustainability; ESG; automated content analysis
Online: 2 November 2021 (10:48:25 CET)
This analytical study presents the evolution of and change in content over time and the emergence of different sustainable tourism (ST) concepts in tourism and hospitality. For this purpose, a comparative Automated Content Analysis (ACA) analyses scientific articles published between 1990, when the first article in this field was published in the Web of Science (WOS), and the end of 2020. Using ACA on the analyzed papers, this research helps explain why and how business models change over time, as well as organizational processes, the importance of information and communication technologies in sustainable tourism strategies, green investments, sustainable standards in tourism and hospitality, and sustainable reporting.
ARTICLE | doi:10.20944/preprints202007.0303.v1
Online: 14 July 2020 (11:31:46 CEST)
Lower Back Pain (LBP) is a condition that needs immediate attention; a person with back pain should see a doctor promptly for treatment. Injury, excessive work, and some medical conditions are causes of back pain, which is common at any age and for different reasons. Due to factors such as previous occupation and degenerative disk disease, the chance of developing lower back pain increases for older people. It hampers people's ability to work and is a common reason for seeking medical treatment; the result is absence from work and an inability to function normally due to pain, creating uncomfortable and debilitating situations. Hence, detecting this condition at an early stage will assist medical experts in suggesting countermeasures to patients. In this paper, detection of lower back pain is implemented by applying an ensemble machine learning technique. This paper proposes a stacking ensemble classifier as an automated tool that predicts the lower back pain tendency of a patient. Experimental results show that the proposed method reaches an accuracy of 76.34%, an f1-score of 0.76, and an MSE of 0.34.
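A minimal sketch of a stacking ensemble on tabular data of this kind; the base learners and meta-learner shown are illustrative choices, not necessarily the authors' exact combination.

```python
# Stacking ensemble: base learners' predictions feed a meta-learner;
# the synthetic data stands in for the lower-back-pain feature table.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=310, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svc", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),  # meta-learner on base predictions
    cv=5)
stack.fit(X_tr, y_tr)
print(f"test accuracy: {stack.score(X_te, y_te):.3f}")
```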
ARTICLE | doi:10.20944/preprints201909.0067.v1
Subject: Physical Sciences, Applied Physics Keywords: fluxgate magnetometer; magnetic sensor; calibration; automated systems; geophysics; geomagnetic field
Online: 6 September 2019 (03:55:08 CEST)
Fluxgate magnetometers require calibration methods appropriate to their application levels and particularities; however, the development of fully controlled calibration procedures presents a particular challenge regarding the inevitable influence of the local geomagnetic field and other external interferences when a laboratory with magnetically shielded walls is not available. In that context, we discuss the development of an automated calibration method for fluxgate magnetometers, considering those limitations in time and space, and avoiding some of the problems commonly found in other proposed solutions to the same challenge. For this task, we designed and built a new set of high-level procedures, electronic systems and software, which perform active testing and automated calibration of fluxgate magnetometers, considering some resource constraints and employing instruments commonly found in electronic calibration laboratories.
ARTICLE | doi:10.20944/preprints201905.0111.v1
Subject: Materials Science, Metallurgy Keywords: pulse volume; signal noise ratio; automated ultrasonic testing; simulation software
Online: 9 May 2019 (12:47:50 CEST)
Titanium’s accelerating usage in global markets is attributable to its distinctive combination of physical and metallurgical properties. The key to best utilizing titanium is to exploit these characteristics, especially as they complement one another in a given application, rather than to just directly substitute titanium for another metal. Titanium alloys are extensively used in aerospace applications such as components in aero-engines and space shuttles, mainly due to their superior strength-to-weight ratio. For these demanding applications, the functionality and reliability of components are of great importance. To increase flight safety, higher-sensitivity inspections are sought for rotating parts. Increased sensitivity can be applied at the billet stage, the forging stage, or both. Inspection of the forging geometry affords the opportunity to apply the highest sensitivity due to the shorter material paths compared with those required for billet inspections. Forging inspection is typically performed for titanium (Ti) rotating parts with immersion inspection and fixed-focus, single-element transducers. Increased gain is required with depth because the ultrasonic beam attenuates with distance and diverges beyond the focus position, which is placed near the surface. The higher gain applied with depth has the effect of increasing the UT noise with depth. The relationships between the UT noise, the selection of the examination technique, and the smallest detectable defect are presented in this paper.
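The gain-versus-depth effect described above can be made explicit with a standard pulse-echo attenuation relation (a textbook formulation, not taken from this paper), assuming a constant attenuation coefficient $\alpha$ in dB per unit length:

```latex
% Pulse-echo amplitude after a round trip to depth z:
A(z) = A_0 \cdot 10^{-2\alpha z / 20}
% The gain needed to keep the echo height constant therefore grows
% linearly with depth, amplifying material noise along with the signal:
G(z) = G_0 + 2\alpha z \quad [\mathrm{dB}]
```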
ARTICLE | doi:10.20944/preprints202212.0278.v1
Subject: Chemistry, Physical Chemistry Keywords: NMR; shift spectra; wavelet packet transform; automated small molecule mixture analysis
Online: 15 December 2022 (09:00:55 CET)
Resolving small molecule mixtures by nuclear magnetic resonance (NMR) spectroscopy has long been of great interest for its precision, reproducibility and efficiency. However, spectral analyses of such mixtures are often highly challenging due to overlapping resonance lines and limited chemical shift windows. The existing experimental and theoretical methods for producing shift NMR spectra to deal with the problem have limited applicability owing to sensitivity issues, inconsistency and/or the requirement of prior knowledge. Recently, we addressed the problem by decoupling multiplet structures in NMR spectra with the wavelet packet transform (WPT) technique. In this work, we developed a scheme for deploying the method to generate highly resolved WPT NMR spectra and predict the composition of the corresponding molecular mixtures from their 1H NMR spectra in an automated fashion. The four-step spectral analysis scheme consists of calculating the WPT spectrum, peak matching against a WPT shift NMR library, and two optimization steps that produce the predicted molecular composition of a mixture. The robustness of the method was tested on an augmented dataset of 1000 molecular mixtures, each containing 3 to 7 molecules. The method successfully predicted the constituent molecules with a median true positive rate of 1.0 across the varying compositions, while a median false positive rate of 0.04 was obtained. The approach can be scaled easily to much larger datasets.
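A minimal sketch of the scheme's first step (a wavelet packet decomposition of a 1D spectrum) using PyWavelets; the wavelet choice, depth, and toy signal are illustrative assumptions, not the authors' settings.

```python
# Wavelet packet decomposition of a 1D trace with PyWavelets.
import numpy as np
import pywt

# Toy stand-in for a 1H NMR spectrum: two overlapping "lines" plus noise.
x = np.linspace(0, 1, 1024)
spectrum = (np.exp(-((x - 0.30) / 0.01) ** 2)
            + np.exp(-((x - 0.32) / 0.01) ** 2)
            + 0.01 * np.random.default_rng(0).normal(size=x.size))

wp = pywt.WaveletPacket(data=spectrum, wavelet="sym8",
                        mode="symmetric", maxlevel=4)
# Approximation branch at the deepest level: a smoothed, coarser profile.
approx = wp["a" * 4].data
print(len(spectrum), "->", len(approx))
```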
ARTICLE | doi:10.20944/preprints202211.0086.v1
Subject: Medicine & Pharmacology, Pathology & Pathobiology Keywords: automated immunoassays; COVID-19; lateral flow immunoassay; performance; SARS-CoV-2
Online: 4 November 2022 (01:59:01 CET)
Background: The duration of vaccines' protective efficacy against SARS-CoV-2 is unknown, so evaluation of the clinical performance of available tests is required. Objectives: To evaluate the clinical performance of three immunoassays for the detection of IgG antibodies generated by mRNA vaccines against SARS-CoV-2. Methods: Two automated immunoassays (Euroimmun Anti-SARS-CoV-2 ELISA IgG and Abbott SARS-CoV-2 CLIA IgG) and one lateral flow immunoassay (LFIA Test Livzon IgG) were tested. 300 samples distributed in 3 groups were analyzed: 100 subjects over 18 and under 45 years old, 100 subjects between 45 and 65 years old, and another 100 over 65 years old. Samples were collected before vaccination and at 21 days and 1, 2, 3 and 6 months post-vaccination. Sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, negative likelihood ratio, and agreement (kappa index) were calculated for each serological test. Results: Maximum sensitivity for IgG was 98.7%, 98.1%, and 97.8% for the Euroimmun ELISA, Abbott CLIA, and Livzon LFIA assays, and maximum specificity for IgG was 99.4%, 99.9%, and 98.4% for ELISA, CLIA and LFIA, respectively, at 3 months after vaccination, with a decrease in antibody levels from the sixth month. The best agreement was observed between ELISA and CLIA: 100% (k = 1.00). The agreement between ELISA, CLIA and LFIA was 99% (k = 0.964) at the second and third month after vaccination. Seroconversion was faster and longer-lasting in the younger age groups. Conclusion: Our study showed equivalent and homogeneous clinical performance for IgG of the three immunoassays after vaccination and that the LFIA assay is the most cost-effective, reliable and accurate for routine use in population studies of seroconversion and seroprevalence.
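A minimal sketch of how the reported performance measures follow from a 2x2 confusion matrix; the counts below are made up for illustration.

```python
# Sensitivity, specificity, predictive values, and kappa from a 2x2 table.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

y_true = [1] * 90 + [0] * 10 + [1] * 5 + [0] * 95   # reference serostatus
y_pred = [1] * 90 + [0] * 10 + [0] * 5 + [0] * 95   # immunoassay result

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"sensitivity: {tp / (tp + fn):.3f}")
print(f"specificity: {tn / (tn + fp):.3f}")
print(f"PPV: {tp / (tp + fp):.3f}, NPV: {tn / (tn + fn):.3f}")
print(f"kappa: {cohen_kappa_score(y_true, y_pred):.3f}")
```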
ARTICLE | doi:10.20944/preprints202104.0404.v1
Subject: Social Sciences, Accounting Keywords: automated assessment; computer science; learning analytics; process mining; programming; sequence mining
Online: 15 April 2021 (09:40:33 CEST)
Learning programming is a complex and challenging task for many students. It involves both understanding theoretical concepts and acquiring practical skills. Hence, analyzing learners’ data from online learning environments alone fails to capture the full breadth of students’ actions if part of their learning process takes place elsewhere. Moreover, existing studies on learning analytics applied to programming education have mainly relied on frequency analysis to classify students according to their approach to programming or to predict academic achievement. However, frequency analysis provides limited insights into the individual time-related characteristics of the learning process. The current study examines students’ strategies when learning programming, combining data from the learning management system and from an automated assessment tool. To gain an in-depth understanding of students’ learning process as well as of the types of learners, we used learning analytics methods that account for the temporal order of learning actions. Our results show that students have special preferences for specific learning resources when learning programming, namely slides that support search, and copy and paste. We also found that videos are relatively less consumed by students, especially while working on programming assignments. Lastly, students resort to course forums to seek help only when they struggle.
REVIEW | doi:10.20944/preprints201810.0653.v3
Subject: Chemistry, General & Theoretical Chemistry Keywords: automated algorithm; molecular dynamics; graph theory; statistical rate theory; kinetics simulations
Online: 2 December 2018 (09:39:19 CET)
The tsscds method, recently developed in our group, discovers chemical reaction mechanisms with minimal human intervention. It employs accelerated molecular dynamics, spectral graph theory, statistical rate theory and stochastic simulations to uncover chemical reaction paths and to solve the kinetics under the experimental conditions. In the present review, its application to solving mechanistic/kinetics problems in different research areas is presented. Examples are given of reactions involved in photodissociation dynamics, mass spectrometry, combustion chemistry and organometallic catalysis. Some planned improvements are also described.
REVIEW | doi:10.20944/preprints202008.0215.v2
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: COVID-19; deep learning; radiography; automated detection; medical imaging; SARS-CoV-2
Online: 19 October 2020 (10:49:25 CEST)
The COVID-19 pandemic has wreaked havoc on the whole world, taking over half a million lives and capsizing the world economy to an unprecedented extent. With the world scampering for a vaccine, early detection and containment are the only redress. Existing diagnostic technologies with high accuracy, like RT-PCR, are expensive and sophisticated, requiring skilled individuals for specimen collection and screening, resulting in lower outreach. So, methods excluding direct human intervention are much sought after, and artificial-intelligence-driven automated diagnosis, especially with radiography images, has captured researchers' interest. This survey provides a detailed inspection of the deep-learning-based automated detection of COVID-19 work done to date, a comparison of the available datasets, methodological challenges such as imbalanced datasets, along with probable solutions using different pre-processing methods, and scopes of future exploration in this arena. We also benchmarked the performance of 315 deep models in diagnosing COVID-19, Normal, and Pneumonia from X-ray images of a custom dataset created from four others. The dataset is publicly available at https://github.com/rgbnihal2/COVID-19-X-ray-Dataset. Our results show that the DenseNet201 model with a Quadratic SVM classifier performs best (accuracy: 98.16%, sensitivity: 98.93%, specificity: 98.77%) and that other similar architectures also maintain high accuracies. This proves that even though radiography images might not be conclusive for radiologists, they can be for deep learning algorithms detecting COVID-19. We hope this extensive review will provide a comprehensive guideline for researchers in this field.
ARTICLE | doi:10.20944/preprints202006.0333.v1
Subject: Keywords: Lung Cancer Prediction; Neural Network; Cross-validation; Gradient Boosting Classifier; Automated tool
Online: 28 June 2020 (09:56:30 CEST)
Lung cancer, also known as lung carcinoma, is a disease in which a malignant tumor leads to uncontrolled cell growth in lung tissue. Lung cancer is one of the most prominent causes of death worldwide. Early detection of this disease can assist medical care units as well as physicians in providing countermeasures to patients. The objective of this paper is to develop an automated tool that takes influential causes of lung cancer as input and detects patients with higher probabilities of being affected by the disease. A neural network classifier accompanied by a cross-validation technique is proposed as the predictive tool. The proposed method is then compared with a baseline Gradient Boosting Classifier to assess its prediction performance.
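A minimal sketch of the comparison described above, with synthetic data standing in for the risk-factor inputs and illustrative model settings.

```python
# Neural network with cross-validation vs. a Gradient Boosting baseline.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

nn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
gbc = GradientBoostingClassifier(random_state=0)

for name, model in (("neural network", nn), ("gradient boosting", gbc)):
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validation
    print(f"{name}: mean CV accuracy {scores.mean():.3f}")
```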
REVIEW | doi:10.20944/preprints202010.0388.v1
Subject: Engineering, Automotive Engineering Keywords: Autism Spectrum Disorder; activity analysis; automated detection; repetitive behavior; abnormal gait; visual saliency
Online: 19 October 2020 (14:49:24 CEST)
Autism Spectrum Disorder (ASD) is a neuro-developmental disorder that limits social interactions, cognitive skills, and abilities. Since ASD can last throughout an affected person's entire life cycle, diagnosis at early onset can yield a significant positive impact. The current medical diagnostic systems (e.g., DSM-5/ICD-10) are somewhat subjective and rely purely on the behavioral observation of symptoms; hence, some individuals often go misdiagnosed or are diagnosed late. Therefore, researchers have focused on developing data-driven automated diagnosis systems with less screening time, low cost, and improved accuracy while significantly reducing professional intervention. Human Activity Analysis (HAA) is considered one of the most promising niches in computer vision research. This paper analyzes its potential in the automated detection of autism by tracking characteristics exhibited by autistic individuals such as repetitive behavior, atypical walking style, and unusual visual saliency. This review provides a detailed inspection of HAA-based autism detection literature published from 2011 onwards, depicting core approaches, challenges, probable solutions, available resources, and scopes of future exploration in this arena. According to our study, deep learning outperforms machine learning in ASD detection, with classification accuracies of 76% to 95% on different datasets comprising video, image, or skeleton data that recorded participants performing a large number of actions. However, machine learning provides satisfactory results on datasets with a small number of action classes, with accuracies ranging from 60% to 93% across numerous studies. We hope this extensive review will provide a comprehensive guideline for researchers in this field.
TECHNICAL NOTE | doi:10.20944/preprints202003.0038.v1
Subject: Earth Sciences, Environmental Sciences Keywords: Okavango Delta; inundation maps; inundation extent; Landsat; Google Earth Engine; automated time series
Online: 3 March 2020 (11:25:49 CET)
Accurate inundation maps of flooded wetlands and rivers are a critical resource for their management and conservation. In this paper we automate a method (thresholding of the short-wave infrared band) for classifying inundation, using Landsat imagery and Google Earth Engine. We demonstrate the method in the Okavango Delta, northern Botswana, a complex case study due to the spectral overlap between inundated areas covered with aquatic vegetation and dryland vegetation classes on satellite imagery. Inundation classifications in the Okavango Delta have predominantly been implemented on broad-spatial-resolution images. We present the longest time series to date (1990-2019) of inundation maps at high spatial resolution (30 m) for the Okavango Delta. We validated the maps using image-based and in situ data accuracy assessments, with accuracy ranging from 91.5% to 98.1%. Use of Landsat imagery resulted in consistently lower estimates of inundation extent than previous studies, likely due to the increased number of mixed pixels that occur when using broad-spatial-resolution imagery, which can lead to overestimation of the size of inundated areas. We provide the inundation maps and Google Earth Engine code for public use.
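A minimal sketch of the core thresholding step in the Google Earth Engine Python API; the collection ID, band name, dates, and threshold value are assumptions for illustration, not the paper's calibrated values.

```python
# SWIR-band thresholding for inundation classification in Google Earth Engine.
import ee

ee.Initialize()

image = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
         .filterDate("2019-08-01", "2019-09-01")
         .filterBounds(ee.Geometry.Point(22.8, -19.3))  # Okavango Delta area
         .median())

swir = image.select("SR_B6")   # Landsat 8 SWIR-1 band (Collection 2 naming)
inundated = swir.lt(10000)     # water: low SWIR reflectance (toy DN cutoff)
# 'inundated' is a binary image; export or area statistics would follow here.
```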
ARTICLE | doi:10.20944/preprints202001.0205.v1
Subject: Behavioral Sciences, Other Keywords: itch; scratch; automated real-time detection; machine-learning based image classifier; image sharpness
Online: 19 January 2020 (03:13:48 CET)
A 'little brother' of pain, itch is an unpleasant sensation that creates a specific urge to scratch. To date, various machine-learning based image classifiers (MBICs) have been proposed for quantitative analysis of itch-induced scratch behaviour of laboratory animals in an automated, non-invasive, inexpensive and real-time manner. In spite of MBICs' advantages, the overall performance (accuracy, sensitivity and specificity) of current MBIC approaches remains inconsistent, with values varying from ~50% to ~99%, and the underlying reasons have yet to be investigated further, both computationally and experimentally. To look into this variation in the performance of MBICs in automated detection of itch-induced scratch, this article focuses on the experimental data recording step, and reports for the first time that MBICs' overall performance is inextricably linked to the sharpness of the experimentally recorded video of laboratory animal scratch behaviour. This article furthermore demonstrates for the first time that a linear correlation exists between video sharpness and the overall performance (accuracy and specificity, but not sensitivity) of MBICs, and highlights the primary role of experimental data recording in rapid, accurate and consistent quantitative assessment of laboratory animal itch.
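One common way to quantify per-frame video sharpness is the variance of the Laplacian; the sketch below uses that metric for illustration (it is not necessarily the measure used by the authors), and the file name is hypothetical.

```python
# Per-frame sharpness via variance of the Laplacian (a common focus metric).
import cv2

def frame_sharpness(gray_frame) -> float:
    """Higher variance of the Laplacian response = sharper frame."""
    return cv2.Laplacian(gray_frame, cv2.CV_64F).var()

cap = cv2.VideoCapture("scratch_recording.mp4")  # hypothetical file name
scores = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    scores.append(frame_sharpness(gray))
cap.release()
if scores:
    print(f"mean sharpness over {len(scores)} frames: {sum(scores)/len(scores):.1f}")
```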
ARTICLE | doi:10.20944/preprints202102.0186.v1
Subject: Earth Sciences, Atmospheric Science Keywords: gold; till; gold grain recovery; gold grain size; automated SEM; gold exploration; drift prospecting
Online: 8 February 2021 (11:01:55 CET)
The quantitative and qualitative assessment of gold grains from samples of glacial till is a well-established method for exploring gold deposits hidden under glaciated cover. This method, widely used by industry and responsible for numerous successes in locating gold deposits in glaciated terrain, is still based on artisanal gravity separation techniques and visual identification. However, being artisanal, it is limited by inconsistent recoveries and the difficulty of visually identifying the predominantly small gold grains. These limitations hinder its capability to decipher subtle or complex signals. To improve detection limits through the recovery of small gold grains, a new approach has recently been introduced in the industry (commercially referred to as the "ARTGold" procedure) using an optimized miniature sluice box coupled with an automated scanning electron microscopy routine. The capabilities of this improved method are highlighted by comparing till surveys conducted around the Borden gold deposit (Ontario, Canada) using the conventional and improved methods at both local and regional scales. Relative to the conventional approach, the improved method recovered almost one order of magnitude more gold grains from samples (regional and down-ice of mineralization), predominantly in the small size fractions. Increasing the counts in low-abundance regional samples enables better discrimination between background signals and significant dispersions. The method offers an alternative for improving the characterization of gold dispersal in glaciated terrain and the related gold deposit footprints.
ARTICLE | doi:10.20944/preprints202009.0381.v1
Subject: Life Sciences, Biotechnology Keywords: high throughput screening; rapid phenotyping; model-based experimental design; Escherichia coli; automated bioprocess development
Online: 17 September 2020 (07:34:19 CEST)
In bioprocess development, the host and the genetic construct for a new biomanufacturing process are selected in the early developmental stages. This decision, made at the screening scale with very limited information about the performance of the selected cell factory in larger reactors, has a major influence on the performance of the final process. To overcome this, scale-down approaches are essential for running screenings that show the real cell factory performance under industrial-like conditions. We present a fully automated robotic facility with 24 parallel mini-bioreactors that is operated by a model-based adaptive input design framework for the characterization of clone libraries under scale-down conditions. The cultivation operation strategies are computed and continuously refined based on a macro-kinetic growth model that is continuously re-fitted to the available experimental data. The added value of the approach is demonstrated with 24 parallel fed-batch cultivations in a mini-bioreactor system with eight different Escherichia coli strains in triplicate. The 24 fed-batches ran under the desired conditions, generating sufficient information to identify the fastest-growing strain in an environment with varying glucose concentrations similar to industrial-scale bioreactors.
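A minimal sketch of re-fitting a growth model to accumulating fed-batch data, with a simple Monod relation standing in for the paper's macro-kinetic model and synthetic observations.

```python
# Re-fit growth-model parameters as new observations arrive (Monod stand-in).
import numpy as np
from scipy.optimize import curve_fit

def monod(S, mu_max, K_s):
    """Specific growth rate as a function of glucose concentration S."""
    return mu_max * S / (K_s + S)

rng = np.random.default_rng(1)
S_obs = np.linspace(0.05, 5.0, 30)                        # g/L glucose
mu_obs = monod(S_obs, 0.6, 0.3) + rng.normal(0, 0.01, S_obs.size)

# Re-fit whenever new data arrive; p0 is the previous parameter estimate.
(mu_max_hat, K_s_hat), _ = curve_fit(monod, S_obs, mu_obs, p0=(0.5, 0.2))
print(f"mu_max = {mu_max_hat:.3f} 1/h, K_s = {K_s_hat:.3f} g/L")
```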
Subject: Engineering, Electrical & Electronic Engineering Keywords: automated visual inspection; convolutional neural network; deep learning; pattern classification; semiconductor inspection; wafer map
Online: 7 April 2020 (11:30:38 CEST)
This article presents an automated vision-based algorithm for the die-scale inspection of wafer images captured using scanning acoustic tomography (SAT). This algorithm can find defective and abnormal die-scale patterns and produce a wafer map to visualize the distribution of defects and anomalies on the wafer. The main procedures include standard template extraction, die detection through template matching, pattern candidate prediction through clustering, and pattern classification through deep learning. To conduct the template matching, we first introduce a two-step method to obtain a standard template from the original SAT image. Subsequently, the majority of the die patterns are detected through template matching. Thereafter, the columns and rows formed by the detected dies are predicted using a clustering method, producing an initial wafer map composed of detected die patterns and predicted pattern candidates. In the final phase of the proposed algorithm, we implement a deep-learning-based model to determine defective and abnormal patterns in the wafer map. The experimental results verified the effectiveness and efficiency of our proposed algorithm. In conclusion, the proposed method performs well in identifying defective and abnormal die patterns, and produces a wafer map that presents important information for solving wafer fabrication issues.
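A minimal sketch of the die-detection step via template matching with OpenCV; the file names and score threshold are illustrative, not from the paper.

```python
# Die detection via normalized cross-correlation template matching.
import cv2
import numpy as np

wafer = cv2.imread("sat_wafer.png", cv2.IMREAD_GRAYSCALE)     # hypothetical SAT image
template = cv2.imread("die_template.png", cv2.IMREAD_GRAYSCALE)

scores = cv2.matchTemplate(wafer, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(scores >= 0.8)          # locations matching the standard die
h, w = template.shape
detections = [(x, y, w, h) for x, y in zip(xs, ys)]
print(f"{len(detections)} die candidates found")
# Clustering the (x, y) centers into rows/columns would yield the wafer-map grid.
```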
ARTICLE | doi:10.20944/preprints201906.0251.v1
Subject: Physical Sciences, Applied Physics Keywords: video microscopy; imaging; automated data acquisition; nanoparticle tracking; measurement embedded applications; open-source software
Online: 25 June 2019 (12:53:50 CEST)
We introduce PyNTA, a modular instrumentation software package for live particle tracking. By using the multiprocessing library of Python and the distributed messaging library pyZMQ, PyNTA allows users to acquire images from a camera at close to maximum readout bandwidth while simultaneously performing computations on each image on a separate processing unit. This publisher/subscriber pattern generates a small overhead and leverages the multi-core capabilities of modern computers. We demonstrate the capabilities of the PyNTA package in the featured application of nanoparticle tracking analysis. Real-time particle tracking on megapixel images at a rate of 50 Hz is presented. Reliable live tracking reduces the required storage capacity for particle tracking measurements by a factor of approximately 10^3 compared with raw data storage, allowing for a virtually unlimited duration of measurements.
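A minimal sketch of the publisher/subscriber pattern described above using pyZMQ; the topic name, port, and frame format are illustrative, not PyNTA's actual wiring.

```python
# One process publishes camera frames; another consumes them for tracking.
import numpy as np
import zmq

def publisher():
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.PUB)
    sock.bind("tcp://127.0.0.1:5555")
    frame = np.zeros((1024, 1024), dtype=np.uint16)   # stand-in camera frame
    sock.send_multipart([b"frame", frame.tobytes()])  # non-blocking fan-out

def subscriber():
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.SUB)
    sock.connect("tcp://127.0.0.1:5555")
    sock.setsockopt(zmq.SUBSCRIBE, b"frame")
    topic, payload = sock.recv_multipart()
    frame = np.frombuffer(payload, dtype=np.uint16).reshape(1024, 1024)
    # ...run particle localization on `frame` in this separate process...
```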
Subject: Engineering, Automotive Engineering Keywords: vehicle detection; automated driving; autonomous vehicles; measurement campaign; 5G; vehicle sensors; infrastructure sensors; UHD map
Online: 15 March 2021 (16:46:28 CET)
The paper presents a measurement campaign carried out on a real-world motorway stretch in Hungary with the participation of both industrial and academic partners from Austria and Hungary. The measurement included vehicle-based as well as infrastructure-based sensor data. The obtained results will be extremely useful for future automotive R&D activities due to the available ground truth for static and dynamic content. The aim of the measurement campaign was twofold. On the one hand, the road geometry was mapped with high precision in order to build an Ultra High Definition (UHD) map of the test road. On the other hand, the vehicles - equipped with differential Global Navigation Satellite Systems (GNSS) for ground truth localization - carried out special test scenarios while collecting detailed data using different sensors. All test runs were recorded by both vehicles and infrastructure. As a complementary task, the available 5G network was monitored and tested. The paper also showcases application examples based on the measurement campaign data, in which the added value of having access to the ground truth labeling and the created UHD map of the motorway section becomes apparent. In order to present our work transparently, part of the measured data has been shared openly so that interested automotive as well as academic parties may use it for their own purposes.
Subject: Biology, Agricultural Sciences & Agronomy Keywords: automated machine learning; Neural Architecture Search; high-throughput plant phenotyping; wheat lodging assessment; unmanned aerial vehicle.
Online: 1 February 2021 (14:11:08 CET)
Automated machine learning (AutoML) has been heralded as the next wave in artificial intelligence with its promise to deliver high-performance end-to-end machine learning pipelines with minimal effort from the user. However, despite AutoML showing great promise for computer vision tasks, to the best of our knowledge, no study has used AutoML for image-based plant phenotyping. To address this gap in knowledge, we examined the application of AutoML for image-based plant phenotyping using wheat lodging assessment with UAV imagery as an example. We compared the performance of an open-source AutoML framework, AutoKeras, in image classification and regression tasks to transfer learning using modern convolutional neural network (CNN) architectures. For image classification, which classified plot images as lodged or non-lodged, transfer learning with Xception and DenseNet-201 achieved the best classification accuracy of 93.2%, whereas AutoKeras had 92.4% accuracy. For image regression, which predicted lodging scores from plot images, transfer learning with DenseNet-201 had the best performance (R^2 = 0.8303, RMSE = 9.55, MAE = 7.03, MAPE = 12.54%), followed closely by AutoKeras (R^2 = 0.8273, RMSE = 10.65, MAE = 8.24, MAPE = 13.87%). Interestingly, in both tasks, the AutoKeras models had up to 40-fold faster inference times compared with the pretrained CNNs. The merits and drawbacks of AutoML compared with transfer learning for image-based plant phenotyping are discussed.
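A minimal sketch of the AutoKeras image-classification setup compared above; the trial and epoch counts and the random stand-in data are illustrative, not the study's configuration.

```python
# AutoML image classification with the open-source AutoKeras framework.
import autokeras as ak
import numpy as np

x_train = np.random.rand(100, 64, 64, 3).astype("float32")  # stand-in plot images
y_train = np.random.randint(0, 2, size=100)                 # lodged / non-lodged

clf = ak.ImageClassifier(max_trials=3, overwrite=True)  # NAS over candidate CNNs
clf.fit(x_train, y_train, epochs=2)
print(clf.predict(x_train[:5]))
```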
ARTICLE | doi:10.20944/preprints201710.0187.v1
Subject: Mathematics & Computer Science, Analysis Keywords: medical image classification; local binary patterns; characteristic curves; whole slide image processing; automated HER2 scoring
Online: 31 October 2017 (03:10:22 CET)
This paper presents novel feature descriptors and classification algorithms for automated scoring of HER2 in Whole Slide Images (WSI) of breast cancer histology slides. Since a large amount of processing is involved in analyzing WSI images, the primary design goal has been to keep the computational complexity to the minimum possible level and to use simple, yet robust, feature descriptors that can provide accurate classification of the slides. We propose two types of feature descriptors that encode important information about staining patterns and the percentage of staining present in ImmunoHistoChemistry (IHC) stained slides. The first descriptor is called a characteristic curve, which is a smooth non-increasing curve that represents the variation of the percentage of staining with saturation levels. The second new descriptor introduced in this paper is an LBP feature curve, which is also a non-increasing smooth curve that represents the local texture of the staining patterns. Both descriptors show excellent interclass variance and intraclass correlation, and are suitable for the design of automatic HER2 classification algorithms. This paper gives the detailed theoretical aspects of the feature descriptors and also provides experimental results and comparative analysis.
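A minimal sketch of the characteristic-curve idea as described (percentage of stained pixels as a function of the saturation level, non-increasing by construction); the stain segmentation details are simplified assumptions.

```python
# Characteristic curve: stained-pixel percentage vs. saturation threshold.
import numpy as np

def characteristic_curve(saturation: np.ndarray, levels: int = 32) -> np.ndarray:
    """saturation: HSV saturation channel of an IHC tile, values in [0, 1]."""
    thresholds = np.linspace(0.0, 1.0, levels)
    # Fraction of pixels whose saturation exceeds each level: non-increasing.
    return np.array([(saturation >= t).mean() * 100.0 for t in thresholds])

tile_sat = np.random.default_rng(0).random((256, 256))   # stand-in tile
curve = characteristic_curve(tile_sat)
assert all(a >= b for a, b in zip(curve, curve[1:]))      # monotonicity check
print(curve[:5])
```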
ARTICLE | doi:10.20944/preprints201612.0141.v1
Subject: Earth Sciences, Environmental Sciences Keywords: automated water extraction; Landsat 8 Operational Land Imager (OLI); modified histogram bimodal method (MHBM); remote sensing
Online: 29 December 2016 (10:49:38 CET)
Surface water distributions extracted from remote sensing data have been used in water resource assessment, coastal management, and environmental change studies. Traditional manual methods for extracting water bodies cannot satisfy the requirements for mass processing of remote sensing data; therefore, accurate automated extraction of such water bodies has remained a challenge. The histogram bimodal method (HBM) is a frequently used objective tool for threshold selection in image segmentation. The threshold is determined by seeking twin peaks and the valley value between them; however, automatically calculating the threshold is difficult because complex surfaces and image noise lead to imperfect twin peaks (single or multiple peaks). We developed an operational automated water extraction method, the modified histogram bimodal method (MHBM). The MHBM defines the threshold range for water extraction from mass static data; therefore, it does not require the identification of twin histogram peaks. It then seeks the minimum value within the threshold range to achieve automated threshold selection. We calibrated the MHBM for many lakes in China using Landsat 8 Operational Land Imager (OLI) images, for which the relative error (RE) and squared correlation coefficient (R^2) of the threshold accuracy were found to be 2.1% and 0.96, respectively. The RE and root-mean-square error (RMSE) of the area accuracy of the MHBM were 0.59% and 7.4 km^2. The results show that the MHBM can easily be applied to mass time-series remote sensing data to calculate water thresholds within water index images and successfully extract the spatial distribution of large water bodies automatically.
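A minimal sketch of the MHBM threshold rule as described: instead of locating twin peaks, search for the histogram minimum inside a pre-calibrated threshold range (the range and data below are illustrative stand-ins).

```python
# MHBM-style thresholding: histogram minimum within a calibrated range.
import numpy as np

def mhbm_threshold(index_values: np.ndarray, lo: float, hi: float, bins: int = 256):
    """index_values: water-index image pixels; [lo, hi]: calibrated range."""
    counts, edges = np.histogram(index_values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    in_range = (centers >= lo) & (centers <= hi)
    # The valley: bin with the fewest pixels inside the calibrated range.
    return centers[in_range][np.argmin(counts[in_range])]

rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(-0.4, 0.1, 50000),   # land mode
                         rng.normal(0.5, 0.1, 20000)])   # water mode
t = mhbm_threshold(pixels, lo=-0.2, hi=0.3)
water_mask = pixels > t
print(f"threshold = {t:.3f}")
```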
ARTICLE | doi:10.20944/preprints202205.0352.v1
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Quality real-time systems; Automated Machine Learning; Real-time embedded control systems; Cyber-physical systems; Neural Networks
Online: 25 May 2022 (11:17:19 CEST)
A correct system design can be systematically obtained from a specification model of a real-time system that integrates hybrid measurements in a realistic industrial environment; this has previously been carried out through complete MATLAB/Simulink/Stateflow models. However, there is widespread interest in carrying out that modeling with Machine Learning models, which can be understood as Automated Machine Learning for real-time systems that present some degree of hybridization. An induction motor controller that must maintain a constant air flow through a filter is one such system, and it is discussed in this paper as a case study of a closed-loop control system. The article discusses a practical application of ML methods that demonstrates how to replace such a closed loop in industrial control systems with a Simulink block generated from neural networks, showing how the proposed procedure can be applied to derive complete hybrid system designs with artificial neural networks (ANN). In the proposed ANN-based method for designing a real-time hybrid system with continuous and discrete components, we use a typical neural network design with the usual phases: training, validation, and testing. The generated output of the model is made up of the reference variable values of the cyber-physical system, which represent the functional and dynamic aspects of the model. They are used to feed Simulink/Stateflow blocks in the real target system.
ARTICLE | doi:10.20944/preprints202107.0638.v1
Subject: Keywords: Image Processing; Automated Plant Disease Detection; Histogram of Oriented Gradients (HOG); Local Binary Pattern (LBP); Support Vector Machine (SVM)
Online: 28 July 2021 (17:18:04 CEST)
On earth, plants play the most important part. Every organ of a plant plays a vital role in the ecological field as well as the medicinal field. Many species of plants exist across the earth, and different plants have different diseases; therefore, plants and their diseases need to be identified to prevent loss. Identifying plants and their diseases manually is very time-consuming. In this research, an automatic plant and plant-disease detection system is proposed. For experimental purposes, high-quality leaf images were used for training and testing. For detecting the healthy and diseased areas in a leaf, region-based and color-based region thresholding techniques were used. For feature extraction, the Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) methods were applied. Finally, for classification, two-class and multi-class Support Vector Machines (SVM) were used. It is observed that both feature extraction processes with SVM give 99% accuracy. Finally, a graphical user interface was created so that all users can operate the automated system.
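A minimal sketch of the described pipeline (HOG and LBP features fed to an SVM); the images, parameters, and training-accuracy check are illustrative stand-ins.

```python
# HOG + LBP features from leaf images, classified with an SVM.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC

def leaf_features(gray: np.ndarray) -> np.ndarray:
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_vec, lbp_hist])

rng = np.random.default_rng(0)
images = rng.random((40, 128, 128))      # stand-in grayscale leaf images
labels = rng.integers(0, 2, size=40)     # healthy / diseased
X = np.array([leaf_features(img) for img in images])

clf = SVC(kernel="rbf").fit(X, labels)   # two-class SVM
print(f"training accuracy: {clf.score(X, labels):.2f}")
```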
ARTICLE | doi:10.20944/preprints202003.0109.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Automated Fare Collection (AFC); Smart Card; Crowding; Practical Waiting Area; Subway Station Platform; Time-Varying; Late-Night Peak
Online: 6 March 2020 (09:02:01 CET)
Management of crowding at subway platforms is essential for improving services, preventing train delays and ensuring passenger safety. Establishing effective measures to mitigate crowding at platforms requires accurate estimation of actual crowding levels. At present, there are temporal and spatial constraints, since subway platform crowding is assessed only at certain locations, done every 1-2 years, with counting performed manually. In contrast, smart card data are real-time big data generated 24 hours a day and are thus deemed appropriate basic data for estimating crowding. This study proposes the use of smart card data to create a model that dynamically estimates crowding. It first defines crowding as demand, which can be translated into passengers dynamically moving along a subway network. In line with this, our model also identifies the travel trajectory of individual passengers and is able to calculate the passenger flow, which concentrates and disperses at the platform, every minute. Lastly, the level of platform crowding is estimated in a way that considers the effective waiting area of each platform structure.
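A minimal sketch of turning smart-card tap records into per-minute passenger flows at a platform; the column names and records are illustrative assumptions.

```python
# Per-minute tap-in/tap-out flows from smart-card records with pandas.
import pandas as pd

taps = pd.DataFrame({
    "card_id": [1, 2, 3, 4],
    "station": ["A", "A", "A", "A"],
    "event": ["in", "in", "out", "in"],
    "time": pd.to_datetime(["2020-03-02 08:00:10", "2020-03-02 08:00:40",
                            "2020-03-02 08:01:05", "2020-03-02 08:01:30"]),
})

# Passengers concentrating at (tap-in) and dispersing from (tap-out) the
# platform, aggregated minute by minute.
flow = (taps.set_index("time")
            .groupby("event")
            .resample("1min")
            .size()
            .rename("passengers"))
print(flow)
# Dividing the accumulated in-flow by the platform's effective waiting area
# would yield a dynamic crowding level per minute.
```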
CONCEPT PAPER | doi:10.20944/preprints202107.0557.v1
Subject: Social Sciences, Other Keywords: Industry 4.0; Cyber-Physical Systems (CPS); Internet of Things (IoT); Human factors; Automated production Systems; Social interactions; Social Networks
Online: 26 July 2021 (09:47:59 CEST)
Since the 1970s, the application of microprocessors in industrial machinery and the development of computer systems have transformed the manufacturing landscape. The rapid integration and automation of production systems have outpaced the development of suitable human design criteria, creating a deepening gap where the human factor was seen as an important source of errors and disruptions. Today the situation seems different: the scientific and public debate about the concept of Industry 4.0 has raised awareness of the central role humans have to play in manufacturing systems, in whose design they must be considered from the very beginning. The future of industrial systems, as represented by Industry 4.0, will rely on the convergence of several research fields such as Intelligent Manufacturing Systems (IMS), Cyber-Physical Systems (CPS) and the Internet of Things (IoT), but also socio-technical fields such as social approaches within technical systems. This article deals with the different human dimensions associated with CPS and IoT and focuses on the conceptual evolution of automation toward improving the sociability of such automated production systems, consequently putting the human back in the loop. Hereby, our aim is to take stock of current research trends and to show the importance of integrating human operators as part of a socio-technical system based on autonomous and intelligent products or resources. As a result, different models of sociability, as ways to integrate humans into the design and development of future automated production systems, were identified from the literature and analysed.
REVIEW | doi:10.20944/preprints202009.0431.v1
Subject: Social Sciences, Organizational Economics & Management Keywords: sustainability; Industry 4.0; automated content analysis; sustainable investment; corporate social responsibility; sustainable standards; sustainable reporting; smart manufacturing; renewable energy; cleaner production
Online: 18 September 2020 (11:11:47 CEST)
(1) Background: In the time of the 4th Industrial Revolution, or Industry 4.0, a conglomerate of technical and social inventions, political contexts, socio-cultural circumstances, environmental policies, business models, and economic policies has emerged. Sustainability policy, in theory and practice, aims to deal with the effects of all these factors and to make decisions that ensure both social and economic development sustainably. The question is how to familiarize oneself with the current knowledge about the relationship between Industry 4.0 and sustainability; (2) Methods: This research utilizes an automated content analysis method to analyse scientific journals, newspapers and magazines; (3) Results: We found that the scientific literature focuses more on changes in business models, production processes and technologies that enable sustainable development, while newspaper and magazine articles write more about sustainable or green investment, sustainable standards and sustainable reporting. Newspapers, as well as some of the latest research journals, include articles on the COVID-19 outbreak and its effect on the economy and the environment. Indeed, the outbreak of the virus brings new thinking to the reorganization of the complex relationships between consumers, businesses and the state; (4) Conclusions: The comparison of the analyses shows that both types of literature, scientific and professional, cover common topics related to clean production, emissions, renewable energy, climate change, sustainable investments and corporate sustainability. An urgent global issue that extends all over the world is the promotion of energy-saving technologies and the reduction of carbon dioxide emissions.
ARTICLE | doi:10.20944/preprints201905.0090.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: intelligence; inductive methods; deductive methods; pseudorandom number; artificial intelligence; Prolog; Otter; Z3; deep learning; ensemble methods; automated reasoning; coin-weighing puzzles
Online: 8 May 2019 (10:03:46 CEST)
This paper briefly reviews the state of the art in artificial intelligence, covering both inductive and deductive methods. Deep learning and ensemble machine learning belong to the inductive methods, while automated reasoning implemented in deductive computer languages (Prolog, Otter, and Z3) is based on deductive methods. In inductive methods, pseudorandom numbers are used to build the sophisticated decision trees behind systems for Go, Shogi, and quiz bowl questions. This paper demonstrates how to use pseudorandom numbers judiciously for solving coin-weighing puzzles with a deductive method. The Monte Carlo approach is a general-purpose problem-solving technique based on random numbers, and the proposed method belongs to this family: pseudorandom numbers play the key role of generating constrained solution candidates for coin-weighing puzzles, while deductive rules are used to verify them. This may be the first attempt in which every solution candidate is generated solely by pseudorandom numbers while deductive rules perform the verification. The performance of the proposed method was measured against existing open-source codes on the 12-coin and 24-coin puzzles.
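As an illustration of the generate-and-verify idea described above, the following sketch draws pseudorandom weighing plans for the classic 12-coin puzzle (one fake coin, heavier or lighter, to be identified in three weighings) and deductively checks that every fault hypothesis yields a unique outcome signature. This is a sketch of the strategy under stated assumptions, not the paper's code.

```python
# Generate-and-verify for the 12-coin puzzle: candidate weighing plans
# are drawn pseudorandomly; a deductive check then verifies that every
# (coin, heavy/light) fault produces a distinct sequence of tilts.
import random

N_COINS, N_WEIGHINGS = 12, 3

def random_plan(rng):
    """Assign each coin to left (-1), off (0), or right (+1) per weighing,
    keeping both pans equally loaded so the scale reading is meaningful."""
    plan = []
    for _ in range(N_WEIGHINGS):
        while True:
            row = [rng.choice((-1, 0, 1)) for _ in range(N_COINS)]
            if sum(v == -1 for v in row) == sum(v == 1 for v in row) > 0:
                plan.append(row)
                break
    return plan

def is_valid(plan):
    """Deductive check: each fault hypothesis must yield a unique,
    non-trivial outcome signature across the three weighings."""
    signatures = set()
    for coin in range(N_COINS):
        for sign in (+1, -1):  # +1: coin is heavy, -1: coin is light
            sig = tuple(sign * row[coin] for row in plan)
            if sig == (0,) * N_WEIGHINGS or sig in signatures:
                return False
            signatures.add(sig)
    return True

rng = random.Random(0)
attempts = 0
while True:
    attempts += 1
    if is_valid(random_plan(rng)):
        break
print(f"found a valid non-adaptive weighing plan after {attempts} candidates")
```

A plan passes only if all 24 possible faults map to 24 distinct tilt sequences, which is exactly the kind of verification the paper delegates to deductive rules while randomness supplies the candidates.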
ARTICLE | doi:10.20944/preprints202107.0087.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Electric Vehicles; Stationary Battery Energy Storage System; Battery Automated System; Online State Estimation; Thermal Modeling; First-order model; Second-order Model; Kalman Filtering
Online: 5 July 2021 (10:11:31 CEST)
Estimation of core and surface temperature is one of the crucial functions of a lithium-ion Battery Management System (BMS) for effective thermal management, fault detection, and operational safety. While it is impractical to measure core temperature with physical sensors, implementing a complex estimation strategy on a low-cost on-board BMS is also challenging because of the computational and implementation costs. Typically, a temperature estimation scheme consists of a heat generation model and a heat transfer model, and several researchers have already proposed a range of thermal models with different levels of accuracy and complexity. Broadly, there are first-order and second-order heat capacitor-resistor-based thermal models of lithium-ion batteries (LIBs) for core and surface temperature estimation. This paper presents a detailed comparative study of these two models, using extensive laboratory test data and simulation studies, to assess their suitability for online prediction on an on-board BMS. The aim is to determine whether the prediction accuracy of a second-order model justifies the additional modelling complexity, experimental effort, and computational cost compared with a first-order model. Both thermal models, along with the parameter estimation scheme, are modelled and simulated in the MATLAB/Simulink environment and validated using laboratory test data from a cylindrical 18650 LIB cell. Further, a Kalman filter with appropriate process and measurement noise levels is used to estimate the core temperature from the measured surface and ambient temperatures. Results from the first-order and second-order models are then analyzed for comparison.
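For orientation, the sketch below pairs a first-order (two-state) lumped RC thermal model with a linear Kalman filter that estimates core temperature from the measured surface temperature. All parameter and noise values are illustrative assumptions, not the identified parameters of the 18650 cell studied here, and the paper's own implementation is in MATLAB/Simulink rather than Python.

```python
# First-order RC thermal model + linear Kalman filter: the core
# temperature is inferred from noisy surface readings.
import numpy as np

# Illustrative thermal parameters (assumed, not identified values)
Rc, Ru = 2.0, 3.0        # conduction / convection resistances [K/W]
Cc, Cs = 60.0, 4.0       # core / surface heat capacities [J/K]
dt = 1.0                 # sample time [s]

# Continuous dynamics, x = [T_core, T_surf], inputs u = [Q_gen, T_amb]
A = np.array([[-1/(Rc*Cc),            1/(Rc*Cc)],
              [ 1/(Rc*Cs), -1/(Rc*Cs) - 1/(Ru*Cs)]])
B = np.array([[1/Cc, 0.0],
              [0.0,  1/(Ru*Cs)]])
Ad = np.eye(2) + dt * A          # forward-Euler discretization
Bd = dt * B
C = np.array([[0.0, 1.0]])       # only the surface temperature is measured

Q_kf = np.diag([1e-3, 1e-3])     # process noise covariance (assumed)
R_kf = np.array([[0.05]])        # measurement noise covariance (assumed)

x_hat, P = np.array([25.0, 25.0]), np.eye(2)

def kf_step(x_hat, P, u, y):
    """One predict/update cycle of the linear Kalman filter."""
    x_pred = Ad @ x_hat + Bd @ u
    P_pred = Ad @ P @ Ad.T + Q_kf
    S = C @ P_pred @ C.T + R_kf
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + (K @ (y - C @ x_pred)).ravel()
    P_new = (np.eye(2) - K @ C) @ P_pred
    return x_new, P_new

# Toy run: constant 2 W heat generation at 25 degC ambient
rng = np.random.default_rng(0)
x_true = np.array([25.0, 25.0])
for _ in range(600):
    u = np.array([2.0, 25.0])
    x_true = Ad @ x_true + Bd @ u
    y = np.array([x_true[1] + rng.normal(0.0, 0.2)])  # noisy surface reading
    x_hat, P = kf_step(x_hat, P, u, y)
print(f"estimated core temperature: {x_hat[0]:.2f} degC "
      f"(true: {x_true[0]:.2f} degC)")
```

A second-order model would add further RC stages to capture faster thermal dynamics; the filter structure stays the same, which is what makes the accuracy-versus-complexity comparison in the paper meaningful.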
ARTICLE | doi:10.20944/preprints202106.0020.v1
Subject: Life Sciences, Virology Keywords: Vesicular Stomatitis; Herpes Simplex; Yellow Fever; Animal Viruses; Plaque Assay; Real-time; Live Cell Imaging; Automated Image Analysis; DNA Fluorescent Dyes; Antiviral Screening
Online: 1 June 2021 (10:40:37 CEST)
Conventional plaque assays rely on overlays to restrict viral spread, allowing the formation of distinct foci that grow over time as the replication cycle continues, leading to countable plaques that are visualized with standard techniques such as crystal violet, neutral red, or immunolabeling. This classical approach takes several days before plaques are large enough to be visualized and counted, with some variation due to subjectivity in plaque recognition. Since plaques are clonal lesions produced by virus-induced cytopathic effect, we applied DNA fluorescent dyes with differential cell permeability to visualize them by live cell imaging. We could observe different stages of that cytopathic effect, corresponding to an early wave of cells with chromatin condensation followed by a wave of dead cells with membrane permeabilization, within plaques generated by different animal viruses. This approach enables automated plaque identification by image analysis, increasing single-plaque resolution compared with crystal violet counterstaining, and extends to plaque tracking and plaque reduction assays for testing compounds for both antiviral and cytotoxic activities. This fluorescent real-time plaque assay adds to next-generation technologies by combining a robust classical method with modern fluorescence microscopy and image analysis for future applications in virology.
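As a rough illustration of such automated plaque identification, the sketch below smooths a fluorescence image, thresholds it, labels connected foci, and filters them by area. The threshold and size cut-offs are hypothetical, and the abstract does not specify the authors' actual image-analysis pipeline.

```python
# Minimal plaque-counting sketch: smooth, threshold, label connected
# bright foci, and discard specks below a minimum area.
import numpy as np
from scipy import ndimage

def count_plaques(image, threshold=None, min_area_px=50):
    """Label bright fluorescent foci and return (count, labeled image)."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma=2)
    if threshold is None:
        # Simple global threshold; Otsu or adaptive methods also work
        threshold = smoothed.mean() + 3 * smoothed.std()
    mask = smoothed > threshold
    labels, n = ndimage.label(mask)
    # Keep only components large enough to plausibly be plaques
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = {i + 1 for i, a in enumerate(areas) if a >= min_area_px}
    labels = np.where(np.isin(labels, list(keep)), labels, 0)
    return len(keep), labels

# Toy example: two synthetic "plaques" on a dark background
img = np.zeros((200, 200))
img[40:60, 40:60] = 1.0
img[120:150, 130:160] = 1.0
n, _ = count_plaques(img)
print(f"plaques detected: {n}")
```

Running the same labeling on successive frames would give the plaque-tracking behaviour described above, since each focus keeps a persistent footprint as it grows.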
Subject: Chemistry, Analytical Chemistry Keywords: glycolipidomics; GIPC; glycosyl inositol phospho ceramides; Lipid Data Analyzer; lipidomics; sphingolipids; ultra-high pressure liquid chromatography; high-resolution mass spectrometry; LC-MS; automated annotation
Online: 8 September 2020 (12:34:56 CEST)
Glycosyl inositol phospho ceramides (GIPCs) are the most abundant sphingolipids on Earth, as they account for a considerable fraction of the total lipids in plants and fungi, which in turn represent a large portion of the global biomass. Despite their obvious importance, GIPC analysis remains challenging due to the lack of commercial standards and automated annotation software. In this work, we introduce a novel GIPC glycolipidomics workflow based on reversed-phase ultra-high pressure liquid chromatography coupled to high-resolution mass spectrometry. For the first time, automated GIPC assignment was performed using the open-source software Lipid Data Analyzer, based on platform-independent decision rules. Four different plant samples (salad, spinach, raspberry, strawberry) were analyzed and revealed 64 GIPCs based on accurate mass, characteristic MS2 fragments, and matching retention times. Relative quantification using lactosyl ceramide for internal standardization revealed GIPC t18:1/h24:0 as the most abundant species in all plants. Depending on the plant sample, GIPCs contained mainly amine, N-acetylamine, or hydroxyl residues. Most GIPCs revealed a Hex-HexA-IPC core and contained a ceramide part with a trihydroxylated t18:0 or t18:1 long-chain base and hydroxylated fatty acid chains ranging from 16 to 26 carbon atoms in length (h16:0 – h26:0). Interestingly, six GIPCs containing t18:2 were observed in raspberry, which had not been reported before. The presented workflow supports the characterization of different plant samples by automatic GIPC assignment, potentially leading to the identification of new GIPCs. For the first time, automated high-throughput profiling of these complex glycolipids is possible by liquid chromatography-high-resolution mass spectrometry with subsequent automated glycolipid annotation based on decision rules.
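As a schematic of decision-rule-based annotation, the sketch below accepts a candidate GIPC only if the measured precursor mass matches within a ppm tolerance, the required MS2 fragments are present, and the retention time falls within an expected window. All masses, tolerances, and the rule itself are hypothetical placeholders, not Lipid Data Analyzer's actual rule set.

```python
# Schematic decision-rule annotation: precursor mass, MS2 fragments,
# and retention time must all pass for an assignment to be accepted.
# Every numeric value below is an illustrative placeholder.

def ppm_error(measured, theoretical):
    return abs(measured - theoretical) / theoretical * 1e6

def annotate(candidate, rule, ppm_tol=5.0, frag_tol_mz=0.01, rt_tol_min=0.5):
    """Apply one decision rule; return True if all criteria pass."""
    if ppm_error(candidate["precursor_mz"], rule["precursor_mz"]) > ppm_tol:
        return False
    for frag in rule["required_fragments_mz"]:
        if not any(abs(mz - frag) <= frag_tol_mz for mz in candidate["ms2_mz"]):
            return False
    return abs(candidate["rt_min"] - rule["expected_rt_min"]) <= rt_tol_min

# Hypothetical rule and measurement
rule = {"name": "GIPC example species", "precursor_mz": 1284.75,
        "required_fragments_mz": [241.01, 421.07], "expected_rt_min": 12.3}
cand = {"precursor_mz": 1284.752, "ms2_mz": [241.012, 421.065, 600.1],
        "rt_min": 12.1}
print(rule["name"], "accepted:", annotate(cand, rule))
```

The appeal of such platform-independent rules is that adding a new GIPC class means writing a new rule entry rather than retraining or re-coding the annotation software.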
ARTICLE | doi:10.20944/preprints202201.0401.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: Interstitial Hyperthermia; Automated treatment planning; Electromagnetic simulations; ThermoBrachytherapy; High dose rate brachytherapy; Quasistatic simulations; Capacitive heating; Treatment plan optimization; Finite-Difference Time-Domain; Gamma index analysis
Online: 26 January 2022 (13:14:59 CET)
Combining interstitial hyperthermia treatment (IHT) with high-dose-rate brachytherapy (HDR-BT) for prostate cancer has the potential to improve clinical outcomes, since it strongly enhances the efficiency of cell kill, especially when the two are applied simultaneously. Therefore, we have developed the ThermoBrachy applicators. To apply optimally targeted IHT effectively, treatment planning is considered essential. However, treatment planning is rarely applied in IHT because it is considered difficult to calculate the energy deposited in tissue accurately within a time frame short enough for clinical practice. In this study, we investigated various time-efficient methods for fast computation of the electromagnetic (EM) energy deposition produced by the ThermoBrachy applicators. Initially, we investigated the use of an electro-quasistatic solver. Next, we extended our investigation to the application of geometric simplifications. Furthermore, we investigated the validity of the superposition principle, which enables adaptive treatment plan optimization without continuous recomputation of the EM field. Finally, we evaluated the accuracy of the methods by comparing them with the gold-standard Finite-Difference Time-Domain calculation method using gamma-index analysis. The simplifications reduced the required computation time from more than 12 h to a few seconds. All investigated methods showed excellent agreement with the gold standard, with a >99% passing rate under 1%/0.5 mm dose-difference and distance-to-agreement criteria. These results allow the proposed electromagnetic simulation method to be used for fast and accurate adaptive treatment planning.
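To illustrate why the superposition principle removes the need for recomputation, the sketch below precomputes the complex E-field of each applicator driven alone at unit amplitude; any amplitude/phase setting is then just a weighted sum of those fields, from which SAR follows directly. The array shapes, field values, and tissue properties are illustrative assumptions, not the ThermoBrachy model.

```python
# Field superposition for adaptive plan optimization: unit-drive
# E-fields are precomputed once; every candidate drive setting is
# evaluated as a weighted sum, with no new EM solve.
import numpy as np

rng = np.random.default_rng(1)
n_applicators, n_voxels = 4, 1000

# Precomputed complex E-fields (x, y, z per voxel), one per applicator,
# each driven alone at unit amplitude. Random values stand in for the
# output of the quasistatic solver.
E_unit = rng.standard_normal((n_applicators, n_voxels, 3)) \
       + 1j * rng.standard_normal((n_applicators, n_voxels, 3))

sigma = 0.5    # tissue conductivity [S/m] (assumed uniform here)
rho = 1000.0   # tissue density [kg/m^3]

def sar(weights):
    """SAR per voxel for complex drive weights (amplitude and phase)."""
    E_total = np.tensordot(weights, E_unit, axes=1)   # sum_k w_k * E_k
    return sigma * np.sum(np.abs(E_total) ** 2, axis=-1) / (2 * rho)

# Evaluate two candidate drive settings instantly
w1 = np.array([1.0, 1.0, 1.0, 1.0], dtype=complex)
w2 = np.array([1.0, np.exp(1j * np.pi / 2), 0.5, 0.0])
print(sar(w1).mean(), sar(w2).mean())
```

Because the fields combine linearly, an optimizer can iterate over thousands of weight vectors in seconds, which is what makes adaptive treatment planning feasible once the per-applicator fields are in hand.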