REVIEW | doi:10.20944/preprints202004.0054.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: pandemic; influenza pandemic; open source; open hardware; COVID-19; COVID-19 pandemic; medical hardware; open source medicine
Online: 6 April 2020 (12:38:59 CEST)
Distributed digital manufacturing offers a solution to medical supply and technology shortages during pandemics. To prepare for the next pandemic, this study reviews the state of the art in the open hardware designs needed in a COVID-19-like pandemic. It evaluates the readiness of the top twenty technologies requested by the Government of India. The results show that the majority of the actual medical products have had some open source development; however, only 15% of the supporting technologies that make the open source devices possible are freely available. Considerable work is thus still needed to provide open source paths for the development of all the medical hardware needed during pandemics. Five core areas of future work are discussed: i) technical development of a wide range of open source solutions for all medical supplies and devices; ii) policies that protect the productivity of laboratories, makerspaces and fabrication facilities during a pandemic; iii) streamlining of the regulatory process; iv) Good-Samaritan laws that protect makers and designers of open medical hardware, and that compel those with life-saving knowledge to share it; and v) a requirement that all citizen-funded research be released under free and open source licenses.
ARTICLE | doi:10.20944/preprints202004.0472.v1
Subject: Chemistry, Analytical Chemistry Keywords: 3-D printing; additive manufacturing; distributed manufacturing; laboratory equipment; open hardware; open source; open source hardware; scale; balance; mass
Online: 27 April 2020 (02:59:34 CEST)
This study provides designs for a low-cost, easily replicable, open source lab-grade digital scale that can be used as a precision balance. The design can be manufactured in most labs throughout the world with open source RepRap-class material extrusion-based 3-D printers for the mechanical components and readily available open source electronics, including the Arduino Nano. Several versions of the design were fabricated and tested for precision and accuracy with a range of load cells. The results showed the open source scale was repeatable within 0.1 g with multiple load cells, with even better precision (0.01 g) depending on load cell range and style. The scale tracks linearly with proprietary lab-grade scales, meeting the performance specified in the load cell data sheets, indicating that it is accurate across the range of the installed load cell. The smallest load cell tested (100 g) offers precision on the order of a commercial digital mass balance. The scale can be produced at significant cost savings compared with scales of comparable range and precision, especially when serial capability is required. The cost savings increase significantly as the range of the scale increases, making the design particularly well-suited for resource-constrained medical and scientific facilities.
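Converting raw load-cell readings to grams in a scale of this kind typically comes down to a linear, two-point calibration against known masses. The sketch below illustrates that step; the raw ADC counts and reference mass are hypothetical values for illustration, not figures from the paper.

```python
# Two-point linear calibration for a load-cell ADC, as commonly used in
# open source scale firmware. Raw counts and reference masses below are
# hypothetical values chosen for illustration.

def make_calibration(raw_zero, raw_ref, mass_ref_g):
    """Return a function converting raw ADC counts to grams."""
    scale = mass_ref_g / (raw_ref - raw_zero)
    return lambda raw: (raw - raw_zero) * scale

# Calibrate with an empty pan reading and a 100 g reference mass.
to_grams = make_calibration(raw_zero=8_400, raw_ref=92_400, mass_ref_g=100.0)
print(round(to_grams(50_400), 2))  # 50.0 — halfway between the two points
```

Because the load cell response is linear within its rated range, two calibration points suffice; the accuracy across the range then depends on the load cell itself, consistent with the data-sheet performance reported in the abstract.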
ARTICLE | doi:10.20944/preprints202101.0082.v2
Subject: Earth Sciences, Atmospheric Science Keywords: Shoreline Evolution; Open-Source Software; GIS; Modeling
Online: 19 February 2021 (09:46:48 CET)
This paper presents the validation of the End Point Rate (EPR) tool for QGIS (EPR4Q), a tool built in the QGIS Graphical Modeler to calculate shoreline change by the End Point Rate method. EPR4Q aims to fill the gap in user-friendly, free and open-source tools for shoreline analysis in a Geographic Information System environment: the most widely used software, the Digital Shoreline Analysis System (DSAS), although a free extension, runs on commercial software, while the best free and open-source option for calculating EPR, Analyzing Moving Boundaries Using R (AMBUR), is a robust and powerful tool whose complexity and heavy processing can restrict accessibility and simple usage. The validation methodology consists of applying EPR4Q, DSAS, and AMBUR to different examples of shorelines found in nature, extracted from U.S. Geological Survey Open-File reports. The results of each tool were compared with the Pearson correlation coefficient. The validation results indicate that EPR4Q achieves high correlations with DSAS and AMBUR, reaching coefficients of 0.98 to 1.00 on linear, extensive, and non-extensive shorelines, confirming that the EPR4Q tool is ready to be freely used by the academic, scientific, engineering, and coastal management communities worldwide.
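The End Point Rate itself is a simple quantity: the distance between the oldest and youngest shoreline positions along a transect, divided by the elapsed time. A minimal sketch, with synthetic positions and dates rather than data from the study:

```python
# End Point Rate: net shoreline displacement along a transect divided by
# the time elapsed between the oldest and youngest shorelines.

def end_point_rate(d_old_m, d_young_m, year_old, year_young):
    """Shoreline change rate in metres/year (positive = seaward movement)."""
    return (d_young_m - d_old_m) / (year_young - year_old)

# e.g. a shoreline that moved 12 m seaward between 1990 and 2020
print(end_point_rate(0.0, 12.0, 1990, 2020))  # 0.4 m/yr
```

Tools such as EPR4Q, DSAS, and AMBUR automate the geometric part of this computation (casting transects and intersecting them with digitized shorelines); the rate formula applied per transect is the one above.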
REVIEW | doi:10.20944/preprints202105.0352.v1
Subject: Life Sciences, Biochemistry Keywords: 3d printing; microscopy; open-source; optics; super-resolution
Online: 14 May 2021 (16:10:24 CEST)
The maker movement has reached the optics labs, empowering researchers to actively create and modify microscope designs and imaging accessories. 3D printing has especially had a disruptive impact on the field, as it entails an accessible new approach in fabrication technologies, namely additive manufacturing, making prototyping in the lab available at low cost. Examples of this trend are taking advantage of the easy availability of 3D printing technology. For example, inexpensive microscopes for education have been designed, such as the FlyPi. Also, the highly complex robotic microscope OpenFlexure represents a clear desire for the democratisation of this technology. 3D printing facilitates new and powerful approaches to science and promotes collaboration between researchers, as 3D designs are easily shared. This holds the unique possibility to extend the open-access concept from knowledge to technology, allowing researchers from everywhere to use and extend model structures. Here we present a review of additive manufacturing applications in microscopy, guiding the user through this new and exciting technology and providing a starting point to anyone willing to employ this versatile and powerful new tool.
ARTICLE | doi:10.20944/preprints202102.0513.v1
Subject: Earth Sciences, Atmospheric Science Keywords: Sea-Level Rise; GIS; Open-Source Software; Modeling
Online: 23 February 2021 (12:39:09 CET)
Sea-level rise is a problem increasingly affecting coastal areas worldwide. The existence of free and open-source models to estimate sea-level impact can contribute to better coastal management. This study aims to develop and validate two different models to predict the sea-level rise impact supported by Google Earth Engine (GEE), a cloud-based platform for planetary-scale environmental data analysis. The first model is a Bathtub Model based on the uncertainty of projections of the Sea-level Rise Impact Module of the TerrSet Geospatial Monitoring and Modeling System software. The validation performed in the Rio Grande do Sul coastal plain (S Brazil) resulted in correlations from 0.75 to 1.00. The second model implements the Bruun Rule formula in GEE and can determine the coastline retreat of a profile through the creation of a simple vector line from topo-bathymetric data. The model shows a very high correlation (0.97) with a classical Bruun Rule study performed on the Aveiro coast (NW Portugal). The GEE platform appears to be an important tool for coastal management. The models developed have been openly shared, enabling continuous improvement of the code by the scientific community.
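The classical Bruun Rule referenced above estimates coastline retreat R from a sea-level rise S over an active profile of cross-shore length L, berm height B, and closure depth h, as R = S·L/(B + h). A minimal sketch with illustrative profile values (not from the validated Aveiro case):

```python
# Classical Bruun Rule: retreat R = S * L / (B + h), where S is sea-level
# rise, L the active profile length, B the berm height, h the closure depth.

def bruun_retreat(s_rise_m, profile_length_m, berm_height_m, closure_depth_m):
    """Predicted coastline retreat in metres for a given sea-level rise."""
    return s_rise_m * profile_length_m / (berm_height_m + closure_depth_m)

# Illustrative profile: 0.5 m of rise over a 1,000 m profile, B = 2 m, h = 8 m
print(bruun_retreat(0.5, 1000.0, 2.0, 8.0))  # 50.0 m of retreat
```

The GEE implementation described in the abstract applies this per-profile calculation to lines extracted from topo-bathymetric data rather than to hand-picked parameters.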
ARTICLE | doi:10.20944/preprints202102.0421.v1
Subject: Earth Sciences, Atmospheric Science Keywords: Sea-Level Rise; GIS; Open-Source Software; Modeling
Online: 18 February 2021 (13:52:49 CET)
Sea-level rise is a problem increasingly affecting coastal areas worldwide. The existence of Free and Open-Source Models to estimate the sea-level impact can contribute to better coastal management. This study aims to develop and to validate two different models to predict the sea-level rise impact supported by Google Earth Engine (GEE) – a cloud-based platform for planetary-scale environmental data analysis. The first model is a Bathtub Model based on the uncertainty of projections of the Sea-level Rise Impact Module of TerrSet - Geospatial Monitoring and Modeling System software. The validation process performed in the Rio Grande do Sul coastal plain (S Brazil) resulted in correlations from 0.75 to 1.00. The second model uses the Bruun Rule formula implemented in GEE and is capable of determining the coastline retreat of a profile through the creation of a simple vector line from topo-bathymetric data. The model shows a very high correlation (0.97) with a classical Bruun Rule study performed in Aveiro coast (NW Portugal). The GEE platform seems to be an important tool for coastal management. The models developed have been openly shared, enabling the continuous improvement of the code by the scientific community.
REVIEW | doi:10.20944/preprints202003.0362.v1
Online: 24 March 2020 (14:46:29 CET)
With the current rapid spread of COVID-19, global health systems are increasingly overburdened by the sheer number of people who need diagnosis, isolation and treatment. Shortcomings are evident across the board, from staffing and facilities for rapid and reliable testing to the availability of hospital beds and key medical-grade equipment. The scale and breadth of the problem calls for an equally substantive response, not only from frontline workers such as medical staff and scientists, but from skilled members of the public who have the time, facilities and knowledge to meaningfully contribute to a consolidated global response. Here, we summarise community-driven approaches based on Free and Open Source scientific and medical Hardware (FOSH) currently being developed and deployed to bolster access to personal protective equipment (PPE), patient treatment and diagnostics.
ARTICLE | doi:10.20944/preprints201805.0470.v1
Subject: Earth Sciences, Environmental Sciences Keywords: remote sensing; python; data management; landsat; open-source
Online: 31 May 2018 (11:12:27 CEST)
Many remote sensing analytical data products are most useful when they are in an appropriate regional or national projection, rather than globally based projections like Universal Transverse Mercator (UTM) or geographic coordinates, i.e., latitude and longitude. Furthermore, leaving data in the global systems can create problems, either due to misprojection of imagery because of UTM zone boundaries, or because such projections are not optimised for local use. We developed the open-source Irish Earth Observation (IEO) Python module to maintain a local remote sensing data library for Ireland. This pure Python module, in conjunction with the IEOtools Python scripts, utilises the Geospatial Data Abstraction Library (GDAL) for its geoprocessing functionality. At present, the module supports only Landsat TM/ETM+/OLI/TIRS data that have been corrected to surface reflectance using the USGS/ESPA LEDAPS/LaSRC Collection 1 architecture. This module and the IEOtools catalogue the available Landsat data in the USGS/EROS archive and include functions for importing imagery into a defined local projection and calculating cloud-free vegetation indices. While this module is distributed with default values and data for Ireland, it can be adapted for other regions with simple modifications to the configuration files and geospatial data sets.
TECHNICAL NOTE | doi:10.20944/preprints201804.0047.v1
Online: 4 April 2018 (06:00:40 CEST)
The exceptional increase in molecular DNA sequence data in open repositories is mirrored by an ever-growing interest among evolutionary biologists to harvest and use those data for phylogenetic inference. Many quality issues, however, are known and the sheer amount and complexity of data available can pose considerable barriers to their usefulness. A key issue in this domain is the high frequency of sequence mislabelling encountered when searching for suitable sequences for phylogenetic analysis. These issues include the incorrect identification of sequenced species, non-standardised and ambiguous sequence annotation, and the inadvertent addition of paralogous sequences by users, among others. Taken together, these issues likely add considerable noise, error or bias to phylogenetic inference, a risk that is likely to increase with the size of phylogenies or the molecular datasets used to generate them. Here we present a software package, phylotaR, that bypasses the above issues by using instead an alignment search tool to identify orthologous sequences. Our package builds on the framework of its predecessor, PhyLoTa, by providing a modular pipeline for identifying overlapping sequence clusters using up-to-date GenBank data and providing new features, improvements and tools. We demonstrate our pipeline’s effectiveness by presenting trees generated from phylotaR clusters for two large taxonomic clades: palms and primates. Given the versatility of this package, we hope that it will become a standard tool for any research aiming to use GenBank data for phylogenetic analysis.
ARTICLE | doi:10.20944/preprints201711.0181.v3
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: 3D printing; open source; RepRap; calibration; bed levelling
Online: 12 January 2018 (07:35:31 CET)
Inexpensive piezoelectric diaphragms can be used as sensors to facilitate both nozzle height setting and build platform levelling in FFF (Fused Filament Fabrication) 3-D printers. Tests simulating nozzle contact are conducted to establish the available output, and an output of greater than 8 V is found at 20 °C, a value readily detectable by simple electronic circuits. Tests are also conducted at a temperature of 80 °C and, despite a reduction of greater than 80% in output voltage, the signal is still detectable. The reliability of piezoelectric diaphragms is investigated by mechanically stressing samples over 100,000 cycles at both 20 °C and 80 °C, and little loss of output is found over the test duration. The development of a nozzle contact sensor using a single piezoelectric diaphragm is described.
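A detector built around such a diaphragm has to fire reliably in the worst case: the hot end at 80 °C, where the output can drop by more than 80%. The back-of-the-envelope sketch below derives a detection threshold from the figures in the abstract; the safety margin factor is a hypothetical design choice, not a value from the paper.

```python
# Worst-case contact-detection threshold for a piezo nozzle probe.
# The diaphragm gives > 8 V at 20 degC and loses > 80% of that at 80 degC,
# so the detector threshold must sit below the hot worst-case output.
# The safety margin factor is a hypothetical design choice.

V_COLD = 8.0          # minimum cold output reported, volts
HOT_RETENTION = 0.20  # at most 20% of output retained at 80 degC
MARGIN = 0.5          # hypothetical safety factor below worst case

threshold = V_COLD * HOT_RETENTION * MARGIN
print(round(threshold, 3))  # 0.8 V, still within reach of simple comparators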
ARTICLE | doi:10.20944/preprints201702.0055.v1
Online: 15 February 2017 (11:20:31 CET)
The process of modelling energy systems is accompanied by challenges inherently connected with mathematical modelling. Under modern twenty-first-century realities, however, existing challenges are gaining in magnitude and are supplemented by new ones. Modellers are confronted with the rising complexity of energy systems and high uncertainties on different levels. In addition, interdisciplinary modelling is necessary for gaining insight into the mechanisms of an integrated world. At the same time, models need to meet scientific standards as public acceptance becomes increasingly important. In this intricate environment, model application as well as result communication and interpretation are also becoming more difficult. In this paper we present the open energy modelling framework (oemof) as a novel approach to energy system modelling and derive its contribution to existing challenges. Based on a literature review, we outline challenges for energy system modelling as well as existing and emerging approaches. Building on a description of the philosophy and elementary structural elements of oemof, we undertake a qualitative analysis of the framework with regard to these challenges. Inherent features of oemof such as its open source, open data, non-proprietary and collaborative modelling approach are preconditions for meeting the modern realities of energy modelling. Additionally, a generic basis with an object-oriented implementation makes it possible to tackle challenges related to the complexity of highly integrated future energy systems and sets the foundation for addressing uncertainty in the future. Experience from the collaborative modelling approach can enrich interdisciplinary modelling activities. Our analysis concludes that some remaining challenges can be tackled neither by a model nor by a modelling framework; among these are problems connected to result communication and interpretation.
ARTICLE | doi:10.20944/preprints202005.0479.v1
Subject: Engineering, Mechanical Engineering Keywords: open source; open hardware; COVID-19; medical hardware; RepRap; 3-D printing; open source medical hardware; high temperature 3-D printing; additive manufacturing; ULTEM; polycarbonate
Online: 31 May 2020 (16:18:20 CEST)
Thermal sterilization is generally avoided for 3-D printed components because of the relatively low deformation temperatures of the common thermoplastics used in material extrusion-based additive manufacturing. Printing the materials required for high-temperature, heat-sterilizable components for COVID-19 and other applications demands 3-D printers with heated beds, hot ends that can reach higher temperatures than polytetrafluoroethylene (PTFE) hot ends, and heated chambers to avoid part warping and delamination. There are several high-temperature printers on the market, but their high costs make them inaccessible for the fully home-based distributed manufacturing required during pandemic lockdowns. To meet all these requirements for under $1,000, the Cerberus, an open source three-headed self-replicating rapid prototyper (RepRap), was designed and tested with the following capabilities: i) a 200 °C-capable heated bed, ii) a 500 °C-capable hot end, iii) an isolated heated chamber with a 1 kW space heater core, and iv) mains-voltage chamber and bed heating for rapid start. The Cerberus successfully prints polyetherketoneketone (PEKK) and polyetherimide (PEI, ULTEM) with tensile strengths of 77.5 and 80.5 MPa, respectively. As a case study, open source face masks were 3-D printed in PEKK and shown not to warp during widely home-accessible oven-based sterilization.
ARTICLE | doi:10.20944/preprints201708.0069.v1
Subject: Mathematics & Computer Science, Other Keywords: energy system analysis; model challenges; open science; open source; energy modelling framework; oemof
Online: 21 August 2017 (03:02:34 CEST)
The research field of energy system analysis is dealing with increasingly complex energy systems and their respective challenges. Moreover, the requirement for open science has become a focal point of public interest. Both drivers have triggered the development of a broad range of (open) energy models and frameworks in recent years. However, there are hardly any approaches for evaluating these tools in terms of their capabilities to tackle energy system modelling challenges. This paper describes a first step towards a flexible evaluation of software to model energy systems. We propose a qualitative approach as a useful supplement to existing model fact sheets and transparency checklists. We demonstrate its applicability by evaluating the newly developed “Open Energy Modelling Framework” with respect to existing challenges in energy system modelling. The case study results highlight that challenges related to complexity and scientific standards can be tackled to a large extent, while the challenges of model utilization and interdisciplinary modelling are only partially addressed. The challenge of uncertainty, however, remains for the most part unaddressed at present. Advantages of the evaluation approach lie in its simplicity, flexibility and transferability to other tools; disadvantages mostly stem from its qualitative nature. Our analysis reveals that some challenges in the field of energy system modelling cannot be addressed by software at all, as they lie at a meta level, such as model result communication and interdisciplinary modelling.
ARTICLE | doi:10.20944/preprints202010.0107.v1
Subject: Life Sciences, Biochemistry Keywords: Bioprinting; microextrusion; tissue engineering; bioink; open-source; stem cells
Online: 6 October 2020 (08:24:54 CEST)
Three-dimensional (3D) bioprinting promises to be essential in tissue engineering (TE) for meeting the rising demand for organs and tissues. Some bioprinters are commercially available, but their impact on the field of TE is still limited due to their cost or the difficulty of tuning them. Herein, we present a low-cost, easy-to-build printhead for microextrusion-based bioprinting (MEBB) that can be installed in many desktop 3D printers to transform them into 3D bioprinters. The printhead can extrude bioinks with precise control of the print temperature between 2 and 60 °C. We validated its versatility by assembling it in three low-cost open-source desktop 3D printers. Multiple units of the printhead can also easily be mounted together on a single printer carriage to build a multi-material 3D bioprinter. Print resolution was evaluated by creating representative calibration models at different temperatures using natural hydrogels, such as gelatin and alginate, and synthetic ones, such as poloxamer. Using one of the three modified low-cost 3D printers, we successfully printed cell-laden lattice constructs with cell viabilities higher than 90% at 24 h post-printing. Controlling temperature and pressure according to the rheological properties of the bioinks was essential for achieving optimal printability and high cell viability. The cost per unit of our device, which can be used with syringes of different volumes, is lower than that of any commercially available product. These data demonstrate an affordable open-source printhead with the potential to become a reliable alternative to commercial bioprinters for any laboratory.
ARTICLE | doi:10.20944/preprints201904.0207.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: 3-D printing; additive manufacturing; biomedical equipment; biomedical engineering; centrifuge; design; distributed manufacturing; laboratory equipment; open hardware; open source; open source hardware; medical equipment; medical instrumentation; scientific instrumentation
Online: 18 April 2019 (08:03:58 CEST)
Centrifuges are commonly required devices in medical diagnostics facilities as well as scientific laboratories. Although there are commercial and open source centrifuges, the costs of the former and the electricity required to operate the latter limit accessibility in resource-constrained settings. There is a need for a low-cost, human-powered, verified and reliable lab-scale centrifuge. This study provides the designs for a low-cost, 100% 3-D printed centrifuge, which can be fabricated on any low-cost RepRap-class fused filament fabrication (FFF) or fused particle fabrication (FPF)-based 3-D printer. In addition, validation procedures are provided using a web camera and free and open source software. This paper provides the complete open source plans, including instructions for fabrication and operation, for a hand-powered centrifuge. The study successfully tested and validated the instrument, which can be operated anywhere in the world with no electricity input, obtaining a rotational speed of over 1,750 rpm and over 50 N of relative centrifugal force. Using commercial filament, the instrument costs about US$25, which is less than half the cost of all commercially available systems; the costs can be reduced further by using recycled plastics on open source systems, for over 99% savings. The results are discussed in the context of resource-constrained medical and scientific facilities.
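For a centrifuge of this kind, the quantity that matters for protocols is the relative centrifugal force (RCF), conventionally computed from the rotational speed and rotor radius as RCF = 1.118 × 10⁻⁵ · r · rpm², with r in centimetres. The sketch below uses the >1,750 rpm figure from the abstract; the rotor radius is a hypothetical value for illustration, not one taken from the design files.

```python
# Relative centrifugal force (in multiples of g) from the standard
# formula RCF = 1.118e-5 * r_cm * rpm^2. The rpm figure is from the
# abstract; the 85 mm rotor radius is a hypothetical value.

def relative_centrifugal_force(rpm, radius_mm):
    """RCF in multiples of g for a rotor of the given radius."""
    r_cm = radius_mm / 10.0
    return 1.118e-5 * r_cm * rpm ** 2

print(round(relative_centrifugal_force(1750, 85), 1))  # ~291 g
```

A webcam-based validation such as the one the paper describes would measure rpm optically and plug it into this formula to report the achieved g-force for a given rotor geometry.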
ARTICLE | doi:10.20944/preprints202107.0651.v1
Subject: Behavioral Sciences, Applied Psychology Keywords: multiple measures synchronization; automatic device integration; open-source; PsychoPy; Unity
Online: 29 July 2021 (11:48:02 CEST)
Background: The human mind is multimodal, yet most behavioral studies rely on century-old measures such as task accuracy and latency. To create a better understanding of human behavior and brain functionality, we should introduce other measures and analyze behavior from various aspects. However, it is technically complex and costly to design and implement experiments that record multiple measures. To address this issue, a platform is needed that allows multiple measures of human behavior to be synchronized. Method: This paper introduces an open-source platform named OpenSync, which can be used to synchronize multiple measures in neuroscience experiments. The platform helps to automatically integrate, synchronize and record physiological measures (e.g., electroencephalogram (EEG), galvanic skin response (GSR), eye-tracking, body motion), user input responses (e.g., from mouse, keyboard or joystick), and task-related information (stimulus markers). In this paper, we explain the structure and details of OpenSync and provide two case studies, in PsychoPy and Unity. Comparison with existing tools: Unlike proprietary systems (e.g., iMotions), OpenSync is free and can be used inside any open-source experiment design software (e.g., PsychoPy, OpenSesame, Unity; https://pypi.org/project/OpenSync/ and https://github.com/moeinrazavi/OpenSync_Unity). Results: Our experimental results show that the OpenSync platform is able to synchronize multiple measures with microsecond resolution.
TECHNICAL NOTE | doi:10.20944/preprints202103.0194.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Active Learning, Classification, Machine Learning, Python, Github, Repository, Open Source
Online: 5 March 2021 (21:14:20 CET)
Machine learning applications often need large amounts of training data to perform well. Whereas unlabeled data can be gathered easily, the labeling process is difficult, time-consuming, or expensive in most applications. Active learning can help solve this problem by querying labels for those data points that will improve performance the most, so that the learning algorithm performs sufficiently well with fewer labels. We provide a library called scikit-activeml that covers the most relevant query strategies and implements tools to work with partially labeled data. It is programmed in Python and builds on top of scikit-learn.
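The core loop behind most pool-based query strategies can be shown without any library at all: fit a model on the labeled data, score the unlabeled pool by uncertainty, and query the most uncertain point. The sketch below uses a deliberately toy 1-D threshold classifier and synthetic data to illustrate the uncertainty-sampling strategy that libraries such as scikit-activeml implement at scale; none of the names or data come from the library itself.

```python
# Minimal pool-based active learning with uncertainty sampling.
# Toy 1-D classifier and synthetic data, for illustration only.

def fit_threshold(xs, ys):
    """1-D classifier: decision boundary = midpoint between class means."""
    m0 = sum(x for x, y in zip(xs, ys) if y == 0) / ys.count(0)
    m1 = sum(x for x, y in zip(xs, ys) if y == 1) / ys.count(1)
    return (m0 + m1) / 2

def uncertainty(x, threshold):
    """Smaller distance to the decision boundary = more uncertain."""
    return -abs(x - threshold)

pool = [0.1, 0.4, 0.48, 0.9]               # unlabeled candidates
labeled_x, labeled_y = [0.0, 1.0], [0, 1]  # seed labels
t = fit_threshold(labeled_x, labeled_y)    # boundary at 0.5
query = max(pool, key=lambda x: uncertainty(x, t))
print(query)  # 0.48 — the point closest to the decision boundary
```

In practice the classifier would be a scikit-learn estimator and the queried label would be folded back into the training set before the next round; the selection logic stays the same.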
ARTICLE | doi:10.20944/preprints202001.0080.v1
Subject: Engineering, Energy & Fuel Technology Keywords: shale gas; MRST; embedded discrete fracture model; open-source implementation
Online: 9 January 2020 (09:59:37 CET)
We present a generic and open-source framework for the numerical modeling of the expected transport and storage mechanisms in unconventional gas reservoirs. These unconventional reservoirs typically contain natural fractures at multiple scales. Considering the importance of these fractures in shale gas production, we perform a rigorous study on the accuracy of different fracture models. The framework is validated against an industrial simulator and is used to perform a history-matching study on the Barnett shale. This work presents an open-source code that leverages cutting-edge numerical modeling capabilities such as automatic differentiation, stochastic fracture modeling, multi-continuum modeling and other explicit and discrete fracture models. We modified the conventional mass balance equation to account for the physical mechanisms that are unique to organic-rich source rocks, including an adsorption isotherm, a dynamic permeability-correction function, and an embedded discrete fracture model (EDFM) with fracture-well connectivity. We explore the accuracy of the EDFM for modeling hydraulically-fractured shale-gas wells, which could be connected to natural fractures of finite or infinite conductivity and could deform during production. Simulation results indicate that although the EDFM provides a computationally efficient model for describing flow in natural and hydraulic fractures, it can be inaccurate under three conditions: (1) when the fracture conductivity is very low, (2) when the fractures are not orthogonal to the underlying Cartesian grid blocks, and (3) when sharp pressure drops occur in large grid blocks with insufficient mesh refinement. Each of these results is significant, considering that most of the fluids in these ultra-low matrix permeability reservoirs are produced through the interconnected natural fractures, which are expected to have very low fracture conductivities.
We also expect sharp pressure drops near the fractures in these shale gas reservoirs, and it is unrealistic to expect the hydraulic fractures or complex fracture networks to be orthogonal to any structured grid. In conclusion, this paper presents an open-source numerical framework to facilitate the modeling of the expected physical mechanisms in shale-gas reservoirs. The code was validated against published results and a commercial simulator. We also performed a history-matching study on a naturally-fractured Barnett shale-gas well considering adsorption, gas slippage and diffusion, fracture closure and proppant embedment, using the framework presented. This work provides the first open-source code that can be used to facilitate the modeling and optimization of fractured shale-gas reservoirs. To provide the numerical flexibility to accurately model stochastic natural fractures connected to hydraulically-fractured wells, it is built atop other related open-source codes. We also present the first rigorous study on the accuracy of using the EDFM to model both hydraulic fractures and natural fractures that may or may not be interconnected.
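One of the shale-specific mechanisms the abstract mentions, the adsorption isotherm, is conventionally modeled with the Langmuir form V = V_L · p / (p_L + p), where V_L is the Langmuir volume and p_L the Langmuir pressure. A minimal sketch with illustrative parameter values (not ones fitted in the Barnett study):

```python
# Langmuir adsorption isotherm, the standard form for shale-gas storage:
# adsorbed volume V = V_L * p / (p_L + p). Parameter values below are
# illustrative, not fitted values from the paper.

def langmuir_adsorption(p, v_l, p_l):
    """Adsorbed gas volume at pressure p for Langmuir parameters (v_l, p_l)."""
    return v_l * p / (p_l + p)

# At p = p_L the rock holds exactly half its Langmuir volume V_L.
print(langmuir_adsorption(p=1500.0, v_l=100.0, p_l=1500.0))  # 50.0
```

In a reservoir simulator this term enters the mass balance as an extra storage contribution that desorbs as pressure declines during production, which is why ignoring it biases history matching of shale-gas wells.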
ARTICLE | doi:10.20944/preprints201708.0022.v1
Subject: Mathematics & Computer Science, Other Keywords: real‐time reconstruction; SLAM; kinect sensors; depth cameras; open source
Online: 7 August 2017 (11:03:23 CEST)
Given a stream of depth images with a known cuboid reference object present in the scene, we propose a novel approach for accurate camera tracking and volumetric surface reconstruction in real-time. Our contribution in this paper is threefold: (a) utilizing a priori knowledge of the cuboid reference object, we maintain drift-free camera tracking without explicit global optimization; (b) we improve the fineness of the volumetric surface representation by proposing a prediction-corrected data fusion strategy rather than a simple moving average, which enables accurate reconstruction of high-frequency details such as sharp edges of objects and geometries of high curvature; (c) we introduce a benchmark dataset CU3D containing both synthetic and real-world scanning sequences with ground-truth camera trajectories and surface models for quantitative evaluation of 3D reconstruction algorithms. We test our algorithm on our dataset and demonstrate its accuracy compared with other state-of-the-art algorithms. We release both our dataset and code as open source for other researchers to reproduce and verify our results.
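The "simple moving average" baseline that contribution (b) improves on is the classical weighted running average used in TSDF (truncated signed distance function) fusion: each voxel stores a distance D and a weight W, and each new per-frame observation (d, w) is folded in. A minimal sketch of that baseline, with synthetic observations; the prediction-corrected strategy of the paper replaces exactly this update rule.

```python
# Classical weighted running average used in TSDF volumetric fusion
# (the baseline the paper's prediction-corrected strategy improves on).
# D is the stored signed distance, W its weight; (d, w) is one new frame.

def fuse(D, W, d, w, w_max=100.0):
    """Fuse one new observation into a TSDF voxel."""
    D_new = (W * D + w * d) / (W + w)
    W_new = min(W + w, w_max)  # cap weight to stay responsive to change
    return D_new, W_new

D, W = 0.0, 0.0
for d in (0.02, 0.04, 0.03):   # synthetic per-frame signed distances
    D, W = fuse(D, W, d, 1.0)
print(round(D, 3))  # 0.03 — the mean of the three observations
```

Because this update averages all observations equally, it smooths away sharp edges and high-curvature geometry, which motivates the paper's corrected fusion strategy.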
ARTICLE | doi:10.20944/preprints202105.0498.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Natural Language Processing; Open-domain Question Answering; Multi-choice Question Answering; Clinical Question Answering
Online: 21 May 2021 (07:47:18 CEST)
Open domain question answering (OpenQA) tasks have been recently attracting more and more attention from the natural language processing (NLP) community. In this work, we present the first free-form multiple-choice OpenQA dataset for solving medical problems, MedQA, collected from the professional medical board exams. It covers three languages: English, simplified Chinese, and traditional Chinese, and contains 12,723, 34,251, and 14,123 questions for the three languages, respectively. We implement both rule-based and popular neural methods by sequentially combining a document retriever and a machine comprehension model. Through experiments, we find that even the current best method can only achieve 36.7%, 42.0%, and 70.1% of test accuracy on the English, traditional Chinese, and simplified Chinese questions, respectively. We expect MedQA to present great challenges to existing OpenQA systems and hope that it can serve as a platform to promote much stronger OpenQA models from the NLP community in the future.
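The rule-based baselines described above follow the standard retriever + reader pattern: a retrieval stage ranks documents against the question, and a comprehension stage extracts or selects the answer. The sketch below shows only the first stage, using plain term-overlap scoring and synthetic documents; MedQA's actual baselines pair a stronger retriever with a neural machine comprehension model.

```python
# Toy bag-of-words retriever of the kind used as the first stage of a
# retriever + reader OpenQA pipeline. Scoring is plain term overlap;
# documents and question are synthetic examples.

def score(question, document):
    """Count distinct question terms that appear in the document."""
    q_terms = set(question.lower().split())
    d_terms = set(document.lower().split())
    return len(q_terms & d_terms)

docs = [
    "aspirin inhibits platelet aggregation",
    "insulin lowers blood glucose",
]
question = "which drug lowers blood glucose"
best = max(docs, key=lambda d: score(question, d))
print(best)  # the insulin document, sharing three terms with the question
```

A reader model would then consume the retrieved passage together with the multiple-choice options to select an answer; the low reported accuracies suggest both stages are challenging on professional board-exam questions.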
ARTICLE | doi:10.20944/preprints201910.0232.v1
Subject: Earth Sciences, Geophysics Keywords: solar radiation; diffuse; LSA SAF; aerosols; MSG SEVIRI; open source code
Online: 20 October 2019 (02:24:07 CEST)
Several studies have shown that changes in incoming solar radiation and variations of the diffuse fraction can significantly modify vegetation carbon uptake. Monitoring the incoming solar radiation at large scale and with high temporal frequency is therefore crucial, for this reason among many others. The EUMETSAT Satellite Application Facility for Land Surface Analysis (LSA SAF) has operationally disseminated near-real-time estimates of the downwelling shortwave radiation at the surface since 2005. This product is derived from observations provided by the SEVIRI instrument onboard the Meteosat Second Generation series of geostationary satellites, which covers Europe, Africa, the Middle East, and part of South America. However, near-real-time generation of the diffuse fraction at the surface has only recently been initiated. The main difficulty in achieving this goal was the general lack of accurate information on aerosol particles in the atmosphere, a limitation that is nowadays less important thanks to improvements in atmospheric numerical models. This study presents an upgrade of the LSA-SAF operational retrieval method, which provides simultaneous estimates of the incoming solar radiation and its diffuse fraction from satellite every 15 minutes. The upgrade includes a comprehensive representation of the influence of aerosols based on physical approximations of the radiative transfer within a coupled atmosphere-surface medium. This article explains the retrieval method, discusses its limitations and differences from the previous method, and details the characteristics of the output products. A companion article will focus on the evaluation of the products against independent measurements of solar radiation. Finally, access to the source code is provided through an open access platform in order to share with the community the expertise on the satellite retrieval of this variable.
ARTICLE | doi:10.20944/preprints202006.0318.v1
Subject: Medicine & Pharmacology, General Medical Research Keywords: ventilator; pandemic; ventilation; influenza pandemic; coronavirus; coronavirus pandemic; pandemic ventilator; single-limb; open source; open hardware; COVID-19; medical hardware; RepRap; 3-D printing; open source medical hardware; embedded systems; real-time operating system
Online: 26 June 2020 (17:25:16 CEST)
This study describes the development of an automated bag valve mask (BVM) compression system, which, during acute shortages and supply chain disruptions, can serve as a temporary emergency ventilator. The resuscitation system is based on an Arduino controller with a real-time operating system, installed on a largely RepRap 3-D printable, parametric, component-based structure. The cost of the system is under $170, which makes it affordable for replication by makers around the world. The device provides a controlled breathing mode with tidal volumes from 100 to 800 milliliters, breathing rates from 5 to 40 breaths/minute, and inspiratory-to-expiratory ratios from 1:1 to 1:4. The system is designed for reliability and scalability of the measurement circuits through the use of the serial peripheral interface, and it can connect additional hardware thanks to its object-oriented algorithmic approach. Experimental results demonstrate repeatability and accuracy exceeding human capabilities in BVM-based manual ventilation. Future work is necessary to further develop and test the system to make it acceptable for deployment outside of emergencies in clinical environments; however, the nature of the design is such that desired features are relatively easy to add using the test protocols and parametric design files provided.
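The controlled-mode settings quoted above (breathing rate and inspiratory-to-expiratory ratio) fix the breath timing by simple arithmetic; a minimal sketch of that relationship (my own arithmetic, not the device firmware):

```python
# A rate of R breaths/min gives a cycle of 60/R seconds, which the
# I:E ratio splits between inhalation and exhalation.
def breath_timing(rate_bpm, i_part, e_part):
    cycle = 60.0 / rate_bpm                     # seconds per breath
    t_insp = cycle * i_part / (i_part + e_part) # inspiratory time
    t_exp = cycle - t_insp                      # expiratory time
    return t_insp, t_exp

# 20 breaths/min at an I:E ratio of 1:2 -> 3 s cycle: 1 s in, 2 s out
t_insp, t_exp = breath_timing(20, 1, 2)
```

The firmware would then drive the BVM compression arm for `t_insp` seconds and release for `t_exp` seconds each cycle.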
REVIEW | doi:10.20944/preprints202208.0434.v1
Subject: Engineering, Mechanical Engineering Keywords: COVID-19; 3D Printing; Additive Manufacturing; Medical Applications; Open-source files; Innovation
Online: 25 August 2022 (10:24:14 CEST)
The Coronavirus disease 2019 (COVID-19) rapidly spread to over 180 countries and abruptly disrupted production rates and supply chains worldwide. Since then, 3D printing, also known as additive manufacturing (AM), a technique that produces intricate 3D geometry by layer-by-layer deposition of material, has been engaged in reducing the distress caused by the outbreak. During the early stages of the pandemic, shortages of Personal Protective Equipment (PPE), including facemasks, shields, respirators, and other medical gear, were substantially addressed by 3D printing these items at distributed sites. Amid growing testing requirements, 3D printing emerged as a fast, viable manufacturing solution able to meet production needs thanks to its flexibility, reliability, and rapid response capabilities. Other medical applications that have recently gained prominence in the scientific community include 3D-printed ventilator splitters, device components, and patient-specific products. Among non-medical applications, researchers have successfully developed contact-free devices to address the sanitary crisis in public places. This work systematically reviews the applications of 3D printing and AM techniques involved in producing the various critical products essential to limiting this deadly pandemic's progression.
REVIEW | doi:10.20944/preprints202003.0220.v1
Subject: Biology, Ecology Keywords: 3D printing; 3D scanning; customized ecological objects; methods; stereolithography; open-source lab
Online: 12 March 2020 (14:46:07 CET)
3D printing is described as the third industrial revolution: its impact is global in industry and progresses every day in society. It presents huge potential for ecology and evolution, sciences with a long tradition of inventing and creating objects for research, education and outreach. Its general principle as an additive manufacturing technique is relatively easy to understand: objects are created by adding material layers on top of each other. Although this may seem very straightforward on paper, it is much harder in the real world. Specific knowledge is needed to successfully turn an idea into a real object, because of technical choices and limitations at each step of the implementation. This article aims to help scientists jump into the 3D printing revolution by offering a hands-on guide to current 3D printing technology. We first give a brief overview of uses of 3D printing in ecology and evolution, then review the whole process of object creation, split into three steps: (1) obtaining the digital 3D model of the object of interest, (2) choosing the 3D printing technology and material best adapted to the requirements of its intended use, and (3) pre- and post-processing the 3D object. We compare the main technologies available and their pros and cons according to the features and intended use of the object to be printed. We give specific and key details in appendices, based on examples in ecology and evolution.
Subject: Keywords: 3D object reconstruction; depth cameras; Kinect sensors; open source; signal denoising; SLAM
Online: 9 April 2019 (12:24:34 CEST)
3D object reconstruction from depth image streams using Kinect-style depth cameras has been extensively studied. In this paper, we propose an approach for accurate camera tracking and volumetric dense surface reconstruction, assuming a known cuboid reference object is present in the scene. Our contribution is threefold. (a) We maintain drift-free camera pose tracking by incorporating the 3D geometric constraints of the cuboid reference object into the image registration process. (b) We reformulate the problem of depth stream fusion as a binary classification problem, enabling high-fidelity surface reconstruction, especially in the concave zones of objects. (c) We further present a surface denoising strategy to mitigate topological inconsistencies (e.g., holes and dangling triangles), which facilitates the generation of a noise-free triangle mesh. We extend our public dataset CU3D with several new image sequences, test our algorithm on these sequences, and quantitatively compare it with other state-of-the-art algorithms. Both our dataset and our algorithm are available as open-source content at https://github.com/zhangxaochen/CuFusion for other researchers to reproduce and verify our results.
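Both CuFusion abstracts above contrast their fusion strategies with the simple moving average used by classic volumetric pipelines. That baseline, which their methods replace, can be sketched for a single voxel (a textbook weighted-average TSDF update, not the authors' prediction-corrected or classification-based scheme):

```python
# Weighted moving-average update of one voxel's truncated signed
# distance: the running average that blurs sharp edges, motivating
# the paper's alternative fusion rules.
def fuse_tsdf(tsdf, weight, d_new, w_new=1.0, w_max=64.0):
    fused = (weight * tsdf + w_new * d_new) / (weight + w_new)
    return fused, min(weight + w_new, w_max)  # cap the weight

tsdf, w = 0.0, 0.0
for d in [0.4, 0.2, 0.3]:   # signed distances observed in 3 frames
    tsdf, w = fuse_tsdf(tsdf, w, d)
```

Because every observation is averaged in with equal trust, noisy or grazing-angle measurements smear high-curvature geometry, which is exactly the artifact the prediction-corrected and binary-classification formulations target.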
ARTICLE | doi:10.20944/preprints201902.0087.v1
Subject: Social Sciences, Geography Keywords: Noise mapping; END directive; GIS; open source; standards; road traffic; population exposure
Online: 11 February 2019 (09:48:53 CET)
Urbanisation, with the related expansion of cities and transport networks, makes it necessary to prevent growth in the population exposed to environmental pollution. Regarding noise exposure, the Environmental Noise Directive requires major metropolitan areas to produce noise maps. Although based on standard methods, these maps are usually generated with proprietary software and require numerous input data concerning, for example, buildings, land use, the transportation network and traffic. The present work describes an open source implementation of a noise mapping tool fully integrated in a Geographic Information System compliant with the Open Geospatial Consortium standards. This integration simplifies the formatting and harvesting of noise-model input data, the cartographic rendering, and the linkage of output data with population data. An application is given for a French city, estimating the impact of road-traffic-related scenarios in terms of population exposure to noise levels, both in relation to a threshold value and by level classes.
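The final aggregation step described above, linking noise levels to population exposure against a threshold, reduces to a simple sum once the GIS layers are joined; a minimal sketch with made-up building data and an illustrative threshold (the actual tool operates on full GIS layers):

```python
# Sum the residents of buildings whose facade noise level exceeds
# a regulatory threshold.
def exposed_population(buildings, threshold_db):
    return sum(pop for level, pop in buildings if level > threshold_db)

# (facade noise level in dB(A), residents) for four buildings
buildings = [(52, 120), (63, 80), (70, 45), (58, 200)]
over_threshold = exposed_population(buildings, 68)
```

Exposure by level class works the same way, with one sum per class interval instead of a single threshold test.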
ARTICLE | doi:10.20944/preprints201912.0063.v1
Subject: Engineering, Other Keywords: Software Quality Metrics; closed source software; open source software; Kahane’s Approach; UCP (Use Case Points) model; William’s Models
Online: 5 December 2019 (08:37:56 CET)
The complexity of software is increasing day by day as the size of software projects grows. For better planning and management of large software projects, estimating software quality is important. During development, complexity metrics are used as indicators of the attributes and characteristics of quality software, and there are many studies on the effect of software complexity on cost and quality. In this study, we discuss the effects of software complexity on the quality attributes of open source and closed source software; although the quality metrics for the two are not distinct from each other, we comparatively analyze the impact of complexity metrics on each. We also present various models for managing project complexity, such as William's Model, Stacey's Agreement and Certainty Matrix, Kahane's Approach, and the UCP (Use Case Points) Model. Quality metrics here refer to standards for measuring software quality, covering attributes related to quality such as usability, reliability, security, portability, maintainability, efficiency, cost, standards compliance, and availability. Both open source and closed source software are evaluated on the basis of these quality attributes. The study also recommends future approaches to managing the quality of open source and closed source software projects and notes which of the two is most used in industry.
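The UCP model mentioned above estimates effort-relevant size by scaling unadjusted use-case and actor weights with technical and environmental factors. A minimal sketch using Karner's standard constants (the input weights below are made-up example values, not from the study):

```python
# Use Case Points: UCP = (UAW + UUCW) * TCF * ECF
def technical_complexity_factor(tfactor_sum):
    # Karner: TCF = 0.6 + 0.01 * sum of weighted technical factors
    return 0.6 + 0.01 * tfactor_sum

def environmental_complexity_factor(efactor_sum):
    # Karner: ECF = 1.4 - 0.03 * sum of weighted environmental factors
    return 1.4 - 0.03 * efactor_sum

def use_case_points(uaw, uucw, tfactor_sum, efactor_sum):
    tcf = technical_complexity_factor(tfactor_sum)
    ecf = environmental_complexity_factor(efactor_sum)
    return (uaw + uucw) * tcf * ecf

# 12 actor points, 90 use-case points, example factor sums of 40 and 20
ucp = use_case_points(12, 90, 40, 20)
```

Multiplying the resulting UCP by a productivity factor (hours per point) then yields an effort estimate for planning.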
REVIEW | doi:10.20944/preprints202203.0217.v1
Subject: Engineering, Energy & Fuel Technology Keywords: energy policy; energy conservation; climate change; global safety; open hardware; open source; photovoltaic; renewable energy; solar energy; national security
Online: 15 March 2022 (14:27:35 CET)
Free and open source hardware (FOSH) development has been shown to increase innovation and reduce economic costs. This article reviews the opportunity to use FOSH as a sanction to undercut imports and exports from a target criminal country. A formal methodology is presented for selecting strategic national investments in FOSH development to improve both national security and global safety. In this methodology, first the target country that is threatening national security or safety is identified. Next, the top imports from the target country, to the home nation as well as potentially to other importing countries (allies), are quantified. Hardware is then identified that could undercut imports and exports from the target country. Finally, methods to support FOSH development are enumerated to support production in a commons-based peer-production strategy. To demonstrate how this theoretical method works in practice, it is applied as a case study to the current criminal military aggressor nation, which is also a fossil fuel exporter. The results show there are numerous existing FOSH designs, and opportunities to develop new FOSH, for energy conservation and renewable energy to reduce fossil fuel energy demand. Widespread deployment would reduce the concomitant pollution, human health impacts, and environmental desecration, as well as cut the financing of military operations.
Subject: Engineering, Other Keywords: drying; materials processing; vacuum oven; small-scale; lab equipment; air-powered; open hardware; open source; digital manufacturing; dehydration
Online: 22 April 2021 (09:16:02 CEST)
Vacuum drying can dehydrate materials further than dry heat methods while protecting sensitive materials from thermal degradation. Many industries have shifted to vacuum drying as cost- or time-saving measures. Small-scale vacuum drying, however, has been limited by high costs of specialty scientific tools. To make vacuum drying more accessible, this study provides design and performance information for a small-scale open source vacuum oven, which can be fabricated from off-the-shelf and 3-D printed components. The oven is tested for drying speed and effectiveness on both waste plastic polyethylene terephthalate (PET) and a consortium of bacteria developed for bioprocessing of terephthalate wastes to assist in distributed recycling of PET for both additive manufacturing as well as potential food. Both materials can be damaged when exposed to high temperatures, making vacuum drying a desirable solution. The results showed the open source vacuum oven was effective at drying both plastic and biomaterials, drying at a higher rate than a hot-air dryer for small samples or for low volumes of water. The system can be constructed for less than 20% of commercial vacuum dryer costs for several laboratory-scale applications including dehydration of bio-organisms, drying plastic for distributed recycling and additive manufacturing, and chemical processing.
ARTICLE | doi:10.20944/preprints201905.0060.v1
Subject: Keywords: open source; 3D printing; Drosophila; laser cutter; lab equipment; open labware; fly-pushing; fly pad; fly plate; CO2 anesthesia
Online: 6 May 2019 (11:51:52 CEST)
One of the most important pieces of equipment used in labs for culturing populations of fruit flies (Drosophila sp.) is the “CO2 gas plate”, which is used to anesthetize individuals during “fly-pushing”. This piece of equipment consists of a box with a porous top into which carbon dioxide is pumped. Flies placed on its surface are left immobilized, permitting the sorting, categorizing and/or counting of flies during population culturing and experimental assays. Unfortunately, commercially available gas plates are typically expensive. Here, we describe a new design for a gas plate that can be easily produced using a 3D printer and a laser cutter, which we are making freely available to the fly community.
ARTICLE | doi:10.20944/preprints202011.0325.v1
Subject: Medicine & Pharmacology, Allergology Keywords: single-subject studies; personalized medicine; precision medicine; reference standards; gold standards; biomarkers; open-source
Online: 10 November 2020 (16:36:56 CET)
Background: Developing patient-centric baseline standards that enable the detection of clinically significant outlier gene products on a genome scale remains an unaddressed challenge required for advancing personalized medicine beyond the small pools of subjects implied by “precision medicine”. This manuscript proposes a novel approach to reference standard development for evaluating the accuracy of single-subject analyses of metabolomes, proteomes, or transcriptomes. Since the distributional assumptions of statistical testing may inadequately model the genome dynamics of gene products, the supposedly significant results of previous studies may artefactually conflate with real signals. Model confirmation biases escalate when studies use the same analytical methods in the discovery sets and reference standards, as corroboration of results leads to an evaluation of reproducibility confounded with replicated biases rather than a measure of accuracy. We hypothesized that developing method-agnostic reference standards, using effect-size and expression-level filtering of results obtained from multiple discovery methods distinct from the one evaluated, would maximize the evaluation of clinical-transcriptomic signals and minimize artefactual statistical biases. We developed and released an R package, “referenceNof1”, to facilitate the construction of robust reference standards. Results: Since RNA-Seq data analysis methods range from binomial and negative binomial assumptions to non-parametric analyses, their differences create statistical noise and make the reference standards method-dependent. In our experimental design, the accuracy of 30 distinct combinations of fold changes (FC) and expression levels (EL) was determined for five types of RNA analyses in two distinct datasets: breast cancer cell lines and a yeast study with isogenic biological replicates in two experimental conditions.
In addition, the reference standard (RS) comprised all RNA analytical methods except the method being tested for accuracy. To mitigate biased optimization of the RS parameters towards a specific analytical method, the similarity between the observed results of distinct analytical methods was calculated across all methods (Jaccard Concordance Index). The greatest differences were observed across diametric extremes. For example, filtering out differentially expressed genes (DEGs) with a fold change < 1.2 leads to a 50% increase in concordance between techniques compared to results with FC > 1.2. Combining this FC cutoff with genes whose mean expression is > 30 counts leads to a 65% increase in concordance compared to genes with expression levels < 30 counts and FC < 1.2. Conclusions: We have demonstrated that comparing the accuracies of different single-subject analysis methods for clinical optimization requires a new evaluation framework. Reliable and robust reference standards, independent of the evaluated method, can be obtained under a limited number of parameter combinations: fold change (FC) range thresholds, expression-level cutoffs, and exclusion of the tested method from the RS development process. When applying anticonservative reference standard frameworks (e.g., using the same method for RS development and for prediction), a majority of the concordant signal between prediction and Gold Standard (GS) cannot be confirmed by other methods, which we conclude indicates biased results. Statistical tests to determine DEGs from a single-subject study generate many biased results that require subsequent filtering to increase their reliability. Conventional single-subject studies pertain to one or a few measures in one patient over time and need a substantial extension of their conceptual framework in order to address the tens of thousands of measures in genome-wide analyses of gene products.
The proposed referenceNof1 framework addresses some of the inherent challenges in improving transcriptome scale single-subject analyses by providing a robust approach to constructing reference standards. Github: https://github.com/SamirRachidZaim/referenceNof1
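The Jaccard Concordance Index used above to compare DEG lists from distinct analytical methods is the standard set-overlap ratio; a minimal sketch with toy gene sets (not the study's data):

```python
# Jaccard concordance between two DEG lists: |A ∩ B| / |A ∪ B|.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

degs_method1 = {"BRCA1", "TP53", "MYC"}
degs_method2 = {"TP53", "MYC", "EGFR"}
concordance = jaccard(degs_method1, degs_method2)  # 2 shared of 4 total
```

In the study's framework this index is computed pairwise across all analytical methods, before and after applying the fold-change and expression-level filters, to quantify how much the filters raise between-method concordance.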
ARTICLE | doi:10.20944/preprints201906.0251.v1
Subject: Physical Sciences, Applied Physics Keywords: video microscopy; imaging; automated data acquisition; nanoparticle tracking; measurement embedded applications; open-source software
Online: 25 June 2019 (12:53:50 CEST)
We introduce PyNTA, a modular instrumentation software package for live particle tracking. By using Python's multiprocessing library and the distributed messaging library pyZMQ, PyNTA allows users to acquire images from a camera at close to the maximum readout bandwidth while simultaneously performing computations on each image on a separate processing unit. This publisher/subscriber pattern generates little overhead and leverages the multi-core capabilities of modern computers. We demonstrate the capabilities of the PyNTA package on the featured application of nanoparticle tracking analysis. Real-time particle tracking on megapixel images at a rate of 50 Hz is presented. Reliable live tracking reduces the required storage capacity for particle tracking measurements by a factor of approximately 10^3 compared with raw data storage, allowing a virtually unlimited measurement duration.
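The quoted ~10^3 storage reduction follows from storing per-particle records instead of raw frames; a back-of-envelope check (frame size, particle count, and record layout below are my own assumptions, not figures from the paper):

```python
# Raw storage: one megapixel frame of 16-bit pixels per acquisition.
bytes_per_raw_frame = 1_000_000 * 2

# Tracked storage: a few doubles per detected particle per frame
# (e.g. x, y, intensity for ~100 particles).
particles, floats_per_particle = 100, 3
bytes_per_tracked_frame = particles * floats_per_particle * 8

reduction = bytes_per_raw_frame / bytes_per_tracked_frame
```

With these assumptions the ratio lands in the high hundreds, consistent with the order-of-magnitude figure of 10^3; at 50 Hz, that is the difference between ~100 MB/s of raw video and a few hundred kB/s of trajectories.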
ARTICLE | doi:10.20944/preprints201808.0545.v2
Subject: Engineering, Electrical & Electronic Engineering Keywords: model intercomparison; renewable energy; production cost modeling; security-constrained unit commitment; open-source software
Online: 24 December 2018 (10:55:11 CET)
Background: New open-source electric-grid planning models have the potential to improve power system planning and bring a wider range of stakeholders into the planning process for next-generation, high-renewable power systems. However, it has not yet been established whether open-source models perform similarly to the more established commercial models for power system analysis. This reduces their credibility and attractiveness to stakeholders, postponing the benefits they could offer. In this paper, we report the first model intercomparison between an open-source power system model and an established commercial production cost model. Results: We compare the open-source Switch 2.0 to GE Energy Consulting’s Multi Area Production Simulation (MAPS) for production-cost modeling, considering hourly operation under 17 scenarios of renewable energy adoption in Hawaii. We find that after configuring Switch with similar inputs to MAPS, the two models agree closely on hourly and annual production from all power sources. Comparing production gave a coefficient of determination of 0.996 across all energy sources and scenarios, indicating that the two models agree on 99.6% of the variation. For individual energy sources, the coefficient of determination ranged from 69% to 100%. Conclusions: Although some disagreement remains between the two models, this work indicates that Switch is a viable choice for renewable integration modeling, at least for the small power systems considered here.
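The agreement statistic used above is the standard coefficient of determination; a minimal sketch with toy hourly production data (illustrative numbers, not the study's outputs):

```python
# R^2 = 1 - SS_res / SS_tot, comparing one model's output (predicted)
# against the other's (observed).
def r_squared(observed, predicted):
    mean = sum(observed) / len(observed)
    ss_tot = sum((y - mean) ** 2 for y in observed)
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))
    return 1 - ss_res / ss_tot

# e.g. four hours of production (MW) from two models
r2 = r_squared([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

An R^2 of 0.996 across all sources, as reported, means the residual differences between Switch and MAPS account for only 0.4% of the variation in production.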
ARTICLE | doi:10.20944/preprints202209.0174.v1
Subject: Engineering, Civil Engineering Keywords: open-source; photovoltaic; mechanical design; electric vehicle; solar energy; solar carport; electric vehicle charging station
Online: 13 September 2022 (10:41:43 CEST)
Solar powering the growing fleet of electric vehicles (EVs) demands more surface area than may be available on photovoltaic (PV)-powered buildings. Parking lot solar canopies can provide the needed area to charge EVs but are substantially costlier than roof- or ground-mounted PV systems. To provide a lower-cost PV parking lot canopy that supplies EV charging beneath it, this study provides a full mechanical and economic analysis of three novel PV canopy systems: (1) an exclusively wood, single-parking-spot spanning system; (2) a wood and aluminum double-parking-spot spanning system; and (3) a wood and aluminum cantilevered system for curbside parking. All systems are scalable to any number of EV parking spots. The complete designs and bill of materials (BOM) of the canopies are provided along with basic instructions, and they are released under an open source license that enables anyone to fabricate them. The results found that single-span systems have cost savings of 82%-85%, double-span systems save 43%-50%, and cantilevered systems save 31%-40%. In their first year of operation, the PV canopies can provide 157% of the energy needed to charge the least efficient EV currently on the market, if it is driven the average driving distance in London, ON, Canada.
ARTICLE | doi:10.20944/preprints202106.0046.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: English vocabulary learning; Incidental vocabulary acquisition; Context-aware ubiquitous learning; Ubiquitous Computing; Open-source software
Online: 1 June 2021 (15:24:35 CEST)
Language learners often face communication problems when they need to express themselves but lack the ability to do so. Meanwhile, continuous advances in technology create new opportunities to improve second language (L2) acquisition through context-aware ubiquitous learning (CAUL) technology. Since vocabulary is the foundation of all language acquisition, this article presents ULearnEnglish, an open-source system for ubiquitous English learning focused on incidental vocabulary acquisition. To evaluate the proposal, 15 learners used the developed system and 10 answered a survey based on the Technology Acceptance Model (TAM). Results indicate a favorable response to the use of learner context, including location, to assist learning: ULearnEnglish achieved an acceptance of 78.66% for perceived utility, 96% for perceived ease of use, 86% for user context assessment, and 88% for ubiquity. Among its main contributions, this study demonstrates an opportunity for the use of ubiquity in future language learning research, and further studies can use the available source code to evolve the model and system.
ARTICLE | doi:10.20944/preprints201901.0302.v1
Subject: Earth Sciences, Geoinformatics Keywords: interoperability; digital elevation model; Google Sketchup; geographical information systems-science; free and open source software
Online: 30 January 2019 (05:28:53 CET)
Data creation is often the only way for researchers to produce basic geospatial information for the pursuit of more complex tasks and procedures, such as those that lead to the production of new data for studies concerning river basins, slope morphodynamics, applied geomorphology and geology, urban and territorial planning, and detailed studies in, for example, architecture and civil engineering. This exercise results from a reflection on how specific data processing tasks executed in Google Sketchup (Pro version, 2018) can be used in a context of interoperability with Geographical Information Systems (GIS) software. The focus is the production of contour lines and Digital Elevation Models (DEM) using an innovative sequence of tasks and procedures in both environments (GS and GIS). It starts in the Google Sketchup (GS) graphic interface with the selection of a satellite image of the study area, which can be anywhere on Earth's surface; subsequent processing steps lead to the production of elevation data at the selected scale and equidistance. This new data must be exported to GIS software in vector formats such as Autodesk Drawing (DWG) or Autodesk Drawing Exchange (DXF). In this essay the option was made to use open source GIS software (gvSIG and QGIS). Correcting the original SHP by removing the "data noise" that resulted from DXF file conversion permits the author to create new clean vector data in SHP format and, at a later stage, generate DEM data. This means that new elevation data becomes available using simple but intuitive and interoperable procedures and techniques, which configures a costless workflow.
ARTICLE | doi:10.20944/preprints201901.0029.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Android; arduino; bluetooth; hand-gesture recognition; low cost; open source; sensors; smart cars; speech recognition
Online: 3 January 2019 (14:32:23 CET)
Gesture recognition has always been a technique to decrease the distance between the physical and the digital world. In this work, we introduce an Arduino-based vehicle system that no longer requires manual control of the car. The proposed work is achieved by utilizing an Arduino microcontroller, an accelerometer, an RF sender/receiver, and Bluetooth. Two main contributions are presented. First, we show that the car can be controlled with hand gestures according to the movement and position of the hand. Second, the proposed car system is further extended to be controlled by an Android-based mobile application with different modes (e.g., touch-button mode, voice recognition mode). In addition, an automatic obstacle detection system is introduced to improve safety and avoid hazards. The proposed systems are designed as lab-scale prototypes to experimentally validate their efficiency, accuracy, and affordability. We remark that the proposed systems can be implemented under real conditions at large scale in the future, which will be useful in automobile and robotics applications.
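The hand-gesture control described above boils down to mapping accelerometer tilt readings to drive commands before sending them over the RF link; a minimal sketch (the thresholds, axis convention, and command names are my own assumptions, not the paper's calibration):

```python
# Map normalized accelerometer tilt (ax, ay in units of g) to a
# drive command; a dead zone around level keeps the car stopped.
def gesture_to_command(ax, ay, threshold=0.5):
    if ay > threshold:
        return "FORWARD"
    if ay < -threshold:
        return "BACKWARD"
    if ax > threshold:
        return "RIGHT"
    if ax < -threshold:
        return "LEFT"
    return "STOP"        # hand held level: dead zone

cmd = gesture_to_command(0.1, 0.8)   # hand tilted forward
```

On the actual hardware this decision would run in the Arduino sketch's loop, with the chosen command transmitted to the car's receiver each cycle.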
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: 3-D printing; additive manufacturing; distributed manufacturing; distributed recycling; granulator; shredder; open hardware; fab lab; open-source; polymers; recycling; waste plastic; extruder; upcycle; circular economy
Online: 1 September 2019 (08:25:03 CEST)
In order to accelerate deployment of distributed recycling by providing low-cost feedstocks of granulated post-consumer waste plastic, this study analyzes an open source waste plastic granulator system. It is designed, built, and tested for its ability to convert post-consumer waste and 3-D printed products into polymer feedstock for recyclebots and fused particle/granule printers. The technical specifications of the device are quantified in terms of power consumption (380 W and 404 W for PET and PLA, respectively) and particle size distribution. The open source device can be fabricated for less than USD $2,000 in materials. The experimentally measured power use is only a minor contribution to the overall embodied energy of distributed recycling of waste plastic. The resultant plastic particle size distributions were found to be appropriate for use in both recyclebots and direct material extrusion 3-D printers. Simple retrofits are shown to reduce sound levels during operation by 4 dB-5 dB for the vacuum. These results indicate that the open source waste plastic granulator is an appropriate technology for community, library, makerspace, fab lab, or small-business-based distributed recycling.
ARTICLE | doi:10.20944/preprints201811.0087.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: 3-D printing; additive manufacturing; distributed manufacturing; open-source; polymers; recycling; waste plastic; extruder; upcycle; circular economy
Online: 5 November 2018 (07:45:36 CET)
Although distributed additive manufacturing can provide high returns on investment, the current markup on commercial filament over base polymers limits deployment. These cost barriers can be surmounted by eliminating the entire process of fusing filament and instead 3-D printing products directly from polymer granules. Fused granular fabrication (FGF), also called fused particle fabrication (FPF), is being held back in part by the limited accessibility of low-cost pelletizers and choppers. An open-source 3-D printable invention disclosed here provides precise, controlled pelletizing of both single thermopolymers and composites for 3-D printing. The system is designed, built, and tested for its ability to provide high-tolerance thermopolymer pellets in a range of sizes capable of being used in an FGF printer. In addition, the chopping pelletizer is tested for its ability to chop multiple materials simultaneously for color mixing and composite fabrication, as well as for precise fractional measurement back into filament. The US$185 open-source 3-D printable pelletizer/chopper system was successfully fabricated and has a throughput of 0.5 kg/hr with one motor and 1.0 kg/hr with two motors, using only 0.24 kWh/kg during the chopping process. Pellets were successfully printed directly via FGF and indirectly after being converted into high-tolerance filament in a recyclebot.
ARTICLE | doi:10.20944/preprints201709.0145.v1
Subject: Engineering, Mechanical Engineering Keywords: open source; FEA; finite element analysis; linear static structural; Code Aster; Salome Meca; Mecway; SimScale; Z88; CAE
Online: 28 September 2017 (14:58:31 CEST)
The aim of this work was to determine if the development of low-cost or no-cost finite element analysis (FEA) software has advanced to the point where it can be used in place of trusted commercial FEA packages for linear static structural analyses using isotropic material models. Nonlinear structural analysis will be covered in a separate paper. Several suitable packages were identified; these underwent a process of systematic elimination when they were unable to meet the minimum imposed qualitative criteria. Three packages were chosen to be subjected to performance benchmarking, namely: Code_Aster/Salome Meca, Mecway, and Z88 Aurora. SimScale, a browser-based analysis package, was included as well because it met all the baseline criteria and has the potential to offer a completely cloud-based approach to computer aided engineering, potentially reshaping the way an engineering business views its operational capabilities. This paper presents the test cases and simulation results for packages that fall under the linear static structural analysis type.
REVIEW | doi:10.20944/preprints201905.0302.v1
Subject: Social Sciences, Economics Keywords: open science; open access; open data; economic impacts
Online: 27 May 2019 (11:19:59 CEST)
A common motivation for increasing open access to research findings and data is the potential to create economic benefits, but evidence is patchy and diverse. This study systematically reviewed the evidence on what kinds of economic impacts (positive and negative) open science can have, how these come about, and how benefits could be maximized. Use of open science outputs often leaves no obvious trace, so most evidence of impacts is based on interviews, surveys, inference based on existing costs, and modelling approaches. There is indicative evidence that open access to findings/data can lead to savings in access costs, labour costs and transaction costs. There are examples of open science enabling new products, services, companies, research and collaborations. Modelling studies suggest higher returns to R&D if open access permits greater accessibility and efficiency of use of findings. Barriers include lack of skills capacity in search, interpretation and text mining, and lack of clarity around where benefits accrue. There are also contextual considerations around who benefits most from open science (e.g. sectors, small vs larger companies, types of dataset). Recommendations captured in the review include more research, monitoring and evaluation (including developing metrics), promoting benefits, capacity building and making outputs more audience-friendly.
LETTER | doi:10.20944/preprints201607.0042.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Thevenin; Norton; voltage source; current source
Online: 15 July 2016 (11:46:17 CEST)
Power-conservative Thevenin-Norton and Norton-Thevenin transformations are proposed in this letter. The transformations introduce a voltage generator and a current generator whose parameters depend on the loading impedance value.
Subject: Social Sciences, Library & Information Science Keywords: bioeconomy; open science; open access
Online: 30 October 2020 (14:45:27 CET)
The purpose of this paper is to assess the degree of openness of scientific articles on bioeconomy. Based on a WoS corpus of 2,489 articles published between 2015 and 2019, we calculated bibliometric indicators, explored the openness of each paper and assessed the share of journals, countries and research areas of these articles. The results show a sharp increase and diversification of articles in the field of bioeconomy, with an emerging long-tail distribution. 45.6% of the articles are freely available, and the share of OA papers is steadily increasing, from 31% in 2015 to 52% in 2019. Gold is the most important variant of OA. Open access is low in the applied research areas of chemical, agricultural and environmental engineering but higher in the domains of energy and fuels, forestry, and green and sustainable science and technology. The UK and the Netherlands have the highest rates of OA papers, followed by Spain and Germany. The funding rate of OA papers is higher than that of non-OA papers. This is the first bibliometric study on open access to articles on bioeconomy. The results can be useful for the further development of OA editorial and funding criteria in the field of bioeconomy.
CASE REPORT | doi:10.20944/preprints201905.0166.v1
Subject: Social Sciences, Library & Information Science Keywords: Open Annotation; Monographs; Open Access; Higher Education; Open Peer Review
Online: 14 May 2019 (10:03:41 CEST)
The digital format opens up new possibilities for interaction with monographic publications. In particular, annotation tools make it possible to broaden the discussion on the content of a book, to suggest new ideas, to report errors or inaccuracies, and to conduct open peer reviews. However, this requires the support of users who might not yet be familiar with the annotation of digital documents. This paper gives concrete examples and recommendations for exploiting the potential of annotation in academic research and teaching. After presenting the annotation tool Hypothesis, the article focuses on its use in the context of HIRMEOS (High Integration of Research Monographs in the European Open Science Infrastructure), a project aimed at improving Open Access digital monographs. The general outline and aims of a post-peer-review experiment with the annotation tool, as well as its usage in didactic activities concerning monographic publications, are presented and proposed as potential best practices for similar annotation activities.
DATA DESCRIPTOR | doi:10.20944/preprints202209.0323.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: COVID-19; Open-source dataset; Drug Repurposing; Database system; Web application development; software development; Drug fingerprints; Bulk upload
Online: 21 September 2022 (10:14:11 CEST)
Although various vaccines are now commercially available, they have not been able to stop the spread of COVID-19 infection completely. An excellent strategy to quickly obtain safe, effective, and affordable COVID-19 treatment is to repurpose drugs that are already approved for other diseases as adjuvants alongside the ongoing vaccine regime. The process of developing an accurate and standardized drug repurposing dataset requires a considerable level of resources and expertise due to the commercial availability of an extensive array of drugs that could potentially be used to address the SARS-CoV-2 infection. To address this bottleneck, we created the CoviRx platform. CoviRx is a user-friendly interface that provides access to manually curated COVID-19 drug repurposing data. Through CoviRx, the curated data have been made open source to help advance drug repurposing research. CoviRx also encourages users to submit their findings after thorough validation; the submitted data are then merged under uniformity- and integrity-preserving constraints. This article discusses the various features of CoviRx and its design principles. CoviRx has been designed so that its functionality is independent of the data it displays. Thus, in the future, the platform can be extended to include any other disease X beyond COVID-19. CoviRx can be accessed at www.covirx.org.
ARTICLE | doi:10.20944/preprints202011.0282.v1
Subject: Social Sciences, Accounting Keywords: Open Research Data; Open Peer Review; medicine; health sciences; Open Science; Open Access; health scientists; FAIR
Online: 9 November 2020 (16:02:24 CET)
In recent years, significant initiatives have been launched for the dissemination of Open Access as part of the Open Science movement. Nevertheless, the other major pillars of Open Science, such as Open Research Data (ORD) and Open Peer Review (OPR), are still in an early stage of development among the communities of researchers and stakeholders. The present study sought to unveil the perceptions of a medical and health sciences community about these issues. Through the investigation of researchers' attitudes, valuable conclusions can be drawn, especially in the field of medicine and health sciences, where scientific publishing is growing explosively. A quantitative survey was conducted based on a structured questionnaire, with a 51.8% response rate (215 responses out of 415 electronic invitations). The participants in the survey agreed with the ORD principles. However, they were unfamiliar with basic terms such as FAIR (Findable, Accessible, Interoperable and Reusable) and appeared hesitant to permit the exploitation of their data. Regarding OPR, participants expressed their agreement, implying their interest in a trustworthy evaluation system. In conclusion, researchers urgently need proper training in both ORD principles and OPR processes, which, combined with a reformed evaluation system, will enable them to take full advantage of the opportunities that arise from the new scholarly publishing and communication landscape.
ARTICLE | doi:10.20944/preprints201808.0233.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: 3-D printing; circuit milling; circuit design; distributed manufacturing; electronics; electronics prototyping; free and open-source hardware; P2P; P2P manufacturing
Online: 13 August 2018 (16:42:54 CEST)
Barriers to inventing electronic devices involve challenges of iterating electronic designs due to long lead times for professional circuit board milling or the high costs of commercial milling machines. To overcome these barriers this study provides open source (OS) designs for a low-cost circuit milling machine. First, design modifications for mechanical and electrical sub-systems of the OS D3D Robotics prototyping system are provided. Next, Copper Carve, an OS custom graphical user interface, is developed to enable circuit board milling by implementing backlash and substrate distortion compensation. The performance of the OS D3D circuit mill is then quantified and validated for positional accuracy, cut quality, feature accuracy, and distortion compensation. Finally, the return on investment is calculated for inventors using it. The results show that by properly compensating for motion inaccuracies with Copper Carve, the machine achieves a motion resolution of 10 microns, which is more than adequate for most circuit designs. The mill is at least five times less expensive than all commercial alternatives, and the material costs of the D3D mill are repaid after fabricating 20-43 boards. The results show that the OS circuit mill is of high enough quality to enable rapid invention and distributed manufacturing of complex products containing custom electronics.
ARTICLE | doi:10.20944/preprints202008.0570.v1
Subject: Biology, Agricultural Sciences & Agronomy Keywords: apothecium; ascospores; sclerotium formation; carbon source; fungicides; resistance source
Online: 26 August 2020 (09:01:24 CEST)
A new disease causing tan to light-brown blighted stems and pods occurred in 2.6% of pea (Pisum sativum L.) plants, with an average disease severity rating of 3.7, in Chapainawabganj district, Bangladesh. A fungus with white appressed mycelia and large sclerotia was consistently isolated from symptomatic tissues. The fungus formed funnel-shaped apothecia with sac-like asci and endogenously formed ascospores. Healthy pea plants inoculated with the fungus produced typical white mold symptoms. The internal transcribed spacer sequences of the fungus were 100% identical to those recovered from an epitype of Sclerotinia sclerotiorum, identifying this fungus as the causative agent of white mold. Mycelial growth and sclerotial development of S. sclerotiorum were favored at 20°C and pH 5.0. Glucose was the best carbon source for supporting hyphal growth and sclerotia formation. Bavistin and Amistar Top completely inhibited the radial growth of the fungus at the lowest concentration. In planta, foliar application of Amistar Top at 1.0% concentration showed considerable potential to control the disease until 7 days after spraying, while Bavistin prevented infection significantly until 15 days after spraying. A large majority (70.93%) of genotypes, including tested released pea cultivars, were susceptible, while six genotypes (6.98%) appeared resistant to the disease. These results could be important for management strategies aiming to control the incidence of S. sclerotiorum and eliminate yield loss in pea.
ARTICLE | doi:10.20944/preprints201805.0284.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: dual two-level voltage source inverter; common-mode voltage; discontinuous space vector modulation schemes; centralizing pulse width modulation; open-end load
Online: 22 May 2018 (05:11:06 CEST)
Popular motor drive systems with a single two-level voltage source inverter (VSI) have one main problem: the occurrence of common-mode voltage (CMV), which gives rise to electromagnetic interference, shaft voltage, bearing currents, and leakage current. These cause high stress, increasing temperature, and early mechanical failure in the machine. To overcome this problem, the technology of the dual two-level VSI feeding open-end three-phase AC loads is now available to eliminate the CMV at the AC/induction motor load, with the 120-degree modulation technique controlling each inverter. In this paper, discontinuous space vector modulation (DSVM) schemes are proposed and applied to the dual two-level VSI fed open-end load. The approach is based on the 120-degree modulation technique, using only 12 active voltage vectors and 10 zero voltage vectors out of the total 64 voltage vectors, along with different five-segment switching sequence designs and a centralizing pulse width modulation technique, in order not only to cancel the CMV in the AC load but also to reduce the switching count and switching loss of the conversion system. The performances of the various DSVM schemes are compared in this paper, including the number of switchings, the step and peak value of the CMV in each inverter, and the quality of the output waveform. The verification and comparison are carried out by simulation using Matlab/Simulink software.
ARTICLE | doi:10.20944/preprints201801.0139.v1
Subject: Earth Sciences, Other Keywords: data logger; environmental monitoring network; open source; submersible; under-water; critical zone observatory; cave; Yucatan Peninsula, vadose hydrology; subterranean karst estuary
Online: 16 January 2018 (10:40:15 CET)
A low-cost data logging platform is presented for environmental monitoring projects that provides long-term operation in remote or submerged environments. Three premade "breakout boards" from the open-source Arduino ecosystem are assembled into the core of the platform. The components are selected based on low cost and ready availability, making the loggers easy to build and modify without specialized tools or a significant background in electronics. Power optimization techniques are explained. The platform has proven to be highly reliable and capable of operating for more than a year on standard AA batteries. The flexibility of the system is illustrated with two ongoing field studies: recording drip rates in a cave, and water flow in a flooded cave system.
ARTICLE | doi:10.20944/preprints202010.0132.v1
Subject: Engineering, Energy & Fuel Technology Keywords: sector coupling; gas grid; district heating grid; grid simulation; network analysis; grid operation; open source; multi-energy grids; energy supply; infrastructure design
Online: 6 October 2020 (14:48:08 CEST)
The increasing complexity of the design and operation evaluation process of multi-energy grids (MEGs) requires tools for the coupled simulation of power, gas and district heating grids. Most tools analyzed in this paper either do not allow coupling of infrastructures, simplify the grid model or are not publicly available. We introduce the open source piping grid simulation tool pandapipes, which, in interaction with pandapower, fulfills three crucial criteria: a clear data structure, adaptable MEG model setup, and performance. In an introduction to pandapipes, we illustrate how it fulfills these criteria through its internal structure and demonstrate how it performs in comparison to STANET®. We then present two case studies that have already been performed with pandapipes. The first case study demonstrates a peak shaving strategy as an interaction of a local electricity and district heating grid in a small settlement. The second case study analyzes the potential of a power-to-gas device to serve as flexibility in a power grid under consideration of gas grid constraints. Both show the importance of a clear database, a simple simulation setup, and good performance for setting up different large and complex studies on grid infrastructure design and operation.
Subject: Behavioral Sciences, Applied Psychology Keywords: multimodal experiment; multisensory experiment; automatic device integration; open-source; PsychoPy; Unity; Virtual Reality (VR); Lab Streaming Layer; LabRecorder; LabRecorderCLI; Windows command line (cmd.exe)
Online: 12 October 2020 (07:06:28 CEST)
The human mind is multimodal. Yet most behavioral studies rely on century-old measures of behavior: task accuracy and latency (response time). Multimodal and multisensory analysis of human behavior creates a better understanding of how the mind works. The problem is that designing and implementing these experiments is technically complex and costly. This paper introduces versatile and economical means of developing multimodal-multisensory human experiments. We provide an experimental design framework that automatically integrates and synchronizes measures including electroencephalogram (EEG), galvanic skin response (GSR), eye-tracking, virtual reality (VR), body movement, mouse/cursor motion, and response time. Unlike proprietary systems (e.g., iMotions), our system is free and open source; it integrates PsychoPy, Unity, and Lab Streaming Layer (LSL). The system embeds LSL inside PsychoPy/Unity to synchronize multiple sensory signals (gaze motion, EEG, GSR, mouse/cursor movement, and body motion) with low-cost consumer-grade devices in a simple behavioral task designed in PsychoPy and a virtual reality environment designed in Unity. This tutorial shows a step-by-step process by which a complex multimodal-multisensory experiment can be designed and implemented in a few hours. When conducting the experiment, all of the data synchronization and recording of the data to disk will be done automatically.
EDITORIAL | doi:10.20944/preprints201605.0001.v1
Subject: Keywords: Preprints; Open Science
Online: 3 May 2016 (14:43:02 CEST)
Preprints is a multidisciplinary preprint platform that makes scientific manuscripts from all fields of research immediately available at www.preprints.org. Preprints is a free (not-for-profit) open access service supported by MDPI in Basel, Switzerland.
ARTICLE | doi:10.20944/preprints201909.0122.v1
Subject: Medicine & Pharmacology, Other Keywords: open health; simple rules; ethics; reproducibility; research significance; open science
Online: 11 September 2019 (13:27:26 CEST)
We are witnessing a dramatic transformation in the way we do science. In recent years, significant flaws with existing scientific methods have come to light, including lack of transparency, insufficient involvement of stakeholders, disconnection from the public, and limited reproducibility of research findings. These concerns have sparked a global movement to revolutionize scientific practice and the emergence of Open Science. This new approach to science extends principles of openness to the entire research cycle, from hypothesis generation to data collection, analysis, replication, and translation from research to practice. Open Science seeks to remove all barriers to conducting high quality, rigorous, and impactful scientific research by ensuring that the data, methods, and opportunities for collaboration are open to all. Emerging digital technologies and "big data" (see "Ten simple rules for responsible big data research") have further accelerated the Open Science movement by affording new approaches to data sharing, connecting researcher networks, and facilitating the dissemination of research findings. Open scientific practices are also having a profound impact on the health sciences and medical research, and specifically how we conduct clinical research with human participants. Human health research necessitates careful considerations for practicing science in an ethical manner. There is also a particular urgency to human health research since the goal is to help people, so doing good science takes on a different meaning than simply doing science well. It also implores the scientist to reassess the conventional view of human health research as a pursuit conducted by scientists on human subjects, and lays a greater emphasis on inclusive and ethical practices to ensure that the research takes into account the interests of those who would be most impacted by the research. 
Openness in the context of human health research also raises greater concerns about privacy and security and presents more opportunities for people, including participants of research studies, to contribute in every capacity. At the core of open health research, scientific discoveries are not only the product of collaboration across disciplines, but must also be owned by the community that is inclusive of researchers, health workers, and patients and their families. To guide successful open health research practices, it is essential to carefully consider and delineate its guiding principles. This editorial is aimed at individuals participating in health science in any capacity, including but not limited to people living with medical conditions, health professionals, study participants, and researchers spanning all types of disciplines. We present ten simple rules that, while not comprehensive, offer guidance for conducting health research with human participants in an open, ethical, and rigorous manner. These rules can be difficult, resource-intensive, and can conflict with one another. They are aspirational and are intended to accelerate and improve the quality of human health research. Work that fails to follow these rules is not necessarily an indication of poor quality research, especially if the reasons for breaking the rules are considered and articulated (see rule 6: document everything). While most of the responsibility of following these rules falls on researchers, anyone involved in human health research in any capacity can apply them.
ARTICLE | doi:10.20944/preprints201905.0098.v2
Subject: Mathematics & Computer Science, Other Keywords: open review; open science; zero-blind review; peer review; methodology
Online: 16 August 2019 (05:27:55 CEST)
We present a discussion and analysis regarding the benefits and limitations of open and non-anonymized peer review based on literature results and responses to a survey on the reviewing process of alt.chi, a more or less open-review track within the CHI conference, the predominant conference in the field of human-computer interaction (HCI). This track currently is the only implementation of an open-peer-review process in the field of HCI while, with the recent increase in interest in open science practices, open review is now being considered and used in other fields. We collected 30 responses from alt.chi authors and reviewers and found that, while the benefits are quite clear and the system is generally well liked by alt.chi participants, they are reluctant to see it used in other venues. This concurs with a number of recent studies that suggest a divergence between support for a more open review process and its practical implementation. The data and scripts are available on https://osf.io/vuw7h/, and the figures and follow-up work on http://tiny.cc/OpenReviews.
ARTICLE | doi:10.20944/preprints202010.0390.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: High step-up converter; Impedance source converter; Z-source converter; Cascaded technique
Online: 19 October 2020 (14:54:47 CEST)
To improve the voltage gain of step-up converters, the cascaded technique is considered as a possible solution in this paper. By cascading two Z-source networks in a conventional boost converter, the converter takes advantage of both impedance source and cascaded converters. Moreover, with some modifications, the proposed converter provides high voltage gain while the voltage stress on the switch and diodes remains low. In addition, the low input current ripple of the converter makes it particularly appropriate for photovoltaic applications, extending the lifetime of PV panels. After analyzing the operating principles of the proposed converter, simulation and experimental results of a 100 W prototype are presented to verify its performance.
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: linguistic knowledge; source language; neural machine translation (NMT); low-resource; multi-source NMT
Online: 2 March 2020 (15:28:34 CET)
Exploiting the linguistic knowledge of the source language for neural machine translation (NMT) has recently achieved impressive performance on many large-scale language pairs. However, since the Turkish→English machine translation task is low-resource and the source-side Turkish is morphologically rich, there are limited bilingual corpora and linguistic resources available to further improve NMT performance. Focusing on these issues, we propose a multi-source NMT approach that models the word feature in parallel with external linguistic features, using two separate encoders to explicitly incorporate linguistic knowledge into the NMT model. We extend the word embedding layer of the knowledge-based encoder to accommodate each word's linguistic annotations in context. Moreover, we share all parameters across encoders to enhance the representation ability of the NMT model on the source language. Experimental results show that our proposed approach achieves substantial improvements of up to 2.4 and 1.1 BLEU scores on the Turkish→English and English→Turkish machine translation tasks, respectively, which points to a promising way to utilize source-side linguistic knowledge for low-resource NMT.
ARTICLE | doi:10.20944/preprints202202.0088.v1
Subject: Medicine & Pharmacology, Sport Sciences & Therapy Keywords: Public open spaces; Open streets; Built environment; Leisure-time physical activity; Epidemiology
Online: 7 February 2022 (13:02:09 CET)
Leisure-time physical activity (LTPA) is associated with access to and use of public open spaces. The "President João Goulart Elevated Avenue", currently denominated "Minhocão", is a facility for leisure activities that is open to people during nights and weekends. The aim of this study was to examine whether the prevalence of LTPA among individuals living in the surroundings of Minhocão differs according to proximity to, and use of, the facility. We conducted a cross-sectional study with cluster sampling of people aged ≥18 years who lived in households within 500 m, and between 501 m and 1,500 m, of Minhocão. The survey was conducted from December 2017 to March 2019 using a self-administered electronic questionnaire. We conducted bivariate analysis and Poisson regression to examine possible differences in LTPA according to proximity of residences to, and use of, Minhocão. The analysis used post-stratification weights. A total of 12,030 telephone numbers were drawn (≤500 m = 6,942; >500 m to ≤1,500 m = 5,088). The final sample analyzed consisted of 235 residents who returned the questionnaires. There was a higher prevalence of individuals engaging in at least 150 minutes per week of LTPA among users than non-users (Prevalence Ratio = 2.23, 95% CI 1.72-2.90). People who used the park had a higher prevalence of all types of LTPA than non-users. The results can serve to inform government decision-making on the future of Minhocão.
CONCEPT PAPER | doi:10.20944/preprints202009.0320.v1
Subject: Keywords: reciprocal personal/public protection; mask discriminating mouth and nose; mouth cover; mask; face covering; source control; source classification; Covid-19; active source; liquid droplets
Online: 14 September 2020 (11:45:27 CEST)
Reciprocal Personal/Public Protection (RPPP) featuring source control is introduced, and a Mask Discriminating Mouth and Nose (MDMN), a polymer-based mouth cover with an optional nose cover, is employed to serve this purpose. MDMN builds on the now well-established knowledge that the mouth is a primary, active, and dominant source of the virus. Source classification and related source control tools are discussed, and the mouth cover is recommended as the first-priority tool. Liquid droplets are identified as a hard problem for masks; liquid droplets, mask fitting, comfort, and facial recognition constitute real challenges for masks in addition to efficiency, and all of these are addressed with MDMN. MDMN and masks/face coverings are compared on four aspects: efficiency and efficacy, tolerance and comfort, cost and waste, and civil rights and public interest. The mouth cover is recommended to replace the face covering and to act as both a personal tool and a public utensil; a mouth cover with a nose cover can provide better protection than an N95 respirator. RPPP with MDMN could be an alternative to lockdown, a strategy parallel to vaccination, and a collective way of living during the pandemic era. MDMN, featuring high-efficiency protection, a high degree of comfort, easy wearing, tight fitting, easy facial recognition and communication, reusability, cost-effectiveness, environmental friendliness, and more readily scalable manufacturing, is a simple and sustainable solution that ordinary people can keep wearing properly for protection.
ARTICLE | doi:10.20944/preprints202001.0240.v1
Subject: Social Sciences, Library & Information Science Keywords: openness under neoliberalism; open-access licensing in capitalism; the politics of open-licensing
Online: 21 January 2020 (11:00:41 CET)
The terms 'open' and 'openness' are widely used across the current higher education environment particularly in the areas of repository services and scholarly communications. Open-access licensing and open-source licensing are two prevalent manifestations of open culture within higher education research environments. As theoretical ideals, open-licensing models aim at openness and academic freedom. But operating as they do within the context of global neoliberalism, to what extent are these models constructed by, sustained by, and co-opted by neoliberalism? In this paper, we interrogate the use of open-licensing within scholarly communications and within the larger societal context of neoliberalism. Through synthesis of various sources, we will examine how open access licensing models have been constrained by neoliberal or otherwise corporate agendas, how open access and open scholarship have been reframed within discourses of compliance, how open-source software models and software are co-opted by politico-economic forces, and how the language of 'openness' is widely misused in higher education and repository services circles to drive agendas that run counter to actually increasing openness. We will finish by suggesting ways to resist this trend and use open-licensing models to resist neoliberal agendas in open scholarship.
ARTICLE | doi:10.20944/preprints202112.0099.v1
Online: 7 December 2021 (11:30:56 CET)
Forest recreation can be used successfully for psychological relaxation and as a remedy for common stress-related problems. A special form of forest recreation intended for restoration is forest bathing. These activities can be disrupted by factors such as viewing buildings in the forest or using a computer in nature, which interrupt psychological relaxation. One such factor is encountering an open dump in the forest during an outdoor experience. To test the hypothesis that an open dump decreases psychological relaxation, a case study was planned using a randomized, controlled crossover design. Two groups of healthy young adults viewed a control forest and a forest with an open dump in reverse order and completed psychological questionnaires after each stimulus; a pretest was also administered. Participants wore opaque eye patches to block visual stimulation before the experimental stimulation, and the physical environment was monitored. The results were analyzed using two-way repeated-measures ANOVA. The measured negative psychological indicators increased significantly after viewing the forest with waste: five indicators of the Profile of Mood States increased (Tension-Anxiety, Depression-Dejection, Anger-Hostility, Fatigue, and Confusion), and the negative aspect of the Positive and Negative Affect Schedule increased in comparison to the control and pretest. The measured positive indicators decreased significantly after viewing the forest with waste: the positive aspect of the Positive and Negative Affect Schedule decreased, as did the Restorative Outcome Scale and Subjective Vitality scores (in comparison to the control and pretest). The occurrence of an open dump in the forest may therefore interrupt a normal restorative forest experience by reducing psychological relaxation.
Nevertheless, the mechanism underlying these effects is not yet known and will be investigated further. A future study should also quantify the impact of open dumps on normal everyday experiences. Different mechanisms might be responsible for these reactions; however, the aim of this manuscript is only to measure the reaction itself. The psychological reasons behind these mechanisms can be assessed in further studies.
ARTICLE | doi:10.20944/preprints201901.0238.v1
Online: 23 January 2019 (10:15:00 CET)
In December 2012, DOAJ's parent company, IS4OA, announced that it would introduce new criteria for inclusion in DOAJ, that DOAJ would collect vastly more information from journals as part of the accreditation process, and that journals already included would need to reapply in order to remain in the registry. My hypothesis was that the journals removed from DOAJ on 9 May 2016 would chiefly be journals from small publishers (mostly single-journal publishers) and that DOAJ journal metadata would reveal that they had a lower level of publishing competence than those remaining in DOAJ. Indicators of publishing competence include the use of APCs, permanent article identifiers, journal licenses, article-level metadata deposited with DOAJ, archiving policies/solutions, and/or having a policy in SHERPA/RoMEO. The analysis shows my concerns to be correct.
CONCEPT PAPER | doi:10.20944/preprints201707.0095.v1
Online: 31 July 2017 (16:05:17 CEST)
Chemistry is the last natural science discipline to embrace prepublishing, namely the publication of non-peer-reviewed scientific articles on the internet. After a brief look at the origins and purpose of prepublishing in science, we analyze the current situation, aiming to answer several questions. Why has the chemistry community been late in embracing prepublishing? Is this related to the same community's slow acceptance of open access publishing? Will prepublishing also become a common habit for chemistry scholars?
ARTICLE | doi:10.20944/preprints201807.0114.v1
Subject: Materials Science, Biomaterials Keywords: biomass; briquette; combustion; density; energy source
Online: 6 July 2018 (09:48:02 CEST)
This study investigated the physical and combustion properties of briquettes produced from agricultural wastes (groundnut shells and corn cobs), wood residues (Anogeissus leiocarpus), and admixtures of the particles at 15%, 20%, and 25% starch (binder) levels. A 6 × 3 factorial experiment in a Completely Randomized Design (CRD) was adopted for the study. The briquettes produced were analyzed for density, volatile matter, ash content, fixed carbon, and specific heat of combustion. The results revealed that density ranged from 0.44 g/cm3 to 0.53 g/cm3, with briquettes produced from groundnut shells having the highest mean density (0.53 g/cm3). Mean volatile matter and ash content of the briquettes ranged from 24.35% to 34.95% and from 3.37% to 4.91%, respectively; A. leiocarpus and corn cob particles had the lowest and highest ash content, respectively. Fixed carbon and specific heat of combustion ranged from 61.68% to 68.97% and from 7362 kcal/kg to 8222 kcal/kg, respectively, with briquettes produced from A. leiocarpus particles having the highest specific heat of combustion. In general, briquettes produced from A. leiocarpus particles, and from an admixture of groundnut shell and A. leiocarpus particles at the 25% starch level, had better quality in terms of density and combustion properties and are thus suitable as an environmentally friendly alternative energy source.
ARTICLE | doi:10.20944/preprints202105.0543.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Blind Source Separation (BSS), Minimum Mean Square Error (MMSE), convolutive mixture, source Prior, generalized Gaussian distribution
Online: 24 May 2021 (08:50:37 CEST)
This paper proposes a novel, efficient multistage algorithm to extract source speech signals from a noisy convolutive mixture. The proposed approach comprises two stages: Blind Source Separation (BSS) and de-noising. In the BSS stage, a hybrid source-prior model separates the source signals from the noisy reverberant mixture; the low- and high-energy components are modeled by generalized multivariate Gaussian and super-Gaussian distributions, respectively. In the de-noising stage, Minimum Mean Square Error (MMSE) filtering reduces the noise in the convolutive mixture signal. Two arrangements of these stages are investigated. In the first, the speech signals are separated from the observed noisy convolutive mixture in the BSS stage, and noise in the estimated source signals is then suppressed in the de-noising module. In the second, noise in the received convolutive mixture is first reduced by MMSE filtering in the de-noising stage, and the source signals are then separated from the de-noised reverberant mixture in the BSS stage. We evaluate the performance of the proposed scheme in terms of signal-to-distortion ratio (SDR) against other well-known multistage BSS methods; the results show the superior performance of the proposed algorithm over state-of-the-art methods.
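The MMSE de-noising stage can be illustrated with a minimal spectral Wiener-gain sketch (the linear-MMSE filter under known power spectra). The signal, noise level, and oracle spectra below are illustrative assumptions, not the authors' implementation, which operates on reverberant speech mixtures:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n) / n
clean = np.sin(2 * np.pi * 12 * t)        # stand-in for a source speech signal
noise = 0.5 * rng.standard_normal(n)
noisy = clean + noise

# Linear-MMSE (Wiener) gain per frequency bin: H = S_xx / (S_xx + S_nn).
# The clean and noise spectra are taken as oracle values purely for
# illustration; a real system must estimate them from the noisy mixture.
X = np.fft.rfft(noisy)
S_xx = np.abs(np.fft.rfft(clean)) ** 2
S_nn = np.abs(np.fft.rfft(noise)) ** 2
H = S_xx / (S_xx + S_nn)
denoised = np.fft.irfft(H * X, n=n)

err_before = np.mean((noisy - clean) ** 2)    # MSE of the raw mixture
err_after = np.mean((denoised - clean) ** 2)  # MSE after Wiener filtering
```

Because the gain approaches 1 where the signal dominates and 0 where noise dominates, the filtered output has a much lower mean squared error than the raw mixture.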
Subject: Engineering, Electrical & Electronic Engineering Keywords: High Voltage Direct Current transmission (HVDC); Multi-terminal HVDC; Current Source Converter (CSC); Voltage Source Converter (VSC)
Online: 19 July 2020 (20:28:43 CEST)
There is a growing use of High Voltage Direct Current (HVDC) globally due to the many advantages of Direct Current (DC) transmission systems over Alternating Current (AC) transmission, including enabling transmission over long distances, higher transmission capacity and efficiency. Moreover, HVDC systems can be a great enabler in the transition to a low carbon electrical power system which is an important objective in today’s society. The objectives of the paper are to give a comprehensive overview of HVDC technology, its development, and present status, and to discuss its salient features, limitations and applications.
REVIEW | doi:10.20944/preprints202105.0033.v1
Subject: Life Sciences, Molecular Biology Keywords: Campylobacter; Antimicrobial Resistance; Foodborne Pathogen; Animal Source
Online: 5 May 2021 (11:05:37 CEST)
Campylobacter is one of the major foodborne pathogens of concern owing to its growing trend of antimicrobial resistance. C. jejuni and C. coli are the major causative agents, with C. jejuni contributing approximately 90% of cases worldwide. Infection is transmitted to humans through consumption of contaminated food and water. Campylobacteriosis caused by C. jejuni commonly presents with severe diarrhoea, abdominal pain, fever, headache, nausea, and vomiting, with some extreme cases resulting in Guillain–Barré syndrome (GBS) and acute flaccid paralysis. Symptoms are severe in children below 5 years of age, the elderly, and immunocompromised individuals. The infection is usually sporadic and self-limiting and thus does not require antibiotic treatment. Still, antimicrobial resistance in Campylobacter is a major concern because of the transmission of resistance from animal sources to humans. This review highlights the recent epidemiology, geographical impact, resistance mechanisms, and spread of Campylobacter spp., and strategies to control the transmission of Campylobacter from veterinary sources and its antimicrobial resistance.
ARTICLE | doi:10.20944/preprints202102.0552.v1
Subject: Keywords: modified SIR model, epidemic, death and source.
Online: 24 February 2021 (15:59:09 CET)
The original purpose of this article was to modify the original SIR equations to allow for a direct source of infection (without which the original equations have no solutions unless one starts with an already infected population) and also to see to what extent one can obtain multiple outbreaks of an infectious disease. In the course of developing the basic ideas, several other factors arose to take prominent roles. Perhaps one of the more salient is that choosing a time to change conditions, say from a population lock-down to less stringent social behavior such as allowing partial or complete opening of businesses and schools, should be based on knowledge of the disease and its evolution. Such decisions are usually made by politicians who have less than full information concerning the consequences of their actions. Several examples are given to illustrate these points.
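The key modification described in the abstract, a direct source of infection that lets an outbreak start from a fully susceptible population, can be sketched numerically. The source term `eps * S` below is one plausible form chosen for illustration, not the authors' exact equations:

```python
def modified_sir(beta, gamma, eps, N, days, dt=0.1):
    """Forward-Euler integration of SIR plus a direct infection source.

    eps is a hypothetical constant external infection rate acting on
    susceptibles; with eps > 0 an outbreak can start even when I(0) = 0,
    which the plain SIR equations cannot produce.
    """
    S, I, R = float(N), 0.0, 0.0      # note: no initially infected individuals
    traj = [(S, I, R)]
    for _ in range(int(days / dt)):
        new_inf = (beta * S * I / N + eps * S) * dt
        new_rec = gamma * I * dt
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec
        traj.append((S, I, R))
    return traj

traj = modified_sir(beta=0.3, gamma=0.1, eps=1e-4, N=10_000, days=120)
peak_I = max(I for _, I, _ in traj)
```

Despite I(0) = 0, the source term seeds infections and a full outbreak follows; changing `beta` partway through such a run is one way to model the lock-down-relaxation decisions the abstract discusses.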
ARTICLE | doi:10.20944/preprints202007.0004.v1
Online: 2 July 2020 (12:52:53 CEST)
Malaria remains a life-threatening disease in many tropical countries. Honduras has successfully reduced malaria transmission through various control methods focused mainly on indoor mosquitoes. The selective pressure exerted by indoor insecticide use could modify the feeding behavior of mosquitoes, forcing them to search for available animal hosts outside the houses; such peridomestic animal hosts could consequently become an important factor in maintaining vector populations in endemic areas. Herein, we investigated the blood meal sources and Plasmodium spp. infection of anophelines collected outdoors in endemic areas of Honduras. Individual PCR reactions with species-specific primers were used to detect five feeding sources in 181 visibly engorged mosquitoes, and a subset of these mosquitoes was chosen for pathogen analysis by a nested PCR approach. Most mosquitoes fed on multiple hosts (2 to 4), while 24.9% fed on a single host, animal or human. Chicken and bovine were the most frequent blood meal sources (29.5% and 27.5%, respectively). The average human blood index (HBI) was 22.1%. None of the mosquitoes was found to be infected with Plasmodium spp. Our results show the opportunistic and zoophilic feeding behavior of Anopheles mosquitoes in Honduras.
ARTICLE | doi:10.20944/preprints202003.0460.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Wikipedia; reference; source; reliability; popularity; Wikidata; DBpedia
Online: 31 March 2020 (22:18:51 CEST)
One of the most important factors affecting the quality of content in Wikipedia is the presence of credible sources. By following references, readers can verify facts or find more details about the described topic. A Wikipedia article can be edited independently in any of over 300 languages, even by anonymous users, so information about the same topic may be inconsistent. This also applies to the use of references across different language versions of a particular article: the same statement can cite different sources. In this paper, we analyzed over 40 million articles from the 55 most developed language versions of Wikipedia to extract information about nearly 200 million references and to find the most popular and reliable sources. We present 10 models for assessing the popularity and reliability of sources, based on analysis of meta-information about the references in Wikipedia articles, page views, and article authors. Using DBpedia and Wikidata, we automatically identified the alignment of sources to specific domains. Additionally, we analyzed changes in popularity and reliability over time and identified growth leaders in each considered month. The results can be used to improve the quality of content in different language versions of Wikipedia.
ARTICLE | doi:10.20944/preprints201802.0009.v1
Subject: Keywords: Cache Coding, Source Coding, Absorbing Markov Chain
Online: 1 February 2018 (16:45:19 CET)
Network coding approaches typically allow unrestricted recoding of coded packets at relay nodes for increased performance. However, this can expose the system to pollution attacks that cannot be detected during transmission until the receivers attempt to recover the data. To prevent these attacks while retaining the benefits of coding in mesh networks, Cache Coding was proposed. This protocol only allows recoding at a relay when the relay has received enough packets to decode an entire generation. At that point, the relay recodes and signs the recoded packets with its own private key, allowing the system to detect and minimize the effect of pollution attacks and making relays accountable for changes to the data. This paper analyzes the delay performance of Cache Coding to understand the security-performance trade-off of the scheme. We introduce an analytical model for the case of two relays in an erasure channel, relying on an Absorbing Markov Chain, and an approximate model to estimate performance in terms of the number of transmissions before successful decoding at the receiver. We confirm our analysis using simulation results and show that Cache Coding can overcome the security issues of unrestricted recoding with only a moderate decrease in system performance.
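The absorbing-Markov-chain machinery behind such a delay analysis can be sketched generically: the expected number of transmissions before decoding is the expected time to absorption, obtained from the fundamental matrix. The states and transition probabilities below are hypothetical, not the paper's two-relay erasure model:

```python
import numpy as np

# Hypothetical chain: states 0 and 1 are transient (generation not yet
# decodable), state 2 is absorbing (receiver can decode the generation).
# Entries are illustrative per-transmission transition probabilities.
P = np.array([
    [0.2, 0.6, 0.2],
    [0.0, 0.5, 0.5],
    [0.0, 0.0, 1.0],
])

Q = P[:2, :2]                        # transient-to-transient block
F = np.linalg.inv(np.eye(2) - Q)     # fundamental matrix (I - Q)^-1
t = F @ np.ones(2)                   # expected transmissions to absorption
```

Here `t[i]` is the expected number of transmissions until decoding when starting from transient state `i`; for this toy chain, t = [2.75, 2.0].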
ARTICLE | doi:10.20944/preprints202002.0165.v1
Subject: Social Sciences, Library & Information Science Keywords: open access; API; self-archiving; automation
Online: 13 February 2020 (10:34:30 CET)
This article describes the design and development of an interoperable application that supports green open access with long-term sustainability and an improved user experience for article deposit. Introduction: The lack of library resources and unfriendly repository user interfaces are two significant barriers that hinder green open access. Tasked with implementing an open access mandate, librarians at an American research university developed a comprehensive system called Easy Deposit 2 to automate the support workflow for green open access. Implementation: Easy Deposit 2 is a web application that can harvest newly published articles, reach out to authors for manuscripts on behalf of the library, and facilitate self-archiving in the institutional repository (IR). It is developed and maintained by the library and integrated with the IR. Results and Discussion: The article deposit rate with Easy Deposit 2 is about 25%, a significant increase over the previous period. The system also serves as a local database of faculty publications with open access status. A lesson learned is that a library cannot rely on a single commercial provider for publication data, owing to mismatched priorities. Conclusion: Recent IT developments provide new opportunities for innovations like Easy Deposit 2 in supporting open access. Academic librarians are vital in promoting "openness" in scholarly communication, such as transparency and diversity in the sharing of publication data.
ARTICLE | doi:10.20944/preprints201905.0029.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: multilingual; open information extraction; parallel corpus
Online: 6 May 2019 (06:14:07 CEST)
The number of documents published on the Web in languages other than English grows every year. This increases the need to extract useful information from different languages and underlines the importance of research on Open Information Extraction (OIE) techniques. Most OIE methods deal with features of a single language; few approaches tackle multilingual aspects, and in those that do, multilinguality is treated only as an extraction method, which results in low precision due to the use of general rules. Multilingual methods have been applied to a vast number of problems in Natural Language Processing, achieving satisfactory results and demonstrating that knowledge acquired for one language can be transferred to other languages to improve the quality of the extracted facts. We argue that a multilingual approach can enhance OIE methods and is ideal for evaluating and comparing OIE systems and, consequently, the facts they collect. In this work, we discuss how transferring knowledge between languages can improve acquisition in multilingual approaches. We provide a roadmap of the multilingual Open IE area with respect to the state of the art, evaluate the transfer of knowledge for improving the quality of the facts extracted in each language, and discuss the importance of a parallel corpus for evaluating and comparing multilingual systems.
ARTICLE | doi:10.20944/preprints201809.0017.v1
Online: 3 September 2018 (09:39:24 CEST)
Universities, like cities, have embraced novel technologies and data-based solutions to improve their campuses, with ‘smart’ becoming a welcome concept. Campuses are in many ways small-scale cities: they increasingly seek to address similar challenges and to deliver improved experiences to their users. How can data be used in making this vision a reality? What can we learn from smart campuses that can be scaled up to smart cities? A short research study was conducted over a three-month period at a public university in the United Kingdom, employing stakeholder interviews and user surveys, to gain insight into these questions. Based on the study, the authors suggest that making data publicly available could bring many benefits to different groups of stakeholders and campus users. These benefits come with risks and challenges, such as data privacy and protection and infrastructure hurdles. However, if these challenges can be overcome, open data could contribute significantly to improving campuses and user experiences, and potentially set an example for smart cities.
ARTICLE | doi:10.20944/preprints201806.0243.v1
Online: 15 June 2018 (05:19:00 CEST)
This paper explores whether preprints can better support open science by providing links to other early-stage research outputs, which could benefit the transparency and discoverability of research projects. By examining preprint submission systems and online preprints, and by surveying those who run preprint servers, I assessed to what extent this is currently possible. No preprint server provided a complete service, but many allowed several open science elements to be linked from the abstract page. I also looked at variation by subject, age, and size of preprint server. In conclusion, authors posting preprints should consider the options provided by different preprint servers. Open science appears to be just one focus of preprint servers, and further improvements will depend on preprint server policies and priorities rather than on overcoming technical difficulties.
CASE REPORT | doi:10.20944/preprints201801.0066.v1
Online: 8 January 2018 (11:11:47 CET)
The implementation of the European Cohesion Policy, which aims at fostering regional competitiveness, economic growth, and the creation of new jobs, is documented for the period 2014–2020 in the publicly available Open Data Portal for the European Structural and Investment Funds. On the basis of this source, this paper describes the process of data mining and visualization for producing information on regional programmes' performance in achieving effective expenditure of resources.
ARTICLE | doi:10.20944/preprints202202.0120.v1
Subject: Earth Sciences, Atmospheric Science Keywords: Meycauayan; aerosols; source apportionment; principal component analysis; MMORS
Online: 8 February 2022 (14:55:34 CET)
This paper focuses on the application of principal component analysis (PCA) to conduct a source apportionment of atmospheric aerosols from 8 sampling locations along the Marilao-Meycauayan-Obando River System (MMORS). Aerosols were collected in May 2016, at the same time that water samples were collected. Elemental analysis was conducted using a scanning electron microscope coupled with energy-dispersive X-ray spectroscopy (SEM-EDX). Concentrations of carbon (C), nitrogen (N), oxygen (O), sodium (Na), magnesium (Mg), aluminum (Al), silicon (Si), sulfur (S), chlorine (Cl), potassium (K), calcium (Ca), titanium (Ti), manganese (Mn), iron (Fe), copper (Cu), zinc (Zn), bromine (Br), niobium (Nb), barium (Ba), mercury (Hg), and lead (Pb) were measured and used as inputs to the PCA. The aerosol samples showed the presence of the heavy metals Pb and Hg, elements that were also detected in trace amounts in the water measurements. Concentrations of the heavy metals Fe, Pb, and Hg in the aerosols were attributed to industrial sources. However, the primary sources of aerosols in the area were determined to be traffic and crustal emissions (C, N, O, Si, Al, Ca). Thus, control of traffic emissions would be more beneficial in reducing aerosol emissions in Meycauayan.
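The standardize-then-decompose workflow of PCA source apportionment can be sketched as follows. The data are synthetic (8 samples × 5 "elements"), standing in for the study's 21-element dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows are samples, columns are elemental
# concentrations (not the study's measurements).
X = rng.normal(loc=[10.0, 5.0, 2.0, 1.0, 0.5],
               scale=[3.0, 2.0, 1.0, 0.4, 0.2], size=(8, 5))

# Standardize each element (column) to zero mean and unit variance, so
# elements with large absolute concentrations do not dominate.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA via eigen-decomposition of the correlation matrix of Z.
corr = Z.T @ Z / Z.shape[0]
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]            # largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()          # variance fraction per component
loadings = eigvecs                           # element loadings per component
```

In source apportionment, components with high loadings on marker elements (e.g., Pb and Hg for industry, Si and Al for crustal material) are interpreted as emission sources.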
ARTICLE | doi:10.20944/preprints202107.0458.v2
Subject: Engineering, Electrical & Electronic Engineering Keywords: quantum dots; bias source; multi-channel; high precision
Online: 9 August 2021 (18:17:09 CEST)
To realize precise control of quantum dot (Qdot) devices, a multi-channel precision bias source plays a key role. In this paper, a 16-channel high-precision bias source with 18-bit resolution for Qdot devices was designed; a prototype was built and its performance tested. Short-term fluctuations are within 50 μV, and the step response time is less than 3 μs. The resolution, stability, linearity, and dynamic range of the bias source all exhibit good performance, and the source can be controlled both locally and online. The results show that this is an effective and feasible topology for experiments with Qdot devices.
ARTICLE | doi:10.20944/preprints202106.0317.v1
Subject: Biology, Anatomy & Morphology Keywords: Cannabis sativa; potency; ultraviolet; indoor; sole source; terpene
Online: 11 June 2021 (11:31:18 CEST)
It is commonly believed that exposing Cannabis sativa (cannabis) plants to ultraviolet (UV) radiation can enhance Δ9-tetrahydrocannabinol (Δ9-THC) concentrations in female inflorescences and associated foliar tissues. However, a lack of published scientific studies has left knowledge-gaps in the effects of UV on cannabis that must be elucidated before UV can be utilized as a horticultural management tool in commercial cannabis production. In this study, we investigated the effects of UV exposure level on photosynthesis, growth, inflorescence yield, and secondary metabolite composition of two indoor-grown cannabis cultivars: ‘Low Tide’ (LT) and ‘Breaking Wave’ (BW). After growing vegetatively for 2 weeks under a canopy-level photosynthetic photon flux density (PPFD) of ≈225 μmol·m–2·s–1 in an 18-h light/6-h dark photoperiod, plants were grown for 9 weeks in a 12-h light/12-h dark “flowering” photoperiod under a canopy-level PPFD of ≈400 µmol·m–2·s–1 and 3.5 h·d–1 of supplemental UV radiation with UV photon flux densities (UV-PFD) ranging from 0.01 to 0.8 μmol·m–2·s–1 provided by light-emitting diodes (LEDs) with a peak wavelength of 287 nm (i.e., biologically-effective UV doses of 0.16 to 13 kJ·m–2·d–1). The severity of UV-induced morphological symptoms (e.g., reductions in whole-plant and leaf size, leaf malformations, and stigma browning) and physiological symptoms (e.g., reduced leaf photosynthetic rate and reduced Fv/Fm) worsened as UV exposure level increased. While the proportion of dry inflorescence yield derived from apical tissues decreased in both cultivars with increasing UV exposure level, total dry inflorescence yield only decreased in LT. The equivalent Δ9-THC and cannabidiol (CBD) concentrations also decreased in LT inflorescences with increasing UV exposure level. While the total terpene content in inflorescences decreased with increasing UV exposure level in both cultivars, the relative concentrations of individual terpenes varied by cultivar.
The potential for using UV to enhance cannabis quality must still be confirmed before it can be used as a production tool for modern, indoor-grown cannabis cultivars.
ARTICLE | doi:10.20944/preprints202009.0501.v1
Subject: Medicine & Pharmacology, Pharmacology & Toxicology Keywords: human arsenic exposure; water source; risk factors; Thailand
Online: 21 September 2020 (11:32:03 CEST)
Three decades ago, human arsenic (As) contamination was recognized in Ron Phibun, a sub-district with tin mining activity in southern Thailand. Since then, different government bodies have attempted to mitigate the As-contamination problem by providing safe water to households. The most recent study, conducted during 2000-2002, reported that only a small fraction of the population still had high urinary As levels, and less attention has been paid to the issue since. The present study aimed to re-assess the current situation, including human As contamination and water use behavior, and to identify risk factors for elevated As concentrations among residents of Ron Phibun. A survey of 560 participants living in Ron Phibun, with urinary As assessment, was conducted. The median urinary As concentration of the participants was higher than normal. Consumption of shallow well water, a source generally considered As-contaminated, was higher than in a previous survey. A significant association was observed between urinary As concentrations and the water sources used for drinking and cooking. Gender and educational level were also associated with urinary As concentration, and significant associations between urinary As concentration and certain diseases (respiratory diseases, dermatitis, and dyslipidemia) were observed. The findings suggest further investigation of all water sources in the area for As contamination.
ARTICLE | doi:10.20944/preprints201806.0043.v1
Subject: Earth Sciences, Atmospheric Science Keywords: saccharides; biomass burning; haze; source apportionment; bio-aerosol
Online: 4 June 2018 (12:47:58 CEST)
The characteristics of biogenic aerosols in an urban area were explored by determining the composition and temporal distribution of saccharides in PM2.5 in Shanghai. Total saccharides spanned a wide range, from 15.2 ng/m3 to 1752.8 ng/m3, with average concentrations of 169.8 ng/m3, 300.5 ng/m3, 288.4 ng/m3, and 688 ng/m3 in spring, summer, autumn, and winter, respectively. The saccharides considered include anhydrosaccharides (levoglucosan and mannosan), which were higher in cold seasons due to increased biomass burning, and saccharide alcohols (mannitol, arabitol, sorbitol) and monosaccharides (fructose, glucose), which were more abundant in warm seasons, attributable to biological emissions. PMF analysis identified four emission sources of saccharides: biomass burning, fungal spores, soil suspension, and plant pollens. Backward-trajectory analysis combined with fire-point data revealed an episode of high levoglucosan concentrations. We found that concentrations of anhydrosaccharides remained relatively stable under different pollution levels while saccharide alcohols decreased markedly, indicating that biomass burning was not the core cause of the heavy haze pollution and that high PM2.5 levels might inhibit biological activity.
ARTICLE | doi:10.20944/preprints201703.0082.v1
Subject: Life Sciences, Microbiology Keywords: water; physical; chemical; microbiological; quality; household; stored; source
Online: 14 March 2017 (10:49:43 CET)
In this study, we evaluated the physicochemical and microbial qualities of source and stored household waters in some communities in Southwestern Nigeria using standard methods. The compared physicochemical parameters were temperature (T), pH, Total Dissolved Solids (TDS), Total Hardness (TH), Biological Oxygen Demand (BOD), magnesium ion (Mg2+), and calcium ion (Ca2+); the microbiological parameters were Total Coliform Counts (TC), Faecal Coliform Counts (FC), Fungal Counts (Fung C), and Heterotrophic Plate Counts (HPC). Comparing stored and source samples, the mean values of some physicochemical parameters of most of the stored water samples significantly (P<0.05) exceeded those of the sources, ranging as follows: T (15.3±0.3 °C - 28.3±0.5 °C), pH (6.4±0.1 - 7.6±0.1), TDS (192.1±11.1 ppm - 473.7±27.9 ppm), TH (10.6±1.7 mg/L - 248.6±18.6 mg/L), BOD (0.5±0.0 mg/L - 3.2±0.3 mg/L), Mg2+ (6.5±2.4 mg/L - 29.1±3.2 mg/L), and Ca2+ (6.5±2.4 mg/L - 51.6±4.4 mg/L). The mean microbial counts from the different points of collection (stored and source) showed that most of the stored water had counts significantly exceeding (P<0.05) those of the source water samples (cfu/100 mL), ranging as follows: TC (3.1±1.5 - 156.8±42.9), FC (0.0±0.0 - 64.3±14.2), and HPC (47.8±12.1 - 266.1±12.2) across all sampled communities. The predominant isolates recovered from the samples were identified as Escherichia coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, Enterobacter aerogenes, Aspergillus spp., Mucor spp., Rhizopus spp., and Candida spp. The presence of these pathogenic and potentially pathogenic organisms and the high counts of indicator organisms suggest that these waters pose a threat to public health.
ARTICLE | doi:10.20944/preprints201612.0086.v1
Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: sensory preconditioning; source memory; spatial learning; episodic memory
Online: 16 December 2016 (08:28:24 CET)
Loss of function of the hippocampus or frontal cortex is associated with reduced performance on memory tasks in which subjects are incidentally exposed to cues at specific places in the environment and are subsequently asked to recollect the location at which the cue was experienced. Here, we examined the roles of the rodent hippocampus and frontal cortex in cue-directed attention during encoding of memory for the location of a single incidentally experienced cue. During a spatial sensory preconditioning task, rats explored an elevated platform while an auditory cue was incidentally presented at one corner. The opposite corner acted as an unpaired control location. The rats demonstrated recollection of location by avoiding the paired corner after the auditory cue was in turn paired with shock. Damage to either the dorsal hippocampus or the frontal cortex impaired this memory ability. However, we also found that hippocampal lesions enhanced attention directed towards the cue during the encoding phase while frontal cortical lesions reduced cue-directed attention. These results suggest that the deficit in spatial sensory preconditioning caused by frontal cortical damage may be mediated by inattention to the location of cues during the latent encoding phase, while deficits following hippocampal damage must be related to other mechanisms such as generation of neural plasticity.
REVIEW | doi:10.20944/preprints202110.0390.v1
Subject: Life Sciences, Biotechnology Keywords: Biological contaminants; grazers; microalgae; open cultivation; biopesticides
Online: 26 October 2021 (14:36:19 CEST)
Microalgal biomass is a promising raw material for the production of food, fuel, and other value-added products. However, bulk production of microalgal biomass at commercial scale remains a herculean task for current mass production technologies because of contamination by biological pollutants. These contaminants hamstring biomass production by inhibiting culture growth and degrading biomass quality, and may sometimes crash the whole culture. The best industrial use of microalgal biomass can be attained by preventing the various possible biological contaminations in mass cultivation systems and by understanding the contamination mechanisms and the complex interactions of algae with other microorganisms. This review explores the various types of biological pollutants, their possible modes of infection and underlying mechanisms, and the different control methods available to maintain the desired microalgae culture.
ARTICLE | doi:10.20944/preprints202106.0648.v1
Subject: Social Sciences, Accounting Keywords: open access; article processing charges; monitoring systems
Online: 28 June 2021 (12:33:05 CEST)
The Open Access (OA) publishing model that is based on article processing charges (APC) is often associated with the potential for more transparency regarding the expenditures for publications. However, the extent to which transparency can be achieved depends not least on the completeness of data in APC monitoring systems. This article investigates two blind spots of the largest collection of APC payment information, OpenAPC. It aims to identify likely APC-liable publications for German universities that contribute to this system and for those that do not provide data to it. The calculation combines data from Web of Science, the ISSN-Gold-OA list and OpenAPC. The results show that, for the group of universities contributing to the monitoring system, more than half of the APC payments are not covered by it, and the average payment for non-covered APCs is higher than for APCs covered by the system. In addition, the group of universities that does not contribute to OpenAPC accounts for two thirds of the number of APC-liable publications recorded for contributing universities. Given the size of these blind spots, the value of the monitoring system is limited at present.
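The blind-spot estimation described above can be sketched as a simple set comparison. The DOIs and coverage figures below are hypothetical; the study's actual matching combines Web of Science, the ISSN-Gold-OA list and OpenAPC data:

```python
# Hypothetical DOI sets for one university: gold-OA articles found in
# Web of Science vs. APC payments that university reported to OpenAPC.
wos_gold_oa = {"10.1/a", "10.1/b", "10.1/c", "10.1/d"}
openapc_reported = {"10.1/a", "10.1/c"}

blind_spot = wos_gold_oa - openapc_reported   # likely APC-liable, not covered
coverage = len(openapc_reported & wos_gold_oa) / len(wos_gold_oa)
print(sorted(blind_spot), coverage)
```

Here half the likely APC-liable output is missing from the monitoring system, mirroring the "more than half not covered" finding in spirit.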
ARTICLE | doi:10.20944/preprints202003.0443.v2
Subject: Social Sciences, Library & Information Science Keywords: COVID-19; open science; data; bibliometric; pandemic
Online: 22 April 2020 (06:15:34 CEST)
Introduction: The COVID-19 pandemic, caused by the infectious agent SARS-CoV-2, motivated the scientific community to work together to gather, organize, process and distribute data on the novel biomedical hazard. Here, we analyzed how the scientific community responded to this challenge by quantifying distribution and availability patterns of the academic information related to COVID-19. The aim of our study was to assess the quality of the information flow and scientific collaboration, two factors we believe to be critical for finding new solutions for the ongoing pandemic. Materials and methods: The RISmed R package and a custom Python script were used to fetch metadata on articles indexed in PubMed and published on the Rxiv preprint server. Scopus was manually searched and the metadata was exported as a BibTeX file. Publication rate and publication status, affiliation and author count per article, and submission-to-publication time were analysed in R. The Biblioshiny application was used to create a world collaboration map. Results: Our preliminary data suggest that the COVID-19 pandemic resulted in the generation of a large amount of scientific data and demonstrate potential problems regarding information velocity, availability, and scientific collaboration in the early stages of the pandemic. More specifically, our results indicate a precarious overload of the standard publication systems, significant problems with data availability and apparently deficient collaboration. Conclusion: We believe the scientific community could have used the data more efficiently in order to create proper foundations for finding new solutions for the COVID-19 pandemic. Moreover, we believe we can learn from this as we go and adopt open science principles and a more mindful approach to COVID-19-related data to accelerate the discovery of more efficient solutions.
We take this opportunity to invite our colleagues to contribute to this global scientific collaboration by publishing their findings with maximal transparency.
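Of the metrics analysed above, submission-to-publication time is the most mechanical to compute. A stdlib sketch over hypothetical received/published date pairs:

```python
from datetime import date

# Hypothetical (received, published) dates extracted from article metadata
records = [
    ("2020-02-03", "2020-02-10"),
    ("2020-02-15", "2020-03-20"),
    ("2020-03-01", "2020-03-08"),
]
# Lag in days for each article, then the median across the corpus
lags = [(date.fromisoformat(p) - date.fromisoformat(r)).days for r, p in records]
median_lag = sorted(lags)[len(lags) // 2]
print(lags, median_lag)
```

The same per-article lag computed over the real PubMed/Scopus metadata is what reveals the publication-system overload the authors describe.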
ARTICLE | doi:10.20944/preprints201912.0312.v1
Subject: Engineering, Civil Engineering Keywords: thermal comfort; draught; cooling period; open office
Online: 24 December 2019 (08:42:03 CET)
Local thermal comfort (TC) and draught rate (DR) have been studied widely, though more research has been performed under controlled boundary conditions than in actual work environments involving occupants. TC conditions in office buildings in Estonia have barely been investigated in the past. In this paper, the results of TC and DR assessments in five office buildings in Tallinn are presented and discussed. The studied office landscapes vary in their heating, ventilation and cooling (HVAC) system parameters, room units and elements. All sample buildings were less than six years old and equipped with a dedicated outdoor-air ventilation system and room conditioning units. The on-site measurements consisted of TC and DR assessment together with an indoor climate questionnaire (ICQ). The purpose of the survey is to assess the correspondence between HVAC design and the actual situation. The results show whether, and to what extent, the standard-based TC criteria suit the actual usage by the occupants. Preferring one room conditioning unit type or system may not guarantee a better thermal environment without draught. Although some HVAC systems observed in this study should create the prerequisites for greater comfort, the results show that this is not the case for all buildings in this study.
ARTICLE | doi:10.20944/preprints201808.0326.v2
Subject: Social Sciences, Library & Information Science Keywords: business plan; publishing; academic libraries; open access
Online: 27 September 2018 (04:27:08 CEST)
Over the last twenty years, library publishing has emerged in higher education as a new class of publisher. Conceived as a response to commercial publishing practices that have strained library budgets and prevented scholars from openly licensing and sharing their works, library publishing is both a local service program and a broader movement to disrupt the current scholarly publishing arena. It is growing both in numbers of publishers and numbers of works produced. The commercial publishing framework which determines the viability of monetizing a product is not necessarily applicable for library publishers who exist as a common good to address the needs of their academic communities. Like any business venture, however, library publishers must develop a clear service model and business plan in order to create shared expectations for funding streams, quality markers, as well as technical and staff capacity. As the field is maturing from experimental projects to full programs, library publishers are formalizing their offerings and limitations. The anatomy of a library publishing business plan is presented and includes the principles of the program, scope of services, and staffing and governance requirements. Other aspects include production policies, financial structures, and measures of success.
ARTICLE | doi:10.20944/preprints201808.0492.v1
Subject: Social Sciences, Library & Information Science Keywords: bibliometrics; publication statistics; open Access; citation impact
Online: 29 August 2018 (10:32:21 CEST)
Based on the total scholarly article output of Norway, we investigated the coverage and degree of openness according to three bibliographic services: 1) Google Scholar, 2) oaDOI by Impact Story and 3) 1findr by 1science. According to Google Scholar, more than 70% of all Norwegian articles are openly available. However, the degrees are profoundly lower according to oaDOI and 1findr, at 31% and 52% respectively. The varying degrees are mainly caused by different interpretations of openness, with oaDOI being the most restrictive. Furthermore, open shares vary considerably by discipline, with Medicine and Health Sciences at the upper end and the Humanities at the lower end. We also determined citation frequencies using Cited-by values from Google Scholar, applying year and subject normalization. We find a significant citation advantage for open articles. However, this is not the case for all types of openness. In fact, the category of Open Access journals was by far the lowest cited, indicating that young journals with a declared open access policy still lack recognition.
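The year and subject normalization mentioned above amounts to dividing each article's citation count by the mean count of its field-year reference set. A minimal sketch over hypothetical records:

```python
from collections import defaultdict

# Hypothetical records: (field, year, citations). An article's normalized
# score is its citation count divided by the mean count of its field-year set.
records = [
    ("medicine", 2015, 40), ("medicine", 2015, 10),
    ("humanities", 2015, 4), ("humanities", 2015, 2),
]

totals = defaultdict(lambda: [0, 0])      # (field, year) -> [sum, count]
for field, year, cites in records:
    totals[(field, year)][0] += cites
    totals[(field, year)][1] += 1

def normalized(field, year, cites):
    s, n = totals[(field, year)]
    return cites / (s / n)

print(normalized("medicine", 2015, 40), normalized("humanities", 2015, 4))
```

Both example articles score above 1.0, i.e. above their field-year average, even though their raw counts differ by a factor of ten; this is what makes cross-discipline comparison meaningful.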
ARTICLE | doi:10.20944/preprints201803.0067.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: open microcontrolled platform; data acquisition; remote measurement
Online: 8 March 2018 (15:21:13 CET)
Commercial temperature-measurement equipment has a high cost. This article therefore describes the development of low-cost temperature-measurement equipment based on a microcontrolled platform, which manages the data from the collected temperature signals and makes the acquired information available for verification in real time, either at the measurement site or remotely. The equipment was built with open-platform hardware and software, and performance tests were carried out with the objective of obtaining a temperature-measurement device that combines measurement quality with low cost.
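The abstract does not specify the sensor, so as an illustration assume an NTC thermistor (10 kΩ nominal, Beta = 3950, both hypothetical values) on the low side of a voltage divider read by a 10-bit ADC, as is typical for Arduino-class open platforms. The Beta-equation conversion from raw reading to temperature is:

```python
import math

def thermistor_celsius(adc, adc_max=1023, r_fixed=10_000.0,
                       r0=10_000.0, t0=298.15, beta=3950.0):
    """Convert a raw ADC reading from a thermistor voltage divider to °C
    using the Beta equation (assumed 10 kΩ NTC on the low side, Beta = 3950)."""
    r_therm = r_fixed * adc / (adc_max - adc)        # divider ratio -> resistance
    inv_t = 1.0 / t0 + math.log(r_therm / r0) / beta # 1/T in Kelvin
    return 1.0 / inv_t - 273.15

print(round(thermistor_celsius(512), 1))
```

A mid-scale reading lands near the 25 °C calibration point, and higher readings map to lower temperatures, as expected for an NTC device.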
ARTICLE | doi:10.20944/preprints201705.0045.v1
Online: 5 May 2017 (05:29:10 CEST)
Dump design and scheduling are critical elements of effective mine planning, especially if several dumps are required in large-scale open pit mines. Infrastructure capital and transportation costs are considerable from an early stage in the mining project and throughout the life-of-mine as these dumps gradually become immense structures. Delivered mining rates, as well as certain spatial and physical constraints, provide a set of parameters with mathematical and economic relationships that creates opportunities for modelling and thus facilitates measuring and optimizing the ultimate dump design using programming and empirical techniques while achieving economic objectives. This paper presents a methodology to model and optimize the design of a mine dump by minimizing the total haulage costs. The proposed methodology consists of: (i) formulating a dump model based on a system of equations relying on multiple relevant parameters; (ii) solving it by minimizing the total cost using linear programming to determine a 'preliminary' dump design; (iii) modifying the 'preliminary' footprint through a series of iterations, projecting it onto the topography to create the ultimate dump design. Finally, an example application for a waste rock dump illustrates this methodology.
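As a toy instance of the cost-minimization step: when each candidate dump has a single unit haulage cost and a capacity limit (figures below are hypothetical, far simpler than the paper's system of equations), the linear program's optimum is reached by filling dumps in order of unit cost, which this sketch exploits:

```python
def allocate(total_tonnes, dumps):
    """Assign waste tonnage to dumps in order of unit haulage cost,
    respecting capacities. dumps: list of (name, capacity_t, cost_per_t).
    Returns (plan, total_cost); optimal for this single-cost LP structure."""
    plan, cost = {}, 0.0
    for name, cap, unit_cost in sorted(dumps, key=lambda d: d[2]):
        take = min(cap, total_tonnes)
        if take > 0:
            plan[name] = take
            cost += take * unit_cost
            total_tonnes -= take
    return plan, cost

plan, cost = allocate(1_000_000, [("north", 600_000, 1.2),
                                  ("south", 800_000, 0.9)])
print(plan, round(cost))
```

The cheaper south dump is filled to capacity first and only the remainder goes north; a general LP solver is needed once costs depend on lift height, footprint geometry, or schedule, as in the paper.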
ARTICLE | doi:10.20944/preprints202108.0018.v1
Subject: Physical Sciences, Radiation & Radiography Keywords: deep reinforcement learning; source search and localization; active search; gamma radiation; source parameter estimation; sequential decision making; non-convex environment
Online: 2 August 2021 (11:14:24 CEST)
Rapid search and localization of nuclear sources can be an important aspect of preventing human harm from illicit material in dirty bombs or from contamination. In the case of a single mobile radiation detector, there are numerous challenges to overcome, such as weak source intensity, multiple sources, background radiation, and the presence of obstructions, i.e., a non-convex environment. In this work, we investigate the sequential decision-making capability of deep reinforcement learning in the nuclear source search context. A novel neural network architecture (RAD-A2C) based on the advantage actor critic (A2C) framework and a particle filter gated recurrent unit for localization is proposed. Performance is studied in randomized 20 × 20 m convex and non-convex environments across a range of signal-to-noise ratios (SNRs) for a single detector and single source. RAD-A2C performance is compared to both an information-driven controller that uses a bootstrap particle filter and a gradient search (GS) algorithm. We find that RAD-A2C has performance comparable to the information-driven controller across SNRs in a convex environment, at lower computational complexity per action. RAD-A2C far outperforms the GS algorithm in the non-convex environment, with a greater than 95% median completion rate for up to seven obstructions.
ARTICLE | doi:10.20944/preprints201906.0294.v1
Subject: Mathematics & Computer Science, Geometry & Topology Keywords: is*-open set; is*-continuous; is*-open; is*-irresolute; is*-totally continuous; is-contra-continuous mappings; is*-separation
Online: 28 June 2019 (11:45:03 CEST)
In this paper, we introduce a new class of open sets called is*-open sets. We also present the notions of is*-continuous, is*-open, is*-irresolute, is*-totally continuous, and is-contra-continuous mappings, and we investigate some properties of these mappings. Furthermore, we introduce some is*-separation axioms and relate the is*-mappings to the is*-separation axioms.
ARTICLE | doi:10.20944/preprints202207.0005.v1
Subject: Engineering, Energy & Fuel Technology Keywords: Maroon gas; synthetic natural gas; Pink Hydrogen; hydrogen source-water; hydrogen source-hydrocarbon; energy efficient hydrogen; thermodynamic simulations; FACTSAGE; DWSIM
Online: 1 July 2022 (07:55:58 CEST)
This paper describes a novel concept for producing energy-efficient "Maroon enriched natural gas" and then "Pink hydrogen" from any hydrocarbon base. The key idea is the extraction of hydrogen from water, in addition to that from the hydrocarbon, in an optimal fashion. This gives a higher water vapor to CO2 exhaust ratio than conventional carbonaceous fuels when generating energy via combustion, a prudent step towards achieving Net-zero goals in a shorter time and creating energy independence in most places. The production process makes concentrated CO2 available for use and/or sequestration. The process also maximizes the use of renewable electricity in hydrogen generation and the use of existing infrastructure, keeping capital cost to a minimum by recycling energy within the process. The process design applies sound thermodynamic principles that evolved during the nineteenth century and mimics the geochemical processes at work in some natural occurrences of 'colorless hydrogen'.
ARTICLE | doi:10.20944/preprints202206.0018.v1
Subject: Physical Sciences, Optics Keywords: Path Entanglement; Non-heralded; Bright Entangled Source; Entanglement Purification
Online: 1 June 2022 (11:33:33 CEST)
This paper discusses a means of making an extremely bright path-entangled source. An initial laser source is preferred, but any source of light can be used: LED, sub-critical laser, coherent or thermal. The light is dimmed by a beam expander until the relative number of |1> or |2> photon states increases compared to higher photon-number states. The expanded beam is then passed through a 1:1 beamsplitter to generate path entanglement on the |1> and |2> photons. A further stage of "purification" can remove the non-entangled higher states by passing the output beams from the beamsplitter through one another, such that the correlated entangled-photon electric fields cancel in some region. In that region, the uncorrelated non-entangled fields can be Faraday rotated and then absorbed by a polariser. The entangled photons pass through the region without rotation or attenuation. The output from the device then contains copious quantities of one- and two-photon path-entangled states suitable for use in telecommunications engineering, secure transmission of data and quantum metrology. The wide beams can be contracted to a thin bright beam while keeping the path entanglement of individual photons, as photons are bosons and so do not interact; furthermore, all operations are unitary and linear, in accordance with Maxwell's equations.
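The dimming step above rests on photon-number statistics. For coherent (laser) light the photon number is Poissonian, an assumption made for this sketch; thermal light would follow a different distribution. As the mean photon number μ falls, the |1> and |2> states come to dominate the higher states:

```python
import math

def poisson_p(n, mu):
    """P(n photons) for a coherent state with mean photon number mu."""
    return math.exp(-mu) * mu ** n / math.factorial(n)

ratios = []
for mu in (1.0, 0.1, 0.01):
    low = poisson_p(1, mu) + poisson_p(2, mu)   # |1> and |2> states
    high = 1.0 - poisson_p(0, mu) - low         # all states |3> and above
    ratios.append(low / high)
print([round(r, 1) for r in ratios])
```

The ratio of usable low-photon-number states to unwanted higher states grows rapidly as the beam is dimmed, at the price of an ever larger vacuum (|0>) fraction.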
ARTICLE | doi:10.20944/preprints202103.0649.v1
Subject: Biology, Agricultural Sciences & Agronomy Keywords: wheat; micronutrient; macronutrient; source-sink regulation; biofortification; phytate; bioavailability
Online: 25 March 2021 (17:17:10 CET)
In order to better understand the source-sink flow and relationships of zinc (Zn) and other nutrients in wheat (Triticum aestivum L.) plants for biofortification and for improving grain nutritional quality, the effects of reducing the photoassimilate source (through flag leaf removal and spike shading) or sink (through removal of 50% of the spikelets) in the field on the accumulation of Zn and other nutrients in the grains of two wheat cultivars (Jimai 22 and Jimai 44) were investigated under two soil Zn application levels. The single panicle weight (SPW), kernel number per spike (KNPS), thousand kernel weight (TKW), total grain weight (TGW), concentrations and yields of various nutrient elements (Zn, Fe, Mn, Cu, N, P, K, Ca and Mg), phytate phosphorus (phytate-P), phytic acid (PA) and phytohormones (ABA: abscisic acid, and the ethylene precursor ACC: 1-aminocyclopropane-1-carboxylic acid), and C/N ratios were determined. Soil Zn application significantly increased the concentrations of grain Zn, N and K. Cultivars showing higher grain yields had lower grain protein and micronutrient nutritional quality. SPW, KNPS, TKW (with the exception of TKW under half-spikelet removal), TGW, and nutrient yields in wheat grains were most severely reduced by half-spikelet removal, followed by spike shading, and only slightly by flag leaf removal. Grain concentrations of Zn, N and Mg consistently showed negative correlations with SPW, KNPS and TGW, but positive correlations with TKW. There were general positive correlations among grain concentrations of Zn, Fe, Mn, Cu, N and Mg, and the bioavailability of Zn and Fe (estimated by the molar ratios PA/Zn, PA × Ca/Zn, PA/Fe, and PA × Ca/Fe). Although the concentrations of Zn and Fe increased and that of Ca decreased under half-spikelet removal and spike shading, the simultaneously increased PA limited the gain in bioavailability of Zn and Fe. In general, the different nutrient elements interact with each other and are affected to different degrees by source-sink manipulations.
Elevated endogenous ABA levels and ABA/ACC ratios were associated with increased TKW and with the grain filling of Zn, Mn, Ca and Mg, and with inhibited K accumulation in wheat grains; the effects of ACC were diametrically opposite. These results provide a basis for wheat grain biofortification to alleviate human malnutrition.
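The bioavailability estimators named above are molar ratios. A minimal sketch for PA/Zn with hypothetical grain concentrations (a PA/Zn molar ratio above roughly 15 is commonly read as indicating low Zn bioavailability):

```python
PA_MOLAR_MASS = 660.04   # g/mol, phytic acid
ZN_MOLAR_MASS = 65.38    # g/mol, zinc

def pa_zn_molar_ratio(pa_mg, zn_mg):
    """PA/Zn molar ratio from masses in the same units (e.g. mg/100 g grain)."""
    return (pa_mg / PA_MOLAR_MASS) / (zn_mg / ZN_MOLAR_MASS)

# Hypothetical grain concentrations, mg per 100 g
print(round(pa_zn_molar_ratio(800.0, 3.0), 1))
```

This illustrates the study's point: even when Zn concentration rises, a simultaneous rise in PA can hold the molar ratio above the ~15 threshold and so limit the bioavailability gain.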
ARTICLE | doi:10.20944/preprints202103.0352.v1
Subject: Physical Sciences, Acoustics Keywords: remote sensing; spectroscopy; blind source separation; unsupervised clustering; insects
Online: 12 March 2021 (20:16:55 CET)
In-situ characterization of flying insects using remote sensing spectroscopy is an emerging research field. Most analysis techniques in remote sensing spectroscopy are based on an intensity threshold, which introduces indeterminacies in the number of detected specimens. In this manuscript, we investigated the possibility of analysing passive remote sensing spectroscopy measurement data using the maximum noise fraction method. The results obtained show that this analysis technique can help mitigate background noise in spectroscopic measurements.
ARTICLE | doi:10.20944/preprints202005.0159.v1
Subject: Social Sciences, Other Keywords: awareness; livestock farmer; ICT-source; market information; rural; smallholder
Online: 9 May 2020 (08:46:42 CEST)
The utility of ICTs for providing market information to rural smallholder farmers is growing rapidly, and access to reliable information and sources is considered crucial for beneficial market interaction. This study explored critical factors contributing to the use of electronic sources for market information search among rural smallholder livestock farmers. Using data collected from 129 respondents through a non-random sampling technique, descriptive and regression analyses were applied to identify key factors responsible for their awareness and use of ICT-based market information sources. The level of education was found to be a driver of awareness of ICT-based sources, and the use of these sources was influenced by farmer-specific characteristics such as household size, education, income, membership of cooperatives and herd size. The key ICT tools used were radio and mobile phones, both widely available in the study area. Identified constraints on the use of these ICTs include cost and patchy network signals in some areas. Policy interventions to reduce the cost of mobile phone services and to expand base stations, together with practical recommendations for improved programming in radio and television offerings, are considered indispensable for greater uptake of e-information sources among smallholder livestock farmers.
ARTICLE | doi:10.20944/preprints201806.0041.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: PV; MPRVS; quasi Z-source inverter; MPP; SEPIC converter
Online: 4 June 2018 (12:19:02 CEST)
This research work deals with the modeling and control of a hybrid photovoltaic (PV)-wind micro-grid using a quasi Z-source inverter. As its major benefits over a conventional inverter, this inverter provides better buck/boost characteristics, is able to regulate the output phase angle, produces less harmonic content, requires no filter, and has high power-performance characteristics. A SEPIC converter, as a DC-DC switched power stage, is employed for the maximum power point tracking (MPPT) function and provides high voltage gain throughout the process. Moreover, a modified power ratio variable step (MPRVS) based perturb & observe (P&O) method is proposed for the PV MPPT action, which forces the operating point close to the maximum power point (MPP). Practical responses justify the performance of the hybrid PV-wind micro-grid with the quasi Z-source inverter structure.
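The P&O tracker named above can be sketched on a toy PV power curve (a hypothetical single-peak model, not the paper's plant). The paper's MPRVS variant additionally scales the perturbation step by a power-ratio term, which is omitted here; this is the plain fixed-step baseline it modifies:

```python
def pv_power(v):
    """Toy PV curve: current falls with voltage squared, I = 5 - 0.08*V^2."""
    return max(0.0, v * (5.0 - 0.08 * v * v))

def p_and_o(v=1.0, step=0.05, iters=400):
    """Fixed-step perturb & observe: keep stepping while power rises,
    reverse direction whenever power falls."""
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = p_and_o()
print(round(v_mpp, 2))
```

The tracker settles into a small oscillation around the analytic MPP of this curve (dP/dV = 0 at V = sqrt(5/0.24) ≈ 4.56 V); a variable-step scheme such as MPRVS shrinks that residual oscillation.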
ARTICLE | doi:10.20944/preprints201809.0535.v1
Subject: Earth Sciences, Environmental Sciences Keywords: ozone; greater metropolitan region of Sydney; source contribution; source attribution; air quality model; Cubic Conformal Atmospheric Model (CCAM); Chemical Transport Model (CTM)
Online: 27 September 2018 (06:17:54 CEST)
Ozone and fine particles (PM2.5) are the two main air pollutants of concern in the New South Wales Greater Metropolitan Region (NSW GMR) due to their contribution to poor air quality days in the region. This paper focuses on source contributions to ambient ozone concentrations for different parts of the NSW GMR, based on source emissions across the greater Sydney region. The observation-based Integrated Empirical Rate model (IER) was applied to delineate the different regions within the GMR based on the photochemical smog profile of each region. Ozone source contributions were then modelled using the CCAM-CTM (Cubic Conformal Atmospheric Model-Chemical Transport Model) modelling system and the latest air emission inventory for the greater Sydney region. Source contributions to ozone varied between regions, and also varied depending on the air quality metric applied (e.g., average or maximum ozone). Biogenic volatile organic compound (VOC) emissions were found to contribute significantly to median and maximum ozone concentrations in North West Sydney during summer. After commercial and domestic sources, power stations were found to be the next largest anthropogenic contributor to maximum ozone concentrations in North West Sydney. In South West Sydney, however, besides commercial and domestic sources, on-road vehicles were predicted to be the most significant contributor to maximum ozone levels, followed by biogenic sources and power stations. The results provide information with which policy makers can devise options to control ozone levels in different parts of the NSW Greater Metropolitan Region.