REVIEW | doi:10.20944/preprints202302.0281.v1
Subject: Computer Science And Mathematics, Software Keywords: precision agriculture; open source software; open source technologies
Online: 16 February 2023 (09:11:51 CET)
Agricultural production needs technologies that assist the management of natural resources, for example, the collection of real-time data on soil, water, weather, crop, and biodiversity conditions. Sensor technology solutions and open-source software are well suited to promoting more sustainable agricultural production. Among the advantages of open-source technologies and software are their potential for extension, collaboration, customization, and flexibility, as well as lower maintenance costs, transparency, speed of development, and better security. Given the above, the objective of this research was to find, across different electronic databases, exclusively open-source software for precision agriculture, offering a systematic review and addressing considerations and challenges. This survey considers up-to-date open-source software available in repositories such as GitHub and GitLab, in order to understand its characteristics and application formats.
ARTICLE | doi:10.20944/preprints202310.1681.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: open hardware; open source hardware; open source electronics; automation; Arduino; pyrolysis; data acquisition; controls; monitoring
Online: 26 October 2023 (10:16:54 CEST)
Industrial pilot projects often rely on proprietary and expensive electronic hardware to control and monitor experiments, which raises costs and retards innovation. Open-source hardware tools exist for implementing these processes individually; however, they are not easily integrated with other designs. The Broadly Reconfigurable and Expandable Automation Device (BREAD) framework provides many open-source devices that can be connected to create more complex data acquisition and control systems. This article explores the feasibility of using BREAD plug-and-play open hardware to quickly design and test monitoring and control electronics for an industrial materials processing prototype pyrolysis reactor. Pilot-scale pyrolysis plants are generally expensive, custom-designed systems. The plug-and-play prototype approach was first tested by connecting it to the pyrolysis reactor and verifying that it could measure temperature and actuate heaters and a stirring motor. Next, a single circuit board system was created and tested using the designs from the BREAD prototype to reduce the number of microcontrollers required. Both open-source control systems were capable of reliably running the pyrolysis reactor continuously, and the overall cost of control was reduced by more than a factor of ten. Open-source, plug-and-play hardware provides a reliable avenue for researchers to quickly develop data acquisition and control electronics for industrial-scale experiments.
REVIEW | doi:10.20944/preprints202004.0054.v1
Subject: Engineering, Industrial And Manufacturing Engineering Keywords: pandemic; influenza pandemic; open source; open hardware; COVID-19; COVID-19 pandemic; medical hardware; open source medicine
Online: 6 April 2020 (12:38:59 CEST)
Distributed digital manufacturing offers a solution to medical supply and technology shortages during pandemics. To prepare for the next pandemic, this study reviews the state of the art in open hardware designs needed in a COVID-19-like pandemic. It evaluates the readiness of the top twenty technologies requested by the Government of India. The results show that the majority of the actual medical products have had some open-source development; however, only 15% of the supporting technologies that make an open-source device possible are freely available. There is thus still considerable work needed to provide open-source paths for the development of all the medical hardware needed during pandemics. Five core areas of future work are discussed: i) technical development of a wide range of open-source solutions for all medical supplies and devices, ii) policies that protect the productivity of laboratories, makerspaces and fabrication facilities during a pandemic, iii) streamlining of the regulatory process, iv) development of Good-Samaritan laws to protect makers and designers of open medical hardware, as well as to compel those with knowledge that will save lives to share it, and v) a requirement that all citizen-funded research be released under free and open-source licenses.
ARTICLE | doi:10.20944/preprints202004.0472.v1
Subject: Chemistry And Materials Science, Analytical Chemistry Keywords: 3-D printing; additive manufacturing; distributed manufacturing; laboratory equipment; open hardware; open source; open source hardware; scale; balance; mass
Online: 27 April 2020 (02:59:34 CEST)
This study provides designs for a low-cost, easily replicable open-source lab-grade digital scale that can be used as a precision balance. The design can be manufactured for use in most labs throughout the world using open-source RepRap-class material extrusion-based 3-D printers for the mechanical components and readily available open-source electronics, including the Arduino Nano. Several versions of the design were fabricated and tested for precision and accuracy over a range of load cells. The open-source scale was found to be repeatable within 0.1 g with multiple load cells, with even better precision (0.01 g) depending on load cell range and style. The scale tracks linearly with proprietary lab-grade scales, meeting the performance specified in the load cell data sheets and indicating that it is accurate across the range of the installed load cell. The smallest load cell tested (100 g) offers precision on the order of a commercial digital mass balance. The scale can be produced at significant cost savings compared with scales of comparable range and precision, especially when serial communication capability is required. The cost savings increase significantly as the range of the scale increases, making the design particularly well suited to resource-constrained medical and scientific facilities.
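At the heart of such a scale is the linear calibration mapping raw load-cell ADC counts to mass. A minimal sketch of the usual two-point procedure (tare plus one known reference mass) follows; the function name and readings are hypothetical, not taken from the published firmware:

```python
# Hypothetical sketch of two-point load-cell calibration; values illustrative.

def make_calibration(raw_tare, raw_ref, ref_mass_g):
    """Return a function mapping raw ADC counts to grams.

    raw_tare   -- ADC reading with an empty platform
    raw_ref    -- ADC reading with a known reference mass applied
    ref_mass_g -- reference mass in grams
    """
    scale = ref_mass_g / (raw_ref - raw_tare)  # grams per ADC count
    def to_grams(raw):
        return (raw - raw_tare) * scale
    return to_grams

to_grams = make_calibration(raw_tare=8_400, raw_ref=92_400, ref_mass_g=100.0)
print(round(to_grams(50_400), 1))  # reading halfway between tare and reference -> 50.0
```

Because the load cells report linearly over their rated range, this single scale factor is what lets the printed scale track proprietary balances across the whole range of the installed cell.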
ARTICLE | doi:10.20944/preprints202311.0780.v1
Subject: Social Sciences, Library And Information Sciences Keywords: LMS; ICT; LSP; FOLIO; OPEN SOURCE
Online: 13 November 2023 (10:44:13 CET)
Over time, the library and information profession has adapted to and embraced new technologies to enhance its practices. The emergence of computer and communication technology has significantly transformed the delivery of library services. One early example of this is the utilization of Library Management Systems (LMS) during the computerization of libraries, which integrated Information and Communication Technology (ICT) into library operations. However, as libraries began to adopt electronic and digital materials, it became apparent that existing LMSs were insufficient to handle the growing array of resources. To manage all these resources effectively, a new generation of library systems, known as library services platforms (LSPs), evolved. Several LSPs have gained traction, transforming how libraries operate. This article provides an overview of LSPs, explores various LSP options, and discusses in detail FOLIO (Future of Libraries is Open), which stands as the only open-source library services platform available at present.
ARTICLE | doi:10.20944/preprints202302.0056.v1
Subject: Biology And Life Sciences, Biochemistry And Molecular Biology Keywords: metabolomics; untargeted; mass-spectrometry; open-source; bioinformatics
Online: 3 February 2023 (04:16:00 CET)
Untargeted metabolomics is a powerful tool for measuring and understanding complex biological chemistries. However, the use, bioinformatics and downstream analysis of mass spectrometry (MS) data can be daunting for inexperienced users. Numerous open-source and free-to-use data processing and analysis tools exist for various untargeted MS approaches, including liquid chromatography (LC), but choosing the ‘correct’ pipeline is not straightforward. This tutorial, in conjunction with a user-friendly online guide, presents a workflow for connecting these tools to process, analyse and annotate various untargeted MS datasets. The workflow is intended to guide exploratory analysis in order to inform decision-making regarding costly and time-consuming downstream targeted MS approaches. We provide practical advice concerning experimental design, organisation of data and downstream analysis, and offer details on sharing and storing valuable MS data for posterity. The workflow is editable and modular, allowing flexibility for updated or changing methodologies and increased clarity and detail as user participation becomes more common. Hence, the authors welcome contributions and improvements to the workflow via the online repository. We believe that this workflow will streamline and condense complex mass-spectrometry approaches into easier, more manageable analyses, thereby generating opportunities for researchers previously discouraged by inaccessible and overly complicated software.
ARTICLE | doi:10.20944/preprints202101.0082.v2
Subject: Environmental And Earth Sciences, Atmospheric Science And Meteorology Keywords: Shoreline Evolution; Open-Source Software; GIS; Modeling
Online: 19 February 2021 (09:46:48 CET)
This paper presents the validation of the End Point Rate (EPR) tool for QGIS (EPR4Q), a tool built in the QGIS Graphical Modeler to calculate shoreline change by the End Point Rate method. EPR4Q tries to fill the gap left by the lack of a user-friendly, free and open-source tool for shoreline analysis in a Geographic Information System environment, since the most widely used software, the Digital Shoreline Analysis System (DSAS), although a free extension, runs on commercial software. The best free and open-source option for calculating EPR, Analyzing Moving Boundaries Using R (AMBUR), is a robust and powerful tool, but its complexity and heavy processing can restrict accessibility and simple usage. The validation methodology consists of applying EPR4Q, DSAS, and AMBUR to different examples of shorelines found in nature, extracted from the U.S. Geological Survey Open-File. The results obtained with each tool were compared using the Pearson correlation coefficient. The validation results indicate that the EPR4Q tool achieves high correlation values with DSAS and AMBUR, reaching coefficients of 0.98 to 1.00 on linear, extensive, and non-extensive shorelines, confirming that the EPR4Q tool is ready to be freely used by academic, scientific, engineering, and coastal management communities worldwide.
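The End Point Rate itself reduces to the net shoreline movement along a transect divided by the time elapsed between the oldest and youngest shorelines. A minimal sketch with illustrative transect values (not code from EPR4Q):

```python
from datetime import date

def end_point_rate(dist_old_m, dist_young_m, date_old, date_young):
    """End Point Rate: net shoreline movement divided by elapsed time (m/yr).

    Distances are measured along a transect from a fixed baseline to each
    shoreline; a positive rate indicates seaward movement over the period.
    """
    net_movement = dist_young_m - dist_old_m
    years = (date_young - date_old).days / 365.25
    return net_movement / years

# Illustrative transect: the shoreline retreated 18 m over ~30 years.
rate = end_point_rate(120.0, 102.0, date(1990, 6, 1), date(2020, 6, 1))
print(round(rate, 2))  # negative rate = landward retreat
```

Tools like DSAS, AMBUR and EPR4Q automate exactly this computation over many transects cast perpendicular to a baseline, which is why their per-transect rates can be compared directly with a Pearson correlation.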
ARTICLE | doi:10.20944/preprints202301.0228.v1
Subject: Environmental And Earth Sciences, Atmospheric Science And Meteorology Keywords: open source; climate indices; emissions scenarios; climate projections
Online: 12 January 2023 (13:49:09 CET)
The paper presents the open-source tool climdex-kit, which includes utilities to compute, analyze and visualize climate indices based on the input data, target domain and temporal extent defined by the user. It is intended to support researchers as well as practitioners and decision makers in deriving, handling and interpreting meaningful information for climate change studies and sectoral applications. It currently includes the computation of 30 indices based on temperature and precipitation, describing both mean and extreme climate conditions, and it is designed to work with climate model projections. The tool is written in Python and integrates utilities from the well-established Climate Data Operators (CDO) and NetCDF Operators (NCO) libraries. The specific utilities for selecting, aggregating and visualizing data are designed to help users produce tailored results, improving the understanding and communication of future climate change. To show the functionalities of the package as well as its potential integration in regional climate services, climdex-kit was applied to an ensemble of downscaled climate model projections up to 2100 for the region Trentino – South Tyrol (north-eastern Italian Alps). The projections for a selection of indices accounting for extreme temperature and precipitation conditions were derived, and different visualization choices are discussed. The package is developed in a way that allows users to implement additional routines for calculating other indices, as well as to easily adapt the routines to handle different data types and spatio-temporal targets defined by the specific application. Effective climate services can in fact be developed only if flexible tools and customizable climate information are integrated with a clear understanding of data features and limitations.
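As a sketch of the kind of index such a tool computes, the standard ETCCDI "summer days" (SU) index simply counts the days in a period whose daily maximum temperature exceeds 25 °C. The function and synthetic data below are illustrative, not climdex-kit code:

```python
def summer_days(daily_tmax_c, threshold_c=25.0):
    """SU index: count of days with daily maximum temperature above the
    threshold (25 °C in the standard ETCCDI definition)."""
    return sum(1 for t in daily_tmax_c if t > threshold_c)

# A short run of synthetic daily maxima (°C); real input spans whole years.
tmax = [18.0, 24.9, 25.1, 27.3, 30.0, 22.5, 26.0]
print(summer_days(tmax))  # -> 4
```

In practice such indices are evaluated per grid cell and per year over NetCDF fields, which is where the CDO/NCO utilities the package wraps come in.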
ARTICLE | doi:10.20944/preprints202212.0018.v1
Subject: Engineering, Control And Systems Engineering Keywords: airborne wind energy; optimal control; open-source software
Online: 1 December 2022 (08:54:28 CET)
In this paper we present AWEbox, a Python toolbox for modeling and optimal control of multi-aircraft systems for airborne wind energy (AWE). AWEbox provides an implementation of optimization-friendly multi-aircraft AWE dynamics for a wide range of system architectures and modeling options. It automatically formulates typical AWE optimal control problems based on these models, and finds a numerical solution in a reliable and efficient fashion. To obtain a high level of reliability and efficiency, the toolbox implements different homotopy methods for initial guess refinement. The first type of method produces a feasible initial guess from an analytic initial guess based on user-provided parameters. The second type implements a warm-start procedure for parametric sweeps. We investigate the software performance in two different case studies. In the first case study, we solve a single-aircraft reference problem for a large number of different initial guesses. The homotopy methods reduce the expected computation time by a factor of 1.7 and the peak computation time by a factor of 8, compared to when no homotopy is applied. Overall, the CPU timings are competitive with timings reported in the literature. When the user initialization draws on a priori expert knowledge, homotopies do not increase expected performance, but the peak CPU time is still reduced by a factor of 5.5. In the second case study, a power curve for a dual-aircraft lift-mode AWE system is computed using the two different homotopy types for initial guess refinement. On average, the second homotopy type, which is tailored for parametric sweeps, outperforms the first type in terms of CPU time by a factor of 3. In conclusion, AWEbox provides an open-source implementation of efficient and reliable optimal control methods from which both control experts and non-expert AWE developers can benefit.
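The underlying homotopy idea, deforming an easy problem into the target problem and warm-starting each solve from the previous solution, can be illustrated on a scalar root-finding problem. This is a toy example of the general technique, not AWEbox code:

```python
# Toy homotopy continuation: blend an easy residual f0 into the target f1 via
# H(x, t) = (1 - t) * f0(x) + t * f1(x), tracking the root as t goes 0 -> 1.

def newton(f, df, x, tol=1e-10):
    """Plain Newton iteration on a scalar residual."""
    for _ in range(50):
        x -= f(x) / df(x)
        if abs(f(x)) < tol:
            break
    return x

f1 = lambda x: x**3 - 2*x - 5      # target problem (root near 2.0946)
df1 = lambda x: 3*x**2 - 2
f0 = lambda x: x - 2.0             # easy problem with a known solution
df0 = lambda x: 1.0

x = 2.0  # exact solution of the easy problem
for k in range(1, 11):
    t = k / 10
    H = lambda x, t=t: (1 - t) * f0(x) + t * f1(x)
    dH = lambda x, t=t: (1 - t) * df0(x) + t * df1(x)
    x = newton(H, dH, x)  # warm-start from the previous step's solution

print(abs(f1(x)) < 1e-8)  # the tracked root solves the target problem
```

AWEbox applies the same principle to large nonlinear optimal control problems, where a cold-started solver may fail or be slow for poor initial guesses.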
REVIEW | doi:10.20944/preprints202105.0352.v1
Subject: Biology And Life Sciences, Biochemistry And Molecular Biology Keywords: 3d printing; microscopy; open-source; optics; super-resolution
Online: 14 May 2021 (16:10:24 CEST)
The maker movement has reached the optics labs, empowering researchers to actively create and modify microscope designs and imaging accessories. 3D printing in particular has had a disruptive impact on the field, as it brings an accessible new fabrication technology, namely additive manufacturing, making prototyping in the lab available at low cost. Examples of this trend take advantage of the ready availability of 3D printing technology: inexpensive microscopes for education, such as the FlyPi, have been designed, and the highly complex robotic microscope OpenFlexure represents a clear push towards the democratisation of this technology. 3D printing facilitates new and powerful approaches to science and promotes collaboration between researchers, as 3D designs are easily shared. This offers the unique possibility of extending the open-access concept from knowledge to technology, allowing researchers everywhere to use and extend model structures. Here we present a review of additive manufacturing applications in microscopy, guiding the user through this new and exciting technology and providing a starting point for anyone willing to employ this versatile and powerful new tool.
ARTICLE | doi:10.20944/preprints202102.0513.v1
Subject: Environmental And Earth Sciences, Atmospheric Science And Meteorology Keywords: Sea-Level Rise; GIS; Open-Source Software; Modeling
Online: 23 February 2021 (12:39:09 CET)
Sea-level rise is a problem increasingly affecting coastal areas worldwide. The existence of free and open-source models to estimate sea-level rise impact can contribute to better coastal management. This study aims to develop and validate two different models to predict sea-level rise impact, supported by Google Earth Engine (GEE), a cloud-based platform for planetary-scale environmental data analysis. The first model is a Bathtub Model based on the uncertainty of projections of the Sea-level Rise Impact Module of the TerrSet Geospatial Monitoring and Modeling System software. The validation process, performed on the Rio Grande do Sul coastal plain (S Brazil), resulted in correlations from 0.75 to 1.00. The second model implements the Bruun Rule formula in GEE and can determine the coastline retreat of a profile through the creation of a simple vector line from topo-bathymetric data. The model shows a very high correlation (0.97) with a classical Bruun Rule study performed on the Aveiro coast (NW Portugal). The GEE platform appears to be an important tool for coastal management. The models developed have been openly shared, enabling continuous improvement of the code by the scientific community.
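The Bruun Rule referenced above estimates shoreline retreat as R = S·L/(B + h), from the sea-level rise S, the cross-shore length of the active profile L, the berm height B and the closure depth h. A minimal sketch with illustrative profile values (not the GEE implementation):

```python
def bruun_retreat(sea_level_rise_m, profile_length_m, berm_height_m,
                  closure_depth_m):
    """Bruun Rule: shoreline retreat R = S * L / (B + h), where S is the
    sea-level rise, L the active-profile cross-shore length, B the berm
    height and h the closure depth (all in metres)."""
    return (sea_level_rise_m * profile_length_m
            / (berm_height_m + closure_depth_m))

# Illustrative profile: 0.5 m of rise over a 1000 m active profile.
print(round(bruun_retreat(0.5, 1000.0, 2.0, 8.0), 1))  # -> 50.0
```

The GIS work in the paper lies in deriving L, B and h per profile from the topo-bathymetric vector line, not in the formula itself, which is the simple ratio above.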
ARTICLE | doi:10.20944/preprints202102.0421.v1
Subject: Environmental And Earth Sciences, Atmospheric Science And Meteorology Keywords: Sea-Level Rise; GIS; Open-Source Software; Modeling
Online: 18 February 2021 (13:52:49 CET)
Sea-level rise is a problem increasingly affecting coastal areas worldwide. The existence of free and open-source models to estimate sea-level rise impact can contribute to better coastal management. This study aims to develop and validate two different models to predict sea-level rise impact, supported by Google Earth Engine (GEE), a cloud-based platform for planetary-scale environmental data analysis. The first model is a Bathtub Model based on the uncertainty of projections of the Sea-level Rise Impact Module of the TerrSet Geospatial Monitoring and Modeling System software. The validation process, performed on the Rio Grande do Sul coastal plain (S Brazil), resulted in correlations from 0.75 to 1.00. The second model implements the Bruun Rule formula in GEE and can determine the coastline retreat of a profile through the creation of a simple vector line from topo-bathymetric data. The model shows a very high correlation (0.97) with a classical Bruun Rule study performed on the Aveiro coast (NW Portugal). The GEE platform appears to be an important tool for coastal management. The models developed have been openly shared, enabling continuous improvement of the code by the scientific community.
REVIEW | doi:10.20944/preprints202003.0362.v1
Subject: Biology And Life Sciences, Virology Keywords: Free and Open Source Hardware; COVID-19; pandemic
Online: 24 March 2020 (14:46:29 CET)
With the current rapid spread of COVID-19, global health systems are increasingly overburdened by the sheer number of people who need diagnosis, isolation and treatment. Shortcomings are evident across the board, from staffing and facilities for rapid and reliable testing to the availability of hospital beds and key medical-grade equipment. The scale and breadth of the problem calls for an equally substantive response, not only from frontline workers such as medical staff and scientists, but also from skilled members of the public who have the time, facilities and knowledge to meaningfully contribute to a consolidated global response. Here, we summarise community-driven approaches based on Free and Open Source scientific and medical Hardware (FOSH) currently being developed and deployed to bolster access to personal protective equipment (PPE), patient treatment and diagnostics.
ARTICLE | doi:10.20944/preprints201805.0470.v1
Subject: Environmental And Earth Sciences, Environmental Science Keywords: remote sensing; python; data management; landsat; open-source
Online: 31 May 2018 (11:12:27 CEST)
Many remote sensing analytical data products are most useful when they are in an appropriate regional or national projection, rather than globally based projections like Universal Transverse Mercator (UTM) or geographic coordinates, i.e., latitude and longitude. Furthermore, leaving data in the global systems can create problems, either due to misprojection of imagery at UTM zone boundaries, or because those projections are not optimised for local use. We developed the open-source Irish Earth Observation (IEO) Python module to maintain a local remote sensing data library for Ireland. This pure Python module, in conjunction with the IEOtools Python scripts, uses the Geospatial Data Abstraction Library (GDAL) for its geoprocessing functionality. At present, the module supports only Landsat TM/ETM+/OLI/TIRS data that have been corrected to surface reflectance using the USGS/ESPA LEDAPS/LaSRC Collection 1 architecture. The module and IEOtools catalogue available Landsat data from the USGS/EROS archive, and include functions for importing imagery into a defined local projection and calculating cloud-free vegetation indices. While the module is distributed with default values and data for Ireland, it can be adapted for other regions with simple modifications to the configuration files and geospatial data sets.
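As an example of the vegetation-index step such a library performs, the standard NDVI is a simple normalized band ratio applied per pixel. The function below is a generic illustration of that computation, not the IEO module's API:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from surface-reflectance bands:
    NDVI = (NIR - Red) / (NIR + Red), ranging over [-1, 1], with higher
    values for denser, healthier vegetation."""
    return (nir - red) / (nir + red)

# Healthy vegetation reflects strongly in the near-infrared and weakly in red.
print(round(ndvi(0.45, 0.09), 2))
```

In a real pipeline this runs over whole reprojected raster bands (e.g., as NumPy arrays read via GDAL), with cloud-masked pixels excluded first.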
TECHNICAL NOTE | doi:10.20944/preprints201804.0047.v1
Subject: Biology And Life Sciences, Biochemistry And Molecular Biology Keywords: BLAST; DNA; open source; phylogenetics; R; sequence orthology
Online: 4 April 2018 (06:00:40 CEST)
The exceptional increase in molecular DNA sequence data in open repositories is mirrored by an ever-growing interest among evolutionary biologists to harvest and use those data for phylogenetic inference. Many quality issues, however, are known and the sheer amount and complexity of data available can pose considerable barriers to their usefulness. A key issue in this domain is the high frequency of sequence mislabelling encountered when searching for suitable sequences for phylogenetic analysis. These issues include the incorrect identification of sequenced species, non-standardised and ambiguous sequence annotation, and the inadvertent addition of paralogous sequences by users, among others. Taken together, these issues likely add considerable noise, error or bias to phylogenetic inference, a risk that is likely to increase with the size of phylogenies or the molecular datasets used to generate them. Here we present a software package, phylotaR, that bypasses the above issues by using instead an alignment search tool to identify orthologous sequences. Our package builds on the framework of its predecessor, PhyLoTa, by providing a modular pipeline for identifying overlapping sequence clusters using up-to-date GenBank data and providing new features, improvements and tools. We demonstrate our pipeline’s effectiveness by presenting trees generated from phylotaR clusters for two large taxonomic clades: palms and primates. Given the versatility of this package, we hope that it will become a standard tool for any research aiming to use GenBank data for phylogenetic analysis.
ARTICLE | doi:10.20944/preprints201711.0181.v3
Subject: Engineering, Industrial And Manufacturing Engineering Keywords: 3D printing; open source; RepRap; calibration; bed levelling
Online: 12 January 2018 (07:35:31 CET)
Inexpensive piezoelectric diaphragms can be used as sensors to facilitate both nozzle height setting and build platform levelling in FFF (Fused Filament Fabrication) 3D printers. Tests simulating nozzle contact were conducted to establish the available output; an output greater than 8 V was found at 20 °C, a value readily detectable by simple electronic circuits. Tests were also conducted at a temperature of 80 °C and, despite a reduction of more than 80% in output voltage, the signal is still detectable. The reliability of piezoelectric diaphragms was investigated by mechanically stressing samples over 100,000 cycles at both 20 °C and 80 °C, with little loss of output found over the test duration. The development of a nozzle contact sensor using a single piezoelectric diaphragm is described.
ARTICLE | doi:10.20944/preprints201702.0055.v1
Subject: Engineering, Energy And Fuel Technology Keywords: open source; energy system modelling; energy system optimization
Online: 15 February 2017 (11:20:31 CET)
The process of modelling energy systems is accompanied by challenges inherently connected with mathematical modelling. However, given the realities of the 21st century, existing challenges are gaining in magnitude and are supplemented by new ones. Modellers are confronted with rising complexity of energy systems and high uncertainties on different levels. In addition, interdisciplinary modelling is necessary for gaining insight into the mechanisms of an integrated world. At the same time, models need to meet scientific standards, as public acceptance becomes increasingly important. In this intricate environment, model application as well as result communication and interpretation are also becoming more difficult. In this paper we present the open energy modelling framework (oemof) as a novel approach to energy system modelling and derive its contribution to existing challenges. To this end, based on a literature review, we outline challenges for energy system modelling as well as existing and emerging approaches. Based on a description of the philosophy and elementary structural elements of oemof, a qualitative analysis of the framework with regard to these challenges is undertaken. Inherent features of oemof such as its open-source, open-data, non-proprietary and collaborative modelling approach are preconditions for meeting the modern realities of energy modelling. Additionally, a generic basis with an object-oriented implementation makes it possible to tackle challenges related to the complexity of highly integrated future energy systems and sets the foundation for addressing uncertainty in the future. Experiences from the collaborative modelling approach can enrich interdisciplinary modelling activities. Our analysis concludes that there are remaining challenges that can be tackled neither by a model nor by a modelling framework; among these are problems connected to result communication and interpretation.
ARTICLE | doi:10.20944/preprints202005.0479.v1
Subject: Engineering, Mechanical Engineering Keywords: open source; open hardware; COVID-19; medical hardware; RepRap; 3-D printing; open source medical hardware; high temperature 3-D printing; additive manufacturing; ULTEM; polycarbonate
Online: 31 May 2020 (16:18:20 CEST)
Thermal sterilization is generally avoided for 3-D printed components because of the relatively low deformation temperatures of the common thermoplastics used for material extrusion-based additive manufacturing. Printing the materials required for high-temperature heat-sterilizable components for COVID-19 and other applications demands 3-D printers with heated beds, hot ends that can reach higher temperatures than polytetrafluoroethylene (PTFE) hot ends, and heated chambers to avoid part warping and delamination. There are several high-temperature printers on the market, but their high costs make them inaccessible for the fully home-based distributed manufacturing required during pandemic lockdowns. To allow all these requirements to be met for under $1,000, the Cerberus, an open-source three-headed self-replicating rapid prototyper (RepRap), was designed and tested with the following capabilities: i) 200 °C-capable heated bed, ii) 500 °C-capable hot end, iii) isolated heated chamber with a 1 kW space heater core, and iv) mains-voltage chamber and bed heating for rapid start-up. The Cerberus successfully prints polyetherketoneketone (PEKK) and polyetherimide (PEI, ULTEM) with tensile strengths of 77.5 and 80.5 MPa, respectively. As a case study, open-source face masks were 3-D printed in PEKK and shown not to warp upon widely home-accessible oven-based sterilization.
ARTICLE | doi:10.20944/preprints201708.0069.v1
Subject: Computer Science And Mathematics, Software Keywords: energy system analysis; model challenges; open science; open source; energy modelling framework; oemof
Online: 21 August 2017 (03:02:34 CEST)
The research field of energy system analysis is dealing with increasingly complex energy systems and their respective challenges. Moreover, the requirement for open science has become a focal point of public interest. Both drivers have triggered the development of a broad range of (open) energy models and frameworks in recent years. However, there are hardly any approaches for evaluating these tools in terms of their capabilities to tackle energy system modelling challenges. This paper describes a first step towards a flexible evaluation of software for modelling energy systems. We propose a qualitative approach as a useful supplement to existing model fact sheets and transparency checklists. We demonstrate its applicability by evaluating the newly developed “Open Energy Modelling Framework” with respect to existing challenges in energy system modelling. The case study results highlight that challenges related to complexity and scientific standards can be tackled to a large extent, while the challenges of model utilization and interdisciplinary modelling are only partially addressed. The challenge of uncertainty, however, remains for the most part unaddressed at present. Advantages of the evaluation approach lie in its simplicity, flexibility and transferability to other tools. Disadvantages mostly stem from its qualitative nature. Our analysis reveals that some challenges in the field of energy system modelling cannot be addressed by software, as they lie at a meta-level, such as model result communication and interdisciplinary modelling.
ARTICLE | doi:10.20944/preprints202211.0093.v1
Subject: Computer Science And Mathematics, Information Systems Keywords: automatic typesetting; media-neutral publishing; open access; open source; scholarly publishing; XML/HTML conversion
Online: 4 November 2022 (13:17:34 CET)
Due to resource constraints, most Diamond Open Access journals publish fewer than 25 articles per year, and 75% of journals are not able to provide their content in XML and HTML, primarily providing only PDFs (Bosman et al., 2021, pp. 7-8). In order to keep up with larger commercial publishers, a high degree of automation and streamlining of processes is necessary. The Open Source Academic Publishing Suite (OS-APS) project, funded by the German Federal Ministry of Education and Research, aims to achieve this. OS-APS automatically extracts the underlying XML from Word manuscripts and offers optimization and export options in various formats (PDF, HTML, EPUB). The professional corporate design, e.g. of the PDFs, is managed automatically by using templates or by creating one's own with a Template Development Kit. OS-APS will also connect to scholarly-led and community-driven publishing platforms such as Open Journal Systems (OJS), Open Monograph Press (OMP), and DSpace: the software will be able to be integrated into a wide range of publication processes, whether at small, low-resource commercial Open Access publishers or at institutional and Diamond Open Access publishers. References: Bosman, J., Frantsvåg, J. E., Kramer, B., Langlais, P.-C., & Proudman, V. (2021). OA Diamond Journals Study. Part 1: Findings. https://doi.org/10.5281/zenodo.4558703
REVIEW | doi:10.20944/preprints202307.2105.v1
Subject: Engineering, Industrial And Manufacturing Engineering Keywords: 3-D printing; additive manufacturing; innovation; intellectual monopoly; intellectual property; open innovation; open hardware; open source; patent; RepRap
Online: 31 July 2023 (10:51:27 CEST)
Open-source 3-D printing has played a pivotal role in revolutionizing the additive manufacturing (AM) landscape, by making distributed manufacturing economic, democratizing access, and fostering far more rapid innovation than antiquated proprietary systems. Unfortunately, some 3-D printing manufacturing companies began deviating from open-source principles and violating licenses to the detriment of the community. To determine if a pattern has emerged of companies patenting clearly open-source innovations, this study presents three case studies from the three primary regions of open-source 3-D printing development (EU, U.S. and China) as well as three aspects of 3-D printing technology (AM materials, an open-source 3-D printer, and core open-source 3-D printing concepts used in most 3-D printers). The results of this review show that non-inventing entities, called patent parasites, are patenting open-source inventions already well established in the open-source community and, in the most egregious cases, commercialized by one (or several) firms at the time of the patent filing. Patent parasites are able to patent open-source innovations by using different language, vague patent titles, and broad claims that encompass enormous swaths of widely diffused open-source innovation space. This practice poses a severe threat to innovation, and several approaches to eradicate the threat are discussed.
ARTICLE | doi:10.20944/preprints202309.0372.v1
Subject: Computer Science And Mathematics, Other Keywords: Open-source; Raspberry Pi; computer simulation; frugal; nonlinear control
Online: 6 September 2023 (10:21:06 CEST)
Commonly, professors, students and researchers from universities around the world use software distributed under a license agreement for computer simulation purposes, which requires a computer with considerable hardware capabilities. Consequently, this implies a high cost to conduct simulations that require implementing numerical methods in all areas of engineering, particularly in the field of robotics and nonlinear control. This paper presents the design and results analysis of a low-cost and open-source frugal computer simulation tool with applications to robotic nonlinear control, for instance, for numerical simulations of manipulator robot control based on dynamic models. Using a single-board minicomputer with reduced computing power, the Raspberry Pi, together with free software, GNU Octave, trajectory tracking control simulations in the joint space of a Selective Compliance Assembly Robot Arm (SCARA) are achieved by solving a system of nonlinear differential equations, represented in matrix form, which includes the control law and the system model. The results of the proposed alternative are compared by running the same simulation code on a laptop computer using MATLAB and GNU Octave, showing minimal deviations and reasonable time complexity. Moreover, considering the frugality curve calculated for this approach, in addition to the low acquisition cost of the simulation software tool, it would allow the creation of a simulation laboratory in universities with budgetary constraints for educational and research purposes.
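The matrix-form "control law plus system model" integration described above can be sketched for a single joint. This is an illustrative 1-DOF stand-in, not the paper's SCARA model: the mass `m`, damping `b`, gains, and trajectory are all assumed values.

```python
import math

def simulate_joint_tracking(q0, qd_fun, kp, kd, dt=1e-3, t_end=2.0,
                            m=1.0, b=0.1):
    # Hypothetical 1-DOF link m*q'' + b*q' = tau, driven by a
    # computed-torque control law and integrated with Euler steps.
    q, dq, t = q0, 0.0, 0.0
    errs = []
    while t < t_end:
        qd, dqd, ddqd = qd_fun(t)
        e, de = qd - q, dqd - dq
        tau = m * (ddqd + kp * e + kd * de) + b * dq  # control law
        ddq = (tau - b * dq) / m                      # system model
        dq += ddq * dt                                # Euler integration
        q += dq * dt
        errs.append(abs(e))
        t += dt
    return q, max(errs[-100:])

def qd_fun(t):
    # Desired joint trajectory and its first two derivatives.
    return math.sin(t), math.cos(t), -math.sin(t)

q_final, tail_err = simulate_joint_tracking(0.0, qd_fun, kp=100.0, kd=20.0)
```

With exact model knowledge, the tracking error obeys e'' + kd·e' + kp·e = 0 and decays quickly; the same loop runs unchanged under GNU Octave-style or MATLAB-style numerics.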
ARTICLE | doi:10.20944/preprints202308.0896.v1
Subject: Environmental And Earth Sciences, Water Science And Technology Keywords: river monitoring; image-based; discharge estimation; open-source software
Online: 11 August 2023 (07:54:42 CEST)
River monitoring has the potential to grow substantially with present-day available affordable and locally sourced hardware. Yet river monitoring networks are still under pressure due to lack of resources, difficulties with maintenance, and rapidly changing conditions, which in part may be due to climate change. We advocate that, to turn around this trend, monitoring stations must rely on local people, work with locally available devices, work without contact with water, and operate through openly available knowledge. River observations with camera videos have this potential. IP cameras, drones and smartphones are widely available as observation platforms, and the scientific methods are well established in the literature. Yet scalable open-source software solutions that can be operated anywhere are still lacking. In fact, currently available software is research-oriented, aimed at incidental observations, restricted to single-use licenses, entirely proprietary, or operable only through a third-party Software as a Service. To overcome this obstacle, we present OpenRiverCam, a free and open-source software ecosystem for river observations that can fulfill a wide variety of use cases and business cases through a well-documented and simple-to-use Application Programming Interface, service workflows, cloud scalability and interoperability, and options to extend to several applications within the hydrological, hydrodynamic, environmental and geospatial domains. We demonstrate its current technical abilities through three different case studies that all originate from a user perspective. We discuss the future developments to meet further requirements, which include documentation, widely available training materials and embedding in curricula, and further hardware and software developments.
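The core of image-based discharge estimation is well established: surface velocities from video are scaled to depth-averaged velocities and integrated over the cross-section. A minimal sketch of that velocity-index idea follows; the function name, inputs, and the alpha value are illustrative assumptions, not OpenRiverCam's actual API.

```python
def estimate_discharge(widths, depths, surface_velocities, alpha=0.85):
    # Velocity-index method: depth-averaged velocity is approximated
    # as alpha * surface velocity, and discharge is summed over
    # vertical slices of the measured cross-section.
    q = 0.0
    for w, d, vs in zip(widths, depths, surface_velocities):
        q += w * d * alpha * vs  # slice area * mean slice velocity
    return q

# Three 2 m wide slices with camera-derived surface velocities.
Q = estimate_discharge([2.0, 2.0, 2.0], [0.8, 1.2, 0.9], [0.9, 1.1, 1.0])
```

The commonly cited alpha of about 0.85 is itself an empirical assumption that varies with flow conditions.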
ARTICLE | doi:10.20944/preprints202010.0107.v1
Subject: Biology And Life Sciences, Biochemistry And Molecular Biology Keywords: Bioprinting; microextrusion; tissue engineering; bioink; open-source; stem cells
Online: 6 October 2020 (08:24:54 CEST)
Three-dimensional (3D) bioprinting promises to be essential in tissue engineering (TE) for solving the rising demand for organs and tissues. Some bioprinters are commercially available, but their impact on the field of TE is still limited due to their cost or difficulty of tuning. Herein, we present a low-cost, easy-to-build printhead for microextrusion-based bioprinting (MEBB) that can be installed in many desktop 3D printers to transform them into 3D bioprinters. We can extrude bioinks with precise control of print temperature between 2–60 °C. We validated the versatility of the printhead by assembling it in three low-cost open-source desktop 3D printers. Multiple units of the printhead can also be easily put together in a single printer carriage for building a multi-material 3D bioprinter. Print resolution was evaluated by creating representative calibration models at different temperatures using natural hydrogels such as gelatin and alginate, and synthetic ones like poloxamer. Using one of the three modified low-cost 3D printers, we successfully printed cell-laden lattice constructs with cell viabilities higher than 90% at 24 h post-printing. Controlling temperature and pressure according to the rheological properties of the bioinks was essential in achieving optimal printability and high cell viability. The cost per unit of our device, which can be used with syringes of different volumes, is less expensive than any other commercially available product. These data demonstrate an affordable open-source printhead with the potential to become a reliable alternative to commercial bioprinters for any laboratory.
ARTICLE | doi:10.20944/preprints201904.0207.v1
Subject: Medicine And Pharmacology, Other Keywords: 3-D printing; additive manufacturing; biomedical equipment; biomedical engineering; centrifuge; design; distributed manufacturing; laboratory equipment; open hardware; open source; open source hardware; medical equipment; medical instrumentation; scientific instrumentation
Online: 18 April 2019 (08:03:58 CEST)
Centrifuges are commonly required devices in medical diagnostics facilities as well as scientific laboratories. Although there are commercial and open source centrifuges, the costs of the former and the electricity required to operate the latter limit accessibility in resource-constrained settings. There is a need for a low-cost, human-powered, verified and reliable lab-scale centrifuge. This study provides the designs for a low-cost 100% 3-D printed centrifuge, which can be fabricated on any low-cost RepRap-class fused filament fabrication (FFF) or fused particle fabrication (FPF)-based 3-D printer. In addition, validation procedures are provided using a web camera and free and open source software. This paper provides the complete open source plans, including instructions for fabrication and operation, for a hand-powered centrifuge. This study successfully tested and validated the instrument, which can be operated anywhere in the world with no electricity inputs, obtaining a rotational speed of over 1750 rpm and over 50 N of relative centrifugal force. Using commercial filament, the instrument costs about US$25, which is less than half of all commercially available systems; however, the costs can be dropped further using recycled plastics on open source systems for over 99% savings. The results are discussed in the contexts of resource-constrained medical and scientific facilities.
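The rotational speed reported above maps onto centrifugal force through the standard RCF conversion, where RCF is expressed in multiples of g. The rotor radius below is an assumed example value, not a dimension from the paper's design.

```python
def relative_centrifugal_force(rpm, radius_cm):
    # Standard textbook conversion from rotational speed to relative
    # centrifugal force (in multiples of g): RCF = 1.118e-5 * r * rpm^2,
    # with the rotational radius r in centimeters.
    return 1.118e-5 * radius_cm * rpm ** 2

rcf = relative_centrifugal_force(1750, 5.0)  # 5 cm is an assumed radius
```

At 1750 rpm and a 5 cm radius this gives roughly 171 g, illustrating why even a hand-powered device can reach forces useful for lab-scale sedimentation.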
ARTICLE | doi:10.20944/preprints202309.1013.v1
Subject: Computer Science And Mathematics, Security Systems Keywords: energy website risk; website security measurements; OSINT (Open Source Intelligence)
Online: 15 September 2023 (04:22:21 CEST)
Static and dynamic analysis of website security risks is generally used for website security meas-urements. The website security measurements method recently proposed includes similarity hash-based website security risk analysis and machine learning-based website security risk anal-ysis. In this study, a method that can be performed through information disclosed on the Internet was proposed to measure the risk of a website. DNS information, IP information, and website history information are required to measure the risk of a website. In addition, global traffic rank-ing, malicious code distribution history, and HTTP access status can be checked in the website history information. In this study, information related to the security risk of websites in a total of 2,000 domains was collected and analyzed in 1,000 normal and malicious domains. Through the experiment results, 11 open information collection items were selected to measure the security risk of the website. This study presented the possibility of using public information collection to measure website security risk.
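Once the open information items are collected, they have to be combined into a single risk figure. The sketch below is a hypothetical weighted aggregation over boolean indicators; the study's 11 actual items and their weighting are not enumerated in the abstract, so every name and weight here is an assumption.

```python
def website_risk_score(indicators, weights=None):
    # Hypothetical aggregation: each indicator is a boolean risk flag
    # (e.g. "domain registered recently", "past malware history"),
    # optionally weighted, and the score is normalized to [0, 1].
    if weights is None:
        weights = [1.0] * len(indicators)
    total = sum(weights)
    return sum(w for flag, w in zip(indicators, weights) if flag) / total

# Example flags: young domain, shared IP, malware history, low traffic rank.
score = website_risk_score([True, False, True, True], [2.0, 1.0, 3.0, 1.0])
```

A thresholded score of this kind is what would separate the normal from the malicious domain populations in an evaluation.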
ARTICLE | doi:10.20944/preprints202107.0651.v1
Subject: Social Sciences, Psychology Keywords: multiple measures synchronization; automatic device integration; open-source; PsychoPy; Unity
Online: 29 July 2021 (11:48:02 CEST)
Background: The human mind is multimodal. Yet most behavioral studies rely on century-old measures such as task accuracy and latency. To create a better understanding of human behavior and brain functionality, we should introduce other measures and analyze behavior from various aspects. However, it is technically complex and costly to design and implement experiments that record multiple measures. To address this issue, a platform that allows synchronizing multiple measures from human behavior is needed. Method: This paper introduces an open-source platform named OpenSync, which can be used to synchronize multiple measures in neuroscience experiments. This platform helps to automatically integrate, synchronize and record physiological measures (e.g., electroencephalogram (EEG), galvanic skin response (GSR), eye-tracking, body motion, etc.), user input responses (e.g., from mouse, keyboard, joystick, etc.), and task-related information (stimulus markers). In this paper, we explain the structure and details of OpenSync and provide two case studies, in PsychoPy and Unity. Comparison with existing tools: Unlike proprietary systems (e.g., iMotions), OpenSync is free and can be used inside any open-source experiment design software (e.g., PsychoPy, OpenSesame, Unity, etc.; https://pypi.org/project/OpenSync/ and https://github.com/moeinrazavi/OpenSync_Unity). Results: Our experimental results show that the OpenSync platform is able to synchronize multiple measures with microsecond resolution.
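The basic synchronization problem OpenSync addresses is aligning independently clocked streams to common event markers. A generic nearest-timestamp alignment is sketched below; this illustrates the idea only and is not OpenSync's API (its stream and marker structures are assumptions here).

```python
import bisect

def align_to_markers(marker_times, stream):
    # Align one measure stream (a list of (timestamp, value) pairs,
    # e.g. EEG samples) to stimulus-marker times by picking, for each
    # marker, the sample whose timestamp is nearest.
    times = [t for t, _ in stream]
    aligned = []
    for mt in marker_times:
        i = bisect.bisect_left(times, mt)
        if i == 0:
            j = 0
        elif i == len(times) or mt - times[i - 1] <= times[i] - mt:
            j = i - 1  # left neighbor is at least as close
        else:
            j = i
        aligned.append(stream[j][1])
    return aligned

eeg = [(0.00, 1.0), (0.01, 1.5), (0.02, 2.0), (0.03, 2.5)]
vals = align_to_markers([0.009, 0.024], eeg)
```

Real platforms additionally correct for per-device clock offset and drift before this alignment step.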
TECHNICAL NOTE | doi:10.20944/preprints202103.0194.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Active Learning; Classification; Machine Learning; Python; GitHub; Repository; Open Source
Online: 5 March 2021 (21:14:20 CET)
Machine learning applications often need large amounts of training data to perform well. Whereas unlabeled data can be easily gathered, the labeling process is difficult, time-consuming, or expensive in most applications. Active learning can help solve this problem by querying labels for those data points that will improve the performance the most. The goal is for the learning algorithm to perform sufficiently well with fewer labels. We provide a library called scikit-activeml that covers the most relevant query strategies and implements tools to work with partially labeled data. It is programmed in Python and builds on top of scikit-learn.
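The query-strategy idea can be shown in a few lines. This is a generic least-confidence uncertainty-sampling sketch over a toy linear classifier, not scikit-activeml's actual API; the classifier and data are invented for illustration.

```python
import numpy as np

def predict_proba(X, w, b):
    # Toy linear classifier: sigmoid of a linear score, returned as
    # two-class probabilities (columns: class 0, class 1).
    p1 = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return np.column_stack([1 - p1, p1])

def uncertainty_query(proba, batch_size=1):
    # Least-confidence strategy: query the pool points whose top
    # predicted class probability is lowest, i.e. the points the
    # current model is least sure about.
    uncertainty = 1.0 - proba.max(axis=1)
    return np.argsort(uncertainty)[::-1][:batch_size]

# 1-D pool; the decision boundary of the toy model sits at x = 0.
X = np.array([[-3.0], [-1.0], [-0.1], [0.2], [2.5]])
proba = predict_proba(X, np.array([2.0]), 0.0)
idx = uncertainty_query(proba, batch_size=2)
```

The two queried points are the ones nearest the decision boundary, which is exactly where a label purchase is expected to improve the model most.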
ARTICLE | doi:10.20944/preprints202001.0080.v1
Subject: Engineering, Energy And Fuel Technology Keywords: shale gas; MRST; embedded discrete fracture model; open-source implementation
Online: 9 January 2020 (09:59:37 CET)
We present a generic and open-source framework for the numerical modeling of the expected transport and storage mechanisms in unconventional gas reservoirs. These unconventional reservoirs typically contain natural fractures at multiple scales. Considering the importance of these fractures in shale gas production, we perform a rigorous study on the accuracy of different fracture models. The framework is validated against an industrial simulator and is used to perform a history-matching study on the Barnett shale. This work presents an open-source code that leverages cutting-edge numerical modeling capabilities like automatic differentiation, stochastic fracture modeling, multi-continuum modeling and other explicit and discrete fracture models. We modified the conventional mass balance equation to account for the physical mechanisms that are unique to organic-rich source rocks. Some of these include the use of an adsorption isotherm, a dynamic permeability-correction function, and an embedded discrete fracture model (EDFM) with fracture-well connectivity. We explore the accuracy of the EDFM for modeling hydraulically-fractured shale-gas wells, which could be connected to natural fractures of finite or infinite conductivity, and could deform during production. Simulation results indicate that although the EDFM provides a computationally efficient model for describing flow in natural and hydraulic fractures, it can be inaccurate under three conditions: (1) when the fracture conductivity is very low; (2) when the fractures are not orthogonal to the underlying Cartesian grid blocks; and (3) when sharp pressure drops occur in large grid blocks with insufficient mesh refinement. Each of these results is very significant considering that most of the fluids in these ultra-low matrix permeability reservoirs are produced through the interconnected natural fractures, which are expected to have very low fracture conductivities.
We also expect sharp pressure drops near the fractures in these shale gas reservoirs, and it is unrealistic to expect the hydraulic fractures or complex fracture networks to be orthogonal to any structured grid. In conclusion, this paper presents an open-source numerical framework to facilitate the modeling of the expected physical mechanisms in shale-gas reservoirs. The code was validated against published results and a commercial simulator. We also performed a history-matching study on a naturally-fractured Barnett shale-gas well considering adsorption, gas slippage and diffusion, and fracture closure as well as proppant embedment, using the framework presented. This work provides the first open-source code that can be used to facilitate the modeling and optimization of fractured shale-gas reservoirs. To provide the numerical flexibility to accurately model stochastic natural fractures that are connected to hydraulically-fractured wells, it is built atop other related open-source codes. We also present the first rigorous study on the accuracy of using EDFM to model both hydraulic fractures and natural fractures that may or may not be interconnected.
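The adsorption isotherm mentioned above is, in shale-gas modeling, conventionally the Langmuir form. The sketch shows that canonical relation; the framework's exact source-term formulation may differ, and the parameter values are illustrative.

```python
def langmuir_adsorption(p, v_l, p_l):
    # Canonical Langmuir isotherm for adsorbed gas in organic-rich
    # source rock: V = V_L * p / (p_L + p), where V_L is the Langmuir
    # volume (maximum adsorbed volume) and p_L the Langmuir pressure
    # (pressure at which half of V_L is adsorbed).
    return v_l * p / (p_l + p)

# At p = p_L the adsorbed volume is exactly half the Langmuir volume.
half = langmuir_adsorption(p=1000.0, v_l=200.0, p_l=1000.0)
```

In a mass-balance equation this term adds a pressure-dependent storage contribution that is released as reservoir pressure declines during production.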
ARTICLE | doi:10.20944/preprints201708.0022.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: real‐time reconstruction; SLAM; kinect sensors; depth cameras; open source
Online: 7 August 2017 (11:03:23 CEST)
Given a stream of depth images with a known cuboid reference object present in the scene, we propose a novel approach for accurate camera tracking and volumetric surface reconstruction in real-time. Our contribution in this paper is threefold: (a) utilizing a priori knowledge of the cuboid reference object, we keep drift-free camera tracking without explicit global optimization; (b) we improve the fineness of the volumetric surface representation by proposing a prediction-corrected data fusion strategy rather than a simple moving average, which enables accurate reconstruction of high-frequency details such as sharp edges of objects and geometries of high curvature; (c) we introduce a benchmark dataset CU3D containing both synthetic and real-world scanning sequences with ground-truth camera trajectories and surface models for quantitative evaluation of 3D reconstruction algorithms. We test our algorithm on our dataset and demonstrate its accuracy compared with other state-of-the-art algorithms. We release both our dataset and code as open source for other researchers to reproduce and verify our results.
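The "simple moving average" baseline that the prediction-corrected fusion strategy improves on is the standard weighted running-average update used in KinectFusion-style TSDF volumes. A per-voxel sketch of that baseline rule (parameter names are illustrative):

```python
def fuse_moving_average(d_old, w_old, d_new, w_new=1.0, w_max=100.0):
    # Classic weighted moving-average TSDF fusion: each voxel's signed
    # distance is a running weighted mean of the incoming depth
    # observations, with the accumulated weight capped at w_max so the
    # volume can still adapt to new measurements.
    d = (w_old * d_old + w_new * d_new) / (w_old + w_new)
    w = min(w_old + w_new, w_max)
    return d, w

# A voxel with accumulated weight 3 receives a new zero-crossing sample.
d, w = fuse_moving_average(d_old=0.04, w_old=3.0, d_new=0.00)
```

Because this average smooths all observations equally, it blurs sharp edges and high-curvature geometry, which is precisely the weakness a prediction-corrected strategy targets.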
ARTICLE | doi:10.20944/preprints201910.0232.v1
Subject: Environmental And Earth Sciences, Geophysics And Geology Keywords: solar radiation; diffuse; LSA SAF; aerosols; MSG SEVIRI; open source code
Online: 20 October 2019 (02:24:07 CEST)
Several studies have shown that changes in incoming solar radiation and variations of the diffuse fraction can significantly modify vegetation carbon uptake. Hence, monitoring incoming solar radiation at large scale and with high temporal frequency is crucial for this and many other reasons. The EUMETSAT Satellite Application Facility for Land Surface Analysis (LSA SAF) has operationally disseminated near-real-time estimates of the downwelling shortwave radiation at the surface since 2005. This product is derived from observations provided by the SEVIRI instrument onboard the Meteosat Second Generation series of geostationary satellites, which covers Europe, Africa, the Middle East, and part of South America. However, near-real-time generation of the diffuse fraction at the surface level has only recently been initiated. The main difficulty towards achieving this goal was the general lack of accurate information on the aerosol particles in the atmosphere. This limitation is nowadays less important thanks to improvements in atmospheric numerical models. This study presents an upgrade of the LSA SAF operational retrieval method, which provides the simultaneous estimation of the incoming solar radiation and its diffuse fraction from satellite every 15 minutes. The upgrade includes a comprehensive representation of the influence of aerosols based on physical approximations of the radiative transfer within an associated atmosphere-surface medium. This article explains the retrieval method, discusses its limitations and differences with respect to the previous method, and details the characteristics of the output products. A companion article will focus on the evaluation of the products against independent measurements of solar radiation. Finally, access to the source code is provided through an open access platform in order to share with the community the expertise on the satellite retrieval of this variable.
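To make the retrieved quantity concrete: the diffuse fraction is the share of global surface irradiance arriving as scattered light, and it rises sharply with cloudiness and aerosol load. The sketch below uses the classic empirical Erbs et al. clearness-index correlation purely to illustrate the quantity; the LSA SAF method described above is a physical aerosol-based retrieval, not this fit, and the coefficients here are the published Erbs values as commonly quoted.

```python
def erbs_diffuse_fraction(kt):
    # Erbs et al. piecewise correlation between the hourly clearness
    # index kt (global irradiance / extraterrestrial irradiance) and
    # the diffuse fraction of global irradiance.
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    return 0.165

f_clear = erbs_diffuse_fraction(0.75)   # clear sky: mostly direct beam
f_cloudy = erbs_diffuse_fraction(0.15)  # overcast: almost all diffuse
```

Under heavy cloud nearly all radiation is diffuse, while under clear skies the diffuse share drops below a third, which is why aerosol information matters so much for the clear-sky cases.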
ARTICLE | doi:10.20944/preprints202006.0318.v1
Subject: Medicine And Pharmacology, Other Keywords: ventilator; pandemic; ventilation; influenza pandemic; coronavirus; coronavirus pandemic; pandemic ventilator; single-limb; open source; open hardware; COVID-19; medical hardware; RepRap; 3-D printing; open source medical hardware; embedded systems; real-time operating system
Online: 26 June 2020 (17:25:16 CEST)
This study describes the development of an automated bag valve mask (BVM) compression system, which, during acute shortages and supply chain disruptions, can serve as a temporary emergency ventilator. The resuscitation system is based on an Arduino controller with a real-time operating system, installed on a largely RepRap 3-D printable, parametric, component-based structure. The cost of the system is under $170, which makes it affordable for replication by makers around the world. The device provides a controlled breathing mode with tidal volumes from 100 to 800 milliliters, breathing rates from 5 to 40 breaths/minute, and inspiratory-to-expiratory ratios from 1:1 to 1:4. The system is designed for reliability and scalability of measurement circuits through the use of the serial peripheral interface and has the ability to connect additional hardware due to the object-oriented algorithmic approach. Experimental results demonstrate repeatability and accuracy exceeding human capabilities in BVM-based manual ventilation. Future work is necessary to further develop and test the system to make it acceptable for deployment outside of emergencies in clinical environments; however, the nature of the design is such that desired features are relatively easy to add and test using the protocols and parametric design files provided.
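The controlled breathing mode described above fixes three parameters (tidal volume, rate, I:E ratio), and the compression timing follows from the latter two by simple arithmetic. This is an illustrative sketch of that timing calculation, not the device's actual firmware.

```python
def breath_timing(rate_bpm, ie_ratio):
    # Split each breath cycle into inspiration and expiration phases
    # according to the breathing rate and the I:E ratio, as any BVM
    # compression controller of this kind must do.
    cycle = 60.0 / rate_bpm            # seconds per breath
    i, e = ie_ratio
    t_insp = cycle * i / (i + e)       # arm-compression phase
    t_exp = cycle * e / (i + e)        # release / passive exhalation
    return t_insp, t_exp

t_insp, t_exp = breath_timing(20, (1, 2))  # 20 breaths/min at 1:2
```

At 20 breaths/minute with a 1:2 ratio, each 3 s cycle divides into 1 s of compression and 2 s of release; the real controller additionally shapes the compression profile to hit the set tidal volume.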
REVIEW | doi:10.20944/preprints202208.0434.v1
Subject: Engineering, Mechanical Engineering Keywords: COVID-19; 3D Printing; Additive Manufacturing; Medical Applications; Open-source files; Innovation
Online: 25 August 2022 (10:24:14 CEST)
The Coronavirus disease 2019 (COVID-19) rapidly spread to over 180 countries and abruptly disrupted production rates and supply chains worldwide. Since then, 3D printing, also known as additive manufacturing (AM), a technique that produces intricate 3D geometry by layer-by-layer deposition of material, has been engaged in reducing the distress caused by the outbreak. During the early stages of the pandemic, shortages of Personal Protective Equipment (PPE), including facemasks, shields, respirators, and other medical gear, were significantly alleviated by 3D printing them remotely. Amid growing testing requirements, 3D printing emerged as a fast and viable manufacturing process to meet production needs, owing to its flexibility, reliability, and rapid response capabilities. In the recent past, other medical applications that have gained prominence in the scientific community include 3D printed ventilator splitters, device components, and patient-specific products. Regarding non-medical applications, researchers have successfully developed contact-free devices to address the sanitary crisis in public places. This work aims to systematically review the applications of 3D printing or AM techniques that have been involved in producing various critical products essential to limiting this deadly pandemic's progression.
REVIEW | doi:10.20944/preprints202003.0220.v1
Subject: Biology And Life Sciences, Ecology, Evolution, Behavior And Systematics Keywords: 3D printing; 3D scanning; customized ecological objects; methods; stereolithography; open-source lab
Online: 12 March 2020 (14:46:07 CET)
3D printing is described as the third industrial revolution: its impact is global in industry and progresses every day in society. It presents a huge potential for ecology and evolution, sciences with a long tradition of inventing and creating objects for research, education and outreach. Its general principle as an additive manufacturing technique is relatively easy to understand: objects are created by adding material layers on top of each other. Although this may seem very straightforward on paper, it is much harder in practice. Specific knowledge is indeed needed to successfully turn an idea into a real object, because of technical choices and limitations at each step of the implementation. This article aims to help scientists jump into the 3D printing revolution by offering a hands-on guide to current 3D printing technology. We first give a brief overview of uses of 3D printing in ecology and evolution, then review the whole process of object creation, split into three steps: (1) obtaining the digital 3D model of the object of interest, (2) choosing the 3D printing technology and material best adapted to the requirements of its intended use, and (3) pre- and post-processing the 3D object. We compare the main technologies available and their pros and cons according to the features and the use of the object to be printed. We give specific and key details in appendices, based on examples in ecology and evolution.
Subject: Computer Science And Mathematics, Computer Science Keywords: 3D object reconstruction; depth cameras; Kinect sensors; open source; signal denoising; SLAM
Online: 9 April 2019 (12:24:34 CEST)
3D object reconstruction from depth image streams using Kinect-style depth cameras has been extensively studied. In this paper, we propose an approach for accurate camera tracking and volumetric dense surface reconstruction assuming a known cuboid reference object is present in the scene. Our contribution is three-fold. (a) We maintain drift-free camera pose tracking by incorporating the 3D geometric constraints of the cuboid reference object into the image registration process. (b) We reformulate the problem of depth stream fusion as a binary classification problem, enabling high-fidelity surface reconstruction, especially in the concave zones of objects. (c) We further present a surface denoising strategy to mitigate the topological inconsistency (e.g., holes and dangling triangles), which facilitates the generation of a noise-free triangle mesh. We extend our public dataset CU3D with several new image sequences, test our algorithm on these sequences and quantitatively compare them with other state-of-the-art algorithms. Both our dataset and our algorithm are available as open-source content at https://github.com/zhangxaochen/CuFusion for other researchers to reproduce and verify our results.
ARTICLE | doi:10.20944/preprints201902.0087.v1
Subject: Social Sciences, Geography, Planning And Development Keywords: Noise mapping; END directive; GIS; open source; standards; road traffic; population exposure
Online: 11 February 2019 (09:48:53 CET)
Urbanisation, with the related expansion of cities and transport networks, calls for preventing growth in the population exposed to environmental pollution. Regarding noise exposure, the Environmental Noise Directive requires major metropolises to produce noise maps. While based on standard methods, these maps are usually generated with proprietary software and require numerous input data concerning, for example, buildings, land use, the transportation network, and traffic. The present work describes an open source implementation of a noise mapping tool fully integrated into a Geographic Information System compliant with the Open Geospatial Consortium standards. This integration simplifies at once the formatting and harvesting of noise model input data, the cartographic rendering, and the linkage of output data with population data. An application is given for a French city, estimating the impact of road traffic-related scenarios in terms of population exposure to noise levels, both relative to a threshold value and by level classes.
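The final exposure statistics reduce to counting residents per noise-level band. The sketch below shows that aggregation step in a generic form; the band edges and data layout are illustrative and not the tool's actual data model.

```python
def exposure_by_class(levels_db, populations, class_edges):
    # Assign each building's resident count to the noise-level band
    # containing its facade level (e.g. Lden); class_edges are the
    # lower bounds of each band, and levels below the first band are
    # not counted, as in END-style reporting.
    counts = [0] * len(class_edges)
    for lden, pop in zip(levels_db, populations):
        for k in range(len(class_edges) - 1, -1, -1):
            if lden >= class_edges[k]:
                counts[k] += pop
                break
    return counts

# Five buildings with Lden levels (dB) and resident counts, bands from 55 dB.
counts = exposure_by_class([52, 57, 63, 66, 71], [10, 20, 30, 40, 50],
                           [55, 60, 65, 70])
```

Running the same aggregation under two traffic scenarios yields the scenario comparison described above: the shift of population between bands, and across a chosen threshold.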
ARTICLE | doi:10.20944/preprints201912.0063.v1
Subject: Engineering, Control And Systems Engineering Keywords: Software Quality Metrics; closed source software; open source software; Kahane’s Approach; UCP (Use Case Points) model; William’s Models
Online: 5 December 2019 (08:37:56 CET)
The complexity of software is increasing day by day as the size of developed projects grows. For better planning and management of large software projects, estimating software quality is important. During development, complexity metrics are used as indicators of the attributes and characteristics of quality software, and many studies have examined the effect of software complexity on cost and quality. In this study, we discuss the effects of software complexity on the quality attributes of open source and closed source software; the quality metrics for the two are not distinct from each other. We comparatively analyze the impact of complexity metrics on open source and proprietary software, and we present various models for managing project complexity, such as William’s Model, Stacey’s Agreement and Certainty Matrix, Kahane’s Approach, and the UCP Model. Quality metrics here refer to standards for measuring software quality, comprising attributes or characteristics related to quality such as usability, reliability, security, portability, maintainability, efficiency, cost, standards, and availability. Both open source and closed source software are evaluated on the basis of these quality attributes. This study also recommends future approaches to managing the quality of open source and closed source projects and identifies which of the two is mostly used in industry.
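As a concrete instance of the complexity metrics discussed, McCabe's cyclomatic complexity is among the most widely used; it is shown here for illustration, as the abstract does not enumerate the study's exact metric set.

```python
def cyclomatic_complexity(edges, nodes, components=1):
    # McCabe's cyclomatic complexity over a control-flow graph:
    # M = E - N + 2P, where E is the number of edges, N the number of
    # nodes, and P the number of connected components. Equivalently,
    # M is the number of decision points plus one for a single routine.
    return edges - nodes + 2 * components

# A routine whose control-flow graph has 9 edges and 8 nodes (two
# decision points), giving M = 3 independent paths to test.
m = cyclomatic_complexity(edges=9, nodes=8)
```

Higher M correlates with harder testing and maintenance, which is how such metrics feed into the quality attributes (maintainability, reliability) compared across open and closed source code.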
REVIEW | doi:10.20944/preprints202203.0217.v1
Subject: Engineering, Energy And Fuel Technology Keywords: energy policy; energy conservation; climate change; global safety; open hardware; open source; photovoltaic; renewable energy; solar energy; national security
Online: 15 March 2022 (14:27:35 CET)
Free and open source hardware (FOSH) development has been shown to increase innovation and reduce economic costs. This article reviews the opportunity to use FOSH as a sanction to undercut imports and exports from a target criminal country. A formal methodology is presented for selecting strategic national investments in FOSH development to improve both national security and global safety. In this methodology, first the target country that is threatening national security or safety is identified. Next, the top imports from the target country, as well as those of potentially other importing countries (allies), are quantified. Hardware is then identified that could undercut imports/exports from the target country. Finally, methods to support FOSH development are enumerated to support production in a commons-based peer production strategy. To demonstrate how this theoretical method works in practice, it is applied as a case study to the current criminal military aggressor nation, which is also a fossil fuel exporter. The results show there are numerous existing FOSH, and opportunities to develop new FOSH, for energy conservation and renewable energy to reduce fossil fuel energy demand. Widespread deployment would reduce the concomitant pollution, human health impacts, and environmental desecration, as well as cut financing of military operations.
Subject: Chemistry And Materials Science, Polymers And Plastics Keywords: drying; materials processing; vacuum oven; small-scale; lab equipment; air-powered; open hardware; open source; digital manufacturing; dehydration
Online: 22 April 2021 (09:16:02 CEST)
Vacuum drying can dehydrate materials further than dry heat methods while protecting sensitive materials from thermal degradation. Many industries have shifted to vacuum drying as cost- or time-saving measures. Small-scale vacuum drying, however, has been limited by high costs of specialty scientific tools. To make vacuum drying more accessible, this study provides design and performance information for a small-scale open source vacuum oven, which can be fabricated from off-the-shelf and 3-D printed components. The oven is tested for drying speed and effectiveness on both waste plastic polyethylene terephthalate (PET) and a consortium of bacteria developed for bioprocessing of terephthalate wastes to assist in distributed recycling of PET for both additive manufacturing as well as potential food. Both materials can be damaged when exposed to high temperatures, making vacuum drying a desirable solution. The results showed the open source vacuum oven was effective at drying both plastic and biomaterials, drying at a higher rate than a hot-air dryer for small samples or for low volumes of water. The system can be constructed for less than 20% of commercial vacuum dryer costs for several laboratory-scale applications including dehydration of bio-organisms, drying plastic for distributed recycling and additive manufacturing, and chemical processing.
ARTICLE | doi:10.20944/preprints201905.0060.v1
Subject: Biology And Life Sciences, Biology And Biotechnology Keywords: open source; 3D printing; Drosophila; laser cutter; lab equipment; open labware; fly-pushing; fly pad; fly plate; CO2 anesthesia
Online: 6 May 2019 (11:51:52 CEST)
One of the most important pieces of equipment used in labs culturing populations of fruit flies (Drosophila sp.) is the “CO2 gas plate”, which is used to anesthetize individuals during “fly-pushing”. This piece of equipment consists of a box with a porous top into which carbon dioxide is pumped. Flies placed on its surface are immobilized, permitting the sorting, categorizing, and/or counting of flies during population culturing and experimental assays. Unfortunately, commercially available gas plates are typically expensive. Here, we describe a new design for a gas plate that can be easily produced using a 3D printer and a laser cutter, which we are making freely available to the fly community.
ARTICLE | doi:10.20944/preprints202306.1669.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: Livestock monitoring; Open source UAV; Depth sorting; Kalman filter; Optical flow; Visual servo
Online: 23 June 2023 (11:53:25 CEST)
Drone-based livestock monitoring in high-altitude and cold regions is a challenging and meaningful task. The value of AI is realized when software is combined with a hardware carrier to create integrated devices that execute automated tasks and solve practical problems in real applications. In this paper, a real-time tracking system with dynamic target tracking ability is proposed. It is developed on a tracking-by-detection architecture, using the YOLOv7 and DeepSORT algorithms for target detection and tracking, respectively. To address known weaknesses of the DeepSORT algorithm, two optimizations are made: (1) optical flow is used to compensate the Kalman filter and improve prediction accuracy; (2) a low-confidence trajectory filtering method is adopted to reduce the influence of unreliable detections on target tracking. In addition, a visual servo controller for the UAV is designed to enable the automated tracking task. Finally, the system is tested using Tibetan yaks living on the Tibetan Plateau as the tracking targets, and the results demonstrate the real-time multiple-target tracking ability and the intended visual servo performance of the proposed system.
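The low-confidence trajectory filtering step described in optimization (2) can be illustrated with a short sketch. This is not the authors' code: the function name, the probation window, and the 0.4 threshold are assumptions chosen for illustration; the idea is simply to discard candidate tracks whose earliest detections are unreliable.

```python
# Illustrative sketch (not the paper's implementation) of low-confidence
# trajectory filtering for a DeepSORT-style tracker: a candidate track is
# kept only if the mean confidence of its first few detections exceeds a
# threshold. Thresholds and field names are hypothetical.

def filter_low_confidence_tracks(tracks, min_mean_conf=0.4, probation_frames=3):
    """Drop candidate tracks whose early detections are unreliable."""
    kept = []
    for track in tracks:
        confs = track["confidences"][:probation_frames]
        mean_conf = sum(confs) / len(confs)
        if mean_conf >= min_mean_conf:
            kept.append(track)
    return kept
```

In a real pipeline this filter would run when a tentative track is about to be confirmed, so spurious detections never spawn long-lived identities.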
ARTICLE | doi:10.20944/preprints202304.0994.v1
Subject: Engineering, Mechanical Engineering Keywords: mobility; mobility aid; adaptive aid; walker; 3-D printing; additive manufacturing; mechanical testing; open hardware; open source hardware; frugal innovation
Online: 26 April 2023 (13:06:50 CEST)
To improve accessibility, this article describes a static, four-legged walker that can be constructed from materials and fasteners commonly available from hardware stores, coupled by open-source 3-D printed joints. The designs are described in detail, shared under an open-source license, and fabricated with a low-cost open-source desktop 3-D printer and hand tools. The resulting device is loaded to failure to determine the maximum load the design can safely support in both vertical and horizontal failure modes. The experimental results showed an average vertical failure load of 3680±694.3N, equivalent to 375.3±70.8kg of applied weight, with the fracture located at the wood-dowel handlebars. The average horizontal load capacity was 315.6±49.4N, equivalent to 32.2±5.1kg. A maximum user weight capacity of 187.1±29.3kg was obtained, which indicates the open-source walker design can withstand the weight requirements of all genders with a 95% confidence interval that includes a safety factor of 1.8 when considering the lowest-deviation weight capacity. The design costs at the bottom of the range of commercial walkers and reduces mass compared to a commercial walker by 0.5kg (a 19% reduction). It can be concluded that this open-source walker design can aid accessibility in low-resource settings.
ARTICLE | doi:10.20944/preprints202011.0325.v1
Subject: Medicine And Pharmacology, Immunology And Allergy Keywords: single-subject studies; personalized medicine; precision medicine; reference standards; gold standards; biomarkers; open-source
Online: 10 November 2020 (16:36:56 CET)
Background: Developing patient-centric baseline standards that enable the detection of clinically significant outlier gene products on a genome scale remains an unaddressed challenge required for advancing personalized medicine beyond the small pools of subjects implied by “precision medicine”. This manuscript proposes a novel approach to reference standard development for evaluating the accuracy of single-subject analyses of metabolomes, proteomes, or transcriptomes. Since the distributional assumptions of statistical testing may inadequately model the genome dynamics of gene products, the so-called significant results of previous studies may artefactually conflate with real signals. Model confirmation biases escalate when studies use the same analytical methods in the discovery sets and reference standards, as corroboration of results leads to an evaluation of reproducibility confounded with replicated biases rather than a measure of accuracy. We hypothesized that developing method-agnostic reference standards, using effect-size and expression-level filtering of results obtained from multiple discovery methods distinct from the one evaluated, would maximize the evaluation of clinical-transcriptomic signals and minimize statistical artefactual biases. We developed and released an R package, “referenceNof1”, to facilitate the construction of robust reference standards. Results: Because RNA-Seq data analysis methods range from those relying on binomial and negative binomial assumptions to non-parametric analyses, the differences between them create statistical noise and make reference standards method-dependent. In our experimental design, the accuracy of 30 distinct combinations of fold changes (FC) and expression levels (EL) was determined for five types of RNA analyses. This design was applied to two distinct datasets: breast cancer cell lines and a yeast study with isogenic biological replicates in two experimental conditions.
In addition, the reference standard (RS) comprised all RNA analytical methods with the exception of the method being tested for accuracy. To mitigate biased optimization of the RS parameters towards a specific analytical method, the similarity between the observed results of distinct analytical methods was calculated across all methods (Jaccard Concordance Index). The greatest differences were observed across diametric extremes. For example, filtering out differentially expressed genes (DEGs) with a fold change < 1.2 leads to a 50% increase in concordance between techniques compared to results with FC > 1.2. Combining this FC cutoff with genes with mean expression > 30 counts leads to a 65% increase in concordance in comparison to genes with expression levels < 30 counts and FC < 1.2. Conclusions: We have demonstrated that comparing the accuracies of different single-subject analysis methods for clinical optimization requires a new evaluation framework. Reliable and robust reference standards, independent of the evaluated method, can be obtained under a limited number of parameter combinations: fold change (FC) thresholds, expression level cutoffs, and exclusion of the tested method from the RS development process. When applying anticonservative reference standard frameworks (e.g., using the same method for RS development and for prediction), the majority of the concordant signal between prediction and Gold Standard (GS) cannot be confirmed by other methods, which we conclude to be biased results. Statistical tests to determine DEGs from a single-subject study generate many biased results that require subsequent filtering to increase their reliability. Conventional single-subject studies pertain to one or a few measures in one patient over time and need a substantial extension of their conceptual framework to address the tens of thousands of measures in genome-wide analyses of gene products.
The proposed referenceNof1 framework addresses some of the inherent challenges in improving transcriptome scale single-subject analyses by providing a robust approach to constructing reference standards. Github: https://github.com/SamirRachidZaim/referenceNof1
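The Jaccard Concordance Index and the FC/expression filtering described above can be sketched in a few lines. This is an illustrative stand-in, not code from the referenceNof1 package; the gene names, the results structure, and the helper names are hypothetical, while the cutoff values (FC 1.2, 30 counts) mirror those discussed in the abstract.

```python
# Illustrative sketch (not the referenceNof1 R package): Jaccard concordance
# between two DEG sets, plus the fold-change / expression-level filtering
# discussed above. Data structures and names are hypothetical.

def jaccard_concordance(degs_a, degs_b):
    """Jaccard index between two sets of differentially expressed genes:
    |intersection| / |union|, in [0, 1]."""
    a, b = set(degs_a), set(degs_b)
    if not a and not b:
        return 1.0  # two empty result sets are trivially concordant
    return len(a & b) / len(a | b)

def filter_degs(results, min_fc=1.2, min_counts=30):
    """Keep genes passing fold-change and mean-expression cutoffs.
    `results` maps gene -> (fold_change, mean_counts)."""
    return {gene for gene, (fc, counts) in results.items()
            if abs(fc) >= min_fc and counts >= min_counts}
```

Computing this index pairwise across all analytical methods, with the evaluated method excluded, is the essence of the method-agnostic RS construction the abstract describes.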
ARTICLE | doi:10.20944/preprints201906.0251.v1
Subject: Physical Sciences, Applied Physics Keywords: video microscopy, imaging, automated data acquisition, nanoparticle tracking, measurement embedded applications, open-source software
Online: 25 June 2019 (12:53:50 CEST)
We introduce PyNTA, a modular instrumentation software package for live particle tracking. By using Python's multiprocessing library and the distributed messaging library pyZMQ, PyNTA allows users to acquire images from a camera at close to the maximum readout bandwidth while simultaneously performing computations on each image on a separate processing unit. This publisher/subscriber pattern generates little overhead and leverages the multi-core capabilities of modern computers. We demonstrate the capabilities of the PyNTA package on the featured application of nanoparticle tracking analysis. Real-time particle tracking on megapixel images at a rate of 50 Hz is presented. Reliable live tracking reduces the required storage capacity for particle tracking measurements by a factor of approximately 10³ compared with raw data storage, allowing for a virtually unlimited measurement duration.
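The acquire/analyze decoupling that PyNTA describes can be sketched with the standard library alone. PyNTA itself uses pyZMQ publisher/subscriber sockets; this stand-in replaces the socket with a multiprocessing queue to show the same idea, namely that acquisition keeps pushing frames while analysis runs in a separate process. All names and frame contents are illustrative.

```python
# Minimal sketch of the publisher/subscriber decoupling (stdlib only; the
# real PyNTA uses pyZMQ sockets instead of this queue). A "camera" process
# pushes frames while a worker consumes them, so slow analysis does not
# block acquisition.

from multiprocessing import Process, Queue

def acquire(queue, n_frames=5):
    """Stand-in 'camera': push fake frames, then a sentinel."""
    for i in range(n_frames):
        queue.put({"frame_id": i, "pixels": [0] * 16})  # fake frame data
    queue.put(None)  # sentinel: acquisition finished

def analyze(queue, results):
    """Consumer: process frames until the sentinel arrives."""
    while True:
        frame = queue.get()
        if frame is None:
            break
        results.put(frame["frame_id"])  # a real analyzer would locate particles

if __name__ == "__main__":
    frames, results = Queue(), Queue()
    worker = Process(target=analyze, args=(frames, results))
    worker.start()
    acquire(frames)   # acquisition proceeds while analysis runs in parallel
    worker.join()
```

Storing only the analyzer's output (particle positions) rather than raw megapixel frames is what yields the roughly thousand-fold storage reduction quoted in the abstract.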
ARTICLE | doi:10.20944/preprints201808.0545.v2
Subject: Engineering, Electrical And Electronic Engineering Keywords: model intercomparison; renewable energy; production cost modeling; security-constrained unit commitment; open-source software
Online: 24 December 2018 (10:55:11 CET)
Background: New open-source electric-grid planning models have the potential to improve power system planning and bring a wider range of stakeholders into the planning process for next-generation, high-renewable power systems. However, it has not yet been established whether open-source models perform similarly to the more established commercial models for power system analysis. This reduces their credibility and attractiveness to stakeholders, postponing the benefits they could offer. In this paper, we report the first model intercomparison between an open-source power system model and an established commercial production cost model. Results: We compare the open-source Switch 2.0 to GE Energy Consulting’s Multi Area Production Simulation (MAPS) for production-cost modeling, considering hourly operation under 17 scenarios of renewable energy adoption in Hawaii. We find that after configuring Switch with similar inputs to MAPS, the two models agree closely on hourly and annual production from all power sources. Comparing production gave a coefficient of determination of 0.996 across all energy sources and scenarios, indicating that the two models agree on 99.6% of the variation. For individual energy sources, the coefficient of determination was 69–100%. Conclusions: Although some disagreement remains between the two models, this work indicates that Switch is a viable choice for renewable integration modeling, at least for the small power systems considered here.
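The agreement metric reported here is the ordinary coefficient of determination between the two models' production series. A minimal sketch (not code from Switch or MAPS; the series below are made up) is:

```python
# Illustrative sketch: coefficient of determination (R^2) between two
# hourly production series, e.g. one from Switch and one from MAPS.
# R^2 = 1 - SS_res / SS_tot; the series here are hypothetical.

def r_squared(observed, modeled):
    """Coefficient of determination between two production series.
    `observed` must have nonzero variance."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((y - mean_obs) ** 2 for y in observed)
    ss_res = sum((y - f) ** 2 for y, f in zip(observed, modeled))
    return 1 - ss_res / ss_tot
```

An R² of 0.996 across all sources and scenarios means the residual disagreement between the two models explains only 0.4% of the variation in production.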
ARTICLE | doi:10.20944/preprints202311.0417.v1
Subject: Biology And Life Sciences, Life Sciences Keywords: circadian alteration; light cycle; intermittent fasting; temperature cycle; animal research; experimental model; open-source hardware
Online: 7 November 2023 (10:35:23 CET)
Exposure of experimental rodents to controlled cycles of light, food, and temperature is important when investigating alterations in circadian cycles that profoundly influence health and disease. However, applying such stimuli simultaneously is difficult in practice. Our aim was to design, build, test, and describe as open source a simple device that subjects a conventional mouse cage to independent cycles of physiologically relevant environmental variables. The device is based on a box enclosing the rodent cage to modify the light, feeding, and temperature environments. The device provides temperature-controlled air conditioning (heating or cooling) by a Peltier module and includes programmable feeding and illumination. All functions are set from a user-friendly front panel for independent cycle programming. Bench testing with a model simulating the CO2 production of mice in the cage showed: a) suitable air renewal (measured as actual ambient CO2), b) controlled realistic illumination at the mouse enclosure (measured by a photometer), c) stable temperature control, and d) correct cycling of light, feeding, and temperature. The cost of all the supplies (retail purchased by e-commerce) was less than US$300. Detailed technical information is provided as open source, allowing any user to reliably reproduce or modify the device. This approach can considerably facilitate circadian research, since using one of the described low-cost devices for each mouse group with a given light-food-temperature paradigm allows all the experiments to be performed simultaneously, requiring no changes in the light/temperature of a general-use laboratory.
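The independent temperature cycling might be driven by logic like the following sketch. This is not the authors' firmware: the deadband, the setpoints, and the function names are assumptions; it only illustrates how a Peltier module (which heats or cools depending on drive direction) could hold a time-varying setpoint.

```python
# Hypothetical sketch (not the published design's firmware) of a Peltier
# thermostat with a two-phase daily temperature cycle. All values are
# illustrative assumptions.

def peltier_action(temp_c, setpoint_c, deadband_c=0.5):
    """Decide the Peltier drive for a measured cage temperature:
    'heat' below the deadband, 'cool' above it, 'hold' inside it."""
    if temp_c < setpoint_c - deadband_c:
        return "heat"
    if temp_c > setpoint_c + deadband_c:
        return "cool"
    return "hold"

def setpoint_for(hour, warm_c=24.0, cold_c=18.0, warm_start=8, warm_end=20):
    """Simple two-phase daily temperature cycle (hour in 0-23)."""
    return warm_c if warm_start <= hour < warm_end else cold_c
```

Analogous schedules for the light and feeder outputs, each with its own phase, are what make the three cycles independently programmable.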
ARTICLE | doi:10.20944/preprints202209.0174.v1
Subject: Engineering, Civil Engineering Keywords: open-source; photovoltaic; mechanical design; electric vehicle; solar energy; solar carport; electric vehicle charging station
Online: 13 September 2022 (10:41:43 CEST)
Solar powering the increasing fleet of electric vehicles (EVs) demands more surface area than may be available for photovoltaic (PV) powered buildings. Parking lot solar canopies can provide the needed area to charge EVs, but are substantially costlier than roof- or ground-mounted PV systems. To provide a lower-cost PV parking lot canopy to supply EV charging beneath it, this study provides a full mechanical and economic analysis of three novel PV canopy systems: (1) an exclusively wood, single parking spot spanning system, (2) a wood and aluminum double parking spot spanning system, and (3) a wood and aluminum cantilevered system for curbside parking. All systems are scalable to any number of EV parking spots. The complete designs and bills of materials (BOM) of the canopies are provided along with basic instructions and are released under an open source license that will enable anyone to fabricate them. The results found that single-span systems have cost savings of 82%–85%, double-span systems save 43%–50%, and cantilevered systems save 31%–40%. In the first year of operation, the PV canopies can provide 157% of the energy needed to charge the least efficient EV currently on the market if it is driven the average driving distance in London, ON, Canada.
ARTICLE | doi:10.20944/preprints202106.0046.v1
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: English vocabulary learning; Incidental vocabulary acquisition; Context-aware ubiquitous learning; Ubiquitous Computing; Open-source software
Online: 1 June 2021 (15:24:35 CEST)
Language learners often face communication problems when they need to express themselves but lack the ability to do so. At the same time, continuous advances in technology create new opportunities to improve second language (L2) acquisition through context-aware ubiquitous learning (CAUL) technology. Since vocabulary is the foundation of all language acquisition, this article presents ULearnEnglish, an open-source system for ubiquitous English learning focused on incidental vocabulary acquisition. To evaluate the proposal, 15 learners used the developed system, and 10 answered a survey based on the Technology Acceptance Model (TAM). Results indicate a favorable response to the use of learner context to assist learning. ULearnEnglish achieved an acceptance of 78.66% for perceived utility, 96% for perceived ease of use, 86% for user context assessment, and 88% for ubiquity. This study thus shows a positive response to using learners' locations to assist their learning. Among its main contributions, this study demonstrates an opportunity for applying ubiquity in future language-learning research. Further studies can use the available source code to evolve the model and system.
ARTICLE | doi:10.20944/preprints201901.0302.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: interoperability; digital elevation model; Google Sketchup; geographical information systems-science; free and open source software
Online: 30 January 2019 (05:28:53 CET)
Data creation is often the only way for researchers to produce basic geospatial information for the pursuit of more complex tasks and procedures, such as those that lead to the production of new data for studies concerning river basins, slope morphodynamics, applied geomorphology and geology, urban and territorial planning, and detailed studies in, for example, architecture and civil engineering. This exercise results from a reflection on how specific data processing tasks executed in Google Sketchup (Pro version, 2018) can be used in a context of interoperability with Geographical Information Systems (GIS) software. The focus is on the production of contour lines and Digital Elevation Models (DEM) using an innovative sequence of tasks and procedures in both environments (GS and GIS). It starts in the Google Sketchup (GS) graphic interface with the selection of a satellite image of the study area, which can be anywhere on Earth's surface; subsequent processing steps lead to the production of elevation data at the selected scale and equidistance. This new data must be exported to GIS software in vector formats such as Autodesk Design Web format (DWG) or Autodesk Drawing Exchange format (DXF). In this essay, the option was made for open source GIS software (gvSIG and QGIS). Correcting the original SHP by removing the “data noise” that resulted from DXF file conversion permits the author to create new clean vector data in SHP format and, at a later stage, generate DEM data. This means that new elevation data becomes available using simple but intuitive and interoperable procedures and techniques, which configures a costless workflow.
ARTICLE | doi:10.20944/preprints201901.0029.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: Android; arduino; bluetooth; hand-gesture recognition; low cost; open source; sensors; smart cars; speech recognition
Online: 3 January 2019 (14:32:23 CET)
Gesture recognition has always been a technique for decreasing the distance between the physical and the digital world. In this work, we introduce an Arduino-based vehicle system that no longer requires manual control of the car. The proposed work is achieved by utilizing an Arduino microcontroller, an accelerometer, an RF sender/receiver, and Bluetooth. Two main contributions are presented. Firstly, we show that the car can be controlled with hand gestures according to the movement and position of the hand. Secondly, the proposed car system is further extended to be controlled by an Android-based mobile application with different modes (e.g., touch-button mode, voice-recognition mode). In addition, an automatic obstacle detection system is introduced to improve safety and avoid hazards. The proposed systems are designed as lab-scale prototypes to experimentally validate their efficiency, accuracy, and affordability. We note that the proposed systems can be implemented under real conditions at large scale in the future, which will be useful in automobile and robotics applications.
Subject: Engineering, Industrial And Manufacturing Engineering Keywords: 3-D printing; additive manufacturing; distributed manufacturing; distributed recycling; granulator; shredder; open hardware; fab lab; open-source; polymers; recycling; waste plastic; extruder; upcycle; circular economy
Online: 1 September 2019 (08:25:03 CEST)
In order to accelerate the deployment of distributed recycling by providing low-cost feedstocks of granulated post-consumer waste plastic, this study analyzes an open source waste plastic granulator system. It is designed, built, and tested for its ability to convert post-consumer waste and 3-D printed products into polymer feedstock for recyclebots or fused particle/granule printers. The technical specifications of the device are quantified in terms of power consumption (380 to 404 W for PET and PLA, respectively) and particle size distribution. The open source device can be fabricated for less than US$2000 in materials. The experimentally measured power use is only a minor contribution to the overall embodied energy of distributed recycling of waste plastic. The resultant plastic particle size distributions were found to be appropriate for use in both recyclebots and direct material extrusion 3-D printers. Simple retrofits are shown to reduce sound levels during operation by 4–5 dB for the vacuum. These results indicate that the open source waste plastic granulator is an appropriate technology for community, library, makerspace, fab lab, or small-business-based distributed recycling.
ARTICLE | doi:10.20944/preprints201811.0087.v1
Subject: Engineering, Industrial And Manufacturing Engineering Keywords: 3-D printing; additive manufacturing; distributed manufacturing; open-source; polymers; recycling; waste plastic; extruder; upcycle; circular economy
Online: 5 November 2018 (07:45:36 CET)
Although distributed additive manufacturing can provide high returns on investment, the current markup on commercial filament over base polymers limits deployment. These cost barriers can be surmounted by eliminating the entire process of fusing filament by 3-D printing products directly from polymer granules. Fused granular fabrication (FGF) (or fused particle fabrication (FPF)) is being held back in part by the limited accessibility of low-cost pelletizers and choppers. An open-source 3-D printable invention disclosed here provides precisely controlled pelletizing of single thermopolymers as well as composites for 3-D printing. The system is designed, built, and tested for its ability to provide high-tolerance thermopolymer pellets in a number of sizes capable of being used in an FGF printer. In addition, the chopping pelletizer is tested for its ability to chop multiple materials simultaneously for color mixing and composite fabrication, as well as precise fractional measuring for conversion back into filament. The US$185 open-source 3-D printable pelletizer/chopper system was successfully fabricated and has a 0.5 kg/hr throughput with one motor, and 1.0 kg/hr with two motors, using only 0.24 kWh/kg during the chopping process. Pellets were successfully printed directly via FGF and indirectly after being converted into high-tolerance filament in a recyclebot.
ARTICLE | doi:10.20944/preprints201709.0145.v1
Subject: Engineering, Mechanical Engineering Keywords: Open source; FEA; finite element analysis; linear static structural; Code Aster; Salome Meca; Mecway; SimScale; Z88; CAE
Online: 28 September 2017 (14:58:31 CEST)
The aim of this work was to determine whether the development of low-cost or no-cost finite element analysis (FEA) software has advanced to the point where it can be used in place of trusted commercial FEA packages for linear static structural analyses using isotropic material models. Nonlinear structural analysis will be covered in a separate paper. Several suitable packages were identified; these underwent a process of systematic elimination when they were unable to meet the minimum imposed qualitative criteria. Three packages were chosen for performance benchmarking, namely Code_Aster/Salome Meca, Mecway, and Z88 Aurora. SimScale, a browser-based analysis package, was included as well because it met all the baseline criteria and has the potential to offer a completely cloud-based approach to computer-aided engineering, potentially reshaping the way an engineering business views its operational capabilities. This paper presents the test cases and simulation results for packages that fall under the linear static structural analysis type.
REVIEW | doi:10.20944/preprints201905.0302.v1
Subject: Business, Economics And Management, Economics Keywords: open science; open access; open data; economic impacts
Online: 27 May 2019 (11:19:59 CEST)
A common motivation for increasing open access to research findings and data is the potential to create economic benefits – but evidence is patchy and diverse. This study systematically reviewed the evidence on what kinds of economic impacts (positive and negative) open science can have, how these come about, and how benefits can be maximized. Use of open science outputs often leaves no obvious trace, so most evidence of impacts is based on interviews, surveys, inference from existing costs, and modelling approaches. There is indicative evidence that open access to findings/data can lead to savings in access costs, labour costs, and transaction costs. There are examples of open science enabling new products, services, companies, research, and collaborations. Modelling studies suggest higher returns to R&D if open access permits greater accessibility and efficiency of use of findings. Barriers include a lack of skills capacity in search, interpretation, and text mining, and a lack of clarity around where benefits accrue. There are also contextual considerations around who benefits most from open science (e.g. sectors, small vs. larger companies, types of dataset). Recommendations captured in the review include more research, monitoring and evaluation (including developing metrics), promoting benefits, capacity building, and making outputs more audience-friendly.
LETTER | doi:10.20944/preprints201607.0042.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: Thevenin; Norton; voltage source; current source
Online: 15 July 2016 (11:46:17 CEST)
Power-conservative Thevenin-Norton and Norton-Thevenin transformations are proposed in this letter. The transformations introduce a voltage generator and a current generator whose parameters depend on the value of the loading impedance.
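For context, the classical load-independent transformations that this letter generalizes are I_N = V_Th / Z_Th with Z_N = Z_Th, and conversely V_Th = I_N · Z_N. The sketch below shows only this textbook form; the letter's power-conservative variant, whose generator parameters depend on the load impedance, is not reproduced here since its formulas are not given in the abstract.

```python
# Classical (load-independent) source transformations for reference.
# Impedances may be complex; this is NOT the letter's power-conservative
# variant, only the textbook baseline it modifies.

def thevenin_to_norton(v_th, z_th):
    """Thevenin -> Norton: short-circuit current, same impedance."""
    return v_th / z_th, z_th  # (I_N, Z_N)

def norton_to_thevenin(i_n, z_n):
    """Norton -> Thevenin: open-circuit voltage, same impedance."""
    return i_n * z_n, z_n  # (V_Th, Z_Th)
```

The classical pair preserves the terminal voltage-current relationship for any load, but not the power dissipated inside the source; addressing that is the point of the letter's proposal.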
Subject: Social Sciences, Library And Information Sciences Keywords: bioeconomy; open science; open access
Online: 30 October 2020 (14:45:27 CET)
The purpose of this paper is to assess the degree of openness of scientific articles on bioeconomy. Based on a WoS corpus of 2,489 articles published between 2015 and 2019, we calculated bibliometric indicators, explored the openness of each paper, and assessed the share of journals, countries, and research areas of these articles. The results show a sharp increase and diversification of articles in the field of bioeconomy, with the beginnings of a long-tail distribution. 45.6% of the articles are freely available, and the share of OA papers is steadily increasing, from 31% in 2015 to 52% in 2019. Gold is the most important variant of OA. Open access is low in the applied research areas of chemical, agricultural, and environmental engineering but higher in the domains of energy and fuels, forestry, and green and sustainable science and technology. The UK and the Netherlands have the highest rates of OA papers, followed by Spain and Germany. The funding rate of OA papers is higher than that of non-OA papers. This is the first bibliometric study on open access to articles on bioeconomy. The results can be useful for the further development of OA editorial and funding criteria in the field of bioeconomy.
CASE REPORT | doi:10.20944/preprints201905.0166.v1
Subject: Social Sciences, Library And Information Sciences Keywords: Open Annotation; Monographs; Open Access; Higher Education; Open Peer Review
Online: 14 May 2019 (10:03:41 CEST)
The digital format opens up new possibilities for interaction with monographic publications. In particular, annotation tools make it possible to broaden the discussion on the content of a book, to suggest new ideas, to report errors or inaccuracies, and to conduct open peer reviews. However, this requires support from users who might not yet be familiar with the annotation of digital documents. This paper gives concrete examples and recommendations for exploiting the potential of annotation in academic research and teaching. After presenting the Hypothesis annotation tool, the article focuses on its use in the context of HIRMEOS (High Integration of Research Monographs in the European Open Science Infrastructure), a project aimed at improving the Open Access digital monograph. The general outline and aims of a post-peer-review experiment with the annotation tool, as well as its use in didactic activities concerning monographic publications, are presented and proposed as potential best practices for similar annotation activities.
ARTICLE | doi:10.20944/preprints202307.1608.v1
Subject: Engineering, Architecture, Building And Construction Keywords: Landscape Digital Twin; GIS; natural hazards; road; floods; landslide; risk analysis; open-source data; risk maps; socio-economic approach
Online: 25 July 2023 (03:31:45 CEST)
In recent decades, climate and environmental changes have highlighted the fragility and vulnerability of the landscape, especially in mountain areas where the effects are most severe. The strategy for hazard mitigation must deploy synergistic actions based on a thorough knowledge of the territory and its phenomena to enable integrated and effective land planning and management through suitable communication tools. This study proposes the methodological setup of a Landscape Digital Twin to establish a multi-disciplinary and multi-scalar overview of hazards according to a matrix framework implementable over time and space. The original contribution of the research is a holistic vision that meaningfully combines qualitative and quantitative approaches to generate risk maps by integrating various indicators within a multi-hazard framework from a socio-economic perspective. This contribution presents a road network risk analysis exploiting flooding and landslide scenarios. The critical road segments or nodes that are most vulnerable, or that most impact network performance and accessibility, can be identified with minimal preprocessing from credible open-source data. The method's applicability is tested in a case study in the Piedmont Region, northern Italy. The proposed integration will help generate comprehensive risk analysis maps that effectively portray the interconnectedness among natural hazards, infrastructure, and socio-economic factors, fostering more resilient decision-making processes.
ARTICLE | doi:10.20944/preprints202301.0048.v1
Subject: Engineering, Mechanical Engineering Keywords: threaded connection; life cycle; information system; PLM; multi-agent; actor model; isomorphism; system-wide regularities; systematic approach; open-source
Online: 4 January 2023 (02:54:27 CET)
The PLM concept implies the use of heterogeneous information resources at different stages of the product life cycle, whose joint operation makes it possible to solve problems of product quality and cost effectively. According to the principle of isomorphism of the regularities of complex systems, an effective PLM system must exhibit these regularities. Unfortunately, this principle is rarely fundamental in the design of PLM systems. The purpose of this work is to show, using a simple example, the principles of development, operation, and use of an educational multi-agent PLM system whose main purpose is the study of these regularities in the life cycle of a special threaded connection. The multi-agent approach to developing a PLM system provides the necessary prerequisites for the emergence of system-wide regularities. The parallel work of agents is implemented using the actor model and the Ray Python package. Agents for logical inference over knowledge-base facts, CAD/FEA/CAM/SCADA agents, agents for optimization by various methods, and other agents have been developed. Open-source software was used to develop the system. Each agent has relatively simple behavior, implemented by its rule function, and can interact with other agents. The system can work interactively with the user or in automatic mode according to a simple algorithm: the rule functions of all agents are executed until at least one of them returns a value other than None. Examples of system operation are given, demonstrating system-wide regularities such as emergence, historicity, and self-organization.
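The automatic-mode stopping rule quoted above can be sketched literally in a few lines of plain Python. The actual system runs agents in parallel as Ray actors; this sequential stand-in, with made-up agent names and rule functions, only illustrates the "cycle until some rule function returns non-None" scheduling idea.

```python
# Illustrative sketch (sequential; the real system uses Ray actors) of the
# automatic mode described above: all agents' rule functions are executed,
# cycling until at least one returns a value other than None.

class Agent:
    """Minimal agent: a name plus a rule function over shared state."""
    def __init__(self, name, rule):
        self.name = name
        self.rule = rule

def run_automatic(agents, state, max_cycles=1000):
    """Cycle through every agent's rule function; stop and report as soon
    as one of them returns a non-None value."""
    for _ in range(max_cycles):
        for agent in agents:
            result = agent.rule(state)
            if result is not None:
                return agent.name, result
    return None  # no rule fired within the cycle budget
```

In the described system, rule functions would call CAD/FEA/CAM tools or the knowledge base and mutate shared state, so successive cycles make progress until some agent produces a result.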
DATA DESCRIPTOR | doi:10.20944/preprints202209.0323.v1
Subject: Computer Science And Mathematics, Information Systems Keywords: COVID-19; Open-source dataset; Drug Repurposing; Database system; Web application development; software development; Drug fingerprints; Bulk upload
Online: 21 September 2022 (10:14:11 CEST)
Although various vaccines are now commercially available, they have not completely stopped the spread of COVID-19 infection. An excellent strategy to quickly obtain safe, effective, and affordable COVID-19 treatments is to repurpose drugs already approved for other diseases as adjuvants alongside the ongoing vaccine regime. Developing an accurate and standardized drug repurposing dataset requires considerable resources and expertise, owing to the commercial availability of an extensive array of drugs that could potentially be used against SARS-CoV-2 infection. To address this bottleneck, we created the CoviRx platform. CoviRx is a user-friendly interface that provides access to manually curated COVID-19 drug repurposing data. Through CoviRx, the curated data has been made open-source to help advance drug repurposing research. CoviRx also encourages users to submit their findings after thoroughly validating the data, which is then merged under uniformity- and integrity-preserving constraints. This article discusses the various features of CoviRx and its design principles. CoviRx has been designed so that its functionality is independent of the data it displays. Thus, in the future, this platform can be extended to include any other disease X beyond COVID-19. CoviRx can be accessed at www.covirx.org.
ARTICLE | doi:10.20944/preprints202011.0282.v1
Subject: Business, Economics And Management, Accounting And Taxation Keywords: Open Research Data; Open Peer Review; medicine; health sciences; Open Science; Open Access; health scientists; FAIR
Online: 9 November 2020 (16:02:24 CET)
During recent years, significant initiatives have been launched for the dissemination of Open Access as part of the Open Science movement. Nevertheless, the other major pillars of Open Science, such as Open Research Data (ORD) and Open Peer Review (OPR), are still at an early stage of development among the communities of researchers and stakeholders. The present study sought to unveil the perceptions of a medical and health sciences community about these issues. Through the investigation of researchers' attitudes, valuable conclusions can be drawn, especially in the field of medicine and health sciences, where an explosive growth of scientific publishing exists. A quantitative survey was conducted based on a structured questionnaire, with a 51.8% response rate (215 responses out of 415 electronic invitations). The participants in the survey agreed with the ORD principles. However, they were unfamiliar with basic terms like FAIR (Findable, Accessible, Interoperable and Reusable) and appeared hesitant to permit the exploitation of their data. Regarding OPR, participants expressed their agreement, implying their interest in a trustworthy evaluation system. Conclusively, researchers urgently need proper training in both ORD principles and OPR processes, which, combined with a reformed evaluation system, will enable them to take full advantage of the opportunities arising from the new scholarly publishing and communication landscape.
ARTICLE | doi:10.20944/preprints202302.0310.v1
Subject: Computer Science And Mathematics, Mathematics Keywords: Unmanned Aerial Vehicle; Facility Location Problem; Mission Planning; Restricted Airspace; UAS Geographical Zone; Water Search & Rescue; Open Source Georeferenced Data
Online: 17 February 2023 (11:46:40 CET)
With Unmanned Aerial Vehicles (UAVs), a swift response to urgent needs like search & rescue missions or medical deliveries can be realized. Simultaneously, legislators are establishing so-called geographical zones, which restrict UAV operations to mitigate the air and ground risk to third parties. These geographical zones serve a particular safety interest, but they may also hinder the efficient use of UAVs on time-critical missions with range-limiting battery capacity. In this study, we address a facility location problem for up to two UAV hangars with a robust optimization model considering demand hotspots, geographical zones as restricted areas, a standard mission to satisfy battery capacity constraints, and the impact of wind scenarios. To this end, water rescue missions are used as an example, for which positive and negative location factors for UAV hangars and areas of increased drowning risk as demand points are derived from open-source georeferenced data. Optimal UAV mission trajectories are computed with an A* algorithm considering five different restriction scenarios. As this pathfinding is very time-consuming, binary occupancy grids and image processing algorithms accelerate the computation by identifying either entirely inaccessible or restriction-free connections beforehand. For the optimal UAV hangar locations, we maximize accessibility while minimizing the service time to the hotspots, reducing the average service time from 570.4 s across all facility candidates to 351.1 s for one and 287.2 s for two optimal UAV hangar locations.
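The combination of A* pathfinding and binary occupancy grids mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation; the grid, the 4-connected moves, and the Manhattan heuristic are simplifying assumptions, with occupied cells standing in for restricted geographical zones.

```python
# Minimal A* on a binary occupancy grid: 0 = free airspace, 1 = restricted
# zone. Returns the shortest 4-connected path length, or None when the
# connection is entirely inaccessible (one of the cases the paper's
# preprocessing detects up front to skip costly pathfinding).
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    def h(c):  # Manhattan distance: admissible for 4-connected moves
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), 0, start)]
    best = {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:
            return g
        if g > best.get(cur, float("inf")):
            continue  # stale queue entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
wall = [[0, 1, 0],
        [0, 1, 0],
        [0, 1, 0]]
```

On `grid`, a flight from the top-left to the bottom-left corner must detour around the restricted row; on `wall`, the two sides are disconnected and A* reports the connection as inaccessible.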
ARTICLE | doi:10.20944/preprints201808.0233.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: 3-D printing; circuit milling; circuit design; distributed manufacturing; electronics; electronics prototyping; free and open-source hardware; P2P; P2P manufacturing
Online: 13 August 2018 (16:42:54 CEST)
Barriers to inventing electronic devices include the challenge of iterating electronic designs, due to long lead times for professional circuit board milling or the high cost of commercial milling machines. To overcome these barriers, this study provides open-source (OS) designs for a low-cost circuit milling machine. First, design modifications for the mechanical and electrical sub-systems of the OS D3D Robotics prototyping system are provided. Next, Copper Carve, an OS custom graphical user interface, is developed to enable circuit board milling by implementing backlash and substrate distortion compensation. The performance of the OS D3D circuit mill is then quantified and validated for positional accuracy, cut quality, feature accuracy, and distortion compensation. Finally, the return on investment is calculated for inventors using it. The results show that, by properly compensating for motion inaccuracies with Copper Carve, the machine achieves a motion resolution of 10 microns, which is more than adequate for most circuit designs. The mill is at least five times less expensive than all commercial alternatives, and the material costs of the D3D mill are repaid after fabricating 20-43 boards. The results show that the OS circuit mill is of high enough quality to enable rapid invention and distributed manufacturing of complex products containing custom electronics.
ARTICLE | doi:10.20944/preprints202008.0570.v1
Subject: Biology And Life Sciences, Agricultural Science And Agronomy Keywords: apothecium; ascospores; sclerotium formation; carbon source; fungicides; resistance source
Online: 26 August 2020 (09:01:24 CEST)
A new disease causing tan to light brown blighted stems and pods occurred in 2.6% of pea (Pisum sativum L.) plants, with an average disease severity rating of 3.7, in Chapainawabganj district, Bangladesh. A fungus with white appressed mycelia and large sclerotia was consistently isolated from symptomatic tissues. The fungus formed funnel-shaped apothecia with sac-like asci and endogenously formed ascospores. Healthy pea plants inoculated with the fungus produced typical white mold symptoms. The internal transcribed spacer sequences of the fungus were 100% identical to those recovered from an epitype of Sclerotinia sclerotiorum, identifying the fungus as the causative agent of white mold. Mycelial growth and sclerotial development of S. sclerotiorum were favored at 20°C and pH 5.0. Glucose was the best carbon source to support hyphal growth and sclerotia formation. Bavistin and Amistar Top completely inhibited the radial growth of the fungus at the lowest concentration. In planta, foliar application of Amistar Top showed considerable potential to control the disease at a 1.0% concentration until 7 days after spraying, while Bavistin prevented infection significantly until 15 days after spraying. A large majority (70.93%) of genotypes, including the released pea cultivars tested, were susceptible, while six genotypes (6.98%) appeared resistant to the disease. These results could be important for management strategies aiming to control the incidence of S. sclerotiorum and eliminate yield loss in pea.
ARTICLE | doi:10.20944/preprints201805.0284.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: dual two-level voltage source inverter; common-mode voltage; discontinuous space vector modulation schemes; centralizing pulse width modulation; open-end load
Online: 22 May 2018 (05:11:06 CEST)
Popular motor drive systems with a single two-level voltage source inverter (VSI) have one main problem: the occurrence of common-mode voltage (CMV), which gives rise to electromagnetic interference, shaft voltage, bearing currents, and leakage current. These effects cause high stress, increased temperature, and early mechanical failure in the machine. To overcome this problem, the dual two-level VSI feeding open-end three-phase AC loads is now available to eliminate the CMV at the AC/induction motor load, using the 120-degree modulation technique to control each inverter. In this paper, discontinuous space vector modulation (DSVM) schemes are proposed and applied to the dual two-level VSI fed open-end load. The approach is based on the 120-degree modulation technique, using only 12 active voltage vectors and 10 zero voltage vectors out of the total 64 voltage vectors, along with different five-segment switching sequence designs and a centralizing pulse width modulation technique, in order not only to cancel the CMV in the AC load but also to reduce the switching number and switching loss of the conversion system. The performances of the various DSVM schemes are compared in this paper, including the number of switchings, the step and peak value of the CMV in each inverter, and the quality of the output waveform. The verification and comparison are carried out by simulation using Matlab/Simulink software.
ARTICLE | doi:10.20944/preprints201801.0139.v1
Subject: Environmental And Earth Sciences, Remote Sensing Keywords: data logger; environmental monitoring network; open source; submersible; under-water; critical zone observatory; cave; Yucatan Peninsula, vadose hydrology; subterranean karst estuary
Online: 16 January 2018 (10:40:15 CET)
A low-cost data logging platform is presented for environmental monitoring projects that provides long-term operation in remote or submerged environments. Three premade “breakout boards” from the open-source Arduino ecosystem are assembled into the core of the platform. The components are selected based on low cost and ready availability, making the loggers easy to build and modify without specialized tools or a significant background in electronics. Power optimization techniques are explained. The platform has proven to be highly reliable and capable of operating for more than a year on standard AA batteries. The flexibility of the system is illustrated with two ongoing field studies recording drip rates in a cave and water flow in a flooded cave system.
ARTICLE | doi:10.20944/preprints202302.0268.v1
Subject: Social Sciences, Library And Information Sciences Keywords: open science; open access; open repositories; green road; self-archiving; contributor; research assessment; monitoring
Online: 16 February 2023 (04:12:14 CET)
(1) Background: The 2002 Budapest Open Access Initiative recommended self-archiving of scientific articles in open repositories as the “green road” to open access. Twenty years later, only a portion of researchers deposit their publications in open repositories; moreover, part of the repositories’ content is not based on self-archived deposits but on mediated nonfaculty contributions. The purpose of the paper is to provide more empirical evidence on this situation and to assess the impact on the future of the green road. (2) Methods: We analyzed the contributions to the French national HAL repository from more than 1,000 laboratories affiliated with the ten most important French research universities, with a focus on 2020, representing 14,023 contributor accounts and 166,939 deposits. (3) Results: We identified seven different types of contributor accounts, including deposits from nonfaculty staff and import flows from other platforms. Mediated nonfaculty contribution accounts for at least 48% of the deposits. We also identified differences among institutions and disciplines. (4) Conclusions: Our empirical results reveal a transformation of open repositories from self-archiving and direct scientific communication towards research information management. Repositories like HAL are somewhere in the middle of this process. The paper describes data quality as the main issue and major challenge of this transformation.
ARTICLE | doi:10.20944/preprints202306.1127.v1
Subject: Physical Sciences, Applied Physics Keywords: voltage source; current source; super-condenser; renewable energy; electric power
Online: 15 June 2023 (11:05:16 CEST)
In this study, a new methodology based on the circuit approach is employed to obtain renewable electric energy. Although a voltage source and a current source generate voltage and current, respectively, they work only when they receive electric power, which is the product of voltage and current. However, this paper proposes a circuit with a voltage source, a current source, and a super-condenser connected in series, in which the voltage source provides voltage for the current source and the current source provides current for the voltage source. As a result, electric power is generated at both the voltage and current sources, and the energy (i.e., the electric power) from the current source is converted into charging energy in the super-condenser. Herein, we describe the method for charging the super-condenser using a voltage source and a current source. To confirm this concept, we conducted experiments, and the energy was successfully produced with high reproducibility. Finally, a brief mechanism for the energy conversion, a schematic method for reducing the capacitances of stored batteries, and the significance of this study are described.
ARTICLE | doi:10.20944/preprints202010.0132.v1
Subject: Engineering, Energy And Fuel Technology Keywords: sector coupling; gas grid; district heating grid; grid simulation; network analysis; grid operation; open source; multi-energy grids; energy supply; infrastructure design
Online: 6 October 2020 (14:48:08 CEST)
The increasing complexity of the design and operation evaluation process of multi-energy grids (MEGs) requires tools for the coupled simulation of power, gas, and district heating grids. Most tools analyzed in this paper either do not allow coupling of infrastructures, simplify the grid model, or are not publicly available. We introduce the open-source piping grid simulation tool pandapipes, which, in interaction with pandapower, fulfills three crucial criteria: a clear data structure, adaptable MEG model setup, and performance. In an introduction to pandapipes, we illustrate how it fulfills these criteria through its internal structure and demonstrate how it performs in comparison to STANET®. We then present two case studies already performed with pandapipes. The first case study demonstrates a peak shaving strategy as an interaction between a local electricity grid and a district heating grid in a small settlement. The second case study analyzes the potential of a power-to-gas device to serve as flexibility in a power grid under consideration of gas grid constraints. Both show the importance of a clear database, a simple simulation setup, and good performance for setting up different large and complex studies on grid infrastructure design and operation.
ARTICLE | doi:10.20944/preprints202308.0102.v1
Subject: Computer Science And Mathematics, Data Structures, Algorithms And Complexity Keywords: Bayesian Multi-Change Point Analysis; Linear Trend Segment Fit; Computationally Efficient Open-Source Python Implementation; S&P500; Mean Market Correlation; Economic Crises; Econophysics
Online: 2 August 2023 (02:40:56 CEST)
Identifying the macroeconomic events responsible for dramatic changes in the economy is of particular relevance to understanding overall economic dynamics. We introduce an efficient open-source Python implementation of a Bayesian multi-trend change point analysis, which overcomes significant memory and computing time limitations to extract crisis information from a correlation metric. We focus on the recently investigated S&P500 mean market correlation over a period of roughly 20 years that includes the dot-com bubble, the global financial crisis, and the Euro crisis. The analysis is performed two-fold: first, in retrospect on the whole dataset, and second, in an on-line adaptive manner on pre-crisis segments. The on-line sensitivity horizon is roughly determined to be 80 up to 100 trading days after a crisis onset. A detailed comparison to global economic events supports the interpretation of the mean market correlation as an informative macroeconomic measure, given the rather good agreement between change point distributions and major crisis events. Furthermore, the results hint at the importance of the U.S. housing bubble as a trigger of the global financial crisis, provide new evidence for the general reasoning of locally (meta)stable economic states, and could serve as a comparative impact rating of specific economic events.
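The underlying idea of trend-segment change point detection can be shown with a deliberately simplified, non-Bayesian sketch: try every split of a series into two linear segments and keep the split minimizing the total squared residual. The paper's implementation instead assigns posterior probabilities to change points; the series below is synthetic, not S&P500 data.

```python
# Simplified (exhaustive least-squares, not Bayesian) change point search
# for a series modeled as two linear trend segments.
def linfit_sse(xs, ys):
    """Least-squares line fit; returns the sum of squared residuals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx if sxx else 0.0
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def best_change_point(ys):
    """Index splitting ys into two linear segments with minimal total SSE."""
    xs = list(range(len(ys)))
    best_k, best_err = None, float("inf")
    for k in range(2, len(ys) - 2):  # each segment needs >= 2 points
        err = linfit_sse(xs[:k], ys[:k]) + linfit_sse(xs[k:], ys[k:])
        if err < best_err:
            best_k, best_err = k, err
    return best_k

# Synthetic correlation-like series: flat regime, then a rising trend.
series = [0.2] * 10 + [0.2 + 0.05 * i for i in range(1, 11)]
```

A Bayesian variant would replace the single arg-min with a posterior distribution over k, which is what allows the on-line, adaptive use described in the abstract.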
Subject: Social Sciences, Psychology Keywords: multimodal experiment; multisensory experiment; automatic device integration; open-source; PsychoPy; Unity; Virtual Reality (VR); Lab Streaming Layer; LabRecorder; LabRecorderCLI; Windows command line (cmd.exe)
Online: 12 October 2020 (07:06:28 CEST)
The human mind is multimodal. Yet most behavioral studies rely on century-old measures of behavior—task accuracy and latency (response time). Multimodal and multisensory analysis of human behavior creates a better understanding of how the mind works. The problem is that designing and implementing these experiments is technically complex and costly. This paper introduces versatile and economical means of developing multimodal-multisensory human experiments. We provide an experimental design framework that automatically integrates and synchronizes measures including electroencephalogram (EEG), galvanic skin response (GSR), eye-tracking, virtual reality (VR), body movement, mouse/cursor motion, and response time. Unlike proprietary systems (e.g., iMotions), our system is free and open-source; it integrates PsychoPy, Unity, and Lab Streaming Layer (LSL). The system embeds LSL inside PsychoPy/Unity for the synchronization of multiple sensory signals—gaze motion, EEG, GSR, mouse/cursor movement, and body motion—with low-cost consumer-grade devices in a simple behavioral task designed in PsychoPy and a virtual reality environment designed in Unity. This tutorial shows a step-by-step process by which a complex multimodal-multisensory experiment can be designed and implemented in a few hours. When conducting the experiment, all of the data synchronization and recording of the data to disk will be done automatically.
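The core service that a synchronization layer like LSL provides can be illustrated in pure Python: samples from independent streams carry timestamps on a shared clock, so streams sampled at different rates can be aligned after recording. This is a conceptual sketch only, not the pylsl API; the stream names, rates, and `max_lag` tolerance are illustrative assumptions.

```python
# Conceptual sketch of post-hoc stream alignment via shared-clock timestamps
# (what LSL's recordings make possible); not the pylsl API.
import bisect

def align(reference, other, max_lag=0.05):
    """For each (timestamp, value) in `reference`, pair it with the
    nearest-in-time sample from `other`; drop pairs further apart
    than max_lag seconds."""
    times = [t for t, _ in other]
    pairs = []
    for t, v in reference:
        i = bisect.bisect_left(times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        j = min(candidates, key=lambda j: abs(times[j] - t))
        if abs(times[j] - t) <= max_lag:
            pairs.append((v, other[j][1]))
    return pairs

# Hypothetical streams: EEG at 100 Hz and GSR at 10 Hz on the same clock.
eeg = [(i / 100, f"eeg{i}") for i in range(30)]
gsr = [(i / 10, f"gsr{i}") for i in range(3)]
pairs = align(gsr, eeg)
```

In practice LSL also handles clock offset correction between devices; the sketch assumes the timestamps already share one clock.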
EDITORIAL | doi:10.20944/preprints201605.0001.v1
Online: 3 May 2016 (14:43:02 CEST)
Preprints is a multidisciplinary preprint platform that makes scientific manuscripts from all fields of research immediately available at www.preprints.org. Preprints is a free (not-for-profit) open access service supported by MDPI in Basel, Switzerland.
ARTICLE | doi:10.20944/preprints201909.0122.v1
Subject: Medicine And Pharmacology, Other Keywords: open health; simple rules; ethics; reproducibility; research significance; open science
Online: 11 September 2019 (13:27:26 CEST)
We are witnessing a dramatic transformation in the way we do science. In recent years, significant flaws with existing scientific methods have come to light, including lack of transparency, insufficient involvement of stakeholders, disconnection from the public, and limited reproducibility of research findings. These concerns have sparked a global movement to revolutionize scientific practice and the emergence of Open Science. This new approach to science extends principles of openness to the entire research cycle, from hypothesis generation to data collection, analysis, replication, and translation from research to practice. Open Science seeks to remove all barriers to conducting high quality, rigorous, and impactful scientific research by ensuring that the data, methods, and opportunities for collaboration are open to all. Emerging digital technologies and "big data" (see "Ten simple rules for responsible big data research") have further accelerated the Open Science movement by affording new approaches to data sharing, connecting researcher networks, and facilitating the dissemination of research findings. Open scientific practices are also having a profound impact on the health sciences and medical research, and specifically how we conduct clinical research with human participants. Human health research necessitates careful considerations for practicing science in an ethical manner. There is also a particular urgency to human health research since the goal is to help people, so doing good science takes on a different meaning than simply doing science well. It also implores the scientist to reassess the conventional view of human health research as a pursuit conducted by scientists on human subjects, and lays a greater emphasis on inclusive and ethical practices to ensure that the research takes into account the interests of those who would be most impacted by the research. 
Openness in the context of human health research also raises greater concerns about privacy and security and presents more opportunities for people, including participants of research studies, to contribute in every capacity. At the core of open health research, scientific discoveries are not only the product of collaboration across disciplines, but must also be owned by the community that is inclusive of researchers, health workers, and patients and their families. To guide successful open health research practices, it is essential to carefully consider and delineate its guiding principles. This editorial is aimed at individuals participating in health science in any capacity, including but not limited to people living with medical conditions, health professionals, study participants, and researchers spanning all types of disciplines. We present ten simple rules that, while not comprehensive, offer guidance for conducting health research with human participants in an open, ethical, and rigorous manner. These rules can be difficult, resource-intensive, and can conflict with one another. They are aspirational and are intended to accelerate and improve the quality of human health research. Work that fails to follow these rules is not necessarily an indication of poor quality research, especially if the reasons for breaking the rules are considered and articulated (see rule 6: document everything). While most of the responsibility of following these rules falls on researchers, anyone involved in human health research in any capacity can apply them.
ARTICLE | doi:10.20944/preprints201905.0098.v2
Subject: Computer Science And Mathematics, Analysis Keywords: open review; open science; zero-blind review; peer review; methodology
Online: 16 August 2019 (05:27:55 CEST)
We present a discussion and analysis regarding the benefits and limitations of open and non-anonymized peer review based on literature results and responses to a survey on the reviewing process of alt.chi, a more or less open-review track within the CHI conference, the predominant conference in the field of human-computer interaction (HCI). This track currently is the only implementation of an open-peer-review process in the field of HCI while, with the recent increase in interest in open science practices, open review is now being considered and used in other fields. We collected 30 responses from alt.chi authors and reviewers and found that, while the benefits are quite clear and the system is generally well liked by alt.chi participants, they are reluctant to see it used in other venues. This concurs with a number of recent studies that suggest a divergence between support for a more open review process and its practical implementation. The data and scripts are available on https://osf.io/vuw7h/, and the figures and follow-up work on http://tiny.cc/OpenReviews.
ARTICLE | doi:10.20944/preprints202304.0272.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: spiral source; underwater acoustics; bearing angle estimate; spiral source calibration; underwater localization
Online: 12 April 2023 (11:49:37 CEST)
Underwater acoustic spiral sources are able to generate spiral acoustic fields in which the phase depends on the bearing angle. This makes it possible to estimate the bearing angle relative to a receiver by subtracting the phases of a spiral and a circular wavefront, and finds application in bearing angle estimation with a single hydrophone, e.g., for the localization of unmanned underwater vehicles. This paper presents a new prototype of a spiral acoustic source, able to generate the required wavefronts and composed of four monopoles in the same piezoceramic cylinder. The paper reports the prototyping and the acoustic tests performed in a water tank, where the spiral source was characterized in terms of Transmitting Voltage Response, phase, and horizontal and vertical directivity patterns. A receiving calibration method for the spiral source is proposed; it shows a maximum angle error of 1 degree when the calibration and the operation are carried out under the same conditions, and a mean angle error of up to 6 degrees for frequencies above 25 kHz when these conditions are not fulfilled.
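The bearing-angle principle stated above reduces to a phase subtraction, which can be sketched directly. The specific phase values below are hypothetical; the sketch only encodes the stated relation that the spiral field's phase carries the bearing on top of the common propagation phase.

```python
# Sketch of the stated principle: in a spiral acoustic field the phase
# varies with bearing, so subtracting the circular-wavefront phase from
# the spiral-wavefront phase (modulo 2*pi) recovers the bearing angle.
import math

def bearing_from_phases(phase_spiral, phase_circular):
    """Bearing angle in radians, wrapped to [0, 2*pi)."""
    return (phase_spiral - phase_circular) % (2 * math.pi)

# Hypothetical receiver at bearing 1.0 rad; 5.3 rad is the shared
# propagation phase that cancels in the subtraction.
theta = bearing_from_phases(5.3 + 1.0, 5.3)
```

A real system must additionally unwrap measured phases and handle noise, which is where the proposed calibration's 1-degree error figure comes in.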
ARTICLE | doi:10.20944/preprints202010.0390.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: High step-up converter; Impedance source converter; Z-source converter; Cascaded technique
Online: 19 October 2020 (14:54:47 CEST)
To improve the voltage gain of step-up converters, the cascaded technique is considered as a possible solution in this paper. By cascading two Z-source networks in a conventional boost converter, the converter takes advantage of both impedance-source and cascaded converters. Moreover, with some modifications, the proposed converter provides high voltage gain while the voltage stress on the switch and diodes remains low. The low input current ripple of the converter also makes it particularly appropriate for photovoltaic applications, as it extends the lifetime of PV panels. After analyzing the operating principles of the proposed converter, simulation and experimental results from a 100 W prototype are presented to verify its performance.
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: linguistic knowledge; source language; neural machine translation (NMT); low-resource; multi-source NMT
Online: 2 March 2020 (15:28:34 CET)
Exploiting the linguistic knowledge of the source language for neural machine translation (NMT) has recently achieved impressive performance on many large-scale language pairs. However, since the Turkish→English machine translation task is low-resource and the source-side Turkish is morphologically rich, there are limited bilingual corpora and linguistic resources available to further improve NMT performance. Focusing on these issues, we propose a multi-source NMT approach that models word features in parallel with external linguistic features, using two separate encoders to explicitly incorporate linguistic knowledge into the NMT model. We extend the word embedding layer of the knowledge-based encoder to accommodate each word’s linguistic annotations in context. Moreover, we share all parameters across encoders to enhance the representation ability of the NMT model on the source language. Experimental results show that our proposed approach achieves substantial improvements of up to 2.4 and 1.1 BLEU scores on the Turkish→English and English→Turkish machine translation tasks, respectively, which points to a promising way to utilize source-side linguistic knowledge for low-resource NMT.
ARTICLE | doi:10.20944/preprints202202.0088.v1
Subject: Medicine And Pharmacology, Orthopedics And Sports Medicine Keywords: Public open spaces; Open streets; Built environment; Leisure-time physical activity; Epidemiology
Online: 7 February 2022 (13:02:09 CET)
Leisure-time physical activity (LTPA) is associated with access to and use of public open spaces. The “President João Goulart Elevated Avenue”, currently known as “Minhocão”, is a facility for leisure activities that is open to people during nights and weekends. The aim of this study was to examine whether the prevalence of LTPA among individuals living in the surroundings of the Minhocão differs according to proximity to, and use of, the facility. We conducted a cross-sectional study with cluster sampling of people aged ≥18 years who lived in households within 500 m and between 501 m and 1,500 m of the Minhocão. The survey was conducted between December 2017 and March 2019 using a self-administered electronic questionnaire. We conducted bivariate analyses and Poisson regression to examine possible differences in LTPA according to the proximity of residences to, and use of, the Minhocão. The analysis used post-stratification weights. A total of 12,030 telephone numbers were drawn (≤500 m: 6,942; >500 m to ≤1,500 m: 5,088). The final sample analyzed comprised 235 residents who returned the questionnaires. There was a higher prevalence of individuals engaging in at least 150 minutes per week of LTPA among users than non-users (Prevalence Ratio = 2.23, 95%CI 1.72-2.90). People who used the park had a higher prevalence of all types of LTPA than non-users. The results can serve to inform government decision-making on the future of the Minhocão.
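The prevalence ratio reported above is a simple quotient of two prevalences, which the following sketch makes explicit. The counts used are illustrative only, not the study's data.

```python
# Prevalence ratio (PR): prevalence of sufficient LTPA among facility
# users divided by the prevalence among non-users.
def prevalence_ratio(active_users, users, active_nonusers, nonusers):
    return (active_users / users) / (active_nonusers / nonusers)

# Hypothetical counts chosen to yield a PR near the reported 2.23:
pr = prevalence_ratio(60, 100, 27, 100)
```

The study's actual estimate additionally uses Poisson regression with post-stratification weights, which the raw quotient above does not capture.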
CONCEPT PAPER | doi:10.20944/preprints202009.0320.v1
Subject: Medicine And Pharmacology, Pulmonary And Respiratory Medicine Keywords: reciprocal personal/public protection; mask discriminating mouth and nose; mouth cover; mask; face covering; source control; source classification; Covid-19; active source; liquid droplets
Online: 14 September 2020 (11:45:27 CEST)
Reciprocal Personal/Public Protection (RPPP), featuring source control, is introduced, and a Mask Discriminating Mouth and Nose (MDMN), comprising a polymer-based mouth cover with an optional nose cover, is employed to serve this purpose. The new knowledge that the mouth is a primary, active, and dominant source of the virus is well established and forms the basis of MDMN. Source classification and related source control tools are discussed, and the mouth cover is recommended as the prioritized tool. Liquid droplets are identified as a hard issue for masks; liquid droplets, mask fitting, comfort, and facial recognition constitute real challenges for masks in addition to efficiency, and all of these are addressed by MDMN. MDMN and masks/face coverings are compared on four aspects: efficiency and efficacy, tolerance and comfort, cost and waste, and civil rights and public interest. The mouth cover is recommended to replace the face covering and to act as both a personal tool and a public utensil; a mouth cover with a nose cover can provide better protection than an N95 and similar masks. RPPP with MDMN could be an alternative to lockdown, a strategy parallel to vaccines, and a collective way of living during the pandemic era. MDMN, featuring high-efficiency protection, a high degree of comfort, easy wearing, tight fitting, easy facial recognition and communication, reusability, cost-effectiveness, environmental friendliness, and readier large-scale manufacturing, is a simple and sustainable solution, which is essential for ordinary people to keep wearing it properly for protection.
ARTICLE | doi:10.20944/preprints202001.0240.v1
Subject: Social Sciences, Library And Information Sciences Keywords: openness under neoliberalism; open-access licensing in capitalism; the politics of open-licensing
Online: 21 January 2020 (11:00:41 CET)
The terms 'open' and 'openness' are widely used across the current higher education environment, particularly in the areas of repository services and scholarly communications. Open-access licensing and open-source licensing are two prevalent manifestations of open culture within higher education research environments. As theoretical ideals, open-licensing models aim at openness and academic freedom. But operating as they do within the context of global neoliberalism, to what extent are these models constructed, sustained, and co-opted by neoliberalism? In this paper, we interrogate the use of open-licensing within scholarly communications and within the larger societal context of neoliberalism. Through a synthesis of various sources, we examine how open-access licensing models have been constrained by neoliberal or otherwise corporate agendas, how open access and open scholarship have been reframed within discourses of compliance, how open-source software models and software are co-opted by politico-economic forces, and how the language of 'openness' is widely misused in higher education and repository services circles to drive agendas that run counter to actually increasing openness. We finish by suggesting ways to resist this trend and to use open-licensing models to resist neoliberal agendas in open scholarship.
ARTICLE | doi:10.20944/preprints202310.1475.v1
Subject: Environmental And Earth Sciences, Water Science And Technology Keywords: hydrochemical characteristics; nitrate; source; valley city
Online: 24 October 2023 (08:42:28 CEST)
With the rapid development of cities in northwest China, there has been an increasing focus on groundwater pollution in plateau cities, specifically the common occurrence of nitrate pollution. The special climatic, geological, and geomorphological characteristics of plateau and river valley cities contribute to distinct groundwater chemical characteristics. Therefore, the formation and evolution process of groundwater nitrate contamination differs from that of plain cities. To explore these issues, we conducted an analysis of eight major ions in various groups of water samples from rivers, springs, and groundwater in Haidong. By utilizing factor analysis and correlation analysis, we were able to identify the characteristics and formation of groundwater chemistry and nitrate pollution in Haidong. Our findings reveal that the chemical characteristics of groundwater in Haidong are primarily controlled by rock weathering, mineral dissolution, and evaporation, leading to the formation of highly mineralized groundwater. Additionally, the excessive nitrate content in certain areas is a result of domestic sewage discharge and agricultural fertilizer use, exceeding Chinese drinking water health standards. Furthermore, for cities located in valleys, the geological structure significantly impacts the nitrate content of groundwater in different regions. Areas with obstructed groundwater flow tend to have higher nitrate levels, while regions with unobstructed groundwater experience lower nitrate concentrations. Notably, shallow groundwater is more vulnerable to nitrate pollution compared to deep groundwater. This study holds great significance in understanding the chemical characteristics of groundwater and the formation and evolution of nitrate pollution in highland river valley cities.
ARTICLE | doi:10.20944/preprints201807.0114.v1
Subject: Chemistry And Materials Science, Biomaterials Keywords: biomass; briquette; combustion; density; energy source
Online: 6 July 2018 (09:48:02 CEST)
This study investigated the physical and combustion properties of briquettes produced from agricultural wastes (groundnut shells and corn cobs), wood residues (Anogeissus leiocarpus), and admixtures of the particles at 15%, 20%, and 25% starch (binder) levels. A 6 × 3 factorial experiment in a Completely Randomized Design (CRD) was adopted for the study. The briquettes produced were analyzed for density, volatile matter, ash content, fixed carbon, and specific heat of combustion. The results revealed that density ranged from 0.44 g/cm3 to 0.53 g/cm3, with briquettes produced from groundnut shells having the significantly highest mean density (0.53 g/cm3). Mean volatile matter and ash content of the briquettes ranged from 24.35% to 34.95% and from 3.37% to 4.91%, respectively; A. leiocarpus and corn cob particles had the lowest and highest ash content, respectively. Fixed carbon and specific heat of combustion of the briquettes ranged from 61.68% to 68.97% and from 7362 kcal/kg to 8222 kcal/kg, respectively, with briquettes produced from A. leiocarpus particles having the highest specific heat of combustion. In general, briquettes produced from A. leiocarpus particles and from the admixture of groundnut shell and A. leiocarpus particles at the 25% starch level had better quality in terms of density and combustion properties and are thus suitable as an environmentally friendly alternative energy source.
ARTICLE | doi:10.20944/preprints202305.0878.v1
Subject: Engineering, Architecture, Building And Construction Keywords: Open-plan office; Intelligibility; Reverberation
Online: 12 May 2023 (04:18:45 CEST)
The acoustic conditions of open-plan office spaces influence the wellbeing and productivity perceived by users, and without an adequate evaluation of the workspace, acoustic design in open-plan offices can become a factor that impairs user performance. Such is the case in Mexico, where there are no adequate standards for evaluating specific acoustic conditions such as intelligibility. For this reason, this case study evaluates different types of measurement methods for intelligibility. The study was carried out at a university in northern Mexico. The sound measurements were based on the Mexican standard for noise analysis and on ISO 3382-3 for acoustic measurements in open-plan offices. The psychoacoustic parameters evaluated were reverberation and intelligibility, using objective methods based on the signal-to-noise ratio (S/N) and subjective methods based on the loss of consonants, analyzing the distance between the sound source and zones classified by building design characteristics. The results indicated at which points the effects on intelligibility increased, showing that reverberation remained stable in this office and that the subjective methods yielded a larger measured sound effect size than the objective methods. This finding establishes that the subjective methods conformed to lognormal behavior, which is applicable to other linguistic elements describing speech behavior.
ARTICLE | doi:10.20944/preprints202112.0099.v1
Subject: Environmental And Earth Sciences, Environmental Science Keywords: forest experience; open dump; waste
Online: 7 December 2021 (11:30:56 CET)
Forest recreation can be successfully used for psychological relaxation and as a remedy for common stress-related problems. A special form of forest recreation intended for restoration is forest bathing. These activities can be disrupted by factors that interrupt psychological relaxation, such as viewing buildings in the forest or using a computer in nature. One factor that might interrupt psychological relaxation is the occurrence of an open dump in the forest during an outdoor experience. To test the hypothesis that an open dump might decrease psychological relaxation, a case study was planned using a randomized, controlled crossover design. For this purpose, two groups of healthy young adults viewed a control forest and a forest with an open dump in reverse order and filled in psychological questionnaires after each stimulus; a pretest was also used. Participants wore opaque eye patches to block visual stimulation before the experimental stimulation, and the physical environment was monitored. The results were analyzed using two-way repeated-measures ANOVA. The measured negative psychological indicators increased significantly after viewing the forest with waste: five indicators of the Profile of Mood States increased (Tension-Anxiety, Depression-Dejection, Anger-Hostility, Fatigue, and Confusion), and the negative affect score of the Positive and Negative Affect Schedule increased in comparison to the control and pretest. The measured positive indicators decreased significantly after viewing the forest with waste: the positive affect score of the Positive and Negative Affect Schedule decreased, and the Restorative Outcome Scale and Subjective Vitality scores decreased in comparison to the control and pretest. The occurrence of an open dump in the forest might therefore interrupt a normal restorative experience in the forest by reducing psychological relaxation.
Nevertheless, the mechanism underlying these associations is not known and will be investigated further. In addition, a future study should examine the size of the impact of such open dumps on normal everyday experiences. Different mechanisms might be responsible for these reactions; however, the aim of this manuscript is only to measure the reaction itself. The psychological reasons behind these mechanisms can be assessed in further studies.
ARTICLE | doi:10.20944/preprints201901.0238.v1
Subject: Social Sciences, Library And Information Sciences Keywords: Open access; DOAJ; publisher size
Online: 23 January 2019 (10:15:00 CET)
In December 2012, DOAJ’s parent company, IS4OA, announced that it would introduce new criteria for inclusion in DOAJ, that DOAJ would collect vastly more information from journals as part of the accreditation process, and that journals already included would need to reapply in order to remain in the registry. My hypothesis was that the journals removed from DOAJ on 9 May 2016 would chiefly be journals from small publishers (mostly single-journal publishers) and that DOAJ journal metadata would reveal them to be journals with a lower level of publishing competence than those that remained in DOAJ. Indicators of publishing competence include the use of APCs, permanent article identifiers, journal licenses, article-level metadata deposited with DOAJ, archiving policies/solutions, and/or having a policy in SHERPA/RoMEO. The analysis shows these concerns to be correct.
CONCEPT PAPER | doi:10.20944/preprints201707.0095.v1
Subject: Chemistry And Materials Science, Other Keywords: prepublishing; preprint; chemistry; open science
Online: 31 July 2017 (16:05:17 CEST)
Chemistry is the last natural science discipline to embrace prepublishing, namely the publication of non-peer-reviewed scientific articles on the internet. After a brief look at the origins and purpose of prepublishing in science, we conduct a concrete analysis of the current situation, aiming to answer several questions. Why has the chemistry community been late in embracing prepublishing? Is this related to the same community's slow acceptance of open access publishing? Will prepublishing become a common habit for chemistry scholars as well?
ARTICLE | doi:10.20944/preprints202105.0543.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: Blind Source Separation (BSS); Minimum Mean Square Error (MMSE); convolutive mixture; source prior; generalized Gaussian distribution
Online: 24 May 2021 (08:50:37 CEST)
This paper proposes a novel, efficient multistage algorithm to extract source speech signals from a noisy convolutive mixture. The proposed approach comprises two stages: Blind Source Separation (BSS) and de-noising. In the BSS stage, a hybrid source prior model separates the source signals from the noisy reverberant mixture; the low- and high-energy components are modeled by generalized multivariate Gaussian and super-Gaussian distributions, respectively. In the de-noising stage, Minimum Mean Square Error (MMSE) estimation reduces the noise in the noisy convolutive mixture signal. Two configurations are investigated for performance gain. In the first, the speech signals are separated from the observed noisy convolutive mixture in the BSS stage, after which noise in the estimated source signals is suppressed by the de-noising module. In the second, noise in the received convolutive mixture is first reduced by MMSE filtering in the de-noising stage, after which the source signals are separated from the de-noised reverberant mixture in the BSS stage. We evaluate the performance of the proposed scheme in terms of signal-to-distortion ratio (SDR) against other well-known multistage BSS methods; the results show the superior performance of the proposed algorithm over state-of-the-art methods.
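The MMSE idea at the heart of the de-noising stage can be illustrated with a minimal sketch: for a zero-mean Gaussian signal in independent Gaussian noise with known variances, the MMSE (Wiener) estimate of the clean sample is a fixed gain applied to the noisy observation. The variances and samples below are illustrative assumptions; the paper's actual estimator operates on convolutive mixtures, not on independent scalar samples.

```python
# Scalar MMSE (Wiener) estimate: for zero-mean Gaussian signal s with
# variance signal_var and independent Gaussian noise n with variance
# noise_var, E[s | y] = signal_var / (signal_var + noise_var) * y.
def mmse_denoise(noisy, signal_var, noise_var):
    gain = signal_var / (signal_var + noise_var)  # Wiener gain in [0, 1]
    return [gain * y for y in noisy]

# Example with illustrative values: unit-variance signal, noise variance
# 0.25, so the gain is 0.8 and each noisy sample is attenuated by it.
clean_est = mmse_denoise([1.0, -0.5, 0.25], signal_var=1.0, noise_var=0.25)
```

Note that the gain shrinks toward zero as the noise variance grows, which is why MMSE filtering before BSS (the paper's second configuration) changes the statistics seen by the separation stage.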
Subject: Engineering, Electrical And Electronic Engineering Keywords: High Voltage Direct Current transmission (HVDC); Multi-terminal HVDC; Current Source Converter (CSC); Voltage Source Converter (VSC)
Online: 19 July 2020 (20:28:43 CEST)
There is a growing use of High Voltage Direct Current (HVDC) transmission globally due to the many advantages of Direct Current (DC) transmission systems over Alternating Current (AC) transmission, including transmission over long distances and higher transmission capacity and efficiency. Moreover, HVDC systems can be a great enabler in the transition to a low-carbon electrical power system, an important objective in today’s society. The objectives of this paper are to give a comprehensive overview of HVDC technology, its development and present status, and to discuss its salient features, limitations, and applications.
REVIEW | doi:10.20944/preprints202105.0033.v1
Subject: Biology And Life Sciences, Biochemistry And Molecular Biology Keywords: Campylobacter; Antimicrobial Resistance; Foodborne Pathogen; Animal Source
Online: 5 May 2021 (11:05:37 CEST)
Campylobacter is one of the major foodborne pathogens of concern given its growing trend of antimicrobial resistance. C. jejuni and C. coli are the major causative agents, with C. jejuni accounting for approximately 90% of cases worldwide. Infection is transmitted to humans through consumption of contaminated food and water. Campylobacteriosis caused by C. jejuni commonly presents with severe diarrhoea, abdominal pain, fever, headache, nausea, and vomiting, with some extreme cases resulting in Guillain–Barré syndrome (GBS) and acute flaccid paralysis. Symptoms are severe in children under 5 years of age, the elderly, and immunocompromised individuals. The infection is usually sporadic and self-limiting and thus does not require antibiotic treatment. Still, antimicrobial resistance in Campylobacter is a major concern because of the transmission of resistance from animal sources to humans. This review highlights the recent epidemiology, geographical impact, resistance mechanisms, and spread of Campylobacter spp., and the strategies to control the transmission of Campylobacter from veterinary sources and its antimicrobial resistance.
ARTICLE | doi:10.20944/preprints202102.0552.v1
Subject: Medicine And Pharmacology, Medicine And Pharmacology Keywords: modified SIR model; epidemic; death; source
Online: 24 February 2021 (15:59:09 CET)
The original purpose of this article was to modify the original SIR equations to allow for a direct source of infection (without which the original equations have no solutions unless one starts with an already infected population) and to see to what extent multiple outbreaks of an infectious disease could be obtained. In the course of developing the basic ideas, several other factors arose to take prominent roles. Perhaps one of the more salient is that the choice of when to change conditions, say from a lock-down to less stringent social behavior such as allowing partial or complete opening of businesses and schools, should be based on knowledge of the disease and its evolution. Such decisions are usually made by politicians who have less than full information concerning the consequences of their actions. Several examples are given to illustrate these points.
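A minimal sketch of such a modification, assuming the direct source is a constant external infection rate `eps` added to the standard mass-action term (the paper's actual formulation may differ; all parameter values here are illustrative):

```python
# SIR model extended with a direct (external) source of infection:
#   dS/dt = -beta*S*I - eps*S
#   dI/dt =  beta*S*I + eps*S - gamma*I
#   dR/dt =  gamma*I
# Integrated with a simple Euler scheme; populations are fractions.
def simulate_sir_with_source(beta, gamma, eps, s0, i0, r0, dt=0.01, steps=20000):
    s, i, r = s0, i0, r0
    history = [(s, i, r)]
    for _ in range(steps):
        new_inf = (beta * s * i + eps * s) * dt  # mass action plus external source
        rec = gamma * i * dt                     # recoveries
        s, i, r = s - new_inf, i + new_inf - rec, r + rec
        history.append((s, i, r))
    return history

# With eps > 0 an epidemic can start even from a fully susceptible
# population (i0 = 0), which the unmodified SIR equations cannot produce.
traj = simulate_sir_with_source(beta=0.3, gamma=0.1, eps=1e-4, s0=1.0, i0=0.0, r0=0.0)
```

Because each transfer moves mass between compartments, S + I + R is conserved by construction, which is a useful sanity check on any such modification.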
ARTICLE | doi:10.20944/preprints202007.0004.v1
Subject: Biology And Life Sciences, Insect Science Keywords: Blood Meal Source; Malaria; Honduras; Plasmodium spp
Online: 2 July 2020 (12:52:53 CEST)
Malaria remains a life-threatening disease in many tropical countries. Honduras has successfully reduced malaria transmission as different control methods have been applied, focusing mainly on indoor mosquitoes. The selective pressure exerted by the use of insecticides inside households could modify the feeding behavior of the mosquitoes, forcing them to search for available animal hosts outside the houses. These animal hosts in the peridomicile could consequently become an important factor in maintaining vector populations in endemic areas. Herein, we investigated the blood meal sources and Plasmodium spp. infection of anophelines collected outdoors in endemic areas of Honduras. Individual PCR reactions with species-specific primers were used to detect five feeding sources in 181 visibly engorged mosquitoes. In addition, a subset of these mosquitoes was chosen for pathogen analysis using a nested PCR approach. Most mosquitoes fed on multiple hosts (2 to 4), while 24.9% of mosquitoes fed on a single host, animal or human. Chicken and bovine were the most frequent blood meal sources (29.5% and 27.5%, respectively). The average human blood index (HBI) was 22.1%. None of the mosquitoes was found to be infected with Plasmodium spp. Our results show the opportunistic and zoophilic behavior of Anopheles mosquitoes in Honduras.
ARTICLE | doi:10.20944/preprints202003.0460.v1
Subject: Computer Science And Mathematics, Information Systems Keywords: Wikipedia; reference; source; reliability; popularity; Wikidata; DBpedia
Online: 31 March 2020 (22:18:51 CEST)
One of the most important factors affecting the quality of content in Wikipedia is the presence of credible sources. By following references, readers can verify facts or find more details about the described topic. A Wikipedia article can be edited independently in any of over 300 languages, even by anonymous users, so information about the same topic may be inconsistent. This also applies to the use of references in different language versions of a particular article: the same statement can have different sources. In this paper, we analyzed over 40 million articles from the 55 most developed language versions of Wikipedia to extract information about nearly 200 million references and to find the most popular and reliable sources. We present 10 models for assessing the popularity and reliability of sources, based on analysis of meta-information about the references in Wikipedia articles, page views, and the authors of the articles. Using DBpedia and Wikidata, we automatically identified the alignment of sources to specific domains. Additionally, we analyzed changes in popularity and reliability over time and identified growth leaders in each month considered. The results can be used to improve the quality of content in different language versions of Wikipedia.
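A toy sketch in the spirit of such popularity models (the paper's 10 models also weight by page views and article authors): rank reference domains by the number of distinct articles citing them. The sample data and the domain-counting rule below are illustrative assumptions, not the paper's actual method.

```python
from collections import Counter
from urllib.parse import urlparse

# Invented sample of (citing article, reference URL) pairs.
references = [
    ("Article_A", "https://www.nature.com/articles/x1"),
    ("Article_A", "https://doi.org/10.1000/x2"),
    ("Article_B", "https://www.nature.com/articles/x3"),
    ("Article_C", "https://www.nature.com/articles/x1"),
]

# Count each (article, domain) pair once, so a single article citing the
# same domain many times does not inflate that domain's popularity.
pairs = {(article, urlparse(url).netloc) for article, url in references}
popularity = Counter(domain for _, domain in pairs)
```

Here `popularity` ranks `www.nature.com` above `doi.org` because three distinct articles cite it; a reliability model would additionally weight each citing article by some quality signal.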
ARTICLE | doi:10.20944/preprints201802.0009.v1
Subject: Computer Science And Mathematics, Computer Networks And Communications Keywords: Cache Coding; Source Coding; Absorbing Markov Chain
Online: 1 February 2018 (16:45:19 CET)
Network coding approaches typically allow unrestricted recoding of coded packets at the relay nodes for increased performance. However, this can expose the system to pollution attacks that cannot be detected during transmission, until the receivers attempt to recover the data. To prevent these attacks while retaining the benefits of coding in mesh networks, Cache Coding was proposed. This protocol only allows recoding at a relay once the relay has received enough packets to decode an entire generation. At that point, the relay recodes and signs the recoded packets with its own private key, allowing the system to detect and minimize the effect of pollution attacks and making relays accountable for changes to the data. This paper analyzes the delay performance of Cache Coding to understand the security-performance trade-off of the scheme. We introduce an analytical model for the case of two relays in an erasure channel, relying on an Absorbing Markov Chain, together with an approximate model, to estimate performance in terms of the number of transmissions before successful decoding at the receiver. We confirm our analysis using simulation results and show that Cache Coding can overcome the security issues of unrestricted recoding with only a moderate decrease in system performance.
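The expected-transmission analysis can be illustrated with a minimal absorbing-Markov-chain sketch. This is not the paper's two-relay model: it is a simpler single-link chain over an erasure channel, where state k is the number of packets still missing from a generation, each transmission is lost with probability p, and state 0 (generation decoded) is absorbing.

```python
# Expected number of transmissions before absorption, from the standard
# expected-hitting-time equations t = 1 + Q t for the transient states:
#   t_k = 1 + p * t_k + (1 - p) * t_{k-1},   t_0 = 0 (absorbed).
# Solved by Gauss-Seidel fixed-point iteration.
def expected_transmissions(generation_size, loss_prob, tol=1e-12):
    t = [0.0] * (generation_size + 1)  # t[0] = 0: generation already decoded
    while True:
        delta = 0.0
        for k in range(1, generation_size + 1):
            new = 1.0 + loss_prob * t[k] + (1.0 - loss_prob) * t[k - 1]
            delta = max(delta, abs(new - t[k]))
            t[k] = new
        if delta < tol:
            return t[generation_size]

# For this simple chain the closed form is g / (1 - p), a useful check:
# 8 packets at 20% loss need 10 transmissions on average.
est = expected_transmissions(generation_size=8, loss_prob=0.2)
```

The paper's two-relay model has a richer state space, but the same machinery (transient states, transition matrix Q, expected time to absorption) yields the transmission-count estimate.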
ARTICLE | doi:10.20944/preprints202002.0165.v1
Subject: Social Sciences, Library And Information Sciences Keywords: open access; API; self-archiving; automation
Online: 13 February 2020 (10:34:30 CET)
This proposal describes the design and development of an interoperable application that supports green open access with long-term sustainability and an improved user experience for article deposit. Introduction: The lack of library resources and unfriendly repository user interfaces are two significant barriers that hinder green open access. Tasked with implementing the open access mandate, librarians at an American research university developed a comprehensive system called Easy Deposit 2 to automate the support workflow for green open access. Implementation: Easy Deposit 2 is a web application that can harvest new publications, reach out for manuscripts on behalf of the library, and facilitate self-archiving in the institutional repository (IR). It is developed and maintained by the library and integrated with the IR. Results and Discussion: The article deposit rate with Easy Deposit 2 is about 25%, a significant increase over the previous period. The system also serves as a local database of faculty publications with their open access status. A lesson learned is that the library cannot rely on a single commercial provider for publication data, owing to mismatched priorities. Conclusion: Recent IT developments provide new opportunities for innovation, such as Easy Deposit 2, in supporting open access. Academic librarians are vital in promoting "openness" in scholarly communication, such as transparency and diversity in the sharing of publication data.
ARTICLE | doi:10.20944/preprints201905.0029.v1
Subject: Computer Science And Mathematics, Information Systems Keywords: multilingual; open information extraction; parallel corpus
Online: 6 May 2019 (06:14:07 CEST)
The number of documents published on the Web in languages other than English grows every year, increasing the need to extract useful information from different languages and pointing out the importance of research on Open Information Extraction (OIE) techniques. Most OIE methods deal with features from a single language, and few approaches tackle multilingual aspects; in such approaches, multilingualism is treated only as an extraction method, which results in low precision due to the use of general rules. Multilingual methods have been applied to a vast range of problems in Natural Language Processing, achieving satisfactory results and demonstrating that knowledge acquired for one language can be transferred to other languages to improve the quality of the extracted facts. We argue that a multilingual approach can enhance OIE methods and is ideal for evaluating and comparing OIE systems and, as a consequence, the collected facts. In this work, we discuss how transferring knowledge between languages can increase the acquisition achieved by multilingual approaches. We provide a roadmap of the Multilingual Open IE area with respect to state-of-the-art studies. Additionally, we evaluate the transfer of knowledge to improve the quality of the facts extracted in each language, and we discuss the importance of a parallel corpus for evaluating and comparing multilingual systems.
ARTICLE | doi:10.20944/preprints201809.0017.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: Smart city; smart campus; open data
Online: 3 September 2018 (09:39:24 CEST)
Universities, like cities, have embraced novel technologies and data-based solutions to improve their campuses, with ‘smart’ becoming a welcomed concept. Campuses are in many ways small-scale cities: they increasingly seek to address similar challenges and to deliver improved experiences to their users. How can data be used in making this vision a reality? What can we learn from smart campuses that can be scaled up to smart cities? A short research study was conducted over a three-month period at a public university in the United Kingdom, employing stakeholder interviews and user surveys, aiming to gain insight into these questions. Based on the study, the authors suggest that making data publicly available could bring many benefits to different groups of stakeholders and campus users. These benefits come with risks and challenges, such as data privacy and protection and infrastructure hurdles. However, if these challenges can be overcome, open data could contribute significantly to improving campuses and user experiences, and potentially set an example for smart cities.
ARTICLE | doi:10.20944/preprints201806.0243.v1
Subject: Social Sciences, Library And Information Sciences Keywords: preprints; open science; data; academic publishing
Online: 15 June 2018 (05:19:00 CEST)
This paper explores whether preprints can better support open science by providing links to other early-stage research outputs, which potentially has benefits for the transparency and discoverability of research projects. By looking at preprint submission systems and online preprints, and by surveying those who run preprint servers, I examined to what extent this is currently possible. No preprint server provided a complete service, although many allowed several open science elements to be linked from the abstract page. I also looked at variation based on the subject, age, and size of the preprint server. In conclusion, authors posting preprints should consider the options provided by different preprint servers. Open science appears to be just one focus of preprint servers, and further improvements will depend on preprint server policies and priorities rather than on overcoming any technical difficulties.
CASE REPORT | doi:10.20944/preprints201801.0066.v1
Subject: Engineering, Control And Systems Engineering Keywords: cohesion policy; data visualization; open data
Online: 8 January 2018 (11:11:47 CET)
The implementation of the European Cohesion Policy, which aims at fostering regional competitiveness, economic growth, and the creation of new jobs, is documented for the period 2014–2020 in the publicly available Open Data Portal for the European Structural and Investment Funds. On the basis of this source, this paper describes the process of data mining and visualization for producing information on the performance of regional programmes in achieving effective expenditure of resources.
ARTICLE | doi:10.20944/preprints202304.0130.v1
Subject: Computer Science And Mathematics, Other Keywords: data; cooperatives; open data; data stewardship; data governance; digital commons; data sovereignty; open digital federation platform
Online: 7 April 2023 (14:14:02 CEST)
Network effects, economies of scale, and lock-in effects increasingly lead to a concentration of digital resources and capabilities, hindering the free and equitable development of digital entrepreneurship (SDG9), new skills, and jobs (SDG8), especially in small communities (SDG11) and their small and medium-sized enterprises (“SMEs”). To ensure the affordability and accessibility of technologies, promote digital entrepreneurship and community well-being (SDG3), and protect digital rights, we propose data cooperatives [1,2] as a vehicle for secure, trusted, and sovereign data exchange [3,4]. In post-pandemic times, community/SME-led cooperatives can play a vital role by ensuring that supply chains supporting digital commons are uninterrupted, resilient, and decentralized. Digital commons and data sovereignty provide communities with affordable and easy access to information and the ability to collectively negotiate data-related decisions. Moreover, cooperative commons (a) provide access to the infrastructure that underpins the modern economy, (b) preserve property rights, and (c) ensure that privatization and monopolization do not further erode self-determination, especially in a world increasingly mediated by AI. Thus, governance plays a significant role in accelerating communities’/SMEs’ digital transformation and addressing their challenges. Cooperatives thrive on digital governance and standards such as open, trusted Application Programming Interfaces (APIs) that increase the efficiency, technological capabilities, and capacities of participants and, most importantly, integrate, enable, and accelerate the digital transformation of SMEs in the overall process. This policy paper presents and discusses several transformative use cases for cooperative data governance.
The use cases demonstrate how platform/data cooperatives and their novel value creation can be leveraged to take digital commons and value chains to a new level of collaboration while addressing the most pressing community issues. The proposed framework for a digitally federated and sovereign reference architecture will create a blueprint for sustainable development in both the Global South and North.
ARTICLE | doi:10.20944/preprints202309.2165.v2
Subject: Physical Sciences, Quantum Science And Technology Keywords: entropy source; quantum cryptography; quantum hacking; quantum communication
Online: 2 November 2023 (08:04:13 CET)
True randomness is necessary for the security of any cryptographic protocol, including quantum key distribution (QKD). In QKD transceivers, randomness is supplied by one or more local private entropy sources of quantum origin, which can be either passive (e.g. a beam splitter) or active (e.g. an electronic quantum random number generator). To better understand the role of randomness in QKD, we revisit the well-known "detector blinding" attack on the BB84 QKD protocol, which uses strong light to achieve an undetectable and complete recovery of the secret key. We present two findings. First, we show that the detector blinding attack is in fact an attack on the receiver’s local entropy source. Second, based on this insight, we propose a modified receiver station and a statistical criterion which together enable robust detection of any bright-light attack and thus restore security.