ARTICLE | doi:10.20944/preprints202208.0201.v1
Subject: Life Sciences, Genetics Keywords: auto-encoder; highly sparse binary data; feature extraction; SNV integration
Online: 10 August 2022 (10:27:32 CEST)
Genomics, involving tens of thousands of genes, is a complex system that determines phenotype. A vital open question is how to integrate highly sparse genomic data carrying a mass of minor effects into a prediction model to improve predictive power. We find that deep learning methods can extract features effectively by transforming highly sparse dichotomous data into lower-dimensional continuous data in a non-linear way. This idea may benefit risk prediction based on genome-wide data, e.g. by integrating most of the information in the genotype data. Hence, we developed a multi-stage strategy to extract information from highly sparse binary genotype data and applied it to risk prediction. Specifically, we first reduced the number of biomarkers to a moderate size via a univariable regression model. Then a trainable auto-encoder was used to extract compact representations from the reduced data. Next, we solved a LASSO problem over a grid of tuning-parameter values to select the optimal combination of extracted features. Finally, we applied this feature combination to two prognostic models and evaluated their predictive performance. The results of simulation studies and a real-data application indicate that these highly compressed transformed features improve predictive performance without readily leading to over-fitting.
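The multi-stage strategy described above (univariable screening, auto-encoder compression, then LASSO selection) can be sketched as follows. This is a minimal illustration, not the authors' code: the genotype data are simulated, and all sizes, learning rates, and the simple gradient-descent auto-encoder are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sparse binary genotype matrix (n samples x p SNVs) and outcome y
n, p = 200, 500
X = (rng.random((n, p)) < 0.05).astype(float)
y = X[:, :10].sum(axis=1) + rng.normal(0, 0.5, n)

# Stage 1: univariable screening -- keep the SNVs most correlated with y
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] if X[:, j].std() > 0 else 0.0
               for j in range(p)])
keep = np.argsort(corr)[-50:]          # reduce to a moderate size
Xr = X[:, keep]

# Stage 2: one-hidden-layer auto-encoder compresses the binary data
# into k continuous features via a non-linear encoding
k, lr = 8, 0.05
W1 = rng.normal(0, 0.1, (Xr.shape[1], k)); b1 = np.zeros(k)
W2 = rng.normal(0, 0.1, (k, Xr.shape[1])); b2 = np.zeros(Xr.shape[1])
for _ in range(300):
    H = np.tanh(Xr @ W1 + b1)          # encoder
    Xhat = H @ W2 + b2                 # decoder (reconstruction)
    err = Xhat - Xr
    gW2 = H.T @ err / n; gb2 = err.mean(0)
    dH = err @ W2.T * (1 - H ** 2)
    gW1 = Xr.T @ dH / n; gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
Z = np.tanh(Xr @ W1 + b1)              # compact continuous representation

# Stage 3: LASSO over the extracted features via coordinate descent
def lasso(Z, y, lam, iters=200):
    beta = np.zeros(Z.shape[1])
    for _ in range(iters):
        for j in range(Z.shape[1]):
            r = y - Z @ beta + Z[:, j] * beta[j]
            rho = Z[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0) / (Z[:, j] @ Z[:, j])
    return beta

beta = lasso(Z - Z.mean(0), y - y.mean(), lam=1.0)
print(Z.shape, int((beta != 0).sum()))
```

The non-zero entries of `beta` identify the combination of extracted features that would then be passed to a prognostic model.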
ARTICLE | doi:10.20944/preprints202208.0241.v1
Subject: Life Sciences, Genetics Keywords: Tyr-FISH; integration of genetic, cytogenetic and pseudochromosome maps; transcript-based markers; genome assembly; bioinformatics; Allium cepa
Online: 12 August 2022 (12:54:02 CEST)
The ability to look directly into genome sequences has opened great opportunities in plant breeding. Yet the assembly of full-length chromosomes remains one of the most difficult problems in modern genomics. Genetic maps are commonly used in de novo genome assembly and are constructed on the basis of a statistical analysis of the number of recombination events rather than physical distance in base pairs. This may affect the accuracy of the ordering and orientation of scaffolds within the chromosome, especially in regions of recombination suppression. Here we report the use of Tyr-FISH for validation of genetic and pseudochromosome maps. For probe design we developed a pipeline based on the selection of unique sequences with minimal potential fluorescent background arising from non-specific in situ hybridization. In total, 24 unique genes were located on physical chromosomes 2 and 6. The order of markers was corrected by integrating the genetic and cytogenetic maps. Tyr-FISH mapping showed that the order of 23.1% (chromosome 2) and 27.3% (chromosome 6) of the tested genes differed between physical chromosomes and pseudochromosomes. In addition, the position of the mlh1 gene, which was absent from the genetic map, was defined on physical chromosome 2. These results suggest that Tyr-FISH can map genes that are present neither on the genetic map nor in the assembly. Hence, Tyr-FISH provides valuable information for the improvement of genome assemblies.
REVIEW | doi:10.20944/preprints202101.0521.v1
Subject: Life Sciences, Molecular Biology Keywords: Data integration; multi-omics; integration strategies; genomics
Online: 25 January 2021 (16:19:31 CET)
Metabolomics deals with the multiple and complex chemical reactions within living organisms and how these are influenced by external or internal perturbations. It lies at the heart of omics profiling technologies, not only as the underlying biochemical layer that reflects information expressed by the genome, the transcriptome, and the proteome, but also as the layer closest to the phenome. The combination of metabolomics data with the information available from genomics, transcriptomics, and proteomics offers unprecedented possibilities to enhance current understanding of biological functions, elucidate their underlying mechanisms, and uncover hidden associations between omics variables. As a result, a vast array of computational tools has been developed to assist with the integrative analysis of metabolomics data with other omics. Here, we review and propose five criteria – hypothesis, data types, strategies, study design and study focus – to classify statistical multi-omics data integration approaches into state-of-the-art classes under which all existing statistical methods fall. The purpose of this review is to examine the various aspects that guide the choice of the statistical integrative analysis pipeline in terms of the different classes. We draw particular attention to metabolomics and genomics data to assist those new to this field in the choice of an integrative analysis pipeline.
ARTICLE | doi:10.20944/preprints201904.0144.v1
Subject: Keywords: supply chain integration
Online: 12 April 2019 (10:33:23 CEST)
This paper applied case study research to design architectures for green-field supply chain integration. The integration design is based on a case study of a supply chain integration of five companies operating in different but complementary industry sectors. The case study research is applied to design and validate the architectures in a real-world scenario. The supply chain integration architectures enable the conversion of individual strategies into integrated ones. The architectures are categorised, and the process develops into a conceptual system for identifying the correlations between individual participants' strategic areas of interest and the integrated supply chain's areas of interest. The novelty of this paper is a conceptual system of green-field supply chain integration architectures that supply chain practitioners can apply in the real world.
ARTICLE | doi:10.20944/preprints201707.0014.v1
Subject: Medicine & Pharmacology, Behavioral Neuroscience Keywords: network; topology; integration; segregation; fMRI
Online: 10 July 2017 (05:48:41 CEST)
Recent methodological advances have enabled researchers to track the network structure of the human brain over time. Together, these studies provide novel insights into effective brain function, highlighting the importance of the systems-level perspective in understanding the manner in which the human brain organizes its activity to facilitate behavior. Here, we review a range of recent fMRI and electrophysiological studies that have mapped the relationship between inter-regional communication and network structure across a diverse range of brain states. In doing so, we identify both behavioral and biological axes that may underlie the tendency for network reconfiguration. We conclude our review by providing suggestions for future research endeavors that may help to refine our understanding of the functioning of the human brain.
ARTICLE | doi:10.20944/preprints202112.0286.v2
Subject: Engineering, Other Keywords: data integration; interoperability; harmonization; GeoBIM; metadata
Online: 7 June 2022 (11:10:07 CEST)
The reuse and integration of data offer major opportunities, supported by the F.A.I.R. data principles. Seamless data integration from heterogeneous sources has long been of interest to the geospatial community. However, 3D city models, BIM, and information supporting smart cities present higher semantic and geometrical complexity, posing new challenges that have never been tackled in a comprehensive methodology. Building on previous theories and studies, this paper proposes an overarching workflow and framework for multisource (geo)spatial data integration. It starts from the definition of use-case-based requirements for the integrated data, guides the analysis of the integrability of the involved datasets, suggesting actions to harmonise them, and continues through to data merging and validation. It is finally tested and exemplified on a case study. This approach allows the development of consistent, well-documented, and inclusive data integration workflows for the automation of use cases in various geospatial domains and the production of Interoperable and Reusable data.
CONCEPT PAPER | doi:10.20944/preprints202010.0474.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: agile; waterfall; build; continuous integration; reproducible
Online: 23 October 2020 (09:41:15 CEST)
Stock assessment determines the status of a fishery stock to support management decision making. Considering the iterations between exploratory calculations and the need to compare outputs to elicit better model settings, we should focus not only on the accuracy of abundance estimation and the tolerance of uncertainty but also on the efficiency of the project workflow. Although a stock assessment model written in the R language was introduced in Japan in 2012, the workflow was not sufficiently adapted, creating problems because the current workflow runs contrary to the principles of effective value creation. To make our project sustainable, we propose adopting the agile methodology, an iterative development method used by software developers, for stock assessment. To this end, we wrote an example report as a package document in the R language. Developed under a continuous integration environment, the report remains up to date with every modification of its component files. This method made our work efficient and transparent by allowing and documenting scenario branching, error corrections, and annual updates. We show that the iterative development cycle benefits us by allowing us to focus on the essential business problem of the assessment project.
ARTICLE | doi:10.20944/preprints202201.0270.v1
Subject: Mathematics & Computer Science, Other Keywords: Road accidents; Brazil; fractional integration; long memory
Online: 19 January 2022 (11:45:26 CET)
This paper deals with the analysis of trends in road accidents on major highways in Brazil. Using updated time series techniques, our results indicate that a low degree of long memory was detected in the series with shocks having transitory effects over time. We further find that the number of accidents taking place in Brazil has been reducing over time, though in the presence of negative shocks, the recovery is not going to be immediate due to the long memory nature of the data. Despite the absence of relevant investment relating to infrastructure expansion, it is worth mentioning the consolidation of a nationwide tolled road system in Brazil involving concessions to private administrators, alongside more severe traffic laws that can impose limitations on driving licences.
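The long-memory analysis above rests on estimating the fractional integration parameter d of a time series: d near 0 means shocks are transitory, while larger d means slow recovery from shocks. One common estimator (not necessarily the one used in the paper) is the Geweke–Porter-Hudak (GPH) log-periodogram regression; the sketch below uses simulated data and hypothetical tuning choices purely to illustrate the idea.

```python
import numpy as np

def gph_d(x, frac=0.5):
    """GPH log-periodogram estimate of the fractional integration parameter d."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    m = int(n ** frac)                      # number of low Fourier frequencies used
    j = np.arange(1, m + 1)
    lam = 2 * np.pi * j / n                 # Fourier frequencies
    I = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * n)   # periodogram
    reg = -2 * np.log(2 * np.sin(lam / 2))  # regressor; its slope estimates d
    return np.polyfit(reg, np.log(I), 1)[0]

rng = np.random.default_rng(0)
n = 4096
eps = rng.normal(size=n)

# Fractionally integrated series x_t = sum_k psi_k * eps_{t-k},
# with psi_k the MA coefficients of (1 - L)^(-d), truncated at K lags
d_true, K = 0.3, 1000
psi = np.ones(K)
for k in range(1, K):
    psi[k] = psi[k - 1] * (k - 1 + d_true) / k
x_long = np.convolve(eps, psi)[:n]

d_white = gph_d(eps)      # white noise: transitory shocks, d near 0
d_long = gph_d(x_long)    # long memory: d clearly positive
print(round(d_white, 2), round(d_long, 2))
```

A series of accident counts with d between 0 and 1 would, like `x_long` here, revert to trend only slowly after a negative shock.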
ARTICLE | doi:10.20944/preprints202107.0698.v1
Online: 30 July 2021 (11:43:12 CEST)
Background: In an age where information is generally accessible, much of the current interest has focused on how accessible and convenient technology can be. Small and personal, mobile devices can transform our perception of learning by combining mobility and convenience. Mobile learning is part of the digital learning landscape alongside e-learning and serious games. However, knowledge about the effective design of mobile learning experiences remains of interest, with a focus on appropriate design models and the implementations that can achieve the intended educational outcomes. Exploring the instructor's perspective on mobile learning is essential. Therefore, the aim of this study was to investigate Moroccan instructors' perception and practice of mobile learning to inform the development of an ecologically valid mobile learning integration model. Methods: Higher education instructors (n=41) were recruited to the study. The Moroccan instructors' perceptions and experiences regarding their adoption of mobile learning were collected using an online survey. The analysis focused on their mobile use, perceived IT competency, and opinions on mobile learning. Results: We described most of the instructors' considerations regarding integrating mobile technologies into their teaching activities. We found that most of the mobile learning activities described by the respondents corresponded to relatively advanced use of mobile devices. More promisingly, instructors have found innovative ways to use the educational potential of mobile devices. However, the adoption of mobile devices still faced challenges: absent or poor Wi-Fi connections, a limited number of devices or restricted access, and occasionally fees or application incompatibility were identified as obstacles to mobile learning usage. Conclusion: Mobile learning is mostly perceived positively among Moroccan instructors, enabling many applications to enhance teaching and learning.
This study provides a better understanding of the aspects and factors influencing the integration of mobile learning in the Moroccan educational context, furthering the development of an ecologically valid mobile learning integration model. Future work on mobile learning should consider the fast-paced evolution of mobile technologies, emphasizing the flexibility of integration frameworks to support instructors and learners.
ARTICLE | doi:10.20944/preprints202105.0109.v1
Subject: Social Sciences, Economics Keywords: Electricity Markets; Integration; Demand Response; Innovation; Regulation
Online: 6 May 2021 (15:25:51 CEST)
We select four important waves of new entrants that knocked on the door of European electricity markets to illustrate how market rules need to be continuously adapted to allow new entrants to come in and push innovation forward. The new entrants that we selected are utilities venturing into neighbouring markets after establishing a strong position in their home market, utility-scale renewables project developers, asset-light software companies aggregating the assets of smaller consumers and producers, and different types of communities. We show that well-intentioned rules designed for certain types of market participants can (unintentionally) become obstacles for new entrants. We conclude that the evolution of market rules illustrates the importance of dynamic regulation. At the start of the liberalisation process the view was that we would deregulate or re-regulate the sector after which the role of regulators could be reduced. But their role has only increased. New players might also present new risks that require intervention by regulators.
Online: 30 October 2020 (15:35:00 CET)
In today's information age, data are becoming more and more important. While other industries achieve tangible improvements by applying cutting-edge information technology, the construction industry still lags far behind. Cost, schedule, and performance control are three major functions in the project execution phase. Beyond their individual importance, cost-schedule integration has been a significant challenge in the construction industry over the past five decades. Although much effort has been put into this development, no such method has been adopted in construction practice. The purpose of this study is to propose a new method for integrating cost and schedule data using big data technology. The proposed algorithm is designed to provide data integrity and flexibility in the integration process, a considerable reduction in the time needed to build and change the database, and practical use on a construction site. It is expected that the proposed method can transform information management, which field engineers currently regard as one of their most troublesome tasks, into a data-friendly process.
ARTICLE | doi:10.20944/preprints201806.0488.v1
Subject: Earth Sciences, Geoinformatics Keywords: gis; bim; ifc; citygml; integration; interoperability; geometry
Online: 29 June 2018 (15:15:57 CEST)
It is widely acknowledged that the integration of BIM and GIS data is a crucial step forward for future 3D city modelling, but most of the research conducted so far has covered only the semantic aspects of GIS-BIM integration. We present here the results of the GeoBIM project, in which we tackled three integration problems focussing instead on aspects involving geometry processing: (i) the automated processing of complex architectural IFC models, (ii) the integration of existing GIS subsoil data in BIM, and (iii) the georeferencing of BIM models for their use in GIS software. All the problems have been studied using real world models and existing datasets made and used by practitioners in the Netherlands. For each problem, we expose in detail the issues we faced, our proposed solutions, and our recommendations for a more successful integration.
ARTICLE | doi:10.20944/preprints202209.0182.v1
Subject: Social Sciences, Other Keywords: transportation integration; service industry agglomeration; Yangtze River Delta urban agglomeration; urban agglomeration transportation integration index system; knowledge spillover effect
Online: 13 September 2022 (16:02:29 CEST)
This study selected the Yangtze River Delta urban agglomeration as the research area and, drawing on the current state of its transportation development, constructed an urban agglomeration transportation integration index system to evaluate the status of transportation integration across the agglomeration. The study then examined the mechanism through which transportation infrastructure influences service industry agglomeration. The results are as follows: (1) From 2011 to 2020, the Yangtze River Delta urban agglomeration's transportation integration index showed a clear upward trend. (2) The development of transport integration in urban agglomerations has heterogeneous effects on local service agglomeration: the integration level of local transportation has a certain inhibitory effect on the agglomeration of the local service industry overall, while playing an important role in promoting the agglomeration of local wholesale and retail trade, transportation, storage, and postal services. (3) Transportation integration can affect service industry agglomeration through the knowledge spillover brought by the free flow of various factors; the knowledge spillover effect of local transportation integration can promote local service industry agglomeration to a certain extent. The Yangtze River Delta urban agglomeration needs to accelerate the construction of trans-provincial and trans-municipal transportation infrastructure and further improve connectivity, so as to promote the integrated development of high-quality transportation in the region.
ARTICLE | doi:10.20944/preprints202209.0042.v1
Subject: Medicine & Pharmacology, Sport Sciences & Therapy Keywords: active range of motion; Structural Integration; Rolfing; fascia
Online: 5 September 2022 (03:32:00 CEST)
Background: Recent work has investigated significant force transmission between the components of myofascial chains. Misalignments in the body due to fascial thickening and shortening can therefore lead to complex compensatory patterns. For the treatment of such nonlinear cause-effect pathology, a comprehensive neuro-musculoskeletal therapy such as the Rolf Method of Structural Integration (SI) could be targeted. Methods: A total of 727 subjects were retrospectively screened from the medical records of an SI practice over a 23-year period. 383 subjects who had completed 10 basic SI sessions met eligibility criteria and were assessed for active range of motion (AROM) of the shoulder and hip before and after SI treatment. Results: Shoulder flexion, external and internal rotation, and hip flexion improved significantly (all p < 0.0001) after 10 SI sessions. Left shoulder flexion and external rotation of both shoulders increased more in men than in women (p < 0.0001), but were not affected by age. Conclusions: SI intervention produces multiple changes in the components of myofascial chains that could help maintain upright posture in humans and reduce inadequate compensatory patterns. SI affects the outcome of some AROM parameters differently in women and men.
REVIEW | doi:10.20944/preprints202208.0363.v1
Subject: Social Sciences, Other Keywords: migration; mentoring; unaccompanied minors; refugee; asylum seeker; integration
Online: 19 August 2022 (10:39:38 CEST)
In 2015, an increased migration movement into Europe generated a European Refugee Crisis. Adolescents often migrate unaccompanied by a caregiver and face particular risks during the different phases of migration. Recently, Portugal hosted the fourth highest number of Middle East and North Africa unaccompanied minors (UM) among EU countries. It is therefore relevant to explore peer-reviewed interventions among EU member states to inform the development of future Portuguese-based programs aiming to support the integration of these citizens. This review aimed to analyse mentoring as a relevant integration tool for UM refugees arriving in Portugal. Mentoring was identified as a low-cost strategy with low to moderate positive results for youth at risk of developing psychological, social, and behavioural problems. Mentoring is starting to gain momentum within the EU countries receiving more refugee citizens through the EU relocation program. This review can inform social education technicians and the staff involved in the Portuguese refugee relocation program and encourage discussion on the creation of Portuguese-based mentoring programs for the studied population.
ARTICLE | doi:10.20944/preprints202206.0326.v1
Subject: Engineering, Energy & Fuel Technology Keywords: hydrogen propulsion; aircraft design; conceptual integration; performance assessment
Online: 23 June 2022 (15:59:12 CEST)
The present paper investigates, at the conceptual level, the performance of short-medium-range aircraft with hydrogen propulsion. Attention is focused on the relationship between figures of merit related to transport capability, such as passenger capacity and flight range, and the parameters which drive the design of liquid hydrogen tanks and their integration with a given aircraft geometry. The reference aircraft chosen for this purpose is a box-wing short-medium-range airplane, the object of study in a previous European research project called PARSIFAL, capable of cutting fuel consumption per passenger-kilometre by up to 22%. Adopting a retrofitting approach, non-integral pressure vessels are sized to fit into the fuselage of the reference aircraft, under the assumption that the main aerodynamic, flight mechanic, and structural characteristics are not affected. A parametric model is introduced to generate a wide variety of fuselage-tank cross-section layouts, from a single tank with the maximum diameter compatible with a catwalk corridor to multiple tanks located in the cargo deck, and an assessment workflow is implemented to perform the structural sizing of the tanks and analyse their thermodynamic behaviour during the mission. The latter is simulated with a time-marching approach that couples the fuel request from the engines with the thermodynamics of the hydrogen in the tanks, which is constantly subject to evaporation and, depending on the internal pressure, vented out in gas form. Each model is presented in detail in the paper, and results are provided through sensitivity analyses with respect to both the technology parameters of the tanks and the geometric parameters influencing their integration. The guidelines resulting from the analyses indicate that light materials, such as the aluminium alloy AA2219 for the tank structure and polystyrene foam for the insulation, should be selected.
Preferred values are also indicated for the aspect ratios of the vessel components, i.e. the central tube and endcaps, as well as suggestions for the integration layout to be adopted depending on the desired trade-off between passenger capacity, as in the case of multiple tanks in the cargo deck, and achievable flight range, as for the single tank in the section.
ARTICLE | doi:10.20944/preprints202112.0487.v1
Subject: Earth Sciences, Geoinformatics Keywords: 3D City Model; CityGML 2.0; Spatial Data Integration
Online: 30 December 2021 (12:51:35 CET)
3D city models integrate heterogeneous urban data from multiple sources into a unified geospatial representation, combining both semantics and geometry. Although in recent decades they have been used predominantly for visualization, today they serve a wide range of tasks related to exploration, analysis, and management across multiple domains. The complexity of urban processes and the diversity of urban environments bring challenges to the implementation of 3D city models. To address such challenges, this paper presents the development process of a 3D city model of a single neighborhood in Sofia based on the CityGML 2.0 standard. The model represents the buildings in LOD1, with a focus on building-related CityGML features such as building part, terrain intersection curve, and address. Similar building models of 18 cities provided as open datasets are explored and compared in order to extract good modeling practices. As a result, workflows for the generation of 3D building models in LOD1 are elaborated and improvements in feature modeling are proposed. Two modeling options are examined: modeling a building as a single solid and modeling a building with separate building parts. Finally, the possibilities for visualization of the model in popular platforms such as ArcGIS Pro and Cesium Ion are explored.
ARTICLE | doi:10.20944/preprints202112.0439.v1
Subject: Behavioral Sciences, Applied Psychology Keywords: refugees; career adaptability; resettlement success; labor market integration
Online: 27 December 2021 (15:56:38 CET)
Today's unstable labor market increasingly requires flexibility and adaptability to cope with the threat of unemployment. It can cause distress in people and have a more significant negative impact on fragile workers, such as migrants. This study aimed to test whether a Career Counseling intervention designed for Migrants (CCfM) can develop Career Adaptability and, therefore, both Work Self-efficacy (WSe) and Job Search Self-efficacy (JSSe) perceptions. It was conducted in Italy and involved a sample of 233 migrants, who were asked to respond to a questionnaire available in three languages (Italian, French, and English). Data analysis showed an improvement in all the variables considered, namely career adaptability (including concern, control, confidence, and curiosity), WSe, and JSSe, even though the CCfM was not directly designed to increase the latter. In addition, the development of career adaptability explained the increase in migrants' WSe and JSSe, and the initial level of career adaptability was found to explain the increase in WSe due to the initial positive level of curiosity.
Subject: Engineering, Energy & Fuel Technology Keywords: Solar Photovoltaics; PV Self-consumption; Building-integrated photovoltaics (BIPV); Building-applied photovoltaics (BAPV); PV orientations; PV Grid-integration
Online: 22 September 2021 (10:14:35 CEST)
As solar photovoltaics in buildings reaches maturity, grid integration and economic yield become topics of greater interest. The traditional design of photovoltaic installations has considered the optimal orientation of photovoltaic modules to be that which yields the maximum annual energy production. The influence of consumption patterns and hourly variable electricity prices implies that this traditional optimal design might not be the most profitable. Using a full-year dataset for a residential installation, alternative installations using canopies and modules attached to the façades are simulated. Simulating the energy balances for different annual consumption levels, it is found that the canopy and façade installations offer better self-consumption of the PV-produced energy, reflected in a 9% higher self-consumption degree using modules on façades and 5% using canopies. The economic evaluation under the new electricity tariffs in Spain shows a better profit for PV self-consumption, reducing the payback time of the investment by more than two years. The analysis of different alternatives for an industrial PV installation has allowed us to identify several benefits of these orientations, such as an increase in annual energy production of up to 59% over the optimal-producing orientation, which were confirmed after several months of operation.
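The self-consumption degree discussed above is simply the share of PV production that is consumed on-site at the moment it is generated. The sketch below (all hourly profiles and module parameters are hypothetical, not taken from the study) illustrates why a flatter façade production profile can match a morning/evening-heavy household load better than a noon-peaked roof array:

```python
import numpy as np

hours = np.arange(24)
# Stylised hourly household load in kW: baseline plus morning and evening peaks
load = 0.3 + 0.4 * ((hours >= 18) & (hours <= 22)) + 0.2 * ((hours >= 7) & (hours <= 9))

# Solar elevation proxy, zero at night
sun = np.clip(np.sin(np.pi * (hours - 6) / 12), 0, None)
pv_roof = 2.0 * sun ** 1.5     # optimally tilted roof array: sharp noon peak
pv_facade = 1.4 * sun ** 0.5   # vertical façade array: lower but flatter profile

def self_consumption(pv, load):
    """Self-consumption degree: share of PV production consumed on-site."""
    used = np.minimum(pv, load)   # instantaneous overlap of production and load
    return used.sum() / pv.sum()

sc_roof = self_consumption(pv_roof, load)
sc_facade = self_consumption(pv_facade, load)
print(round(sc_roof, 2), round(sc_facade, 2))
```

Even though the façade array produces less energy in total here, a larger fraction of its output overlaps the load, which is the effect the study quantifies with full-year data.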
ARTICLE | doi:10.20944/preprints202103.0313.v1
Online: 11 March 2021 (11:06:24 CET)
The rapid development of information and communication technologies has led to the use of new digital technologies in education, involving combinations of text, graphics, audio, video, animations, and other eLearning resources such as authoring tools, Learning Management Systems (LMS), mobile learning, and others. Arguably, the use of LMS leaves much to be desired. The inherent problem is that the future of extensive adoption of ICT via LMS to enhance and promote classroom interaction in Open and Distance Learning (ODL) is bleak. This is worrisome given that the country is lagging far behind in the innovative use of this web 2.0 technology to impart knowledge. Further, the low-level application of LMS in instruction means the loss of the inherent advantages of its adoption. The online setting, which makes students less nervous and more interactive and encourages the sharing of ideas and viewpoints, along with a host of other benefits, would also be lost. While evidence has shown that LMS is not a new phenomenon, the use of LMS in ODL is still in its infancy, particularly in Nigeria; research in this area is rare, as a quick search on prominent research databases can testify. It is on this thrust that this study investigates University of Ibadan undergraduate students' perceived roles and readiness towards the integration of a learning management system into teaching and learning.
REVIEW | doi:10.20944/preprints202101.0418.v1
Subject: Medicine & Pharmacology, Dentistry Keywords: health workforce; operational models; planning; skill mix; integration
Online: 21 January 2021 (12:35:46 CET)
Over the last decade, there has been a renewed interest in oral health workforce planning. The purpose of this review is to examine oral health workforce planning models on supply, demand, and needs, mainly with respect to their data sources, modelling techniques, and use of skill mix. A search was carried out on the PubMed, Web of Science, and Google Scholar databases for published scientific articles on oral health workforce planning models between 2010 and 2020. No restrictions were placed on the type of modelling philosophy, and all studies including supply, demand, or needs-based models were included. Rapid review methods guided the review process. Twenty-three studies from 15 different countries were included in the review. A majority were from high income countries (n=17). Dentists were the sole oral health workforce group modelled in 13 studies; only five studies included skill mix (allied dental personnel) considerations. The most common application of modelling was a workforce-to-population ratio or a needs-based demand-weighted variant. Nearly all studies presented weaknesses in the modelling process due to limitations in data sources and/or the non-availability of necessary data to inform oral health workforce planning. Skill mix considerations in planning models were also limited to horizontal integration within the oral health professions. Planning for the future oral health workforce is heavily reliant on quality data being available for supply, demand, and needs models. Integrated methodologies that expand skill mix considerations and account for uncertainty are essential for future planning exercises.
ARTICLE | doi:10.20944/preprints202012.0311.v1
Subject: Biology, Anatomy & Morphology Keywords: resource integration; network analysis; sustainability; small-scale farm
Online: 14 December 2020 (09:26:08 CET)
Shrinking farm size and fragile farm resources pose a significant challenge to the sustainability of small-scale farms. Efficient resource utilization in small-scale farms is crucial to achieving farm sustainability through endogenous mechanisms. However, the precise mechanisms by which physical resources are integrated to achieve farm sustainability are not yet clear. By capturing the interaction among farm resources as a network phenomenon, we identify the discrete resource interactions (RIs) in different types of small-scale farms of the Indian Sundarbans that are associated with higher farm sustainability. Thirty-two linkages, 11 reciprocal linkages, 22 triads, and three ‘core elements’ that occurred and co-occurred on highly sustainable farms are found to be critical in achieving farm sustainability. Using the properties of resource interaction networks as explanatory variables for farm sustainability, we anticipate that sustainability in small-scale farms can be achieved by strategically creating new RIs on the farm. However, there may be limitations to such achievement depending on the nature of the RI and the type of farm. The analytical approach helps in understanding the structural basis of sustainability in small-scale farms, and it can be used to achieve farm sustainability through the strategic integration of existing farm resources in smallholder systems.
REVIEW | doi:10.20944/preprints202010.0108.v1
Subject: Medicine & Pharmacology, Allergology Keywords: Morphological integration; dysmorphogenesis; skull; etiogenesis; neuro-psychiatric disorders
Online: 6 October 2020 (08:29:01 CEST)
Structure-function interdependence is a universal phenomenon in biological systems. Any alteration in structural features may result in a change in functions, leading to natural selection of a particular trait, or to dysfunctions thereof. Many such alterations arise during the course of evolution of a species and may be meticulously traced during the embryonic development of an organism. Through the theoretical construct of morphological integration, a set of phenotypic traits alters in a coordinated and integrated manner during the evolution and embryonic development of an organism, yielding efficient, environmentally adapted physiological functions pertinent to those structures. Such integration may sometimes go awry, setting the basis for the genesis of disease. Morphological integration in the human skull has been established through various methods. Brain-skull co-development is tightly coupled through evolution and development, and the very basis of a neuro-psychiatric disorder could lie in dysmorphogenesis of the skull, its consequent effect on structures, and thus on functions of the pertinent brain components. Here we propose that morphological integration in the human skull may be mechanistically implicated in the etiogenesis of certain neuro-psychiatric disorders and should be borne in mind during clinical diagnosis and therapeutic interventions.
REVIEW | doi:10.20944/preprints202007.0224.v1
Subject: Keywords: fibromyalgia; comprehensive review; neurophysiological abnormalities; psychosocial processes; integration
Online: 11 July 2020 (03:45:54 CEST)
Research into the neurobiological and psychosocial mechanisms involved in fibromyalgia (FM) has progressed remarkably in recent years. Despite this, current accounts of FM fail to capture the complex, dynamic and mutual crosstalk between neurophysiological and psychosocial domains. We conducted a comprehensive review of the existing literature in order to synthesise current knowledge on FM, explore and highlight multi-level links and pathways among different systems, and build bridges between existing approaches. An extensive panel of international experts in the neurophysiological and psychosocial aspects of FM discussed the collected evidence and progressively refined and conceptualized its interpretation. Fibromyalgia is a complex condition resulting from the dynamic interplay between multiple systems and processes. We provided an updated overview of the most relevant observations in FM to date, as well as the potential pathways by which they are related and exert their mutual influence to produce the manifestations commonly associated with FM. This review constituted the first step towards, and supported the development of, a much-needed unified model integrating the main factors implicated in FM, which may prove valuable in understanding and managing the condition.
ARTICLE | doi:10.20944/preprints202003.0470.v1
Subject: Biology, Other Keywords: EthA; ethionamide resistance; BVMO; molecular dynamics; thermodynamic integration
Online: 31 March 2020 (23:21:53 CEST)
Mutation in the ethionamide (ETH) activating enzyme, EthA, is the main factor determining resistance to this drug, used to treat TB patients infected with MDR and XDR Mycobacterium tuberculosis isolates. Many mutations in EthA of ETH resistant (ETH-R) isolates have been described but their roles in resistance remain uncharacterized, partly because structural studies on the enzyme are lacking. Thus, we took a two-tier approach to evaluate two mutations (Y50C and T453I) found in ETH-R clinical isolates. First, we used a combination of comparative modeling, molecular docking, and molecular dynamics to build an EthA model in complex with ETH that has hallmark features of structurally characterized homologs. Second, we used free energy computational calculations for the reliable prediction of relative free energies between the wild type and mutant enzymes. The ΔΔG values for Y50C and T453I mutant enzymes in complex with FADH2-NADP-ETH were 3.34 (±0.55) and 8.11 (±0.51) kcal/mol, respectively, compared to the wild type complex. The positive ΔΔG values indicate that the wild type complex is more stable than the mutants, with the T453I complex being the least stable. These are the first results shedding light on the molecular basis of ETH resistance, namely reduced complex stability of mutant EthA.
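As a rough aid to interpreting such values, the Boltzmann factor exp(-ΔΔG/RT) converts a relative free energy into a ratio of equilibrium stability constants. The short sketch below is a back-of-the-envelope illustration using the reported numbers at ~298 K; it is not part of the paper's protocol.

```python
import math

RT = 0.593  # kcal/mol at ~298 K

def relative_stability(ddg_kcal_per_mol):
    """Mutant-to-wild-type ratio of complex stability constants implied
    by a relative free energy difference (Boltzmann factor)."""
    return math.exp(-ddg_kcal_per_mol / RT)

# The reported ΔΔG of 3.34 and 8.11 kcal/mol imply mutant complexes that
# are orders of magnitude less stable than the wild type.
r_y50c = relative_stability(3.34)   # roughly 4e-3
r_t453i = relative_stability(8.11)  # roughly 1e-6
```

This illustrates why even a few kcal/mol of destabilization, as computed by thermodynamic integration, can translate into a large functional difference.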
ARTICLE | doi:10.20944/preprints201811.0444.v1
Subject: Life Sciences, Molecular Biology Keywords: cystic fibrosis; gene therapy; gene targeting; gene integration
Online: 19 November 2018 (10:14:02 CET)
Cystic Fibrosis (CF) is an inherited monogenic disorder amenable to gene-based therapies. Because CF lung disease is currently the major cause of mortality and morbidity, and the lung airway is readily accessible to gene delivery, the major CF gene therapy effort at present is directed to the lung. Although airway epithelial cells are renewed slowly, permanent gene correction through gene editing or targeting in airway stem cells is needed to perpetuate the therapeutic effect. Transcription activator-like effector nucleases (TALENs) have been utilized widely for a variety of gene editing applications. The stringent requirement for nuclease binding target sites allows for gene editing with precision. In this study, we engineered helper-dependent adenoviral (HD-Ad) vectors to deliver a pair of TALENs together with donor DNA targeting the human AAVS1 locus. With homology arms of 4 kb in length, we demonstrated precise insertion of either a LacZ reporter gene or a human CFTR minigene into the target site. Using the LacZ reporter, we determined the efficiency of gene integration to be about 5%. In CFTR vector-transduced cells, we detected both CFTR mRNA and protein expression, by qPCR and Western blot analysis, respectively. We also confirmed correction of CFTR function by Fluorometric Imaging Plate Reader (FLIPR) and iodide efflux assays. Taken together, these findings suggest a new direction for future in vitro and in vivo studies in CF gene editing.
ARTICLE | doi:10.20944/preprints202203.0038.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: polyimide bonding; plasma activation; hydrophilic; hybrid bonding; 3D integration
Online: 2 March 2022 (07:47:17 CET)
Polymer adhesives have emerged as a promising dielectric passivation layer in hybrid bonding for 3D integration, but they raise misalignment problems during curing. In this work, the synergistic effect of oxygen plasma surface activation and wetting is utilized to achieve bonding between fully cured polyimides. The optimized process achieves void-free bonding with a maximum shear strength of 35.3 MPa at a low temperature of 250 °C in merely 2 min, significantly shortening the bonding period and decreasing thermal stress. It is found that the plasma activation generated hydrophilic groups on the polyimide surface, and the wetting process further introduced more -OH groups and water molecules onto the activated polyimide surface. The synergistic process of plasma activation and wetting facilitates the bridging of polyimide interfaces to achieve bonding, providing an alternative path for adhesive bonding in 3D integration.
REVIEW | doi:10.20944/preprints202202.0287.v1
Subject: Medicine & Pharmacology, Sport Sciences & Therapy Keywords: integration of sports and health care; sports; health; community
Online: 23 February 2022 (07:06:51 CET)
(1) Background: With the continuous globalization and modernization of people's lives, lifestyles have changed dramatically, with decreased physical activity and increased unhealthy eating patterns in many nations throughout the world. With the COVID-19 pandemic and the changes taking place in people’s health and lifestyles around the world, the need for rehabilitation is expected to rise in the coming years. (2) Methods: This paper analyzes the integration model of sports and health care using theoretical analysis, literature review, logical reasoning, and other methods. (3) Results: The integration of sports and health care in China has entered the stage of practical implementation after many years of development, forming a few representative integration patterns. Governments, communities, community hospitals, hospitals, and third-party institutions are the main participants, with the community playing an important role in the integration. Pharmacies, sports venues, and schools with sufficient staff have a relatively low participation rate. (4) Conclusion: Graded treatment has been applied in health management and sports rehabilitation. Based on the development of digital medicine, a government-led graded treatment model centered on a "health management center" can promote the participation of multiple subjects in the integration of sports and health care, solving to a certain extent the problems existing in the current integration process.
CONCEPT PAPER | doi:10.20944/preprints202106.0578.v1
Subject: Biology, Animal Sciences & Zoology Keywords: homology; developmental mechanism; evidential integration; eumetazoan body plan; phylogenetics
Online: 23 June 2021 (11:45:06 CEST)
Reconstructing ancestral species is a challenging endeavour: fossils are often scarce or enigmatic, and inferring ancestral characters based on novel molecular approaches (e.g. comparative genomics or developmental genetics) has long been controversial. A key philosophical challenge pertinent at present is the lack of a theoretical framework capable of evaluating inferences of homology made through integration of multiple kinds of evidence (e.g. molecular, developmental, or morphological). Here, I present just such a framework. I start with a brief history and critical assessment of attempts at inferring morphological homology through developmental genetics. I then bring attention to a recent model of homology, namely Character Identity Mechanisms (DiFrisco, Love, & Wagner, 2020), intended partly to elucidate the relationships between morphological characters, developmental genetics, and homology. I utilise and build on this model to construct the evaluative framework mentioned above, which judges the epistemic value of evidence of each kind in each particular case based on three proposed criteria: effectiveness, admissibility, and informativity, as well as providing a generalised guideline on how it can be scientifically operationalised. I then point out the evolution of the eumetazoan body plan as a case in point where the application of this framework can yield satisfactory results, both empirically and conceptually. I will conclude with a discussion on some potential implications for more general philosophy of biology and philosophy of science, especially surrounding evidential integration, models and explanation, and reductionism.
Subject: Biology, Anatomy & Morphology Keywords: Crustacea; Anomura; Brachyura; Carcinization; Phylogeny; Convergent evolution; Morphological integration
Online: 2 March 2021 (12:43:52 CET)
A fundamental question in biology is whether phenotypes can be predicted by ecological or genomic rules. At least five cases of convergent evolution of the crab-like body plan (with a wide and flattened shape, and a bent abdomen) are known in decapod crustaceans, and have, for over 140 years, been known as ‘carcinization’. The repeated loss of this body plan has been identified as ‘decarcinization’. In reviewing the field, we offer phylogenetic strategies to include poorly known groups, and direct evidence from fossils, that will resolve the history of crab evolution and the degree of phenotypic variation within crabs. Proposed ecological advantages of the crab body are summarized into a hypothesis of phenotypic integration suggesting correlated evolution of the carapace shape and abdomen. Our premise provides fertile ground for future studies of the genomic and developmental basis, and the predictability, of the crab-like body form.
ARTICLE | doi:10.20944/preprints201806.0219.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Big data technology; Business intelligence; Data integration; System virtualization.
Online: 13 June 2018 (16:19:48 CEST)
Big Data warehouses are a new class of databases that largely use unstructured and volatile data for analytical purposes. Examples of this kind of data source are those coming from the Web, such as social networks and blogs, or from sensor networks, where huge amounts of data may be available only for short intervals of time. In order to manage massive data sources, a strategy must be adopted for defining multidimensional schemas in the presence of fast-changing situations or even undefined business requirements. In this paper, we propose a design methodology that adopts agile and automatic approaches in order to reduce the time necessary to integrate new data sources and to include new business requirements on the fly. The data are immediately available for analysis, since the underlying architecture is based on a virtual data warehouse that does not require an importing phase. Examples of the application of the methodology are presented throughout the paper in order to show the validity of this approach compared to a traditional one.
ARTICLE | doi:10.20944/preprints201804.0235.v1
Subject: Engineering, Energy & Fuel Technology Keywords: cogeneration; process integration; solar energy; thermal storage; desalination; optimization
Online: 18 April 2018 (08:08:48 CEST)
Shale gas production is associated with significant usage of fresh water and discharge of wastewater. Consequently, proper management strategies for water resources in shale gas production are needed, as is the integration of conventional energy sources (e.g., shale gas) with renewables (e.g., solar energy). The objective of this study is to develop a design framework for integrating water and energy systems, including multiple energy sources, a cogeneration process, and desalination technologies, to treat wastewater and provide fresh water for shale gas production. Solar energy is included to provide thermal power to a multi-effect distillation (MED) plant either directly and exclusively (to be more economically feasible) or indirectly through a thermal energy storage system. Thus, the MED plant is driven by direct or indirect solar energy and by excess or direct cogeneration process heat. The proposed thermal energy storage, along with a fossil fuel boiler, allows the dual-purpose system to operate at steady state by managing the dynamic variability of solar energy. Additionally, electricity production is considered to supply a reverse osmosis (RO) plant without connecting to the local electric grid. A multi-period mixed-integer nonlinear program (MINLP) is developed and applied to discretize the operation period and track the diurnal fluctuations of solar energy. The solution of the optimization program determines the optimal mix of solar energy, thermal storage, and fossil fuel that attains the maximum annual profit of the entire system. A case study is solved for water treatment and energy management for the Eagle Ford Basin in Texas.
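The dispatch decisions that such an optimization chooses among can be sketched with a toy greedy policy: in each period, thermal demand is met first by direct solar, then by stored solar, then by the fossil boiler, with surplus solar charging the storage. The greedy rule and all numbers below are illustrative stand-ins for the paper's MINLP, not its actual formulation.

```python
def dispatch(solar, demand, storage_cap=50.0, fuel_cost=2.0):
    """Greedy multi-period thermal dispatch (illustrative only).

    solar, demand: per-period thermal energy in arbitrary units.
    Returns the per-period (direct, from_storage, fossil) split and the
    total fossil fuel cost.
    """
    stored, total_cost, plan = 0.0, 0.0, []
    for s, d in zip(solar, demand):
        direct = min(s, d)                                 # direct solar to MED
        stored = min(storage_cap, stored + (s - direct))   # charge storage
        from_storage = min(stored, d - direct)             # discharge storage
        stored -= from_storage
        fossil = d - direct - from_storage                 # boiler covers the rest
        total_cost += fossil * fuel_cost
        plan.append((direct, from_storage, fossil))
    return plan, total_cost

# Three periods: sunny, cloudy, night. Storage shifts the midday surplus.
plan, cost = dispatch(solar=[80, 20, 0], demand=[40, 40, 40])
```

A real MINLP would instead optimize storage and boiler decisions jointly over all periods, with integer variables for equipment selection, which is what allows it to maximize annual profit rather than follow a fixed priority rule.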
ARTICLE | doi:10.20944/preprints201803.0121.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: deep Kalman filter; simultaneous sensor integration and modelling (SSIM); GNSS/IMU integration; recurrent neural network; deep learning; long-short term memory (LSTM)
Online: 15 March 2018 (07:10:32 CET)
Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of the unknowns. Efficient integration of multiple sensors requires deep knowledge of their error sources, which is not trivial for complicated sensors such as an Inertial Measurement Unit (IMU). Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations have remained a challenge. In this paper, we develop a deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. In other words, we add a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. Our deep Kalman filter therefore outperforms the standard Kalman filter and reaches higher accuracy.
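The role of an IMU error model inside GNSS/IMU fusion can be sketched in one dimension. As a simple stand-in for the paper's learned (deep) error model, the sketch below models the IMU error as a slowly varying accelerometer bias appended to the state vector and lets the Kalman filter estimate it; all noise settings are illustrative assumptions.

```python
import numpy as np

def fuse(accel_meas, gnss_pos, dt=1.0):
    """1-D GNSS/IMU Kalman fusion with an explicit IMU error model.

    State is [position, velocity, accel_bias]. True acceleration is
    (measured - bias), so the bias enters position and velocity with a
    negative sign; GNSS observes position only.
    """
    x, P = np.zeros(3), np.eye(3)
    F = np.array([[1.0, dt, -0.5 * dt**2],
                  [0.0, 1.0, -dt],
                  [0.0, 0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt, 0.0])   # measured-acceleration input
    H = np.array([[1.0, 0.0, 0.0]])        # GNSS position measurement
    Q = np.diag([0.01, 0.01, 1e-4])        # small bias process noise
    R = np.array([[0.5]])                  # GNSS noise (illustrative)
    for a, z in zip(accel_meas, gnss_pos):
        x = F @ x + B * a                  # predict (IMU mechanization)
        P = F @ P @ F.T + Q
        y = z - H @ x                      # GNSS innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y                      # update
        P = (np.eye(3) - K @ H) @ P
    return x                               # [position, velocity, bias]

# Stationary receiver, accelerometer with a constant +0.2 bias: the filter
# recovers the bias and keeps the position estimate near zero.
x = fuse([0.2] * 300, [0.0] * 300)
```

The paper replaces this hand-specified bias model with a model learned during integration (a recurrent network), which is what the added "modelling step" refers to; the augmented-state filter above only illustrates why removing the IMU error improves positioning.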
ARTICLE | doi:10.20944/preprints202107.0651.v1
Subject: Behavioral Sciences, Applied Psychology Keywords: multiple measures synchronization; automatic device integration; open-source; PsychoPy; Unity
Online: 29 July 2021 (11:48:02 CEST)
Background: The human mind is multimodal. Yet most behavioral studies rely on century-old measures such as task accuracy and latency. To create a better understanding of human behavior and brain functionality, we should introduce other measures and analyze behavior from various aspects. However, it is technically complex and costly to design and implement experiments that record multiple measures. To address this issue, a platform is needed that allows multiple measures of human behavior to be synchronized. Method: This paper introduces an open-source platform named OpenSync, which can be used to synchronize multiple measures in neuroscience experiments. This platform helps to automatically integrate, synchronize and record physiological measures (e.g., electroencephalogram (EEG), galvanic skin response (GSR), eye-tracking, body motion, etc.), user input responses (e.g., from mouse, keyboard, joystick, etc.), and task-related information (stimulus markers). In this paper, we explain the structure and details of OpenSync and provide two case studies, in PsychoPy and Unity. Comparison with existing tools: Unlike proprietary systems (e.g., iMotions), OpenSync is free and can be used inside any open-source experiment design software (e.g., PsychoPy, OpenSesame, Unity, etc.; https://pypi.org/project/OpenSync/ and https://github.com/moeinrazavi/OpenSync_Unity). Results: Our experimental results show that the OpenSync platform is able to synchronize multiple measures with microsecond resolution.
ARTICLE | doi:10.20944/preprints202107.0032.v1
Subject: Social Sciences, Econometrics & Statistics Keywords: global value chain; global economic integration; nestedness; evolutionarily stable equilibrium
Online: 1 July 2021 (14:20:58 CEST)
Nestedness is a structural feature conducive to system stability that is formed by the co-evolution of species in mutualistic ecosystems; it reflects an ecosystem's ability to return to a stable state after being disturbed. The co-opetition relationships and value flows between industrial sectors in the global value chain resemble a mutualistic ecosystem, and the pattern of the global economic system is always changing in dynamic equilibrium. In this article, nestedness theory is used to define the generalist and specialist sectors in the global value chain and to analyze changes in the global supply pattern. We then study the mechanism by which the global economic system reaches a stable equilibrium and the role of different sectors in the stability of the economic system, so as to provide countermeasures for enhancing the stability of the global economic system. Finally, the domestic, export and import trade networks of each country are extracted, and an econometric model is designed to analyze how the microstructure of the production system affects a country’s macroeconomic performance, leading to the conclusion that the stability of the international trade network is crucial to a country's economic development.
Subject: Social Sciences, Sociology Keywords: PTSD; acculturation stress; transnational families; caregiving; generational trauma; immigrant integration
Online: 31 May 2021 (09:21:20 CEST)
This paper investigates the mental health stressors experienced by Central American youth immigrants and asylum seekers, including unaccompanied minors, surveyed in the U.S. in 2017. This population is hard to reach, vulnerable, and disproportionately exposed to trauma from a young age. They face numerous challenges to mental health, and increased psychopathological risk, exacerbated by high levels of violence and low state-capacity in sending countries, restrictive immigration policies, the fear of deportation for themselves and their family members, and the pressure to integrate once in the U.S. Using survey data and the validated PHQ-9 questionnaire and Child PTSD Symptom Scale (CPSS), we find that Central American youth have seen improvements in their self-reported mental health after migrating to the U.S. but remain at risk of further trauma exposure, depression, and PTSD. They exhibit a disproportionate likelihood of having lived through traumatizing experiences that put Central American immigrants at higher risk for psychological distress and disorders and may also create obstacles to integration that in turn create new stressors that compound with PTSD and depression. PTSD, depression, or anxiety can be minimized through programs that aid their integration and mental health.
ARTICLE | doi:10.20944/preprints202104.0580.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Cybersecurity; supply chains; IoT systems; systems integration, real scenarios analysis
Online: 21 April 2021 (12:33:59 CEST)
The specific demands inherent to supply chains built upon large IoT systems make it essential to design a coordinated framework for cyber resilience provisioning, intended to guarantee trusted supply chains of ICT systems built upon distributed, dynamic, potentially insecure and heterogeneous ICT infrastructures. As such, the proposed solution is envisioned to deal with the whole set of supply chain system components, from the IoT ecosystem to the infrastructure connecting them, addressing security and privacy functionalities related to risk and vulnerability management, accountability and mitigation strategies, as well as security metrics and evidence-based security assurance. In this paper we present FISHY, a preliminary architecture designed to orchestrate both existing and beyond-state-of-the-art security appliances in composed ICT scenarios, leveraging the capabilities of programmable network and IT infrastructure through the seamless orchestration and instantiation of novel security services, both in real time and proactively. The paper also includes a thorough business analysis that goes far beyond the technical benefits of a potential FISHY adoption, as well as three real-world use cases that strongly support the envisioned benefits of its adoption.
ARTICLE | doi:10.20944/preprints202104.0425.v1
Subject: Engineering, Automotive Engineering Keywords: Tolls; INTEGRATION software; microscopic traffic simulation; traveler value of time
Online: 15 April 2021 (16:52:34 CEST)
Unique analytical challenges arise when drivers, who face a route choice between a toll lane and a set of free lanes, have different values of time. The most complex situation is one in which multiple sub-populations of drivers exist, each with its own unique mean and coefficient of variation of value of time. This situation, when embedded within a larger network, cannot be tackled using existing planning models and consequently is usually only approximated. This paper examines these different approximations, the resulting numerical solutions and the implications of these approximations for the estimate of the number of expected toll lane users. The paper also shows how this problem can be solved using a combined traffic assignment/simulation model. The first part of this paper develops an analytical formulation for solving the toll lane scenario using value-of-time representations ranging from the simplest to the most complex. It is shown that one of the most critical issues is determining who the marginal users of the toll lane are at each level of usage, as the perceived disutility of the last marginal toll lane user depends dynamically upon that driver’s value of time. Analytical formulations based on these different approximations are then solved numerically in the second part of the paper. These numerical solutions show that significantly different lane use estimates result, depending upon the representation of value of time. Consequently, it is clear that solving this problem with the fewest approximations is of both theoretical and practical importance. The third part of the paper illustrates the solution to the toll lane problem, with each level of approximation, using a combined traffic assignment/simulation model. The resulting simulated estimates of toll lane usage for each case match both the relative and absolute trends found in the analytical solutions.
However, the solution using the assignment/simulation model is not only much faster and simpler to obtain, but is also scalable in both size and complexity. The additional complexities associated with a less approximate representation of value of time should therefore be incorporated in all future assessments of toll lane facilities, whether they are analyzed analytically or through simulation.
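In the simplest representation, the "marginal user" idea reduces to a threshold: with fixed travel times, the driver whose value of time exactly offsets the toll is the marginal toll-lane user, and everyone with a higher value of time buys in. The numbers and class breakdown below are hypothetical; in a real network the free-lane time depends on this very split, which is why the paper turns to a combined assignment/simulation model.

```python
def critical_vot(toll_dollars, free_time_min, toll_time_min):
    """Value of time ($/hr) of the marginal toll-lane user: the driver
    for whom the toll exactly equals the value of the time saved."""
    time_saved_hr = (free_time_min - toll_time_min) / 60.0
    return toll_dollars / time_saved_hr

def toll_lane_demand(vot_classes, toll_dollars, free_time_min, toll_time_min):
    """Vehicles choosing the toll lane: every class whose mean value of
    time exceeds the marginal user's. vot_classes: [(vot_$per_hr, vehicles)]."""
    v_star = critical_vot(toll_dollars, free_time_min, toll_time_min)
    return sum(veh for vot, veh in vot_classes if vot > v_star)

# Hypothetical case: a $2 toll saving 8 minutes puts the marginal user at
# $15/hr, so only the $30/hr class uses the toll lane.
classes = [(30.0, 1000), (14.0, 2500), (5.0, 1500)]
demand = toll_lane_demand(classes, 2.0, 20.0, 12.0)
```

Representing each class by its mean alone is the crudest approximation the paper discusses; adding within-class variation in value of time splits classes across the two routes and changes the demand estimate.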
REVIEW | doi:10.20944/preprints202008.0133.v1
Subject: Life Sciences, Virology Keywords: epidemic; viral sequences; genomics; metadata; data harmonization; integration and search
Online: 5 August 2020 (10:58:27 CEST)
With the outbreak of COVID-19, the research community is making unprecedented efforts to better understand and mitigate the effects of the pandemic. In this context, we review the data integration efforts required for accessing and searching genome sequences and metadata of SARS-CoV-2, the virus responsible for the COVID-19 disease, which have been deposited in the most important repositories of viral sequences. Organizations that were already present in the virus domain are now dedicating special attention to the COVID-19 pandemic, by emphasizing specific SARS-CoV-2 data and services. At the same time, novel organizations and resources were born in this critical period to serve specifically the purposes of COVID-19 mitigation, while laying the research groundwork for countering possible future pandemics. Accessibility and integration of viral sequence data, possibly in conjunction with the human host genotype and clinical data, are paramount to better understand the COVID-19 disease and mitigate its effects.
ARTICLE | doi:10.20944/preprints201911.0035.v1
Subject: Engineering, Control & Systems Engineering Keywords: smart environment; smart sensors; distributed architectures; object detection; information integration
Online: 4 November 2019 (03:45:21 CET)
Object recognition is a necessary task in smart city environments. This recognition can be used in processes such as the reconstruction of the environment map or the intelligent navigation of vehicles. This paper proposes an architecture that integrates heterogeneous distributed information to recognize objects in intelligent environments. The architecture is based on the IoT/Industry 4.0 model to interconnect the devices, called Smart Resources. Smart Resources can process local sensor data and send information to other devices. These other devices can be located in the same operating range (the Edge), in the same intranet (the Fog), or on the Internet (the Cloud). Smart Resources must have an intelligent layer in order to be able to process the information. A system with two Smart Resources equipped with different image sensors has been implemented to validate the architecture. Experiments show that the integration of information increases the certainty of object recognition by between 2% and 4%. Consequently, in the field of intelligent environments, it seems appropriate to provide devices not only with intelligence, but also with the capability to collaborate closely with other devices.
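The reported gain from integrating two devices' detections can be illustrated with the textbook independent-evidence (naive Bayes) fusion rule for two confidence scores; this is a generic illustration, not the architecture's actual fusion method.

```python
def fuse_confidences(p1, p2):
    """Combine two independent detectors' confidences for the same object
    by multiplying their odds (naive Bayes fusion with a uniform prior)."""
    odds = (p1 / (1.0 - p1)) * (p2 / (1.0 - p2))
    return odds / (1.0 + odds)

# Two moderately confident detections reinforce each other: the fused
# confidence exceeds either sensor's confidence alone.
fused = fuse_confidences(0.8, 0.7)
```

Under this rule, a second sensor that agrees always raises the certainty above either individual score, which is consistent in spirit with the 2-4% improvement the experiments report.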
ARTICLE | doi:10.20944/preprints201906.0075.v1
Subject: Engineering, Other Keywords: forest tending; group decision support system; process management; data integration
Online: 10 June 2019 (10:32:09 CEST)
In this study, the decision-making process management of forest tending in the forestry business is decentralized, and forest tending decision-making activities at different points in time are integrated by decision makers at different geographical locations. The decision-making process was analyzed and optimized from a system perspective. Based on the optimized decision-making process, a forest tending business group decision support system (FTGDSS) was established. We first reviewed and discussed the characteristics and development of the forest tending business and forestry decision support systems. Business Process Modeling Notation was used to draw a current-state flow chart of the forest tending business and to identify important decision points in the tending decision-making process. We also analyzed the content and attributes of each decision point, and described in detail the system structure, functional framework, knowledge base structure, and reasoning algorithm of FTGDSS. Finally, FTGDSS was evaluated along the two dimensions of the technology adoption model. FTGDSS integrates decision-making activities at different levels of time and space, historical tending data, business plans, and decision-makers' management tendencies into the decision-making process, and automatically extracts decision-making data from the forest business process management enterprise resource planning system (Smartforest), which improves the ease of use of the decision support system (DSS). It also improves the quality of forest tending decisions and enables the DSS to better support multi-target management strategies.
ARTICLE | doi:10.20944/preprints201904.0077.v1
Subject: Mathematics & Computer Science, Other Keywords: integrated information theory; differentiation; integration; complexity; consciousness; computational; IIT; Phi
Online: 8 April 2019 (08:58:29 CEST)
Integrated information theory (IIT) proposes a measure of integrated information (Φ) to capture the level of consciousness for a physical system in a given state. Unfortunately, calculating Φ itself is currently only possible for very small model systems, and far from computable for the kinds of systems typically associated with consciousness (brains). Here, we consider several proposed measures and computational approximations, some of which can be applied to larger systems, and test if they correlate well with Φ. While these measures and approximations capture intuitions underlying IIT and some have had success in practical applications, it has not been shown that they actually quantify the type of integrated information specified by the latest version of IIT. In this study, we evaluated these approximations and heuristic measures, based not on practical or clinical considerations, but rather based on how well they estimate the Φ values of model systems. To do this, we simulated networks consisting of 3–6 binary linear threshold nodes randomly connected with excitatory and inhibitory connections. For each system, we then constructed the system’s state transition probability matrix (TPM), as well as its state transition matrix (STM) over time for all possible initial states. From these matrices, we calculated, approximations to Φ, and measures based on state differentiation, state entropy, state uniqueness, and integrated information. All measures were correlated with Φ in a state dependent and state independent manner. Our findings suggest that Φ can be approximated closely in small binary systems by using one or more of the readily available approximations (r > 0.95), but without major reductions in computational demands. Furthermore, Φ correlated strongly with measures of signal complexity (LZ, rs = 0.722), decoder based integrated information (Φ*, rs = 0.816), and state differentiation (D1, rs = 0.827), on the system level (state independent). 
These measures could allow for efficient estimation of Φ on a group level, or serve as accurate predictors of low, but not high, Φ systems. While it is uncertain whether the results extend to larger systems or to systems with other dynamics, we stress the importance of rigorously testing any measure intended as a practical alternative to Φ in an environment where the ground truth can be established.
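The signal-complexity measure referenced above (LZ) is typically estimated by counting phrases in an incremental Lempel-Ziv parsing of a binarized state sequence. A minimal sketch using the LZ78 parsing (the abstract does not specify which LZ variant was used, so this is illustrative only):

```python
def lz78_phrase_count(s: str) -> int:
    """Count phrases in an LZ78 parsing of a symbol string.

    Each phrase is the shortest prefix of the remaining input not seen
    before; a more complex signal parses into more phrases.
    """
    phrases = set()
    current = ""
    count = 0
    for symbol in s:
        current += symbol
        if current not in phrases:
            phrases.add(current)
            count += 1
            current = ""
    if current:  # unfinished trailing phrase
        count += 1
    return count

# A periodic binary state sequence compresses well ...
periodic = "01" * 16
# ... while an aperiodic (Thue-Morse-like) one of equal length does not.
aperiodic = "0110100110010110" * 2
print(lz78_phrase_count(periodic), lz78_phrase_count(aperiodic))
```

Normalizing such phrase counts by sequence length gives the complexity rate usually reported in consciousness studies.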
ARTICLE | doi:10.20944/preprints201804.0352.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: network inference; data integration; regulatory networks; transcription factor; gene expression
Online: 27 April 2018 (06:34:12 CEST)
Data generation using high-throughput technologies has led to the accumulation of diverse types of molecular data. These data have different types (discrete, real, string, etc.) and occur in various formats and sizes. Datasets including gene expression, miRNA expression, protein-DNA binding data (ChIP-Seq/ChIP-chip), mutation data (copy number variation, single nucleotide polymorphisms), GO annotations, protein-protein interactions and disease-gene associations are some of the commonly used genomic datasets for studying biological processes. Each of them provides a unique, complementary and partly independent view of the genome and hence embeds essential information about its regulatory mechanisms. In order to understand the functions of genes and proteins and to analyze the mechanisms arising out of their interactions, the information provided by each of these datasets individually may not be sufficient. Therefore, integrating these multi-omic data and inferring regulatory interactions from the integrated dataset provides system-level biological insight for predicting gene functions and their phenotypic outcomes. To study genome functionality through interaction networks, different methods have been proposed for collective mining of information from an integrated dataset. We survey here data integration approaches using state-of-the-art techniques such as network integration, Bayesian networks, regularized regression (LASSO) and multiple kernel learning.
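Among the surveyed techniques, regularized regression is compact enough to sketch directly. The example below is a hedged illustration on synthetic data (not any dataset from the survey): a coordinate-descent LASSO selects which candidate transcription factors best explain a target gene's expression.

```python
import numpy as np

def soft_threshold(rho, lam):
    """Soft-thresholding operator, the closed-form coordinate update for LASSO."""
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO: min (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            z = (X[:, j] ** 2).mean()
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# Synthetic example: 2 true regulators among 10 candidate TFs.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))               # candidate TF expression
true = np.zeros(p); true[1], true[6] = 2.0, -1.5
y = X @ true + 0.1 * rng.standard_normal(n)   # target gene expression
beta = lasso_cd(X, y, lam=0.2)
print("selected regulators:", np.nonzero(np.abs(beta) > 0.1)[0])
```

The L1 penalty drives the coefficients of irrelevant regulators exactly to zero, which is why LASSO is attractive for sparse regulatory network inference.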
ARTICLE | doi:10.20944/preprints201802.0025.v1
Subject: Engineering, Other Keywords: process integration; fuel gas network synthesis; block superstructure; optimization; MINLP
Online: 5 February 2018 (03:33:19 CET)
Fuel gas network (FGN) synthesis is a systematic method for reducing fresh fuel consumption in a chemical plant. In this work, we address the synthesis of fuel gas networks using the block superstructure originally proposed for process design and intensification (Demirel et al.). Instead of a classical source-pool-sink superstructure, we consider a superstructure with multiple feed and product streams. These blocks interact with each other through direct flows that connect a block with its adjacent blocks and through jump flows that connect a block with all other blocks. The blocks with feed streams are viewed as fuel sources and the blocks with product streams are regarded as fuel sinks. Additional blocks can be added as pools when there exist intermediate operations between source blocks and sink blocks. These blocks can be arranged in an I × J two-dimensional grid with I = 1 for problems without pools, or I = 2 for problems with pools; J is determined by the maximum number of pools/sinks. With this representation, we formulate the fuel gas network synthesis problem as a mixed-integer nonlinear programming (MINLP) problem to optimally design a fuel gas network with minimal total annual cost. We present a real-life case study from an LNG plant to demonstrate the capability of the proposed approach.
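The grid bookkeeping described above (direct flows to adjacent blocks, jump flows to all other block pairs) can be sketched in a few lines; this is our own illustrative construction, not the authors' implementation:

```python
from itertools import product

def block_flows(I, J):
    """Enumerate direct flows (between orthogonally adjacent blocks) and
    jump flows (between all remaining ordered block pairs) of an I x J grid."""
    blocks = list(product(range(I), range(J)))
    direct, jump = [], []
    for a, b in product(blocks, blocks):
        if a == b:
            continue
        adjacent = abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1
        (direct if adjacent else jump).append((a, b))
    return direct, jump

# I = 2 rows (pools present), J = 3 columns:
direct, jump = block_flows(2, 3)
print(len(direct), len(jump))
```

These arc sets would then index the flow variables of the MINLP formulation.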
ARTICLE | doi:10.20944/preprints202105.0428.v1
Subject: Behavioral Sciences, Applied Psychology Keywords: Agenda 2030; sustainability goals; national environmental quality objectives; industry; integration; implementation
Online: 19 May 2021 (07:36:12 CEST)
This article examines the implementation of the Swedish national environmental quality objectives and discusses what can be learnt for the equivalent process for the set of global UN 2030 goals (SDGs), established in 2015. The empirical basis is a study of 50 large companies in Sweden and their use of these objectives in their policy formulation. The SDGs are crafted with a broader approach than the Swedish national environmental quality objectives. Therefore, the SDGs probably better reflect the agenda of the business community, since they have a global character, cover the whole spectrum of important sustainability issues and provide a mutual agenda for the business community world-wide. More than 90 percent of the large companies in the study have explicitly committed themselves to the SDGs only 1-2 years after they were published, whereas similar commitments hardly exist for the national environmental quality objectives even 20 years after their establishment. A large majority of the large companies in this study know about the SDGs, have actively endorsed them, and have started to adjust their activities accordingly. In the end, the results of these endorsements remain to be seen.
ARTICLE | doi:10.20944/preprints202102.0535.v1
Subject: Engineering, Automotive Engineering Keywords: Connected vehicles; C-V2X; V2V; INTEGRATION software; traffic simulation; communication modeling
Online: 23 February 2021 (19:38:56 CET)
The transportation system has evolved into a complex cyber-physical system with the introduction of wireless communication and the emergence of connected travelers and connected automated vehicles. Such applications create an urgent need to develop high-fidelity transportation modeling tools that capture the mutual interaction of the communication and transportation systems. This paper addresses this need by developing a high-fidelity, large-scale, dynamic and integrated traffic and direct cellular vehicle-to-vehicle and vehicle-to-infrastructure (collectively known as V2X) modeling tool. The unique contributions of this work are (1) we developed a scalable analytical communication model that captures packet movement at the millisecond level; (2) we coupled the communication and traffic simulation models in real-time to develop a fully integrated dynamic connected vehicle modeling tool; and (3) we developed scalable approaches that adjust the frequency of model coupling depending on the number of concurrent vehicles in the network. The proposed scalable modeling framework is demonstrated by running on the Los Angeles downtown network considering the morning peak hour traffic demand (145,000 vehicles), running faster than real-time on a regular personal computer (1.5 hours to run 1.86 hours of simulation time). Spatiotemporal estimates of packet delivery ratios for downtown Los Angeles are presented. This novel modeling framework provides a breakthrough in the development of urgently needed tools for large-scale testing of Direct C-V2X enabled applications.
ARTICLE | doi:10.20944/preprints202102.0365.v1
Subject: Life Sciences, Biochemistry Keywords: Cancer subtype detection; Multi-omics data; Data integration; Autoencoder; Survival analysis
Online: 17 February 2021 (10:09:51 CET)
A heterogeneous disease like cancer is activated through multiple pathways and different perturbations. Depending upon the activated pathway(s), patients' survival varies significantly and different drugs show different efficacy. Therefore, cancer subtype detection using genomics-level data is a significant research problem. Subtype detection is often a complex problem, and in most cases needs multi-omics data fusion to achieve accurate subtyping. Different data fusion and subtyping approaches have been proposed, such as kernel-based fusion, matrix factorization, and deep learning autoencoders. In this paper, we compared the performance of different deep learning autoencoders for cancer subtype detection. We performed cancer subtype detection on four different cancer types from The Cancer Genome Atlas (TCGA) datasets using four autoencoder implementations. We also predicted the optimal number of subtypes in a cancer type using the silhouette score. We observed that the detected subtypes exhibit significant differences in survival profiles. Furthermore, we also compared the effect of feature selection and similarity measures on subtype detection. To evaluate the results obtained, we selected the Glioblastoma multiforme (GBM) dataset and identified the differentially expressed genes in each of the subtypes identified by the autoencoders; the results coincide well with other genomic studies and can be corroborated with the involved pathways and biological functions. This shows that the results from the autoencoders, obtained through the interaction of different datatypes of cancer, can be used for the prediction and characterization of patient subgroups and survival profiles.
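The silhouette-based choice of the number of subtypes can be sketched independently of the autoencoders: cluster the (already encoded) feature matrix for several candidate k and keep the k with the highest mean silhouette. A self-contained toy version on synthetic data (plain k-means, not the authors' pipeline or TCGA data):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's k-means on an embedded feature matrix X (n x d)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(0)
    return labels

def mean_silhouette(X, labels):
    """Mean silhouette score s(i) = (b - a) / max(a, b)."""
    n = len(X)
    D = np.sqrt(((X[:, None, :] - X[None]) ** 2).sum(-1))
    scores = []
    for i in range(n):
        same = (labels == labels[i]); same[i] = False
        a = D[i, same].mean() if same.any() else 0.0
        b = min(D[i, labels == c].mean()
                for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two well-separated synthetic "subtypes" in a 2-D embedding:
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(8, 1, (40, 2))])
best_k = max(range(2, 5), key=lambda k: mean_silhouette(X, kmeans(X, k)))
print("optimal number of subtypes:", best_k)
```

In the paper's setting, X would be the autoencoder's latent representation of the fused multi-omics data rather than synthetic points.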
ARTICLE | doi:10.20944/preprints202008.0250.v1
Subject: Social Sciences, Organizational Economics & Management Keywords: Simmelian ties, enterprise innovation performance, knowledge capturing, knowledge integration, network routines
Online: 11 August 2020 (04:17:39 CEST)
In an innovation-driven business environment, cross-border access to resources is important for companies to improve innovation capabilities and development performance. Previous research shows that there are barriers to cross-domain communication among alliance firms because of the restriction of multidimensional ties and dyads. Simmelian ties, a form of alliance network with ternary connections, effectively restrain opportunism and self-interest in the cooperation process and play a crucial role in evaluating innovation-related performance in corporations. Building on Simmelian tie theory, this paper constructs a theoretical framework and proposes corresponding research hypotheses relating Simmelian ties to enterprise innovation performance. We designed questionnaires, collected data and conducted empirical analyses to test the theoretical model and hypotheses. The results show that: (1) Simmelian ties generally have a positive impact on enterprise innovation performance. (2) Knowledge capturing and knowledge integration play a partial intermediary role between Simmelian ties and enterprise innovation performance, and the mediating chain formed by the two variables plays a serial mediating role in the effect. (3) Network routines significantly and positively moderate the relationship between Simmelian ties and knowledge capturing; the positive relationship between Simmelian ties and enterprise innovation performance is likewise positively moderated by network routines. The conclusions of this study are meaningful for companies seeking to establish Simmelian ties, improve knowledge management capabilities and further promote enterprise innovation performance.
Subject: Engineering, Electrical & Electronic Engineering Keywords: Injection molding, smart textiles, e-textiles, integration of electronics in textiles
Online: 26 August 2019 (13:48:04 CEST)
The protection of electronics against environmental influences and mechanical loads is important for the integration of conventional electronics in textile conductive tapes. For this purpose, the sensors on the tapes are locally overmolded with plastic. This process step is realized in the injection molding process. The molding compound is selected depending on the field of application, the parameters of the manufacturing process and the textile tape properties. We have designed a mould for liquid silicone rubber (LSR) as well as for the textile and electronic inserts. The cavities are sealed by local compression of the textile, and the two inserts are positioned with positioning pins, which represents the main aspect of the mould design. The sampling tool and the process parameter optimization are mainly based on the material properties of the silicone and the mechanical sensitivity of the inserts. To reduce the deformation of the circuit boards by the melt front and to ensure the functionality of the electronics, a low-pressure process is used.
ARTICLE | doi:10.20944/preprints201905.0193.v1
Subject: Earth Sciences, Geophysics Keywords: archaeological geophysics; magnetic methods; ground penetrating radar; tunnel detection; data integration
Online: 15 May 2019 (11:14:34 CEST)
The UNESCO World Heritage site of Hadrian's Villa lies over the Colli Albani volcanic district near Rome. Magnetic, paleomagnetic, radar, and electric resistivity surveys were performed in the Plutonium–Inferi sector to detect buried buildings and outline a segment of the underground system of tunnels that links different zones of the villa. In particular, a paleomagnetic analysis of the bedrock unit allowed us to accomplish an accurate geomagnetic field modelling and to characterize the archaeological sources of the magnetic field anomalies. We used a computer-assisted forward modelling procedure to generate a structural model of the sources of the observed anomalies. The intrinsic ambiguity of the magnetic field modelling was reduced with the support of ground penetrating radar amplitude slices and an analysis of radar and electric resistivity profiles. The bedrock lithology in this area is an ignimbrite tuff characterized by abundant iron oxides. The high-amplitude magnetic anomalies observed in the Plutonium–Inferi area are due to strong bedrock remanent magnetization and susceptibility contrasts between the topsoil infill of cavities and the surrounding tuff. The resulting magnetization model of the Plutonium–Inferi complex shows that the observed anomalies are mostly due to the presence of tunnels, skylights and a system of ditches excavated in the tuff.
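The forward modelling step rests on superposing the fields of simple magnetized sources. A hedged sketch (a single point dipole with an assumed moment, not the villa's actual magnetization model):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_bz(obs, src, m):
    """Vertical component of the magnetic field of a point dipole.

    obs, src: observation/source positions (m); m: dipole moment (A*m^2).
    B = mu0/(4*pi) * (3*(m.r_hat)*r_hat - m) / r^3
    """
    r = obs - src
    rn = np.linalg.norm(r)
    r_hat = r / rn
    B = MU0 / (4 * np.pi) * (3 * np.dot(m, r_hat) * r_hat - m) / rn ** 3
    return B[2]

# Surface profile over a vertically magnetized body buried at 3 m depth,
# a stand-in for a tunnel-induced magnetization contrast:
src = np.array([0.0, 0.0, -3.0])
m = np.array([0.0, 0.0, 100.0])  # assumed moment, A*m^2
xs = np.linspace(-10, 10, 81)
profile = [dipole_bz(np.array([x, 0.0, 0.0]), src, m) for x in xs]
print("peak anomaly (nT):", max(profile) * 1e9)
```

A realistic model sums many such elementary sources (or uses prisms) and adjusts their geometry until the computed profile matches the observed anomaly, which is the trial-and-error loop the computer-assisted procedure automates.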
ARTICLE | doi:10.20944/preprints201811.0337.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Web API; SPARQL; micro-service; Data Integration; Linked Data; REST; Biodiversity
Online: 14 November 2018 (10:59:31 CET)
In recent years, Web APIs have become a de facto standard for exchanging machine-readable data on the Web. Despite this success, though, they often fail in making resource descriptions interoperable because they rely on proprietary vocabularies that lack formal semantics. The Linked Data principles similarly seek the massive publication of data on the Web, yet with the specific goal of ensuring semantic interoperability. Given their complementary goals, it is commonly admitted that cross-fertilization could stem from the automatic combination of Linked Data and Web APIs. Towards this goal, in this paper we leverage micro-service architectural principles to define a SPARQL micro-service architecture, aimed at querying Web APIs using SPARQL. A SPARQL micro-service is a lightweight SPARQL endpoint that provides access to a small, resource-centric, virtual graph. In this context, we argue that full SPARQL Query expressiveness can be supported efficiently without jeopardizing server availability. Furthermore, we demonstrate how this architecture can be used to dynamically assign dereferenceable URIs to Web API resources that do not have URIs beforehand, thus literally ``bringing'' Web APIs into the Web of Data. We believe that the emergence of an ecosystem of SPARQL micro-services published by independent providers would enable Linked Data-based applications to easily glean pieces of data from a wealth of distributed, scalable and reliable services. We describe a working prototype implementation and finally illustrate the use of SPARQL micro-services in the context of two real-life use cases related to the biodiversity domain, developed in collaboration with the French National Museum of Natural History.
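The core lifting step of a SPARQL micro-service can be sketched with the standard library alone: a Web API JSON record is turned into a small resource-centric graph with a minted dereferenceable URI, over which triple patterns can be answered. All URIs and the field-to-property mapping below are illustrative assumptions, not the prototype's actual vocabulary:

```python
# A canned JSON payload stands in for an HTTP response from a Web API.
import json

BASE = "http://example.org/ld/taxon/"  # hypothetical URI namespace
PROPERTY_MAP = {                       # API field -> RDF property (assumed)
    "name": "http://schema.org/name",
    "rank": "http://example.org/vocab#rank",
}

def lift(api_record: str):
    """Translate one Web API JSON record into a virtual graph of triples."""
    data = json.loads(api_record)
    subject = BASE + str(data["id"])   # mint a dereferenceable URI
    return [(subject, PROPERTY_MAP[k], v)
            for k, v in data.items() if k in PROPERTY_MAP]

def match(graph, s=None, p=None, o=None):
    """Answer a single SPARQL-like triple pattern (None = variable)."""
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

record = '{"id": 42, "name": "Delphinus delphis", "rank": "species"}'
graph = lift(record)
print(match(graph, p="http://schema.org/name"))
```

A real SPARQL micro-service would evaluate full SPARQL algebra over such a virtual graph and fetch the JSON live from the remote API.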
ARTICLE | doi:10.20944/preprints201803.0203.v1
Subject: Physical Sciences, General & Theoretical Physics Keywords: DNA; DNA nanotechnology; patchy particles; Wertheim theory; thermodynamic integration; phase coexistence
Online: 25 March 2018 (16:14:27 CEST)
We present a numerical study in which large-scale bulk simulations of self-assembled DNA constructs have been carried out with a realistic coarse-grained model. The investigation aims at obtaining a precise, albeit numerically demanding, estimate of the free energy for such systems. We then, in turn, use these accurate results to validate a recently proposed theoretical approach that builds on a liquid-state theory, the Wertheim theory, to compute the phase diagram of all-DNA fluids. This hybrid theoretical/numerical approach, based on the lowest order virial expansion and a nearest-neighbor DNA model, can provide, in an undemanding way, a thermodynamic description of DNA associating fluids that is in semi-quantitative agreement with experiments. We show that the predictions of such scheme are as accurate as the ones obtained with more sophisticated methods. We also demonstrate the flexibility of the approach by incorporating non-trivial additional contributions that go beyond the nearest-neighbor model to compute the DNA hybridization free energy.
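For f identical bonding sites per particle, the Wertheim first-order theory invoked above reduces to a mass-action law for the fraction X of unbonded sites and a closed-form bonding free energy. A sketch with illustrative (not fitted) parameter values:

```python
import math

def unbonded_fraction(rho, delta, f):
    """Wertheim TPT1 mass-action law: f*rho*delta*X^2 + X - 1 = 0.

    rho: number density, delta: bond-volume integral of the site-site
    interaction, f: identical bonding sites per particle.
    """
    a = f * rho * delta
    return (-1.0 + math.sqrt(1.0 + 4.0 * a)) / (2.0 * a)

def bonding_free_energy(rho, delta, f):
    """Bonding contribution per particle: beta*f_bond = f*(ln X - X/2 + 1/2)."""
    X = unbonded_fraction(rho, delta, f)
    return f * (math.log(X) - X / 2.0 + 0.5)

# Illustrative values for a tetravalent DNA construct (f = 4 sticky ends):
rho, delta, f = 0.2, 50.0, 4
X = unbonded_fraction(rho, delta, f)
print(f"unbonded site fraction X = {X:.3f}, "
      f"beta*f_bond = {bonding_free_energy(rho, delta, f):.3f}")
```

In the all-DNA setting, delta would be computed from the hybridization free energy of the sticky ends via the nearest-neighbor model, which is exactly where the paper's non-trivial additional contributions enter.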
ARTICLE | doi:10.20944/preprints201611.0002.v2
Subject: Engineering, Control & Systems Engineering Keywords: wind prediction; wind estimation; UAS; wind shear; gust; multi-platform integration
Online: 18 January 2017 (09:44:54 CET)
This paper presents a system for the identification of wind features, such as gusts and wind shear. These are of particular interest in the context of energy-efficient navigation of small Unmanned Aerial Systems (UAS). The proposed system generates real-time wind vector estimates and uses a novel algorithm to generate wind field predictions. Estimations are based on the integration of an off-the-shelf navigation system and airspeed readings in a so-called direct approach. Wind predictions use atmospheric models to characterize the wind field with different statistical analyses. During the prediction stage, the system is able to incorporate, in a big-data approach, wind measurements from previous flights in order to enhance the approximations. Wind estimates are classified and fitted to a Weibull probability density function. A genetic algorithm (GA) is utilized to determine the shape and scale parameters of the distribution, which are employed to determine the most probable wind speed at a given position. The system uses this information to characterize a wind shear or a discrete gust, and utilizes Gaussian process regression to characterize continuous gusts. Knowledge of the wind features is crucial for computing energy-efficient trajectories at low cost and payload, and the system provides a solution that does not require any additional sensors. The system architecture follows a modular decentralized approach, in which the main parts of the system are separated into modules and the exchange of information is managed by a communication handler to enhance upgradeability and maintainability. Validation is done by providing preliminary results of both simulations and software-in-the-loop testing. Telemetry data collected from real flights, performed in the Seville metropolitan area in Andalusia (Spain), were used for testing. Results show that wind estimations and predictions can be calculated at 1 Hz and a wind map can be updated at 0.4 Hz.
Predictions show a convergence time with a 95% confidence interval of approximately 30 s.
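The Weibull-fitting step can be sketched as a tiny elitist genetic algorithm maximizing the Weibull log-likelihood of synthetic wind-speed samples; the GA design and all parameter values below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def weibull_loglik(v, k, lam):
    """Log-likelihood of wind speeds v under a Weibull(k, lam) density."""
    if k <= 0 or lam <= 0:
        return -np.inf
    z = v / lam
    return len(v) * np.log(k / lam) + (k - 1) * np.log(z).sum() - (z ** k).sum()

def ga_fit_weibull(v, pop_size=60, generations=80, seed=0):
    """Tiny elitist GA over (shape k, scale lam) with decaying mutation."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform([0.5, 1.0], [5.0, 20.0], size=(pop_size, 2))
    for g in range(generations):
        fitness = np.array([weibull_loglik(v, k, lam) for k, lam in pop])
        elite = pop[np.argsort(fitness)[-pop_size // 4:]]   # keep top 25%
        sigma = 0.5 * (1 - g / generations) + 0.01          # decaying mutation
        children = elite[rng.integers(len(elite), size=pop_size - len(elite))]
        children = children + rng.normal(0, sigma, children.shape)
        pop = np.vstack([elite, children])
    k, lam = max(pop, key=lambda p: weibull_loglik(v, *p))
    return k, lam

# Synthetic wind-speed sample with known parameters (inverse-CDF sampling):
rng = np.random.default_rng(42)
true_k, true_lam = 2.0, 8.0
v = true_lam * (-np.log(rng.uniform(size=2000))) ** (1 / true_k)
k, lam = ga_fit_weibull(v)
print(f"estimated shape k = {k:.2f}, scale lam = {lam:.2f}")
```

The fitted distribution then yields the most probable wind speed at a location as its mode, lam * ((k - 1) / k) ** (1 / k) for k > 1.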
ARTICLE | doi:10.20944/preprints202203.0386.v1
Subject: Engineering, Mechanical Engineering Keywords: CFD; PIV; experimental fluid mechanics; pressure calculation; SIMPLE; Reynolds stresses; measurement integration
Online: 30 March 2022 (04:40:11 CEST)
Calculation of the pressure field on and around solid bodies exposed to external flow is of paramount importance to a number of engineering applications. However, conventional pressure measurement techniques are inherently linked to problems principally caused by their point-wise and/or intrusive nature. In the present paper, we attempt to calculate the time-averaged two-dimensional pressure field by integrating PIV (Particle Image Velocimetry) velocity measurements into a CFD code and modifying them by the respective correction step of the SIMPLE algorithm. Boundary conditions are applied from the PIV data as a three-layer area of constant velocities adjacent to the boundaries. A novel characteristic of the approach is the straightforward inclusion of the Reynolds stresses into the source terms of the momentum equations, calculated directly from the PIV statistics. The methodology is applied to three regions of the symmetry plane parallel to the main boundary layer flow past a surface-mounted cube. In spite of deviations from the planar 2D flow assumption, the derived pressure fields and the adjusted velocity fields are found to be reliable, while the intrinsic turbulent nature of the flow is accounted for without modelling of the Reynolds stresses.
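At the core of recovering pressure from velocity data is a pressure Poisson equation. The sketch below solves such an equation with plain Jacobi iteration on a manufactured solution; it is a minimal stand-in, not the SIMPLE correction step or the PIV data used in the paper:

```python
import numpy as np

def solve_poisson(f, h, iters=5000):
    """Jacobi iteration for the 2-D Poisson equation lap(p) = f
    with homogeneous Dirichlet boundaries (p = 0 on the walls)."""
    p = np.zeros_like(f)
    for _ in range(iters):
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                                + p[1:-1, 2:] + p[1:-1, :-2]
                                - h * h * f[1:-1, 1:-1])
    return p

# Manufactured solution p = sin(pi x) sin(pi y) on the unit square,
# whose Laplacian is f = -2 pi^2 p:
n = 33
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
p_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = -2 * np.pi ** 2 * p_exact
p = solve_poisson(f, h=x[1] - x[0])
print("max error:", np.abs(p - p_exact).max())
```

In the paper's method, the right-hand side is instead assembled from the measured velocity gradients and Reynolds stresses, and the SIMPLE pressure-correction loop replaces the plain Jacobi sweep.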
REVIEW | doi:10.20944/preprints202105.0683.v2
Subject: Engineering, Biomedical & Chemical Engineering Keywords: centrifugal microfluidics; Lab-on-a-Disc; fluidic integration; rotational flow control; valving
Online: 7 June 2021 (14:45:53 CEST)
Current, application-driven trends towards larger-scale integration (LSI) of microfluidic systems for comprehensive assay automation and multiplexing pose significant technological and economical challenges to developers. By virtue of their intrinsic capability for powerful sample preparation, centrifugal systems have attracted significant interest in academia and business since the early 1990s. This review models common, rotationally controlled valving schemes at the heart of such “Lab-on-a-Disc” (LoaD) platforms to predict critical spin rates and the reliability of flow control, mainly based on the geometries, locations and liquid volumes to be processed, and their experimental tolerances. In the absence of larger-scale manufacturing facilities during product development, the method presented here facilitates the provision of efficient simulation tools for virtual prototyping and characterization to greatly expedite design optimization according to key performance metrics. This virtual in silico approach thus significantly accelerates, de-risks and lowers costs along the critical advancement from idea, fluidic testing, bioanalytical validation and scale-up to commercial mass manufacture.
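The rotationally controlled valving model can be illustrated by the basic pressure balance of a capillary burst valve: the valve opens once the centrifugal pressure of the liquid plug exceeds the Young-Laplace burst pressure of the valve channel. A sketch with assumed geometry and contact angle (illustrative values, not a specific disc design from the review):

```python
import math

RHO = 1000.0   # water density, kg/m^3
GAMMA = 0.072  # surface tension of water, N/m

def critical_rpm(r_mean, dr, width, height, theta_deg=120.0):
    """Critical spin rate of a capillary burst valve on a Lab-on-a-Disc.

    The valve bursts when the centrifugal pressure rho*omega^2*r_mean*dr
    of a plug of radial extent dr centred at radius r_mean exceeds the
    Young-Laplace burst pressure of a rectangular channel,
    2*gamma*|cos(theta)|*(1/width + 1/height).
    """
    p_burst = 2 * GAMMA * abs(math.cos(math.radians(theta_deg))) \
              * (1 / width + 1 / height)
    omega = math.sqrt(p_burst / (RHO * r_mean * dr))
    return omega * 60 / (2 * math.pi)

# Illustrative geometry: plug centred 25 mm from the disc centre with 10 mm
# radial extent, valve channel 100 um x 50 um, contact angle 120 deg:
rpm = critical_rpm(r_mean=0.025, dr=0.010, width=100e-6, height=50e-6)
print(f"valve bursts above ~{rpm:.0f} rpm")
```

Propagating the experimental tolerances of these geometric inputs through such a balance is what lets the review quantify the reliability of sequential valve actuation.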
ARTICLE | doi:10.20944/preprints201911.0309.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: in vitro diagnostics; microfluidics; full integration; lab-on-a-chip; pathogen detection
Online: 26 November 2019 (09:56:47 CET)
Microfluidics is facing critical challenges in the quest of miniaturizing, integrating, and automating in vitro diagnostics, including the increasing complexity of assays, the gap between the macroscale world and microscale devices, and the diverse throughput demands in various clinical settings. Here, a “3D extensible” microfluidic design paradigm consisting of a set of basic structures and unit operations was developed for constructing any application-specific assay. Four basic structures were designed to mimic basic acts in biochemical assays: a check valve (in), a check valve (out), a double-check valve (in and out), and an on-off valve. By combining these structures linearly, a series of unit operations can be readily formed. We then proposed a “3D extensible” architecture to fulfill the needs of function integration, an adaptive “world-to-chip” interface, and adjustable throughput in the X, Y, and Z directions, respectively. To verify this design paradigm, we developed a fully integrated loop-mediated isothermal amplification microsystem that can directly accept swab samples and detect Chlamydia trachomatis automatically with a sensitivity one order of magnitude higher than that of the conventional kit. This demonstration validated the feasibility of using this paradigm to develop integrated and automated microsystems in a less risky and more consistent manner.
ARTICLE | doi:10.20944/preprints201703.0197.v1
Subject: Social Sciences, Other Keywords: carbon intensity; coal consumption; co-integration test; Granger causality; error correction model
Online: 27 March 2017 (10:33:34 CEST)
A co-integration and causality analysis was conducted to study the causal relation between carbon intensity and coal consumption, providing an important basis for the transition to a low-carbon economy. The Engle-Granger (EG) two-step method was performed to study the relation between carbon intensity and coal consumption in China during 1990-2015, and co-integration and Granger tests were used to build the co-integration and error correction models for the analysis of the interaction between carbon intensity and coal consumption. The results showed that in the long term there is a stable co-integration relation and a positive correlation between carbon intensity and coal consumption, whereas fluctuations exist in the short term and there is a one-way Granger causality of carbon intensity with respect to coal consumption.
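The EG two-step procedure can be sketched on synthetic data: first estimate the long-run relation by OLS, then regress differences on the lagged equilibrium error. This informal illustration omits the unit-root and ADF critical-value testing of a proper analysis, and the series are simulated rather than the Chinese data used in the paper:

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares via numpy's least-squares solver."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Synthetic cointegrated pair: x is a random walk (log coal consumption,
# say) and y = 2x + stationary AR(1) noise (log carbon intensity).
rng = np.random.default_rng(0)
T = 500
x = np.cumsum(rng.standard_normal(T))
e = np.zeros(T)
for t in range(1, T):
    e[t] = 0.5 * e[t - 1] + rng.standard_normal()
y = 2.0 * x + e

# Step 1: estimate the long-run (cointegrating) relation y = a + b*x.
a, b = ols(np.column_stack([np.ones(T), x]), y)
resid = y - a - b * x

# Step 2: error-correction model
#   dy_t = c0 + c1*dx_t + c2*resid_{t-1} + u_t,
# where a significantly negative c2 signals adjustment back to equilibrium.
dy, dx = np.diff(y), np.diff(x)
Z = np.column_stack([np.ones(T - 1), dx, resid[:-1]])
c0, c1, c2 = ols(Z, dy)
print(f"long-run slope b = {b:.3f}, error-correction coef c2 = {c2:.3f}")
```

In the real analysis, the Step-1 residuals must additionally pass a stationarity (ADF) test before the error-correction model is interpreted.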
ARTICLE | doi:10.20944/preprints202205.0043.v1
Subject: Engineering, Energy & Fuel Technology Keywords: North Sea region; offshore grid; offshore hydrogen; offshore wind; system integration; IESA-NS
Online: 5 May 2022 (15:45:35 CEST)
The North Sea Offshore Grid concept has been envisioned as a promising option to 1) ease the integration of offshore wind and onshore energy systems, and 2) increase the cross-border capacity between the North Sea region countries at low cost. In this paper we explore the techno-economic benefits of the North Sea Offshore Grid using two case studies: a power-based offshore grid, where only investments in power assets are allowed (i.e. offshore wind, HVDC/HVAC interconnectors); and a power-and-hydrogen offshore grid, where investments in offshore hydrogen assets are also permitted (i.e. offshore electrolysers, new hydrogen pipelines and retrofitted natural gas pipelines). We compare these scenario results with a business-as-usual scenario, in which offshore wind is connected radially to the shore and no offshore grid is deployed. All scenarios are run with the IESA-NS model, without any specific technology ban and under open optimization. This paper also presents a novel methodology, combining Geographic Information Systems and Energy System Models, to cluster offshore spatial data and define meaningful offshore regions and offshore hub locations. This novel methodology is applied to the North Sea region to define nine offshore clusters taking into account offshore spatial claims, and identifying suitable areas for single-use and multi-use of space for renewable energy purposes. The scenario results show that the deployment of an offshore grid provides relevant cost savings, ranging from 1% to 4.1% of relative cost decrease (2.3 bn € to 8.7 bn €) in the power-based case, and from 2.8% to 7% of relative cost decrease (6 bn € to 14.9 bn €) in the power-and-hydrogen-based case. In the most extreme scenario (H2), an offshore grid permits the integration of 283 GW of HVDC-connected offshore wind and 196 GW of HVDC meshed interconnectors.
Even in the most conservative scenario (P1), the offshore grid integrates 59 GW of HVDC-connected offshore wind capacity and 92 GW of HVDC meshed interconnectors. When allowed, the deployment of offshore electrolysis is considerable, ranging from 61 GW to 96 GW, with capacity factors of around 30%. Finally, we also find that, when imported hydrogen is available at 2 €/kg (including production and transport costs), large investments in an offshore grid are no longer optimal. In contrast, at import costs above 4 €/kg, imported hydrogen is not competitive.
ARTICLE | doi:10.20944/preprints202007.0694.v1
Subject: Engineering, Other Keywords: Transpiration; PV Heat Conversion; Plant Heat Stress; Agrivoltaic system; Sustainable Integration; Thermal Analysis
Online: 29 July 2020 (11:20:25 CEST)
This paper shares new information on the ambient temperature profile and the heat stress occurrences directly underneath ground-mounted solar photovoltaic (PV) arrays (monocrystalline-based), focusing on different temperature levels. A common ground for this work lies in the fact that a 1 °C increase of PV cell temperature results in a reduction of about 0.5% in energy conversion efficiency; thus any natural cooling mechanism would be of great benefit, especially to solar farm operators. The transpiration process plays an important role in the cooling of green plants, where on average it can dissipate around 32.9% of the total solar energy absorbed by the leaf, making it a good natural cooling mechanism. This applies to herbs in particular; for this project, Orthosiphon stamineus, generally known as Java tea, is used as the high-value crop. The thermal process via convective heat and mass exchange of leaves with the environment is relevant for a better understanding of plant physiological processes in response to environmental conversion factors for a wide range of applications. An important fact regarding plant heat stress with respect to the ambient temperature is that the range lies between 10 °C and 15 °C above the surrounding value. This heat stress condition is important and should be modelled in crop-energy integration. The agrivoltaic concept is a system that combines commercial agriculture and photovoltaic electricity generation in the same space. The concept is in line with the Kyoto Protocol and the United Nations Sustainable Development Goals (UN-SDG), which highlight clean energy and sustainable urban living. The integration of agrivoltaic systems would optimize the yield, improving clean system efficiency and addressing the issue of land resource sustainability. The PV bottom surface is the main source of dissipated heat, as shown in the thermal images recorded at 5-minute intervals at three sampling times.
Statistical analysis shows that the thermal correlations for the transpiration process and heat stress occurrences between the PV bottom surface and plant height constitute an important finding for large-scale plant cultivation in agrivoltaic farms.
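The efficiency figure quoted above corresponds to the standard linear temperature derating of PV modules; a minimal sketch (the STC efficiency value is illustrative):

```python
def pv_efficiency(t_cell, eta_stc=0.20, beta=0.005, t_ref=25.0):
    """Linear temperature derating of PV conversion efficiency:

        eta(T) = eta_STC * (1 - beta * (T_cell - T_ref)),

    with beta = 0.005 per deg C, i.e. the 0.5% relative loss per
    1 deg C of cell heating cited in the abstract."""
    return eta_stc * (1.0 - beta * (t_cell - t_ref))

# A cell at 35 deg C delivers 95% of its 25 deg C (STC) efficiency,
# so 10 deg C of transpiration-driven cooling recovers that 5%:
print(pv_efficiency(35.0) / pv_efficiency(25.0))
```

This simple relation is what makes the crop-canopy cooling underneath the arrays directly translatable into an energy-yield benefit for the solar farm operator.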
ARTICLE | doi:10.20944/preprints201703.0196.v1
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: simulation software; manufacturing systems; process integration; machining optimization; Industry 4.0; knowledge-based manufacturing
Online: 27 March 2017 (10:28:34 CEST)
The future of machine tools will be dominated by highly flexible and interconnected systems, in order to achieve the required productivity, accuracy and reliability. Nowadays, distortion and vibration problems are easily solved in labs for the most common machining operations by using models based on equations describing the physical laws of the machining processes; however, additional efforts are needed to close the gap between scientific research and real manufacturing problems. In fact, there is an increasing interest in developing simulation packages based on “deep knowledge and models” that aid machine designers, production engineers or machinists to get the best out of machine tools. This article proposes a methodology to reduce problems in machining by means of a simulation utility, which uses the main variables of the system and process as input data, and generates results that help in proper decision-making and machining planning. Direct benefits can be found in a) the optimal design of fixtures/clamping, b) the machine tool configuration, c) the definition of chatter-free optimum cutting conditions and d) the right programming of cutting toolpaths at the Computer Aided Manufacturing (CAM) stage. The information- and knowledge-based approach showed successful results in several local manufacturing companies and is explained in the paper.
ARTICLE | doi:10.20944/preprints201703.0171.v1
Subject: Mathematics & Computer Science, Other Keywords: identity over time; Bayesian networks; multi-information; entity; persistence; integration; emergence; naturalising agency
Online: 21 March 2017 (16:23:00 CET)
We present a first formal analysis of specific and complete local integration. Complete local integration was previously proposed as a criterion for detecting entities or wholes in distributed dynamical systems. Such entities in turn were conceived to form the basis of a theory of emergence of agents within dynamical systems. Here, we give a more thorough account of the underlying formal measures. The main contribution is the disintegration theorem which reveals a special role of completely locally integrated patterns (what we call ι-entities) within the trajectories they occur in. Apart from proving this theorem we introduce the disintegration hierarchy and its refinement-free version as a way to structure the patterns in a trajectory. Furthermore, we construct the least upper bound and provide a candidate for the greatest lower bound of specific local integration. Finally, we calculate the ι-entities in small example systems as a first sanity check and find that ι-entities largely fulfil simple expectations.
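For readers unfamiliar with the underlying measures, the sketch below evaluates a local (per-pattern) multi-information, which is the form specific local integration takes for the finest two-block partition of a two-variable pattern; the distributions are invented toy examples, not the paper's systems, and the full formalism quantifies over all partitions:

```python
import math

# Toy sketch (not the paper's full formalism): for the finest partition of a
# two-variable pattern, specific local integration reduces to the local
# multi-information  mi(x) = log2( p(x1, x2) / (p(x1) * p(x2)) ).
# A pattern is completely locally integrated when such terms are positive
# for every partition; here we only evaluate the finest one.

def local_multi_information(joint, x):
    """log2 p(x) / (p(x_1) * p(x_2)) for a two-variable joint distribution."""
    px0 = sum(p for (a, b), p in joint.items() if a == x[0])
    px1 = sum(p for (a, b), p in joint.items() if b == x[1])
    return math.log2(joint[x] / (px0 * px1))

# Independent variables: every pattern has zero local integration.
independent = {(a, b): 0.5 * 0.5 for a in (0, 1) for b in (0, 1)}
# Strongly correlated variables: aligned patterns are positively integrated.
correlated = {(0, 0): 0.45, (1, 1): 0.45, (0, 1): 0.05, (1, 0): 0.05}
```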
ARTICLE | doi:10.20944/preprints201703.0058.v1
Subject: Mathematics & Computer Science, Other Keywords: Smartphone sensing; mobile-social integration; automatic recognition; social data; long-term health monitoring
Online: 10 March 2017 (17:32:31 CET)
Over the past decades, overweight and obesity have become a global epidemic and a leading cause of death. To prevent this serious risk, an overweight or obese individual must apply a long-term weight-management strategy to control food intake and physical activity, which is, however, not easy. Recently, with the advances in information technology, more and more people can use wearable devices and smartphones to obtain physical activity information, while they can also access various health-related information from Internet online social networks (OSNs). Nevertheless, there is a lack of an integrated approach that can combine these two methods in an efficient way. In this paper, we address this issue and propose a novel mobile-social framework for health recognition and recommendation, namely, H-Rec2. The main ideas of H-Rec2 are (1) to recognize the individual's health status using the smartphone as a general platform, and (2) to recommend physical activity and food intake based on personal health information, life science principles, and health-related information obtained from OSNs. To demonstrate the potential of the H-Rec2 framework, we develop a prototype that consists of four important components: (1) an activity recognition module that senses physical activity using the accelerometer, (2) a health status modeling module that applies a novel algorithm to generate a personalized health status index, (3) a restaurant information collection module that collects relevant information from OSNs, and (4) a restaurant recommendation module that provides personalized and context-aware recommendations. To evaluate the prototype, we conduct both objective and subjective experiments, which confirm the performance and effectiveness of the proposed system.
ARTICLE | doi:10.20944/preprints201612.0106.v2
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: simulation software; manufacturing systems; process integration; machining optimization; Industry 4.0; knowledge-based manufacturing
Online: 26 February 2017 (10:18:59 CET)
The near future of machine tools will be dominated by highly flexible and interconnected systems, in order to achieve the required productivity, accuracy and reliability. Nowadays, distortion and vibration problems are easily solved for the most common cases by using models based on equations describing the physical laws dominating the machining process; however, additional efforts are needed to close the gap between scientific research and real manufacturing problems. In fact, there is an increasing interest in developing simulation packages based on “deep knowledge and models” that aid the machine designer, the production engineer, or machinists to get the best out of their machines. This article proposes a systematic methodology to reduce problems in machining by means of a simulation utility, which recognizes, collects and uses the main variables of the system/process as input data, and generates objective results that help in proper decision-making. Direct benefits of such an application are found in a) the fixture/clamping optimal design, b) the machine tool configuration, c) the definition of chatter-free optimum cutting conditions and d) the right programming of cutting tool paths at the Computer Aided Manufacturing (CAM) stage. The information and knowledge-based approach showed successful results in several local manufacturing companies.
ARTICLE | doi:10.20944/preprints202208.0389.v1
Subject: Earth Sciences, Atmospheric Science Keywords: Numerical weather prediction; Time integration; Filtering; Laplace transform; semi-implicit; semi-Lagrangian; Forecast accuracy
Online: 23 August 2022 (03:13:59 CEST)
A time integration scheme based on semi-Lagrangian advection and Laplace transform adjustment has been implemented in a baroclinic primitive equation model. The semi-Lagrangian scheme makes it possible to use large time steps. However, errors arising from the semi-implicit scheme increase with the time step size. In contrast, the errors using the Laplace transform adjustment remain relatively small for typical time steps used with semi-Lagrangian advection. Numerical experiments confirm the superior performance of the Laplace transform scheme relative to the semi-implicit reference model. The algorithmic complexity of the scheme is comparable to the reference model, making it computationally competitive, and indicating its potential for integrating weather and climate prediction models.
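The abstract's point that semi-implicit errors grow with the time step can be illustrated on a single oscillation equation; the sketch below is a generic leapfrog/trapezoidal analysis with assumed frequencies, not the baroclinic model itself:

```python
import cmath

# Illustrative sketch (not the baroclinic model): for a single oscillation
# du/dt = i*w*u, a leapfrog step with semi-implicit (trapezoidal) averaging
# advances two time levels via the neutral amplification factor
#     A = (1 + i*w*dt) / (1 - i*w*dt),
# whose phase 2*atan(w*dt) lags the exact phase 2*w*dt. The lag grows with
# the step size, mirroring the error growth described in the abstract.

def si_phase_error(w, dt):
    """Relative phase error of the semi-implicit step over 2*dt."""
    amp = (1 + 1j * w * dt) / (1 - 1j * w * dt)
    assert abs(abs(amp) - 1.0) < 1e-12   # amplitude-neutral scheme
    return (2 * w * dt - cmath.phase(amp)) / (2 * w * dt)

small = si_phase_error(w=1.0e-4, dt=60.0)    # w*dt = 0.006, tiny error
large = si_phase_error(w=1.0e-4, dt=3600.0)  # w*dt = 0.36, ~4% phase lag
```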
ARTICLE | doi:10.20944/preprints202110.0381.v1
Subject: Life Sciences, Other Keywords: Physico-chemical parameters; water quality index; land use land cover; GIS integration; spatial correlation
Online: 26 October 2021 (12:24:29 CEST)
River water quality is deteriorating due to human interference. It is essential to understand the relationship between human activities and land-use types to assess the water quality of a region. GIS is the latest tool for analyzing spatial correlation, and land use land cover change detection is the best illustration of human interactions with land features. The study assessed the water quality index of the upper Ganga River near Haridwar, Uttarakhand and spatially correlated it with changing land use to reach a logical conclusion. Along a 78 km stretch of the upper course of the Ganga from Kaudiyala to Bhogpur, water samples were collected from five stations. For the water quality index, physicochemical parameters such as pH, EC, DO, TDS, CaCO3-, CaCO3, Cl¯, Ca++, Mg++, Na+, K+, F-, Fe2+ were considered. The result of the spatial analysis was evaluated through error estimation and spatial correlation. The root mean square error between spatial land use and the water quality index of the selected sampling sites was estimated as 0.1443. The spatial correlation between land-use change and site-wise differences in the water quality index also showed a high positive correlation, with R² = 0.8455. The degree of positive correlation and the root mean square error strongly indicate that the water quality of the river at the upper course of the Ganga is highly impacted by human activities.
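The two summary statistics reported above, the root mean square error and the squared correlation R², can be sketched as follows; the paired site values below are invented placeholders, not the study's land-use or water-quality data:

```python
import math

# Minimal sketch of the two statistics in the abstract: root mean square
# error between paired site values, and the squared Pearson correlation R^2.

def rmse(a, b):
    """Root mean square error between two equal-length sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def r_squared(a, b):
    """Squared Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov * cov / (va * vb)

land_use_change = [0.12, 0.30, 0.45, 0.52, 0.70]   # hypothetical values
wqi_difference  = [0.10, 0.28, 0.40, 0.55, 0.68]   # hypothetical values
```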
CONCEPT PAPER | doi:10.20944/preprints202105.0767.v1
Subject: Biology, Animal Sciences & Zoology Keywords: evidential integration; causal explanation; early animal evolution; phylogenetics; macroevolution; evolutionary scenario; cross-disciplinary research
Online: 31 May 2021 (12:25:44 CEST)
Molecular methods have revolutionised virtually every area of biology, and metazoan phylogenetics is no exception: molecular phylogenies, molecular clocks, comparative phylogenomics, and developmental genetics have collectively transformed our understanding of the evolutionary history of animals. However, the diversity of methods and models within molecular phylogenetics has resulted in significant disagreement among molecular phylogenies as well as between these and traditional phylogenies. Here, I argue that the key to tackling this multifaceted problem lies in integrating evidence to infer the best evolutionary scenario. I begin with an overview of recent developments in early metazoan phylogenetics, followed by a discussion of key conceptual issues in phylogenetics revolving around phylogenetic evidence and theory. I then argue that integration of different kinds of evidence is necessary for arriving at the best evolutionary scenario rather than the best-fitting cladogram. Finally, I discuss the prospects of this view in stimulating interdisciplinary cross-talk in early metazoan research and beyond.
ARTICLE | doi:10.20944/preprints202105.0541.v1
Subject: Keywords: Artificial intelligence; Accounting systems integration; Accounting systems accuracy; Financial statements; Aqaba Special Economic Zone
Online: 24 May 2021 (08:47:20 CEST)
The study aims to examine the effects of artificial intelligence (AI) on the consistency and analysis of financial statements in hotels in the Aqaba Special Economic Zone (ASEZA), Jordan. This research is an exploratory, empirical study, which uses the methodology of data collection and interpretation to draw conclusions. The researchers used the arithmetic mean, standard deviation, T-test and ANOVA test to calculate the degree of significance of the study questions. The findings of a simple linear regression analysis of the impact of AI implemented in Jordanian hotels on the integration of accounting information systems and the association between AI and the integration of accounting information systems (R = 59.6%) also indicate that the fixed limit value amounted to (2.060) and the value of (Beta) for T-test
ARTICLE | doi:10.20944/preprints201806.0274.v2
Subject: Physical Sciences, Optics Keywords: ultra-low-loss waveguide; silicon photonics; heterogeneous integration; narrow linewidth lasers; high Q resonators.
Online: 16 July 2018 (15:14:32 CEST)
Integrated ultra-low-loss waveguides are highly desired for integrated photonics to enable applications that require long delay lines, high-Q resonators, narrow filters, etc. Here we present an ultra-low-loss silicon waveguide on 500 nm thick SOI platform. Meter-scale delay lines, million-Q resonators and tens of picometer bandwidth grating filters are experimentally demonstrated. We design a low-loss low-reflection taper to seamlessly integrate the ultra-low-loss waveguide with standard heterogeneous Si/III-V integrated photonics platform to allow realization of high-performance photonic devices such as ultra-low-noise lasers and optical gyroscopes.
ARTICLE | doi:10.20944/preprints202105.0282.v2
Subject: Engineering, Biomedical & Chemical Engineering Keywords: centrifugal microfluidics, Lab-on-a-Disc, large-scale integration, reliability, tolerances, band width, packing density
Online: 8 June 2021 (12:07:35 CEST)
Enhancing the degree of functional multiplexing while assuring operational reliability and manufacturability at competitive costs are crucial ingredients for enabling comprehensive sample-to-answer automation, e.g., for use in common, decentralized “Point-of-Care” or “Point-of-Use” scenarios. This paper demonstrates a model-based ‘digital twin’ approach which efficiently supports the algorithmic design optimization of exemplary centrifugo-pneumatic (CP) dissolvable-film (DF) siphon valves towards larger-scale integration (LSI) of well-established “Lab-on-a-Disc” (LoaD) systems. Obviously, the spatial footprint of the valves and their upstream laboratory unit operations (LUOs) have to fit, at a given radial position prescribed by its occurrence in the assay protocol, into the locally accessible disc space. At the same time, the retention rate of a rotationally actuated CP-DF siphon valve and, most challenging, its band width related to unavoidable tolerances of experimental input parameters, need to slot into a defined interval of the practically allowed frequency envelope. To accomplish particular design goals, a set of parametrized metrics is defined, which are to be met within their practical boundaries while (numerically) minimizing the band width in the frequency domain. While each LSI scenario needs to be addressed individually on the basis of the digital twin, a suite of qualitative design rules and instructive showcase structures is presented.
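The tolerance-induced frequency band width that the optimization minimizes can be illustrated with a generic Monte Carlo propagation; the frequency model and tolerance values below are invented stand-ins for the paper's digital-twin physics:

```python
import random
import statistics

# Generic Monte Carlo sketch (not the paper's model): propagate assumed
# manufacturing tolerances through a hypothetical critical actuation
# frequency law and report the spread ("band width") of the valve's
# actuation frequencies. The sqrt-law and tolerances are invented.

random.seed(1)

def burst_frequency(radius, volume):
    """Hypothetical valve actuation frequency [Hz]; placeholder sqrt law."""
    return 30.0 * (radius / volume) ** 0.5

samples = [burst_frequency(random.gauss(30.0, 0.5),   # radial position +/- tol
                           random.gauss(10.0, 0.3))   # chamber volume +/- tol
           for _ in range(10000)]
band_width = 4 * statistics.stdev(samples)            # ~95% frequency band
center = statistics.mean(samples)
```

Minimizing such a band width over the design parameters is what lets many valves slot into the allowed frequency envelope side by side.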
ARTICLE | doi:10.20944/preprints202010.0393.v1
Subject: Life Sciences, Biochemistry Keywords: Parkinson's disease; Huntington's disease; Integration; Shared patterns; Neurodegeneration; Multi-Omics; Alzheimer's Disease; Amyotrophic Lateral Sclerosis
Online: 19 October 2020 (15:44:47 CEST)
Neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, Huntington's disease, and Amyotrophic Lateral Sclerosis are heterogeneous, progressive diseases with frequently overlapping symptoms characterized by a loss of neurons. Studies have suggested relations among neurodegenerative diseases for many years, e.g., regarding the aggregation of toxic proteins or the triggering of endogenous cell death pathways. Within this study, publicly available genomic, transcriptomic and proteomic data were gathered from 188 studies and more than one million patients to detect shared genetic patterns between the neurodegenerative diseases and the analyzed omics layers within conditions. The results show a remarkably high number of shared genes between the transcriptomic and proteomic levels for all diseases, while showing a significant relation between genomic and proteomic data only in some cases. A set of 139 genes was found to be differentially expressed in several transcriptomic experiments of all four diseases. These 139 genes showed overrepresented GO terms and pathways mainly involved in stress response, cell development, cell adhesion, and the cytoskeleton. Furthermore, the overlaps of two and three omics layers per disease were used to search for overrepresented pathways and GO terms. Taken together, we could confirm the existence of many relations between Alzheimer's disease, Parkinson's disease, Huntington's disease, and Amyotrophic Lateral Sclerosis on the transcriptomic and proteomic level by analyzing the pathways and GO terms arising in these intersections. The significance of the connection between the transcriptomic and proteomic data for all four analyzed neurodegenerative diseases showed that exploring these omics layers simultaneously holds new insights that do not emerge from analyzing them separately.
Our data therefore suggest addressing human patients with neurodegenerative diseases as complex biological systems by integrating multiple underlying data sources.
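The overrepresentation analysis mentioned above typically rests on a hypergeometric overlap test; the sketch below uses invented gene counts, not the study's 139-gene set:

```python
from math import comb

# Sketch of the overlap test behind "overrepresented GO terms": given n
# genes drawn from a universe of N genes, of which K belong to a term, the
# probability of seeing x or more term members by chance follows the
# hypergeometric distribution. The counts here are invented examples.

def hypergeom_pvalue(N, K, n, x):
    """P(overlap >= x) under the hypergeometric distribution."""
    total = comb(N, n)
    return sum(comb(K, k) * comb(N - K, n - k)
               for k in range(x, min(K, n) + 1)) / total

# A complete overlap out of a small universe is unlikely by chance:
p = hypergeom_pvalue(N=10, K=5, n=5, x=5)   # = 1/252
```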
ARTICLE | doi:10.20944/preprints202007.0088.v1
Subject: Engineering, Energy & Fuel Technology Keywords: Firm power generation; Energy storage; Irradiance forecasts; Implicit storage; Grid integration; ultra-high RE penetration
Online: 5 July 2020 (16:27:08 CEST)
We introduce firm solar forecasts as a strategy to operate optimally overbuilt solar power plants in conjunction with optimally sized storage systems so as to make up for any power prediction errors and hence entirely remove the load-balancing uncertainty emanating from grid-connected solar fleets. A central part of this strategy is plant overbuilding, which we term implicit storage. We show that this strategy, while economically justifiable on its own account, is an effective entry step to least-cost ultra-high solar penetration, where firm power generation will be a prerequisite. We demonstrate that in the absence of an implicit storage strategy, ultra-high solar penetration would be vastly more expensive. Using the New York Independent System Operator (NYISO) as a case study, we determine the current and future costs of firm forecasts for a comprehensive set of scenarios in each ISO electrical region, comparing centralized vs. decentralized production and assessing the impact of load flexibility. We simulate the growth of the strategy from firm forecasts to firm power generation. We conclude that ultra-high solar penetration enabled by the present strategy, whereby solar would firmly supply the entire NYISO load, could be achieved locally at electricity production costs comparable to current NYISO wholesale market prices.
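The implicit storage idea, namely that overbuilding shrinks the explicit battery needed to firm a committed forecast, can be sketched in a few lines; the profiles and error magnitudes below are invented for illustration, not the NYISO data:

```python
# Toy illustration of "implicit storage": overbuilding a solar plant reduces
# the battery energy needed to cover day-ahead forecast shortfalls. The
# profiles and forecast errors are invented; the paper's optimization is
# far more detailed.

def storage_needed(overbuild, forecast, error):
    """Battery energy needed to cover every forecast shortfall."""
    need = 0.0
    for f, e in zip(forecast, error):
        actual = overbuild * f * (1.0 - e)   # available generation
        shortfall = max(0.0, f - actual)     # committed minus available
        need += shortfall                    # energy drawn from storage
    return need

forecast = [0.0, 0.3, 0.8, 1.0, 0.9, 0.4, 0.0]   # hypothetical MW profile
error    = [0.0, 0.1, 0.2, 0.15, 0.05, 0.1, 0.0] # hypothetical under-delivery

base = storage_needed(1.0, forecast, error)      # no overbuild: storage needed
firm = storage_needed(1.3, forecast, error)      # 30% overbuild: none needed
```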
ARTICLE | doi:10.20944/preprints202001.0224.v1
Subject: Engineering, Control & Systems Engineering Keywords: electric vehicles; sector coupling; energy system optimization; renewable energy integration; REMix; charging behavior; marginal values
Online: 20 January 2020 (10:08:13 CET)
Battery electric vehicles provide an opportunity to balance supply and demand in future power systems with high shares of fluctuating renewable energy. Compared to other storage systems such as pumped-storage hydroelectricity, electric vehicle energy demand is highly dependent on the charging and connection choices of vehicle users. We present a model framework comprising a utility-based stock and flow model, a utility-based microsimulation of charging decisions, and an energy system model, including the respective interfaces, to assess how the representation of battery electric vehicle charging affects energy system optimization results. We then apply the framework to a scenario study of controlled charging of nine million electric vehicles in Germany in 2030. Assuming a fleet power demand of 27 TWh, we analyze the difference between power-system-based and vehicle-user-based charging decisions in two respective scenarios. Our results show that taking into account vehicle users’ charging and connection decisions significantly decreases the load shifting potential of controlled charging. The analysis of marginal values of equations and variables of the optimization problem yields valuable insights into the importance of specific constraints and optimization variables. In particular, state-of-charge assumptions and the representation of fast charging drive the curtailment of renewable energy feed-in and the required gas power plant flexibility. A detailed representation of fleet charge connection is less important. Peak load can be significantly reduced, by 5% and 3% in the two scenarios, respectively. Shifted load is very robust across sensitivity analyses, while other model results such as curtailment are more sensitive to factors such as the underlying data years. The importance of increased BEV fleet battery availability for power systems with different weather and electricity demand characteristics should be further scrutinized.
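The load-shifting potential of controlled charging can be illustrated with a greedy valley-filling heuristic; this is a toy sketch with invented load numbers, not the REMix energy system optimization used in the study:

```python
# Toy "power-system-based" controlled charging as greedy valley filling:
# each increment of EV energy is placed in the connected hour with the
# lowest current total load, flattening the profile. Numbers are invented.

def valley_fill(base_load, window, energy, max_power):
    """Distribute EV energy over connected hours, filling load valleys."""
    load = list(base_load)
    ev = [0.0] * len(load)
    step = 0.1                           # charging increment per assignment
    remaining = energy
    while remaining > 1e-9:
        # among connected hours with spare charger capacity, pick the one
        # with the lowest current total load
        h = min((i for i in window if ev[i] + step <= max_power),
                key=lambda i: load[i])
        ev[h] += step
        load[h] += step
        remaining -= step
    return load, ev

base = [50, 45, 40, 38, 42, 55, 70, 80]          # hypothetical GW profile
controlled, ev = valley_fill(base, window=range(8), energy=20.0, max_power=6.0)
```

Restricting `window` to the hours a vehicle is actually plugged in is exactly what shrinks this shifting potential when user behavior is represented.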
ARTICLE | doi:10.20944/preprints201810.0206.v1
Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: human psychophysics; apparent motion; temporal integration; cat; retina; neural coding; Hassenstein-Reichardt detector; model analysis
Online: 10 October 2018 (06:31:03 CEST)
Under optimal conditions, just 3–6 ms of visual stimulation suffices for humans to see motion. Motion perception on this time scale implies that the visual system under these conditions reliably encodes, transmits, and processes neural signals with near-millisecond precision. Motivated by in vitro evidence for high temporal precision of motion signals in the primate retina, we investigated how neuronal and perceptual limits of motion encoding relate. Specifically, we examined the correspondence between the time scale at which cat retinal ganglion cells in vivo represent motion information and temporal thresholds for human motion discrimination. The time scale for motion encoding by ganglion cells ranged from 4.6 to 91 ms and depended nonlinearly on temporal frequency but not on contrast. Human psychophysics revealed that the minimal stimulus durations required for perceiving motion direction were similarly brief, 5.6–65 ms, and similarly depended on temporal frequency but, above ~10% contrast, not on contrast. Notably, physiological and psychophysical measurements corresponded closely throughout (r = 0.99), despite more than a 20-fold variation in both human thresholds and optimal time scales for motion encoding in the retina. These results demonstrate that neural circuits for motion vision in cortex can maintain and make use of the high temporal fidelity of the retinal output signals.
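The Hassenstein-Reichardt detector named in the keywords can be sketched as a delay-and-correlate model; the stimulus and delay parameters below are invented for illustration, not the experimental values:

```python
import math

# Minimal Hassenstein-Reichardt correlation detector: each half-detector
# multiplies a delayed signal from one sampling point with the undelayed
# signal from its neighbour; subtracting the mirrored half-detector yields
# a direction-selective output. Parameters are illustrative only.

def hr_output(omega, delta, tau, n=100000, dt=1e-4):
    """Time-averaged detector response to a drifting sinusoid.

    omega: temporal frequency [rad/s]; delta: stimulus travel time between
    the two inputs [s]; tau: internal delay of the detector [s].
    """
    total = 0.0
    for i in range(n):
        t = i * dt
        s1 = math.sin(omega * t)               # left input
        s2 = math.sin(omega * (t - delta))     # right input (delayed copy)
        s1d = math.sin(omega * (t - tau))      # internally delayed left
        s2d = math.sin(omega * (t - delta - tau))
        total += s1d * s2 - s2d * s1           # opponent subtraction
    return total / n

# Positive output for rightward motion, negative for leftward:
rightward = hr_output(omega=2 * math.pi * 5, delta=0.01, tau=0.01)
leftward  = hr_output(omega=2 * math.pi * 5, delta=-0.01, tau=0.01)
```

The time-averaged response is sin(ωτ)·sin(ωδ), which is why such a detector depends on temporal frequency in the nonlinear way the recordings show.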
REVIEW | doi:10.20944/preprints201809.0243.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: integrated optical sensors; monolithic integration; broad-band interferometers; label-free; multiplexed detection; on-site determinations
Online: 13 September 2018 (14:44:50 CEST)
The article reviews the current status of label-free integrated optical sensors, focusing on the evolution of their analytical performance over the years. First, a short introduction to evanescent wave optics is provided, followed by a detailed description of the main categories of label-free optical sensors, including sensors based on surface plasmon resonance (SPR), grating couplers, photonic crystals, ring resonators and interferometric transducers. After a short reference to SPR techniques, including localized SPR, integrated optical sensors, i.e., grating couplers, interferometers, photonic crystals and microring resonators, are reviewed. For each type of sensor, the detection principle is first provided, followed by a description of the different transducer configurations developed so far and their performance as biosensors. Finally, a short discussion about the current limitations and future perspectives of label-free integrated optical biosensors is provided.
REVIEW | doi:10.20944/preprints202105.0065.v1
Subject: Medicine & Pharmacology, Allergology Keywords: human papillomavirus; head and neck cancer; cancer subtypes; gene expression; oropharynx; HPV integration; immune response; keratinization
Online: 5 May 2021 (13:39:59 CEST)
Until recently, research on the molecular signatures of HPV-associated head and neck cancers mainly focused on their differences with respect to HPV-negative HNSCCs. However, given the continuing high incidence level of HPV-related HNSCC, the time is ripe to characterize the heterogeneity that exists within these cancers. Here, we review research thus far on HPV-positive HNSCC molecular subtypes, and their relationship with clinical characteristics and HPV integration into the host genome. Different omics data including host transcriptomics and epigenomics, as well as HPV characteristics, can provide complementary viewpoints. Keratinization, mesenchymal differentiation, immune signatures, stromal cells, and oxidoreductive processes all play important roles.
ARTICLE | doi:10.20944/preprints201908.0143.v1
Subject: Life Sciences, Other Keywords: acid-etching; micro-rough; bone regeneration; sub-micro-rough; bone integration; osseointegration; dental implants; orthopedic implants
Online: 12 August 2019 (12:35:48 CEST)
Titanium micro-scale topography results in excellent osteoconductivity and bone-implant integration. However, the biological effects of sub-micron topography are unknown. We compared osteoblastic phenotypes and in vivo bone and implant integration abilities between titanium surfaces with micro- (1–5 µm) and sub-micro-scale (0.1–0.5 µm) topographies and machined titanium. Average roughness was 12.5 ± 0.65 nm, 123 ± 6.15 nm, and 24 ± 1.2 nm for machined, micro-rough, and sub-micro-rough surfaces, respectively. The micro-rough surface showed the fewest cells attaching during the initial stage and the lowest proliferation. Calcium deposition and expression of osteoblastic genes were highest on the sub-micro-rough surface and lowest on the machined surface. Bone-to-implant integration was strongest for the micro-rough surface, consistent with it having the greatest ability to retain cells in vitro. Thus, the biological effects of titanium surfaces are not necessarily proportional to the degree of roughness in osteoblastic cultures or in vivo. Sub-micro-rough titanium ameliorates the disadvantage of micro-rough titanium by restoring cell attachment and proliferation and enhances the rate of osteoblastic differentiation over that of micro-rough titanium; however, bone integration and the ability to retain cells are compromised due to its lower interfacial mechanical locking compared to that of micro-rough titanium.
ARTICLE | doi:10.20944/preprints201901.0236.v1
Subject: Earth Sciences, Geoinformatics Keywords: 3D models; multi-sensor; multi-scale; SLAM; MMS; LiDAR; UAV; data integration; data fusion; cultural heritage
Online: 23 January 2019 (10:08:42 CET)
This article proposes the use of a multi-scale and multi-sensor approach to collect and model 3D data concerning wide and complex areas in order to obtain a variety of metric information in the same 3D archive, based on a single coordinate system. The employment of these 3D georeferenced products is multifaceted, and the fusion or integration of data from different sensors, scales and resolutions is promising and could be useful for the generation of a model that could be defined as hybrid. The correct geometry, accuracy, radiometry and weight of the data models are hereby evaluated by comparing integrated processes and results from Terrestrial Laser Scanner (TLS), Mobile Mapping System (MMS), Unmanned Aerial Vehicle (UAV) and terrestrial photogrammetry, using Total Station (TS) and Global Navigation Satellite System (GNSS) for the topographic survey. The entire analysis underlines the potential of the integration and fusion of different solutions and is a crucial part of the “Torino 1911” project, whose main purpose is mapping and virtually reconstructing the 1911 Great Exhibition held in Valentino Park in Turin (Italy).
REVIEW | doi:10.20944/preprints202106.0176.v1
Subject: Medicine & Pharmacology, Gastroenterology Keywords: chronic hepatitis B; covalently closed circular DNA; viral integration; transcription factor; nuclear receptor; transcriptional inhibitor; RNA interference
Online: 7 June 2021 (12:43:06 CEST)
Approximately 240 million people are chronically infected with hepatitis B virus (HBV), despite the availability of an effective HBV vaccine for four decades. During chronic infection, HBV forms two distinct templates responsible for viral gene transcription: (1) episomal covalently closed circular (ccc)DNA and (2) host-genome-integrated viral templates. Multiple ubiquitous and liver-specific transcription factors are recruited onto these templates and modulate viral gene transcription. This review details the latest developments in antivirals that inhibit HBV gene transcription, and their impact on the stability of viral transcripts. Notably, nuclear receptor agonists exhibit potent inhibition of viral gene transcription from cccDNA, small-molecule inhibitors repress HBV X protein-mediated transcription from cccDNA, and small interfering RNAs and single-stranded oligonucleotides result in transcript degradation from both cccDNA and integrant templates. These antivirals mediate their effects by reducing viral transcript abundance, eventually leading to loss of surface antigen expression, and can potentially be added to the arsenal of drugs with demonstrable anti-HBV activity. Thus, these candidates deserve special attention for future repurposing or further development as anti-HBV therapeutics.
ARTICLE | doi:10.20944/preprints202104.0775.v1
Subject: Social Sciences, Accounting Keywords: Energy Modelling; Climate Change; Climate Resilience; OSeMOSYS; Integrated Assessment Modelling; Nationally Determined Contributions (NDCs); Renewable Energy Integration
Online: 29 April 2021 (13:58:18 CEST)
Zimbabwe has ambitious and laudable GHG mitigation targets. Compared to a coal-based future, emissions reductions of 33% per capita by 2030 are targeted by implementing a set of identified nationally determined contributions (NDCs). If historical climate conditions continue, it can do this at low or negative cost. However, anticipated conditions may not continue, and of the planned emissions reductions in the NDCs, 88% would come from the expansion of hydropower, which is driven by rainfall. If climate change causes the extreme droughts witnessed in recent years to become more frequent, embarking on Zimbabwe’s NDC future (underpinned by its official system development plan) may be expensive and further cripple the economy. Note that the economy is already being strangled by constrained power supplies due to unusually dry conditions in the Zambezi river basin. If the NDC future is pursued, but the climate becomes drier, proactive efforts might be made to overcome the power shortages. However, this may result in a rapid ramp-up of greenhouse gas emissions if the country turns to coal to reinforce its system and increase its resilience against hydropower vulnerability and the costs that would otherwise ensue. If the country were to keep its NDC investments and supplement them with more aggressive deployment of clean adaptation options, strongly positive outcomes appear possible. Specifically, this would require increased deployment of renewable energy technologies, a restructured power market, and deep increases in energy efficiency investments. In so doing, the country would not only exceed its NDC targets, but also reduce costs in a manner that is climate resilient. This would not remove the country's need for hydropower and some level of coal reliance. However, it will introduce requirements to ensure flexibility in both hydropower and coal power production.
For hydropower, power stations will need to provide, and be recompensed for providing, ‘balancing services’, that is, storing water and producing electricity when the wind is not blowing and the sun is not shining. To ensure continuous output from mines, intelligent stockpiling combined with dynamic forecasting may be required. This would apply not only to production in Zimbabwe, but potentially to neighboring countries as well. Doing so would allow predictable mining activities while allowing electricity systems to absorb low-cost, low-carbon hydropower in high-rainfall periods. To make the NDCs resilient via clean adaptation, strong institutional restructuring is required. However, internalizing those costs and moving to advanced market structures and business cases may strain the capacity of current institutions.
ARTICLE | doi:10.20944/preprints201804.0193.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: energy systems integration; sector coupling; power gas simulation; day-ahead and real-time coordination; power gas interdependence
Online: 16 April 2018 (06:46:31 CEST)
The operations of electricity and natural gas transmission networks in the U.S. are increasingly interdependent, due to the growing number of installations of gas-fired generators and the penetration of renewable energy sources. This development suggests the need for closer communication and coordination between gas and power transmission system operators in order to improve the efficiency and reliability of the combined energy system. In this paper, we present a co-simulation platform for examining the interdependence between natural gas and electricity transmission networks based on a direct current unit-commitment and economic dispatch model for the power system and a transient hydraulic gas model for the gas system. We analyze the value of day-ahead coordination of power and natural gas network operations and show the importance of considering gas system constraints when analyzing power system operation with high penetration of gas generators and renewable energy sources. Results show that day-ahead coordination contributes to a reduction in curtailed gas during high-stress periods (e.g., large gas offtake ramps) and a reduction in the energy consumption of gas compressor stations.
ARTICLE | doi:10.20944/preprints202208.0328.v1
Subject: Engineering, Mechanical Engineering Keywords: thin-film sensors; foil sensors; composite structures; structural bonding; multifunctional bondline; function conformity; sensor integration; structural health monitoring
Online: 18 August 2022 (03:41:32 CEST)
We present an integrable sensor inlay for monitoring crack initiation and growth inside the bondlines of structural carbon fiber reinforced plastic (CFRP) components. The sensing structures are sandwiched between a crack-stopping polyvinylidene fluoride (PVDF) layer and a thin reinforcing polyetherimide (PEI) layer. Good adhesion at all interfaces of the sensor system and to the CFRP material is crucial, as weak bonds can counteract the desired crack-stopping functionality. At the same time, the chosen reinforcing layer must withstand high strains, safely support the metallic measuring grids, and possess outstanding fatigue strength. We show that this robust sensor system, which measures the strain at two successive fronts inside the bondline, makes it possible to recognize cracks in the proximity of the inlay regardless of the mechanical loads. Feasibility is demonstrated by static load tests as well as cyclic long-term fatigue testing with up to 1,000,000 cycles. In addition to pure crack detection, crack distance estimation based on the sensor signals is illustrated. The inlay integration process was developed with industrial applicability in mind. Implementation of the proposed system will thus allow the potential of lightweight CFRP constructions to be better exploited by expanding the possibilities of structural adhesive bonding.
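The measuring principle can be illustrated with a toy strain-gauge calculation; the gauge factor, resistance values, and detection threshold below are assumptions for illustration, not values from the paper:

```python
# Converting a measuring-grid resistance change into strain, plus a naive
# crack flag based on the ratio between the two successive sensor fronts:
# a crack shielding the first front reduces its strain relative to the
# second. All numbers are illustrative assumptions.

GAUGE_FACTOR = 2.0  # typical for metallic strain gauges (assumed)

def strain(r_unloaded, r_loaded):
    """Strain from relative resistance change: dR/R = GF * epsilon."""
    return (r_loaded - r_unloaded) / r_unloaded / GAUGE_FACTOR

def crack_near_front(eps_front1, eps_front2, ratio_threshold=0.5):
    """Flag a crack when front 1 carries much less strain than front 2."""
    if eps_front2 == 0:
        return False
    return eps_front1 / eps_front2 < ratio_threshold

eps1 = strain(120.0, 120.024)   # dR/R = 2e-4  -> 1e-4 strain
eps2 = strain(120.0, 120.096)   # dR/R = 8e-4  -> 4e-4 strain
flagged = crack_near_front(eps1, eps2)   # ratio 0.25 < 0.5 -> crack flagged
```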
TECHNICAL NOTE | doi:10.20944/preprints202105.0442.v2
Subject: Engineering, Civil Engineering Keywords: Axially functionally graded non-prismatic Timoshenko beam; finite difference method; additional points; vibration analysis; direct time integration method
Online: 24 September 2021 (13:12:35 CEST)
This paper presents an approach to the vibration analysis of axially functionally graded non-prismatic Timoshenko beams (AFGNPTB) using the finite difference method (FDM). The characteristics (cross-sectional area, moment of inertia, elastic moduli, shear moduli, and mass density) of axially functionally graded beams vary along the longitudinal axis. The Timoshenko beam theory covers cases associated with small deflections based on shear deformation and rotary inertia considerations. The FDM is an approximate method for solving problems described with differential equations. It does not involve solving differential equations; equations are formulated with values at selected points of the structure. In addition, the boundary conditions and not the governing equations are applied at the beam’s ends. In this paper, differential equations were formulated with finite differences, and additional points were introduced at the beam’s ends and at positions of discontinuity (supports, hinges, springs, concentrated mass, spring-mass system, etc.). The introduction of additional points allowed us to apply the governing equations at the beam’s ends and to satisfy the boundary and continuity conditions. Moreover, grid points with variable spacing were also considered, the grid being uniform within beam segments. Vibration analysis of AFGNPTB was conducted with this model, and natural frequencies were determined. Finally, a direct time integration method (DTIM) was presented. The FDM-based DTIM enabled the analysis of forced vibration of AFGNPTB, considering the damping. The results obtained in this study showed good agreement with those of other studies, and the accuracy was always increased through a grid refinement.
REVIEW | doi:10.20944/preprints202105.0219.v2
Subject: Life Sciences, Biochemistry Keywords: Logic-based models; Boolean models; executable models; qualitative dynamical modelling; omic data integration; in silico simulations; formal verification
Online: 30 July 2021 (15:05:03 CEST)
Discrete, logic-based models are increasingly used to describe biological mechanisms. Initially introduced to study gene regulation, these models evolved to cover various molecular mechanisms, such as signalling, transcription factor cooperativity, and even metabolic processes. The abstract nature and amenability of discrete models to robust mathematical analyses make them appropriate for addressing a wide range of complex biological problems. Recent technological breakthroughs have generated a wealth of high throughput data. Novel, literature-based representations of biological processes and emerging algorithms offer new opportunities for model construction. Here, we review up-to-date efforts to address challenging biological questions by incorporating omic data into logic-based models, and discuss critical difficulties in constructing and analysing integrative, large-scale, logic-based models of biological mechanisms.
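A minimal instance of the logic-based formalism the review surveys: a three-gene synchronous Boolean network iterated until a state repeats, revealing its attractor. The regulatory rules are invented for illustration:

```python
# A toy synchronous Boolean network: three genes with simple logic rules,
# iterated until a previously seen state recurs (an attractor). The rules
# are illustrative, not a model from the review.

def step(state):
    a, b, c = state["A"], state["B"], state["C"]
    return {
        "A": not c,       # C represses A
        "B": a,           # A activates B
        "C": a and b,     # A and B jointly activate C
    }

def find_attractor(state, max_steps=50):
    seen = []
    for _ in range(max_steps):
        if state in seen:                  # state revisited: cycle found
            return seen[seen.index(state):]
        seen.append(state)
        state = step(state)
    raise RuntimeError("no attractor within max_steps")

cycle = find_attractor({"A": True, "B": False, "C": False})
# this network settles into a 5-state limit cycle
```

Omic data enter such models by constraining the rules or the initial states; the review discusses how that integration is done at scale.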
ARTICLE | doi:10.20944/preprints202007.0227.v1
Subject: Life Sciences, Endocrinology & Metabolomics Keywords: Data integration; Metabolomics; Multi-tissue; Multiblock; Joint and unique multiblock analysis (JUMBA), OnPLS; Multiblock Orthogonal Component Analysis (MOCA)
Online: 11 July 2020 (04:01:03 CEST)
Data integration has been proven to provide valuable information. The information extracted using data integration in the form of multiblock analysis can pinpoint both common and unique trends across the different blocks. When working with small multiblock datasets, the number of applicable integration methods is drastically reduced. To investigate the application of multiblock analysis in cases with a small number of samples, we studied a small metabolomic multiblock dataset containing six blocks (i.e. tissue types), including only common metabolites. We used a single-model multiblock analysis method called Joint and Unique MultiBlock Analysis (JUMBA) and compared it to a commonly used method, concatenated PCA. These methods were used to detect trends in the dataset and identify the underlying factors responsible for metabolic variation. Using JUMBA, we were able to interpret the extracted components and link them to relevant biological properties. JUMBA shows how the observations are related to one another, the stability of these relationships, and to what extent each of the blocks contributes to the components. These results indicate that multiblock methods can be useful even with a small number of samples.
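The concatenated-PCA baseline mentioned above can be sketched on synthetic multiblock data; the block sizes, shared trend, and noise level are assumptions for illustration:

```python
# Concatenated PCA as a baseline multiblock method: each block (tissue
# type) is autoscaled, the blocks are concatenated column-wise, and the
# joint matrix is decomposed. Data are synthetic and invented.
import numpy as np

rng = np.random.default_rng(0)
n = 12                                   # few samples, as in the paper
shared = rng.normal(size=n)              # common trend across all blocks
blocks = [shared[:, None] * rng.normal(size=(1, p))
          + 0.05 * rng.normal(size=(n, p))
          for p in (8, 5, 10)]           # three blocks of metabolites

# autoscale each block, then concatenate
scaled = [(b - b.mean(0)) / b.std(0) for b in blocks]
X = np.hstack(scaled)

# PCA via SVD of the centered concatenated matrix
U, s, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
explained = s**2 / np.sum(s**2)
# the shared trend dominates the first component's explained variance
```

JUMBA goes further by separating such joint variation from block-unique variation; concatenated PCA mixes the two.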
TECHNICAL NOTE | doi:10.20944/preprints202105.0660.v2
Subject: Engineering, Civil Engineering Keywords: Axially functionally graded non-prismatic Euler-Bernoulli beam; finite difference method; additional points; vibration analysis; direct time integration method
Online: 24 September 2021 (11:27:04 CEST)
This paper presents an approach to the vibration analysis of axially functionally graded (AFG) non-prismatic Euler-Bernoulli beams using the finite difference method (FDM). The characteristics (cross-sectional area, moment of inertia, elastic moduli, and mass density) of AFG beams vary along the longitudinal axis. The FDM is an approximate method for solving problems described with differential equations. It does not involve solving differential equations; equations are formulated with values at selected points of the structure. In addition, the boundary conditions and not the governing equations are applied at the beam’s ends. In this paper, differential equations were formulated with finite differences, and additional points were introduced at the beam’s ends and at positions of discontinuity (supports, hinges, springs, concentrated mass, spring-mass system, etc.). The introduction of additional points allowed us to apply the governing equations at the beam’s ends and to satisfy the boundary and continuity conditions. Moreover, grid points with variable spacing were also considered, the grid being uniform within beam segments. Vibration analysis of AFG non-prismatic Euler-Bernoulli beams was conducted with this model, and natural frequencies were determined. Finally, a direct time integration method (DTIM) was presented. The FDM-based DTIM enabled the analysis of forced vibration of AFG non-prismatic Euler-Bernoulli beams, considering the damping. The results obtained in this paper showed good agreement with those of other studies, and the accuracy was always increased through a grid refinement.
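The finite-difference idea behind this family of papers can be illustrated on the simplest case: a uniform, simply supported Euler-Bernoulli beam, with the fourth derivative replaced by a 5-point stencil. The fictitious-point treatment of the supports used below is the classical simplification, not the paper's more general additional-point scheme:

```python
# Finite-difference vibration analysis of a uniform, simply supported
# Euler-Bernoulli beam: EI*w'''' = rhoA*omega^2*w becomes a matrix
# eigenvalue problem. Uniform properties are assumed for this sketch.
import numpy as np

L, EI, rhoA = 1.0, 1.0, 1.0
n = 60                        # grid intervals
h = L / n
m = n - 1                     # interior nodes

# 5-point stencil of the fourth derivative on interior nodes
D4 = np.zeros((m, m))
for i in range(m):
    for j, c in zip(range(i - 2, i + 3), (1, -4, 6, -4, 1)):
        if 0 <= j < m:
            D4[i, j] = c
# simple supports (w = w'' = 0): fictitious points give w(-h) = -w(h)
D4[0, 0] -= 1
D4[m - 1, m - 1] -= 1
D4 /= h**4

omega2 = np.sort(np.linalg.eigvalsh(D4 * EI / rhoA))
omega1 = np.sqrt(omega2[0])
omega1_exact = (np.pi / L)**2 * np.sqrt(EI / rhoA)   # pi^2 ~ 9.87
```

Refining the grid (larger `n`) drives `omega1` toward the exact value, mirroring the grid-refinement convergence reported in the paper.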
ARTICLE | doi:10.20944/preprints201904.0274.v1
Subject: Arts & Humanities, Linguistics Keywords: computer-aided translation; machine translation; speech translation; translation memory-machine translation integration; user interface; domain-adaptation; human-computer interface
Online: 25 April 2019 (07:59:18 CEST)
When using computer-aided translation systems in a typical, professional translation workflow, there are several stages at which there is room for improvement. The SCATE (Smart Computer-Aided Translation Environment) project investigated several of these aspects, both from a human-computer interaction point of view and from a purely technological one. This paper describes the SCATE research with respect to improved fuzzy matching, parallel treebanks, the integration of translation memories with machine translation, quality estimation, terminology extraction from comparable texts, the use of speech recognition in the translation process, and human-computer interaction and interface design for the professional translation environment. For each of these topics, we describe the experiments we performed and the conclusions drawn, providing an overview of the highlights of the entire SCATE project.
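The fuzzy matching of translation memories investigated in SCATE can be sketched with a generic similarity ratio (difflib here; the project's improved matchers are more sophisticated, and the segments below are invented):

```python
# Fuzzy matching a new source sentence against a translation memory (TM):
# the best-scoring stored segment above a threshold is retrieved together
# with its translation. Threshold and example segments are illustrative.
import difflib

def best_fuzzy_match(sentence, memory, threshold=0.7):
    """Return (score, source, target) of the best TM hit above threshold."""
    best = (0.0, None, None)
    for src, tgt in memory:
        score = difflib.SequenceMatcher(None, sentence.lower(),
                                        src.lower()).ratio()
        if score > best[0]:
            best = (score, src, tgt)
    return best if best[0] >= threshold else (best[0], None, None)

tm = [("Press the start button.", "Druk op de startknop."),
      ("Close the cover before printing.", "Sluit de klep voor het afdrukken.")]
score, src, tgt = best_fuzzy_match("Press the stop button.", tm)
# a high-scoring (but not exact) match retrieves the stored translation
```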
ARTICLE | doi:10.20944/preprints202107.0046.v1
Subject: Earth Sciences, Geoinformatics Keywords: urbanization; long-term settlement patterns; built-up land data; global human settlement layer; historical maps; topographic map processing; data integration.
Online: 2 July 2021 (10:03:54 CEST)
Spatially explicit, fine-grained datasets describing historical urban extents are rarely available prior to the era of operational remote sensing. However, such data are necessary to better understand long-term urbanization and land development processes and for the assessment of coupled nature-human systems, e.g., the dynamics of the wildland-urban interface. Herein, we propose a framework that jointly uses remote sensing derived human settlement data (i.e., the Global Human Settlement Layer, GHSL) and scanned, georeferenced historical maps to automatically generate historical urban extents for the early 20th century. By applying unsupervised color segmentation to the historical maps, spatially constrained to the urban extents derived from the GHSL, our approach generates historical settlement extents for seamless integration with the multi-temporal GHSL. We apply our method to study areas in countries across four continents, and evaluate our approach for two U.S. study sites against historical settlement extents derived from the Historical Settlement Data Compilation for the US, HISDAC-US, achieving Area-under-the-Curve values >0.9. Our results are largely in agreement with model-based urban areas from the HYDE database, and demonstrate that the integration of remote sensing derived observations and historical cartographic data sources opens up new, promising avenues for assessing urbanization, and long-term land cover change in countries where historical maps are available.
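The unsupervised color segmentation step can be illustrated in miniature with k-means on synthetic pixel colors; the colors, counts, and deterministic initialization are assumptions, and the paper additionally constrains the segmentation spatially to GHSL-derived urban extents:

```python
# Unsupervised color segmentation in miniature: k-means with two clusters
# separates "built-up" map ink from pale paper background. All pixel
# colors are synthetic and invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
built = rng.normal([200, 60, 50], 8, size=(300, 3))          # reddish ink
background = rng.normal([235, 230, 210], 8, size=(700, 3))   # pale paper
pixels = np.vstack([built, background])

def kmeans2(X, iters=10):
    # deterministic init: darkest and brightest pixels as the two centers
    centers = np.array([X[X.sum(1).argmin()], X[X.sum(1).argmax()]])
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) for j in range(2)])
    return labels, centers

labels, centers = kmeans2(pixels)
# each synthetic class ends up entirely in its own cluster
```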
ARTICLE | doi:10.20944/preprints201709.0092.v1
Subject: Engineering, Energy & Fuel Technology Keywords: renewable energy networks; principal component analysis; large-scale integration of renewables; wind power; solar power; super grid; energy system design
Online: 20 September 2017 (04:44:36 CEST)
Due to its spatio-temporal variability, the mismatch between the weather and demand patterns challenges the design of highly renewable energy systems. A principal component analysis is applied to a simplified networked European electricity system with a high share of wind and solar power generation. It reveals a small number of important mismatch patterns, which explain most of the system's required backup and transmission infrastructure. Whereas the first principal component is already able to reproduce most of the temporal mismatch variability for a solar dominated system, a few more principal components are needed for a wind dominated system. Due to its monopole structure the first principal component causes most of the system's backup infrastructure. The next few principal components have a dipole structure and dominate the transmission infrastructure of the renewable electricity network.
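The analysis pipeline (mismatch field in, principal patterns out) can be sketched on synthetic data; the node count, the shared diurnal signal, and the noise level below are invented for illustration:

```python
# PCA of a mismatch field in miniature: a shared (monopole-like) signal
# across network nodes plus local noise, decomposed into principal
# patterns. Synthetic data only; not the paper's European network model.
import numpy as np

rng = np.random.default_rng(2)
T, N = 2000, 10                                   # hours, network nodes
season = np.sin(2 * np.pi * np.arange(T) / 24)    # shared diurnal pattern
mismatch = (np.outer(season, rng.uniform(0.5, 1.5, N))  # monopole-like
            + 0.3 * rng.normal(size=(T, N)))            # local noise

X = mismatch - mismatch.mean(0)
cov = X.T @ X / (T - 1)
evals, evecs = np.linalg.eigh(cov)
explained = evals[::-1] / evals.sum()
# the first principal component captures the shared mismatch pattern
```

As in the paper, the variance explained by the leading component indicates how much backup/transmission infrastructure a single mismatch pattern drives.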
ARTICLE | doi:10.20944/preprints202105.0252.v2
Subject: Engineering, Civil Engineering Keywords: Timoshenko beam; finite difference method; additional points; element stiffness matrix; tapered beam; second-order analysis; vibration analysis; direct time integration method
Online: 30 September 2021 (16:24:48 CEST)
This paper presents an approach to the Timoshenko beam theory (TBT) using the finite difference method (FDM). The Timoshenko beam theory covers cases associated with small deflections based on shear deformation and rotary inertia considerations. The FDM is an approximate method for solving problems described with differential equations. It does not involve solving differential equations; equations are formulated with values at selected points of the structure. In addition, the boundary conditions and not the governing equations are applied at the beam’s ends. The model developed in this paper consisted of formulating differential equations with finite differences and introducing additional points at the beam’s ends and at positions of discontinuity (concentrated loads or moments, supports, hinges, springs, abrupt change of stiffness, spring-mass system, etc.). The introduction of additional points allowed us to apply the governing equations at the beam’s ends. Moreover, grid points with variable spacing were considered, the grid being uniform within beam segments. First-order, second-order, and vibration analyses of structures were conducted with this model. Furthermore, tapered beams were analyzed (element stiffness matrix, second-order analysis, vibration analysis). Finally, a direct time integration method (DTIM) was presented; the FDM-based DTIM enabled the analysis of forced vibration of structures, with damping taken into account. The results obtained in this paper showed good agreement with those of other studies, and the accuracy was increased through a grid refinement. Especially in the first-order analysis of uniform beams, the results were exact for uniformly distributed and concentrated loads regardless of the grid.
ARTICLE | doi:10.20944/preprints202104.0541.v1
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Sinc methods; inverse Laplace transform; indefinite integrals; fractional calculus; Mittag−Leffler function; Prabhakar function; variable fractional order differentiation; variable fractional order integration
Online: 20 April 2021 (12:45:42 CEST)
We shall discuss three methods of inverse Laplace transforms: a Sinc-Thiele approximation, a pure Sinc, and a Sinc-Gaussian based method. The last two Sinc-related methods are exact methods of inverse Laplace transforms which allow a numerical approximation using Sinc methods. The inverse Laplace transform converges exponentially and does not use Bromwich contours for computations. We apply the three methods to Mittag-Leffler functions incorporating one, two, and three parameters; the three-parameter Mittag-Leffler function represents Prabhakar’s function. The exact Sinc methods are used to solve fractional differential equations of constant and variable differentiation order.
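For orientation, numerical Laplace inversion without Bromwich contours can also be illustrated with the classic Gaver-Stehfest algorithm; this is a well-known alternative shown for context, not the paper's Sinc-based scheme:

```python
# Gaver-Stehfest numerical inverse Laplace transform: f(t) is recovered
# from real-axis samples of F(s), with no contour integration. Works well
# for smooth, real-valued transforms; shown for context only.
from math import factorial, log, exp

def stehfest(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s); N must be even."""
    half = N // 2
    total = 0.0
    for k in range(1, N + 1):
        v = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            v += (j**half * factorial(2 * j)
                  / (factorial(half - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        v *= (-1) ** (k + half)
        total += v * F(k * log(2) / t)
    return total * log(2) / t

# F(s) = 1/(s+1) has the inverse transform f(t) = exp(-t)
approx = stehfest(lambda s: 1.0 / (s + 1.0), t=1.0)
```

Sinc-based schemes, by contrast, come with exponential convergence guarantees, which is what the paper exploits for Mittag-Leffler functions.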
REVIEW | doi:10.20944/preprints202011.0290.v1
Subject: Keywords: plant diversity; plant productivity; humped pattern; intrinsic rate of species richness; complementary effect; resource availability; disturbance; species pool effect; competition exclusion; process integration
Online: 10 November 2020 (08:28:28 CET)
The plant productivity-richness relationship (PPR) is one of the most debated and important issues in ecology. Research on this issue has passed through distinct stages, including the discovery of the different PPR shapes, tests of the individual influencing processes, and integrative research combining vegetation surveys, manipulation experiments, and theoretical analysis. The debate largely focuses on what the dominant shapes and underlying mechanisms are. Recent integrative studies, following analyses of the individual processes affecting PPR, have found that the humped, asymptotic, positive, negative, and irregular shapes of PPR are linked to each other: one shape may change into another, and the balance between positive and negative processes determines which shape emerges. Plant diversity has a globally positive effect on plant productivity.
Subject: Behavioral Sciences, Applied Psychology Keywords: multimodal experiment; multisensory experiment; automatic device integration; open-source; PsychoPy; Unity; Virtual Reality (VR); Lab Streaming Layer; LabRecorder; LabRecorderCLI; Windows command line (cmd.exe)
Online: 12 October 2020 (07:06:28 CEST)
The human mind is multimodal. Yet most behavioral studies rely on century-old measures of behavior: task accuracy and latency (response time). Multimodal and multisensory analysis of human behavior creates a better understanding of how the mind works. The problem is that designing and implementing these experiments is technically complex and costly. This paper introduces versatile and economical means of developing multimodal-multisensory human experiments. We provide an experimental design framework that automatically integrates and synchronizes measures including electroencephalogram (EEG), galvanic skin response (GSR), eye-tracking, virtual reality (VR), body movement, mouse/cursor motion, and response time. Unlike proprietary systems (e.g., iMotions), our system is free and open-source; it integrates PsychoPy, Unity, and Lab Streaming Layer (LSL). The system embeds LSL inside PsychoPy/Unity to synchronize multiple sensory signals (gaze motion, EEG, GSR, mouse/cursor movement, and body motion) from low-cost consumer-grade devices, in a simple behavioral task designed in PsychoPy and a virtual reality environment designed in Unity. This tutorial shows a step-by-step process by which a complex multimodal-multisensory experiment can be designed and implemented in a few hours. When the experiment is conducted, all data synchronization and recording of the data to disk are done automatically.
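The timestamp-based synchronization that LSL provides can be sketched with standard-library code; the sample rates and values below are simulated, and real use would go through pylsl streams rather than plain lists:

```python
# Aligning two recorded data streams on a shared clock: for each sample of
# a fast stream (EEG-like, 250 Hz) we look up the temporally nearest
# sample of a slow stream (GSR-like, 4 Hz). Simulated data; a stand-in
# for what LSL's shared timestamps make possible.
import bisect

eeg = [(i / 250.0, 0.1 * i) for i in range(1000)]      # (time_s, value)
gsr = [(i / 4.0, 5.0 + 0.01 * i) for i in range(16)]   # (time_s, value)

gsr_times = [t for t, _ in gsr]

def nearest_gsr(t):
    """Value of the GSR sample whose timestamp is closest to t."""
    i = bisect.bisect_left(gsr_times, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(gsr)]
    j = min(candidates, key=lambda j: abs(gsr_times[j] - t))
    return gsr[j][1]

aligned = [(t, v, nearest_gsr(t)) for t, v in eeg]
# every EEG sample is now paired with the temporally closest GSR reading
```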
ARTICLE | doi:10.20944/preprints202102.0559.v3
Subject: Keywords: Euler Bernoulli beam; finite difference method; additional points; element stiffness matrix; tapered beam; first-order analysis; second-order analysis; vibration analysis; direct time integration method
Online: 23 September 2021 (13:08:38 CEST)
This paper presents an approach to the Euler-Bernoulli beam theory (EBBT) using the finite difference method (FDM). The EBBT covers the case of small deflections, and shear deformations are not considered. The FDM is an approximate method for solving problems described with differential equations. The FDM does not involve solving differential equations; equations are formulated with values at selected points of the structure. Generally, the finite difference approximations are derived based on fourth-order polynomial hypothesis (FOPH) and second-order polynomial hypothesis (SOPH) for the deflection curve; the FOPH is made for the fourth and third derivative of the deflection curve while the SOPH is made for its second and first derivative. In addition, the boundary conditions and not the governing equations are applied at the beam’s ends. In this paper, the FOPH was made for all of the derivatives of the deflection curve, and additional points were introduced at the beam’s ends and positions of discontinuity (concentrated loads or moments, supports, hinges, springs, etc.). The introduction of additional points allowed us to apply the governing equations at the beam’s ends and to satisfy the boundary and continuity conditions. Moreover, grid points with variable spacing were also considered, the grid being uniform within beam segments. First-order analysis, second-order analysis, and vibration analysis of structures were conducted with this model. Furthermore, tapered beams were analyzed (element stiffness matrix, second-order analysis). Finally, a direct time integration method (DTIM) was presented. The FDM-based DTIM enabled the analysis of forced vibration of structures, with damping taken into account. The results obtained in this paper showed good agreement with those of other studies, and the accuracy was increased through a grid refinement. 
Especially in the first-order analysis of uniform beams, the results were exact for uniformly distributed and concentrated loads regardless of the grid. Further research will be needed to investigate polynomial refinements of the deflection curve (higher-order polynomials such as fifth-order, sixth-order…); these refinements aim to increase the accuracy, with non-centered finite difference approximations used at the beam’s ends and at positions of discontinuity.
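The direct time integration method (DTIM) these papers present can be illustrated for a single degree of freedom using the central difference scheme; the parameters below are illustrative, and the papers apply the idea to the full discretized structure:

```python
# Central difference direct time integration for one degree of freedom,
# m*a + c*v + k*x = 0, released from rest. An undamped 1 Hz oscillator
# (illustrative parameters) should return to its initial amplitude after
# one full period.
import math

m, c, k = 1.0, 0.0, (2 * math.pi) ** 2   # undamped SDOF, 1 Hz
dt, steps = 0.001, 1000                  # integrate over one period (1 s)

x = 1.0                                  # x(0), released from rest
a0 = (-k * x) / m
x_prev = x + 0.5 * a0 * dt**2            # Taylor start for x(-dt), v0 = 0
for _ in range(steps):
    a = (-c * (x - x_prev) / dt - k * x) / m
    x_next = 2 * x - x_prev + a * dt**2
    x_prev, x = x, x_next
# after one period the undamped response is back near x = 1.0
```

The scheme is explicit and conditionally stable; shrinking `dt` reduces the period error, the same grid-refinement behavior the papers report in space.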
ARTICLE | doi:10.20944/preprints202104.0612.v2
Subject: Engineering, Biomedical & Chemical Engineering Keywords: centrifugal microfluidics; Lab-on-a-Disc; centrifugo-pneumatic flow control; integration; multiplexing; parallelization; sample-to-answer; reliability; tolerances; design-for-manufacture; digital twin; event-triggering
Online: 8 June 2021 (11:23:58 CEST)
Fluidic larger-scale integration (LSI) resides at the heart of comprehensive sample-to-answer automation and parallelization of assay panels for frequent and ubiquitous bioanalytical testing in decentralized point-of-use / point-of-care settings. This paper develops a novel “digital twin” strategy with an emphasis on rotational, centrifugo-pneumatic flow control. The underlying model systematically connects the retention rates of rotationally actuated valves, a key element of LSI, to experimental input parameters. For the first time, the concept of band widths in frequency space is introduced as the decisive quantity characterizing operational robustness, a set of quantitative performance metrics guiding algorithmic optimization of disc layouts is defined, and the engineering principles of advanced, logical flow control and timing are elucidated. Overall, the digital twin enables efficient design for automating multiplexed bioassay protocols on such “Lab-on-a-Disc” (LoaD) systems, featuring high packing density, reliability, configurability, modularity, and manufacturability, to eventually minimize the cost, time, and risk of development and production.
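The band-width-in-frequency-space idea can be sketched with a toy burst-frequency calculation under manufacturing tolerances; the pressure balance and all numbers below are illustrative assumptions, not the paper's model:

```python
# A toy "digital twin" calculation: a rotationally actuated valve bursts
# when centrifugal pressure rho*omega^2*r_mean*delta_r exceeds a critical
# counter-pressure, and manufacturing tolerances broaden the nominal burst
# frequency into a band. All numbers are invented for illustration.
import math, random

random.seed(0)
RHO = 1000.0                      # water, kg/m^3

def burst_rpm(r_mean, delta_r, p_crit):
    omega = math.sqrt(p_crit / (RHO * r_mean * delta_r))  # rad/s
    return omega * 60 / (2 * math.pi)

# nominal geometry/pressure with tolerances, sampled Monte-Carlo style
samples = [burst_rpm(random.gauss(0.03, 0.0005),     # mean radius, m
                     random.gauss(0.005, 0.0001),    # liquid column, m
                     random.gauss(3000.0, 50.0))     # burst pressure, Pa
           for _ in range(5000)]
band = (min(samples), max(samples))
# the band width limits how many valves can be addressed sequentially
# within the available spin-frequency range
```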
ARTICLE | doi:10.20944/preprints202111.0327.v1
Subject: Engineering, Civil Engineering Keywords: KirchhoffLove plate; finite difference method; additional points; plate of varying thickness; plate with stiffeners; skew edge; plate buckling analysis; vibration analysis; direct time integration method;
Online: 18 November 2021 (13:51:59 CET)
This paper presents an approach to the Kirchhoff-Love plate theory (KLPT) using the finite difference method (FDM). The KLPT covers the case of small deflections, and shear deformations are not considered. The FDM is an approximate method for solving problems described with differential equations. The FDM does not involve solving differential equations; equations are formulated with values at selected points of the structure. Generally in the case of KLPT, the finite difference approximations are derived based on the fourth-order polynomial hypothesis (FOPH) and second-order polynomial hypothesis (SOPH) for the deflection surface. The FOPH is made for the fourth and third derivative of the deflection surface while the SOPH is made for its second and first derivative; this leads to a 13-point stencil for the governing equation. In addition, the boundary conditions and not the governing equations are applied at the plate edges. In this paper, the FOPH was made for all of the derivatives of the deflection surface; this led to a 25-point stencil for the governing equation. Furthermore, additional nodes were introduced at plate edges and at positions of discontinuity (continuous supports/hinges, incorporated beams, stiffeners, abrupt change of stiffness, etc.), the number of additional nodes corresponding to the number of boundary conditions at the node of interest. The introduction of additional nodes allowed us to apply the governing equations at the plate edges and to satisfy the boundary and continuity conditions. First-order analysis, second-order analysis, buckling analysis, and vibration analysis of plates were conducted with this model. Moreover, plates of varying thickness and plates with stiffeners were analyzed. Finally, a direct time integration method (DTIM) was presented. The FDM-based DTIM enabled the analysis of forced vibration of structures, with damping taken into account.
In first-order, second-order, buckling, and vibration analyses of rectangular plates, the results obtained in this paper were in good agreement with those of well-established methods, and the accuracy was increased through a grid refinement.
REVIEW | doi:10.20944/preprints201809.0281.v1
Subject: Life Sciences, Cell & Developmental Biology Keywords: axon guidance; growth cone; cytoskeleton; caspases; apoptosis; signal integration; basal level of caspase activity; death associated inhibitor of apoptosis; axon branching; Netrin; DCC; frazzled; slit; robo; Drosophila
Online: 16 September 2018 (09:43:52 CEST)
Navigating growth cones are exposed to multiple signals simultaneously and have to integrate competing cues into a coherent navigational response. Integration of guidance cues is traditionally thought to occur at the level of cytoskeletal dynamics. Drosophila studies indicate that cells exhibit a low level of continuous caspase protease activation, and that axon guidance cues can activate or suppress caspase activity. We base a model for axon guidance on these observations. By analogy with other systems in which caspase signaling has non-apoptotic functions, we propose that caspase signaling can either reinforce repulsion or negate attraction in response to external guidance cues by cleaving cytoskeletal proteins. Over the course of an entire trajectory, incorrectly navigating axons may pass the threshold for apoptosis and be eliminated, whereas axons making correct decisions will survive. These observations would also explain why neurotrophic factors can act as axon guidance cues and why axon guidance systems such as Slit/Robo signaling may act as tumor suppressors in cancer.
ARTICLE | doi:10.20944/preprints202205.0183.v1
Subject: Engineering, Energy & Fuel Technology Keywords: Energy geostructure; ground source heat pump (GSHP); sustainable urban drainage system (SUDS); sector integration; 5th generation district heating and cooling; permeable asphalt; rainwater retardation; full-scale demonstration; numerical modelling; analytical modelling
Online: 13 May 2022 (08:06:39 CEST)
This paper proposes and demonstrates, in full scale, a novel type of energy geostructure (“the Climate Road”) that combines a ground source heat pump (GSHP) with a sustainable urban drainage system (SUDS) by utilizing the gravel roadbed simultaneously as an energy source and a rainwater retarding basin. The Climate Road measures 50 m × 8 m × 1 m (length, width, depth) and has 800 m of geothermal piping embedded in the roadbed, serving as the heat collector for a GSHP that supplies a nearby kindergarten with domestic hot water and space heating. Model analysis of operational data from 2018-2021 indicates sustainable annual heat production of around 0.6 MWh per meter of road, with a COP of 2.9-3.1. The continued infiltration of rainwater into the roadbed increases the amount of extractable heat by an estimated 17% compared to the case of zero infiltration. Using the developed model for scenario analysis, we find that draining rainwater from three single-family houses and storing 30% of the annual heating consumption in the roadbed increases the predicted extractable energy by 56% compared to zero infiltration with no seasonal energy storage. The Climate Road is capable of supplying three single-family houses with heating, cooling, and rainwater management year-round.
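The headline figures imply a simple energy balance; the sketch below assumes the reported ~0.6 MWh per meter refers to heat delivered by the heat pump (the abstract does not specify delivered vs. extracted), and a mid-range COP of 3.0:

```python
# Back-of-the-envelope GSHP energy balance from the reported figures:
# heat delivered = COP * electricity in, so the ground supplies the
# difference. The "delivered" interpretation of the per-meter figure is
# an assumption of this sketch.

road_length_m = 50
heat_per_m_mwh = 0.6          # reported annual heat production per meter
cop = 3.0                     # mid-range of the reported 2.9-3.1

heat_delivered = road_length_m * heat_per_m_mwh          # ~30 MWh/yr
electricity_in = heat_delivered / cop                    # ~10 MWh/yr
heat_from_ground = heat_delivered - electricity_in       # ~20 MWh/yr
```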