ARTICLE | doi:10.20944/preprints202106.0705.v1
Subject: Biology And Life Sciences, Anatomy And Physiology Keywords: DNA pooling; parentage; reproduction
Online: 29 June 2021 (12:54:42 CEST)
Phenotypes are necessary for genomic evaluations and management. Genomics can sometimes be used to measure phenotypes when other methods are difficult or expensive. The prolificacy of bulls used in multiple-bull pastures for commercial beef production is an example. A retrospective study of 79 bulls aged 2 years and older, used 141 times in 4-5 pastures across 4 years, was used to estimate repeatability from variance components. Traits available before each season's use were tested for predictive ability. Sires were matched to calves using individual genotypes and evaluating exclusions. A lower-cost method of measuring prolificacy was simulated for 5 pastures using the bulls' genotypes together with pooled genotypes to estimate the average allele frequencies of calves and of cows. Repeatability of prolificacy was 0.62 ± 0.09. A combination of age class and scrotal circumference accounted for less than 5% of the variation. Simulated estimation of prolificacy by pooling calf DNA was accurate; adding pooled cow DNA or actual genotypes increased accuracy by about the same amount. Knowing a bull's prior prolificacy would help predict future prolificacy for management purposes and could be used in genomic evaluations and research with coordination between breeders and commercial beef producers.
ARTICLE | doi:10.20944/preprints202308.0825.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: max-pooling; convolutional neural network (CNN); FPGA; rank tracking based max-pooling (RTB-MAXP); cascaded maximum based max-pooling (CMB-MAXP)
Online: 10 August 2023 (05:48:50 CEST)
This paper proposes two max-pooling engines, named the RTB-MAXP engine and the CMB-MAXP engine, with a scalable window size parameter for FPGA-based convolutional neural network (CNN) implementation. The max-pooling operation for the CNN can be decomposed into two stages, i.e., a horizontal-axis max-pooling operation and a vertical-axis max-pooling operation. These two one-dimensional max-pooling operations are performed by tracking the rank of the values within the window in the RTB-MAXP engine and by cascading the maximum operations of the values in the CMB-MAXP engine. Both the RTB-MAXP engine and the CMB-MAXP engine were implemented using VHSIC Hardware Description Language (VHDL) and verified by simulations. They have been employed and tested in our CNN accelerator targeting the CNN model YOLOv4-CSP-S-Leaky for object detection.
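The two-stage decomposition described above - a horizontal 1D max-pooling followed by a vertical 1D max-pooling - can be sketched in software. This is a minimal illustration of the separable principle, not the VHDL implementation; the function names are ours:

```python
def maxpool_1d(row, k):
    # slide a non-overlapping window of size k along one axis,
    # keeping the maximum of each window
    return [max(row[i:i + k]) for i in range(0, len(row) - k + 1, k)]

def maxpool_2d_separable(img, k):
    # stage 1: horizontal-axis max-pooling on each row
    h = [maxpool_1d(row, k) for row in img]
    # stage 2: vertical-axis max-pooling on each column of the result
    cols = list(zip(*h))
    v = [maxpool_1d(list(col), k) for col in cols]
    # transpose back to row-major layout
    return [list(r) for r in zip(*v)]
```

A 4x4 input with a window of 2 yields the same 2x2 result as a direct two-dimensional max-pooling, which is what makes the hardware decomposition into two one-dimensional engines valid.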
ARTICLE | doi:10.20944/preprints202302.0026.v1
Subject: Computer Science And Mathematics, Computer Vision And Graphics Keywords: Graph Neural Network; Variational Autoencoder; Pooling; Nearest Neighbours
Online: 2 February 2023 (03:36:17 CET)
We present a Deep Learning generative model specialized for graphs with a regular geometry. It is built on a Variational Autoencoder framework and employs graph convolutional layers in both the encoding and decoding phases. We also introduce a pooling technique (ReNN-Pool), used in the encoder, that downsamples graph nodes in a spatially uniform and highly interpretable way. In the decoder, a symmetrical un-pooling technique is used to retrieve the original dimensionality of the graphs. The performance of the model is tested on the standard Sprite benchmark dataset, a set of 2D images of video game characters, after suitably transforming the image data into graphs, and on the more realistic use case of a dataset of cylindrical-shaped graph data describing the distributions of the energy deposited by a particle beam in a medium.
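As a rough illustration of spatially uniform node downsampling on a regular-geometry graph, a stride-based selection can be sketched as below. This is a hypothetical sketch of ours, not the actual ReNN-Pool criterion, which is based on recursive nearest neighbours:

```python
def uniform_grid_pool(h, w, stride=2):
    """Keep every `stride`-th node of an h x w grid graph (row-major ids).

    Hypothetical illustration of spatially uniform downsampling only;
    ReNN-Pool's actual selection rule differs.
    """
    return [r * w + c
            for r in range(0, h, stride)
            for c in range(0, w, stride)]
```

On a 4x4 grid with stride 2 this keeps nodes 0, 2, 8, and 10, i.e., one node per 2x2 patch; a symmetrical un-pooling step would reinsert the dropped nodes to restore the original dimensionality.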
ARTICLE | doi:10.20944/preprints202101.0105.v1
Subject: Medicine And Pharmacology, Immunology And Allergy Keywords: World Trade Center; Exposure; Cancer; Rescue and recovery workers; Pooling cohorts
Online: 6 January 2021 (10:30:43 CET)
Three cohorts, the Fire Department of the City of New York (FDNY), the World Trade Center Health Registry (WTCHR), and the General Responder Cohort (GRC), each funded by the World Trade Center Health Program, have reported associations between WTC exposures and cancer. Results have generally been consistent, with effect estimates for excess incidence of all cancers ranging from 6 to 14% above background rates. Pooling would increase sample size and de-duplicate cases between the cohorts. Pooling required time-consuming steps: obtaining IRB approvals and legal agreements from the entities involved; establishing an honest broker for managing the data; de-duplicating the pooled cohort files; applying to State Cancer Registries (SCRs) for matched cancer cases; and finalizing analysis data files. Obtaining SCR data use agreements took from 6.5 to 114.5 weeks, with six states requiring >20 weeks. Records from FDNY (n=16,221), WTCHR (n=29,372), and GRC (n=33,427) were combined and de-duplicated, resulting in 69,102 unique individuals. Overall, 7,894 cancer tumors were matched to the pooled cohort, increasing the number of cancers by as much as 58% compared to previous analyses. Pooling resulted in a coherent resource for future research on rare cancers and mortality that is more representative of occupations and WTC exposures.
ARTICLE | doi:10.20944/preprints202209.0223.v1
Subject: Medicine And Pharmacology, Epidemiology And Infectious Diseases Keywords: Covid; Covid testing; sample pooling; resources; time; binary system; probability; positivity rate
Online: 15 September 2022 (08:14:36 CEST)
In Los Angeles, at one point, the Covid-19 testing positivity rate was 6.25%, or one in sixteen. This means that, on average, one in sixteen specimens tests positive and the vast majority test negative. Usually, we run sixteen tests on sixteen specimens to identify the positive one(s). This process can be time-consuming and expensive. Since a group of negative specimens pooled together for testing will produce a negative result, one single test can potentially eliminate many specimens. Only when the pooled specimen tests positive do we need further testing to identify the positive one(s). Based on this concept, we designed a strategy that identifies the positive specimen(s) efficiently. Assuming one in sixteen specimens is positive, we find that only four tests are needed. Furthermore, we can run them simultaneously, saving both resources and time. Although, in the real world, we cannot assume only one positive specimen, the same strategy works with slight modification and proves to be much more efficient than conventional testing. Our strategy returns an answer 48% of the time in four tests and one time cycle. Overall, the average number of tests is seven or eight depending on the follow-up testing, and the average number of time cycles is about one and a half.
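The four-simultaneous-tests result follows from binary encoding: pool t contains every specimen whose index has bit t set, so the pattern of positive pools spells out the index of the positive specimen in binary. A minimal sketch under the idealized single-positive assumption (the function name is ours):

```python
def binary_pooled_test(is_positive, n_tests=4):
    # is_positive: truth status of 2**n_tests specimens, assumed to
    # contain exactly one positive (the idealized case; multiple
    # positives require the modified follow-up strategy)
    index = 0
    for t in range(n_tests):
        # pool t: all specimens whose index has bit t set
        pool_positive = any(
            is_positive[s] for s in range(len(is_positive)) if (s >> t) & 1
        )
        if pool_positive:
            index |= 1 << t
    return index
```

All four pools can be formed and run at the same time, which is why the answer arrives in a single time cycle: log2(16) = 4 tests replace sixteen.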
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: classification; optimization; batch normalization; kernel regularization; convolution; pooling; dropout layer; learning rate
Online: 20 July 2021 (09:34:53 CEST)
Alcoholism is attributed to regular or excessive drinking of alcohol and leads to disturbance of the neuronal system in the human brain. This results in certain malfunctioning of neurons that can be detected by an electroencephalogram (EEG) using several electrodes placed on the human skull at appropriate positions. It is of great interest to be able to classify an EEG activity as that of a normal person or an alcoholic person using data from the minimum possible number of electrodes (or channels). Due to the complex nature of EEG signals, accurate classification of alcoholism using only a small amount of data is a challenging task. Artificial neural networks, specifically convolutional neural networks (CNN), provide efficient and accurate results in various pattern-based classification problems. In this work, we apply a CNN to raw EEG data and demonstrate how we achieved 98% average accuracy by optimizing a baseline CNN model and outperforming its results across a range of performance evaluation metrics on the UCI KDD EEG dataset. This article explains the step-wise improvement of the baseline model using dropout, batch normalization, and kernel regularization techniques, and provides a comparison of the two models that can be beneficial for aspiring practitioners who aim to develop similar classification models with CNNs. A performance comparison is also provided with other approaches using the same dataset.
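Of the three techniques named, batch normalization is the simplest to make concrete: each feature is normalized over the batch and then rescaled by learnable parameters. A minimal NumPy sketch of the forward pass (our illustration, not the paper's model code):

```python
import numpy as np

def batch_norm_forward(x, gamma=1.0, beta=0.0, eps=1e-5):
    # x: (batch, features) activations; normalize each feature
    # over the batch axis
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero mean, unit variance
    return gamma * x_hat + beta             # learnable scale and shift
```

Keeping layer inputs normalized in this way stabilizes training and typically permits higher learning rates, which is one reason it helps when optimizing a baseline CNN.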
Subject: Computer Science And Mathematics, Computer Vision And Graphics Keywords: Knowledge graphs; hierarchical pooling; graph classification; graph neural networks; FPool; large graph sensor datasets
Online: 15 June 2021 (14:32:53 CEST)
In considering knowledge graphs across a diverse range of domains of interest, graph neural networks have demonstrated significant improvements in node classification and prediction when applied to graph representations, with learned node embeddings effectively representing the hierarchical properties of graphs. DiffPool is a deep-learning approach using a differentiable graph pooling technique that generates hierarchical representations of graphs. DiffPool learns a differentiable soft cluster assignment for nodes at each layer of a deep graph neural network, with nodes mapped to sets of clusters. However, control of the learning process is difficult given the complexity and large number of parameters of an 'end-to-end' model. To address this difficulty we propose a novel approach termed FPool, which is predicated on the basic approach adopted in DiffPool (where pooling is applied directly to node representations). Methods designed to enhance data classification have been developed and evaluated using a number of popular and publicly available sensor datasets. Experimental results for FPool demonstrate improved classification and prediction performance when compared to alternative methods. Moreover, FPool shows an important reduction in training time over the basic DiffPool framework.
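DiffPool's soft cluster assignment can be sketched in a few lines: an assignment matrix S, whose rows sum to one, maps n nodes to k clusters, and coarsened features and adjacency follow by matrix products. A minimal NumPy illustration in which a single linear propagation stands in for the learned GNNs (our simplification, not the authors' code):

```python
import numpy as np

def softmax_rows(z):
    # numerically stable row-wise softmax
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def diffpool_layer(A, X, W_embed, W_assign):
    # A: (n, n) adjacency, X: (n, d) node features.
    # In DiffPool both Z and S come from GNNs; a single linear
    # propagation A @ X @ W stands in for them here.
    Z = A @ X @ W_embed                 # node embeddings, (n, d')
    S = softmax_rows(A @ X @ W_assign)  # soft cluster assignment, (n, k)
    X_pooled = S.T @ Z                  # coarsened features, (k, d')
    A_pooled = S.T @ A @ S              # coarsened adjacency, (k, k)
    return A_pooled, X_pooled, S
```

Because every operation is differentiable, gradients flow through the pooling step, enabling end-to-end training; the same coupling of many learned parameters is also what makes the process hard to control.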
ARTICLE | doi:10.20944/preprints202109.0329.v1
Subject: Computer Science And Mathematics, Information Systems Keywords: BOINC; cloud computing; economic uncertainty; framework; grid computing; optimization; resource pooling; Small and Medium-Sized Enterprise; technology acceptance model; virtualization
Online: 20 September 2021 (11:49:41 CEST)
Market turbulence and fiscal investment pressures have altered IT infrastructure performance as businesses pursue the adoption of expensive new technologies. Yet few studies have examined how shareware solutions can help Medium-Sized Enterprises improve efficiency and sustainability. This study combines PLS-SEM with a dual primary data-collection approach to move beyond shallow perceptions and outlooks surrounding the need for competitive innovation. The unified model was applied to sampled respondents and analyzed using ordinal regression, which yielded robust associations supporting acceptance of the hypotheses. The adoption of a BOINC shareware mesh network for unified processing was employed to build yield, promising financial benefit through coordinated interworking, and hence improved group accomplishment and established greater esteem. This paper showcases a flexible internal IT infrastructure under economic uncertainty, together with a framework advancement for Exostructure as a Service. Associated theoretical and practical implications are discussed.
REVIEW | doi:10.3390/sci2030068
Subject: Medicine And Pharmacology, Tropical Medicine Keywords: COVID-19; pooling clinical trials; hyperinfection; steroids; treatment; targeted healthcare; population health management; cancer treatment; clinical research; clinical trials; developing vaccines; ranking and rating hospital quality; school closures; interventions for delirium; assessments of COVID-19 death inequities; regulatory safeguards; preventing child abuse and maltreatment; prevalence of health care worker burnout; nursing home ratings; challenging oncology practice; addressing racial; ethnic; social and economic divides; violence against sexual minority adolescents; primary tumors; metastasis; stages of cancer; reforming cancer clinical trials; supporting carers; protection and prevention; benign and malignant tumors; protection of healthcare personnel; comparing excess deaths in NYC; 1918 influenza pandemic; the possibility of full recovery from COVID-19; mental health impact of COVID-19 on young adults; ranking and rating nursing home quality
Online: 21 August 2020 (00:00:00 CEST)
The SARS-CoV-2 virus that causes the COVID-19 disease has wreaked havoc on the world community in terms of every imaginable parameter. The research output on COVID-19 has been nothing short of phenomenal, especially in the medical and biomedical sciences, where the search for a potential vaccine is being conducted in earnest. Much of the advanced research has been distributed in the leading medical journals, including the Journal of the American Medical Association (JAMA), where the latest research is distributed on a daily basis. The purpose of this paper is to provide some perspectives on 44 interesting and highly topical research papers that have been published in JAMA, at the time of writing, within the past two weeks. The diverse topics include public health, general medicine, internal medicine, oncology, paediatrics, geriatrics, and biostatistics.