Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

A Parallel Software Pipeline for Personalized Medicine

Version 1 : Received: 31 March 2018 / Approved: 2 April 2018 / Online: 2 April 2018 (07:53:23 CEST)

A peer-reviewed article of this Preprint also exists.

Agapito, G.; Guzzi, P.H.; Cannataro, M. A Parallel Software Pipeline for DMET Microarray Genotyping Data Analysis. High-Throughput 2018, 7, 17.


Personalized medicine is an aspect of P4 medicine (predictive, preventive, personalized and participatory) based on the customization of all medical decisions to each individual subject. In personalized medicine, the development of medical treatments and drugs is tailored to the individual characteristics and needs of each subject, according to the study of diseases at different scales, from genotype to phenotype. To realize the goal of personalized medicine, it is necessary to employ high-throughput methodologies such as Next Generation Sequencing (NGS), Genome-Wide Association Studies (GWAS), Mass Spectrometry or Microarrays, which are able to investigate a single disease from a broader perspective. For example, by using genotyping microarrays (e.g. collections of Single Nucleotide Polymorphisms - SNPs) it is possible to uncover the reasons (i.e. mutations in genes) why a treatment works properly in some patients (for example, absence of mutated genes) but does not work in others (presence of mutated genes). A side effect of high-throughput methodologies is the massive amount of data produced by each single experiment, which poses several challenges (e.g. high execution time and memory requirements) to bioinformatics software. Thus a main requirement of modern bioinformatics software is the use of good software engineering methods and efficient programming techniques able to face those challenges, including parallel programming and efficient, compact data structures. Consequently, to exploit the full potential of this massive amount of data in the shortest possible time (before the data becomes obsolete), the need arises to develop parallel software tools for efficient data collection and analysis.
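The kind of per-sample parallelism described above can be sketched as follows. This is a minimal illustration only, assuming a toy genotype-call representation (gene, "allele/allele" pairs); the function and sample names are hypothetical and are not part of the actual microPipe implementation.

```python
# Hypothetical sketch: counting mutated (heterozygous or non-reference)
# genotype calls for many samples in parallel with a process pool.
from multiprocessing import Pool

def count_mutated(sample):
    """Count calls whose two alleles differ, for one (id, calls) sample."""
    sample_id, calls = sample
    n = sum(1 for _, genotype in calls if len(set(genotype.split("/"))) > 1)
    return sample_id, n

if __name__ == "__main__":
    samples = [
        ("patient-1", [("CYP2D6", "A/A"), ("CYP2C19", "A/G")]),
        ("patient-2", [("CYP2D6", "G/G"), ("CYP2C19", "C/C")]),
    ]
    # Each sample is analyzed independently, so the work parallelizes trivially.
    with Pool(processes=2) as pool:
        results = dict(pool.map(count_mutated, samples))
    print(results)  # {'patient-1': 1, 'patient-2': 0}
```

Because each sample (or each chunk of a large dataset) is independent, the same pattern scales to as many worker processes as cores are available.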
Moreover, due to the heterogeneity of the data produced by the different kinds of experimental platforms, it is necessary to automate, in a comprehensive software pipeline, the various steps that compose a bioinformatics analysis, such as: the preprocessing of raw data to remove noise or corrupted data; the annotation of data with external knowledge (e.g. Gene Ontology); and the integration of molecular data with clinical data. It should be noted that such steps are necessary to make statistical or data mining analysis more effective. This paper presents the design and experimentation of a comprehensive software pipeline, named microPipe, for the preprocessing, annotation and analysis of microarray-based SNP genotyping data. A case study in pharmacogenomics is presented. The main advantages of using microPipe are: the reduction of errors that may happen when making data compatible among different tools; the possibility to analyze huge datasets in parallel; and the easy annotation and integration of data.
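The three pipeline stages named above (preprocessing, annotation, integration) compose naturally as a chain of transformations. The sketch below is illustrative only: the function names, data shapes, and the "NoCall" corrupted-call convention are assumptions for the example and do not reflect the actual microPipe API.

```python
# Hypothetical sketch of the three pipeline stages: preprocess -> annotate -> integrate.

def preprocess(raw_calls):
    """Drop corrupted calls (here: any call flagged 'NoCall')."""
    return {gene: g for gene, g in raw_calls.items() if g != "NoCall"}

def annotate(calls, ontology):
    """Attach external knowledge (e.g. a Gene Ontology term) to each gene."""
    return {gene: {"genotype": g, "go_term": ontology.get(gene, "unknown")}
            for gene, g in calls.items()}

def integrate(annotated, clinical):
    """Merge molecular data with per-patient clinical records."""
    return {"molecular": annotated, "clinical": clinical}

raw = {"CYP2D6": "A/G", "TPMT": "NoCall"}          # raw genotype calls
ontology = {"CYP2D6": "GO:0016712"}                 # external knowledge
clinical = {"age": 54, "response": "non-responder"} # clinical record

record = integrate(annotate(preprocess(raw), ontology), clinical)
print(record["molecular"])
# {'CYP2D6': {'genotype': 'A/G', 'go_term': 'GO:0016712'}}
```

Running the stages in one automated chain avoids the manual format conversions between tools that the abstract identifies as a common source of errors.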


SNP; multiple analysis pipeline; pharmacogenomics; overall survival curves; data mining; statistical analysis


Computer Science and Mathematics, Information Systems

