Preprint (Brief Report), Version 1. Preserved in Portico. This version is not peer-reviewed.

A Simple Convolutional Neural Network for Precise and Automated Identification of COVID-19

Version 1 : Received: 26 July 2022 / Approved: 27 July 2022 / Online: 27 July 2022 (10:01:54 CEST)

How to cite: Zhu, Z. A Simple Convolutional Neural Network for Precise and Automated Identification of COVID-19. Preprints 2022, 2022070419. https://doi.org/10.20944/preprints202207.0419.v1

Abstract

This work addresses two key problems in identifying people infected with COVID-19: first, the accuracy of current identification is not high enough; second, existing identification methods such as nucleic acid testing are expensive in many countries. Methods: I therefore designed a fast, deep-learning-based method for identifying COVID-19 patients. After the model (CoughNet) was trained on more than 6,000 cough spectrograms from both COVID-19 patients and healthy people, its accuracy in distinguishing COVID-19 patients from healthy people exceeded 99% on the test set. Structure: This paper is divided into three parts: the first part introduces the background and current state of the research; the second part introduces the research methods; the third part describes the experimental procedure.
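The preprint does not describe CoughNet's exact architecture, so the following is only a minimal sketch of the general approach stated in the abstract: a small convolutional network that takes a cough spectrogram as input and outputs a binary COVID-19/healthy prediction. The input shape (1 x 128 x 128), the layer widths, the class ordering, and the use of PyTorch are illustrative assumptions, not the author's implementation.

```python
# Minimal sketch (PyTorch) of a small CNN classifying cough spectrograms.
# Architecture and input size are assumptions for illustration only.
import torch
import torch.nn as nn

class SimpleCoughCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # single-channel spectrogram in
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                       # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)       # (N, 64, 1, 1)
        x = x.flatten(1)           # (N, 64)
        return self.classifier(x)  # (N, 2) logits: [healthy, COVID-19]

# Example forward pass on a batch of 8 hypothetical 128x128 spectrograms.
model = SimpleCoughCNN()
logits = model(torch.randn(8, 1, 128, 128))
print(logits.shape)  # torch.Size([8, 2])
```

In practice such a model would be trained with a standard cross-entropy loss on labeled spectrograms; the >99% test-set accuracy reported above refers to the author's CoughNet, not to this sketch.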

Keywords

computer vision; deep learning; CoughNet model

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
