Article
Preserved in Portico. This version is not peer-reviewed.
Plan for Constructing DataDiscoveryLab
Version 1 : Received: 20 April 2023 / Approved: 27 April 2023 / Online: 27 April 2023 (10:36:36 CEST)
Version 2 : Received: 27 April 2023 / Approved: 2 May 2023 / Online: 2 May 2023 (04:13:23 CEST)
How to cite: Keskinoglu, E. Plan for Constructing DataDiscoveryLab. Preprints 2023, 2023041074. https://doi.org/10.20944/preprints202304.1074.v1
Abstract
DataDiscoveryLab is a software tool that recommends possible research pathways, with supporting references, by extracting insights from academic articles. It parses each article into text and figures and processes the image data with computer vision algorithms. Using NLP techniques, the software builds two text databases: one for titles, figure captions, and references, and another for abstracts, introductions, methods, and results. It then compares these databases against a user's research question, identifies similarities, and presents the findings. In addition, the software ingests data from researchers' scientific software and devices and compares it with the figure-based databases, iterating in a loop until the best answer, research pathways, and recommended articles are found. This tool provides researchers with valuable insight and context, helping them make informed decisions about their research.
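The text-matching step described above can be pictured as a similarity search over the two text databases. The snippet below is a minimal sketch only, not the author's implementation: the database contents, field names, and the choice of TF-IDF with cosine similarity are assumptions made for illustration.

```python
# Minimal sketch of matching a research question against the two text databases.
# Assumptions: databases are lists of {"article", "text"} records; similarity is
# TF-IDF cosine. None of these names come from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical databases built from parsed articles.
titles_db = [
    {"article": "A", "text": "Deep learning for cell segmentation in microscopy images"},
    {"article": "B", "text": "Graph-based retrieval of figure captions and references"},
]
body_db = [
    {"article": "A", "text": "We train a convolutional network on annotated microscopy data."},
    {"article": "B", "text": "Our method indexes abstracts, introductions, methods, and results."},
]

def rank_articles(question, database, top_k=3):
    """Return (article id, similarity score) pairs ranked against the question."""
    corpus = [record["text"] for record in database]
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(corpus + [question])  # last row is the question
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = sorted(zip((r["article"] for r in database), scores),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

question = "How can I segment cells in fluorescence microscopy images?"
print(rank_articles(question, titles_db))
print(rank_articles(question, body_db))
```

In the loop described in the abstract, the toy corpora would be replaced by the databases produced from parsed articles, and the ranking would be re-run as new data arrives from the researcher's software and devices.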
Keywords
data analysis; computer vision algorithms; visual data; natural language processing; scientific research
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright: This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.