Artificial Intelligence Models, Tools and Applications

Please bear in mind that preprints report early stage research which has not undergone peer review and should not be regarded as conclusive or be reported as established fact.



Article
Engineering
Industrial and Manufacturing Engineering

Jonas Gomes da Silva

Abstract: Despite a high publication volume (1996–2024), Brazil lags behind innovation leaders due to systemic barriers driving researchers toward low-impact journals or research abandonment. One critical and solvable barrier is the absence of interactive tools for top journal selection. To tackle this, the Journal with Notable Assessment Score (Jonas) Butterfly Model provides a data-driven framework to help resource-limited researchers in Administration, Engineering, Environmental, and Interdisciplinary fields identify fast, high-impact, transparent, and, where possible, APC-free journals. To achieve this, the model begins its flight with four phases, exploring national and international databases, scientometrics, open science, documentary and bibliometric reviews, three original formulas, statistical normalization, computational methods, Julius AI, and digital platforms for data extraction, analysis, storage, and interactive presentation. From the Qualis dataset of 27,929 unique titles, the model generates structured databases and an HTML tool, enabling researchers to explore 400 Top-Tier A1 journals (21 Diamond, 361 Hybrid, 18 Gold OA) ranked by the Most Notable Journal Score and classified within a 3x9 matrix (speed × impact), ranging from Q1 (Slowest & Highest Impact), through Q3 (Fastest & High Impact), down to Q9 (Fastest & Lowest Impact). For authors with fewer funds, the 21 diamond journals emerged as viable alternatives, showing that fee-free publishing need not compromise visibility or prestige. By merging scientometrics, open science principles, digital tools, and intuitive visualization, the proposed model offers authors, policymakers, and editors a clear flight path toward more equitable, faster, and higher-impact scientific dissemination, helping to strengthen Brazil’s global research visibility.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Boris A. Galitsky

Abstract: Text generated by a Large Language Model (LLM) such as GPT-4 often suffers from factual errors and hallucinations. We build a fact-checking system, 'Truth-O-Meter', which identifies incorrect facts by comparing the generated text against the web and other sources of information, and suggests corrections. Text mining and web mining techniques are leveraged to identify correct corresponding sentences; a syntactic and semantic generalization procedure is also adapted to the content-improvement task. To handle inconsistent sources during fact-checking, we rely on argumentation analysis in the form of defeasible logic programming. We compare our fact-checking engine with competitive approaches based on reinforcement learning on top of an LLM and on token-based hallucination detection. We observe that LLM content can be substantially improved in factual correctness and meaningfulness.
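The claim-by-claim fact-checking loop the abstract describes can be sketched in outline as follows. All helper names (`fact_check`, `simple_overlap`, the canned retrieval function) are illustrative assumptions, not the authors' Truth-O-Meter API, and the word-overlap score is a crude stand-in for the syntactic/semantic generalization the paper uses.

```python
# Illustrative sketch of a claim-by-claim fact-checking loop.
# Helper names are hypothetical stand-ins, not the authors' API.

def simple_overlap(claim: str, evidence: str) -> float:
    """Crude support score: fraction of claim words found in the evidence."""
    claim_words = set(claim.lower().split())
    evidence_words = set(evidence.lower().split())
    return len(claim_words & evidence_words) / max(len(claim_words), 1)

def fact_check(claims, retrieve, threshold=0.5):
    """For each claim, retrieve evidence and flag whether it is supported."""
    report = []
    for claim in claims:
        evidence = retrieve(claim)  # e.g. a web / text-mining lookup
        score = simple_overlap(claim, evidence)
        report.append((claim, score, score >= threshold))
    return report

# Toy usage with a canned "retrieval" function standing in for web search:
kb = {"water boils at 100 C": "at sea level water boils at 100 C"}
result = fact_check(["water boils at 100 C"], lambda c: kb.get(c, ""))
```

In a real system the retrieval step would query the web and the scoring step would use sentence-level generalization rather than bag-of-words overlap; the control flow, however, stays the same.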

Article
Arts and Humanities
Film, Radio and Television

Kostas Karpouzis

Abstract: This paper discusses the transformative role Artificial Intelligence (AI) is anticipated to play in the film industry, specifically concerning its capacity to enhance and redefine the parameters of cinematic realism. As the integration of AI in the cinematic field gains momentum, an exploration of its implications on the perceived realism in film is essential. The paper first outlines the historical trajectory of realism in cinema, mapping its evolution and its core principles with respect to technological advances; we then introduce AI as a disruptive technology poised to reshape this trajectory. The focus of the paper lies in examining how AI, through its advanced image recognition and synthesis, data analysis, and deep learning capabilities, has the potential to revolutionise traditional methods of film production, post-production, and distribution, significantly impacting the authenticity of cinematic narratives. The paper then discusses potential drawbacks, such as the risk of over-reliance on technology and the ethical implications of AI utilisation, offering a balanced perspective on this emergent phenomenon. The paper aims to inform film scholars, industry professionals, and enthusiasts about the profound transformations AI is set to bring, propelling the discussion towards the future of film in the era of artificial intelligence.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Prashant Sawant

Abstract: This study presents a comprehensive quantitative analysis of Agentic AI performance and applications across various industries. Agentic AI, an emerging field combining advanced AI techniques with enterprise automation, has shown promise in creating autonomous agents capable of complex decision-making and problem-solving. Our research, conducted over a 12-month period, employed a mixed-methods approach, analyzing data from 500 organizations and incorporating insights from 50 industry experts. The study aimed to evaluate the efficiency, accuracy, and impact of Agentic AI systems compared to traditional AI approaches. Results demonstrate that Agentic AI systems significantly outperform traditional AI, with a 34.2% reduction in task completion time, 7.7% increase in accuracy, and 13.6% improvement in resource utilization. Productivity gains varied across industries, with the technology sector showing the highest improvement at 45%. The study also revealed high scalability of Agentic AI solutions across different organizational sizes, although implementation time increased with organization complexity. Key challenges identified include data privacy concerns, integration difficulties with legacy systems, skill gaps, and ethical considerations. Despite these challenges, the study concludes that Agentic AI has significant potential to transform business processes and decision-making across various sectors. Future research directions include enhancing interpretability, optimizing domain-specific applications, and exploring multi-agent collaborations. This research contributes valuable insights into the current state and future prospects of Agentic AI, providing a foundation for further development and implementation strategies in this rapidly evolving field.

Review
Social Sciences
Education

Adam Zary

,

Nabil Zary

Abstract: Background: The pressures of Industry 4.0 have driven the incorporation of artificial intelligence (AI) into Technical and Vocational Education and Training (TVET) to improve the development of practical skills. Nonetheless, there is still a lack of empirical agreement regarding the effects and implementation of AI. Methods: We conducted a literature review of databases including IEEE Xplore, Scopus, ERIC, and Google Scholar, as well as grey literature from conference proceedings and UNESCO-UNEVOC reports, to find empirical studies on AI in TVET. Our search included keywords such as "artificial intelligence," "machine learning," and "vocational training." After screening titles/abstracts and full texts against our inclusion criteria (studies in TVET settings with measurable outcomes), we identified 11 studies published between 2021 and 2025. Each study was coded by methodology, AI technology type, vocational domain, country, and reported outcomes. Results: Evaluations in vocational trades show that AI-driven simulators enhance hands-on skills. Lee et al. found that an AI-guided XR welding trainer improved welding accuracy and learning rate over traditional VR instruction. An Indonesian "AI teaching factory" boosted students' technical proficiency, efficiency, and industry readiness. Surveys indicate high student satisfaction: Malaysian polytechnic students using an AI-powered robotics trainer reported increased understanding and confidence, while TVET students using ChatGPT reported improved comprehension and engagement. Analytical studies highlight curriculum alignment: a decision analysis in the End-of-Life Vehicle sector identified AI integration, tool training, and industry partnerships as priorities for employability. Discussion: Overall, AI applications promise to enhance vocational skill acquisition and engagement. However, much of the research focuses on short-term pilots or perceptions rather than long-term outcomes. Ongoing challenges include limited infrastructure and inadequate teacher preparedness. Future efforts should prioritize rigorous, longitudinal evaluations of AI-enabled TVET interventions using standardized skill and employment outcome metrics.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Rukun Dou

Abstract: We present a novel benchmarking methodology for Large Language Models (LLMs) to evaluate their susceptibility to hallucinations, thereby determining their reliability for higher-stakes real-world applications. This method, called Deception-Based Benchmarking, involves testing the model with a task that requires composing a short paragraph. Initially, the model performs under standard conditions; then, it is required to begin with a misleading sentence. Based on these outputs, the model is assessed on three criteria: accuracy, susceptibility, and consistency. This approach can be integrated with existing benchmarks or applied to new ones, facilitating a comprehensive evaluation of models across multiple dimensions, and it encompasses various forms of hallucination. We applied this methodology to several small open-source models using a modified version of MMLU, DB-MMLU. Our findings indicate that most current models are not specifically designed to self-correct when the random sampling process leads them to produce inaccuracies. However, certain models, such as Solar-10.7B-Instruct, exhibit a reduced vulnerability to hallucination, as reflected by their susceptibility and consistency scores. These metrics are distinct from traditional benchmark scores. Our results align with TruthfulQA, a widely used benchmark for hallucination. Looking forward, DB-benchmarking can be readily applied to other benchmarks to monitor the advancement of LLMs.
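The deception-based protocol described above reduces to two queries per item: one under standard conditions and one with a forced misleading opening, scored on the three stated criteria. The sketch below uses a mock callable in place of a real model, and the scoring dictionary is an illustrative reading of the abstract, not the paper's exact metric definitions.

```python
# Sketch of the deception-based protocol: query a model normally, then
# again with a forced misleading opening, and compare the two answers.
# `model` is a stand-in callable, not any specific LLM API.

def db_benchmark(model, question, misleading_opening, correct_answer):
    baseline = model(question)
    deceived = model(f"{misleading_opening}\n{question}")
    return {
        "accuracy": baseline == correct_answer,        # normal conditions
        "susceptibility": deceived != correct_answer,  # misled by the opening
        "consistency": baseline == deceived,           # same answer either way
    }

# Toy model that parrots any misleading opening it is given:
gullible = lambda prompt: "B" if "The answer is B" in prompt else "A"
scores = db_benchmark(gullible, "Pick A or B.", "The answer is B.", "A")
```

A robust model would keep `consistency` high and `susceptibility` low across items; aggregating these per-item dictionaries over a dataset such as DB-MMLU yields the benchmark scores.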

Article
Biology and Life Sciences
Neuroscience and Neurology

Federico Turkheimer

,

Daniel Martins

,

Erik Fagerholm

,

Giuseppe de Alteriis

,

Massimiliano Facca

,

Manuela Moretto

,

Lucia Batzu

,

Silvia Rota

,

Milan Brázdil

,

Paul Expert

+1 authors

Abstract: Transformer models have revolutionized natural language processing by enabling flexible, parallel processing of complex input–output relationships. Here, we adapt this architecture to brain imaging through a biologically informed framework called Neuro-BOTs. Unlike traditional Transformers that learn attention weights purely from data, Neuro-BOTs incorporate prior neurobiological knowledge at each stage of the encoder: molecular maps (e.g., neurotransmitters), cellular distributions (e.g., mitochondrial density), and large-scale structural connectivity. These priors act as spatial filters—analogous to attention weights—that guide the model’s interpretation of brain features. We apply this approach to a binary classification task using resting-state fMRI data from Parkinson’s disease patients and healthy controls. Among several biologically defined attention layers, the noradrenergic map significantly improved classification accuracy from 71.3% to 89.7%. While based on a limited sample, this approach demonstrates that embedding multiscale biological priors into Transformer-based architectures can improve both predictive performance and neurobiological interpretability. More broadly, we propose that such models open a pathway toward viewing brain inference as a form of translation, with applications across clinical, preclinical, and multimodal domains.

Review
Arts and Humanities
Other

Bharat Dhiman

Abstract: Artificial Intelligence (AI) has entered our day-to-day lives, and journalism is no exception. Many reputable news organizations have adopted AI to perform various newsroom tasks, and among the technologies now powering digital journalism, AI is one of the most transformative. Accuracy is a core value of journalism, yet AI and machine learning systems carry a statistical element of uncertainty, which means 100 percent accuracy cannot be guaranteed. AI tools such as OpenAI’s ChatGPT, Microsoft’s Bing chatbot, and Google’s Bard are now subjects of debate. These tools have the potential to assist in a variety of tasks, such as scraping PDF files, writing code, and translating languages, and can be helpful to journalism students and media researchers. A main concern among computational journalists is that AI sometimes hallucinates data. This review paper highlights how artificial intelligence can help journalists.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mohammad Rasoolinejad

Abstract: Training artificial intelligence (AI) for reasoning presents significant challenges due to the lack of clear objectives and reward functions. In structured environments such as chess and Go, objectives are well-defined, and the optimal moves can be determined through search algorithms that maximize the reward function. This clarity enables AI to learn and improve autonomously through random trials and recursive calculation of rewards, mirroring human learning processes that depend on prior experiences and state-dependent actions. However, in real-world applications, AI lacks access to direct interaction and feedback, making it difficult to define proper reward functions. This limitation is particularly evident in the training of large language models (LLMs), where the quality of the output is constrained by the training data and the absence of a dynamic reward system. Simulated environments offer some utility but are inherently limited by their design and scope. Achieving artificial general intelligence (AGI) requires AI to function and receive feedback in real-world settings, similar to human cognitive and strategic development through interactive experiences. This paper explores the critical role of clear objectives and reward functions in AI training, the limitations posed by the current lack of real-world interaction, and the implications for future advancements in AI reasoning capabilities.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Iosif Iulian Petrila

Abstract: The principles of neural information organizing and processing are presented in a general, transdisciplinary axiomatic form, highlighting the fundamental informational characteristics and processes of neural systems as a guiding synthesis useful across fields ranging from neuroscience to the design of neural processing units and artificial intelligence software. The proposed principles highlight the fundamental characteristics of neural information: function, memorization, nondeterminism, fragmentation, aggregation, nonlinearization, geometrization, parallelization, adaptation, and objectivation. The principles are formulated to facilitate transdisciplinary use, even though they were synthesized as generalizations and abstractions of the informational organization specific to neural computing implementations, in correlation with biological neuronal systems viewed especially from an informational perspective.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Pranav Guruprasad

,

Harshvardhan Sikka

,

Jaewoo Song

,

Yangyue Wang

,

Paul Liang

Abstract: Vision-language-action (VLA) models represent a promising direction for developing general-purpose robotic systems, demonstrating the ability to combine visual understanding, language comprehension, and action generation. However, systematic evaluation of these models across diverse robotic tasks remains limited. In this work, we present a comprehensive evaluation framework and benchmark suite for assessing VLA models. We profile three state-of-the-art VLM and VLA models (GPT-4o, OpenVLA, and JAT) across 20 diverse datasets from the Open-X-Embodiment collection, evaluating their performance on various manipulation tasks. Our analysis reveals several key insights: (1) current VLA models show significant variation in performance across different tasks and robot platforms, with GPT-4o demonstrating the most consistent performance through sophisticated prompt engineering, (2) all models struggle with complex manipulation tasks requiring multi-step planning, and (3) model performance is notably sensitive to action space characteristics and environmental factors. We release our evaluation framework and findings to facilitate systematic assessment of future VLA models and identify critical areas for improvement in the development of general-purpose robotic systems.

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Md Naseef Ur Rahman Chowdhury

,

Ahshanul Haque

,

Hamdy Soliman

Abstract: Large Language Models (LLMs) have revolutionized artificial intelligence, driving advancements in natural language processing, automated content generation, and numerous other applications. However, these models' increasing scale and computational requirements pose significant energy consumption challenges. This paper comprehensively reviews power consumption in LLMs, highlighting key factors such as model size, hardware dependencies, and optimization techniques. We analyze the power demands of various state-of-the-art models, compare their efficiency across different hardware architectures, and explore strategies for reducing energy consumption without compromising performance. Additionally, we discuss the environmental impact of large-scale AI computations and propose future research directions for sustainable AI development. Our findings aim to inform researchers, engineers, and policymakers about the growing energy demands of large-scale AI.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Manaswini Bollikonda

Abstract: As intelligent systems advance, the integration of rule-based logic with transformer-driven neural inference is emerging as a foundational architecture for scalable, explainable AI. This paper explores the shift from static, logic-encoded decision trees to dynamic, context-aware large language models (LLMs), presenting hybrid reasoning systems that combine deterministic control with neural adaptability. We analyze rule-based and transformer-based approaches across key reasoning workflows, supported by architectural diagrams, performance benchmarks, and real-world applications such as policy automation and legal review. Our proposed dual-stream framework illustrates how symbolic validation and generative inference can operate in parallel, enabling trustworthy and adaptable decision-making. The paper also outlines deployment strategies, including tiered inference, observability, and fallback modes, alongside ethical safeguards such as bias audits and transparency checks. These contributions offer a practical blueprint for building hybrid AI systems that are not only performant but also interpretable and governance-ready across diverse enterprise environments.
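The dual-stream idea in the abstract, where symbolic validation and generative inference run side by side, can be sketched as a decision step in which deterministic rules take precedence and the model only fills the gaps the rules leave open. The rule, threshold, and function names below are invented for illustration and are not the paper's framework.

```python
# Minimal sketch of a dual-stream decision step: a deterministic rule
# check runs alongside a (mocked) generative model, and the symbolic
# stream can veto or defer to the neural suggestion.

def rule_stream(request):
    """Hard constraints: deterministic, auditable checks."""
    if request["amount"] > 10_000:
        return "reject"          # policy cap: no model override allowed
    return None                  # no rule fires; defer to the model

def neural_stream(request, model):
    """Free-form suggestion from the generative model."""
    return model(request)

def dual_stream_decide(request, model):
    verdict = rule_stream(request)
    if verdict is not None:      # symbolic validation takes precedence
        return verdict, "rule"
    return neural_stream(request, model), "model"

mock_model = lambda req: "approve"   # stand-in for an LLM call
decision = dual_stream_decide({"amount": 50_000}, mock_model)
```

Returning the originating stream alongside the verdict is one way to support the observability and fallback modes the paper discusses: every decision is traceable to either an auditable rule or a model call.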

Essay
Arts and Humanities
Architecture

Nuno Cortiços

Abstract: Transforming Cities and Buildings explores the integration of artificial intelligence (AI) into architectural practice, addressing its transformative potential across multiple scales of the built environment. It presents a comprehensive examination of AI’s influence on design methodologies, construction processes, building operations, and urban planning. Through detailed analysis, the book discusses key concepts such as generative design, machine learning, and deep learning, focusing on their application in building performance optimization, sustainability, and smart city development. Historical and theoretical contexts frame the discourse, tracing the evolution from early computational methods to contemporary AI-driven approaches. Case studies illustrate practical implementations, including AI-assisted form-finding, adaptive façades, and predictive building performance systems. The book also highlights the role of AI in enhancing human-AI collaboration, emphasizing how these technologies augment rather than replace human creativity and judgment. Ethical considerations, governance, and the implications for professional practice are critically analyzed, providing guidance for architects, urban planners, and researchers. By addressing challenges such as climate adaptation, social equity, and resource efficiency, the book positions AI as a pivotal tool for creating resilient, sustainable, and responsive built environments. It serves as a foundational reference for understanding and applying AI within architecture’s evolving landscape.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

M.J Guru Venkatesh

,

Dharun R.

,

Guru Prathosh. S

,

Nilesh A.

,

I. Mohamed Sameer

,

Srivatsan G

Abstract: IPFS and public peer-to-peer (P2P) networks were adopted to decentralize AI computation. Running AI workloads over a decentralized network could bring better fault tolerance, stronger data-safety guarantees, and increased privacy. This research explores the challenges of centralized AI, such as data confidentiality, scalability, and accessibility, while discussing the promise of decentralized AI. Combining IPFS with decentralized systems improves scalability, data protection, and fault tolerance.

Article
Social Sciences
Education

Jovan Shopovski

,

Raihana Mohdali

,

Dejan Marolov

Abstract: ChatGPT-4 and other large language models (LLMs), and their use in academic writing, have raised questions regarding their capacity to facilitate the peer review process. This research article compares AI-generated peer review reports (using ChatGPT-4 with tools) with traditional reports produced by human reviewers. The review reports were received for 198 manuscripts submitted to the European Scientific Journal (ESJ) between January and September 2024. Each manuscript underwent a parallel evaluation: first by the journal’s human reviewers, and afterward by the paid version of ChatGPT-4 with tools. Each manuscript received reports from at least two human reviewers but only one AI-generated report. Both review types used the ESJ’s standardized evaluation form, and ChatGPT-4 was prompted to review the papers objectively and critically. Statistical analyses were conducted to compare the grades given to different parts of the manuscripts, the distributions of recommendations, and the consistency between AI and human review reports, using the Kolmogorov-Smirnov test, Pearson chi-square test, Mann-Whitney U test, and Cohen's kappa. Results showed that ChatGPT-4 consistently awarded higher grades and was less rigorous than the human reviewers. The ChatGPT-4 reports mostly recommended minor revisions and never recommended rejection of a manuscript. Human reviewers, on the other hand, demonstrated a more balanced distribution of recommendations, including stricter evaluations, although a lack of agreement between the human review reports was also registered. While LLM tools can enhance the efficiency of the peer review process, their ability to uphold rigorous academic standards remains limited. Editors who use LLM tools as reviewers must remain vigilant and not base their decisions solely on LLM-generated reports.
The existing version of ChatGPT-4 is not trained for peer review and cannot replace human expertise. It can, however, serve as an assistant that, under human oversight, provides useful comments and recommendations for improving manuscript content. Future research should focus on LLM tools trained for peer review in various academic fields, as well as on ethical frameworks for LLM integration in peer review.
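The study measures inter-rater consistency with Cohen's kappa; the standard two-rater formula is short enough to show in full. The toy recommendation lists below are invented for illustration and are not the study's data.

```python
# Standard two-rater Cohen's kappa: chance-corrected agreement.
# kappa = (p_observed - p_expected) / (1 - p_expected)
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: fraction of items rated identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence of the two raters.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / n**2
    return (observed - expected) / (1 - expected)

# Invented toy data: two reviewers' recommendations on four manuscripts.
a = ["minor", "minor", "major", "reject"]
b = ["minor", "major", "major", "reject"]
kappa = cohens_kappa(a, b)
```

Kappa near 1 indicates strong agreement beyond chance, near 0 chance-level agreement; the low human-human agreement the study reports corresponds to small kappa values on this scale.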

Concept Paper
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jade Zheng

,

Fernando Jia

,

Florence Li

,

Rebekah Jia

,

Tianqin Li

Abstract: The rapid advancement of artificial intelligence (AI) has been largely characterized by centralized development and control, limiting accessibility and innovation. This paper introduces Intelligence Cubed (I-Cubed), a decentralized, open-source "modelverse" designed to democratize the creation, distribution, and utilization of machine learning (ML) models. I-Cubed aims to establish a community-driven ecosystem where ML model developers and AI creators can collaborate, monetize their contributions, build reputation, and engage in novel co-creation using a spectrum of techniques from prompt-level conditioning to advanced model composition via task arithmetic. The platform leverages blockchain technology to ensure transparency, immutability, and fair governance. Key mechanisms such as Proof of Intelligence (PoI) are proposed to validate model originality and performance, while Initial Model Offerings (IMOs) facilitate early-stage funding and community engagement. By fostering a decentralized marketplace and integrating distributed compute resources, I-Cubed seeks to lower entry barriers for developers, provide users with access to a diverse range of specialized AI models, and collectively advance the pursuit of Artificial General Intelligence (AGI) through a more open and collaborative paradigm.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Manuel Gozzi

,

Federico Di Maio

Abstract: This study examines the impact of prompt engineering on large language models (LLMs), focusing on a comparison between multitasking and single-task prompts. Specifically, we explore whether a single prompt handling multiple tasks — such as Named Entity Recognition (NER), sentiment analysis, and JSON output formatting — can achieve similar efficiency and accuracy to dedicated single-task prompts. The evaluation uses a combination of performance metrics to provide a comprehensive analysis of output quality. Experiments were conducted using a selection of open-source LLMs, including Llama 3.1 8B, Qwen2 7B, Mistral 7B, Phi-3 Medium, and Gemma 2 9B. Results show that single-task prompts do not consistently outperform multitasking prompts, highlighting the significant influence of the model’s data and architecture on performance.
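The two conditions being compared can be made concrete with prompt templates: one combined prompt covering NER, sentiment, and JSON formatting, versus one dedicated prompt per task. The wording below is a hypothetical illustration, not the authors' actual templates.

```python
# Illustrative prompt construction for the two experimental conditions:
# a single multitask prompt vs. dedicated single-task prompts.
# Templates are invented for illustration, not the study's exact wording.

TEXT = "Apple shares rose after the keynote."

# Condition 1: one prompt requests all three tasks at once.
multitask_prompt = (
    "For the text below, (1) list the named entities, (2) give the "
    "sentiment, and (3) return both as a JSON object with keys "
    f"'entities' and 'sentiment'.\n\nText: {TEXT}"
)

# Condition 2: each task gets its own focused prompt.
single_task_prompts = {
    "ner": f"List the named entities in this text: {TEXT}",
    "sentiment": f"What is the sentiment of this text: {TEXT}",
    "format": "Format these results as a JSON object with keys "
              "'entities' and 'sentiment'.",
}
```

Under this setup the multitask condition costs one model call per input while the single-task condition costs three, which is why comparable accuracy from the combined prompt is a meaningful efficiency result.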

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Raj Bridgelall

Abstract: This primer provides an overview of the rapidly evolving field of generative artificial intelligence, specifically focusing on large language models like ChatGPT (OpenAI) and Bard (Google). Large language models have demonstrated unprecedented capabilities in responding to natural language prompts. The aim of this primer is to demystify the underlying theory and architecture of large language models, providing intuitive explanations for a broader audience. Learners seeking to gain insight into the technical underpinnings of large language models must sift through rapidly growing and fragmented literature on the topic. This primer brings all the main concepts into a single digestible document. Topics covered include text tokenization, vocabulary construction, token embedding, context embedding with attention mechanisms, artificial neural networks, and objective functions in model training. The primer also explores state-of-the-art methods in training large language models to generalize on specific applications and to align with human intentions. Finally, an introduction to the concept of prompt engineering highlights the importance of effective human-machine interaction through natural language in harnessing the full potential of artificial intelligence chatbots. This comprehensive yet accessible primer will benefit students and researchers seeking foundational knowledge and a deeper understanding of the inner workings of existing and emerging artificial intelligence models. The author hopes that the primer will encourage further responsible innovation and informed discussions about these increasingly powerful tools.
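The attention mechanism the primer covers reduces, at its core, to a few lines of linear algebra: scaled dot-product attention over token embeddings. The sketch below shows the textbook operation on random vectors; it omits the learned query/key/value projections, multiple heads, and masking that full Transformer layers add.

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
# Each row of Q attends over the rows of K and mixes the rows of V.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stabilized
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))  # rows sum to 1
    return weights @ V

# Three 4-dimensional token embeddings attending over themselves
# (self-attention), using fixed-seed random vectors:
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
out = attention(X, X, X)
```

Each output row is a convex combination of the value rows, weighted by query-key similarity; this is the "context embedding with attention mechanisms" step the primer walks through.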

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Md Manjurul Ahsan

,

Shivakumar Raman

,

Yingtao Liu

,

Zahed Siddique

Abstract: Diffusion Models (DMs) are probabilistic models that create realistic samples by simulating the diffusion process, gradually adding and removing noise from data. These models have gained popularity in domains such as image processing, speech synthesis, and natural language processing due to their ability to produce high-quality samples. As DMs are being adopted in various domains, existing literature reviews that often focus on specific areas like computer vision or medical imaging may not serve a broader audience across multiple fields. Therefore, this review presents a comprehensive overview of DMs, covering their theoretical foundations and algorithmic innovations. We highlight their applications in diverse areas such as media quality, authenticity, synthesis, image transformation, healthcare, and more. By consolidating current knowledge and identifying emerging trends, this review aims to facilitate a deeper understanding and broader adoption of DMs and provide guidelines for future researchers and practitioners across diverse disciplines.
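As a point of reference for the "gradually adding and removing noise" the abstract describes, the standard DDPM forward (noising) process has a compact formulation. This is the common textbook notation, not anything specific to this review:

```latex
% Forward process: Gaussian noise added over T steps with schedule \beta_t
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t \mathbf{I}\right)
% which admits a closed form, with \bar{\alpha}_t = \prod_{s=1}^{t} (1-\beta_s):
q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t)\mathbf{I}\right)
```

Sampling then runs a learned reverse process that removes this noise step by step, which is the generative direction the review surveys across domains.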



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated