Computer Science and Mathematics

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Pamayla Darbyshire,

Carl Beitsayadeh

Abstract: In the complex landscape of research, we often encounter challenges that require innovative solutions. One such challenge is working with multiple instruments that use different Likert scale points. Our recent study involved five instruments with varying scales: three 5-point, one 4-point, and one 7-point. Additionally, our data contained outliers that introduced both positive and negative skew. These factors could significantly impact the validity of our results if not addressed properly during preprocessing, prior to data analysis.
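As a minimal illustration of one common way to harmonize items measured on 4-, 5-, and 7-point scales before analysis (a linear rescaling onto a shared range; the study's actual preprocessing procedure may differ), consider the following sketch:

```python
import numpy as np

def rescale_likert(responses, scale_max, target_max=5, scale_min=1):
    """Linearly map a k-point Likert item onto a common 1..target_max range
    (one common harmonization choice; not necessarily the study's procedure)."""
    responses = np.asarray(responses, dtype=float)
    return scale_min + (responses - scale_min) * (target_max - scale_min) / (scale_max - scale_min)

# Example: a 7-point item and a 4-point item mapped onto a 5-point range.
print(rescale_likert([1, 4, 7], scale_max=7))  # -> [1. 3. 5.]
print(rescale_likert([1, 2, 4], scale_max=4))  # -> [1. 2.33 5.] (approximately)
```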
Article
Computer Science and Mathematics
Security Systems

Geert De Cubber,

Daniela Doroftei,

Paraskevi Petsioti,

Alexios Koniaris,

Konrad Brewczyński,

Marek Życzkowski,

Razvan Roman,

Silviu Sima,

Ali Mohamoud,

Johan van de Pol

+4 authors
Abstract: This paper introduces a standardized test methodology for drone detection, tracking, and identification systems. The aim is that this standardized methodology for assessing the performance of counter-drone systems will lead to a much better understanding of the capabilities of these solutions. This is urgently needed: drone threats are increasing, yet there are no cohesive policies for evaluating the performance of such systems and hence for mitigating and managing the threat. The presented methodology has been developed within the framework of the COURAGEOUS project, funded by the European Union's Internal Security Fund Police. It is based upon a series of standard user-defined scenarios representing a wide set of use cases. At this moment, these standard scenarios are geared towards civil security end users; however, the proposed methodology provides an open architecture in which the standard scenarios can be modularly extended, allowing users to easily add new scenarios. For each scenario, operational needs and functional performance requirements are provided. Using this information, an integral test methodology is presented that allows for a fair qualitative and quantitative comparison between different counter-drone systems. The methodology was validated during three user-scripted validation trials.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Junjie Wu,

Songjian Huang,

Senpeng Chen,

Chun-Ta Wei,

Jiangqiang Hu,

Kaiming Ji,

Irfan Khan,

Miao Zhang

Abstract: Self-similar data streams are characterized by their similarity across multiple time scales, exhibiting distinct nonlinear and discrete features. These characteristics complicate the accurate identification of data points associated with soft error features, thereby making it difficult to effectively discern the intricate relationship between the data flow and soft error data. This, in turn, severely impacts the accuracy of soft error detection in self-similar data streams. In this study, we propose a novel approach to address this challenge. Leveraging the suddenness and long-range correlation inherent in self-similar data streams, we construct a time series model to capture the data stream based on linear correlation and straight-line fitting features. By incorporating relationship parameters to fit neighboring flow points, we utilize a structural mapping model to establish the local angular relationship between the data streams and soft error data. Additionally, we construct a structural mapping network using flow features to achieve soft error detection in self-similar data streams. Experimental results demonstrate that our proposed method achieves high accuracy and low time overhead for soft error detection in self-similar data streams.
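As a loose illustration of the local angular-relationship idea described above, the sketch below fits a straight line to each window of the stream and flags points whose observed increment deviates from the fitted slope by more than an angular threshold. The paper's structural mapping model is considerably more elaborate; the window size, threshold, and function name here are assumptions.

```python
import numpy as np

def flag_soft_errors(series, window=8, angle_threshold_deg=30.0):
    """Flag candidate soft-error points by comparing each point's increment
    against a local straight-line fit (illustrative sketch only)."""
    series = np.asarray(series, dtype=float)
    flags = np.zeros(len(series), dtype=bool)
    x = np.arange(window)
    for i in range(window, len(series)):
        slope, _ = np.polyfit(x, series[i - window:i], 1)   # local straight-line fit
        increment = series[i] - series[i - 1]                # observed step
        angle = np.degrees(abs(np.arctan(increment) - np.arctan(slope)))
        flags[i] = angle > angle_threshold_deg
    return flags
```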
Article
Computer Science and Mathematics
Information Systems

Brendan Patrick Hall,

Matthew Paul Dube

Abstract: Topological relations form the backbone of qualitative spatial reasoning and, as such, play a paramount role in geographic information systems. Three decades of research have produced a proliferation of sets of qualitative topological relations in both continuous and discretized spaces, but only in continuous spaces has the concept of organizing these relations into a larger framework (called a conceptual neighborhood graph) been considered. Previous work leveraged matrix differences to derive the anisotropic scaling neighborhood for these relations. In this paper, a simulation protocol is used to derive conceptual neighborhood graphs of qualitative topological relations in Z2 for the operations of translation and isotropic scaling. It is further shown that, when raster relations are aggregated into their continuous counterparts and neighborhood connections within these groups are collapsed, the familiar conceptual neighborhood structures for continuous regions appear.
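As a small computational illustration of topological relations in Z2, the sketch below evaluates an empty/non-empty 9-intersection matrix for two raster regions, using one common (but not the only) discrete definition of interior and boundary; the paper's simulation protocol itself is not reproduced here.

```python
from itertools import product

def raster_parts(cells, universe):
    """Split a raster region into interior, boundary, and exterior cells.
    A cell is 'boundary' if any 4-neighbour lies outside the region
    (one common discrete convention; others exist in the literature)."""
    def nbrs(c):
        return {(c[0] + 1, c[1]), (c[0] - 1, c[1]), (c[0], c[1] + 1), (c[0], c[1] - 1)}
    boundary = {c for c in cells if nbrs(c) - cells}
    return cells - boundary, boundary, universe - cells

def nine_intersection(a, b, universe):
    """Empty/non-empty 9-intersection matrix of two raster regions."""
    pa, pb = raster_parts(a, universe), raster_parts(b, universe)
    return [[int(bool(x & y)) for y in pb] for x in pa]

# Tiny example on a 6x6 grid: two 3x3 blocks that share a single cell.
universe = set(product(range(6), range(6)))
A = set(product(range(0, 3), range(0, 3)))
B = set(product(range(2, 5), range(2, 5)))
print(nine_intersection(A, B, universe))
```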
Article
Computer Science and Mathematics
Computer Vision and Graphics

Fahmid Al Farid,

Md Ahsanul Bari,

Abu Saleh Musa Miah,

Sarina Mansur,

Jia Uddin,

Prabha Kumaresan

Abstract: Ambient Assisted Living (AAL) leverages technology to support the elderly and individuals with disabilities. A key challenge in AAL systems is efficient human activity recognition (HAR), yet no study has systematically compared single-view (SV) and multi-view (MV) HAR. This review addresses this gap by analyzing the evolution from SV to MV-HAR, covering benchmark datasets, feature extraction methods, and classification approaches. We examine how HAR systems have transitioned to MV with advanced deep learning architectures optimized for AAL, improving accuracy and robustness. Additionally, we explore machine learning and deep learning models—including CNNs, RNNs, LSTMs, TCNs, and GCNs—as well as lightweight transfer learning techniques for resource-constrained environments. Key challenges such as data remediation, privacy, and generalization are discussed alongside potential solutions like sensor fusion and advanced learning methods. Our study provides insights into advancements and future directions, guiding the development of intelligent, efficient, and privacy-compliant HAR systems for AAL.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Yuzhu Wu,

Junjie Huang,

Siji Wang,

Yujian Bao,

Yizhe Wang,

Jia Song,

Wenwu Liu

Abstract: China is the world's largest producer of chili peppers, which hold particularly important economic and social value in fields such as medicine, food, and industry. However, during production, chili peppers are affected by pests and diseases, driven by temperature, environment, and other factors, resulting in significant yield reduction. In this study, a lightweight pepper disease identification method, DD-YOLO, based on the YOLOv8n model is proposed. First, the deformable convolution module DCNv2 (Deformable Convolutional Networks) and the inverted residual mobile block (iRMB) are introduced into the C2f module to improve the accuracy of the sampling range and reduce the computational load. Second, the DySample (Dynamic Sample) upsampling operator is integrated into the head network to reduce data volume and computational complexity. Finally, Large Separable Kernel Attention (LSKA) is used to improve the SPPF (Spatial Pyramid Pooling Fast) module and enhance multi-scale feature fusion. Experimental results show that the accuracy, recall, and average precision of the DD-YOLO model are 91.6%, 88.9%, and 94.4%, respectively, improvements of 6.2, 2.3, and 2.8 percentage points, respectively, over the base network YOLOv8n; the model weight is reduced by 22.6%, and the number of floating-point operations per second is improved by 11.1%. This method provides a technical basis for the intensive cultivation and management of chili peppers and accomplishes the identification of chili pepper pests and diseases efficiently and cost-effectively.
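As a rough illustration of the LSKA idea named in the abstract (a large attention kernel decomposed into separable depthwise convolutions whose output gates the input features), a minimal PyTorch sketch follows; it is not the authors' implementation, and the kernel size and dilation are assumptions.

```python
import torch
import torch.nn as nn

class LSKASketch(nn.Module):
    """Minimal Large Separable Kernel Attention sketch: a large kernel is
    approximated by 1xk / kx1 depthwise convolutions plus dilated
    counterparts, followed by a pointwise convolution; the result gates x."""
    def __init__(self, channels, k=7, dilation=3):
        super().__init__()
        pad, pad_d = k // 2, dilation * (k // 2)
        self.dw_h = nn.Conv2d(channels, channels, (1, k), padding=(0, pad), groups=channels)
        self.dw_v = nn.Conv2d(channels, channels, (k, 1), padding=(pad, 0), groups=channels)
        self.dwd_h = nn.Conv2d(channels, channels, (1, k), padding=(0, pad_d),
                               dilation=dilation, groups=channels)
        self.dwd_v = nn.Conv2d(channels, channels, (k, 1), padding=(pad_d, 0),
                               dilation=dilation, groups=channels)
        self.pw = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        attn = self.dw_v(self.dw_h(x))
        attn = self.dwd_v(self.dwd_h(attn))
        return x * self.pw(attn)

# Shape check: a 64-channel feature map passes through with its size unchanged.
print(LSKASketch(64)(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```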
Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Kartik Bhardwaj,

Ritu Soryan

Abstract: Electronic waste (e-waste) is one of the fastest-growing waste streams worldwide, posing critical environmental, economic, and public health challenges. The circular economy paradigm offers a holistic approach to managing e-waste through resource recovery, recycling, and reduced landfill disposal. Recently, Artificial Intelligence (AI) and Machine Learning (ML) have demonstrated transformative potential in addressing central bottlenecks in e-waste handling, including precise materials identification, automated disassembly, improved recycling efficiency, and predictive logistics. This paper critically evaluates 30 peer-reviewed studies published between 2010 and 2025, selected via a transparent screening process, focusing on AI- and ML-driven technologies for e-waste management within the circular economy. We synthesize evidence from real-world implementations, discuss performance metrics (e.g., sorting accuracy, throughput gains, and carbon footprint reduction), and highlight how AI and ML algorithms can boost recovery of high-value materials, reduce environmental impact, and improve overall cost-effectiveness. We further examine current trends, underscore notable achievements, and analyze key challenges—such as data privacy, regulatory gaps, heterogeneous waste streams, and algorithmic bias. A series of policy recommendations and a future research roadmap are proposed, delineating technological, regulatory, and socio-economic pathways to expedite adoption of AI-enhanced e-waste management. By presenting a rigorous, thematically focused synthesis, this review anchors AI-based e-waste solutions as a linchpin for advancing the circular economy and achieving sustainable development.
Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mehdi Imani,

Majid Joudaki,

Ali Beikmohamadi,

Hamid Reza Arabnia

Abstract: Customer churn poses a significant challenge across various sectors, resulting in considerable revenue losses and increased customer acquisition costs. Machine Learning (ML) and Deep Learning (DL) have emerged as transformative approaches in churn prediction, significantly outperforming traditional statistical methods by effectively analyzing high-dimensional and dynamic customer datasets. This literature review systematically examines recent advancements in churn prediction methodologies based on 240 peer-reviewed studies published between 2020 and 2024 across diverse domains such as telecommunications, retail, banking, healthcare, education, and insurance. It emphasizes the evolution of ML and DL approaches, their practical applications, and ongoing challenges, including model interpretability, class imbalance, and concept drift. The study identifies an increasing preference for advanced techniques, including ensemble models, profit-driven frameworks, hybrid architectures, and sophisticated DL methods like convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and attention mechanisms. It highlights the growing focus on explainable AI (XAI), profit-oriented modeling, and adaptive learning strategies to accommodate evolving customer behaviors. Despite these advancements, the review underscores persistent challenges such as class imbalance, the black-box nature of complex DL models, difficulties adapting to concept drift, and limited consideration of real-world deployment constraints. This review contributes to the field by comprehensively synthesizing recent methodological trends and identifying gaps related to real-world applicability, interpretability, and business-oriented evaluation metrics. It offers essential insights and practical guidance for data scientists, researchers, and industry practitioners seeking to develop more accurate, robust, and interpretable churn prediction models, enabling more effective customer retention strategies and improved business outcomes.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Alexia Jolicoeur-Martineau,

Emy Gervais

Abstract: Generating new video games using AI has potential to be the next holy grail of the video game industry. Current AI efforts have focused on two directions: i) controllable video generation and ii) code generated by Large Language Models (LLMs). The first direction is limited due to short-term memory and increasing corruption (blur, noise) over time. The second direction is promising, but it requires a lot of human hand-holding with human-provided assets. Generating hours of coherent interactive video content is infeasible. In this paper, we instead attempt the overly ambitious problem of end-to-end generation of Small Web Format (SWF) games and animations through bytes. By modeling bytes, one does not need code or assets to potentially obtain full games with title screen, narrative, text, graphics, music, and sounds. We make a first attempt by fine-tuning a 7-billion-parameter LLM at 32K context length to generate the bytes of video games and animations conditional on a text description. Our model (ByteCraft) can generate up to 32K tokens, each containing at most 4-5 bytes (generating files as big as 140 KB). Some of the generated files are partially working (4.8-12%), or fully working (0.4-1.2%). ByteCraft is a proof-of-concept highlighting what could be possible given more scaling and engineering effort. We open-source our model and inference code alongside a dataset of 10K synthetic prompts for use with ByteCraft.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Ziyue Wang,

Junde Wu,

Chang Han Low,

Yueming Jin

Abstract: Developing reliable AI systems to assist human clinicians in multi-modal medical diagnosis has long been a key objective for researchers. Recently, Multi-modal Large Language Models (MLLMs) have gained significant attention and achieved success across various domains. With strong reasoning capabilities and the ability to perform diverse tasks based on user instructions, they hold great potential for enhancing medical diagnosis. However, directly applying MLLMs to the medical domain still presents challenges. They lack detailed perception of visual inputs, limiting their ability to perform quantitative image analysis, which is crucial for medical diagnostics. Additionally, MLLMs often exhibit hallucinations and inconsistencies in reasoning, whereas clinical diagnoses must adhere strictly to established criteria. To address these challenges, we propose MedAgent-Pro, an evidence-based reasoning agentic system designed to achieve reliable, explainable, and precise medical diagnoses. This is accomplished through a hierarchical workflow: at the task level, knowledge-based reasoning generates reliable diagnostic plans for specific diseases following retrieved clinical criteria, while at the case level, multiple tool agents process multi-modal inputs, analyze different indicators according to the plan, and provide a final diagnosis based on both quantitative and qualitative evidence. Comprehensive experiments on both 2D and 3D medical diagnosis tasks demonstrate the superiority and effectiveness of MedAgent-Pro, while case studies further highlight its reliability and interpretability. The code is available at https://github.com/jinlab-imvr/MedAgent-Pro.
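A highly simplified sketch of the two-level workflow described above is given below; every function and variable name is a hypothetical placeholder rather than the authors' API, and the real system's retrieval, tool agents, and evidence aggregation are far richer.

```python
# Hypothetical placeholders throughout; not the MedAgent-Pro codebase.

def task_level_plan(disease, retrieve_criteria, plan_indicators):
    """Task level: retrieve clinical criteria for a disease and turn them
    into a reusable list of indicators to measure for every case."""
    criteria = retrieve_criteria(disease)          # knowledge-based retrieval
    return plan_indicators(criteria)               # e.g. ["cup-to-disc ratio", ...]

def case_level_diagnose(case_inputs, plan, tool_agents, aggregate):
    """Case level: run one tool agent per planned indicator on the
    multi-modal inputs, then aggregate quantitative and qualitative
    evidence into a final diagnosis with its rationale."""
    evidence = {indicator: tool_agents[indicator](case_inputs) for indicator in plan}
    return aggregate(evidence)
```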
Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Rahul Neware

Abstract: Identifying diseases in horticultural fruits is crucial for maintaining quality, reducing losses, and enhancing sustainable agricultural practices. Deep learning (DL) and machine learning (ML) techniques have enabled proficient and precise identification of these diseases. This paper consolidates the use of ML and DL approaches in horticultural fruit disease detection, incorporating innovative models such as convolutional neural networks (CNNs), Vision Transformers, and other hybrid systems. It also reviews preprocessing and feature extraction for hyperspectral and multispectral imaging. Public datasets and real-world case studies are analyzed to demonstrate practical implementations and obstacles, which include dataset quality, required computational resources, and model interpretability. Furthermore, the paper elaborates on GAN-based data augmentation, deploying lightweight models on resource-constrained devices, and real-time IoT monitoring. Future directions aim at the utilization of explainable artificial intelligence, scaling up models, and increasing the sustainability of disease detection systems. The reviewed literature establishes this study as a point of reference for researchers and practitioners, inspiring the development of intelligent horticultural disease management systems.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Yin Li

Abstract: Explainability is increasingly crucial for real-world deployment of deep learning models, yet traditional explanation techniques can be prohibitively slow and memory-intensive on resource-constrained devices. This paper presents a novel lightweight explainability framework that significantly reduces the computational cost of generating explanations without compromising on quality. My approach focuses on an optimized Grad-CAM pipeline with sophisticated thresholding, advanced memory handling, and specialized evaluation metrics. I demonstrate speedups exceeding 300x over naive implementations while maintaining robust faithfulness and completeness scores. Through an extensive series of benchmarks, user studies, and statistical tests, I show that this framework is scalable, accurate, and deployable on edge devices such as Raspberry Pi, Android phones, and iPhones. I also discuss ethical considerations, future research directions, and potential applications in high-stakes domains like healthcare and autonomous systems.
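For reference, a baseline Grad-CAM with a simple threshold step is sketched below. This is the standard formulation rather than the paper's optimized pipeline, and the usage hint at the end assumes a torchvision ResNet-18.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None, threshold=0.5):
    """Baseline Grad-CAM: weight the target layer's activations by the
    spatially averaged gradients of the class score, then threshold."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

    logits = model(image.unsqueeze(0))
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()

    weights = grads["v"].mean(dim=(2, 3), keepdim=True)           # global-average-pooled gradients
    cam = F.relu((weights * acts["v"]).sum(dim=1, keepdim=True))  # weighted activation map
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return torch.where(cam >= threshold, cam, torch.zeros_like(cam))

# Usage hint (assumes torchvision is installed):
# model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
# heatmap = grad_cam(model, img_tensor, model.layer4[-1].conv2)
```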
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Parth Gosar,

Sahil Pardasani,

Soumyadeep Das

Abstract: The proliferation of AI tools in academic writing poses significant challenges for verifying the authenticity of student submissions, particularly at the sentence level. This paper proposes a novel Stylometric Fingerprinting with Contextual Anomaly Detection approach to distinguish AI-generated sentences from student-authored ones in writing reports. By combining manual stylometric analysis with contextual coherence checks, our method achieves sentence-level granularity without requiring computational tools or LLMs, making it accessible for student use. We compare our approach to existing models like Turnitin, GPTZero, and Moss, highlighting its unique focus on manual, coherence-driven detection. Experimental insights and theoretical analysis demonstrate its feasibility and effectiveness in ensuring academic integrity.
Article
Computer Science and Mathematics
Computer Science

Kazi Fatema,

Samrat Kumar Dey,

Mehrin Anannya,

Risala Tahsin Khan,

Mohammad Rashid,

SU Chunhua,

Rashed Mazumder

Abstract: Intrusion Detection Systems (IDS) are a crucial component of cybersecurity, designed to identify unauthorized activities in network environments. Traditional IDS, however, suffer from a number of problems, such as high rates of false positives and false negatives and a lack of explainability, which make it difficult to provide adequate protection. Furthermore, centralized IDS approaches have issues with interpretability and data protection, especially when dealing with sensitive data. To overcome these drawbacks, we present Federated XAI IDS, a new explainable and privacy-preserving IDS that improves security and interpretability by fusing Federated Learning (FL) with Shapley Additive Explanations (SHAP). Our approach enables IDS models to be collaboratively trained across multiple decentralized devices while ensuring that local data remains securely on edge nodes, thus mitigating privacy risks. The Artificial Neural Network (ANN)-based IDS is distributed across four clients in a federated setup using the CICIoT2023 dataset, with model aggregation performed via FedAvg. The proposed method demonstrated efficacy in intrusion detection, achieving 88.4% training and 88.2% testing accuracy. Furthermore, SHAP was utilized to analyze feature importance, providing a deeper comprehension of the critical attributes influencing model predictions. The feature importance ranking that SHAP produces improves transparency and makes the model more dependable and interpretable. Our findings demonstrate how well Federated XAI IDS handles the twin problems of explainability and privacy in intrusion detection. This work advances the development of safe, interpretable, and decentralized intrusion detection systems for contemporary cybersecurity applications by utilizing federated learning and explainable AI (XAI).
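The abstract states that model aggregation is performed with FedAvg across four clients. A minimal sketch of that aggregation step, assuming each client submits its parameters as a list of NumPy arrays together with its local dataset size (names are illustrative):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average each parameter tensor across clients,
    weighted by the size of each client's local dataset."""
    total = float(sum(client_sizes))
    num_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(num_layers)
    ]

# Toy example with two clients and a single weight matrix each.
w_a = [np.ones((2, 2))]
w_b = [np.zeros((2, 2))]
print(fed_avg([w_a, w_b], client_sizes=[300, 100])[0])  # -> 0.75 everywhere
```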
Concept Paper
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Satyadhar Joshi

Abstract: This paper provides an extensive review of DeepSeek, an emerging open-source large language model (LLM) known for its Mixture-of-Experts (MoE) architecture and Multi-Head Latent Attention innovations. The study highlights DeepSeek's efficiency, scalability, and performance across tasks such as natural language processing, mathematical reasoning, and code generation, positioning it as a competitive alternative to proprietary models like ChatGPT, Claude, and Gemini. Comparative analyses reveal that DeepSeek excels in tasks requiring structured writing, grammatical precision, and technical problem-solving, with notable success in healthcare diagnostics and risk management in finance; however, challenges include limitations in creative outputs and a higher rate of unsafe responses than some competitors, signaling the need for enhanced safety protocols. User feedback is generally positive regarding accessibility and reasoning capabilities, though criticisms are directed at content policies and moderation. DeepSeek's cost-efficient, open-source nature is celebrated for democratizing AI, making advanced technology accessible to researchers, educators, and developers worldwide, particularly in resource-constrained settings. Applications across education, healthcare, business, and finance demonstrate its versatility, from personalizing learning experiences to improving diagnostic accuracy and enabling better financial decision-making. Ethical considerations and future directions are also explored, including expanding multimodal capabilities, refining safety measures, and pursuing innovative applications to maximize impact across industries. Overall, the paper underscores DeepSeek's potential to drive innovation and expand the frontiers of artificial intelligence research and applications.
Article
Computer Science and Mathematics
Applied Mathematics

Hiroshi Isshiki

Abstract: The remarkable progress of generative AI has brought fresh air to linguistics. In animal communication and language translation, generative AI has been used to deepen analysis at the level of semantic space, and as a result, geometric similarities in semantic space have been discovered. Even when expressions in concrete space differ, mapping them into semantic space reveals geometric similarities that had previously been hidden. In the end, this may seem obvious, but it is a surprising discovery. Inspired by this discovery, this paper proposes a basic mathematical theory necessary for understanding linguistic space. This study aims to clarify the geometric properties of the translation process, which are difficult to explain using conventional linguistic theory. In this study, linguistic space is regarded as a mathematical coordinate space, and translation between different languages is treated as a coordinate transformation, assuming the existence of a common invariant: the meaning of language. From this perspective, the approach of this study is called "geometric linguistic space".
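One way to write the paper's premise in symbols (the notation below is illustrative and not taken from the paper): each language L_i has a semantic space S_i, translation is a map T_ij between spaces, and meaning m is the shared invariant, with the observed geometric similarity suggesting the map approximately preserves distances.

```latex
% Illustrative notation, not taken from the paper:
% S_i = semantic space of language L_i, T_{ij} = translation map,
% m = shared meaning invariant, d_{S_i} = a distance on S_i.
\[
  T_{ij} : S_i \to S_j, \qquad m\bigl(T_{ij}(x)\bigr) = m(x) \quad \text{for all } x \in S_i,
\]
\[
  d_{S_j}\bigl(T_{ij}(x),\, T_{ij}(y)\bigr) \approx d_{S_i}(x, y) \qquad \text{(approximate isometry)}.
\]
```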
Article
Computer Science and Mathematics
Computer Vision and Graphics

Wu Yonghao,

Liu Minyi,

Li Jun

Abstract: In various fields, including visual processing, computer graphics, neuroscience, and the biological sciences, geons are widely recognized as fundamental units of complex shapes, and their importance has been broadly acknowledged. However, accurately identifying and extracting these geons remains a challenge. This study integrates theories from signal processing, computer graphics, neuroscience, and the biological sciences, utilizing "object imaging" and neural networks to describe mathematical operators, in order to reveal the essence of visual geons. Experiments validate the core hypothesis of geon theory, namely that geons are foundational components for the visual system to recognize complex objects. Through training, neural networks are capable of identifying distinct basic geons and, on this foundation, performing target recognition in more complex scenarios. This effectively confirms the existence of geons and their critical role in visual recognition, providing new tools and theoretical foundations for related research fields.
Article
Computer Science and Mathematics
Mathematical and Computational Biology

José Alberto Rodrigues

Abstract: The tumor microenvironment is a highly dynamic and complex system where cellular interactions evolve over time, influencing tumor growth, immune response, and treatment resistance. In this study, we develop a graph-theoretic framework to model the tumor microenvironment, where nodes represent different cell types, and edges denote their interactions. The temporal evolution of the tumor microenvironment is governed by fundamental biological processes, including proliferation, apoptosis, migration, and angiogenesis, which we model using differential equations with stochastic effects. Specifically, we describe tumor cell population dynamics using a logistic growth model incorporating both apoptosis and random fluctuations. Additionally, we construct a dynamic network to represent cellular interactions, allowing for an analysis of structural changes over time. Through numerical simulations, we investigate how key parameters such as proliferation rates, apoptosis thresholds, and stochastic fluctuations influence tumor progression and network topology. Our findings demonstrate that graph theory provides a powerful mathematical tool to analyze the spatiotemporal evolution of tumors, offering insights into potential therapeutic strategies. This approach has implications for optimizing cancer treatments by targeting critical network structures within the tumor microenvironment.
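The abstract describes tumor cell dynamics with logistic growth, apoptosis, and random fluctuations. A minimal Euler-Maruyama simulation of one such stochastic logistic equation is sketched below; the functional form and parameter values are illustrative, not the paper's.

```python
import numpy as np

def simulate_tumor_population(n0=100.0, r=0.3, K=1e4, d=0.05,
                              sigma=0.1, dt=0.01, steps=5000, seed=0):
    """Euler-Maruyama integration of a stochastic logistic model with an
    apoptosis term:  dN = [ r*N*(1 - N/K) - d*N ] dt + sigma*N dW.
    All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    n = np.empty(steps + 1)
    n[0] = n0
    for t in range(steps):
        drift = r * n[t] * (1.0 - n[t] / K) - d * n[t]
        diffusion = sigma * n[t] * rng.normal(0.0, np.sqrt(dt))
        n[t + 1] = max(n[t] + drift * dt + diffusion, 0.0)
    return n

trajectory = simulate_tumor_population()
print(trajectory[-1])  # population size at the end of the simulated horizon
```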
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Erfan Wang

Abstract: Personalized recommendation systems play a crucial role in enhancing user engagement and decision-making across various domains. Traditional approaches, such as collaborative filtering and matrix factorization, have shown effectiveness but suffer from data sparsity and cold-start problems. Recent advances in deep learning, graph-based models, and attention mechanisms have significantly improved recommendation performance. This paper proposes a novel hybrid recommendation model that integrates Factorization Machines (FM), Graph Convolutional Networks (GCN), and Multi-Layer Attention Networks (MLAN) to optimize feature representations and enhance prediction accuracy. Experimental results demonstrate the superiority of the proposed approach over baseline methods in key performance metrics.
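As a small, self-contained illustration of one named component, a second-order Factorization Machine scoring function is sketched below (using the standard O(k·n) identity for the pairwise term); the paper's full FM + GCN + MLAN hybrid is not reproduced, and all names are illustrative.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order Factorization Machine score for one feature vector x:
    y = w0 + <w, x> + 0.5 * sum_f [ (x·V[:,f])^2 - (x^2)·(V[:,f]^2) ]."""
    linear = w0 + x @ w
    interaction = 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))
    return linear + interaction

# Toy example: 4 features, latent dimension 3.
rng = np.random.default_rng(0)
x = np.array([1.0, 0.0, 1.0, 0.5])
print(fm_predict(x, w0=0.1, w=rng.normal(size=4), V=rng.normal(size=(4, 3))))
```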
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Md. Shahid Ahammed Shakil,

Nitun Kumar Podder,

S.M. Hasan Sazzad Iqbal,

Abu Saleh Musa Miah,

Md Abdur Rahim

Abstract: Emotion recognition in speech is essential for enhancing human-computer interaction (HCI) systems. Despite progress in Bangla speech emotion recognition, challenges remain, including low accuracy, speaker dependency, and poor generalization across emotional expressions. Previous approaches often rely on traditional machine learning or basic deep learning models, struggling with robustness and accuracy in noisy or varied data. In this study, we propose a novel multi-stream deep learning feature fusion approach for Bangla speech emotion recognition, addressing the limitations of existing methods. Our approach begins with various data augmentation techniques applied to the training dataset, enhancing the model’s robustness and generalization. We then extract a comprehensive set of handcrafted features, including Zero-Crossing Rate (ZCR), chromagram, spectral centroid, spectral roll-off, spectral contrast, spectral flatness, Mel-Frequency Cepstral Coefficients (MFCCs), Root Mean Square (RMS) energy, and Mel-spectrogram. These features capture key characteristics of the speech signal, providing valuable insights into the emotional content. Subsequently, we utilize a multi-stream deep learning architecture to automatically learn complex, hierarchical representations of the speech signal. This architecture consists of three distinct streams: the first stream uses 1D Convolutional Neural Networks (1D CNN), the second integrates 1D CNN with Long Short-Term Memory (LSTM), and the third combines 1D CNN with Bidirectional LSTM (Bi-LSTM). These models capture intricate emotional nuances that handcrafted features alone may not fully represent. For each of these models, we generate predicted scores and then employ ensemble learning with a soft voting technique to produce the final prediction. This fusion of handcrafted features, deep learning-derived features, and ensemble voting enhances the accuracy and robustness of emotion identification across multiple datasets. Our method demonstrates the effectiveness of combining various learning models to improve emotion recognition in Bangla speech, providing a more comprehensive solution compared to existing methods. We utilize three primary datasets—SUBESCO, BanglaSER, and a merged version of both—as well as two external datasets, RAVDESS and EMODB, to assess the performance of our models. Our method achieves impressive results with accuracies of 92.90%, 85.20%, 90.63%, 67.71%, and 69.25% for the SUBESCO, BanglaSER, merged SUBESCO and BanglaSER, RAVDESS, and EMODB datasets, respectively. These results demonstrate the effectiveness of combining handcrafted features with deep learning-based features through ensemble learning for robust emotion recognition in Bangla speech.
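As a small sketch of the handcrafted-feature stage and the soft-voting step described above (mean-pooled librosa features with default frame settings; the paper's exact configuration and feature dimensions may differ):

```python
import numpy as np
import librosa

def handcrafted_features(path, sr=16000, n_mfcc=40):
    """Mean-pooled handcrafted features of the kind listed in the abstract
    (illustrative settings; not necessarily the paper's)."""
    y, sr = librosa.load(path, sr=sr)
    feats = [
        librosa.feature.zero_crossing_rate(y),
        librosa.feature.chroma_stft(y=y, sr=sr),
        librosa.feature.spectral_centroid(y=y, sr=sr),
        librosa.feature.spectral_rolloff(y=y, sr=sr),
        librosa.feature.spectral_contrast(y=y, sr=sr),
        librosa.feature.spectral_flatness(y=y),
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc),
        librosa.feature.rms(y=y),
        librosa.feature.melspectrogram(y=y, sr=sr),
    ]
    return np.concatenate([f.mean(axis=1) for f in feats])

def soft_vote(prob_cnn, prob_cnn_lstm, prob_cnn_bilstm):
    """Soft-voting ensemble: average the three streams' class-probability
    vectors and return the index of the highest-scoring class."""
    avg = (np.asarray(prob_cnn) + np.asarray(prob_cnn_lstm) + np.asarray(prob_cnn_bilstm)) / 3.0
    return int(np.argmax(avg))
```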
