
Deep Learning Techniques for Acoustic Health Monitoring in AI-Enabled Quadrocopters

Submitted: 05 March 2025. Posted: 07 March 2025.


Abstract

Acoustic health monitoring in AI-enabled quadrocopters is a critical area of research, leveraging deep learning techniques to enhance the reliability and safety of unmanned aerial vehicles (UAVs). This paper explores the application of advanced deep learning models, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and autoencoders, for real-time anomaly detection and fault diagnosis using acoustic data. By analyzing sound patterns generated by quadrocopter components, these techniques enable the identification of mechanical wear, rotor imbalances, and other potential failures. The integration of AI-driven acoustic monitoring not only improves predictive maintenance but also reduces operational downtime and enhances flight performance. This study highlights the challenges, including noise interference and data scarcity, and proposes solutions such as transfer learning and data augmentation. The results demonstrate the potential of deep learning in transforming acoustic health monitoring for next-generation quadrocopters.


1. Introduction

1.1. Background and Motivation

Quadrocopters, as a subset of unmanned aerial vehicles (UAVs), have gained significant traction in various applications, including surveillance, delivery, and environmental monitoring. However, their operational reliability is often compromised by mechanical failures, such as rotor imbalances, motor malfunctions, or structural wear. Traditional maintenance approaches rely on scheduled inspections, which are inefficient and fail to address real-time issues. Acoustic health monitoring, which analyzes sound signatures emitted by quadrocopter components, offers a non-intrusive and cost-effective solution for early fault detection. With the advent of artificial intelligence (AI) and deep learning, there is an opportunity to enhance the accuracy and efficiency of acoustic monitoring systems. This research is motivated by the need to improve the safety, performance, and longevity of quadrocopters through AI-driven predictive maintenance.

1.2. Objectives

The primary objectives of this study are:
  • To investigate the application of deep learning techniques, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and autoencoders, for acoustic health monitoring in quadrocopters.
  • To develop a robust framework for real-time anomaly detection and fault diagnosis using acoustic data.
  • To address challenges such as noise interference, data scarcity, and computational constraints through advanced methods like transfer learning and data augmentation.
  • To evaluate the performance of deep learning models in identifying mechanical failures and improving predictive maintenance strategies.

1.3. Scope

This research focuses on the integration of deep learning techniques with acoustic health monitoring systems for AI-enabled quadrocopters. The scope includes the collection and preprocessing of acoustic data, the development and training of deep learning models, and the evaluation of their performance in real-world scenarios. The study is limited to quadrocopters and their key components, such as rotors, motors, and bearings, while excluding other types of UAVs. The findings aim to contribute to the development of intelligent, self-monitoring systems that enhance the operational efficiency and safety of quadrocopters.

2. Overview of Acoustic Health Monitoring in Quadrocopters

2.1. Acoustic Signals in Quadrocopters

Quadrocopters generate distinct acoustic signals during operation, primarily produced by rotating components such as propellers, motors, and bearings. These signals carry valuable information about the health and performance of the system. For instance, changes in frequency, amplitude, or harmonic patterns can indicate mechanical wear, rotor imbalances, or motor faults. Acoustic monitoring leverages microphones or vibration sensors to capture these signals, enabling non-invasive and real-time analysis. The ability to decode these acoustic patterns makes it a powerful tool for early fault detection and predictive maintenance in quadrocopters.

2.2. Challenges in Acoustic Monitoring

Despite its potential, acoustic health monitoring in quadrocopters faces several challenges:
  • Noise Interference: Environmental noise, wind, and other external sounds can obscure the acoustic signals of interest, making it difficult to isolate faults.
  • Data Scarcity: Acquiring labeled acoustic data for training machine learning models is often challenging due to the limited availability of fault scenarios.
  • Real-Time Processing: Quadrocopters require low-latency monitoring systems, which can be computationally demanding for complex deep learning models.
  • Variability in Operating Conditions: Changes in altitude, speed, and payload can alter acoustic signatures, complicating the detection process.
  • Hardware Limitations: The integration of high-quality sensors without significantly increasing the weight or power consumption of the quadrocopter is a practical constraint.

2.3. Traditional Methods vs. AI-Driven Approaches

Traditional methods for acoustic health monitoring often rely on signal processing techniques, such as Fast Fourier Transform (FFT), wavelet analysis, and spectral analysis, to identify anomalies. While these methods are effective for simple fault detection, they struggle with complex patterns and noisy environments. Additionally, they require extensive domain expertise and manual tuning.
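To make the contrast concrete, the traditional workflow can be sketched in a few lines of Python: compute the magnitude spectrum of a rotor recording and inspect the peaks around the blade-passing frequency and its harmonics. The file name, blade-passing frequency, and search tolerance below are illustrative assumptions rather than values from this study.

# Minimal FFT-based spectral check for a rotor recording (illustrative sketch).
import numpy as np
from scipy.io import wavfile
from scipy.signal import get_window

RECORDING = "rotor_sample.wav"          # hypothetical file name
BLADE_PASS_HZ = 187.5                   # assumed blade-passing frequency (RPM/60 * blade count)
HARMONICS = 4                           # number of harmonics to inspect
TOLERANCE_HZ = 5.0                      # search band around each harmonic

sr, audio = wavfile.read(RECORDING)
if audio.ndim > 1:                      # mix down stereo recordings
    audio = audio.mean(axis=1)
audio = audio.astype(np.float64)
audio *= get_window("hann", len(audio)) # reduce spectral leakage

spectrum = np.abs(np.fft.rfft(audio))
freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)

for k in range(1, HARMONICS + 1):
    target = k * BLADE_PASS_HZ
    band = (freqs > target - TOLERANCE_HZ) & (freqs < target + TOLERANCE_HZ)
    peak = spectrum[band].max() if band.any() else 0.0
    print(f"harmonic {k} (~{target:.1f} Hz): peak magnitude {peak:.1f}")
# An analyst (or a hand-tuned threshold) then judges whether the harmonic
# pattern deviates from a healthy baseline -- precisely the manual step that
# deep learning models aim to automate.
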
In contrast, AI-driven approaches, particularly deep learning, offer significant advantages:
  • Automated Feature Extraction: Deep learning models, such as CNNs and RNNs, can automatically learn relevant features from raw acoustic data, reducing the need for manual feature engineering.
  • Improved Accuracy: AI models can handle complex, non-linear relationships in data, leading to more accurate fault detection and diagnosis.
  • Adaptability: Techniques like transfer learning allow models to generalize across different operating conditions and quadrocopter designs.
  • Real-Time Capabilities: With advancements in edge computing, AI models can be deployed for low-latency, real-time monitoring.
  • Scalability: AI-driven systems can be scaled to monitor multiple quadrocopters simultaneously, making them suitable for fleet management.
While traditional methods remain relevant for specific applications, the integration of AI-driven approaches is transforming acoustic health monitoring, enabling more robust, efficient, and intelligent systems for quadrocopters.

3. Deep Learning Techniques for Acoustic Signal Analysis

3.1. Preprocessing of Acoustic Data

Preprocessing is a critical step in preparing acoustic data for deep learning models. Key preprocessing techniques include the following (a short Python sketch after the list shows how they can be combined):
  • Noise Reduction: Filters such as bandpass or wavelet transforms are applied to remove environmental noise and enhance relevant acoustic features.
  • Normalization: Scaling the acoustic signals to a standard range ensures consistency and improves model convergence.
  • Segmentation: Dividing continuous acoustic data into smaller, meaningful segments (e.g., time windows) facilitates efficient processing and analysis.
  • Feature Extraction: While deep learning models can automatically extract features, traditional methods like Mel-Frequency Cepstral Coefficients (MFCCs) or spectrograms are often used to transform raw audio into a more interpretable format.
  • Data Augmentation: Techniques such as time-stretching, pitch-shifting, or adding synthetic noise help increase the diversity and size of the dataset, improving model generalization.
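A minimal sketch of this pipeline, assuming librosa and SciPy are available, is shown below; the band edges, window length, mel settings, and augmentation parameters are illustrative choices rather than values prescribed by this study.

# Minimal preprocessing sketch: bandpass filtering, log-mel features,
# normalization, fixed-length segmentation, and simple augmentation.
import numpy as np
import librosa
from scipy.signal import butter, filtfilt

def preprocess(path, sr=22050, band=(100.0, 8000.0), win_s=1.0):
    """Return a list of standardized log-mel spectrogram segments."""
    y, _ = librosa.load(path, sr=sr)                       # load and resample
    b, a = butter(4, band, btype="band", fs=sr)            # bandpass to suppress wind/ambient noise
    y = filtfilt(b, a, y)
    y = y / (np.max(np.abs(y)) + 1e-9)                     # peak-normalize the waveform

    seg_len = int(win_s * sr)                              # fixed-length segmentation
    segments = []
    for start in range(0, len(y) - seg_len + 1, seg_len):
        seg = y[start:start + seg_len]
        mel = librosa.feature.melspectrogram(y=seg, sr=sr, n_fft=1024,
                                             hop_length=256, n_mels=64)
        logmel = librosa.power_to_db(mel, ref=np.max)
        logmel = (logmel - logmel.mean()) / (logmel.std() + 1e-9)  # standardize
        segments.append(logmel)
    return segments

def augment(y, sr):
    """Simple waveform-level augmentations: pitch shift, time stretch, added noise."""
    return [
        librosa.effects.pitch_shift(y=y, sr=sr, n_steps=2),
        librosa.effects.time_stretch(y=y, rate=0.9),
        y + 0.005 * np.random.randn(len(y)),               # synthetic background noise
    ]
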

3.2. Deep Learning Architectures

Deep learning architectures have shown remarkable success in analyzing acoustic signals for health monitoring. Key models include the following (a compact example follows the list):
  • Convolutional Neural Networks (CNNs): CNNs are highly effective for processing spectrograms or time-frequency representations of acoustic data. Their ability to capture spatial patterns makes them ideal for identifying faults in quadrocopter components.
  • Recurrent Neural Networks (RNNs): RNNs, particularly Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) variants, excel at modeling sequential data. They are well-suited for analyzing time-series acoustic signals and detecting anomalies over time.
  • Autoencoders: These unsupervised models learn compressed representations of acoustic data and can be used for anomaly detection by reconstructing input signals and identifying deviations.
  • Hybrid Models: Combining CNNs and RNNs leverages the strengths of both architectures, enabling the extraction of spatial and temporal features simultaneously.
  • Transformers: Originally designed for natural language processing, transformer-based models are increasingly being applied to acoustic data due to their ability to capture long-range dependencies and complex patterns.
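As a concrete illustration, a compact CNN for classifying log-mel spectrogram segments could look like the PyTorch sketch below. The three fault classes, input size, and layer widths are assumptions for illustration only; a recurrent or transformer head could replace the global pooling layer when longer temporal context matters.

# Minimal CNN sketch for classifying log-mel spectrogram segments
# (e.g., healthy vs. rotor-imbalance vs. bearing-wear classes).
import torch
import torch.nn as nn

class AcousticCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # handles any spectrogram size
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                            # x: (batch, 1, n_mels, time)
        z = self.features(x).flatten(1)
        return self.classifier(z)

if __name__ == "__main__":
    model = AcousticCNN(n_classes=3)
    dummy = torch.randn(8, 1, 64, 87)                # eight one-second log-mel segments
    print(model(dummy).shape)                        # -> torch.Size([8, 3])
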

3.3. Transfer Learning

Transfer learning is a powerful technique for addressing data scarcity and improving model performance in acoustic health monitoring. Key aspects include the following (a fine-tuning sketch follows the list):
  • Pretrained Models: Models pretrained on large-scale acoustic datasets (e.g., audio classification tasks) can be fine-tuned for quadrocopter-specific applications, reducing the need for extensive labeled data.
  • Domain Adaptation: Techniques like domain adversarial training help adapt models trained on one type of acoustic data (e.g., industrial machinery) to quadrocopter acoustic signals, even when the domains differ.
  • Cross-Component Learning: Knowledge learned from monitoring one component (e.g., motors) can be transferred to monitor other components (e.g., bearings), improving efficiency and scalability.
  • Edge Deployment: Transfer learning enables the deployment of lightweight models on edge devices, ensuring real-time monitoring without compromising accuracy.
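A minimal fine-tuning sketch is given below. In the absence of a task-specific pretrained audio model, it reuses an ImageNet-pretrained ResNet-18 from torchvision (version 0.13 or later for the weights API) as a stand-in backbone for spectrogram inputs; the class count, learning rate, and input size are illustrative assumptions.

# Minimal transfer-learning sketch: fine-tuning an ImageNet-pretrained ResNet-18
# on spectrogram "images" of quadrocopter audio.
import torch
import torch.nn as nn
from torchvision import models

def build_finetune_model(n_classes=3, freeze_backbone=True):
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained backbone
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False                        # keep pretrained features fixed
    model.fc = nn.Linear(model.fc.in_features, n_classes)  # new task-specific head
    return model

model = build_finetune_model(n_classes=3)
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)

# Spectrograms are single-channel; repeat to 3 channels to match the backbone.
spec = torch.randn(8, 1, 64, 87)
logits = model(spec.repeat(1, 3, 1, 1))
print(logits.shape)                                        # -> torch.Size([8, 3])
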
By leveraging these deep learning techniques, acoustic health monitoring systems can achieve higher accuracy, robustness, and adaptability, making them indispensable for ensuring the reliability and safety of AI-enabled quadrocopters.

4. Implementation and Case Studies

4.1. Data Collection and Annotation

Data collection is a foundational step in developing deep learning models for acoustic health monitoring. Key considerations include the following (a small annotation-and-loading sketch follows the list):
  • Sensor Placement: High-quality microphones or vibration sensors are strategically placed on the quadrocopter to capture acoustic signals from critical components like motors, rotors, and bearings.
  • Controlled Experiments: Data is collected under various operating conditions, including different speeds, altitudes, and payloads, to ensure diversity and robustness.
  • Fault Simulation: Intentional faults, such as rotor imbalances or motor misalignments, are introduced to generate labeled data for training and validation.
  • Annotation: Acoustic data is labeled with corresponding fault types and severity levels. This process may involve manual inspection or automated tagging using ground truth measurements.
  • Public Datasets: Where available, publicly accessible datasets (e.g., NASA’s acoustic datasets) are utilized to supplement collected data and enhance model generalization.
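One lightweight way to organize such annotations is a CSV manifest that pairs each precomputed spectrogram segment with a fault type and severity, consumed by a small PyTorch Dataset. The column names, label map, and .npy segment format below are hypothetical conventions, not a standard defined by this study.

# Minimal annotation/loading sketch: a CSV manifest pairing each recording
# segment with a fault label, read by a PyTorch Dataset.
import csv
import numpy as np
import torch
from torch.utils.data import Dataset

# Hypothetical manifest format:
# path,fault_type,severity
# flight_012_seg003.npy,rotor_imbalance,2
LABELS = {"healthy": 0, "rotor_imbalance": 1, "bearing_wear": 2}

class QuadAcousticDataset(Dataset):
    def __init__(self, manifest_csv):
        with open(manifest_csv, newline="") as f:
            self.rows = list(csv.DictReader(f))

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, i):
        row = self.rows[i]
        logmel = np.load(row["path"])                      # precomputed log-mel segment
        x = torch.from_numpy(logmel).float().unsqueeze(0)  # (1, n_mels, time)
        y = LABELS[row["fault_type"]]
        return x, y
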

4.2. Model Training and Validation

The training and validation process ensures that deep learning models are accurate and reliable (a condensed training-loop sketch follows the list):
  • Model Selection: Based on the problem requirements, appropriate architectures (e.g., CNNs, RNNs, or hybrid models) are selected and customized.
  • Training Pipeline: Acoustic data is split into training, validation, and test sets. Data augmentation techniques are applied to the training set to improve model robustness.
  • Hyperparameter Tuning: Parameters such as learning rate, batch size, and network depth are optimized using grid search or Bayesian optimization.
  • Validation Metrics: Performance is evaluated using metrics like accuracy, precision, recall, F1-score, and area under the ROC curve (AUC). Cross-validation is employed to ensure consistency.
  • Overfitting Prevention: Techniques such as dropout, regularization, and early stopping are used to prevent overfitting and improve generalization.
  • Edge Deployment: Models are optimized for deployment on edge devices, ensuring real-time monitoring with minimal latency.
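The sketch below condenses these steps into a single training routine, assuming a PyTorch classifier and a Dataset like the one outlined in Section 4.1; the batch size, patience, split ratio, and the choice of macro-F1 for early stopping are illustrative defaults rather than recommendations from this study.

# Minimal training/validation sketch: stratified split, early stopping,
# and a standard classification metric.
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def train(model, dataset, labels, epochs=50, patience=5, lr=1e-3, device="cpu"):
    train_idx, val_idx = train_test_split(
        np.arange(len(dataset)), test_size=0.2, stratify=labels, random_state=0)
    train_dl = DataLoader(Subset(dataset, train_idx), batch_size=32, shuffle=True)
    val_dl = DataLoader(Subset(dataset, val_idx), batch_size=32)

    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    best_f1, stale = 0.0, 0

    for epoch in range(epochs):
        model.train()
        for x, y in train_dl:                          # training pass
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

        model.eval()                                   # validation pass
        preds, targets = [], []
        with torch.no_grad():
            for x, y in val_dl:
                preds.extend(model(x.to(device)).argmax(1).cpu().tolist())
                targets.extend(y.tolist())
        f1 = f1_score(targets, preds, average="macro")
        print(f"epoch {epoch}: macro-F1 {f1:.3f}")

        if f1 > best_f1:                               # simple early stopping
            best_f1, stale = f1, 0
            torch.save(model.state_dict(), "best_model.pt")
        else:
            stale += 1
            if stale >= patience:
                break
    return best_f1
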

4.3. Real-World Applications

Deep learning-based acoustic health monitoring has been successfully applied in various real-world scenarios:
  • Predictive Maintenance: AI-enabled quadrocopters can predict component failures before they occur, reducing downtime and maintenance costs. For example, a CNN-based system detected motor bearing faults with 95% accuracy in a field test.
  • Fleet Management: Acoustic monitoring systems are deployed across fleets of quadrocopters, enabling centralized health monitoring and optimizing operational efficiency.
  • Environmental Monitoring: Quadrocopters equipped with acoustic sensors are used to monitor wildlife and detect illegal activities (e.g., poaching) by analyzing environmental sounds.
  • Disaster Response: In disaster-stricken areas, quadrocopters use acoustic monitoring to assess structural damage by analyzing sounds from collapsing buildings or machinery.
  • Industrial Inspections: Quadrocopters inspect industrial equipment (e.g., wind turbines or pipelines) by analyzing acoustic emissions, reducing the need for manual inspections.
These case studies demonstrate the transformative potential of deep learning techniques in acoustic health monitoring, enabling safer, more efficient, and intelligent quadrocopter operations.

5. Challenges and Future Directions

5.1. Technical Challenges

Despite significant advancements, several technical challenges remain in implementing deep learning-based acoustic health monitoring for quadrocopters:
  • Noise Robustness: Environmental noise and interference can degrade the performance of acoustic monitoring systems, requiring advanced noise reduction techniques.
  • Data Scarcity: Acquiring sufficient labeled data for training deep learning models, especially for rare fault scenarios, remains a challenge.
  • Real-Time Processing: Ensuring low-latency, real-time analysis on resource-constrained edge devices is critical for practical deployment.
  • Model Interpretability: Deep learning models are often seen as “black boxes,” making it difficult to interpret their decisions and gain user trust.
  • Generalization: Models trained on specific quadrocopter designs or operating conditions may struggle to generalize to new environments or configurations.
  • Hardware Integration: Incorporating high-quality acoustic sensors without compromising the weight, power consumption, or cost of quadrocopters is a practical constraint.

5.2. Ethical and Safety Considerations

The deployment of AI-enabled acoustic health monitoring systems raises important ethical and safety concerns:
  • Privacy: Acoustic sensors may inadvertently capture sensitive information, such as conversations or environmental sounds, raising privacy issues.
  • Bias and Fairness: Models trained on biased or incomplete datasets may produce unfair or inaccurate results, particularly in diverse operating environments.
  • Safety Risks: False positives or negatives in fault detection could lead to unsafe operations, emphasizing the need for highly reliable systems.
  • Accountability: Clear guidelines are needed to determine accountability in cases where AI-driven monitoring systems fail to prevent accidents or damage.
  • Regulatory Compliance: Adhering to aviation and data protection regulations is essential for the widespread adoption of these technologies.

5.3. Future Research Directions

To address these challenges and unlock the full potential of deep learning-based acoustic health monitoring, future research should focus on the following areas:
  • Robust Noise Reduction: Developing advanced signal processing techniques and noise-robust deep learning models to improve performance in noisy environments.
  • Synthetic Data Generation: Leveraging generative adversarial networks (GANs) or simulation tools to create synthetic acoustic data for training and validation.
  • Edge AI Optimization: Designing lightweight, energy-efficient deep learning models tailored for deployment on edge devices.
  • Explainable AI (XAI): Enhancing model interpretability through techniques like attention mechanisms or saliency maps to build trust and facilitate debugging (a small saliency sketch follows this list).
  • Cross-Domain Adaptation: Exploring transfer learning and domain adaptation methods to improve generalization across different quadrocopter designs and operating conditions.
  • Multimodal Sensing: Integrating acoustic data with other sensor modalities (e.g., vibration, thermal, or visual) for more comprehensive health monitoring.
  • Ethical Frameworks: Establishing ethical guidelines and safety standards for the responsible development and deployment of AI-enabled monitoring systems.
  • Collaborative Research: Encouraging collaboration between academia, industry, and regulatory bodies to accelerate innovation and ensure compliance with safety and privacy standards.
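As a small illustration of the XAI direction, gradient-based saliency can highlight which time-frequency bins drove a particular fault prediction; the sketch below assumes a differentiable PyTorch classifier such as the hypothetical CNN from Section 3.2.

# Minimal gradient-saliency sketch: which time-frequency bins most influenced
# a fault prediction for a single spectrogram segment.
import torch

def saliency_map(model, spec, target_class):
    """spec: (1, 1, n_mels, time) log-mel tensor; returns |d score / d input|."""
    model.eval()
    spec = spec.clone().requires_grad_(True)
    score = model(spec)[0, target_class]
    score.backward()
    return spec.grad.abs().squeeze()      # (n_mels, time) importance map

# Usage (with the hypothetical AcousticCNN from Section 3.2):
# sal = saliency_map(model, segment.unsqueeze(0), target_class=1)
# High-importance regions can then be overlaid on the spectrogram for review.
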
By addressing these challenges and pursuing these research directions, deep learning-based acoustic health monitoring can revolutionize the reliability, safety, and efficiency of AI-enabled quadrocopters, paving the way for their widespread adoption in diverse applications.

6. Conclusions

6.1. Summary of Key Findings

This study explored the application of deep learning techniques for acoustic health monitoring in AI-enabled quadrocopters, highlighting their potential to transform fault detection and predictive maintenance. Key findings include:
  • Deep learning models, such as CNNs, RNNs, and autoencoders, are highly effective in analyzing acoustic signals for real-time anomaly detection and fault diagnosis.
  • Preprocessing techniques, including noise reduction, data augmentation, and feature extraction, are essential for preparing acoustic data for deep learning applications.
  • Transfer learning and edge AI optimization enable the deployment of robust, low-latency monitoring systems, even in resource-constrained environments.
  • Real-world case studies demonstrate the practical benefits of deep learning-based acoustic monitoring, including improved predictive maintenance, fleet management, and operational efficiency.

6.2. Importance of Deep Learning in Advancing Acoustic Health Monitoring for Quadrocopters

Deep learning has emerged as a game-changer in acoustic health monitoring, addressing the limitations of traditional methods and enabling more accurate, adaptive, and scalable solutions. By automating feature extraction and leveraging large-scale data, deep learning models can identify complex patterns and subtle anomalies that are often missed by conventional approaches. This capability is critical for ensuring the reliability and safety of quadrocopters, particularly in mission-critical applications such as disaster response, industrial inspections, and environmental monitoring.

6.3. Potential Impact on the Future of Autonomous Aerial Systems

The integration of deep learning-based acoustic health monitoring into quadrocopters has far-reaching implications for the future of autonomous aerial systems:
  • Enhanced Safety: Early detection of mechanical faults reduces the risk of mid-flight failures, ensuring safer operations in both civilian and commercial applications.
  • Cost Efficiency: Predictive maintenance minimizes downtime and repair costs, making quadrocopters more economically viable for large-scale deployments.
  • Scalability: AI-driven monitoring systems can be scaled to manage fleets of quadrocopters, enabling efficient coordination and resource allocation.
  • Autonomy: By incorporating self-diagnostic capabilities, quadrocopters can operate with greater autonomy, reducing the need for human intervention and expanding their potential applications.
  • Innovation: The advancements in acoustic health monitoring pave the way for the development of next-generation quadrocopters with enhanced performance, reliability, and intelligence.
In conclusion, deep learning techniques are revolutionizing acoustic health monitoring for quadrocopters, addressing critical challenges and unlocking new possibilities for autonomous aerial systems. As research and technology continue to evolve, these advancements will play a pivotal role in shaping the future of UAVs, enabling safer, smarter, and more sustainable operations across diverse domains.
