Preprint
Article

This version is not peer-reviewed.

Machine Learning Techniques for Fake News Detection

Submitted: 04 March 2025
Posted: 05 March 2025


Abstract
The rapid proliferation of fake news on digital platforms has emerged as a significant challenge, undermining trust in information and influencing public opinion. To combat this issue, researchers have increasingly turned to machine learning (ML) techniques for automated fake news detection. This paper explores the application of various ML approaches, including supervised, unsupervised, and deep learning models, to identify and classify fake news. Key techniques such as natural language processing (NLP), sentiment analysis, and feature extraction are discussed, highlighting their role in improving detection accuracy. Additionally, the challenges of dataset quality, model interpretability, and real-time detection are addressed. The study concludes that while ML techniques show promise in fake news detection, ongoing advancements in model robustness and adaptability are essential to keep pace with the evolving nature of misinformation.

I. Introduction

A. Definition of Fake News

Fake news refers to intentionally fabricated or misleading information presented as factual news, often designed to deceive readers, influence public opinion, or generate revenue. It can take various forms, including false headlines, manipulated images, and out-of-context information, spreading rapidly through social media and other digital platforms.

B. Importance of Detecting Fake News

The proliferation of fake news poses significant threats to society, including the erosion of trust in media, manipulation of democratic processes, and the amplification of social polarization. Detecting and mitigating fake news is crucial to maintaining informed public discourse, ensuring the integrity of information ecosystems, and safeguarding individuals and institutions from harm.

C. Challenges in Fake News Detection

Detecting fake news is a complex task due to several challenges:
  • Volume and Speed: The sheer volume of information shared online and the speed at which it spreads make manual detection impractical.
  • Evolving Tactics: Fake news creators continuously adapt their methods, making it difficult to develop static detection systems.
  • Contextual Understanding: Fake news often relies on subtle linguistic cues or partial truths, requiring deep contextual analysis.
  • Bias and Subjectivity: Distinguishing between fake news and legitimate opinion pieces or satire can be challenging due to subjective interpretations.

D. Role of Machine Learning in Addressing the Problem

Machine learning (ML) has emerged as a powerful tool for automating fake news detection. By leveraging techniques such as natural language processing (NLP), sentiment analysis, and deep learning, ML models can analyze large datasets, identify patterns, and classify content with high accuracy. These models can be trained to detect linguistic anomalies, source credibility, and emotional manipulation, offering scalable and efficient solutions to combat misinformation. However, the effectiveness of ML techniques depends on the quality of training data, model interpretability, and the ability to adapt to new forms of fake news. This paper explores the potential of ML in addressing the challenges of fake news detection and highlights areas for future research.

II. Overview of Fake News Detection

A. Types of Fake News

Fake news can be categorized into several types based on intent and content:
  • Fabricated Content: Completely false information created to deceive or mislead.
  • Misleading Headlines: Sensational or inaccurate headlines that distort the context of the story.
  • False Context: Genuine content shared with false contextual information to alter its meaning.
  • Imposter Content: Fake content designed to mimic legitimate news sources or brands.
  • Manipulated Media: Altered images, videos, or audio used to misrepresent events or individuals.
  • Satire or Parody: Humorous content that, while not intended to harm, can be misinterpreted as real news.

B. Sources of Fake News

Fake news originates from various sources, including:
  • Social Media Platforms: Major hubs for the rapid dissemination of fake news due to their wide reach and lack of stringent content moderation.
  • Fake News Websites: Websites designed to mimic legitimate news outlets but publish fabricated or misleading stories.
  • Bots and Trolls: Automated accounts or malicious actors that spread fake news to manipulate public opinion or create chaos.
  • Echo Chambers: Online communities that reinforce and amplify biased or false information.
  • Mainstream Media Errors: Occasionally, even reputable sources may inadvertently spread misinformation due to a lack of verification.

C. Impact of Fake News on Society

The spread of fake news has far-reaching consequences, including:
  • Erosion of Trust: Undermines public trust in media, institutions, and democratic processes.
  • Polarization: Exacerbates social and political divisions by spreading biased or inflammatory content.
  • Public Safety Risks: Misinformation about health, safety, or emergencies can lead to harmful behaviors or panic.
  • Economic Damage: Fake news can manipulate stock markets, damage reputations, and harm businesses.
  • Threat to Democracy: Influences elections and policy decisions by spreading false narratives or discrediting legitimate information.
Understanding the types, sources, and societal impacts of fake news is essential for developing effective detection and mitigation strategies. Machine learning techniques play a critical role in addressing these challenges by providing scalable and automated solutions to identify and combat fake news.

III. Machine Learning Techniques for Fake News Detection

A. Supervised Learning

Supervised learning is one of the most widely used techniques for fake news detection. It involves training models on labeled datasets, where each data point is tagged as “real” or “fake.” Common algorithms include:
  • Logistic Regression: Used for binary classification to predict the probability of news being fake.
  • Support Vector Machines (SVM): Effective for high-dimensional data, such as text, by finding the optimal boundary between classes.
  • Decision Trees and Random Forests: Provide interpretable models for classifying news based on features like word frequency or source credibility.
  • Naive Bayes: A probabilistic model that works well with text data by leveraging word frequencies.
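The supervised pipeline described above can be sketched in a few lines of Python with scikit-learn. The tiny inline corpus and labels here are invented for illustration only; a real system would train on a labeled dataset such as LIAR or FakeNewsNet.

```python
# Minimal supervised sketch: TF-IDF features fed to logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Scientists confirm vaccine passes rigorous clinical trials",
    "Local council approves new budget after public hearing",
    "SHOCKING: miracle cure doctors don't want you to know",
    "You won't BELIEVE this one weird trick to get rich overnight",
]
labels = ["real", "real", "fake", "fake"]

# TF-IDF turns each article into a weighted word-frequency vector;
# logistic regression then learns a linear decision boundary over it.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["SHOCKING miracle cure trick doctors don't want you to know"]))
```

Swapping `LogisticRegression` for `MultinomialNB` or `RandomForestClassifier` changes only the final pipeline step, which is why this pattern is a common baseline.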

B. Unsupervised Learning

Unsupervised learning is used when labeled data is scarce. It identifies patterns and clusters in data without predefined labels. Techniques include:
  • Clustering Algorithms (e.g., K-Means, DBSCAN): Group similar news articles together, helping to identify potential fake news clusters.
  • Topic Modeling (e.g., Latent Dirichlet Allocation - LDA): Extracts topics from text data to detect anomalies or inconsistencies in news content.
  • Anomaly Detection: Identifies outliers or unusual patterns that may indicate fake news.
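As an illustration of the clustering idea, the sketch below groups a handful of invented, unlabeled headlines with K-Means over TF-IDF vectors. Cluster membership alone does not say which cluster is fake; in practice a few manually audited samples per cluster guide that judgment.

```python
# Unsupervised sketch: cluster unlabeled articles with K-Means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "government announces infrastructure spending plan",
    "parliament debates infrastructure spending bill",
    "miracle pill melts fat overnight doctors stunned",
    "stunned doctors reveal miracle pill secret",
]
X = TfidfVectorizer().fit_transform(docs)

# k=2 splits the corpus into two groups; with this toy data the
# policy stories and the clickbait stories separate cleanly.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)
```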

C. Semi-Supervised Learning

Semi-supervised learning combines labeled and unlabeled data to improve model performance, especially when labeled data is limited. Techniques include:
  • Self-Training: A model is initially trained on a small labeled dataset and then iteratively labels unlabeled data to expand the training set.
  • Graph-Based Methods: Leverages relationships between labeled and unlabeled data points to improve classification accuracy.

D. Deep Learning Techniques

Deep learning has shown remarkable success in fake news detection due to its ability to model complex patterns in data. Key approaches include:
  • Recurrent Neural Networks (RNNs): Effective for sequential data like text, capturing contextual information over time.
  • Long Short-Term Memory (LSTM): A variant of RNNs that handles long-term dependencies, useful for analyzing lengthy news articles.
  • Convolutional Neural Networks (CNNs): Traditionally used for image data, CNNs can also be applied to text for feature extraction.
  • Transformers (e.g., BERT, GPT): State-of-the-art models that use attention mechanisms to understand context and semantics in text, achieving high accuracy in fake news detection.
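The RNN, LSTM, and transformer models above require frameworks like PyTorch or TensorFlow; as a lightweight stand-in, this sketch trains a small feed-forward neural network (scikit-learn's `MLPClassifier`) on TF-IDF features, showing the same train/predict shape without those dependencies. The corpus is invented for illustration.

```python
# Neural-network stand-in sketch: a small MLP over TF-IDF features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

texts = [
    "central bank holds interest rates steady this quarter",
    "city opens new public library branch downtown",
    "aliens endorse candidate in leaked secret video",
    "secret leaked video shows aliens rigging election",
]
y = [0, 0, 1, 1]  # 0 = real, 1 = fake

X = TfidfVectorizer().fit_transform(texts)
# One hidden layer of 16 units; real deep models would instead consume
# token sequences or embeddings rather than bag-of-words vectors.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X, y)
print(mlp.predict(X))
```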

E. Hybrid Approaches

Hybrid approaches combine multiple techniques to leverage their strengths and improve detection performance. Examples include:
  • Ensemble Methods: Combining predictions from multiple models (e.g., SVM, Random Forest, and LSTM) to enhance accuracy.
  • Feature Fusion: Integrating features from different sources, such as text, metadata, and social network analysis, to provide a comprehensive view of news authenticity.
  • Multi-Modal Learning: Combining text, images, and videos to detect fake news across different media types.
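One simple instance of the ensemble idea is soft voting, averaging the predicted probabilities of two different classifiers; the sketch below (toy data invented for illustration) combines logistic regression and a random forest over shared TF-IDF features.

```python
# Ensemble sketch: soft-voting over two heterogeneous classifiers.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "mayor opens new bridge after safety inspection",
    "county reports record turnout in local election",
    "secret cabal controls weather with hidden machines",
    "hidden machines secretly control the weather, insiders claim",
]
y = ["real", "real", "fake", "fake"]

ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[
            ("lr", LogisticRegression()),
            ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ],
        voting="soft",  # average predicted probabilities across models
    ),
)
ensemble.fit(texts, y)
print(ensemble.predict(["insiders claim secret machines control the weather"]))
```

A feature-fusion variant would concatenate these TF-IDF columns with metadata features (source reputation, share counts) before the classifier.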
These machine learning techniques, when applied effectively, offer powerful tools for detecting and mitigating the spread of fake news. However, challenges such as model interpretability, adaptability to new fake news tactics, and the need for high-quality datasets remain areas for further research and development.

IV. Datasets for Fake News Detection

A. Commonly Used Datasets

High-quality datasets are essential for training and evaluating machine learning models for fake news detection. Some widely used datasets include:
  • Fake News Detection Dataset (FakeNewsNet): A comprehensive dataset containing news articles, social context, and metadata from platforms like Twitter.
  • LIAR Dataset: A dataset with labeled statements from Politifact, categorized as true, false, or mixed.
  • BuzzFeed News Dataset: Contains news articles and social media engagement data, labeled as real or fake.
  • Kaggle Fake News Dataset: A popular dataset with news articles labeled as reliable or unreliable.
  • COVID-19 Fake News Dataset: Focused on misinformation related to the COVID-19 pandemic, providing labeled examples of fake and real news.
  • CREDBANK Dataset: A large-scale dataset of tweets annotated for credibility, useful for social media-based fake news detection.

B. Data Preprocessing Techniques

Preprocessing is a critical step to prepare raw data for machine learning models. Common techniques include:
Text Cleaning:
  • Removing special characters, punctuation, and stopwords.
  • Lowercasing text to ensure uniformity.
  • Handling missing or incomplete data.
Tokenization: Splitting text into individual words or tokens for analysis.
Stemming and Lemmatization: Reducing words to their root forms to standardize variations (e.g., “running” to “run”).
Feature Extraction:
  • Bag of Words (BoW): Representing text as a vector of word frequencies.
  • TF-IDF (Term Frequency-Inverse Document Frequency): Weighing words based on their importance in a document relative to a corpus.
  • Word Embeddings (e.g., Word2Vec, GloVe): Capturing semantic relationships between words in a continuous vector space.
Handling Imbalanced Data: Techniques like oversampling (e.g., SMOTE) or undersampling to address class imbalance in datasets.
Normalization: Scaling numerical features to a standard range to improve model performance.
Metadata Integration: Incorporating additional features such as source credibility, author information, or social media engagement metrics.
Effective preprocessing ensures that the data is clean, structured, and suitable for training machine learning models, ultimately improving the accuracy and reliability of fake news detection systems.
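The cleaning, tokenization, and feature-extraction steps above can be sketched as a short Python function followed by TF-IDF vectorization. The stopword list here is a tiny illustrative subset, not a complete one.

```python
# Preprocessing sketch: lowercase, strip punctuation, drop stopwords,
# then build TF-IDF vectors from the cleaned documents.
import re

from sklearn.feature_extraction.text import TfidfVectorizer

STOPWORDS = {"the", "a", "an", "is", "to", "of", "and"}  # illustrative subset

def clean(text):
    text = text.lower()                            # lowercasing for uniformity
    text = re.sub(r"[^a-z0-9\s]", " ", text)       # remove punctuation/symbols
    tokens = [t for t in text.split() if t not in STOPWORDS]
    return " ".join(tokens)

raw = "BREAKING!!! The Miracle Cure, an Amazing Secret..."
print(clean(raw))  # -> "breaking miracle cure amazing secret"

docs = [clean(d) for d in ["The cure is a secret!", "An amazing secret cure."]]
X = TfidfVectorizer().fit_transform(docs)
print(X.shape)  # (2 documents, 3 vocabulary terms)
```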

V. Evaluation Metrics

A. Accuracy, Precision, Recall, and F1-Score

Accuracy: Measures the proportion of correctly classified instances (both true positives and true negatives) out of the total instances. While useful, accuracy can be misleading in imbalanced datasets.
Accuracy = (TP + TN) / (TP + TN + FP + FN)
where TP = true positives, TN = true negatives, FP = false positives, FN = false negatives.
Precision: Indicates the proportion of correctly identified fake news instances out of all instances classified as fake. High precision reduces the risk of false positives.
Precision = TP / (TP + FP)
Recall (Sensitivity): Measures the proportion of actual fake news instances correctly identified by the model. High recall ensures minimal false negatives.
Recall = TP / (TP + FN)
F1-Score: The harmonic mean of precision and recall, providing a balanced measure of model performance, especially in imbalanced datasets.
F1-Score = 2 × (Precision × Recall) / (Precision + Recall)

B. Confusion Matrix

A confusion matrix is a table that summarizes the performance of a classification model by showing the counts of true positives, true negatives, false positives, and false negatives. It provides a detailed breakdown of model predictions and helps identify specific areas of improvement.
              Predicted Fake   Predicted Real
Actual Fake        TP               FN
Actual Real        FP               TN

C. ROC-AUC Curve

The Receiver Operating Characteristic (ROC) curve plots the true positive rate (recall) against the false positive rate at various threshold settings. The Area Under the Curve (AUC) provides a single metric to evaluate the model’s ability to distinguish between classes:
AUC = 1: Perfect classifier.
AUC = 0.5: Random classifier.
AUC < 0.5: Worse than random.
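All of the metrics above are available in scikit-learn; the sketch below computes them for a small invented set of labels, predictions, and model probabilities (1 = fake, 0 = real).

```python
# Evaluation sketch: accuracy, precision, recall, F1, confusion
# matrix, and ROC-AUC on example predictions.
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)

y_true = [1, 1, 1, 0, 0, 0, 0, 0]               # ground-truth labels
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]               # hard predictions
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.1, 0.6, 0.2]  # predicted P(fake)

print(accuracy_score(y_true, y_pred))    # (TP+TN)/total = 0.75
print(precision_score(y_true, y_pred))   # TP/(TP+FP) = 2/3
print(recall_score(y_true, y_pred))      # TP/(TP+FN) = 2/3
print(f1_score(y_true, y_pred))          # harmonic mean = 2/3
# Note: scikit-learn orders rows/columns by label value, so here
# actual/predicted "real" (0) comes first, unlike the table above.
print(confusion_matrix(y_true, y_pred))
print(roc_auc_score(y_true, scores))     # needs probabilities, not labels
```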

D. Challenges in Evaluating Fake News Detection Models

  • Imbalanced Datasets: Fake news datasets are often imbalanced, with far fewer fake instances than real ones, leading to biased evaluation metrics.
  • Subjectivity in Labeling: Determining ground truth for fake news can be subjective, as some content may be partially true or context-dependent.
  • Evolving Nature of Fake News: Models trained on historical data may struggle to generalize to new forms of fake news, requiring continuous evaluation and updates.
  • Contextual Understanding: Metrics may not fully capture the model’s ability to understand nuanced or context-dependent fake news.
  • Real-World Deployment: Evaluation in controlled environments may not reflect real-world performance, where noise, bias, and adversarial attacks are prevalent.
These evaluation metrics and challenges highlight the importance of robust and comprehensive assessment methods to ensure the reliability and effectiveness of fake news detection models.

VI. Applications and Case Studies

A. Real-World Applications of Fake News Detection

Social Media Platforms:
Platforms like Facebook, Twitter, and Instagram use machine learning models to detect and flag fake news, reducing its spread and impact.
Example: Facebook’s partnership with fact-checking organizations to identify and label misinformation.
News Aggregators and Fact-Checking Websites:
Tools like Google News and Snopes leverage ML algorithms to verify news authenticity and provide users with credible information.
Government and Public Health Organizations:
Governments and health agencies use fake news detection systems to combat misinformation during elections, pandemics, or crises.
Example: WHO’s efforts to debunk COVID-19 misinformation using automated tools.
Financial Institutions:
Banks and financial organizations use fake news detection to identify and mitigate the impact of false information on stock markets and investments.
Educational Institutions:
Universities and schools employ fake news detection tools to teach media literacy and critical thinking skills.
Cybersecurity Firms:
Companies specializing in cybersecurity use ML models to detect and prevent the spread of malicious content, including fake news.

B. Case Studies of Successful Implementation

Facebook’s Fact-Checking Initiative:
Facebook implemented a machine learning-based system to identify potentially false content and flag it for review by third-party fact-checkers. This approach reduced the visibility of fake news by 80% in some regions.
Twitter’s Birdwatch Program:
Twitter introduced Birdwatch, a community-driven platform where users can flag and annotate misleading tweets. Machine learning algorithms prioritize flagged content for review, enhancing the platform’s ability to combat misinformation.
Google’s Fact-Check Explorer:
Google integrated fact-checking tools into its search engine and news aggregator, using ML to highlight verified information and flag disputed content. This has improved the credibility of search results.
Reuters’ News Tracer:
Reuters developed an AI-powered tool called News Tracer to detect breaking news and verify its authenticity in real-time. The system analyzes social media signals and metadata to identify credible sources and filter out misinformation.
Full Fact’s Automated Fact-Checking:
Full Fact, a UK-based fact-checking organization, uses machine learning to automate the detection of false claims in political speeches and news articles. Their system has significantly reduced the time required for manual fact-checking.
COVID-19 Misinformation Detection:
During the COVID-19 pandemic, researchers developed ML models to detect and debunk false claims about the virus. For example, the University of California, Berkeley, created a system to analyze social media posts and identify misinformation related to vaccines and treatments.
These applications and case studies demonstrate the effectiveness of machine learning techniques in combating fake news across various domains. However, ongoing research and collaboration are essential to address emerging challenges and improve the scalability and accuracy of these systems.

VII. Challenges and Limitations

A. Evolving Nature of Fake News

  • Adaptive Tactics: Fake news creators continuously adapt their methods, such as using deepfakes, manipulated media, or sophisticated language models, making detection more challenging.
  • Real-Time Detection: The rapid spread of fake news requires real-time detection systems, which are computationally intensive and difficult to implement effectively.
  • Contextual Nuances: Fake news often relies on subtle contextual or cultural cues that are difficult for models to interpret accurately.

B. Bias in Datasets and Models

  • Dataset Bias: Training datasets may not be representative of all types of fake news, leading to biased models that perform poorly on underrepresented categories.
  • Algorithmic Bias: Models may inherit biases present in the training data, such as favoring certain languages, regions, or political perspectives.
  • Labeling Subjectivity: Human annotators may introduce bias during the labeling process, affecting the quality and reliability of the dataset.

C. Ethical Considerations

  • Censorship Concerns: Automated fake news detection systems may inadvertently flag legitimate content, raising concerns about censorship and freedom of speech.
  • Privacy Issues: Analyzing social media data for fake news detection may infringe on user privacy, especially when personal information is involved.
  • Transparency and Accountability: Lack of transparency in how models make decisions can lead to mistrust and ethical dilemmas, particularly in high-stakes applications like elections or public health.

D. Computational Complexity

  • Resource-Intensive Models: Advanced techniques like deep learning require significant computational resources, making them expensive and inaccessible for some organizations.
  • Scalability Issues: Scaling models to handle large volumes of data in real-time is challenging, especially for platforms with millions of users.
  • Energy Consumption: Training and deploying complex models consume substantial energy, raising environmental concerns.
Addressing these challenges and limitations requires a multidisciplinary approach, combining advancements in machine learning, data collection, and ethical frameworks. Collaboration between researchers, policymakers, and industry stakeholders is essential to develop robust, fair, and scalable solutions for fake news detection.

VIII. Future Directions

A. Incorporating Multimodal Data (Text, Images, Videos)

  • Multimodal Learning: Future systems will leverage text, images, videos, and audio to detect fake news more effectively. For example, deepfake detection and image verification can complement textual analysis.
  • Cross-Modal Analysis: Combining features from different modalities (e.g., analyzing the consistency between a news article’s text and its accompanying images) can improve detection accuracy.
  • Advanced Models: Techniques like multimodal transformers (e.g., CLIP, ViLT) will play a key role in integrating and analyzing diverse data types.

B. Explainable AI for Transparency

  • Interpretable Models: Developing models that provide clear explanations for their decisions will enhance trust and accountability. Techniques like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) can help achieve this.
  • User-Friendly Explanations: Presenting explanations in a way that is understandable to non-experts, such as journalists or policymakers, will improve adoption and usability.
  • Ethical AI: Explainable AI can help identify and mitigate biases in models, ensuring fair and ethical fake news detection.

C. Real-Time Detection Systems

  • Streaming Data Processing: Developing systems that can process and analyze data in real-time will be critical for combating the rapid spread of fake news.
  • Edge Computing: Leveraging edge computing to perform detection tasks closer to the data source (e.g., on user devices) can reduce latency and improve efficiency.
  • Adaptive Models: Models that can continuously learn and adapt to new forms of fake news will be essential for maintaining effectiveness over time.

D. Collaboration with Human Fact-Checkers

  • Human-in-the-Loop Systems: Combining the strengths of machine learning with human expertise can improve detection accuracy and reduce false positives. For example, models can flag suspicious content for human review.
  • Crowdsourcing Fact-Checking: Platforms that allow users to contribute to fact-checking efforts, such as Twitter’s Birdwatch, can enhance the scalability and diversity of detection systems.
  • Training and Support: Providing fact-checkers with AI-powered tools to assist in verifying claims, analyzing sources, and identifying patterns can improve their efficiency and effectiveness.
These future directions highlight the need for innovative approaches, interdisciplinary collaboration, and a focus on ethical and transparent AI to address the evolving challenges of fake news detection. By integrating multimodal data, enhancing explainability, enabling real-time detection, and fostering human-AI collaboration, researchers and practitioners can build more robust and reliable systems to combat misinformation.

IX. Conclusion

A. Summary of Key Points

Fake news detection has emerged as a critical challenge in the digital age, with significant societal, political, and economic implications. Machine learning techniques, including supervised, unsupervised, semi-supervised, and deep learning approaches, have shown great promise in automating the detection process. Key advancements in natural language processing, feature extraction, and multimodal data analysis have improved the accuracy and scalability of detection systems. However, challenges such as dataset bias, model interpretability, and the evolving nature of fake news remain significant hurdles.

B. Importance of Continued Research

Continued research is essential to address the limitations of current systems and adapt to the rapidly changing landscape of misinformation. Key areas for future exploration include:
  • Developing more robust and adaptive models to handle new forms of fake news, such as deepfakes and AI-generated content.
  • Improving the quality and diversity of datasets to reduce bias and enhance model generalizability.
  • Advancing explainable AI techniques to ensure transparency and build trust in detection systems.
  • Integrating multimodal data and real-time processing capabilities to improve detection accuracy and speed.

C. Call to Action for Stakeholders

Combating fake news requires a collaborative effort from all stakeholders:
  • Researchers: Focus on developing innovative, ethical, and scalable solutions while addressing the limitations of current technologies.
  • Tech Companies: Invest in robust detection systems, ensure transparency, and prioritize user privacy and freedom of speech.
  • Governments and Policymakers: Establish regulations and frameworks to support ethical AI development and combat misinformation without infringing on civil liberties.
  • Media Organizations: Partner with fact-checkers and AI developers to verify content and promote media literacy.
  • General Public: Stay informed, critically evaluate information, and support initiatives that promote digital literacy and responsible sharing.
By working together, stakeholders can build a more informed and resilient society, equipped to tackle the challenges of fake news and misinformation in the digital era.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
