Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Using Large Language Models to Mitigate Ransomware Threats

Version 1 : Received: 6 November 2023 / Approved: 10 November 2023 / Online: 10 November 2023 (07:44:31 CET)

How to cite: Wang, F. Using Large Language Models to Mitigate Ransomware Threats. Preprints 2023, 2023110676. https://doi.org/10.20944/preprints202311.0676.v1

Abstract

This paper explores the application of Large Language Models (LLMs), such as GPT-3 and GPT-4, to generating cybersecurity policies and strategies that mitigate ransomware threats, including data-theft ransomware. We discuss the strengths and limitations of LLMs for ransomware defense and provide recommendations for leveraging LLMs effectively while ensuring ethical compliance. The key contributions include a quantitative evaluation of LLM-generated policies, an examination of the legal and ethical implications, and an analysis of how LLMs can enhance ransomware resilience when applied judiciously.

Keywords

ransomware; malware; ransomware mitigation

Subject

Computer Science and Mathematics, Security Systems

