Article
This version is not peer-reviewed and is preserved in Portico.
Using Large Language Models to Mitigate Ransomware Threats
Version 1: Received: 6 November 2023 / Approved: 10 November 2023 / Online: 10 November 2023 (07:44:31 CET)
How to cite: Wang, F. Using Large Language Models to Mitigate Ransomware Threats. Preprints 2023, 2023110676. https://doi.org/10.20944/preprints202311.0676.v1
Abstract
This paper explores the application of Large Language Models (LLMs), such as GPT-3 and GPT-4, to generating cybersecurity policies and strategies that mitigate ransomware threats, including data-theft ransomware. We discuss the strengths and limitations of LLMs for ransomware defense and offer recommendations for leveraging them effectively while ensuring ethical compliance. The key contributions include a quantitative evaluation of LLM-generated policies, an examination of their legal and ethical implications, and an analysis of how LLMs, applied judiciously, can enhance ransomware resilience.
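As a minimal illustration of the policy-generation workflow the abstract describes, the sketch below prompts a GPT-class model to draft a ransomware-mitigation policy. It assumes the OpenAI Python SDK (v1.x) with an API key in the environment; the prompt wording, model name, and parameters are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch: prompting an LLM to draft a ransomware-mitigation
# policy. Assumes the OpenAI Python SDK v1.x (`pip install openai`) and an
# OPENAI_API_KEY environment variable. The prompt and settings below are
# hypothetical, not the paper's actual method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Draft a concise ransomware-mitigation policy for a mid-sized "
    "organization. Cover offline backups, least-privilege access, patch "
    "management, phishing awareness, and an incident-response escalation "
    "path. Address data-theft ransomware explicitly."
)

response = client.chat.completions.create(
    model="gpt-4",  # the paper discusses GPT-3 and GPT-4
    messages=[
        {"role": "system", "content": "You are a cybersecurity policy expert."},
        {"role": "user", "content": prompt},
    ],
    temperature=0.2,  # low temperature favors consistent, repeatable policy text
)

print(response.choices[0].message.content)
```

Any output produced this way would still need the kind of expert review and quantitative evaluation the paper's contributions describe before adoption.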
Keywords
ransomware; malware; ransomware mitigation
Subject
Computer Science and Mathematics, Security Systems
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.