Preprint Article, Version 1 (Preserved in Portico). This version is not peer-reviewed.

Application of Open-source Large Language Model (LLM) for Simulation of a Vulnerable IoT System and Cybersecurity Best Practices Assistance

Version 1 : Received: 16 May 2024 / Approved: 17 May 2024 / Online: 17 May 2024 (11:45:11 CEST)

How to cite: Yosifova, V. Application of Open-source Large Language Model (LLM) for Simulation of a Vulnerable IoT System and Cybersecurity Best Practices Assistance. Preprints 2024, 2024051169. https://doi.org/10.20944/preprints202405.1169.v1

Abstract

This paper explores the role of open-source large language models in the world of IoT cybersecurity. The threats of malicious activity on the Internet and the loss of private information are very real and lead to serious consequences. The purpose of this paper is to investigate how open-source large language models can help defend against the growing threat of cybercrime. We conducted our experiments in two directions. The first is a security assistant that offers advice on cybersecurity best practices. The second is a large language model that simulates a vulnerable IoT system. Both types of experiments use the interactive mode of operation of the language model. In the context of cybersecurity research, a major advantage of locally installed open-source large language models is that they do not share sensitive data with a remote system in the cloud. The paper concludes by discussing the potential impact of open-source large language models on cybersecurity research and recommends future research directions.
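The abstract describes two interactive uses of a locally installed open-source LLM: a best-practices security assistant and a simulated vulnerable IoT system. A minimal sketch of how such sessions might be framed is shown below; the two system prompts and the message-assembly helper are illustrative assumptions, not the paper's actual setup, and in practice the assembled messages would be passed to a locally hosted model (e.g. via llama.cpp or an HTTP endpoint) so that no sensitive data leaves the machine.

```python
# Sketch: framing two interactive sessions with a local open-source LLM.
# The system prompts mirror the paper's two experiment directions; the
# helper only assembles the chat-style message list that a local model
# backend would consume. No data is sent to any remote service.

ASSISTANT_PROMPT = (
    "You are a cybersecurity assistant. Give concise, practical "
    "best-practice advice for securing IoT deployments."
)

HONEYPOT_PROMPT = (
    "You simulate the shell of a vulnerable IoT device. Respond to "
    "commands as that device would."
)

def build_messages(system_prompt: str,
                   history: list[tuple[str, str]],
                   user_input: str) -> list[dict]:
    """Assemble a chat-style message list for one interactive turn."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_turn, model_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": model_turn})
    messages.append({"role": "user", "content": user_input})
    return messages

# Example turn in the simulated-IoT-system experiment: the prior
# exchange is replayed as history, then the new command is appended.
msgs = build_messages(
    HONEYPOT_PROMPT,
    [("uname -a", "Linux iotcam 3.10.0 armv7l")],
    "cat /etc/passwd",
)
```

Swapping `HONEYPOT_PROMPT` for `ASSISTANT_PROMPT` turns the same loop into the best-practices assistant; keeping the model local is what preserves the privacy advantage the abstract emphasizes.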

Keywords

cybersecurity; open-source; large language models; IoT

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
