Preprint Article, Version 1 (preserved in Portico). This version is not peer-reviewed.

GPTs or Grim Position Threats? The Potential Impacts of Large Language Models on Non-Managerial Jobs and Certifications in Cybersecurity

Version 1: Received: 23 April 2024 / Approved: 24 April 2024 / Online: 24 April 2024 (09:46:46 CEST)

How to cite: Nowrozy, R. GPTs or Grim Position Threats? The Potential Impacts of Large Language Models on Non-Managerial Jobs and Certifications in Cybersecurity. Preprints 2024, 2024041600. https://doi.org/10.20944/preprints202404.1600.v1

Abstract

ChatGPT, a Large Language Model (LLM) built on Natural Language Processing (NLP), has raised concerns about its impact on job sectors, including cybersecurity. In this study, we assessed ChatGPT's impact on non-managerial cybersecurity roles using the NICE Framework and Technological Displacement theory. We also explored its potential to pass top cybersecurity certification exams. Findings reveal ChatGPT's promise for streamlining some jobs, especially those that rely on memorization. We also highlight ChatGPT's challenges and limitations, including ethical implications, inherent LLM constraints, and Artificial Intelligence (AI) security concerns. The study suggests that LLMs like ChatGPT could transform the cybersecurity landscape, causing job losses, skill obsolescence, labor market shifts, and mixed socioeconomic impacts. We recommend shifting the focus of training and certification from memorization to critical thinking, and closer collaboration between LLM developers and cybersecurity professionals.
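The certification-exam evaluation mentioned in the abstract can be approximated programmatically. The following is a minimal sketch, not the authors' actual method: it assumes the OpenAI Python SDK (v1+), an OPENAI_API_KEY environment variable, and a hypothetical model name, and the two sample questions are illustrative stand-ins for real exam items. It simply scores the model's single-letter answers against an answer key.

```python
# Minimal sketch: scoring an LLM on multiple-choice, certification-style
# questions. The model name and the sample items below are assumptions
# for illustration, not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical exam items: (question with options, correct letter).
QUESTIONS = [
    ("Which control primarily ensures confidentiality?\n"
     "A) Hashing  B) Encryption  C) Load balancing  D) Logging", "B"),
    ("Which NIST CSF function covers restoring capabilities after an incident?\n"
     "A) Identify  B) Protect  C) Respond  D) Recover", "D"),
]

def ask(question: str) -> str:
    """Ask the model for a single-letter answer to one exam item."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute as needed
        messages=[
            {"role": "system",
             "content": "Answer with only the letter of the correct option."},
            {"role": "user", "content": question},
        ],
    )
    # Keep only the first character, normalized to uppercase.
    return response.choices[0].message.content.strip()[:1].upper()

correct = sum(ask(q) == answer for q, answer in QUESTIONS)
print(f"Score: {correct}/{len(QUESTIONS)} ({correct / len(QUESTIONS):.0%})")
```

A real evaluation would use the full published question banks, repeat runs to account for sampling variance, and separate memorization-heavy items from scenario-based ones, which is the distinction the abstract's recommendation turns on.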

Keywords

cybersecurity; skills; ChatGPT; workforce; large language model; generative AI

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
