The digitalization of healthcare systems increases their exposure to security incidents. Security analysts rely on standard CVE (Common Vulnerabilities and Exposures) records to identify and mitigate vulnerabilities. However, CVEs are often incomplete or overly generic, requiring augmentation with structured, actionable information to support effective decision-making. Performing this augmentation manually is infeasible given the rapidly growing number of published CVEs. In this paper, we evaluate the capability of LLMs (Large Language Models) to classify and analyze CVEs within the medical IT systems domain. We propose a framework in which an LLM parses structured JSON context and answers a set of specific natural language questions, enabling the categorization of vulnerabilities by their position in the medical chain, by affected component type, and by mapping to the MITRE ATT&CK framework. While recent studies show that general-purpose LLMs can achieve high accuracy on objective CVSS elements and learn CNA-oriented patterns, they often struggle with subjective impact metrics. Our results demonstrate that domain-specific classification through natural language prompting provides the granularity needed for medical risk prioritization. We conclude that this augmentation effectively bridges the gap in standard CVE records, allowing a better understanding of how vulnerabilities impact critical healthcare infrastructure and patient safety.
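The prompting framework described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the CVE record, the question wording, and the `build_prompt` helper are all hypothetical placeholders standing in for the structured JSON context and question set the framework defines.

```python
import json

# Hypothetical CVE record in the structured JSON form the framework consumes.
cve_record = {
    "id": "CVE-YYYY-NNNNN",
    "description": (
        "Improper authentication in an infusion pump management console "
        "allows remote attackers to alter device settings."
    ),
    "affected_products": ["ExamplePumpManager 3.2"],
}

# Illustrative classification questions; the actual question set is
# defined by the framework, not reproduced here.
QUESTIONS = [
    "Where in the medical chain is the affected system positioned "
    "(e.g., diagnosis, treatment, patient monitoring)?",
    "What type of component is affected (medical device, hospital IT, middleware)?",
    "Which MITRE ATT&CK technique best matches the described vulnerability?",
]

def build_prompt(record: dict, questions: list) -> str:
    """Combine the structured CVE context with natural language questions."""
    context = json.dumps(record, indent=2)
    numbered = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(questions))
    return (
        "You are a security analyst for medical IT systems.\n"
        f"CVE record (JSON):\n{context}\n\n"
        f"Answer the following questions about this record:\n{numbered}"
    )

prompt = build_prompt(cve_record, QUESTIONS)
# The prompt would then be sent to an LLM, and the answers parsed
# into the domain-specific categories used for risk prioritization.
```

The key design point is that the LLM receives the machine-readable CVE JSON verbatim as context, while the classification task itself is posed as plain natural language questions, so the output maps directly onto the medical-domain categories.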