Mr Vijay Chakravarthy1
1Fujitsu Australia Ltd, Melbourne, Australia
Biography:
Vijay a well-regarded Cyber security professional with expertise in delivering diverse product portfolios and driving innovation. Skilled in AI security, cloud security and Operational technology security domains, he has been proficient in solutioning and using data-driven insights as part of addressing security challenges. He is also an active member of Fujitsu's global Cybersecurity research and development (R&D) group where global innovations are born from.
Abstract:
Introduction
The integration of large language models (LLMs) into digital systems has introduced a new class of security vulnerabilities that challenge conventional cybersecurity paradigms. Despite their utility, LLMs exhibit a range of exploitable behaviours, from susceptibility to prompt injection to the inadvertent generation of insecure code. As these models become increasingly embedded in critical infrastructure, their vulnerabilities demand rigorous academic scrutiny.
Methods
This study presents a qualitative analysis of LLM vulnerabilities through empirical case studies and adversarial testing. We examine the mechanisms behind prompt-based attacks, including psychological manipulation and reverse-engineering techniques, and assess the implications of these behaviours in real-world deployment contexts. Particular attention is given to the intersection of LLM behaviour with broader system-level security concerns.
Results
Findings indicate that LLMs are highly susceptible to manipulation via both technical and social vectors. Even benign prompts can elicit unintended outputs, and AI-generated code frequently contains exploitable flaws. Moreover, LLMs demonstrate persistent challenges in maintaining data confidentiality. These vulnerabilities are not isolated; they propagate through interconnected systems, amplifying risk across the cybersecurity ecosystem.
Conclusion
Mitigating LLM-related threats requires a paradigm shift in how AI security is conceptualised. We argue for a novel, systems-oriented approach that integrates LLM security into the broader cybersecurity framework. This work contributes to the growing discourse on AI safety by offering actionable insights and proposing a research agenda aimed at developing resilient, context-aware mitigation strategies.