Ahead of GISEC GLOBAL in Dubai from 6-8th May 2025, Cisco has unveiled the findings of its inaugural global State of AI Security report. The report aims to provide a comprehensive overview of important developments in AI security across several key areas: threat intelligence, policy, and research.
AI has emerged as one of the defining technologies of the 21st century, yet the AI threat landscape is novel, complex, and not effectively addressed by traditional cybersecurity solutions. The State of AI Security report aims to empower the community to better understand the AI security landscape, so that companies are better equipped to manage the risks and reap the benefits that AI brings.
Cisco is participating at GISEC GLOBAL 2025 as a Platinum Sponsor, under the theme “Innovating where security meets the network”. Across its portfolio, Cisco is harnessing AI to reframe how organizations think about cybersecurity outcomes and tip the scales in favor of defenders. Visitors at GISEC will learn how Cisco combines AI within its breadth of telemetry across the network, private and public cloud infrastructure, applications and endpoints to deliver more accurate and reliable outcomes.
“As AI becomes deeply embedded into business and society, securing it must become a top priority,” said Fady Younes, Managing Director for Cybersecurity at Cisco Middle East, Africa, Türkiye, Romania and CIS. “As our State of AI Security report indicates, traditional cybersecurity approaches are no longer sufficient to address the unique risks presented by AI. GISEC serves as the ideal platform to discuss the new age of AI-enhanced cybersecurity – bringing together security leaders, innovators, and policymakers who are shaping the region’s cyber defense strategies. Through our thought leadership and innovations, we are showcasing at GISEC, Cisco aims to equip organizations with the insights, research, and recommendations they need to build secure and resilient AI systems.”
Findings from Cisco’s first State of AI Security report include:
Evolution of the AI Threat Landscape
The rapid proliferation of AI and AI-enabled technologies has introduced a massive new attack surface that security leaders are only beginning to contend with.
Risk exists at virtually every step across the entire AI development lifecycle; AI assets can be directly compromised by an adversary or discreetly compromised though a vulnerability in the AI supply chain. The State of AI Security report examines several AI-specific attack vectors including prompt injection attacks, data poisoning, and data extraction attacks. It also reflects on the use of AI by adversaries to improve cyber operations like social engineering, supported by research from Cisco Talos.
Looking at the year ahead, cutting-edge advancements in AI will undoubtedly introduce new risks for security leaders to be aware of. For example, the rise of agentic AI which can act autonomously without constant human supervision seems ripe for exploitation. On the other hand, the scale of social engineering threatens to grow tremendously, exacerbated by powerful multimodal AI tools in the wrong hands.
Key Developments in AI Policy
The past year has seen significant advancements in AI policy. International efforts have led to key developments in global AI governance. Early actions in 2025 suggest greater focus towards effectively balancing the need for AI security with accelerating the speed of innovation.
Original AI Security Research
The Cisco AI security research team has led and contributed to several pieces of groundbreaking research which are highlighted in the State of AI Security report.
Research into algorithmic jailbreaking of large language models (LLMs) demonstrates how adversaries can bypass model protections with zero human supervision. This technique can be used to exfiltrate sensitive data and disrupt AI services. More recently, the team explored automated jailbreaking of advanced reasoning models like DeepSeek R1, to demonstrate that even reasoning models can still fall victim to traditional jailbreaking techniques.
The team also explores the safety and security risks of fine-tuning models. While fine-tuning is a popular method for improving the contextual relevance of AI, many are unaware of the inadvertent consequences like model misalignment.
The report also reviews two pieces of original research into poisoning public datasets and extracting training data from LLMs. These studies shed light on how easily—and cost-effectively—a bad actor can tamper with or exfiltrate data from enterprise AI applications.
Recommendations for AI Security
Securing AI systems requires a proactive and comprehensive approach. The report outlines several actionable recommendations:
- Manage risk at every point in the AI lifecycle: Ensure your security team is equipped to identify and mitigate at every phase: supply chain sourcing (e.g., third-party AI models, data sources, and software libraries), data acquisition, model development, training, and deployment.
- Maintain familiar cybersecurity best practices: Concepts like access control, permission management, and data loss prevention remain critical. Approach securing AI the same way you would secure core technological infrastructure and adapt existing security policies to address AI-specific threats.
- Uphold AI security standards throughout the AI lifecycle: Consider how your business is using AI and implement risk- based AI frameworks to identify, assess, and manage risks associated with these applications. Prioritize security in areas where adversaries seek to exploit weaknesses.
- Educate your workforce in responsible and safe AI usage: Clearly communicate internal policies around acceptable AI use within legal, ethical, and security boundaries to mitigate risks like sensitive data exposure.
Read the State of AI Security 2025 here.