Enterprises are all in on AI for security but budgets aren’t keeping pace
Recent EY research shows security practitioners see both sides of the AI coin – as foundational to their security strategies but also as a dangerous threat.
Enterprises facing a challenging cybersecurity threat environment recognise that AI cuts both ways: it is an important element in their defences and a dangerous weapon their adversaries are all too eager to wield. Global professional services company EY surveyed 500 security decision-makers at companies with annual revenues of at least $500m on the role AI will play in their security strategies now and in the future. 96% of the security leaders surveyed see AI as a core element of their cybersecurity strategy and are already deploying it. However, the same proportion perceives AI-driven attacks as a serious threat to their organisation.
Unfortunately, most of those already using AI in their security arsenals – 85% – believe their budgets are underfunded relative to the severity of the threat AI-powered attacks pose. Just 20% said their cybersecurity governance framework is sufficient and well integrated into organisational culture.
That said, it is still relatively early days for embedding AI capabilities, and many anticipate their organisations will make the necessary investments. The share of organisations expecting to dedicate 25% of their cybersecurity budgets to AI solutions is projected to rise from 9% today to 48% within the next two years. Two-thirds of those currently using AI in their cybersecurity efforts project they will spend at least $5m on it in two years, and one-third expect to allocate $10m.
46% reported a return of under $1m from AI-driven solutions so far, while 12% said they saw no return or are not quantifying cost savings. Areas where organisations expect AI to play a major role include Advanced Persistent Threat (APT) detection, identity and access management (IAM), third-party risk management, real-time fraud detection, data privacy and compliance, and deepfake impersonation defence.
Though embedding AI into their security defences may not produce significant cost reductions, security leaders predict the technology will improve key metrics such as mean time to recovery (MTTR) and mean time to detect (MTTD), and significantly reduce false positives.