A majority of security leaders are struggling to defend AI systems with tools and skills that are not fit for the challenge, according to the AI and Adversarial Testing Benchmark Report 2026 from Pentera.
The report, based on a survey of 300 US CISOs and senior security leaders, examines how organizations are securing AI infrastructure and highlights critical gaps tied to skills shortages and inadequate tooling.
AI systems are rarely deployed in isolation.
They are layered across and integrated into existing corporate technology, from cloud platforms and identity systems to applications and data pipelines.
With ownership spread across disparate teams, effective centralized oversight has collapsed.
As a result, 67 percent of CISOs reported limited visibility into how AI is being used across their organization.
None of the respondents indicated they have full visibility; rather, they acknowledged being aware of, or tacitly accepting, some form of unmanaged or unsanctioned AI usage.
Without a clear view of where AI systems operate or what resources they can access, security teams struggle to assess risk effectively.
Basic questions, such as which identities AI systems rely on, what data they can reach, or how they behave when controls fail, often remain unanswered.