AI Data Leaks Impact 68% of Organizations, Yet Only 23% Have Comprehensive Security Policies, Reveals Metomic’s 2025 State of Data Security Report
In a startling revelation, Metomic, a leading provider of next-generation data security and data loss prevention (DLP) solutions for AI and SaaS work environments, has released its annual “2025 State of Data Security Report: Top Priorities, Challenges, and Concerns for Today’s CISOs.” Conducted in collaboration with Harris Interactive, the report highlights alarming vulnerabilities in AI-driven workplace tools despite widespread confidence among security leaders. The findings underscore the urgent need for organizations to adopt AI-specific security protocols as these technologies become deeply embedded in daily workflows.
Alarming Gaps in AI Security
The survey, which polled over 400 Chief Information Security Officers (CISOs) and security leaders across the U.S. and UK, reveals that 68% of organizations have experienced data leakage incidents directly tied to employees sharing sensitive information with AI tools. Shockingly, only 23% of respondents reported having implemented comprehensive AI security policies to address this growing threat.
While 90% of security leaders expressed confidence in their organization’s security measures, and 91% believed their employee training initiatives were effective, the reality paints a different picture. Over half of the respondents admitted to regularly encountering malware attacks, phishing schemes, and data breaches—many of which were linked to improper AI implementation and usage.
“The proliferation of AI across workplace tools has dramatically expanded the attack surface for malicious actors,” said Ben van Enckevort, co-founder and CTO of Metomic. “Our research shows that employees using AI applications without proper guardrails are unwittingly exposing sensitive company data at an alarming rate. The gap between security leaders’ confidence and the actual threat landscape represents one of the most significant blind spots in modern cybersecurity.”
Rising Threats in the Age of AI
One of the most concerning trends highlighted in the report is the rise of AI-enabled ransomware attacks in the U.S., which have overtaken phishing schemes and customer data breaches as a top security concern. In the UK, risks associated with third-party suppliers have surged by more than ten percentage points, largely driven by the integration of third-party AI solutions.
These findings align with a 2024 report from the Information Systems Security Association (ISSA) and Enterprise Strategy Group, which revealed that 74% of CISOs believe cybersecurity complexity and workloads have increased over the past two years—a trend exacerbated by rapid AI adoption across industries.
Building a Security-Conscious Culture
When asked about the biggest obstacles to their security programs’ success in 2025, 80% of respondents cited fostering a strong security culture within their organization as their top challenge. This underscores the critical need for leadership commitment, cultural change, and human behavior adaptation to address the unique risks posed by AI integration.
“Our report puts a spotlight on a hard truth that very few security professionals are addressing: Cybersecurity software solutions simply cannot single-handedly protect an organization from the ongoing influx of data security threats, particularly those introduced by AI systems,” van Enckevort continued. “In today’s threat landscape, the most effective security teams are led by CISOs who focus on building security-conscious organizations from the ground up.”
Shifting Priorities and Emerging Strategies
The report also sheds light on how security leaders plan to allocate their time and resources in 2025. Notably, 44% of respondents prioritized security infrastructure oversight and implementation, much of which now focuses on securing AI systems and preventing data leakage through these channels. This marks a significant shift from last year, when security operations took precedence; security operations has now fallen to third place behind security infrastructure and security awareness training.
To combat the rising tide of AI-related threats, van Enckevort emphasized the importance of embedding security awareness into daily workflows. “The most effective cybersecurity strategies are not centered on security tools alone—they require leadership commitment, cultural change, and human behavior adaptation specifically addressing AI risks. It’s about taking security awareness to a whole new level where it is continuous, contextual, and integrated into everyday activities.”
A Call to Action for Organizations
As AI adoption continues to accelerate, organizations must recognize the unique risks posed by these technologies and take proactive steps to mitigate them. The findings from Metomic’s report highlight the urgent need for:
- Comprehensive AI Security Policies: Only 23% of organizations currently have robust policies in place, leaving the majority vulnerable to data leaks and breaches.
- Employee Training and Awareness: Despite high confidence levels, regular breaches indicate a disconnect between perceived and actual preparedness.
- Proactive Risk Management: Addressing AI-specific risks requires a fundamental mindset shift that begins within the security team and extends throughout the organization.
“To truly protect a business’ most critical data in the age of widespread AI adoption, there must be a fundamental mindset shift,” van Enckevort concluded. “This concept is foundational to Metomic’s value: enabling better decision-making processes, cultural buy-in, and a shift toward more proactive security management in an AI-driven workplace.”