Data breaches with generative AI have doubled
The new Netskope Cloud & Threat Report 2026 for the DACH region warns: while the number of Generative AI (GenAI) users has tripled, data security is lagging behind. "Legacy risks" such as phishing and insider threats remain critical.

Despite attempts to control AI usage through official company tools, the recently published Cloud and Threat Report 2026 from Netskope Threat Labs finds that the number of violations of data-security policies involving AI applications has more than doubled compared to the previous year. The risk from shadow AI remains high, as employees continue to feed large amounts of sensitive data into unprotected channels. The report is based on anonymized data collected worldwide via the Netskope Security Cloud Platform and analyzes trends in cloud usage, AI adoption, and the global threat landscape.
AI boom outpaces security measures
The report shows explosive growth: while the number of users of SaaS GenAI applications tripled last year, usage intensity rose even faster, with the number of prompts sent increasing six-fold. Particularly critical: despite increased efforts by IT departments, 47% of users continue to use private AI accounts for business purposes. As a result of this shadow AI, companies register an average of 223 incidents per month in which sensitive data is sent to AI apps. For the top quarter of companies surveyed, the figure exceeds 2,100 incidents per month.
The report also shows other relevant results:
- Gemini in the fast lane: Google Gemini recorded massive growth, with its usage rate rising from 46% to 69%. Netskope predicts that Gemini will overtake ChatGPT as the most widely used AI platform in companies in the first half of 2026.
- Dangerous data leaks: source code (42%), regulated data (32%), and intellectual property (16%) are the types of sensitive information most commonly uploaded to AI tools.
- Insider risk from personal apps: 60% of all insider threats involve the use of private cloud instances (such as Google Drive or OneDrive), and the use of GenAI tools massively increases this risk. Particularly worrying: 54% of data breaches involve regulated data (financial, health, or personal data).
Agentic AI and MCP
For 2026, Netskope warns of the increasing complexity of agentic AI: AI systems that autonomously perform complex tasks across various company resources. "Agentic AI creates a whole new attack surface," says Ray Canzanese, Director of Netskope Threat Labs. "When autonomous agents gain access to internal data, misconfigurations or malicious prompts can lead to massive data leakage in milliseconds." The trend toward AI-driven browsers and the Model Context Protocol (MCP) is also classified as a critical security risk for 2026.
Recommendations for action
The report recommends four levels of protection for companies:
- Complete inspection: Inspection of all HTTP/HTTPS traffic, including AI traffic.
- App governance: Blocking of high-risk tools with no business purpose (e.g. ZeroGPT, DeepSeek).
- DLP focus: Use of data loss prevention to protect source code and passwords from leaking into AI models.
- Isolation: Use of Remote Browser Isolation (RBI) for risky or new domains.
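To illustrate the DLP recommendation above, here is a minimal sketch of a pre-filter that scans an outbound prompt for patterns suggesting credentials or source code before it reaches an external AI endpoint. The rule names and regular expressions are purely illustrative assumptions for this example; real DLP products such as the one the report describes use far richer detection (exact-match fingerprints, machine-learning classifiers), not a handful of regexes.

```python
import re

# Illustrative DLP-style rules (hypothetical, not any vendor's ruleset):
# each maps a rule name to a pattern that often indicates sensitive content.
SENSITIVE_PATTERNS = {
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
    "source_code_hint": re.compile(r"(?:\bdef |\bclass |#include\s*<|\bimport )"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of all rules the prompt matches (empty list = allow)."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    # A prompt pasting a credential would be flagged before leaving the network.
    print(scan_prompt("please debug: password = hunter2"))
```

In a real deployment this check would sit inline in the HTTP/HTTPS inspection path (the first recommendation), so that a match can block or redact the request rather than merely log it.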
Source: Netskope


