Cloud-based AI applications and the danger of shadow AI

The rapid spread of cloud-based AI is revolutionizing companies, but it harbors an underestimated danger: "shadow AI". The uncontrolled use of cloud-based AI tools increases the complexity of cybersecurity and poses new challenges for the protection of sensitive data and processes.

Shadow AI raises difficult questions about governance and resilience. (Image: AdobeStock / Stormshield)

Grok, the AI chatbot developed by Elon Musk's xAI start-up, has been available on Microsoft's Azure cloud platform since the end of May. The announcement, made at the Build 2025 conference, marks a strategic turning point: Microsoft is opening up its ecosystem to a wider range of AI players, including some that are challenging its traditional partners such as OpenAI.

Open cloud environments lead to more complexity

"In general, the rapid development of AI in the cloud requires a rethink of access policies and improved monitoring of usage to ensure greater security for sensitive data flows," says Sébastien Viou, Director of Cybersecurity & Product Management at Stormshield: "The integration of AI models developed by xAI, such as Grok, on the Microsoft Azure platform represents a further step in the opening up of cloud environments to alternative large language model providers. While this open ecosystem dynamic appears to bring agility to organizations, it also introduces a new level of complexity for the teams responsible for cybersecurity."

Transparency of use is a key concern here. With generative AI now accessible via standardized Azure interfaces, the number of deployments can multiply without meaningful controls or countermeasures, especially in complex application landscapes that span many subsystems. The result is a blurring of the line between legitimate experimentation and "shadow AI". Without precise monitoring mechanisms, it is difficult to know who is using these models, with what data and for what purposes.
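What such monitoring could look like in practice is sketched below: a thin audit wrapper around any LLM call that records who invoked which model and for what stated purpose, logging only a hash of the prompt rather than its content. This is a minimal sketch, not part of any Azure API; the names (audited_completion, the stand-in backend) are illustrative, and a real deployment would route traffic through the organization's approved gateway.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai-audit")

def audited_completion(call_model, *, user_id: str, model: str,
                       purpose: str, prompt: str) -> str:
    """Invoke an LLM endpoint while recording who used which model, and why.

    `call_model` is any callable taking (model, prompt) -> str. Only a hash
    of the prompt is logged, so sensitive content never lands in the trail.
    """
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "model": model,
        "purpose": purpose,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    audit_log.info(json.dumps(record))
    return call_model(model, prompt)

if __name__ == "__main__":
    # Stand-in backend for the example; not a real model endpoint.
    fake_backend = lambda model, prompt: f"[{model}] echo: {prompt}"
    print(audited_completion(fake_backend, user_id="alice",
                             model="grok-3", purpose="code-review",
                             prompt="Summarize this diff."))
```

With every call forced through one such chokepoint, the "who, with what data, for what purpose" questions above become answerable from a single audit trail.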

New requirements for risk management

This inevitably raises legal and technical risk-management questions: governance of access, traceability of usage and protection of sensitive data. The fact that Grok now sits alongside other AI tools on the same platform requires a granular reassessment of the impact on data processing and operational resilience. A least-privilege philosophy must prevail, with tighter controls on identities and usage sessions. Otherwise, the risk of sensitive information being compromised or leaked simply through configuration errors becomes non-trivial.
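As a minimal illustration of what least privilege could mean for AI access, the sketch below denies by default and only lets a strongly authenticated session reach models its role explicitly allows. The role map and model names are assumptions for the example, not any real platform's RBAC configuration.

```python
from dataclasses import dataclass

# Hypothetical role -> permitted-model mapping; a real deployment would pull
# this from the identity provider or the cloud platform's access policies.
ROLE_MODEL_ALLOWLIST = {
    "data-analyst": {"gpt-4o"},
    "ml-engineer": {"gpt-4o", "grok-3"},
}

@dataclass
class Session:
    user_id: str
    role: str
    mfa_verified: bool

def authorize_model_call(session: Session, model: str) -> None:
    """Deny by default: a session may only reach models its role allows."""
    if not session.mfa_verified:
        raise PermissionError(f"{session.user_id}: session not strongly authenticated")
    allowed = ROLE_MODEL_ALLOWLIST.get(session.role, set())
    if model not in allowed:
        raise PermissionError(f"{session.user_id} ({session.role}) may not use {model}")

# Passes: the role explicitly grants access to this model.
authorize_model_call(Session("bob", "ml-engineer", mfa_verified=True), "grok-3")
# Would raise PermissionError: "data-analyst" has no grant for "grok-3".
# authorize_model_call(Session("eve", "data-analyst", mfa_verified=True), "grok-3")
```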

Finally, beyond the access and visibility issues, controlling sensitive data flows is a critical blind spot. Seemingly innocuous interactions between employees and AI can hide data exfiltration or processing operations that violate security policies. In an environment where traditional data loss prevention solutions were already complex to apply, the challenge takes on a new dimension. It calls for holistic cybersecurity measures that go beyond mere reactivity and are integrated into the corporate strategy from the ground up, including comprehensive mechanisms to enforce zero-trust principles: every access request, whether from a human or an AI, must be authenticated, authorized and continuously validated, regardless of location or device.
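The zero-trust idea can be reduced to a small gate that re-checks identity, permission and context on every single request. The sketch below is illustrative only, under the assumption of three pluggable checks; in practice these would be real token validation, a policy engine and a device-posture service.

```python
from typing import Callable

def zero_trust_gate(request: dict,
                    authenticate: Callable[[dict], bool],
                    authorize: Callable[[dict], bool],
                    validate_context: Callable[[dict], bool]) -> bool:
    """Never trust by default: identity, permission and context are all
    re-checked on every call, whether the caller is a human or an AI agent."""
    return (authenticate(request)           # who is calling? (token, mTLS, ...)
            and authorize(request)          # may they do this specific thing?
            and validate_context(request))  # does device/location still look sane?

request = {"subject": "ai-agent-7", "action": "read", "resource": "hr-records",
           "device_compliant": False}

ok = zero_trust_gate(
    request,
    authenticate=lambda r: r["subject"].startswith("ai-agent"),  # stand-in check
    authorize=lambda r: (r["subject"], r["action"], r["resource"]) in
                        {("ai-agent-7", "read", "public-docs")},
    validate_context=lambda r: r["device_compliant"],
)
print("allowed" if ok else "denied")  # denied: unauthorized resource, bad posture
```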

Digital sovereignty as the key

Controlling the flow of data in AI applications also requires innovative solutions that go beyond traditional perimeter defense. An effective security strategy must be able to analyze AI-generated content and AI-driven interactions in real time to prevent misuse or leakage of sensitive information. This calls for advanced network-level inspection and endpoint protection capable of detecting and blocking unusual behavior or suspicious patterns emanating from AI models.
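A very simplified picture of such inline inspection: scan every prompt and model response against sensitive-data patterns before it crosses the trust boundary. The patterns below are illustrative stand-ins for a real DLP engine's classifiers, not a complete detection set.

```python
import re

# Illustrative DLP-style patterns; real products ship far richer classifiers,
# but a simple regex pass is enough to show where the inspection point sits.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def inspect_ai_traffic(text: str) -> list[str]:
    """Scan a prompt or model response before it leaves the trust boundary;
    return the names of any sensitive patterns found."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

prompt = "Please summarize the invoice for card 4111 1111 1111 1111"
hits = inspect_ai_traffic(prompt)
if hits:
    print(f"blocked: prompt matches {hits}")  # blocked: ['credit_card']
```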

Given the rapid pace of development and the potential risks, it is essential to rely on trustworthy and transparent cybersecurity solutions, especially when it comes to protecting critical data and infrastructure. Only by building a robust security foundation that prioritizes digital sovereignty and compliance with European standards can companies reap the full benefits of AI safely and responsibly. Such a comprehensive strategy is key to unleashing the innovative power of AI without losing control: without a rigorous approach to governance and oversight, AI, whether generative or not, is likely to evolve in organizations faster than the means to control it.

Source: Stormshield
