ChatGPT & Co.: How can companies avoid data leaks?

AI tools like ChatGPT, Bard, and Copilot are growing in popularity, but they also put data security at risk. How can companies prevent confidential information from leaking and avoid data breaches?

AI tools like ChatGPT have already established themselves in many companies. If used carelessly, however, they can also become a source of data leaks. (Image: Unsplash.com)

Generative AI is already a great help with numerous tasks in everyday work. It answers questions, writes marketing copy, translates emails and documents, and even optimizes source code. So it is no wonder that employees are eager to use these tools to make their work easier and more productive. However, this creates risks for data security in the company: confidential or personal data can easily end up in ChatGPT, Bard, or Copilot and thus possibly even in the responses served to other users. After all, providers use not only data available on the web but also user input to train their AI models and improve their responses.

Firewall against data leaks: not an ideal solution

If companies do not want to lose control of their data, they have to take action. The easiest step is to train employees in the security-conscious use of generative AI, but mistakes still happen: in the hustle and bustle of everyday work, attention lapses and employees upload sensitive data to the services anyway. That is why some companies block the URLs of the various AI tools at the firewall, but this is not an ideal solution either. For one thing, the blocks do not provide sufficient protection, because employees can easily bypass them by accessing the services from outside the corporate network. For another, companies hinder their workforce from working productively and potentially cause frustration.
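To illustrate why such blocks fall short, here is a minimal sketch of a URL block list as a proxy or firewall rule might apply it. The hostnames and the check itself are illustrative assumptions, not taken from any particular product.

```python
# Minimal sketch of a URL block list for generative AI services.
# The hostnames below are illustrative only.
from urllib.parse import urlparse

BLOCKED_AI_HOSTS = {
    "chat.openai.com",
    "bard.google.com",
    "copilot.microsoft.com",
}

def is_blocked(url: str) -> bool:
    """Return True if the request targets a blocked AI service."""
    host = (urlparse(url).hostname or "").lower()
    return host in BLOCKED_AI_HOSTS

# The weakness described above: the check only sees traffic that passes through
# the corporate network. Requests from private devices or home networks never
# reach it, and new or unknown AI services slip through entirely.
print(is_blocked("https://chat.openai.com/"))      # True  - blocked in the office
print(is_blocked("https://some-new-ai.example/"))  # False - unknown host passes
```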

Zero trust approach as an alternative

To regulate access to AI tools and protect data, companies would be better off adopting a zero-trust approach. Here, security solutions such as Secure Web Gateway (SWG) and Cloud Access Security Broker (CASB) ensure that only approved services are used, and only by authorized employees - regardless of where they are located and what device they are using. A central policy set reduces management overhead and makes it easier to prevent security breaches across all AI tools, communication channels, and devices.
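What such a central policy check could look like is sketched below: one rule set decides whether a given user on a given device may reach a given AI service, regardless of network location. All service names, groups, and attributes are invented for illustration and do not reflect any specific SWG or CASB product.

```python
from dataclasses import dataclass

# One central policy set: which AI services are approved, and which user
# groups may use them. All names are invented for this illustration.
APPROVED_SERVICES = {
    "chatgpt-enterprise": {"marketing", "engineering"},
    "internal-llm": {"engineering"},
}

@dataclass
class AccessRequest:
    user_group: str       # e.g. "marketing"
    device_managed: bool  # is the request coming from a managed device?
    service: str          # AI service the employee wants to reach

def is_allowed(req: AccessRequest) -> bool:
    """Zero-trust check: identity and device posture decide, not the network."""
    allowed_groups = APPROVED_SERVICES.get(req.service)
    if allowed_groups is None or not req.device_managed:
        return False
    return req.user_group in allowed_groups

print(is_allowed(AccessRequest("marketing", True, "chatgpt-enterprise")))  # True
print(is_allowed(AccessRequest("marketing", True, "internal-llm")))        # False: not authorized
print(is_allowed(AccessRequest("engineering", False, "internal-llm")))     # False: unmanaged device
```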

In addition, consistent control of the data passed to the services is necessary. Companies can only put a stop to a leak if they recognize, for example, that employees are about to share personal data or source code containing intellectual property with the AI tools via chat or file upload. This requires a classification of the data and policies that regulate and monitor how it is handled. Data loss prevention (DLP) solutions combine both and minimize setup effort because they come with ready-made classifications for a wide range of data and a large set of predefined policies.
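As a rough illustration of the classification step, the sketch below scans an outgoing prompt for a few simplified patterns. Real DLP classifiers are far more sophisticated; the patterns and category names here are assumptions made for the example.

```python
import re

# Simplified pattern-based classifiers; real DLP products ship with far more
# elaborate, ready-made classifications.
CLASSIFIERS = {
    "personal_data": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-style identifier
    "credit_card":   re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "source_code":   re.compile(r"(def |class |#include|public static)"),
}

def classify(text: str) -> set[str]:
    """Return the sensitive categories detected in an outgoing prompt or file."""
    return {label for label, pattern in CLASSIFIERS.items() if pattern.search(text)}

prompt = "Summarise customer 123-45-6789 and review def calculate_discount(order):"
print(classify(prompt))  # {'personal_data', 'source_code'}
```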

Focus on data worth protecting

Companies usually do not need to classify their entire data inventory; it is enough to focus on the data that actually needs protection. The individual departments usually know exactly which data this is and can provide examples: customer lists, presentations, contracts, code snippets. DLP solutions analyze these samples and are then able to reliably identify similar data. Depending on how sensitive the data is, they allow graduated responses: for less critical data, it is usually sufficient to notify the employee of a possible data security breach; for more important data, approval by a supervisor may be required; and the upload of particularly sensitive information is blocked outright.
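The graduated responses described above can be pictured as a simple mapping from detected data category to an action. The tiers and category names in the sketch below are illustrative assumptions, not a vendor's actual configuration.

```python
from enum import IntEnum

class Action(IntEnum):
    NOTIFY_USER = 1       # less critical data: warn the employee, let the upload pass
    REQUIRE_APPROVAL = 2  # more important data: hold until a supervisor approves
    BLOCK = 3             # particularly sensitive data: stop the upload outright

# Illustrative sensitivity ratings per data category.
SENSITIVITY = {
    "code_snippet": Action.NOTIFY_USER,
    "contract": Action.REQUIRE_APPROVAL,
    "customer_list": Action.BLOCK,
}

def decide(detected: set[str]) -> Action:
    """Apply the strictest action required by any detected category."""
    if not detected:
        return Action.NOTIFY_USER  # nothing sensitive found; in practice the upload simply passes
    return max(SENSITIVITY[category] for category in detected)

print(decide({"code_snippet"}))                   # Action.NOTIFY_USER
print(decide({"code_snippet", "customer_list"}))  # Action.BLOCK
```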

"ChatGPT and other AI tools solve even complex tasks within seconds. This is extremely convenient in everyday work, but can lead to data breaches if employees accidentally enter confidential or personal data into the services," emphasizes Frank Limberger, Data & Insider Threat Security Specialist at IT security service provider Forcepoint. "With DLP, organizations can reliably protect their data without restricting the use of AI tools, which would inevitably impact employee productivity and motivation. The solutions can be deployed faster than companies often assume, delivering initial results in just a few days or weeks."

Source: Forcepoint

