Safe AI does not happen by itself - it takes leadership, structure and communication. Smaller companies and public institutions in particular have a lot to gain if they act boldly but thoughtfully. Alexandra Schillo on the responsible use of AI, the importance of data protection and the underestimated risk factor of shadow AI.
Interview with Alexandra Schillo:
Alexandra Schillo
Managing Director | Zyntak
Ms. Schillo, why should companies think strategically about data protection today - especially with regard to artificial intelligence?
Data is the backbone of almost every business activity. In our data-driven society, it is no longer just about legal obligations, but also about protecting know-how, stability and competitiveness. Attacks on data can paralyze companies just as effectively as physical damage - from small workshops to large corporations. AI amplifies this effect because it creates new opportunities to work with data - but also new risks. This is why data protection is not an obstacle to innovation, but a prerequisite for it.
How can companies deal with AI in a safe and practical way?
By seeing AI for what it is: a tool. Not a miracle cure, but not a threat either. The key is to analyze specific business processes, define goals and then check whether and how AI can be used sensibly. Every company has different professional, technical and financial requirements - and a solution must be tailored to them. We often see companies start with a specific tool already in mind. Our approach is the reverse: first the process, then the technology.
What helps companies choose the right technology?
The most important question is always: What problem should be solved? This is followed by technical and organizational considerations. For example: Is it about analyzing confidential data or public text material? Is a high quality of results required, or rather traceability for audits? Are there specifications for data processing, or does the system need to be integrated into an existing infrastructure? Such questions are crucial when choosing between local machine learning, cloud-based generative AI or hybrid models. Without this analysis, any tool deployment is flying blind.
How do you counter the widespread concern that AI could open up security gaps?
The concern is justified - but the risks are manageable. What matters is assessing them realistically and taking targeted precautions. Techniques such as access control, input restrictions or human approval for sensitive steps help to prevent misuse. There is no one-size-fits-all solution. What is practicable for one company may be too expensive or too rigid for another. That's why we work with individually tailored measures based on risk potential and security requirements.
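The three safeguards named here - access control, input restrictions and human approval for sensitive steps - can be combined into a single gate in front of any AI-triggered action. The following is a minimal, illustrative Python sketch; the role names, action names and the approve() callback are assumptions for the example, not part of any actual Zyntak tooling:

```python
# Illustrative sketch: three safeguards before an AI-triggered action runs.
# Roles, actions and the approve() callback are hypothetical examples.

SENSITIVE_ACTIONS = {"export_customer_data", "send_external_email"}
ALLOWED_ROLES = {
    "run_model": {"analyst", "admin"},
    "export_customer_data": {"admin"},
}

def is_allowed(role: str, action: str) -> bool:
    """Access control: only listed roles may trigger an action."""
    return role in ALLOWED_ROLES.get(action, set())

def sanitize_input(text: str, banned_terms=("password", "iban")) -> str:
    """Input restriction: block prompts containing confidential markers."""
    lowered = text.lower()
    for term in banned_terms:
        if term in lowered:
            raise ValueError(f"input blocked: contains '{term}'")
    return text

def execute(role: str, action: str, payload: str,
            approve=lambda action: False) -> str:
    """Run an action only if access, input and - for sensitive
    steps - an explicit human approval all allow it."""
    if not is_allowed(role, action):
        return "denied: insufficient role"
    payload = sanitize_input(payload)
    if action in SENSITIVE_ACTIONS and not approve(action):
        return "held: awaiting human approval"
    return f"executed: {action}"

print(execute("admin", "export_customer_data", "quarterly report"))
```

The default approve() callback rejects everything, so the sensitive export is held until a person signs off - mirroring the "human approval for sensitive steps" idea in the interview.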
In your experience, what happens when companies don't have a clear AI strategy?
This almost inevitably leads to so-called shadow AI. Employees turn to freely available tools - often under private licenses, without approval, but with the best of intentions: they want to make their work more efficient. Yet this is precisely where considerable dangers lurk: confidential data ends up in insecure systems, contractual obligations are breached and regulatory requirements are violated. When that happens, the damage can be enormous - and completely beyond the company's control.
Many companies are just at the beginning of their journey with AI. What they lack is not the technology, but guidance and confidence in their own decisions. Our aim is not to overwhelm, but to empower.
Alexandra Schillo
That sounds like a worst-case scenario. How can companies avoid shadow AI?
Through clarity. Companies do not have to introduce their own AI applications immediately, but they must communicate whether and how AI may be used in the company. A ban on AI is also a decision - but only if everyone is aware of it and it is actively enforced. Those who fail to act leave the field to chance. In addition to clear guidelines, training and understanding are required. Only if employees know and understand the framework can they act safely and responsibly.
What role does the company management play in this?
A central role! Decisions on the use of AI, risk appetite and company-wide rules cannot be delegated. Company management is responsible for setting clear framework conditions - not only from a regulatory perspective, but above all to protect its own business model. The use of AI is not an IT task, but a strategic management issue. Technology, organization and communication can only work together effectively if the management actively steers the process.
Many companies feel overwhelmed by new regulations such as the AI Act or DORA. Is this concern justified?
Partly. The requirements are demanding, but they are not about control for control's sake. They require companies to better understand their processes, document them and safeguard them against risks. The perceived additional effort often arises because each requirement is treated in isolation - with separate measures and documentation. Our approach is an integrated system: we map legal, normative and contractual requirements together. The result is not bureaucratic overhead, but a resilient process framework.
In your opinion, what impulse is particularly important for companies that are faced with the decision of whether and how to use AI?
Not every company has to use AI immediately. But every company must make a decision - and back it up strategically. It's not about chasing a trend, but about developing well-founded, realistic and future-proof solutions. This only works if technology, organization and communication are considered together. Those who have the courage to make clear decisions and actively shape processes will not only be more secure - but also more innovative.
About Alexandra Schillo
Alexandra Schillo is a lawyer and data protection expert. Together with Sascha Alder, she founded "Zyntak GmbH" in 2024. They support companies in assessing which of their data is particularly valuable - and develop practical protection concepts. Their goal: to help companies master the balancing act between protection and costs - with legally compliant, practical advice on data protection and information security.