
New Delhi, April 14 -- As Artificial Intelligence (AI) becomes deeply embedded in critical infrastructure, ranging from finance to healthcare, the cybersecurity industry is facing a significant transformation. New risks are emerging, driven by the rapid evolution of AI systems, and organisations are being forced to rethink how they secure digital environments.
In this context, SAP is working to align its AI initiatives with security and governance, focusing on how to build and manage trustworthy AI systems. The company is also exploring the implications of explainability, open-source AI usage, and the role of AI agents in enterprise settings.
In a conversation with TechCircle, Sudhakar Singh, Chief AI Security Officer at SAP, shares insights on emerging security challenges, the need for layered safeguards like AI "kill switches," and how roles in cybersecurity are evolving to meet the demands of the AI era. Edited Excerpts:
With AI models now being integrated into critical infrastructure (finance, healthcare, national security), do you think AI should have a 'kill switch'? Who should have the authority to trigger it?
AI in critical infrastructure requires strong oversight and control. A "kill switch" or mechanism to halt AI operations in case of anomalies can be a necessary safeguard, but its design and implementation must be carefully planned. Rather than relying on a single manual override, AI systems should include layered fail-safes, such as automated shutdowns triggered by predefined risk thresholds. The authority to activate such mechanisms should rest with a combination of regulators, enterprise security teams, and established governance frameworks, reducing the risk of misuse or single points of failure. At SAP, we support a "responsibility by design" approach to ensure AI functions within clear, transparent, and well-managed security policies.
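To make the layered fail-safe idea concrete, here is a minimal, purely illustrative sketch in Python. It is not SAP's implementation; the thresholds, approver count, and metric names are assumptions chosen for the example. It shows an automated halt triggered by predefined risk thresholds alongside a manual override that requires more than one authority, reducing single points of failure.

```python
# Illustrative sketch (not SAP's implementation): a layered fail-safe where an
# automated monitor halts an AI service once predefined risk thresholds are
# crossed, and a manual override requires sign-off from more than one authority.
from dataclasses import dataclass, field


@dataclass
class KillSwitch:
    # Hypothetical thresholds; real values would come from governance policy
    # and regulatory requirements.
    max_anomaly_score: float = 0.9
    max_error_rate: float = 0.05
    required_approvers: int = 2
    halted: bool = False
    approvals: set = field(default_factory=set)

    def check_metrics(self, anomaly_score: float, error_rate: float) -> bool:
        """Automated layer: halt if any monitored metric breaches its threshold."""
        if anomaly_score > self.max_anomaly_score or error_rate > self.max_error_rate:
            self.halted = True
        return self.halted

    def request_manual_halt(self, approver: str) -> bool:
        """Manual layer: no single person can trigger the halt alone."""
        self.approvals.add(approver)
        if len(self.approvals) >= self.required_approvers:
            self.halted = True
        return self.halted


if __name__ == "__main__":
    ks = KillSwitch()
    print(ks.check_metrics(anomaly_score=0.95, error_rate=0.01))  # True: automated shutdown
    ks2 = KillSwitch()
    ks2.request_manual_halt("security_team")
    print(ks2.request_manual_halt("regulator"))  # True: two authorities approved
```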
The concept of 'explainable AI' is gaining traction, but is full transparency always desirable? Are there scenarios where too much transparency can introduce new risks?
AI algorithms are complex, and different users (data scientists, AI engineers, data administrators, end users, etc.) have different expectations of AI explainability. Explainability should therefore be considered in the context of the application and its users. By focusing on human-relevant factors in decision-making and abstracting algorithmic complexity away from end users, a balance can be struck between useful and excessive explainability.
Explainability is important for trust and accountability, especially in regulated industries. However, full transparency does not always improve security. Over-explaining how an AI system makes decisions can expose vulnerabilities, making it easier for adversaries to manipulate outputs or bypass safeguards. For example, in cybersecurity, revealing too much about AI-driven anomaly detection methods could help attackers develop evasion techniques.
Explainability should be balanced with security needs: providing enough insight for compliance and ethical use while protecting the integrity of AI systems. At SAP, AI models are designed to be interpretable and accountable within a framework that emphasises security and responsible disclosure.
As AI agents become more prevalent, what new cybersecurity challenges do they introduce, and how can organisations prepare?
AI agents represent a shift in automation and decision-making, but they also introduce new security risks. These include data leakage, adversarial attacks, unauthorised access, and exposure to untrusted Application Programming Interfaces (APIs). To address these risks, organisations must enforce strict access controls, implement real-time monitoring, and apply strong governance to AI agent interactions.
At SAP, security is built into AI agents through layered defences such as identity authentication, encrypted communication, and anomaly detection, ensuring they operate within defined boundaries. SAP Joule, an AI-powered assistant, is one example. It runs within our company's cloud environment and is designed to protect sensitive enterprise data while supporting secure AI usage. As AI agents continue to evolve, securing them will be essential to enabling safe and scalable adoption.
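As a rough illustration of keeping an agent within defined boundaries, the sketch below gates an agent's tool calls behind an identity check, an allow-list, and a simple rate-based anomaly flag. It is not a description of SAP Joule's actual design; the agent IDs, tool names, and rate limit are hypothetical.

```python
# Illustrative sketch (not SAP Joule's actual design): gating an AI agent's tool
# calls behind an allow-list, an identity check, and a rate-based anomaly flag.
import time
from collections import deque

ALLOWED_TOOLS = {"search_invoices", "summarise_ticket"}  # hypothetical tool names
AUTHORISED_AGENTS = {"agent-001"}                         # hypothetical identities


class AgentGateway:
    def __init__(self, max_calls_per_minute: int = 30):
        self.max_calls_per_minute = max_calls_per_minute
        self.call_times: deque = deque()

    def authorise(self, agent_id: str, tool: str) -> bool:
        """Reject calls from unknown agents or to tools outside the allow-list."""
        if agent_id not in AUTHORISED_AGENTS or tool not in ALLOWED_TOOLS:
            return False
        # Rate-based anomaly check: an unusually chatty agent gets blocked.
        now = time.time()
        self.call_times.append(now)
        while self.call_times and now - self.call_times[0] > 60:
            self.call_times.popleft()
        return len(self.call_times) <= self.max_calls_per_minute


if __name__ == "__main__":
    gw = AgentGateway()
    print(gw.authorise("agent-001", "search_invoices"))   # True: within boundaries
    print(gw.authorise("agent-001", "delete_database"))   # False: tool not allowed
    print(gw.authorise("agent-999", "search_invoices"))   # False: unknown agent
```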
How do you assess and mitigate security risks when using open-source AI models, especially in enterprise applications?
Open-source AI models, while effective, require thorough security assessments before deployment in enterprise settings. The risk profile of these models depends on several factors, including their training data, potential biases, and susceptibility to adversarial inputs. At SAP, we follow a structured evaluation process that includes identifying vulnerabilities, applying additional security measures such as prompt moderation, and enforcing strict data access controls. Hosting open-source models within secure environments also helps reduce risks by limiting exposure and ensuring compliance with enterprise security policies.
We ensure that any open-source models used in business applications go through a detailed review covering licensing, model capabilities, security, and compliance. This process allows us to maintain an approved model registry, thereby limiting open-source usage to vetted and trusted AI models only.
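The sketch below illustrates what an approved-model registry check might look like in code. The structure and entries are assumptions made for the example, not SAP's internal tooling: the point is simply that only models that have cleared every review stage can be loaded by an application.

```python
# Illustrative sketch (assumed structure, not SAP's internal tooling): an approved
# model registry consulted before any open-source model is used in an application.
from dataclasses import dataclass


@dataclass(frozen=True)
class RegistryEntry:
    name: str
    licence: str
    security_reviewed: bool
    compliance_approved: bool


# Hypothetical entries; a real registry would be populated by the review process
# covering licensing, capabilities, security, and compliance described above.
APPROVED_MODELS = {
    "example-llm-7b": RegistryEntry("example-llm-7b", "Apache-2.0", True, True),
}


def load_model(name: str) -> RegistryEntry:
    """Only models that passed every stage of the review may be loaded."""
    entry = APPROVED_MODELS.get(name)
    if entry is None or not (entry.security_reviewed and entry.compliance_approved):
        raise PermissionError(f"Model '{name}' is not in the approved registry")
    return entry


if __name__ == "__main__":
    print(load_model("example-llm-7b"))        # permitted: vetted and approved
    try:
        load_model("unvetted-model")
    except PermissionError as err:
        print(err)                             # blocked: not vetted
```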
What industries or sectors do you think will benefit the most from implementing AI with a "Trust by Design" approach?
Industries that handle sensitive data, such as finance, healthcare, and public sector institutions, benefit the most from a "Trust by Design" approach to AI. These sectors operate under strict regulatory requirements, making trust, transparency, and security essential for AI adoption. AI applications like fraud detection in banking, predictive analytics in healthcare, and automated threat detection in cybersecurity require models that are accurate and minimise risk. Embedding security and compliance into AI from the beginning helps ensure regulatory compliance and builds lasting trust with users and stakeholders.
How do you envision the future of the cybersecurity industry in 2025 and beyond? With more and more AI applications emerging, do you think employment opportunities in this industry will see a surge?
Cybersecurity is entering a phase of rapid growth. AI is both a disruptor and an enabler, introducing new threats while also providing tools for defence. As a result, cybersecurity roles will shift to require skills in AI risk management, adversarial testing, and secure AI deployment. As AI agents and automated workflows become more advanced, the need for professionals who can fine-tune, monitor, and secure these systems will grow. We are focused on developing AI-based security tools and training security teams to manage this changing environment. The overlap between AI and cybersecurity will define the next stage of digital resilience, creating new roles for professionals with the right expertise.
Published by HT Digital Content Services with permission from TechCircle.