
New Delhi, Aug. 13 -- Artificial Intelligence (AI), cloud infrastructure, and new attack methods are reshaping cybersecurity. Attackers use automation, botnets, and AI to bypass defenses, with smaller businesses increasingly targeted. Indusface, a cybersecurity SaaS company, focuses on reducing vulnerability response times, preventing false positives, and protecting critical infrastructure.
In a conversation with TechCircle, CEO and Founder Ashish Tandon outlines the industry gap the company addressed, the growth of API threats, AI's role in attacks and defense, and what enterprises must prepare for as AI, quantum computing, and decentralized infrastructure evolve. Edited excerpts:
What gap did you see in cybersecurity when you founded Indusface, and how has it changed over the past decade?
In my previous company, we identified weaknesses in customers' web applications, reported the vulnerabilities, and explained how hackers could exploit them to steal data such as credit card information. That was where our work ended.
CIOs and CISOs in banks told me that fixing vulnerabilities often took 70 to 110 days, leaving them exposed the entire time. They asked why we didn't work on something that could mitigate issues in real time.
That led to starting Indusface. In our category, we are the only company that helps customers mitigate open vulnerabilities almost autonomously and in real time. We do this through a combination of software, AI, and human verification. This reduces the 70-110 day window to near zero.
We also addressed the problem of false positives. In cybersecurity, a false positive could mean blocking legitimate users when trying to stop attackers. That can be catastrophic for a business. Our AI ensures policies only block malicious traffic while allowing legitimate users uninterrupted access.
The core problem we solved is reducing the time to mitigate vulnerabilities and eliminating false positives. Since then, we've added more capabilities such as API security, but the primary achievement over the last five years has been solving this critical issue.
Your company reported blocking over 7.15 billion cyberattacks in India in 2024. What emerging threats are most likely to evade traditional security tools?
This report is based on the attacks we observe and block on our platform before they reach customers such as banks and insurance companies. We do not look at customer data directly, but we track malicious activity, including attack patterns.
The report identifies three major shifts: First, attacks exploiting open vulnerabilities have become one of the top three methods used by hackers. This aligns with the findings of the Verizon Data Breach Report. Hackers use bots to scan internet-facing websites and applications, often targeting large organizations, to detect vulnerabilities. Once identified, they launch coordinated attacks to steal credit card data, passwords, or other sensitive information.
Second, with the growth of cloud and AI, bot and DDoS attacks have become more sophisticated. Traditional signature-based defenses are less effective because attackers can quickly adapt, evade detection, and bypass protections. Cloud resources make large-scale attacks cheaper and easier, and underground marketplaces now sell such attacks at low cost.
Third, the targets have expanded beyond large enterprises to include small and medium-sized businesses. Since COVID-19, digital adoption has increased across all industries, but smaller organizations often lack mature security practices, making them vulnerable.
We saw these trends during Operation Sindhu, when malicious traffic increased by roughly 1,000 times. Attacks came from hostile countries and targeted critical infrastructure such as banking, insurance, and power. These were highly sophisticated, combining AI and cloud resources. AI allowed attackers to modify their methods in real time, adapting immediately after our defenses blocked an attempt. Overall, attacks are becoming more complex, adaptive, and accessible to adversaries of all sizes.
Your report shows a 94% rise in API attacks, which suggests a major blind spot for enterprises. Why is API security still under-addressed, and what needs to change at the boardroom level?
We launched our API security model about two and a half years ago, but initial adoption from existing customers was low. This year, usage has surged. APIs have become essential for business-to-business communication, third-party integrations, and payment services. Businesses are increasingly interconnected, and APIs are the primary way to integrate with partners and customers.
However, many attacks now originate through these third-party integrations. For example, a bank might integrate a rewards program from a SaaS provider into its banking application. If that SaaS provider is insecure, it can become a gateway for attackers into the bank's systems. Several large financial institutions have been compromised because their APIs were not protected. Large organizations are now actively working to secure their APIs.
A further challenge is that APIs often bypass the oversight of a company's CIO or CISO. Departments such as HR, marketing, or sales may integrate cloud-based services without the central IT team's knowledge. This makes discovering and cataloguing APIs a significant issue. For example, a large insurance company recently told us that their biggest challenge is simply identifying all their APIs. Our product addresses this by discovering and cataloguing them.
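The discovery problem Tandon describes, finding APIs the central IT team never knew existed, is often approached by mining web-server access logs for endpoint patterns. The sketch below is a minimal illustration of that idea, not Indusface's actual method; the log lines and the `/api/` path convention are hypothetical examples.

```python
import re
from collections import Counter

# Hypothetical access-log lines; a real deployment would read these from
# an nginx or Apache combined log rather than an in-memory list.
LOG_LINES = [
    '10.0.0.1 - - [13/Aug/2025] "GET /api/v1/users/42 HTTP/1.1" 200',
    '10.0.0.2 - - [13/Aug/2025] "POST /api/v1/payments HTTP/1.1" 201',
    '10.0.0.3 - - [13/Aug/2025] "GET /api/v1/users/77 HTTP/1.1" 200',
    '10.0.0.4 - - [13/Aug/2025] "GET /index.html HTTP/1.1" 200',
]

REQUEST_RE = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+) HTTP')

def catalogue_apis(lines):
    """Group API requests into endpoint templates by collapsing numeric
    path segments into a placeholder, and count hits per template."""
    catalogue = Counter()
    for line in lines:
        m = REQUEST_RE.search(line)
        if not m or not m.group("path").startswith("/api/"):
            continue  # skip non-API traffic such as static pages
        template = re.sub(r"/\d+", "/{id}", m.group("path"))
        catalogue[(m.group("method"), template)] += 1
    return catalogue

for (method, path), hits in sorted(catalogue_apis(LOG_LINES).items()):
    print(f"{method:5} {path} ({hits} requests)")
```

Even this naive tally surfaces endpoints nobody catalogued by hand, which is why log-driven discovery is usually the first step before any API can be protected.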
Hackers are aware of these gaps and increasingly target APIs as they are often under the radar. Recent large-scale API attacks have gained media attention, driving more organizations to prioritize API security. We are now seeing strong demand from customers to secure their APIs, as they recognize this as a critical weak link.
How do you see AI and cybersecurity intersecting, and are attackers adopting AI and LLMs faster than defenders?
Cybersecurity is a constant contest between defenders and attackers. Defenders try to keep up while attackers look for new ways in. Recognizing this, Indusface began using AI about a year and a half ago after identifying it as an important emerging area.
In the past nine months, we have seen clear benefits. AI improves efficiency across our operations. Our R&D team uses it to speed up development, our support team uses it for faster customer responses, and our product team uses it to analyze data and make decisions more quickly. We have also automated our vulnerability mitigation program with AI, improving verification and accuracy.
In cybersecurity, staying ahead of zero-day vulnerabilities is critical. For example, a major SharePoint zero-day vulnerability was discovered last week and quickly exploited by attackers. Detecting zero days requires gathering intelligence from many sources. Our signature labs focus on identifying new vulnerabilities and attack vectors, but this used to be a manual process involving multiple information sources. It took time to compile and publish our monthly zero-day report. Now, with AI integrated into the process, we can detect zero-day information in seconds, prioritize threats, and act faster.
Adversaries are using AI and cloud technologies to make their attacks more sophisticated. As a cybersecurity vendor, we use AI to match or exceed their capabilities, protect our customers, and improve operational efficiency.
AI is valuable across our business, but we do not fully automate processes. We rely on AI to deliver information quickly, but humans still verify outcomes to ensure accuracy.
Do current laws adequately address today's cyber threats, especially AI-driven ones, and how does this apply to markets like India?
AI and large language models are still evolving. Every category is exploring how to understand and implement them. Many people use them for tasks like analysis, writing clearer emails, or creating blog content, but the full range of uses and outcomes is still developing.
Introducing regulations now could be counterproductive because the technology is changing quickly. It's not clear how many people are using it efficiently or effectively in their products and capabilities. At this stage, responsibility should lie with enterprises and users to ensure they use it responsibly and understand the risks and benefits.
Questions about regulation, such as whether certain AI features should be restricted, are premature. Even in the US, where there is significant discussion about AI, regulators are not yet imposing strict rules. Issues like where AI-generated code is stored, how it's used, and whether it's altered are valid concerns, but organizations using AI are already aware of them. For now, allowing the technology to evolve before adding regulation is likely the better approach.
What are the main motivations and ecosystems driving the recent surge in bot attacks: financial, geopolitical, or something else?
A new type of bot has emerged. Geopolitically and financially motivated bots have always been present, and they've become more sophisticated, using AI, cloud infrastructure, and other technologies to operate.
When we launched a recent feature, we noticed another pattern. Traditionally, applications allow Google to scan them daily for search engine optimization. Now, crawlers from LLM providers such as OpenAI and Google Gemini are also scanning sites.
Many customers began seeing sudden increases in traffic and usage, which we traced back to LLMs and AI-based crawlers. We built a feature that lists all AI and LLM crawlers visiting a customer's site and allows them to review whether they want those crawlers. Around 80-90% of the crawlers were unwanted, unknown to the customer, and in some cases could have been adversarial.
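A crawler inventory like the one described can be approximated by matching access-log user-agent strings against the tokens that AI vendors publicly document. The sketch below assumes a small, non-exhaustive token list and hypothetical log entries; it is an illustration of the technique, not the company's product.

```python
from collections import Counter

# Publicly documented AI/LLM crawler user-agent tokens (a small sample;
# each vendor publishes its own authoritative list).
AI_CRAWLER_TOKENS = ["GPTBot", "ClaudeBot", "CCBot", "PerplexityBot", "Bytespider"]

# Hypothetical user-agent strings pulled from a site's access log.
USER_AGENTS = [
    "Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)",
    "Mozilla/5.0 (compatible; CCBot/2.0; +https://commoncrawl.org/faq/)",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/126.0 Safari/537.36",
    "Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)",
]

def tally_ai_crawlers(user_agents):
    """Count visits per known AI crawler token found in user-agent strings."""
    seen = Counter()
    for ua in user_agents:
        for token in AI_CRAWLER_TOKENS:
            if token in ua:
                seen[token] += 1
    return seen

for bot, hits in tally_ai_crawlers(USER_AGENTS).most_common():
    print(f"{bot}: {hits} visits")
```

Well-behaved crawlers can then be turned away with a `robots.txt` rule (e.g. `User-agent: GPTBot` / `Disallow: /`), though adversarial bots that spoof or ignore user agents require enforcement at the WAF layer instead.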
This discovery has been important. While we've already matured our understanding of geopolitical and financial bots, these AI and LLM crawlers represent a newer threat that can also be used to harm customers. The landscape continues to evolve.
How do you see the cybersecurity landscape changing in the near future with the rise of AI, quantum computing, and decentralized digital infrastructure, and what should enterprises start preparing for now that they may not yet be considering?
Enterprises must decide which AI and LLM tools and models to use based on their business needs. My focus is on the cybersecurity aspect. From my experience, when selecting cybersecurity tools or solutions, enterprises should ensure that vendors incorporate AI into their processes and products.
AI is real and will continue to evolve. If a solution is not AI-ready or cannot detect AI-based attacks, it will fall behind and fail to protect the enterprise effectively. Using AI to improve product capabilities is essential. As a buyer, I would expect any service provider or product vendor to integrate emerging technologies into their products so they can detect, protect, and respond quickly, matching the pace and methods attackers use to disrupt enterprise applications.
Published by HT Digital Content Services with permission from TechCircle.