New Delhi, Nov. 28 -- Top consulting firm Deloitte found itself in an awkward situation this week after a major workforce report it prepared for the Canadian government was alleged to contain AI-generated errors. The news outlet The Independent reported that the study, prepared for the government of Newfoundland and Labrador under a partnership valued at $1.6 million, appeared to cite made-up or inaccurate sources spun up by AI tools.

The situation is precarious for Deloitte, as the news comes on the heels of a similar mishap not long ago. Last month, the consulting firm had to refund part of a $440,000 fee to the Australian government after admitting that a report it delivered contained AI-generated content and was riddled with errors.

These incidents have yet again shone a bright spotlight on the need for responsible AI usage guidelines at companies. An AI usage policy, also called an AI acceptable use policy, is a set of rules that lays out which AI tools employees may use and how. Such a policy aims to ensure privacy, governance, adherence to intellectual property rights, and confidentiality.

Companies have now started to face pressure from clients and stakeholders over how AI is used in their work. As AI becomes more widely integrated into business processes, companies will be compelled to set clearer rules and controls, experts said.

"There are four key areas where responsible AI use policy will become increasingly important. These include data privacy and confidentiality, liability arising from AI reliance and accuracy issues, disclosure obligations and compliance with emerging AI regulations, and intellectual property risks, both infringement and loss of protection," said Vaibhav Parikh, Partner, Nishith Desai Associates.

The recently notified Digital Personal Data Protection Rules, 2025 (DPDP Rules, 2025) are expected to further encourage companies to develop such policies. These rules impose duties on data fiduciaries regarding lawful processing, safeguarding data, and reporting data breaches.

Market regulator Securities and Exchange Board of India (SEBI) has released a consultation paper on guidelines for the responsible usage of AI/ML in Indian securities markets. Similarly, last year the Reserve Bank of India (RBI) said it would set up a committee to recommend a Framework for Responsible and Ethical Enablement of AI (FREE-AI) in the finance sector, which will examine and help address AI-associated risks such as algorithmic bias, explainability, and data privacy.

AI usage policy as part of employment contracts

In some cases, AI usage guidelines are making their way into employment contracts. "We have helped several clients draft AI-usage clauses, and in many cases, the conversation starts even before the drafting stage. Depending on the nature of the company, the industry, and the kind of work employees perform, we often recommend that such clauses be included in employment agreements," said Yajas Setlur, Partner at JSA Advocates & Solicitors.

For example, companies that develop or deploy customised or bespoke software, or that build tools for other businesses, must ensure that employees are not using restricted AI tools, which can create significant liability not only for the employer but also downstream for the client using the final product. These are exactly the kinds of risks companies are trying to address, Setlur added.
"While clients have not specifically asked us to amend their employment contracts in line with AI usage, they have asked what they should be aware of and what positions they should take regarding employees' use of AI. One of the key points we advise is to clearly set out, in the contract or in a responsible AI use policy, how AI may be used," said Deepika Thakkar, Partner at Nishith Desai Associates. The firm typically recommends including a representation from the employee that any use of AI does not violate third-party copyright or IPRs to protect employers from being exposed to litigation arising from AI-generated or AI-assisted work. "We also suggest taking an indemnity. While most employment contracts already include a standard indemnity for losses caused by an employee's actions, we advise expressly covering actions that involve reliance on AI-generated material," she said. Confusion persists Even as companies are beginning to take cognisance of the situation at hand, many employers complain about confusion around AI usage rules within an organisation. Only three-in-ten India-based professionals feel confident in their AI skills, and 61% professionals agree that their employers don't provide clear ways to use AI for their day-to-day tasks, a recent study by learning platform Udemy found. Compounding this challenge, most employees in India believe it is their personal responsibility to acquire AI skills, highlighting a widening enablement gap. This ambiguity leads to a shadow AI culture, where nearly half of employees admit to using AI in ways that might violate unstated or vague rules, increasing compliance and security risks.

"Despite the rapid rise of AI, many employees still do not fully understand how they are expected to use AI within their organisation. Several studies have reflected this uncertainty. Employees often struggle to understand what is allowed, what must be avoided, and where the boundaries lie. Clearer policies, training, and internal communication will therefore become increasingly important," said Neeti Sharma, CEO of staffing firm TeamLease Digital. For India Inc., the next frontier is learning to use AI without tripping over its own blind spots.

Published by HT Digital Content Services with permission from TechCircle.