Taming abuse and hate online
India, Nov. 29 -- When the Supreme Court directed the Centre this week to draft guidelines within four weeks for regulating user-generated content, it acknowledged a problem technology companies have long preferred to ignore. Harmful content - hate speech, defamation, the targeting of vulnerable communities - spreads with impunity while platforms hide behind inadequate self-regulation. The apex court observed that there must be accountability for content uploaded on platforms, that current mechanisms have proven ineffective, and that an independent oversight body is needed. The same bench ordered comedians who mocked persons with disabilities to host fundraisers twice monthly, after petitions detailed how India's Got Latent, a purported comedy-talent show, mocked families desperately raising funds for children with spinal muscular atrophy. A key part of the problem in the recent cases was the format: audio and video, where detection proves most challenging. By the time objectionable material is removed - typically 48 to 72 hours after it is first flagged - it has already gone viral. Automated systems trained predominantly on English-language datasets struggle with context, cultural nuance, and linguistic complexity in other languages.
Studies show that content moderation accuracy drops precipitously for non-English content. India's Got Latent crystallises this failure. Crass humour is not illegal, but the show long flirted with hateful notions before its creators cancelled it in February amid a backlash. The cancellation came after podcaster Ranveer Allahbadia made remarks about parental intercourse, triggering multiple police cases against the show. Only then did YouTube intervene, and Samay Raina, the show's host and one of its judges, deleted all its content. The distinction matters: edgy comedy that challenges conventions differs fundamentally from content that normalises harm.
The apex court's intervention drew understandable concerns about free speech. Senior advocates representing platforms warned against overreach, noting that existing frameworks are already under challenge in high courts. The court itself said it seeks regulations meant not to "throttle" anyone but to create a "sieve" that filters out bad content. Here, it is crucial to remind ourselves of India's constitutional architecture, which rejects absolutist interpretations of free speech. Article 19(2) permits reasonable restrictions on speech because unfettered expression can inflict measurable harm. This is where Big Tech platforms, more than fame-seeking internet users such as Allahbadia and Raina, need to be held accountable.
Big Tech relies on self-regulatory bodies for most content-policing decisions, except in cases of outright illegality such as gore or pornography. When self-regulation fails to prevent months of accumulated harm to vulnerable communities, regulatory intervention becomes necessary. The platforms' inability to moderate audio-visual content in non-English languages at scale is not merely a technical limitation; it reflects choices about where to invest resources. The court's directive is an overdue call for accountability for these systemic failures. A well-thought-out policy must follow, ideally after the executive considers the views of all stakeholders and after due deliberation in Parliament.