PTV Network

AI-enabled hate on the rise in India: Report

Jama Masjid, Delhi (File Photo: Wikimedia Commons/Sripathroy)

ISLAMABAD: Discrimination against Muslim and Christian minorities in India is being fuelled by two converging trends: the viral spread of generative AI hate content and the rapid expansion of AI-enabled policing and surveillance, warns a report published on Feb. 11 by the Centre for the Study of Organized Hate (CSOH).


The report comes ahead of the India AI Impact Summit 2026, which will be held from Feb. 16 to 20.


The CSOH report says hate narratives that have circulated for years, often packaged as conspiracies such as “love jihad” and “population jihad” or fuelled by cow-protection and temple-mosque disputes, are now being amplified with synthetic media that looks real.

 

Text-to-image and text-to-video tools can produce photorealistic depictions, caricatures, and “evidence-like” scenes that reinforce stereotypes: Muslim men framed as violent or criminal, and Muslim women dehumanized through sexualized, non-consensual imagery.


Communal targeting

Moments of public tragedy are turned into accelerants for communal targeting, with AI-generated visuals circulated alongside inflammatory commentary to cast minorities as villains in the immediate aftermath of attacks or accidents.


The report highlights that political messaging has also begun using generative AI directly.

 

Official social media accounts linked to the ruling BJP have posted content that depicts Muslims as “ghuspaithye” (infiltrators) and portrays opposition leaders as “secretly” aligned with them.

 

A pattern of dehumanizing metaphors is emerging in which minorities are rendered as pests or threats, paired with meme-like humor or dramatic background music that normalizes extreme hostility, especially in battleground states such as Assam and Delhi in the run-up to the 2026 assembly elections.


A second cluster of concerns centers on the state’s deployment of AI.


Predictive policing

The report notes remarks by Maharashtra Chief Minister Devendra Fadnavis about an AI tool being developed with IIT Bombay to detect alleged undocumented migrants from language cues, an approach that risks profiling Bengali-speaking Muslims and low-income migrant workers.

 

The spread of predictive policing is also flagged: large criminal databases and facial recognition systems can embed historical bias into “data-driven” enforcement.


Examples of this “data-driven” enforcement referenced in the report include long-running predictive systems used by Delhi Police and expanding surveillance networks in cities such as Hyderabad, Bengaluru, and Lucknow.

 

Without transparent safeguards, independent oversight, and clear legal limits, AI risks hardening discrimination into automated, scalable practice, the report states.