Racist, sexist, casteist: Is AI bad news for India?

After communal clashes in Delhi’s Jahangirpuri area last year, police said they used facial recognition technology to identify and arrest dozens of men, the second such instance after a more violent riot in the Indian capital in 2020.

In both cases, most of those charged were Muslim, leading human rights groups and tech experts to criticize India’s use of the AI-based technology to target poor, minority and marginalized groups in Delhi and elsewhere in the country.

As India rolls out AI tools that authorities say will increase efficiency and improve access, tech experts fear the lack of an official policy for the ethical use of AI will hurt people at the bottom, entrenching age-old bias, criminalizing minorities and channeling most benefits to the rich.

People mourn next to the body of Muddasir Khan, who was wounded on Tuesday in a clash between people demonstrating for and against a new citizenship law, after he succumbed to his injuries, in a riot-affected area in New Delhi, India, February 27, 2020. (REUTERS)

“It is going to directly affect the people living on the fringes — the Dalits, the Muslims, the trans people. It will exacerbate bias and discrimination against them,” said Shivangi Narayan, a researcher who has studied predictive policing in Delhi.

With a population of 1.4 billion powering the world’s fifth-biggest economy, India is undergoing breakneck technological change, rolling out AI-based systems — in spheres from health to education, agriculture to criminal justice — but with scant debate on their ethical implications, experts say.

In a nation beset by old and deep divisions, be it of class, religion, gender or wealth, researchers like Narayan — a member of the Algorithmic Governance Research Network — fear that AI risks exacerbating all these schisms.

“We think technology works objectively. But the databases being used to train AI systems are biased against caste, gender, religion, even location of residence, so they will exacerbate bias and discrimination against them,” she said.

Security personnel stand guard on a road as a Hindu religious flag is seen on a minaret (C) of a burnt-out mosque following clashes between people supporting and opposing a contentious amendment to India's citizenship law in New Delhi on February 26, 2020. (AFP)

Facial recognition technology — which uses AI to match live images against a database of cached faces — is one of many AI applications that critics say risks more surveillance of Muslims, lower-caste Dalits, Indigenous Adivasis, transgender people and other marginalized groups, all while ignoring their needs.
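
For readers unfamiliar with the mechanics, the matching step of such a system can be sketched in a few lines. This is a hypothetical illustration, not any deployed system's code: the function names, the 0.6 threshold and the assumption that a separate model has already converted each face image into a numeric embedding vector are all illustrative.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two face-embedding vectors (1.0 = identical direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(live_embedding: np.ndarray,
               cached_faces: dict[str, np.ndarray],
               threshold: float = 0.6):
    # Return the ID of the most similar cached face,
    # or None if nothing clears the threshold.
    best_id, best_score = None, threshold
    for person_id, cached in cached_faces.items():
        score = cosine_similarity(live_embedding, cached)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```

The threshold is a policy choice: set it too permissively and the system returns false matches, and if the embedding model was trained on data that under-represents some groups, those false matches can fall disproportionately on them, which is the critics' concern.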

Linking databases to a national ID system and a growing use of AI for loan approvals, hiring and background checks can slam doors firmly shut on the marginalized, said Siva Mathiyazhagan, an assistant professor at the University of Pennsylvania.

The growing popularity of generative AI applications such as chatbots further exacerbates these biases, he said.

“If you ask a chatbot the names of 20 Indian doctors and professors, the suggestions are generally Hindu dominant-caste surnames — just one example of how unequal representations in data lead to caste-biased outcomes of generative AI systems,” he told the Thomson Reuters Foundation.

Caste discrimination was outlawed in India 75 years ago, yet Dalits still face widespread abuse, many of their attempts at upward mobility met with violent oppression.

Under-represented in higher education and good jobs despite affirmative action programs, Dalits, Muslims and Indigenous people lag higher-caste Indians in smartphone ownership and social media use, studies show.

About half of India’s population — primarily women, rural communities and Adivasis — lacks access to the Internet, so “entire communities may be missing or misrepresented in datasets ... leading to wrong conclusions and residual unfairness,” analysis by Google Research showed in 2021.

“Rich people problems like cardiac disease and cancer, not poor people’s tuberculosis, is prioritized, exacerbating inequities among those who benefit from AI and those who do not,” researchers said in the Google analysis.

Similarly, mobile safety apps that use data mapping to flag unsafe areas are skewed by middle-class users who tend to mark Dalit, Muslim and slum areas as dodgy, potentially leading to over-policing and unwarranted mass surveillance.
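
That skew is easy to see in a toy model. The sketch below is hypothetical, assuming an app that simply aggregates user reports into per-area risk scores: whoever reports most shapes the map, whether or not their reports track actual incidents.

```python
from collections import Counter

def area_risk_scores(unsafe_reports: list[str]) -> dict[str, float]:
    # Naive scoring: each area's share of all "unsafe" reports.
    counts = Counter(unsafe_reports)
    total = sum(counts.values())
    return {area: n / total for area, n in counts.items()}

# If most reporters come from one demographic, the areas they distrust
# dominate the map regardless of actual incident rates.
print(area_risk_scores(["Slum A", "Slum A", "Slum A", "Suburb B"]))
# {'Slum A': 0.75, 'Suburb B': 0.25}
```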

“The irony is that people who are not counted in these datasets are still subject to these data-driven systems which reproduce bias and discrimination,” said Urvashi Aneja, founding director of Digital Futures Lab, a research collective.

India’s criminal databases are particularly problematic, as Muslims, Dalits and Indigenous people are arrested, charged and incarcerated at higher rates than others, official data show.

These police registers feed proposed AI-assisted predictive policing systems that aim to identify who is likely to commit a crime. Generative AI may also enter the courtroom: the Punjab and Haryana High Court earlier consulted ChatGPT while deciding whether to grant bail to a suspect in a murder case, a first in the country.
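
The feedback loop that worries researchers can be shown with a deliberately simplified sketch, hypothetical and not any force's actual system: a "model" that counts past arrests per neighborhood and directs patrols to the top counts will keep returning to the same places, because each patrol generates the arrests that justify the next one.

```python
from collections import Counter

def train_hotspot_model(arrest_records: list[str]) -> Counter:
    # "Training" here is just counting past arrests per area,
    # i.e. learning where police went, not where crime happened.
    return Counter(arrest_records)

def patrol_targets(model: Counter, k: int = 2) -> list[str]:
    # Patrol the k areas with the most recorded arrests.
    return [area for area, _ in model.most_common(k)]

records = ["Basti X"] * 8 + ["Colony Y"] * 2  # skewed historical data
print(patrol_targets(train_hotspot_model(records)))
# ['Basti X', 'Colony Y'] -> more patrols there produce more arrests,
# reinforcing the same ranking in the next training cycle.
```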

“Any new AI-based predictive policing system will likely only perpetuate the legacies of caste discrimination and the unjust criminalization and surveillance of marginalized communities,” said Nikita Sonavane, co-founder of the Criminal Justice and Police Accountability Project, a non-profit.

“Policing has always been casteist in India, and data has been used to entrench caste-based hierarchies. What we’re seeing now is the creation and rise of a digital caste panopticon.”

The ministry of information technology did not respond to a request for comment.
