Explained | Delhi Police uses facial recognition technology

When was FRT first introduced in Delhi? What are the concerns of using the technology on a mass scale?

The story so far: Right to Information (RTI) responses received by the Internet Freedom Foundation, a New Delhi-based digital rights organization, reveal that the Delhi Police treats matches with over 80% similarity generated by its facial recognition (FRT) system as positive results.

Why is Delhi Police using facial recognition technology?

Delhi Police first procured FRT to trace and identify missing children. According to the RTI replies received from the Delhi Police, the procurement was authorised as per the Delhi High Court’s 2018 direction in Sadhan Haldar vs NCT of Delhi. However, in 2018, the Delhi Police told the Delhi High Court that the accuracy of the technology it had procured was only 2% and “not good”.

Things took a turn after multiple reports surfaced that the Delhi Police had used FRT to monitor the anti-CAA protests in 2019. In 2020, the Delhi Police stated in an RTI reply that, although it had obtained FRT under the Sadhan Haldar direction, which was specifically about finding missing children, it was using FRT for police investigations. Expanding the purpose of FRT use in this way is a clear example of “function creep,” where a technology or system gradually expands beyond its original purpose to perform broader functions. According to available information, the Delhi Police subsequently used FRT for investigative purposes, notably during the 2020 Northeast Delhi riots, the 2021 Red Fort violence and the 2022 Jahangirpuri riots.

What is facial recognition?

Facial recognition is an algorithm-based technology that creates a digital map of the face by identifying and mapping an individual’s facial features, which it then matches against a database it has access to. It can be used for two purposes: first, 1:1 verification of identity, where a face map is obtained in order to match it against the person’s photograph in a database to authenticate their identity. For example, 1:1 verification is used to unlock phones, but it is increasingly being used to grant access to benefits and government schemes. Second, there is 1:n identification, where a face map is obtained from a photograph or video and then matched against the entire database to identify the person in the photograph or video. Law enforcement agencies such as the Delhi Police usually procure FRT for 1:n identification.

For 1:n identification, FRT generates a probability or match score between the suspect to be identified and the available database of identified criminals. A list of possible matches is then generated, ranked by their probability of being the correct match. Ultimately, however, a human analyst selects the most likely match from the list generated by the FRT. According to the Internet Freedom Foundation’s Panoptic Project, which tracks FRT deployment in India, there are at least 124 government-approved FRT projects in the country.
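The scoring-and-ranking step described above can be sketched as follows. This is a minimal illustration, not the workings of any deployed system: the toy three-dimensional “embeddings”, the names and the use of cosine similarity as the match score are all illustrative assumptions (real systems use high-dimensional face embeddings produced by neural networks).

```python
import math

def cosine_similarity(a, b):
    # Match score between two face embeddings (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rank_matches(probe, gallery):
    # 1:n identification: score the probe face against every enrolled face
    # and return candidates sorted by match score, best first.
    scored = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical gallery of enrolled faces (toy 3-dimensional embeddings).
gallery = {
    "person_a": [0.9, 0.1, 0.2],
    "person_b": [0.1, 0.8, 0.3],
    "person_c": [0.4, 0.4, 0.5],
}
probe = [0.85, 0.15, 0.25]  # face map extracted from a photo or video

candidates = rank_matches(probe, gallery)
# A human analyst would then review this ranked list and select
# the most likely match -- the system alone does not decide.
```

The key design point is that the system outputs a ranked list of scored candidates, not a single answer; the final identification is a human judgment layered on top of the scores.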

Why is using FRT harmful?

India has witnessed the rapid deployment of FRT in recent years, both by the Union and state governments, without any law to regulate its use. The use of FRT presents two distinct problems: misidentification due to the inaccuracy of the technology, and mass surveillance due to the misuse of the technology. Extensive research into the technology has revealed that its accuracy rates fall starkly based on race and gender. This can result in a false positive, where a person is misidentified as someone else, or a false negative, where a person is not verified as themselves. Cases of false positives can lead to bias against the individual who has been misidentified. In 2018, the American Civil Liberties Union revealed that Amazon’s facial recognition technology, Rekognition, incorrectly identified 28 members of the US Congress as people who had been arrested for a crime; of the 28, a disproportionate number were people of colour. Also in 2018, researchers Joy Buolamwini and Timnit Gebru found that facial recognition systems had higher error rates in identifying women and people of colour, with the highest error rates for women of colour. The use of this technology by law enforcement agencies has already led to the wrongful arrest of three people in the US.

On the other hand, cases of false negatives can result in individuals being excluded from essential schemes that use FRT as a means of granting access. One example of such exclusion is the failure of biometric authentication under Aadhaar, which excluded many people from receiving essential government services and, in turn, led to starvation.

However, even if accurate, this technology can cause irreversible harm as it can be used as a tool to facilitate state-sponsored mass surveillance. Currently, India does not have a data protection law or a specific FRT regulation to protect against abuse. In such a legal vacuum, there are no safeguards to ensure that the authorities use the FRT only for the purposes for which they have been authorized, as is the case with the Delhi Police. FRT can enable constant surveillance of an individual, resulting in a violation of their fundamental right to privacy.

What do Delhi Police’s 2022 RTI responses reveal?

The RTI replies, dated July 25, 2022, were shared by the Delhi Police after the Internet Freedom Foundation filed a complaint with the Central Information Commission, the Delhi Police having repeatedly denied the information. In its response, the Delhi Police revealed that matches above 80% similarity are treated as positive results, while matches below 80% similarity are treated as false positives requiring further “corroborating evidence”. It is unclear why 80% was chosen as the threshold between positive and false positive; no justification has been provided to support the Delhi Police’s assumption that results above 80% similarity are correct. Further, categorising results below 80% as false positives rather than as negatives indicates that the Delhi Police may continue to investigate sub-80% results. Thus, people who share familial facial features, such as members of extended families or communities, may find themselves targeted. This could lead to the targeting of communities that have historically been over-policed and have faced discrimination by law enforcement.
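The reported thresholding policy amounts to a decision rule like the sketch below. Only the 80% cutoff and the “positive” versus “false positive requiring corroboration” labels come from the RTI replies; the function name and the 0-to-1 similarity scale are illustrative assumptions. The point the sketch makes concrete is that sub-threshold results are not discarded as negatives.

```python
THRESHOLD = 0.80  # the reported cutoff; no justification for this value was given

def classify_match(similarity):
    # Per the RTI replies: above 80% similarity is treated as a positive
    # result, while below 80% is labelled a "false positive" that needs
    # corroborating evidence -- notably, it is NOT treated as a negative
    # (no match), so such results may still be investigated.
    if similarity > THRESHOLD:
        return "positive"
    return "false positive (requires corroborating evidence)"

# Illustrative usage: a 74% match is still kept alive for investigation
# rather than being ruled out.
high = classify_match(0.86)
low = classify_match(0.74)
```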

The responses also mention that the Delhi Police matches the photographs/videos against photographs collected under Sections 3 and 4 of the Identification of Prisoners Act, 1920, which has now been replaced by the Criminal Procedure (Identification) Act, 2022. The new law allows wider categories of data to be collected from a wider range of people, i.e. “convicted persons and other persons for the purposes of identification and investigation of criminal cases”. There are concerns that it will lead to the excessive collection of personal data, in violation of internationally recognised best practices for data collection and processing. These revelations raise serious concerns, as the use of facial recognition can lead to wrongful arrests and mass surveillance, resulting in privacy violations. Delhi is not the only city where such surveillance is ongoing: a number of cities, including Kolkata, Bengaluru, Hyderabad, Ahmedabad and Lucknow, have introduced ‘Safe City’ programmes, which deploy surveillance infrastructure to reduce gender-based violence, in the absence of any regulatory legal framework to act as a safeguard.

Anushka Jain is an Associate Policy Advisor and Gian Prakash Tripathi is a Policy Intern at the Internet Freedom Foundation, New Delhi

THE ESSENCE

RTI responses obtained by the Internet Freedom Foundation reveal that the Delhi Police treats matches with over 80% similarity generated by its facial recognition technology system as positive results. Facial recognition is an algorithm-based technology that creates a digital map of the face by identifying and mapping an individual’s facial features, which it then matches against an accessed database.

Delhi Police first got FRT to trace and identify missing children as directed by Delhi High Court in Sadhan Haldar vs NCT of Delhi.

Extensive research on FRT has revealed that its accuracy rates fall starkly based on race and gender. This can result in a false positive, where a person is misidentified as someone else, or a false negative, where a person is not verified as themselves. The technology can also be used as a tool to facilitate state-sponsored mass surveillance.
