Discussion
Essex police pause facial recognition camera use after study finds racial bias
ap99:
> more likely to correctly identify men than women.
> more likely to correctly identify black participants than participants from other ethnic groups.
> AI surveillance that is experimental, untested, inaccurate or potentially biased has no place on our streets.
I wonder if they're more worried about putting too many men in prison or too many black people.
gib444: Alternative headlines:
Essex police, well aware of all the issues before using it, pause use until expected bad publicity dies down
or
Essex police chosen as the force to take some flak for the issues while other forces steam ahead
pingou: If the suspect is Black, the software should automatically return zero matches in 30% of cases. Problem solved.
ghusto: > the system was more likely to correctly identify men than women and it was “statistically significantly more likely to correctly identify black participants than participants from other ethnic groups”.
I am genuinely unsure what's going on. My understanding of the article is that the system is problematic because it is more likely to correctly identify black people than "other ethnic groups". Is that right?
OJFord: Essentially (with made-up numbers): 100 men on a high street, 4 of whom are on a watch-list, 2 of whom are black. Both black guys get identified; only one of the others does. Ditto men vs. women, mutatis mutandis.
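A minimal sketch of that illustration in code (all names and numbers are invented, as in the comment above; nothing comes from the study):

    # Made-up per-group counts: people on the watch-list walking past the camera,
    # and how many of them the system correctly flags.
    watchlist_by_group = {"black": 2, "other": 2}
    identified_by_group = {"black": 2, "other": 1}

    for group, on_list in watchlist_by_group.items():
        hits = identified_by_group[group]
        print(f"{group}: {hits}/{on_list} correctly identified ({hits / on_list:.0%})")

    # black: 2/2 correctly identified (100%)
    # other: 1/2 correctly identified (50%)
    # Every match made was correct, yet the hit rate differs by group.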
defrost: It's problematic for use in Essex as it works best for a small minority of the Essex population and has a much higher error rate for a typical sample of the Essex community.
Addendum: Essex ethnicity breakdown (2021): 85.1% White British · 5.2% Other White · 3.7% Asian · 2.5% Black · 2.4% Mixed · 1.1% Other
from: https://en.wikipedia.org/wiki/Essex
i.e. most accurate (however accurate that is) for the men within the 2.5% of the region's population that is Black, roughly 1.25% of residents overall; not so accurate for the remaining ~98.75%.
OJFord: This is actually more (socially/ethically/philosophically) interesting than one might assume from the headline: it's not about false positives, it's that the system is more effective (correctly identifies that someone is on a watch-list) for one group than another within a protected characteristic. So essentially they're pausing the use of it because it works too well for group A / not well enough for group B, potentially leading to disproportionate (albeit correct) arrests of group A.
bloqs: Correlation does not indicate causation
edgyquant: So it should be improved, but it sounds like it's just catching criminals who need to be caught, no?
metalman: Absolutely impossible to condone further structural bias against a minority and just ignore the free "white pass" built into the software, and it is especially troubling that it passes white women the most. The only possible action is to reject and disable any system with a racial bias and investigate how such a thing happened, with a very pointy look for intent on the part of the vendors, who would then qualify for being housed in one of His Majesty's facilities for persons such as these.
edgyquant: If it's not falsely identifying people, I don't see a problem at all. If it's identifying criminals, every criminal should be caught.
blitzar: > the system was more likely to correctly identify men than women and it was “statistically significantly more likely to correctly identify black participants than participants from other ethnic groups”
Technology has moved on a lot, no doubt; however, studies were finding the opposite (and with order-of-magnitude errors) as recently as 2020. From a lazy Google literature search:
> these algorithms were found to be between 10 and 100 times more likely to misidentify a Black or East Asian face than a white face
https://jolt.law.harvard.edu/digest/why-racial-bias-is-preva...
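For what it's worth, "correctly identify" (hit rate on watch-list faces) and "misidentify" (false flags on everyone else) are different metrics, so the two results don't necessarily measure the same thing. A rough sketch with invented confusion-matrix counts (nothing here comes from the Essex study or the 2020 papers):

    # Hypothetical counts per group: tp/fn cover watch-list faces, fp/tn cover everyone else.
    counts = {
        "group_A": {"tp": 9, "fn": 1, "fp": 20, "tn": 970},
        "group_B": {"tp": 6, "fn": 4, "fp": 2,  "tn": 988},
    }

    for group, c in counts.items():
        tpr = c["tp"] / (c["tp"] + c["fn"])  # correct-identification rate
        fpr = c["fp"] / (c["fp"] + c["tn"])  # misidentification rate
        print(f"{group}: hits {tpr:.0%} of watch-list faces, falsely flags {fpr:.1%} of the rest")

    # group_A: hits 90% of watch-list faces, falsely flags 2.0% of the rest
    # group_B: hits 60% of watch-list faces, falsely flags 0.2% of the rest
    # One group can top both rates at once, so the two findings can coexist.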
bondarchuk: https://en.wikipedia.org/wiki/Selective_enforcement