I’ve been a sceptic of the promises made about machine learning, which the media usually calls “A.I.”, for “artificial intelligence”. It’s not that I think it’ll rapidly become self-aware and decide to kill us all—that’ll come later (mostly kidding…). My scepticism comes from the fact that the promises have mostly been hype and marketing puffery, with little to show for all the hot air expelled. And yet, A.I. is already everywhere, and together with algorithms of various sorts, it will increasingly dominate how we interact with companies and the world. In fact, it already does—and that’s a problem.
Back in February, one of New Zealand’s two supermarket companies, NZ-owned Foodstuffs, announced it would trial Facial Recognition Technology (FRT) as part of its store security, in an effort to reduce in-store crime. Customers entering a store will be scanned, and the system will compare the images with people in the company’s database. The company said that if the computer finds a match, a second person will be required to do a visual match. If the individual is still considered a match, they’ll be confronted by store security. The company talks about reducing violent and abusive behaviour—which is a very real problem—but it obviously has a role in excluding recidivist shoplifters, too.
However: FRT is not even remotely foolproof, and such systems make mistakes constantly—especially for women who are not of European descent, such as Māori, Pacific Island, and Indian women. In fact, this problem has already occurred.
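The scale of the problem can be sketched with simple base-rate arithmetic. The numbers below are illustrative assumptions, not figures published by Foodstuffs or any FRT vendor, but they show why even a highly accurate system will flag innocent shoppers day after day once it scans everyone who walks in:

```python
# Illustrative base-rate arithmetic. Both figures are assumptions for
# the sake of the example, not real data from any supermarket or vendor.
daily_shoppers = 5000        # assumed foot traffic for one store
false_positive_rate = 0.001  # assumed 0.1% false-match rate (99.9% accuracy)

false_matches_per_day = daily_shoppers * false_positive_rate
false_matches_per_year = false_matches_per_day * 365

print(f"False matches per day:  {false_matches_per_day:.0f}")   # 5
print(f"False matches per year: {false_matches_per_year:.0f}")  # 1825
```

And that’s one store with a generous accuracy assumption; published testing has found that false-match rates are markedly higher for some demographic groups, so those mistaken flags would not be evenly distributed.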
On Monday, 1News reported that a woman of Māori descent in Rotorua was misidentified as a thief. The woman says she provided store security with her ID and told them she was not the person they’d trespassed. "It didn't seem to change their mind which was already made up based on what they saw," she said. The company apologised and blamed “human error”, which points to another huge problem in the system: it relies on the human verifier being unbiased, something all humans struggle with to varying degrees, and many studies have indicated racial and ethnic bias is common in exactly these sorts of situations. Put another way, no one’s perfect. The woman observed, "[It's] ironic they blame human error for an AI piece of technology knowing it will have false positives and errors across the board." Exactly. To add insult to injury, the incident happened on her birthday.
New Zealand’s Privacy Commissioner already had concerns about this use of FRT, and this incident will almost certainly heighten them. He’s definitely not alone in that. My local New World supermarket is part of the FRT trial, and I’ve never been comfortable with that fact.
Meanwhile, NZ’s other supermarket company, Australian-owned Woolworths, announced yesterday that it was rolling out body cameras to all of its stores because, the company says, it’s seen “a 75% increase in physical assaults and 148% increase in ‘serious reportable events’ in the past three years”. The cameras will be worn around the neck and only turned on if there’s an incident. Also, staff are supposed to notify customers before recording.
It’s easy to see how, in a tense situation, a staff member may forget to tell a customer they’re being recorded—or may forget to turn the camera on at all. In a statement quoted in the linked article, the company claims, “Footage will not be released except when requested by police as part of an investigation." So… they won’t turn it over to the police unless it’s requested by them? What will they do with it if it isn’t requested? How long will they store the footage, and how securely will it be kept? Who will have access to it, and for what purposes? Will A.I. be used, as with Foodstuffs, to identify repeat offenders? Seems to me the Privacy Commissioner ought to be concerned about this, too.
There’s no sensible person who isn’t concerned about the rise in abuse and even violence directed at retail workers, and we’re well aware that shoplifters are driving up costs for us all. However, that doesn’t mean there aren’t legitimate questions that need to be answered, and it also doesn’t mean companies—or governments, for that matter—can do absolutely anything they want, free of controls or restrictions, just because they say it’s for “security”. We’re in a completely new arena now, and that’s all the more reason for extra caution.
The main problem is that just like its human creators, A.I. isn’t perfect, and its flaws are compounded when the humans in the mix are placed in situations in which their inherent biases may be reinforced by the A.I. system’s inherent flaws. There’s also no such thing as a computer system that can’t be hacked, which is another reason privacy considerations are so important.
We need to dial down the hype and pay more attention to the legitimate concerns about privacy and the potential for harm caused by mistakes—by A.I. or the humans involved in the process. Meanwhile, we do need to do more to stop crimes and violence against workers in stores, and that will likely require legislation. Indeed, the two supermarket companies’ use of technology is happening at least in part because government hasn’t provided any solutions. But charging ahead into the unknown with little oversight doesn’t seem like a great idea. What I’m really saying is, let’s get this right—while we still can.