The information commissioner has warned companies to steer clear of “emotional analysis” technologies or face fines, because of the “pseudoscientific” nature of the field.
It is the first time the regulator has issued a blanket warning about the ineffectiveness of a new technology, said Stephen Bonner, the deputy commissioner, but one justified by the harm that could be caused if companies make meaningful decisions based on meaningless data.
“There’s a lot of investment and engagement around biometric attempts to detect emotion,” he said. Such technologies attempt to infer information about mental states using data such as the shininess of someone’s skin, or fleeting “micro expressions” on their faces.
“Unfortunately, these technologies don’t seem to be backed by science,” Bonner said. “That’s quite concerning, because we’re aware of quite a few organisations looking into these technologies as possible ways to make pretty important decisions: to identify whether people might be fraudsters, or whether job applicants are worthy of getting that role. And there doesn’t seem to be any sense that these work.”
Simply using emotional analysis technology isn’t a problem per se, Bonner said – but treating it as anything more than entertainment is. “There are plenty of uses that are fine, mild edge cases … if you’ve got a Halloween party and you want to measure who’s the most scared at the party, this is a fun interesting technology. It’s an expensive random number generator, but that can still be fun.
“But if you’re using this to make important decisions about people – to decide whether they’re entitled to an opportunity, or some kind of benefit, or to select who gets a level of harm or investigation, any of those kinds of mechanisms … We’re going to be paying very close attention to organisations that do that. What we’re calling out here is much more fundamental than a data protection issue. The fact that they might also breach people’s rights and break our laws is certainly why we’re paying attention to them, but they just don’t work.
“There is quite a range of ways scientists close to this dismiss it. I think we’ve heard ‘hokum’, we’ve heard ‘half-baked’, we’ve heard ‘fake science’. It’s a tempting possibility: if we could see into the heads of others. But when people make extraordinary claims with little or no evidence, we can call attention to that.”
The attempted development of “emotional AI” is one of four issues that the ICO has identified in a study of the future of biometric technologies. Some are simple regulatory matters, with companies that develop similar technologies calling for further clarity on data protection rules.
But others are more fundamental: the regulator has warned that it is difficult to apply data protection law when technologies such as gaze tracking or fingerprint recognition “could be deployed by a camera at a distance to gather verifiable data on a person without physical contact with any system being required”. Gathering consent from, say, every passenger passing through a station would be all but impossible.
In spring 2023, the regulator will be publishing guidance on how to use biometric technologies, including facial, fingerprint and voice recognition. The area is particularly sensitive, since “biometric data is unique to an individual and is difficult or impossible to change should it ever be lost, stolen or inappropriately used”.