At a glance
- Whilst facial recognition has been around for decades, the meteoric rise of social media and technology means it is becoming a much more common feature of everyday life
- For businesses, there is a fine line between storing data to provide a better service and breaching GDPR and compliance regulations
- It is essential that businesses and organisations looking to use facial recognition for employees, customers or visitors know exactly what they can and cannot do with the data, and how best to store it in compliance with regulations.
Whilst facial recognition has been around for decades, the meteoric rise of social media and technology means it is becoming a much more common feature of everyday life.
In 2017, Apple announced Face ID, its facial recognition feature, as a way to unlock its latest phones. Beyond social media apps, companies such as Lloyds, Mastercard and Amazon have also trialled or investigated similar features for payment. More widely, facial recognition scanning is now used at airports throughout the world, as is standard with modern biometric passports.
From a customer perspective, it does raise a good question: why are some people happy to use facial recognition for their convenience, but hesitant when organisations want to use similar data? Global protests have taken place recently against the use of facial recognition for surveillance purposes, and there will likely be more to come as the use of cameras increases. For organisations, it’s more a question of convenience vs compliance, with question marks over how the data is stored and used if it is to become a part of their process or customer offering.
Convenience vs Compliance
There is little doubt that using facial recognition as a way to log into apps or complete a payment is a smooth process, and one that does away with needing to remember passwords a dozen characters long, complete with special characters and numbers. As mentioned above, more and more organisations are beginning to use faces as identifiers, whether that be unlocking a phone, logging on to apps or even crossing a border. These are all things that provide convenience, generally saving people time, so it could be suggested that customers don’t mind providing personal and sensitive data if they get something in return.
For businesses, however, there is a fine line between storing data to provide a better service and breaching GDPR and compliance regulations. Storing such data is a significant risk for any organisation, especially facial data, which carries the risk of misuse, unauthorised tracking and identity theft. With increasing numbers of apps and opportunities for people to use facial recognition, the databases holding such images grow larger and, consequently, so does the risk of hacking and violations of privacy.
Under GDPR, in force since 2018, facial imagery is deemed sensitive data and therefore needs to be adequately protected. This was confirmed by the EU Digital Chief earlier this year, who said that facial recognition in the EU requires consent in order not to breach GDPR. The use of facial recognition involves processing personal data, so data protection laws apply and considerations need to be made at every stage, from deployment, watchlists and the processing of biometric data through to its retention and deletion.
There have already been instances of organisations being fined for their use of facial recognition, such as a school in Sweden, which used the technology to help keep track of attendance and was subsequently fined 200,000kr (around £16,800) for breaching GDPR. The fine, handed down by the Swedish Data Protection Authority, would have been heavier had the system been in place for longer than the three weeks it ran.
It is essential that businesses and organisations looking to use facial recognition for employees, customers or visitors know exactly what they can and cannot do with the data, and how best to store it in compliance with regulations.
Human rights concerns
Also of concern to businesses is the recent ruling in the South Wales Police case, in which a legal challenge against the force found its use of facial recognition to be unlawful and an interference with human rights.
The challenge centred on the use of facial recognition to map faces in crowds, and whilst it had been shown to help arrest people for previous offences, it was deemed to breach rights and to have been carried out without anyone’s knowledge or consent. The use of facial recognition for law enforcement purposes constitutes ‘sensitive processing’, and as such a Data Protection Impact Assessment and an ‘appropriate policy document’ must be in place. ‘Sensitive processing’ occurs irrespective of whether the image yields a match to a person on a watchlist, or whether the biometric data of unmatched persons is subsequently deleted within a short space of time.
Although the deployment was deemed in breach, the ruling isn’t the final nail in the coffin for police use of facial recognition, and we could therefore see an improved and more widely accepted use in the future if certain requirements are met.
As with any risk management process, horizon scanning is key. As 2020 has shown, new risks can present themselves at any time, meaning it is essential that organisations have a strong knowledge of any weak spots that could be exposed and of any regulations that must always be met.
For any organisation looking into the use of facial recognition for its business, Zurich’s risk engineers can help put together an overview of the risks and how to mitigate them.
For more information on the risks of facial recognition, or to learn how Zurich can help, please speak to your local contact.