Companies are collecting our biometric data. Society needs assurances of security


Crowded station. Source: Pixabay/CC0 Public Domain

Imagine walking through a noisy train station. You’re rushing through the crowd, unaware that cameras are not only watching you, but recognizing you.

Today, our biometric data is valuable to businesses, which use it for security, to improve customer service, and to boost their own efficiency.

Biometrics are unique physical or behavioral characteristics that are part of our daily lives. The most commonly used is the face, through facial recognition.

Facial recognition technology comes from a branch of artificial intelligence called computer vision and is akin to giving computers eyesight. The technology scans images or video from devices, including CCTV cameras, and detects faces.

The system typically identifies and maps 68 specific points known as facial landmarks. These create a digital fingerprint of your face, allowing the system to recognize you in real time.

Facial landmarks include the corners of the eyes, the tip of the nose, and the edges of the mouth. They help create a mathematical representation of the face without storing the entire image, improving both privacy and efficiency.
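As an illustration of how landmarks can become a "mathematical representation" rather than a stored image, here is a minimal sketch. The landmark names and coordinates are invented for the example, and a real system would use far richer features (e.g. a learned embedding), but the idea is the same: distances between landmark points, normalized so the vector is independent of image resolution.

```python
import math

# A hypothetical, tiny subset of 68-point-style facial landmarks, as
# (x, y) pixel coordinates a detector might return (values invented).
LANDMARKS = {
    "left_eye_corner":  (120.0, 150.0),
    "right_eye_corner": (180.0, 150.0),
    "nose_tip":         (150.0, 190.0),
    "mouth_left":       (130.0, 220.0),
    "mouth_right":      (170.0, 220.0),
}

def face_signature(landmarks):
    """Turn landmark coordinates into a scale-invariant feature vector.

    Each pairwise distance is divided by the inter-eye distance, so the
    same face yields the same vector regardless of image resolution.
    Only these ratios are kept -- the original image is never stored.
    """
    scale = math.dist(landmarks["left_eye_corner"],
                      landmarks["right_eye_corner"])
    names = sorted(landmarks)
    sig = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sig.append(math.dist(landmarks[a], landmarks[b]) / scale)
    return sig

sig = face_signature(LANDMARKS)
print(len(sig))  # 10 pairwise ratios from 5 landmarks

# A higher-resolution image of the same face (every coordinate doubled)
# produces the identical signature:
scaled = {k: (2 * x, 2 * y) for k, (x, y) in LANDMARKS.items()}
print(face_signature(scaled) == sig)
```

The privacy benefit the article mentions comes from this one-way step: the compact signature supports matching, but the full photograph need not be retained.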

From supermarkets to car parks and train stations, CCTV cameras are everywhere, quietly doing their job. But what exactly is their job now?

Businesses can justify collecting biometric data, but with power comes responsibility, and the use of facial recognition technology raises serious concerns about transparency, ethics and privacy.

If even police use of facial recognition technology can be considered ethically questionable, the business case becomes less compelling still, especially since little is known about how companies store, manage and use the data.

Collecting and storing biometric data without consent may violate our rights, including protection against surveillance and storage of personal images.

Balancing security, performance, and privacy rights is a complex ethical choice for a business.

As consumers, we are often reluctant to share our personal information. But facial recognition raises more serious dangers, such as deepfakes and other forms of spoofing.

Take, for example, the recent revelation that Network Rail has been secretly monitoring thousands of passengers using Amazon AI software. This surveillance highlights a key issue: the need for transparency and rigorous regulation, even as the company watches us to improve services. A Network Rail spokesman said: “When we deploy technology, we work with the police and security services to ensure we take proportionate action and always comply with the relevant laws on the use of surveillance technology.”

One of the main challenges is the issue of consent. How can the public ever give informed consent if they are constantly monitored by cameras and do not know who is storing and using their biometric data?

This fundamental problem underscores the difficulty of addressing privacy concerns. Companies face the daunting task of obtaining clear, informed consent from people who may not even know they are being observed.

Without transparent practices and clear consent mechanisms, it is virtually impossible to ensure that the public is truly aware of and consents to the use of their biometric data.

Think about your digital security. If your password is stolen, you can change it. If your credit card is compromised, you can cancel it. But your face? That’s permanent. Biometrics are incredibly sensitive because they can’t be changed once they’re compromised. That makes biometric security a high-stakes game.

If a biometric database is breached, hackers can use the stolen data for identity theft, fraud, or even harassment.

Another concern is algorithmic bias and discrimination. If data is used to make decisions, how can companies ensure that diverse and sufficient data is included to train the algorithm?

Businesses can use biometrics for authentication, personalized marketing, employee monitoring, and access control. There is a significant risk of gender and racial bias if the algorithm is trained primarily on data from a homogeneous group, such as white men.
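One way such bias is detected in practice is a disaggregated audit: running the matcher on a labelled evaluation set and comparing error rates per demographic group. The sketch below uses invented results (the group names and numbers are hypothetical) purely to show the shape of such an audit.

```python
# Hypothetical verification results: (group, predicted_match, true_match).
# In a real audit these would come from running the face-matcher on a
# labelled evaluation set; the values below are invented to illustrate
# the kind of disparity an audit can surface.
results = [
    ("group_a", True,  False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", True,  True),
    ("group_b", True,  False), ("group_b", True,  False),
    ("group_b", True,  False), ("group_b", True,  True),
]

def false_match_rate(rows):
    """Share of genuinely non-matching pairs the system wrongly accepted."""
    non_matches = [pred for _, pred, truth in rows if not truth]
    return sum(non_matches) / len(non_matches)

by_group = {
    g: false_match_rate([r for r in results if r[0] == g])
    for g in {g for g, _, _ in results}
}
print(by_group)
```

If one group's false-match rate is several times another's, the system exposes that group disproportionately to misidentification, which is exactly the inequality the article warns about; retraining on more diverse data is the usual remedy.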

Companies should also ensure that digital biases are not perpetuated. Failure to do so can lead to social inequality.

Legislation and awareness

As facial recognition becomes more widespread, the need for robust regulation becomes urgent. Regulations must require explicit consent before anyone’s biometric data is captured. They should also establish strict standards for how that data is stored and secured to prevent breaches.

It is equally important for society to become more aware of the issue. While people are becoming more aware of data protection, facial recognition often goes unnoticed. It is invisible in our daily lives, and many are unaware of the risks and ethical issues. Educating the public is key.

Incorporating the principles of responsible AI into the implementation of facial recognition technology would be a good place to start. Responsible AI emphasizes fairness, accountability, transparency, and ethics. This means that AI systems, including facial recognition, should be designed and used in a way that respects human rights, privacy, and dignity.

However, businesses will not necessarily prioritise these principles if they are not accountable to regulators or the public.

Transparency is the cornerstone of responsible AI. If organizations using facial recognition remain secretive about their practices, we can’t trust them with our biometric data.

Armed with nothing more than your personal data, companies can wield remarkably manipulative marketing. A single “like” can be enough to target a personalized campaign at you with striking accuracy.

Political parties are using it too. In Pakistan, the PTI adopted artificial intelligence (AI) to let its leader, Imran Khan, campaign and address his supporters despite serving a prison sentence.

The collection and analysis of visual data is particularly important compared to non-visual data because it provides richer, more personal and direct insights into human behavior and identity.

That’s why its growing use by businesses raises so many concerns about privacy and consent. While the public remains unaware of the extent to which their visual data is being captured and used, their information will be vulnerable to misuse or exploitation.


This article is reprinted from The Conversation under a Creative Commons license. Read the original article.

Citation: Businesses harvesting our biometrics: society needs security assurances (2024, July 10), retrieved July 10, 2024, from

This document is subject to copyright. Apart from any fair use for private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.