
Trends in biometric information regulation in the USA

What are the precedents, arguments and future prospects for legislation at city to state level?

Hayley Tsukayama

5 July 2022


The curve of your cheek, the swing of your walk, the exact angle of your crooked smile – these are pieces of information that are tied to who you are. This permanence makes biometric data valuable to governments and businesses who might want to use it to track where we go, what we do and who we meet. Yet such collection carries with it inherent privacy, equity and security risks.

Layering biometric technologies, without restriction, onto a world already blanketed with surveillance cameras threatens to erase the line between public and private life.

The first two waves of biometric regulation

Historically, American lawmakers in a handful of states have paid attention to the need to regulate the use of biometric tools. In 2008, the state of Illinois passed the Biometric Information Privacy Act (BIPA), a groundbreaking law to address the growing use of biometric data to, for example, validate people’s identities – a practice that had started sparking alarm. As the Bill itself states, ‘an overwhelming majority of members of the public are weary of the use of biometrics when such information is tied to finances and other personal information.’

More than a decade later, BIPA remains the gold standard for consumer biometric legislation in the United States. The Act requires companies to obtain consent before collecting biometric data – a rare thing in the USA. It also forbids companies from selling the information collected without permission, requires them to disclose why they are collecting it and sets rules for deleting it. Crucially, the Bill also allows individuals to sue companies for violating the law.

Illinois’ BIPA set off a small trend in legislation after it first passed. In 2009, Texas followed suit with a weaker version of the law, which lacked the right for individuals to sue. Washington State passed its version of biometric regulation in 2017.

The following wave of regulatory action came amid a renewed push for criminal justice reform in the USA. Law enforcement’s use of facial recognition technology – to identify individuals, track group or individual movements through cities or determine (often with specious scientific claims) someone’s emotions, sexual orientation or racial identity – has led many communities to forbid this type of biometric application. Bans have recently been passed at the city level in San Francisco and Oakland (California) in 2019; in Portland (Oregon) in 2020; and in Minneapolis (Minnesota) in 2021, to name a few.

Other local governments have taken different approaches. Massachusetts, for example, passed a state law in 2021 that did not completely eliminate government use of facial recognition. Instead, it set up several significant rules for its use, such as requiring police to obtain a court order before comparing images against databases of personal information using biometric technology. This approach aims to limit the risks of police use of facial recognition rather than eliminate it, as is the goal, for instance, of San Francisco’s ordinance.

In another example, again from 2021, New York City placed some limits on how private companies can collect biometric identifiers. This ordinance took a more moderate approach than Illinois’ BIPA by, for example, allowing signs posted at shop doors to replace obtaining explicit consent before biometric data is collected. This arguably undercuts the consent requirement considerably, as someone may simply not see a posted sign and, even if they did, they could not possibly understand all the ways their data may be used from a single notice.

However, these more moderate approaches demonstrate that even those who may see some benefits in using technologies such as facial recognition know that they cannot operate unchecked.

The key arguments for regulation

What are the arguments in favour of biometric technology regulation? The first is to ensure that people have control over their data. While this should be the case for all data collection, the point carries particular weight with information such as faceprints or gait (used to identify or classify a person based on their walking posture) that can be collected from long distances away from the subject.

Secondly, facial recognition chills and deters freedom of expression. Biometric scanning has been pointed at protestors who are simply (and legally) expressing their opinions – for instance, by the US Park Police, the US Postal Inspection Service and local police in cities across the country. Additionally, private entities that collect biometric data, such as private camera networks, often share this information with law enforcement.

Thirdly, there are still serious concerns with regard to the accuracy of many biometric technologies. Even if these tools worked perfectly, they would still present privacy and equity issues. However, it is important to list accuracy among the arguments for regulation because those producing and relying on biometric technology often cite the supposed accuracy of their tools as a selling point.

The false veneer of unbiased and accurate judgement often attributed to biometrics produces serious consequences. Historically over-surveilled minority communities, such as Black Americans, suffer disproportionately from such claims. Many studies have shown that the algorithms used in common facial recognition technology are biased. In the wake of groundbreaking work by the Algorithmic Justice League, scholarship has suggested that this is often because few people of colour are featured in the datasets used to test recognition tools. More generally, technology cannot shake the racial biases, even unconscious ones, of its designers and users. As such, biometric systems are more likely to misidentify people of colour than white people.

Algorithmic errors, for some, are literally a matter of freedom. The use of facial recognition has led to the wrongful arrests of at least three Black men – Michael Oliver, Nijeer Parks and Robert Williams. All of them could provide clear evidence that they were in different places at the time of the crimes they were arrested for. Williams even has tattoos that the perpetrator in his case does not have. Yet all three men were charged based on ‘evidence’ from facial recognition software.

Their false arrests are not random, but rather point to both the technical limits of current technology and the difficulty of training people to use and interpret it responsibly. The track record and serious effects of government use of facial recognition has prompted our organisation, the Electronic Frontier Foundation, to advocate for a complete ban.

Court cases and developing state-level data protection regimes

Mirroring new local legislation, regulatory agencies and courts have also been more active regarding biometric data collection, spurred on by growing public distrust towards technology companies. Notably, Meta settled with a class of Illinois Facebook users for $650 million in 2021, after users filed suit accusing the company of collecting faceprints without permission.

The difference in efficacy between these regulatory approaches reveals itself in this context. The right to sue under Illinois’ BIPA allowed a class action by Facebook users to force a shift in the company’s behaviour. Although the company had the same practices across the country, in Texas – where only the state may sue companies violating biometric privacy – the state did not file a similar suit until after individuals had proved the merits of their case in the Illinois settlement. The February 2022 suit in Texas is believed to be the first time the 2009 law had ever been used.

Protection for biometric information has also emerged as a key element of more general privacy laws. In 2018, California passed the first comprehensive privacy law in an American state, the California Consumer Privacy Act, which applies to most companies that collect data and sets up rights to access, delete and opt out of the sale of personal information.

In 2019, the Act was amended to set higher standards for the collection and use of biometric information. For example, while companies can collect personal information for any ‘business purpose’ under California’s law, biometric information is one of several types of data categorised as ‘sensitive personal information’ that can only be collected if it’s ‘necessary’ to carry out the service a consumer has requested. When examining the five general state privacy laws passed in the USA to date – besides California, in Colorado, Connecticut, Utah and Virginia – there’s a marked trend towards defining biometric information for identification purposes as being uniquely sensitive in some way.

The work ahead

The gains around protecting biometric information have not been easy and we cannot assume they will be permanent.

Faced with a reactionary swell of law enforcement support – a pushback against calls for criminal justice reform in the wake of George Floyd’s death – a handful of bills to extend facial recognition moratoriums or even replicate Illinois’ law in other states – California, Maryland and Maine – have failed.

Some lawmakers may be shying away from action, but coalitions to address biometric data collection are expanding. Scholars such as Ruha Benjamin and Safiya Noble have broken ground examining the intersection between racism and surveillance technologies. Along similar lines, Federal Trade Commissioner Alvaro Bedoya has noted how face recognition technology echoes so-called ‘lantern laws’ from seventeenth-century America, which required people of colour to carry candle lanterns after dark, to allow police to better identify them and monitor their movements. A growing number of civil rights groups have raised concerns about how biometric data collection – particularly when used in the criminal justice system or by law enforcement agencies – threatens fundamental human rights and often exacerbates racism.

Interrogations of the use of algorithms in the workplace have also prompted workers’ rights advocates to consider the value of regulating biometrics. Companies are turning to biometric technology to monitor workers, particularly in warehouses and on assembly lines, but, unlike consumers, workers are not always able to say ‘no’, as doing so may cost them their livelihood. The strength of Illinois’ law has proved beneficial in this case too. Thanks to BIPA, employees have pushed back against companies that forced them to use biometric time clocks.

Labour groups have now called for regulations that give employees more control over the personal data collected in the workplace, including biometric data, and ensure more transparency of use. For example, a regulatory framework suggested by the University of California at Berkeley’s Labor Center states that employers should minimise employee biometric data collection and not disclose it to other entities, unless required by law.

The story of biometric data regulation is still being written and discussions on regulatory and legislative interventions are leading to important opportunities to raise awareness of the issues at stake. For instance, more lawmakers are now challenging assertions from enforcement agencies and companies that biometric data collection is always necessary or contributes to countering bias.

The goals of these challenges, however, should be clear: people should have control over their data, which means that they should be able to consent meaningfully to having their biometric information collected. And strong regulation should stop or seriously curtail the most harmful applications of biometrics, such as government uses that risk chilling freedom of speech, curbing movement or limiting the autonomy of individuals in other ways.

Companies and government often try to cast ‘data’ as impersonal, but it is, in fact, deeply personal. It’s drawn from our actions, reflective of our beliefs and tied to the core of who we are. This personal connection is doubly true when it comes to biometric information – literally information derived from our bodies. Supposedly free societies should respect and protect it.
