
To ‘keep Americans safe,’ Biden’s AI executive order must ban these practices

In a July 21 announcement, the Biden-Harris administration quietly hinted that it will soon issue a presidential executive order to “keep Americans safe” from AI. What goes into the order will have major consequences, shaping how lawmakers, state and local officials, private companies and other nations approach regulating AI and other automated systems. 

It is essential that the administration seize this opportunity to draw bright-line rules to safeguard people’s civil rights and prohibit the uses of AI where the risks it poses to those rights are too great.  

Algorithmic harms are increasingly ubiquitous. Every day, AI and machine learning technologies determine — often without human oversight — people’s liberty and their access to jobs, health care, housing and benefits, among other things. The algorithms may be indiscriminate in design — purporting to affect everyone equally — but because they are deployed in the real world, atop existing inequalities, they disproportionately impact communities of color and other protected groups, exacerbating the obstacles they already face. 

Without a comprehensive effort to identify, stop and remedy algorithmic harms, we risk entrenching existing inequalities and undermining our ability to build a fair and just society.

The use of advanced automated technologies in the criminal legal system, like “predictive policing” and risk assessment instruments, for example, poses grave risks to people’s lives and liberty, not only because of the technology’s inability to predict the future but also because of structural anti-Black bias within policing and the system itself. Rather than address root causes of violence, these technologies perpetuate a cycle of disparate criminalization, especially of Black people, that destabilizes communities and makes people less safe. 

Proponents of these systems claim they are objective, or at least better than human biases. But these technologies rely on historic crime and policing data that reflect long histories of racial segregation, racially biased policing practices and disinvestment. Law enforcement’s use of facial recognition has repeatedly led to false arrests of Black people, including the recent arrest of Porcha Woodruff in Detroit, who was accused of carjacking. While many cases of false identification due to facial recognition technology likely go unreported, at least six Black people have reported being falsely accused of a crime, with severe and lifelong consequences. And while important work on algorithmic transparency and documentation is being done to mitigate bias in AI, technical fixes cannot begin to address the structural biases within, and harms of, the criminal legal system. 

The executive order on AI should prohibit the procurement, use and funding of AI and data-driven technologies at the federal level such as predictive policing, facial recognition technology, risk assessments and algorithmic criminal sentencing. These technologies lack a sound scientific basis, endanger communities and perpetuate racial discrimination in the criminal legal system. Even if the tech improves over time, its use is too risky to people’s fundamental rights to be employed at any level.  

Similarly, AI deployment in the workplace undermines workers’ health, safety and dignity, treating human beings like automatons shuttled from task to task. More and more workers, especially in low-wage sectors, are directed by faceless algorithms and monitored by near-constant surveillance.

Amazon, for example, uses cameras in its delivery vans to monitor drivers, reporting on whether they drink coffee while driving and how many times they buckle their seatbelts. The data these cameras collect are used to algorithmically evaluate drivers’ performance and, ultimately, to determine how much they are paid. Algorithmically determined pay often results in low, fluctuating wages that workers and their families cannot reliably depend on. Technologies like this disproportionately harm workers of color, who are overrepresented in low-wage jobs where tasks are easily measurable and thus susceptible to datafication.

The White House should prohibit the federal government — and its many, many contractors and grantees — from using AI or similar forms of algorithmic management to supervise workers, including by determining pay, promotions and the terms and conditions of their employment. And it should direct the relevant federal agencies to issue guidance to employers on how these technologies may conflict with existing worker protections. 

Any AI executive order need not start from scratch. The administration’s Blueprint for an AI Bill of Rights, published in October 2022, offers a comprehensive roadmap for addressing AI harms, and an executive order could include enforceable steps to implement it. And existing civil rights laws that prohibit discrimination already prohibit many forms of algorithmic discrimination; that’s why the administration directed federal agencies to address AI-driven bias earlier this year. 

But these actions do not adequately protect against all AI harms. Given the unprecedented scale and pace at which AI operates, the executive order must go further — and in doing so, it can set the standard for regulating AI.

Civil rights are an integral part of free and equal citizenship in the United States. For the Biden-Harris administration to advance its commitment to civil rights, its upcoming AI executive order must do more than mitigate risks at the margins. It must address the dangers of AI head-on.

Puneet Cheema is the manager of the Justice in Safety Project at the NAACP Legal Defense Fund (LDF). Brian J. Chen is the policy director at Data & Society. Amalea Smirniotopoulos is senior policy counsel at NAACP LDF.
