Accountability in Immigration: DHS Faces Pushback Over Rapid A.I. Expansion

Written by: Sierra Mead

In February 2024, the Department of Homeland Security (DHS) announced a significant expansion of its use of artificial intelligence (A.I.). The expansion includes recruiting 50 A.I. experts to form the Department's "A.I. Corps" and partnering with OpenAI, Anthropic, and Meta to launch pilot programs addressing national security challenges such as drug and human trafficking, cybersecurity, and immigration enforcement.

Before this influx of resources, A.I. was already integral to certain DHS initiatives. For example, "Operation Renewed Hope" used facial recognition and A.I. to enhance images and generate probable identities for victims of sexual exploitation, enabling DHS to identify hundreds of previously unknown victims in less than a month. However, DHS's rapid adoption of A.I. has raised serious concerns about algorithmic bias and discrimination in immigration enforcement, where the technology can affect decisions related to deportation, naturalization, and detention. Studies have shown that A.I. systems trained on large databases can exhibit discriminatory behavior reflecting the structural inequalities and societal biases embedded in those databases. For example, Amazon's A.I. hiring tool was found to be biased against women applying for technical roles, and A.I. predictive policing tools have shown racial bias by unfairly targeting Black communities.

In September 2024, 142 human rights, immigrant rights, and privacy advocacy groups signed a letter to the DHS Secretary requesting that DHS suspend its use of A.I. technology for immigration enforcement. The letter expressed the organizations' concern that A.I. is being used to make critical decisions about deportation, asylum, family separation, and detention without public accountability or responsible risk management. Because these enforcement and adjudication decisions are life-altering, the organizations warned that DHS's A.I. products could exacerbate existing biases or supercharge detention and deportation. They noted that the public has received little to no information on the agency's tracking of A.I.'s civil rights impact and claimed that DHS, having fast-tracked the use of A.I. technologies, appears to be in violation of federal policies governing the responsible use of A.I.

In line with these concerns, on October 3, 2024, three immigration groups, Pangea Legal Services, Mijente Support Committee, and Just Futures Law, filed a complaint against DHS in the U.S. District Court for the District of Columbia. The complaint outlines the plaintiffs' concerns regarding DHS's non-compliance with federal policies and requests a judicial order requiring DHS to produce improperly withheld records requested under the Freedom of Information Act (FOIA).

The plaintiffs submitted FOIA requests in July and August 2024 regarding the nature of DHS's A.I. usage. FOIA enables members of the public to request unclassified federal agency records and requires the agency to respond to requests within 20 business days. The plaintiffs requested DHS's A.I. impact assessments and its verifications of the systems' accuracy, reliability, and lack of bias. DHS did not provide the groups with the requested records.

The complaint further alleges that DHS is adopting A.I. at an "alarming rate" and that the technologies are being used to make a range of life-altering decisions without following the federal policies meant to safeguard and monitor federal agencies' A.I. usage. The complaint indicates that DHS has not provided evidence that it notifies individuals about the use of A.I. in adjudicating benefits or offers a process to address adverse A.I. decisions. The agency has not released information about how the technology is being used, whether it has taken necessary bias-mitigation measures, or what safeguards it has adopted.

DHS must comply with several executive orders ("Orders") and agency memoranda concerning the use of A.I. Executive Orders 13960 and 14110 create standards for A.I. safety, security, and risk management. The Orders direct agencies to publish the format and mechanisms of their A.I. tools and to address civil rights, civil liberties, and privacy. Office of Management and Budget (OMB) Memoranda M-24-10 and M-24-18 obligate agencies to publish an inventory of their A.I. use cases; to notify individuals of the use of A.I. in the adjudication of their benefits; to provide a process for seeking redress against adverse A.I. decisions; to establish and publish processes that safeguard, measure, monitor, and evaluate the ongoing performance and effectiveness of the agency's A.I. applications; and to publish compliance plans indicating how the agency will achieve consistency with these OMB guidelines.

Government agencies have access to powerful tools to enhance national security and improve public services through A.I. However, the adoption of A.I. in areas like immigration enforcement underscores the need for transparency and oversight to prevent biased decision-making that could disproportionately affect vulnerable populations. This impending legal battle highlights the need for agencies to balance the benefits of A.I. innovation against the rights of the individuals they serve by ensuring that the technology is used justly and without bias.

Sources:

Andrew Kreighbaum, Immigrant Groups Sue DHS for Artificial Intelligence Disclosures, Bloomberg Law (Oct. 4, 2024).

Artificial Intelligence at DHS, Homeland Sec. (last visited Oct. 12, 2024).

Cecilia Kang, The Department of Homeland Security Is Embracing A.I., N.Y. Times (Mar. 18, 2024).

Cate Burgan, Human Rights Orgs Call on DHS to End Some AI Uses, Meritalk (Sept. 10, 2024).

Complaint, Pangea Legal Servs. v. USCIS, No. 1:24-cv-2809 (D.D.C. filed Oct. 3, 2024).

DHS Launches First-of-its-Kind Initiative to Hire 50 Artificial Intelligence Experts in 2024, Homeland Sec. (Feb. 6, 2024).

Exec. Order No. 13960, 85 Fed. Reg. 78939 (Dec. 3, 2020).

Exec. Order No. 14110, 88 Fed. Reg. 75191 (Oct. 30, 2023).

FOIA Processing, Homeland Sec. (last visited Oct. 12, 2024).

Jeffrey Dastin, Insight – Amazon scraps secret AI recruiting tool that showed bias against women, Reuters (Oct. 10, 2018).

Letter from 142 Orgs. to Sec’y Alejandro Mayorkas, Dep’t of Homeland Sec.

Off. of Mgmt. & Budget, M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (2024).

Off. of Mgmt. & Budget, M-24-18, Advancing the Responsible Acquisition of Artificial Intelligence in Government (2024).

Will Douglas Heaven, Predictive policing is still racist – whatever data it uses, MIT Tech. Rev. (Feb. 5, 2021).