Neha Mehta is a J.D. candidate, 2021, at NYU School of Law.

Introduction to Government Use of Algorithmic Decision Making

Amazon’s Alexa. Roomba vacuums. Tesla’s self-driving vehicles. In today’s economy, algorithmic decision making, powered by artificial intelligence (“AI”), is far from novel. The widespread adoption of algorithmic decision making technology, or automated decision systems, in the private sector can be attributed to improvements in computing power, increased data volumes, and heightened consumer demand. Public sector use of AI, however, tells a vastly different story: the application of algorithmic decision making in government has lagged behind private industry. Only in recent years have federal agencies increasingly employed algorithmic decision making tools to help officials make decisions about resource allocation and further public policy objectives. For instance, algorithms are currently used to conduct child risk and safety assessments, provide predictive analytics for healthcare providers, and detect abuse of public benefits. Government use of automated decision systems may be perceived as cost-effective and efficient, helping to equalize access to information and eliminate human bias. Yet there is growing concern that public sector use of algorithmic decision making tools amounts to black-box technology, posing challenges to deeply held principles of accountability, privacy, and transparency. Nowhere is mounting criticism of government use of artificial intelligence better exemplified than in the implementation of algorithmic risk assessment tools used to evaluate the risk of criminal recidivism.

Lawsuit Against ICE and its Risk Classification Assessment Software

In 2013, U.S. Immigration and Customs Enforcement (“ICE”) began using an algorithmic risk classification assessment tool (“RCA”) during its deportation proceedings. The system, which considers factors such as criminal history, community connections, and family ties, assesses whether people arrested over immigration violations should be released after 48 hours or detained until they can appear before a judge. Between 2013 and 2017, the risk assessment tool identified 47% of individuals as low-risk, allowing those arrested to be granted release. In 2017, however, ICE radically transformed its risk assessment tool at the behest of the new administration. Under the altered RCA tool, the share of detainees deemed low-risk, and thus eligible for release, plummeted to less than 3%. ICE’s RCA software now routinely finds that close to 100% of detainees are too risky to be granted release.
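ICE's RCA internals are not public, so the following is only a hypothetical sketch (every function name, factor, and weight here is invented for illustration) of how the shift described above can be produced not by changing how risk is scored, but by removing "release" from the tool's possible outputs altogether.

```python
# Hypothetical sketch -- NOT ICE's actual RCA software, whose internals are
# not public. It illustrates how removing "release" from a tool's output
# space forces near-universal detention regardless of the underlying score.

def risk_score(criminal_history: int, community_ties: int, family_ties: int) -> int:
    """Toy score: criminal history raises risk; community/family ties lower it."""
    return criminal_history * 3 - community_ties - family_ties

def recommend(score: int, release_allowed: bool = True) -> str:
    """Map a score to a recommendation.

    Earlier configuration: low scores yield a release recommendation.
    Altered configuration (release_allowed=False): every case is either
    'detain' or 'refer to supervisor' -- release is no longer an output.
    """
    if score <= 0 and release_allowed:
        return "release"
    return "detain" if score > 0 else "refer to supervisor"

# A detainee with no criminal history and strong ties:
score = risk_score(criminal_history=0, community_ties=2, family_ties=2)
print(recommend(score))                         # release
print(recommend(score, release_allowed=False))  # refer to supervisor
```

Under this stylized change, the same low-risk individual who previously received a release recommendation can now, at best, be referred for further review, which mirrors the reported drop from 47% low-risk determinations to under 3%.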

Given the radical change to ICE’s detention policy, the NYCLU and Bronx Defenders filed a lawsuit alleging that the RCA algorithm violates constitutional due process protections, as well as federal immigration law, which requires individualized determinations for release. Although the risk assessment algorithm is meant merely to provide a recommendation to the ICE officers responsible for making final decisions, the New York ICE field office deviates from the automated decision system’s recommendation less than 1% of the time. The NYCLU and Bronx Defenders claim that the algorithm, originally meant to weigh pre-trial risk factors, now wholly eliminates the possibility of release, even for detainees who pose no threat.

The NYCLU and Bronx Defenders’ lawsuit against ICE raises fundamental due process concerns. Under federal law, ICE is legally mandated to make individualized assessments regarding detention. The agency, however, has disregarded the Constitution by contracting its legal duties away to an algorithm. ICE detainees are unaware of how the algorithm operates, including how it weighs risk factors, and they have no access to how it has classified them. This is exacerbated by the fact that immigrants arrested by ICE are not provided with legal representation at the time of detention. Because ICE is not held accountable for its use of the RCA software, detainees have no legitimate chance for redress. And concerns surrounding transparency, potential discrimination, or loss of privacy are meaningless without robust oversight.

Moreover, ICE’s risk assessments of those arrested over immigration violations do not correspond with federal immigration judges’ determinations regarding detention and release. ICE treats individuals as high-risk almost without exception, whereas judges find that 40% of detainees can be released on bond. Given this discrepancy, the most obvious solution would seem to be for ICE to recalibrate its RCA tool to reflect judicial outcomes. Instead, the agency has implemented software that continuously produces the one substantive outcome it seeks: detention without bond. When agency use of risk assessment tools fails to accurately calibrate outcomes, their sole function, public trust in government begins to waver.
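As a back-of-the-envelope illustration of this calibration gap, one might compare release rates directly using only the aggregate figures cited above (under 3% release from the tool versus roughly 40% bond release by judges); the sample construction below is invented and stands in for the case-level data a real calibration analysis would require.

```python
# Crude calibration comparison -- a hypothetical illustration using the
# aggregate figures cited in the post, not actual case-level ICE data.

def release_rate(recommendations: list[str]) -> float:
    """Fraction of cases in which release was recommended or granted."""
    return sum(r == "release" for r in recommendations) / len(recommendations)

# Stylized 100-case samples reflecting the reported aggregates:
rca_recs   = ["release"] * 3  + ["detain"] * 97   # tool: <3% deemed low-risk
judge_recs = ["release"] * 40 + ["detain"] * 60   # judges: ~40% bond release

gap = release_rate(judge_recs) - release_rate(rca_recs)
print(f"Tool vs. judge release-rate gap: {gap:.0%}")  # prints 37%
```

A tool that disagrees with downstream adjudicators at this scale is not predicting risk so much as encoding a predetermined outcome, which is the crux of the plaintiffs' calibration argument.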

Additionally, when individuals are denied release by ICE, they must remain incarcerated in jails for extended periods of time before they are even given the opportunity to ask for release. The median time detained immigrants must wait before appearing before an immigration judge stands at 80 days. Detention carries debilitating collateral consequences, including the loss of community ties, ineligibility for benefits, and unemployment.

The NYCLU and Bronx Defenders’ suit against ICE is a crucial step toward recognizing the opacity and ambiguity that accompany public sector use of algorithmic decision making systems. Hopefully, legal resolution of the plaintiffs’ claims will provide some clarity on how risk assessment tools can better accomplish democratic objectives.