Automation Nation: What Happens When Algorithms Decide Entry for Immigrants and Refugees?

By Alexandra B. Harrington

February 17, 2023


In an increasingly digital world, we now have an expectation of immediacy. We demand instant replies. We can get nearly anything delivered to us within hours, with the fine-tuned forecasting and optimization algorithms of services like Amazon Prime efficiently selecting and sending out goods.[1] Yet, in return for quick delivery, there is a human cost. Amazon also used algorithms to fire employees expediently with little to no human oversight.[2] How, then, does one balance the benefits of efficient algorithms with usage that infringes on the rights of those it targets?

When international organizations and state governments use algorithms to make decisions, problems may be difficult to detect, and the consequences can be dire. A key example is the use of automated decision-making systems in the context of migration. Individual states such as the United Kingdom and Canada employ automated decision-making to decide visa applications, and the United States and many European Union member states use artificial intelligence and automated decision-making at some level of their immigration and migration policy.[3] Though use of automated decision-making is acknowledged, the extent to which public bodies allow algorithms to make decisions is difficult to ascertain; many remain intentionally silent on the matter.[4]

This article explores the use of automated decision-making in migration considerations, particularly immigration and refugee applications. It defines key terms, looks at the history surrounding treatment of immigrants and refugees, identifies and explains relevant legal instruments, examines the uses of automated decision-making by states and international organizations, evaluates the benefits and ramifications, and concludes by offering suggestions for best practices moving forward.



Automated decision-making is a process by which decisions are made by automated means, without any human involvement, using machine learning.[5] Machine learning is a commonly used subset of artificial intelligence in which an algorithm is given raw data, known as training data, and generates a series of rules or predictions based on that data. The algorithm learns from those patterns and applies them to new situations.[6] The legal definition of “refugee” comes from the 1951 Refugee Convention: someone who, “owing to well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion, is outside the country of his nationality and is unable or, owing to such fear, is unwilling to avail himself of the protection of that country.”[7] The Office of the U.N. High Commissioner for Refugees (UNHCR) notes the requirement of border-crossing, stating that “[r]efugees are people who have fled war, violence, conflict or persecution and have crossed an international border to find safety in another country.”[8]
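To make the machine-learning definition above concrete, the following is a minimal, purely illustrative sketch in Python: a trivial "model" memorizes labeled training examples and classifies a new case by its nearest example. The feature values and outcomes are hypothetical and bear no relation to any real migration system.

```python
# A purely illustrative sketch of the machine-learning loop described
# above: an algorithm is given labeled training data, derives a decision
# rule from it, and applies that rule to new, unseen cases. Here the
# "model" simply memorizes its examples and classifies a new case by its
# nearest training example (1-nearest-neighbor). All feature values and
# outcomes are hypothetical.

def train(training_data):
    # "Learning" here is just memorizing labeled examples;
    # real systems fit statistical models instead.
    return list(training_data)

def predict(model, case):
    # Classify a new case by the label of its closest training
    # example, using squared Euclidean distance.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(model, key=lambda ex: dist(ex[0], case))
    return label

# Hypothetical training data: (feature vector, past decision).
training_data = [
    ((0.9, 0.8), "approve"),
    ((0.8, 0.9), "approve"),
    ((0.2, 0.1), "refuse"),
    ((0.1, 0.3), "refuse"),
]

model = train(training_data)
print(predict(model, (0.85, 0.7)))  # closest examples were approved
print(predict(model, (0.15, 0.2)))  # closest examples were refused
```

The sketch also illustrates the risk discussed later in this article: whatever patterns (or biases) are present in the training data are reproduced mechanically in every new decision.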

Immigration and Refugee History and Policy

For as long as countries and states have had borders, there have been people attempting to flee across them. No doubt someone was commenting on the Israelites crossing the border out of Egypt in the age of Moses. Nevertheless, it was not until relatively recently that states began to control migration at their borders and set parameters over who may enter and remain. Throughout the 1800s in the U.S., immigration was open and encouraged; the Bureau of Immigration was not created until the 1890s.[9] Following the “Great Wave” of immigration in the early 1900s, the U.S. Border Patrol was created, and quotas were introduced to limit numbers by nationality.[10] In 1952, the first Immigration and Nationality Act was passed, and in 1965, Congress amended the act to abolish national origins quotas and expand immigration opportunities for those outside the Western Hemisphere.[11] Over the past decade, right-leaning politicians have increasingly stoked fears about immigrants, particularly non-white immigrants, replacing citizens in their jobs.[12] This rhetoric has extended to refugees fleeing untenable situations and those who support them.[13]

The UNHCR estimates there were 21.3 million refugees at the end of 2021, not including 53.2 million internally displaced people and those who did not survive border-crossing.[14] Since 2014, an estimated 24,000 people have died or gone missing while attempting to cross the Mediterranean Sea, fleeing war-torn countries such as Syria, Iraq and Afghanistan.[15] The war in Ukraine drastically increased the number of forcibly displaced people. By the end of April 2022, less than two months into the conflict, reports estimated 11 million people had fled their homes in Ukraine, with 5.3 million leaving for neighboring countries and nearly 6.5 million displaced internally.[16] The true number is likely much higher, with people from Russia and neighboring countries also fleeing for fear of persecution.[17] Nations worked quickly to aid those fleeing Ukraine. The U.K. instituted a new visa for Ukrainian nationals coming to the U.K., provided they have an eligible sponsor, and enacted “Homes for Ukraine,” providing a monetary incentive to those in the U.K. willing to house Ukrainian refugees.[18] Unfortunately, with the program due to expire, there is concern that the over 100,000 Ukrainian nationals participating will face homelessness.[19]

Applicable International and State Law

Multiple legal instruments, ranging from human rights legislation to data protection laws, apply to those immigrating to or seeking refuge in a foreign country.

1948 Universal Declaration of Human Rights and the International Bill of Human Rights

Following World War II, the newly created United Nations enacted the Universal Declaration of Human Rights, a milestone document mandating that certain rights, such as the right to life and the right to seek and enjoy asylum from persecution, are fundamental.[20] Stemming from the declaration is the International Bill of Human Rights, composed of the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights.[21] It guarantees the right to life; protection from torture and cruel, inhuman, or degrading treatment; freedom from arbitrary arrest or detention; and basic trial rights.[22]

UNHCR and the 1951 Refugee Convention

The Office of the U.N. High Commissioner for Refugees was established to help displaced persons, including refugees, internally displaced persons, asylum seekers and those hoping to return home.[23] The 1951 Refugee Convention and its 1967 Protocol define who may be classified as a refugee, provide for the protection of refugees and forbid refoulement, the practice of expelling refugees against their will and returning them to a territory in which they fear for their lives.[24] The convention is grounded in the declaration’s Article 14, the right to seek asylum for persons fleeing persecution.[25]

European Convention on Human Rights

In 1950, the Council of Europe passed the European Convention on Human Rights to further realize the protections and rights guaranteed by the Universal Declaration of Human Rights.[26] It provides a basis for legal action in the U.K. and the EU when an individual’s human rights have been breached and there is not adequate regulation in their home jurisdiction on which to bring an action.

General Data Protection Regulation

The General Data Protection Regulation is a data protection regulation adopted by the EU in 2016.[27] It governs how organizations may use, process, and store an individual’s data, and sets out provisions for obtaining consent and remedies for violations.[28] It makes data protection a fundamental right, related to Article 8 of the European Convention on Human Rights, which guarantees the right to respect for private and family life.[29] Under the regulation, when personal data has been obtained about a data subject, the data subject should be told when automated decision-making is used and given the logic behind it, to ensure fair and transparent processing.[30] Further, Article 22(1) states that a data subject has the right not “to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”[31]

U.K. Data Protection Act 2018

When the General Data Protection Regulation came into effect, it was directly binding on the U.K. as an EU regulation. The Data Protection Act 2018 incorporates the General Data Protection Regulation into U.K. domestic law, and post-Brexit, the U.K. is free to change these rules.[32] Currently, the Data Protection Act contains provisions parallel to those of the regulation, such as Article 22(1).[33]

Automated Decision-Making for Migration Decisions

Automated decision-making is not new in the public law sphere, and concerns have been raised, especially when it is used for high-stakes public decisions, from the allocation of welfare benefits to predicting the risk of recidivism.[34] As discussed below, both international organizations and states have used automated decision-making for immigration and refugee decisions, and its use is likely on the rise, with many countries facing severe backlogs and longer-than-expected processing times.[35]

Uses of Automated Decision-Making by International Organizations and States

The United Nations

In 2018, the U.N. released a report entitled “United Nations Activities on Artificial Intelligence,” listing how agencies use artificial intelligence, from screening job candidates to studying population migration within Somalia.[36] The UNHCR currently sorts refugee applications through various categories and procedures. The regular refugee status determination is examined on an individual basis by a trained eligibility officer, with an accelerated process available when a special protection need arises or for particular groups.[37] The simplified refugee status determination procedure pre-populates forms with legal analysis and any relevant country-of-origin information for the sake of efficiency.[38] Additionally, there is a prima facie approach to admit large groups when, on the face of it, they meet the criteria. The UNHCR’s Procedural Standards, as well as its data protection policies, show much of the intake management is digitized, though nothing indicates the agency currently uses automated decision-making to process refugee applications.[39]

The EU’s Failed Program

The EU funded and piloted a program that used automated decision-making to screen non-EU nationals at the border, known as iBorderCtrl and then iCROSS.[40] A virtual border guard asked the non-EU national questions; if they passed, they could continue. If not, they were taken to a border control point, where their biometric information was taken and a human agent reviewed the case and made an assessment.[41] Facing controversy, including a lawsuit by Patrick Breyer, a member of the European Parliament, and a decision by the Court of Justice of the EU, the program ended.[42]

Individual States

Germany has partly digitized the asylum-seeking process, with name translation and dialect recognition as examples of now-automated components.[43] Its tools assess the plausibility of information submitted by refugee applicants but have documented error rates: one report indicated that of the 6,000 times the process was used, it could have produced approximately 900 false results, an error rate of roughly 15%.[44]

Canada has been using automated decision-making in its immigration decisions since 2014.[45] Its system divides applications into simple and complex cases. Simple cases are reviewed and decided by an automated process; complex cases require a level of human review.[46] There have been concerns about the impact of these decisions on refugees, and in 2021, Canada announced it was halting the use of algorithms for asylum-based decisions.[47]

The U.K. has been a notorious user of automated decision-making in immigration decisions.[48] In 2018, it deported over 7,000 students, revoking their visas immediately without appeal, after they were erroneously accused of cheating on an English language exam.[49] Later reports revealed that the automated decision-making used by the testing organization to spot fake results was flawed.[50] The U.K. Home Office has since expanded its use of automated decision-making for immigration decisions, though applicants are not told that decisions about their applications are being made by a computer.[51] One such process used by the Home Office is the “sham marriage” tool, which identifies eight undisclosed risk factors, flagging those who meet them for review.[52] This results in an intrusive investigation and an elongated process for the individuals flagged.[53] While preventing sham marriages is a legitimate aim, reports show that the automated decision-making has disproportionate negative effects on certain nationalities, including Bulgarian, Greek, Romanian and Albanian people.[54] Automated decision-making is also used in the processing of other visa applications, though the U.K. Home Office refuses to say to what extent.[55]

In 2019, the UNHCR began sharing biometric information of refugees with U.S. Citizenship and Immigration Services, raising concerns about how that information is used, such as for surveillance.[56] The U.S. uses automated tools in immigration and refugee matters, including for determining the potential removal of immigrants[57] and placement for refugees.[58] This includes the “Annie MOORE” program, a matching system designed to provide optimal placement and employment for refugees.[59]

Effects of Using Automated Decision-Making

The past decade has seen a dramatic increase in the number of refugees.[60] This escalated in 2015, when over a million refugees entered Europe; Syrian refugees fleeing civil war were the largest group represented.[61] In 2022, an ongoing tally at the UNHCR’s Operational Data Portal focusing on Ukrainian refugees counted 7,785,514 refugees fleeing Ukraine since Feb. 24 (as of Nov. 1, 2022), with 4,460,847 registered for temporary protection or similar schemes throughout Europe.[62] This has added enormously to an already stressed system. With this increase in the number of people in a highly vulnerable situation, it is crucial to have efficient systems in place to process applications.

Simultaneously, it is difficult not to think about those for whom automated decision-making will make a wrong decision and lead to the deprivation of their rights. Many organizations and governments are not transparent about their use of automated processes, especially when making immigration and refugee decisions.[63] For example, the U.K. government’s website has a standard disclaimer on its refugee and immigrant visa pages, including on the Homes for Ukraine site, stating that an application decision will not be based solely on automated decision-making.[64] This is the conundrum of Article 22 of the Data Protection Act and the General Data Protection Regulation: it specifies that decisions cannot be made solely by automated decision-making.[65] What that looks like in practice, however, is open to interpretation. The vagaries of Article 22 have prompted calls to repeal it; others feel it is an important safeguard, even in flawed form, and should instead be reformed.[66]

Due to a lack of transparency over how automated decision-making is used, it is difficult to say definitively whether Article 22 is being breached. Even with some human oversight, there is a danger of automation bias.[67] Automation bias is the propensity of individuals to believe the decisions of an algorithm even when they contradict their instincts or training.[68] In essence, humans tend to believe what computers tell them, not appreciating that machines are as fallible as the humans who programmed them. Trained professionals, including pilots and clinicians, have been documented disregarding their own judgment in favor of an incorrect conclusion produced by automated decision-making.[69] If automated systems are deciding immigration or refugee placements, with the priority being efficiency, it is highly likely that automation bias will lead to officials rubber-stamping the decisions given to them by the system.[70] This may not contravene the regulation to the letter, but it certainly goes against the spirit of Article 22.


In September 2021, the U.N. High Commissioner for Human Rights issued a statement warning about problematic implementations of artificial intelligence and their impact on human rights. In her statement, Commissioner Bachelet cautioned, “Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights.”[71] The accompanying report advocates a human rights-based lens when deploying artificial intelligence, highlighting fundamental human rights problems intrinsic in the algorithms.[72] These include biased training data leading to incorrect results and the opaque, “black box” nature of machine learning, which resists explanation of how decisions are reached.[73] The report also discusses border-management concerns and the consequences of faulty decisions, such as arbitrary arrest and the perpetuation of discrimination from data that reflects historic ethnic and racial biases.[74]

The report was largely a response to the increasing use of artificial intelligence and automated decision-making by governments that may be quick to implement efficient technology without due diligence.[75] As countries face growing backlogs of immigration visa applications, and as refugee cases continue to increase, it is inevitable that automation, if not already employed, will be needed at some level to handle the volume. With overwhelming numbers of people needing to find safety, there is a benefit to using automated decision-making to help people leave incredibly dangerous, life-threatening situations as quickly as possible. At this intersection of efficiency and protection of human rights, how does one balance a system that can accelerate outcomes, getting more people to safety, with one that can also strip them of fundamental human rights?

When considering this, it is important to weigh the benefits of using automated decision-making, the potential ramifications, and the role international organizations must play in regulating this technology. The UNHCR, for example, delegates refugee applications directly to member states that have the infrastructure to process them; it is primarily their responsibility.[76] Though the U.N. may not be using automation to inform refugee decisions, many of its member states currently use or have used automated decision-making for immigrant or refugee determinations, with varying levels of regulation. Some, like the U.S., do not even have comprehensive data privacy legislation or protections.[77] Therefore, it is the responsibility of the U.N. and other international organizations to provide guidance and consistency to their members and ensure the technology being deployed by states is compatible with human rights protections. Automated decision-making alone is not good or bad; it is neutral. It is the way human beings use it that determines its value, and there is clear value in using automation in the face of crisis and backlog. The problem comes with unexplainable or undetectable errors, wrongful denials of applications and the lack of a framework to recognize and intervene when this occurs.

Ideally, in addition to the appeals process for negative decisions, every flagging or refusal of an application would be blindly reviewed by a human being. Realistically, a more efficient approach would be to sub-sample both negative and positive decisions for blind human review. This would help counteract automation bias and ensure human oversight of at least some refusals. By cross-checking the reviews against the initial decisions, potential error rates and discrepancies could be uncovered, including any bias or commonalities in those errors. This would allow for an overall increase in processing speed while aiding error detection.
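The sampling-and-cross-checking audit proposed above can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the case data, the `reviewer` function, and the sample rate are invented for demonstration, not drawn from any real system.

```python
import random

def audit(decisions, human_review, sample_rate=0.1, seed=42):
    # decisions: list of (case_id, automated_outcome) pairs.
    # human_review: callable given only the case id (not the automated
    # outcome), returning the reviewer's own blind decision.
    # Returns per-outcome disagreement rates between algorithm and reviewers.
    rng = random.Random(seed)
    n = max(1, int(len(decisions) * sample_rate))
    sample = rng.sample(decisions, n)  # draws approvals and refusals alike
    totals, disagreements = {}, {}
    for case_id, outcome in sample:
        totals[outcome] = totals.get(outcome, 0) + 1
        if human_review(case_id) != outcome:
            disagreements[outcome] = disagreements.get(outcome, 0) + 1
    return {o: disagreements.get(o, 0) / totals[o] for o in totals}

# Hypothetical demonstration: 10 automated decisions, and a blind
# reviewer who happens to disagree with the algorithm on cases 3 and 7.
decisions = [(i, "approve" if i < 6 else "refuse") for i in range(10)]

def reviewer(case_id):
    automated = "approve" if case_id < 6 else "refuse"
    if case_id in (3, 7):
        return "refuse" if automated == "approve" else "approve"
    return automated

# sample_rate=1.0 reviews every decision here; a real audit would use
# a small fraction of the caseload.
rates = audit(decisions, reviewer, sample_rate=1.0)
```

Because disagreement rates are broken out by outcome, the same cross-check could be extended to break them out by nationality or other attributes, surfacing exactly the kind of disproportionate effects reported for the U.K.'s "sham marriage" tool.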

International organizations need to create clear, unified processes on how automated decision-making may be used and under what circumstances. With organizations like the U.N. comprising 193 sovereign member states, it is not reasonable to suggest a “one-size-fits-all” approach. Instead, there must be rules and guidance that can be flexibly tailored to the needs of member states and the demands of the time, especially given the speed with which technology changes. There should be a robust, fundamental framework that new machine learning models, including automated decision-making systems, must satisfy. As we continue to move further into the digital world, it is crucial that humans and international organizations remain at the helm to ensure technology is used responsibly in the physical world.

Alexandra Harrington graduated with her J.D. from Albany Law School in December 2022. She also holds an LLB in Scots law from the University of Edinburgh School of Law. She is particularly interested in civil liberties and cross-border concerns with uses of AI. She plans to qualify in the U.S. and U.K. in order to practice litigation with an international focus.

This article was an entry in the 2022 Albert S. Pergam International Law Writing Competition sponsored by NYSBA’s International Section. For more information, please see NYSBA.ORG/ILP.

[1] See Alina Selyukh, Optimized Prime: How AI and Anticipation Power Amazon’s 1-Hour Deliveries, Nat’l Pub. Radio, Nov. 21, 2018,

[2] See Tim De Chant, Amazon Is Using Algorithms With Little Human Intervention To Fire Flex Workers, ArsTechnica, June 28, 2021,

[3] See Astrid Ziebarth and Jessica Bither, Automated Decision-Making in Migration Policy: A Navigation Guide, Part A. Processing Visa Applications, German Marshall Fund, Nov. 18, 2021; see also The Use of Digitalisation and Artificial Intelligence in Migration Management, EMN-OECD Inform, Feb. 2022,

[4] See Tatiana Kazim and Sara Lomri, Time To Let In the Light on the Government’s Secret Algorithms, Prospect Mag., Mar. 2, 2022,

[5] See What Is Automated Decision-Making and Profiling? Information Commissioner’s Office,

[6] See The People’s Guide to AI, 9, Allied Media,

[7] 1951 Refugee Convention, U.N.,

[8] See What Is a Refugee?, UNHCR,

[9] See Historical Overview of Immigration Policy, Ctr. For Immigration Studies,

[10] See id.

[11] See The Nation’s Immigration Laws, 1920 to Today, Pew Research Center, Sept. 28, 2015,

[12] See Domeneco Montanaro, How the ‘Replacement’ Theory Went Mainstream on the Political Right, Nat’l Pub. Radio, May 17, 2022,

[13] See Lorena Gazzotti, Coming to the Aid of Drowning Migrants? Get Ready To Be Treated Like a Criminal, The Guardian, Dec. 20, 2017,

[14] See UNHCR: Figures at a Glance,

[15] Migration Within the Mediterranean, Missing Migrants Project,

[16] See How Many Ukrainians Have Fled Their Home and Where Have They Gone? BBC News, Apr. 28, 2022; see also Almost 6.5 Million People Internally Displaced in Ukraine, IOM U.N. Migration, Mar. 21, 2022,

[17] See id.

[18] See U.K. Government, Homes for Ukraine Sponsorship Scheme,

[19] See James Tapper, Ukrainian Refugees in U.K. Face Homelessness Crisis as Councils Struggle To Find Hosts, The Guardian, Oct. 30, 2022,

[20] The Universal Declaration of Human Rights, U.N.,

[21] See Alexandra R. Harrington, International Organizations and the Law, 242.

[22] See International Covenant on Civil and Political Rights, U.N. Human Rights Office of the High Commissioner, General Resolution 2200A XXI, art. 6, 7, 9, Dec. 16, 1966,

[23] See Harrington at 249.

[24] See Convention and Protocol Relating to the Status of Refugees,

[25] See id.

[26] See Convention for the Protection of Human Rights and Fundamental Freedoms at 5,

[27] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119) 1 (hereinafter GDPR).

[28] GDPR at art. 4, 15, 17.

[29] GDPR at art. 1.

[30] See id. at 13(2)(f), 14(2)(g).

[31] See id. at art. 22(1).

[32] Data Protection Act 2018, hereinafter DPA, c. 12 (U.K.).

[33] DPA at Article 22(1).

[34] See Malcolm Langford, How Will Artificial Intelligence Affect International Law? The Digital Leviathan: Automated Decision-Making and International Human Rights, 114 AJIL Unbound 141, 142, Apr. 27, 2020,

[35] See Camilo Montoya-Galvez, U.S. Immigration Agency Moves To Cut 9.5 Million-Case Backlog and Speed Up Processing, CBS News, Mar. 29, 2022,

[36] See U.N. Activities on Artificial Intelligence,

[37] See UNHCR Procedural Standards at 34, UNHCR,

[38] See id at 190.

[39] See id. at 2–32, 34.

[40] Intelligent Portable Border Control System, European Commission: CORDIS,

[41] See Kristina Penner, Automating Society 2019: European Union, Algorithm Watch,

[42] See Peter Molnar, European Court Supports Transparency in Risky Border Tech Experiments, EDRi, Dec. 16, 2021,

[43] See Digitalising the Asylum Procedure, Federal Office for Migration and Refugees, (last visited May 20, 2022).

[44] See Submissions to the Report of the United Nations special rapporteur on extreme poverty and human rights, Algorithm Watch, June 11, 2019,

[45] See Petra Molnar, Bots at the Gate. Univ. of Toronto IHRP (Sept. 2018),

[46] See id.

[47] See Ziebarth, A Navigation Guide.

[48] See Chris Baynes, Government ‘Deported 7,000 Foreign Students After Falsely Accusing Them of Cheating in English Language Tests, The Independent, June 14, 2019,

[49] See id.

[50] See id.

[51] See Kazim, Time To Let In the Light.

[52] See id.

[53] See id.

[54] See Tatiana Kazim, Home Office Refuses To Explain Sham Marriage Algorithm, Free Movement,

[55] See id.

[56] See Roxana Akhmetova, Efficient Discrimination: On How Governments Use Artificial Intelligence in the Immigration Sphere To Create and Fortify ‘Invisible Border Walls,’ Working Paper 140, Centre on Migration, Policy and Society, Univ. of Oxford, 2020,

[57] See Estefania McCarroll, Weapons of Mass Deportation: Big Data and Automated Decision-Making Systems in Immigration Law, 34 Geo. Imm. J. 705, 715 (Aug. 19, 2020),

[58] See Ziebarth, A Navigation Guide.

[59] See id. at 23.

[60] Migrant Crisis: Migration to Europe Explained in Seven Charts, BBC News, Mar. 4, 2016,

[61] See id.

[62] See Ukraine Refugee Situation, UNHCR Operation Data Portal,

[63] See, e.g., Ziebarth, A Navigation Guide.

[64] U.K. Government, Ukraine Visa Sponsorship, Automated Decision Making and Profiling,

[65] See GDPR art. 22(1); see also DPA art. 22(1).

[66] See Kazim, Time To Let In the Light; see also LawPod UK Ep. 163: Computer Says No! Automated Decision, One Crown Office Row,

[67] See Kazim, Sham Marriage.

[68] See id.; see also Kate Goddard, Abdul Roudsari & Jeremy C. Wyatt, Automation Bias: A Systemic Review of Frequency, Effect Mediators, and Mitigators, 19 J. Am. Med. Info. Assoc. 121 (Jan–Feb 2012),

[69] See Goddard, Automation Bias.

[70] See Kazim, Sham Marriage.

[71] See Artificial Intelligence Risks to Privacy Demand Urgent Action: Bachelet, U.N. OHCHR, Sept. 15, 2021,

[72] See id. at 10–11.

[73] See The Right to Privacy in the Digital Age: Report of the United Nations High Commissioner for Human Rights at 5–6, U.N. OHCHR, Sept. 13, 2021.

[74] See id. at 6–7.

[75] See Bachelet.

[76] See UNHCR Procedural Standards at 14.

[77] See Thorin Klosowski, The State of Consumer Data Privacy Laws in the U.S. (And Why It Matters), N.Y. Times, Sept. 6, 2021,
