Alert for Refugee Practitioners: Undisclosed Use of Generative AI in PRRA Decisions
We want to alert colleagues to a troubling development: the unauthorized and undisclosed use of generative AI by risk assessment decision-makers. This alert stems from a judicial review (JR) we handled regarding a PRRA refusal [Singh v. MCI IMM-8235-24].
In Singh we raised challenges at the intersection of administrative law, procedural fairness, and generative artificial intelligence (AI) in the Pre-Removal Risk Assessment (PRRA) process. The matter was resolved on consent of the respondent’s counsel without further litigation. Although nothing was explicitly conceded, we regard that consent as a tacit acknowledgment that we had identified improper conduct by the officer.
Undisclosed Use of Generative AI as a Breach of Procedural Fairness
We argued that the PRRA officer breached procedural fairness by using an unknown and undisclosed generative AI program to analyze the applicant’s risk. We submitted evidence from AI detection programs (GPTZero and Quillbot) indicating an 87% to 91% likelihood that the decision’s analysis section was generated by AI. This evidence was tendered by way of a second affidavit sworn by one of our associates, and we relied on the recognized exceptions that permit a reviewing court to receive evidence that was not in the record before the officer.
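For colleagues who want to replicate this screening step, the sketch below shows one way to run a decision’s analysis section through a detection service programmatically. It is a minimal illustration only: the endpoint URL, authentication header, and response field are hypothetical placeholders, not GPTZero’s or Quillbot’s actual APIs, so consult the vendor’s own documentation for the real interface.

```python
import requests

# Hypothetical detection endpoint -- NOT GPTZero's or Quillbot's real API.
# Substitute the vendor's documented URL, auth header, and response schema.
DETECTOR_URL = "https://api.example-detector.com/v1/predict"
API_KEY = "your-api-key-here"

def ai_likelihood(text: str) -> float:
    """Return the detector's estimated probability that `text` is AI-generated."""
    response = requests.post(
        DETECTOR_URL,
        headers={"x-api-key": API_KEY},
        json={"document": text},
        timeout=30,
    )
    response.raise_for_status()
    # Field name is an assumption; real detectors define their own schemas.
    return response.json()["ai_generated_probability"]

if __name__ == "__main__":
    # Paste or extract the decision's analysis section into a text file first.
    with open("prra_analysis_section.txt", encoding="utf-8") as f:
        analysis_text = f.read()
    print(f"Estimated likelihood of AI generation: {ai_likelihood(analysis_text):.0%}")
```

Whichever tool is used, preserve the raw output, since (as in Singh) the detection results will need to be put into evidence by affidavit.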
The failure to disclose the use of AI, the specific program, and the prompts entered deprived the applicant of any understanding of the decision-making process. Drawing on relevant jurisprudence (and extending its reasoning, since we are still in uncharted waters), we argued that keeping an individual unaware of how an automated system generated a decision prevents any meaningful response, making the process procedurally unfair.
Impermissible Subdelegation (Delegatus Non Potest Delegare) and Fettered Discretion
We asserted that delegating the risk analysis to a third-party AI program violated the principle of delegatus non potest delegare. A statutory decision-maker must retain discretion and cannot subdelegate it without authority.
Generative AI systems output text based on patterns in massive training datasets that may not align with Canadian legal or administrative standards, producing arbitrary decision-making. Relying on AI for a discretionary decision that requires an individualized risk assessment is anathema to basic administrative law principles and denies the applicant the right to be (actually) heard.
Distinguishing Generative AI from Existing Automated Tools (Chinook)
We anticipated that the respondent would rely on jurisprudence permitting automated tools and sought to deal with this upfront by preemptively distinguishing the case from those involving IRCC’s Chinook software. Unlike Chinook 3+, which triages temporary residence applications without auto-refusing them or replacing the officer’s substantive analysis, the generative AI here purportedly wrote the substantive refusal and risk analysis itself. We also highlighted bizarre phrasing in which the AI referred to the decision-maker in the third person, stating that the applicant had failed to convince “the decision maker.”
Stringent Procedural Protections in the PRRA Regime
Applying the Baker-Vavilov framework, we emphasized that procedural protections must reflect the consequences of the decision. A PRRA decision engages fundamental liberty and security interests, including the risks of deportation, persecution, and torture. Because a PRRA refusal carries no appeal to the Refugee Appeal Division (RAD), it demands highly responsive justification, and an opaque AI-generated refusal inherently fails to provide it.
Privacy and Confidentiality Risks
We also raised the practical concern of potential confidentiality breaches, noting the possibility that highly sensitive personal information about the applicant’s risk profile was fed into an unknown third-party generative AI program via prompt.
Adverse Credibility Findings Without an Oral Hearing
Independent of the AI issue, we argued that the PRRA process was procedurally unfair because the officer made adverse credibility findings on a first-instance risk assessment without convening an oral hearing. Despite the applicant’s request for an interview if his veracity was in doubt, the officer assigned minimal weight to his statutory declaration, citing insufficient evidence for the torture claims. Relying on Tekie v. Canada and Ahmed v. Canada, we argued that where a negative credibility finding is central to the dismissal of a first-instance risk claim, an oral hearing is required, and its absence is a fatal breach. The officer erroneously conflated reliability (credibility) with sufficiency of evidence in an attempt to mask this finding.
Conclusion
In this “brave new world,” refugee practitioners should scrutinize PRRA decisions for undisclosed AI use and challenge such practices to ensure procedural fairness.