Immigration agents use AI to draft use-of-force reports, a practice recently criticized by a federal judge for creating inaccurate narratives.
In a 223-page opinion issued last week concerning immigration enforcement in the Chicago area, U.S. District Judge Sara Ellis wrote that agents' reliance on tools like ChatGPT to draft these critical documents undermines their credibility and could foster inaccuracies.
Judge Ellis detailed the inaccuracy of AI-generated reports, citing body camera video in which an agent instructed ChatGPT to compile a narrative for a use-of-force report. The agent gave the program only a brief descriptive sentence and several images. The opinion noted that this method may explain the significant factual discrepancies between the official narratives submitted by law enforcement and what the body camera footage actually showed. Experts contend that using AI to draft a report that requires an officer's specific, first-hand perspective is among the worst possible uses of the technology, raising serious accuracy concerns.
Experts warn about standards and privacy

Law enforcement agencies across the country are grappling with how to create guardrails that let officers use available AI technology while maintaining professional standards. Ian Adams, an assistant professor of criminology at the University of South Carolina who serves on a task force on artificial intelligence, described the scenario the judge cited as a "nightmare." Adams explained that courts apply a standard of "objective reasonableness" when deciding whether force was justified, relying heavily on what that specific officer thought and perceived in that moment.
Generating reports this way also raises privacy concerns. Katie Kinsey, tech policy counsel at the Policing Project at NYU School of Law, noted that an agent who uploads images to a public version of ChatGPT loses control of them, potentially allowing the images to enter the public domain. That erosion of privacy protocols could further damage public confidence in how law enforcement manages its operations.
Lack of policy and visual challenges

The Department of Homeland Security did not respond to requests for comment, and it remains unclear whether the agency has guidelines on agents' use of AI. Some tech companies offer AI tools that run on closed systems and help draft reports from audio, but working from visuals is different. Andrew Guthrie Ferguson, a law professor at George Washington University Law School, noted that AI prompts involving visual material return inconsistent results, making them complicated and, for now, less reliable for accurate police reporting. Kinsey suggested that agencies should develop guardrails around these risks before deploying the technology, emphasizing the need for transparency.
