The Limits of AI in Immigration Law
- by Ronalee Carey Law
January 2026

We added a new member to our family early this year. Her name is Roberta. She has quickly become a cherished member of our little tribe. We love watching her to see what she will do next. The pets still don’t quite know what to make of her, but for the most part, will move aside for her as she makes her way from room to room.
Roberta is a robot vacuum. And she not only vacuums, but she also mops. She has the cutest little mop pads that she spins around. When she’s done her work, she empties her accumulated dirt and dirty mop water into a container in her home base. She sends us a notification when these need to be emptied.
Robot vacuums are amazing things, and a testament to human ingenuity and engineering. However, Roberta has not completely replaced me. Roberta’s sensors keep her a certain distance from objects like furniture, and she can’t move things like potted floor plants to vacuum behind them. So, a few times a month, I still lug out the traditional vacuum and mop and clean where Roberta can’t.
Artificial intelligence is very much like Roberta. It has its uses and its limitations. As I wrote in a previous newsletter, ‘Why I’m not Worried my Job Will be Taken Over by a Robot’, chatbots and other forms of artificial intelligence cannot provide emotional support through a stressful immigration journey. Though AI has come a long way since I wrote that newsletter in 2018, it still needs human oversight. At our firm, we use an AI writing assistance tool called Grammarly. It’s great for spotting spelling and grammatical mistakes and can suggest improved sentence flow. I use these tools all the time; Grammarly works much better than traditional word processing tools. However, Grammarly wanted me to put a comma between ‘vacuum’ and ‘mop’ in the preceding paragraph. Putting a comma there would have changed the meaning of the sentence in a way that wouldn’t have reflected what I wanted to say. So, I clicked ‘dismiss’ to get rid of the accusatory blue line Grammarly put under those words.
Grammarly is the only AI tool our law firm currently uses. As technology companies continue to push and advertise AI, we have considered other tools. Each time, we’ve found concerns with the technology that have made us choose not to proceed.
Security Issues
AI programs routinely scan and store the information they encounter. What does the program have access to? Does it store client information, and if so, where and for how long? In what country are the storage servers located, and what laws allow that country’s government to access the stored information? Can we manually delete that information (and do we as a firm have time to do so)?
One question that often gets overlooked when considering security is: Does the company offering the program have its own servers for data storage, or does it contract this out?
Fireflies.ai, a note-taker program, came highly recommended to us. Its security compliance levels are high, but its list of sub-processors is long. We would need to continuously monitor data breaches across US companies, as well as US laws governing government access to AI databases. This would certainly be of major concern to any of our LGBTQIA+ clients coming from the USA in the current political climate.
Further, other staff at Ronalee Carey Law were able to review a meeting transcript via a forwarded link, without having to log in.
A call to The Law Society of Ontario’s Practice Management Helpline confirmed that any transcript the AI notetaker produced would need to be reviewed for mistakes, and that clients would need to be notified of the AI’s use and give their permission.
We decided it wasn’t worth the headache!
Environmental and Humanitarian Concerns
We also had concerns about the energy and water AI consumes. Being considerate of the environment is not a new concern for our firm. In 2021, I wrote an article about Climate Migrants. It’s no surprise, then, that we would want to limit our use of something contributing to the problem. The United Nations and MIT Technology Review have both discussed the urgent need for AI programs to become more energy- and water-efficient before they expand further. A Meta AI data centre is currently running nearby residents’ water taps dry and filling what water does flow with dark sediment. The major companies are giving little thought to local water systems or the environment.
We’d rather leave the energy and water for life-saving AI programs like early cancer detection.
Hallucinations – AI programs make things up
AI programs are designed to make users happy. They don’t weed through sources to check for quality. They simply find things randomly tossed into the internet’s abyss that match what you ask for and spit them back at you, or mash together material from different sources like a game of Mad Libs. You may have heard the sentiment “Don’t trust Wikipedia”, but Wikipedia at least lets you follow a citation back to the source material, which may or may not be of use to you. The same caution applies to the information AI provides, yet we don’t hear “Don’t trust ChatGPT” from those around us (yet).
Sometimes the hallucinations can be funny, like when Google’s AI told users to put glue on pizza, or to eat rocks. X users have also reported that Google’s search engine is botching basic calculations that have worked reliably for a decade.
The hallucinations are harder to spot when they come with real-sounding names and references. Yet Stanford University reports that AI hallucinations occur in 1 out of every 6 inquiries, or more.
Of course, with AI usage rampant, legal hallucinations have already hit Ontario courts. In May 2025, Ontario Court Justice Joseph Kenkel had to reject a lawyer’s legal arguments for citing a fictitious case. Justice Myers considered holding a different lawyer in contempt of court over the same issue. The BBC also warns that over-reliance on AI erodes critical-thinking and problem-solving skills.
As the technology advances, we will likely adopt more AI tools within our firm. We would be happy to spend more time providing a high level of care for our clients by allowing technology to handle more repetitive tasks (with human review, of course). Until then, we will watch what human ingenuity continues to produce.
