
A disturbing new TikTok trend powered by artificial intelligence is leading to frantic 911 calls from terrified people who believe a stranger has broken into their homes.
The viral prank uses AI tools to fabricate hyper-realistic images or videos of a so-called “homeless man” sneaking into someone’s house, rummaging through the refrigerator, or even lying on their bed. Pranksters then send these fake clips to family members or friends, who are tricked into thinking the footage is real and call the police.
According to The New York Times, police departments in multiple states have now responded to what appeared to be home invasions, only to discover that the supposed intruder was a computer-generated illusion.
Authorities near Detroit, Michigan, said they’ve received several such reports. The West Bloomfield Police Department confirmed that some residents had fallen for the AI prank and stressed that it wastes critical resources.
“Here’s the problem: officers are responding FAST using lights-and-sirens to what sounds like a call of a real intruder — and only getting called off once everyone realizes it was a joke,” explained the Yonkers Police Department in New York in a Facebook post. “That’s not just a waste of resources… it’s a real safety risk for officers who are responding and for the family members who are home if our officers get there before the prank is revealed and rush into the home to apprehend this ‘intruder’ that doesn’t exist.”
Experts say the prank is disturbingly easy to pull off. Greg Gogolin, a professor and director of cybersecurity and data science at Ferris State University, demonstrated how quickly such technology can be used to manipulate reality.
“It’s frustratingly easy to do,” Gogolin said, noting that he created a program in just a few hours to show how AI can alter images. The demonstration, he explained, used a machine learning technique known as face swapping.
The software can convincingly merge facial features from one person into another image or video, producing visuals that look nearly indistinguishable from real footage.
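The article doesn't detail how Gogolin's program works, but the core operation behind any face swap is simple: locate a face region in one image and paste or blend those pixels into the same region of another. The sketch below illustrates just that pixel-level step on synthetic images; the array sizes, bounding box, and `alpha` blend weight are illustrative assumptions, not details from his demonstration (real tools add face detection, landmark alignment, and neural refinement on top).

```python
import numpy as np

def swap_region(dst, src, box, alpha=1.0):
    """Copy a rectangular 'face' region from src into dst.

    box   -- (top, left, height, width) of the region in both images
    alpha -- blend weight; 1.0 is a hard paste, lower values feather
             the swapped pixels into the destination (hypothetical knob)
    """
    t, l, h, w = box
    out = dst.astype(float).copy()
    patch = src[t:t + h, l:l + w].astype(float)
    out[t:t + h, l:l + w] = alpha * patch + (1 - alpha) * out[t:t + h, l:l + w]
    return out.round().astype(dst.dtype)

# Two synthetic 64x64 grayscale "photos": one dark, one bright.
target = np.zeros((64, 64), dtype=np.uint8)
donor = np.full((64, 64), 200, dtype=np.uint8)

# Pretend a face detector returned this bounding box in both images.
face_box = (16, 16, 32, 32)
result = swap_region(target, donor, face_box, alpha=1.0)
```

In practice the bounding box would come from a face detector (OpenCV's Haar cascades or dlib's landmark model are common choices), and modern deepfake tools replace the naive blend with a neural network that matches lighting and expression, which is why the results now look so convincing.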
Gogolin warned that once these tools are released, they often end up in the wrong hands. “They share that out or sell it. … It’s dispersed and that’s where the real danger is because people without any technical background can then utilize that the way they wish,” he said.
While older versions of AI image generators often made obvious mistakes — like limbs in the wrong place or people with extra arms — the technology has advanced dramatically. “You might generate something and an arm will be off, the elbows are in the wrong place. It used to be you would often see people with like three arms. A long arm, a long leg, the dynamics were not correct. A lot of that has been corrected or at least drastically improved with the newer versions,” Gogolin noted.
He also cautioned that law enforcement agencies need better cyber training to deal with AI-related crimes. “There are very few degreed investigators that have a cyber security background, let alone a computer science background particularly at the local level, even at the state level,” Gogolin said.