
Bob Sullivan
Dr. Maurice Sholas has a beautiful, challenging calling — he cares for very sick children. He takes on the saddest of cases, and works with families so kids with spina bifida or traumatic injuries can still “win” at life. For some, that means gaining the ability to visit the bathroom independently.
But lately, Sholas has been put in a no-win situation by artificial intelligence. His likeness was used to create a deepfake video hawking supplements, one that specifically targets Black consumers. Try as he might, he still hasn’t been able to remove all the copies that have landed on places like TikTok and Twitter.
So instead of caring for very sick children, the Harvard-educated New Orleans doctor now spends time fighting AI and learning about intellectual property law.
“What’s frustrating is that it costs money, time, effort, and relationships to protect something that should be intrinsically mine,” he told me during our interview for The Perfect Scam, the podcast I host for AARP.
There’s been a lot of talk about the problem of deepfake videos and politics — how activists might swing an election by, quite literally, putting words into a leader’s mouth. I believe consumers have become relatively sophisticated at spotting the more outlandish fakes — President Trump wearing papal garments, for example. Fake ads, on the other hand — especially those involving lesser-known figures — can be harder to discern. And they might ultimately cause more damage.
Sholas told me he knows of at least one person who bought the supplements based on the fake videos: after he told his story on local television, a victim reached out.
Sholas is not identified in the video; his appearance is altered slightly, and a fake voice is dubbed over it. But his lab coat nametag is visible.
There is very little a victim can do to get fake content removed from the Internet. Sholas first reached out to the account that posted the videos, which ultimately blocked him. The very tool used to abuse his identity was now being used to prevent him from defending himself. Initially, he says, social media companies ignored his complaints. Later, after the local story aired, some services took action, but by then, copies of the video had spread across multiple platforms. He consulted a lawyer, who pointed him instead to a PR firm.
“They said the best thing you could do is hire a PR firm basically to go out there and do a sweep of the internet and push positive content to counteract whatever misinformation is there,” he said. That kind of search engine optimization could cost up to $20,000, he was told. Instead, he has taken to posting a series of self-made videos.
“When someone borrows, to use a kind word, or steals, to use a real word, it puts me at risk, it puts my medical license at risk, and it puts my livelihood at risk. And to protect all of that, there’s nothing I can do as a small guy but spend more money,” he said.
Fake video is far more pervasive on social media than most people realize, says Frank McKenna, chief fraud strategist of a company called Point Predictive. He’s also the author of the popular Frank on Fraud newsletter.
“I see these all over TikTok, all over Instagram, all over Facebook. They’re inundating people’s news feeds; the social media platforms I don’t think are doing enough to kind of control the problem,” he told me.
NBC’s Al Roker was the victim of a similar deepfake attack about a year ago. You can watch his interview about it at this link.
“I think people probably don’t realize how many deepfakes they’re seeing as they scroll through social media. From my experience, it’s at least half the videos that you’re seeing … there’s some element of AI generation in those videos. And that’s only going to get worse,” he said. “The case will be that most of the content you’re looking at online is AI-assisted in some way … So people are going to have to get accustomed to the fact that they’re going to have to question pretty much everything. … These other celebrity deepfakes, I think, are going to surprise a lot of people, because they’re becoming more and more common.”
How hard is it to make fake videos like the ones that use Sholas’ likeness? Not hard at all, McKenna says.
“Using information off of YouTube videos, Instagram videos, or Facebook videos that you post, the criminals and scammers can take that content and put those into AI-generated videos, and make you say anything that they want,” he said. “So just a few seconds of video can create these … they call them AI avatars, and they can basically make you sell vitamins or make you sell crypto investments and things like that. So it’s not hard at all, anybody can do it and a lot of scammers are.”
And perhaps the most alarming part of this dark new trend: consumers are overconfident in their ability to spot fakes.
“The thing about AI deepfakes is 60 percent of the population thinks they can spot them, but in reality, I think a study … found that only 0.1 percent of people can actually identify those deepfakes,” he said.

