While health scams and misleading information are nothing new, the shift to digital media and the rise of artificial intelligence (AI) have caused grave concern among the medical community and online users.

Today's technology enables bad actors to manipulate videos and images of reputable health care professionals, making them appear to endorse fake products or solicit personal medical information. The phenomenon has become especially widespread on TikTok and similar social media platforms, where users often struggle to distinguish misinformation from legitimate advice.

What Are Deepfakes? How They're Used in Health Scams

Deepfakes use AI to generate realistic synthetic media depicting real people. Creating one is relatively inexpensive and straightforward, and an estimated 25% to 50% of viewers cannot tell whether such videos are authentic.

Scammers gather as much information as possible on a person and train an AI model to learn how they look, talk, and move. The tool then produces a convincing audio clip, video, or image of them. Recently, a Hong Kong-based finance company fell for a deepfake hoax, wiring $25 million to scammers after receiving a convincing deepfake video of its finance director.

In 2024, Diabetes Victoria in Australia issued a press release about the circulation of deepfake videos promoting diabetes supplements. The scammers used AI technology to manipulate footage of experts from the Baker Heart and Diabetes Institute in Melbourne without their consent or endorsement.

TikTok Shop, in particular, has become a hot spot for deepfake doctors that trick users into buying fraudulently promoted health products. According to Media Matters, popular TikTok accounts have accrued over 10 million views by using AI-generated likenesses of real medical experts to sell their goods.

While researching the situation, Media Matters found that all the products originated from China and followed the same split-screen video format: the product in use on one side and a deepfake doctor endorsing it on the other.

Why Deepfake Health Scams Matter

Deepfakes are part of a broader cybersecurity and safety concern involving phishing, malware, and social engineering strategies. The content puts people at risk of falling for money scams, potentially harmful health products, and miracle cures while ruining the credibility of reputable and reliable medical experts.

The health industry is already among the top cybersecurity targets, especially with the rise of medical devices and other wearable technology. In fact, 46 hospital systems experienced ransomware attacks in 2023, up from 25 the year before. The disruptions affected 141 individual hospitals, cutting off access to computer systems and patient data.

Deepfake health scams increase the risk of data theft when people share their personal medical information. On platforms like TikTok, fake doctors may collect payment for bogus products and services, prompting users to follow dangerous or phony medical advice. These incidents further erode public trust in professionals and facilities.

Red Flags and Tactics of a Deepfake Doctor

Spotting a fake TikTok doctor or other deepfake health scam gets trickier as AI generation technology advances. According to the Massachusetts Institute of Technology's Media Lab, subtle audiovisual cues can give fakes away.

Ultra-clean audio and backdrops that never shift are telltale signs. In a real video, you might hear breathing or birds chirping in the background, and the camera might shake slightly.

Scammers may create fraudulent credentials or run viral challenges to garner user attention and interest. They can exploit people's trust by impersonating a medical authority or developing a sense of urgency around purchasing a health product. Others might include medical jargon or white lab coats to make the content appear factual.

Red flags typically include information or credentials that cannot be verified against peer-reviewed studies. Likewise, accounts with disabled comments, high-pressure marketing tactics, and a minimal digital footprint are clear warning signs.

Protecting Yourself From Deepfake Health Scams

Protecting yourself from deepfake health scams is becoming increasingly difficult as the technology grows more sophisticated. It is always best to contact the person apparently endorsing a health product or service to confirm its legitimacy. Leaving a comment under a post asking about its authenticity is another way to engage the account and alert other users who may question whether it is real.

It is crucial to report suspected fake products and accounts spreading misinformation, and encouraging others to do the same strengthens the collective effort to mitigate risk.

Reverse image and video search tools help pinpoint a clip's original source, while deepfake detectors can flag manipulations. One example is Microsoft's Video Authenticator, which provides a real-time confidence score while a video plays and detects subtle grayscale elements or fading that the naked eye may miss.
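To make the reverse image search idea concrete, here is a minimal, self-contained sketch (not any specific tool's implementation) of the "average hash" technique that underlies many near-duplicate image matchers: an image is reduced to a short fingerprint, and fingerprints that differ in only a few bits likely come from the same source picture. The 8x8 number grids below are stand-ins for images that real tools would first downscale to that size.

```python
# Average hash (aHash) sketch: fingerprint an image, then compare
# fingerprints by Hamming distance. Small distance => likely the
# same image, even after re-encoding or brightness tweaks.

def average_hash(pixels):
    """Return a 64-bit fingerprint: 1 where a pixel exceeds the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# A toy "original" image, a re-encoded copy (slightly brightened),
# and an unrelated image, each as an 8x8 grayscale grid.
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
brightened = [[min(255, p + 10) for p in row] for row in original]
unrelated = [[(r * c * 37) % 256 for c in range(8)] for r in range(8)]

d_copy = hamming(average_hash(original), average_hash(brightened))
d_other = hamming(average_hash(original), average_hash(unrelated))
print(d_copy, d_other)  # the copy's distance is far smaller
```

A reverse image search service applies the same principle at scale: it precomputes fingerprints for billions of indexed images, so a suspicious video frame can be matched back to the stock photo or original footage a scammer lifted it from.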

Social media platforms also employ AI detection, flagging generated content, deploying user reporting systems, and establishing fact-checking partnerships. The process isn't perfect, though. In 2024, Meta announced a new approach to labeling AI-generated content across Facebook and Instagram when detected by its assessment tools. However, the detection is not always accurate and has drawn criticism from users who claim their content is real.

Ultimately, users can best protect themselves by conducting comprehensive research and reaching out to trusted experts or organizations to authenticate health products and services.

Staying Vigilant in the Age of Synthetic Media

Deepfake doctors will likely continue to proliferate across the digital landscape despite advances in AI detection tools. Heightened cybersecurity risks make synthetic media in health care even more dangerous. Users must therefore remain vigilant and practice due diligence to avoid scams.