While health scams and misleading information are nothing new, the digital evolution and rise of artificial intelligence (AI) have caused grave concern among the medical community and online users.
Today's technology enables bad actors to manipulate videos and images of reputable health care professionals, making them appear to endorse fake products or solicit personal medical information. The phenomenon has become even more widespread on TikTok and similar social media platforms, where users struggle to distinguish misinformation from legitimate advice.
What Are Deepfakes? How They're Used in Health Scams
Deepfakes use AI to generate realistic representations of real people in synthetic media. Creating them is relatively inexpensive and straightforward, and the results can mislead even careful viewers.
Hackers gather as much information as possible on a person and train an AI model to learn how that person looks, talks, and moves. The tool then creates a seemingly exact audio clip, video, or image of them. Recently, a Hong Kong-based finance company fell for a deepfake hoax in which an employee transferred millions of dollars after joining a video call populated by deepfaked versions of the company's executives.
In 2024, Diabetes Victoria in Australia warned the public about deepfake videos that used the likenesses of trusted medical experts to promote an unproven diabetes supplement.
TikTok Shop, in particular, has become a hot spot for deepfake doctors to trick users into buying fraudulently promoted health products. According to Media Matters, popular TikTok accounts have featured AI-generated doctors hawking supplements and other health products through the platform's shopping features.
While researching the situation, Media Matters found that all the products originated from China and followed the same split-screen video format of the product in use alongside a deepfake doctor approving it.
Why Deepfake Health Scams Matter
Deepfakes are part of a broader cybersecurity and safety concern involving phishing, malware, and social engineering strategies. The content puts people at risk of falling for money scams, potentially harmful health products, and miracle cures while ruining the credibility of reputable and reliable medical experts.
The health care industry is already among the top cybersecurity targets, especially with the rise of connected medical devices and other wearable technology.
Deepfake health scams increase the risk of data theft when people share their personal medical information. On platforms like TikTok, fake doctors may collect payment for bogus products and services or push users toward dangerous, phony medical advice. These incidents further erode public trust in medical professionals and facilities.
Red Flags and Tactics of a Deepfake Doctor
Spotting a fake TikTok doctor or other deepfake health scam grows trickier as AI generation technology advances. According to the Massachusetts Institute of Technology's Media Lab, users should look for the following to detect fakes:
- Skin that looks too smooth or too wrinkled, since hackers often focus on facial transformations
- Inconsistent lighting and shadows
- Synthetic-looking hair, including mustaches, sideburns, and beards
- Fake-looking moles or freckles
- Poorly synced lip movements against audio
- Ultra-clean audio with little or no natural background noise
Scammers may create fraudulent credentials or run viral challenges to garner user attention and interest. They can exploit people's trust by impersonating a medical authority or developing a sense of urgency around purchasing a health product. Others might include medical jargon or white lab coats to make the content appear factual.
Red flags typically include information or credentials that cannot be verified against peer-reviewed studies. Likewise, accounts with disabled comments, high-pressure marketing tactics, and a minimal digital footprint are clear warning signs.
Protecting Yourself From Deepfake Health Scams
Protecting yourself from deepfake health scams is becoming increasingly complex as the technology grows more sophisticated. It is always best to contact the person who appears to endorse a health product or service to confirm the endorsement is legitimate. Leaving a comment under a post questioning its authenticity is another way to engage with the account and with other users who may also doubt whether it is real.
It is crucial to report suspected fake products and accounts spreading misinformation, and encouraging others to do the same strengthens the collective effort to mitigate risk.
Utilizing reverse image and video search tools helps pinpoint the original content source, while deepfake detectors can identify manipulations. These include Microsoft's Video Authenticator, which analyzes photos and videos and provides a confidence score indicating the likelihood that the media has been artificially manipulated.
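For readers comfortable with a short script, one rough way to approximate a reverse image check is to pull a frame from a suspicious clip and compare its perceptual hash against a verified photo of the professional. The sketch below is a minimal illustration, assuming Python with the opencv-python, Pillow, and imagehash packages installed; the file names are placeholders, not real sources.

```python
# Minimal sketch: compare a frame from a suspect clip against a known
# authentic photo using perceptual hashing. Requires opencv-python,
# Pillow, and imagehash. The file paths below are hypothetical examples.
import cv2
import imagehash
from PIL import Image


def frame_to_image(video_path: str, frame_index: int = 0) -> Image.Image:
    """Grab a single frame from a video and return it as a PIL image."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise ValueError(f"Could not read frame {frame_index} from {video_path}")
    # OpenCV returns BGR arrays; convert to RGB before handing to Pillow.
    return Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))


# Hash a frame from the suspicious clip and a verified photo of the doctor.
suspect_hash = imagehash.phash(frame_to_image("suspect_clip.mp4"))
reference_hash = imagehash.phash(Image.open("verified_photo.jpg"))

# Subtracting two hashes gives the Hamming distance between them.
distance = suspect_hash - reference_hash
print(f"Perceptual hash distance: {distance}")
```

Perceptual hashes tolerate re-encoding, resizing, and minor edits, so a small distance suggests the frame was lifted from original footage, while a large distance is simply inconclusive rather than proof of fakery.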
Social media platforms also deploy AI detection to flag generated content, along with user reporting systems and fact-checking partnerships. The process isn't perfect, though. In 2024, Meta announced a shift toward labeling AI-generated content rather than removing it outright, an acknowledgment that automated detection alone cannot keep pace with synthetic media.
Ultimately, users can best protect themselves by conducting comprehensive research and reaching out to trusted experts or organizations to authenticate health products and services.
Staying Vigilant in the Age of Synthetic Media
Deepfake doctors will likely continue to proliferate across the digital landscape despite the development of AI detection tools, and heightened cybersecurity risks make synthetic media in health care even more dangerous. Users must remain vigilant and practice due diligence to avoid scams.