
When Seeing Is No Longer Believing: Deepfakes as a Public Health Crisis

December 3, 2025

Tags: AI · Misinformation · Digital Health · Mental Health · Health Equity · Deepfakes

As deepfake technology makes it impossible to trust what we see and hear, the implications extend far beyond fraud into mental health, emergency response, and the very foundation of public health communication. The erosion of reality itself has become a health crisis.


On a January morning in 2024, a finance worker in Hong Kong joined what appeared to be a routine video call with her company's chief financial officer. The face on the screen was familiar, the voice reassuring, the request straightforward: authorize a $25 million transaction. She complied without hesitation. The person she trusted was, after all, right there on her screen—or so she thought. Within hours, the devastating truth emerged: every face on that call, every voice, every reassuring gesture had been fabricated using deepfake technology. The $25 million was gone, transferred to criminals who had weaponized artificial intelligence to manufacture reality itself.

This isn't science fiction. This is the world we inhabit today—a world where the boundary between authentic and artificial has become dangerously blurred, where trust itself has become a vulnerability, and where the implications extend far beyond financial fraud into the realm of public health.

From Infodemic to Manufactured Reality

When Dr. François Marquis, chief of intensive care at Maisonneuve-Rosemont Hospital in Montreal, discovered his face and voice being used to sell fraudulent health products online, his first thought wasn't about his own reputation. "My primary worry is for the individuals who trust me," he explained to reporters. "It's putting a lot of damage on all physicians in Quebec and in Canada." Dr. Marquis had become an unwitting participant in a new frontier of health misinformation—one where artificial intelligence doesn't just spread false information, but manufactures credible spokespersons to deliver it.

His experience is far from isolated. Dr. Alain Vadeboncoeur of the Montreal Heart Institute found himself digitally cloned across multiple deepfake videos discussing urology, prostate cancer, and sexual dysfunction—subjects entirely outside his medical specialty. "The danger is you don't know what it is that they're taking," Dr. Marquis warned, highlighting how even seemingly harmless supplements could trigger adverse reactions or, worse, convince patients to abandon proven treatments like insulin or anticoagulants for counterfeit alternatives.

These cases illuminate a critical evolution in the information landscape. During the COVID-19 pandemic, the World Health Organization warned of an unprecedented "infodemic"—an overabundance of information, both accurate and false, that made it difficult for people to find trustworthy guidance when they needed it most. But deepfakes represent something qualitatively different: not just the spread of misinformation, but the systematic erosion of our ability to determine what is real at all.

The Mental Health Crisis Hidden in Plain Sight

The health impacts of deepfakes extend far beyond misguided medical decisions. When we examine the psychological consequences documented in clinical research, we find a cascade of harm that rivals physical assault in its severity and duration.

Consider the mental health outcomes reported by victims of image-based sexual abuse using deepfake technology. Research consistently documents rates of depression, anxiety, and post-traumatic stress disorder comparable to those seen in survivors of physical violence. "Being victimized through deepfakes can erase your sense of reality," explains therapist Francesca Rossi, who specializes in treating survivors of this form of abuse. The dissonance between knowing the imagery is fabricated while watching it look utterly convincing creates a profound psychological rupture.

For children and adolescents, the stakes are even higher. The American Academy of Pediatrics reports that young victims of deepfake pornography experience humiliation, shame, violation, and self-blame that can lead to immediate emotional distress, withdrawal from school and family, and in severe cases, self-harm and suicidal ideation. One in six minors who experience potentially harmful online sexual interactions never discloses it to anyone, and boys are even less likely to seek help, intensifying their isolation and suffering.

The trauma extends to healthcare workers themselves. A study of Romanian frontline medical staff during the COVID-19 pandemic found that clinicians who reported being affected by false news in their professional activities experienced significantly higher levels of stress, anxiety, and insomnia than their colleagues who felt insulated from the infodemic. These healthcare workers described feeling emotionally overwhelmed by fake news and reported that misinformation damaged the doctor-patient relationship, with patients increasingly distrusting their physicians.

The Equity Crisis Hiding in the Technology

The threat deepfakes pose to public health extends beyond individual psychological trauma to systemic failures in emergency response and community health. During the COVID-19 pandemic, health misinformation about vaccines, treatments, and preventive measures spread with viral efficiency across social media platforms. One analysis of Facebook posts found that roughly 46.6% of vaccine-related content contained misinformation, while fact-checking posts made up a barely larger share of the conversation (47.4%), and 28.5% of those fact-checks repeated the very false claims they were trying to correct.

Deepfakes threaten to amplify this problem dramatically. While traditional misinformation could be produced and shared by anyone, deepfakes carry the manufactured authority of trusted experts. Imagine a deepfake video of a public health leader promoting unproven treatments or discouraging vaccination, distributed during a critical phase of a pandemic response. The consequences could be catastrophic.

Vulnerable populations bear disproportionate risk. Research on deepfakes in resource-limited communities reveals critical knowledge gaps and a lack of effective detection tools. Marginalized communities, already facing systemic barriers to healthcare and lower levels of institutional trust rooted in historical discrimination, may be particularly susceptible to deepfake-enabled health misinformation. Older adults, who reported $3.4 billion in fraud losses in 2023, are especially vulnerable to sophisticated AI-generated scams.

With reported deepfake incidents rising from just 22 between 2017 and 2022 to 179 in the first quarter of 2025 alone, a figure 19% higher than the total for all of 2024, we are witnessing an acceleration that outpaces nearly every other threat to information integrity.

When Reality Becomes Infrastructure

The deepfake crisis forces us to confront a fundamental challenge to what security experts call "cognitive security": the protection of human cognitive processes from manipulation and the safeguarding of the integrity of information ecosystems. In healthcare contexts, where accurate information can mean the difference between life and death, cognitive security becomes a matter of survival.

The concept of "information integrity"—the consistency, reliability, accuracy, fidelity, safety, and transparency of an information ecosystem—provides a framework for understanding what's at stake. When deepfakes proliferate, they don't just spread individual pieces of false information; they undermine the entire foundation of trust that makes public health communication possible.

This erosion of trust has cascading effects. Canadian physicians report that health misinformation leads patients to refuse established treatments, resulting in severe consequences including preventable deaths. During COVID-19, vaccine hesitancy driven by misinformation was directly linked to thousands of preventable hospitalizations and deaths. Now imagine that same dynamic amplified by the manufactured credibility of deepfake technology.

Building Defense at Multiple Levels

Addressing the deepfake crisis requires coordinated action across technology, policy, education, and clinical practice. The urgency stems partly from how cheap creation has become: what once required extensive technical expertise and significant computing power can now be accomplished with widely available tools costing less than $400 per month. That accessibility means malicious actors, from individual scammers to state-sponsored disinformation campaigns, can produce convincing deepfakes at scale.

At the technological level, researchers are developing detection systems using the same AI techniques that create deepfakes. Many deepfakes are built with Generative Adversarial Networks (GANs), in which one neural network (the generator) fabricates content while another (the discriminator) tries to tell it apart from genuine samples. Over millions of training iterations, the generator becomes increasingly skilled at fooling the discriminator. The same dynamic can be turned around, with detection algorithms learning to identify the subtle artifacts that even sophisticated deepfakes leave behind.
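To make the adversarial idea concrete, here is a minimal, deliberately toy sketch of a GAN training loop in PyTorch. Everything in it is an illustrative assumption rather than any real deepfake system: the tiny random vectors stand in for images or audio frames, and the network sizes, batch size, and learning rates are arbitrary.

```python
import torch
import torch.nn as nn

# Toy generator: maps random noise vectors to fake "samples".
# (A real deepfake generator would emit images or audio frames.)
generator = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 32),
)

# Toy discriminator: outputs a logit scoring how "real" a sample looks.
discriminator = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1_000):
    real = torch.randn(8, 32)             # stand-in for genuine media
    fake = generator(torch.randn(8, 16))  # generator's current best fakes

    # Discriminator update: push real samples toward label 1, fakes toward 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(8, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(8, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator label fakes as real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(8, 1))
    g_loss.backward()
    opt_g.step()
```

Note that the discriminator half of this loop is already a rudimentary detector; detection research effectively inverts the arms race by training classifiers of this kind against the outputs of many different generators.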

Yet technology alone cannot solve this problem. Research tracking deepfake technology in 2024 found that many academic detection systems have become outdated, unable to identify the latest manipulation techniques circulating on social media. The arms race between creation and detection tools is ongoing, and detection will likely always lag behind the cutting edge of synthesis technology.
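Part of the reason detectors age so quickly is that many depend on low-level artifacts of one particular generation pipeline. The sketch below is a hypothetical, intentionally naive example: early GAN upsampling layers often left excess high-frequency energy in images, so a detector might threshold the fraction of spectral energy far from the center of the 2D Fourier transform. The cutoff radius and threshold here are made-up values, and once generators stop producing that artifact, a rule like this silently stops working.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray) -> float:
    """Fraction of spectral energy far from the center of the 2D FFT.

    Early GAN upsampling layers often left periodic high-frequency
    artifacts, so an unusually large ratio could flag a synthetic image.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    outer = spectrum[radius > 0.4 * min(h, w)].sum()  # 0.4: arbitrary cutoff
    return float(outer / spectrum.sum())

# Hypothetical decision rule; the threshold would have to be tuned on
# labeled real/fake examples, and it quietly breaks when generators change.
SUSPICION_THRESHOLD = 0.15
grayscale = np.random.rand(256, 256)  # placeholder for a decoded video frame
print(high_freq_energy_ratio(grayscale) > SUSPICION_THRESHOLD)
```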

At the policy level, governments and platforms are beginning to act. Regulatory frameworks are emerging that require disclosure of AI-generated content, criminalize non-consensual deepfakes, and establish liability for platforms that host harmful synthetic media. But policy implementation varies widely across jurisdictions, creating gaps that bad actors can exploit.

Perhaps most importantly, we need a fundamental shift in how we approach media literacy and trust. The old heuristic—"seeing is believing"—no longer serves us in a world where any image, any voice, any video can be fabricated with startling realism. We must develop new cognitive frameworks for evaluating authenticity, new social norms around verification, and new institutional mechanisms for establishing credibility.

The Window for Thoughtful Action

The deepfake crisis in public health is not a future threat—it is happening now. The decisions we make in the next few years about how to design detection systems, implement regulations, and educate the public will determine whether we can preserve the information integrity that public health depends on.

This requires frameworks that embed equity considerations from the start. Detection tools must work across languages and cultural contexts. Educational initiatives must reach vulnerable populations most at risk of exploitation. Policy responses must protect privacy and free expression while preventing harm. Clinical practice must evolve to help patients navigate an information landscape where manufactured reality is increasingly indistinguishable from the authentic.

The communities most likely to be harmed by deepfakes—those already marginalized, already facing barriers to healthcare, already experiencing lower levels of institutional trust—are the same ones least likely to have access to detection tools and least likely to be included in the design of solutions. This pattern, familiar from every previous wave of health technology, cannot be allowed to repeat.

The question isn't whether deepfakes will transform the landscape of health communication—they already are. The question is whether we can build the technological, policy, educational, and clinical infrastructure needed to preserve trust and protect vulnerable populations in a world where seeing is no longer believing. The window for thoughtful action is narrowing, but it has not yet closed.


This article draws on insights from reporting by CNN and CBC on deepfake fraud cases, research from the American Academy of Pediatrics and European Parliament on mental health impacts, and analysis from the World Economic Forum on emerging threats to information integrity.
