
Russia cracks down on AI-generated beauty ads to protect patients

Fake before-and-after photos are flooding social media—now Russia is fighting back. Could this law change how beauty clinics market their services forever?

The image shows an advertisement for a beauty treatment featuring a man and woman embracing each other, with two boxes of cream in the background. The poster has text written on it, likely describing the product and its benefits.


Social Media Feeds Flooded with AI-Generated Beauty—But the Results Are Far from Real

Today's social media feeds are dominated by clinic ads showcasing flawless transformations. From smartphone screens, eternally youthful models with perfect proportions, porcelain skin, and unnaturally lifted eyebrows and eye corners gaze back at viewers. But behind this polished aesthetic lies not a surgeon's skill, but the work of neural networks.

To protect consumers of medical services, Russia's State Duma has begun drafting a law requiring mandatory AI-content labeling in advertisements for medicine and beauty treatments, RIA Novosti reports. The initiative was proposed by MP Dmitry Svishchev, who argued that it is time to put an end to digital deception. According to him, the unmarked use of AI in medical and beauty advertising poses a direct risk to patients' health.

A Reality Check

The bill would require clinics to add a watermark or the disclaimer "Image generated by AI" if a photo does not depict a real patient. Too often, clients sign contracts expecting results like those in the ads—only to find the reality falls far short.

Meanwhile, Ilya Sopin, a lawyer with the Omsk Regional Bar Association and a lecturer in criminal law and criminology at F.M. Dostoevsky Omsk State Pedagogical University, explains that the absence of the term "neural network" in legal codes does not mean anything goes. On the contrary, the use of AI-generated models automatically places advertising in the high-risk category, and replacing real results with synthetic faces violates several fundamental legal requirements.

"As of March 26, 2026, Russia has no specific regulation explicitly requiring clinics to label AI-generated images in medical and cosmetic advertising as a mandatory disclosure," Sopin notes. "However, the use of AI-enhanced visuals—particularly those mimicking 'procedure results'—already falls into a high-risk zone. If a consumer perceives an AI-generated image as a 'real outcome' of medical treatment when it is, in fact, synthetic, regulators may interpret this as misleading—either through 'deceptive information' or 'concealment of material facts.'"

The expert emphasizes that medicine is a strictly regulated field—one where you can't simply "paint a pretty picture." Under current law, medical advertising cannot reference specific cases of recovery or personal testimonials, yet AI-generated images often simulate precisely that "happy patient" effect.

What's more, the use of neural networks undermines the informational standards the state imposes on clinics. Sopin points out that paid medical services are governed by the Law on Consumer Rights (Articles 8–10), which sets strict transparency requirements. When AI blurs the line between fiction and reality, it doesn't just mislead—it erodes trust in an industry where accuracy can be a matter of health.

Practitioners Must Provide Accurate Information on Methods, Risks, and Expected Outcomes

In this context, courts could view AI-generated imagery that misrepresents real results as a tool for creating false expectations, Sopin emphasized. This connection is particularly significant in cases involving allegations of deception and in establishing a causal link between advertising and a patient's choice. In such situations, administrative liability may apply.

"Beauty" at Half the Price

The issue of labeling neural network-generated content goes hand in hand with the question of professional qualifications. Today's market is flooded with at-home specialists, thousands of whom call themselves cosmetologists—yet few hold proper medical degrees.

Currently, all manner of fillers and botulinum toxins can be ordered on online marketplaces at half or even a third of the price of licensed products. At best, such procedures will produce no results at all; at worst, they may pose serious health risks.

Olga Khrestyanova, a cosmetologist from Omsk, confirmed that the "perfect" faces in advertisements create a dangerous trap—not just for wallets, but for patients' health. Practitioners are increasingly encountering clients who demand results based on edited images.

"The core problem is that neural networks depict things that are impossible in real life," Khrestyanova told Omsk-Inform. "Young women bring me these photos and say, 'I want to look like this.' A qualified specialist will refuse and explain the risks, while an unscrupulous one will take the money and inject a dubious substance."

To ensure a beauty treatment doesn't end in an emergency hospital visit, the expert urged Omsk residents to exercise caution. She noted that reputable professionals prioritize keeping clients informed and at ease.

"For your own peace of mind, don't hesitate to ask for documentation," Khrestyanova advised. "You have every right to see the clinic's license, the doctor's diploma, and the product's certificate with a valid expiration date—if a specialist is confident in their work, they'll never refuse such a request."

A Trap for the Mind

At a deeper level, the problem lies not only in legal nuances but in how our brains process visual content. Even when people know an image is AI-generated, they subconsciously continue comparing themselves to digital ideals. Psychologists have already dubbed this phenomenon "digital dysmorphia": a distorted body image that develops through excessive use of filters and social media.
