Trump Signs ‘Take It Down Act’ Criminalizing Deepfake, Non-Consensual Intimate Images

WASHINGTON, May 20 (Alliance News): US President Donald Trump has signed into law the Take It Down Act, making it a criminal offense to share real or AI-generated intimate images without the subject’s consent — a move aimed at curbing the abuse of deepfake technology and protecting victims of online exploitation.

The bipartisan legislation, signed during a Rose Garden ceremony, specifically targets the spread of deepfake videos and images created using artificial intelligence.

These realistic but fabricated visuals have increasingly been used to harass individuals — particularly women — by distributing explicit content without consent.

“With the rise of AI image generation, countless women have been harassed with deepfakes and other explicit images distributed against their will.

And today we’re making it totally illegal,” President Trump said at the ceremony. The law carries a penalty of up to three years in prison for anyone found guilty of intentionally sharing such content.

The Act not only criminalizes the non-consensual distribution of intimate imagery but also requires social media platforms and websites to remove such content within 48 hours of notification from victims.

First Lady Melania Trump, in a rare public appearance at the White House, praised the bill as a “national victory” for families and young people. “This legislation is a powerful step forward in our efforts to ensure that every American, especially young people, can feel better protected from their image or identity being abused,” she said.

The law responds to a surge in non-consensual deepfakes, driven by a boom in AI tools and apps that can digitally manipulate or undress images.

Schools across the US have seen hundreds of teenagers fall victim to such abuse by peers, while celebrities like Taylor Swift have also been targeted.

While states like California and Florida already have laws against deepfake pornography, critics warn that the new federal law could expand government censorship powers.

The Electronic Frontier Foundation cautioned that it might allow authorities or powerful individuals to pressure platforms into removing lawful content.

Renee Cummings, an AI ethicist at the University of Virginia, called the law a “significant step” but stressed that success will depend on “swift enforcement, strict penalties, and adaptability to evolving tech threats.”

As AI technology evolves rapidly, the law is seen as a crucial move to close the legal gap and protect people — especially women and children — from harassment, blackmail, and mental trauma caused by non-consensual digital abuse.