The word deepfake is everywhere now — in news about political propaganda, fake celebrity videos, and debates over AI safety. But many people don’t know where the term came from or why it matters. This article is written for general readers, parents, students, and policymakers who want to understand the history of deepfakes, how the word was born, and why experts worry about it today.

By explaining the origin, we can see how a single Reddit user shaped a global debate around artificial intelligence, ethics, and online safety.

The Reddit User Who Coined “Deepfake”

In late 2017, a Reddit user posting under the name Deepfakes began experimenting with artificial intelligence. They used deep learning algorithms to swap celebrities’ faces onto performers’ bodies in explicit videos. The results, though imperfect, spread quickly online because the technology was new and shocking.

The user launched a subreddit called r/deepfakes, where thousands of people gathered to share and discuss these AI-generated clips. The name combined two ideas: deep learning and fake videos — creating the word that would soon appear in global headlines.

The Rise and Removal of r/Deepfakes

Initially, the subreddit was viewed as a curiosity, attracting hobbyists and technologists fascinated by what AI could accomplish. But as it grew, ethical concerns mounted. Every video was non-consensual: the celebrities depicted had never agreed to appear in them.

By early 2018, Reddit had banned r/deepfakes under its policy against involuntary pornography. Other platforms, including Pornhub, Twitter, and Discord, soon followed, removing or banning deepfake porn. The original Reddit account was deleted, but the word deepfake had already taken on a life of its own.

From a Subreddit to a Global Issue

What started as one user’s project quickly turned into a worldwide concern. Today, deepfakes are used for many things:

  • Special effects in movies, like de-aging actors
  • Dubbing and translation in TV and film
  • Satire, parody, and entertainment videos
  • Political disinformation and fake news campaigns
  • Non-consensual pornographic content

Research has consistently found that the large majority of deepfake material online remains pornographic, with women targeted far more often than men. The controversy that began in 2017 has never gone away.

The Legacy of the Original Deepfakes User

The Reddit user Deepfakes is gone, and the original subreddit is deleted. But their impact remains. They didn’t just make videos — they gave the world a new word and a new problem.

Every law debated about AI-generated content, every news story about fake videos, and every tech conference on online trust now uses the word deepfake, first coined by that anonymous Reddit account.

Why It Matters Today

Understanding the origin of deepfakes is important for two reasons. First, it shows how quickly technology can move from a hobbyist forum to a worldwide issue. Second, it explains why experts treat non-consensual AI media as both a legal problem and a moral one.

For parents, it’s a warning about what technology can expose teens to. For students and researchers, it’s a reminder that AI doesn’t exist in a vacuum — it shapes culture. For policymakers, it shows how one small online community created a global safety challenge.

Conclusion

The term deepfake began in 2017 with a Reddit user experimenting on a small subreddit. In less than a year, the subreddit was banned, but the word had already spread into newsrooms, universities, and government hearings.

The story of the user Deepfakes and the original subreddit is more than internet history. It is the starting point of one of the biggest debates about AI, consent, and online safety. What began as a niche project is now a central issue in how we think about truth, technology, and human rights.