
OpenAI’s Sora App Sparks Global Debate on AI Deepfakes


OpenAI’s Sora App Challenges Reality and Public Trust

Millions of people have fallen for Sora, OpenAI’s new short-video platform, which features strange, hyperrealistic AI-generated clips. The app makes it hard to tell what is genuine and what is not, serving up scenes like Queen Elizabeth plunging off a pub table or aliens frying burgers.

Experts, however, warn that this new form of entertainment could create a larger problem: it may erode people’s trust in anything they see online. Sora’s rapid rise shows how generative AI is reshaping what people around the world see and believe.

Source: Google Play

Experts Warn of Deepfake Threat to Digital Authenticity

According to Sam Gregory of the human rights group Witness, Sora could create a “fog of doubt” in digital spaces, where plausible deniability makes it harder to distinguish real news from fake.

Gregory said, “The biggest risk is that Sora makes people less sure that we can tell the difference between real and fake.” When manipulated realities take over social media timelines, he added, even genuine evidence may be dismissed as fraudulent.

AI Re-Creations Anger Artists and Families

Users of Sora’s text-to-video engine quickly began generating videos of celebrities and fictional characters. Within days, social media was flooded with clips of Hitler arguing with Michael Jackson and Robin Williams delivering jokes.

The late comedian’s daughter, Zelda Williams, spoke out against the videos, calling them “disgusting, over-processed hot dogs.” Families of deceased celebrities and rights holders have asked OpenAI to block unauthorized likenesses on its platform.


Hollywood Pushes Back Over Unauthorized Digital Likenesses

Major studios and unions say OpenAI used actors’ likenesses without permission. A representative for Fred Rogers Productions said that AI versions of “Mister Rogers” contradicted his values and were not appropriate for children.

The company said it had contacted OpenAI to restrict the use of Rogers’s image. Hollywood more broadly is worried about who controls the rights to actors’ digital likenesses and how performers should be compensated for their use.

OpenAI Promises New Controls and Revenue-Sharing Options

In response to growing criticism, OpenAI CEO Sam Altman promised to let rights holders “opt in” before characters or likenesses appear in Sora videos. The company also says it will explore ways to share revenue.

Altman said future AI technologies could personalize media for “an audience of one” while still safeguarding creators’ intellectual property. Critics remain skeptical, demanding stronger protections and clearer consent mechanisms.

Loti AI Partners With Estates to Detect Unauthorized Deepfakes

As the uproar grew, CMG Worldwide partnered with Loti AI, a deepfake-detection company, to protect the estates of deceased celebrities. The technology scans websites and removes unauthorized impersonations.

Loti AI CEO Luke Arrigoni said sign-ups rose thirtyfold after Sora’s release. Since January, the company has removed thousands of fake videos as users try to regain control of their online identities.

OpenAI Tightens Content Policy After Legal Pressure Grows

Under growing legal scrutiny, Sora began flagging and removing videos containing copyrighted content or celebrity likenesses. Users who try to generate Disney characters now receive an automatic warning that they are breaking the rules.

Even with the new rules, users keep pushing the boundaries, producing satirical clips that mock the moderation itself. This growing friction shows how hard it is to balance creative freedom with digital ethics and responsibility.

Growing Concerns About AI Disinformation and Political Manipulation

Experts warn that generative AI could be used to spread disinformation, fabricate protests, or stage fake crimes. Gregory added that bad actors and hostile states could weaponize such content to reshape narratives or cast doubt on genuine facts.

He said, “The danger is when people in power can say that AI made even real proof.” Society faces an unprecedented crisis of credibility and trust as the line between reality and simulation blurs.


Krypton Today Staff

