
Deepfake Technology Is Now 'One of the Greatest Challenges We Face,' Expert Tells Lawmakers

Artificial intelligence is developing faster than any rules or regulations can keep up with it.

At least 30 female students at a New Jersey high school were recently victimized by a classmate who used AI to put their faces on pornographic images and shared them online. Now the students and their families are looking for accountability from officials at the local, state and federal levels.  

The incident was just one of the many examples Republican Rep. Nancy Mace of South Carolina discussed during Wednesday's hearing of the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation on "Advances in Deepfake Technology."

While AI deepfakes can be useful in the entertainment industry and in advancing medical research, they can also be weaponized, she explained. 

“It can be used to make people appear to say or do things that they have not actually said or done. It can be used to perpetrate various crimes, including financial fraud and intellectual property theft. And it can be used by anti-American actors to create national security threats,” Mace noted. 

One company that studies deepfakes has determined that about 90 percent of them are used to generate pornographic material, Mace added. That makes it an urgent issue, and one the attorneys general of 54 states and territories are calling on Congress to address, particularly as it relates to the generation of child sexual abuse material.  

Mace said she’s not interested in banning fake images and videos, but when fact and fiction become indistinguishable, "we can’t ensure our laws are enforced or that our national security is preserved.” 

Deepfakes are also being used to spread disinformation from war zones.

"Videos purportedly taken from on the ground in Israel, Gaza and Ukraine have circulated rapidly around on social media – only to be proven inauthentic," Mace said. "One AI-generated clip showed the Ukraine president urging troops to put down their arms."

Witness Mounir Ibrahim, executive vice president of Truepic, a technology company focused on transparency and authenticity in digital content, explained that in his previous work with the UN he saw images from conflict zones constantly questioned as fake or altered, and that was before generative AI. 

“Today, this strategy for undermining reality is now commonly referred to as the 'Liar’s Dividend,'" he explained. "Bad actors benefit from the rapid increase in fake and manipulated imagery. It makes their false claims that a real image or video is fake more believable, giving them the ability to sow doubt in what we see and hear online.”

But given that the world has digitized nearly every aspect of life, the problem of determining what is real and what is fake is one everyone should be interested in addressing.

There isn't a "silver bullet," Ibrahim said; what's truly needed is a "transparent ecosystem for digital content."

“In my opinion, this is one of the greatest challenges we face today," he said. "Some estimates are that in one to two years, 90 percent of new digital content created online will be wholly or partially synthetic. Without wide adoption of interoperable standards to clearly differentiate authentic content, AI-assisted, and fully generated content, our entire informational ecosystem will be at risk.”

A legislative fix is one tool, but Ibrahim said it won't be enough. Work on content provenance is already being advanced, while other stakeholders are exploring different remedies. 
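
To make the idea of content provenance concrete, here is a minimal, hypothetical sketch of the underlying mechanism: a capture device or editing tool records a cryptographic hash of the content in a signed manifest, and anyone downstream can recompute the hash and check the signature to see whether the image has changed since. The function names, shared-secret signing, and manifest layout below are illustrative assumptions only; real provenance standards such as C2PA use certificate-based signatures and far richer metadata.

```python
# Conceptual sketch of content provenance (not any specific standard's API):
# the creator signs a manifest containing a hash of the image bytes, and a
# verifier later recomputes the hash and checks the signature.
import hashlib
import hmac
import json

# Stand-in for a real signing credential; actual systems use certificates.
SHARED_SECRET = b"demo-signing-key"

def sign_manifest(image_bytes: bytes, creator: str) -> dict:
    """Produce a minimal provenance manifest for freshly captured content."""
    manifest = {
        "creator": creator,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest is untampered and still matches the content."""
    claimed_sig = manifest.get("signature", "")
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected_sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(claimed_sig, expected_sig):
        return False  # manifest itself was altered
    return unsigned["sha256"] == hashlib.sha256(image_bytes).hexdigest()

if __name__ == "__main__":
    original = b"original image bytes"
    manifest = sign_manifest(original, creator="example-camera")
    print(verify_manifest(original, manifest))         # True: content unchanged
    print(verify_manifest(b"edited bytes", manifest))  # False: content altered
```

The point of the sketch is only that provenance shifts the question from "does this look fake?" to "does this content still match a record made when it was created?", which is why advocates describe it as one layer of a broader transparency ecosystem rather than a silver bullet.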

The hearing came just weeks after President Biden signed an executive order establishing new standards to safeguard Americans from the dangers of AI technology. 


