
Pentagon Bomb Hoax Highlights Power of Artificial Intelligence

A fake photo of an explosion at the Pentagon has gone viral on social media platforms, highlighting the power of artificial intelligence. 

On Monday, the AI-generated image, which showed smoke billowing from the iconic building, sent shock waves across social media, even prompting a momentary sell-off in the US stock market.

The fake news also had an impact on the cryptocurrency market as Bitcoin experienced a brief “flash crash,” slipping to around $26,500 before recovering to its current price of over $27,300, according to CoinGecko.

The image was initially posted by the now-suspended verified Twitter account “Bloomberg Feed,” which claimed that there was a “large explosion” near the Pentagon.

Several major media outlets, including the Russian state-controlled broadcaster Russia Today, as well as a number of influencers, picked up the story, further contributing to the spread of the fake news.

However, others were quick to spot inconsistencies in the image and to note that there were no other images or reports from witnesses.

“Confident that this picture claiming to show an ‘explosion near the pentagon’ is AI generated,” digital investigator Nick Waters said.

“Check out the frontage of the building, and the way the fence melds into the crowd barriers. There’s also no other images, videos or people posting as first hand witnesses.”

Similarly, the Arlington County Fire Department refuted the claim. “There is NO explosion or incident occurring at or near the Pentagon reservation,” they tweeted, adding that “there is no immediate danger or hazards to the public.”

Fake Pentagon Explosion Image Raises Concern Over AI Capability

The incident has raised concerns regarding the potential dangers of AI tools, which could be used by malicious actors worldwide to spread misinformation and cause chaos online. 

The Pentagon hoax is not the first instance of viral AI-generated images deceiving the public. 

Past examples include images of Pope Francis sporting a Balenciaga jacket, a fake arrest of former US President Donald Trump, and deepfakes of celebrities promoting cryptocurrency scams.

These instances have led tech experts to call for a six-month halt on the development of advanced AI until proper safety guidelines are established.

Earlier this year, the Center for Artificial Intelligence and Digital Policy, a leading tech ethics group, also asked the FTC in a complaint to halt the commercial releases of GPT-4, citing privacy and public safety concerns.

In the complaint, the group claimed that GPT-4 is “biased, deceptive, and a risk to privacy and public safety.” It also said that the tool has caused distress among some users with its quick and human-like responses to queries.

Back in April, US President Joe Biden also warned that AI could be dangerous, emphasizing that it was the responsibility of technology companies to ensure their products are safe for public use before releasing them.


