OpenAI has seen a number of attempts in which its AI models were used to generate fake content, including long-form articles and social media comments, aimed at influencing elections, the ChatGPT maker said in a report on Wednesday.
Cybercriminals are increasingly using AI tools, including ChatGPT, to aid their malicious activities, such as creating and debugging malware and generating fake content for websites and social media platforms, the startup said.
So far this year it has neutralized more than 20 such attempts, including a set of ChatGPT accounts in August that were used to produce articles on topics including the U.S. elections, the company said.
It also banned a number of accounts from Rwanda in July that were used to generate comments about that country's elections for posting on the social media site X.
None of the activities that attempted to influence global elections drew viral engagement or sustainable audiences, OpenAI added.
There is growing concern about the use of AI tools and social media sites to generate and propagate fake content related to elections, especially as the U.S. gears up for its presidential polls.
According to the U.S. Department of Homeland Security, the U.S. faces a growing threat of Russia, Iran and China attempting to influence the Nov. 5 elections, including by using AI to disseminate fake or divisive information.
OpenAI cemented its position as one of the world's most valuable private companies last week after a $6.6 billion funding round.
ChatGPT has reached 250 million weekly active users since its launch in November 2022.
(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)