TECH NEWS

Elon Musk’s OpenAI has an algorithm that can generate weirdly believable fake news stories

Artificial intelligence is getting fairly good at producing complete articles and stories, which raises troubling implications about its potential to mass-produce fake news. A program developed by a team at OpenAI, the non-profit research institute founded by Elon Musk and Sam Altman, can make up surprisingly believable stories from just a handful of words.

Here’s a snippet of what it is capable of:

Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air.

Russia said it had “identified the missile’s trajectory and will take necessary measures to ensure the security of the Russian population and the country’s strategic nuclear forces.” The White House said it was “extremely concerned by the Russian violation” of a treaty banning intermediate-range ballistic missiles.

The US and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine’s Crimea region and backed separatists in eastern Ukraine.

The AI came up with the entire story on its own, after merely being supplied with the words “Russia has declared war on the United States after Donald Trump accidentally…”

The researchers wanted to develop a general-purpose language algorithm, trained on a vast amount of text from the web. The training data encompassed 45 million web pages, selected via Reddit.

They initially intended for the program to be able to do things like translate text and answer questions, but it soon became clear that there was also great potential for abuse and exploitation. It was simply too good at generating stories, which could then be misused.

The program is an example of how AI could be used to automatically create fake news, social media posts, or other content that could be disseminated widely. This is especially concerning because fake news is already a problem, and one that would be even harder to deal with if it were automated. It could be used to sway public opinion, potentially affecting important elections and influencing other events.

“It’s very clear that if this technology matures – and I’d give it one or two years – it could be used for disinformation and propaganda,” said Jack Clark, policy director at OpenAI. One of the group’s goals is to highlight the risks of AI and get ahead of them, so it is no surprise that OpenAI is currently seeking a way to mitigate the risk of abuse.

The algorithm is far from perfect though, and the example above is one of the better ones. It still frequently produces text that reads as gibberish on closer inspection, or that is clearly lifted from online news sources, so discerning readers will not be fooled.

That said, OpenAI still considers the program too dangerous for public use, and will only make a simplified version of it publicly available.
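The article does not name the model or any tooling, but the system it describes matches OpenAI’s GPT-2, and a smaller public checkpoint can be loaded through Hugging Face’s transformers library. As a rough illustrative sketch only (the library and the “gpt2” model name are assumptions, not details from this article), prompt completion of the kind shown above looks like this:

# Minimal sketch of prompt completion with a small public language model.
# Assumption: Hugging Face's transformers library and its "gpt2" checkpoint,
# neither of which is named in the article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = ("Russia has declared war on the United States "
          "after Donald Trump accidentally")

# Sample one continuation; top-k sampling keeps the output varied but readable.
result = generator(prompt, max_new_tokens=60, do_sample=True, top_k=40,
                   num_return_sequences=1)
print(result[0]["generated_text"])

Every run samples a different continuation, which is precisely the property that makes mass-producing plausible-sounding text so cheap.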

Source: MIT Technology Review

 
