OpenAI’s third-generation language prediction model wrote a 750-word review of itself, fooling many readers.
Developer Manuel Araoz has played a practical joke online to demonstrate the potential of artificial intelligence bots: he had one write an article about itself.
According to a July 18 post on Araoz’s blog, AI development company OpenAI released GPT-3, the third generation of its language prediction model capable of creating “random-ish sentences of approximately the same length and grammatical structure as those in a given body of text.”
The blog entry explains how the technology could be used to impersonate well-known figures by simulating their writing styles; Araoz, for example, used it to create a fake interview with Albert Einstein. He predicted that GPT-3 could eventually replace journalists, political speechwriters, and advertising copywriters.
The bot’s predicted sentences were posted to the bitcointalk.org forum in recent days, drawing positive feedback from users who concluded that “the system must have been intelligent.”
The blog said:
“There are lots of posts for GPT-3 to study and learn from. The forum also has many people I don’t like. I expect them to be disproportionately excited by the possibility of having a new poster that appears to be intelligent and relevant.”
Surprise, surprise
Except, Araoz wasn’t the one writing the blog. He hasn’t posted anything on bitcointalk.org’s forums for years — and has nothing against its users. It was GPT-3 the whole time, he said:
“This article was fully written by GPT-3. Were you able to recognize it? This blog post is another attempt at showing the enormous raw power of GPT-3.”
According to the developer, providing just a short bio, the desired blog title, and a few tags was enough for the bot to produce the original 750-word piece.
“I generated different results a couple (less than 10) times until I felt the writing style somewhat matched my own, and published it,” said Araoz. “I do believe GPT-3 is one of the major technological advancements I’ve seen so far, and I look forward to playing with it a lot more.”
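For readers curious what that workflow might have looked like in practice, here is a minimal, hypothetical sketch against the GPT-3 Completion API available at the time (via the pre-1.0 openai Python package). The bio text, title placeholder, tags, model choice, and sampling settings are illustrative assumptions, not Araoz’s actual inputs.

```python
# Hypothetical sketch: prompting GPT-3 with a short bio, a title, and tags,
# roughly mirroring the workflow described above. Uses the pre-1.0 openai
# Python package's Completion API; all prompt text and settings are
# illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "Author bio: Manuel Araoz, software developer and technologist.\n"
    "Title: <desired blog title>\n"
    "Tags: artificial intelligence, gpt-3, tech\n\n"
    "Blog post:\n"
)

response = openai.Completion.create(
    engine="davinci",   # the base GPT-3 model exposed by the 2020 beta API
    prompt=prompt,
    max_tokens=1000,    # enough headroom for a roughly 750-word article
    temperature=0.7,    # moderate randomness; regenerate until the style fits
)

print(response["choices"][0]["text"])
```

In practice, as Araoz notes, the output varies from run to run, so the generation step is repeated until a result matches the desired voice.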
AI makes blockchain predictions
In the days before and after the blog post went up, Araoz posted the results of his experiments with the technology on Twitter. The bot offered its views on blockchain, stating that it would “replace tech startups before it replaces banks.” Araoz was even able to get GPT-3 to explain proof-of-work for Bitcoin (BTC) reasonably well.
Not replacing humans yet
Araoz’s online enthusiasm for the technology had many clamoring for a test run. “I would love to try something like this out training it on my own writings and see what it would spit out,” said Twitter user Einar Petersen. But others reacted with fear or shock at being fooled. “I’m suitably disturbed,” said Ben Royce.
However, as advanced and entertaining as the language prediction model may be, the developer doesn’t see it completely replacing human writers anytime soon.
“A text-only model trained on the Internet (like GPT-3) can’t achieve human-level intelligence,” said Araoz. “It lacks visual understanding (e.g. non-verbal communication), complex motor skills or physical expertise, and a survival instinct.”