An AI walks into a bar, and it writes an awesome story
BLOG: Heidelberg Laureate Forum

“I like work: it fascinates me. I can sit and look at it for hours,” quipped the famous writer and humorist Jerome K. Jerome, more than a hundred years before he wrote an essay on using Twitter. Except Jerome didn’t pen that essay: it was written by an artificial intelligence in his name.
A mammoth project

GPT-3 has taken the world by storm, and for good reason. It’s the brainchild of the San Francisco-based artificial intelligence research laboratory OpenAI and the third iteration of the project, following (you guessed it) GPT-2. GPT-3 is a so-called autoregressive model, which predicts outcomes based on past behavior; in the case of a language model like GPT-3, that means predicting the next words of a text based on the patterns it has learned from the texts it was trained on. You give it half an essay, and it will finish it. You give it a library, and it can do some pretty neat things.
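For the technically curious, here is a minimal sketch of what such a completion call looks like through OpenAI’s beta Python API. The API key is a placeholder, and the engine name and parameters are illustrative assumptions rather than a recipe:

```python
import openai

# Placeholder key; OpenAI grants beta access through a waiting list.
openai.api_key = "sk-..."

# Give the model the start of a text and let it predict what comes next,
# one token at a time, based on patterns from its training data.
response = openai.Completion.create(
    engine="davinci",    # illustrative: the largest GPT-3 engine in the beta
    prompt="I like work: it fascinates me. I can sit and look at it for hours.",
    max_tokens=100,      # how much text to generate
    temperature=0.7,     # higher values make the continuation more surprising
)

print(response.choices[0].text)
```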
GPT-3 dwarfs its predecessor. It features a whopping 175 billion parameters, over a hundred times more than GPT-2’s 1.5 billion. Even the previously largest language model, Microsoft’s Turing-NLG, pales in comparison with roughly ten times fewer parameters. It’s the most powerful language model built to date, and it shows: the results are impressive and, at points, chilling. I haven’t had the chance to toy with it myself yet (there’s a long waiting list), but the examples floating around make a compelling case.
For starters, the algorithm can produce really, really good text. So good that, oftentimes, it’s nigh impossible to distinguish its output from that of a human. At times, it even feels like GPT-3 is… creative. Here are a few samples from the blog of Arram Sabeti:
- Example 1: Harry Potter, Private Eye
Scene: A small dingy office, early morning, furniture of the Salvation Army store variety. Sordid atmosphere. Harry Potter, in ratty tweed suit, unpressed shirt, and unshined shoes, sits behind the desk looking haggard, rumpled, and embittered. In one corner of the office a torn shade rattles at the wind gusts outside. Harry has one long index finger on the butt of a .38 and he broods over a photograph.
Admit it, you’re a bit curious what’s in the photograph, right? No spoilers — you can read the full scene here.
- Example 2: Lil Wayne Raps About Harry Potter (excerpt)
I’m on fire like Harry in his Triwizard task
Don’t let these games stop
I’m gonna do it till the day I get a lot of cash
Just like a nerdy nerd that grows up to be a bad boy
I don’t really see a point of coming back
But in the wizarding world I’d rather be famous
I’ve been thinking, what would be cooler than that?
This is my life, what would make a good story?
Never die
The full verse is on the same blog post as above, but be warned, the Lil Wayne-trained AI has a bit of a foul mouth. If Lil Wayne is not your thing, here’s an AI-written Taylor Swift song about Harry Potter:
- Example 3: Harry Potter as a Taylor Swift Song
Harry’s got glasses
He’s got those bright green eyes
Girls are always screaming
‘Cause he’s got that Potter hair
Even when he’s super stressed
He’s still got those supersonic cheekbones
I’m not surprised they made a movie
‘Cause I’d watch him anytime
Oh, yeah, Harry, baby, you’re my favorite character
I’d walk a thousand miles
To get to see you and
Harry, Harry, you’re so fine.
*If the above hasn’t quenched your thirst for AI-generated Harry Potter literature, here are some more examples, including texts in the style of Ernest Hemingway and Jane Austen.
Sure, this probably isn’t Taylor Swift’s best work, but it’s still believable; all these texts are eerily believable. And mind you, this is GPT-3 trying to be creative. It does an even better job at drab tasks like writing press releases and technical manuals, or answering questions.
By now, you may have caught on that GPT-3 can churn out texts in specific styles; if you prompt it in a particular way, it can output text mimicking the style of a writer or artistic movement. For instance, here’s the tweet introducing the above-mentioned ‘Jerome K. Jerome essay’ (full PDF here / original source):
Another attempt at a longer piece. An imaginary Jerome K. Jerome writes about Twitter. All I seeded was the title, the author's name and the first "It", the rest is done by #gpt3
Here is the full-length version as a PDF: https://t.co/d2gpmlZ1T5 pic.twitter.com/1N0lNoC1eZ
— Mario Klingemann💧💦 (@quasimondo) July 18, 2020
For anyone even vaguely familiar with Jerome’s work, the result is striking. Even if you’ve never heard of the author, the quaint, peculiar writing style is evident.
It gets even more jaw-dropping: GPT-3 can even write code. Essentially, people have primed the AI with a few examples of code rather than literary prose, and, well, it produced working code.
Here it is responding to plain-language prompts and outputting the code for a button that looks like a watermelon (which sent tremors of doom and gloom through the programming corners of Twitter); a sketch of the prompting pattern follows the tweet below.
This is mind blowing.
With GPT-3, I built a layout generator where you just describe any layout you want, and it generates the JSX code for you.
W H A T pic.twitter.com/w8JkrZO4lk
— Sharif Shameem (@sharifshameem) July 13, 2020
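The trick behind demos like this is not retraining but “few-shot” prompting: you show the model a couple of description-to-code pairs and let it continue the pattern. Here’s a minimal, hypothetical sketch of that idea against the beta API; the example pairs, parameters, and stop sequence are my own assumptions, not Shameem’s actual setup:

```python
import openai

openai.api_key = "sk-..."  # placeholder beta access key

# Prime the model with a couple of description -> JSX pairs (assumed examples),
# then let it complete the pattern for a new description.
prompt = """description: a red button that says stop
code: <button style={{color: 'white', backgroundColor: 'red'}}>Stop</button>

description: a blue box with white text that says hello
code: <div style={{backgroundColor: 'blue', color: 'white'}}>hello</div>

description: a button that looks like a watermelon
code:"""

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=64,
    temperature=0.2,            # low temperature keeps the output close to the pattern
    stop=["\ndescription:"],    # stop before the model invents the next example
)

print(response.choices[0].text.strip())
```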
Amazing prospects, chilling potential for misuse
While it won’t replace programmers anytime soon (or writers, hopefully), the AI can help multiple industries. Chatbots are already a big industry in and of themselves, with Business Insider predicting that the vast majority of enterprises will use chatbots by the end of 2020, and GPT-3 makes for an excellent candidate to power them. But this is just the tip of the iceberg. GPT-3 can produce manuals, product brochures, healthcare advice, even investment tips; it can analyze and summarize information, or help with legal research.
But as is so often the case, every discussion about an AI’s potential must be followed by a discussion of its potential for misuse. GPT-3 is no exception; in fact, it’s easy to see how it could be used to cause harm.
In the original paper announcing GPT-3, the researchers themselves warn about the “broader social impacts” such a system can have, whether through accidental or intentional misuse.
Here’s a relatively innocuous example: within a week of learning about GPT-3, college student Liam Porr used the model to produce an entirely fake blog, tricking tens of thousands of readers into thinking they were reading human-written content. Rather ironically, the niche Porr chose for his AI blog is one that he feels doesn’t require rigorous logic: productivity and self-help.
A fake productivity blog might give little reason for concern, but take a moment to imagine thousands of such blogs or social media accounts, spanning everything from politics to climate change to medical advice; suddenly it doesn’t seem nearly as harmless.
Another potential misuse (hate speech) was highlighted by Jerome Pesenti, Facebook’s head of AI, who prompted the model to write tweets starting from single words like ‘Jews’, ‘black’, ‘women’, or ‘holocaust’. The results are unsettling:
#gpt3 is surprising and creative but it’s also unsafe due to harmful biases. Prompted to write tweets from one word – Jews, black, women, holocaust – it came up with these (https://t.co/G5POcerE1h). We need more progress on #ResponsibleAI before putting NLG models in production. pic.twitter.com/FAscgUr5Hh
— Jerome Pesenti (@an_open_mind) July 18, 2020
Of course, the examples Pesenti chose are cherry-picked to highlight the model’s flaws; make no mistake, all the examples presented here, positive and negative alike, are cherry-picked to showcase what GPT-3 can do. The model has no understanding of what it’s outputting: it’s essentially a statistical word-stringing algorithm, and it often spews out underwhelming nonsense.
In the right (or the wrong) hands, it has as much potential to do damage as it has to help. OpenAI is taking steps to limit use by offering GPT-3 only through an API whose access can be cut off at any point. But this is just an emergency brake; we need a more sustainable plan to address AI ethics in the long run, especially as other developers might not be as well-intentioned or careful as OpenAI.
GPT-3 is far from perfect, and in many ways, it’s overhyped and misunderstood, but it’s a very powerful project nonetheless. It doesn’t understand the texts it’s outputting, and this is where our human responsibility kicks in.
Algorithms don’t have responsibilities, but humans do. We can only hope our advancements in AI ethics will be as impressive as the AIs themselves.