This Is What an AI Said When Asked to Predict the Year Ahead

“What’s past is prologue.” So says the famed quote from Shakespeare’s The Tempest, suggesting that we can look to what has already happened as an indication of what will happen next.

This idea could be interpreted as being rather bleak; are we doomed to repeat the errors of the past until we correct them? We certainly do need to learn and re-learn life lessons—whether in our work, relationships, finances, health, or other areas—in order to grow as people.

Zooming out, the same phenomenon exists on a much bigger scale—that of our collective human history. We like to think we’re improving as a species, but haven’t yet come close to doing away with the conflicts and injustices that plagued our ancestors.

Zooming back in (and lightening up) a little, what about the short-term future? What might happen over the course of this year, and what information would we use to make educated guesses about it?

The editorial team at The Economist took a unique approach to answering these questions. On top of their own projections for 2020, including possible scenarios in politics, economics, and the continued development of technologies like artificial intelligence, they looked to an AI to make predictions of its own. What it came up with is intriguing, and a little bit uncanny.

[For the full list of the questions and answers, read The Economist article].

An AI That Reads—Then Writes

Almost exactly a year ago, non-profit OpenAI announced it had built a neural network for natural language processing called GPT-2. The announcement was met with some controversy, as it included the caveat that the tool would not be immediately released to the public due to its potential for misuse. It was then released in phases over the course of several months.

GPT-2’s creators raised the bar on quality when training the neural net; rather than haphazardly feeding it low-quality text, they used only web pages whose links had earned at least three upvotes on Reddit (admittedly, this doesn’t guarantee high quality across the board—but it’s something).

The training dataset consisted of 40GB of text. For context, 1GB of text is about 900,000 ASCII pages or 130,000 double-spaced Microsoft Word pages.

The tool has no understanding of the text it’s generating, of course. It draws statistical associations between words and phrases from the patterns in its training text, building a sort of guidebook for itself (not unlike the grammar rules and vocabulary words you might study when trying to learn a foreign language). It then uses that guidebook to answer questions or predict what will come after a particular sequence of words.
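To get a feel for what that next-word prediction looks like in practice, here is a minimal sketch, assuming the open-source Hugging Face transformers library and its publicly hosted “gpt2” checkpoint (not necessarily the setup OpenAI or The Economist used): you hand the model a prompt, and it samples a statistically likely continuation one token at a time.

    # Minimal GPT-2 text-generation sketch (assumes: pip install transformers torch)
    from transformers import pipeline

    # Load the publicly released GPT-2 checkpoint as a text-generation pipeline.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "Greetings, GPT-2. Have you done an interview before?"

    # Sample a continuation: the model repeatedly predicts the next token
    # given everything that came before it.
    result = generator(prompt, max_length=60, do_sample=True, top_k=50)

    print(result[0]["generated_text"])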

GPT-2’s creators did note that when the model is trained on specifically-selected datasets for narrower applications, its replies become more convincing.

Some Light Editing

Though the Economist article detailing GPT-2’s predictions describes the text the model generated as “unedited,” this isn’t wholly accurate. In a Medium post, deputy editor Tom Standage describes the methodology he used, and it does involve a degree of tweaking.

Standage wrote an introductory paragraph, followed by his opening question (“Greetings, GPT-2. Have you done an interview before?”). He configured the tool to generate five responses to each subsequent question, then selected the best answer, deleting any questions GPT-2 came up with as part of its response.
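Standage doesn’t detail the exact tooling, but as a rough illustration, that sample-five-then-choose step might look something like the sketch below—again assuming the Hugging Face transformers library and the public “gpt2” checkpoint, with the final selection left to a human editor.

    # Generate five candidate answers per question; an editor picks the best one.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    question = "How do you see the economic outlook for 2020?"

    candidates = generator(
        question,
        max_length=80,
        do_sample=True,
        num_return_sequences=5,  # five responses per question, as in the interview
    )

    for i, candidate in enumerate(candidates, start=1):
        print(f"--- Candidate {i} ---")
        print(candidate["generated_text"])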

AI’s View of the World—and Itself

The questions Standage asked ranged from tech-related (the future of AI, technological unemployment, misuse of technology) to political and economic (re-election of Donald Trump, the US-China trade war). Here are a couple of his questions and GPT-2’s answers:

Q: Do fake news stories, generated using AI, pose a threat to democracy? Are we likely to see this tactic being used in the 2020 American presidential elections?

A: Yes, we’ll probably see them. It’s just a matter of when. Fake news stories are generally generated by political campaigns, and have the potential to have a huge impact on the outcome of an election. This is because they spread misinformation, often with no basis in fact.

Q: How do you see the economic outlook for 2020?

A: I think it is very unclear. The outlook for the UK economy is uncertain, but the American economy is very strong. I think we are going to have a lot of turbulence in the world economy.

While some of its answers seemed insightful—when asked about the future of AI, GPT-2 advised that we treat the technology like a tool and “put as much effort into developing [it] as necessary, rather than worrying that it’s going to harm us and destroy our lives”—many of the responses were quite vague. One answer, for example, mentions that there will “probably be major changes in China.” Are we talking political changes? Social? Economic? All of the above?

However, while nothing the AI wrote was particularly illuminating or new, it’s impressive that it was able to expound on the correct topic for each question, and in fully coherent English.

GPT-2 named itself—AI—as the most important disruptive technology to watch in 2020. This is telling; it’s coming from a synthesis of millions of pages of published articles. If GPT-2 thinks AI is more important than, say, gene editing or quantum computing, it could be right. Or it could simply be that AI is at peak hype, and as such more is being written about it than about other technologies.

Equally intriguing was GPT-2’s response when asked whether Donald Trump will win a second term: “I think he will not win a second term. I think he will be defeated in the general election.” Some deeper insight there would be great, but hey—we’ll take it.

Predicting Predictions

Since AI can read and synthesize vast datasets much faster than we can, it’s already being used to predict all kinds of things, from virus outbreaks to crime. But asking it to philosophize on the future based on the (Reddit-curated) past is new, and if you think about it, a pretty fascinating undertaking.

As GPT-2 and tools like it continually improve, we’ll likely see them making more—and better—predictions of the future. In the meantime, let’s hope that the new data these models are trained on—news of what’s happening this week, this month, this year—add to an already-present sense of optimism.

When asked if it had any advice for readers, GPT-2 replied, “The big projects that you think are impossible today are actually possible in the near future.”

Image Credit: Alexas_Fotos from Pixabay

This article originally appeared on Singularity Hub, a publication of Singularity University.

