Artificial intelligence advances in a manner that’s hard for the human mind to grasp. For a long time nothing happens, and then all of a sudden something does. The current revolution of Large Language Models (LLMs) such as ChatGPT resulted from the advent of “transformer neural networks” in about 2017.
What will the next half-decade bring? Can we rely on our current impressions of these tools to judge their quality, or will they surprise us with their development? As someone who has spent many hours playing around with these models, I think many people are in for a shock. LLMs will have significant implications for our business decisions, our portfolios, our regulatory structures and the simple question of how much we as individuals should invest in learning how to use them.
To be clear, I am not an AI sensationalist. I don’t think it will lead to mass unemployment, much less the “Skynet goes live” scenario and the resulting destruction of the world. I do think it will prove to be an enduring competitive and learning advantage for the people and institutions able to make use of it.
How the best chess-playing entity was created
I have a story for you, about chess and a neural net project called AlphaZero at DeepMind. AlphaZero was set up in late 2017. Almost immediately, it began training by playing tens of millions of games of chess against itself. After about four hours, it was the best chess-playing entity ever created. The lesson of this story: Under the right conditions, AI can improve very, very quickly.
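To give a flavour of what "playing against itself" means, here is a deliberately tiny sketch in Python: a one-pile game of Nim and a simple table of values standing in for AlphaZero's deep network and tree search. It is an illustration of the self-play idea under toy assumptions, not DeepMind's method.

```python
# Toy self-play learning: one-pile Nim (take 1-3 stones, taking the last
# stone wins). A value table replaces AlphaZero's network and search.
import random

PILE, MAX_TAKE = 15, 3
values = {0: 0.0}   # state -> estimated win chance for the player to move;
                    # facing 0 stones means you have already lost

def value(state):
    return values.get(state, 0.5)          # unseen states start as a coin flip

def choose_move(state, epsilon):
    moves = list(range(1, min(MAX_TAKE, state) + 1))
    if random.random() < epsilon:          # occasionally explore at random
        return random.choice(moves)
    # a move is good for us when the resulting state is bad for the opponent
    return min(moves, key=lambda m: value(state - m))

def self_play_game(epsilon=0.1, lr=0.05):
    state, visited = PILE, []
    while state > 0:
        visited.append(state)
        state -= choose_move(state, epsilon)
    # the last player to move took the final stone and won; walking
    # backwards, the outcome alternates between the two sides
    outcome = 1.0
    for s in reversed(visited):
        values[s] = value(s) + lr * (outcome - value(s))
        outcome = 1.0 - outcome

for _ in range(20000):                     # the "four hours", in miniature
    self_play_game()

# Piles divisible by 4 are theoretically lost for the player to move,
# so their learned values should drift toward 0.
print({s: round(value(s), 2) for s in range(1, PILE + 1)})
```

Both "players" here are the same improving value table, which is the essential trick: the opponent gets stronger exactly as fast as the learner does.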
LLMs cannot match that pace, as they are dealing with more open and more complex systems, and they also require ongoing corporate investment. Still, the recent advances have been impressive.
I was not wowed by GPT-2, an LLM from 2019. I was intrigued by GPT-3 (2020) and am very impressed by ChatGPT, which is sometimes labelled GPT-3.5 and was released late last year. GPT-4 is on its way, possibly in the first half of this year. In only a few years, these models have gone from being curiosities to being integral to the work routines of many people I know. This semester I’ll be teaching my students how to write a paper using LLMs.
ChatGPT received a grade of D on an undergraduate labour economics exam given by my colleague Bryan Caplan. Claude, a new LLM from Anthropic that is available in beta form and expected to be released this year, passed our graduate-level law and economics exam with nice, clear answers. (If you're wondering, blind grading was used.) Granted, current results from LLMs are not always impressive. But keep these examples, and that of AlphaZero, in mind.
I don’t have a prediction for the rate of improvement, but most analogies from the normal economy do not apply. Cars get better by some modest amount each year, as do most other things I buy or use. LLMs, in contrast, can make leaps.
Still, you may be wondering: “What can LLMs do for me?” I have two immediate responses.
First, they can write software code. They do make plenty of mistakes, but it is often easier to edit and correct those mistakes than to write the code from scratch. They also tend to be most useful at writing the boring parts of code, freeing up talented human programmers for experimentation and innovation.
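Here is roughly what that looks like in practice, using OpenAI's official Python client; the model name and the prompt are merely illustrative, and the output is a draft for a human to review and edit, not finished code.

```python
# Assumes the openai Python package (v1+) is installed and the
# OPENAI_API_KEY environment variable is set; model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write a Python function that parses an ISO-8601 date "
                   "string and returns a datetime.date, with basic error "
                   "handling.",
    }],
)

# The generated code: boring but necessary boilerplate that a human
# programmer now only has to check and correct, not write from scratch.
print(response.choices[0].message.content)
```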
How LLMs can become tutors
Second, they can be tutors. LLM tutors already exist, and they are going to get much better soon. They can give very interesting answers to questions about almost anything in the human or natural world. They are not always reliable, so for now they are best used as a source of new ideas and inspiration rather than for fact-checking, though I expect they will be integrated with fact-checking and search services soon enough. In the meantime, they can improve writing and organise notes.
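As a sketch of what a tutoring session might look like in code, again using OpenAI's Python client with an illustrative model name: a system message shapes the tutor's behaviour, and a running history gives it memory of the conversation. The reliability caveat above still applies to every answer it gives.

```python
# A minimal stateful tutoring loop. Same assumptions as before:
# openai package v1+, OPENAI_API_KEY set, illustrative model name.
from openai import OpenAI

client = OpenAI()
history = [{
    "role": "system",
    "content": "You are a patient economics tutor. Explain concepts step "
               "by step, and end each answer with one short question to "
               "check understanding.",
}]

while True:
    question = input("you> ").strip()
    if not question:                       # an empty line ends the session
        break
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=history
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer)
```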
I’ve started dividing the people I know into three camps: those who are not yet aware of LLMs; those who complain about their current LLMs; and those who have some inkling of the startling future before us. The intriguing thing about LLMs is that they do not follow smooth, continuous rules of development. Rather, they are like a caterpillar about to metamorphose into a butterfly.
It is only human, if I may use that word, to be anxious about this future. But we should also be ready for it.
Tyler Cowen is a Bloomberg Opinion columnist and a professor of economics at George Mason University. Source: Bloomberg