It seemed like a stretch at the time. But today, we have computers that beat humans at chess, smartphones that guide us on trips, chatbots that write essays, and apps that translate languages almost instantaneously.
“We have achieved a lot. We’ve come a long way in artificial intelligence,” says Kellogg economics professor Sergio Rebelo. “But this progress only happened after many years of failure.”
In a recent webinar hosted by Kellogg Executive Education and Kellogg Insight, Rebelo drew lessons from AI’s past through the lens of a macroeconomist to help us prepare for its rapidly expanding role in society, today and in the years to come.
1. Overnight hits can take decades to emerge
One of the first strategies in artificial intelligence was to create expert systems. The goal was to feed a computer program as much knowledge as possible, making it “expert” so that it could use that information to perform relevant tasks.
The US Department of Defense used this tactic during the Cold War, when it tried to create a machine that could quickly translate intercepted Russian phrases into English. The computer scientists fed the machine a large number of words, rules, and definitions and then had it translate words one at a time.
Too often, however, the machine couldn’t pick up on nuances in the languages and ended up producing inaccurate translations. For example, the biblical phrase “the spirit is willing, but the flesh is weak,” translated into Russian and then back into English, became “the whiskey is strong, but the meat is rotten.”
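The word-at-a-time approach can be sketched as a simple dictionary lookup. The toy glossary below is invented purely for illustration (it deliberately picks the wrong sense of ambiguous words like “spirit” and “flesh” to mirror the anecdote); it is not how any real translation system was built:

```python
# Toy sketch of word-at-a-time, rule-based translation: each word is
# looked up independently, so sentence-level context is lost.
# The glossary entries are invented for illustration only.
GLOSSARY = {
    "spirit": "whiskey",   # "spirit" can mean soul or liquor
    "willing": "strong",
    "flesh": "meat",       # "flesh" can mean body or meat
    "weak": "rotten",
}

def translate_word_by_word(sentence):
    """Map each word independently; words not in the glossary pass through."""
    return " ".join(GLOSSARY.get(w, w) for w in sentence.lower().split())

print(translate_word_by_word("the spirit is willing but the flesh is weak"))
# "the whiskey is strong but the meat is rotten"
```

Because each word is translated in isolation, the idiom is destroyed; a system that reads the whole sentence at once can avoid this failure mode.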
Decades later, the Google Translate team was still using the same expert-system concept for its translations. The result, unsurprisingly, was that translations often ended up being overly literal, lacking the subtleties of the language. It wasn’t until 2016, when Google abandoned this approach, that its AI translation took off. The team harnessed neural networks to process entire sentences at once, using context to improve the translations.
It wasn’t an overnight success, Rebelo says; instead, years of trial and error brought AI translation to where it is today. “And the success we’re seeing now with generative AI is a little bit of the same thing. It’s a seemingly overnight success that was more than 50 years in the making.”
Rebelo adds that much of this progress has been possible because of the government’s long-standing commitment to funding AI research.
“We are where we are because, despite failure for 50 years, the government has continued to fund this research,” he says.
2. Intuition can lead you to smart risks
Early in her career, when she was an assistant professor at Stanford, computer scientist Fei-Fei Li had an idea of what was holding back artificial intelligence.
Her intuition told her that “what was missing was the data,” says Rebelo. “And that if you had more data and the computing power to process it, you could unlock potentially magical effects.”
Feeling inspired, she put all her resources into pursuing this hunch. So, along with her PhD students, she began tagging images to create a large enough dataset on which to train algorithms, hoping that AI could eventually make sense of the images.
“She decided to do something extremely risky,” says Rebelo, devoting about two and a half years to the project instead of focusing on safer projects that would have more easily helped her secure tenure.
This bet eventually led to the creation of ImageNet, a public dataset containing millions of labeled images. Using this dataset, a group of computer scientists led by Geoffrey Hinton, the “godfather of artificial intelligence,” was able to develop an algorithm that could label images by describing their content. The team entered the algorithm in an artificial-intelligence competition in 2012, where it analyzed a new set of images and completely blew the other algorithms out of the water.
“The improvement was absolutely dramatic,” says Rebelo. “This was an amazing breakthrough, a watershed moment for modern artificial intelligence.”
From that point on, the race was on to gather enough data to feed these insatiable algorithms. People realized that progress in artificial intelligence depended less on building knowledge into algorithms and more on scaling them with big data.
And the advances in artificial intelligence that soon followed were made possible because an early-career scientist decided to take a risk.
3. Did we say stay the course for decades? Try a century
However, not all AI ventures have come to fruition so quickly, Rebelo notes. Some have faced significant obstacles and taken decades, if not a century, to move forward.
Mathematician Andrey Markov, who had spent years working on an early language model, wrote a letter to the St. Petersburg Academy of Sciences in 1921 to tell them that he had made an important discovery.
He had been working on an algorithm designed to write poetry. But there was a problem: he had no way to travel to the Academy to present his work in person. The Academy sent him a pair of boots to help, but they were the wrong size. Markov never made it to the presentation. About a year later, he died.
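Markov’s core idea, predicting each word from the one that precedes it, survives today as the Markov chain. A minimal sketch in Python (the sample text, function names, and parameters are invented for illustration, not drawn from Markov’s actual work):

```python
import random

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = {}
    for current, nxt in zip(words, words[1:]):
        chain.setdefault(current, []).append(nxt)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain: repeatedly pick a random successor of the last word."""
    rng = random.Random(seed)  # fixed seed for reproducible output
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: no word was ever seen after this one
        out.append(rng.choice(successors))
    return " ".join(out)

sample = "the spirit is willing but the flesh is weak"
chain = build_chain(sample)
print(generate(chain, "the"))
```

Modern LLMs are vastly more sophisticated, but they share this same basic framing: predict what comes next, given what came before.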
Almost a century later, in 2017, a team of computer scientists at Google took on the problem Markov had been working on, using a new form of neural network, the transformer, which eventually served as the foundation for today’s popular large language models (LLMs), such as ChatGPT.
“Maybe the discoveries we have now [with LLMs] could have happened much earlier if we hadn’t lost that paper in 1921,” says Rebelo. “Anyway… the transformers have now produced amazing results.”
Despite these developments, artificial intelligence still has many issues to resolve. Hallucination, in which an AI makes up part of the information it provides, is common. This flaw has spooked many organizations. Some law firms, for example, have gone so far as to ban their employees from using LLMs for work after a lawyer was caught submitting to a judge an AI-written brief full of fabricated cases.
But from Rebelo’s point of view, deciding to stop using AI tools out of fear is a mistake—one that will set people back further.
“There’s a lot of anxiety about AI replacing humans,” says Rebelo. “I’ll tell you – the first people to be replaced are the ones who don’t know how to use artificial intelligence. And they will be replaced not by artificial intelligence but by people who know how to use artificial intelligence.”
4. Beware of the hype
Among the many milestones in the development of artificial intelligence, “the most impressive achievements so far are in biology,” says Rebelo, referring to the application of artificial intelligence to understanding protein structures.
By 2019, scientists had determined the structures of about 170,000 proteins, a huge achievement given that solving the structure of a single protein was considered a multi-year project worthy of a PhD thesis. Then, in 2020, the AI program AlphaFold predicted the structures of more than 200 million proteins.
“Clearly, we’re at the dawn of something new,” says Rebelo. “At the same time, there’s a lot of hype and snake oil.”
There’s this impression that “AI is a kind of one-size-fits-all magic tool,” he continues. “The reality is very different.”
Take ChatGPT, for example. To the average user, it looks like a single, complex algorithm capable of doing so much—from all kinds of text generation to audio processing. But behind the scenes, it’s a collection of specialized algorithms that are individually great at doing one particular task, but terrible at most others.
“Sometimes people’s perception is that AI looks like a bunch of pretty, shiny copper pipes,” says Rebelo, “when, in reality, it’s more like my basement, where everything is held together with duct tape. There is a lot of duct-taping in AI.”
There are also concerns that AI is hitting a wall and that OpenAI’s new LLM, Orion, isn’t necessarily better than its predecessor, ChatGPT. Similar rumors exist for Google’s Gemini and the latest version of Anthropic’s chatbot, Claude.
“Whether data scaling will continue to be a source of great improvement in AI, or whether we’re entering an era of diminishing returns … no one knows,” says Rebelo.
But that’s no reason not to celebrate what we’ve achieved so far, he says, because “what’s true is that the recent achievements have been pretty amazing.”