“Just one,” says Kellogg’s Hatim Rahman, assistant professor of Management and Organizations. “And in case you’re wondering, that job was an elevator operator.”
It’s a statistic he wants us to keep in mind as we think about the future of work — specifically, the “fear that artificial intelligence will suddenly lead to mass unemployment.” Rahman talked about how artificial intelligence will affect our careers and our society in a recent The Insightful Leader Live webinar.
His point? “Decades of research show the fear is unfounded.”
Instead, he says, where we head is largely up to us — though we’ll need to be proactive to ensure it’s not chosen for us. Here are four highlights from his presentation.
Even “rapid” change happens more slowly than we think
However much it feels like technology is moving so fast that we can barely keep up, it takes much longer for these developments to fully integrate into society. The same will be true of artificial intelligence. “It will take a long time to penetrate an industry, especially in ways that will affect your career,” says Rahman.
This may be cold comfort to individual illustrators, translators and journalists who have already lost some work to generative artificial intelligence. But zoom out a bit and it’s clear that while these professions are the first to be affected by the technology, they’re hardly obsolete. Instead, they are changing, as AI is gradually integrated into various workflows and new infrastructure emerges to support it. “The more complex the technology, the more technical, human and monetary resources are required to develop, integrate and maintain [the] technology,” says Rahman.
That means we have plenty of time to decide, as a society, how we want to use AI
By now you’ve surely heard the argument that AI is neither good nor bad, just a tool. So much will depend on how we choose to develop it. And there is time to make that choice collectively and intentionally.
We can choose to use AI to replace as many workers as possible—or we can choose to use AI to amplify talent and recognize it in places where it’s not recognized at all. We can choose to let machines make most of the decisions about our health care, education, and defense—or we can choose to keep humans at the helm, ensuring that human values and priorities rule the day.
And it’s not far-fetched to think we’ll actually have a choice. There are already areas where we have chosen to prioritize human involvement. Take the airline industry. Despite estimates that over 90 percent of a pilot’s responsibilities can be automated, our society has still decided to put well-trained pilots, capable of flying manually, in the cockpit in case things go wrong.
Automation has worked quite well for pilots, says Rahman. “Actually, overall, the number of pilots and their pay has increased over the years.”
But … we will need to listen to as many voices as possible when setting our priorities
Of course, pilots have more going for them than just their training: they also have strong professional organizations that can advocate for them. Not all workers are so fortunate, and not all have the same opportunity to shape the employment decisions that affect them.
This is a real problem, says Rahman, “because without diverse voices and stakeholders, the design and implementation of AI has [reflected], and will reflect, the interests of a very narrow group of people.”
For example, much of the current debate surrounding generative artificial intelligence comes from tech companies themselves, eager to find use cases for their products. Perhaps it’s no surprise, then, that “a lot of the way we talk about artificial intelligence is a hammer looking for a nail: ‘Here are large language models. How can we use them?’” says Rahman. “I don’t think this approach will help us thrive. Instead, we should be thinking first and foremost: What outcome is AI trying to measure, predict, or create… and should we use AI to make such predictions? This is where we need different voices and experts in the room to answer this question.”
He advises people to do what they can to join the conversation. Even grassroots organizations made up of like-minded lay people can be effective in pushing companies and local governments to develop and deploy AI in ways that are mutually beneficial.
And because it is impossible to intelligently advocate for our own interests if we remain in the dark about the decisions that affect us, we will also need to demand transparency about how AI systems are trained, used, and double-checked.
Let machines be machines and people be people
Finally, Rahman points out, AI is a bit of a misnomer as the technology is neither “artificial” nor “intelligent”.
AI is not “artificial” in that it is trained on massive amounts of human data and further optimized by a small army of low-paid workers. (It also has a very real carbon footprint.) And AI isn’t “intelligent” in that it still can’t think in any meaningful way. Rahman showed that if you ask a model like GPT-4 “What is the fifth word in this sentence?” it will give a different answer each time, usually a wrong one.
Instead, AI takes human input and processes it probabilistically.
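That probabilistic behavior can be illustrated with a toy next-word sampler. This is only a sketch with a made-up vocabulary and made-up probabilities, not a real language model, but it shows why repeated queries can yield different answers:

```python
import random

# Hypothetical next-word distribution a model might assign after the
# prompt "The cat sat on the" (these probabilities are invented).
next_word_probs = {"mat": 0.6, "floor": 0.25, "roof": 0.1, "moon": 0.05}

def sample_next_word(probs):
    """Pick one word in proportion to its assigned probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Repeated sampling gives varying answers: the system predicts likely
# continuations rather than "knowing" a single correct one.
samples = [sample_next_word(next_word_probs) for _ in range(10)]
print(samples)
```

Real models work over far larger vocabularies and condition on long contexts, but the core step is the same: score possible continuations, then sample one.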
However, it can be incredibly powerful. “AI tends to excel in efficiency, speed and the scale at which it can be applied,” says Rahman. “AI doesn’t get tired or bored the same way humans do.”
Humans, on the other hand, excel at innovation, emotional intelligence, and quick adaptation to new situations, he explains. While it may take thousands of training cycles for an artificial intelligence to learn to tell cats from dogs, a human toddler can draw the same conclusion after spotting just a few goldendoodles.
With these relative strengths and weaknesses in mind, it becomes easier to think of AI systems less as our replacements and more as our partners, able to reinforce whatever human values and priorities we dictate.
You can watch the rest of Rahman’s webinar here.