Many of us have questions about AI, and when you listen to, say, 100 people express their concerns, you notice that the big fundamental questions are much the same across the board.
Recently, we sat in the lecture hall with an audience of students and young professionals trying to understand some of these big issues.
I’ll start with some commonly expressed problems and then move on to one that is critical and integral to the future of the next generation’s careers.
First, there is a lot of excitement around the idea that you can create so many different things with AI and LLMs – but without significant regulation and oversight, we are likely to see some very scary projects emerge. We already have examples such as facial recognition technologies that some law enforcement agencies refuse to use for various reasons, or the ubiquitous third-party tracking of personal data that we are now trying to curb on the Internet.
In other words, people talk a lot about the democratization of data and owning your own data, but sometimes it seems that the buying of data and the influence of third parties overshadow this idealistic notion.
Anyway, another thing that concerns many of our students and young people is democracy.
In particular, consider Ramesh Raskar, who heads the Camera Culture group at MIT, looking at the use of artificial intelligence for surveillance and related topics, with an emphasis on AI and imaging for health and sustainability. (I know Ramesh well; we traveled to India on a research project 12 times in three years and have been good friends ever since.)
When he talks to young people, you hear him describe “democracy 2.0”, suggesting that the political landscape will be different from what it was in the past.
There are some obvious concerns and challenges there. We probably don’t need to list them in great detail – we know that democracy works on certain principles, and most of us understand when those principles are tested and how.
The basic line of thinking here, however, is that AI has a lot of power, and that power must be harnessed appropriately – which then leads you to the question: who decides what is appropriate?
Quotes from Ramesh Raskar:
“Almost every job you can imagine today, that’s an office job, could be replaced by a computer, right? In the future. So what does it mean to go to school and learn? Does that bother you sometimes?”
“If you want to be really good at sports, it almost doesn’t matter what sport you start playing at a young age, because whatever that sport is, whatever the game is, you know, you’re going to be athletic and you’re going to be healthy and you’ll be able to play, right? And the same thing right now: it’s more about ‘learning about learning’. So as long as you’re good at some game, which is, you know, the physics game, the math game, the history game, whatever you’re learning right now… as long as you’re strong at that, you’ll be ready to play the next game they throw at you.”
This is the third point that I have found to be most critical for people just starting out in the business world.
Turning to this topic and responding to those concerns, Ramesh sums it up with a question: “What does it mean to go to school and learn?”
Like many of us with most of our careers behind us, he suggested it is something of a relief to be already established in a world where artificial intelligence will overtake so many office jobs, regardless of their level or the cognitive skills required.
When experts talk to us about this, they often promote another ideal that seems equally ambiguous and may be at risk – the claim that, thanks to the nature of artificial intelligence, we will find new jobs to replace the old ones. What’s really worrying for people trying to do the math is that AI brings obvious labor efficiencies. Yes, there will be new jobs. But how many new jobs for every old job lost? How, specifically, would humans survive the automation of their jobs?
Specifically, we’re looking at the idea that you go to college and learn about things so you can be competitive and competent.
But if you can’t compete with AI, even after you go to college, that’s a problem.
What this discussion reveals to me is that students are learning more about business and entrepreneurship and less about technical skills – or they are learning the technical skills related to working with the technology itself.
Let’s take the case of Python programming. Students may not write much Python by hand anymore, because they can get ChatGPT or some other model to do it for them. But is it still worthwhile to understand Python as a language? Absolutely. It remains a marketable skill when it comes to using assistive technology properly.
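To make that concrete, here is a minimal, hypothetical sketch (the function names and the scenario are my own, not from Ramesh’s talk) of the kind of review work this implies: a snippet of the sort a model might generate, which runs fine in a quick demo but hides a classic Python pitfall – a mutable default argument – that someone who actually understands the language is far more likely to catch.

```python
# Hypothetical model-generated code: looks fine at a glance, but the
# default list is created ONCE and then shared across every call.
def append_item_buggy(item, items=[]):
    items.append(item)
    return items

# What a reviewer who knows Python would write instead: the idiomatic
# None-default pattern, so each call gets a fresh list.
def append_item_fixed(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

print(append_item_buggy("a"))  # ['a']
print(append_item_buggy("b"))  # ['a', 'b'] -- state leaked between calls
print(append_item_fixed("a"))  # ['a']
print(append_item_fixed("b"))  # ['b'] -- independent calls, as intended
```

The point isn’t this particular bug; it’s that reviewing and correcting generated code requires exactly the language understanding the “just ask the model” shortcut appears to make obsolete.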
So this might be a bit of reassurance, or at least a qualifier, for people who are worried about the transition.
In any case, Ramesh’s presentation sheds light on these important issues. Look for more as we delve into the biggest questions about AI and related work today.