Be careful when relying on the AGI aperture of certainty or uncertainty.
In today’s column, I explore the growing hypothesis that there is an aperture of certainty associated with achieving artificial general intelligence (AGI). Here’s what this entails. Some argue that as we get closer to reaching AGI, our certainty that we will do so increases and our uncertainty decreases. In other words, our ability to predict the arrival of AGI progressively strengthens during the arduous journey toward AGI.
Let’s discuss it.
This analysis of an innovative AI development is part of my ongoing Forbes column coverage of the latest in AI, including identifying and explaining various impactful complexities of AI (see the link here).
Heading to AGI and ASI
First, some fundamentals are needed to set the stage for this weighty discussion.
There is a lot of research going on to further advance artificial intelligence. The overall goal is to either achieve artificial general intelligence (AGI) or perhaps even the extended possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered to be the equivalent of human intelligence and can seemingly match our intelligence. ASI is AI that has surpassed human intelligence and would be superior in many, if not all, possible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We haven’t reached AGI yet.
In fact, it is unknown whether we will ever reach AGI, or whether AGI might not be attainable for decades or perhaps centuries to come. The AGI attainment dates floating around vary wildly and are unsupported by any credible evidence or ironclad logic. ASI is even further beyond the pale when it comes to where we currently stand with conventional AI.
Aperture Of Certainty
Shifting gears, consider a rule of thumb about travel and destinations. Our intuition tells us that the closer you get to a goal or destination, the better you can predict when you’ll get there. It is an almost universal principle.
Imagine you are on a hiking trip in the wilderness. At the start of the hike, you probably have only a rough idea of when you’ll arrive at your dreamy campsite next to a serene lake. It might take five hours or as many as ten hours to make your way over the hills. It all depends on how your feet hold up, whether the terrain is reasonable, and many other factors.
After hiking non-stop for two hours in the hot sun, you take stock of where you are and when you might reach the serene lake. Are you now able to better estimate when you will arrive at the campsite? Probably so. You have some distance under your belt and can better judge how the rest of the trek is likely to go.
We could say that the aperture of certainty now gives you a much clearer sense of the time needed to reach your destination. Another way to think of it is that uncertainty has been reduced. This same idea applies to gauging the attainment of all kinds of things in life. The closer you get, the better you seem able to guess when the arrival will be.
AGI Aperture Of Certainty
Does the aperture of certainty apply to achieving AGI?
Many in the AI community assume that this must be the case. It seems to make sense. We are making incremental progress in advancing conventional AI. Each step seemingly brings us closer to the famed AGI. Amid all this incremental progress, we ought to be able to get a better and better fix on when AGI will be attained.
For example, let’s say that the projections for achieving AGI by the year 2040 are about on target (see my analysis of 2040 and other AGI date projections at the link here). A common belief is that within five years of AGI, namely in 2035, we will have a strong sense of whether AGI is indeed going to happen. Meanwhile, at the ten-year distance of 2030, the odds of accurately predicting AGI by 2040 are considered much lower.
You can reframe this by referring to uncertainty rather than certainty. The uncertainty of an accurate 2040 prediction is greater (we are more uncertain) in 2030 than in 2035. Uncertainty decreases as the anticipated target approaches.
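To make the intuition concrete, here is a toy Monte Carlo sketch. The numbers are entirely hypothetical (AGI as a fixed tally of "progress units" with noisy yearly gains is an assumption for illustration, not a real forecast); the point is simply that the range of predicted arrival years narrows from a closer vantage point.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TARGET = 100.0  # hypothetical total "progress units" needed for AGI

def arrival_window(progress_so_far, start_year, trials=10_000):
    """Monte Carlo estimate of the earliest and latest arrival year.

    Annual progress is assumed to be noisy: uniformly 5 to 15 units
    per year. This is a made-up model, not a real AGI forecast.
    """
    arrivals = []
    for _ in range(trials):
        progress, year = progress_so_far, start_year
        while progress < TARGET:
            progress += random.uniform(5, 15)
            year += 1
        arrivals.append(year)
    return min(arrivals), max(arrivals)

# Far vantage point: the year 2030, with little progress banked.
early = arrival_window(progress_so_far=20, start_year=2030)
# Near vantage point: the year 2035, with most progress banked.
late = arrival_window(progress_so_far=80, start_year=2035)

print("Arrival window as seen from 2030:", early)
print("Arrival window as seen from 2035:", late)

# The prediction window is narrower from the closer vantage point,
# i.e., uncertainty has decreased as the target approaches.
assert late[1] - late[0] < early[1] - early[0]
```

Under these assumed dynamics, the 2030 forecast spans several plausible arrival years, while the 2035 forecast spans only a couple; that shrinking spread is the aperture of certainty at work in an idealized world.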
Gotchas On The AGI Pathway
An idealized world would ensure that the aperture of certainty operates smoothly and continuously. But realistically, we don’t live in such a world. Sad face.
Think again about the hike to the idyllic campsite. Just because you’ve already covered two hours of the trek doesn’t necessarily tell you much about what the rest of the trail holds. Unbeknownst to you, maybe an angry bear is lying in wait on the path ahead. The bear will undoubtedly slow your progress, and you might have to wait hours for the beast to move on.
Advances in AI can fall prey to a similar false belief, namely that progress already made will somehow equate to future progress.
Advances in AI could hit a roadblock. Perhaps this delays the march toward AGI by several years, maybe a decade or more. Envision that on the way to the target date of 2040, a serious roadblock arises in the year 2036. Whereas in 2035 everything looked rosy, the blockage in 2036 causes deep problems for the anticipated 2040 attainment.
Another derailer could be that efforts to advance AI become so top secret that it is nearly impossible to gauge how things are progressing. Imagine that in the year 2035, all the AI makers are tight-lipped and won’t reveal the state of their AI. It would be difficult to ascertain the state of the art and, in turn, predict what might emerge by 2040.
Sooner Than You Think
So far, I have highlighted aspects that would delay or stretch out the attainment of AGI. The other side of the coin is that AGI might occur sooner than assumed.
Here is one such scenario. Some fervently believe that we will experience an intelligence explosion, consisting of AI feeding on other AI and rapidly accelerating toward AGI (see my discussion at the link here). Let’s say we are making steady progress on advancing AI, and then, all of a sudden, an intelligence explosion hits. A few minutes or hours later, voila, we arrive at AGI.
No one can say for sure whether we will somehow spark an intelligence explosion. Nor can anyone say for sure whether one could happen on its own, without a human hand at play. Even guessing when an intelligence explosion might occur is widely and wildly debated.
Returning to the aperture of AGI certainty, imagine that the year is 2035 and all predictions line up that by 2040 we will reach AGI. Then, in 2036, an intelligence explosion occurs. It surprises us all. Just like that, AGI is suddenly attained in 2036.
We Do The Best We Can
The upshot is that while it is worthwhile to make predictions about AGI, including doing so with a semblance of certainty, there are many ways the journey can go awry. A hefty grain of salt and a scrutinizing eye should be applied to any prediction of when AGI will be attained.
Peter Drucker, the legendary management guru, said it best about the challenges of forecasting: “Trying to predict the future is like trying to drive down a country road at night without lights while looking out the back window.”
Much the same can be said about the path to attaining AGI.
