In today’s column, I examine a recently published blog post by Sam Altman that has stirred up a lot of hubbub and controversy in the AI community. As OpenAI’s chief executive, Sam Altman is considered a leading AI luminary, and his views on AI’s future carry a great deal of weight. His latest online remarks contain some eyebrow-raising indications about the current and upcoming state of AI, including passages couched partially in AI-speak and other forms of insider phrasing that require careful interpretation and translation.
Let’s talk about it.
This analysis of an innovative AI development is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Heading Toward AGI And ASI
First, some fundamentals are needed to set the stage for this discussion.

There is a great deal of research underway to further advance AI. The overall goal is to reach artificial general intelligence (AGI) or perhaps even the stretch possibility of artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has surpassed human intellect and would be superior in many, if not all, feasible ways. The idea is that ASI could run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet achieved AGI.
In fact, it is unknown whether we will reach AGI, or whether AGI might be achievable decades or perhaps centuries from now. The AGI attainment dates floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even further beyond the pale when it comes to where we are today with conventional AI.
Sam Altman Stirs The Pot
In a new posting on June 10, 2025, entitled “The Gentle Singularity,” the renowned AI prognosticator Sam Altman made these remarks on his personal blog (excerpts):
- “We are past the event horizon; the takeoff has started.”
- “AI will contribute to the world in many ways, but the gains to quality of life from AI driving faster scientific progress and increased productivity will be enormous; the future can be vastly better than the present.”
- “In some big sense, a person’s ability to get much more done in 2030 than they could in 2020 will be a striking change, and one many people will figure out how to benefit from.”
- “This is how the singularity goes: wonders become routine, and then table stakes.”
- “It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year.”
- “We do need to solve the safety issues, technically and societally, but then it’s critically important to widely distribute access to superintelligence, given the economic implications.”
There is a lot to unpack.
Beyond the optimistic thrust of his remarks, there are comments touching on many unresolved considerations, such as the mentioned-but-unspecified AI event horizon, the impacts of artificial superintelligence, the various suggested dates for when we can expect things to take off, and plenty more.

Let’s briefly explore those mainstay details.
The AI Event Horizon
A big question facing those who are deeply into AI is whether we are on the right track to achieve AGI and ASI. Maybe we are, maybe we aren’t. Sam Altman’s reference to the event horizon alludes to the existing path we are on, and he categorically states that, in his view, we have not only reached the event horizon but are already past it. As he puts it, the takeoff has started.
Just to note, this is a claim embodying tremendous boldness and brashness, and not everyone in AI agrees with this view.
Consider these vital aspects.
First, in favor of this perspective, some insist that the advent of generative AI and large language models (LLMs) strongly demonstrates that we are now squarely on the path toward AGI/ASI. The incredible natural-language fluency exhibited by the computational capabilities of modern-day LLMs seems to be a surefire sign that the road ahead must lead to AGI/ASI.
However, not everyone is convinced that LLMs are the right route. There are qualms that we are already seeing headwinds on how much further generative AI can be scaled up; see my coverage at the link here. Maybe we are approaching a formidable barricade, and continued efforts won’t get us much more bang for the buck.

Even worse, we could be off-target and heading in the wrong direction entirely.
No one can say for sure whether we are on the right path or not. It’s a guess. Well, Sam Altman has planted a flag declaring that we are undoubtedly on the right path and that we have already traveled quite a distance down that road. Cynics might find this a self-serving perspective, as it reinforces and affirms the direction that OpenAI is taking.
Time will tell, as they say.
The Supposed AI Singularity
Another notion in the AI field is that there might be a kind of singularity serving as a tipping point, whereby AGI or ASI will readily begin to emerge and strongly signal that we have struck gold on the right path. For my detailed explanation of the supposed AI singularity, see the link here.
Some believe that the AI singularity will be a nearly instantaneous, split-second occurrence, happening faster than the human eye can observe. One moment we will be diligently working to advance AI, and then, bam, the singularity happens. It is thought of as a kind of intelligence explosion, whereby intelligence rapidly begets more intelligence. After the singularity occurs, AI will be leaps and bounds better than it was just beforehand. Indeed, it could be that we end up with full-blown AGI or ASI because of the singularity. One second earlier, we had everyday AI, and a moment later we have astonishing AGI or ASI in our midst, like a rabbit out of a hat.
It could instead be that the singularity will be a lengthy, drawn-out affair.

There are those who speculate that the singularity might start and then take minutes, hours, or days to run its course. The time factor is unknown. Perhaps the AI singularity will take months, years, decades, centuries, or longer to gradually unfold. There is also the possibility that there won’t be anything resembling a singularity at all, and we have merely concocted a theory that has no basis in reality.
Sam Altman’s post seems to suggest that the AI singularity is already underway (or perhaps will happen in 2030 or 2035) and that it will be a gradual unfolding rather than an instantaneous one.
Interesting speculations.
The Dates For AGI And ASI
At this time, efforts to predict when AGI and ASI will be attained consist essentially of holding a finger up to the AI winds and wildly guessing at possible dates. Keep in mind that the hypothesized dates have very little evidence supporting them.
Plenty of prominent AI luminaries have made AGI/ASI date predictions. Those prophecies seem to be coalescing around the year 2030 as the targeted date for AGI. See my analysis of those dates at the link here.
A somewhat quieter approach to the date-guessing gambit involves surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to indicate that AI experts generally believe we will reach AGI by around the year 2040.
Depending on how you interpret Sam Altman’s latest blog post, it is unclear whether he means that AGI arrives by 2030 or 2035, or whether it will be ASI rather than AGI, since he refers to superintelligence, which could be his way of expressing ASI, or perhaps AGI. There is a muddiness in differentiating AGI from ASI. Indeed, I have previously covered his shifting definitions related to AGI and ASI, i.e., the moving of the cheese, at the link here.

We will presumably know how things turned out in just 5 to 10 years. Mark your calendars accordingly.
All Roses, But Real Thorns
One element of the posting that has irked some AI insiders is that the era of AGI and ASI seems to be portrayed as solely uplifting and joyous. We are in a gentle singularity. That is decidedly cheery news for the world at large. Utopia awaits.

There is another side to this coin.
AI insiders are roughly divided into two major camps right now about the impacts of AGI or ASI. One camp consists of the AI doomers. They are predicting that AGI or ASI will seek to wipe out humanity. Some refer to this as “P(doom),” meaning the probability of doom, or that AI zonks us entirely, also known as the existential risk of AI, or X-risk.

The other camp entails the so-called AI accelerationists.
They tend to contend that advanced AI, namely AGI or ASI, is going to solve humanity’s problems. Cure cancer, yes indeed. Overcome world hunger, absolutely. We will see enormous economic gains by freeing people from the drudgery of daily toils. AI will work hand in hand with humans. This benevolent AI is not going to usurp humanity. AI of this kind will be the last invention humans ever make, but that’s good in the sense that AI will invent things we never could have envisioned.
No one can say with certainty which camp is right and which is wrong. This is another polarizing aspect of our contemporary times. For my in-depth analysis of the two camps, see the link here.
You can readily discern which camp the posting sides with, namely, roses and fine wine.
A Final Thought
It is important to mindfully assess the myriad statements and proclamations being made about the future of AI. Oftentimes, the wording seems to assert that the future is fully known and predictable. Delivered with flair and confidence, many of these predictions come across as a done deal and a known certainty, rather than as the bundle of opinions and speculations they really are.
As Franklin D. Roosevelt prudently remarked: “There are as many opinions as there are experts.” Keep your eyes and ears open, and be mindfully skeptical of all the prophecies about the future of AI.

You will be immeasurably happier that you remained cautious and alert.