Artificial Intelligence (AI) is changing. But let’s not forget where we came from. The first concepts of pseudo-sentient intelligence that permeated the mainframe labs of the 1950s may have been too embryonic for the processing and storage power of the era. While they may have given way to the “movie AI” of the 1980s, it wasn’t until the years after the millennium that we began to see real progress, and IBM Watson gained its share (and more) of attention in this space.
AI is now, naturally, changing again, and it isn’t hard to spot why. The rise of generative artificial intelligence (gen-AI), powered by large language models (LLMs) running on vector databases, has barely been out of the tech news all year.
Sharper and more refined AI tools
But as we move into a new year and perhaps some of the hype subsides, what happens next with AI is about refinement and tooling, i.e. specific jobs… and what we’re doing now is creating sharper tools for software application development professionals to put new strains of AI into our applications.
Google famously capped off a year of gen-AI hysteria with the release of its Gemini large language model.
Before we consider how Google is positioning Gemini to reflect current trends, let’s pause for just a nanosecond and remember what we just said here: the IT industry isn’t talking about some higher-level AI engine or model, the glitterati aren’t focused on some new AI-enhanced app that will order you a new pint of milk when the RFID-tagged carton in your fridge passes its best-before date… and we’re not talking about some new AI widget about to appear on our smartphones. Instead, we’re excited about a new approach to data science at a lower substrate level that will permeate upwards to give us better AI. As we said, artificial intelligence is changing.
Fanfare aside, what we can see here is that Google is very much reflecting the need to hone and improve AI at this stage. Technologists want AI tools that can ingest any kind of data and work in the widest variety of post-development scenarios. Google knows this and has built Gemini to be “multimodal” and able to absorb information in text form as well as images, audio and video.
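To make that multimodality tangible, here is a minimal sketch of a mixed text-and-image prompt, assuming the google-generativeai Python SDK and an API key from Google AI Studio; the vision-capable model name, file name and prompt wording are illustrative rather than anything Google prescribes.

```python
# Hedged sketch: multimodal prompting with the google-generativeai SDK.
# The API key, image file and prompt are placeholders for illustration.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # key generated in Google AI Studio

# A vision-capable Gemini variant accepts a list mixing image and text parts.
model = genai.GenerativeModel("gemini-pro-vision")
response = model.generate_content(
    [Image.open("invoice.png"), "Summarise the line items in this scanned invoice."]
)
print(response.text)
```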
Twins & triplets
While we usually think of Gemini as a set of twins in astrological terms, at the very least this Gemini is shaped and scaled as a triple pack. By creating different versions of Gemini, Google says it will “run efficiently” on everything from cloud deployments at the data center level to mobile devices. To enable enterprise software application developers to build and scale with AI, Gemini 1.0 has been optimized in three different sizes:
- Gemini Ultra: The largest and most powerful model for extremely complex tasks.
- Gemini Pro: The model best suited for scaling across a wide range of tasks – calling it multi-purpose might be doing it a disservice, but you get the point.
- Gemini Nano: As the short name suggests, the most efficient model for on-device tasks.
With the interests of real-world software developers at the fore, the company now confirms that Gemini Pro is available to developers through the Gemini API in Google AI Studio, the company’s developer environment designed to let them integrate Gemini models through an application programming interface (API) and develop prompts as they generate code to build generative AI applications. It is also available to enterprises through Google Cloud’s Vertex AI platform.
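For illustration, a text-only call through the Gemini API in AI Studio might look something like the hedged sketch below, again assuming the google-generativeai SDK; the chat session shows the prompt-iteration workflow described above, and the prompts themselves are invented for the example.

```python
# Hedged sketch: iterating on prompts against Gemini Pro via the Gemini API.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # AI Studio API key (placeholder)

model = genai.GenerativeModel("gemini-pro")
chat = model.start_chat()  # multi-turn session, useful while refining prompts

# Generate some code, then refine it in a follow-up turn.
print(chat.send_message("Draft a Python function that validates ISBN-13 codes.").text)
print(chat.send_message("Now add unit tests for that function.").text)
```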
Why is Gemini available through both routes? AI Studio is a free, web-based developer tool designed to encourage usage and spark interest. Google says that when coders are ready for a fully managed AI platform, they can port their AI Studio code to Vertex AI for additional customization capabilities and the rest of Google Cloud, but at a cost; there’s no such thing as a free AI meal, as we know.
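From a coder’s point of view the Vertex AI route looks broadly similar; the sketch below assumes the Vertex AI Python SDK (google-cloud-aiplatform) and its preview generative-models module, with the project ID and region as placeholders. The port is largely a matter of swapping the AI Studio API key for Google Cloud project credentials, which is where the managed-platform (and billed) part of the story begins.

```python
# Hedged sketch: the Vertex AI equivalent of the AI Studio call above.
# Project, region and prompt are placeholders; authentication comes from
# the Google Cloud project rather than an AI Studio API key.
import vertexai
from vertexai.preview.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")

model = GenerativeModel("gemini-pro")
response = model.generate_content("Summarise this quarter's release notes in three bullet points.")
print(response.text)
```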
Shaping AI for health
If the tendency right now is to shape and sharpen AI (and we can generally take scale as a given), we can see it borne out in Google’s introduction of MedLM, a family of foundation models fine-tuned for the healthcare industry. MedLM is available to US Google Cloud customers through Vertex AI, with wider availability promised next year.
The company wants to show a friendly face as it tries to encourage coders to engage with its AI technologies by providing further tools and assistance. According to Google’s AI blog, “Duet AI for Developers is now generally available. This always-on partner from Google Cloud offers code and AI chat assistance to help users build apps within their favorite code editor and software development lifecycle tools. It also improves applications running on Google Cloud — and Duet AI for Developers gives businesses built-in support around privacy, security and compliance requirements. We will be integrating Gemini into our Duet AI portfolio in the coming weeks.”
What will happen next, worldwide
While Google has reflected (some would say led, others would say followed) trends in the AI industry in general and worked to sharpen and shape AI, from the way it takes in information to the way it can be implemented, there are still (obviously) challenges ahead. Not all of these technologies are available in every region: Google is rolling out in the US first, followed by Europe (and the rest of the world), so in terms of international development drivers, and perhaps governance, there’s a broader question going forward.
We’ve mentioned the medical industry here; there’s also work to deliver Google Duet AI to the security operations (SecOps) space and to make generative AI generally available to defenders in a unified SecOps platform. This is great for security teams, but there are many other tech engineers a) on the business team and b) in the wider IT department who will want to join the generative AI movement and be able to work concurrently (software parallelism pun intended) with their colleagues.
Artificial intelligence is changing and will continue to do so. Although many believe this year of generative AI stands out as a significant moment in time, let’s hope that developers get the right tools and that we remain under no illusions.