AI is agnostic, thankfully. As software developers now create the new breed of Artificial Intelligence (AI)-enhanced applications we’ll use to lead our lives, we can perhaps be thankful that AI has no vices, no preferences, and is agnostically indifferent about where it is implemented, what tasks are assigned to it and who ends up using it.
This reality (albeit a largely virtual one) means that we can apply AI to petrochemical facilities in the oil and gas sector, we can use it in financial markets, and its use cases extend to the operation of small businesses that specialize in making cakes to order.
AI works anywhere, but it also works on any thing.
AI for images
Driving AI in the image space is Landing AI, a computer vision cloud platform company that specializes in helping businesses build what it calls Domain-Specific Large Vision Models (LVMs).
Just to break it down: the domain-specific part refers to specialized image library collections that are specific to individual industries, which in this case includes agriculture, medical devices, food and beverage, manufacturing and so on; we could add “infrastructure” (as in civil engineering city infrastructure) as well. In this context, domain-specific also means that LVMs are trained using a company’s private images, i.e. many companies have hundreds of thousands, millions or billions of images, most of which are different from the Internet images on which other models have been trained.
Furthermore, just as a Large Language Model (LLM) is a collection of text-based intelligence, facts and propositional strings, words and values, a Large Vision Model (LVM) is a collection of images depicting groups of objects and things at various stages, ranked or not, as the case may be.
Landing AI says its platform enables businesses with huge image libraries to bring artificial intelligence to their proprietary image data, enabling flexible in-field applications to meet business needs. Using LVMs, it promises enterprises the ability to unlock intelligence from their images at a much faster rate than before, while protecting their privacy with domain-specific LVMs.
So can Landing AI give us a working example? Take biotechnology and pharmaceuticals, where a microparticle in a syringe can mean the difference between life and death.
Locate the microparticle
“High-volume fluid testing machines require dynamic measurement of relevant volumes for accurate diagnosis to save millions of lives through early disease detection. Landing AI’s visual inspection workflow enables teams and inspectors to build reliable AI models that solve problems previously thought impossible for biotech companies to automate,” the company notes on its website.
With the company’s LVM technology, companies can take unlabeled image data and create high-performance LVMs that serve as the basis for solving a variety of computer vision tasks in their specific domains. This happens much faster than with traditional approaches, because companies save months of work by not having to tag huge libraries of images. And they should see improved accuracy and performance across downstream computer vision tasks, given the intelligence of the LVM.
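As a rough illustration of that idea (a sketch of the general technique, not Landing AI’s actual pipeline), a vision backbone can be pretrained on unlabeled domain images with a self-supervised, SimCLR-style contrastive objective, so that labels are only needed later for a specific task. The folder path and hyperparameters below are hypothetical:

```python
# Minimal self-supervised pretraining sketch (not Landing AI's pipeline):
# learn features from unlabeled domain images via a contrastive loss.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),
    transforms.ToTensor(),
])

class TwoViews:
    """Return two independent augmentations of the same image."""
    def __call__(self, img):
        return augment(img), augment(img)

def nt_xent(z1, z2, temperature=0.5):
    """Normalized temperature-scaled cross-entropy (SimCLR-style) loss."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))          # ignore self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

backbone = models.resnet50(weights=None)
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 128)  # projection head

# Hypothetical path; unlabeled images can sit in one dummy subfolder,
# since the class labels ImageFolder produces are simply ignored.
data = ImageFolder("unlabeled_factory_images/", transform=TwoViews())
loader = DataLoader(data, batch_size=256, shuffle=True, drop_last=True)
opt = torch.optim.AdamW(backbone.parameters(), lr=1e-4)

for (v1, v2), _ in loader:                     # labels unused: data is unlabeled
    loss = nt_xent(backbone(v1), backbone(v2))
    opt.zero_grad(); loss.backward(); opt.step()
```

The labeled-data saving comes from exactly this step: the contrastive objective manufactures its own supervision from image pairs, so the expensive tagging work is deferred to a small task-specific set.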
“The Large Vision Model revolution follows the Large Language Model revolution, but with one key difference – while the Internet text from which the LLMs were taught is similar enough to most corporate texts to apply the model, many companies in manufacturing, life sciences, geospatial data, agriculture, retail and other fields have proprietary images that look nothing like the typical Instagram photos found on the web,” said Andrew Ng, CEO of Landing AI. “This domain-specific LVM deployment is key to unlocking the value of images in these domains.”
Histopathology images
Landing AI builds and runs domain-specific LVMs for enterprises in scenarios where (as another example) there is a need for analysis, such as production line images to find manufacturing defects or histopathology (the diagnosis and study of tissue diseases) images to find cancer cells in the life sciences.
While generic LVMs built on Internet images are one-size-fits-all, Landing AI LVMs focus on one domain at a time, helping to solve proprietary problems faced by businesses. Today enterprises spend a non-trivial amount of effort training individual models for each vision task, even when those tasks belong to the same business domain. With domain-specific LVM, the goal is for companies to use a limited set of LVMs, one for each business domain, and meet their needs for solving multiple vision tasks in each domain.
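To make the “one LVM per domain, many tasks” idea concrete, here is a minimal sketch (an assumed architecture, not Landing AI’s implementation) in which a single frozen domain backbone feeds several lightweight task heads; the head names and dimensions are hypothetical:

```python
# Sketch of one domain backbone shared across task heads (assumed design).
import torch
import torch.nn as nn

class DomainLVM(nn.Module):
    """One pretrained backbone per business domain, many lightweight heads.

    Assumes the backbone outputs pooled feature vectors of size feat_dim.
    """
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False            # reuse frozen domain features
        self.heads = nn.ModuleDict({
            "defect_classifier": nn.Linear(feat_dim, 2),  # defect / no defect
            "cell_counter": nn.Linear(feat_dim, 1),       # regression head
        })

    def forward(self, images: torch.Tensor, task: str) -> torch.Tensor:
        features = self.backbone(images)
        return self.heads[task](features)

# Usage: model = DomainLVM(pretrained_backbone, feat_dim=128)
#        scores = model(batch, task="defect_classifier")
```

The design point is that a new vision task in the same domain only means adding and fitting another small head, not training and maintaining a whole new model.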
The company says that through the use of LVMs, companies will more quickly identify solutions for tasks such as object detection, image segmentation, visual prompting and other AI vision-enabled applications. The LVM capability is in line with the organization’s wider work on generative AI; in April, it announced the Visual Prompting feature as part of its LandingLens offering.
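For a sense of what one such downstream task looks like in code, here is a generic off-the-shelf object detection example (plain torchvision, not LandingLens or its API); the image filename is hypothetical:

```python
# Generic object detection example (torchvision, not LandingLens).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
img = convert_image_dtype(read_image("production_line_frame.jpg"), torch.float)
with torch.no_grad():
    detections = model([img])[0]               # dict of boxes, labels, scores
keep = detections["scores"] > 0.8              # confidence threshold
print(detections["boxes"][keep], detections["labels"][keep])
```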
As a final example here, artificial intelligence in the form of LVMs has been used in the food industry for a long time. It is now moving out of the processing plant and closer to the field, where vision systems are used to help farmers optimize yields, minimize chemical use for greater yield and sustainability, and undertake chores such as weeding and picking.
How many more big models?
What’s next? Large Audio Models (LAMs) for audio signal processing? Yes, they already exist. Large Touch Models (LTMs) don’t necessarily exist yet, but NTT has already worked on simulating tactile sensory touch-sharing technologies at NTT Docomo’s Tokyo labs. Tactile (i.e. touch-related) information is quantified in terms of human touch vibrations measured with a device similar to a piezoelectric sensor.
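As a toy illustration of what “quantifying” a touch vibration might mean (an assumption for the sake of example, not NTT’s method), a raw piezo-style waveform can be reduced to a few shareable numbers:

```python
# Toy sketch (not NTT's method): reduce a touch-vibration trace to features.
import numpy as np

def touch_features(signal: np.ndarray, sample_rate: int) -> dict:
    """Summarize a vibration trace by energy and dominant frequency."""
    rms = float(np.sqrt(np.mean(signal ** 2)))        # overall vibration energy
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate)
    dominant_hz = float(freqs[np.argmax(spectrum)])   # strongest vibration tone
    return {"rms": rms, "dominant_hz": dominant_hz}

# Example: a synthetic 250 Hz "texture" vibration sampled at 8 kHz.
t = np.linspace(0, 1, 8000, endpoint=False)
print(touch_features(0.3 * np.sin(2 * np.pi * 250 * t), sample_rate=8000))
```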
We may have to prepare for Large Smell Models (LSMs) too, if machine olfaction can be refined soon enough. After all of this, Large Emotion Models (LEMs) may begin to track our ability to fall in love. Hopefully some of life (even with the existence of dating apps) will remain mostly organic and natural for now, right?