(OPINION) Google has launched Gemini, a new artificial intelligence (AI) system that can seemingly understand and talk intelligently about almost any kind of prompt – pictures, text, speech, music, computer code and much more.

This type of AI system is known as a multimodal model. It’s a step beyond earlier models, which could handle only text or only images. And it provides a strong hint of where AI may be going next: being able to analyze and respond to real-time information coming from the outside world.

Although Gemini’s capabilities might not be quite as advanced as they seemed in a viral video, which was edited from carefully curated text and still image prompts, it is clear that AI systems are rapidly advancing. They are heading towards an ability to handle more and more complex inputs and outputs.


To develop new capabilities, AI systems depend heavily on the kind of “training” data they have access to. They are exposed to this data to improve at what they do, including making inferences such as recognizing a face in a picture or writing an essay.

At the moment, the data that companies such as Google, OpenAI, Meta and others train their models on are still mainly harvested from digitized information on the internet.

However, there are efforts to radically expand the scope of the data that AI can work on. For example, by using always-on cameras, microphones and other sensors, it would be possible to let an AI know what is going on in the world as it happens.

Google’s new Gemini system has shown that it can understand real-time content such as live video and human speech. With new data and sensors, AI will be able to observe, discuss and act upon occurrences in the real world.
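To make this concrete, here is a minimal sketch of what a multimodal prompt looks like in practice, using Google’s google-generativeai Python library: a still image and a text question are sent to a Gemini model in a single request. The API key placeholder, model name and image file are illustrative assumptions, not details from this article.

```python
# Minimal sketch: a multimodal (image + text) prompt to Gemini.
# Assumes the google-generativeai package is installed and a valid
# API key is available; the image file is an illustrative placeholder.
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")

image = PIL.Image.open("street_scene.jpg")          # any local photo
model = genai.GenerativeModel("gemini-pro-vision")  # vision-capable model

# One request can mix media and text; the model reasons over both together.
response = model.generate_content(
    [image, "Describe what is happening in this scene."]
)
print(response.text)
```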

The most obvious example of this is self-driving cars, which already collect enormous amounts of data as they drive on our roads.

This information ends up on the manufacturers’ servers, where it is used not just in the moment of operating the vehicle, but also to build long-term computer-based models of driving situations that can support better traffic flow or help authorities identify suspicious or criminal behavior.

In the home, motion sensors, voice assistants, and security cameras are already used to detect activity and pick up on our habits. Other “smart” appliances are appearing on the market all the time.

While early uses for this are familiar, such as optimizing heating to reduce energy usage, the understanding of our habits will become much more advanced.

This means that an AI could both infer current activities in the home and predict what will happen in the future. This data could then be used, for instance, by doctors to detect the early onset of ailments such as diabetes or dementia, as well as to recommend and follow up on lifestyle changes.

As AI’s knowledge of the real world becomes even more comprehensive, it will act as a companion in every aspect of daily life. At the grocer’s, I will be able to discuss the best and most economical ingredients for a meal I am planning.

At work, AI will be able to remind me of the names and interests of clients in a face-to-face meeting – and suggest the best way to secure their business. When on a trip in a foreign country, it will be able to maintain an ongoing conversation about local tourist attractions, while keeping an eye on any potentially dangerous situations I might encounter.
