Amazon has unveiled a new, generative AI-powered version of Alexa, its popular digital assistant, saying it is ready for a world of “ambient computing”: AI in your spectacles, watches, rings and even embedded under your skin. Is this the next big shake-up in tech?
What exactly is ambient computing?
Put simply, ambient computing refers to technology that you may not see but that is ever-present all around you. For instance, Amazon’s new generative AI-powered Alexa+ envisions a future where you no longer need to issue Alexa several separate commands, but can instead give it one long, natural-sounding instruction that it processes and responds to, much as a person would. Over the next two years, you’ll see more such technology go mainstream, letting you speak to and interact with gadgets in a seamless, human-like way.
Ambient computing, in other words, is set to alter how we use most modern gadgets, bringing about a sea change in our habits. That change will rest on technology being woven into our surroundings, including advanced home automation in which robots handle the laundry and the groceries too.
How will it change the way you consume tech?
As ambient computing becomes mainstream, your interactions with tech will become less purpose-driven and more automated. For instance, if you’re speaking with a colleague about a future meeting, any internet-enabled device you have will automatically schedule it in your calendar. This also means you’ll use fewer apps on your mobile phone or computer for most of the things you do today: ordering food, hailing a cab or even sending a surprise gift online. Further, ambient computing will see your daily tech needs handled by a combination of smart wearables such as glasses, watches and rings, making you even less reliant on your phone. The goal of ambient computing is to make technology less intimidating and more natural for all.
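To make the scheduling example concrete, here is a minimal, hypothetical sketch in Python. The keyword-matching extract_meeting function is a stand-in for what an ambient AI model would infer from a conversation; none of it reflects any vendor’s actual software.

```python
# Hypothetical sketch of ambient scheduling: a snippet of overheard conversation
# becomes a draft calendar entry. A simple pattern match stands in for the AI step.
import re
from datetime import date
from typing import Optional

def extract_meeting(transcript: str) -> Optional[dict]:
    """Return a draft calendar event if the conversation mentions a meeting."""
    match = re.search(r"meet(?:ing)?\s+(?:on\s+)?(\w+day)", transcript, re.IGNORECASE)
    if not match:
        return None
    return {
        "title": "Meeting (auto-detected)",
        "day": match.group(1).capitalize(),
        "created_on": date.today().isoformat(),
        "needs_confirmation": True,  # a real assistant would still confirm with the user
    }

print(extract_meeting("Let's have the project meeting on Thursday, does that work?"))
```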
Isn’t such technology here already?
Early signs of ambient computing can already be seen in always-on smart speakers such as Amazon’s Alexa-powered devices, which line up a music playlist based on when you sit down for dinner; smart plugs, such as Cisco’s, that switch on the geyser based on how close you are to home; and mixed-reality headsets such as Apple Vision Pro, which can replicate a large-screen theatre experience from your sofa. All of them, however, are still in the early stages of refinement.
How will AI contribute to ambient tech?
Unlike today’s touch-based tech interfaces, ambient AI’s core operating interface will be voice, enabled by large language models such as OpenAI’s new GPT-4.5 along with natural language processing (NLP). Together, they give machines an understanding of human conversation and sentiment. AI will be the mainstay of ambient computing, forming the foundational layer that processes information without needing our input. AI will also enable cross-communication between devices, ensuring that all our accessories work as one seamless, invisible network.
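As a rough illustration of that voice-first layer, the sketch below chains three stages: speech-to-text, intent parsing and device dispatch. Every function here (transcribe, parse_intent, dispatch) is a hypothetical stub standing in for the model-driven components, not any product’s real API.

```python
# Minimal sketch of a voice-first ambient pipeline: speech -> intent -> device action.
# All names are illustrative stand-ins, not a real vendor API.
from dataclasses import dataclass

@dataclass
class Intent:
    action: str       # e.g. "play_music"
    parameters: dict  # e.g. {"genre": "jazz", "room": "kitchen"}

def transcribe(audio: bytes) -> str:
    """Stand-in for a speech-to-text model."""
    return "play some jazz in the kitchen"

def parse_intent(utterance: str) -> Intent:
    """Stand-in for the LLM/NLP step that maps free-form speech to a structured intent."""
    if "jazz" in utterance and "kitchen" in utterance:
        return Intent("play_music", {"genre": "jazz", "room": "kitchen"})
    return Intent("unknown", {})

def dispatch(intent: Intent) -> str:
    """Stand-in for the layer that routes intents to connected devices."""
    if intent.action == "play_music":
        return f"Playing {intent.parameters['genre']} on the {intent.parameters['room']} speaker"
    return "Sorry, I didn't catch that"

print(dispatch(parse_intent(transcribe(b"raw microphone audio"))))
```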
How will we live without phones or apps?
The goal of ambient computing is to ensure we won’t need our phone for many use cases. Last year’s Rabbit R1 and this year’s AI Phone by Deutsche Telekom have showcased a world where AI models replace the apps on a phone: instead of you opening an app, the model taps a service such as Uber to book a cab based on your voice command. In future, augmented-reality smart glasses powered by voice AI models will replace the need to own a phone entirely, overlaying notifications in front of your eyes based on what you ask for.
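A minimal sketch of that app-less flow, assuming an AI agent maps a spoken request to a single service call: the plan_action and request_ride functions below are hypothetical stubs, not Rabbit’s, Deutsche Telekom’s or Uber’s actual interfaces.

```python
# Illustrative sketch of the "apps replaced by an AI model" idea: a voice request
# is routed straight to a service instead of opening an app. The booking call is
# a stub; real ride-hailing services expose their own authenticated APIs.

def plan_action(utterance: str) -> dict:
    """Stand-in for an AI model deciding which service fulfils the request."""
    if "cab" in utterance or "ride" in utterance:
        return {"service": "ride_hailing", "destination": utterance.split(" to ")[-1]}
    return {"service": "none"}

def request_ride(destination: str) -> str:
    """Stub for a ride-hailing booking made on the user's behalf."""
    return f"Booking a cab to {destination} (simulated)"

command = "Get me a cab to the airport"
plan = plan_action(command.lower())
if plan["service"] == "ride_hailing":
    print(request_ride(plan["destination"]))
```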