This article first appeared on dignari.com.
At its I/O developer conference yesterday, Google introduced a new app called Google Lens that may be the next step toward living in augmented reality.
The app provides vision-based computing capabilities that allow you to use your mobile device to delve deeper into the visual aspects of a photo, video, or live feed.
Besides being damn cool, what does this really mean?
This means your phone will be able not only to take a photo or record video, but also to interpret the objects in its field of view and provide contextual, actionable intelligence about them.
Now you can take a picture of a flower, and Google Lens will not only tell you it’s a flower but also what type of flower it is and how to care for it.
How about a picture or a video of a restaurant? Sure, Google will let you know it’s a restaurant and will interpret the sign and tell you the name. That’s great. But the real magic is when Google also tells you where the nearest location is and the type of food it serves, pulls up the associated Yelp reviews, offers you relevant coupons, and gives you step-by-step directions for how to get there.
What about automating tasks based on images? Point your phone at your home’s router, and it immediately recognizes the WiFi network and connects you to it automatically. (Side note - this would actually come in really handy with my daughter when she has her friends over and everyone needs my WiFi password.)
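To make that automation concrete: one plausible pipeline is to OCR the sticker on the router and then parse the credentials out of the recognized text. The sketch below shows only the parsing step, with a hypothetical `parse_router_label` helper and a made-up label layout; real router stickers vary widely, and the OCR and connection steps are assumed to happen elsewhere.

```python
import re

def parse_router_label(ocr_text):
    """Best-effort extraction of Wi-Fi credentials from OCR'd router-label text.

    Assumes a common "SSID: ... / Password: ..." sticker layout; returns None
    when either field can't be found.
    """
    ssid = re.search(r"SSID[:\s]+(\S+)", ocr_text, re.IGNORECASE)
    password = re.search(r"(?:Password|Key|WPA2?)[:\s]+(\S+)", ocr_text, re.IGNORECASE)
    if not (ssid and password):
        return None
    return {"ssid": ssid.group(1), "password": password.group(1)}

# Example: text as it might come back from an OCR pass over the sticker
label = "Model X100\nSSID: HomeNet-5G\nPassword: s3cret-pass"
print(parse_router_label(label))  # {'ssid': 'HomeNet-5G', 'password': 's3cret-pass'}
```

The credentials would then be handed off to the operating system's network stack; that part is platform-specific and omitted here.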
What all of this means is that we are much closer to a deeper interaction with the physical world. That simple 2D image on your mobile screen can now be expanded exponentially across a multitude of dimensions. Each object can be identified and explored for further information and intelligence.
Your mobile device now contains a computer vision system that rivals, and in some tasks surpasses, human sight, with the expanse of the Internet embedded under every object or image the human eye sees and your phone captures.
This technology has been around for a while, and Google even experimented with similar capabilities in Google Goggles. So what makes this different?
The maturity and extensibility of the software and the growing consumer appetite are probably the biggest reasons why this is so newsworthy. This is just one more step toward a world where human-computer interaction is amplified.
Imagine the evolution of this technology. What is currently done via the mobile phone could morph into a wearable: a device providing real-time, in-depth, and interactive knowledge of your surroundings.
Add the explosive growth and acceptance of biometrics and suddenly you become fully aware of everyone and everything you interact with in the physical world. Every person, place, or thing is now a search query with the power and breadth of the Internet readily available beneath the covers.
You may think this is far-off technology, yet there is an undercurrent bubbling in the tech space that is making artificial intelligence, machine learning, and object recognition commonplace. Technologies such as Google Lens will continue to evolve at breakneck speed and will start to blur the line between the physical and the digital.
Are we that far off from hyper-reality?
The rapid advancement of computer capabilities is no doubt exciting. Interacting with our world in new and advanced ways our ancestors never even imagined is thrilling. Yet, with great power comes great responsibility.
While these technologies are sure to woo early adopters, a measured pace needs to be taken before broader adoption. The attack surface grows and criminal opportunities expand. As the Internet of Things (IoT) learned the hard way, security needs to be baked in from the beginning.
Even with this being a cautionary tale replete with risk, it’s still damn cool.