Envision has announced that its AI-powered smart glasses will soon be upgraded with improved optical character recognition (OCR), more accurate text recognition backed by contextual intelligence, support for additional languages, and a new third-party application ecosystem.
According to Envision, the new ecosystem will allow “easy integration of specialized services, such as indoor and outdoor navigation, into the Envision platform.”
Envision has based its smart glasses on the Enterprise edition of Google Glass, using its built-in camera and processing power to capture and process visual data that helps visually impaired people recognize objects and their environment. Although Google Glass failed to gain a large consumer following through its multiple releases, it has since found its way into niche use cases, with Envision repurposing it as a hardware vehicle for its AI-based platform.
Other attempts have been made in the past to use AR (augmented reality) technology to help people with visual impairments, but those efforts largely focused on dynamically changing zoom levels to help users take advantage of the limited vision they had. Envision instead uses AI to translate what its camera sees into audio cues played through accompanying speakers.
Google previously used its Lens AI to power a similar platform called Google Lookout, which could provide descriptive readouts of whatever a connected camera was pointed at. A similar service offered by Facebook, which lets visitors hear descriptions of posted photos, was also updated last year.
The original Envision model based on Google Glass debuted in 2020 and was designed to help users read documents, recognize individuals, find belongings, use public transport and achieve greater personal freedom.
Envision says its updated smart glasses can help users read typed or handwritten text on documents, product packaging, screens, and a wide range of other surfaces; translate the recognized text into 60 languages; and recognize people, colors, and other important visual cues. If the on-board AI system is unable to complete a task, Envision also offers its “ally feature,” which connects users via video call to a sighted person who can assist.
Envision offers its platform through the aforementioned Google Glass implementation, as well as through iOS and Android apps. A video detailing the Google Glass version can be viewed on the Envision website.