The Revolution of AI Accessibility Features for Visually Impaired People
Artificial intelligence is changing the world, and nowhere is its impact more profound than in accessibility for people with sight loss. AI is going beyond simple screen readers to create new ways for the visually impaired community to interact with and understand the world around them. These innovations are helping people to navigate, read, and connect with more independence and confidence than ever before. This article explores some of the latest and most impactful AI accessibility features for visually impaired people.

AI in Smart Glasses: A Hands-Free World
Smart glasses are no longer just a gadget for tech enthusiasts; they have become a powerful tool for those with visual impairments. These devices use on-board cameras and AI to provide real-time audio descriptions of the user’s surroundings.
Envision Glasses
Envision glasses are a dedicated assistive technology designed specifically for people with sight loss. They offer a comprehensive suite of AI-powered features for hands-free use.
- Text and Document Reading: They excel at reading text from any surface, including street signs, handwritten notes, and product labels. The “Scan Text” feature can handle longer documents, while “Batch Scan” can process multiple pages at once. The “Smart Guidance” feature helps the user position the document correctly for the best scan.
- Object and Scene Recognition: The glasses can describe scenes, identify colours, recognise cash, and locate specific objects like keys or a coffee cup. The “Find People” feature allows a user to save faces and be notified when a familiar person is in their field of view.
- Live Human Assistance: The glasses have a dedicated feature to connect with a sighted person. Users can make a hands-free video call to a trusted friend, family member, or a professional agent, who can then see what the user sees and provide real-time guidance.
- Control and Navigation: The glasses are controlled through a combination of voice commands and intuitive touch gestures on the side of the frame, allowing for completely hands-free operation.
Find out more about the Envision Glasses.
Meta AI in Glasses (Ray-Ban and Oakley)
Meta has integrated its powerful AI into its smart glasses, including the stylish Ray-Ban and performance-focused Oakley models. While not exclusively designed for accessibility, they offer valuable features for visually impaired people.
- “Look and Tell”: The core of their accessible AI functionality is the “Look and Tell” feature, which allows a user to ask questions about their surroundings and receive an immediate spoken response. This can be used to identify objects, read street signs, or get a description of a scene.
- Live Assistance: These glasses also integrate with the “Be My Eyes” network, allowing users to connect with a sighted volunteer for live visual assistance through the glasses’ camera. The call is hands-free and voice-initiated with the prompt “Call a Volunteer”. Note that volunteer availability can vary by region and time of day.
- Hands-Free Communication: With voice commands, users can send messages and make calls on WhatsApp and Facebook Messenger without having to use their hands or a phone screen. The open-ear speakers let users hear audio while staying aware of their surroundings.

Ray-Ban Meta

Oakley Meta
Here is how the two lines differ:
| Feature | Oakley Meta HSTN | Ray-Ban Meta |
| --- | --- | --- |
| Design & fit | Wrap-around, lightweight performance frame in Oakley’s O-Matter; stays put during activity | Fashion-centric Wayfarer, Headliner and other styles |
| Lens options | Oakley PRIZM™ tints for higher contrast outdoors | Standard sunglass, clear, or transition lenses |
| Camera & video | 12 MP stills, up to 3K video (2880 × 1536) | 12 MP stills, 1080p video |
| Typical battery life | 8 hours (48 hours with case) | 4 hours (32 hours with case) |
| Water resistance | IPX4 (splash-proof) | IPX4 (splash-proof) |
| Voice & AI | Meta AI voice assistant (same feature set) | Meta AI voice assistant |
When the Oakley option could be the better fit:
- Active users – The snug, sport frame resists slipping while running, cycling, or during mobility-training sessions.
- All-day users – Double the runtime means fewer mid-session top-ups and less dependence on the charging case.
- Those needing sharper video for remote sighted assistance – 3K capture on the Oakley glasses offers clearer text and scene detail compared to the 1080p on the Ray-Ban.
- Users with low-contrast sensitivity – PRIZM lenses enhance colour separation and edge definition outdoors.
- Peripheral-vision-loss cases – The continuous lens curve broadens the visual field compared with Ray-Ban’s smaller flat lenses.
Those who prefer classic frames may still gravitate to Ray-Ban; having both ranges simply allows for matching the glasses to the individual rather than a one-size-fits-all choice.
We have lots more information on the Ray-Ban Meta Smart Glasses, some useful commands, and a one-month review. Find out more below.
Find out more about Meta AI glasses.
Mobile Apps and Software: Powerful AI Accessibility Features for Visually Impaired People
Beyond wearables, a number of mobile apps and software platforms are harnessing the power of AI to provide vital assistance.
Seeing AI (Microsoft)
This free app uses AI to provide a variety of functions, from reading short text to describing images and even identifying products by scanning their barcodes. The app can also recognise people and describe their facial expressions.
Be My AI (Be My Eyes)
This new feature within the popular “Be My Eyes” app allows users to submit photos to an AI for a detailed visual description. This is particularly helpful for getting a quick description of something without having to wait for a volunteer.
Project Astra (Google)
Google is exploring new frontiers with “Project Astra,” an experimental AI that can act as a “visual interpreter.” The prototype can describe its surroundings in real-time as the camera moves, and even understands context to provide more helpful information. The goal is to integrate this technology into future devices, including smart glasses.
Android’s TalkBack and Gemini Integration
Google has now added its Gemini AI as an option in the native Android screen reader, TalkBack. This is a game-changer for people with sight loss. Now, in addition to the standard screen reading functions, users can open the TalkBack menu with a three-finger tap and use the “Ask Gemini” button to get detailed AI-generated image descriptions. Users can even ask follow-up questions about an image or the current screen by using their voice or keyboard. This allows for a deeper and more conversational understanding of visual content, from images in a social media feed to documents on the screen.
Find out more about Gemini and Android accessibility.
Gemini on Apple iOS
Google’s Gemini is also available on Apple devices through the Gemini mobile app. While it doesn’t integrate directly with native iOS accessibility features in the way it does with TalkBack on Android, it still provides a wide range of powerful AI accessibility features for visually impaired people. Users can interact with the AI using text, voice, or their camera.
The “Gemini Live” feature is particularly useful, allowing for a real-time, two-way conversation with the AI. By enabling camera access in the Gemini app, users can point their phone at a scene and have a natural, conversational dialogue with the AI to get real-time descriptions of their surroundings.
Find out more about Gemini Live.
Apple’s Native Accessibility Features for Visually Impaired People
Apple’s iOS platform has long been a leader in accessibility, and it continues to add new AI-powered tools. The built-in screen reader, VoiceOver, provides spoken descriptions of what’s on the screen. In addition, the native Magnifier app has features that use the phone’s camera to describe what it sees. “Detection Mode” can provide real-time descriptions of visual information, including the detection of people, doors, or text. Users can also use the iPhone as a magnifying glass to zoom in on objects.
We have already written an article on the new iOS 26 Accessibility Features. Click the image below to read what’s new with Apple, and their full list of latest AI accessibility features for visually impaired people.
Read Apple’s full list of Powerful Accessibility features.
The Role of Large Language Models (LLMs)
Large language models (LLMs), such as Gemini, are a crucial part of the latest wave of AI accessibility. These models are trained on vast amounts of text and image data, allowing them to understand context, generate human-like language, and interpret complex visual information. You might have heard of some of these, such as Google’s “Gemini” and OpenAI’s “ChatGPT”.
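For readers curious about what happens behind the scenes, here is a minimal sketch of how an app can send a photo to a Gemini model and ask for a spoken-style description, using Google’s publicly available generative AI library for Python. The file name, prompt wording, and model choice are illustrative assumptions and are not tied to any of the products described above.

```python
# Minimal sketch: asking a Gemini model to describe an image.
# Assumes the google-generativeai package is installed and you have an API key;
# the image file and prompt wording below are illustrative examples only.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-1.5-flash")
photo = Image.open("street_scene.jpg")  # assumed example image

response = model.generate_content(
    [
        "Briefly describe this scene for someone who cannot see it, "
        "mentioning the most important objects and any visible text.",
        photo,
    ]
)

# The returned text is the kind of description a screen reader could read aloud.
print(response.text)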
Gemini’s New Image Editor
One of Gemini’s newer features is advanced image editing, which can be particularly useful for people with sight loss. You can type your commands or, for a hands-free approach, give voice commands to edit images. For instance, you can ask the AI to “remove the person from the background,” “change the colour of the jacket to red,” or “make the lighting brighter.” This allows for a creative and functional interaction with images that was previously difficult or impossible.
Alongside this come improved AI-generated image descriptions. Descriptions are now much more precise and to the point, picking out the main parts of the image rather than describing every single detail. If you want every detail described, you can simply ask for a more detailed description.
Note: images edited or created in the image editor contain both visible and invisible watermarks.
Find out more about Gemini’s Image Editor.
Conclusion
The evolution of AI accessibility is a result of continuous innovation in technology. Dedicated smart glasses like Envision, integrated features in consumer devices like Meta’s smart glasses, and powerful mobile applications such as Seeing AI demonstrate a shift towards hands-free and real-time visual assistance. The integration of large language models like Gemini into native operating systems provides users with a more conversational and context-aware experience. These advancements facilitate greater access to information, enhance personal mobility, and provide new forms of assistance for daily tasks. The development of this technology is driven by a commitment to creating tools that are both functional and intuitive for the visually impaired community.
If this guide was useful, check out our other latest Tech News for visually impaired people.





