The first app to integrate GPT-4’s image recognition capabilities has been described as “life-changing” by visually impaired users.
Danish startup Be My Eyes has applied an artificial intelligence model to a new feature for blind or partially sighted people. Dubbed a “virtual volunteer,” the object recognition tool can answer questions about any image it’s sent.
Imagine, for example, that the user is hungry. They could simply take a photo of an ingredient and request the corresponding recipes.
If they prefer to eat out, they can upload a map image and get restaurant directions. Upon arrival, they can take a photo of the menu and listen to the selections. If they then want to work off those extra calories at the gym, they can use their smartphone camera to find a treadmill.
“I know we’re in the midst of an AI hype cycle right now, but some of our beta testers have used the phrase ‘life-changing’ when describing the product,” Mike Buckley, CEO of Be My Eyes, told TNW.
“This has the potential to be transformative in empowering the community with unprecedented resources to better navigate the physical environment, address daily needs and achieve greater independence.”
Virtual Volunteer is built on OpenAI’s latest release. Unlike previous iterations of the company’s vaunted models, GPT-4 is multimodal, meaning it can parse both images and text as input.
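For readers curious what a multimodal request looks like in practice, here is a minimal sketch using OpenAI’s public chat API format. The helper function, prompt, and model name are illustrative assumptions; Be My Eyes has not published the details of its actual integration.

```python
import base64

def build_image_question(image_bytes: bytes, question: str) -> list:
    """Package an image and a text question into the message format
    that multimodal GPT-4 models accept (hypothetical helper)."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                },
            ],
        }
    ]

messages = build_image_question(b"...", "What ingredients are in this photo?")

# Sending the request requires an API key, e.g.:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o", messages=messages)
```

The key difference from earlier GPT models is that the `content` field can carry a list mixing text and image parts, rather than a single string.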
Be My Eyes took the opportunity to test the new feature. Although image-to-text systems are nothing new, the startup had never been convinced of their effectiveness before.
“From too many bugs to not being able to chat, the tools on the market weren’t equipped to address many of our community’s needs,” says Buckley.
“The image recognition offered by GPT-4 is superior, and the analytics and conversational layers powered by OpenAI increase the value and utility exponentially.”
Be My Eyes previously supported users exclusively through human volunteers. According to OpenAI, the new feature can generate the same level of context and understanding. But if the user doesn’t get a good response or simply prefers human contact, they can still call a volunteer.

Despite promising early results, Buckley insists the free service will be launched cautiously. Beta testers and the wider community will play a central role in shaping the rollout.

Ultimately, Buckley believes the platform will provide users with both support and new capabilities. Be My Eyes also plans to help businesses better serve their customers by prioritizing accessibility.
“It’s safe to say that technology can give people who are blind or visually impaired not only more power, but also a platform for the community to share more of their talents with the rest of the world,” says Buckley. “It’s an incredibly compelling opportunity for me.”
If you or someone you know is visually impaired and would like to try Virtual Volunteer, you can sign up for the waiting list here.