
Digital accessibility and AI

How brands and product designers can appeal to the most diverse audience possible

Artificial intelligence adds convenience to our professional and daily lives—but for the 1.3 billion people living with disabilities, it has the potential to radically improve assistive technology.

Making experiences accessible for those with disabilities has long been treated as an afterthought. Global Accessibility Awareness Day, which I co-founded in May 2012, marks a moment every year that encourages us to bring accessibility to the fore, ensuring the products and digital experiences we create are inclusive of everyone. The marketing and advertising community has a one-of-a-kind opportunity to lead this change, pushing clients, partners and the industry as a whole to devote focus and resources to enhancing digital experiences for everyone as AI technology evolves.

What can some of the most impressive innovations in AI tell us about building more accessible experiences? Before answering that, we need to consider carefully what ability is.

Defining (dis)ability

We take in information through a variety of senses, including sight, hearing and smell. But the way we perceive, understand and interact with the world varies from person to person. Some people experience color blindness, neurodivergence, deafness … the list goes on. There are also different ways of venturing out into the world—such as using a wheelchair, walking with a cane or speaking with stroke-induced aphasia.

Designing for inclusion is imperative, but with so many variables it can also be a challenge. AI’s growing ability to understand the world through sensors such as cameras and microphones, along with its generative abilities to output information, can help scale up the inclusive experiences we build and transform accessibility as a practice. Here’s what we can learn from the increasing abilities of AI, and immediate takeaways marketers can implement now as we build toward a more inclusive future:

AI that can see 

When GPT-4 launched, many of us were impressed by its multimodal functions: It could take information in one format or medium and output another, such as turning a photo of what’s in your fridge into a series of recipes based on available ingredients.

Similar tools such as mPLUG-Owl can not only understand what an image depicts but also answer questions about it. Someone with a visual impairment can upload an image and have a two-way conversation with the bot to gain a deeper understanding of what’s depicted, which goes far beyond the simplified summaries used for alt text today.

The lesson for brands: Use these tools to make visual content not only more accessible but also more engaging for your users.
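That lesson is also measurable: the single most common accessibility failure on the web today is an image with no alt text at all, and it is one a brand can audit automatically. Below is a minimal sketch using Python's standard-library HTML parser; the `audit` helper and the sample page are illustrative, not part of any product or tool named above.

```python
from html.parser import HTMLParser


class MissingAltAuditor(HTMLParser):
    """Collects <img> tags that lack an alt attribute entirely."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # alt="" is valid for purely decorative images,
            # so we flag only images where alt is absent.
            if "alt" not in attr_map:
                self.missing.append(attr_map.get("src", "(no src)"))


def audit(html: str) -> list:
    """Return the src of every <img> in `html` that has no alt attribute."""
    auditor = MissingAltAuditor()
    auditor.feed(html)
    return auditor.missing


# Example: one accessible image, one missing its alt text
page = '<img src="logo.png" alt="Acme logo"><img src="hero.jpg">'
print(audit(page))  # prints ['hero.jpg']
```

A check like this catches only the crudest failure; richer, conversational image descriptions of the kind the multimodal tools above enable still require a human or an AI model in the loop.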

AI that can hear

Many brands rely on speech input to offer a better user experience, such as asking Alexa where your Amazon order is, or searching YouTube videos via voice. While convenient, voice input isn’t always accessible to those with nonstandard speech—but AI that can truly hear is about to close that gap.

Google’s Project Relate is an app that relies on thousands of speech samples and machine learning to provide personalized speech recognition and closed captioning for users with nonstandard speech. In addition to captioning, a “repeat” feature clearly repeats what the user said in a synthesized voice.

The lesson for brands: Look into integrating speech options on your platforms if you haven’t already. Google plans for Project Relate to make Google Assistant and other speech-based interactions more accessible, making speech input another great option for brands to help users of all kinds find what they need more quickly.

AI that can feel

If you’re a fan of James Cameron’s “Avatar,” you might be excited about OceanOneK, an embodied humanoid robot that can explore ocean depths. What makes the robot unique is that it beams back both visual data and haptics—meaning operators can feel what OceanOneK touches. In March, Meta announced two tools that enable high-fidelity haptic experiences on Quest headsets, evoking some of that same magic in consumer-grade hardware.

The lesson for brands: Leverage technology to make faraway places or events more accessible to those with limited mobility. When it comes to immersive experiences such as VR, ensure that they are truly accessible. After all, an experience that virtually immerses users in another location doesn’t mean much if it still relies on an extensive range of motion to navigate.

AI innovation is transforming accessibility

Each of these technologies augments individuals’ abilities rather than forcing people to make concessions to fit into the world around them. They’re inherently personalized experiences, meaning accessibility isn’t just the right thing to do but also makes a solid business case as brands and product designers try to appeal to the largest, most diverse audience possible.

For too long, accessibility has been relegated to an afterthought, as evidenced by the 96.3% of home pages that have Web Content Accessibility Guidelines (WCAG 2) failures today. Soon, with AI’s transformative powers—people who are blind will be able to convert any input into audio, and people who are deaf will be able to convert any input into a visual format—we will enable a new class of personalized experiences that present content in the way that works best for each individual. We’ve reached a new frontier for innovation, and those who lead the charge will be best positioned to deliver more personalized, accessible experiences for all.
