When Using GenAI to Empower, Push Against Its Limitations

A new paper from the National Association of State Chief Information Officers explores the role of generative AI in improving accessibility for people with disabilities. It finds use cases and limitations alike.

A new paper from the National Association of State Chief Information Officers (NASCIO), Revolutionizing Assistance: How States Can Improve Generative AI’s Role in Disability Empowerment, explores current use cases for generative AI (GenAI) in improving accessibility.

States are implementing various approaches to ensure digital services are accessible to constituents with disabilities. A catalyst for this work is the Department of Justice’s April rule, which requires state and local governments to make their digital content usable by people with disabilities within two or three years, depending on jurisdiction size.

GenAI’s rapid advance has created new opportunities, and new risks, for accessibility. As American Association of People with Disabilities Technology Policy Consultant Henry Claypool previously told Government Technology on the topic, “Measured optimism is the right disposition.”

The new paper offers four key recommendations to guide state technology leaders in using the technology, while noting that limitations on that use also exist.

“And at the end of the day, the very nature of technology is to increase and expand access to the world in a way that best fits the person using it,” NASCIO Policy Analyst Kalea Young-Gibson, author of the report, said in a podcast episode about the paper.

The paper’s first recommendation is to engage all stakeholders, including people with disabilities, when evaluating AI tools. Second, it recommends cultivating inclusive data sets, because inaccurate or exclusionary data can perpetuate bias. Third, it advises embracing transparency in developing AI tools, which empowers end users to hold companies accountable. And fourth, it urges confronting and navigating AI’s limitations, which allows state governments to choose the most effective products for use.

“By taking these steps, state governments can significantly empower people with disabilities through generative AI,” the paper said.

It cites several tangible, positive impacts from GenAI. GenAI speech-to-text models reduced word errors by about 26 percent for people with disabilities who have atypical speech patterns. The technology has also shown documented benefits for people with dyslexia through website decluttering and summarization tools.
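The roughly 26 percent figure describes a drop in recognition errors; the standard way to quantify this is word error rate (WER), the word-level edit distance between what was said and what the model transcribed, divided by the length of the reference. The sketch below illustrates the metric itself, not the paper’s methodology; the sentences and numbers in it are hypothetical.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / words in the reference."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: minimum edits turning the first i reference words
    # into the first j hypothesis words (word-level Levenshtein distance).
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(1, len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            if ref[i - 1] == hyp[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]  # words match, no edit needed
            else:
                dp[i][j] = 1 + min(dp[i - 1][j - 1],  # substitution
                                   dp[i - 1][j],      # deletion
                                   dp[i][j - 1])      # insertion
    return dp[-1][-1] / len(ref)

# Hypothetical transcription of atypical speech, before any improvement:
wer = word_error_rate("turn on the kitchen lights", "turn on a kitten light")
print(f"WER: {wer:.2f}")                      # 0.60 (3 errors / 5 words)
print(f"26% relative cut: {wer * 0.74:.2f}")  # about 0.44
```

Speech research typically reports gains like this as a relative reduction, as the final line assumes.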

GenAI will continue to evolve, and in the future, user personalization could improve these tools for people with disabilities by expanding information access. The paper argues that AI that more closely mimics human intelligence may be able to address GenAI’s current accessibility limitations, while noting that the considerations it raises on the topic will remain necessary.

The paper underlines current limitations, too: A University of Washington study found that when a participant with a cognitive disability used an AI tool to summarize a paper, some of the tool’s responses were incorrect. And while the tool eased the cognitive burden of writing short messages for another participant, who has autism, message recipients felt the content was “robotic.”

In another example cited, an individual who is hard of hearing used captioning to follow what was said at a conference. But due to delays on the live feed and inaccuracies in the captions the conference provided, the attendee had to consult four different sets of captions to access the event’s information.

The paper offers additional considerations to further assist state leaders’ use of GenAI tools.

It advises officials to ensure that any AI tools in use are themselves accessible, and to avoid tools that force people to use them, citing AI-based website overlays as an example. The paper also suggests developing comprehensive AI policies and avoiding “clean” data sets that downplay outliers; those outliers, it said, may represent diverse demographics, such as people with disabilities, that are essential to include.