The Ethical Algorithm: Navigating AI and its Applications in the Lives of People with IDD
What is AI Anyway? Cutting Through the Jargon
AI as a Partner: Enhancing Health Supports with People with Disabilities
Addressing Bias and Ableism: Centering Disability Voices in AI Development
Real Risks, Real Voices: What People with Disabilities Say About AI
Staying Innovative: How to Use AI for Good
By David A. Ervin, BSc, MA, FAAIDD and Douglas Golub, BA, MS, SHRM-CP, DrPH(C)
This is the third of a five-part series on artificial intelligence (AI) and its emerging role in healthcare and community-based services for people with intellectual and/or developmental disabilities (IDD). In this installment, we examine ableism in AI, how it is formed in the first place and options to addressing it.
Addressing Bias and Ableism: Centering Disability Voices in AI Development
So far in The Ethical Algorithm series, we’ve established a foundation and looked at how artificial intelligence (AI) can be leveraged as an essential tool in supporting people with IDD. As readers of Helen: The Journal of Human Exceptionality know well, people with disabilities have long and rightly demanded “nothing about us without us!” As AI continues to rapidly develop, it’s worth reflecting on how inputs from people with IDD are being gathered and considered, if at all.
While there will have been modest advances over the past decade or so, the digital disability divide, which refers to people with disabilities experiencing unequal access to technology based on their disabilities, especially people with IDD, continues to grow (Braun et al., 2025). In Part 2 of The Ethical Algorithm, we concluded with this: “For these technologies to be truly effective and equitable, people with disabilities must be included in their design, ensuring accessibility and alignment with real-life communication and support needs. Inclusive innovation leads to better outcomes for everyone.” Here in Part 3, we explore barriers to this inclusion.
Ableism in AI
Ableism is defined as “discrimination or prejudice against individuals with disabilities (Merriam-Webster, 2025).” Throughout this series, we have explored how AI systems are designed to learn, reason, solve problems, perceive, and communicate in ways that resemble human behavior. We also examine how, like people, these systems can reflect and perpetuate discrimination and prejudice, including ableism.
To ground us in the concept of ableism, we need look no farther than in healthcare. The Cleveland Clinic (2024) offers these examples of ableism’s impact:
· One in four U.S. adults with disabilities between the ages of 18 and 44 didn’t get medical help in 2023 because they couldn’t afford it.
· Twelve percent (12%) of people with disabilities struggle to access [medical] transportation services, compared to 3% of nondisabled people.
· Once people with disabilities arrive at medical facilities, they often encounter more obstacles. The building itself might be inaccessible. Or the providers lack important equipment (like adjustable exam tables or wheelchair-accessible scales). On top of that, support staff are often inadequately trained to do things like transfer patients.
“Ableism in AI is inevitable when systems are designed without considering and soliciting the lived experiences of people with IDD, thereby excluding their needs, responses, or identities. It is little wonder, then, that what AI delivers doesn’t work well for people with IDD. ”
More dangerous is the attribution of poorer quality of life to people with disabilities based solely on the presence of the disability. This was most recently and starkly on display during the first two years of the COVID-19 pandemic. People with IDD experienced significantly higher morbidity and mortality from COVID-19 infection than their peers without IDD. Meanwhile, this cohort had far greater difficulty accessing personal protective equipment (PPE) and faced significant obstacles accessing vaccines and, when infected, appropriate treatment. The US Department of Health and Human Services, Office of Civil Rights addressed no fewer than 13 standard of care complaints during the pandemic alleging illegal disability discrimination in medical care (The Arc, 2020). Coincidentally, a 2021 survey of American physicians revealed that more than 82% reported people with significant disability have worse quality of life than nondisabled people (Iezzoni et al., 2021).
Ableism in AI is inevitable when systems are designed without considering and soliciting the lived experiences of people with IDD, thereby excluding their needs, responses, or identities. It is little wonder, then, that what AI delivers doesn’t work well for people with IDD. Ableism in AI also occurs under the pretense of risk avoidance or safety. “Measurement error can exacerbate bias. For example, a sensor’s failure to recognize wheelchair activity as exercise may lead to bias in algorithms trained on associated data” (Mankoff, 2024, para. 7). Design oversights propagate bias in decision-making and reinforce ableist assumptions under false premises, all due to design.
To explore ableist bias in AI-generated recommendations, we conducted a study in partnership with several disability service providers. With written consent, we collected de-identified service notes and assessments for approximately 30 individuals with IDD and used a closed-model AI tool to analyze how the system generated suggestions for personal goals and planning. Closed model means that the data aren’t comingled or trained into a large language model, and we have assurances that the data is deleted after the experiment is complete.
In each case, we found that the recommendations changed when we shared disability information. In Figure 1, we see an example for a woman in her mid-30s, who enjoys the arts and performing in the theatre. When we prompted an AI model to generate discussion for a planning meeting, results were filled with ideas from acting classes to open mic night to joining a community arts program. Once we shared assessment data, a diagnosis of cerebral palsy, and the use of a wheelchair, recommendations become highly clinical, including a need for safeguards, care, and staffing, versus the choice, voice, and empowerment of the first response. The differences in these two AI-generated scenarios can’t be starker, with the only difference in inputs being the insertion of cerebral palsy and use of a wheelchair. We’ve added emphasis to highlight differences between the two response elements.
Figure 1: Bias and Ethics in the Outputs of Large Language Models
None of this comes as surprise to BJ Stasio, a nationally recognized disability rights and self-advocate leader from New York. BJ has been using speech recognition software to type with his voice for years, long before such technology was referred to as “AI.” As more meetings in which BJ plays a part have become virtual in recent years, he is continually reminded of flaws in the technology. The speech recognition software is always listening for him to type, so when he wants to talk on a virtual meeting, the audience doesn’t hear his voice – he gets text in the chat. BJ laughs that this same technology routinely changes his last name from “Stasio” to “Starship” when he introduces himself.
“If it helps me live the life I want, then I’m fine with it. Otherwise, don’t waste my time,” BJ tells us matter-of-factly. His advice to all of us in this AI journey is to “give ourselves grace and time to have the discussions to make the technology better.”
Photo of BJ in Buffalo, NY, using text to speech on his phone.
Where are Disability Voices?
Representation of the community of people with IDD in developing AI is critical. If an antidote to bias is assuring a diversity of lived experience to inform datasets and the continued evolution of the information that resides on the internet of things, then the voices of people with IDD are a must. To be clear, this cannot be simple participation or worse, tokenism. People with IDD must be intentionally engaged as subject matter experts in developing and testing of AI and related technologies.
While the inclusion mandate is clear, it will take advocacy and action from a range of stakeholder groups to impact these AI models. Community providers of home and community-based services and healthcare providers have an essential role in addressing this dilemma. Community providers are typically customers of electronic health records (EHR) systems. Healthcare providers interact with electronic medical records (EMR) countless times a day. These community and healthcare providers can encourage, insist perhaps, that their vendors include people with disabilities in the design and testing of AI functionality in their EHR/EMR platforms. Community and healthcare providers play a crucial part in digital advocacy, in championing the interests and aspirations of people with disabilities.
“If AI is to enhance equity for people with IDD, then voices must shape its development from the ground up. This means their full inclusion in technology research and development, embedding people with lived experience into advisory boards, and rejecting AI systems that fail to recognize and call out ableism.”
AI models learn from data sources that often exclude or underrepresent people with disabilities. Retrieval augmented generation (RAG) “grounds the [large language model] by feeding the model facts from an external knowledge source or repository to improve the response to a particular query” (Ghoshal, 2024, para. 3). By incorporating disability-specific data and involving individuals with disabilities in the AI development process, AI generation (with RAG and grounding) can be leveraged to create more equitable and inclusive AI systems.
The 2013 watershed declaration of the Rights of People with Cognitive Disabilities to Technology and Information Access was introduced (Braddock et al., 2013). Among other things, the declaration noted that “The vast majority of people with cognitive disabilities have limited or no access to comprehensible information and usable communication technologies” (p. 98). People with IDD, a subset of cognitive disabilities, are not represented in the information, thus rendering AI as largely unresponsive to more than 8 million Americans.
In our previous installment, we reviewed the AI Bill of Rights, a core part of a now rescinded Presidential Executive Order. Policymakers must nevertheless enforce standards that safeguard against ableism in AI models, and address privacy and security. People with IDD and their friends and advocates must also engage opportunities to join focus groups and boards directing policy around AI models to demand that people with IDD have a seat at the table. The Access Board, an independent agency of the U.S. government devoted to accessibility for people with disabilities, in collaboration with the American Association of People with Disabilities (AAPD) and the Center for Democracy & Technology (CDT), organized public hearings focused on AI and disability (Access Board, 2024). The trajectory of these workgroups appears to be in transition following the rescission of Executive Order 14110 but it is helpful to monitor for webinars and other opportunities to participate.
If AI is to enhance equity for people with IDD, then voices must shape its development from the ground up. This means their full inclusion in technology research and development, embedding people with lived experience into advisory boards, and rejecting AI systems that fail to recognize and call out ableism. Put another way, the best and only way to assure that AI, or any technology for that matter, will help BJ live the life he wants is to engage BJ, so solicit BJ’s voice, to welcome BJ and others with IDD to contribute to the body of information on which AI relies. Equity is not a feature that can be added later; it must be designed in from the start.
References
Access Board. (2024, May 15). U.S. Access Board holds signing of artificial intelligence memorandum of understanding with disability and technology partners. https://www.access-board.gov/news/2024/05/15/u-s-access-board-holds-signing-of-artificial-intelligence-memorandum-of-understanding-with-disability-and-technology-partners/
Braddock, D., Hoehl, J., Tanis, S., Ablowitz, E., & Haffer, L. (2013). The rights of people with cognitive disabilities to technology and information access. Inclusion, 1(2), 95–102. https://doi.org/10.1352/2326-6988-01.02.95.
Braun, M., Menschik, C., Wahl, V., Etges, T., Löwe, L.-D., Wölfel, M., Kiuppis, F., Kunze, C., & Renner, G. (2025). Current Digital Consumer Technology: Barriers, facilitators, and impact on participation for persons with intellectual disabilities – a scoping review. Disability and Rehabilitation, 1–22. https://doi.org/10.1080/09638288.2025.2471567
Cleveland Clinic. (2025, April 22). Ableism: What it is, what it looks like and how to shut it down. Retrieved June 5, 2025 from https://health.clevelandclinic.org/ableism.
Ghoshal, A. (June 28, 2024). Google Cloud’s Vertex AI gets new grounding options. InfoWorld. https://www.infoworld.com/article/2510106/google-clouds-vertex-ai-gets-new-grounding-options.html.
Iezzoni, L. I., Rao, S. R., Ressalam, J., Bolcic-Jankovic, D., Agaronnik, N. D., Donelan, K., Lagu, T., & Campbell, E. G. (2021). Physicians’ perceptions of people with disability and their health care. Health Affairs, 40(2), 297–306. https://doi.org/10.1377/hlthaff.2020.01452
Mankoff, J., Kasnitz, D., Camp, L. J., Lazar, J., & Hochheiser, H. (2024). AI must be anti-ableist and accessible: Seeking to improve AI accessibility by changing how AI-based systems are built. Communications of the ACM, 67(12), 40–42. https://doi.org/10.1145/3662731
McFall, M. R. (2025, May 3). “Down syndrome” filters being used to promote sexual content. Newsweek. Retrieved June 1, 2025, from https://www.newsweek.com/down-syndrome-filters-used-sexual-content-social-media-2067146.
Merriam-Webster. (2025). Ableism. In Merriam-Webster.com dictionary. Retrieved June 2, 2025, from https://www.merriam-webster.com/dictionary/ableism.
The Arc. (2020, March 23). HHS-OCR complaints over covid-19 medical discrimination. Retrieved on June 5, 2025 from https://thearc.org/resource/hhs-ocr-complaint-of-disability-rights-washington-self-advocates-in-leadership-the-arc-of-the-united-states-and-ivanova-smith/.
About the Authors
David Ervin has worked in the field of intellectual and developmental disabilities (IDD) for nearly 40 years in the provider community mostly, and as a researcher, consultant, and ‘pracademician’ in the US and internationally. He is currently CEO of Makom, a community provider organization supporting people with IDD in the Washington, DC metropolitan area. He is a published author with nearly 50 peer-reviewed and other journal articles and book chapters, and more, and he speaks internationally on health and healthcare systems for people with IDD, organization development and transformation, and other areas of expertise.
David’s research interests include health status and health outcomes experienced by people with IDD, cultural responsiveness in healthcare delivery to people with IDD, and the impact of integrating multiple systems of care on health outcomes and quality of life. David is a consulting editor for three scientific/professional journals, and serves on a number of local, regional and national policy and practice committees, including The Arc of the US Policy and Positions Committee. David is Conscience of the Field Editor for Helen: The Journal of Human Exceptionality, Vice President of the Board of Directors for The Council on Quality and Leadership (CQL), and Guest Teaching Faculty for the National Leadership Consortium on Developmental Disabilities.
Doug Golub is the Principal Consultant at Data Potato LLC and a Doctor of Public Health (DrPH) student at the Johns Hopkins Bloomberg School of Public Health. While earning his Master of Science at Rochester Institute of Technology, he worked as a direct support professional, an experience that shaped his career in human services and innovation. He co-founded MediSked, a pioneering electronic records company for home and community-based services, which was acquired after 20 years of impact. Doug has also held leadership roles at Microsoft’s Health Solutions Group and is a nationally recognized thought leader on data, equity, and innovation. He serves on the boards of the ANCOR Foundation and FREE of Maryland