The Ethical Algorithm: Navigating AI and its Applications in the Lives of People with IDD

  1. What is AI Anyway? Cutting Through the Jargon

  2. AI as a Partner: Enhancing Health Supports with People with Disabilities

  3. Addressing Bias and Ableism: Centering Disability Voices in AI Development

  4. Real Risks, Real Voices: What People with Disabilities Say About AI

  5. Staying Innovative: How to Use AI for Good

By David A. Ervin, BSc, MA, FAAIDD and Douglas Golub, BA, MS, SHRM-CP, DrPH(C)

This is the second article in a five-part series on artificial intelligence (AI) and its emerging role in healthcare and community-based services for people with intellectual and/or developmental disabilities (IDD). While the first article set important context, this installment focuses on how AI can act as a collaborative partner, not a replacement, in enhancing supports. From assistive technologies that promote independence to AI-enabled documentation systems and tools that prompt and capture data from provider staff, this article explores real-world applications of AI across multiple service-delivery environments. It also emphasizes the importance of good design to ensure AI empowers, rather than replaces, human connection and self-determination.

AI as a Partner: Enhancing Health Supports with People with Disabilities

In the first article in this series, What is AI Anyway? Cutting Through the Jargon, we discussed how artificial intelligence (AI) differs from the technology that preceded it because it can mimic human behavior. It is designed to learn, reason, problem-solve, perceive, and use language without being specifically programmed for each task. In the words of Fred Rogers (Mister Rogers), “we all need someone to help us do what we can.” Today, besides the many “someones” who support us, AI is becoming another powerful tool to help us reach our potential.

Assistive AI-powered tools help people interact with the world around them. Smart home devices help with simple things like turning on lights, adjusting thermostats, and locking or unlocking doors with voice or touch, giving people greater control over their environments. In supports for people with intellectual and/or developmental disabilities (IDD), so-called smart homes have a very different and, in some cases, far more complex role. These homes embed monitoring devices and other support technologies throughout a person’s home that can be monitored in real time, frequently remotely (Brand et al., 2019). Modern smart home technologies can support complex routines and activities, complement staff support strategies, and adapt to how the person with IDD learns and exerts control over their environment (Landuran et al., 2022). As AI has advanced, there are ever-increasing opportunities to apply learning technologies not only to smart homes but also to additional support technologies.

AI-powered tools built to understand how people with cognitive differences communicate are sometimes called “Augmentative and Alternative Communication,” or “AAC,” apps. “Communication is a basic need for all people to fully participate in life. Persons with disabilities may face challenges in developing their communication skills and using them appropriately in different situations. AAC tools and methods can assist individuals in this process” (Wahl & Weiland, 2023, para. 1). These AAC apps and voice assistants can empower people with disabilities to live more independently and express themselves more fully.

The European Union’s Artificial Intelligence Act (AIA) became effective on August 1, 2024. It establishes a comprehensive legal framework governing AI and is expected to influence how medical AI is created and used around the world (Mann et al., 2024). While the United States has yet to enact comparable legislation, efforts are underway to establish clearer guidance for the use of AI in health and human services. In October 2023, President Joe Biden issued the Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, a far-reaching federal framework that speaks to, among other things, the “responsible deployment and use of AI and AI-enabled technologies in the health and human services sector.”

This prioritization of the development and application of AI in health and human services was a response to significant advances in technology as well as, critically, growing ethical concerns about how people with disabilities access and use technology. Braddock and colleagues (2013) introduced the declaration of The Rights of People with Cognitive Disabilities to Technology and Information Access. The declaration and its signatories acknowledged the following:

Access to comprehensible information and usable communication technologies is necessary for all people in our society, particularly for people with cognitive disabilities, to promote self-determination and to engage meaningfully in major aspects of life such as education, health promotion, employment, recreation, and civic participation;

The vast majority of people with cognitive disabilities have limited or no access to comprehensible information and usable communication technologies; people with cognitive disabilities must have access to commercially available devices and software that incorporate principles of universal design such as flexibility and ease of use for all;

Those two paragraphs spoke to, ten years before President Biden’s EO, two fundamental issues for people with IDD: the potential of technologies to benefit people with disabilities and the need to make them accessible to all people.

President Biden’s EO (2023) explicitly cautions: “Artificial Intelligence’s capabilities can increase the risk that personal data—sensitive information on people’s identities, locations, habits, and desires—could be exploited and exposed.” The five principles of the Biden Administration’s Blueprint for an AI Bill of Rights are: safe and effective systems, protection from algorithmic discrimination, data privacy, notice and explanation, and human alternatives, consideration, and fallback. In health supports, there is additional highly sensitive information, considered protected health information (PHI) under the Health Insurance Portability and Accountability Act (HIPAA), that must be protected. In many ways, AI elevates, or at least casts new light on, these sorts of risks. Jennifer King, a fellow at the Stanford Institute for Human-Centered Artificial Intelligence, rightly points out that once a user types or speaks anything into a chatbot or prompt field, “you lose possession of it” (Nguyen, 2025). And once it is entered, it is a part of the internet forever.

President Donald Trump rescinded President Joe Biden’s EO on AI; however, awareness and understanding of security continue to evolve in the marketplace (O’Brien, 2025). Put another way, our attention to policy must remain robust.

By now, most community provider organizations that support people with IDD are using electronic health records (EHRs) to maintain assessments, service plans, service notes, and summaries for the people they support. We are at the early stages of seeing AI-enabled functionality within these systems to help highlight important trends and support more accurate, individualized service planning that better reflects people’s goals and preferences. Subject to training and community provider organization policy, support staff can access AI tools formally in their secure EHRs and informally (e.g., with ChatGPT or Google Gemini) to improve their documentation, receive recommendations for ways to improve their supports, or brainstorm ideas. AI becomes a helpful and powerful tool, with policy and procedures mitigating risk.

There are safe and responsible ways to use AI tools to study large data sets without sharing protected health information with large language models that exist outside of a secure perimeter. In addition to the large language models (e.g., OpenAI’s ChatGPT, Microsoft Copilot, Google Gemini) that are becoming part of our vernacular, there are open-source models that can be installed on secure servers allowing for transparent system management and the implementation of data controls that meet stakeholders’ security requirements.
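To make this concrete, below is a minimal sketch of how an open-source model might be run entirely inside a secure perimeter using the freely available Hugging Face transformers library. It is an illustration under stated assumptions, not a production implementation: the model named is a placeholder for any permissively licensed open model that fits an organization’s hardware, and the service note shown is fictional.

    # A minimal, hedged sketch: running an open-source language model on a
    # local server so that service notes never leave the secure environment.
    # Assumes Python with the `transformers` (and `torch`) packages installed.
    from transformers import pipeline

    # Weights are downloaded once and cached; after that, inference runs
    # entirely on local hardware with no outbound data sharing.
    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.2",  # illustrative choice
    )

    note = "Fictional note: Jane attended her annual physical and reported knee pain."
    prompt = f"Summarize any health events mentioned in this service note:\n{note}"

    result = generator(prompt, max_new_tokens=150)
    print(result[0]["generated_text"])

Because the model runs on the organization’s own server, administrators can document exactly where data flows, which is the kind of transparent system management and data control described above.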

In other words, you can install an open-source model on your own server, and while it is significantly slower and not as well trained, you retain far more control over what happens to the data than you would by trusting it to a larger model in the cloud. In Figure 1, we present a study of 1,658 service notes over a two-year period for a woman who receives home and community-based services and supports through an IDD-focused Medicaid Waiver. The time required to read all these notes would be significant, even for the fastest reader. Using a large language model and some scripting, AI tools can assign each service note a sentiment score (between -100% and +100%) for positivity (represented as the green stacked bar), negativity (the red stacked bar), and neutral content (the yellow stacked bar), along with a composite score (the blue line). These tools can also look for context and categories to pick up on major and minor life events and health events, as requested. The result helps us look back through service notes that are required for compliance and begin to see insights that might not otherwise have appeared.

Figure 1. Example of Utility of Open-Source Model in Community Supports

Figure 1 illustrates how AI can uncover hidden links between physical discomfort and behavioral symptoms using the Bio-Psycho-Social model. Initially, self-injurious behaviors were treated with psychiatric medications, but the root cause—recurrent, untreated physical conditions like vaginitis—was missed. Only after antibiotics were prescribed did behaviors improve significantly, revealing the physical source of distress. It took time to resolve because symptoms were treated in isolation rather than as part of an integrated, person-centered view.
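For readers curious how per-note scoring like this works mechanically, the sketch below is an illustrative approximation rather than the authors’ actual pipeline. It assumes a locally run, open-source sentiment model (the model name is a stand-in) and converts the model’s positive, negative, and neutral probabilities into the percentage scores described above, with the composite defined here as positive minus negative.

    # Illustrative per-note sentiment scoring with a locally run open-source
    # model (an assumption, not the authors' actual tooling). Each note gets
    # positive/negative/neutral percentages plus a composite in [-100, +100].
    from transformers import pipeline

    sentiment = pipeline(
        "text-classification",
        model="cardiffnlp/twitter-roberta-base-sentiment-latest",
        top_k=None,  # return scores for all three labels, not just the top one
    )

    def score_note(note: str) -> dict:
        # Wrapping the note in a list and taking [0] yields the per-label
        # scores; truncation guards against notes longer than the model limit.
        labels = {s["label"]: s["score"] for s in sentiment([note], truncation=True)[0]}
        composite = labels.get("positive", 0.0) - labels.get("negative", 0.0)
        return {
            "positive_pct": round(100 * labels.get("positive", 0.0)),
            "negative_pct": round(100 * labels.get("negative", 0.0)),
            "neutral_pct": round(100 * labels.get("neutral", 0.0)),
            "composite_pct": round(100 * composite),
        }

    # Fictional examples standing in for the 1,658 real service notes.
    for note in [
        "She enjoyed her community outing and told staff she felt great.",
        "Refused breakfast, appeared uncomfortable, and scratched at her arm.",
    ]:
        print(score_note(note))

Scored note by note and charted over time, outputs like these produce the stacked bars and the composite trend line shown in Figure 1.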

In healthcare, the wide range of potential applications of AI includes clinical documentation, matching patients to clinical trials, and answering medical questions (Jin et al., 2024). Who among us hasn’t consulted “Dr. Google” on occasion? For people with IDD, access barriers to health services are longstanding, substantial, and well chronicled in the research literature. These barriers combine to produce poorer health status among people with IDD compared to their peers without IDD.

As AI becomes a robust tool in healthcare, its use with and potential impact on access to care and health outcomes holds promise. Still, caution is warranted. Some argue that AI actually exacerbates health disparities (see Celi et al., 2022). Others point to the absence of people with IDD from the development of AI and other technologies as problematic.

“Disabled people are neither assumed to be nor hired to work as creators and designers of AI technologies, excluding them from having agency in the development and evaluation of AI technologies with direct impact on their lives” (Newman-Griffis et al., 2022, para. 6). One example cited is the design of sign language gloves without input from deaf people; because the gloves focused only on hand movements and not on facial expressions, body posture, and the cadence of pausing, they were largely unusable (Newman-Griffis et al., 2022).

Disability rights author James Charlton is widely credited with the statement “nothing about us without us.” This slogan emphasizes the importance of involving people with disabilities in decisions that affect their lives. The principle clearly extends to developing AI systems that support people with disabilities. To ensure these tools are accessible, relevant, and aligned with lived healthcare and support experiences and preferences, people with disabilities must be involved in their design and implementation.

AI is emerging as a valuable partner in supporting people with disabilities, from assistive tools that promote independence, to EHR integrations that improve prompting and resources for staff, to healthcare diagnostic systems and treatment planning. For these technologies to be truly effective and equitable, people with disabilities must be included in their design, ensuring accessibility and alignment with real-life communication and support needs. Inclusive innovation leads to better outcomes for everyone.

References

Braddock, D., Hoehl, J., Tanis, S., Ablowitz, E., & Haffer, L. (2013). The rights of people with cognitive disabilities to technology and information access. Inclusion, 1(2), 95–102. https://doi.org/10.1352/2326-6988-01.02.95

Brand, D., DiGennaro Reed, F. D., Morley, M. D., Erath, T. G., & Novak, M. D. (2019). A survey assessing privacy concerns of smart-home services provided to individuals with disabilities. Behavior Analysis in Practice, 13(1), 11–21. https://doi.org/10.1007/s40617-018-00329-y

Celi, L. A., Cellini, J., Charpignon, M. L., Dee, E. C., Dernoncourt, F., Eber, R., Mitchell, W. G., Moukheiber, L., Schirmer, J., Situ, J., Paguio, J., Park, J., Wawira, J. G., Yao, S., & MIT Critical Data. (2022). Sources of bias in artificial intelligence that perpetuate healthcare disparities: A global review. PLOS Digital Health, 1(3), e0000022. https://doi.org/10.1371/journal.pdig.0000022

Jin, Q., Wan, N., Leaman, R., Tian, S., Wang, Z., Yang, Y., Wang, Z., Xiong, G., Lai, P. T., Zhu, Q., Hou, B., Sarfo-Gyamfi, M., Zhang, G., Gilson, A., Bhasuran, B., He, Z., Zhang, A., Sun, J., Weng, C., Summers, R. M., & Lu, Z. (2024). Demystifying large language models for medicine: A primer. arXiv. https://doi.org/10.48550/arXiv.2410.18856

Mann, S. P., Cohen, I. G., & Minssen, T. (2024). The EU AI Act: Implications for U.S. health care. NEJM AI, 1(11). https://doi.org/10.1056/AIp2400449

Newman-Griffis, D., Rauchberg, J. S., Alharbi, R., Hickman, L., & Hochheiser, H. (2022). Definition drives design: Disability models and mechanisms of bias in AI technologies. arXiv. https://doi.org/10.48550/arXiv.2206.08287

O’Brien, M. (2025, January 22). Trump rescinds Biden’s executive order on AI safety in attempt to diverge from his predecessor. Associated Press. https://apnews.com/article/trump-ai-repeal-biden-executive-order-artificial-intelligence-18cb6e4ffd1ca87151d48c3a0e1ad7c1

Wahl, M., & Weiland, K. (2023). Augmentative and alternative communication and digital participation. Frontiers in Communication, 8. https://doi.org/10.3389/fcomm.2023.1180257

About the Authors

David Ervin has worked in the field of intellectual and developmental disabilities (IDD) for nearly 40 years, mostly in the provider community, and as a researcher, consultant, and ‘pracademician’ in the US and internationally. He is currently CEO of Makom, a community provider organization supporting people with IDD in the Washington, DC metropolitan area. A published author with nearly 50 peer-reviewed journal articles and book chapters to his credit, he speaks internationally on health and healthcare systems for people with IDD, organization development and transformation, and other areas of expertise.

David’s research interests include health status and health outcomes experienced by people with IDD, cultural responsiveness in healthcare delivery to people with IDD, and the impact of integrating multiple systems of care on health outcomes and quality of life. David is a consulting editor for three scientific/professional journals, and serves on a number of local, regional and national policy and practice committees, including The Arc of the US Policy and Positions Committee. David is Conscience of the Field Editor for Helen: The Journal of Human Exceptionality, Vice President of the Board of Directors for The Council on Quality and Leadership (CQL), and Guest Teaching Faculty for the National Leadership Consortium on Developmental Disabilities.

Doug Golub is the Principal Consultant at Data Potato LLC and a Doctor of Public Health (DrPH) student at the Johns Hopkins Bloomberg School of Public Health. While earning his Master of Science at Rochester Institute of Technology, he worked as a direct support professional, an experience that shaped his career in human services and innovation. He co-founded MediSked, a pioneering electronic records company for home and community-based services, which was acquired after 20 years of impact. Doug has also held leadership roles at Microsoft’s Health Solutions Group and is a nationally recognized thought leader on data, equity, and innovation. He serves on the boards of the ANCOR Foundation and FREE of Maryland.

 
