The Ethical Algorithm: Navigating AI and its Applications in the Lives of People with IDD
What is AI Anyway? Walking Through History and Cutting Through the Jargon
AI as a Partner: Enhancing Health Supports with People with IDD
Addressing Bias and Ableism: Centering Disability Voices in AI Development
Real Risks, Real Voices: What People with Disabilities Say About AI
Staying Innovative: How to Use AI for Good
By David A. Ervin, BSc, MA, FAAIDD and Douglas Golub, BA, MS, SHRM-CP, DrPH(C)
The following is the first in a five-part series on artificial intelligence (AI) and its emerging use in healthcare and in community-based services to people with intellectual and/or developmental disabilities (IDD). There are a great many exciting opportunities for AI to change the ways in which healthcare and community supports are delivered to people with IDD. And, there are a great many ethical considerations that cannot be overlooked. This series will explore these opportunities and considerations, with a set of recommendations on best and ethical applications to go with them.
What is AI Anyway? Walking Through History and Cutting Through the Jargon
Artificial intelligence (AI) surged into our everyday conversation, seemingly overnight, in late 2022 with the launch of ChatGPT, followed quickly by tools like Microsoft Copilot and Google Gemini. In just a few short years, these tools have become ubiquitous. They are free, user-friendly, and give us a new portal to virtually everything that resides on the internet. They bring us seemingly limitless possibilities and potential. They also raise serious concerns about those same possibilities. In this series, we will explore the use of AI in community supports and health services for people with intellectual and/or developmental disabilities (IDD), some of the major ethical issues to be navigated in its use, and how AI can empower people with IDD and the clinicians and providers who support them alike.
Today’s AI tools are built on large language models (LLMs), trained on incredible amounts of data from the Internet and beyond. When the term AI is used today, it often refers to “generative AI,” which happens to be the “G” in ChatGPT. These models are “generative” in that they generate new content, “pre-trained” in that they have learned from vast data sets, and “transformers,” referring to the underlying neural network architecture that converts input into output given the right prompts and context. Understanding these basics, and how these generative AI tools differ from our older “expert systems,” is the first step toward using AI effectively in disability services and healthcare.
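To make those terms concrete, here is a minimal sketch of “generative,” “pre-trained,” and “transformer” in a few lines of code. It assumes the freely available, open-source Hugging Face transformers library and the small GPT-2 model rather than any of the commercial tools named above; it is illustrative only, not a recipe for production use.

```python
# A minimal sketch of a generative, pre-trained transformer in action.
# Assumes the open-source Hugging Face "transformers" library is installed
# (pip install transformers) along with a backend such as PyTorch.
from transformers import pipeline

# "Pre-trained": GPT-2 has already learned from a large corpus of text.
generator = pipeline("text-generation", model="gpt2")

# "Generative": given a prompt, the transformer produces new text,
# predicting one likely next word (token) at a time.
prompt = "Plain-language summaries help people understand their support plans because"
output = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(output[0]["generated_text"])
```

Nothing in this sketch is hand-written as a rule; the output comes entirely from patterns the model learned during pre-training, which is a useful contrast to keep in mind as we turn to the history of expert systems below.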
With all of that established, and with our Microsoft Copilot app urging us to “Ask me anything….”, we typed, “We are writing a series of articles for Helen: The Journal of Human Exceptionality on the ethical use of AI in healthcare and long-term services and supports for people with intellectual and/or developmental disabilities.” Not really a question, but an opportunity to see AI in action.
Copilot’s answer? “The ethical use of AI in healthcare and long-term services for people with intellectual and developmental disabilities requires careful consideration of privacy, bias, transparency, and inclusivity. By addressing these ethical challenges, AI can significantly enhance the quality of care and support provided to individuals with disabilities.” Easy enough. We really only need to carefully consider privacy, bias, transparency, and inclusivity, and voilà, AI use in healthcare and community supports for people with IDD is A-OK. But is it? Getting to an answer means wandering carefully through some of the issues, concerns, and considerations.
We are starting to understand that disability data is alarmingly absent from AI algorithms, contributing to inequities in access and quality of support (Alexiou, 2024). What happens when what we solicit from AI unintentionally reflects bias and ableism because it lacks context or inclusive pre-trained data? Can AI deliver content that reflects 21st Century commitments to self-determination? How do we ensure that what we enter into AI is not just HIPAA-compliant but meets the strictest confidentiality standards, while still including enough information for, by, and about the person to be genuinely relevant? Beyond these and related considerations, there are complex legal considerations to inform our thinking about the promise and the risk of AI in healthcare and community-based services.
History
Artificial intelligence, often and mistakenly considered a modern, 21st Century technology, actually dates to the so-called expert systems of the mid-20th Century. “The premise of expert systems was straightforward — aim to capture all the domain-specific knowledge in a particular field, encode it into a computer program, add an interface to query the system, and you have a computer-brained ‘expert’ readily available” (Arif, 2023, para. 5).
Englishman Alan Turing, a mathematician of some renown and considered, 70-plus years after his death, to be among the earliest theoreticians of AI, offered the following in a 1947 speech in London on computer intelligence: “What we want is a machine that can learn from experience,” adding that the “possibility of letting the machine alter its own instructions provides the mechanism for this” (Press, 2017). Turing would continue to explore the notion of a computer that could be programmed to actively learn until his death in 1954.
By the 1960s, American computer scientist Edward Feigenbaum had introduced expert systems, considered the earliest successful form of AI: computer programs that solve complex problems by reasoning through bodies of knowledge, represented mainly as “if–then rules” rather than through conventional programming. An expert system consisted of a knowledge base, which represented facts and rules, and an inference engine, which applied the rules to the known facts to deduce new facts. Over the next 20 years, this deductive technology would proliferate in computer science and research.
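As a rough illustration of that architecture (ours, not Feigenbaum’s), an expert system can be sketched in a few lines of code: a knowledge base of facts, a handful of if–then rules, and an inference engine that applies the rules until nothing new can be deduced. The facts and rules below are invented for the example.

```python
# A toy expert system: a knowledge base of facts, if-then rules,
# and an inference engine that applies the rules until nothing new can be deduced.
# (Illustrative only; real expert systems used far richer rule languages.)

facts = {"has_fever", "has_cough"}  # the knowledge base: what we know at the start

# Each rule: if ALL of its conditions are known facts, its conclusion becomes a new fact.
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_clinician_visit"),
]

# The inference engine: forward-chaining, i.e., keep applying rules until no new facts emerge.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
# {'has_fever', 'has_cough', 'possible_flu', 'recommend_clinician_visit'}
```

Everything such a system can ever conclude is spelled out in advance by human experts, which is exactly what distinguishes expert systems from today’s generative AI.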
We all remember WarGames. “Shall we play a game?” The character played by Matthew Broderick, accompanied by Ally Sheedy’s, hacks into WOPR (War Operation Plan Response) and ultimately helps it recognize—on its own, by deducing that no one can win a thermonuclear war—the need to stand down America’s nuclear arsenal. Science fiction indeed, except that the premise of the 1983 movie, that a computer could learn, was very real technology at the time. Art imitating life.
In 1997, Deep Blue, a chess-playing expert system run on an IBM computer, beat then-reigning World Chess Champion Garry Kasparov. University researchers and students were developing robotic AI technologies, built as expert systems, that could, for example, drive a car with its only inputs being traffic and terrain hazards, traffic signs, and driving laws. These were ‘learn as you go’ systems, demonstrating that computing systems can be taught to learn by considering contemporaneous inputs. These and other AI technologies were in rapid development as the 21st Century dawned.
Movie buffs may also remember HAL, the Heuristically Programmed Algorithmic Computer 9000, better known as HAL 9000 or simply HAL, the main antagonist of the 1968 sci-fi novel and film 2001: A Space Odyssey. In the story, HAL develops a sense of his own imperfection, cuts off communication with Earth, and ultimately decides to kill the astronauts who plan to shut him down.
This is, of course, a movie, and it is deliciously dramatic and, well, terrifying! It is also science fiction. Now, well into the 2000s, we are seeing HAL-type AI emerging, although far more benevolent. And the cautionary notes struck in the movie more than five decades ago have 21st Century scientists paying close attention to the continued evolution of AI.
Massive 21st Century advances in computing speeds and capacities for big data have driven a rapid advance of AI. For some, this catalyzed dreams of the possible; for others, powerful AI loomed as an existential threat to mankind. Turing himself sounded a note of caution as early as 1951: “It seems probable that once the machine thinking method had started, it would not take long to outstrip [humans’] feeble powers. At some stage therefore we should have to expect the machines to take control” (Turing, 1951).
As Russell and Norvig put it in their seminal book, Artificial Intelligence: A Modern Approach (2022), “experiencing a general sense of unease with the idea of creating superintelligent machines is only natural. If this is the result of success in creating superhuman AI—that humans cede control over their future—then perhaps we should stop work on AI” (p. 33).
The Office of the National Coordinator for Health Information Technology’s (ONC) Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Final Rule offers a checklist for evaluating AI tools across transparency, human oversight, privacy and security, bias mitigation, and inclusiveness. These requirements mandate that developers of certified health IT conduct risk management for all predictive decision support interventions (DSIs) within their modules, including evaluating and mitigating risks related to accuracy, bias, and safety. Developers must also provide transparency information about predictive DSIs to clinical users, enabling them to assess these tools for fairness, appropriateness, validity, effectiveness, and safety (ONC, 2024). This framework for health IT vendors offers a structure that will help guide the journey through this series of articles.
Jargon
As Eleanor Roosevelt once said, “a little simplification would be the first step toward rational living, I think.” While the technology driving today’s AI tools may seem overwhelming at first, it becomes far more approachable when we cut through the jargon and explain what these terms mean. Syracuse University Libraries has published a glossary of artificial intelligence terms (Syracuse University Libraries, 2024), which helps set a baseline for this series of articles.
Artificial Intelligence (AI) refers to computer systems that are trained on large amounts of data and designed to learn, reason, problem solve, perceive, and use language like humans. On this journey, we will explore how AI is showing up in real-world tools like chatbots that hold conversations, apps that translate languages, and systems that help summarize assessments, service plans, and notes. These tools generate new content or suggestions based on what they've learned from large amounts of data, which is why they’re called “generative AI.”
When we refer to some of the major AI engines of today, like OpenAI’s ChatGPT, Google’s Gemini, or Microsoft’s Copilot, we are referring to large language models (LLMs). We use the term “agent” in the context of AI to mean a digital assistant for a specific function that “knows what to do next” based on what it sees or what it is told.
Prompt engineering is “the art of crafting instructions for artificial intelligence models, specifically generative AI models.” To get the most from these tools, users learn to spell out the goal, the context, the expectations, and the information sources for the LLM to process. Today’s AI still requires a human who knows how and when to use these tools to augment their job or other activity. In the future, we will have AI agents that complete work for us with oversight, and eventually AI agents that run key functions without oversight.
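As a simple illustration of what that looks like in practice, here is a hypothetical prompt of our own construction, assembled from those four elements rather than posed as a bare question. The scenario and wording are invented for the example.

```python
# A hypothetical example of prompt engineering: stating the goal, context,
# expectations, and information sources instead of asking a bare question.
# (The scenario and wording below are invented for illustration.)

goal = "Draft a plain-language summary of an annual support plan review."
context = (
    "The readers are a person with IDD and their family. "
    "Avoid clinical jargon and do not include any identifying details."
)
expectations = (
    "Keep it under 200 words, use short sentences, and end with two "
    "questions the person may want to ask their support team."
)
sources = "Use only the de-identified plan excerpt pasted below; do not add outside facts."

prompt = (
    f"GOAL: {goal}\n"
    f"CONTEXT: {context}\n"
    f"EXPECTATIONS: {expectations}\n"
    f"SOURCES: {sources}"
)
print(prompt)  # This text would then be pasted into a tool such as Copilot or ChatGPT.
```

The same four elements work just as well typed directly into a chat window; the point is the structure, not the code.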
The conversation around AI also includes important topics like ethics, fairness, and safety. When your data is used for training, it is stored and helps improve the model over time, whereas in inference-only use, your data is processed temporarily to generate a response but is not remembered or used to train the model. AI must be developed and used in ways that respect human dignity, privacy, and rights. Issues like bias in AI systems, where the technology might unfairly treat someone based on flawed or incomplete data, are real concerns. That is why people talk about “AI alignment,” which means making sure these tools act in ways that match our human values. While the technology is powerful, it’s our responsibility to shape how it’s used for good, which starts with cutting through the jargon.
Armed with a basic lexicon and informed by the history of AI and the important notes of caution, how do we think about our use of AI in healthcare? Can AI be used to improve long term services and supports? Can AI enhance self-determination, or does it erode self-determination, replacing self- with AI-determination? If the latter, what do we need to do to protect against it? How do we grapple with a raft of ethical considerations? How do we ensure privacy and confidentiality? This series will explore these essential questions.
References
Alexiou, G. (2024, August 6). Disability data alarmingly absent from AI algorithmic tools, report suggests. Forbes. Retrieved from https://www.forbes.com/sites/gusalexiou/2024/08/06/disability-data-alarmingly-absent-from-ai-algorithmic-tools-report-suggests/
Arif, S. (2023, October 16). An overview of the rise and fall of expert systems. Medium. https://medium.com/version-1/an-overview-of-the-rise-and-fall-of-expert-systems-14e26005e70e
Press, G. (2017). Alan Turing predicts machine learning and the impact of artificial intelligence on jobs. Forbes. Retrieved March 28, 2025, from https://www.forbes.com/sites/gilpress/2017/02/19/alan-turing-predicts-machine-learning-and-the-impact-of-artificial-intelligence-on-jobs/.
Russell, S., & Norvig, P. (2022). Artificial Intelligence: A modern approach. Pearson Education Limited.
Syracuse University Libraries. (2024). Key terms in artificial intelligence. https://researchguides.library.syr.edu/c.php?g=1341750&p=10367071
U.S. Department of Health and Human Services, Office of the National Coordinator for Health Information Technology (ONC). (2024, January 18). HTI-1 Final Rule overview: Health data, technology, and interoperability – Certification program updates, algorithm transparency, and information sharing. Presented at HITAC meeting. Retrieved from https://www.healthit.gov/sites/default/files/facas/2024-01-18_HTI-1_Final_Rule_Overview_508.pdf
Turing, A. (1951). Intelligent machinery, a heretical theory [Speech]. Lecture given to the ‘51 Society, Manchester. The Turing Digital Archive. Archived from the original on 26 September 2022. Retrieved 28 March 2025.
About the Authors
David Ervin has worked in the field of intellectual and developmental disabilities (IDD) for nearly 40 years in the provider community mostly, and as a researcher, consultant, and ‘pracademician’ in the US and internationally. He is currently CEO of Makom, a community provider organization supporting people with IDD in the Washington, DC metropolitan area. He is a published author with nearly 50 peer-reviewed and other journal articles and book chapters, and more, and he speaks internationally on health and healthcare systems for people with IDD, organization development and transformation, and other areas of expertise.
David’s research interests include health status and health outcomes experienced by people with IDD, cultural responsiveness in healthcare delivery to people with IDD, and the impact of integrating multiple systems of care on health outcomes and quality of life. David is a consulting editor for three scientific/professional journals, and serves on a number of local, regional and national policy and practice committees, including The Arc of the US Policy and Positions Committee. David is Conscience of the Field Editor for Helen: The Journal of Human Exceptionality, Vice President of the Board of Directors for The Council on Quality and Leadership (CQL), and Guest Teaching Faculty for the National Leadership Consortium on Developmental Disabilities.
Doug Golub is the Principal Consultant at Data Potato LLC and a Doctor of Public Health (DrPH) student at the Johns Hopkins Bloomberg School of Public Health. While earning his Master of Science at Rochester Institute of Technology, he worked as a direct support professional, an experience that shaped his career in human services and innovation. He co-founded MediSked, a pioneering electronic records company for home and community-based services, which was acquired after 20 years of impact. Doug has also held leadership roles at Microsoft’s Health Solutions Group and is a nationally recognized thought leader on data, equity, and innovation. He serves on the boards of the ANCOR Foundation and FREE of Maryland.