The Ethical Algorithm: Navigating AI & its Applications in the Lives of People with IDD
What is AI Anyway? Cutting Through the Jargon
AI as a Partner: Enhancing Health Supports with People with Disabilities
Addressing Bias and Ableism: Centering Disability Voices in AI Development
Real Risks, Real Voices: What People with Disabilities Say About AI
Staying Innovative: How to Use AI for Good
By David A. Ervin, BSc, MA, FAAIDD and Douglas Golub, BA, MS, SHRM-CP, DrPH(C)
This is the fourth of a five-part series on artificial intelligence (AI) and its emerging role in healthcare and community-based services for people with intellectual and/or developmental disabilities (IDD). In this installment, we explore what people with disabilities think about AI, their hopes, fears, and renewed demands for inclusion. The authors are profoundly grateful to Jason Freeman, Steve O’Haver, Samara Pfeiffer, BJ Stasio, and Sammy Thomas for their time, their expertise, and their willingness to share their thoughts with us. They deserve to be engaged fully, and they have so much to offer.
Real Risks, Real Voices: What People with Disabilities Say About AI
Recently, the relationship between people with intellectual and/or developmental disabilities (IDD) and advanced technology has taken a disturbing turn. In early May, Newsweek published an article entitled 'Down Syndrome' Filters Being Used to Promote Sexual Content (McFall, 2025, May 3). The article’s focus? “A disturbing [social media] trend where creators are using an AI filter to make it look like they have Down syndrome and creating content that is suggestive and sexual in nature.” This isn’t just an Instagram or TikTok fad. This has reached OnlyFans, a subscription-based platform known for its explicit adult content. The portrayals are verifiably fake. But it raises significant concerns, not just about the unregulated use of AI to develop content, but about how people with Down syndrome and other forms of IDD are being engaged—or ignored or, worse still, exploited—in the development of advancing technologies. Dr. Amy Gaeta, research associate at The University of Cambridge, who was interviewed for the Newsweek article, notes that the "hyper sexualization of women with Down syndrome can put actual women with Down syndrome at greater risk for epistemic, symbolic and material harm by propagating false images."
This raises an important set of issues to which people with IDD, their families, advocates, and others must pay close attention. It is also a set of issues to which AI and other technology developers must attend.
The Ethical Algorithm series repeats an essential call to action: the development of AI must actively and intentionally involve people with IDD if it is to represent people with IDD, let alone benefit them. To anchor our examination of AI in this fundamental necessity, we conducted interviews with five people with IDD from different parts of the United States. We asked what AI is, how they use or encounter it in their daily lives, and what excites or worries them about its future. We also asked about rights, and about how people want to be protected, if at all, and represented in a world where AI is playing a growing role in healthcare, education, and everyday services. We asked whether they had been engaged in any part or phase of the development of AI. Their answers to these and additional questions are instructive, and the conversations rich. Perhaps unsurprisingly, people with IDD say many of the same things everyone does about AI. “It sounds exciting and scary,” said one interviewee. “I’m looking forward to learning more.”
What sets these voices apart is that they are excluded outright or dramatically underrepresented in the dialogue on modern technologies, including AI, despite being deeply affected by the design and deployment of new technologies. Put another way, people with IDD and other disabilities are simply not engaged in conceptualizing and developing modern technologies, including AI. As one person put it: “People try to create stuff to help us, but they don’t bother asking our opinions.”
This article shares what we heard: hopes, fears, and clear-eyed calls for inclusion. We don’t offer the insights of just these five people as representative of 8 million Americans with IDD. They are, however, instructive and essential.
Everyday Uses and Potential
Steve O’Haver, a proud Indianan, sounds as excited by AI’s potential as he is concerned by it. “It can be used for anything! For special education students, AI can be helpful in schools, with better and more personalized planning through AI. AI could specifically come up with an IEP that is personalized for the individual [student].”
Steve O’Haver
Steve-O, as his friends call him, who’s enrolled in a culinary program and participates in advocacy and school-based leadership activities, uses AI to support his mental health. “I use an AI chatbot on my phone when I’m angry... the responses help me feel better on the inside. I use it a lot these days,” he said.
Sammy Thomas, an autistic college student at Purdue and next-generation advocacy leader, points to the potential of “AI [that] can help people to work more independently. As AI becomes more integrated in workplaces, people can manage their own enterprises.” As a student, Sammy “use[s] AI a lot in my day-to-day life…I use the Google search engine AI to easily find sources. I also sometimes use AI to generate funny images!” For Sammy, AI helps make research faster and brings a little joy between assignments.
Sammy Thomas
He also sees AI as a resource in his dream of starting his own health business someday. “If I’m in my own business, when I meet with people to discuss goals and issues, I don’t want to hire a bunch of other people, so AI can maybe save costs.”
BJ Stasio, a nationally recognized disability rights leader, has been using speech recognition tools for years, long before most people started calling them AI. BJ also contributed to article 3 in this series, Addressing Bias and Ableism: Centering Disability Voices in AI Development. BJ has found creative ways to make the tools work for him, especially when composing messages. “I write emails using what some people might call ‘colorful’ language, and then the AI calms it down for me.” BJ smiled. Anyone who knows BJ knows that he’s a very effective communicator and often does use colorful language. For BJ, AI isn’t a novelty, it’s a writing partner that helps him stay true to his voice while still getting his message across.
BJ Stasio
Samara Pfeiffer, a self-advocate and national dance champion with Warrior Dance, recently earned Diamond adjudication in the Inspiring Stars division at a competition in Sandusky, Ohio. Samara made a website for her team with the help of AI. The web design tool gave an option to “generate with AI right on it.” She also uses voice tools (Siri) to call people, including legislators as part of her advocacy work, and Copilot to help write papers. “When you don’t know what to write, you can just say it, and the computer types it for you.”
Jason Freeman, a seasoned disability rights advocate and host of the Awkwardly Awesome podcast available on YouTube, offered a mix of insight and humility. “I do Google searches all the time. If that’s AI, then I guess I use it—but I’m not sure where it starts or stops,” he admitted. His reflection touches on a common confusion: What is AI and what is just software? As Satya Nadella, CEO of Microsoft, puts it: “Traditional software follows rules; AI creates its own. One is a tool, the other is a collaborator, with all the brilliance and unpredictability that entails” (Madrona, 2025).
Jason Freeman
Jason went on to remind us that, as with all things, “if people use technology, including AI, with good intentions, then good can come of it. If people use it without good intentions, it can do tremendous damage.” Steve-O added “we need laws in place” that assure ethical, appropriate development and use of modern technologies to prevent harm.
Concerns, Risks, and Fears
Steve-O also worries about “AI replacing people. AI cannot replace friends—a true meaning of a friend is someone you can talk to. To truly be happy in life, you have to be with friends, with people!” About this, he feels strongly. He’s not alone.
A recent article in Harvard Business Review chronicles the increasing use of AI for companionship (Zao-Sanders, 2025). In fact, a growing chorus of concerns speaks to the ways AI is potentially replacing human relationships (Hau & Winthrop, 2025; Siebens, 2025). These are not-so-subtle alarm bells to which we need to pay attention. There’s more. When asked what worries him about AI, Sammy put it this way: “Something going a bit too far with AI…blurring the lines between what humans should do versus what technology can do.” He uses art as an example. “AI-generated artwork being used for [inappropriate] profit. And,” he worries, “this is AI replacing [human] creativity.”
Samara raises another concern. AI “can be scary because it can create anything and anyone can get ahold of [the images].” And like the rest of us, she knows that once images—or any personal information—are out there on the internet, they are essentially there permanently. Her concern is not unfounded—we need look no further than the Newsweek article (McFall, 2025, May 3).
Many of the people with whom we spoke voiced serious concerns about how AI could be misused when it comes to privacy and manipulation. Samara warned, “[AI] can make fake Facebook accounts. You have to make sure you’re not being hacked yourself.” Sammy shared a similar worry: “My fear with AI lies not so much in the technology itself but rather people using it unethically, such as big tech companies gathering private info.” He also cautioned that AI “shouldn’t spread false info that’s more likely to affect vulnerable populations—children, elderly, disabled.” A case in point: an AI filter that presents women with Down syndrome in sexually suggestive ways online.
Jason feared people may “start to rely on it and then not develop the mental faculties that humans once developed.” New and developing technologies make what was once impossible possible. Many of us will recall, for example, the actual paper maps on which we relied to travel by car from point A to point B.
Today, who among us can actually read and use a map? Who can recite by name the streets onto which we would make a right turn or a left turn? Probably few of us, an ever-diminishing number, have retained these skills. Instead, voice commands from the likes of Apple Maps, Waze, and Google Maps guide us. We no longer need bulky maps that were impossible to re-fold into anything resembling their original state!
This may not be a loss of mental capacity and skill that leads to the proverbial fall of the Roman Empire, but Jason’s caution is an excellent one. Shanmugasundaram and Tamilarasu (2023) reviewed a litany of evidence-based changes (at the least) in cognition that have resulted from the ubiquity and widespread use of modern technologies, as well as losses (at the worst) in the ways we attend to our world and in our critical thinking and learning skills. Sammy reminds all of us: “Use your brain, [don’t] just rely on a tool.” He points to Purdue’s plagiarism and cheating standards, which have expanded to address AI use. Steve-O quips in agreement: “Don’t use AI to cheat!”
“As we move toward more automated AI technologies, let us commit to ensuring that these systems are more just, more inclusive, and more human. That future depends on all of us, and must be built with all of us.”
Rights, Representation, and Exclusion
Unsurprisingly, most interviewees said they had never been asked to help design or test AI tools. Samara put it plainly, “I have not. This interview is the first thing that I’ve done that has asked [about AI].” Steve-O shares a similar experience: “No, but I want to [be engaged]. People try to create stuff to help us, but they don’t bother asking our opinions.” Sammy had one opportunity. “Yes, last year I was asked to review the Outlier AI program,” but he noted that being invited was the exception, not the norm.
The principle of “Nothing About Us Without Us” came up often and with frustration. “Right now, nothing is involving us, so ‘Nothing About Us Without Us’ can’t be realized,” Steve-O said. Jason adds a business case that should, if heeded, compel the engagement of people with disabilities: “If we don’t engage people with lived experience, the tech won’t be responsive to people with disabilities, and people with disabilities won’t buy it...it doesn’t make business sense.” He surfaced what should be obvious to us all. Given that more than 25% of Americans are disabled, AI developers and modern technology businesses have a compelling economic reason to engage people with IDD and other disabilities.
Across the board, people want to be seen as collaborators in the development and evolution of AI, not just users of it. The American Foundation for the Blind offers its Guiding Principles for More Disability-Inclusive AI (Silverman et al., 2025). We find them comprehensive and excellent, and we encourage their application to the development and related considerations of AI. One of the 29 Principles reads:
“Collaboration between the assistive technology industry, AI developers, and the disability community could result in more accurate and neutral representation of individuals in image descriptions that balances privacy, concerns about bias, and accuracy of the image descriptions.”
Listening and Learning: What People with IDD Teach About AI
The vast majority of technology, including AI, is developed without any engagement of people with IDD and other disabilities. Ultimately, that technology delivers something to or for people with IDD, but not with them. Everyone we interviewed—and we join them in lockstep solidarity—urged tech development to include people with IDD. People with IDD are well prepared, ready, and eager to collaborate with the tech industry and AI developers. It’s up to the tech industry and AI developers to bring them to the table.
At the risk of repeating the same message once more—and acknowledging the likelihood that the message will find its way into the 5th and final article of The Ethical Algorithm series in September—true AI inclusion is only possible when people with IDD are recognized not just as users, but as leaders, testers, co-designers, and decision-makers: collaborators with expertise and insights, born of and shaped by lived experience, that must shape the tools being built now. As we move toward more automated AI technologies, let us commit to ensuring that these systems are more just, more inclusive, and more human. That future depends on all of us, and must be built with all of us.
References
Hau, I., & Winthrop, R. (2025, July 2). What happens when AI chatbots replace real human connection. Brookings Brief. Retrieved July 12, 2025, from https://www.brookings.edu/articles/what-happens-when-ai-chatbots-replace-real-human-connection/
Madrona. (2025, March 18). Satya Nadella on Microsoft’s AI strategy, leadership, and culture. https://www.madrona.com/satya-nadella-microsfot-ai-strategy-leadership-culture-computing/
McFall, M. R. (2025, May 3). “Down syndrome” filters being used to promote sexual content. Newsweek. Retrieved June 1, 2025, from https://www.newsweek.com/down-syndrome-filters-used-sexual-content-social-media-2067146
Shanmugasundaram, M., & Tamilarasu, A. (2023). The impact of digital technology, social media, and artificial intelligence on cognitive functions: A review. Frontiers in Cognition, 2. https://doi.org/10.3389/fcogn.2023.1203077
Siebens, S. (2025, March 24). The rise of AI relationships: Are we replacing human connection? Emotional Health Institute Blog. Retrieved July 15, 2025, from https://www.emotionalhealthinstitute.org/post/the-rise-of-ai-relationships-are-we-replacing-human-connection
Silverman, A. M., Baguhn, S. J., Vader, M.-L., Romero, E. M., & So, C. H. P. (2025). Empowering or Excluding: Expert Insights on Inclusive Artificial Intelligence for People With Disabilities [White paper]. American Foundation for the Blind. Available at: https://www.afb.org/research-and-initiatives/empowering-or-excluding
Zao-Sanders, M. (2025, April 9). How people are really using AI in 2025. Harvard Business Review. Retrieved July 14, 2025, from https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025
About the Authors
David Ervin has worked in the field of intellectual and developmental disabilities (IDD) for nearly 40 years in the provider community mostly, and as a researcher, consultant, and ‘pracademician’ in the US and internationally. He is currently CEO of Makom, a community provider organization supporting people with IDD in the Washington, DC metropolitan area. He is a published author with nearly 50 peer-reviewed and other journal articles and book chapters, and more, and he speaks internationally on health and healthcare systems for people with IDD, organization development and transformation, and other areas of expertise.
David’s research interests include health status and health outcomes experienced by people with IDD, cultural responsiveness in healthcare delivery to people with IDD, and the impact of integrating multiple systems of care on health outcomes and quality of life. David is a consulting editor for three scientific/professional journals, and serves on a number of local, regional and national policy and practice committees, including The Arc of the US Policy and Positions Committee. David is Conscience of the Field Editor for Helen: The Journal of Human Exceptionality, Vice President of the Board of Directors for The Council on Quality and Leadership (CQL), and Guest Teaching Faculty for the National Leadership Consortium on Developmental Disabilities.
Doug Golub is the Principal Consultant at Data Potato LLC and a Doctor of Public Health (DrPH) student at the Johns Hopkins Bloomberg School of Public Health. While earning his Master of Science at Rochester Institute of Technology, he worked as a direct support professional, an experience that shaped his career in human services and innovation. He co-founded MediSked, a pioneering electronic records company for home and community-based services, which was acquired after 20 years of impact. Doug has also held leadership roles at Microsoft’s Health Solutions Group and is a nationally recognized thought leader on data, equity, and innovation. He serves on the boards of the ANCOR Foundation and FREE of Maryland.