Jan 8, 2026
AI, AAC, and Presuming Voice
Written by Alan, founder and CEO of Flexspeak. He is a licensed speech-language pathologist with lived experience as a person who stutters.
AI in AAC is no longer theoretical. It is already here, already shaping how people communicate, and already raising big questions for our field. In a recent conversation with Brenda Del Monte, a leading speech-language pathologist, AAC expert, and founder of Technically Speaking, we explored what this moment means for AAC users, clinicians, families, and schools, and why so much of the resistance we see today echoes past moments of change in assistive technology.
What stood out most was not a debate about tools, but a deeper question about belief. Do we truly presume competence when it comes to AAC users? And what does that presumption require of us now?
The AI branding problem
As Brenda pointed out early in the conversation, much of what we are calling “AI” today in AAC is less about entirely new capabilities and more about how those capabilities are framed and perceived.
“When we say artificial intelligence, people really do feel like… we’re already skeptical of the intelligence of our AAC users as a society.”
The term itself carries baggage. When combined with long-standing assumptions about disability, it can unintentionally reinforce the very biases AAC is meant to dismantle. What’s at stake is not the technology alone, but what we assume about the people using it.
Presuming competence means trusting choice
A central theme in Brenda’s perspective is that objections to AI often reveal an underlying lack of trust in AAC users’ discernment.
“If we presume that people have something to say and they know what they want to say, then we also presume that they have the ability to have discernment when choices are given.”
When people argue that AI-generated options might not reflect what an AAC user “really meant,” Brenda reframes the concern:
“When we say that it won’t be what they meant to say, we’re saying that we don’t believe in their competence.”
This is not a minor issue. It strikes at the core of AAC philosophy. If we truly presume competence, then we must trust AAC users to select, reject, refine, and personalize language, even when that language is generated with AI support.
The gap between “inside” voice and output
One of the most powerful moments in the conversation came from a real example with a teenage AAC user who was highly proficient with eye gaze and Unity.
Her original message was something like:
“I went through new house with mom and dad, hope they buy.”
When run through an AI-supported rewording, it became:
“I toured a new home that we are hoping to purchase.”
Brenda asked her if it was accurate. It was. But it still did not feel right.
When given a “teenage tone” option, the student selected:
“I rolled through a new crib that was straight up lit.”
Brenda described the reaction vividly.
“Her spastic CP is in full form. She’s responding physically that this is the way she would really want to say it.”
The original AAC output was correct, but it did not capture personality, excitement, or identity.
“That was not her inside voice. Her inside voice was exploding with excitement.”
This example illustrates a critical truth. Telegraphic AAC messages are often misinterpreted as reflections of cognition, when they are really reflections of access constraints.
Tone is not extra, it is meaning
Tone came up repeatedly as essential, not optional.
“So much of what we say is tone. When someone doesn’t have a way to add tone, it’s very subjective how you’re interpreting that.”
Brenda shared examples of AAC users training systems to express sarcasm, humor, and emotional nuance, including a NeuroNode user who explicitly trained sarcastic responses.
“Sarcasm is so much of a person. It’s a big part of personality. Tone reveals personality that current AAC systems have not expressed. AAC users often say that the device - that box of words - isn’t who they are. Let’s give them access to ways to have those words expressed with tone that reveals their personality.”
In spoken language, tone is automatic. In AAC, tone must be intentionally designed. AI creates new opportunities to make that possible without increasing cognitive or motor load.
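As a concrete illustration of what intentionally designed tone could look like in software, here is a minimal sketch. The `llm_rewrite` function is a hypothetical stand-in for any language-model call; nothing here names a real product or API. The key design choice, in keeping with presuming competence, is that the system only generates candidates: the original message is always preserved, and the AAC user makes the final selection.

```python
# Minimal sketch of tone-variant generation for a single AAC message.
# `llm_rewrite` is a hypothetical stand-in for any language-model call.

def tone_variants(message, tones, llm_rewrite):
    """Return candidate rewordings keyed by tone; the original always stays."""
    options = {"original": message}  # the user's own wording is never discarded
    for tone in tones:
        prompt = f"Rewrite in a {tone} tone, keeping the exact meaning: {message}"
        options[tone] = llm_rewrite(prompt)
    return options  # the AAC user, not the system, picks from these
```

Run against the example above, “I went through new house with mom and dad, hope they buy” would yield a formal variant and a teenage-tone variant alongside the untouched original, and selecting any of them remains the user’s act of authorship.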
Real-time support can reduce cognitive load
Another powerful use case discussed was AI supporting real-time interaction rather than replacing authorship.
Brenda described a system that recognizes when a communication partner asks a yes/no question and immediately presents yes/no options.
“Let me reduce the cognitive load of you having to navigate to a yes/no page.”
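The shortcut Brenda describes can be sketched in a few lines. This is a hypothetical illustration, not the actual system: a simple keyword heuristic flags likely yes/no questions from the communication partner and surfaces quick replies, while keeping full message composition one tap away.

```python
# Hypothetical sketch of surfacing quick replies for yes/no questions
# so the AAC user need not navigate to a separate yes/no page.

YES_NO_STARTERS = {
    "do", "does", "did", "is", "are", "was", "were", "can", "could",
    "will", "would", "should", "have", "has", "had", "am",
}

def quick_replies(partner_utterance):
    """Return quick-reply options if the utterance looks like a yes/no question."""
    text = partner_utterance.strip()
    if not text.endswith("?"):
        return []
    first_word = text.lower().split()[0]
    if first_word in YES_NO_STARTERS:
        # Low-effort choices first; "Let me say more" keeps full
        # composition available, so authorship is never removed.
        return ["Yes", "No", "Let me say more"]
    return []  # open-ended question: fall back to normal message building
```

A production system would presumably use speech recognition and a trained classifier rather than a starter-word list, but the design principle is the same: reduce navigation load without taking the choice away from the user.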
This matters because delays change conversations.
“It creates surface conversations because it takes so long for the AAC user to respond. Meanwhile, real-time conversation is inclusion and belonging, and it fights the inherent loneliness that happens when one feels left out of a conversation.”
The goal is not automation for convenience. It is access to real conversation, in real time.
Supporting the support network
AI is not only about the AAC user. It also reshapes how educators, therapists, and families support communication.
Brenda was clear that current systems already limit voice, even without AI.
“The choices are given by the parent, the educator, the person facilitating. And we don’t stop to ask, is this my voice or is this theirs?”
When assignments are simplified due to time and fatigue, complexity of thought is often lost. AI can help bridge that gap by offering broader idea generation that is not constrained by a single facilitator’s perspective.
“A large language model is more likely to have a choice that matches their voice than an individual that’s facilitating.”
This reframes AI as a tool for reducing facilitator bias, not increasing it.
Resistance, compliance, and the pattern we’ve seen before
Concerns about FERPA, HIPAA, DPAs, and recording are real. But Brenda contextualized them historically.
“We went from ‘no way kids can have email’ to ‘every student is required to have an email’ in a very short period of time.”
The pattern is familiar. Resistance, followed by normalization, followed by recognition that access was the point all along.
“There’s resistance to change. Then we realize it’s actually access, and it’s our job to advocate for it.”
Personalization is not a luxury
Finally, the conversation returned to something foundational. Personalization is not fringe. It is motivation, identity, and engagement.
“The least skilled thing I do is editing a button. You can YouTube that. The skill is making AAC relevant.”
Brenda emphasized how time-consuming personalization currently is, and why streamlining it matters.
“How much of my four training sessions is adding ‘Itsy Bitsy Spider’ when it shouldn’t be that?”
AI-assisted customization, when done collaboratively with the AAC user, does not replace clinical judgment. It frees clinicians to focus on what actually matters.
Moving forward with courage
This conversation was not about hype or fear. It was about responsibility.
Responsibility to presume competence.
Responsibility to trust AAC users’ discernment.
Responsibility to design systems that reflect who people are, not just what is easiest to produce.
AI does not remove authorship unless we design it that way. Used thoughtfully, it can do the opposite. It can return voice, tone, agency, and joy to communication.
And that is worth the work it takes to get it right.

