Using Aided Language Input to Elicit Verbal Speech

Owen reading the message window of Speak for Yourself.

This post is combining two important concepts in the world of Augmentative and Alternative Communication (AAC): Aided Language Input (which may also be referred to as Aided Language Stimulation or Modeling) and the relationship between verbal speech and AAC.

Sometimes I hear people say that they're afraid to use AAC because they are worried that it will inhibit verbal speech development.  They're "not there yet," as if implementing AAC means they have given up on the possibility of verbal speech.  They are afraid that AAC will make the child "lazy," as if learning a whole new language system, knowing where to find the words that you want or need to say, and then hoping that listeners will be patient enough to understand your communication attempts is easier than…talking. The moment verbal speech can be produced, understood, and effective, it is used.

If you are a parent, speech-language pathologist, teacher, 1:1 aide, or grandparent who is concerned about AAC having a negative impact on speech, I'm glad you're reading this.  If you are a collector of empirical evidence, I'm including some articles citing evidence-based research that AAC "gains come at no risk to speech development or recovery" and that AAC Does Not Hinder Natural Speech.  Experts agree, and the evidence supports, that if a child is not speaking, the best thing you can do to promote verbal speech is to provide access to AAC.  If a professional tells you that AAC will hinder your child's verbal abilities, find a new professional.  It is just not true.

This is what I know to be true:

  • AAC reduces frustration for individuals with complex communication needs (CCN) by providing communication while they work to develop verbal speech.
  • If someone has the ability to produce verbal speech, AAC does not interfere with that ability.  Using AAC actually supports verbal speech by providing a consistent auditory and visual model and reducing the anxiety of coordinating the oral motor movements required to speak.  Once the "pressure is off," individuals often relax and focus on their message rather than on coordinating the movements to produce the correct sounds. Removing the oral motor planning demand, especially for individuals with apraxia, reduces the barrier to verbal speech. They start to talk without their brain having to think about it first.
  • AAC allows a child to develop language and to communicate. If that child does not speak verbally, he/she still has a voice.

Aided Language Input/Aided Language Stimulation/Modeling

Owen watching vocabulary as it is modeled on Speak for Yourself.

Aided Language Input is a strategy that is highly regarded in the AAC world.  I've written about it here and here, and shared some modeling ideas here.  Last week, Owen and his family allowed us to video some of my time with him and share the videos.  I've pulled out some clips that illustrate examples of Owen using Speak for Yourself to cue his verbal speech.  When I watch videos of myself as a clinician, there are always things I see that I missed in the "live" moment, and that is true in these videos as well.

However, my hope is that if you have no idea what everyone is talking about when they say to “model language on AAC,” these might provide an example. Besides, if you watch closely, Owen uses some great strategies!

In this video, I model a choice for Owen. He watches the model and then verbally says “stickers.” When I ask, “What color?” he verbally says “Blue,” but once he has them in his hand, changes his mind and verbally says, “Yellow” (And then decides very quickly that they should all be on the paper).

In this next video, we are deciding who we should draw. I say I am going to draw Owen and start to model "draw Owen," but when I model "draw," he verbally says "draw." So I stop and ask him who I should draw. He considers his options, then chooses "TT" (his name for his wonderful aunt, who also happens to be an SLP), and giggles adorably. If you watch closely, he quickly presses "Hold That Thought" to save the message window content (draw TT). Then, just as quickly, he says "Mommy," looks closely at her picture, pulls "draw TT" down from "Hold That Thought," and saves the combined message window in Hold That Thought: Mommy draw TT. He does all of this very quickly because he knows I am going to model, and he doesn't want my modeling saved with his "held thoughts." Impressive, right? Then he takes it up a level: when he sees that I'm excited, he looks at his 1:1 aide and verbally says, "Awesome. Almonds."

Here’s a picture of the Hold That Thought window after that interaction:

The Hold That Thought window in Speak for Yourself

This last video reinforces the importance of modeling language about someone's interests. My intention was to model some of the words that his 1:1 aide was already using, to expand his vocabulary so that he would have access to those words expressively.  He says the words verbally, and then his reaction is just the best:

Thanks again to Owen and his family for allowing us to share his videos and share in his progress!
