The Evidence-Based Research Behind Speak for Yourself

For the most part, we try to post information regarding Augmentative and Alternative Communication (AAC) that can be applied to any app or language system. However, in this post, we are going to tell you why we think Speak for Yourself is the best AAC app on the market, and link some of the research we’ve read that supports the design of Speak for Yourself and makes it a clinically-sound AAC choice.

There are a few disclaimers to get out of the way. First, we created Speak for Yourself, and we are biased. We think it is the “Best App Ever” (technically it won second place in the Best Special Needs App category in the Best App Ever awards 2012). If we didn’t think it was the best possible app we could make, we would change it. If we worked with individuals and saw that they were unsuccessful because of a flaw in the design, we would correct it.

Second, we are emotionally attached. We have put every ounce of knowledge, energy, experience, time, creativity, and money into the creation (and subsequent defense) of Speak for Yourself. We have explained to our children that there are people who can’t talk and we can help them, and Speak for Yourself has become part of their childhood. We are not business people or computer programmers; we are speech-language pathologists who saw an opportunity to use our skills and knowledge to give people an affordable, accessible, clinically sound voice.

Our third disclaimer is one that is said frequently in the field of AAC: “There is no single communication system that is right for everyone.” We are saying that too, and it is true: some people have motor impairments that prevent them from accessing small buttons, and some people are so severely motorically impaired that they are not able to move, so they require scanning or eye gaze for access. However, the very bold statement we are going to make is this: if someone is able to physically access Speak for Yourself, it is the most fluid, comprehensive system available. Let us explain…


Prior to developing the Speak for Yourself app, we did AAC evaluations and recommended products and language systems based on our trials with the child. We no longer do evaluations, because it would be unethical to recommend a product we have a financial interest in, and also unethical NOT to recommend our product when it may be the best fit for a child. We would (and still do) consult with teams and work directly with children to teach them to use AAC on whatever system the child is using. We still support children we knew before we created Speak for Yourself, and we support them on the language systems they have been using for years. We do, and have always done, what we believe is in the best interest of the child, and for our children who are successful AAC communicators, it is in their best interest to learn to use their communication system to participate in their life and education rather than learning a new system. (If students are NOT using a system successfully, that’s a different story, but one that we wrote about here.)

We have used every comprehensive app on the market – and some of the not-so-comprehensive apps – and we spent hours, prior to creating Speak for Yourself, programming and structuring other apps to follow research-based principles in an effort to make communication as effective as possible for the child. So, when we say we know the other apps that are out there, we don’t mean that we saw a review or read an article. We are saying that there is at least one child we know and support who is using each of those apps. We have programmed them, used aided language input with them, and trained teams to use them. We are not talking about a “Facebook friend” level of knowledge; we have met them for coffee and pondered the meaning of life.

Speak for Yourself is one of the few AAC apps that was created by speech-language pathologists (SLPs), but we created it with input from parents, teachers, nonverbal students, typical students, and people who knew absolutely nothing about AAC. We wanted someone completely unfamiliar with AAC to be able to program it based on their iDevice knowledge. We watched where their hands went naturally when they were trying to program something, and we made changes when we noticed that multiple people made the same “mis-hits.” If multiple people were making the same “mistakes,” we decided that we were the ones who had made the “mistake” in the design, so we changed it.

Observation-based features

Two-touch vocabulary

We watched our AAC users’ “mis-hits” as well, not only on Speak for Yourself, but on whatever device they were using. We noticed that some children had difficulty touching more than two cells without feedback. We could get students to go to a second page or touch a second button, but when we tried to have them navigate to a third page or return to the home screen, we were losing some of them. We also know that AAC is slower than verbal speech. Even if users are patient and capable of navigating through three or ten screens to say a word, what’s the point of cumbersome navigation? We didn’t – and still don’t – see a benefit in complex page navigation. For this reason, we designed Speak for Yourself so that the user can access almost 14,000 words with no more than two touches to say a word. There is simply no other system that is designed this way.
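To see where a figure of almost 14,000 words can come from with only two touches, here is a rough back-of-the-envelope sketch; the grid sizes below are our illustrative assumptions, not the app’s published specification.

```python
# Rough arithmetic behind a two-touch vocabulary. The grid sizes are
# illustrative assumptions, not the app's actual specification.
main_buttons = 118        # assumed: word buttons on the main screen
secondary_buttons = 118   # assumed: word buttons on each secondary screen

one_touch = main_buttons                      # a main-screen word spoken directly
two_touch = main_buttons * secondary_buttons  # main button, then a secondary word

print(one_touch + two_touch)  # 14,042 -- on the order of 14,000 words
```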

Double tap

Every main screen word can be linked to a secondary screen or that link can be turned off to give the AAC user immediate feedback and/or increase his communication rate. When the link to the secondary screen is turned on, the main screen word is programmed directly under the same location on the secondary screen. That means that to “speak” the word, the user essentially double taps the same location. If you have ever watched someone at a vending machine, you’ll understand the reason for this design. It is human nature that when we push a button we expect something to happen. If nothing happens, we push that same button again. How many times have you watched someone pushing the same button to select a soft drink? If nothing happens, they’ll continue to push the button until they give up (and possibly kick the vending machine). In Speak for Yourself, when a secondary screen is linked, users don’t have to be taught to push the button again to say the word they had been saying with one touch. They do it automatically because they know it’s supposed to say something. We have watched children do this for years on various AAC systems, but on those systems we would have to teach them the correct button to push and reteach the motor plan when we increased their vocabulary level. When we designed Speak for Yourself, we decided to take whatever help we could get from human nature.
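As a minimal sketch of that design (our illustration, not the app’s source code): a linked word simply keeps the same grid position on its secondary screen that it holds on the main screen, so the second tap lands exactly where the first one did.

```python
# Minimal sketch (illustrative, not app source): a linked word keeps the same
# (row, col) on its secondary screen, so a double tap on one spot speaks it.
def build_secondary(pos, main_word, related_words):
    """Lay out a secondary screen for the main-screen cell at `pos`."""
    screen = dict(related_words)  # other positions hold related vocabulary
    screen[pos] = main_word       # the word itself stays put under the finger
    return screen

main_pos = (2, 5)
screen = build_secondary(main_pos, "eat", {(2, 6): "hungry", (3, 5): "food"})
assert screen[main_pos] == "eat"  # second tap on the same spot says "eat"
```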

Search Feature

One of the problems that anyone using AAC faces is being able to find vocabulary within a device or app. This was also an issue in Speak for Yourself…until the release of version 1.3, which introduced a multi-sensory search feature. Words can be typed and then selected from a list of app vocabulary. The button on the main screen will be highlighted with a blinking outline (visual) until it is touched (tactile). If the target word is on the secondary screen, the outline will blink around the secondary screen button, and when it is selected, the word is spoken (auditory). The words are paired with the symbol or photo that is used in the app, so that users with beginning literacy skills are able to use the feature independently. If the target word is “closed,” the app will automatically open the word to allow it to be accessed.

Search feature highlighting the word “read.”
The app then navigates to the secondary screen and highlights the target word.
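Put in rough pseudocode terms, the flow shown in the screenshots could look something like the sketch below; the function and data layout are our reconstruction from the description above, not the app’s actual API.

```python
# Sketch of the guided-search flow described above. The data layout and
# function are our reconstruction, not the app's actual API.
def search(word, vocabulary):
    """Return the highlight steps that guide the user's finger to `word`.

    `vocabulary` maps a word to its main-screen position, its secondary-screen
    position (or None if it speaks in one touch), and whether it is open.
    """
    entry = vocabulary[word]
    if not entry["open"]:
        entry["open"] = True                  # closed words open automatically
    steps = [("main", entry["main"])]         # blink the main-screen button
    if entry["secondary"] is not None:
        steps.append(("secondary", entry["secondary"]))  # then the second button
    return steps                              # the final touch speaks the word

vocab = {"read": {"main": (1, 4), "secondary": (2, 3), "open": False}}
print(search("read", vocab))  # [('main', (1, 4)), ('secondary', (2, 3))]
```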

Evidence-based research

One of the elements drilled into graduate students in speech-language pathology, the SLPs-to-be, is the importance of evidence-based research. As SLPs in the field, we are constantly reading new research articles and listening for studies that may offer ideas to help students. One of the drawbacks is that research sometimes takes time to catch up to interventions that are used in the field. It may be years before a strong clinical study is published on the effectiveness of Speak for Yourself as an AAC intervention, but when we created it, we drew on language research that applies to AAC. This is not a comprehensive list, but here are some of the sources and links:

Core vocabulary

Vocabulary selection for AAC is key to students’ successful communication. Nonverbal students can only say the words that we give them. If we only give them a page of foods during snack time, we can’t expect them to engage in a discussion about weekend events or the upcoming zoo trip as they eat their chips.

Core vocabulary words are the 300-500 words that comprise about 80% of the language we use. They remain consistent across age, language, and situation. For example, if I say, “We are going to go to Target, and then we will stop for ice cream,” the only words that are not core are “Target” and “ice cream.” (They are extended or fringe vocabulary).
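To make that percentage concrete, here is a quick sketch that counts core-word coverage in the example sentence above; the core list is a small illustrative subset, not a published core vocabulary list.

```python
# Count core-word coverage in the example sentence above. The core list is a
# small illustrative subset, not a published core vocabulary list.
core = {"we", "are", "going", "to", "go", "and", "then", "will", "stop", "for"}

sentence = "we are going to go to target and then we will stop for ice cream"
words = sentence.split()
covered = sum(word in core for word in words)
print(f"{covered}/{len(words)} core ({covered / len(words):.0%})")  # 12/15 core (80%)
```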

Here is some of the evidence-based research regarding core vocabulary:

Core word comparison

Banajee, M., DiCarlo, C., & Buras Stricklin, S. (2003). Core vocabulary determination for toddlers. Augmentative and Alternative Communication, 19(2), 67-73.

Dada, S., & Alant, E. (2009). The effect of aided language stimulation on vocabulary acquisition in children with little or no functional speech. American Journal of Speech-Language Pathology, 18(1), 50-64.

Fried-Oken, M., & More, L. (1992). An initial vocabulary for nonspeaking preschool children based on developmental and environmental language sources. Augmentative and Alternative Communication, 8(1), 41-56.

Marvin, C. A., Beukelman, D. R., & Bilyeu, D. (1994). Vocabulary use patterns in preschool children: Effects of context and time sampling. Augmentative and Alternative Communication, 10, 224-236.

Raban, B. (1987). The spoken vocabulary of five-year old children. Reading, England: The Reading and Language Information Centre.

Using a word-based vocabulary

We discussed this in an earlier post, but support for using a word-based vocabulary can also be found here.

It’s also supported by the following research:
Balandin, S., & Iacono, T. (1999). Crews, wusses, and whoppas: Core and fringe vocabularies of Australian meal-break conversations in the workplace. Augmentative and Alternative Communication, 15, 95-109.

To quote ASHA’s website regarding this research, SLPs were asked to “predict the topics that would be useful to employees in a sheltered workshop during breaks. The success rate was dismal, less than 10%. If sentences were pre-stored based on these predicted topics, the sentences would have little relevance to the actual conversations occurring.”

It may take a little longer for children to use sentences on a word-based system, but when the children get there, it’s because they have built their language skills to the sentence level.

Motor Planning

If you walk, talk, drive, text, dance, ride a bike, or move throughout life on a daily basis, you use motor planning. Motor planning makes our regular, repetitive activities automatic and fast! Imagine if someone swapped all of the letters on the QWERTY keyboard on your phone every morning. You would still be able to type, but you would have to scan to find the letters and after a few days of that frustration, well, you’d probably start displaying behaviors and become protective of your phone.

Motor planning is important in life, and also in AAC. Speak for Yourself keeps motor planning consistent because the core vocabulary is locked: once an AAC user learns a word, it is never going to move. This allows language to be cumulative and eliminates situations where children would have to relearn words as their vocabulary expands. In addition, the two-touch vocabulary maintains consistent motor planning. Users can quickly access words, which increases their communication rate.
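One way to picture the locked vocabulary (our illustration, not the app’s implementation): the word set grows by revealing buttons that were always there, never by rearranging the ones a child has already learned.

```python
# Sketch (illustrative, not app source): vocabulary expands by un-hiding
# cells, so every learned word keeps its position and its motor plan.
layout = {
    (0, 0): {"word": "go",   "visible": True},
    (0, 1): {"word": "more", "visible": True},
    (0, 2): {"word": "stop", "visible": False},  # present, but hidden for now
}

def expand(layout, pos):
    """Grow the vocabulary by revealing a cell; nothing else moves."""
    layout[pos]["visible"] = True  # no rearranging, no relearning

expand(layout, (0, 2))
assert layout[(0, 0)]["word"] == "go"  # "go" is exactly where it has always been
```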

Here is an article that cites research about the importance of motor planning and its relevance to language and communication skills.

Babble Feature

This feature is a combination of our observations from working with children and some research that has been in the making for years. It’s also based on the natural period of exploration that babies have when they babble. They begin by making sounds with an open vocal tract when they are born – they cry. Soon after, they produce vowel sounds (still with an open vocal tract)…they coo. Once they’ve mastered that, they begin to form oral motor movements, their articulators (tongue, lips, alveolar ridge) touch, and we begin to hear them produce consonant sounds…they babble. Eventually they produce a syllable combination that resembles a word, like “dada,” and everyone says, “He just said ‘Daddy’!” and claps, and the baby figures out that the combination of movements means something, so he says it again.

Regardless of their age, children who are just being given an AAC device are in the infancy of their expressive language. They have spent their lives being quiet in their speech and language exploration. Even if they vocalized, they may not have reached the point where their vocalizations resembled words and had meaning. There was no excitement surrounding language, because more than likely, it was replaced with frustration and concern. When we had students who were being given a device for the first time, we would recommend that they have “talker time” built into their schedule to allow them to “babble.” During this time, their teacher or one-to-one aide would open all of their vocabulary to allow them to explore, and then respond to whatever they said to give it meaning. So, if the student said “sad,” the aide would pretend to cry. If he said “sit,” the aide would flop to the floor dramatically, and the student would be motivated to use that vocabulary again because it was fun. When talker time was over, the aides would close the vocabulary to return it to the child’s configuration. Sometimes they would forget to close vocabulary, but there was a greater issue if they forgot to leave something important open…like the food or toy category.

When we designed the Babble feature, the purpose was to take the responsibility off of the teacher, aide, parent, or SLP and save the configuration automatically. Additionally, it is on its own lock, so AAC users have the power to toggle between their limited setting and the entire app vocabulary so they can babble. We’ve seen many of the students use it independently…even our young users like to have access to all of the words. Many of you may remember this video of Maya. (Dana Nieder at Uncommon Sense Blog).
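A minimal sketch of that toggle, assuming the behavior described above (this is our illustration, not the app’s source): turning Babble on reveals the entire vocabulary, and turning it off restores the saved configuration with no manual re-closing.

```python
# Sketch (illustrative, not app source) of the Babble toggle: flipping it on
# reveals the entire vocabulary; flipping it off restores the saved
# configuration automatically, so no adult has to re-close words by hand.
class Talker:
    def __init__(self, vocabulary, open_words):
        self.vocabulary = set(vocabulary)  # every word in the app
        self.open_words = set(open_words)  # the user's saved configuration
        self.babble = False

    def toggle_babble(self):
        self.babble = not self.babble

    def visible_words(self):
        # Babbling never modifies the saved configuration itself.
        return self.vocabulary if self.babble else self.open_words

t = Talker(vocabulary={"go", "more", "sad", "sit", "eat"}, open_words={"go", "more"})
t.toggle_babble()
assert t.visible_words() == {"go", "more", "sad", "sit", "eat"}  # explore everything
t.toggle_babble()
assert t.visible_words() == {"go", "more"}  # configuration restored automatically
```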
Research was recently published that found that deaf babies use their hands to babble.
It is important to babble, in whatever language a child is going to speak.

Presuming Competence

PrAACtical AAC recently had a great blog post about research on presuming competence. Of course, the presumption of competence is not a feature that can be placed in a product; however, it is woven throughout the design of the app and throughout our clinical practice with students. When you put an app like Speak for Yourself in front of an individual who can’t say a word, there is an underlying message that says, “I believe that you have the ability to learn all of these words and I’m willing to teach you.” Features like Babble and the search feature were designed so that the AAC user has access to them. The texting feature was created with the expectation that individuals will establish relationships and communicate with important people in their lives, just like everyone else. We have been told that students are at an obscenely low cognitive level, but they are able to put words together within the first hour of having a device in front of them. Choosing a language system based on a nonverbal child’s “low cognitive level” leaves the child trapped. If you give someone 9 buttons, how will she ever be able to show you that she has thousands of sentences in her mind? Presume that students are competent, intelligent, and have a desire to communicate until they prove you right…and then set the bar higher!

If you would like to win a full version of Speak for Yourself, enter our 10-day code-a-day Rafflecopter giveaway.

