On Thursday, March 17, 2011, Dr. Ronald Goldman presented a complimentary webinar: Goldman-Fristoe 2: Research, Administration, and Interpretation. He also answered questions from listeners at the end of the event. Do you have comments or questions? Ask away in the comments below!
I don’t know about you, but I find that the assessment of language skills in children is one of the most challenging aspects of my job. Language, unlike articulation, does not have a finite number of skills to master. Language learning and usage continue to develop throughout one’s life.
Perhaps the biggest challenge in language assessment is the balancing act involved in oral communication. We need to measure the broad range of skills in this area in a manner that is structured enough to compare the abilities of the examinee to a norm group, but also reflects the unstructured use of language in natural environments. The following case study should provide some insight into one option for balancing these goals.
CASE STUDY: SALLY, AGE 7 YEARS, 8 MONTHS
Sally was referred for speech and language testing by her classroom teacher. The case history information indicated that Sally’s teacher rated her as being “incomprehensible” in classroom discussions. On the day of the speech and language assessment, I introduced myself to Sally and asked her if she had already been to P.E. class. She responded, “I ride the bus.”
Step One: Initial Assessment
The Comprehensive Assessment of Spoken Language (CASL) was used as the formal measure of Sally’s oral language based on the depth of detail it provides. All core and supplementary tests were administered for Sally’s age level. Several weaknesses were identified, but they were not as severe or of the type I had expected. In the core tests, her standard scores were as follows:
- Syntax Construction: 47
- Pragmatic Judgment: 73
- Paragraph Comprehension: 85
- Core Composite: 71
Frankly, I expected Sally’s standard score for Syntax Construction to have been higher than the score for Nonliteral Language, which is a more complex task. The scores on the supplementary tests were generally average to low-average, with the exception of Sentence Completion, on which Sally obtained a standard score of 67. While Sally showed some obvious weaknesses on the CASL, she demonstrated enough competencies that I was surprised her teacher had rated her as “incomprehensible” in classroom discussions. Obviously, more information was needed on the type of language Sally used in less structured activities. In addition, I wanted a language sample typical of classroom activities in which the teacher provides topical structure.
Step Two: Making the Choice—Sounds In Sentences
After thinking about what exactly I needed to measure, I decided the Sounds in Sentences section of the Goldman-Fristoe Test of Articulation, Second Edition (GFTA-2) would be the best way to obtain the specific information for this situation. This portion of Sally’s language assessment would be less structured than the tests on the CASL, but more structured than a spontaneous language sample.
Following the regular administration guidelines, I tape-recorded and transcribed Sally’s responses on the Sounds in Sentences section. An analysis of this sample indicated that Sally could retell a story with picture cues. Her sentences, while short, were logical and appropriate for the picture cues. She did omit regular past tense endings and some auxiliary verbs, and she also confused prepositions. Sally presented the story itself in logical sequence, however, and her language was comprehensible.
Step Three: Finding Nemo—Closing The Circle On Complete Assessment
As the final step in this process, I asked Sally to tell me about her favorite movie, Finding Nemo. No comments were provided other than those related to active listening, such as, “REALLY!” Here is a sample from her story:
- Nemo was a boy and Marla mom it die from the shark ate it. Marla try to take the mother inside the house but all the eat then mom and the baby but one baby Nemo left and next morning Nemo a kid. The bus the fish in the water the busdriver come and he say then he goes under and they went under the fish driver…
This time, Sally’s story was extremely difficult to comprehend, even by a listener familiar with the storyline of the movie. Her teacher judged this language sample to be representative of Sally’s spontaneous speech. From this example, it was readily apparent why the classroom teacher described Sally’s conversational speech as “incomprehensible.”
Next Steps: Help For Sally
The GFTA-2 Sounds in Sentences provided valuable information in this case. Sally did much better than expected in the more open-ended responses on the CASL, but very poorly in the unstructured speech sample. Her responses on the Sounds in Sentences indicated that Sally was a far more competent communicator when she had visual cues to provide topical and sequential structure for her responses. Working with the classroom teacher, a plan was designed not only for therapeutic intervention, but also for accommodations to support more meaningful and logical responses from Sally in the classroom. Without the information provided by the Sounds in Sentences activity, this task would have been far more difficult.
Conclusion: The Benefits of Sounds in Sentences
The selection of the Sounds in Sentences stories for a language sample offers other benefits as well. Due to the popularity of the GFTA-2, most SLPs have a copy of the picture stories. With a copy of the GFTA-2 in hand, anyone working in the field would know the length and complexity of the stimulus sentences and be able to compare them with a student’s responses provided in a written report. Using the Sounds in Sentences section of the GFTA-2, comparisons can be made from evaluation to evaluation to monitor the improvements made in an examinee’s structured language production. It is also a “middle ground” assessment task that fits perfectly between standardized test performance and open-ended tasks, such as storytelling or retelling stories from movies or children’s books.
Previously we talked about scoring tests and writing diagnostic reports that give a vivid picture of a client. We were, in essence, talking about levels of interpretation. The topic for this month’s Clinical Café covers in-depth analysis and interpretation. You definitely have some options with psychometrically sound tests to make them clinically useful and sophisticated. Are you getting the most from your test and manual? Read on!
Interpreting Performance in Layers
In the Goldman-Fristoe Test of Articulation, Second Edition manual (page 5), also known as the GFTA-2, the authors discuss levels of analysis that lead to different layers of interpretation. If you use the Level 1 or Level 2 scoring procedures, you have different amounts of information for interpreting the normative scores and the examinee’s overall performance. As you read in the manual, Level 1 scoring allows a global interpretation of the examinee’s normative scores with respect to his or her same-aged peers. Level 2 scoring adds the categorization of errors—how the sound is incorrect—which opens another layer of interpretability.
If there are numerous errors and/or the speech sample is highly unintelligible, then you may need a still deeper layer of interpretation. The psychometrically linked partner test to the GFTA-2 is the Khan-Lewis Phonological Analysis, Second Edition (KLPA-2), which organizes sound errors into 10 phonological processes divided into three process areas (manual, pages 10-14). Keep in mind that you can start right in with this layer, if your clinical judgment tells you that enough sound errors exist to warrant a process-based approach that addresses the examinee’s sound system as a rule-governed system. What’s more, you can analyze 34 additional phonological processes descriptively for even more in-depth interpretation.
One important point to remember: When you do articulation testing only, as with the GFTA-2, you should not refer to errors as phonological process errors. This terminology is available only with a phonological process-based test instrument such as the KLPA-2.
Interpreting Performance in Scores
Each normative score has opportunities and limitations in its interpretability. That is the reason for so many types of normative scores—each one has a specific use and value. You may want to put a bookmark in each of your test manuals at the following pages for easy reference:
- GFTA-2 pages 31-33
- KLPA-2 pages 42-48
- EVT pages 35-40
- CASL pages 89-98
- OWLS LC/OE pages 98-102
- OWLS WE pages 120-125
On these pages, you will find information on interpreting each of the normative scores appropriately. This is important because all too often these scores can be misunderstood and then inadvertently misused.
Interpreting Performance for Intervention and Collaboration
After completing professional analysis and interpretation of test results, you then explain these results to parents and teachers and plan intervention. The descriptive analysis worksheets for the Oral and Written Language Scales (OWLS LC/OE and WE) and Comprehensive Assessment of Spoken Language (CASL) can assist you in that process. (These worksheets are available through our Web site, www.speechandlanguage.com, on the right-hand side of each product page.) Likewise, the KLPA-2 provides a vehicle for describing interpretive results in the Phonological Summary and Progress Report. This handy form can be used to explain sound errors in detail, assist in developing goals and benchmarks (objectives), and provide a method of reporting progress over time.
New Year is the time for resolutions. Consider making this resolution your own: Start utilizing all the levels of analysis and interpretation that your tests and manuals provide. Personally, I prefer this resolution to exercise regimens or diets. Bring on the chocolate!
Do you find preparing diagnostic reports cumbersome? Time-consuming? A bit of a drag? If you would like to streamline the process, take a close look at this month’s Clinical Café. Along with your test manual, this article offers some helpful tidbits that you can keep in mind for your next client evaluation.
Do you remember when you were in graduate school writing all those diagnostic reports for your practicum? If so, did the clinic supervisor tell you that your report must present a vivid picture of the client’s ability? Well, the world is still the same today. There are diagnostic reports and then there are DIAGNOSTIC REPORTS.
Here’s a prime example. I recently received two speech and language evaluations of school children from two different agencies. I didn’t know either child. One of the reports resembled a bare vine, while the other blazed like a plant in full bloom. When I had finished reading the first report, I wondered how I was going to write speech and language goals and benchmarks for this child’s Individualized Education Plan (IEP). The report told me nothing beyond the tests’ raw and standard scores. On the other hand, the second evaluation gave me a vibrant, detailed picture of the child’s strengths and weaknesses—a huge difference!
As you begin to score tests, remember that not only are the numbers important, but descriptive analysis is priceless. Luckily, tests and test manuals available today can provide us with much of this information without having to rack our brains for words to explain our impressions and observations. Here are some examples:
- Diagnostic analysis worksheets—If you use the Comprehensive Assessment of Spoken Language (CASL), descriptive analysis worksheets are available for many of the CASL tests. The Oral and Written Language Scales (OWLS LC/OE and WE) also offer descriptive analysis worksheets. These worksheets break down a child’s responses and target specific skills, making it easier to formulate IEP goals and benchmarks. Vocabulary with Ease, a companion tool for the Peabody Picture Vocabulary Test-Third Edition (PPVT-III) and Expressive Vocabulary Test (EVT), is filled with expressive and receptive intervention strategies and also contains reproducible descriptive analysis worksheets for the PPVT-III and EVT.
- Test publisher descriptions of tests and subtests—Test manuals provide concise descriptions of tests and subtests. Many of these resources are available on publishers’ Web sites as well. You can incorporate these descriptions right into your diagnostic report.
- Frequently Asked Questions (FAQs)—Make sure to check out the FAQ resources attached to product pages within the publishers’ Web sites. Along with test manuals, these Web sites present practically everything you need to know about scoring and reporting.
Looking for a way to create diagnostic reports effortlessly? There’s more. Software programs now exist that give you scores, interpret results, and provide reports—often with graphical profiles and easy-to-read scores. Computer programs are the new dream tools of choice for speech-language pathologists. To complement your test manual, Pearson offers software tools for the following tests:
- GFTA-2/KLPA-2 ASSIST Software (Combined software for the Goldman-Fristoe Test of Articulation, Second Edition and the Khan-Lewis Phonological Analysis, Second Edition)
- OWLS: LC/OE ASSIST Software (Oral and Written Language Scales: Listening Comprehension and Oral Expression Scales)
- OWLS: WE ASSIST Software (Oral and Written Language Scales: Written Expression Scale)
- CASL ASSIST Software (Comprehensive Assessment of Spoken Language)
- PPVT-III ASSIST Software (Peabody Picture Vocabulary Test, Third Edition)
- EVT ASSIST Software (Expressive Vocabulary Test)
If you are in a school system as I am, you’re either on or coming back from a much needed “break in the action.” Why not make a New Year’s resolution to investigate ways to add depth and breadth to your diagnostic reports? Take a few moments to check out these ideas in your test manual and on the Internet. You might be surprised just how much you can expand your knowledge of the children you serve!
Happy New Year!
Since it can be difficult getting back into the school routine after a summer break, we as SLPs need to do what the children do—sharpen our skills on previously learned material. One way we can do this is through reviewing. And what better place to start than by looking at our test manuals.
Manual, according to The New International Webster’s Comprehensive Dictionary of the English Language, is a “compact volume; handbook of instruction or directions; designed to be retained for reference.”
Dust off those test manuals!
Do you know where your test manuals are? When was the last time you took this valuable part of your assessment out of the closet or its fancy bag? Authors and publishers of tests include manuals in their packages for good reasons!
This subject of manuals came to mind when I recently mentored a new, very conscientious SLP. She had just administered the Goldman-Fristoe Test of Articulation-2 (GFTA-2) to a young child. I was reviewing the information prior to our staff meeting. Many of the phonetic symbols on the protocol seemed to indicate a very unusual pattern.
I asked the SLP, “What did you hear in the child’s speech that led you to use that symbol?” She explained that another SLP told her to use that symbol when a phoneme sounded distorted. I explained that the phonetic symbol she used meant, according to the manual, that the child exhibited nasality. She stopped and thought about it for a minute. Then, she smiled and said, “Now I see why it is so important to read the manual!”
All manuals include in-depth instructions on proper procedures and details on how to enhance the use of the instrument. Think about this. Would you prepare a fancy new dish for dinner without following a recipe? Okay all you gourmet cooks, this example does not include you. However, this story about the new SLP shows how important reading the test manual can be. There are other reasons too:
- Administration standards—Standards 6.6, 8.1, and 11.3 (CASL p. 67, GFTA-2 p. 15) are spelled out to keep us on the right track when choosing assessments. All AGS Publishing instruments should be administered and interpreted by professionals who are trained to use them, or by students in training who are supervised.
- Scope and organization of the test—If you read through the GFTA-2 manual, you will see that pages with blue shaded edges (GFTA-2 pp. 16-25) contain helpful information. For instance, did you know that GFTA-2 includes six consonant clusters that were not in the original edition? More clusters were added because many of us out in the field requested them. Also, did you know that if you fold the protocol in the right way, you can compare the same sound across all sections? (GFTA-2 p. 23.) Try it.
- Research support for item types/constructs—Look at the CASL manual’s gray shaded page edges (pp. 34-66). There you will find a short description of each CASL test. It’s a quick way to locate an easy-to-understand test description. These descriptions are perfect for sharing with parents and others at eligibility meetings. Notice that the word is TEST, not subtest. A distinctive attribute of CASL is that each test stands alone—making CASL so “usable” for SLPs. If you hear someone refer to CASL subtests, politely refer them to the manual!
- Test construction decisions—One recent SpeechandLanguage.com discussion question concerned basals and ceilings. Guess what? The answer is clearly spelled out in your test manual. But remember, whenever you get a revised edition of a test, it’s important not to assume that the basal, ceiling, and even the administration process will be the same. Again, check the manual.
Before you can administer an assessment, you have to calculate the child’s chronological age. If you are as mathematically challenged as I am, then you will be happy to know that you can find an age calculator right on the Pearson Web site. With this handy little tool, we have no excuse for subtracting incorrectly!
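For those curious about the arithmetic such a calculator performs, here is a minimal Python sketch. The birth and test dates are invented, and the 30-days-per-month, 12-months-per-year borrowing convention shown here is only one common approach; individual test manuals may specify different borrowing or rounding rules, so always defer to the manual for the test you are giving.

```python
def chronological_age(birth, test):
    """Return chronological age as a (years, months, days) tuple.

    birth and test are (year, month, day) tuples. Borrowing uses
    the common 30-day month convention; conventions vary by test,
    so check each manual.
    """
    years = test[0] - birth[0]
    months = test[1] - birth[1]
    days = test[2] - birth[2]
    if days < 0:
        days += 30    # borrow one month (30-day convention)
        months -= 1
    if months < 0:
        months += 12  # borrow one year
        years -= 1
    return years, months, days

# Hypothetical child: born July 25, 2003; tested March 17, 2011
print(chronological_age((2003, 7, 25), (2011, 3, 17)))  # (7, 7, 22)
```

Of course, the online age calculator does this for you; the sketch simply shows why borrowing by hand is so easy to get wrong.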
A closing thought—do you ever reread favorite books for pleasure? If so, you know that when you read a book for a second or third time, you almost always find something you missed before. The same goes for all those important books called manuals. So go ahead and take the time to reacquaint yourself with them. It’s well worth it!
As a field, we’re into storytelling. A complete story includes setting, characters, events, consequences, plans, and resolutions. Likewise in complete test manuals, we look for “the story” of a test. How did the story, timeline, and events of a test’s development unfold? Content development in test manuals—that’s the topic of this month’s Café.
Have you ever seen or heard about a car for sale that looked great on the outside but when you opened up the hood was missing pieces or showed rust? Even worse, when the key was turned in this shiny new paint job, you wondered how a car that looked so good could sound so awful. Whether or not you’ve been in this situation, you can imagine your disappointment and dismay. Like any buyer, one of your first questions would be, “What’s the story here?”
And so it should be with tests and test manuals. Yes, the packaging should be attractive. Yes, the title should be memorable and explanatory. Yes, the record form should be easy to use. But what’s the story of the test? How did it come to be? What major and minor decisions were made that have fundamentally formed the test as a final instrument? What’s outside is important, but as the saying goes, “it’s what’s inside that counts.”
Why should we care about the story of a test? In a word: context. We all know how important context is in communication and that importance is no different in testing. We interview teachers and parents in an assessment process because we care about context. We observe the student on the playground, in the classroom, and in study hall or the lunchroom because we care about context. We teach code-switching skills to students because we want them to care about context. We read test manuals for the behind-the-scenes story of a test’s life and author’s thinking because we care about context.
Consider a comparison to a research article—we expect no less than adequate disclosure of background, subjects, and methodology in a good research article. Given the nature and potential impact of standardized test performance on accurate diagnosis, a child’s school placement, IEP services, and detailed intervention planning, why would we expect any less? A standardized test typically uses a series of tightly connected and hopefully well-controlled research studies. We should hold at least the same standard of “storytelling” and research rigor for tests as we do other research in the field.
Most test manuals tell you the basics of the test’s story—but how much is enough? That largely depends on you and your particular needs and questions. But here’s a list of things you can look for in test manuals—keep in mind that these may not be headings or independent chapter titles, but likely will be woven together in a chapter titled, “Content Development,” “Rationale,” “Theory Underlying Test Design,” or “Purpose, Scope, and Organization of [the test]”:
- Theoretical ground—terms, definitions, and perspectives. For example, the Integrative Language Theory behind OWLS and CASL classifies idioms as lexical units (high-level vocabulary); also, the KLPA-2 manual explains in detail the theory behind scored and unscored phonological processes
- Research support for theory. For example, throughout Chapter 2 of the CASL manual, numerous research references validate the author’s theory and five pages of reference details support it
- Scope and organization of the test—content covered, not covered, and rationale for section/subtest organization. For example, the GFTA-2 manual explains why only 23 of 25 consonant sounds are measured, the rationale for including certain consonant clusters in the scope, and how the three different sections cover the scope of articulation testing. In the PPVT-III, the manual and the Technical References provide “the story” of each test revision and how the approach and content remained or changed
- Research support for item types/constructs. For example, the EVT manual cites a research study that supports the change in item types from labeling to synonyms when measuring expressive vocabulary and word retrieval
- Test construction decisions. For example, the new KTEA-II manual explains that the oral language subtests are specifically designed to measure listening and speaking skills that are typical for students in school. The items include examples of more formal teacher language and common situations for students
Delve into the “stories” that await you in your test manuals. You’ll find that test developers have often written a wealth of information for you!
We hear about new (and older) tests in many ways: comments from colleagues, email listservs, flyers and postcards, catalogs, presentations at conventions, and the like. How many times have you purchased a new test after seeing it in one of these communication vehicles? When you received the test, how many times have you hurriedly opened the package, grabbed the easel and record form, and run off to test the student you believed it to be appropriate for—without reading the manual? If you say, “not once,” consider yourself one of a very small number of people who deserve kudos beyond measure, or . . . maybe you should rethink your answer. For most of us, the second group is where we sheepishly belong.
Say it with me: “I confess! I’ve given a test without reading the manual first!” There…now that’s out of our system. This month’s Café will begin a series on delving into the dark places of test manuals, hopefully to shed a little light or make the dim light a bit brighter.
You need a lot of things to help you work well in the school setting: Want a brief overview of the entire test in a short, concise description for a report or IEP meeting? Interested in the variety of ways a test may be used in clinical practice? Want to know how long testing may take or who is allowed to give the test? Need a bullet-point list of the test’s key features for the justification of the test purchase to your special education director or administrator? These and many other basic test questions are answered in well-written and complete test manuals.
In the case of Pearson’s test manuals, answers to these questions translate into “everything that is in Chapter 1.” Here are a few examples of the nuggets of gold just waiting for you to mine:
- The Expressive Vocabulary Test (EVT) manual states that this test was co-normed with the PPVT-III (p. 1). The strength of the EVT is not only that the test’s internal data are rigorous, but also that the psychometric link to the gold standard in vocabulary testing, the PPVT, makes it easier to compare your students’ receptive and expressive vocabulary scores or to do research.
- The Comprehensive Assessment of Spoken Language (CASL) manual advises that you can use students’ open-ended responses during testing for dynamic and qualitative language sample analysis (p. 5). The CASL is not just a norm-referenced test battery (not that it wouldn’t be great even if it were)!
- The Oral and Written Language Scales (OWLS) manuals give approximate testing times for different age groups across the testing age span (pp. 6 and 7, respectively). Each manual also includes approximate scoring times for each age group. Want some clear data for caseload vs. workload support in your school? Check out these published times and start adding it up!
- The Goldman-Fristoe Test of Articulation-2 (GFTA-2) and Khan-Lewis Phonological Analysis-2 (KLPA-2) actually have different examiner requirements: two levels of qualifications for the GFTA-2 and an additional qualification for the KLPA-2 (pp. 5 and 4, respectively).
These are just a few of the many nuggets you can find in Chapter 1 of most Pearson test manuals. While the text may not be as riveting as a John Grisham or Harry Potter novel, it may help you in ways you would never consider until you read each page. Begin gently . . . with Chapter 1, which is typically eight or fewer pages. Think about the “big picture” content for a while. Fit it in with what you already know and believe about testing. When you’ve had a break to consider the information, then go on to the next chapter. Scanning a manual is a good initial tool, but many of us don’t take the time to go deeper.
Go deeper. Your students depend on your depth.
From the view of a test publisher, writing and printing a manual is critical to any test development process. Authors and editors work closely together to be as complete and clear as possible without adding unnecessary information. Keep in mind that the manual must be written for a large audience with a very wide range of knowledge about testing in general and about the individual test specifically. Finding a balance between not enough and too much information can be difficult, but with a solid process and competent professionals, a winning combination can be born.
Happy reading—you can do it!
Send us your “What I’d like to learn about tests this summer” list
As your partner in testing, we’d like you to know what we do, how we do it, and why. In turn, we’d like to know what other information we can provide to help you in your jobs. So send us your “What I’d like to learn about tests this summer” list to firstname.lastname@example.org and we’ll try to fulfill your wishes.
AGS Publishing National Speech and Language Consultants are gearing up to provide valuable workshops and ASHA-approved continuing education seminars across the country. Last month they assembled in Minneapolis for their own unique training program.
AGS Publishing Event
This motivating event was coordinated by Dr. Kathleen Williams, Vice President of Product Development, and Inga Weberg, Associate Director of Marketing, both from AGS Publishing.
Dr. Ronald Goldman
Dynamic speakers led the training sessions. The first speaker, Dr. Ronald Goldman, talked about teaching young school children articulation and early reading skills with the new Sounds and Symbols Early Reading Program. His discussion included an interesting overview of current and past research on literacy as it is affected by language development and articulation skills. Along with sharing information on his own studies working with children, Goldman included scientific findings from the work of renowned authors in the field, such as Lloyd Dunn, Richard Woodcock, and Macalyne Fristoe.
Tina Radichel, M.S. CCC-SLP
Next, Tina Radichel, M.S. CCC-SLP, New Product Manager at AGS Publishing, introduced consultants to the ASHA-approved workshop titled, “The Role of Speech-Language Pathologists in Early Literacy.” This workshop demonstrated how critical the training and background of speech-language pathologists can be in designing spoken and written language programs that promote reading skills.
Nancy Lewis, M.S. CCC-SLP
Another major contributor to the program was Nancy Lewis, M.S. CCC-SLP. She discussed the second edition of the Khan-Lewis Phonological Analysis (KLPA-2). This assessment works alongside the GFTA-2 to extend the analysis of articulation and phonological usage. Lewis also invited consultants to participate in a case study using the GFTA-2 results and KLPA-2 software analysis for planning remediation for fictitious students. It provided a hands-on method for learning how effective the two assessments can be when used together.
All in all, the back-to-school training day prepared AGS Publishing national consultants for providing valuable workshop experiences and inservice programs throughout the United States. Give them a call.
For information about these speech and language inservices, seminars, and ASHA-approved workshops, visit the AGS Publishing website, ags.pearsonassessments.com/workshops/inservice.asp
If speech development were easy, children wouldn’t need speech-language pathologists. But easy it is not. Speech production uses a set of arbitrary sounds and sound combinations that are based on an equally arbitrary set of rules (Kent, 1998, in Bernthal & Bankson, 1998). Unfortunately, children don’t always master these rules in the same way. Enter the need for speech assessment and time-consuming analysis and interpretation.
Looking at speech assessment on a continuum means knowing that each child may require a different level of sound analysis. At a basic level, simply counting errors on a set of single words and comparing that number to a set of national norms may be sufficient in a particular case. At the most complex level, a generative analysis of a child’s sound production in conversation yields a depth and breadth of data that can provide a rich description of the child’s individual sound system. More often than not, however, assessment needs will fall somewhere between these extremes.
How do you determine the place on the continuum that matches your particular child’s needs? Simply stated, it depends (sorry, no easy answers here). You may initially need to determine a cursory number of errors to get a general idea of severity. But then you want more information, so you choose to analyze these errors by type of error (substitution, deletion, distortion, or addition). Or you want to look at distinctive features (place, manner, and voicing changes, or labials vs. stridents, etc.). Then you decide that information isn’t enough either; you would like to organize the child’s errors by phonological process (cluster simplification, velar fronting, stopping, etc.). And so it goes.
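The deepening levels of analysis just described can be illustrated with a short Python sketch. The error records below are invented for illustration (they are not drawn from any actual GFTA-2 or KLPA-2 protocol), but they show how the same data supports a cursory error count, a tally by error type, and a tally by phonological process:

```python
from collections import Counter

# Invented error records: each notes the target sound, what the
# child produced, the error type, and a plausible process label.
errors = [
    {"target": "k", "produced": "t", "type": "substitution", "process": "velar fronting"},
    {"target": "g", "produced": "d", "type": "substitution", "process": "velar fronting"},
    {"target": "s", "produced": "",  "type": "deletion",     "process": "cluster simplification"},
]

# Basic level: a cursory error count for a general idea of severity
total_errors = len(errors)

# Deeper: the same errors organized by type of error...
by_type = Counter(e["type"] for e in errors)

# ...and deeper still, organized by phonological process
by_process = Counter(e["process"] for e in errors)

print(total_errors)                  # 3
print(dict(by_type))                 # {'substitution': 2, 'deletion': 1}
print(by_process["velar fronting"])  # 2
```

The point of the sketch is that each level re-sorts the same observations; you can stop at whichever level gives you the information you need for the child in front of you.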
The ability to move easily between levels of information is key to effective assessment. For example, one of the reasons that the GFTA-2/KLPA-2 combination of tests is so powerful is that the continuum is integrated with one well-controlled, representative normative group. The combination of these two tools increases the ease of moving through the continuum to deeper analysis without jeopardizing the reliability and validity of the data. What’s more, you can stop whenever you determine you have the information you need. It is this philosophical premise of a continuum of assessment, and the necessity of flexibility, that serves as the foundation of the tests, as well as of the newly released, newly designed GFTA-2/KLPA-2 ASSIST scoring and reporting software.
Analysis and Interpretation of Formal Testing
Clinical decision-making in speech-language pathology has always been both an art and a science. Cliché, yes, but true. A formal test or a criterion-referenced checklist can provide you with a wealth of data, but you must then engage your “gray matter” and insert the data into the context of a child’s history, experience, life, and environment. At some point, the test data can tell you no more than a number or set of numbers. You must decide how those numbers fit together and “where to go from here.”
In short, you must use your clinical judgment!
“Egads!” you say. “Think? No, it’s summer! I can’t do that!”
Knowing that test scores tell you only a piece of what the child knows and can do, further dynamic procedures (e.g., the GFTA-2 Sounds-in-Sentences and Stimulability sections) can easily help you broaden the picture of the child’s sound system. These two sections of the GFTA-2 use consonant sounds in an authentic, dynamic way and provide information for analysis and interpretation that is not possible through formal testing alone. Best practices in speech-language pathology and educational psychology have long supported the use of a full range of assessment tools and information-gathering methods to complete an assessment that is valid and leads to intervention (Feuerstein, 1979; Lidz, 1991; Moore-Brown & Montgomery, 2001; Paul, 1995; Schraeder, Quinn, Stockman, & Miller, 1999).
The bottom line of interpretation is simple: while each child’s speech system is unique, there are also a number of very common ways to talk about it. When making interpretive judgments about test scores, test manuals are invaluable resources for clinical decision-making and report-writing. In addition, software that can generate standard wording for describing test scores accurately can do much of the initial report-writing work for you! None of us have a lot of time these days, so efficiency is key to getting your professional interpretations down on paper. Both the GFTA-2 and KLPA-2 manuals offer excellent assistance in the analysis and interpretation of test scores, including special cases and considerations.
The rubber meets the road in clinical intervention. No assessment by itself produces a good speech and language outcome, but excellent assessment tools can give you the necessary foundation for sound thinking in clinical practice. You make the difference in bridging the gap and making the data work for you clinically. Logic indicates that the deeper you go on the continuum of assessment, the more information you have for planning intervention. For example, knowing that 80 percent of a child’s errors are substitution errors may help qualify the child for services and describe the test scores, but you as a clinician must still make the leap to determine what targets to pursue in therapy. If, on the other hand, you know that those substitution errors are largely errors in the phonological process of liquid simplification, you can determine whether the errors are age-appropriate and which targets to focus on. The more depth of information you have up front, the easier and more effective intervention planning is after assessment. In this outcome-based world, there is no better reason for having an integrated continuum of assessment than better, more effective intervention!
While we can’t tell you what specific intervention activities will work with each individual child or group of children, we do want you to be able to spend more time on planning than on “crunching” the data and writing lengthy repetitive reports. Our new GFTA-2/KLPA-2 ASSIST software (brand new design too!) integrates scoring and reporting for both the GFTA-2 and KLPA-2. Check it out at http://ags.pearsonassessments.com/static/a11750.asp.
A Big Thanks!
As always, we’d like to thank you for your ongoing service to people with communication needs and remind you that we at AGS Publishing are here to support you with that effort. If you’d like to discuss this topic further, please feel free to use the SLP Forum Discussion Center as the vehicle for an ongoing discussion with your colleagues. Should you have questions regarding these or other Pearson Speech and Language products, we welcome your phone calls at 1-800-627-7271, or through our website contact form.
Enjoy the complexity of speech assessment!
Bernthal, J. E., & Bankson, N. R. (1998). Articulation and phonological disorders (4th ed.). Needham Heights, MA: Allyn and Bacon.
Feuerstein, R. (1979). Dynamic assessment of retarded performers: The learning potential assessment device, theory, instruments, and techniques. Baltimore: University Park Press.
Lidz, C. S. (1991). Practitioner’s guide to dynamic assessment. New York: Guilford Press.
Moore-Brown, B. J., & Montgomery, J. K. (2001). Making a difference for America’s children: Speech-language pathologists in public schools. Eau Claire, WI: Thinking Publications.
Paul, R. (1995). Language disorders from infancy through adolescence. St. Louis, MO: Mosby.
Schraeder, T., Quinn, M., Stockman, I. J., & Miller, J. (1999). Authentic assessment as an approach to preschool speech-language screening. American Journal of Speech-Language Pathology, 6, 195-200.
An Ode to Retesting
Test, test, and test again.
Wait, wait a minute-does it matter WHEN?
Is it weeks or months, days or years?
Do I have enough info to tell all my peers?
What will happen to the students’ scores?
Will kids remember the answers or run out the doors?
Will the scores be valid, reliable, and true?
Will I interpret correctly so no one will sue?
Oh, test publisher, hear my refrain;
Give me some guidance, it’s taxing my brain!!
One should begin any dry but important topic with a bit of levity…hence the original poem above (inspired by Dr. Seuss, of course)! Welcome to the second issue of the Clinical Café, with today’s “espresso shot” topic of test-retest reliability. We, in Development for Pearson’s Assessment group, get numerous calls each month from professionals around the country regarding rules and strategies for retesting students. So, here’s another “insight” to read with your morning (or afternoon or evening) coffee and share with your colleagues.
Overall, test-retest reliability is an index of temporal stability. It tells how much an individual’s normative score might change on retesting after a period of time has elapsed between test administrations. Change could reflect the person’s growth or fluctuation in the ability being measured, random differences in performance, or the individual’s recollection of the earlier administration. A test-retest coefficient is a statistical measure obtained by administering the same test twice, with a certain amount of time between administrations, and then correlating the two sets of scores. Reliability between two parallel forms with different content is known as an alternate-form coefficient (as in the PPVT-III).
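Mechanically, the coefficient is a Pearson correlation between the two administrations. The sketch below computes it from scratch; the scores for six examinees are made-up illustrations, not normative data:

```python
# Pearson correlation between two administrations of the same test
# (hypothetical standard scores for six examinees).
test_1 = [92, 85, 101, 78, 110, 95]   # first administration
test_2 = [94, 82, 99, 80, 112, 93]    # retest after an interval

n = len(test_1)
mean_1 = sum(test_1) / n
mean_2 = sum(test_2) / n

# Covariance numerator and variance denominators of Pearson's r.
cov = sum((x - mean_1) * (y - mean_2) for x, y in zip(test_1, test_2))
var_1 = sum((x - mean_1) ** 2 for x in test_1)
var_2 = sum((y - mean_2) ** 2 for y in test_2)

r = cov / (var_1 * var_2) ** 0.5
print(round(r, 2))  # 0.98 — examinees kept nearly the same relative standing
```

A coefficient near 1.00 means examinees kept their relative standing across administrations; the published coefficients in the table below are computed the same way from the standardization samples.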
When making a decision on retesting, follow the steps below.
- Determine why you are conducting a retest: Did the examinee’s performance fall below your expectations due to illness, a bad day, test anxiety, student behavior, etc.? If this is the case, you should be able to retest as soon as the examinee is up to it, especially if you use a parallel form. Are you involved in a pre-/post-test situation where you are attempting to ascertain gain? If so, you’d want to schedule the second administration after the completion of instruction or therapy to determine the effectiveness of the treatment. Has the student recently transferred from a different school? If the original test administration was done in another school or setting by a clinician you do not know, and you question the reported results, you may choose to retest.
- Locate the section in each Pearson test manual that discusses test-retest reliability:
| Test | Manual Page(s) | Test-Retest Time Interval | Median Interval | Reliability Coefficients |
|---|---|---|---|---|
| CASL | 121-124 | 7-109 days | 6 weeks | .65-.96 |
| EVT | 67-69 | 8-203 days | 42 days | .77-.90 |
| GFTA-2* | 52-54 | 0-34 days | 14 days | .79-1.00 |
| KLPA-2* | 66-67 | 0-34 days | 14 days | .79-1.00 |
| OWLS LC/OE | 123-125 | 20-165 days | 8 weeks | .73-.89 |
| OWLS WE | 146-147 | 18-165 days | 9 weeks | .87-.90 |
| PPVT-III Form A | 48-51 | One month | 30 days | .91-.93 |
| PPVT-III Form B | 48-51 | One month | 30 days | .91-.94 |
| PPVT-III A & B** | 48-51 | 0-7 | Same day | .88-.95 |
*Read further for an additional issue related to test content.
- **The PPVT-III is unique among Pearson speech-language tests in that it has parallel forms (Forms A and B). As described in the Alternate-Forms Reliability Coefficients section on page 48 of the PPVT-III manual, most examinees were given both forms of the test on the same day.
- Make a clinical decision based on all the information: While some tests provide clinicians with an exact test-retest waiting period, some do not. Much depends on the reason for the retesting. After reviewing the information provided in the manual and following the steps above, rely on your professional clinical judgment to determine when to confidently retest. You can be confident of the test’s reliability if your retest falls within the respective interval in the table above, since you will be matching standardization procedures.
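The interval check in the last step can be sketched as a simple lookup. The day ranges below are transcribed from the table above; the function name itself is just a hypothetical convenience:

```python
# Test-retest intervals (in days) used during standardization,
# transcribed from the table above for four of the tests.
RETEST_INTERVALS = {
    "CASL": (7, 109),
    "EVT": (8, 203),
    "GFTA-2": (0, 34),
    "KLPA-2": (0, 34),
}

def within_standardization_interval(test_name, days_elapsed):
    """True if the elapsed time matches the standardization procedures."""
    low, high = RETEST_INTERVALS[test_name]
    return low <= days_elapsed <= high

print(within_standardization_interval("GFTA-2", 14))  # True
print(within_standardization_interval("CASL", 3))     # False
```

Falling outside the interval does not forbid a retest; it simply means the published coefficient no longer describes your situation, and clinical judgment carries more of the weight.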
Test Content and Test-Retest Reliability: A Related Consideration
Test content also should be considered when determining the need for retesting. For example, articulation is a developmental skill area; you would expect more change in performance early on the growth curve (i.e., ages 2-5), which then flattens and stabilizes around 8 years of age. Retesting a young student on the GFTA-2 and/or the KLPA-2 at a given interval would likely show greater change attributable to growth than retesting a student at 10 years of age. This is one reason the test-retest time interval is narrower for these two tests.

In language, skills that are “closed set” (e.g., syntax) should be considered differently than those that are “open set” (e.g., vocabulary, nonliteral language). Vocabulary has a steep growth curve similar to articulation, but it does not flatten at a particular age; it grows throughout the life cycle. Conversely, syntax forms are a closed set: you learn them early, and they are generally static. Taking test content into account makes test-retest decision-making and data analysis clearer.
Dancing the Line of Clinical Judgment and Test Standardization
Yes, it is sometimes difficult to know what you can and can’t make decisions on. Test standardization is a rigorous process, and we appreciate professionals being concerned about “following the rules.” At the same time, no test can anticipate all the situations and nuances of the clinical arena, so there is a point at which the rules end and your clinical judgment begins. We’re happy to continue to help you clarify the line on which to “dance.”

If you want more information on standardized test development, the Development Team at Pearson’s Assessment group is in the process of creating an ASHA-approved CE presentation (available at the end of summer) on the basics of assessment. If you are interested in scheduling a continuing education activity for your school, district, or your state speech-language-hearing association, please contact:
As always, we’d like to thank you for your ongoing service to people with communication needs, and we are here to support you with that effort. If you’d like to discuss this topic further, please feel free to use the SLP Forum as the vehicle for an ongoing discussion with your colleagues. Should you have questions regarding these or other Pearson Speech and Language products, we welcome your phone calls at 800-627-7271, or visit our website at http://psychcorp.pearsonassessments.com. Oh, yes, and if you’d like to copy the poem at the top, feel free (but don’t forget to cite the Clinical Café)!
Enjoy the summer!