Our Blogs

Share in practical tips and insights, inside information, stories and recollections, and expert advice.

Entries Tagged With: assessment

Using the GFTA-2 Sounds in Sentences as Part of a Complete Language Assessment


Clinical Café By Kathy Swiney, CCC-SLP, BRS-FD

I don’t know about you, but I find that the assessment of language skills in children is one of the most challenging aspects of my job. Language, unlike articulation, does not have a finite number of skills to master. Language learning and usage continue to develop throughout one’s life.

Perhaps the biggest challenge in language assessment is the balancing act involved in oral communication. We need to measure the broad range of skills in this area in a manner that is structured enough to compare the abilities of the examinee to a norm group, but also reflects the unstructured use of language in natural environments. The following case study should provide some insight into one option for balancing these goals.

CASE STUDY: SALLY, AGE 7 YEARS, 8 MONTHS

Sally was referred for speech and language testing by her classroom teacher. The case history information indicated that Sally’s teacher rated her as being “incomprehensible” in classroom discussions. On the day of the speech and language assessment, I introduced myself to Sally and asked her if she had already been to P.E. class. She responded, “I ride the bus.”

Step One: Initial Assessment

The Comprehensive Assessment of Spoken Language (CASL) was used as the formal measure of Sally’s oral language based on the depth of detail it provides. All core and supplementary tests were administered for Sally’s age level. Several weaknesses were identified, but they were not as severe or of the type I had expected. In the core tests, her standard scores were as follows:

  • Antonyms: 88
  • Syntax Construction: 47
  • Paragraph Comprehension: 85
  • Nonliteral Language: 87
  • Pragmatic Judgment: 73
  • Core Composite: 71

Frankly, I expected Sally’s standard score for Syntax Construction to have been higher than the score for Nonliteral Language, which is a more complex task. The scores on the supplementary tests were generally average to low-average, with the exception of Sentence Completion, on which Sally obtained a standard score of 67. While Sally showed some obvious weaknesses on the CASL, she demonstrated enough competencies that I was surprised her teacher had rated her as “incomprehensible” in classroom discussions. Obviously, more information was needed on the type of language Sally used in less structured activities. In addition, I wanted a language sample typical of classroom activities in which the teacher provides topical structure.

Step Two: Making the Choice—Sounds in Sentences

After thinking about what exactly I needed to measure, I decided the Sounds in Sentences section of the Goldman-Fristoe Test of Articulation, Second Edition (GFTA-2) would be the best way to obtain the specific information for this situation. This portion of Sally’s language assessment would be less structured than the tests on the CASL, but more structured than a spontaneous language sample.

Following the regular administration guidelines, I tape-recorded and transcribed Sally's responses on the Sounds in Sentences. An analysis of this sample indicated that Sally could retell a story with picture cues. Her sentences, while short, were logical and appropriate for the picture cues. She omitted regular past tense endings and some auxiliary verbs, and she confused prepositions. Sally presented the story itself in logical sequence, however, and her language was comprehensible.

Step Three: Finding Nemo—Closing The Circle On Complete Assessment

As the final step in this process, I asked Sally to tell me about her favorite movie, Finding Nemo. No comments were provided other than those related to active listening, such as, “REALLY!” Here is a sample from her story:

    Nemo was a boy and Marla mom it die from the shark ate it. Marla try to take the mother inside the house but all the eat then mom and the baby but one baby Nemo left and next morning Nemo a kid. The bus the fish in the water the busdriver come and he say then he goes under and they went under the fish driver…

This time, Sally’s story was extremely difficult to comprehend, even by a listener familiar with the storyline of the movie. Her teacher judged this language sample to be representative of Sally’s spontaneous speech. From this example, it was readily apparent why the classroom teacher described Sally’s conversational speech as “incomprehensible.”

Next Steps: Help For Sally

The GFTA-2 Sounds in Sentences provided valuable information in this case. Sally did much better than expected in the more open-ended responses on the CASL, but very poorly in the unstructured speech sample. Her responses on the Sounds in Sentences indicated that Sally was a far more competent communicator when she had visual cues to provide topical and sequential structure for her responses. Working with the classroom teacher, a plan was designed not only for therapeutic intervention, but also for accommodations to support more meaningful and logical responses from Sally in the classroom. Without the information provided by the Sounds in Sentences activity, this task would have been far more difficult.

Conclusion: The Benefits of Sounds in Sentences

The selection of the Sounds in Sentences stories for a language sample offers other benefits as well. Due to the popularity of the GFTA-2, most SLPs have a copy of the picture stories. With a copy of the GFTA-2 in hand, anyone working in the field would know the length and complexity of the stimulus sentences and be able to compare them with a student’s responses provided in a written report. Using the Sounds in Sentences section of the GFTA-2, comparisons can be made from evaluation to evaluation to monitor the improvements made in an examinee’s structured language production. It is also a “middle ground” assessment task that fits perfectly between standardized test performance and open-ended tasks, such as storytelling or retelling stories from movies or children’s books.

Scoring Nuances of the Comprehensive Assessment of Spoken Language, Part 3: Zero Scores


Clinical Café By Kathy Swiney, CCC-SLP, BRS-FD

Accurate raw scores are the foundation for obtaining meaningful test results. Without adherence to the standardized administration procedures, reliable standard scores cannot be obtained. Precision in administration and interpretation of standardized tests allows evaluators to compare the skills of one specific examinee to those of participants of the same age in the normative sample. Understanding zero raw scores is one important aspect in achieving accuracy in scoring and interpretation on the CASL. Because of their unique nature, zero raw scores can result in either a standard score or no score at all. This article has been prepared to help clarify when each of these situations occurs.

Zero Scores:

You will inevitably have examinees who obtain raw scores of "0" on one or more of the CASL tests. Zero scores are treated differently than other raw scores:

  • depending on how they were obtained
  • in the way they are interpreted
  • in the method used to calculate index scores.

Understanding these unique properties actually starts with the administration of the examples preceding each CASL test.

Role of Examples in the Administration of CASL tests:

Proper administration of the examples is essential for scoring accuracy. Examples are provided so the examinee has an opportunity to understand the task required on each test. The examiner can administer the examples for younger children if there is reason to believe an examinee will have difficulty with the examples at his or her age-level (see Examiner’s Manual p. 73). Examples may also be repeated (see EM p. 72). Specific procedures for administration of the examples are provided in the Test Books on the pages immediately following the tab for each test. These instructions, like all other administration procedures, should be followed exactly.

Responses obtained on the examples play a very important role. They determine whether testing begins at the age-appropriate Start Item or at Item 1. The examiner uses the accuracy of an examinee's responses on the examples to make this determination. Most tests provide two examples for each of one or two age ranges. The exceptions are Paragraph Comprehension, which provides one example paragraph; Sentence Comprehension, which provides one example with two parts (Part A and Part B); and Grammaticality Judgment, which provides one set of three examples for all examinees.

For the majority of tests, the following procedures apply.

    First Example for age range:
    Correct response—Examiner continues with the second example for the age range.
    Incorrect or no response—Examiner repeats the example, models the correct response, and continues to the second example.

    Second Example for age range:
    Correct response—Examiner continues testing with the administration of the actual test items, starting with the age-appropriate Start Item.
    Incorrect or no response—Examiner repeats the example, models the correct response, and continues with the administration of the actual test items, starting with Item 1, not the age-level Start Item.

(See specific instructions in the Test Books for Paragraph Comprehension, Sentence Comprehension, and Grammaticality Judgment.)
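
For readers who like to see a procedure spelled out as a decision rule, here is a minimal sketch of the general start-item logic described above. The function name and inputs are hypothetical illustrations only; the Test Book instructions for each test remain the authority.

    # Minimal sketch of the general example procedure (hypothetical helper).
    # Whatever happens on Example 1, the examiner continues to Example 2
    # (repeating it and modeling the answer first if the response was
    # incorrect), so the starting item depends only on the second example.
    def choose_start_item(example_2_correct, age_start_item):
        if example_2_correct:
            # Correct second example: testing begins at the age-appropriate
            # Start Item.
            return age_start_item
        # Incorrect or no response: the examiner repeats and models the
        # example, and testing begins at Item 1, not the age-level Start Item.
        return 1

    # A child who misses the second example starts at Item 1:
    choose_start_item(example_2_correct=False, age_start_item=10)   # returns 1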

When a “zero” score yields a standard score:

Standard scores are a representation of how far from average an examinee's score falls. One or more participants in the standardization sample actually scored "0" and, therefore, that score falls a certain distance from the average for the examinee's age-level peers. In other words, any raw score can translate to a standard score in the distribution of scores. If the examinee understood the task required on the test, presented his or her best effort, and still obtained a raw score of zero, the standard score obtained should be considered as valid as any other score on this test.
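
As a rough frame of reference only (the actual conversion always comes from the CASL norm tables, not from a formula), a standard score on the familiar mean-100, standard-deviation-15 metric used for the scores above simply expresses that distance in standard-deviation units:

    z = (X - \mu_{\text{age}}) / \sigma_{\text{age}}, \qquad SS = 100 + 15\,z

A performance falling more than three standard deviations below the age-group mean therefore lands near the bottom of the scale, which is why a legitimately earned zero can still map to a low but valid standard score, such as the 50 in Scenario A below.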

An example of this situation occurs when the examinee responds correctly to the examples and, although the answers on the scored items are incorrect, those answers at least indicate that the examinee understands the task required. The following scenario is an example of just such an occurrence.

Scenario A – Zero score that yields a standard score

Student A, aged 11-1, is taking the Grammatical Morphemes test. The examiner administers Examples 3 and 4, which Student A answers correctly. The examiner proceeds to administer Item 10, the age-appropriate Start Item. Student A responds incorrectly to Item 10. Following an incorrect response to the Start Item, the examiner administers test items in reverse order all the way to Item 1 in an attempt to obtain a basal of three consecutive scores of 1. The student does not answer any of the items correctly. On the items administered, the examinee provides responses that, while incorrect, indicate she understands the task required. She obtains a raw score of "0." In this case, the standard score of 50 applies.

NOTE: When using the CASL ASSIST scoring program, zero scores obtained in this manner can be used in the calculations for all indexes. Enter "0" in the field for the test(s) on the ASSIST.

When a zero score does not yield a standard score:

There are instances in which a zero raw score cannot be used to derive a standard score. This situation occurs when an examinee starts at Item 1, which can happen under a number of conditions:

  • the examinee’s age-level Start Item on a test is Item 1
  • the examiner believes a particular examinee will have difficulty with the age-appropriate Start Item and determines that the appropriate Start Item should be Item 1
  • the examinee responds incorrectly to the examples (see EM p. 73)

When any of these situations occurs and the examinee provides incorrect responses to all of the example items administered, as well as to Items 1, 2 and 3, testing is discontinued and no standard score can be obtained. An example of this scenario follows.

Scenario B – Zero score does not yield a standard score

Student B, aged 10-3, is taking the Antonyms test. Based on the student’s age, the examiner administers Example 3 and Example 4. Student B responds incorrectly to both examples. The examiner adheres to the instructions on the tab for this test in Test Book 1. Testing continues from Record Form 2 with Item 1 rather than the age-level start item. Student B cannot respond correctly to Item 1, Item 2 or Item 3. The examiner must conclude that this examinee does not understand the concept of the test. Testing is discontinued (see EM p. 72). “No normative information can be derived” (EM p. 73), and no standard score can be obtained. If the test is a Core Test, a Core Composite cannot be obtained. If this situation occurs on one of the tests that make up a Category Index (Lexical/Semantic, Syntactic, or Supralinguistic), a standard score for the category index cannot be obtained.

NOTE: When using the CASL ASSIST scoring program, zero scores obtained in this manner should not be used in the calculations. Do not enter any score in the field for this test on the ASSIST. The program will calculate all other scores accordingly.
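
Putting the two scenarios together, the decision reduces to a simple rule, sketched below as a hypothetical Python helper (the function and its inputs are illustrations only; they are not part of the CASL or the ASSIST):

    # Hypothetical summary of the zero-score rule from Scenarios A and B.
    def zero_raw_score_outcome(started_at_item_1, failed_examples_and_items_1_to_3):
        if started_at_item_1 and failed_examples_and_items_1_to_3:
            # Scenario B: testing is discontinued; no standard score can be
            # derived, and any Core Composite or Category Index that needs
            # this test cannot be obtained.  Leave the ASSIST field blank.
            return None
        # Scenario A: the zero was legitimately earned, so the standard score
        # from the norm table applies.  Enter "0" on the ASSIST.
        return "standard score from norm table"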

I don't know about you, but I don't think I ever considered how much information can be found in "0"!

Scoring Nuances of the Comprehensive Assessment of Spoken Language (CASL) Part 2: Administration, Prompting, Repetition, and Questions


Clinical Café By Kathy Swiney, CCC-SLP, BRS-FD

No matter how thorough the instructions are in the Examiner’s Manual, situations invariably arise which seem to fall outside the range of these instructions and require further clarification. It is these situations we hope to address in this series of columns.

Follow the order of administration instructions

Instructions in the CASL Examiner’s Manual state that tests must be administered in order. The examiner should give the Core tests first, starting with those in Test Book 1, then those in Test Book 2, followed by those in Test Book 3. Supplementary tests may be administered in any order.

Administration

Test Order

I test a lot of very young children. Wouldn’t it save time and improve performance if I could administer all of the Core and Supplementary tests from each test book at one time?

You are certainly right to be concerned about the attention span and fatigue level of young children when administering any standardized test. There are, however, two reasons that the core tests must be administered in order and prior to the administration of any supplementary tests.

First, to obtain standard scores for the examinee, the tests, including test order, must be administered in the same manner as during the standardization process. This makes it possible to compare the child's performance validly to the normative group. Second, the core tests measure those skills most representative of each category for each of the six age bands. From this standpoint, it is important to administer the core tests first, when the child is most attentive. The supplementary tests provide additional diagnostic information and should be administered at the end of the test session or during a subsequent test session. The supplementary tests are selected at the discretion of the examiner and may be administered in any order.

Sentence Comprehension

In this test, two sentence pairs are read for each item. The examinee has to respond correctly to both sentence pairs to receive a score of 1. If the child misses the first pair, does the examiner still have to read the second?

You must give all the sentences in the Sentence Comprehension of Syntax test because that follows standardization procedures. The "1" is simply a scoring rule. In addition, if you don't administer the items as you told the child you would, it could be confusing, misleading, or inappropriately indicate to the child that he or she missed an item.

This same procedure also applies to Grammaticality Judgment. For this test, the procedure is clearly defined in “Important Points to Remember During Testing.” The instructions related to this topic are repeated here:

  • If the examinee says “yes” to an incorrect sentence, go on to the next item.
  • If the examinee says “no” to a correct sentence, let him or her try to fix it. Do not indicate at any time other than with the examples that the examinee has given an incorrect response (from Test Book 2).

Scoring Nuances of the Comprehensive Assessment of Spoken Language, Part 1: Basal and Ceiling Rules


Clinical Café By Kathy Swiney, CCC-SLP, BRS-FD

The Comprehensive Assessment of Spoken Language (CASL) is fast becoming “the test of choice” for identifying oral language skills in children and young adults aged 3 to 21. As more and more examiners use the CASL, specific questions arise about the administration of the instrument. This article will address questions that have been posed regarding the basal and ceiling rules.

Dr. Elizabeth Carrow-Woolfolk, author of the CASL, has made the administration particularly logical and straightforward. It is, however, the examiner's responsibility to have a thorough knowledge of the administration instructions for each of the fifteen tests in the CASL test battery. It is essential that each examiner read the Examiner's Manual thoroughly before administering this or any other standardized assessment instrument. Unless the CASL, or any other standardized test, is administered in the same manner utilized during the standardization process, the results obtained may not be interpretable (Examiner's Manual, p. 68).

Before we address specific questions, it might be helpful to review some of the guidelines contained in the Examiner’s Manual for obtaining accurate results.

Consider the special needs of the examinee. (Examiner's Manual, pp. 30, 31, 68.)

    Does the examinee have visual, aural, physical, attentional, articulatory, emotional, or English proficiency limitations that might affect his or her responses? Any adaptations made in the administration of the tests must be documented and considered during interpretation. If sufficient modifications are made, the use of normative scores may not be possible. If so, the examiner’s clinical judgment may be used to formulate a qualitative evaluation of the examinee’s skills.

Adhere to the prompting, repetition, basal, and ceiling rules. (Examiner's Manual, pp. 7, 69.)

    Keep the Test Book open to the appropriate tabbed page for prompting and scoring information. This information is repeated on each Record Form in a box preceding the section for recording responses. Most tests use the same basal rule (three consecutive correct responses) and the same ceiling rule (five consecutive incorrect responses). On the Sentence Comprehension of Syntax and Ambiguous Sentences tests, the basal and ceiling rules follow the general format, but the examinee must give a correct response to both parts of the stimulus item(s) to receive a score of 1. Paragraph Comprehension and Grammaticality Judgment have unique basal and ceiling rules.

Specifics on Basal and Ceiling Rules

Double Basal/Double Ceiling

What is Dr. Carrow-Woolfolk’s rationale for using the lowest basal and the highest ceiling when obtaining raw scores on the CASL?

The goal in the testing world is to capture the most complete view of a child’s abilities. The lowest basal and highest ceiling rule allows you to obtain as much information as you can without tiring or frustrating the examinee by administering too many items that are either too easy or too difficult for him/her.
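
For those who like to see the rule written out as an algorithm, here is a minimal sketch of how the lowest basal and highest ceiling can be located in a run of item scores. The function and inputs are hypothetical and illustrate only the general three-consecutive-correct/five-consecutive-incorrect rule; tests with special rules (Paragraph Comprehension, Grammaticality Judgment, and the two-part items) and the raw score calculation itself always follow the Examiner's Manual.

    # Hypothetical sketch of the general basal/ceiling rule.
    def find_basal_and_ceiling(scores):
        # scores: dict mapping item number -> 1 (correct) or 0 (incorrect) for
        # the items administered, assumed to be a contiguous block of items.
        items = sorted(scores)
        basal = None
        ceiling = None
        # Lowest basal: the first run of three consecutive correct responses.
        for i in range(len(items) - 2):
            if all(scores[n] == 1 for n in items[i:i + 3]):
                basal = items[i]
                break
        # Highest ceiling: the last run of five consecutive incorrect responses.
        for i in range(len(items) - 4):
            if all(scores[n] == 0 for n in items[i:i + 5]):
                ceiling = items[i + 4]
        return basal, ceiling

    # Example: items 4-15 administered; items 4-8 and 10 were correct.
    scores = {n: 1 for n in range(4, 9)}
    scores.update({9: 0, 10: 1, 11: 0, 12: 0, 13: 0, 14: 0, 15: 0})
    find_basal_and_ceiling(scores)   # returns (4, 15)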

Earned Ceiling vs. Tested Ceiling

What should the examiner do when an open-ended question is administered and the examiner is not sure that the response is correct?

In this case, the examiner should not break the continuity of the test administration. The examiner should continue administering items until he or she is sure a ceiling is met, checking questionable responses after the test is complete.

Does the highest ceiling rule apply in this case?

When the examiner goes back to score the test, he or she should keep in mind that the child’s performance drives where the ceiling is, not the examiner’s decision to keep testing. So, if the examiner, after the fact, scores a child’s response as incorrect and that creates a child’s “earned” ceiling, then that is the ceiling to be used.

What Your Test Manual Will (and Should) Tell You—Part 5


Clinical Café by Debby Hutchins, MS, CCC-SLP

Previously we talked about scoring tests and writing diagnostic reports that give a vivid picture of a client. We were, in essence, talking about levels of interpretation. The topic for this month’s Clinical Café covers in-depth analysis and interpretation. You definitely have some options with psychometrically sound tests to make them clinically useful and sophisticated. Are you getting the most from your test and manual? Read on!

Interpreting Performance in Layers

In the Goldman-Fristoe Test of Articulation, Second Edition manual (page 5), also known as the GFTA-2, the authors discuss levels of analysis that lead to different layers of interpretation. If you use the Level 1 or Level 2 scoring procedures, you have different amounts of information for interpreting the normative scores and the examinee's overall performance. As you read in the manual, Level 1 scoring allows a global interpretation of the examinee's normative scores with respect to his or her same-aged peers. Level 2 scoring adds the categorization of errors—how the sound is incorrect—which provides another layer of interpretability.

If there are numerous errors and/or the speech sample is highly unintelligible, then you may need a still deeper layer of interpretation. The psychometrically linked partner test to the GFTA-2 is the Khan-Lewis Phonological Analysis, Second Edition (KLPA-2), which organizes sound errors into 10 phonological processes divided into three process areas (manual, pages 10-14). Keep in mind that you can start right in with this layer, if your clinical judgment tells you that enough sound errors exist to warrant a process-based approach that addresses the examinee’s sound system as a rule-governed system. What’s more, you can analyze 34 additional phonological processes descriptively for even more in-depth interpretation.

One important point to remember: When you do articulation testing only, as with the GFTA-2, you should not refer to errors as phonological process errors. This terminology is available only with a phonological process-based test instrument such as the KLPA-2.

Interpreting Performance in Scores

Each normative score has opportunities and limitations in its interpretability. That is the reason for so many types of normative scores—each one has a specific use and value. You may want to put a bookmark in each of your test manuals at the following pages for easy reference:

  • GFTA-2 pages 31-33
  • KLPA-2 pages 42-48
  • EVT pages 35-40
  • CASL pages 89-98
  • OWLS LC/OE pages 98-102
  • OWLS WE pages 120-125

On these pages, you will find information on interpreting each of the normative scores appropriately. This is important because all too often these scores can be misunderstood and then inadvertently misused.

Interpreting Performance for Intervention and Collaboration

After completing professional analysis and interpretation of test results, you then explain these results to parents and teachers and plan intervention. The descriptive analysis worksheets for the Oral and Written Language Scales (OWLS LC/OE and WE) and the Comprehensive Assessment of Spoken Language (CASL) can assist you in that process. (These worksheets are available through our Web site, www.speechandlanguage.com, on the right-hand side of each product page.) Likewise, the KLPA-2 provides a vehicle for describing interpretive results in the Phonological Summary and Progress Report. This handy form can be used to explain sound errors in detail, assist in developing goals and benchmarks (objectives), and provide a method of reporting progress over time.

2005—A Resolution

New Year is the time for resolutions. Consider making this resolution your own: Start utilizing all the levels of analysis and interpretation that your tests and manuals provide. Personally, I prefer this resolution to exercise regimens or diets. Bring on the chocolate!

What Your Test Manual Will (and Should) Tell You—Part 4


Clinical Café by Debby Hutchins, MS, CCC-SLP

Do you find preparing diagnostic reports cumbersome? Time-consuming? A bit of a drag? If you would like to streamline the process, take a close look at this month’s Clinical Café. Along with your test manual, this article offers some helpful tidbits that you can keep in mind for your next client evaluation.

Do you remember when you were in graduate school writing all those diagnostic reports for your practicum? If so, did the clinic supervisor tell you that your report must present a vivid picture of the client’s ability? Well, the world is still the same today. There are diagnostic reports and then there are DIAGNOSTIC REPORTS.

Here’s a prime example. I recently received two speech and language evaluations of school children from two different agencies. I didn’t know either child. One of the reports resembled a bare vine, while the other blazed like a plant in full bloom. When I had finished reading the first report, I wondered how I was going to write speech and language goals and benchmarks for this child’s Individualized Education Plan (IEP). The report told me nothing beyond the tests’ raw and standard scores. On the other hand, the second evaluation gave me a vibrant, detailed picture of the child’s strengths and weaknesses—a huge difference!

As you begin to score tests, remember that not only are the numbers important, but descriptive analysis is priceless. Luckily, tests and test manuals available today can provide us with much of this information without having to rack our brains for words to explain our impressions and observations. Here are some examples:

  • Diagnostic analysis worksheets—If you use the Comprehensive Assessment of Spoken Language (CASL), descriptive analysis worksheets are available for many of the CASL tests. The Oral and Written Language Scales (OWLS LC/OE and WE) also offer descriptive analysis worksheets. These worksheets break down a child's responses and target specific skills, making it easier to formulate IEP goals and benchmarks. Vocabulary with Ease, a companion tool for the Peabody Picture Vocabulary Test-Third Edition (PPVT-III) and the Expressive Vocabulary Test (EVT), is filled with expressive and receptive intervention strategies and also contains reproducible descriptive analysis worksheets for the PPVT-III and EVT.
  • Test publisher descriptions of tests and subtests—Test manuals provide concise descriptions of tests and subtests. Many of these resources are available on publishers’ Web sites as well. You can incorporate these descriptions right into your diagnostic report.
  • Frequently Asked Questions (FAQs)—Make sure to check out the FAQ resources attached to product pages within the publishers’ Web sites. Along with test manuals, these Web sites present practically everything you need to know about scoring and reporting.

Looking for a way to create diagnostic reports effortlessly? There’s more. Software programs now exist that give you scores, interpret results, and provide reports—often with graphical profiles and easy-to-read scores. Computer programs are the new dream tools of choice for speech-language pathologists. To complement your test manual, Pearson offers software tools for the following tests:

If you are in a school system as I am, you’re either on or coming back from a much needed “break in the action.” Why not make a New Year’s resolution to investigate ways to add depth and breadth to your diagnostic reports? Take a few moments to check out these ideas in your test manual and on the Internet. You might be surprised just how much you can expand your knowledge of the children you serve!

Happy New Year!

What Your Test Manual Will (and Should) Tell You—Part 3


Clinical Café by Debby Hutchins, MS, CCC-SLP

Since it can be difficult getting back into the school routine after a summer break, we as SLPs need to do what the children do—sharpen our skills on previously learned material. One way we can do this is through reviewing. And what better place to start than by looking at our test manuals.

Manual, according to The New International Webster’s Comprehensive Dictionary of the English Language, is a “compact volume; handbook of instruction or directions; designed to be retained for reference.”

Dust off those test manuals!

Do you know where your test manuals are? When was the last time you took this valuable part of your assessment out of the closet or its fancy bag? Authors and publishers of tests include manuals in their packages for good reasons!

This subject of manuals came to mind when I recently mentored a new, very conscientious SLP. She had just administered the Goldman-Fristoe Test of Articulation-2 (GFTA-2) to a young child. I was reviewing the information prior to our staff meeting. Many of the phonetic symbols on the protocol seemed to indicate a very unusual pattern.

I asked the SLP, “What did you hear in the child’s speech that led you to use that symbol?” She explained that another SLP told her to use that symbol when a phoneme sounded distorted. I explained that the phonetic symbol she used meant, according to the manual, that the child exhibited nasality. She stopped and thought about it for a minute. Then, she smiled and said, “Now I see why it is so important to read the manual!”

All manuals include in-depth instructions on proper procedures and details on how to enhance the use of the instrument. Think about this. Would you prepare a fancy new dish for dinner without following a recipe? Okay all you gourmet cooks, this example does not include you. However, this story about the new SLP shows how important reading the test manual can be. There are other reasons too:

  • Administration standards—Standards 6.6, 8.1, and 11.3 (CASL p. 67, GFTA-2 p. 15) are spelled out to keep us on the right track when choosing assessments. All AGS Publishing instruments should be administered and interpreted by professionals who are trained to use them, or by students in training who are supervised.
  • Scope and organization of the test—If you read through the GFTA-2 manual, you will see that pages with blue shaded edges (GFTA-2 pp. 16-25) contain helpful information. For instance, did you know that GFTA-2 includes six consonant clusters that were not in the original edition? More clusters were added because many of us out in the field requested them. Also, did you know that if you fold the protocol in the right way, you can compare the same sound across all sections? (GFTA-2 p. 23.) Try it.
  • Research support for item types/constructs—Look at the CASL manual’s gray shaded page edges (pp. 34-66). There you will find a short description of each CASL test. It’s a quick way to locate an easy-to-understand test description. These descriptions are perfect for sharing with parents and others at eligibility meetings. Notice that the word is TEST, not subtest. A distinctive attribute of CASL is that each test stands alone—making CASL so “usable” for SLPs. If you hear someone refer to CASL subtests, politely refer them to the manual!
  • Test construction decisions—One recent SpeechandLanguage.com discussion question concerned basals and ceilings. Guess what? The answer is clearly spelled out in your test manual. But remember, whenever you get a revised edition of a test, it’s important not to assume that the basal, ceiling, and even the administration process will be the same. Again, check the manual.

Before you can administer an assessment, you have to calculate the child’s chronological age. If you are as mathematically challenged as I am, then you will be happy to know that you can find an age calculator right on the Pearson Web site. With this handy little tool, we have no excuse for subtracting incorrectly!
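
If you ever want to double-check the arithmetic yourself, the borrow-and-subtract method is easy to script. Here is a minimal sketch (the function is ours, and a 30-day month is assumed when borrowing, which is one common convention; follow whatever convention your manual specifies):

    # Hypothetical sketch of the borrow-and-subtract chronological age method.
    def chronological_age(birth, test):
        # birth and test are (year, month, day) tuples.
        by, bm, bd = birth
        ty, tm, td = test
        if td < bd:
            td += 30      # borrow one month, counted here as 30 days
            tm -= 1
        if tm < bm:
            tm += 12      # borrow one year
            ty -= 1
        return ty - by, tm - bm, td - bd   # (years, months, days)

    # Made-up dates: born June 24, 1997 and tested March 10, 2005
    # works out to 7 years, 8 months (and 16 days).
    chronological_age((1997, 6, 24), (2005, 3, 10))   # returns (7, 8, 16)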

A closing thought—do you ever reread favorite books for pleasure? If so, you know that when you read a book for a second or third time, you almost always find something you missed before. The same goes for all those important books called manuals. So go ahead and take the time to reacquaint yourself with them. It’s well worth it!

What Your Test Manual Will (and Should) Tell You—Part 2


Clinical Café by Tina J. Eichstadt, M.S., CCC-SLP

As a field, we’re into storytelling. A complete story includes setting, characters, events, consequences, plans, and resolutions. Likewise in complete test manuals, we look for “the story” of a test. How did the story, timeline, and events of a test’s development unfold? Content development in test manuals—that’s the topic of this month’s Café.

Have you ever seen or heard about a car for sale that looked great on the outside but when you opened up the hood was missing pieces or showed rust? Even worse, when the key was turned in this shiny new paint job, you wondered how a car that looked so good could sound so awful. Whether or not you’ve been in this situation, you can imagine your disappointment and dismay. Like any buyer, one of your first questions would be, “What’s the story here?”

And so it should be with tests and test manuals. Yes, the packaging should be attractive. Yes, the title should be memorable and explanatory. Yes, the record form should be easy to use. But what’s the story of the test? How did it come to be? What major and minor decisions were made that have fundamentally formed the test as a final instrument? What’s outside is important, but as the saying goes, “it’s what’s inside that counts.”

Why should we care about the story of a test? In a word: context. We all know how important context is in communication and that importance is no different in testing. We interview teachers and parents in an assessment process because we care about context. We observe the student on the playground, in the classroom, and in study hall or the lunchroom because we care about context. We teach code-switching skills to students because we want them to care about context. We read test manuals for the behind-the-scenes story of a test’s life and author’s thinking because we care about context.

Consider a comparison to a research article—we expect no less than adequate disclosure of background, subjects, and methodology in a good research article. Given the nature and potential impact of standardized test performance on accurate diagnosis, a child’s school placement, IEP services, and detailed intervention planning, why would we expect any less? A standardized test typically uses a series of tightly connected and hopefully well-controlled research studies. We should hold at least the same standard of “storytelling” and research rigor for tests as we do other research in the field.

Most test manuals tell you the basics of the test's story—but how much is enough? That largely depends on you and your particular needs and questions. But here's a list of things you can look for in test manuals—keep in mind that these may not be headings or independent chapter titles, but likely will be woven together in a chapter titled, "Content Development," "Rationale," "Theory Underlying Test Design," or "Purpose, Scope, and Organization of [the test]":

  • Theoretical ground—terms, definitions, and perspectives. For example, the Integrative Language Theory behind OWLS and CASL classifies idioms as lexical units (high-level vocabulary); also, the KLPA-2 manual explains in detail the theory behind scored and unscored phonological processes
  • Research support for theory. For example, throughout Chapter 2 of the CASL manual, numerous research references validate the author’s theory and five pages of reference details support it
  • Scope and organization of the test—content covered, not covered, and rationale for section/subtest organization. For example, the GFTA-2 manual explains why only 23 of 25 consonant sounds are measured, the rationale for including certain consonant clusters in the scope, and how the three different sections cover the scope of articulation testing. In the PPVT-III, the manual and the Technical References provide “the story” of each test revision and how the approach and content remained or changed
  • Research support for item types/constructs. For example, the EVT manual cites a research study that supports the change in item types from labeling to synonyms when measuring expressive vocabulary and word retrieval
  • Test construction decisions. For example, the new KTEA-II manual explains that the oral language subtests are specifically designed to measure listening and speaking skills that are typical for students in school. The items include examples of more formal teacher language and common situations for students

Delve into the “stories” that await you in your test manuals. You’ll find that test developers have often written a wealth of information for you!

What Your Test Manual Will (and Should) Tell You—Part 1


Clinical Café by Tina J. Eichstadt, M.S., CCC-SLP

We hear about new (and older) tests in many ways: comments from colleagues, email listservs, flyers and postcards, catalogs, presentations at conventions, and the like. How many times have you purchased a new test after seeing it in one of these communication vehicles? When you received the test, how many times have you hurriedly opened the package, grabbed the easel and record form, and run off to test the student you believed it to be appropriate for—without reading the manual? If you say, “not once,” consider yourself one of a very small number of people who deserve kudos beyond measure, or . . . maybe you should rethink your answer. For most of us, the second group is where we sheepishly belong.

Say it with me: “I confess! I’ve given a test without reading the manual first!” There…now that’s out of our system. This month’s Café will begin a series on delving into the dark places of test manuals, hopefully to shed a little light or make the dim light a bit brighter.

You need a lot of things to help you work well in the school setting: Want a brief overview of the entire test in a short, concise description for a report or IEP meeting? Interested in the variety of ways a test may be used in clinical practice? Want to know how long testing may take or who is allowed to give the test? Need a bullet-point list of the test’s key features for the justification of the test purchase to your special education director or administrator? These and many other basic test questions are answered in well-written and complete test manuals.

In the case of Pearson’s test manuals, answers to these questions translate into “everything that is in Chapter 1.” Here are a few examples of the nuggets of gold just waiting for you to mine:

  • The Expressive Vocabulary Test (EVT) manual states that this test was co-normed with the PPVT-III (p. 1). The strength of the EVT is not only that the test's internal data are rigorous, but also that the psychometric link to the gold standard in vocabulary testing, the PPVT, makes it even easier to compare receptive and expressive vocabulary scores of your students or do research.
  • The Comprehensive Assessment of Spoken Language (CASL) manual advises that you can use the open-ended responses from students during testing for dynamic and qualitative language sample analysis (p. 5). The CASL is not just a norm-referenced test battery (not that it wouldn't be great even if it were)!
  • The Oral and Written Language Scales (OWLS) manuals give approximate testing times for different age groups across the testing age span (pp. 6 and 7, respectively). Each manual also includes approximate scoring times for each age group. Want some clear data for caseload vs. workload support in your school? Check out these published times and start adding it up!
  • The Goldman Fristoe Test of Articulation-2 (GFTA-2) and Khan-Lewis Phonological Analysis-2 (KLPA-2) actually have different examiner requirements: two levels of qualifications for the GFTA-2 and an additional qualification for the KLPA-2 (pp. 5 and 4, respectively).

These are just a few of the many nuggets you can find in Chapter 1 of most Pearson test manuals. While understandably the text may not be as riveting as a John Grisham or Harry Potter novel, it may help you in ways that you would never consider until you read each page. Begin gently . . . with Chapter 1, which is typically eight or fewer full pages. Think about the "big picture" content for a while. Fit it in with what you already know and believe about testing. When you've had a break to consider the information, then go on to the next chapter. Scanning a manual is a good initial tool, but many of us don't take the time to go deeper.

Go deeper. Your students depend on your depth.

From the view of a test publisher, writing and printing a manual is critical to any test development process. Authors and editors work closely together to be as complete and clear as possible without adding unnecessary information. Keep in mind that the manual must be written for a large audience with a very wide range of knowledge about testing in general and about the individual test specifically. Finding a balance between not enough and too much information can be difficult, but with a solid process and competent professionals, a winning combination can be born.

Happy reading—you can do it!


Know The Code!


Clinical Café by Tina J. Eichstadt, M.S., CCC-SLP

Quickly, before you run out the door for the summer . . . are you ready for the fall testing of your students and prospective students? "Are you serious?" you ask. "I'm still wrapping up spring testing right now. I'm not even thinking about September yet!" Well, that works too. Whether you're surviving in June or looking ahead to September with purchases, plans, and protocols, take the next two minutes to read this month's Clinical Café!

The world of testing in education is big—bigger now than it has been in the past. Students deserve our best; we need to think ever more clearly about what we do in testing and how we do it. Pearson's Assessment group staff receives calls and emails daily from customers with questions like these and more: "Which test is appropriate?" "How do I administer and score this test?" "How do I accurately report and interpret these results?" "What do I need to tell test takers?" Similarly, Pearson's Assessment group asks many of the same questions of one another in the test development process: "How do we know we've built an appropriate and relevant test?" "How do we make administration and scoring crystal clear and easy?" "What information and guidance can we provide that will streamline interpretation and reporting?" "What is our role in informing test users and/or test takers about our tests?"

The Joint Committee on Testing Practices, an interdisciplinary group from multiple national organizations including ASHA, drafted guidelines called the Code of Fair Testing Practices in Education (2005). The Code was just completed this year and adopted by the ASHA Legislative Council, and is consistent with the ASHA Code of Ethics. The whole document is relatively brief, not even five full pages (and it’s mostly in tables  . . .  easy!). Four sections help answer the above questions with specific information for both test developers and test users:

  1. Developing and Selecting Appropriate Tests
  2. Administering and Scoring Tests
  3. Reporting and Interpreting Test Results
  4. Informing Test Takers

As a professional in the field of communication disorders, where a lot of testing is completed, the Code of Fair Testing Practices in Education is for you. It will trigger your memory, teach you something new, make you want to grab a highlighter or run to the copy machine for your next staff meeting. Keep it handy. Share it with colleagues. Put the test you use up against this Code and see how it measures up. Ask good questions and keep the answers. Test developers (like Pearson) and test users (like you) together need to know the Code.

To get your copy of the Code of Fair Testing Practices in Education (2005), follow these directions:

  1. Go to www.asha.org.
  2. Hold your mouse over the top of the “For ASHA Members Only” link and watch the drop-down list appear.
  3. Drag your mouse down to the item, “ASHA Desk Reference” and click once.
  4. Log in to the Members Only section with your email address and password (ASHA Members only).
  5. On the ASHA Desk Reference page, click on “Volume 1: Cardinal Documents of the Association.”
  6. At the bottom of the page, under the subheading “Relevant Papers,” click on the Code of Fair Testing Practices in Education (2005).
  7. When the popup window opens, choose Open or Save.
  8. View the .pdf document (you need Adobe Acrobat Reader) and print.

Easy!

P.S. You may have been thinking, "I thought she was going to write about the ASHA Code of Ethics—THAT code." Of course, that applies here too. As mentioned above, the ASHA Code of Ethics was used in the development of this document. Both apply to the testing situation, but the Code of Fair Testing Practices in Education (2005) is much more specific to actual tests and testing situations. The bottom line? You need both!

Send us your “What I’d like to learn about tests this summer” list

As your partner in testing, we’d like you to know what we do, how we do it, and why. In turn, we’d like to know what other information we can provide to help you in your jobs. So send us your “What I’d like to learn about tests this summer” list to webmaster@agsnet.com and we’ll try to fulfill your wishes.

Have a great kick-off to summer!