Clinical Cafe

Standardized Assessments and Telepractice



The buzz isn’t new; telepractice has been around the field of speech-language pathology for more than a few years. Some of us are working in a telepractice context full-time, others “dabble,” and still others know people who use telepractice but haven’t tried it themselves. ASHA’s new Special Interest Group (SIG) on Telepractice, however, may be signaling an important shift in the way that this service delivery model is coming into more mainstream practice.

As a member of ASHA’s SIG 1, I recently received a listserv email on telepractice asking how publishers see their role in this service delivery model. Along with the steady stream of calls and contacts we’ve received over the last few years, questions like these prompt a thoughtful response. We do see the use of our tools, assessments in particular, in a telepractice context as something we can and should address. The demand has grown tremendously in recent years, and we want to respond in good order.

The simple fact is that equating studies need to be done on each test under particular conditions to determine whether scores change significantly in a different delivery context. We don’t (yet) have published tools that include the telepractice context in standardization, so it’s an empirical question to answer—that is, do examinees perform similarly or differently when the assessment is delivered through telepractice? In addition, a number of different physical scenarios may be considered “telepractice” (this is not an exhaustive list):

  1. A physical stimulus book/easel at the examinee site (with or without a facilitator also present); the examiner gives the verbal stimulus through telepractice, but the visual stimulus is in print with the examinee.
  2. A physical stimulus book/easel at the examiner site; the examiner moves the camera to show the visual (with accompanying verbal) stimuli through telepractice.
  3. A digital stimulus book/easel on the computer screen; the examiner may or may not also be in view; the examinee responds verbally, by mouse click, or by touchscreen input.
  4. Others?

There are many variables to consider. Of course, some tests lend themselves to telepractice better than others, so much depends on the tool and the format. A “one-size-fits-all” approach isn’t wise, but some key strategies may be universal, maintaining the best practices we know so well about assessment in general.

Certainly, we also monitor and consider ASHA’s guidelines (as well as the draft APA Standards document that is now out for review). Our input would be to participate as appropriate in the conversations and practically extend established guidelines into individual test contexts.

What are your thoughts on the use of assessments in telepractice? Chime in below!

Webinar Recording: Preschool Language Scales-5: Assessing Language From 0-7


You can watch the recording of “Preschool Language Scales-5: Assessing Language From 0-7” below.

You can download the slides here.

**please note that CEUs were only offered for attending the live webinar. We are unable to provide CEUs for watching the recording.**

Goldman-Fristoe 2: Research, Administration, and Interpretation (Webinar Recording)


On Thursday, March 17, 2011, Dr. Ronald Goldman presented a complimentary webinar: Goldman-Fristoe 2: Research, Administration, and Interpretation. He also answered questions from listeners at the end of the event. Do you have comments or questions? Ask away in the comments below!

When are test norms “outdated”?


PLS Picture Book, Revised Edition (from 1979!)

When are test norms “outdated”? Ah, the age-old question. There is really no “number” or length of time that determines when norms are out of date. The fact is that one day makes any data set (whether in test norms or a journal article) one day older. The Standards for Educational and Psychological Testing, the guiding document for most test publishers, uses the word “periodically” in the section on test revisions (a quick excerpt here to save everyone’s time):

“Tests and their supporting documents…are reviewed periodically to determine whether revisions are needed. Revisions or amendments are necessary when new research data, significant changes in the domain, or new conditions of test use and interpretation would either improve the validity of interpretations…or suggest that the test is no longer fully appropriate for its intended use” (p. 42).

The paragraph goes on to discuss the difference between outdated norms and outdated item content, which are two different things, of course. This is not a black and white issue—and just like language, the nuances are critical. Some of our content domains, like vocabulary, change more often than other more “stable” domains, like the acquisition of basic syntactic structures or phoneme acquisition (although in the latter domain, the definitions of “mastery” of a phoneme vary widely). It’s true that a general rule of thumb for test revision tends to be 8-10 years, but that’s as much a practical matter as a data-based one.

The number of factors in making a clinical decision on whether or not to use any assessment tool (whether normed or not) makes our roles as professionals all the more important. Certainly, the older the norms the more critical we should be of the validity (i.e., the use of the norms as stated in the manual) of a test instrument. Yes, some states have made a gray issue into a black and white one by setting a specific number of “years old” that any norms set can be. But the “story” of any test is so much richer than just a number…and while one use of an assessment tool may be inappropriate in a given context, there may be other valid uses that still exist for a particular instrument.

As a final aside, ASHA echoes and supports the use of the Standards as a guiding document for test use in our profession:
Code of Fair Testing Practices in Education. (2004). Washington, DC: Joint Committee on Testing Practices.

Perhaps a gift to your colleague(s) for the holidays? It’s not “warm and fuzzy,” but sitting next to your APA Style Guide, it’s not a bad idea.

(Note: The author of this blog post has no affiliation with the publishing of the Standards nor receives any benefit from the promotion of the book!)

Development, Validation, and Use of the OASES for Children, Teens, and Adults Who Stutter


On November 1, 2010, J. Scott Yaruss, PhD, CCC-SLP, BRS-FD, ASHA Fellow, presented the webinar “Development, Validation, and Use of the OASES for Children, Teens, and Adults Who Stutter.” You can view the recording and/or download the slides below.


3 Reasons to Measure the Impact of Stuttering


When stuttering severity doesn’t show significant change, consider measuring something that will. According to Yaruss & Quesal (2008, 2010), measuring the impact of stuttering on a person’s life is just the thing to do.

Last Friday (10/22/10) was International Stuttering Awareness Day–but just like any other day, we all need to keep learning about what it means to be a person who stutters.

Enter Yaruss and Quesal’s (in collaboration with Craig Coleman) latest effort for children who stutter–the Overall Assessment of the Speaker’s Experience of Stuttering (OASES) forms for children ages 7-17. How does stuttering impact a young person’s life–at home, at school, out on the playground or at practice? This new self-report measures just that in a few minutes. Who better to tell professionals how to support a person who stutters than people who stutter themselves?

The new forms of the OASES make perfect partners on multiple levels:

  1. As a partner to the measurement of stuttering severity (a la the SSI-4, for example)
  2. As a partner to the planning of intervention–focusing where the impact on life is the greatest or most valued
  3. As a partner to the person who stutters–providing young children (in this case) with a vehicle for communicating complex and often emotional details about their speaking.

The number of stutters a person “speaks” in a particular situation may vary from day to day, but consistently reducing the negative impact of those stutters in that situation may be the best thing that ever happened to someone who stutters–and now you can measure it, quantitatively and qualitatively.

That’s significant, don’t you think?

Reference:
Yaruss, J. S. & Quesal, R. W. (2010). Overall Assessment of the Speaker’s Experience of Stuttering. Bloomington, MN: NCS Pearson, Inc.

2 Things I Learned About Stuttering At ASHA Schools, 2010



"Leaving Las Vegas" by pyth0ns on Flickr.com

If you can’t stand the heat, get inside. Luckily, the 119-degree heat in Las Vegas at the ASHA Schools conference in mid-July kept people indoors, and for many, that meant a session on school-age stuttering by Nina Reeves, MS, CCC-SLP, BRS-FD. ASHA’s data show that most SLPs see children who stutter, but not many of them. So continuing education in stuttering assessment and intervention is especially critical for maintaining a high standard of evidence-based practice and confidence in this part of the SLP scope of practice. Nina’s presentation did just that…with plenty of great energy and passion on behalf of children who stutter.

Two (of many) key insights from Nina’s presentation:

  1. Look beyond the stutter—As we all know, while frequency counts and descriptions of stuttering are important, they are not the whole story. In addition, children who continue to stutter well into elementary school and beyond often have long-term effects of stuttering in their lives. In ongoing assessment and intervention, Nina emphasized the cognitive/affective and environmental aspects of the child’s life as they address their own stuttering. The literature on any chronic communication disorder may be a good area of ongoing reading for even more insights into our role as SLPs and supportive guides to children who stutter.
  2. Jump in yourself—If you want to teach a technique to someone who stutters, you’d better be willing to demonstrate both the technique AND the stuttering event with them. Trust and authenticity are paramount in any therapeutic partnership; stuttering assessment/intervention is no different. Nina encouraged all of us who work with children who stutter to take our turn and learn to pseudo-stutter with our students in the same contexts that they do. Scary? Sure. But just imagine being someone who stutters—not able to turn stuttering “off.”

Want more from Nina? See her 2006 ASHA Leader article for starters.

One final thought: if you’re looking for an assessment tool that measures “beyond the stutter,” check out our soon-to-be-published record forms for ages 7-17 of the OASES: Overall Assessment of the Speaker’s Experience of Stuttering by J. Scott Yaruss, Robert Quesal, and Craig Coleman. In just 10-15 minutes, your student who stutters can complete a self-report of the impact stuttering has on his or her life across contexts. While the frequency of stuttering may not change much over time, the OASES can give you a quantifiable measure of progress (i.e., an impact score) showing that the impact of stuttering is going down.

Have your own story about stuttering assessment and intervention? Share it below!

Two Quick Tips for Bedside Aphasia Evaluations


June is National Aphasia Awareness Month. In that spirit, we’ve been thinking about quick tips for you related to aphasia assessment and treatment. Most of us who have worked clinically in the area of aphasia would likely agree that a combination of formal and informal assessment procedures makes for a balanced approach at any stage. Of course, when the patient or client is in the acute phase of a neurological event, brevity is a partner to a broad survey of communication skills.

  1. Enter the Bedside Record Form (BRF) of the Western Aphasia Battery-Revised (WAB-R). In just 15 minutes or less, you can get a broad survey of typically assessed language skills with a few items in each area. That’s the formal part of the evaluation—listening, speaking, reading, writing, and apraxia, in a quick, standardized nutshell.
  2. Then, consider using some of the tasks (or similar tasks) dynamically. In just seconds, you can informally assess “what works” in terms of viable communication strategies beyond the formal procedure. Tools required? Blank paper and Sharpies. Here’s an example: The patient scores 0 on the Spontaneous Speech: Content section of the BRF. After you complete the administration, you might say to the patient, “Let’s try this again.” Readminister the same four items from that section, but offer written choices you’ve created with your Sharpie and paper. Read through each of the choices aloud, pointing as you go, and then hand the Sharpie to the patient. What happens next may surprise you! Picture drawing, pointing to written choices, gesturing—all are viable dynamic options to follow a quick, formal assessment.
You’ve done it! Report the score from the WAB-R’s BRF, and describe options for communication success in the difficult skill areas for the patient, caregivers, and medical staff.

So, do you have a tip for assessing or treating a person with aphasia? Share it with everyone in the comments below.

The ABCs of Psychometrics… Please Read On!


“I’m an SLP; I was told there would be no math.”

I don’t exactly remember the first time I heard someone say something to this effect—grad school at the earliest, I’m sure—but it has certainly stayed with me and keeps me chuckling as I occasionally think the same thing myself. Ironically, I ended up in a role in the professions where I deal with math and statistics every day, in business, development, and clinical contexts. Over the years, I also have come to appreciate that there are plenty of places where math and statistics are rightfully embedded in the professions. So where did the “no math” notion come from, I’ve wondered? It certainly isn’t left out of our standard masters-level curriculum.

Lest any young (or seasoned) professional feel encouraged, prodded, or outright pushed into the world of math and statistics against their will, I’d like to call your attention to one of the many critical ways math and statistics need to be on our collective radar as we work daily to improve communication skills in individuals of all ages.

The term of the day is psychometrics. No, it’s not a compound word combining a “psycho” concept with a “metrics” concept exactly! Psychometrics, a subspecialty of psychology, is a discipline that deals with the measurement of human behaviors and/or traits. For our purposes in this article, psychometrics is the discipline of valid and reliable test development.

Excellent test development means you have tools you can stand behind confidently in your work. The psychometric effort that goes into building a test is, among many other things, straightforward yet creative, analytical yet fluid. Psychometricians evaluate the needs the test must fill. They bring their knowledge of math and statistics, but their real expertise, when working with content experts (like us), is insight into how best to capture the variability of human behavior.

You may be able to define the terms standard score, percentile, or stanine, but can you articulate the rationale behind the norming process of your favorite standardized test? Can you accurately describe the math and statistics-based validity, reliability, and evidence-based properties of your favorite non-standardized assessment tools? In any assessment tool choice you make, you are responsible for the appropriate application of that tool—to the right person, for the right reason(s), in the right place, at the right time. Our Code of Ethics requires it.

If you’ve read this far, you may be reaching for a paper sack right now to help with your breathing or needing someone to elbow you to remain upright and awake. I can’t hand you that type of support in this article, but I do have something that hopefully will be even more helpful. With the integration of the Pearson and PsychCorp businesses, we’re just beginning to enjoy all the things we can share across the Minnesota and Texas campuses. One very special document that our colleagues in Texas have created is called a “Psychometrics Primer.”

View the Psychometrics Primer.

It’s a way of looking at psychometrics from an SLP’s perspective. I’d call it “required reading” for every student in the professions, as well as a helpful reference for every professional in practice today.

With all the questions we receive at Pearson from professionals who use our products, we know that we can’t say enough about what goes into putting a tool like the CELF, PLS, PPVT, or GFTA together. It’s more than numbers and pretty colors on the packaging, to be sure (although we do like how our products look). As you read, share, and ponder the attached document, please email us as you have questions about how our tools are created and how they apply to the work you do, individually and collectively.

Oh, and do keep in mind—math is a language too!

Monitoring Progress…the Easy (or Easier) Way!


These days, making statements about progress is increasingly important as we seek to document our efforts in every practice setting where SLPs and audiologists serve individuals with communication disorders. To that end, using scores that are sensitive to smaller changes in performance over time is critical. A number of Pearson products currently include growth scores.

But what exactly is a growth score, and how is it used? Using the PPVT-4 test as an example, you can read a brief excerpt from the test manual for a definition below. In the case of the PPVT-4 and EVT-2, the growth score is titled “Growth Scale Value” or GSV:

The GSV score is useful for measuring change in…performance over time. The GSV is not a normative score, because it does not involve comparison with a normative group. Rather, it is a transformation of the raw score and is superior to raw scores for making statistical comparisons (p. 18).

For a little more background on growth scores, you can read another set of comments in the PPVT-4 test manual regarding the GSV:

The GSV scale was developed so that vocabulary growth could be followed over a period of years on a single continuous scale. Standard scores, percentiles, stanines, and NCEs compare an examinee’s vocabulary knowledge with that of a reference group representing all individuals of the same age or grade. In contrast, the GSV measures an examinee’s vocabulary with respect to an absolute scale of knowledge. The test performance of any examinee…can be placed on a [single] GSV scale. As an examinee’s vocabulary grows, the GSV will increase.

The GSV is an equal-interval scale. Therefore, GSV scores can be added, subtracted, or averaged. Furthermore, the fact that GSVs can be averaged makes this scale a useful one for tracking the progress of groups.

Standard scores and percentiles are less useful than GSVs for measuring growth, because the reference norm group changes as the examinee moves into a higher age or grade level. If a person’s vocabulary increases at the average rate, his or her standard score and percentile would stay the same, whereas the GSV score would increase (p. 21).

In addition, each test manual should give you the number of growth points needed to show statistically significant change at a particular age level. For example, on the PPVT-4, a change of 8 GSV points from one administration to the next is statistically significant for individuals ages 2:6-12. If a child in this age range gains 8 or more points on the GSV scale, you can be confident that the child’s vocabulary has truly increased.
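For readers who track scores in a spreadsheet or script, the arithmetic above can be sketched in a few lines of Python. This is a minimal illustration, not a Pearson tool: the function and variable names are hypothetical, and the 8-point threshold is the PPVT-4 manual’s value for ages 2:6-12 only (always check the manual for the threshold that applies to your examinee’s age).

```python
# Hypothetical sketch of growth-score (GSV) logic; names are illustrative only.
# The 8-point threshold applies to the PPVT-4 for ages 2:6-12, per its manual.

SIGNIFICANT_GSV_CHANGE = 8

def gsv_change_is_significant(gsv_before: int, gsv_after: int,
                              threshold: int = SIGNIFICANT_GSV_CHANGE) -> bool:
    """True when the gain meets or exceeds the published threshold."""
    return (gsv_after - gsv_before) >= threshold

def mean_gsv(gsvs: list[float]) -> float:
    """Because the GSV is an equal-interval scale, averaging is meaningful."""
    return sum(gsvs) / len(gsvs)

# One child gains 9 GSV points between administrations: significant.
print(gsv_change_is_significant(145, 154))  # True

# Tracking a small (hypothetical) group's progress by mean GSV gain.
fall = [140, 152, 147]
spring = [151, 160, 158]
print(mean_gsv(spring) - mean_gsv(fall))
```

Note that the same averaging would not be defensible with percentiles, which are not equal-interval; that is exactly the property the manual excerpt highlights.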

A caveat: Using growth scores for measuring progress doesn’t mean standard scores are not important. Standard scores serve a very clear purpose and can be used reliably with growth scores. You can think of a growth score as a complementary tool to a standard score; each score tells you something different about the individual’s performance and creates a clearer picture of change over time. The growth score indicates whether there has been improvement, and the standard score indicates whether the rate of improvement has been above or below the average rate for the child’s peers.

So, as you consider the need to demonstrate growth in an individual you serve, do consider using the growth scores available in the above tests as well—and make your work easier!

Reference

Dunn, L. M. & Dunn, D. M. (2007). PPVT-4 Manual. Bloomington, MN: NCS Pearson, Inc.