Our Blogs

Read practical tips and insights, inside information, stories and recollections, and expert advice.
Submit Your Question

Ask The Expert

CLQT Symbol Trails


On Symbol Trails, the examinee completed Trials 1 and 2 correctly but did not follow the instructions on the actual scored task. She kept repeating “circle to triangle,” yet drew the lines in a scattered fashion, without attending to connecting circles to triangles or to connecting objects of increasing size. According to the scoring criteria, the examinee completed 7 lines correctly. Is the score actually 7, or should the subtest be considered spoiled?


If the examiner follows the guidelines for instructing the examinee, credit should be given for the lines connected correctly. The score is indeed 7, and standard scoring procedures should be followed and reported. At the same time, the clinician needs to judge whether that score reflects intentional performance and qualify any concerns in the report. Certainly, the verbal repetition of “circle to triangle” could indicate a lack of attention and “random drawing” (which ended up being rather accurate in this case), or it could simply be verbal rehearsal, a perseverative self-monitoring strategy during the task. Only the clinician giving the test can make the best judgment about that. The scoring, however, is based on actual performance, given correct administration procedures.

Which Items in CELF-4 Are Appropriate to Target in Treatment?


If all items in a CELF-4 subtest are to be administered, how do we know which of those items are appropriate to target in treatment?


Refer to the Item Analysis at the end of each subtest description in Chapter 2 of the Examiner’s Manual to examine the student’s patterns of correct responses and errors. Information from the CELF-4 Item Analysis (along with your other assessment results, such as classroom observations, additional probes, and dynamic assessment) can assist you in designing an individualized therapy plan for the student. CELF-4, like all standardized assessment tools, is only one of the measures that should be used in a comprehensive diagnostic assessment process to determine whether a child has a language disorder, to identify strengths and weaknesses, and to identify treatment targets if the child has a disorder.

Language, Literacy & Learning Behavior: A Design for Change

Lance M. Gentile, PhD

On October 20, 2011, Lance M. Gentile, PhD presented: Language, Literacy and Learning Behavior: A Design for Change

Dr. Lance M. Gentile, author of the newly released OLAI-2, has taught for over 45 years. The number of children who have not acquired the foundations of language for learning in school has multiplied. In this webinar, Dr. Gentile discusses the role of parents and professionals in teaching children the language, literacy, and learning behavior skills needed to be successful in school.

You may watch the recording here.

**please note that CEUs were only offered for attending the live webinar. We are unable to provide CEUs for watching the recording.**

The Challenges of Basic Concept Assessment/Intervention

On October 18, 2011, Ann E. Boehm, PhD presented: The Challenges of Basic Concept Assessment/Intervention. A multi-step model for assessing and planning treatment for basic concepts was explored. Dr. Boehm presented research-based intervention strategies and checklists to monitor progress. The session also addressed the complexity of direction-following and ways to improve children’s performance.

You may watch the recording here.

DELV: Who is the Test for and How is it Useful?

On October 12, 2011, Jill de Villiers, PhD; Peter de Villiers, PhD; and Tom Roeper, PhD presented: DELV: Who is the Test for and How is it Useful? While DELV addresses dialect issues in language testing, it is appropriate for mainstream English speakers as well. Subtests unique to DELV (e.g., wh-question asking, fast mapping, narrative, quantifiers) complement other assessments and are important indicators for SLPs designing interventions.

You may watch the recording here.

When to Use CELF Preschool-2 or CELF-4


I am a speech pathologist currently working in a preschool/kindergarten building. I often use the CELF Preschool-2 or the CELF-4 to evaluate children's communication skills. I would like this question directed to the authors of these assessment tools. Since both of these tests cover the 5-6 year age range, which test would they recommend we use at the kindergarten level?

Elisabeth H. Wiig, PhD

Answer:

In general, the CELF Preschool-2 is your best option for children in kindergarten; the formats in the test are more supportive and child-friendly for young children. This is especially the case if a child is a young five-year-old (e.g., 5:0 through 5:5), has had little preschool experience, and has limited verbal ability. There is more in-depth content coverage for younger children in CELF Preschool-2 than you will find in CELF-4, which covers content mostly for older children (ages 5-8).

Keep in mind that if the children you are testing in kindergarten are five years old, have enough preschool experience to be comfortable and familiar with school-type tasks, and express themselves well in social situations, you will be able to obtain accurate test results using the CELF-4. Your choice of assessment really depends on the child's maturity, previous preschool experiences, social verbal ability, and his or her experience with standardized assessment tools.

Score Discrepancies on CELF-4


I have an 8-year, 3-month-old 2nd grade boy whose overall profile [on the CELF-4] falls between standard scores of 5 and 6, with Formulated Sentences at 8 and Expressive Vocabulary at 7. His Working Memory subtest standard scores are as follows: Number Repetition Forward 6, Number Repetition Backward 5, Familiar Sequences 10. This is a huge discrepancy. No inattentive behaviors were noted. Any help?

-Beth M.

Dr. Elisabeth Wiig’s Answer:
To begin, take a look at page 121 of the Examiner’s Manual. As you will see, both the Number Repetition subtests and the Familiar Sequences subtest place a heavy demand on attention, concentration, and auditory or verbal working memory. If you examine the content of the test items on the Record Form, you will see that the first 7 items in the Familiar Sequences subtest are relatively easy in comparison to items 8-12: the context includes “familiar sequences” such as the letters of the alphabet and the days of the week, not the long random sequences of numbers in the Number Repetition task. There is a great deal of automaticity in producing those sequences (and they are a closed set!) compared to the Number Repetition subtest. The score discrepancy this student exhibited is a red flag that there may be working memory issues operating with this child and that further assessment is warranted. Consult with your school psychologist, who can conduct a more thorough assessment of the student’s memory and attention skills.

You might want to administer the CELF-4 Rapid Automatic Naming subtest. It probes attention, visual working memory, and set shifting. If the boy takes significantly longer to name the color-form combinations, this can serve as validation, since color-form naming requires adequate bilateral temporal-parietal, subcortical, and hippocampal functioning. In other words, significantly impaired performance on that subtest can point to an underlying neuropsychological/neurological deficit involving attention, working memory, and cognitive processing.

Answering Tough Questions About CELF-4 Interpretation

On April 12, 2011, researcher and Pearson author Elisabeth Wiig, PhD, answered your questions about CELF-4 interpretation. The recording and a PDF of the slides are available below.

You can download a PDF of the slides here: Answering Tough Questions About CELF-4 Interpretation.

***Please note: we are unable to provide CEUs for watching the recording of this webinar. CEUs were only offered for attending the live event.

How the SCAN-3 Tests Can Be Used

The original SCAN test, published in the early 1980s, was designed to be a screening test. It soon became clear that the test provided important diagnostic information, and with the subsequent revision it was published as a test of auditory processing disorders, i.e., a diagnostic test.

Standardized scores are used for diagnostic purposes in medicine, psychology, education, and speech-language pathology. The ability to determine a subject’s performance at a specific level and to categorize that performance as normal or not is precisely what is used in fields such as medicine, where performance below -2 SD is considered pathological.
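As a simple illustration of the arithmetic behind that cutoff (this sketch is not part of the SCAN-3 materials; it assumes the common standard-score scale with a mean of 100 and a standard deviation of 15, on which -2 SD corresponds to a standard score of 70):

```python
# Hypothetical illustration: express a standard score in standard-deviation
# units (a z-score), assuming a scale with mean 100 and SD 15.

MEAN = 100
SD = 15

def z_score(standard_score, mean=MEAN, sd=SD):
    """Return the number of standard deviations from the mean."""
    return (standard_score - mean) / sd

def is_below_cutoff(standard_score, cutoff_z=-2.0):
    """True if performance falls below the -2 SD cutoff described above."""
    return z_score(standard_score) < cutoff_z

print(z_score(70))          # -2.0: exactly at the -2 SD boundary
print(is_below_cutoff(55))  # True: -3 SD, well below the cutoff
print(is_below_cutoff(85))  # False: -1 SD, within normal limits
```

The same conversion applies to any test that reports scores on a mean-100/SD-15 scale; tests using other scales would substitute their own mean and SD.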

The current SCAN-3 batteries contain the major tests recommended by position papers published by ASHA and AAA. Small portions of the most recent versions may be used as screening tools; primarily, however, the batteries are diagnostic in nature. While some might argue that tests of auditory processing disorders (APD) are not, or should not be, diagnostic in nature, the SCAN tests are designed to be so. Conversely, if the SCAN test batteries are not diagnostic, then what tests are available that have better normative data? Professionals familiar with the APD literature and available tests of auditory processing recognize that published norms are not available for a majority of tests currently used. When cut-scores are recommended in the literature, there is often little, if any, information available to the user on how those scores were obtained.

The most recent revisions of the test batteries, SCAN-3 for Children: Tests of Auditory Processing Disorders and SCAN-3 for Adolescents and Adults, include:

  1. Three screening measures with criterion referenced cut-off scores;
  2. Four tests of auditory processing used to develop the composite score; and
  3. Three optional tests of auditory processing including two additional signal-to-noise ratios and a time compressed sentence test.

In addition, the manual describes administering the Competing Words test under free recall and then directed ear conditions in order to assess higher order memory/executive functions. The revised test batteries were completely renormed on 775 subjects.

It may be of interest to readers of this note that Friberg & McNamara (2010) found that SCAN-C and SCAN-A have the highest levels of sensitivity and specificity of any auditory processing test or battery.


Dr. Robert W. Keith

By Robert W. Keith, Ph.D.

Adjunct Professor
University of Cincinnati – College of Allied Health Sciences
Department of Communications Sciences and Disorders
Professor Emeritus
Department of Otolaryngology
University of Cincinnati College of Medicine



Friberg, J.C. & McNamara, T.L. (2010). Evaluating the reliability and validity of (Central) Auditory Processing Tests: A preliminary investigation. Journal of Educational Audiology, 2.

How to Report and Interpret Extreme Raw Scores

We recently received the following question about the CASL test:

When the Norms Book lists a standard score (SS) associated with a raw score of 0, but the manual guides interpretation differently, which reporting/interpretation strategy should you use?

Although a normative score equivalent is reported in norms tables for scores of 0, best practice would be to follow the recommendations in the manual. Page 73 of the CASL manual, for example, states the following: “If the examinee responds incorrectly to Items 1, 2 and 3, do not administer the test. No normative information can be derived. However, the examiner may wish to describe qualitatively in a report the examinee’s difficulty with the task.”

In addition, page 88 in the CASL manual deals with extreme raw scores. Essentially, raw scores that are 0 or “nearly perfect” should be interpreted with great caution.

From a psychometric perspective alone, it’s important to know that an associated SS is possible for raw scores of zero. In the CASL norms tables, zeros complete the range of possible raw scores. However, from an interpretive perspective, even though an associated score is mathematically and statistically possible, the examiner must consider the usefulness or meaningfulness of a score of zero. Caution is always recommended when attempting to interpret a score of zero on any assessment.

School districts may want to see a score, but if that score is meaningless, the examiner must consider the implication for the examinee of a misinterpretation or misuse of that score.

In short, we recommend that you follow the manual’s directive regarding raw scores of zero, and do not report the SS for a raw score of 0.

Comments? Add them below!