Our Blogs

Share in practical tips and insights, inside information, stories and recollections, and expert advice.
Submit Your Question

Ask The Expert

Children Who Fail the PLS™-5 Screening Test but Pass the PLS™-5?

Nancy Castilleja is a senior Product Manager for speech and language products, including PLS-5.

I am seeing a lot of 3- and 4-year-olds who fail the PLS™-5 Screening Test, then score 78–85 when they take the full PLS™-5. Any insights you can provide about these test results?

A criterion cut score is selected on a screening test to best differentiate a group of typically developing children from a group of children with a disorder. These two groups overlap somewhat in performance; they are not completely distinct. Low-performing typically developing children and higher-performing children with a mild language disorder may perform similarly.

Because the two groups' score distributions overlap, the criterion score that is chosen may slightly over-identify children who do not have a disorder as needing more testing, or it may under-identify children with a mild disorder, who are then not flagged for additional testing. Many screening tests are designed with a cut score that slightly over-identifies children as needing additional testing so that clinicians don't miss children who do have a disorder.

The example the clinician provides (children scoring 78–85 on PLS-5) indicates that the PLS-5 Screener is not screening out children with a mild disorder. Standard scores of 78 to 85 translate to percentile ranks of 7 to 16: not in the average range, but not low enough to qualify a child for services in many programs. It would be a concern if the clinician found that many children failed the Screener but had PLS-5 scores of 86 or above, well into the average range of the test. This clinician, however, is seeing children who have some degree of language difficulty: 84–93% of their same-age peers score better than they do. The PLS-5 Screener isn't identifying children for direct services; it is identifying children with low language skills who should be monitored. Since the children she is testing may not qualify for direct services, she may consider other ways to foster improved language skills for these low-scoring children, such as team teaching certain lessons in the classroom or providing the teacher with techniques to facilitate improved vocabulary or syntactic skills.
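
The trade-off described above can be sketched with a toy example. The scores below are invented for illustration and are not PLS-5 data; the sketch simply shows how moving a cut score on two overlapping score distributions trades sensitivity (children with a disorder who are flagged) against specificity (typically developing children who are correctly passed).

```python
# Hypothetical standard scores for two overlapping groups (invented data).
typical = [92, 95, 88, 101, 84, 97, 79, 86, 105, 80]     # typically developing
disordered = [74, 80, 69, 85, 77, 72, 83, 66, 79, 81]    # mild language disorder

def screen_stats(cut, typical, disordered):
    """Children scoring below `cut` are flagged for further testing."""
    sensitivity = sum(score < cut for score in disordered) / len(disordered)
    specificity = sum(score >= cut for score in typical) / len(typical)
    return sensitivity, specificity

# Raising the cut score catches more children with a disorder, but
# flags more typically developing children for unneeded testing.
for cut in (78, 82, 86):
    sens, spec = screen_stats(cut, typical, disordered)
    print(f"cut={cut}: sensitivity={sens:.0%}, specificity={spec:.0%}")
```

With these invented numbers, raising the cut from 78 to 86 lifts sensitivity from 50% to 100% while specificity falls from 100% to 70%, which is why many screeners deliberately err toward over-identification.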

What’s New With CELF®-5

What is new about the CELF®-5?

View this great resource that tells you everything that is new about the CELF-5.

Webinar “CELF-5 Overview: What’s New & Different?”
By Adam Scheller, PhD, NCSP

What CELF®-4 Subtests Were Deleted from CELF®-5?

What CELF®-4 Subtests Were Deleted from CELF®-5?

Market research conducted with customers showed strong interest in having more information about pragmatic language skills and written language. While certain CELF-4 subtests provided valuable information for answering questions about specific students, several were used much less frequently than the core subtests.

To keep the CELF-5 battery from growing to 23 subtests, the Working Memory subtests, Number Repetition and Familiar Sequences, were deleted from CELF-5. It is recommended that you work closely with a psychologist, who can help you thoroughly evaluate the effects of attention, working memory, and behavior on language processing.

Language Memory tests are still included in the CELF-5 battery so that you can examine the effect of memory on language skills:

  • Word Definitions
  • Rapid Automatic Naming
  • Phonological Awareness

CELF®–4 Scores Still Valid?

Nancy Castilleja is a senior Product Manager for speech and language products, including PLS-5.

I see the new CELF®-5 is coming out. Are CELF®-4 scores still valid?

According to the Standards for Educational and Psychological Testing, published by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education, a test should be revised “when new research data, significant changes in the domain represented, or newly recommended conditions of the test may lower the validity of test score interpretations.” The CELF-5 was updated based on the USA’s changing demographic landscape and innovations in educational and clinical practices. Some educational programs require that a new edition of a test be adopted within 1–2 years; others do not have these requirements. The CELF®-4 does not automatically become “invalid” when the new test is released; however, as time passes, the normative information collected in 2002 becomes less and less representative of the current population, and therefore less appropriate for use as a measure for comparing students to same-age peers.

CELF®–5 Metalinguistics to replace TLC-E

When will the Test of Language Competence be updated?


CELF®–5 Metalinguistics (Wiig & Secord, in development) is a revision and update of the Test of Language Competence—Expanded Edition (TLC–E) (Wiig & Secord, 1989). CELF–5 Metalinguistics is designed to evaluate delays in the emergence of metalinguistic awareness and knowledge in late-elementary, secondary, and college-level students ages 9–21. The test will cover metalinguistic skills such as making and understanding inferences; using and understanding multiple-meaning words and figurative language; and using conscious processes in formulating spoken or written sentences to meet cultural expectations for conveying messages or expressing emotions or opinions. A standardization version of the CELF–5 Metalinguistics test will be field tested beginning in May. If you are interested in participating in the field test, complete an application. If you have any additional questions about the standardization, please contact clinsamp@pearson.com.

More information on CELF-5 Metalinguistics will be posted to our CELF-5 website soon. Please bookmark the site and return for more information.

Understanding the Results of CCC-2

I have trouble understanding the results I obtain from the Children’s Communication Checklist (CCC-2). For example, I have evaluated a student whom I suspect has a pragmatic disorder. His CELF®-4 language scores are all normal, with a total language score of 100. The pragmatic profile from the CELF-4 is very low, indicating a pragmatic disorder. I gave the CCC-2 and got a General Communication Composite score of 71 with a SIDI of 8. The consistency check is 1. My question is, what does this tell me? I understand 11 or above may indicate SLD, and -11 possibly the autism spectrum. Does the composite score tell me that communicative competence as a whole is 71, or does it imply that because of a pragmatic disorder, communicative competence is low? If it implies a language disorder, it is not consistent with findings on the CELF-4. Or does it imply a language disorder in the area of pragmatics? Please help me understand this.

The CCC-2 is a measure designed to assess children’s communication skills in the areas of pragmatics, syntax, morphology, semantics, and speech. CCC-2 can be used to:

  • Identify children with a pragmatic language impairment by comparing performance in different language domains (e.g., pragmatics vs. syntax and morphology).
  • Identify children who may have a speech and language impairment, and whose receptive and expressive language skills should be further evaluated with a comprehensive speech and language assessment. In other words, CCC-2 may be used to screen children who are suspected of having a speech and language impairment.
  • Assist in identifying children who may require further assessment for an autistic spectrum disorder. That is, CCC-2 may be included as one measure in an assessment battery to diagnose children suspected of having autistic spectrum disorder.

To answer your questions:

  1. Consistency check number. There are many numbers to input when completing the scoring worksheet and deriving scaled scores. The consistency check number (e.g., 1) enables you to verify that the values derived for Subtotal Raw Scores A and B are within the possible range. If Subtotal Raw Score B is less than or equal to 30, the consistency check is passed. In your case, a consistency check number of 1 suggests that the values you entered for the different scales (i.e., speech, syntax, semantics, coherence, initiation, scripted language, context, nonverbal communication, social relations, interests) are accurate.

    It also suggests that the caregiver who provided the ratings was consistent and did not contradict himself or herself when rating items in section 1 (items 1–50) and section 2 (items 51–70). The items in section 1 are statements that refer to difficulties that affect a child’s ability to communicate; in contrast, the items in section 2 are statements that refer to communication strengths a child may demonstrate. If the caregiver were inconsistent in his or her ratings (e.g., rated “talks repetitively about things that no one is interested in” as always (3), and also rated “talks to others about their interests, rather than his/her own” as always (3)), the consistency check number most likely would be above 30, signaling a misstep in either the rating or the scoring process.

  2. General Communication Composite (GCC) score. The GCC is calculated by summing the scaled scores of 10 scales (i.e., speech, syntax, semantics, coherence, initiation, scripted language, context, nonverbal communication, social relations, and interests). In general, the GCC may be used to identify children likely to have significant communication problems: a low GCC indicates that the child’s skills were rated poorly on most of the 10 scales. If the child’s skills are scattered, with some areas of strength (e.g., morphology and syntax) and some areas of weakness (e.g., coherence, scripted language, social relations), the scaled scores may “average out.” In your case, a standard score of 71 suggests that the child demonstrated some areas of strength and some areas of weakness. To check whether this is the case, review the Scaled Scores section of your Scoring Worksheet for a pattern of strengths and weaknesses. Since the child’s CELF-4 language scores are in the normal range, the CCC-2 scales for syntax, semantics, and coherence are likely to be areas of relative strength, with scores in the normal range. The child’s scores in the pragmatic areas (e.g., initiation, nonverbal communication, social relations, interests) are likely to be low, suggesting areas of weakness.
  3. Social Interaction Difference Index (SIDI). The SIDI is a special index derived by subtracting the sum of the scaled scores for the language areas (Speech, Syntax, Semantics, Coherence) from the sum of the scaled scores for the pragmatic areas (Initiation, Nonverbal Communication, Social Relations, Interests). CCC-2 research supports this profile: children with pragmatic difficulties often show relative strengths in the language areas and relative weaknesses in the pragmatic areas (refer to the CCC-2 Manual, page 18, Table 3.1).

    In your case, the child’s SIDI score is 8. If you refer to Table 3.1, 80.43 percent of the CCC-2 sample of children diagnosed with a pragmatic disorder received a SIDI score between -10 and 10. So a score of 8 is one indication that the child you tested may have a pragmatic impairment. For further validation, refer back to the Scoring Worksheet and review each of the 10 communication scales, checking whether the scaled score for each of the pragmatic scales is indeed low.

    In summary, a GCC score of 71 suggests that the child has a language impairment. However, it does not necessarily indicate an impairment in the areas of syntax, morphology, and/or semantics; the low score may be the result of low scaled scores in the pragmatic areas. Examine the 10 scales and look for patterns of strengths and weaknesses to determine the areas in which the child is having difficulty (i.e., linguistic areas and/or pragmatic areas).
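
As a minimal sketch of the two calculations just described, here is the arithmetic with hypothetical scaled scores (the scale names follow the text above; the numbers are invented, and the manual's norm tables, which convert the GCC sum to a standard score, are not reproduced here):

```python
# Hypothetical CCC-2 scaled scores (invented for illustration).
LANGUAGE_SCALES = ["speech", "syntax", "semantics", "coherence"]
PRAGMATIC_SCALES = ["initiation", "nonverbal_communication",
                    "social_relations", "interests"]
OTHER_SCALES = ["scripted_language", "context"]

scaled = {
    "speech": 8, "syntax": 9, "semantics": 8, "coherence": 6,
    "initiation": 6, "nonverbal_communication": 5,
    "social_relations": 6, "interests": 6,
    "scripted_language": 6, "context": 5,
}

def gcc_sum(scores):
    """Sum of all 10 scaled scores; the manual's norm tables convert this sum to the GCC standard score."""
    return sum(scores[s] for s in LANGUAGE_SCALES + PRAGMATIC_SCALES + OTHER_SCALES)

def sidi(scores):
    """Pragmatic-area sum minus language-area sum, as described above."""
    return (sum(scores[s] for s in PRAGMATIC_SCALES)
            - sum(scores[s] for s in LANGUAGE_SCALES))

print("GCC sum:", gcc_sum(scaled))  # 65 with these invented scores
print("SIDI:", sidi(scaled))        # -8, inside the -10 to 10 band of Table 3.1
```

Note how the invented profile mirrors the case in the question: language-area scores near the middle of the scale, pragmatic-area scores low, and a SIDI inside the -10 to 10 band even though the overall composite is depressed.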

Basic Concept Assessment

What originally motivated your work in basic concept assessment?

I began to think about basic concepts many years ago when I was a teacher of young children and working on my doctorate at Teachers College, Columbia University. I became interested in why some children had difficulty following directions such as “Mark all of the words in the top row that begin with the letter g.” As I reviewed curricular materials in reading and mathematics, the relational words stood out.

The use of many of these words was not directly taught at that time. So, for many children there was a hidden curriculum that was needed for success in school. The development of children’s understanding of what I refer to as “basic concepts” became the basis of my doctoral dissertation, the Boehm Test of Basic Concepts, and continues to be an area of importance in my life’s work.

The current version, the Boehm Test of Basic Concepts-3 (BTBC-3), continues to be a planning and problem-solving tool for teachers, speech-language pathologists, and other professionals. It samples a student’s understanding of a large number of essential basic relational concepts—such as before and after—that are important to reading, mathematics and science, following directions, solving problems, and taking other tests. The goal of the BTBC-3 is to identify the basic concepts of space, quantity, and time that children are familiar with or that may be emerging, in order to guide instruction at school and at home.

I will be presenting two upcoming webinars that focus on basic concepts and hope that you will attend:

April 2, 2013: Basic Concept Assessment/Intervention: Building Blocks to School Success
Explores basic concept assessment and intervention planning using a multiple-step model. Presents evidence-based intervention considerations along with checklists to monitor students’ use of concepts across different contexts and as tools of thinking.

April 23, 2013: The Critical Role of Following Directions in the Classroom and at Home
Explores the role of basic concepts in directions that children hear at home and at school, along with strategies to improve children’s ability to follow different kinds of directions.

The Bridge of Vocabulary’s link to the Common Core State Standards (CCSS)


How does The Bridge of Vocabulary link to the Common Core State Standards (CCSS)?

The CCSS add two previously overlooked elements of literacy performance: speaking and listening. So our work in directly linking oral to written language is much clearer under the new standards. To that end, The Bridge of Vocabulary provides over one hundred highly effective speaking and listening instructional strategies for students in grades PreK–12. All of these activities can be linked to the CCSS!

Administering Expository Reading for the OLAI-2

When administering Expository Reading for the OLAI-2, should the examinee have the text in front of him or her for the intra- and extra-personal questions?

The examiner should keep the pictures in front of the student from the beginning of the task. Given the way the record form is laid out, the student reads the passage silently and then the examiner takes the record form to record the answers to the questions (on the back side of the passage in the form itself). Start with that method: showing the student only the pictures, not the text. If the student struggles, note it on the record form; the examiner is then free to show the text as well to see if that helps. Keep in mind that the task is not testing memory; the pictures typically give the student good support after the text is read. In addition, only a few of the questions relate to specific text in the story, so with or without the text, the student still has to draw on personal and world knowledge and his or her own thinking skills.

PLS-5 Additional Norms?

I’ve been using the PLS-5 Norms in 1-Month Increments on the website. They work really well when I’m testing a child who is 2:10 or 2:11, but the scores are still high for children who are 2:6 (in fact, the scores are even higher than the scores in the PLS-5 manual). Why is that?

The scores in the PLS-5 manual show the average score for a child in the 2:6 to 2:11 age range. When a 6-month normative age interval is used, a child who is 2:10 or 2:11 is being compared to a sample of children who are mostly younger than he or she is, so the resulting score may be higher than expected. A child who is 2:6 is being compared to children who are mostly older, so the 6-month norm score may be lower than expected.

The PLS-5 norms in 1-month increments compare a child to peers in the same 1-month age group. With 1-month norms, a younger child (e.g., one who is 2:6) is no longer being compared to a sample of mostly older children, so the score will be higher than the 6-month norm reported in the PLS-5 manual. An older child (e.g., one who is 2:11) is no longer being compared to a sample of mostly younger children, so the score will be lower than the 6-month norm. Children in the middle of the age range (e.g., 2:9) will have scores very similar to the 6-month norms reported in the manual.
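
A toy calculation illustrates the effect. The numbers below are invented (they are not PLS-5 norms); the only assumptions are that average raw scores rise month by month and that standard scores sit on the familiar mean-100, SD-15 scale.

```python
def standard_score(raw, norm_mean, norm_sd):
    """Convert a raw score to the familiar mean-100, SD-15 standard scale."""
    return round(100 + 15 * (raw - norm_mean) / norm_sd)

# Invented 1-month norm-group means for ages 2:6 through 2:11 (SD fixed at 8).
one_month_means = {"2:6": 40, "2:7": 42, "2:8": 44, "2:9": 46, "2:10": 48, "2:11": 50}
six_month_mean = sum(one_month_means.values()) / 6  # 45.0, the pooled band average

raw = 43  # the same raw score, earned by children at both ends of the band
for age in ("2:6", "2:11"):
    vs_1_month = standard_score(raw, one_month_means[age], 8)
    vs_6_month = standard_score(raw, six_month_mean, 8)
    print(f"{age}: {vs_1_month} against 1-month norms, {vs_6_month} against the 6-month band")
```

With these invented means, the same raw score of 43 comes out higher against the 1-month norms for a 2:6 child and lower for a 2:11 child than against the pooled 6-month band, matching the pattern described above.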