Ask The Expert

What’s New With CELF®-5


Question:
What is new about the CELF®-5?

Answer:
View this great resource, which tells you everything that is new about the CELF-5:

Webinar “CELF-5 Overview: What’s New & Different?”
By Adam Scheller, PhD, NCSP

What CELF®-4 Subtests Were Deleted from CELF®-5?


Question:
What CELF®-4 Subtests Were Deleted from CELF®-5?

Answer:
Market research conducted with customers showed strong interest in having more information about pragmatic language skills and written language. While certain CELF-4 subtests provided valuable information for answering questions about specific students, several subtests were used less frequently than the core subtests.

So that the CELF-5 battery would not consist of 23 subtests, the following subtests were deleted from CELF-5:

  • Working Memory tests: Number Repetition and Familiar Sequences
  • Word Definitions
  • Rapid Automatic Naming
  • Phonological Awareness

It is recommended that you work closely with a psychologist, who can work with you to thoroughly evaluate the effects of attention, working memory, and behavior on language processing. Language Memory tests are still included in the CELF-5 battery so that you can examine the effect of memory on language skills.

CELF®–4 Scores Still Valid?


Nancy Castilleja is a senior Product Manager for speech and language products, including PLS-5.

Question:
I see the new CELF®-5 is coming out. Are CELF®-4 scores still valid?

Answer:
According to the Standards for Educational and Psychological Testing, published by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education, a test should be revised “when new research data, significant changes in the domain represented, or newly recommended conditions of the test may lower the validity of test score interpretations.” The CELF-5 was updated in response to the USA’s changing demographic landscape and to innovations in educational and clinical practices. Some educational programs require that a new edition of a test be adopted within 1–2 years; others do not have these requirements. The CELF®-4 does not automatically become “invalid” when the new test is released. However, as time passes, the normative information collected in 2002 becomes less and less representative of the current population, and therefore less appropriate for comparing students to same-age peers.

CELF®–5 Metalinguistics to replace TLC-E


Question:
When will the Test of Language Competence be updated?

Answer:


CELF®-5 Metalinguistics (Wiig & Secord, in development) is a revision and update of the Test of Language Competence—Expanded Edition (TLC–E) (Wiig & Secord, 1989). CELF-5 Metalinguistics is designed to evaluate delays in the emergence of metalinguistic awareness and knowledge in late-elementary, secondary, and college-level students ages 9–21. The test will cover metalinguistic skills such as the ability to make and understand inferences; to use and understand words with multiple meanings and figurative language; and to use conscious processes in formulating spoken or written sentences to meet cultural expectations for conveying messages or expressing emotions or opinions. A standardization version of the CELF-5 Metalinguistics test will be field tested beginning in May. If you are interested in participating in the field test, complete an application. If you have any additional questions about the standardization, please contact clinsamp@pearson.com.

More information on CELF-5 Metalinguistics will be posted to our CELF-5 website soon. Please bookmark the site and check back for updates.

Understanding the Results of CCC-2


Question:
I have trouble understanding the results I obtain from the Children’s Communication Checklist (CCC-2). For example, I have evaluated a student whom I suspect has a pragmatic disorder. His CELF®-4 language scores are all normal, with a total language score of 100. The Pragmatics Profile from the CELF-4 is very low, indicating a pragmatic disorder. I gave the CCC-2 and got a General Communication Composite score of 71 with a SIDI of 8. The consistency check is 1. My question is, what does this tell me? I understand that a SIDI of 11 or above may indicate SLD and that -11 may indicate the child is on the autism spectrum. Does the communication score tell me that communication competence as a whole is 71, or does it imply that, because of a pragmatic disorder, communicative competence is low? If it implies a language disorder, that is not consistent with the findings on the CELF-4. Or does it imply a language disorder in the area of pragmatics? Please help me understand this.

Answer:
The CCC-2 is a measure designed to assess children’s communication skills in the areas of pragmatics, syntax, morphology, semantics, and speech. CCC-2 can be used to:

  • Identify children with a pragmatic language impairment by comparing performance in different language domains (e.g., pragmatics vs. syntax and morphology).
  • Identify children who may have a speech and language impairment, and whose receptive and expressive language skills should be further evaluated with a comprehensive speech and language assessment. In other words, CCC-2 may be used to screen children who are suspected of having a speech and language impairment.
  • Assist in identifying children who may require further assessment for an autistic spectrum disorder. That is, CCC-2 may be included as one measure in an assessment battery to diagnose children suspected of having autistic spectrum disorder.

To answer your questions:

  1. Consistency Check number. There are many numbers to input when completing the scoring worksheet and deriving scaled scores. The consistency check number (in your case, 1) enables you to verify that the values derived for Subtotal Raw Scores A and B are within the possible range. If Subtotal Raw Score B is less than or equal to 30, the consistency check is passed. Your consistency check number of 1 suggests that the values you entered for the different scales (i.e., speech, syntax, semantics, coherence, initiation, scripted language, context, nonverbal communication, social relations, interests) are accurate.

    It also suggests that the caregiver who provided the ratings was consistent and did not contradict himself or herself when rating items in section 1 (items 1–50) and section 2 (items 51–70). The items in section 1 are statements that refer to difficulties that affect a child’s ability to communicate; in contrast, the items in section 2 are statements that refer to the communication strengths a child may demonstrate. If the caregiver were inconsistent in his or her ratings (e.g., rated “talks repetitively about things that no one is interested in” as always (3) and also rated “talks to others about their interests, rather than his/her own” as always (3)), the consistency check number would most likely be above 30, signaling a misstep in either the rating or the scoring process.

  2. General Communication Composite score (GCC). The GCC is calculated by summing the scaled scores of 10 scales (i.e., speech, syntax, semantics, coherence, initiation, scripted language, context, nonverbal communication, social relations, and interests). In general, the GCC may be used to identify children likely to have significant communication problems; a low GCC indicates that the child’s skills were rated poorly on most of the 10 scales. If the child’s skills were scattered, with some areas of strength (e.g., morphology and syntax) and some areas of weakness (e.g., coherence, scripted language, social relations), the scaled scores may “average out.” In your case, a standard score of 71 suggests that the child demonstrated some areas of strength and some areas of weakness. To check whether this is the case, review the Scaled Scores section of your Scoring Worksheet for a pattern of strengths and weaknesses. Because the child’s CELF-4 language scores are in the normal range, the CCC-2 scales for syntax, semantics, and coherence are likely to be areas of relative strength that score in the normal range, while the scores in the pragmatic areas (e.g., initiation, nonverbal communication, social relations, interests) are likely to be low, suggesting areas of weakness.
  3. Social Interaction Difference Index (SIDI). The SIDI is a special index derived by subtracting the sum of the scaled scores for the language areas (Speech, Syntax, Semantics, Coherence) from the sum of the scaled scores for the pragmatic areas (Initiation, Nonverbal Communication, Social Relations, Interests). CCC-2 research supports this profile: children with pragmatic difficulties often demonstrate relative strengths in the language areas and relative weaknesses in the pragmatic areas (refer to the CCC-2 Manual, page 18, Table 3.1). A small worked sketch of how these sums are assembled appears after this list.

    In your case, the child’s SIDI score is 8. If you refer to Table 3.1, 80.43 percent of the CCC-2 sample of children diagnosed with a pragmatic disorder received a SIDI score between -10 and 10. So a score of 8 is one indication that the child you tested may have a pragmatic impairment. For further validation, you can refer back to the Scoring Worksheet and review each of the 10 communication scales, checking whether the scaled scores for the pragmatic scales are indeed low.

    In summary, a GCC score of 71 suggests that the child has a language impairment. However, it does not necessarily indicate an impairment in the areas of syntax, morphology, and/or semantics; the low score may be the result of low scaled scores in the areas of pragmatics. You need to examine the 10 scales and look for patterns of strengths and weaknesses to determine the areas in which the child is having difficulty (e.g., linguistic areas and/or pragmatic areas).
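To make the arithmetic in items 2 and 3 concrete, here is a minimal sketch in Python. The scaled scores below are invented for illustration, and the snippet is not the CCC-2 scoring software; it only assembles the sums described above, so the manual and scoring worksheet remain the source for the reported scores.

    # Hypothetical scaled scores for the ten CCC-2 scales named in item 2
    # (invented numbers; real values come from the scoring worksheet).
    scaled_scores = {
        "speech": 10, "syntax": 11, "semantics": 9, "coherence": 8,
        "initiation": 4, "scripted_language": 5, "context": 5,
        "nonverbal_communication": 3, "social_relations": 4, "interests": 5,
    }

    # GCC (item 2): based on the sum of the scaled scores of all ten scales.
    gcc_sum = sum(scaled_scores.values())

    # SIDI (item 3): sum of the pragmatic-area scaled scores minus the sum
    # of the language-area scaled scores.
    language_areas = ["speech", "syntax", "semantics", "coherence"]
    pragmatic_areas = ["initiation", "nonverbal_communication",
                       "social_relations", "interests"]
    sidi = (sum(scaled_scores[s] for s in pragmatic_areas)
            - sum(scaled_scores[s] for s in language_areas))

    print("Sum of the ten scaled scores:", gcc_sum)           # 64 with these numbers
    print("SIDI (pragmatic sum minus language sum):", sidi)   # -22 with these numbers

With this invented profile (stronger language scales, weaker pragmatic scales), the pragmatic-minus-language difference comes out negative; substituting the values from your own worksheet shows where the composite sum and the difference index are coming from.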

Basic Concept Assessment


Question:
What originally motivated your work in basic concept assessment?

Answer:
I began to think about basic concepts many years ago when I was a teacher of young children and working on my doctorate at Teachers College, Columbia University. I became interested in why some children had difficulty following directions such as “Mark all of the words in the top row that begin with the letter g.” As I reviewed curricular materials in reading and mathematics, the relational words stood out.

The use of many of these words was not directly taught at that time. So, for many children there was a hidden curriculum that was needed for success in school. The development of children’s understanding of what I refer to as “basic concepts” became the basis of my doctoral dissertation, the Boehm Test of Basic Concepts, and continues to be an area of importance in my life’s work.

The current version, the Boehm Test of Basic Concepts-3 (BTBC-3), continues to be a planning and problem-solving tool for teachers, speech-language pathologists, and other professionals. It samples a student’s understanding of a large number of essential basic relational concepts (such as before and after) that are important to reading, mathematics and science, following directions, solving problems, and taking other tests. The goal of the BTBC-3 is to identify the basic concepts of space, quantity, and time that children are familiar with, or that may be emerging, in order to guide instruction at school and at home.

I will be presenting two upcoming webinars that focus on basic concepts and hope that you will attend:

April 2, 2013: Basic Concept Assessment/Intervention: Building Blocks to School Success
Explores basic concept assessment and intervention planning using a multiple-step model. Presents evidence-based intervention considerations along with checklists to monitor students’ use of concepts across different contexts and as tools of thinking.

April 23, 2013: The Critical Role of Following Directions in the Classroom and at Home
Explores the role of basic concepts in directions that children hear at home and at school, along with strategies to improve children’s ability to follow different kinds of directions.

The Bridge of Vocabulary’s link to the Common Core State Standards (CCSS)


Judy Montgomery, author of The Bridge of Vocabulary

Question:
How does The Bridge of Vocabulary link to the Common Core State Standards (CCSS)?

Answer:
The CCSS add two previously overlooked elements of literacy performance: speaking and listening. So our work in directly linking oral to written language is much clearer through the new standards. To that end, The Bridge of Vocabulary provides over a hundred highly effective speaking and listening instructional strategies for students in grades PreK–12. All of these activities can be linked to the CCSS!

Administering Expository Reading for the OLAI-2


Question:
When administering Expository Reading for the OLAI-2, should the examinee have the text in front of him/her for the intra- and extra-personal questions?

Answer:
The examiner should have the pictures still in front of the student from the beginning of the task. Given the way the record form is laid out, the student reads the passage silently and then the examiner takes the record form to record the answers to the questions (on the back of the passage in the form itself). Have the examiner start with that method, showing only the pictures, not the text, to the student. If the student struggles, this should be noted on the record form, and the examiner is then free to show the text as well to see if that helps. Keep in mind that the task is not testing memory; the pictures are typically a good support to the student after the text has been read. In addition, only a few of the questions are related to specific text in the story, so with or without the text, the student must still use personal and world knowledge and his or her own thinking skills.

PLS-5 Additional Norms?



Question:
I’ve been using the PLS-5 norms in 1-month increments on the website. They work really well when I’m testing a child who is 2:10 or 2:11, but the scores are still high for children who are 2:6 (in fact, the scores are even higher than the scores in the PLS-5 manual). Why is that?
 
Answer:
The scores in the PLS-5 manual show the average score for a child in the 2:6 to 2:11 age range. When a 6-month normative interval is used, a child who is 2:10 or 2:11 is being compared to a sample of children who are mostly younger than he or she is, so the resulting score may be higher than expected. A child who is 2:6 is being compared to children who are mostly older than he or she is, so the 6-month norm score may be lower than expected.

The PLS-5 norms in 1-month increments compare a child to peers in the same 1-month age group. With 1-month norms, a younger child (e.g., one who is 2:6) is no longer being compared to a sample of children who are mostly older, so the score will be higher than the 6-month norm reported in the PLS-5 manual. An older child (e.g., one who is 2:11) is no longer being compared to a sample of children who are mostly younger, so the score will be lower than the 6-month norm reported in the manual. Children in the middle of the age range (e.g., age 2:9) will have scores very similar to the 6-month norms reported in the manual.
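If it helps to see the band arithmetic spelled out, here is a minimal sketch in Python. It is only an illustration of the reasoning above, not the PLS-5 norming procedure; the 30–35 month band simply mirrors the 2:6–2:11 example, and ages are expressed in months (2:6 = 30 months).

    # Ages in months: 2:6 = 30 months, 2:9 = 33 months, 2:11 = 35 months.
    def six_month_band(age_months):
        """Return the 6-month normative band containing this age, e.g., 30-35."""
        start = (age_months // 6) * 6
        return start, start + 5

    for label, age in [("2:6", 30), ("2:9", 33), ("2:11", 35)]:
        lo, hi = six_month_band(age)
        print(f"{label} ({age} months): 6-month band = {lo}-{hi} months "
              f"(band midpoint {(lo + hi) / 2}); 1-month band = {age} months only")

    # The printout makes the point in the answer visible: within the 30-35 month
    # band, a 2:6 child falls below the band midpoint (compared mostly to older
    # peers, so the 6-month score runs lower than expected), a 2:11 child falls
    # above it (compared mostly to younger peers, so the score runs higher), and
    # a 2:9 child sits near the midpoint, so 1-month and 6-month norms agree.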

CLQT Administration


Nancy Helm-Estabrooks, ScD, CCC-SLP, BC-ANCDS – author of CLQT

Question:
On Symbol Trails, the examinee did Trials 1 & 2 correctly, but did not follow the instructions on the actual scored task. The examinee kept repeating “circle to triangle,” but she drew the lines in a scattered fashion, not paying attention to connecting circles to triangles or connecting objects of increasing size. According to the scoring criteria, the examinee completed 7 lines correctly. Is the score actually 7? Should the examiner consider the subtest to be spoiled?

Answer:
If the examiner follows the guidelines for instructions to the examinee, credit should be given for the lines connected correctly. The score is indeed a 7, and standard scoring procedures should be followed and reported. At the same time, the clinician needs to make a judgment about whether that score reflects intentional performance and should qualify any concerns in the report. Certainly, the verbal repetition of “circle to triangle” could be an indicator of lack of attention and “random drawing” (which ended up being rather accurate in this case), or it could simply be verbal rehearsal and a perseverative self-monitoring strategy during the task. Only the clinician giving the test can make the best judgment about that. The scoring, however, is based on actual performance, given correct administration procedures.