
Guidance for Maintaining Rater Reliability on the ECERS-R, PCMI and SELA

The reliability of raters using a classroom observation instrument is a fundamental concern. Regardless of the instrument(s) a district chooses, a system for sustaining reliability should be in place. Reliability concerns the quality of a measure's use, specifically its consistency and repeatability: the process helps observers use the measure in the manner in which it is intended to be used. Before reliability visits, observers should meet to discuss each item and confirm that they are all interpreting the items in the same way. It is important that master teachers and administrators use classroom observation instruments consistently, so that they can score appropriately and use the data to inform program improvement.

For the ECERS-R, it is recommended that a sample of master teachers from the district go out every couple of years with a reliable observer and achieve agreement on at least 80% of items in a single observation, where agreement means an item score no more than 1 point away from the reliable observer's score. The same protocol should be followed for the SELA and PCMI; however, reliability on those instruments is achieved when at least 70% of the scores match exactly. Contact the New Jersey Division of Early Childhood Education (DECE) for a list of universities with reliable ECERS-R, PCMI and SELA observers.
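As a concrete illustration of the two agreement rules, the sketch below computes the percentage of matching item scores from a single joint observation. The item scores are hypothetical, and districts are not expected to compute agreement in code; the example simply makes the within-1-point rule (ECERS-R) and the exact-match rule (PCMI and SELA) explicit.

    def percent_agreement(observer_scores, reference_scores, within=0):
        """Share of items on which two raters agree.

        within=1 implements the ECERS-R rule (scores no more than
        1 point apart); within=0 implements the exact-match rule
        used for the PCMI and SELA.
        """
        matches = sum(
            1 for a, b in zip(observer_scores, reference_scores)
            if abs(a - b) <= within
        )
        return matches / len(observer_scores)

    # Hypothetical item scores from one joint observation:
    master_teacher = [5, 4, 6, 3, 5, 7, 4, 5, 6, 2]
    reliable_observer = [5, 5, 6, 2, 5, 7, 3, 5, 6, 2]

    # ECERS-R criterion: at least 0.80 of items within 1 point.
    print(percent_agreement(master_teacher, reliable_observer, within=1))  # 1.0
    # PCMI/SELA criterion: at least 0.70 of items matching exactly.
    print(percent_agreement(master_teacher, reliable_observer, within=0))  # 0.7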

Master teachers should also periodically observe a classroom together using the structured observation instrument(s) and then check for inter-rater reliability after scoring. This process begins by reviewing each master teacher's item scores to see whether they match or differ by 1 or more points. The master teachers then reach consensus on what each item's score should be and, using the agreed-upon score for each item, determine whether they achieved at least an 80% level of agreement for the ECERS-R or 70% exact agreement for the PCMI and SELA. If the district employs only one master teacher, that master teacher should contact a master teacher from another district to go through the inter-rater reliability process.
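The consensus check can be sketched the same way. In the hypothetical example below, each master teacher's item scores are compared against the consensus scores the group agreed on after discussion; the thresholds are the same as above.

    def agreement_with_consensus(scores, consensus, within=0):
        """Fraction of items on which a rater's scores fall within
        `within` points of the agreed-upon consensus scores."""
        hits = sum(1 for s, c in zip(scores, consensus) if abs(s - c) <= within)
        return hits / len(consensus)

    # Hypothetical consensus reached after the master teachers discuss each item:
    consensus = [5, 4, 6, 3, 5, 6, 4, 5, 6, 3]

    raters = {
        "Master teacher A": [5, 4, 5, 3, 5, 6, 4, 5, 7, 3],
        "Master teacher B": [4, 4, 6, 2, 5, 6, 3, 5, 6, 3],
    }

    for name, scores in raters.items():
        ecers = agreement_with_consensus(scores, consensus, within=1)  # ECERS-R: >= 0.80
        exact = agreement_with_consensus(scores, consensus, within=0)  # PCMI/SELA: >= 0.70
        print(f"{name}: within-1 agreement {ecers:.0%}, exact agreement {exact:.0%}")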

Maintaining reliability helps control for observer bias and contextual variance and safeguards data quality. To ensure they interpret each item correctly, master teachers should periodically review the scoring rubric and practice with the reliable observer to confirm that they are maintaining their reliability when using a structured observation instrument.

Feel free to contact Renee Whelan with any questions at renee.whelan@doe.state.nj.us.