Inter-rater reliability among principals using the instructional domain of the Tennessee Educator Acceleration Model
Highly qualified and effective teachers play a significant role in student achievement. Teacher evaluations are administered to determine the competency of the teacher while providing opportunities for professional development and growth. The recent focus on securing effective teacher evaluation systems has elevated the need for a more equitable process of teacher evaluation. Although raters are being trained, concerns have arisen that individual raters may be either too lenient or too strict. The problem that prompted this study was the lack of consistency among evaluators using the Tennessee Educator Acceleration Model (TEAM) to evaluate teachers: the scores teachers receive under TEAM can vary with the evaluator and the circumstances of the observation. The purpose of this quantitative correlational study was to determine the level of inter-rater reliability that existed among elementary education evaluators on the Instructional Domain of the TEAM in Tennessee school districts currently using TEAM. The study also examined the effectiveness of the evaluator training offered by the district by comparing the Instructional Domain scores of evaluators who attended the training to those of evaluators who did not. A reliability analysis using Cronbach's alpha was conducted to examine whether there was internal consistency in the Instructional Domain components of the TEAM General Education Rubric, while a correlation analysis was conducted to examine inter-rater reliability on the Instructional Domain of the TEAM, comparing evaluators who had participated in evaluator training with those who had not. The degree of inter-rater reliability for principals evaluating a pre-service teacher yielded a Cronbach's alpha of .900, indicating that inter-rater reliability was high among principals scoring the Instructional Domain of TEAM.
An analysis of the effect that TEAM training had on the inter-rater reliability of the evaluators could not be performed because only one participant did not receive the training. Evaluators and teachers would benefit from this study of inter-rater reliability among evaluators, as it examines the rationale for the level of agreement and the reasons for inconsistent ratings. This would in turn lead to a more equitable evaluation system.
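To illustrate the reliability statistic reported above, the following is a minimal sketch of how Cronbach's alpha can be computed from a ratings matrix. The data here are hypothetical (not the study's), with rows representing scored rubric indicators and columns representing the principals rating them; the function name `cronbach_alpha` is an illustrative choice, not part of TEAM or the study.

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(scores):
    """Cronbach's alpha for a matrix of scores.

    `scores` is a list of rows; each row holds one subject's (here, one
    rubric indicator's) scores from k raters, one column per rater.
    alpha = (k / (k - 1)) * (1 - sum(rater variances) / variance of totals)
    """
    k = len(scores[0])                                   # number of raters
    rater_vars = [variance(col) for col in zip(*scores)]  # per-rater variance
    total_var = variance([sum(row) for row in scores])    # variance of row sums
    return (k / (k - 1)) * (1 - sum(rater_vars) / total_var)

# Hypothetical ratings: 4 rubric indicators scored by 3 principals.
ratings = [
    [4, 4, 5],
    [3, 3, 4],
    [5, 4, 5],
    [2, 2, 3],
]
print(round(cronbach_alpha(ratings), 3))
```

Values near 1.0, such as the .900 reported in this study, indicate that the raters' scores move together consistently across the items being rated.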
Educational evaluation|Teacher education
ETD Collection for Tennessee State University.