Volume 45, Issue 1
(March 2016)





School Psychology Review

Journal Information

Online ISSN:  
Frequency: Quarterly



Dependability of Two Scaling Approaches to Direct Behavior Rating Multi-Item Scales Assessing Disruptive Classroom Behavior

Robert J. Volpe and Amy M. Briesch

Northeastern University

Please address correspondence regarding this article to Robert J. Volpe, Northeastern University, Department of Applied Psychology, 413 International Village, Boston, MA 02115; e-mail:

Article accepted by previous Editor

Robert J. Volpe is an associate professor in the Department of Applied Psychology at Northeastern University and Co-Director of the Center for Research in School-based Prevention. His research focuses on designing academic and behavioral interventions for students with disruptive behavior disorders, as well as feasible systems for assessing student behavior in problem-solving models. He is President-Elect of the Society for the Study of School Psychology.

Amy M. Briesch is an associate professor in the Department of Applied Psychology at Northeastern University and Co-Director of the Center for Research in School-based Prevention. Her research interests involve the role of student involvement in intervention design and implementation, as well as the development of feasible and psychometrically sound measures for the assessment of student behavior in multitiered systems of support.

Associate Editor: Stephen P. Kilgus

Abstract

This study examines the dependability of two scaling approaches for using a five-item Direct Behavior Rating multi-item scale to assess student disruptive behavior. A series of generalizability theory studies was used to compare a traditional frequency-based scaling approach with an approach wherein the informant compares a target student's behavior with that of classroom peers. A total of seven novice raters (i.e., graduate students) used both types of scales to rate 10-min video clips of the classroom behavior of nine middle school students across three occasions. Generalizability of composite scores derived from each type of scale was examined across raters and occasions. Subsequent decision studies were conducted to determine the number of measurement occasions that would be required to obtain an acceptable level of dependability. Results of these studies indicated that the type of scale accounted for a substantial proportion of variance (29%) and that the traditional frequency approach required far fewer assessment occasions to reach the criterion for absolute and relative decisions (4 and 8 occasions, respectively) compared with the comparative scaling approach (>30 occasions). Implications for future research and current practice are discussed.
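The decision-study logic summarized above, projecting how dependability changes as measurement occasions are added, can be sketched as follows. This is a minimal illustration of a fully crossed persons x raters x occasions random-effects design; the variance components, the single-rater design, and the .80 criterion below are illustrative assumptions, not the estimates reported in this study.

```python
# Illustrative D-study projection for a persons (p) x raters (r) x occasions (o) design.
# All variance components are hypothetical placeholders, not this study's estimates.

def phi_coefficient(var_p, var_r, var_o, var_pr, var_po, var_ro, var_pro_e,
                    n_r, n_o):
    """Absolute dependability (phi) for a fully crossed p x r x o random design.

    Averaging over n_r raters and n_o occasions shrinks each error component
    by the number of conditions of the facet(s) it involves.
    """
    error = (var_r / n_r + var_o / n_o + var_pr / n_r + var_po / n_o
             + var_ro / (n_r * n_o) + var_pro_e / (n_r * n_o))
    return var_p / (var_p + error)

# Hypothetical variance components: person, rater, occasion,
# two-way interactions, and the residual (p x r x o plus error).
components = dict(var_p=0.50, var_r=0.02, var_o=0.08,
                  var_pr=0.03, var_po=0.12, var_ro=0.02, var_pro_e=0.25)

# Smallest number of occasions (with one rater) needed to reach phi >= .80.
n_o = 1
while phi_coefficient(**components, n_r=1, n_o=n_o) < 0.80:
    n_o += 1
print(n_o)  # 7 occasions for these illustrative values
```

Relative (norm-referenced) decisions would use a smaller error term that drops the rater and occasion main effects, which is why fewer occasions are typically needed for relative than for absolute decisions.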

Received: October 28, 2014; Accepted: June 2, 2015.

Copyright 2016 by the National Association of School Psychologists