Tips From Dr. Marzano

A Handbook for High Reliability Schools: The Next Step in School Reform

The following tips are designed to assist you in applying the latest research in tangible ways in your classroom, school, or district.

A high reliability school, by definition, monitors the effectiveness of critical factors within the system and immediately takes action to contain the negative effects of any errors that occur.

These schools have several things in common, including high, clear, shared goals; real-time, understandable, comprehensive data systems; collaborative environments; flexibility; formalized operating procedures; a focus on best practices and expertise over seniority; rigorous teacher performance evaluations; and clean, well-functioning campuses. (pp. 1–2)

In order to know what to work on and to measure their success at each level, school leaders need ways to assess their school’s current status, gauge their progress through each level, and confirm successful achievement of each level.

Leading and lagging indicators are useful to these ends. The distinction between leading and lagging indicators is this: leading indicators show what a school should work on to achieve a high reliability level (they provide direction), and lagging indicators are the evidence a school gives to validate its achievement of a high reliability level (they provide proof), particularly in areas where there is general agreement that the school is not doing well. (p. 5)

Problem prevention is an excellent reason to constantly monitor critical factors and address errors immediately.

However, it is not the only reason to monitor performance. Tracking performance using quick data allows school leaders to celebrate successes with staff members, parents, and students. (p. 11)

Level 4 of the Marzano High Reliability Schools™ model requires schools to implement standards-referenced reporting, which means that student achievement is measured and reported through the use of specific measurement topics in each subject area.

Unlike standards-based grading systems, standards-referenced systems do not require students to become proficient in a particular measurement topic before they move on to the next topic. Instead, students’ mastery of each measurement topic in a subject area is reflected on their report cards as a score. Standards-referenced reporting can act as a stepping stone to the more comprehensive overhaul of school reporting, organization, and priorities that a standards-based system requires (p. 83).

To reach high reliability status for level 4, schools must first develop proficiency scales for their essential content.

Proficiency scales outline the simpler content, target learning goal, and more complex content for the measurement topics in a subject area. Students earn scores for each level of the proficiency scale they master:

  Score 4.0: the student has mastered the more complex content.
  Score 3.0: the student has mastered the target learning goal.
  Score 2.0: the student has mastered the simpler content.
  Score 1.0: the student has partial success with help.
  Score 0.0: the student does not demonstrate success with any of the content, even with help.

Importantly, proficiency scales promote consistent expectations for student achievement across teachers, subject areas, and grade levels by clearly delineating what constitutes student mastery. Every essential topic in every grade level needs its own proficiency scale, which schools can develop through the use of collaborative subject-area or grade-level teams and guiding state or national standards. Additionally, Marzano Research offers a number of resources that can assist schools with the creation or implementation of proficiency scales, such as a proficiency scale bank at itembank.marzanoresearch.com and personalized scale consultation services (pp. 89–91).

Once schools have developed proficiency scales for each subject area, they can begin reporting students’ status and growth on report cards using these scales.

Schools may need to redesign report cards so that they better reflect students’ scores on individual measurement topics. For instance, a standards-referenced report card would report a student’s scores for the five measurement topics covered in a course’s grading period. Standards-referenced report cards can also feature bar graphs that illustrate a student’s growth from the beginning of the grading period to the end. While schools can report student scores using 0.0–4.0 ranges, they can also translate proficiency scale scores into summative number or letter grades by establishing a conversion scale. One school may determine, for example, that scores 3.00–4.00 are an A grade, scores 2.50–2.99 are a B grade, scores 2.00–2.49 are a C grade, scores 1.00–1.99 are a D grade, and scores below 1.00 are an F grade (pp. 91–94).
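For schools that automate grading in a spreadsheet or gradebook script, a conversion scale like the one above can be expressed as a simple lookup. The sketch below uses the example cutoffs from the text; the cutoffs and function name are illustrative, not prescribed by the handbook.

```python
def letter_grade(score: float) -> str:
    """Convert a 0.0-4.0 proficiency scale score to a letter grade.

    Cutoffs follow the example conversion scale in the text
    (3.00-4.00 = A, 2.50-2.99 = B, 2.00-2.49 = C, 1.00-1.99 = D,
    below 1.00 = F); a school would choose its own.
    """
    if not 0.0 <= score <= 4.0:
        raise ValueError("Proficiency scale scores range from 0.0 to 4.0")
    if score >= 3.0:
        return "A"
    if score >= 2.5:
        return "B"
    if score >= 2.0:
        return "C"
    if score >= 1.0:
        return "D"
    return "F"

print(letter_grade(3.25))  # A
print(letter_grade(2.49))  # C
```

Because each range is checked from the top down, a score on a boundary (such as 2.50) falls into the higher grade, matching the inclusive lower bounds of the example scale.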

Level 5, the topmost HRS designation, can only be achieved after a school has effectively implemented competency-based education.

Competency-based education (or standards-based education) refers to a system in which students move to the next level of learning when they have demonstrated competence with the material at the previous level. This stands in contrast to the more traditional model of schooling in which students move forward after spending a specific amount of time in class. In order for school leaders to assess their progress toward competency-based education, they can use the following three leading indicators:

  1. Students move on to the next level of the curriculum for any subject area only after they have demonstrated competence at the previous level.
  2. The school schedule is designed to accommodate students moving at a pace appropriate to their situation and needs.
  3. Students who have demonstrated competency levels greater than those articulated in the system are afforded immediate opportunities to begin work on advanced content and/or career paths of interest (p. 101).

In order to implement a competency-based system, a school must eliminate time requirements.

The amount of learning a student completes can be thought of as the time spent learning the material divided by the time needed to learn the material (see John Carroll’s model, 1963, 1989). In a competency-based system, students progress through the various levels of mastery at their own pace. For example, if a particular student needs ten hours to learn specific content, but the school only allows the student five hours with the content before moving on, the student will not learn the material well (p. 107).
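Carroll's ratio can be worked through directly. A minimal sketch, with illustrative variable names (the cap at 1.0 reflects that a student cannot learn more than all of the material):

```python
def proportion_learned(time_spent_hours: float, time_needed_hours: float) -> float:
    """Carroll's model: learning = time spent / time needed, capped at 1.0.

    Function and parameter names are illustrative, not from the handbook.
    """
    if time_needed_hours <= 0:
        raise ValueError("Time needed to learn must be positive")
    return min(time_spent_hours / time_needed_hours, 1.0)

# The example from the text: the student needs ten hours but gets five.
print(proportion_learned(5, 10))  # 0.5
```

Halving the time a student needs yields only about half the learning, which is the handbook's argument for letting pace vary instead of fixing seat time.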

In order to implement a competency-based system, a school must make adjustments to its reporting systems.

Competency-based systems require reporting that represents students’ competence as progress through a sequence of knowledge and skills. Schools can change report cards to represent subjects by levels and use ratios to represent the number of learning targets in a level a student has mastered. For example, the grade of a student who has mastered 23 of 40 learning targets in a level would be reported as “23 of 40.” Once all 40 learning targets are mastered, the student moves on to the next level (p. 108).
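In a student information system, this ratio-style report and the advancement rule reduce to two small checks. A hypothetical sketch (the function names are illustrative):

```python
def level_status(mastered: int, total: int) -> str:
    """Report a student's standing in a level as a ratio, e.g. '23 of 40'."""
    if not 0 <= mastered <= total:
        raise ValueError("Mastered targets must be between 0 and the level total")
    return f"{mastered} of {total}"

def ready_to_advance(mastered: int, total: int) -> bool:
    """A student moves on to the next level only after mastering every target."""
    return mastered == total

print(level_status(23, 40))      # 23 of 40
print(ready_to_advance(23, 40))  # False
```

The report card entry and the advancement decision read from the same two numbers, which keeps the reporting system aligned with the competency rule.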

For decades, school leaders have used educational research to select individual factors to implement in their schools. While these efforts are laudable, they represent too narrow a focus. All of the research-based factors need to be arranged in a hierarchy that will allow schools to focus on sets of related factors, progressively addressing and achieving increasingly more sophisticated levels of effectiveness.

The Marzano High Reliability Schools™ model involves five hierarchical levels that schools can use to achieve high reliability status. A high reliability school is one in which school leaders monitor the effectiveness of critical factors within the system and immediately take action to contain the negative effects of any errors that occur. The hierarchical nature of the model is one of its most powerful aspects. Each level guarantees that a school is also performing at all of the lower levels. So, if a school is working on level 4 and has achieved levels 1, 2, and 3, it is guaranteed that outcomes associated with levels 1 to 3 are in place (pp. 3, 11).

In order to know what to work on and to measure their success at each level, school leaders need ways to assess their school’s current status, gauge their progress through each level, and confirm successful achievement of each level. Leading and lagging indicators are useful to these ends.

Leading indicators show what a school should work on to achieve a high reliability level (they provide direction). For example, at level 1, one leading indicator is “Faculty and staff perceive the school environment as safe and orderly.” School leaders can use a survey to measure the extent to which faculty and staff perceive the school environment as safe and orderly. Lagging indicators are the evidence a school gives to validate its achievement of a high reliability level (they provide proof). For example, at level 1, a school where the faculty and staff do not perceive the school environment as safe and orderly (a leading indicator) might formulate the following lagging indicator to measure their progress toward a safe and orderly environment: “Few, if any, incidents occur in which rules and procedures are not followed” (pp. 4–5).

A criterion score is the score a school is aiming to achieve for a particular lagging indicator.

To meet lagging indicators, such as “Few, if any, incidents occur in which rules and procedures are not followed,” school leaders must determine how many incidents constitute a “few.” This number is called a criterion score. If results meet the criterion score, the school considers itself to have met the lagging indicator. If results do not meet the criterion score, the school continues or adjusts its efforts until it does meet the criterion score. To design criterion scores, school leaders can state that a certain percentage of responses or data collected will meet a specific criterion, set a cutoff score below which no responses or data will fall, set criterion scores that indicate specific amounts of growth, or center criterion scores around the creation of concrete products, such as written goals or action plans (p. 5).
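For a count-based lagging indicator like the incident example, the criterion-score check is a single comparison. A minimal sketch, assuming a school has decided that "few" means five or fewer incidents per reporting period (that threshold is purely illustrative):

```python
def meets_criterion(incident_count: int, criterion_score: int) -> bool:
    """Check whether a lagging indicator is met: the number of incidents
    in which rules and procedures were not followed must stay at or
    below the school's chosen criterion score."""
    return incident_count <= criterion_score

CRITERION_SCORE = 5  # hypothetical: "few" defined as five or fewer per period

print(meets_criterion(3, CRITERION_SCORE))  # True
print(meets_criterion(9, CRITERION_SCORE))  # False
```

When the check fails, the school continues or adjusts its efforts and re-measures, repeating until the criterion score is met.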