I recently posted two discussions on LinkedIn questioning the need, viability and justification for continuing with the current system of teacher evaluation: “TEACHER OBSERVATIONS DON’T WORK… BUT WHY?” and “WHY ARE TEACHERS EVALUATED?” The responses showed wide-ranging disparity on both the effectiveness and the worth of the evaluative system, balanced by strong support from teachers for change in this area. These are the general comparisons from both schools of thought.
- Administrators – in general – believe there is an absolute need for evaluations to continue largely in their current form, with only minor modifications. Overall, they feel teacher observations are a necessary element of teaching and are confident they are producing “some type of” desired result.
- Teachers overwhelmingly believe there is ample room for improvement in evaluations to promote teacher growth and development (did administrators feel this way when they were teachers?) and suggested a variety of options, from minor modifications of the current system to eliminating it completely. Overall, they do not share administrators’ sense that the observation process is succeeding.
These discussions were intended to initiate dialogue for change in schools in one area – teacher observations – to bring about improved student success, which we can all agree is urgently needed. What better place to begin than with a program that most regard as suspect, that is highly subjective at best, and that seems productive for everyone involved except teachers and students. In my conversations with administrators – and while a few may disagree – most are dissatisfied with the outcomes of observations given the time and energy necessary to perform them. One of the last comments summed it up (I am paraphrasing): “If teacher observations did what they were designed to do – develop greater teacher competency to increase student achievement – then why aren’t we seeing the resulting student successes?” It doesn’t get much simpler than this.
In spite of the many great ideas brought to the table, missing from both management and labor perspectives was a comprehensive solution that would:
- eliminate the formal observation process
- substitute it with a valid data-driven system of evaluation
- be powered by student outcomes
- use goal-setting to statistically analyze “trending”
- satisfy both parties
In truth, I was hoping for one administrator, one school, to say, “Look, if you have a better way, our school is willing to try it.” I didn’t hear that, so I am offering a proposal – and it is all about DATA. We need to agree on a common starting point, so let’s use the one thing we hold top bragging rights to – DATA.
DATA – like it, love it or hate it – is our most valuable and under-appreciated asset. We use data to catalog students in every way possible – top third, bottom third, ELL, special education – and then re-order each group on countless sub-levels, yet we fail to rely on its ability to help quantify and qualify the effectiveness of a teacher. If we can use data (not exclusively for value-added models, or VAM) to show how effectively a teacher is performing, based on student achievement, then why would blanket, formal, “subjective high-stakes” evaluations ever be necessary again? And if we are to avoid the kind of fallout shown by preliminary VAM feedback, we need to have this discussion.
So why aren’t we using data to our greater advantage? If students of a particular teacher are experiencing either good or bad results – trending up or down – then why isn’t this the primary data used to validate his or her competency in the classroom rather than four 15-minute informal walk-throughs?
Take differentiation and data. I have a sign that reads:
“In this class, we use Assessment to generate Data that will help us to Differentiate Instruction and to create Curriculum that will intellectually impact All Members of the Learning Community.”
Deliver instruction first; then, based on the data, differentiate if necessary. How can you effectively differentiate instruction before you know whether differentiation will be necessary? Even if differentiation is based on past experience, you are still making presumptuous, and probably incorrect, assumptions. In theory, shouldn’t there come a time when struggling students make breakthroughs and no longer need the crutch of differentiation? Isn’t that why we differentiate in the first place? Differentiating instruction without first using data to modify curriculum or teaching methods is just differentiating for the sake of differentiating (my apologies for the multiple uses of “differentiation”). Class data must be statistically analyzed to create effective strategies, yet we know that if differentiation is excluded during an observation… points lost. Why? Who better than the teacher to determine – based on class data – whether differentiation is needed or not?
How about teacher competency and data? Once teachers are hired, with a reasonable “expectation of competency,” they begin producing data in the first few weeks. Not just grades, but also a vast, untapped source of secondary data every teacher has, yet does not currently use as proof of proficiency. How often is all this data used as a benchmark of competency prior to an observation? If it is being used, please show me, because I just don’t see it. Teachers are evaluated to improve student outcomes, so why isn’t this the first place we look? Why isn’t a baseline created here to monitor future growth and trending? This is easy for teachers to prepare if they are shown how to do it. Ignoring teacher progress as revealed by statistical analysis of data – via baselines and trending – yet going ahead with an observation anyway does not improve teacher development; it is merely observing for the sake of saying an observation was performed. This is unproductive, good for no one, and will never create the desired result: well-developed teachers producing maximum student achievement. We need to look at the data.
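To make “baseline and trending” concrete, here is a minimal sketch in Python. It assumes weekly class-average scores have already been exported from an electronic gradebook; the numbers and the function name are hypothetical illustrations of the idea, not the specific statistical program described below.

```python
# A minimal sketch, not a finished tool: weekly class averages are assumed
# to come from an electronic gradebook export, and the values are invented.

def least_squares_slope(values):
    """Slope of a simple least-squares line through (week, value) points."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Hypothetical weekly class averages (percent) for one teacher's section.
weekly_averages = [68.0, 70.5, 69.0, 72.5, 74.0, 75.5]

baseline = sum(weekly_averages[:3]) / 3        # first three weeks set the baseline
slope = least_squares_slope(weekly_averages)   # points gained (or lost) per week

print(f"Baseline (first three weeks): {baseline:.1f}%")
print(f"Trend: {slope:+.2f} points per week")
```

Tracked weekly, that slope is the trend a teacher and an administrator can both see at a glance; any gradebook or spreadsheet that exports averages can feed the same calculation.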
If I seem obsessed with data, it is because data can drive instruction – but how often is data used both to drive instruction and to formulate accurate evaluations? This is our missing link.
How many principals would be willing to try these steps as an alternative to the traditional teacher observation? This is an outline of Data-Driven Evaluation.
- An effective electronic gradebook (EGB), used to collect the data needed to graphically analyze trending as it occurs, day to day. The EGB also needs to be streamlined enough for any teacher to pick up quickly – no steep learning curves, no unnecessary bells and whistles. Only a few EGBs can do this. This is a non-negotiable.
- Student Work – graded, entered and posted the same day it is received. This is a non-negotiable. Why? Because you cannot keep accurate stats without timely data. All professionals have lots of paperwork, but grades are #1 to students – and this is our job. Doing so also keeps parents on your side.
- Weekly Progress Reports – Students must bring in a signed Progress Report every Monday (or Friday), which is counted as an assignment. This will be used to set goals for the week. This is absolutely a non-negotiable.
- All teachers are supplied with a comprehensive “Data Collection Statistical Analysis Program,” which teaches specific strategies for collecting and recording all student and classroom data (the same strategies used by management consultants outside of teaching). The data is kept in the classroom in a STATS Folder, is available to all visitors, is recorded on paper, and is updated online daily. Data is represented in graph form on a regular schedule. A good EGB will also make this data easy for the principal to access from his or her office. This is a non-negotiable.
- Basic data such as grades, tests, quizzes, HW, and attendance are only rudimentary statistics and are obviously insufficient. Statistical analysis and trending of class data plus additional unmined data (this is the “secret sauce”) must be added. This is a non-negotiable.
- Trending is the basis for this system. This is a non-negotiable. Students must be taught trending. Statistical analysis of data trending (the kind every pro athlete uses daily) is taught so that students increase their “sense of urgency” through correct goal setting, time management, prioritization and increased awareness of progress. It is this data collection and analysis by students that increases student engagement and reduces teacher/administration monitoring by more than half.
- Based on easy online access to this data, administrators would be much better prepared to “consult” with teachers regarding progress, strategies, successes, etc… prior to visiting each class.
- Inter-visitations – Teachers will be required to complete 3-4 inter-visitations (or a prearranged number) each month with colleagues. Simple documentation will be kept by the teacher in the STATS Folder and will be available to the administrator at any time. Teachers will complete “Self-Assessment” forms on the back of each inter-visitation form during the visit and keep them in the classroom STATS Folder. Half of these Self-Assessments can be done as Peer-Assessments. This is a non-negotiable.
- No formal observations. This is a non-negotiable (remember, this is an alternative to the current system). How simple would it be for a principal to compare peer or self-assessments against actual classroom practices through regular informal walk-throughs? This is how a professional becomes self-actualized. Either we want a piece of paper in the teacher’s file (an observation), or we want highly competent, highly developed teachers.
- Regular “high-visibility high-profile” walk-throughs. Without formal observations to slow a principal down, this would be easy. Principals will have access to any student or classroom data prior to, during, and after these visits. This is a non-negotiable.
- Teachers whose data trends upward at a mutually predetermined rate will also be permitted greater autonomy. Less oversight is followed by greater self-actualization and personal development; trust increases, job satisfaction increases. (Statistical analysis of data and trending are the linchpins of this program.) This is a non-negotiable.
- So what happens if a teacher is doing poorly? Teacher data trending will not align with student data trending. The teacher will be consulted about the statistics by one or more administrators; the consultation will include peer-intervention for support and advice, reassessment of goals, and strategizing about what is working and what is not, all within a time-frame for improvement – and possibly even student input, as a last measure. (A simple version of this trend check is sketched just below.)
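As a companion to the earlier sketch, here is a minimal illustration of that decision step. The 0.5-point-per-week threshold and the wording of the outcomes are hypothetical placeholders; the actual trending rate would be whatever the teacher and administrator mutually agree on.

```python
# A minimal sketch of the decision step: compare an observed weekly trend
# (e.g., the slope from the earlier sketch) with a mutually agreed rate.
# The 0.5-point default and the messages are placeholders, not policy.

def review_trend(slope_per_week, agreed_rate=0.5):
    """Suggest a follow-up based on how the observed trend compares to the agreed rate."""
    if slope_per_week >= agreed_rate:
        return "At or above the agreed rate: greater autonomy, less oversight."
    if slope_per_week >= 0:
        return "Flat or slow growth: revisit weekly goals with the teacher."
    return "Trending downward: schedule a consultation and a peer-intervention plan."

print(review_trend(1.47))    # e.g., the slope from the earlier sketch
print(review_trend(-0.80))   # a downward trend triggers the consultation path
```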
This is an outline proposal. And while some points may be a bit unorthodox and unsettling for administrators who feel they absolutely must see how a teacher wrote their Do Now, or whether they differentiated properly (even if differentiation for that particular lesson was unnecessary), it is an alternative that could work and/or be improved upon. I developed this program of data collection, statistical analysis, trending, prioritization, time management and true goal setting, and have used it for years. Now, if we can only get a few principals to pilot this type of approach, we might start moving closer to the change we seek.
This is an alternative to the current system of teacher observations. I welcome all comments, criticisms and observations (pardon the pun).
http://thebusinessofschool.org – Teacher Practice Management Consulting
Please register on the blog for the latest updates – and let’s get ready for September
Carole Bonnie Reiss
The Staten Island Green Charter School for Environmental Discovery
I am interested in exploring this further and find your approach interesting. What does it look like in a Primary classroom (Australia)?