Brittany Larkin is a professor in the College of Education at Auburn University.  She grew up in Florida and spent her career there before moving to Auburn a year ago.  She went to public schools in Florida, as did her children, and she taught special education for 10 years before moving into higher ed.  Her expertise is education policy.

As most in education know, Florida has been a petri dish of education reform efforts for years, especially under Governor Jeb Bush.  Larkin understands Florida schools as well as anyone, having dealt with all the “bright ideas” at the ground level.

She was prepared to testify at the March 8 hearing on the RAISE/PREP bill, but time ran out before she had a chance to speak.  After listening to all the testimony, she wrote the following “open letter” to Senator Del Marsh, sponsor of the bill.

I am writing this letter as a private citizen with research expertise in educational finance and equity issues. After attending the legislative hearing about the PREP Act (Senate Bill 316) on March 8, I have several concerns. One of these is the unintended consequences of including a value-added model (VAM) component in the teacher evaluation system.

We all want what is best for the children of Alabama, and we know that we need good teachers, strong school leaders, and a supportive community to ensure we have the best public education system possible.  It is equally important to monitor, evaluate, and make decisions based on meaningful results.  We are, after all, accountable to the public for the return on the tax dollars we invest in education. The March 8 hearing indicated that most feel it is the role of the State Department of Education (SDE) and professional educators to be in charge of education policy.

Senator Marsh repeatedly defended the bill by stating that its teacher evaluation piece is exactly like the one used by the SDE.  To prove his point, he held up a pie chart showing that the SDE’s plan counts a student growth measure as 25 percent of a teacher’s evaluation. I have compared SB 316 with the SDE’s Teacher Effectiveness plan for evaluating teachers.  There are similarities: both include components that allow local school systems the autonomy to create their own evaluation (within the framework of the evaluation system), at least two classroom observations, professional learning plans, survey data, and student growth.  But the Teacher Effectiveness plan also includes self-assessment, collaborative design, and a professional showcase.

However, the most important contrast is this: the two differ in how “student growth” is defined!  SB 316 specifies that the student growth model be “a statistical growth model used to isolate the effect and impact of a teacher on student learning, controlling for preexisting characteristics of a student, including, but not limited to, prior achievement.”  The bill also specifies that growth be measured using the state-mandated assessment (currently the ACT Aspire) for the grades and subjects it covers.  For grades and subjects not assessed by the state-mandated assessment, the SDE is to generate a list of approved alternative tests that can be used to measure student growth.

So Senator Marsh’s claim that the methodology in SB 316 is the same as the one called for by the state department is inaccurate.

There is much scholarly evidence showing that the value-added model (VAM), which is what SB 316’s definition of student growth describes, is completely inappropriate.  VAMs have stringent technical requirements for tests, so even the required “approved alternatives” concept is problematic. For a few of the many scholarly refutations of VAM as a reliable basis for evaluation, see the position statements of the American Educational Research Association and the American Statistical Association, and the research conclusions of leading VAM scholar Dr. Audrey Amrein-Beardsley.  Dr. Amrein-Beardsley also recently penned a particularly devastating review of SB 316, showing, among other things, the high likelihood that passing this bill would lead to more losing lawsuits for the State of Alabama.

In contrast, the SDE’s Teacher Effectiveness plan says the student growth measure should use data from various assessments.  It also says that districts should use the design phase to discuss what student data is meaningful in determining student growth.  What measures do teachers and leaders want to use to help not only inform practice but also evaluate the impact of teaching on student growth?  The SDE understands that you cannot effectively measure student growth with one instrument (the ACT Aspire or an alternative measure for subjects beyond math, science, and reading); rather, multiple data sources should be used to show growth in a manner appropriate to different content areas.

Boiling it down, then: if we replaced SB 316’s definition of student growth with the one the SDE’s Teacher Effectiveness plan uses, the two would be about the same.  But this raises the question: why have the bill at all?  The SDE evaluation plan has already been rolled out in 50 school systems. Why have legislative mandates on the teacher evaluation process?  Shouldn’t that be the role of the SDE?  After all, when the time comes to evaluate this process and make changes, do we want to have to wait on the legislative process, slowing or halting the improvement?

Finally, one of the strongest arguments against a legislatively mandated process for teacher evaluation is that Alabama cannot afford the litigation that is sure to come with passage of this bill.

It’s time to make the right decision.  The reauthorization of the federal Elementary and Secondary Education Act, named the Every Student Succeeds Act of 2015, purposefully removed the requirement for states to use standardized assessments as a measure of teacher evaluation.  The evidence says there are better ways of measuring student growth.  The landscape of the courts tells us there are better ways of evaluating teachers.  We do not have to reinvent the wheel, but we do need to learn from the mistakes of others.