With the demand for new teachers rising, there is increased pressure to ensure teacher candidates are prepared to hit the ground running the moment they enter the classroom. To that end, Bellwether Education Partners released two new reports, which address the challenges states face in linking teachers’ outcomes to the programs that prepare them and the strategies necessary to create new pathways to teaching.

Peering Around the Corner: Analyzing State Efforts to Link Teachers to the Programs that Prepared Them

In Peering Around the Corner, Bellwether examines the challenges and trade-offs that states face in their efforts to link classroom outcomes back to the programs that prepared the teachers who produced them.  The paper addresses six key elements and highlights 11 states that are linking outcomes to prep programs.  Historically, evaluation of teacher prep programs focused on inputs, or the rules and regulations used to assess qualified teaching candidates, and these inputs were treated as the primary predictor of teacher effectiveness.  In recent years, however, there has been a shift to an outcomes-based approach for measuring prep program effectiveness.  Under the U.S. Department of Education’s $4.35 billion Race to the Top program, funds were directed toward linking prep programs to their graduates’ outcomes.  Later, Title II and Title IV regulated the reporting of such outcomes. To appropriately take on the work of using outcomes to measure program success, Bellwether suggests that:

1. States must determine how they will use program measurements. State education departments may use outcomes to differentiate programs by performance level, use outcomes in the program approval process, or hold programs accountable to performance standards.  THE TRADE-OFF: There is value and risk associated with both a broad set of data and a narrower, limited set of measurements.

2. When tracking outcomes, states must consider how they will draw their samples.  Given that pathways to teaching vary, states must decide whether to study outcomes only from teachers who took a linear path to teaching or to track outcomes from teachers who have taken both linear and non-linear paths. THE TRADE-OFF: Studying only linear-pathway teachers will be easier but will greatly reduce the sample size, while including teachers from non-linear pathways adds a great deal of complexity and cost.

3. Determining a sample size (n-size) is complicated.  A great number of factors affect an outcomes study’s n-size, including attrition from the job, data access that may be limited to public schools, completers leaving the state to teach, and the effort required to roll up data from multiple cohorts or across equivalent programs.  THE TRADE-OFF: Choosing to pursue one method of n-size selection over another could render outcomes data inconclusive or make the findings opaque.

4. States must decide whether they want to evaluate programs individually or at the institution level. THE TRADE-OFF: Outcomes gleaned at the institution level draw on a larger sample size, but institutions can veil undesirable outcomes within overall performance.  Outcomes gleaned at the program level may mean a smaller sample size but could yield more precise reporting and greater opportunity for feedback.

5. States must determine whether they will report summative data or place institutions and programs into performance bands. THE TRADE-OFF: Research has found little variation in the quality of prep programs, and nuance could be lost in summative data. Additionally, performance thresholds that are too broad will fail to distinguish mid-level programs from the highest and lowest performers.

6. States must address the challenges they will face around 1) program completers’ effectiveness as lead teachers, 2) completers’ employment outcomes, and 3) completer and employer satisfaction.  THE TRADE-OFF: Using standardized test scores to measure effectiveness may raise questions about the quality of the assessments. Anecdotal data taken from classroom observations, though telling, may be collected inconsistently.  Additionally, measures may fail to take into account contextual issues that affect ultimate performance (like candidates being pushed into undersupplied subject areas or high-needs schools where they are not a good fit).

The study focuses on specific data points in outcomes tracking from 11 states: Colorado, Delaware, Florida, Georgia, Louisiana, Massachusetts, New Jersey, North Carolina, Ohio, Rhode Island, and Tennessee. Of note, Ohio uses outcomes data from both Resident Educator Persistence and Resident Educator Survey Results, at both the program and institution levels, to measure preparation program effectiveness.  Delaware uses its LEA360 Survey to assess candidate readiness by having district representatives, like teacher-mentors, track key performance indicators for teaching candidates.


No Guarantees: Is it Possible to Ensure Teachers Are Ready on Day One?

In No Guarantees, Bellwether finds that there is no conclusive data that can be used to outline the requirements needed to develop a successful teacher or to define a successful teacher training program.  It suggests that policymakers invest “more time and resources into learning the science of teaching.” Policies must reflect the fact that we understand more about what determines teacher quality after teaching begins than we do about how to predict that quality from the training itself.  As background, Bellwether notes that there are 26,589 teacher preparation programs offered through 2,171 colleges, universities, and other education prep providers.  School districts spend an average of $24,250 to train teacher candidates, and each candidate spends 1,512 hours in training.

As in Peering Around the Corner, Bellwether’s research suggests that the past emphasis on inputs like admissions criteria, required coursework, advanced degrees, and certifications does not tell us who actually makes an effective teacher.  Further, there is no evidence that the length of a teacher prep program is related to ultimate student outcomes.  In fact, one study found that candidates with higher levels of education are more likely to enroll in shorter training programs. Bellwether suggests that, on the whole, teachers would be more effective if, during their training, less time were spent on theory and content learning and more on practicing actual teaching.  The authors are clear, however, that research shows merely adding clinical prep hours is not a foolproof way to increase student achievement.

Bellwether also points out that there is conflicting research on whether outcomes are better than inputs for measuring teacher preparation effectiveness. As discussed in Peering Around the Corner, the outcomes approach is bogged down by questions and trade-offs for state actors.  No Guarantees reminds policymakers that, since current research suggests a teacher’s preparation is only a minor factor in her overall effectiveness, states should focus on measuring and incorporating data on teacher effectiveness drawn from the practitioner’s first year of teaching. Bellwether’s four strategies for ensuring a quality pipeline of teachers are:

1. Make teaching less risky. The high cost of teacher preparation, the low pay, and the lack of advancement opportunities are all deterrents to entering the teaching profession.

2. Ensure that districts, not preparation programs, are responsible for recommending a candidate for licensure. School leaders should make the call on who is ready to teach.  Furthermore, expectations for performance should change as a teacher becomes more experienced. This ensures that becoming a ‘teacher of record’ hinges on demonstrated effectiveness in the classroom. Of note, Ohio’s Resident Educator Program, mentioned in No Guarantees, focuses on such teacher accomplishments.

3. Measure and publicize effectiveness data. This level of transparency will create an environment where traditional prep programs must compete with alternative prep programs for top teaching candidates.

4. Unpack the black box of good teaching.  As a rule, education reformers must analyze existing measures and inputs to see whether they are affecting a wider range of outcomes, use assessments that measure higher-order thinking, look at outcomes beyond student test scores, and encourage variability WITHIN preparation programs. Instead of standardizing, programs should be charged with innovating and experimenting with unique program features that could positively affect outcomes.