States Beginning To Figure Out They Will Be Left Holding The Bag With National Assessments They Can't Afford To Implement -- Will Governors Be Seeing Wisconsin's Woes In Their Own States Soon?
“The Other Shoe Drops: National Testmakers Worried”
by Donna Garner
4.12.11
Summary of worries revealed in this Education Week article (posted below):
1. High expectations for these national assessments may outpace the ability of states to pay for the technology required to administer them.
2. Both of the consortia have to provide -- for each tested grade level and course -- benchmark assessments (a.k.a. periodic or formative assessments) and summative assessments (a.k.a. finals).
3. The consortia are worried that the tight timelines set by the feds won’t allow for well-done piloting of the assessments.
4. Both consortia have to create formative and summative assessments that are of equal content and difficulty and which can be taken by all types of students.
5. The national assessments were originally intended to save states money, but the federal grants contain no money for administering the assessments.
6. States are beginning to figure this out and are worried they will be left with national assessments they cannot afford to implement.
Donna Garner
Wgarner1@hot.rr.com
=================================
Published Online: April 12, 2011
Experts See Hurdles Ahead for Common Core Tests
As America’s “next-generation” assessments for common core academic subjects begin to take shape through two state consortia projects, researchers and test developers alike are beginning to worry that expectations for the tests may outpace states’ technology and budgets.
Michigan and Louisiana education officials and leaders of the two consortia tasked with developing the new assessments—the 25-state SMARTER Balanced Assessment Consortium, or SBAC, and the 26-state Partnership for the Assessment of Readiness for College and Careers, or PARCC—discussed challenges to the tests at a panel here at the annual meeting of the National Council on Measurement in Education.
The panel was organized by the Council of Chief State School Officers, one of two Washington-based groups that spearheaded efforts to create new common standards for college and career readiness, now adopted by 44 states and the District of Columbia.
The tests are expected to roll out in 2014, and “the amount of innovation we’ll be able to carry off in that amount of time is not going to be that much,” warned Joseph Willhoft, the executive director of the SMARTER Balanced Consortium. “There’s an expectation that out of the gate this [assessment] is going to be so game-changing, and maybe after four or five years it will be game-changing, but not immediately.”
Both consortia received grants through the federal Race to the Top Assessment competition created in the federal economic-stimulus law to develop new tests based on the common standards. Each consortium must develop computer-based tests for each tested grade level and subject, as well as optional interim benchmarking tests to allow teachers to monitor how students progress and change instruction accordingly.
Both groups are developing end-of-year summative tests that can be used by any state in the country, as well as ancillary benchmark tests that teachers or principals can use to track the progress of individual students or groups throughout the year.
The SMARTER Balanced Consortium’s tests are intended to go beyond simply moving questions from a paper to a computer screen, to adapt the difficulty of each question as students progress on the test. Ideally, individual test items will be tagged with the accommodations allowed for students who require them based on a disability or limited English proficiency, according to Laura M. Slover, the senior vice president of the Washington-based Achieve, Inc., which is helping develop assessments for PARCC.
Yet all of that is still in the works.
“One of the biggest problems I’ve seen with state assessments and national assessments is they are typically not done on a budget and a timeline that allow people to go out and do the pilot testing and tryouts that you would like,” said Mark D. Reckase, a professor of measurement and quantitative methods at the University of Michigan. “I’ve looked at the timelines for this, and they are fast; there will be incredible pressure to just get it done.”
Moreover, making sure the tests will serve their intended accountability use has become trickier in the wake of high-profile, test-based teacher evaluations, such as that done in Los Angeles last fall. “If we are trying to look in a crystal ball about educator evaluation … That is likely to be the most difficult use of any data we put out and therefore requires the most thought and care in designing the models,” said Joseph Martineau, the director of educational assessment and accountability for the Michigan Department of Education, part of the SBAC.
PARCC plans to train thousands of teachers both in how the assessments will work and how the resulting data can be used for accountability or classroom instruction, said Ms. Slover. “One of the purposes [of the consortia project] was to really change assessment, both the way it’s done and the way it’s experienced by the students and teachers in the classroom,” Ms. Slover said. “As we think about how to transform the test to make it more useable for teachers, teachers have to embrace it and think it’s something being done for them and with them—and not to them.”
Mr. Reckase warned that mistrust of the new tests during the transition could cause delays. “There’s a tendency to want redundant systems, computerized and paper-and-pencil, … but that causes a whole other set of problems because now you have to make sure the two tests are equivalent and ensure they work for all students.”
Betting on Technology
Ms. Slover said the consortia are “betting heavily” that emerging technology will help them create tests that can balance accountability on multiple levels—from annual student achievement reporting to ancillary data used to evaluate programs and curricula—with formative test information to help teachers tweak instruction for different students throughout the year. “One cannot be done at the expense of the other, so balancing those is critical, and then you add the cost factor into that,” she said. “Innovation in technology happens at lightning speed, so we are betting heavily on the fact that in four years there will be a new way of doing things, that iPads will be easily accessible or that handheld devices will be very affordable and will change the way we do testing in our schools.
“But we’re betting on that, and it does worry me,” she said. “I think technology is not really fully embedded in the world of classrooms at this point.”
Even among classrooms with computer and Internet access, state officials agreed there are few brick-and-mortar schools that fully integrate technology into instruction, which may make it harder for students to adapt to taking tests via computer.
Changes Difficult
Scott Norton, Louisiana’s assistant state superintendent for student and school performance, said states must be careful to get the tests right the first time. While jointly developing tests was intended to save states money, the grants do not include money for administering the new assessments long-term, and it will be harder to make adjustments to the tests once they are completed, because so many states will need to sign off on changes.
“The cost makes me the most anxious,” Mr. Norton said. “In today’s world if we have a [testing] cost problem, we own that: We can print on lighter paper or something. I’m not sure that holds up when we don’t own it alone. If we get into a test we can’t afford, we’re really left holding the bag.”
Vol. 30, Issue 28