Sunday, November 27, 2011

OHIO: THINKING IS OPTIONAL?

Reviewing information from Ohio’s Department of Education, alongside its just-released “draft rankings of Ohio school districts,” highlighted yet another standardized-testing enigma.

Because Dr. Diane Ravitch, referenced repeatedly in SQUINTS, has been the most professional and eloquent critic of the shortfalls of NCLB’s standardized testing, the observations prompted the following letter to her.  Appended in its entirety, the letter explains in detail the standardized-test dysfunction uncovered.

Ravitch Letter

TO:       Dr. Diane Ravitch

DATE:  November 27, 2011

SUBJ:   Standardized Test Dysfunction  

Dr. Ravitch,

You may wish to add this tale (still short an ending) to your repertoire of K-12 standardized test debacles.

The tale starts with an upside, the appointment of a new state superintendent -- Stan Heffner -- in Ohio.  In the months after his change in status from acting to appointed, and through the early months of 2011, the Department issued a series of policy papers on K-12 reform that set for Ohio a new record for education sanity and common sense, departing from that trajectory only where standardized testing is involved.  One proposal is linked here.

However, Ohio's Republican Legislature created and passed Ohio FY2011 H.B. 153, mandating that the Ohio Department of Education (ODE) produce and publish for each Ohio school district a Performance Index (PI), ranking all school districts on the basis of a weighted composite of the Ohio standardized (NCLB) achievement tests and Ohio's graduation test (OGT). The OGT is roughly equivalent in form and rigor to the referenced achievement tests.  

Parenthetically, the bill also required that ODE post along with the PI an expenditure number, the total school expenditures per pupil.  Notably, no breakdown was requested of how those dollars were allocated between those going specifically to the classroom and those going to administration and other non-instructional functions.  Myopic as this latter specification is, it becomes almost a minor issue.

The draft rankings of Ohio's schools were just published, raising some eyebrows because of the rankings cited.  More interesting, however, is a statement that was part of the earlier and above referenced policy publications:

"In the current system, a school can be recognized as Excellent with Distinction while having nearly one in five students fail. Ninety excellent rated districts had ACT scores below the state average.  One excellent rated district had a college remediation rate of 81%. Sixty­-five excellent rated school districts had negative value added scores.  Clearly, excellence doesn’t mean high student results in Ohio."

Perplexingly, the proposed remedy for district letter grades or qualitative descriptors such as the above, both based on standardized tests, was to rank all Ohio systems using -- hold your applause and grab the arm of your chair -- the same standardized achievement test and OGT scores that determined the former assessments.

Translated, the ACT issue is that approximately 27 percent of the districts with an "excellent" rating or better -- systems in the 65th to 100th percentile based on the PI calculation -- obviously didn't exceed the State's ACT average, i.e., the 50th percentile of district averages if mean and median are comparable.  It is unknown whether the logical sequel was checked -- calculating the proportion of systems in the 65th to 100th percentile of ACT scores that fell in the comparable percentile range based on PI scores; it was never reported.

If the scores are reasonably normally distributed, the implication of the report is that as much as two-thirds of the districts in the 65th to 100th percentile on PI scores could have ACT scores below the 65th percentile of the ACT distribution, challenging the credibility of any Ohio assessment based on standardized test scores.
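To make that implication concrete, here is a minimal sketch -- hypothetical numbers only, not an analysis ODE or this writer has run -- that simulates district PI and ACT averages as correlated normal variables and counts how many "top 35 percent by PI" districts land below the 65th percentile of the ACT distribution.  The district count and the assumed correlation are illustrative assumptions.

```python
# Illustrative simulation: how often do districts in the top 35% by PI fall
# below the 65th ACT percentile, if both measures are roughly normal?
import numpy as np

rng = np.random.default_rng(0)
n_districts = 600        # order of magnitude of Ohio districts; illustrative
rho = 0.5                # assumed PI/ACT correlation, not an observed value

cov = [[1.0, rho], [rho, 1.0]]
pi, act = rng.multivariate_normal([0.0, 0.0], cov, size=n_districts).T

pi_cut = np.percentile(pi, 65)    # lower edge of the "excellent" PI band
act_cut = np.percentile(act, 65)  # 65th percentile of the ACT distribution

top_pi = pi >= pi_cut
discordant = top_pi & (act < act_cut)
print(f"Top-PI districts below the 65th ACT percentile: {discordant.sum() / top_pi.sum():.0%}")
```

The weaker the assumed correlation, the larger that discordant share becomes -- which is exactly why the credibility question matters.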

There are five hypotheses that might explain discrepant Ohio district PI ratings versus ACT average scores.  One, the ACT is a flawed test -- given its service over decades, reliance on it by higher education, national representation, and prior oversight, not highly likely.  Two, because there is a negative correlation between ACT average scores and ACT participation, the discrepancy is an artifact of high ACT participation -- possible, but it needs to be tested because it seems unlikely to account for the magnitude of the discrepancy.  Three, pedagogy and learning in grades 1-8 are distorted by Ohio standardized testing, subsequently impacting 9-12 performance unless offset by different 9-12 pedagogy.  Four, the grade 1-8 negative effects of hypothesis three attenuate with time, but grades 9-12 are subject to misdirected administration and/or academically unprepared teachers -- plausible given that the conditions are actually visible in area schools.  And five, hypotheses two, three and four are jointly causal -- also plausible.

Given that Ohio's ODE web site, ODE’s representations to government and the public, and even material rewards or penalties for districts are so heavily invested in and impacted by standardized test results, it would seem logical that the agency would wish to explain the noted discrepancy.  Counterpoint:  doing so could embarrass the Legislature, or even mark it as educationally uninformed, as well as undermine ODE's credibility.  In other venues, those implications have produced both high-level cheating and attempted cover-ups.

Relatedly, Ohio's ODE web site does not currently report district ACT averages and participation rates.  Materially, however, I was provided comparable ACT data as public records in response to a previous request under the prior Ohio and ODE administrations.

Invoking Ohio's open records act, I requested the most recent ACT scores from ODE, the intent being to analyze the concordance between the PI rankings and the ACT rankings by district.  Because participation rates are not uniform, the more complete analysis might be a partial regression of ACT scores on PI scores with the effect of participation held constant -- not the usual form of rigor applied to such data, but straightforward and de rigueur in virtually all multivariate analyses of any consequence.
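For the record, the intended analysis is nothing exotic.  A minimal sketch follows, assuming the requested district-level file eventually arrives; the file name and column names (act_mean, pi_score, act_participation) are hypothetical placeholders, not ODE's.

```python
# Partial regression of ACT district means on PI scores, with ACT participation
# held constant.  The coefficient on pi_score is the partial slope; its
# t-statistic tests whether PI rankings carry information about ACT results
# once participation differences are controlled.
import pandas as pd
import statsmodels.formula.api as smf

districts = pd.read_csv("ohio_districts.csv")   # hypothetical file

model = smf.ols("act_mean ~ pi_score + act_participation", data=districts).fit()
print(model.summary())
```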

To date ODE has neither provided the ACT data nor indicated a willingness to follow Ohio law.  One inference is that the analyses described could embarrass the Ohio Legislature and/or ODE, the non-response amounting to a cover-up of public information that might undermine the validity of the standardized testing so massively employed in ODE's tactics and reporting.

A serious negative impact:  Ohio districts that earn public approbation from rankings based on the PI and its standardized test scores will likely resist changing any strategy or tactic -- no matter how illogical or harmful to actual learning, and no matter what you and other nationally recognized educators advocate -- in favor of flogging standardized test scores, the known tactics for cheating included.

The jury is still out, but I suspect that unless I am willing to go to the mat -- meaning petitioning a superior court or Ohio's Supreme Court for a writ of mandamus -- the ACT data by district will not materialize.  As an educator for decades, I find it painful to witness what can only be described as an egregious failure of Ohio ethics and concern for real learning, in a profession and national endeavor that should reflect their highest achievement.

Regards,

   Ron Willett

Dr. Ronald Willett, P. O. Box 81, New Bremen, OH  45869
Cell:  419-202-2044     Office:  419-977-2103

Postscript

Pending some attempt to explain the issues defined above, Ohio’s education credibility at the State level is in question.  Clearly, it is hard to assert to Ohio’s individual districts that K-12 reform is of paramount importance when the questionable logic described above is employed as the basis for judging a system’s attempts to improve learning outcomes.

Ohio’s first K-12 reform challenge should be to clean up its own education act.

Sunday, November 20, 2011

HOW THE GRINCH SWALLOWED SQUINTS 11/21/2011

Not a momentous week for US public K-12 education, it still managed to produce some major groans.

Barbarians at the gates.

In what appears to be venal cronyism, Arne Duncan (and presumably Mr. Obama) gifted Microsoft the US Department of Education’s “TEACH” program and its website, a program intended to promote K-12 teaching as a profession to present and future collegians.

There is no issue with the need; through this decade the US may need as many as a million new K-12 teachers because of waves of boomer retirements.  The issue is that there are likely hundreds of US organizations with greater credibility and competence to promote K-12 teaching as a profession, including Apple, whose innovation and commitment to education made Microsoft a distant runner-up that created little.

An assumption is that the gift is the quid pro quo for Gates’ billions spent on pushing standardized testing for the Obama-Duncan version of public education reform, opening a gate for Microsoft to prostitute public education to peddle obsolete software to future teachers – a class act?

The good news is that Microsoft may have as much success in the quest as it has had in writing bug-free code and delivering it at less than monopolistic prices.   The bad news is that the “TEACH” program is critical to future public K-12, and the opportunity cost of the cronyism to the US and the public education system is astronomical.

One could effectively argue that Arne Duncan has proven unfit to lead US public K-12 education reform, and should be replaced.

From rift to abyss.

The rhetoric between accomplished education scholars -- such as Stanford’s Darling-Hammond; professor, historian and author Diane Ravitch; and a small army of accomplished educators with decades of experience -- and, on the other side, Duncan, the “billionaire boys club” (and more than a few naïve-to-destructive millionaires’ reform clubs), and the corporate reform movement of instant education experts, has become increasingly contentious.

What seems clear is that the hypocrisy of the latter combination has reached new levels.  In a prior SQUINTS it was observed that the “corporate reform” mentality calls for using contemporary business logic in education, but then advocates a testing logic to inspect and penalize at the end of the line.  The blatant hypocrisy:  Contemporary management logic holds that that model of quality control is obsolete and advocates process controls that minimize or eliminate traditional end-of-line inspection -- in the case of K-12, simplistic standardized tests.

In a similar vein, Gates, Duncan, and that army of corporate reformers apparently missed entirely the epic work of MIT’s Douglas McGregor on contrasting motivational approaches, Theory X versus Theory Y.  Theory X envisions beating public K-12 into submission.  Theory Y advocates a more intelligent approach, which, by the way, works.  One educator, being far too kind, proposes that Bill Gates could be a hero by dropping Theory X and adopting Theory Y.  Pardon the demurral; spell that skepticism that Gates can distinguish X from Y any better than his leadership could distinguish an IBM giveaway and tax write-off of DOS from genuine invention, knocking off Apple technology from investing in innovation, or competition from monopoly.

Lastly, as rift becomes abyss, hard documentation is quickly accumulating that the present approach of NCLB is failing, along with its orgy of standardized testing.  A recent state-level example:  that state’s education department found that at least 25-30 percent of its public school systems that had garnered ratings of “excellent” based on NCLB testing scored below the state’s average on the more defensible college admission tests.  What fraction of those “excellent” programs might show similar deficits if the collegiate test score reference mark were raised just one-half a standard deviation above the mean?  Phony excellence?

But an even testier issue with the escalating standoff -- public schools and a small army of genuine education scholars on one side of the skirmish line, the “Theory X” reform mentality on the other -- is that both sides embrace flawed arguments.

Our public schools, by assuming entitlement, and by delaying for decades the self-examination and diagnostic assessment of their own structure and learning performance (along with America’s schools of education), gave birth to the present reform movement.  The public teachers’ unions exacerbated the schools’ performance problems with their own narrow missions.  That has been extended by the hubris still demonstrated by many dug-in public education bureaucrats and even teachers, who can’t fathom either present research on learning or how US needs and technology have evolved.

The present reform movement, in turn, is being driven more by ideology than by strategic thought and intelligent diagnosis.  Even more destructive, virtually none of the standardized testing and teacher assessment modeling based on that same logic has withstood either conceptual testing or empirical verification.  NCLB was simply allowed to snowball because of the ignorance and shallow thought now being projected by Duncan and the US Department of Education, then reinforced because it appeals to the political right wing’s view that we need to go back to school privatization shoot-outs that channel an old Ronald Reagan movie.

Ohio, thy name is ambiguity.

A pair of bittersweet policies issued recently from Ohio’s Department of Education, now overseen by Stan Heffner.

ODE has not historically proven a centerpiece of K-12 educational excellence – previously an inbred, bureaucratic division committed to protecting local schools from transparency and making it a challenge to find out virtually anything organized about Ohio’s public schools, in some cases violating Ohio’s laws to do so.

But under its new state superintendent, Heffner, it has posted one of the more articulate yet readable prescriptions for proactively reforming Ohio’s schools.  The policies advocated are so evocative of common sense, and broadly applicable to US K-12, that a great deal of past misdirection is absolved.

But -- admittedly in response to a misguided Ohio bill, Ohio 2011 H.B. 153 -- that same Department of Education just announced a new series ranking Ohio school districts, based on an alleged district “Performance Index” (PI), to be promulgated shortly in final form along with district per-pupil spending.

Sounds reasonable and responsible?  It might be if the Department had employed relevant data.  The alleged "Performance Index" is based on weighted NCLB standardized test results, challenged by virtually every real education scholar as a measure of meaningful learning, along with results from Ohio's graduation test (OGT), itself of questionable rigor.

In turn, the gross per pupil expenditures are an equally questionable measure, because of the effects of how those expenditures are allocated, for example, between dollars going specifically to instruction versus spending on administration, bureaucratic functions, and other non-instructional activities.  Inexplicably, the spending depiction accompanying the rankings is the opposite of the admonitions about Ohio school spending proposed in the prior and excellent policy document. 

Ohio’s Department of Education could adopt as a graphic the Roman god, “Janus,” though the real meaning of that symbolism is more intelligent than the PI effort.

Pizza and fries, the new health food?

Peripheral to K-12, but illustrative of Congress’ reputation for legalized bribery, was its response to an attempt to improve the quality of school lunches.  You have undoubtedly heard the groans from parents with wits, and from any health professional who understands nutrition and America’s childhood obesity epidemic, mingled with the cheers from a phalanx of corporate lobbyists and a few select producers, and ultimately the ka-ching of money counting in legislators’ campaigns.

As cited by the Washington Post’s Valerie Strauss:  “Despite public ridicule — including a skewering on Jon Stewart’s ‘The Daily Show’ — Congress has gone ahead and approved legislation that junks new standards the Obama administration was trying to set to make lunches healthier for public school children.”

The rest of Strauss’ op-ed is worth reading.  Any additional comment seems superfluous, except for the vague recurring dream that conjures a giant plastic spray bottle positioned in front of the Capitol and labeled for Congressional weed control, “Roundup/Spray On.”

Better parents.

In Sunday’s New York Times, author and columnist Tom Friedman reported on findings from a first study of 5,000 students in 20 countries that make up the O.E.C.D. (Organization for Economic Cooperation and Development), which internationally conducts the exams of 15-year-olds known as PISA (Program for International Student Assessment).  As Friedman points out: “America’s 15-year-olds have not been distinguishing themselves in the PISA exams compared with students in Singapore, Finland and Shanghai.”

The research, for the years 2006 and 2009, went beyond classrooms, specifically “…the PISA team went to the parents of 5,000 students and interviewed them ‘about how they raised their kids and then compared that with the test results’ for those years.”

The key finding: “Fifteen-year-old students whose parents often read books with them during their first year of primary school show markedly higher scores in PISA 2009 than students whose parents read with them infrequently or not at all.  The performance advantage among students whose parents read to them in their early school years is evident regardless of the family’s socioeconomic background,” and “…just asking your child how was their school day and showing genuine interest in the learning that they are doing can have the same impact as hours of private tutoring.  It is something every parent can do, no matter what their education level or social background.”
Friedman’s argument is that the US reform movement should quit identifying America’s teachers as the sole crux of US public K-12 reform, and start asking America’s families to “parent-up.”

So parents, if you don’t want a phantasmagoric Bill Gates manifestation materializing in your bedroom, entering through the seventh pane of the bedroom window, start addressing your child’s education with more robust learning support than junk TV, junk games, junk lyrics, junk food, and the belief that touchy-feely self-esteem and sports are more important than literacy.

Were the latter Gates appearance not such a scary image, this might have actually constituted a “squib.”

Next SQUINTS.

Thanksgiving will intervene so the next SQUINTS will be posted on December 5. 

The topic will seek to address the diverse meanings of “knowledge” and “learning,” including the misuse, abuse or misunderstanding of these terms in much present K-12 reform rhetoric.  An attempt will also be made to relate one of the accepted depictions of learning processes (for example, “Bloom’s augmented taxonomy”) to evolving digital learning technologies, to get beyond naïve views of technology as meaning only whiteboards, desktops, laptops, or even pads for the innovative, with their use frequently misconceived as simply add-ons to conventional pedagogy.

SQUINTS’ best wishes for a peaceful and pleasant “turkey day,” where for a brief period the turkey metaphorically is not wholly US public K-12 education.

Sunday, November 13, 2011

SQUINTS NOVEMBER 14, 2011: WYSKAOEASPESBWHTA

Not a new sub-Saharan country that sprouted over the weekend, or a Welsh railway station:  The cryptic acronym stands for “what you should know about Ohio’s elementary and secondary public education system but were hesitant to ask.”

State data predominate.

In a prior SQUINTS, the issue was raised of limited, comparable K-12 school data in the US, especially for local systems.  The post prompted a deeper review of US school data, concentrated in the US Department of Education’s National Center for Education Statistics (NCES), with some additional data from the National Education Association (NEA) and US News.  These data, however, are primarily either US totals or averages, or are only broken out by state.

Not fully satisfying if the quest is for local school assessments, but the state data are still revealing of our states’ propensities in promoting education, and their overall performance versus the US averages.  To test the utility of the data to portray a state’s education tendencies, this SQUINTS reports on Ohio’s public K-12 positions vis-à-vis the US and a reference state's performances.

All of the data reported are based on the most recent statistics, usually either 2009 or 2010 information.  For positioning, Ohio’s results are compared with two benchmarks:  US means or composites; and with the state that consistently leads the US in public K-12 performance and in school system operating data collected, Massachusetts.

Because of the lack of comparability of states’ alleged standardized test results, only the latest NAEP test performances were employed.

Rating schema.

A total of 35 properties of K-12 schools were identified in the data available, where the US or a state’s values indicated directionality, favorable or unfavorable for advancing the education mission.

Ohio’s performance was first compared with the US averages or a composite.  Simple counts were used, indicating cases where Ohio did better or worse than, or tied the US, and except as noted below factors were unweighted.  There is some unknown automatic weighting because of concomitance among factors employed, but the extent of the weighting couldn’t be determined for the present analysis.

In the next step, Ohio’s results were compared with Massachusetts using the same scoring rubric, counts indicating where Massachusetts did better than, worse than, or tied Ohio.  The single exception was the use of Colorado as a surrogate for Massachusetts on the SAT factor, because the low proportion of Ohio graduates taking SATs required a comparison state with a similarly low participation rate.  That was necessary because there is a high negative correlation between the percent of a state’s graduates taking the SAT and the resulting scores; the higher the participation, the lower the mean SAT scores.
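For readers who want the rubric spelled out, a minimal sketch follows.  The table structure and column names are placeholders for the 35-factor worksheet described above, not an actual data file; the two example rows reuse the attendance and pupil/teacher figures reported in the Appendix.

```python
# Tally Ohio as better / tied / worse against a reference (US, or the
# Massachusetts/Colorado benchmark) across scored factors.
import pandas as pd

factors = pd.DataFrame({
    "factor":           ["avg_daily_attendance_pct", "pupil_teacher_ratio_index"],
    "us":               [95.0, 100.0],
    "benchmark":        [94.3, 83.0],   # Massachusetts, or Colorado for the SAT item
    "ohio":             [86.9, 106.0],
    "higher_is_better": [True, False],  # only two of the 35 factors shown
})

def score(row, reference):
    ohio, ref = row["ohio"], row[reference]
    if ohio == ref:
        return "tie"
    better = ohio > ref if row["higher_is_better"] else ohio < ref
    return "better" if better else "worse"

for reference in ("us", "benchmark"):
    counts = factors.apply(score, axis=1, reference=reference).value_counts()
    print(reference, counts.to_dict())
```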

OHIO’S OUTCOMES.

*************************************************************************

Where Ohio’s K-12 schools were compared to the US:  Ohio scored below the US average or composite on 22 out of the 35 factors (63 percent), tied on three items, and bettered the US norm on ten factors.  Compared to Massachusetts’ schools (with one surrogate for SAT results), Ohio scored lower on 32 of the 35 factors (91 percent), tied on three, and performed better than Massachusetts on no factor.
  
Self-evidently these comparative results, though based on a simple methodology, are not a ringing endorsement for Ohio’s pursuit of K-12 educational excellence.  Had the metrics of each factor been employed to create, pardon the irony, a standardized quantitative score, Ohio might have displayed even greater deficits.
  
Ohio's overall K-12 school performance suggests:  Dollar-eating bureaucracy in redundant superintendents, redundant school administration, and a system of redundant and corruption-prone Education Service Centers; overspending on facilities, underspending on instruction; deployment of misdirected to obsolete education technology; weak academic performance along with low state education department support for quality of resources and performance; and belief systems that fail to hold local systems accountable for genuine learning as well as integrity and public transparency.
  
In perspective, the data provide one inference why Ohio appears seriously challenged in its pursuit of economic development that in part hinges on an educated workforce.

***********************************************************************

Conclusion – US education diagnostic deficits.

An argument in a prior SQUINTS was the need for a national census of individual public, private and charter K-12 school systems, to establish a valid diagnostic baseline for reform efforts. 

The question is:  Why have both the White House and Congress failed to recognize or acknowledge the harm being caused by forced and simplistic standardized testing, pushing the wrong learning buttons and threatening to further disrupt and degrade genuine public K-12 education?  Both power centers appear clueless about the properties of the full US population of over 88,000 public schools being impacted, an irresponsibility rivaling the failures in pursuing US jobs recovery.

At the opposite pole from NCLB, the recent Republican debates have had multiple candidates advocating, from ideology, everything from eliminating the US Department of Education or scaling back its role, to returning all aspects of PreK-12 education to our states and their present support and control structures.

The latter proposal might be viewed, metaphorically, as the equivalent of making Texas' legislature the national advocate for evolution, and its governor our national advocate for excellence in math.

There are two overarching arguments for veering away from the referenced proposals:  First, there is nothing remotely "local" remaining in creating the learning needed by the US through this century -- arguments spanning the universality of contemporary knowledge to the employment of human resources that now count the world as the relevant domain.  Second, left to the isolated and uneven oversight of individual states, PreK-12 education can never be systemically equilibrated across the US.

A different model.

One model, to challenge conventional wisdom, and provoke debate, combines a change in USDOE roles, with a new institutional mechanism that respects states' rights but creates one US PreK-12 educational game plan.

It suggests that NCLB set the goals as national standards but eschew enforcement, and that USDOE roles be scaled back to focus only on:  Title 1 enforcement and any subsequent Congressional mandates meeting similar standards; funding and conducting research on topics applicable to improving learning at the functional level; and non-partisan data generation that would fully inform all states of their systems' properties and performance using common definitions and data collection.  Policies on curricula, standards of learning, knowledge based on our accepted national academies' oversight of rigor, and common preparation and certification standards for both teaching and administrative human resources, would vest in a new function.

The above would have to be created by Congress (yes, an immense stretch) but might be viewed as a national consortium for PreK-12 education, representing and governed by the 50 states and the District, reflecting independence similar to the Federal Reserve or other quasi-federal programs that avoid some of the biased consequences and temporal volatility of present politically-driven departmental control.

Heretical, perhaps; but given the present morphology of NCLB and reform misdirection and gaffes, one variant to reform a system that in all likelihood cannot be materially changed much less improved as institutionally constituted.


Appendix

Below are the factors that could be scored, and the information used for scoring, based on available education data from the US Department of Education -- National Center for Education Statistics, the National Education Association, US News, and US Census’ American Community Survey.

Factors and scores:  

Best high schools, where 49 states were ranked, and metaphorical medals (bronze, silver or gold) were awarded:  US bronze or better = 9.3%; Ohio bronze or better = 5.6%; Massachusetts bronze or better = 10.7%.  Ohio ranked 39th out of the 49 states assessed, Massachusetts ranked 2nd.   (US News)

Average daily attendance as a percent of fall enrollment:  US overall = 95%; Massachusetts = 94.3%; Ohio = 86.9%.  (NEA)

Dropout factories – percent of schools with a promoting power ratio of 60% or less:  US = 1.6%; Massachusetts = 1.3%; Ohio = 1.7%.  (NCES)

Number of high school graduates as a percent of 9-12 enrollment:  US = 20.7%; Massachusetts = 21.8%; Ohio = 21.2%.  (NEA)

Average salaries of public school teachers, index where US = 100:  Massachusetts = 125.5; Ohio = 101.4.  (NEA)

Change in public school teacher salaries (in constant dollars) from 1999/2000 to 2009/2010:   US = +3.5%; Massachusetts = +16.6%; Ohio = +5.9%.  (NEA)

Public school revenue per student:  US = $11,841; Massachusetts = $16,150; Ohio = $9,889.  (NEA)

Percent of revenue for public schools from state government:  US = 45.3%; Massachusetts = 41.4%; Ohio = 45.1%.  (NEA)

Percent of revenue for public schools from Federal sources:  US = 11.1%; Massachusetts = 7.6%; Ohio = 8.6%.  (NEA)

Current expenditures for public K-12 schools per student as a percent of the US average:  US = 100; Massachusetts = 139.5%; Ohio = 90.0%.  (NEA)

Per capita state and local capital spending for K-12 public schools:  US = $231; Massachusetts = $102; Ohio = $224.  (NEA)

K-12 teachers as a percent of the total instructional staff:  US = 87.1%; Massachusetts = 88.8%; Ohio = 80.3%.  (NEA)

K-12 administration as a percent of total instructional staff:  US = 5.5%; Massachusetts = 3.7%; Ohio = 5.7%.  (NEA)

Average percent of students at or above proficient in NAEP testing on math:   US = 35.5%; Massachusetts = 54.5%; Ohio = 40.5%.  (NCES)

Average percent of students at or above proficient in NAEP testing on reading:   US = 30.5%; Massachusetts = 46.0%; Ohio = 36.0%.  (NCES)

Average percent of students at or above proficient in NAEP testing on science:   US = 27.0%; Massachusetts = 39.5%; Ohio = 35.0%.  (NCES)

Average percent of students at or above proficient in NAEP testing on writing:   US = 29%; Massachusetts = 45%; Ohio = 35%.  (NCES)

Percent of free lunch eligible students:  US = 37.8%; Massachusetts = 27.4%; Ohio = 33.9%.  (NCES)

Pupil/teacher ratios across all grades as a percent of the US, where US = 100:  Massachusetts = 83%; Ohio = 106%.  (NCES)

Policy on use of state standards in selection of textbooks:  US – 15/50 states require; Massachusetts – no; Ohio – no.  (NCES)

State requires parental notification of out-of-field teachers:  US – 6/50 states require; Massachusetts – no; Ohio – no.  (NCES)

Percent of charter schools:  US = 5.0%; Massachusetts = 3.4%; Ohio = 8.5%.  (NCES)

Districts required to align professional development with local priorities and goals:  US – 31/50 states require; Massachusetts – yes; Ohio – no.  (NCES)

State provides incentives for teachers to earn National Board certification:  US – 31/50 states have the provision; Massachusetts – yes; Ohio – no.  (NCES)

Percent of a state’s graduates taking the SAT:  US = 47%; Massachusetts = 86%; Ohio = 21%.  (NCES)

Average SAT scores for Ohio versus Colorado, based on comparable participation from both states:  Colorado = 565; Ohio = 536.  (NCES)

State requires statewide social studies assessment:  US – 25/50 states require; Massachusetts – yes, at 5 and 10-11; Ohio – no.  (NCES)

Facilities acquisition and construction as a percent of total school expenditures:  US = 9.7%; Massachusetts = 5.2%; Ohio = 9.4%.  (NCES)

Teachers as a percent of total staff:  US = 50.5%; Massachusetts = 56.8%; Ohio = 45.7%.  (NCES)

Schools use unique student identifiers:  US – 43/50 do; Massachusetts – yes; Ohio – no.  (NCES)

Schools have the capacity to communicate with the state’s higher education systems:  US – 33/50 states do; Massachusetts – yes; Ohio – no.  (NCES)

Percent distribution of school expenditures to instruction:  US = 65.8%; Massachusetts = 69.8%; Ohio = 63.5%.  (NCES)

Percent distribution of school expenditures for administration:  US = 10.8%; Massachusetts = 7.7%; Ohio = 13.2%.  (NCES)

Percent of adults who have completed at least a bachelor’s degree (most recent data):  US = 26.7%; Massachusetts = 37.4%, ranked 2nd; Ohio = 23.3%, ranked 40th.  (NCES)

Note, not part of the rating scheme:  For the US, K-12 enrollments 2001-2011 are virtually flat, but expenditures per pupil enrolled have risen from approximately $7,500 in 2001 to $11,000 in 2011; classroom teachers grew from 2.95MM in 2001 to 3.25MM in 2011.  In turn, K-12 performance over the same period improved only marginally as measured by the NAEP.  (NEA and NCES)

Note, not part of the rating scheme:  Though 50/50 states have standards for technology, only 21/50 require some record of achievement for licensure for teachers, and only 10/50 require that for licensing administrators.  (NCES)

Monday, November 7, 2011

EARLY WAKE UP SQUINTS 11/7/2011: TRUTH AND CONSEQUENCES

Dr. Diane Ravitch, in the just published "revised and expanded" edition of the perceptive, courageous and best-selling The Death and Life of the Great American School System:  How Testing and Choice Are Undermining Education, relates with great precision how several major metropolitan school systems have been undermined by so-called reform efforts based heavily on use of standardized tests as the magic bullet.

Parenthetically, an article in today’s Washington Post raises questions about the full validity of the recently released 2011 NAEP testing that has been considered the gold standard in assessing US PreK-12 educational progress every two years.  Its author, Dr. James Harvey, is a highly principled former professor, researcher and educational advocate who was part of the team that prepared “A Nation at Risk,” and presented it to former President Reagan.  He held his silence until Mr. Reagan’s death, and only then revealed his dismay and disappointment when Mr. Reagan’s primary reaction was disappointment that the report didn’t call for dissolution of the US Department of Education and restoration of school prayer.

The Ravitch targets.

Dr. Ravitch’s historical assessments are meticulous, and retain objectivity that leaves it to the reader to sort many values and issues footing the history.
 
Her critique of this century’s reform efforts carefully details multiple stories: The misinterpretation of the early reform results from NYC’s District 2; San Diego’s odyssey from a good school system, through a top down reform movement that diminished the public school performance that had been achieved, to a post-book recovery of PreK-12 excellence now threatened by the recession and funding shortfalls; the subsequent New York City reform attempts and denouement; the Bush/Perry “Texas miracle” that was primarily an artifact; and Atlanta’s claims of PreK-12 excellence now wracked by the reality that it was achieved by wholesale cheating.  She, mercifully, runs out of pages before tackling intimately what occurred and is still methodically eroding D.C.’s schools. 

The book’s critiques of the present reform effort and its demagoguery are needed straight talk, but the causes of the creeping obsolescence and petulance of some of US public education have been decades in the making. The narratives lose impact in any retelling; the book needs to be absorbed.

Finally, this is not a review of her book, even if this writer had that perspicacity.  In my opinion her work reflects the best of the American academic tradition, accompanied by the courage to challenge an establishment doing potential PreK-12 strategic harm, with tentacles extending all the way to the White House.  There are few if any quibbles with what Dr. Ravitch has asserted; the few, generically, are that the work skirted some issues as critical to actually changing PreK-12 as those detailed.  What follows are brief opinions about those items.

Local is beautiful?

The attribute that circumscribes her narratives is that the systems examined are all metropolitan systems.  Arguably, those are the US systems most likely to exhibit the greatest needs because of poverty, racial discrimination, extreme school experiences, and the stress exerted on those subpopulations and cultures by a city environment.  A qualifier is that only 29 percent of public PreK-12 enrollment is categorized as “city.”  An additional 34 percent of public enrollment is suburban, some amalgam of urban system attributes and systems associated with more affluent residential populations.  Ravitch’s critiques may speak to the school issues of 40 to 50 percent of our schools, but not a clear majority.  Her work also centers on schools that receive oversight from some body other than an elected school board, or from boards whose seats attract real contests and the requirement that candidates defend their beliefs and competence to be elected.

At the other pole, almost 37 percent of public school enrollment is in towns and rural areas, accounting for 18MM public school students.  These systems are virtually all governed by local school boards, where many of those boards are manipulated by their systems, and the candidates for those boards can range from legitimately qualified by prior education to functionally illiterate.  The real-world stories about marginal educational practices among this class of system are legion, belying the “Norman Rockwell” imagery of America’s small towns.  Bigotry, ignorance and self-centricity can achieve levels rivaling or exceeding the stories Ravitch’s book relates.  Usually the place’s school system, to the detriment of its students, is in the thick of exercises in local power and the prosecution of rooted beliefs that may be faulty.

It would be welcome to be able to say with pride -- it can’t happen here.  The issue is that it already has, to the detriment of likely more than one educationally-cheated generation of area youth.  One area system’s history includes a series of ethically and educationally challenged superintendents and marginal teaching and curricular resources, overseen by a board that fails in integrity, critical thought, and genuine education values -- failure to properly vet administrative hires, misrepresentation of system financial data to secure levies, secrecy, unresponsiveness to parents and voters, and manipulation of board elections via cronyism are just examples.

Superficially attractive, this is a system that has for a decade shunned transparency, eschewed or faked real technological progress, and, via teaching to the tests to claim excellence based on simplistic test scores, created and perpetuated a myth that is now an addiction.  The fear of transparency has produced behavior that allegedly violates the state’s open door and records acts, and created malfeasance ranging from incompetent curricula to administrative behavior that can only be described as sociopathic avoidance of review.  Couple this system’s retreat from reality with the state’s equivalent incompetence in its education oversight, and you have bred public education mediocrity that will never be assuaged by the standardized testing mantra Ravitch also shreds as the fix for those urban systems.

Clearly, among over 40,000 town and rural schools, there are systems subscribing to ethical standards and competently educating, and school boards with equally competent and ethical elected members.  But like the example poster child of a challenged system, sprinkled among that number is some unknown number of schools with combinations of the flaws cited:  local cultures frozen in time, where there is no effective oversight of inept or arrogant administration, and where a system’s bricks and mortar, hype and propaganda, and sports performance frequently dominate and trump learning.

Local is not beautiful in many of America’s alleged storybook small communities, where reform can be effectively blocked because of incompetent local control, or no control at all.  While the systems Dr. Ravitch primarily cites as America's reform challenges are the ones highly visible to the media, an equivalent block of PreK-12 schools out of the spotlight is also a roadblock to stepping up US education performance.  Reflecting on Ravitch’s calls for reform of urban systems, it may take a different reform model, plus incentives and discipline, to impact middle America’s “pride in ignorance” school belts.

The business of business.

In the middle of last century a conservative academic economist named Frank Knight published a classic paper titled, “The Business of Business is Business, Not Doing Good;” born 50 years too soon?  A bit of a rant about the difference between market values and public administration, the otherwise lucid paper at the time was a reflection of the initial reaction to an invasion of liberal values.  What the paper failed to anticipate was the separation of goals and means, and creative thinking that would overtake organization and management theory in the subsequent decades.  We may still quarrel about the goals of business organizations, but the genre spawned a revolution of both academic and high level practitioner thinking about organizations and the deployment of human resources, systems theory, along with methods of planning, control, and performance measurement, all applicable to any organization and well beyond the narrower practice of business in markets you recognize.

The Death and Life…, correctly, hammers the simplistic corporate reasoning and lobbying that foot much of the current orgy of standardized testing being forced on public education, along with its promotion by the “billionaire boys’ clubs.”  In the process the negatives of alleged management approaches applied to public PreK-12 operations crash into the present, century old organization of our schools, creating noise but little wisdom.  There have been ridiculous superficial applications of functional reorganization of public schools – Dr. Ravitch properly ridicules some – but the real opportunity is recognition of the advances in social psychology wedded to organization theories that have revolutionized the performance of many US and world companies.  The thinking and models, based on process analysis rather than what’s trendy or titles on boxes, are fully applicable to public education.  Until that is recognized and some of that knowledge customized to restructure school organization, unthinking tradition will continue to restrain PreK-12 change.

Curricula versus knowledge.

Dr. Ravitch makes telling points in showing how in a rush to allegedly improve learning, present reform modes have bypassed virtually in total the very sources of the learning, curricular reform.  The rare exceptions have been the National Academies – science, engineering, medicine, and research council – that have called for rethinking curricula to move from teaching (and testing) fragments of knowledge to focus on core understanding of major disciplines and science-driven processes.  The concept is still incipient for the social sciences but applies there as well.

The question is whether we are looking at the real heart of learning.  At issue are what constitutes knowledge and a contemporary, defensible understanding of learning, i.e., modeling learning from the first year of life on as a multivariate process -- temporally, by personality variables, by socioeconomic variables, by cultures -- to prescribe the most effective mix of processes to achieve learning and its subsequent retrieval and application.  Epistemology (the science of knowing), in particular the whole concept of conceiving (many, small and early) controlled experiments and the proper interpretation of their results, should foot everything we do in PreK-12.  This is precisely the opposite of the simplistic testing and methods currently being deployed.  Notably, we have failed systemically to pursue alternative testing models that would obsolete present standardized tests, a major US education research blunder.

The role of technology.

Driven by the need for controversy, or an individual byline, there are increasingly media offerings by sources a dollar short and a day late in understanding the evolution of digital technology, to the effect that developments such as online learning, or the various hardware being adopted by education, are unproductive or not functioning in PreK-12.  The reality is that many of these commentators are confusing things and function.  The architect Louis Sullivan’s dictum resonates:  “form follows function.”

This is too complex for only a fractional SQUINTS -- a full section is promised later -- but two top-line points:  One, if you follow the literature of science, engineering, and contemporary economics, a proposition is that the best hope the US has of strategic economic recovery is the major pool of expansible technologies building just below the threshold of applicability, a product of decades of prudent US investment in good science (if it is not crippled by the ignorance of the right wing of an already mentally and ethically challenged Congress).  Two, digital technology applied to PreK-12 is not rooted simply in hardware and gadgets -- it never has been -- but in reformulation of logic and methodology that can expand education systems and learning.  Those technologies have to be fitted or dovetailed with how knowledge is constituted, and with human resources' roles, both teaching modes and student learning needs and styles.  This is both curricular reform and, integrally, the methodology for transforming information into knowledge and making it retrievable and applicable to critical thinking, problem solving and invention.

The digital game has changed the larger learning game, and irreversibly, whether it is presently a comfortable playing field or not for many educators; Dr. Ravitch missed this target.  The issue is the cross-discipline work to marry the capabilities of those evolving technologies, including incipient AI (artificial intelligence), with human factors.  When the Turing test starts eliciting "hello, who is this?" responses, there is a need to open up thinking.

Minor gripes about major issues.

Opinion, but in this writer’s view Dr. Ravitch let two dysfunctional factors in the present school change debates off the hook:  The first is Mr. Obama, who has demonstrated repetitive bouts of hypocrisy, publicly acknowledging the inadequacy of present simplistic standardized testing on any Monday, then blasting out even more aggressive advocacy of that testing on a subsequent Tuesday; perhaps politics-as-usual, or a rare intellectual failing, but the mode has been destructive of reason and of a positive, rather than punitive, approach to public PreK-12 change.

The second rates far more coverage at a later date, but of all factors impacting why in this decade change is needed, as well as whether even change achieved will be sustainable, what should be in everyone’s sights are the near intellectual bankruptcy and bunker mentality of the bulk of American schools of education.  For practicality, dismissing “Teach for America” and similar peripheral experiments because of their scale, the majority of future teachers in public PreK-12 will come from those same schools of education.  Unless they are changed as the first line of attack for reform of both US higher education and PreK-12, America will have both elementary/secondary and higher education at some future point in intensive care.

Lastly.

In her concluding chapter Dr. Ravitch notes:  “Education is a reflection of our society.”  Full of truth, and a pretty scary thought, given the levels of literacy, critical thinking, strategic thinking, and the world view manifested by too many of our citizens.  Repetitive surveys have demonstrated our adult population’s inability to pass simple tests on citizenship, to read for effect or read at all, and belief in magic; add breaking away from “Dancing with the Stars” long enough to notice that America is in both strategic economic straits and precipitating potential class warfare.  Education, but not simply the present model, is one prescription for restoring America’s eminence, and it likely has greater potential than destroying its governments.

For the many who see thinking as optional, Dr. Ravitch’s book and views may seem antithetical, or a set of isolated examples.  The reality is that the narratives supplied are US reality, one that the full American electorate needs to absorb in order to reapply some common sense to its PreK-12 public schools before we push them into an intellectual coma.

For the non-Luddites, The Death and Life… can be acquired online in seconds via either Apple’s online bookstore, or via Amazon and Kindle.  The book is categorically worth the price of admission.  

Tuesday, November 1, 2011

SQUINTS, PS2: THIRD WHY - KNOWING K-12?

What do we really know about our K-12 public schools, and why the black box?  

The really short answer is, very little, in spite of the NCES (National Center for Education Statistics), and the contribution of the NAEP (National Assessment of Educational Progress) representing the best of current selective K-12 learning assessment.  (Its 2011 Mathematics and Reading results were issued Tuesday, November 1 at 11 AM, referenced later.)

The reasons are varied:  first, the limited scope and timeliness of school data collected by the Federal government in spite of NCES/NAEP; second, the quixotic state control of public education; and third, the mixed integrity of the local leadership and oversight of those 88,000 public schools.

Perhaps a further reason is our society’s – including its professional bureaucrats’ and legislators’ – aversion to math in general and to the discipline required to understand the data that continuously flow from our various institutions.  These data are now more easily collected, having become virtually automatic outputs of our digital capacities, but by the same token they increasingly represent a major task to digest and to exploit for their potential meaning.  The technical term for the branch of methods and algorithms that can extract that meaning is “data mining;” it sounds simple but encompasses substantial computational and modeling expertise.

Some counterpoint, however:  even sophisticated findings from large data sets can be extracted using first-order standard statistical methods that should be universal among our K-12 administrators and teachers, and universally taught in our K-12 schools, but are not.
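As a trivial example of what "first order" means here, the sketch below computes a single correlation from a hypothetical district-level extract; the file name and column names are placeholders, not an actual state data set.

```python
# One line of ordinary statistics: does the share of spending that reaches
# instruction track reading proficiency across districts?
import pandas as pd

df = pd.read_csv("district_data.csv")   # hypothetical extract of state data

r = df["pct_spending_on_instruction"].corr(df["pct_proficient_reading"])
print(f"Pearson r, instructional-spending share vs. reading proficiency: {r:.2f}")
```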

Washington’s contribution.

Given the role being played by Title 1 school mandates, by NCLB, and now for some states RttT, the assumption would be that the government that manages its nation’s census might be a rich, comprehensive source of school data.

There are two primary sources of Federal data for K-12 schools, the Census and NCES. 

School data collected by the US Census Bureau are summarized in the appended Exhibit A.  In addition, within Census’ 2010 American Community Survey (“factfinder2”) there are estimates by location of the standard demographics, plus education levels, and detail on alternate language speakers.  The operating issues with these data are their sampling basis (they do not represent a census) and the difficulty of relating their geographic boundaries to school districts, as well as their limited utility in studying school performance.

From Exhibit A, the School District Poverty Estimates might contribute to asking questions about school performance, but the latest data are for 2009; the School District Finance data, also currently 2009, are generally available in more current form from individual states; and the NCES Common Core data are current and highly accurate, but provide little more than the identity of schools, their location, and some very basic history and type classification data.

The acknowledged quality winner in present US school assessment data is NCES, its opening page linked here.  Be forewarned:  intellectual curiosity and a passionate belief in the necessity of maintaining and improving the US system of public schools may become a second career if you start opening the NCES windows.  It is redundant to try to summarize the rich documentation provided by NCES; its NAEP has become the accepted benchmark for determining whether our K-12 systems have generated selected learning effects.

The just released 2011 NAEP results for Mathematics and Reading, for 4th and 8th graders, in public and private schools, are linked here and here.  They compare the 2011 results with 2009 and prior measurements.

The NAEP paradox.

As expert and careful as its procedures are, and as beautifully detailed as the web site and data provisions are, the 2011 National Assessment results bear some reflection before galloping to any conclusions.  One experienced educator, writing about the results’ likely impact on the current debates over standardized testing, suggested that whatever their outcomes – math and reading improving or not – the results would be expressed as an endorsement of that testing.  If the scores are up, it was because of the testing; if scores went down or were flat, it was because there wasn’t sufficient testing.

At the risk of taking the intrigue out of self-discovery, the overall NAEP results for 2011 indicate that since 2009 4th and 8th grade math scores (may) have very marginally increased, and 4th and 8th grade reading scores (may) be flat.  “May,” because in spite of the care that goes into the elaborate, multi-stage sampling across states and schools, and the challenge of creating year-to-year comparability, there are sampling frame and material non-sampling error potentials.  The results you will see in the linked NAEP findings represent samples of from 4.5 to 5.7 percent of the 4th and 8th grade school universes, spread across the 50 states and representing less than a tenth of US K-12 schools.  In turn, the score changes you will view amount to fractions of a percent, assessed as significant only against the minimal standard considered rigorous for sample results.  The data are competently collected, but they represent tiny and fragile differences to be used with care.

The Washington Post's Education Section offered a similar assessment.
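To make the "tiny and fragile" point concrete, here is a back-of-envelope sketch with hypothetical numbers (not NAEP's published design effects):  with samples of NAEP's size, a one-point change on a 500-point scale can clear the usual p < .05 bar while amounting to a trivial effect size, and real NAEP standard errors are larger than this naive calculation because of the multi-stage, clustered sampling.

```python
# Naive two-sample z-test for a one-point scale-score change, assuming simple
# random samples; all numbers are illustrative assumptions, not NAEP values.
import math
from scipy import stats

n_2009, n_2011 = 160_000, 160_000     # assumed student sample sizes per year
sd = 30.0                             # assumed score standard deviation
mean_change = 1.0                     # a one-point scale-score change

se = sd * math.sqrt(1.0 / n_2009 + 1.0 / n_2011)   # SE ignoring clustering
z = mean_change / se
p = 2 * (1 - stats.norm.cdf(z))
cohens_d = mean_change / sd

print(f"z = {z:.1f}, p = {p:.3g}, effect size d = {cohens_d:.3f}")
```

Statistical significance at that scale says little about educational significance, which is the caution urged above.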

Two points of critique:  One, the lack of documentation of the sampling errors that accrue and compound in multi-stage sampling; and two -- disingenuous for NCES -- presentation of the NAEP graphic reports in a fashion that perceptually distorts the findings, making results appear more favorable at a glance than the proper metrics justify.

Beyond the technical difficulty of collecting the NAEP data, and the care in their reporting exercised by NCES, they offer little that could serve a high order study of correlates of public school system performance across the nation.  Basically the only classification variables reported were income effects, sex of student, racial effects, and public versus private school representation.  The almost obvious findings from these assessments were that performance as related to income, racial attributes, and sex of student reflect long term patterns not materially changed, and that math and reading performance within private schools continues to best public school achievement.

Good methodology, good procedures, care in tabulating and reporting, but the NAEP was not designed to try to seek causal factors that might be used to either infer how to improve learning at the individual system level, or assess specific school educational environment factors that may influence learning.

Were the NAEP model for summative assessment made a national requirement for all schools, coupled with collection of school environmental and operating data, the combination could provide the basis for data mining for inferential relationships with learning achieved, replacing the mashup of simplistic standardized testing that is distorting and potentially injurious to future US public education. 

State data -- digging is believing.

Given high expectations, state-by-state school data are even more discouraging than present Federal data for assessing factors causal of, or concomitant with, educational performance.

For the skeptics, the question can be a do-it-yourself project and high impact learning experience if you have a computer, some Internet bandwidth, and a fairly large chunk of time and patience.  It will be rewarded at least in knowledge and a healthy wake-up call.

Using your web browser, search on “(state name) department of education,” then open each official site, checking for data that describe openly and conveniently the demographics and performance parameters of each state’s systems in detail.  It will only take a few searches, including Ohio’s site, to start to discover that our states have cared little about informing the public about their schools.  With a search of all 50 states you will discover, additionally, there is no uniform manner in which a state’s systems’ data have been defined, gathered and presented.

There is one reasonable reference mark, the State with the longest history of education in our nation -- Massachusetts.  Its school demographics and system educational operating data are clear, comprehensive, and easily accessible.  The Massachusetts data are summarized in Exhibit B below, and the State’s web site is accessible here.

In sum, our states, for whatever combination of reasons, represent a totally disorganized source of the intelligent school data that could have properly guided NCLB had they been assembled and assessed before that program was ignorantly enacted and applied.  Ohio -- with a corrupted system of corporate and political cronyism affecting its education, with payoffs and manipulation in providing its schools everything from K-12 curricular materials to computers and even bandwidth, and with overt buffering of individual systems from transparency and accountability -- is an example of what state control with low standards or political motivation has bred.  Ohio’s school data have been on-again, off-again and not aligned historically, and from first-hand experience its Department of Education has a track record of stonewalling even on basic public data.

Repeating the earlier point, Congress could advance intelligent K-12 change by mandating NAEP as the periodic K-12 school census providing summative performance measures.  In turn, the Massachusetts model of explanatory variables is not perfect, but it comes close to providing the kind of data that could intelligently guide the allocation of change efforts.

Local is problematic.

Assessing the true performance of local systems is another matter.  The FBI, NSA, and Google likely have a better fix on your and your household’s life trail than a community has on what its K-12 school is actually doing: teacher preparedness, what it is teaching, use of funding, hiring, conspiring, handling of complaints, and, importantly, the beliefs and values of the administration that controls all that activity.  Rarely do local boards have or professionally seek that information, much less act on it to correct unethical and malfeasant performance.  The reality is that local systems frequently work overtime to obscure what they are actually doing with tax dollars and in the classroom, out of fear, or self-righteousness, or simple hubris.

Clearly this is not a universal trait, but little is known about how prevalent it is by any classification factor, from geography through local culture.  With scant research attempting to understand school system organizational behavior, there are few clues in the education literature.  The acid test is asking your local system for what is, in nearly every state, public information, governed by some form of open records act or law.  The results may surprise many taxpayers who naively assume that our education systems must be the most open of the public breed -- they are not.

A major factor in the failure of local system transparency is the paranoia that comes with the risk of being branded a failed school by NCLB.  Even systems that have operated with integrity are challenged to avoid defining lesson plans that amount to teaching to the tests.  Once initiated, the behavior feeds the need to shield the results from view.

But the opacity is also driven by poor management, compounded by inept local board oversight, prompting some school administrations to withdraw behind a wall to block citizen questions and critique.  US public K-12 teachers outnumber superintendents roughly 35 to one; even so, a fair suggestion is that for every five to ten teachers who are burned out, or slipped through a school of education and accreditation without adequate vetting, or are motivated primarily by dollars, there is at least one superintendent who should be removed or retrained.  For the reality is that our schools of education do not train good administrators in management, inadequate vetting by local boards permits the power-hungry, manipulative, and sociopathic to operate, and subsequent board oversight can itself be manipulated by a school’s administration.

One rudimentary test of a school board’s independence and integrity is amazingly simple:  just attend a board meeting, then review the minutes to determine whether they were written by the superintendent before the public meeting was ever held and merely distributed for signatures -- a step sometimes even neglected.  You may be astounded by the findings, or not.

Valid statistics trump damned lies.

The admonition Mark Twain popularized, “…lies, damned lies, and statistics,” was prompted by the frequent use in his time of marginal statistics to buttress a generally weak logical proposition -- the supposed magic of data.  In this century, and for the latter part of the last, the data available to test propositions and evaluate assertions have become far more robust.  But there is still ample opportunity, as the title of a popular mid-20th-century book put it, to “lie with statistics.”

But that provocative title aside, when databases exist that reflect the issues under debate, the most comprehensive knowledge that can be brought to bear -- more precise, credible, and representative than the ambiguity of language and value judgments -- is competent mathematical analysis of the quantitative data to assess relationships.  That is precisely what should already be on the table for America’s public K-12 systems and their performance attributes: inputs, alternative pedagogies, organizational and managerial strategies, assignment of resources, technology employed, uses of funds, input/output measures, and performance measures of their educational function.  Such data admittedly have limits in expressing the quality of inputs and outputs, but contemporary model building and the capacity to digitally encode qualitative factors have begun to address that limitation as well.
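
As one hedged sketch of what such an analysis could look like, suppose each district record paired a summative score with a few of the explanatory factors just listed (per-pupil expenditure, class size, teacher salary).  The Python below fits an ordinary least-squares model to stand-in data generated at random; the variables, magnitudes, and built-in relationship are invented for illustration, not findings.

# A minimal sketch, not an analysis: regress a district-level summative
# score on a few Massachusetts-style explanatory variables.  All data
# below are random placeholders; a real study would substitute the
# actual district file and a far more careful model.
import numpy as np

rng = np.random.default_rng(0)
n_districts = 600

# Hypothetical explanatory variables, one value per district.
per_pupil_spend = rng.normal(11_000, 2_000, n_districts)     # dollars
avg_class_size = rng.normal(22, 4, n_districts)              # students
avg_teacher_salary = rng.normal(58_000, 8_000, n_districts)  # dollars

# Placeholder outcome with an arbitrary built-in relationship plus noise.
score = (0.002 * per_pupil_spend
         - 0.8 * avg_class_size
         + 0.0005 * avg_teacher_salary
         + rng.normal(0, 10, n_districts))

# Ordinary least squares with an intercept.
X = np.column_stack([np.ones(n_districts), per_pupil_spend,
                     avg_class_size, avg_teacher_salary])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)

for name, b in zip(["intercept", "per-pupil spend", "class size", "teacher salary"], coef):
    print(f"{name:>16}: {b:+.4f}")

Even this toy version makes the larger point: once a common dependent variable and common explanatory variables exist, the relationships become testable rather than rhetorical.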

A large number of partially understood or unaddressed K-12 performance relationships could be tackled with a universal school database (referenced above, using the NAEP model as the dependent variables) containing factors similar to those measured by the Massachusetts system.  Those findings then become the logical scaffolding for the next level of explanation: conceiving and executing controlled experiments or quasi-experiments to assess how to create more effective classroom learning, and finally getting a handle on alternatives for measuring genuine learning outcomes.

Importantly, those experiments actually start from the inside out, conceptualizing how both the targeted variables and the variables confounding explanation would need to be assessed in order to infer significant cause and effect with high reliability.  That leads to identifying the research design and analyses required to account for the factors at work in any such trial.  Such research designs and prudent use of statistical models are the desiderata of future educational studies if there is to be greater confidence in propositions about learning methodology. 
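
To illustrate the planning arithmetic such designs demand, the sketch below applies the standard normal-approximation formula for the per-arm sample size of a simple two-arm trial; the effect size, outcome standard deviation, and error rates are all assumed for illustration.

# A hedged sketch of the planning arithmetic behind a two-arm classroom
# experiment: how many students per arm are needed to detect an assumed
# effect with conventional error rates?  All inputs are illustrative.
import math
from statistics import NormalDist

alpha, power = 0.05, 0.80    # conventional Type I error rate and power
sigma = 15.0                 # assumed standard deviation of the outcome
delta = 3.0                  # assumed true difference worth detecting

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided test
z_beta = NormalDist().inv_cdf(power)

# Per-arm sample size for comparing two independent means.
n_per_arm = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
print(f"students needed per arm: {math.ceil(n_per_arm)}")

The design question, in other words, becomes answerable arithmetic once the assumed effect and the noise it must be detected against are stated honestly.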

It sounds like a case for hard thinking, and for the reality that good research and explanation are demanding and precise work -- perhaps why an evolving “instant gratification” society, many professionals included, keeps coming up short on explanation in education.


Exhibit A - US Census School-Related Data

The U.S. Census Bureau develops demographic, economic, geographic, and fiscal data for school districts, and many of these data collection activities are conducted in cooperation with the National Center for Education Statistics (NCES), part of the U.S. Department of Education's Institute of Education Sciences. The U.S. Census Bureau does not collect student achievement information and it does not provide data that identifies characteristics of individual students or staff members.

School District Boundaries

The U.S. Census Bureau's Geography Division updates school district boundaries every other year as part of the School District Review Program. This initiative provides boundaries for the production of school district demographic estimates, and also provides school district boundary layers for the Census Bureau's TIGER/Line spatial data products.

School District Finance

As part of the Annual Survey of Government Finances, the U.S. Census Bureau's Governments Division collects fiscal data from a wide variety of local governments and agencies with taxing authority, including school districts.

School District Poverty Estimates

The U.S. Census Bureau's Small Area Income and Poverty Estimates program (SAIPE) produces annually updated school district poverty estimates to support the administration and allocation of Title I funding under the No Child Left Behind Act of 2001 (the most recent reauthorization of the Elementary and Secondary Education Act of 1965). These data include estimates of total population, number of children ages 5 to 17, and number of related children ages 5 to 17 in families in poverty.

Demographic and Geographic Estimates

The U.S. Census Bureau's Education Demographic and Geographic Estimates project (EDGE) produces a variety of geodemographic data for the National Center for Education Statistics. It is the primary source of custom tabulated school district demographic data from the decennial census and the American Community Survey.


Exhibit B – Massachusetts School System Data

Accountability Report
Advanced Placement Participation
Advanced Placement Performance
AMAO Report
Class Size by Gender and Special Populations
Class Size by Race/Ethnicity
Enrollment by Grade
Enrollment by Race/Gender
Enrollment by Selected Population
Graduates Attending Higher Ed.
Graduation Rates
MCAS Participation
MCAS Performance Results
MCAS Test Item Analysis
Mobility Rate Report
Per Pupil Expenditure
Plans of High School Grads
SAT Performance
Special Education Results
Staffing Age Report
Staffing Data by Race/ Ethnicity and Gender
Student Indicators
Teacher Data Report
Teacher Grade and Subject Report
Teacher Program Area
Teacher Salaries
Technology