Tuesday, November 1, 2011

SQUINTS, PS2: THIRD WHY - KNOWING K-12?

What do we really know about our K-12 public schools, and why the black box?  

The really short answer is: very little, in spite of the NCES (National Center for Education Statistics) and the contribution of the NAEP (National Assessment of Educational Progress), which represents the best of current selective K-12 learning assessment.  (Its 2011 Mathematics and Reading results were issued Tuesday, November 1 at 11 AM, referenced later.)

The reasons are varied:  first, the limited scope and timeliness of the school data collected by the Federal government in spite of NCES/NAEP; second, the quixotic state control of public education; and third, the mixed integrity of the local leadership and oversight of those 88,000 public schools.

Perhaps a further reason is our society’s – including its professional bureaucrats and legislators – aversion to math in general, and to the discipline required to understand the data that continuously flow from our various institutions.  These data are now more easily collected, having become virtually automatic outputs of our digital capacities, but by the same token they increasingly present a major task: digesting the data and exploiting their potential meaning.  The technical term for the branch of methods and algorithms that can extract that meaning is “data mining”; it sounds simple but encompasses substantial computational and modeling expertise.

Some counterpoint, however: even sophisticated findings can be extracted from large data sets using first-order standard statistical methods, methods that should be universal equipment among our K-12 administrators and teachers, and universally taught in our K-12 schools, but are not.
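
As a minimal sketch of what “first-order” means here, assuming nothing beyond Python’s standard library and a handful of invented per-school records (none drawn from any real district):

    import statistics

    # Hypothetical records: (percent of students in poverty, mean test score).
    schools = [(12.0, 281.5), (33.0, 268.2), (55.0, 259.9),
               (21.0, 275.0), (48.0, 262.3), (8.0, 285.1)]
    poverty = [p for p, _ in schools]
    scores = [s for _, s in schools]

    def pearson_r(xs, ys):
        """First-order workhorse: the Pearson correlation coefficient."""
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
        return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

    print("mean score:", round(statistics.mean(scores), 1))
    print("score spread (std dev):", round(statistics.stdev(scores), 1))
    print("poverty/score correlation:", round(pearson_r(poverty, scores), 2))

Nothing exotic: a mean, a standard deviation, and a correlation, yet together they already frame a testable question about poverty and performance.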

Washington’s contribution.

Given the role being played by Title I school mandates, by NCLB, and now for some states RttT, one would assume that the government that manages the nation’s census would be a rich, comprehensive source of school data.

There are two primary sources of Federal data for K-12 schools, the Census Bureau and NCES.

School data collected by the US Census Bureau are summarized in the appended Exhibit A.  In addition, within Census’ 2010 American Community Survey (“factfinder2”) there are estimates by location of the standard demographics, plus education levels and detail on alternate-language speakers.  The operating issues with these data are their sampling basis (they do not represent a census), the difficulty of relating their geographic boundaries to school districts, and their limited utility in studying school performance.

From Exhibit A, the School District Poverty Estimates might contribute to asking questions about school performance, but the most recent data are for 2009; the School District Finance data, also currently 2009, are generally available in more current form from individual states; the NCES Common Core data are current and highly accurate, but provide little more than the identity of schools, their location, and some very basic history and type-classification data.

The acknowledged quality winner in present US school assessment data is NCES, its opening page linked here.  Be forewarned: if you bring intellectual curiosity and a passionate belief in the necessity of maintaining and improving the US system of public schools, opening the NCES windows may become a second career.  It would be redundant to try to summarize the rich documentation provided by NCES; its NAEP has become the accepted benchmark for determining whether our K-12 systems have generated selected learning effects.

The just-released 2011 NAEP results for Mathematics and Reading, for 4th and 8th graders in public and private schools, are linked here and here.  They compare the 2011 results with 2009 and prior measurements.

The NAEP paradox.

As expert and careful as its procedures are, and as beautifully detailed as the web site and data provisions may be, the 2011 National Assessments bear some reflection before galloping to any conclusions.  One experienced educator, writing about the results’ likely impact on the current debates over standardized testing, suggested that whatever their outcomes – math and reading improving or not – the results would be expressed as an endorsement of that testing.  If the scores are up, it was because of the testing; if scores went down or were flat, it was because there wasn’t sufficient testing.

At the risk of taking the intrigue out of self-discovery, the overall NAEP results for 2011 indicate that since 2009, 4th and 8th grade math scores (may) have very marginally increased, and 4th and 8th grade reading scores (may) be flat.  “May,” because in spite of the care that goes into the elaborate multi-stage sampling across states and schools, and the challenge of creating year-to-year comparability, there are potential sampling-frame and material non-sampling errors.  The results you will see in the linked NAEP findings represent samples of 4.5 to 5.7 percent of the 4th and 8th grade school universes, spread across the 50 states and representing less than a tenth of US K-12 schools.  In turn, the score changes you will view represent fractions of a percent change, assessed as significant at the minimal standard considered rigorous in evaluating sample results.  The data are competently collected but represent tiny and fragile differences, to be used with care.
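
To see why “significant” and “tiny” can coexist, a back-of-the-envelope sketch; the means, standard deviations, and sample sizes below are hypothetical stand-ins, not the actual NAEP figures:

    import math

    # Hypothetical (not actual NAEP) summary statistics, for illustration only.
    mean_2009, sd_2009, n_2009 = 239.0, 29.0, 168000
    mean_2011, sd_2011, n_2011 = 240.0, 29.0, 175000

    diff = mean_2011 - mean_2009
    # Standard error of the difference, naively assuming simple random samples.
    se = math.sqrt(sd_2009**2 / n_2009 + sd_2011**2 / n_2011)
    print(f"difference = {diff:.1f} points, SE = {se:.3f}, z = {diff / se:.1f}")
    # With samples this large, even a 1-point change on a 500-point scale is
    # "statistically significant" -- detectable, but substantively tiny, and
    # NAEP's clustered design makes the true SE larger than computed here.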

The Washington Post's Education Section offered a similar assessment.

Two points of critique:  One, the lack of documentation of the sampling errors that accrue and are compounded in multi-stage sampling; and two, disingenuous for NCES, the presentation of the NAEP graphic reports in a fashion that perceptually distorts the findings, making results appear more favorable at a glance than the proper metrics justify.
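
The first point can be made concrete with the textbook design-effect approximation for clustered samples; the numbers below (students per school, intraclass correlation, total sample) are assumptions chosen only to show the order of magnitude involved:

    import math

    # Textbook approximation: deff = 1 + (m - 1) * rho, where m is the average
    # cluster (school) size and rho the intraclass correlation of scores.
    m = 60          # assumed students sampled per school
    rho = 0.20      # assumed intraclass correlation for test scores
    n = 170000      # assumed total students sampled

    deff = 1 + (m - 1) * rho
    print(f"design effect = {deff:.1f}")
    print(f"effective sample size = {n / deff:,.0f} of {n:,} students drawn")
    # Standard errors scale with sqrt(deff): ~3.6x larger here than a
    # simple-random-sample formula would suggest.
    print(f"SE inflation = {math.sqrt(deff):.1f}x")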

Beyond the technical difficulty of collecting the NAEP data, and the care in their reporting exercised by NCES, they offer little that could serve a high-order study of correlates of public school system performance across the nation.  Basically, the only classification variables reported were income effects, sex of student, racial effects, and public versus private school representation.  The almost obvious findings from these assessments were that performance as related to income, racial attributes, and sex of student reflects long-term patterns not materially changed, and that math and reading performance within private schools continues to best public school achievement.

Good methodology, good procedures, care in tabulating and reporting, but the NAEP was not designed to seek causal factors that might be used either to infer how to improve learning at the individual-system level, or to assess the specific school-environment factors that may influence learning.

Were the NAEP model for summative assessment made a national requirement for all schools, coupled with the collection of school environmental and operating data, the combination could provide the basis for mining inferential relationships with the learning achieved, replacing the mashup of simplistic standardized testing that is distorting, and potentially injurious to, future US public education.
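
As a sketch of what that mining might look like (not any existing NAEP analysis): synthetic data stand in for the universal database that does not yet exist, the four operating variables and their effects are invented, and scikit-learn is assumed to be available:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n_schools = 500

    # Invented school environment/operating variables; none of this is real data.
    X = np.column_stack([
        rng.normal(11000, 2500, n_schools),   # per-pupil spending, dollars
        rng.uniform(10, 30, n_schools),       # average class size
        rng.uniform(0, 70, n_schools),        # percent of students in poverty
        rng.uniform(0, 25, n_schools),        # teacher turnover rate, percent
    ])
    # Synthetic "summative score," built so poverty and class size matter most.
    y = (270 - 0.25 * X[:, 2] - 0.4 * X[:, 1] + 0.0005 * X[:, 0]
         + rng.normal(0, 3, n_schools))

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    names = ["spending", "class size", "poverty", "turnover"]
    for name, imp in sorted(zip(names, model.feature_importances_),
                            key=lambda t: -t[1]):
        print(f"{name:>11}: {imp:.2f}")

The point is not the particular algorithm; it is that ranked, data-driven candidate factors would replace guesswork about where school performance comes from.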

State data -- digging is believing.

State-by-state school data, given high expectations, are even more discouraging than present Federal data for assessing factors causal of, or concomitant with, educational performance.

For the skeptics, the question can be a do-it-yourself project and a high-impact learning experience if you have a computer, some Internet bandwidth, and a fairly large chunk of time and patience.  The effort will be rewarded, at least in knowledge and a healthy wake-up call.

Using your web browser, search on “(state name) department of education,” then open each official site, checking for data that describe openly and conveniently the demographics and performance parameters of each state’s systems in detail.  It will take only a few searches, including Ohio’s site, to start to discover that our states have cared little about informing the public about their schools.  With a search of all 50 states you will discover, additionally, that there is no uniform manner in which a state’s systems’ data have been defined, gathered, and presented.
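
If you want to mechanize the first step, a trivial sketch follows; the truncated state list and the Google search URL are merely illustrative conveniences, not a prescribed method:

    from urllib.parse import quote_plus

    # Extend this list to all 50 states for the full survey.
    states = ["Ohio", "Massachusetts", "Texas", "California", "New York"]

    for state in states:
        query = f"{state} department of education"
        print(f"https://www.google.com/search?q={quote_plus(query)}")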

There is one reasonable reference mark, the State with the longest history of education in our nation -- Massachusetts.  Their school demographics and system educational operating data are clear, comprehensive, and easily accessible.  The Massachusetts data are summarized in Exhibit B below, and the State’s web site is accessible here.

In sum, our states, for whatever combination of reasons, represent a totally disorganized source of intelligent school data that could have properly guided NCLB had they been assembled and assessed before that program was ignorantly enacted and applied.  Ohio, with a corrupted system of corporate and political cronyism affecting its education, and payoffs and manipulation in providing its schools everything from K-12 curricular materials, computers, even bandwidth, through overt buffering of individual systems from transparency and accountability, is an example of what state control with low standards or political motivation has bred.  Ohio’s school data have been on and off, not aligned historically, and from first-hand experience its Department of Education has a track record of stonewalling even on basic public data.

Repeating the earlier point, Congress could make an impact on intelligent K-12 change by mandating NAEP as the periodic K-12 school census providing summative performance measures.  In turn, the Massachusetts model of explanatory variables is not perfect, but it comes close to providing the kind of data needed to allocate change efforts intelligently.

Local is problematic.

Assessing the true performance of local systems is another matter.  The FBI, NSA, and Google likely have a better fix on your and your household’s life trail than a community has on what its K-12 school is actually doing: teacher preparedness, what it’s teaching, use of funding, hiring, conspiring, handling complaints, and, importantly, the beliefs and values of the administration that controls that activity.  Rarely do local boards have or professionally seek that information, much less act on it to correct unethical and malfeasant performances.  The reality is that local systems frequently work overtime to obscure what they are actually doing with tax dollars and in the classroom, out of fear, or self-righteousness, or simply hubris.

Clearly this is not a universal trait, but in fact little is known about how prevalent it is by any classification factor, from geography through local cultures.  With little research having attempted to understand school-system organizational behavior, there are few clues in the education literature.  The acid test is asking your local system for what is, almost universally by state, public information, governed by some form of public open records act or law.  The results may surprise many taxpayers who naively assume that our education systems must be the most open of the public breed – they are not.

A major factor in the failure of local-system transparency is the paranoia that can come with the risk of being branded a failed school under NCLB.  Even systems that have operated with integrity are challenged to avoid defining lesson plans that become teaching to the tests.  Once initiated, the behavior feeds the need to try to shield the performances.

But it is also driven by poor management, compounded by inept local board oversight, prompting some school administrations to withdraw behind a wall to block citizen questions and critique.  US public K-12 teachers outnumber superintendents 35 to one; however, a suggestion is that for every five to ten teachers who are burned out, or slipped through a school of education and accreditation without adequate vetting, or are incented primarily by dollars, there is at least one superintendent who should be removed or retrained.  For the reality is that our schools of education do not train administrators in management, their inadequate vetting by local boards permits the power-hungry, manipulative, and sociopathic to operate, and subsequent local oversight by boards can be manipulated by a school’s administration.

One rudimentary test of a school board’s independence and integrity is amazingly simple:  just attend a board meeting, then review your local board’s meeting minutes to determine whether they were written by its superintendent before the public board meeting was held and merely distributed for signatures, a step sometimes even neglected.  You may be astounded by the findings, or not.

Valid statistics trump damned lies.

Mark Twain’s admonition, “…lies, damned lies, and statistics,” was prompted by the frequent use in his time of marginal statistics to buttress a generally weak logical proposition: the magic of data.  In this century, and for the latter part of the last, the data available to test propositions and evaluate assertions have become far more robust.  But there is still opportunity, as the title of a popular mid-20th-century book noted, to “lie with statistics.”

But that provocative title aside, when there are databases that reflect issues subject to debate, the most comprehensive knowledge that can be brought to bear – more defining, credible, and representative than the ambiguity of language and value judgments – is competent mathematical analysis of quantitative data to assess relationships.  That is precisely what should already be on the table for America’s public K-12 systems and their performance attributes: inputs, alternative pedagogies, organizational and managerial strategies, assignment of resources, technology employed, uses of funds, input/output measures, performance measures of their educational function.  Such data hypothetically have limits in expressing the quality of inputs and outputs, but contemporary model building and the capacity to digitally transform qualitative factors have begun to address that limitation as well.
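
The simplest instance of assessing such relationships is ordinary least squares; in this sketch the district inputs, coefficients, and noise are all invented for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 300
    # Hypothetical district-level inputs (illustrative only).
    spending = rng.normal(11.0, 2.0, n)      # per-pupil spending, $ thousands
    poverty = rng.uniform(5.0, 60.0, n)      # percent of students in poverty
    score = 265 + 0.8 * spending - 0.3 * poverty + rng.normal(0, 4, n)

    # Ordinary least squares: score ~ intercept + spending + poverty.
    X = np.column_stack([np.ones(n), spending, poverty])
    beta, *_ = np.linalg.lstsq(X, score, rcond=None)
    print("intercept, spending, poverty coefficients:", np.round(beta, 2))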

A large number of partially understood or unaddressed K-12 performance relationships could be tackled with a universal school database (referenced above, using the NAEP model as the dependent variables) that contained factors similar to those being measured by the Massachusetts system.  Those findings would then become the logical scaffolding for the next level of explanation: starting to conceive and execute controlled experiments or quasi-experiments to assess how to functionally create more effective classroom learning.  Finally, get a handle on alternatives for measuring genuine learning outcomes.

Importantly, those experiments actually start from the inside out: conceptualizing how the targeted variables, as well as the variables confounding explanation, would need to be assessed in order to infer significant cause and effect with high reliability.  That leads to identifying the research design and analyses required to account for the factors at play in any such trial.  Such research designs and prudent use of statistical models are the desiderata of future educational studies if there is to be greater confidence in propositions about learning methodology.
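
One question such inside-out planning forces early is how large a trial must be.  A hedged sketch of the standard sample-size calculation for a cluster-randomized design, with every input below an assumption chosen for illustration:

    import math

    # Every number here is an assumption, chosen only to illustrate the method.
    z_alpha, z_power = 1.96, 0.84   # two-sided alpha = .05, power = .80
    effect = 0.20                   # target effect size, in student-level SDs
    m, rho = 50, 0.15               # students per school, intraclass correlation

    deff = 1 + (m - 1) * rho        # design effect for cluster randomization
    students_per_arm = 2 * ((z_alpha + z_power) / effect) ** 2 * deff
    schools_per_arm = math.ceil(students_per_arm / m)
    print(f"~{schools_per_arm} schools (~{schools_per_arm * m:,} students) per arm")

Under these assumptions a credible trial needs dozens of schools per arm, which is exactly why casual single-school "pilots" rarely settle anything.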

Sounds like a case for hard thinking, and for the reality that good research and explanation are in fact demanding and precise work; perhaps that is why an evolving “instant gratification” society, even encompassing many professionals, keeps coming up short on explanation in education?


Exhibit A - US Census School-Related Data

The U.S. Census Bureau develops demographic, economic, geographic, and fiscal data for school districts, and many of these data collection activities are conducted in cooperation with the National Center for Education Statistics (NCES), part of the U.S. Department of Education's Institute of Education Sciences. The U.S. Census Bureau does not collect student achievement information and it does not provide data that identifies characteristics of individual students or staff members.
School District Boundaries
The U.S. Census Bureau's Geography Division updates school district boundaries every other year as part of the School District Review Program. This initiative provides boundaries for the production of school district demographic estimates, and also provides school district boundary layers for the Census Bureau's TIGER/Line spatial data products.
School District Finance
As part of the Annual Survey of Government Finances, the U.S. Census Bureau's Governments Division collects fiscal data from a wide variety of local governments and agencies with taxing authority, including school districts.
School District Poverty Estimates
The U.S. Census Bureau's Small Area Income and Poverty Estimates program (SAIPE) produces annually updated school district poverty estimates to support the administration and allocation of Title I funding under the No Child Left Behind Act of 2001 (the most recent reauthorization of the Elementary and Secondary Education Act of 1965). These data include estimates of total population, number of children ages 5 to 17, and number of related children ages 5 to 17 in families in poverty.
Demographic and Geographic Estimates
The U.S. Census Bureau's Education Demographic and Geographic Estimates project (EDGE) produces a variety of geodemographic data for the National Center for Education Statistics. It is the primary source of custom tabulated school district demographic data from the decennial census and the American Community Survey.


Exhibit B – Massachusetts School System Data

Accountability Report
Advanced Placement Participation
Advanced Placement Performance
AMAO Report
Class Size by Gender and Special Populations
Class Size by Race/Ethnicity
Enrollment by Grade
Enrollment by Race/Gender
Enrollment by Selected Population
Graduates Attending Higher Ed.
Graduation Rates
MCAS Participation
MCAS Performance Results
MCAS Test Item Analysis
Mobility Rate Report
Per Pupil Expenditure
Plans of High School Grads
SAT Performance
Special Education Results
Staffing Age Report
Staffing Data by Race/Ethnicity and Gender
Student Indicators
Teacher Data Report
Teacher Grade and Subject Report
Teacher Program Area
Teacher Salaries
Technology
    
