Saturday, December 31, 2011

SQUINTS 12/31/2011 -- K-12 EOY RETROSPECT & PROSPECT

This is an end-of-year wrap of loose ends.  The first item is a conclusion on how Ohio is executing its K-12 education train wreck, with a stealth “charter school” overdose, plus a rating scheme that is a giant hole below the waterline of Ohio’s K-12 schools.

Next, briefly upbeat, is a perspective on the step beyond traditional high-order thinking in education; what K-12 could be doing, on top of what it should already be doing, if it weren’t flogging test scores.

Lastly, a look at what 2011 delivered to K-12 education, and some predictions on what 2012 might bring.

Ohio’s Remains of the Year

The Wobbly ODE Performance Index

The last SQUINTS unfolded an analysis of 650 Ohio school systems, comparing ODE’s PI, or Performance Index, based on NCLB-style standardized testing, with schools’ 2011 ACT results.  Though hardly a perfect index of learning outcomes, the ACT has validation and a track record to recommend it, and is arguably a better index of district quality than the PI.

The results are clear; the PI does not credibly track ACT or SAT results, implying that claims or choices made on the basis of a school district’s PI ranking, or the prior qualitative designations, are generally wrong enough to constitute terrible school policy.

Ohio’s Charter Scam

Almost unseen in national press coverage of the charter movement is Ohio’s charter school binge, not so forthrightly billed as “community schools,” 356 and counting now listed by ODE.  A better designator might be “corporate schools,” or Ohio’s ghost schools, by virtue of their sponsorship and promoters.

One of the most prolific sponsors of Ohio’s charters is the Ohio Council of Community Schools, created unilaterally in the late ‘90s by the University of Toledo’s Board of Trustees, and featuring an educationally light and undistinguished management.  What education experience resides there appears to have vocational and corporate ambience.

Also prominent throughout the 356 charters is the presence of four consultants in particular: Paul Preston, John Wilhelm, Don Urban, and Jack Nairus, the “ghosts in the machine.”  Preston is ambiguously listed as an ODE consultant, yet promotes charters for Ohio’s Department of Education -- so much for ODE’s championing public education in Ohio.  John Wilhelm shows on the web as associated with Columbus, but with zero additional information.  Don Urban can’t be identified on the web.  Jack Nairus is listed as an employee of Manta, a tiny Cleveland company of 5-9 employees doing unspecified Internet promotion.  The K-12 educational expertise embodied in this group appears breathtaking.

Here is some Ohio “community school” performance skinny:  The bottom 10 percent of the 650 districts studied is half charter schools.  Though the bottom of Ohio’s school barrel is pretty dismal generally – ODE PI values mostly below 90, and ACT score averages mostly below 19 – the bottom five percent belongs exclusively to Ohio’s euphemistic “community school” block.

ODE reports on its website the list of Ohio community schools, but magically, its tab, “Sponsor Composite Performance Index & Reporting Compliance,” comes up dry, a search eliciting only the message, “This document has expired.”  Based on the performance of ODE’s other performance index, that might normally be seen as a blessing, except for the obfuscation of reporting on Ohio’s charters that it implies.

The complete database to assess Ohio’s charter explosion is not in hand, but from the prior analysis, where many of these charters surfaced in pruning the data for the PI versus ACT comparison, a large number of these schools appeared to reside at the bottom of Ohio’s educational barrel.  The Kasich Administration and ODE should be held accountable for a forthright presentation of the learning performance of these alleged schools.

The Purty Thirty

Obviously, there are competent school districts in Ohio, and superior performances.  To attempt to isolate the better schools, a combination of the PI, ACT and SAT scores was used to identify an alleged best 30 of the group studied.  The “Purty Thirty” were, in general order of rank based on data available (County: District/System):

Hamilton:  Indian Hill Exempted Village
Cuyahoga:  Chagrin Falls Exempted Village
Lucas:  Ottawa Hills Local
Franklin:  Upper Arlington City
Montgomery:  Oakwood City
Hamilton:  Wyoming City
Franklin:  Grandview Heights City
Summit:  Hudson City
Hamilton:  Sycamore Community City
Greene:  Cedar Cliff Local
Greene:  Yellow Springs Exempted Village
Franklin:  Bexley City
Licking:  Granville Exempted Village
Cuyahoga:  Brecksville-Broadview Heights City
Franklin:  Dublin City
Hamilton:  Mariemont City
Warren:  Mason City School District
Montgomery:  Centerville City
Greene:  Bellbrook-Sugarcreek Local School District
Cuyahoga:  Beachwood City
Summit:  Revere Local
Cuyahoga:  Solon City
Portage:  Aurora City
Warren:  Springboro Community City
Geauga:  Kenston Local
Hamilton:  Forest Hills Local
Cuyahoga:  Orange City
Darke:  Arcanum-Butler Local
Franklin:  New Albany-Plain Local
Shelby:  Russia Local

Flip Side – the Potentially Dirty Thirty

There was another class of district that popped out of the present analysis, school systems that had fourth quartile (highest) PI scores, but didn’t show ACT and/or SAT scores that aligned with the stratospheric PI rankings based on NCLB testing.  These are systems that pose question marks for ODE and their boards:  Specifically, are these systems that achieved their lofty PI positions by virtue of “teaching to the test,” the most common and insidious form of K-12 cheating that has been spawned by NCLB?

2011 has been punctuated by national K-12 test cheating scandals, ranging from supplying and changing answers on tests, to a Texas district almost unbelievably reducing its entire curriculum for years to feature only subject matter on its Texas/NCLB testing.  “T4” -- teaching to the test -- is more subtle, but just as corrupting of real education.  It employs selective use of texts, time in the classroom, and lesson plans to feature material most likely to score on NCLB standardized tests.  Egregious, it represents unethical school administration, plus a form of manipulation that likely cannot be identified by most school boards unless they seek an external educational audit of a school and its leadership.

The result, however, is just as corrupting as the more overt forms of cheating; schools’ students are shorted proper learning experiences, and in Ohio’s flawed system of oversight schools are rewarded for cheating.  What an epic and meaningful learning experience for Ohio’s youth?

The potentially “Dirty Thirty” Ohio systems, itemized below, fit the symptoms of “T4.”  There are other explanations for the gap between ACT/SAT results and their high PI positions, but they would be highly exceptional, and the burden of proof of legitimate education should be on the system and its school board.  The most obvious candidates (there are more but these led the ranking) for a careful critique of administration and pedagogy in 2012 are (County:  District/System):

Putnam:  Miller City- New Cleveland Local
Shelby:  Botkins Local
Trumbull:  Maplewood Local
Mercer:  Fort Recovery Local
Geauga:  West Geauga Local
Shelby:  Anna Local
Warren:  Wayne Local
Butler:  Ross Local
Summit:  Twinsburg City
Wayne:  Norwayne Local
Auglaize:  New Bremen Local
Lucas:  Toledo School For The Arts
Williams:  Edon-Northwest Local
Mahoning:  South Range Local
Trumbull:  Champion Local
Columbiana:  Columbiana Exempted Village
Mahoning:  Springfield Local
Clinton:  Clinton-Massie Local
Mahoning:  Western Reserve Local
Putnam:  Pandora-Gilboa Local
Clermont:  Goshen Local
Jefferson:  Steubenville City
Trumbull:  Howland Local
Seneca:  Old Fort Local
Clinton:  Blanchester Local
Fairfield:  Berne Union Local
Scioto:  Wheelersburg Local
Stark:  Perry Local

Will Ohio’s K-12 education systems change in 2012?  That is highly unlikely, given the extremely conservative Ohio Statehouse and Legislature, reflected as well in a politicized Ohio Department of Education.

Ironically, what is needed for all of Ohio’s schools is not political at all, and is in sync with private sector managerial excellence: a competent research effort to create a system for measuring the learning outputs of Ohio’s K-12 systems that encompasses more than NCLB standardized tests.  ODE under the Kasich Administration has demonstrated both analytical incompetence and a total lack of creativity in gathering and using available school data to create defensible ratings.  But perhaps that has been its upside?

A Learning Step Beyond

For a brief change of pace from 2011’s education doom and gloom, and before looking at K-12 in 2012, there was a welcome article on the prospects for getting creativity back into our classrooms.

SQUINTS’ readers will recognize the concept of Bloom’s Taxonomy of Learning, a widely accepted concept of how cognitive skills develop and are sequenced.  Paradoxically, while much of present K-12 labors to achieve only at the lowest level of learning that inhabits standardized testing, there are students of learning who are pushing beyond even Bloom’s higher-order mental operations of analysis and synthesis. 

The goal is to seek what numerous critics of America’s private sector have been calling for: greater creativity in US economic and educational processes.  It is a goal worth pursuing, given the finding that only about one-fourth of US college students are reported to have the reasoning skills to solve conceptual problems.  One can only speculate how dismal the same statistic is for many US K-12 schools.

A summary of recent research on creativity, grounded in neuroscience experiments rather than deduction, was reported in the 16 December 2011 US journal Science.  Written by an education researcher at Emory University, the review not only dissects the processes that differ between even high-level thinking and creative exercises, but also unfolds classroom pedagogies that can teach creative problem solving.  Two well-documented quotes from the journal article are provocative:
"Neuroscience experiments show that associative thinking is cognitively quite different from analytical problem-solving. Brain regions such as the right superior temporal gyrus are activated to a greater degree in subjects solving remote association problems by insight…in a functional magnetic resonance imaging scanner than in subjects solving problems by analytical reasoning. Associative thinking increases the probability of accessing weakly associated ideas."
"A creative insight, then, is a sudden, unexpected recognition of concepts or facts in a new relation not previously seen. Such creative insights often follow conceptual reorganization or a new, non-obvious restructuring of a problem situation. The mechanism whereby two ideas are blended or convoluted by insight-like mechanisms into a third novel idea by a process termed “conceptual integration” is an area of active research."
An early 2012 edition of SQUINTS will review the full effort noted, as part of an assessment of learning modes that can be exported to K-12 from higher education methodology and from corporate experiences, and by virtue of extant and emerging digital learning technologies.

EOY Review and 2012 Preview

2011 has been a busy year for the consortium of Obama, Duncan, Gates, the Republican Party, our corporate oligopoly of test makers and scorers, and a slew of public education-haters, working hard to disassemble US public education.  The unlikely combination of the US Department of Education as Darth Vader, tacitly cooperating with the political extreme right to undermine a century of stable K-12 education, is something that even the public education establishment has failed to grasp.  That public K-12 in far too many cases – with vacuous school boards, bureaucratic and sometimes ethically questionable school administrations, plus either inadequately trained or supported teachers – has helped create the present assault, is only marginally compensatory.

In a 12/31/2011 New York Times opinion piece, British author Geoffrey Wheatcroft commented on the present dismal time of "...political miscalculations and deceptions...and downright criminality," invoking something that has overtaken our society, termed "unknown knowns."  His definition:  "Unknown knowns were things that were not at all inevitable, and were easily knowable, or indeed known, but which people chose to 'unknow.'" The concept aptly describes the all-out assault on American public education, and our public systems' dismissal of the threat; the acquiescence is devastating American K-12 learning.

2011 Alice in Eduland

Some of 2011’s developments are worth repeating simply because they also defy common sense.

"Teach for America" has just been given a $50MM grant to expand its program to bring gifted college graduates into low-income classrooms; how can that be faulted?  Perhaps because:  (1) their total exposure to classroom education principles consists of a five-week summer institute; (2) at the end of their brief TFA commitment few have stayed in those classrooms; and (3) research is demonstrating that it takes five to six years of classroom experience to begin to develop the tools needed to mediate learning, including the ability to gauge a student’s prior knowledge, which has been demonstrated to be central to present learning.

The KIPP charter schools received a $25.5MM grant from the Walton Family Foundation to enter more school districts.

Obama’s recent “gift” of NCLB waivers requires an even more convoluted set of bureaucratic school requirements and deeper commitment to standardized testing.

The Charlotte-Mecklenburg School District field-tested on students 52 different standardized tests, to pick standardized tests in every subject so teachers can be evaluated on the scores.  Good gracious, where did they find time to actually teach?

True or false: on March 4, 2011, President Obama lauded America’s public schools?  False, he appeared in Miami with Jeb Bush to push “corporate education reform,” and, parenthetically, gladden the hearts of a slew of corporate standardized test creators and scorers and their attached lobbyists.

In the Administration’s key education initiative, “Race to the Top,” what was not included as a priority was “making sure kids have ample opportunity to learn through play.”

But the best of the bunch was a quote by education historian and one of the US’ best and brightest students of public education, Dr. Diane Ravitch:  “I’m beginning to think we are living in a moment of national insanity.”  Amen.

What’s Coming in 2012?

Linked here is a Washington Post piece by an experienced teacher and author who did some prognostication for the coming year.  Be advised it’s pretty grim, as he sees no abatement of the dogmatic effort underway to reform public education by sheer force of testing for the lowest levels of learning achievement, and failing that, by simply displacing public education with privatized K-12 schools.

Some 2012 New Year’s Resolutions

Lastly, some wishful thinking for the K-12 genre and its contestants.

Vow that Mr. Obama’s weekly reading list will actually feature some of the legitimate research on K-12 learning warehoused but unused at his own National Center for Education Research within his own US Department of Education.

Commit to finding Bill Gates a new hobby, and the Walton Family Foundation a new mission.

Provide “Teach for America’s” wunderkind a copy of Bloom’s Taxonomy of Learning, some serious mentoring, and a new contract more productive for America’s children.

Reform America’s collegiate schools of education, please, even if the rest of a fattened and rudderless higher education establishment goes merrily on its profligate way.

Vow that US K-12 public education will throw caution to the wind and seek the gift of both mind and spine.

Commit to replacing Arne Duncan as US Secretary of Education with a resource that actually understands that the “K” in K-12 education doesn’t really refer to Washington's “K Street.”

Vow to encourage someone to metaphorically trip over the power cord to Ohio's Department of Education, so they have to reboot the whole mess, in the process potentially creating some logic and consistency.

Finally, a resolution for Governor John Kasich:  For 2012, the commitment to advocate aggressively for educational literacy -- his own.

A Closing Perspective

An enormous gap has developed in the US, between genuine students of learning, and a mass of intertwined advocates (many lacking any education credentials) who believe that the current “corporate reform movement,” NCLB’s billy club, Bill Gates’ billions spent on his hobby to sort and spit out teachers, and under-the-table and politicized charter efforts like Ohio’s, are the way to confront America’s loss of K-12 educational performance.  What’s wrong with this picture?  One view is the ancient wisdom that the treatment may be worse than the disease, capable of maiming or killing the patient.

In the same vein of stepping back from the trenches to look at the larger picture, this unlikely cabal of reformers may actually have hold of a fragment of truth, because much of traditional US K-12 public education and its generally local oversight have become so entrenched, defensive and tentative about real learning that they can’t field initiatives for internal reform, or find the courage to execute them where contemporary educational literacy exists.

What is most dispiriting is observing K-12 public education, metaphorically squatting on its haunches, watching dully and with seeming ignorance, as the multitudinous army of alleged reformers and frequent charlatans methodically shoves the public K-12 herd toward the precipice.  That much of public K-12 can’t even offer a defense that goes beyond denial may be another cause for invoking Ravitch’s quote above.

The critical question is, when dominant K-12 ersatz education enforced by standardized testing finally matures in the form of a generation or two that can’t actually think critically, solve problems, or create anything, and the US world position in education sinks even lower, what’s next? 

Welcome to 2012.

Friday, December 23, 2011

SQUINTS 12/23/2011 - THE TESTING GRINCH

In the spirit of the season, today’s SQUINTS looks at Ohio’s K-12 standardized testing Grinch, and its dubious gift of assessment that keeps on misgiving.

Tilt

If as an Ohio parent you believe your children’s K-12 education is life-enabling, and that Ohio is accurately measuring learning outcomes -- equitably awarding or penalizing schools and even teachers on the basis of that testing -- today's SQUINTS is sobering.

In a prior SQUINTS the story was related of Ohio’s Legislature requiring Ohio’s Department of Education (ODE) to produce a “performance index” for every Ohio K-12 school district, and rank all districts.  Ostensibly, the exercise was to be an attempt to improve on the prior system of assigning a series of descriptors to Ohio’s districts, ranging from “excellence with distinction” to “academic emergency.”

Readers may recall, from the earlier SQUINTS, that except for adding the Ohio Graduation Test (OGT) to the formula, and weighting the test components, the Performance Index (PI) was based on virtually the same NCLB standardized testing as before.

The catch came in the form of an ODE acknowledgment that the PI did not necessarily track well with Ohio’s 12th grade ACT (originally the abbreviation for "American College Testing") test results.  Given the addition of the OGT to the formula, the observation was curious.*  The ACT has been well validated; a reasonable assertion is that it is currently a better measure of learning outcomes produced by school districts than the now increasingly criticized NCLB standardized tests at earlier grades.  In 2011 ACT test-takers were 63 percent of Ohio’s district graduating classes.

Without revisiting the lengthy prior arguments why NCLB-type testing has become a K-12 school liability -- including nationally entrenched and regular cheating on those tests to avoid NCLB’s negative effects -- a legitimate question is, just how do Ohio’s PI scores and rankings hold up in paralleling ACT results? 

The reason the question is valid is that the answer is neither trivial nor a matter of wading through some obscure analysis.  On the basis of that standardized testing, and any positioning of school districts because of those scores, children may not be receiving the promised education, schools may see penalties, cheating can be rewarded, levies for schools can be affected, and teachers’ attitudes, ratings and even employment may be impacted.

A Brief Update

When this issue was last visited, a request had been made to acquire Ohio school district ACT results to attempt a comparison.  Ohio’s Department of Education had not yet responded, but subsequently complied with the request, providing a 2011 ACT file of scores by school district. 

A small discrepancy, not yet fully sorted: ACT reported 92,313 Ohio test-takers in 2011, while the file ODE supplied accounted for 79,212 of those tests, 86 percent of the test-takers.  To date ODE and ACT have not provided an adequate explanation of the difference, which could be attributable to either ODE’s or ACT’s school categorization or data processing.  In the process of reconciling ODE and ACT district-by-district data, the tentative inference is that a large share of the 13,000 test-taker shortfall is associated with districts with bottom-dwelling PI scores, schools where the SAT is taken rather than the ACT, and charters or private schools not in the database provided.

Also, a repeat admonition: there is an expected relationship between ACT scores and the proportion of a graduating class taking the ACT.  Two effects are present.  One, a larger fraction of test-takers may pull down the ACT average; and/or two, a larger fraction of test-takers may reflect a school testing culture and preparation that produce higher overall ACT performance.  The two effects can’t be separated here, but the positive relationship found between a class’ fraction of test-takers and both the PI and the ACT average tends to inflate ACT scores.

This has to be factored into any comparison of the ODE PI with ACT, which entails processing additional Ohio district data to create such a variable.  All of these chores were accomplished just days ago, resulting in a correlation analysis of 650 Ohio school districts’ PI scores and comparable ACT scores.

The Results

The test conducted is called partial correlation; simply, it measures the relationship between the PI scores across districts and the comparable district ACT scores, with the effect of the proportion of test-takers held statistically constant.  It produces a simple answer, but one with large impact and implications.

A correlation coefficient (r) of 1.00 would be perfect association; values of .80 and up are considered respectable, but even a coefficient of .80 (r-squared = .64) means that only 64 percent of the variation in a criterion variable is being explained by another indicator.

The ODE PI and ACT fit had a partial r-squared (also known as the coefficient of determination) of .52.  The translation: the PI scores “statistically explained” 52 percent of the variation in ACT scores across districts.  In sum, if the ACT is a better measure of Ohio learning outcomes, then dollars, policies, or any other action using Ohio’s PI rankings as a criterion are not on solid footing.
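For readers who want to see the mechanics, the partial-correlation computation just described can be sketched in a few lines of Python.  The district numbers below are illustrative stand-ins, not the actual ODE/ACT file, and the formula is the standard first-order partial correlation built from pairwise Pearson coefficients:

```python
import math

def pearson_r(x, y):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_r(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Illustrative district data (NOT the actual ODE file):
pi   = [104.2, 99.5, 92.1, 107.8, 88.4, 101.3, 95.0, 110.1]  # Performance Index
act  = [23.1, 21.8, 19.9, 24.0, 19.2, 22.4, 20.7, 25.3]      # mean ACT composite
frac = [0.71, 0.60, 0.48, 0.78, 0.41, 0.66, 0.55, 0.82]      # share of class tested

r = partial_r(pi, act, frac)
print(f"partial r = {r:.3f}, partial r-squared = {r * r:.3f}")
```

On the real 650-district file, x would be the PI scores, y the district ACT averages, and z the fraction of each graduating class tested; squaring the result gives the partial r-squared reported in the analysis.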

A scatter chart of the composite results appears below in FIGURE ONE.

FIGURE ONE

(Fit:  Partial r-squared = .52)

Clusters at the extremes in the distributions of both the Performance Index and the ACT scores inflate the correlation, tending to anchor the line of fit on extreme points.  But a practical measure of learning outcomes must discriminate across the full range of an indicator, with reasonably uniform error in its estimates.  More meaning comes out of looking at the low region of PI scores, the high end, and the middle performances.

Looking at just the highest (4th) quartile (25 percent) of districts based on the PI, the fit of ACT scores to PI scores produced an r-squared of .256; in that top quartile the PI explained 26 percent of the variation in ACT scores.  In the lowest quartile of PI, the fit was just a bit better, the PI explaining 36 percent of the ACT variation.  Both fits are evidence of very weak association.  The challenge of discriminating among school districts based on PI scores can be seen below in FIGURE TWO, describing the fit between the ACT estimated by the PI scores versus actual ACT scores.

FIGURE TWO

(Fit:  r-squared = .256)

The most troubling fits were of PI and ACT in the middle 50 percent of all school districts studied -- the 2nd and 3rd quartiles of the PI score, over 320 school districts, the middle majority.  That scatter of ACT scores estimated by PI scores versus actual ACT scores is shown below in FIGURE THREE.  With an r-squared of .1225, it indicates that, in the middle 50 percent of Ohio districts based on PI rank, the PI scores explained only slightly over 12 percent of the variation in ACT scores.

FIGURE THREE

(Fit:  r-squared  = .1225)
 
Hence, discriminating among any of these districts at other than a very macro level, based on ODE’s Performance Index and rankings, is a serious challenge potentially misrepresenting a school district’s standing in Ohio.

Flags

One implication of the findings relates to why a district might score well on the standardized tests but fail to match that with the terminal ACT performances.  One quite serious implication is whether a system has fallen into the pedagogy of “teaching to the tests.”

The latter term is frequently misunderstood.  It doesn't necessarily refer to an obvious direct transmission of future (or even past) standardized test questions to students, or their overt inclusion in lesson plans, although that really egregious event has been widely documented nationally in our public K-12 systems along with worse (most recently, again in Georgia), to try to beat the NCLB sanctions.  

When a school's leadership and faculty start to inch into the dark side, even believing it is for the greater good, the tactics are more subtle, but no less corrupting of legitimate K-12 education.  Those approaches take the form of targeted selection of texts, prohibition of alternative learning materials that might dilute the focus, exclusion or suppression of any sources that might question curricular or teaching policies, and construction of lesson plans to maximize test scores, whether or not the tactics and material constitute genuine learning.  This isn't going to be visible to most school boards unless they immerse themselves in a system and dig into the pedagogy embraced.

From the present analysis the places to look are in the data quadrants where a district’s PI score is high, but its ACT is lower than expected.  For example, the top half of districts’ PI scores start at the median PI score of 99.  A flag should go up where a district has a PI score materially above that value, but an ACT score at or below Ohio’s median of approximately 22.
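As a sketch of that flag rule: the cutoffs below (a median PI of 99 and a median ACT of roughly 22) come from the analysis above, but the district records, field names, and the margin chosen for “materially above” are hypothetical stand-ins:

```python
# Flag districts whose PI sits materially above the state median while the
# ACT average sits at or below the ACT median -- the "T4" symptom.
# District records here are invented; only the medians come from the analysis.
PI_MEDIAN, ACT_MEDIAN = 99.0, 22.0
PI_MARGIN = 3.0  # "materially above" -- an assumed threshold, not from ODE

districts = [
    {"name": "District A", "pi": 104.5, "act": 21.4},
    {"name": "District B", "pi": 103.8, "act": 24.1},
    {"name": "District C", "pi": 97.2,  "act": 20.9},
    {"name": "District D", "pi": 106.0, "act": 22.0},
]

flagged = [d["name"] for d in districts
           if d["pi"] >= PI_MEDIAN + PI_MARGIN and d["act"] <= ACT_MEDIAN]
print(flagged)  # prints ['District A', 'District D']
```

Districts B (high PI matched by a high ACT) and C (low on both) pass through; only the high-PI, low-ACT combination raises the flag.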

There are obviously still other explanations for that combination, but none is good news.  It may signify a one-off, a senior class that has simply underperformed in that year.  More significant is the possibility of a performance gradient between the administration and teaching of grades 9-12 versus middle and elementary school performance.  That has become an increasing source of learning vulnerability as the high school level has been pressured to create greater college-readiness, which frequently entails adopting more of the course design of post-secondary work, challenging curricula, materials, and teacher preparation alike.

Lastly, the errors of fit may simply signify, as now asserted by many education professionals and researchers, that the standardized testing that has been allowed to flourish is both an invalid and an unreliable measurement of meaningful K-12 learning.  The obvious sequel is to ask why the US public K-12 establishment has simply acquiesced to a corporate oligopoly, with little to no oversight, exercising major input and control over K-12 testing and curriculum -- the education tail wagging the dog, so to speak.

Bottom Line

An implication, that should be visible to all but the dogmatic or deniers, is that material decisions or claims made on behalf of a school district, and based on the Ohio PI or any similar derivative of present standardized tests, have a high probability of being flawed.  At the polar extremes, a high PI and equally high ACT average likely identify a superior system.  Conversely, very low scores of both PI and ACT do signal poor performing districts.  But in between, using the present PI scores to either strategize or tactically manage Ohio’s schools is unreliable and has the potential for harming Ohio K-12 education.

The Greater Challenge
 
The results above simply mirror findings starting to appear across the US as education researchers challenge the almost maniacal dependence being placed on present standardized, corporately devised testing and scoring.  The irony is that virtually none of the present highly orchestrated and heavily funded attempts to make that testing the order of the day was subjected to research or experimental verification, or even employed prudent small-scale trial runs, before being dropped on US K-12 schools.

The hard reality is that the “testing to force K-12 change” model is not functioning.  Although in the present example the ACT may not be the ultimate measure of a school district’s learning performance, it is likely a better basis for overall assessment than present NCLB testing.  Another aspect of that reality graced this week’s news, where New York State is now investigating the Pearson Foundation, an arm of the nation’s largest publisher of standardized tests and packaged curricula, for improper lobbying.

But the greater challenge is to go beyond calling out folly, and to see K-12 education creatively and rigorously substitute better measures of learning outcomes to assess school and student performance.  That the venue has not reached for its bootstraps and assumed the responsibility for devising more valid learning outcome measures is both vexing and something of a mystery.

One explanation was offered in the December 21, 2011 Education Section of the Washington Post, by Mark Phillips, a professor emeritus of education.  His view is empathetic and realistic:

"Most teachers and administrators, dealing with the daily challenges of teaching, don’t have the luxury of thinking beyond the present paradigm.  They’re too busy dealing with meeting student needs, designing engaging lessons, and responding to external pressures, from assessment to the latest mandated 'innovation.'  But for those of us who have the luxury of time to think and lead, reformers and policy makers alike, I think the relative paralysis should be a matter of concern." 
"The concept of schools without walls is not a new one, and yet in this age of instantaneous electronic communication, as we freely Skype and network in multiple ways with people all over the world, how can we possibly think of education as taking place in a building in blocks of 49 or 53 minutes?  While I don’t know exactly what a new paradigm should look like, the little I see suggests that it might include classrooms as command centers to coordinate schooling without walls, with present subject organizers vastly changed, the line between teaching, facilitating, and counseling blurred, the functions integrated, and a seamless connection between the school, the community and the land itself.  This is not boring?"

Phillips, obviously, sees even more opportunity for innovation than just improving measurement of learning outcomes.  But a return to K-12 sanity would be ginning up the needed research and grass roots testing on indicators of school performance and quality that reflect all aspects of genuine education:  Categorically, direct evidence-based measurement of learning, but learning with context that satisfies the test that knowledge has been created; measures of the performance of the process variables that functionally allow learning to happen -- curricular validity, teacher preparation, organizational alignment with the social and cultural environment of a school; and equally solid evidence of school administrative, leadership, and ethical performance.

Lastly, longitudinal measurement has now become part of the nomenclature for assessing teachers, but in a perverse fashion.  The so-called value-added measures being touted are totally misaligned with the processes that produce good teaching.  Longitudinal effects of a school’s and teacher’s contribution to a child’s learning need to be assessed, but that is not accomplished by rigid microscopic measurements of short term change in fragments of retained information; it takes at least three initiatives:  recognizing that a child’s “prior knowledge” is a major condition of present learning and that we are still vague on how to address that; acknowledging that social and cultural factors do impact spot capacity for student learning; and using the tools available to and employed by virtually every sophisticated private sector organization to gauge post-transaction customer satisfaction and behavior.  The latter notion and methods have been out there as long as modern marketing, now multiplied by digital tools, but seemingly unrecognized or ignored by K-12 education.

After tours through multiple web sites of states’ departments of education, there is evidence that a few states are asking the right questions.  Whether that is enough to kick-start better learning performance measurements is a question mark.  One perspective is that it will take an orchestrated national voluntary effort, or a Federal data initiative, to generate consistent inputs for better assessments.

One closing perspective:  the digital capability to codify good data for measurement, and the techniques to automate their application for both student and school assessment, appear to outstrip the basic understanding of many K-12 educators; digital methods intrinsically are not the roadblock.  For those of us who encountered computers before they fit in your shirt pocket, or your garage, or a few years earlier in a warehouse, there was an anthem any computer user learned to respect.  It went by the acronym GIGO, “garbage in, garbage out;” it still applies.

Happy Holidays.

* A footnote:  Another test that seems both timely and a matter of due diligence would be an assessment of the concordance of the Ohio district 2011 OGT results with 2011 ACT scores.


Technical Postscript
Analysis Coverage

The comparative ODE and ACT score databases do not fully align.  The report references the discrepancy between the ACT count of test-takers in the ODE file supplied (79,212) versus the ACT Ohio report of 92,313 2011 test-takers (a 14 percent shortfall).  Some of that discrepancy appears to have occurred in areas of very low PI scores.  Another part may have different explanations:  schools where only the SAT was taken, and charters and private schools not in the database ODE provided.  There is presently no way to sort that out.

However, the ACT test-takers represented in the above analyses (77,966) account for approximately 98 percent of the 79,212 ACT test scores provided. The small difference occurred where district data as reported by ODE could not be matched with the ACT report.
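The coverage arithmetic above can be reproduced directly.  A minimal sketch, using only the counts reported in this section (the variable names are mine, for illustration):

```python
# Coverage arithmetic for the ACT/ODE database alignment described above.
# Counts are those reported in the text; variable names are illustrative.
act_reported_takers = 92_313   # ACT's own 2011 Ohio test-taker count
ode_file_takers     = 79_212   # ACT test-takers in the ODE file supplied
analyzed_takers     = 77_966   # test-takers matched in the analyses

# Shortfall of the ODE file relative to the ACT's own report
shortfall = (act_reported_takers - ode_file_takers) / act_reported_takers

# Share of the ODE-supplied scores actually matched in the analyses
coverage = analyzed_takers / ode_file_takers

print(f"shortfall vs ACT report: {shortfall:.0%}")   # ~14%
print(f"coverage of ODE file:    {coverage:.0%}")    # ~98%
```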

The SAT Test-Takers

The present analysis did not include SAT test-takers, their present data coverage representing approximately 15 percent of Ohio's 12th graders, versus the ACT, where approximately 63 percent of graduating seniors were test-takers.  A new data set was required to test the concordance of SAT scores with the ODE Performance Indexes.

Because of the difference between the numbers taking the SAT versus the ACT, much smaller data sets were available.  Total 2011 SAT test-takers were 18,998, and only ten percent of the prior district average scores were assignable for testing.

Available data indicate that where the ACT and SAT tests were jointly reported, the proportion of 12th grade SAT test-takers was approximately 88 percent.  In this same subset of test-takers, the proportion of students taking the ACT remained at 63 percent, congruent with the full ACT analysis.

Correlation Results:  SAT Versus ACT & Versus PI

A correlation analysis was run to assess the fit of SAT scores to ACT scores, and the fit of SAT scores to the PI scores.  Results were:

   SAT x ACT:  r-squared = .716, 71.6% of the variance in SAT scores explained by ACT scores.

   SAT x PI:  r-squared = .526, 52.6% of the variance in SAT scores explained by PI scores.

With less data representation as well as excluded districts, there is little opportunity for disaggregated analysis of how error associated with the SAT x PI fit is distributed.  The overall fit of SAT scores with the PI scores is consistent with the larger set of PI x ACT findings reported (r-squared = .52), although it may reflect a different pattern of errors, referencing the above ACT x SAT fit.

In this subset of schools employing both ACT and SAT test taking, the ACT scores better fit the ODE PI scores in the region above the median of both variables, but not below.  The explanation may be that this subset of systems heavily represents the 4th quartile cluster of Ohio systems with high-level performance on all testing.
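The r-squared statistics reported here are simply squared Pearson correlations, interpretable as the share of variance in one district-level score series accounted for by the other.  A minimal sketch of how such a fit is computed; the data below are synthetic, for illustration only, and do not reproduce the actual Ohio SAT/ACT/PI figures:

```python
import numpy as np

# Synthetic district-level scores, for illustration only; the actual
# Ohio SAT/ACT/PI data are not reproduced here.
rng = np.random.default_rng(0)
act = rng.normal(21, 2.5, size=300)            # district mean ACT composites
sat = 48 * act + rng.normal(0, 60, size=300)   # SAT loosely tracking ACT, plus noise

# r-squared: squared Pearson correlation = share of variance "explained"
r = np.corrcoef(act, sat)[0, 1]
r_squared = r ** 2
print(f"r-squared = {r_squared:.3f}")
```

The same two-line computation, run on real paired district averages, yields figures directly comparable to the .716 and .526 values above.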



Friday, December 16, 2011

SQUINTS 12/16/2011 - K-12 TECHNOLOGY PLUS: AN IDEA

Technology Candor

Professor Daphne Koller of Stanford’s Artificial Intelligence Lab, in a take-no-prisoners opinion piece in a New York Times special on technology, cuts through the smoke and the public education rationalizations for the US public K-12 standing:

“Our education system is in a state of crisis. Among developed countries, the United States is 55th in quality rankings of elementary math and science education, 20th in high school completion rate and 27th in the fraction of college students receiving undergraduate degrees in science or engineering.
As a society, we can and should invest more money in education. But that is only part of the solution. The high costs of high-quality education put it off limits to large parts of the population, both in the United States and abroad, and threaten the school’s place in society as a whole. We need to significantly reduce those costs while at the same time improving quality.
If these goals seem contradictory, let’s consider an example from history. In the 19th century, 60 percent of the American work force was in agriculture, and there were frequent food shortages. Today, agriculture accounts for less than 2 percent of the work force, and there are food surpluses.
The key to this transition was the use of technology — from crop rotation strategies to GPS-guided farm machinery — which greatly increased productivity. By contrast, our approach to education has remained largely unchanged since the Renaissance: From middle school through college, most teaching is done by an instructor lecturing to a room full of students, only some of them paying attention.”

She also points out that in 1984 Benjamin Bloom (educational psychologist and creator of Bloom's taxonomy of learning, visited earlier in SQUINTS) demonstrated that tutoring was vastly better than lecture, the average tutored student performing better than 98 percent of students in a standard class.  A 2010 meta-analysis of 45 studies by the USDOE demonstrated that online learning was equal to face-to-face learning, and that a blend of both was more effective than either.

Finally, there is the argument that you can't teach critical thinking and problem solving without classroom interaction.  Aside from the inference that many K-12 teachers haven’t been prepared to teach either of these with quality even in the classroom, Stanford has used technology to support interactive formats that create the right environment for practicing both.

Basically the assertion is that K-12 education has been ducking the adoption of contemporary technologies applicable to learning – as well as more valid subject matter education of teachers – for over half a century, and still shows little awareness of the need to change.  But change likely will come, because our digital capabilities aren't going away and the pace of development is accelerating; when some sanity returns to the alleged reform movement, the realization may be either a tsunami technologically crashing on present K-12 tech infrastructure, or shaking it out with a 7.9 quake.

Where’s the Start Button?

The December 12 SQUINTS, pretty much non-judgmentally, surveyed the places where existing and near term digital technology can augment present K-12 pedagogy.  A byproduct of the research for that SQUINTS was finding that there is already an abundance of technologies scaled to K-12 use, but roadblocks in the critical path to getting them into the hands of teachers and successfully applied.

Enumerating those barriers places SQUINTS in a deeply critical mode that really isn’t preferred, but there’s need for forthrightness on the issue.   The roadblocks start with the USDOE for reasons of both present obsessive strategies for reform and naïve views of needed technology, jump to our state departments of education that are even less on top of the need, then finally hit local systems that recoil as from a rattlesnake at anything that challenges their comfort zones.

No credit to our private sector:  it has been consumed in these markets with making the highest possible dollar off of every text, workbook, standardized test, commercialized curriculum, and online application it can peddle.  If that doesn’t sour the cream, states like Ohio have a deeply dug-in system of education service centers that could be candidates for RICO prosecution, lacking the expertise to actually serve schools but extracting “the vig” from virtually every school acquisition of needed materials.
 
But in some defense of US enterprise, the larger markets for everything from online utilities through digital hardware and software are being literally inhaled by consumer demand even in the current economy, versus K-12 education, which ducks promotion attempts.

Lastly, on balance most US public K-12 administrators and teachers are still clueless about digital advances and their potential classroom applications, as are most local school boards.  Indeed, most K-12 educators are virtually clueless about the research tools that were created, refined and in use in the 20th century that could have been inserted into contents and pedagogy then.  The product:  “What you sees is what you gets.”

Rough, but there is also a weak defense for public education.  That is, there is simply no adequate infrastructure to effectively deliver needed technology to local classrooms and teachers without the intervening bureaucracies that presently constrain getting any intellectual change into those systems.  A question:  is it possible to envision an organizational model that could make that dissemination work, broadly applied to digital K-12 technologies?

A Very Basic Idea

Everything starts with assumptions.  That roster for this idea includes:
  • The political climate and diversity of contents and sources of digital technology applicable to K-12 are roadblocks to the USDOE being the focal point for discovering, screening, integrating and disseminating needed technology to US schools. 
  • A classic market could be a viable mechanism if the buying end of the equation operated with the same organizational, incentive, and choice principles as the selling side when there is competition; that arrangement doesn't distort a free market solution. 
  • Market solutions follow the general marketing model of diffusion of innovation; the case can be made that present public K-12 education has waffled for so long that any innovation must be artificially accelerated to get the US back into world education contention.
  • The core function is essentially logistical – supply chain – based on the assertion that there are in existence ample and diverse sources of digital technologies that can expand K-12 learning outcomes and productivity; classic wholesaling, combining utilities from many sources, redistributing in appropriate combinations and lots to the next layer of market delivery, with the addition of the academic function of matching learning needs and means.
  • For reasons that shouldn't need amplification, funding such an effort will fall off a cliff for the foreseeable future if positioned as a social or wholly public sector program.
  • Lastly, it will arguably take a different version of creativity and entrepreneurship than anything presently associated with the public education bureaucracies, Federal, state or local; plus human resources operating beyond the traditions of those venues.
  • This list, if anywhere close to target, puts some tough criteria in place for organizational options. In truly contemporary organization theory, the game is defined by functions, not by diagrams or extant arrangements in place.
Here is one vision at an admittedly embryonic stage; the titling is generic, intended simply to define the domain – NEDTEC, or the National Education Technology Consortium.

Mission:  NEDTEC

To identify, sort, recruit, assemble, appraise, refine, and distribute to US K-12 schools a comprehensive menu of third-party technology-based learning modalities, using a public license format, and using virtual methods to deliver working digital pedagogical capabilities to K-12 schools along with educator development supporting effective classroom use; all at the lowest possible cost to public K-12 schools.

Organization

The organizational model foresees a public-private corporation, independent of USDOE and state education departments, but incorporating those agencies as supply-chain affiliates.

One division is formed for each US economic/education region, for argument set arbitrarily at ten, with an average of five states constituting a region.  A region’s colleges and universities are recruited to be affiliates with each divisional organization.

A board for each region/division, representing the four stakeholder types, provides operating level oversight:  Stakeholders -- a region’s public schools, its state departments of education, participating colleges and universities, and related private sector organizations.

The model envisions peak coordination across regions to include:  Sourcing and qualifying technologies; coordinating offerings with other organizations with national advocacy roles for K-12, e.g., the National Academies, teachers' unions, applicable foundations; and potentially brokering basic learning research to balance the menu offered.  Focus at the region/division level would be on:  Assessments of region school needs; marketing to states and schools; operating management of online supply logistics for delivering technology units; and billing and customer service.

The organization is to be virtual, emulating the collaborative open source model that produced the Linux operating system; accordingly the overhead cost of the system is minimized, organization is flat, and the format communicates the modernity of the technology managed.

Operations

Methods, models, software, knowledge blocks, tools, etc., assembled become available to public K-12 schools in a region/division via licensing similar to the GNU model of public software license.

The “cloud” is employed to provide low cost online access to all material and models for learning, including actual operations intra-classroom, or inter-classroom or inter-source for collaborative learning.  School usage of all brokered technology can be automatically monitored to both enable billing and assess acceptance of the learning tools.
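The usage monitoring described above is essentially event metering.  A hypothetical sketch of what a minimal usage record and billing roll-up might look like; every name and field here is my invention for illustration, not an actual NEDTEC specification:

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical usage event for a brokered learning module;
# fields are illustrative only, not an actual NEDTEC design.
@dataclass
class UsageEvent:
    school_id: str
    module_id: str
    minutes: int

def roll_up(events):
    """Aggregate per-school, per-module minutes, usable both for
    billing and for assessing acceptance of the learning tools."""
    totals = defaultdict(int)
    for e in events:
        totals[(e.school_id, e.module_id)] += e.minutes
    return dict(totals)

events = [
    UsageEvent("district-A", "algebra-sim", 45),
    UsageEvent("district-A", "algebra-sim", 30),
    UsageEvent("district-B", "writing-lab", 60),
]
print(roll_up(events))
# {('district-A', 'algebra-sim'): 75, ('district-B', 'writing-lab'): 60}
```

The point of the sketch is only that one stream of events can serve two consumers, the billing system and the acceptance assessment, without separate instrumentation.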

Strategic oversight is by a composite NEDTEC national board or academy, composed of representatives from each region/division oversight board plus USDOE representation.

Funding

Rationally, the model envisions a need for foundation or other seed funding to initially collect and sort offerings, and envisions staged development from a proof of concept test to regional rollouts of service.

NEDTEC operating funding is envisioned based on school system license/lease of all operating products, in effect the facility to lease any service/module/model from the full NEDTEC menu of technologies, with cost of applications based on actual usage.  A full operating model and plan are obviously T.B.D.

The model would allow very low cost provision of a comprehensive menu of learning technologies, with negotiated companion corporate contributions of limited specialized license of all proprietary technologies that might fit the K-12 model and menu.  Preliminary survey of technologies suggests that a major component of applicable pedagogical tools is already in the public domain, but simply widely dispersed, with varying awareness by potential users, and because of individual selectivity attracting limited resources to gain wide recognition and deployment.

Collaborative Potential

The national level of NEDTEC offers an opportunity for another level of technology dissemination:  brokering collaborative relationships among public K-12 schools and third party sources of learning technologies, and among schools that would never ordinarily connect or exchange perspectives.  Conceive of this as a social model of school exchange, and collaborative and organizational learning.  For example, the sister-city inter-nation model promoted for some time could be adapted for “sister-school” programs; simply exchanging perspectives and experiences has the potential to broaden schools’ awareness of alternative ways to enhance learning outcomes, or to enable matching multi-point classroom research on what works with greater representativeness, hence inference potential.

A Unique Universe

In most venues of either public process or the private sector, what’s proposed would face major constraints, many of them a function of how professional activity is motivated and incented, or bureaucratized.  What could allow something akin to this model to function is its expression of the values that enabled Linux, or that drive programs like TED.  Those values, and deeply held beliefs in the importance and social imperative of public elementary and secondary education, could foster a committed component of intellectual assets willing to work toward that goal as open source providers or organizational members with nominal compensation.  The commitment is already demonstrated in the levels of competent educator performance trying to make sense of the current twisted school reform movement.

The Bottom Line

The bottom line is that the model veers away from making any single perceived criterion for success the one and only measure of achievement; think of it by analogy as the difference between judging genuine learning and achieved knowledge among America’s youth by the sterile and pernicious present imposition of standardized tests, versus learning models that are keyed to critical and constructive use of that learning, and produce a positive longitudinal outcome stream.

This idea is admittedly still skeletal, although based on several years of churning up unexpected but exciting learning technologies applicable to K-12, typically widely dispersed in sourcing and in many cases found by accidentally happening on a web site or following an online trail.  Many are world-based rather than just US domestic designs.  Paradoxically, many of the technologies found were available at no or minimal cost to an educational user. 

Acceptance, critique, changes, embellishment, augmentation, or even just outright dismissal of this concept and NEDTEC would be welcomed to sort out its potential:  mailto:rwillett@nktelco.net.

Lastly, please share today's SQUINTS with any acquaintance or colleague who believes that one of the keys to perfecting our K-12 classrooms, and enhancing learning outcomes, is integrating digital technology into their pedagogy.  That attempt seems preferable to either arguing that exploding digital technologies have marginal utility for K-12 learning, or jumping on a digital bandwagon with the same disregard for testing and verification as present K-12 standardized testing, or generating another area that polarizes America's institutions.

Next SQUINTS – timing still a bit uncertain because of the season -- will try to do a meta-analysis of the actually extensive attempts to study current K-12 reform action, including the homegrown effort to assess Ohio’s school rating scheme.  Though commentators on 2010-11’s K-12 research assert that anything definitive is far from accomplished, a more optimistic view is that 2010-11 experienced more empirical research on K-12 performance than in any prior comparable period; that by itself could be viewed as a positive factor for assessing and enhancing US public education.

Best wishes for the Holidays.