Sunday, December 21, 2014

US Higher Education: The Light Versus Enlightenment?

The Obama Administration, fronted by Secretary of Education Arne Duncan, has virtually gutted the chances for intelligent reform of US public schools by dogmatically and despotically backing test-based, so-called "corporate reform" — from its inception a cultural throwback view of our schools' issues, dedicated to 'test and punish' — and is now switching venues. Spoiler alert: our system of US higher education may need to erect real battlements around its academic enclaves to fend off a horde of metric trolls.

For readers who have been preoccupied with surviving Black Friday and buying gifts for those dear to them: what the US Department of Education is proposing to launch, allegedly in 2015, is a rating scheme for America's 4,140 colleges and universities. Here are the available details about that intent:

The Rating Scheme

On the table thus far from Duncan and company:
  • Schools would be rated as "high performers," "low performers," or "in the middle."  (Note: the critique invited by the overwhelming sophistication of this scheme is an immediate temptation, but assessment will wait for the whole story.)
  • The stated reasoning, and the justification for Federal intervention, is that the scheme will assess institutions whose students receive federal student aid.
  • Allegedly being considered: which metrics to use; how to give credit for improvement; the meaning and span of "in the middle"; and whether an institution receives a single composite rating or multiple ratings.
  • Factors in the scheme: accessibility — the number of Pell Grant students, family contributions to tuition, and the share of students whose parents did not attend college (what these have to do with educational performance remains a mystery); affordability — average net price, and average net price for families by income level; outcomes — graduation rates, transfer rates, graduate school attendance, loan repayment, and "labor market success" (the latter apparently meaning graduates' beginning earnings, though a better index in today's economy might be time to acquire initial employment, or the salary discount from the target profession's norm accepted to land any job).
The last detail to drop: ratings will allegedly be calculated separately for institutions segmented into homogeneous clusters.  An immediate observation is that even the simple factors noted — apparently chosen not because they are the most salient measures, but because the data happen to be available — are not at all simple once probed.  Prior work by our institutions themselves has established as much: even the average net price for an institution's students varies, with real import, depending on how costs are staged or offset and what services are delivered.
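To make concrete how arbitrary a single composite rating can be, here is a minimal sketch. The factor weights and institution scores below are entirely hypothetical, invented only for illustration; the point is that a modest shift in weighting can flip which school "wins."

```python
# Hypothetical illustration: a single composite rating collapses the
# factors above (accessibility, affordability, outcomes) into one
# number, and the rank order hinges on the arbitrary choice of weights.
# All figures are invented for illustration only.

def composite(access, afford, outcomes, weights):
    """Weighted sum of factor scores on a 0-100 scale."""
    w_acc, w_aff, w_out = weights
    return w_acc * access + w_aff * afford + w_out * outcomes

# Two fictional schools, scored 0-100 on each factor.
school_a = dict(access=90, afford=50, outcomes=60)
school_b = dict(access=55, afford=70, outcomes=80)

for weights in [(0.5, 0.25, 0.25), (0.2, 0.3, 0.5)]:
    a = composite(**school_a, weights=weights)
    b = composite(**school_b, weights=weights)
    print(f"weights={weights}: A={a:.1f}, B={b:.1f} -> "
          f"school {'A' if a > b else 'B'} rates higher")
```

Under the first weighting, school A "outperforms"; under the second, school B does. Nothing about either school changed, only the weights, which is the trouble with any single composite rating.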

Many of the leaders of our colleges and universities have already weighed in on the cogency of these proposals. Not unexpectedly, most of the comments, while critical of the proposed mechanisms, have been constrained or politically correct.  Our top-tier colleges and universities are almost unanimously led by smart people; it is a reasonable proposition that were the faculty and research smarts of our best 100 institutions, or even a dozen excellent ones, let loose on the validity and reliability of this proposal, little might remain but fly ash.

Educationredux readers will have to be content with a quick pass at the issues embedded in this scheme; Christmas would intervene were the whole enchilada attempted in one sitting.  Titles for the issues perceived include: the purpose of the ratings, and the core relevance of the proposed rankings; using 'what's out there' versus researching and designing metrics that are specific and valid; and the troubled path this proposal will encounter if its creators ever comprehend and apply the concept of "unit of analysis" that underpins all science.

Purpose

The alleged purpose of the ratings is what: education quality, social equity, turning out the right human resources, deeply informed candidates — and all via a vague ordinal depiction of our colleges and universities?  The scheme as outlined so far is a patchwork of opportunistic measures, a crude multi-dimensional conceptualization that nonetheless purports to offer information suitable for real-world discrimination among life-modifying choices.  The scheme's ambition beggars even the well-developed work in marketing on multi-dimensional scaling of single brands.  To consider simple ordinal positioning or ranking, i.e., comparative assignment of institutions of the complexity we have, ranges from magical thinking to a fool's errand.

Using New Graduates' Earnings

This item gobsmacks even common sense, and raises the question of the core competence of those developing the scheme.  The determinants of beginning graduates' salaries are complex: they are a function of the subject-matter specialization marketed, and are variably impacted by transitory demand versus supply of workplace candidates.  Beginning salaries are related to mid-career earnings, but not perfectly; will this dysfunctional Duncanian rating factor wait for promulgation until the next 20 years' experience of those graduates is logged?  Lastly, but critically, those salaries may contribute nothing to assessing the worth of either the graduates, or their preparation for practice, to our economy or society.

How many more of the finance droids that brought the US the financial meltdown does our nation really need?  How many CEOs can the system support?  Versus: how many more really good K-12 teachers and post-secondary instructors does this nation really need?  The proposed rating scheme flips that world upside down.  It also says that Mr. Duncan, who has never graced a real classroom, has had no education about education, and has arguably never matured beyond being an extreme liberal visitor to "Alice in (education) Wonderland," needs to find a new quixotic pursuit.  Perhaps he could link arms with Bill Gates, and together extinguish two misdirected blow-torches now being applied to rational US public education.

Unit of Analysis and Those Clusters

The readiness of this concept for prime time is already questionable on the above issues alone.  The notion of creating a compensatory fix for inequities, by assigning metrics to clusters of institutions judged to be comparable, may be the most unreasonable part of the scheme among a litany of the unreasonable.  There are two issues: what is the proper "unit of analysis" for assembling metrics; and what happens to the whole scheme when that unit becomes a valid one?

Saying you are going to rate a higher-education institution on a few metrics is roughly the equivalent of saying you are going to assign one assessment measure to, say, all the products in an Amazon warehouse. Newly minted college graduates may walk through a common commencement exercise, but the education each represents issued from some specific track within that academic labyrinth.  Each track could be considered the proper unit of analysis, and aggregating those tracks would require a scheme and weighting whose complexity metaphorically mirrors putting a human on Mars.  Going up another level of aggregation might still yield valid metrics, but it does not much shrink the analysis challenge.  Here is one example of the challenge faced in deciding how to assess a single institution; it is one intimately familiar, and representative of many in the US: Indiana University.

Indiana University (IU) has two main campuses, Bloomington and Indianapolis, with different academic environments, plus six regional campuses. The Bloomington campus has 14 separate schools plus a College of Arts and Sciences.  All 15 major units have multiple departments, multiple faculties, and heterogeneous curricula (and, in some units, differential tuition) — features that factually determine the quality of a degree — with 180 majors in 157 departments, representing 330 degree programs.  The other campuses have a variable presence of the same venues, and where a campus is a joint IU-Purdue campus, there may be additional departments representing engineering, nursing, et al.

So the question is: what is the effective and defensible unit of analysis?  If it is the substantive track the student takes, and if even our 629 public four-year institutions approximate the internal structure above, the analysis chore for that subset alone masses up to over 200,000 unique entities to be judged.
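For the skeptical, a back-of-the-envelope check of that figure, assuming purely for scale that each of those 629 institutions fields a track count comparable to IU's 330 degree programs:

```python
# Rough scale of the "unit of analysis" problem. The assumption that
# every public 4-year institution fields ~330 tracks is borrowed from
# the IU example above, purely to bound the arithmetic.
institutions = 629        # US public 4-year institutions, per the text
tracks_per_school = 330   # IU's degree-program count, used as a proxy
print(institutions * tracks_per_school)   # -> 207570 entities to rate
```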

But perhaps the most elemental critique of this Obama/Duncan odyssey is a classic used in virtually every operations research course ever offered: what is termed "the drunkard's search."  Referenced by philosopher Abraham Kaplan (author of a text used extensively in higher education research courses, The Conduct of Inquiry), it is his observation that:  "Much effort…in behavioral science itself, is vitiated, in my opinion, by the principle of the drunkard's search:  There is the story of a drunkard, searching under a lamp for his house key, which he dropped some distance away.  Asked why he didn't look where he dropped it, he replied 'It's lighter here!'"

Lastly, a challenge to the creators of this scheme: actually employ some of the science of measurement that has accumulated since Descartes, Laplace, Pascal, Fermat, et al., roamed the historical halls of academe, down through contemporary expertise.  Will the team developing this scheme even attempt the most rudimentary pretest of its metrics — putting a test run of their results up against the expert judgments of a panel of our best and brightest, to see whether the metrics can replicate the arguably informed and sophisticated professional judgments of quality for a cross-section of institutions?  The prudent advice: don't hold your breath for the pretest.
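Such a pretest need not be elaborate. A minimal sketch, with hypothetical placeholder numbers rather than real ratings, would simply compare the scheme's scores against an expert panel's quality judgments via rank correlation:

```python
# Minimal sketch of the pretest suggested above: compare a metric-driven
# scoring of institutions against expert-panel quality scores using
# Spearman rank correlation. All numbers are hypothetical placeholders.
from scipy.stats import spearmanr

# Ten fictional institutions; higher = better on both scales.
metric_scores = [78, 65, 91, 55, 83, 70, 60, 88, 74, 68]
expert_scores = [ 8,  5,  9,  3,  7,  6,  2,  9,  6,  4]

rho, p = spearmanr(metric_scores, expert_scores)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
# rho near +1: the metrics broadly replicate expert judgment;
# rho near 0: the scheme fails even this rudimentary validity check.
```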

Tentative Conclusions

The American post-secondary institutions most in need of output-quality assessment are the two-year and four-year variety that lack quality accreditation, are isolated academically from primary campuses, or lack the internal controls on faculty quality embedded in mainstream institutions.  But a material fraction of our almost 2,500 four-year public and private colleges and universities probably does more internal work on maintaining learning quality than the US Department of Education does to police its own cognitive integrity.

All of America's colleges and universities, however, may be candidates for inspection for symptoms of "Baumol's cost disease," here referencing a failure to aggressively seek functional productivity increases over decades.  And some of the mainstream campuses we all relate to may have components that have decayed, or may still be fielding bricks-and-mortar excesses.  But what appears very clear is that this scheme from the Obama Administration is not a viable cure for any part of America's post-secondary assessment needs; it comes closer to being another dose of Federal snake oil.
