The US Department of Education/Duncan proposal (Postsecondary Institutions Rating System, or PIRS) to grade America's colleges and universities -- at the moment into three still-vague performance categories -- has not yet been issued in any detail. The representations so far are that three factors are involved: affordability, access, and results. Implicit is that the three factors will be measured using data already available to the Federal government, byproducts of various Federal programs, including ones not directly involved in the various Federal education "Title" authorities.
If one had just landed on earth from a distant planet, with the technological prowess that implies, the notion that over 4,000 diverse higher education institutions could be successfully characterized and rated by those three factors might actually seem to make sense. What could be simpler: Do a nation's eligible citizens have equal access to those institutions; can they afford the price of attendance; and what has been the value added by their participation?
After a few trips around the societal track, that visitor from another place becomes linguistically proficient, starts to understand organizational behavior and our societal hangups, and concludes that the proposed scheme for characterizing an educational institution has, by analogy, the credibility of studying earth's life and its behavior by simply designating each organism bacteria, archaea, or eukaryota. (We multicellular organisms are constructed of eukaryotic cells, resident microbes, et al., but that true depiction falls a bit short of characterizing the sentient human.)
In fact, the scheme proposed by the US Department of Education is a total whack job, calling into question Secretary Duncan's intellectual competence, or surfacing the question of what values and ideological excursion precipitated the proposal.
Both
the rating scheme, and in fairness this writer’s challenge, fit
the trope “says easy, does
hard.” Let the reader be the
judge, based on the reality of the behaviors, factors, and rating process being proposed for PIRS.
What
are the issues?
The US Department of Education/Duncan depiction of the need for this scheme remains vague; the reasons the proposal has been floated now deserve scrutiny:
- Are the proposed ratings – even if valid and reliable – needed?
- What is the valid unit of analysis, i.e., the total institution or its intra-institutional colleges and schools (there may be great variance inside an institution)?
- On any factor requiring differentiation to constitute a rating basis, is there greater intra-organizational variation than variation among institutions?
- Will the ratings differentiate institutions judged deficient in providing equitable access?
- Will the ratings differentiate institutions based on cost of delivery of a degree; will those costs be comparable based on the quality of the degree delivered?
- Subsumed in the above, how are the times for delivery of a degree accurately determined?
- How is it determined that ratings of institutions are based on valid assessments of comparable institutions?
- How will the punitive measure proposed change institutional behaviors?
- How does a limited number of ad hoc measures of existing variables translate into a rational scheme to measure performance of any institution that is, de facto, a system and complex layered organization?
- Are the variables proposed up to the task of the alleged measurement: genuine accessibility, true net cost, education value added, and valid comparisons?
In the wake of the undercutting of genuine learning experiences by the dogmatic Federal pursuit of standardized testing as the backbone of US public school reform, it seems fair to propose that future initiatives be judged by one of the standards of medical practice: first, do no harm.
Ratings
needed?
There are currently in excess of 20 US web sites devoted to matching a collegiate prospect with a college or university, and multiple ratings already published, e.g., US News, Forbes, the Princeton Review, et al. Add the online sites of virtually every credible college and university.
The categories of information may not be comparable among these sources, and they have variable credibility; however, the proposed assignment of several thousand institutions into three crudely defined hoppers, even if those assignments were valid, appears destined to add nothing to a prospect's effective discrimination in choosing a collegiate destination.
The
unit of analysis?
A fancy phrase for a core issue: what measure of homogeneity, or level of disaggregation, makes the institutions being assessed comparable?
This is
an example previously used, but it makes the point:
“Indiana University (IU) has two main campuses, Bloomington and Indianapolis, with different academic environments. It has six regional campuses. The Bloomington campus has 14 separate schools plus a College of Arts and Sciences. All 15 major units have multiple departments, multiple faculties, heterogeneous curricula (and, in some units, differential tuition) — the things that factually determine the quality of a degree — with 180 majors, in 157 departments, representing 330 degree programs. The other campuses have variable presence of the same venues; plus, where a campus is a joint IU-Purdue campus, there may be additional departments representing engineering, nursing, et al.”
What is the appropriate unit for measurement: the composite institution; each campus location; the college(s) embedded in each campus; the various schools; even subject matter departments that may be as large in student enrollment as some small colleges? Those differences in programs and enrollees may produce very different results for the variables proposed as the basis for ratings.
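To make the intra- versus inter-institutional variance question concrete, here is a minimal sketch in Python. Every institution, department, and completion rate below is invented purely for illustration; the point is only the comparison of the two variance components.

```python
# Hypothetical sketch: if outcomes vary more across departments WITHIN an
# institution than institutional averages vary BETWEEN institutions, a
# single composite rating is nearly meaningless. All numbers invented.
from statistics import mean, pvariance

# Invented six-year completion rates by department (as fractions)
institutions = {
    "University A": [0.92, 0.81, 0.55, 0.48, 0.73],
    "University B": [0.88, 0.79, 0.60, 0.52, 0.70],
    "University C": [0.95, 0.77, 0.58, 0.45, 0.68],
}

# Between-institution variance: spread of the institutional averages
between = pvariance([mean(rates) for rates in institutions.values()])

# Within-institution variance: average spread of each institution's
# departments around that institution's own mean
within = mean(pvariance(rates) for rates in institutions.values())

print(f"between-institution variance: {between:.5f}")
print(f"within-institution variance:  {within:.5f}")
# With data like these, within-institution variance dwarfs the
# between-institution variance: the composite rating hides the very
# differences a prospect cares about.
```

If within-institution variance dominates, as it does in these invented numbers, a single institutional rating tells a prospect almost nothing about the program he or she would actually enter.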
Foreknowledge
of the universe?
Is there any a priori basis for the Department/Duncan proposals, based even on sample research into how the rating factors show dispersion across institutions, or within institutions and across the potential units of analysis above? Thus far the Department has offered no evidence of prior or ongoing research that would foot any rational proposal of this magnitude and potential for negative effects.
The second factor impacting validity is comparability. Are any two institutions of higher education comparable, given their capacity for independence of action and complexity of offerings? What research on multidimensional properties has been executed to provide categories of institutions that can arguably be comparable? The factors allegedly being rated are intrinsically linked to many of those properties, and therefore have the potential to be misinterpreted as performance gradients rather than just concomitant effects of those properties.
A
college/university is a complex organization.
In the rush to rate higher education institutions, a fatal error is the failure to recognize that every college and university, even the most austere, is an order of magnitude more complex as an organization than, for example, a public school, which has narrower roots, fewer human resources, and a relatively simple organizational structure; yet even with those similarities, our public schools are not automatically comparable in assessing learning performance or even test-based metrics.
Breathtaking is the naïveté required to believe that any organization, let alone one as complex as a college or university, could be assessed for quality based on a handful of incomplete or flawed variables (if naïveté is the true motivation; venality if it is not).
The scope of measurement of organizational performance – especially for an entity as layered and complex as a college or university – is far beyond the scope of this blog. Many assessment models exist, and the real factors, variables, functions, actors, and internal behaviors that foot an organization's true performance are massive. Just one example of such a guide to the determinants of performance is linked here. The Department/Duncan model is roughly the equivalent of trying to build a real operating system with Legos.
Assessing
student access to higher education?
As complex as every other factor footing the proposed rating scheme, this one is presently categorically blocked both by a lack of longitudinal research on how admittance is sought and played out in real time, and by confidentiality law installed by Congress. To answer this question would require comprehensive access to the college applicant records leading to acceptance or rejection, access not permitted by law except, at the moment, to the applicant.
The latter access was just exploited by a cluster of Stanford University undergraduates, who demanded and received their full files. The results underscore the complexity and nuances of the admissions process; such full disclosure would be needed to assign fault for failures to admit, and to attribute that failure to some form of discrimination rather than to student performance criteria.
Time to
acquire a diploma as a performance factor?
On its face this factor appears to be one that, coupled with the cost of the educational experience, might be defensible.
In 2011 a group within the US Department of Education was tasked
with assessing the factors that might be measured for rating
colleges/universities, initially targeting two-year institutions. Of the
multiple factors noted above, only one was thoroughly vetted – the time required to
acquire a degree/diploma.
At the moment the only data the Department has to quantify that factor is the number of years taken to acquire a degree or diploma by a first-time, full-time, degree-seeking student. As focus shifted to four-year as well as two-year programs, it is from that narrow data concept that the various alerts have come, stating that some material percentage of BS/BA-level students fails to get a degree within the nominal four years, and now within six years.
The Department's own report citing the errors in that measure (it did not track transfers, possible degree completion elsewhere, or subsequent degree pursuit and acquisition after the initial drop-out) has seemingly been ignored in the PIRS ratings quest. In short, the six-year figure for a four-year degree popularized by our press is likely a misrepresentation of reality, and little or no research has been undertaken to rectify it in pursuing the ratings.
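As a minimal sketch of that error, consider the arithmetic below. The cohort numbers are entirely hypothetical, but the mechanism is the one the Department's own report flagged: degrees completed after a transfer never accrue to any institution's reported rate.

```python
# Hypothetical sketch of how the first-time, full-time cohort metric
# understates real completion when transfers are not tracked.
# All cohort numbers are invented for illustration.

cohort = 1000                 # first-time, full-time entrants at College X
graduated_at_x = 550          # finish a degree at College X within 6 years
transferred_out = 250         # leave College X before finishing
transfers_who_finish = 150    # of those, complete a degree elsewhere

# The reported metric credits only degrees earned at the entry institution.
reported_rate = graduated_at_x / cohort

# A student-centered metric would credit completion wherever it occurs.
actual_rate = (graduated_at_x + transfers_who_finish) / cohort

print(f"reported six-year rate (entry institution only): {reported_rate:.0%}")
print(f"actual completion rate (any institution):        {actual_rate:.0%}")
# With these invented numbers the institution is docked 15 points for
# students who did earn degrees -- just not at College X.
```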
Still another idea floated is the use of Federal job placement data for new graduates as a surrogate for the quality of education delivered. Your average eighth grader could slam that rendering of uncritical thought: at the most basic level, starting salaries of new graduates are tightly linked to job and professional service type, and our institutions are diverse in the occupational preparation they supply, so salaries are confounded with job type. As the occupational types number in the hundreds, type would have to be held constant to impute a salary-based quality indicator. The universe of colleges and universities categorically can't supply the data logically needed.
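A minimal sketch of that confounding, with entirely invented institutions and salary figures, shows how an aggregate salary comparison can rank two institutions backwards relative to their occupation-by-occupation performance:

```python
# Hypothetical sketch: salary confounded with occupational mix. The
# institution with the LOWER average starting salary pays more within
# every single occupation. All figures are invented.

# (occupation, graduate count, average starting salary in $k)
tech_school  = [("engineering", 800, 68), ("education", 200, 40)]
liberal_arts = [("engineering", 100, 72), ("education", 900, 44)]

def avg_salary(groups):
    """Enrollment-weighted average starting salary across occupations."""
    total_pay = sum(n * s for _, n, s in groups)
    total_grads = sum(n for _, n, _ in groups)
    return total_pay / total_grads

print(f"tech school overall:  ${avg_salary(tech_school):.1f}k")   # 62.4k
print(f"liberal arts overall: ${avg_salary(liberal_arts):.1f}k")  # 46.8k
# Yet within each occupation the liberal arts college pays MORE
# (72 > 68 in engineering, 44 > 40 in education). An aggregate salary
# "quality" rating would rank these institutions exactly backwards.
```

Unless occupational type is held constant, a salary-based rating measures an institution's program mix, not the quality of its education.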
Punish
to change?
The first question is: to change what? The time to degree, the net cost, the quality of learning generated? The first item is unresolved, the second is subject to a measurement of total cost to the student as yet undefined, and the third will allegedly not be attempted. One hammer proposed is tying the availability of Pell Grants to a college's or university's rating. Other public critiques of PIRS suggest that, because of the crude reasoning and categories footing the scheme, redirecting Pell Grants may actually worsen support for the collegiate candidates most in need of it.
Next, will the crude ratings being proposed by the Department/Duncan affect the behaviors and performance of the institutions targeted? Because of the complexity of decision making in present higher education, with its layers of stakeholders, that is highly questionable even if the ratings induce greater deliberation. Using the prior IU example for a moment, student financial aid represents roughly seven percent of the composite cash flow associated with annual operations, and that does not include the influence of endowment funds flowing to the institution. Presently, the departure or hire of a handful of sports coaches might in some quarters have greater impact than everything the US Department of Education can use to put a brand on an institution.
The
list goes on, to where?
Pre-dating
NCLB, and blossoming in the period immediately prior to the Obama Administration’s installation, there was a small explosion of studies and
conferences addressing the core issues surrounding change in America’s colleges
and universities. Some of the most
comprehensive work, now simply being repeated in most discourse on higher
education change, was originated by the Association of American Colleges and
Universities, and by a small number of states, the latter focusing on measurement
of the quality of community college outputs. This work was seemingly lost in what subsequently became, it
is asserted, an unthinking and unreasonable commitment by the Department/Duncan
to ideologically driven postsecondary reform tactics.
This generic topic is only scratched by the above observations. There is cause to argue that America's colleges and universities should be assessed for mission, and for operating performances that miss or contradict that mission. Staying with the former academic stomping grounds for an example, and with a prior small window into IU's 2014 strategic planning for its Bloomington campus: the resultant plan was narrow in perspective, institutionally self-centric, and virtually devoid of any recognition of the national and strategic issues that vex present higher education. Procedurally the planning process was less than inclusive, literally taking properly credentialed faculty representation out of the loop and substituting a set piece of submissive faculty for broader campus faculty input. Change is arguably needed in present US higher education organizational leadership as well as in the mechanisms of the pursuit of student learning.
But overall, the present US Department of Education/Duncan initiative is arguably the flimsiest and most disingenuous proposal yet floated for the purpose of producing positive change in our collegiate institutions.
Lastly, there is also obvious room to argue that none of the narrow and simplistic reform designs currently being floated for higher education, irrespective of their origin, should be permitted to advance without some meaningful research that first codifies key characteristics and performance indicators for all 4,000-plus institutions, or minimally for a projectable sample of those institutions. Realistically, that is likely not possible without creativity currently evading higher education, a new level of inter-institutional conversation and cooperation among university leaderships, and comparable cooperation among the states, perhaps via the National Governors Association (NGA). The assumption here is that the present US Congress is unlikely to grant such power of discovery to the present White House.
Conclusions?
Viewed against the common sense of most of Tuesday's SOTU address by Mr. Obama, this proposal simply doesn't satisfy a "sniff test." The complexity of the mission, juxtaposed against the ignorance and the ad hoc tactics proposed to rate higher education, has to be viewed as failed logic and programming. Like the pragmatically failing, testing-only alleged reform being impressed on public schools, this proposal is not the product of the competence that should guide national education advocacy.
American public higher education, formerly dominated by state funding and occasionally adequate oversight, has executed a 180 over the last several decades. For example, using IU again for convenience, that university system's funding from the State of Indiana is now less than 24 percent of total annual revenue. There is an inevitable loss of practical public control and oversight of institutions that must retool to support themselves.
Our collegiate managements reflect intelligent and highly educated human resources, but they are as vulnerable as any private sector firm to managerial failure; perhaps more so in the many institutions where leadership has come up through the academic ranks and lacks the managerial expertise demanded in the private sector. That has become increasingly evident in higher education leadership's emulation of a corporate leadership style that formerly dismissed strategic thinking. In short, our collegiate leaderships can learn something from our private sectors and from those who have pioneered change in management thought; the question is whether leaderships will register that in time.
America's colleges and universities are also vulnerable to obsolescence in spite of the intellectual capital they inventory. Change is needed, as suggested in a prior post, to:
- Prioritize the real missions;
- Get on the same page in providing information for potential students;
- Make the process of accepting students as transparent as possible within the context of existing confidentiality laws;
- Address the phenomenon of substituting part-time faculty for tenured and tenure-track teachers, or verify that the former's vetting equals traditional scrutiny;
- Combine cost effectiveness initiatives with learning output assessment to increase productivity;
- Get back to four years (or two years) meaning "four years";
- Consider the possibility that "lean" techniques applied in industry do have a role in education; and
- Move beyond the present institutionalization of curricula to aggressive updating of the knowledge being offered.
Lastly, it is impossible to avoid the reality (provocative to the guilty) that a whole lot of America's higher education shortfalls do not spring from higher education, at least tactically, but from the fact that US public schools, and especially the secondary grades, are simply not performing. Over a dozen years, NCLB, in spite of the hype, has produced from a quarter to a third of America's children who have been "left behind" and will struggle to get beyond that fate.
There is really no mystery why America is still in a form of educational crisis – you only have to pull cognitive function out of where it has been slumbering. Look critically at too many of our local schools, still dug into last century's rituals and knowledge obsolescence, refusing change, exhibiting administrative venality, and governed by boards of education that are unprepared or misdirected. That is amplified by inadequate teacher training in our schools of education, offset only by the better fraction of US teachers who have internalized stronger academic values and taken the initiative to advance their own learning and classroom skills.
Perhaps there is discovery afoot, precipitated by the shift in emphasis to higher education: that a century of disassociating US public PreK-12 systems and practices from the postsecondary education function has to come to an end, or will at least begin to register in educational and legislative awareness.