This is the first of an n-part series on technology integration into U.S. K-12 learning systems. The “n” is still unknown because of the explosion of narratives, and the hundreds of pages of articles and citations, that issued from asking the question.
But the reader may quickly discern a difference: this treatment doesn’t start with proprietary hardware, or even software. No Internet, whiteboards,
professional videoconferencing, iPads, other pads, desktops, laptops, smartphones,
clickers, or even more futuristic technology
applications in this introduction. The proposition of this series is that those
are the last topics and questions to be posed. The objects typically identified
as present technologies are the dependent variables in the K-12 technology
equation, not its explanatory variables.
Another caveat: if one is looking for a vending machine for mass distribution of pre-packaged learning technologies, prepare to be disappointed. The machine is still mostly empty.
Indeed, the core challenge is that, to stay with the metaphor, the prior decisions to design the product, the machines to blend and extrude it, and the machines to package it haven’t been adequately put in place for U.S. K-12.
State of Technology Integration
Volumes have been spoken on whether American K-12 schools and programs have sufficiently embraced digital applications, and, even where those efforts have been executed, whether present technologies are assets to learning, distractions, marginally useful, or too costly for a learning ROI.
In reality, enough field research has been conducted over the last few decades, though not at the necessary level of specificity, to suggest some higher-order answers. There is great risk in over-simplifying their outcomes, but for brevity: most studies found some positive contribution from overall technology adoption, but not the “knock your socks off” finding that routinely changes beliefs and causes brand-switching.
Partially explaining that temperate assessment, it has also been rare to find instances throughout public K-12 systems where digital technologies were embraced normatively as they should have been: too frequently they are pasted on top of traditional classroom rubrics, rarely made the foundation for a rewrite of learning models, and even more rarely to date put into the hands of teachers properly prepared to use the tools. Education’s desultory approach to its own reform has been a drag on using even extant technologies proven effective in other venues.
Bad Dog?
U.S. K-12 education is being flogged for its past failure to extend the classroom delivery of learning, knowledge, and creativity to meet dual challenges: the growth of information, now allegedly doubling every couple of days, and the need to create a learning environment and performance with both greater reach and depth but, simultaneously, less variability regardless of the variances among its human subjects. Yet operating K-12 schools have not been principally complicit in the failure to pioneer the technology adaptations that could help produce such delivery.
That failure has some mainstream contributors: U.S. schools of education, more deeply embedded in the last century than public K-12 schools and resistant to cross-discipline research; the sources of our various digital technologies, either indifferent to applications of their creativity or lacking incentives to tackle education versus the financial, business, research, industrial, or even public-systems markets; and a USDOE that dropped the ball when NCLB became the reform theme.
Critical in this view of the challenge, years of digging have revealed no well-organized, definitive models relating digital concepts to the learning chores, nor conceptual frameworks for unfolding what should link the learning environments with those tools. There have been no more than a handful of attempts to go beyond a merely favorable technology environment and treat experimentally the proposition that specific tools create specific learning gains. In short, public K-12 schools have been handed “stuff” they barely comprehend, then beaten up for failing to turn it into learning innovation.
The Issue
Start by imagining a continuum or thread: at one end, billions of switches capable of indicating 0 or 1, forming via binary arithmetic the capacity to express and store numbers and letters; embedded in hardware that can manipulate and store the results as information; employable to carry out operations using Boolean logic and mathematical expressions; and able to work on algorithms, formulas, graphic manipulation, transmission of recoverable images, links to the Internet, and creation of output in language and formats amenable to human interface and understanding.
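To make the switch-level end of that continuum concrete, here is a toy Python sketch (the particular bit string is illustrative, not from the text): the same eight switches read as a number via binary arithmetic and as a letter via a character encoding, with a Boolean operation alongside.

```python
# Toy sketch: eight 0/1 "switches" read as a number and as a letter.
bits = "01000001"            # eight switches, each set to 0 or 1
as_number = int(bits, 2)     # binary arithmetic: 65
as_letter = chr(as_number)   # character encoding: 'A'
print(as_number, as_letter)  # -> 65 A

# The same switches support Boolean logic.
print(True and not False)    # -> True
```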
Next, imagine that these capabilities can be applied via computer programming and language rules to address all forms of communications processing and expression that can be transformed digitally, and via specific models (many existing before the digital revolution) to solve equations, maintain databases, execute algorithms, and transform and store both numbers and language content, all targeting common-sense and invented applications, many of them part of our knowledge base before processors and computers. Digital processing, along with the humanly derived protocols for getting solutions, both massively shortened the time to reach solutions and created a sea change in conceiving what could be solved.
A simple example: every day in the business, science, or even general press, someone presents a relationship between two variables, say family income (x) and savings rate (y), and asserts an association, shown graphically and with fit measured by a number called a correlation coefficient, whose magnitude runs from 0 to 1. That coefficient is actually the product of one specific form of associative calculation (in this case the Pearson product-moment r), derived by using differential calculus to fit a least-squares line (minimizing the squared deviations from the line when y is regressed on x) to the x-y data. The points not fitted by the line, and their distances from it, become the basis for deriving the r coefficient. Complicate the game a bit and add multiple explanatory variables to predict y: sixty years ago, solving for that (multiple) R with any material number of observations took roughly 12-15 hours on a mechanical calculator; in 2012 the solution, with all attendant fit statistics, takes a few seconds. The game is still pretty much the same, but it has been kicked to a new level of explanatory capability.
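For readers who want the mechanics spelled out, below is a minimal Python sketch using synthetic data (all values and variable names are illustrative). It fits the least-squares line and derives r from the unexplained deviations, the same calculation that once consumed hours on a mechanical calculator; adding columns to the design matrix is all the multiple-R case requires.

```python
import numpy as np

# Synthetic, illustrative data: family income (x) and savings rate (y).
rng = np.random.default_rng(0)
x = rng.normal(60_000, 15_000, size=200)
y = 0.000002 * x + rng.normal(0, 0.03, size=200)

# Fit y = a + b*x by least squares (minimizing squared deviations).
X = np.column_stack([np.ones_like(x), x])   # add more columns for multiple R
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ coef

# The deviations the fitted line leaves unexplained yield r.
ss_res = np.sum((y - fitted) ** 2)    # unexplained variation
ss_tot = np.sum((y - y.mean()) ** 2)  # total variation in y
r = np.sign(coef[1]) * np.sqrt(1 - ss_res / ss_tot)
print(f"Pearson r = {r:.3f}")
```

The shortcut np.corrcoef(x, y)[0, 1] returns the same r directly; spelling out the deviations makes the point that the once-laborious arithmetic now runs in a blink.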
With all of the latter power, the issue of meaning involves more than the statistical model and its assumptions; it is that the model is applicable, given proper assumptions, to literally any human enterprise and knowledge where quantification is possible. In some domains of knowledge, in some environmental setting, populated by some assumed human recipient, the model will suffice and deliver useful information. Critically, it is that specification which carries meaning. In the case of technology applied to K-12 education, the game is no different: what basic digital technology, delivering what capacity for explanation, fitting what needs of its recipients, tailored to deliver in what environments, keyed to what capabilities and knowledge needs of its human targets, directed toward what goal for expression and use of results, delivered at what cost, and producing what differentials in performance, permits a given technology to become a challenger to a present methodology for discovery and learning?
Expressed another way, the K-12 technology problem is to specify technology, along with the logical model or algorithm or rubric, in the form of usable software; fitting the concepts of learning employed by the system and teacher; fitting the specific needs for functional delivery to a classroom’s students; meldable with other specific learning methods; executable within the school’s structure, support environment, and cost constraints; and meeting expressed learning and assessment goals.
The problem is not one of
selecting hardware from an electronics buffet, but of carefully matching the
specific utilities of that hardware and software to equally specific learning
plans.
Getting a Handle
Clearly there is more than one
way to address the K-12 technology issue.
One strategy, that became
managerially popular, is just do it – by small experiments and trial and error
simply try various digital tools until the better performers are
identified. The concept has worked
in markets where it may cost more to research needs than can be returned by the
segmented product markets, or where products have no link to past customer
experiences and they can’t define what they want. Apple under Jobs didn’t do market research on new products,
because the market could not have delivered answers. Apple’s present valuation speaks volumes about the question.
If the learning needs that could be augmented by technology were universal, with minimal variance, and there were lead time and funding to burn, the above would not be an unacceptable strategy. American public K-12 has none of those luxuries.
Proposed below are just the
chapter titles of one approach for methodically sorting technologies for fit to
the K-12 classroom:
Need for categorization of digital technologies, extant and on the horizon, based on their differential capacity to deliver an educational service. For example, assess the comparative utility of whiteboard versus laptop versus pad versus smartphone versus digital projector versus even traditional visual display options in communicating a class of information. For contrast, change the learning task from communicating by telling to creating an interactive learning situation, and the utility of each technology changes.
The above has roots in defining the sensory deliveries that need to accompany the various classroom learning needs. There is well-defined theory on multi-sensory inputs affecting the speed and quality of learning. Self-evidently, every K-12-relevant technology delivers a different potential for sensory delivery: auditory, visual, tactile, proprioceptive, even olfactory and gustatory. A remedial side to this factor is the difficult-to-diagnose presence of student sensory integration dysfunction; understanding which technologies can serve as remedies is also part of this need for technology matching.
Need for categorization of digital models and techniques based on their fit to common K-12 models of learning, for example, matching a belief in behaviorism, cognitivism, constructivism, or some amalgam. Another is matching Bloom’s Revised Taxonomy. This well-known framework for staging learning in K-12 posits learning stages of remembering, understanding, applying, analyzing, evaluating, and creating. Different technology hardware and digital models or techniques will differentially serve the extended active components of these stages, as the sketch below illustrates.
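As a toy illustration of what such a categorization might look like in machine-readable form, the Python sketch below maps the taxonomy’s stages to candidate technology classes; the pairings are hypothetical placeholders for discussion, not research findings.

```python
# Hypothetical stage-to-technology categorization (illustrative only;
# the pairings are placeholders, not validated research findings).
bloom_technology_map = {
    "remembering":   ["flash-card apps", "drill-and-practice software"],
    "understanding": ["whiteboard demonstrations", "video modules"],
    "applying":      ["simulations", "virtual labs"],
    "analyzing":     ["databases", "semantic networks"],
    "evaluating":    ["expert systems", "collaborative review platforms"],
    "creating":      ["microworlds", "programming environments"],
}

for stage, tools in bloom_technology_map.items():
    print(f"{stage:>13}: {', '.join(tools)}")
```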
Another need: categorization of technologies based on whether lower-order memorization and recall or HOTS (higher-order thinking skills) are being targeted. For example, creating HOTS may engage developing taxonomies, knowledge-search techniques, database understanding, use of semantic networks, expert systems, collaborative knowledge construction, microworlds and simulation, artificial intelligence, constructivist learning methods, hands-on research, group and socially-networked creativity exercises, and even computer programming proper as a cognitive learning modality.
The specific K-12 environment, including school organization, administration, culture, and financial model, becomes a set of conditions permitting, enhancing, or retarding technology adoption and application; similarly, the specific operating or style properties enforced by the above are bases for the features of adopted technology. These can be anticipated and linked to the specific versions of technology best fitting a specific system.
Lastly, three features of technology effectiveness frequently slip under the radar: one, the significant distinction between the traditional view that the function of the K-12 classroom is communicating to and informing students, and the alternative of creating an interactive culture; two, at this point in technology’s and K-12 education’s joint evolution, the client is as much the teacher as the student; and three, germane to the debate about the efficacy of standardized testing, the role that technology can play in developing equally standardized (or better and more honest) tests that can assess HOTS rather than the lowest common denominator of K-12 delivery.
Present reform and standardized testing are working against HOTS development, and it is uncertain whether and how interactive models of real learning can exist coextensively with the drill the former dictates. One hypothesis is that technology combinations may so increase the productivity of the low-level stages of Bloom’s Taxonomy that slack is created for installing genuine interactive learning in many systems. There is growing evidence from neurobiological and fMRI work that the productivity of low-level performance can be increased, but an empirical experimental platform and organized ways to run school trials don’t yet appear to exist.
The second factor,
our K-12 teachers, may dwarf the first.
Some empirical experiments with teachers suggest that, not unexpectedly,
there are currently more negative beliefs about the efficacy of technology in
K-12 than positive ones.
Accompanying analysis suggests – also behaviorally not unexpected – that
those beliefs are difficult to change, requiring actual successful use of
technology to precede and induce attitude and belief change. This puts a premium on the leadership
and perspicacity of present public K-12 school administrations, currently not always
an auspicious bet, because of both their training and the heavy pressure to
meet current mechanical reform objectives.
The technology contribution to assessment is closer to reality than many believe, because of increasing work on expert systems, breakthroughs in artificial intelligence, and the diffusion of knowledge of how to create simulations and serious gaming. If your only connection to gaming is “Angry Birds,” you are missing a large swath of computer experience; effective education simulation stretches back over a half-century, although it was then limited to available mainframe computer support. As a measure of how the game has changed, an eighth-grade grandson is writing game simulations on today’s desktop, which could just as easily be a laptop. The challenge, instead of fixating on present low-level standardized testing sequestered by a corporate cabal, is to take back academically the function of test development, make it part of the educational commons, and inject the creativity necessary to develop standardized, technology-assisted testing that covers the full range of learning outcomes.
If It Were Easy...
The above broad categories of both education theory and practical factors, which determine which digital technologies are sought along with related models and techniques, are for this post just chapter titles. Each factor needs to be expanded and linked to a comparably expanded classification of digital tools, with that linking differentially matched to classroom need. Further, there is the need to factor in the availability of complementary learning modes that operate outside the classroom, for example, online learning, third-party capabilities (Harvard’s and MIT’s free curricula, the Khan Academy learning modules), and parental contributions to formal learning.
The process described may seem complex. Why not just adopt the KISS principle? For example, simply spend a couple million dollars on iPads to distribute to students in the belief that their use is obvious. In fact, their effective use is conditioned by everything above, plus several hundred education-specific third-party apps and literally hundreds of thousands of general apps (the Apple App Store alone vends 500,000) that range from dysfunctional for K-12 education to brilliant but neither intended nor tailored for K-12 use. Connected in specific ways, those pads become search, social, and problem-solving networking devices rather than “laptop lite.” This still excludes additional custom apps that may need to be developed to serve a particular school’s application.
The issue is no different from the function served by Bloom’s Taxonomy and the generation of thinking it prompted to structure K-12 learning stages and processes. Arguably, the technology issue, equivalently, has no magic answer; it will take the same level of developmental and HOTS effort to refine tailored classroom-learning-technology solutions.
Is there a return on the investment in technology and the student engagement and learning it can produce? One seer of renown thought so, long before the world became littered with our ubiquitous 0s and 1s:
“I hear and I forget. I see and I remember. I do and I understand.” (Confucius)
Postscript
This continuing series of SQUINTS will attempt to unfold the above categories of determinants of technology that can work, with a similar attempt to align the specific technologies with their fit in classroom functions and in learning assessment.