Data-driven learning measurements under fire

We must resist the temptation that, because we can measure, we should.

– National Association of Independent Schools

I am entirely certain that 20 years from now we will look back at education as it is practiced in most schools today and wonder that we could have tolerated anything so primitive.

– John W. Gardner, founder, Common Cause

Paul Mann
Mad River Union

ARCATA – Education critics warn that accreditation agencies are imposing ill-judged and undue pressures on public universities like Humboldt State, forcing them to employ large-scale, data-driven measurements of student learning to verify classroom results.

Critics say the emphasis should be on creative measurements, not numerical, quantitative ones, which tend to be static and limited in what they confirm about successful learning.

Better than tests, critics believe, are multiyear, cross-disciplinary portfolios or journals of student work. They are more far-reaching and precise “mosaics” of educational progress across various subjects, centered on a lively interplay of disciplines. In other words, qualitative evaluations are organic and longitudinal, not static and short-lived snapshots like numerical ones.

Reinforcing this distinction, education critics note that portfolio, journal and diary learning requires constant writing, which, judging by many standardized testing results, 21st-century students sorely need. Rigorous writing should be taught in all courses, analysts say.

Only 25 percent of U.S. students are proficient in writing, according to 2013 data cited by Michigan State University, which called the collapse “abysmal.”

The quantitative-versus-qualitative debate over how to gauge whether education is working represents an epochal change in public education. In academic argot, it is a “paradigm shift” to a new campus architecture called learning management and learning management systems.

Its underlying premise is that learning can be measured empirically through verifiable numerical measurements (data analysis) and observation, despite the intangibles of educational experience: the cultivation of critical judgment, lucidity of mind and habits of intellectual curiosity that make for lifelong learning and existential enlightenment.

A corollary to this premise is that the data, the “facts,” will speak for themselves. But critics warn that this is a misapprehension. It is human judgment that will arrange the data and determine their meaning and reliability. Human judgment is subject to error and unconscious bias.

Judgment will also be exercised about the ways the data are handled – that is, the methodologies brought to bear – and about the significance and interpretation of the conclusions.

These questions are topical at Humboldt State, where work is well along to meet the demand from its regional accreditation agency, the Western Association of Schools and Colleges (WASC), for metrics of successful student learning. Officials from the association are scheduled to be on campus in spring 2018.

A university-wide steering committee has drafted a cluster of recommendations to meet the association’s requirements, and HSU has already enacted several of them.

The committee’s draft does not address the validity and wisdom of relying extensively on metrics and “data dashboards” to measure the quality of education. That was not the panel’s charter.

But an analysis by educators at the University of Southern California strongly questions the diktat of accreditation agencies for quantitative measurements of student learning.   

In fact, contend Melissa Contreras-McGavin and Adriana J. Kezar of USC’s Rossier School of Education, so-called learning metrics “yield time-bound, partial and arguably weak evidence of student learning” – evidence that can be shallow, inaccurate and misleading.

“One of the arguments in favor of quantitative assessments is their capacity to effectively predict future student performance and outcomes,” Contreras-McGavin and Kezar state. “However, the predictive usefulness of quantitative measurements often extends no more than the next year of course work.”

In their analysis, “Assessing Student Learning in Higher Education,” the two USC analysts comment with some asperity, “We suggest that leaders focus on assessment activities that best support student learning, rather than merely developing measures to placate external agents [accreditation agencies like WASC]. We also challenge those in public policy to reconsider their focus on simplistic measures.”

They make these points:

• Reliance on quantitative benchmarks and percentages is a tempting shortcut because those kinds of data are easy to collect, interpret and distribute. But they lack the depth and longevity needed to achieve real and lasting improvements in student learning.

• Quantitative assessments do not readily demonstrate student self-awareness, curiosity, interpersonal skills and development of leadership ability. More effective is qualitative assessment in the form of diary, journal and portfolio instruments, which capture more complex and recondite learning outcomes like moral judgment. “Spatial, naturalist, existential, intrapersonal, interpersonal, musical and bodily-kinesthetic intelligences,” which are the hallmark of a liberal arts education, are more apparent in qualitative appraisals, the USC analysts contend.

Portfolio learning also helps students and instructors chronicle progress jointly on a sustained basis and observe up close how and when learning occurs. Professor and student share directly in each other’s educational experiences.

Many campuses pay lip service to those kinds of learning in their mission statements, but fail to integrate them either in their curricula or in their measurements of learning success, the USC analysis claims. New assessment techniques are needed to weigh multiple types of learning, including a student’s conceptual clarity, organizational skills, multicultural awareness and ability to assimilate evolving and conflicting perspectives.

Other independent analysts say learning assessments should take into account the holistic impact of campus climate and culture on student life and academic success. In 2007, the Philadelphia-based Middle States Commission on Higher Education, which represents Notre Dame, Syracuse University, Temple University and West Point Military Academy, among others, said in a treatise on “Student Learning Assessment” that strategic questions should be asked and answered in systematic fashion when crafting assessment tools. The commission gave these examples:

• What is the level of trust on campus? If trust is a problem, how can it be earned?

• What person or persons on campus are perceived to hold unofficial power? How can those persons be convinced of the benefits of assessment?

• What is the system of apportioning resources on campus? Are there disputes about the equity of resource distribution?

Summing up, the Middle States Commission commented, “Faculty members and students probably already have a good sense of what is working best on a campus. For example, there may be anecdotal evidence that graduates of one program have particularly strong research skills, while students in another program may be especially adept at using and adapting what they have learned to solve unforeseen problems while working as interns. An audit of teaching and assessment practices used by successful programs will produce models for other departments.”

Internationally, the Paris-based Organization for Economic Cooperation and Development (OECD) says educators should be aware that learning assessment and academic policy making are often hobbled by a serious institutional disconnect.

The 35-nation OECD unit for evidence-based policy research in education points out, “Too often, information gathered in classrooms is seen as irrelevant to the business of policy making. The fact is that knowledge on the impact of the different approaches to teaching and assessment is limited.”

However, OECD researchers hasten to add, “The good news is that the existing research base (including research on the generic methods of formative assessment, as well as practitioner wisdom) provides clear direction for future research and development.”
