Reconceptualizing Assessment in the Service of Learning

One of Edmund W. Gordon’s early experiences in psychological assessment planted the idea of its potential to advance learning, and not just to measure and rank status. This is a tune that Dr. Gordon has hummed since that formative experience, elaborating it and creating a melody that made sense in a clinical community. He then expanded it to a cohort of scholars and students, encompassing a broad range of institutions. The melody became a chorus through the Gordon Commission for the Future of Assessment, funded by the Educational Testing Service. In this volume, through the efforts of a loyal group of associates that span the full range of humanity, the chorale has become an orchestra, if not an opera. … The opera is performed by a Who’s Who in the field of learning and assessment, and contains many pockets of Gordon’s vision, infused with a generous spirit of inquiry, urgency, and hope. 

Kenji Hakuta, Lee L. Jacks Professor, emeritus, Stanford University Graduate School of Education

In the 1950s in Brooklyn, New York, a special educator named Else Haeussermann defied testing conventions by asking not what a child knew, but how he or she learned. She wasn’t interested in sorting children by scores or noting deficiencies with clinical precision; she wanted to understand how they learned and under what conditions they could succeed. Working alongside a young educational psychologist, Edmund W. Gordon, she would adapt tasks—clarifying, chunking, or connecting them to a child’s experience—to discover their strengths and adaptations. This learner-centered approach affirmed an insight Gordon has retained to this day: the primary purpose of assessment is to inform and improve learning, not merely to certify status.

Today, that insight is more urgent than ever. Fundamental tensions exist between industrial-era testing practices and what modern science tells us about how people actually learn. For too long, educational assessment has been fixated on ranking students and certifying “what is,” a practice that often perpetuates opportunity gaps and creates a culture of anxiety. The Handbook for Assessment in the Service of Learning, Volume II, contends that it is time to shift away from outdated traditions and reconceptualize assessment not as a final judgment, but as an engine for learning. 

The chapters in this volume illustrate that using assessments to support learning, rather than just reflect it, requires a foundational shift: a move beyond the simple audit of achieved competence to a model that illuminates the learning process itself. As Susan M. Brookhart reminds us, this work begins by creating a supportive learning culture built on the solid foundation of formative assessment, where clear goals and rich feedback are the norm. But this new foundation cannot be built without confronting a legacy of bias. As Stephen G. Sireci, Sergio Araneda, and Kimberly McIntee argue, issues of social justice cannot be afterthoughts in assessment design; they must be foundational principles from the start. A learning-centric system must also be a justice-centric one, actively working to dismantle historical biases and create more opportunity-rich educational environments.

Norris M. Haynes, Mary K. Boudreaux, and Edmund W. Gordon argue that realizing this vision requires grounding assessment practices in a robust and comprehensive theoretical framework. They advocate for a student-centered approach, one that accounts for student characteristics, curriculum, and the often-unseen factors of the instructional context. 

Reconceptualization also requires us to redefine what we mean by rigor and scientific soundness. Stephen G. Sireci and Danielle Crabtree make the case for expanding our concept of validity beyond statistical elegance to include evidentiary usefulness. The critical question becomes: does this assessment actually help learners learn? And we should demand evidence to prove it.

So, what does this new world of assessment look like in practice? It looks like blurring the lines between instruction, learning, and assessment until they are integrated through improved design.

Consider the world of a well-designed video game, which, as James Paul Gee observes, is an admirable system of teaching, learning, and assessment. In a good game, a player’s constant problem-solving and the immediate feedback they receive naturally generate evidence of learning; the assessment is woven invisibly into the gameplay itself. Every click, response, or choice a student makes in a digital learning environment is potential data about their thinking. As Gregory K.W.K. Chung, Tianying Feng, and Elizabeth Redman explore, capturing these fine-grained learner-system interactions allows us to glean insights into a student’s process that no traditional multiple-choice test could ever offer.

At the other end of the spectrum is the “educative portfolio,” a concept invigorated by Carol Bonilla Bowman and Edmund W. Gordon. Unlike the stealth assessment of a game, the portfolio is purposefully visible and transparent. It transforms a static showcase of work into a dynamic instructional tool where students select pieces, reflect on their growth, and articulate their learning. The assessment becomes an act of learning itself, giving students voice and choice in how they demonstrate their competence.

Assessment innovation does not just imply new formats; it entails new relationships. Maria Elena Oliveri, Kerrie A. Douglas, and Mya Poe demonstrate the power of building culturally and linguistically responsive assessments with learners, not just for them. Through participatory co-design, they show that when assessments are grounded in learners’ cultural contexts and allow multiple ways to demonstrate competence, they become not only fairer but more instructionally valuable.

This commitment to aligning assessment with the learner’s unique context is advanced by Randy E. Bennett, Eva L. Baker, and Edmund W. Gordon, who offer a theory of socioculturally responsive assessment focused on personalization. Their work contends that achieving fairness requires deliberate adaptation to individual differences—not just in terms of cognitive capacity, but also in non-cognitive factors such as motivation, engagement, and metacognition. These strategies for adaptation are supported by an empirically testable theory that seeks to enhance test performance and self-efficacy by valuing the examinee’s experience, culture, and identity throughout the design process.

Yet, even the most brilliantly designed assessment is useless if its results are incomprehensible. In a provocatively titled chapter, Stephen G. Sireci and Neal Kingston call for “Removing the ‘Psycho’ from Education Metrics.” They critique traditional testing reports that mystify educators and alienate students with psychometric complexity. The alternative is to communicate results in intuitive, learner-centered ways—providing clear, actionable feedback and insights that students, teachers, and parents can readily understand and act upon.

Realizing this more humane and effective vision for assessment is a shared responsibility. It requires test developers to prioritize instructional value over mere psychometric elegance. It demands policymakers foster systems that trust educator expertise and value classroom-embedded assessment cultures over high-stakes, top-down mandates. And it requires that we empower educators to become designers of rich learning environments.

The journey is, at its heart, a commitment to honoring the whole learner. The powerful and hopeful message is that the frameworks and technologies we design are secondary to the humanistic vision that guides them. The final measure of any assessment’s worth is not found in a score report, but in the confidence, curiosity, and competence it inspires in a learner. As the chapters in this Volume emphasize, it is time we started building an assessment system worthy of that goal.

This blog series on Advancing AI, Measurement and Assessment System Innovation is curated by The Study Group, a non-profit organization. The Study Group exists to advance the best of artificial intelligence, assessment, and data practice, technology, and policy, and to uncover future design needs and opportunities for educational and workforce systems.

The post Reconceptualizing Assessment in the Service of Learning appeared first on Getting Smart.
