Education in the Second Quarter of the Twenty-First Century: Some Speculations 

© Howard Gardner 2025 

As compared to other sectors of our world, education changes very slowly. And that is not necessarily problematic. Our educational approaches—from kindergarten through university—have evolved over many decades; educators (schoolteachers, professors) tend to remain in their positions; and it is not easy to change buildings, recreational areas, daily schedules, annual calendars, and the like. 

Yet, if ever there was a time in recent memory that called for ambitious—even radical—rethinking of education, it is the second quarter of the 21st century. Technology alone has transformed the landscape and multiplied the options. Not only do we have devices that allow us to connect instantaneously to individuals and entities around the world, but these devices or platforms present the news and the gossip (friendly or malicious) from all over; they allow us to personalize—sometimes positively, sometimes harmfully—our relations with anyone having access to that platform. And of special salience for the traditional means and goals of education: we now have at our fingertips instrumentalities like ChatGPT, Claude, and other Large Language Models that can read, write, problem-solve, and problem-find as well as anyone on the planet—if not better than its eight billion inhabitants.

And it’s not just technology. Between weaponry, on the one hand, and climate change on the other, we may well be at the end of the Anthropocene—the era where human beings dominated the planet (see this blog post). I am old—which (alas) is not the same as being wise (see my blog on wisdom here); I am unlikely to witness firsthand the world in decades to come. But perhaps for that very reason, I think that it’s appropriate for me to sketch out how education may evolve in the years to come, and, as well, how it should evolve. 

Reshuffling Familiar Categories 


Begin with the phrase: “from cradle to grave.” Not only is education likely to cover that span, but because we can already access information about the nervous system pre-natally, it’s possible to anticipate and perhaps address cognitive challenges and opportunities. (Click here to learn about a branch of this work being done by Professor Nadine Gaab at her “Gaab Lab” at Harvard Graduate School of Education.) This is not to say that we should—that’s an ethical decision; but it’s very likely that at least some scientists and some societies will do so (perhaps seeking to help challenged newborns, perhaps to nurture future geniuses).  

As for the other end of the age spectrum, we may continue to educate individuals even when they are elderly and declining; and perhaps education will occur even post-death. Lest this sentence jar you, imagine how I felt when some high school students prepared a video (link here) which purported to show me giving a talk about my own work. Clearly, this could have been done post-graveyard; and perhaps much of the world would have been fooled, or at least left uncertain about my own contributions to this video.  

Assuming that this background seems convincing, or at least plausible, I wonder about the rationale—let alone the wisdom—of adopting the sequence taken for granted in much of the “developed world”: preschool, kindergarten, elementary and middle school, secondary school, college and university. Education is likely to begin early and to continue gradually as long as motivation, curricula, pedagogy, and technology are available. No need to group people by age; even less reason to have separate teaching cohorts, buildings, professional licensure, etc.

To be sure, there may be some individuals (call them teachers) who are deemed (or who deem themselves) more appropriate for one set of students than for others; there may be some individuals (call them students) who want or seek just to be surrounded by age peers, social peers, or cognitive peers. But flexibility, rather than rigidity, should be the rule. Indeed, as I consider the persons I know best (myself, my children, my grandchildren), I can readily see how each might have benefited from a more personalized pedagogical track…not the least common denominator, but the best individualized track.

If you know me and my writings at all, you won’t be surprised that I endorse a more personalized track. For better or worse, I am associated with the concept of “multiple intelligences”—the belief that humans have a range of intellectual strengths, and that we differ from one another in which intelligences are prepotent—which should be evoked when we are learning, and which should be drawn upon in our interactions with ourselves, and with others—be they peers, supervisors, offspring, employees or individuals with whom we just happen to be in contact.  

As I’ve often put it, education should be individuated in the sense that each of us should be taught in ways that work best for us; and education should be pluralistic, in the sense that materials of consequence should be presented to us (“taught” to use the conventional phrase) in a variety of ways, thereby reaching more of us in a variety of ways and enabling us to draw readily on what we have learned in appropriate ways. 

Changing the Balance Among the Five Minds 

When asked, a quarter of a century ago, about the kinds of minds that should be valued in the period ahead, I outlined “five minds.” Three of them were distinctly cognitive: disciplined (mastering traditional ways of thinking and problem solving); synthesizing (putting together information and conclusions in ways that make sense to oneself and to others); and creating (identifying new issues and problems, proposing fresh solutions, opening up avenues for exploration).

I still endorse and value these three kinds of minds. 

But today, a quarter of a century later, I would prioritize the two other minds—ones that determine how we relate to other individuals. They include the respectful mind—the way that we deal with those with whom we come in regular contact; and the ethical mind—the way in which we deal with difficult, vexing dilemmas—ones that we cannot easily articulate and solve. My colleagues and I recognized that ethical dilemmas are more likely to arise at work (dilemmas in various professions and vocations) and in our roles as citizens (civic dilemmas).  

What I had not appreciated until recently: the potential for addressing and solving ethical dilemmas presupposes that we are members of a democratic society (please see my blog post here). Should one live in a society that is totalitarian, oligarchic, or autocratic, one cannot truly grapple with ethical dilemmas because, in a sense, their solution or resolution is simply dictated by those in power. 

Back to Education, in the Era of Smart Entities 

At last—though it may deserve to be first—I come to the heart of the matter. To the extent that we continue to educate in any sense in our societies, and to the extent that we have machines and mechanisms that equal or surpass us in intellectual power, what should “the curriculum” be, and how should it be transmitted? 

To begin with, I have no hesitation in defending the importance of the classical basics—the three Rs of Reading, (W)riting, and (A)rithmetic—and I would definitely add a fourth basic—the principles and processes of Coding, perhaps introduced by an appealing curricular intervention like Scratch. Nor do I have any problem with individuals who want to pursue the classical disciplines—history and other humanities, biology and other sciences, music and other art forms, even my field of psychology and other social sciences—as far as any of us desire. Even though we are likely to be surpassed by any number of LLMs, we should still have the opportunity and the option to be a historian, biologist, pianist, clinician, and the like (because of the joy of discovering, and the fulfillment of sharing what we’ve learned).

Let’s say that, in a future that is effectively at hand, we turn over most of the classical educational assignments and products to Large Language Models. To be specific (and a tad hypothetical), “Howie” is growing up in the 2030s, and Howie asks his favorite LLM about his family background; what subjects or topics he should pursue when he is ten years old and how he should pursue them; what essay he should write to gain acceptance at a summer program, his favorite college, a desirable internship; what information he should provide to a future employer; how he should carry out his first task at work, so it does not become his last work task; how he should deal with an ethical dilemma at work; assuming he still lives in a democracy, how he should vote on a controversial ballot initiative…

With due respect to the rechristened Facebook, I call this “meta” knowledge—but if you prefer a less loaded term, just call it “Knowledge from the Summit of the Mountain”—“Sum” or “Summa” knowledge. Just as examples, how much knowledge of history and historical methods do we need to know if we are to assess a history provided by ChatGPT? How much knowledge of biology and of scientific methods do we need if we are to assess a medical account and prescription to be provided by Claude?

To answer such questions reliably, we need to carry out empirical and experimental studies—studies which indicate how much knowledge suffices, how much knowledge is insufficient, and how much knowledge may be excessive.

And this is the kind of question that individuals like me and my colleagues—who have carried out educational research for decades—and institutions like Harvard Project Zero—which has carried out quality research for decades—should be tackling in the years ahead.  

In the likely event that I am not around to learn the answer first-hand, please do inform my avatar—and put it on the appropriate e-site. 
