CHATGPT: FIRST MUSINGS

Howard Gardner © 2023

How will ChatGPT—and other Large Language Instruments—affect our educational system—and our broader society? How should they?

I’m frequently asked questions like these—and they are much on my mind.

Something akin to ChatGPT—human or super-human levels of performance—has long been portrayed in science fiction: I’m familiar with the American, British, French, and Russian varieties. But few observers expected such excellent performance to arrive so fast—so impressive, and so threatening (or enabling), depending on your stance.

As suggested by historian Yuval Harari, we may be approaching the end of the Anthropocene era.

We can anticipate that large language instruments—like OpenAI’s ChatGPT and DALL-E—will continually improve.

They will be able to do anything that can be described, captured in some kind of notation. Already they are able to conduct psychotherapy with patients, write credible college application essays, and create works of visual art or pieces of music in the style of well-known human creators as well as in newly invented styles. Soon one of their creations may be considered for the Nobel Prize in physics or literature, the Pulitzer Prize for musical composition or journalism.

Of course, superior AI performance does not—and need not—prevent human beings from engaging in such activities. We humans can still paint, compose music, sculpt, compete in chess, conduct psychotherapy sessions—even if AI systems turn out to outperform us in some or most ways.

OpenAI introduced GPT-3 in 2020 and DALL-E in 2021.

We can also work in conjunction with AI programs. A painter may ask DALL-E to create something, after which the painter may alter what the program has furnished. A researcher may present ChatGPT with a hypothesis and ask the system to come up with ways to test that hypothesis—after which the researcher can carry out one or more of these approaches herself. Such activities can alternate, going back and forth between the human provision and the computational program.

We fear what could go wrong—and rightly so. AI systems like ChatGPT have not undergone a million-plus years of evolutionary history (including near extinction or sudden vaults in skill); such recently devised systems do not inhabit our planet in the same way that the hominid species has. They are not necessarily—and certainly not existentially—afraid of cataclysmic climate change, or nuclear war, or viruses that prove fatal to homo sapiens. Indeed, such systems could spread misinformation rapidly and thereby contribute to destructive climate change and the probability of nuclear war (recall “The Doomsday Machine” featured in the dystopic movie Dr. Strangelove). These destructive outcomes are certainly possible, although (admittedly) such calamities might happen even had there been no digital revolution.

And what about the effects of Large Language Instruments on our schools, our broader educational system?

Many fear that systems like ChatGPT will make it unnecessary for students to learn anything, since ChatGPT can tell them everything they might want or need to know—almost instantaneously and almost always accurately (or at least as accurately as a 20th-century encyclopedia or today’s “edition” of Wikipedia!). I think that AI will have a huge impact on education, but not in that way.

Now that machines are rivalling or even surpassing us in so many ways, I have an ambitious and perhaps radical recommendation. What education of members of our species should do—increasingly and thoughtfully—is to focus on the human condition: what it means to be human, what our strengths and frailties are, what we have accomplished (for good or evil) over many centuries of biological and cultural evolution, what opportunities are afforded by our stature and our status, what we should avoid, what we should pursue, in what ways, and with what indices of success...or of concern.

But to forestall an immediate and appropriate reservation: I don’t intend to be homo sapiens-centric. Rather, I want us to focus on our species as part of the wider world, indeed the wider universe. That universe includes the biological and geological worlds that are known to us.

Psychologist-turned-educator (and my teacher) Jerome Bruner inspired me. His curriculum for middle school children, developed nearly sixty years ago, centered on three questions:

Bruner in the Chanticleer 1936, Duke University (Source: Wikipedia)

  • What makes human beings human?

  • How did they get to be that way?

  • How can they be made more so?

To approach these framing topics intelligently, we need disciplinary knowledge, rigor, and tools. We may not need to completely scuttle earlier curricular frameworks (e.g., those posed in the United States in the 1890s by the “Committee of Ten” or the more recent “Common Core”); but we need to rethink how they can be taught, modelled, and activated to address such over-arching questions.

We need to understand our human nature—biologically, psychologically, culturally, historically, and pre-historically. That’s the way to preserve the planet and all of us on it. It’s also the optimal way to launch joint human-computational ventures—ranging from robots that construct or re-construct environments to programs dedicated (as examples) to economic planning, political positioning, military strategies and decisions.

To emphasize: this approach is not intended to glorify our species; homo sapiens has done much that is regrettable and lamentable. Rather, it is to explain and to understand—so that, as a species, we can do better as we move forward in a human-computer era.


Against this background, how have I re-considered or re-conceptualized the three issues that, as a scholar, I’ve long pondered?

  1. Synthesizing is the most straightforward. Anything that can be laid out and formulated—by humans or machines—will be synthesized well by ChatGPT and its ilk. It’s hard to imagine that a human being—or even a large team of well-educated human beings—will do better synthesis than ChatGPT-4, 5, or n.

    We could imagine a “Howard Gardner ChatGPT”—one that synthesizes the way that I do, only better—it would be like an ever-improving chess program in that way. Whether ChatGPT-HG is a dream or a nightmare I leave to your (human) judgment.

  2. Good work and good citizenship pose different challenges. Our aspirational conceptions of work and of membership in a community have emerged in the course of human history over the last several thousand years—within and across hundreds of cultures. Looking ahead, these aspirations indicate what we are likely to have to do if we want to survive as a planet and as a species.

    All cultures have views, conceptions, of these “goods,” but of course—and understandably—these views are not the same. What is good—and what is bad, or evil, or neutral—in 2023 is not the same as in 1723. What is valued today in China is not necessarily what is admired in Scandinavia or Brazil. And there are different versions of “the good” in the US—just think of the deep south compared to the East and West coasts.

    ChatGPT could synthesize different senses of “good,” in the realms of both “work” and “citizenship.” But there’s little reason to think that human beings will necessarily abide by such syntheses—the League of Nations, the United Nations, the Universal Declaration of Human Rights, the Geneva Conventions were certainly created with good will by human beings—but they have been honored as much in the breach as in the observance.

A Personal Perspective

We won’t survive as a planet unless we institute and subscribe to some kind of world belief system. It would need the prevalence that Christianity had in the Occident a millennium ago, or that Confucianism or Buddhism have had over the centuries in Asia, and it should incorporate tactics like “peaceful disobedience” in the spirit of Mahatma Gandhi, Martin Luther King, or Nelson Mandela. This form of faith needs to be constructed so as to enable the survival and thriving of the planet, and the entities on it, including plants, non-human animals, and the range of chemical elements and compounds.

Personally, I do not have reservations about terming this a “world religion”—so long as it does not posit a specific view of an Almighty Figure and does not require allegiance to that entity. But a better analogy might be a “world language”—one that could be like Esperanto or a string of bits 00010101111….

And if such a school of thought is akin to a religion, it can’t be one that favors one culture over others—it needs to be catholic, rather than Catholic, judicious rather than Jewish. Such a belief-and-action system needs to center on the recognition and the resolution of challenges—in the spirit of controlling climate change, or conquering illness, or combatting a comet directed at earth from outer space, or a variety of ChatGPT that threatens to “do us in” from within…. Of the philosophical or epistemological choices known to me, I resonate most to humanism—as described well by Sarah Bakewell in her recent book Humanly Possible.

Multiple Intelligences (MI)

And, finally, I turn to MI. Without question, any work by any intelligence, or combination of intelligences, that can be specified with clarity will soon be mastered by Large Language Instruments—indeed, such performances by now constitute a trivial achievement with respect to linguistic, logical, musical, spatial intelligences—at least as we know them, via their human instantiations.

How—or even whether—such computational instruments can display bodily intelligences or the personal intelligences is a different matter. The answer depends on how broad a formulation one is willing to accept.

To be specific:

Taylor Swift at 2019 American Music Awards (Source: Wikipedia)

  • Does a robotic version of ChatGPT need to be able to perform ballet à la Rudolf Nureyev and Margot Fonteyn? And must it also show how these performers might dance in 2023 rather than in 1963?

  • Does it need to inspire people, the way Joan of Arc or Martin Luther King did?

  • Should it be able to conduct successful psychotherapy in the manner of Erik Erikson or Carl Rogers?

  • Or are non-human attempts to instantiate these intelligences seen as category errors—the way that we would likely dismiss a chimpanzee that purported to create poetry on a keyboard?

The answers, in turn, are determined by what we mean by a human intelligence—is it certain behavioral outputs alone (the proverbial monkey that types out Shakespeare, or the songbird that can emulate Maria Callas or Luciano Pavarotti, Mick Jagger or Taylor Swift)? Or is it what a human or group of humans can express through that intelligence to other human beings—the meanings that can be created, conveyed, and comprehended among members of the species?

I’m reminded of Thomas Nagel’s question: “What is it like to be a bat?” ChatGPT can certainly simulate human beings. But perhaps only human beings can realize—feel, experience, dream—what it’s like to be a human being. And perhaps only human beings can and will care—existentially—about that question. And this is what I believe education in our post-ChatGPT world should focus on.


For comments on earlier versions of this far-ranging blog, I am grateful to Shinri Furuzawa, Jay Gardner, Annie Stachura, and Ellen Winner.

REFERENCES:

Bakewell, S. (2024). Humanly possible: Seven hundred years of humanist freethinking, inquiry, and hope. Vintage Canada.

Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435–450. https://doi.org/10.4159/harvard.9780674594623.c15

Wikimedia Foundation. (2023, August 21). Man: A course of study. Wikipedia. https://en.wikipedia.org/wiki/Man:_A_Course_of_Study
