The interplay of education, technology and humanity
In this longish essay, I reflect on what ChatGPT (and similar LLMs) means for the future of education.
scottamoore

How future LLMs will affect education

I just finished reading the second book of the Three-Body Problem trilogy by Cixin Liu. It’s a masterpiece of science fiction that is, at its heart, about the interplay of two themes: 1) scientific and technological progress, and 2) what it means to be human.

Reading this series has opened my eyes—at an opportune time—to broader considerations raised by large language models (LLMs), the future of education, and what it means to be human.

The discussions about ChatGPT (at the time, based on GPT-3) that I saw started off with worries about students cheating on their assignments. Many of these discussions soon highlighted that ChatGPT tends to make up answers that sound extremely plausible when it doesn’t know the answer. They then progressed to analyses about how the technology can benefit people by providing text answers to relatively complex questions more easily than a standard Google search. Conclusions have generally been some form of “it’s a useful technology if used correctly; think of it as a hard-working, but possibly over-confident, assistant.”

GPT-4’s recent appearance and my reading of the trilogy have changed the meaning of this for me. I have seen some fairly impressive examples of a tutorial session where the model taught a student over a long back-and-forth about a complex college-level concept. This is an important step—one taken in just a few months!—and shouldn’t go unnoticed.

A few days ago, I wrote a LinkedIn post called “Academic advice for a 10-year-old”. One recommendation about future studies that I included is that “Our most human traits will matter more than ever, so these traits should be developed.” The underlying assumptions are that computers are better at doing things that computers can do, humans have some things that we can do better, and we need to build on those strengths.

When higher ed leaders think about the evolution of LLMs, they must think several (dozen?) steps ahead because higher ed itself moves so slowly.

What do future LLMs (and related technologies) mean for future professors and for higher ed as a whole?

Most of the world had no idea that LLMs were this close to being as competent as they are. However, we can all see that they are more than competent discussion partners. We have to be open to the possibility that these LLMs might become even more capable (and, thus, take the lead on some capabilities at which humans are currently better). I am quite familiar with the history of AI predictions and how what feels like “it could be days” ends up being more like “it’s simply never going to happen!” Still, it feels like nothing more than a fairly short step from where the technology is now to the future that I’m envisioning.

What does this mean? I can think of a few not-so-distant ways in which LLMs might improve:

1. Understanding that “truth” means something, and that “sounds reasonable to some people” is different from “is backed by evidence.”
2. Justifying anything it says with explicit and detailed citations.
3. Treating evidence as a measurable quality, or at least one that can be approximated, and making statements only when they 1) meet at least a certain level of evidence and 2) have more evidence than other possible related statements.

ChatGPT based on GPT-4 already understands what it means to instruct or to lead a tutorial, and it is quite competent at both. Combine that with a more advanced understanding of the three points above, and we have bigger questions to consider. I don’t know where researchers are on the path to addressing these points, but I assume that they will, at some point, be able to address them adequately.

Let’s just start with this question.

Will introductory and intermediate courses across a wide range of topic areas (majors!) be taught by LLMs? And by “taught” I mean have the syllabus written, lectures given, assignments written, and assignments graded. Fully.

I say “yes.”

What objections might be raised to this position?

The first objection that I shall mention relates to the quality of interaction among students in a typical college classroom. I would say that “typical” right now means “mostly asynchronous online.” If that’s the case, then all of that interaction could be handled 24×7 by the LLM. If the objector’s baseline class is taught in person or synchronously online (but, apparently, not by a humanoid avatar), then what is the value of that interaction? And by “value,” I mean the surplus of knowledge gained over both the time and the monetary costs exchanged for it.

Another objection might be that the LLM could not teach the class alone. Let’s grant that for a moment: somehow there is something that the objector cannot foresee a computer being able to do. I would respond by encouraging the objector to wait ten more years for the technology to progress. Would that still be the case? And what if the student could get 90% of what a university education provides, but for 1% of the cost? Would that be good enough?

One final objection might be that higher ed would never go in this direction. This might be the case, but consider this prospect: a troubled higher ed institution has lost most of its students, but it still has a relatively large endowment. The campus could be shut down, and the endowment could be put towards automating the teaching of most classes. The institution could then offer an education whose output is indistinguishable, in terms of learning objectives, from a typical undergraduate or business degree. The only real difference would be the minimal tuition that students would have to pay.

Or maybe the Gates Foundation funds this. The details don’t matter.

Higher education needs to realize that it will not always have a monopoly on education if it defines itself in terms of “meeting learning objectives.” This raises one more question, which I will address in an upcoming essay:

How should higher education define itself so as to maintain its relevance?

One hint at my thinking about this question is in the graphic at the top of this post.
