
AI in Education: From Threat to Opportunity

By Chris Lele

Updated: Sep 20, 2024


Imagine for a moment that you are presented with the relatively daunting task of memorizing the capital cities of every one of the 54 countries that make up the continent of Africa. You have only one hour to commit as many African capitals to memory as possible. To make this a little more appealing, you are offered a $100 Amazon gift card (and bragging rights the next time African capitals pop up while you’re watching Jeopardy!).


An hour later, having had no access to the internet (or a world atlas) in the meantime, you will be tasked with recalling as many African countries and their corresponding capitals as possible.


How would you go about studying for this challenge if, besides the list of countries and their corresponding capitals, you were given one of two study aids: a quiz that tests you on African capitals, or a blank piece of paper and a pencil?


I think most of us would make the obvious choice: the quiz. After all, you’d be able to see which capitals you struggled with and double down your efforts there. You’d also get a sense of how close you were to getting all 54 countries — and the coveted gift card.


However, this is not what researchers have found. A 2007 study by Butler, Roediger, and Karpicke provided subjects with a list of word pairs like “Ocean-Garden” or “Buffalo-Rock.” The researchers gave subjects time to memorize the list and then divided them into two groups. The first group, which we’ll call the “Multiple-Choice” group, used that format to reinforce their learning of the word pairs.


The other group was the “Free Recall” group, which was given a blank piece of paper and something to write with.


After this “learning phase,” the researchers asked each group to recall as many word pairs as possible. The “Free Recall” group did far better, generating nearly 40% more word pairs than the “Multiple-Choice” group.


You might have noticed that this study is very similar to our imaginary contest of memorizing all 54 African capitals in one hour. In other words, if you want to win the hypothetical $100, choosing the blank piece of paper and struggling to come up with Djibouti City is far more likely to help.


The reason the blank page is such a powerful learning tool, as similar studies have found, is that forcing yourself to come up with as many units of information as possible, with nothing but a blank piece of paper, leads to far better long-term retention of that information. The belief is that the more you struggle to learn something, the better it sticks in your brain.


Scientists call this effect “desirable difficulties,” and it plays a huge part in how likely we are to retain information. Make the learning process too easy, so there’s not much straining, and a new piece of information is likely to flit away. To use our African capitals example: if we cover up the capitals while looking at a country, it is tempting, after a few seconds of drawing a blank, to lift up our hand and take a look. The typical reaction is something like: oh yeah, that’s right, the capital of Djibouti is Djibouti City. I’ll get it next time.


But because you expended very little energy before lifting up your hand to check the answer, the chances that you’ll remember Djibouti City next time around are not that high.


Using a quiz or multiple-choice questions, as in the study, falls somewhere in the middle of the “desirable difficulties” scale, since we need to exert some mental energy to match the capitals to the correct countries.


The other end of the “desirable difficulties” spectrum entails struggling to recall a piece of information without any stimulus beyond a blank piece of paper (this is essentially what you do when writing a closed-book essay). Yet it is this struggle, this “learning friction,” that is most likely to lead to long-term learning.


Good teachers know this, whether intuitively or from reading the literature. They don’t just cough up answers; they lead students toward the answer, often dropping in a few word clues along the way, and students’ eyes light up with recognition when they are able to formulate the answer in their own words.


Within the last year and a half, this delicate learning balance has come under threat with the advent of AI. Students now have instant answers (as well as instant essays) at their fingertips. This presents a clear threat to the “learning friction” that seasoned teachers have been able to coax from students, and that students have had to navigate when trying to come up with an answer at home alone. Today, even if students aren’t outright plagiarizing, they are potentially working less than ever to come up with answers, as AI lets them tap into what is probably the lowest level of the “desirable difficulties” scale: the instantaneous answer.


Yet there is a flip side: with tools like ChatGPT, a student has the ability to learn something more quickly and deeply than ever before. Imagine this: a student reads 50 pages of dense material. They think they understand the overall message, and, say, the quiz at the end of each chapter gives them a sense of how well they are retaining the information. On our “desirable difficulties” spectrum, they are somewhere in the middle (assuming there are indeed quizzes or some other kind of knowledge check).


Now imagine a student who has learned about the power of the blank page and how to leverage ChatGPT for their own learning. After every four or five pages, they type into ChatGPT what they believe they’ve learned. They ask ChatGPT to tell them how accurate they are, what they might have misunderstood, and what they might have left out. By formulating what they are learning from scratch in the ChatGPT window (much like the blank page), they are triggering the neural pathways that give rise to long-term retention. And because they are actively engaging with ChatGPT in a dialogue around whatever they are learning, they are improving their understanding and mastery of it.


Instead of a technology that just takes us to the lowest level of “desirable difficulties,” we have one that can take us to the other end of the spectrum. Moreover, students will be getting the kind of feedback, filling in gaps and correcting misperceptions, that a teacher or tutor would typically offer. And by teaching them other methods, like summarizing for ChatGPT everything they learned after reading all 50 of these hypothetical pages, we continuously ingrain knowledge at the efficacious end of the spectrum. Other exciting techniques, like “bespoke quizzes” that test students specifically on the material covered in the hypothetical ChatGPT thread, personalize the assessment of their learning to a degree we’ve potentially never seen before.


The AI Learning Paradox

This is something I call The AI Learning Paradox: AI threatens to undermine the “friction of learning” that leads to domain mastery; yet teachers can use AI to help students achieve domain mastery more rapidly than ever before.


With the understandable focus on plagiarism, what’s gotten lost in the hubbub is that this tool can be used for learning at all. Many educators see it as only that, a plagiarism factory, or, worse yet, as the end of learning and a harbinger of something sinister and dystopian. It can be hard, then, for many to see this same tool as able to augment not only students’ learning but also their own teaching.


Yet if we want the best learning outcomes for students, it is imperative that we see AI as a powerful learning tool and actively work out how best to leverage it in the classroom and the overall learning process. The sooner we do so, the sooner students will no longer be left to experiment with the technology on their own. Many are already using it to help them learn, generating quizzes and having back-and-forths with the machine to check their own understanding. Yet this is the very guidance teachers should be providing. If teachers don’t have the know-how themselves, then students will have to figure out how to incorporate AI into the learning process, something they lack a deep pedagogical understanding of. This can lead to uneven outcomes: those who know how to use AI well displacing (or at least outscoring) those who don’t use it at all, or don’t know how to use it well.


Much of the talk, though, has been about banning this technology and/or using AI detectors. This is counterproductive for several reasons. First, AI is becoming ubiquitous in digital tools, from the Google suite to Grammarly, the latter of which now crafts essays with the same plagiaristic touch as ChatGPT. Second, students are going to exploit the technology and find ways around any ban, creating an unhealthy cat-and-mouse dynamic. Sure, there will always be students who use it to take shortcuts, but there are also plenty of students who, if they learned to use AI to enhance their learning, would use it in good faith.


Granted, we can’t solely rely on good faith. To incorporate AI into the classroom, we will need to rethink how we teach and how we create assessments, or there will be those bad actors who continue to use AI to take the easy way out and to reduce “learning friction” to near zero.


Reassessing Assessments

Most classrooms use a model in which students submit written assignments. Historically, this has given teachers a way to grade students and determine how well they know the material. Additionally, students take quizzes and in-class tests. Class participation and perhaps behavior (if we are talking about the lower grades) round out the grading criteria.


With AI able to do homework, the weight we give to homework will need to change. And if we assume that we can’t outright ban cell phones in classrooms, especially in large college lecture halls, then even in-class written tests are in peril if we are to maintain grade integrity.


A new kind of assessment will be required, one that can stop even the craftiest of students from snapping a pic in class and uploading it to ChatGPT. One option is leaning more on video-based homework, in which students summarize what they’ve learned, or answer specific questions, by speaking on camera. This can still be somewhat gamed with AI, but much of the finished product will be a student speaking in real time, answering whatever the assignment calls for.


Group projects in which students present in front of the class are another option. All of their written homework, both in and out of class, would revolve around this project and help them shine on the day they present. They should be encouraged to use AI to do that learning, the finished product being how well they present the topic. This format could be particularly useful for non-math-based subjects (AIs are more inconsistent with numbers, so students rely on them at their own peril).


Other possibilities include in-class debates and handwritten essays, assuming with the latter that you have a vigilant teacher should a cellphone find its way out of a student’s pocket.


These are just a few ideas. And I’m confident that other educators have plenty of other helpful ideas.


I’m going to suggest one more idea, one that might seem misguided, if not downright subversive: ask ChatGPT, Claude, or Gemini to come up with ideas.


Before you close the browser, hear me out. On tests of generating creative ideas and solutions, such as the Alternate Uses Task (AUT), a long-time staple of cognitive and creativity assessments, ChatGPT has shown that it can do better than over 90% of humans. (In a Wharton study on generating business proposals, the top 10% of humans were still able to generate more, and potentially better, ideas.) At the very least, we should be “inviting” ChatGPT into the conversation.


And it’s important to have these discussions now, so we can figure out how we are going to measure student learning in our AI world.


That leaves us with one final but critical area if we are to arm future generations with the learning they’ll need to tackle whatever challenges await.


Empowering teachers to lead the AI learning charge

Last week, a study by Kellogg, Lifshitz, Randazzo, and Mollick came out illustrating yet another way AI is unlike any technology released in the last 30 years. The trend with digital technology so far has been that younger employees typically “get” a technology first, and this know-how then percolates up to older employees.


Put another way: how often have you found yourself thinking, wow, kids these days are just so good with the latest technology?


With AI, this young-to-old knowledge diffusion does not seem to be happening — the younger cohorts are no more expert at wielding the technology than older generations.


Students are then essentially left on their own to explore this technology. Again, many are doing so in good faith; they actually want to learn the material and feel that ChatGPT and similar AI tools can help. They just aren’t sure how to use them. They might watch YouTube videos or read a Medium post, but what they lack is a cohesive, well-thought-out strategy for integrating ChatGPT into the learning process.


That is where educators come in. They need to rethink how AI can plug into and enhance the learning process, and how to share this knowledge with students.


But what seems to be happening in most classrooms is that ChatGPT has become the proverbial pachyderm in the room, and if it is discussed, it is almost exclusively in the context of plagiarism.


Instead, it is important that we change this narrative among educators themselves from one of subtraction (AI will take away from learning) to one of addition (AI will aid student learning). Educators should be the ones in the vanguard of how we leverage AI in the classroom, because they have the pedagogical knowledge and intuitions, as well as insights into how we learn, that most students simply lack. Indeed, if we leave learning solely up to AI and tech companies, we are selling our students’ education short. And, invoking the AI Learning Paradox again, if we shut AI out of the classroom, the result is similar.


One way to start reframing how educators think about AI, and how confident they are wielding this technology, is through AI literacy workshops and training programs — programs that discuss how to responsibly and effectively use AI in the classroom. But such coursework can’t be issued top-down to educators. They themselves must be on board first; they need not just the “how best to use it” training but also to be part of the discussions on what exactly constitutes best use.


Essentially, educators are the ones most likely to understand the nuances of “desirable difficulties” and the power of free recall (they’d be your “phone a friend” for learning the African capital cities in an hour).


As we head into a tomorrow of ever more powerful AI tools, the AI Learning Paradox is going to become even more relevant. These new tools will be even better at sounding like a specific student, and even harder to detect; meanwhile, their capacity to extend learning, if used responsibly, will be greater than ever before. By continuing the narrative of subtraction, we are leaving students to the vagaries of the latest AI tool and potentially advantaging those who are the savviest, or the least scrupulous, at using them.


Instead, educators need to be at the forefront of discussions and policy on how AI should be incorporated into the learning process itself. When future generations look back at this moment, their eyes should widen with wonder, recalling how their learning was transformed by this remarkable tool, and by the teachers who made this new kind of learning possible.
