What Teachers Should Know About How AI Works

Kenny Bitgood

Few things have rocked the world of education like the arrival of ChatGPT in November of 2022. I am not a teacher, but I do work with teachers almost every day, so I sat in on a lot of meetings with concerned educators, most of whom were worried about cheating, though a few were worried about the bigger implications of this new technology. Others were excited about the possibilities and could hardly wait to start getting their students to use these new tools.

As StudyForge’s Software Development Lead, I am constantly thinking about how technology impacts education: how educators can leverage the benefits of new technologies, and how they can mitigate the problems that come with them. There are some fundamental things that teachers need to know about AI as they help their students navigate this new world.

After all, if you don’t understand it, you can’t use it effectively.

Artificial intelligence is still artificial

When you type a request into a chatbot and an answer generates before your eyes, it feels very much like this program is exhibiting super-human characteristics. For all of its marvels, though, artificial intelligence is still artificial, not real intelligence. It is a program that does not have critical thinking skills, creativity, or a conscience. It mimics human intelligence, but it does not possess human intelligence.

There are five major ingredients to the recipe of large language models. There are a lot of spices and herbs thrown in too, but knowing the main ingredients can help demystify what is actually going on when engaging with your AI program of choice.

1. First ingredient: neural networks

AI chatbots run on neural networks. You might know them by another name: machine learning. In layman’s terms, neural networks self-correct their own programming in tiny steps to match a set of inputs and outputs. We give them training data, and then they adjust to more closely match that training data.

There are two big implications here. The first is that if you ask it to do something outside of its training data, it will fail miserably. The second is that the quality of the neural network is heavily dependent on the quality of the training data. That’s why you keep hearing about lawsuits over what goes into the training sets for LLMs. High-quality training data is extremely valuable, and producers of that data, like The New York Times for high-quality text or Getty Images for high-quality photos, do not want it used without their consent.
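
To make that “tiny steps” idea concrete, here is a toy sketch in Python. It is nothing like a real LLM in scale, and all of the numbers are made up for illustration, but the core loop is the same: predict, measure the error against the training data, adjust slightly, repeat.

```python
# A toy "neural network" with a single adjustable weight. The loop makes a
# prediction, measures the error against the training data, and nudges the
# weight a tiny step in the direction that reduces the error.
# (Illustrative only; real LLMs do this with billions of weights.)

training_data = [(1, 2), (2, 4), (3, 6)]  # input/output pairs (the hidden pattern is y = 2x)

weight = 0.0          # the bit of "programming" the network is allowed to rewrite
learning_rate = 0.01  # how tiny each self-correcting step is

for step in range(1000):
    for x, target in training_data:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x  # adjust slightly toward the data

print(round(weight, 2))  # close to 2.0: it "learned" the pattern in the training data
```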

2. Second ingredient: text prediction (or autocomplete)

You are probably familiar with your phone’s autocomplete. If you type a word, it will predict the most likely next word. This uses a neural network that was trained on a bunch of text, including what you have written on your phone. Its job is to guess the next word based on the previous words. You might guess that an LLM does the same thing, except that instead of continuing from the words you typed, it continues from the words it typed. That’s true, but next-word prediction alone does not create the chatbots that we see today. We have had autocomplete for a long time; making it revolutionary required the next ingredient.
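
For the curious, here is roughly what a bare-bones autocomplete looks like in Python, using simple word counts instead of a neural network (the sample text is made up):

```python
# A miniature autocomplete: count which word tends to follow which in some
# sample text, then always predict the most common follower. A phone keyboard
# uses a small neural network instead of raw counts, but the job is the same.
from collections import defaultdict, Counter

text = "the cat sat on the mat and the cat slept on the mat"
words = text.split()

followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat', the word seen most often after 'the'
```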

3. Third ingredient: transformers (the T in ChatGPT)

If you try to type a message into your cell phone using only the next predicted word, it will give you a nonsensical mess. Here is what I got when I typed “If you try” and then autocompleted the rest:

If you try not match your Map to your Map showing the location and how to get faster.

Not great literature. The big leap forward for AI chatbots is that they are built on the transformer architecture. Transformers can be summed up in one word: attention. If an algorithm pays attention to some words in a sentence more than others, it changes everything in terms of prediction ability. It gives the AI the ability to predict meaning, not just words. Now, it can be both grammatically correct and cohesive.

If I say “Why did the chicken cross the…” you will likely know immediately that the next word needs to be “road.” If I change it to “Why did a chicken cross a…” you will probably still complete that with “road.” That is because you know what to pay attention to in the sentence. If I now change it to “Why did the ship cross the…” you might say “ocean” or “river.” We changed fewer words, but we changed more of the meaning. This is the real special sauce that makes LLMs different from anything we’ve had before.
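
Under the hood, attention is a calculation. The sketch below is a heavily simplified Python version of it: every number is invented for illustration, since real models learn their own word vectors with thousands of dimensions each, but it shows how “ship” and “cross” can dominate the prediction and steer it toward “ocean” instead of “road.”

```python
# A heavily simplified "attention" calculation. Every vector here is invented
# for illustration; real models learn their own, with thousands of numbers each.
import math

sentence = ["why", "did", "the", "ship", "cross", "the"]
vectors = {                # made-up "meaning" vectors for each word
    "why":   [0.1, 0.0],
    "did":   [0.0, 0.1],
    "the":   [0.1, 0.1],
    "ship":  [0.9, 0.8],
    "cross": [0.7, 0.9],
}
query = [0.8, 0.85]        # what the model is "looking for" to pick the next word

# Score each word against the query, then squash the scores into weights
# that add up to 1 (a softmax). Bigger weight means more attention.
scores = [sum(q * v for q, v in zip(query, vectors[w])) for w in sentence]
exp_scores = [math.exp(s) for s in scores]
weights = [e / sum(exp_scores) for e in exp_scores]

for word, weight in zip(sentence, weights):
    print(f"{word:>6}: {weight:.2f}")  # 'ship' and 'cross' get the most attention
```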

The main implication of this ingredient is that if you give the AI more context, meaning more source material, it will be better at paying attention to what it needs to pay attention to. For example, if you want to know the story behind the song “Hey Jude” by The Beatles, don’t just ask, “What is the story behind the song ‘Hey Jude’?” It may give a decent answer zero-shot (with no additional context). However, if you first give it all the lyrics of the song and copy and paste a Wikipedia page or two, it will give a much better and much more accurate result. With some of these systems, you can even just ask it to look up the lyrics or other source material before answering your question.
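
If you ever want to do this programmatically, here is a minimal sketch using OpenAI’s official Python library. It assumes the openai package is installed and an API key is configured, and the model name and file names below are placeholders:

```python
# Asking the same question two ways with OpenAI's Python library.
from openai import OpenAI

client = OpenAI()
question = 'What is the story behind the song "Hey Jude"?'

# Zero shot: the question alone, with no source material.
zero_shot = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)

# With context: paste in the source material first, then ask the same question.
lyrics = open("hey_jude_lyrics.txt").read()      # hypothetical local files
article = open("hey_jude_wikipedia.txt").read()

with_context = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Lyrics:\n{lyrics}\n\nBackground:\n{article}\n\n{question}",
    }],
)

print(with_context.choices[0].message.content)  # typically more grounded and accurate
```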

4. Fourth ingredient: generative (the G in ChatGPT)

A large neural network trained to do text prediction with attention still doesn’t give us a sonnet-writing chatbot. There is a reason why robots in science fiction are often depicted as stiff and logical: our computers have always lacked a fundamental human ingredient, creativity. So how do these chatbots all of a sudden seem to be so creative? Long story short: they’re faking it.

As it turns out, adding a bunch of randomness to text prediction is enough to trick us into thinking the output is novel and unique. This is a simplification, but you can think of it as guessing 12 words that could go next and then rolling a pair of dice to decide which one to actually say. That’s really it. Sprinkling some random factors into the process is all it takes for us to think there is a “someone” instead of a “something” on the other side of the chatbot.
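
Here is the dice-rolling analogy as a few lines of Python. The candidate words and their likelihoods are invented for illustration:

```python
# The dice-rolling analogy, made literal: score some candidate next words,
# then pick one at random in proportion to its score. Run it twice and you
# may get different output; that randomness is the "creativity."
import random

# Invented candidates and likelihoods for the next word after
# "Why did the ship cross the ..."
candidates  = ["ocean", "river", "sea", "harbor", "equator"]
likelihoods = [0.40, 0.25, 0.20, 0.10, 0.05]

next_word = random.choices(candidates, weights=likelihoods, k=1)[0]
print(next_word)  # usually "ocean", but not always, and that is the point
```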

This is why there is no surefire way to find out whether something was generated by AI. A little bit of randomness sprinkled in makes AI detectors practically useless, so you shouldn’t rely on them. This also means that we need to be our own AI detectors. Don’t fall asleep at the wheel. We need to be training ourselves, our students, and our whole society to be better at using our actual human intelligence to recognize news stories, viral images, and now videos generated by artificial intelligence. And we will have to continually get better at this as the AIs get better.

5. Fifth ingredient: trained to be a chatbot

The model that the first version of ChatGPT was based on was actually available for about a year before ChatGPT was released. What really launched ChatGPT’s success was that it was trained to be a chatbot. It was trained to sound friendly and compliant, so that people quickly begin to trust it and almost think of it as a friend or study companion. This is also where the major distinctions lie between different chatbots like OpenAI’s ChatGPT and Google’s Bard/Gemini; you can think of this as their “personality.” But every time ChatGPT responds to you with “Certainly!”, remember that it is not really intelligent. It is just programming.
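
What does “trained to be a chatbot” look like in practice? Roughly, the base model is further trained on huge numbers of example conversations, plus human feedback on which replies sound best. A single training example might look something like this (a simplified, made-up sample):

```python
# A made-up example of the kind of conversation data used to turn a raw
# text predictor into a chatbot. The model sees huge numbers of these,
# plus human feedback on which replies sound most helpful and friendly.
chat_training_example = [
    {"role": "system", "content": "You are a helpful, friendly assistant."},
    {"role": "user", "content": "Can you explain photosynthesis simply?"},
    {"role": "assistant", "content": "Certainly! Photosynthesis is how plants "
                                     "turn sunlight, water, and carbon dioxide "
                                     "into food and oxygen."},
]
```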

Artificial intelligence will get better and better over time

One intriguing and potentially frightening thing, though, is that ChatGPT is only the beginning. Its next versions, as well as its competitors, are being trained and released as you read this. And AI companies aren’t stopping at chatbots: they want to make truly multi-modal AIs that can take in and generate a mix of text, images, audio, and video.

This means that we will be able to use these computer programs to do things that Isaac Asimov or Philip K. Dick couldn’t even dream of. However, it also means that it will likely become more and more difficult to discern whether an intelligence is artificial or human. This has many implications for teachers who just want to know if their students are actually learning, but it also has profound implications for our whole society. What will happen if AI-generated news stories about an election candidate are impossible to distinguish from ones written by humans? How will we prepare our students for a world where these kinds of things are possible?

So keep learning

As a software engineer, what do I want my teacher colleagues to know about AI? What are the implications of these programs for their professional practice? Some of the implications are obvious — students can generate answers and copy and paste without learning — but some implications are still emerging because this is a brand new thing. We are still exploring what a world with AI is like, and what education with AI is like.

We do not know all the amazing and wonderful new things that these tools will enable us to do, nor can we predict all the horrible ways that some people will abuse them. I wish I could give you a simple solution, a rubric to evaluate everything by, or a short list of what to do or not do with AI. But the truth is, I can’t, and no one can.

So what do I want you to know? I want you to stay alert. I want you to keep learning about AI, about how it works. You can only know how to use it or not use it in your context if you understand it. You can only protect yourself and your students from the potential harms if you stay informed.

Whether we like it or not, we are all guinea pigs. You are part of the experiment. You were not consulted, nor did you sign over your consent. This is the world that you now live in, and it is the world that your students now live in. My hope is that being armed with the knowledge of how AI works will help all of us to navigate this world in a better way.

Interested in how to talk to your students about AI? Read our article “AI-Cademic Integrity: Keeping Students Honest in the Age of Artificial Intelligence” to get video resources that you can share with your class on what AI is, when to use it, and what counts as cheating.

About the Author

Kenny Bitgood

Lead Developer

Kenny has led the StudyForge software team since its inception. He has spent his entire career designing and building software to help connect teachers and students. Since studying engineering in university he has cared about where technology can be leveraged to help others, and where it needs to just get out of the way. He currently lives in Langley, B.C. with his beautiful wife and two children.
