In our summer Digital Citizenship course at the Well-Trained Mind Academy, one of the topics we discuss is the appropriate and ethical use of artificial intelligence. Because the world of artificial intelligence is huge and touches almost every aspect of our lives, we narrow our focus to the risks and benefits of using generative AI tools like ChatGPT and Google Translate to complete our schoolwork. These tools are even known by some as “AI homework helpers.” Let’s dive in.
What AI Says About Its Own Use as an Educational Tool
When I, Mrs. Heflin, was asked to write this article for the WTMA blog, I thought that an easy way to approach it would be to ask the chatbot itself to write the article. So, ChatGPT, what do you have to say about the use of, well, YOU, in the classroom?
“ChatGPT can be a valuable tool for enhancing the learning experience in the classroom when used appropriately. Its advanced language capabilities and vast knowledge base make it a versatile resource for students and teachers alike. In the classroom, ChatGPT can be used to facilitate discussions, provide additional information on specific topics, and offer personalized assistance to students. It can help clarify complex concepts, offer alternative perspectives, and serve as a virtual tutor, providing real-time feedback and guidance. However, it is crucial to establish clear guidelines for its use, ensuring that students understand that ChatGPT is a tool to support their learning rather than a replacement for critical thinking and independent research. By incorporating ChatGPT effectively, educators can harness its potential to foster engagement, curiosity, and deeper understanding among students.”
Risks of AI in Education
Well, okay then! Our work is done here, right? No way!
Looking at the caveats of “appropriately” and “effectively” in the ChatGPT output, we can already identify several problems. First of all, those aren’t my ideas. Second, the response is generic and isn’t written for any specific context. Third, this bot has a pretty high opinion of itself, and only lightly touches on the potential risks and pitfalls of using it. So let’s take a look at some of those risks, which we discuss in our summer course:
The AI Homework Helper Can Be Wrong
The output of AI homework helper tools like ChatGPT is often less correct and less thoughtful than what students can produce on their own.
Tools like ChatGPT aren’t researched sources themselves. They use continually developing algorithms to pull information from other sources that may or may not be accurate, or that may even spread misinformation intentionally. I see this happen often with Google Translate in the Latin courses I teach. In fact, the way I know a student is using Google Translate is often that the sentence is incorrect! Google Translate works by scouring enormous databases of existing translations, breaking sentences down into smaller components, and looking for matches in those existing translations. When a word or phrase has multiple meanings, it will often choose the most commonly used one, even if that is not the correct one. It is not designed to reason about grammatical rules or context, both of which are essential to understanding and communicating in another language.
AI Can Be Biased
Unlike real humans, AI can’t think for itself; it only gets information from its sources.
Artificial intelligence tools and the sources they draw on are developed by people who have biases, and those biases are passed along to the sources and then to the output of the tools. You may have seen the viral news story about how ChatGPT was asked to write a poem about two public figures on opposite ends of the political spectrum: for one of the figures, the chatbot produced a multi-stanza poem praising that person’s positive qualities and contributions. For the other figure, the bot refused to write the poem at all, saying that it was not programmed to create political or biased content. This example shows the bias ChatGPT inherits from its sources, which calls the accuracy of everything it produces into question. Most importantly, when we present AI-generated material as our own, we are presenting the opinions and biases of the AI product as the things we believe. Do we really want those unknown sources and software developers to speak for us?
AI Use in Education Can Hinder Learning and Communication
AI stops us from thinking for ourselves.
The aim of a classical education is to be able to express our own ideas about what we learn and know. When we submit our own work, our teacher can give us constructive feedback about what’s effective and what needs more thought. We grow as thinkers and writers, and we develop the critical thinking and communication skills we need out in the real world. If we’re using an AI homework helper, we’re not working from our own thoughts, we aren’t developing our own critical analysis skills, and we could be at a loss in future situations. What happens when you take a final oral exam, or sit for an in-person interview, and don’t know what to say because you relied on AI to do your work for you?
Claiming AI Outputs as Your Own Work is Unethical
The bottom line is that copying and submitting work created by ChatGPT or another AI homework helper is not our own work. It’s cheating.
AI tools scour the internet for information, often taking it from sources that aren’t credited. Instructors are tipped off to AI homework helper use when students fail to cite their sources properly, a skill they are taught at the Well-Trained Mind Academy. Other clues are when the writing style of the work submitted is notably different from the style of papers turned in previously, and when the skill level presented in the paper is well above the level of the course being taught. In a language class, using an advanced grammatical construction that the class hasn’t learned yet is often a dead giveaway that a student has received outside help. AI may seem like a help in creating your work, but when used in place of your own work, it will only get you in trouble, and may even get you dropped from the class.
AI Cannot Replicate Human Creativity
Generative AI tools like ChatGPT are flawed because they’re still very much in development, and while they will always be learning, they cannot recreate the spark of creativity that is uniquely human.
It’s likely that AI tools will become even more powerful, and their use will become even more common in many aspects of our daily lives. They will affect the job market, though exactly how isn’t yet known. However, because AI is programmed by people and does not think for itself, its creativity is limited. You are the best source for your own ideas.
Proceed with Caution When Using an AI Homework Helper
Going forward, be very careful about your AI use. Cheating and plagiarism can take many forms, and artificial intelligence has just made the line between your own work and someone else’s even murkier. When in doubt, talk to your instructor! We are here to help you grow as a critical thinker and a citizen of the world, and part of that growth comes from learning to work in a world full of tempting shortcuts.
Stay tuned for a follow-up article on how AI tools can be used appropriately for schoolwork.