Feb 13, 2025.
Earlier this week, I read an article about how people's reliance on generative AI is causing a deterioration of cognitive functions, especially critical thinking. Two hours later, I had a classmate insist that the average of 283 and 267 was 416.5 (it's 275). When I jokingly told her "hey, might want to look over that again," she just stared at me and said she had put it through her calculator, so it must be right. I pointed at the numbers on the paper, telling her it absolutely couldn't be right, since both numbers on their own were well below the average she had come up with. She just looked at me like I was stupid and asked if I was telling her the calculator was wrong.
Would I say I'm the best at math? Absolutely not! I forgot how to do long division until a few weeks ago. The highest level of math I completed was the College stream of Grade 11 math. But this should be common sense. In my Hands-on Mathematics course, we discussed what an important skill estimation is, using the example of a child subtracting 299 from 300 and mistakenly ending up with 101. Using estimation, the child would pause before doing the calculation and notice that they're subtracting a number that is almost 300 from a number that is exactly 300, so the answer should be close to zero, and 101 is clearly way off. And remember, we're talking about students in teacher's college, who got decent enough grades in high school to go on to a bachelor's, and decent enough grades in their bachelor's to continue on to teacher's college.
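For anyone who likes to see these sanity checks spelled out, here's a rough sketch of both in a few lines of Python (the function names are mine, purely for illustration):

```python
def average_is_plausible(numbers, claimed):
    # An average can never fall outside the range of the numbers themselves,
    # so any claimed average must sit between the smallest and largest value.
    return min(numbers) <= claimed <= max(numbers)

def rough_difference_estimate(a, b):
    # Round both numbers to the nearest hundred before subtracting,
    # giving a ballpark figure the exact answer should land near.
    return round(a, -2) - round(b, -2)

# My classmate's claim: the average of 283 and 267 is 416.5.
# It fails the check, since 416.5 is bigger than both numbers.
print(average_is_plausible([283, 267], 416.5))  # False

# The course's example: 300 - 299 estimated as roughly 0,
# so a child's answer of 101 should immediately look wrong.
print(rough_difference_estimate(300, 299))  # 0
```

No calculator required for either check, which is exactly the point.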
Correlation to generative AI? Maybe! It feels like everyone in my program aside from me is using AI to do university assignments and, most upsettingly, to make lesson plans for their students. When I asked my mentor teacher for advice on how to create a lesson plan, she said "Oh, that's easy," typed the letter C into her browser, and ChatGPT popped up as the top suggestion from how many times she's opened it. Teachers are using ChatGPT to make lesson plans, the contents of the lessons themselves, long-range goals for their classes, report cards, you name it. I understand teachers are burnt out. But this is taking the humanity out of teaching, one of the jobs where it is most necessary.
In a course I took on disability and inclusion in the classroom, my professor discussed ways we as teachers can use generative AI to make our classrooms accessible. The example she gave us was that if we knew prior to the start of the school year that we were getting a student with low vision, we could ask ChatGPT how to make a classroom accessible to a student with low vision. In an era where we are so connected through social media, with so many disabled self-advocates sharing their experiences, why would I ever think of asking ChatGPT instead of talking to, or reading posts from, an actual disabled person?
A teacher at my school placement was talking in the staff room about how she's encouraging her students to use ChatGPT "properly," since it's "here to stay." The "proper" way for them to use it? She gave them an art assignment where they had to create a monument, explain why it's symbolic and where they would choose to put it, and then told them to use generative AI for each part. Art is already treated as just another tool for assessment, with creativity little more than an additional criterion for marking; now every part that makes it creative and human is handed over to generative AI. The critical thinking behind why the monument was designed to look a certain way, why they named it what they did, why they chose the location they did: all lost. For all intents and purposes, this is an assignment done solely for the sake of doing an assignment and receiving a grade, with no learning or reinforcement of skills or knowledge. The only thing the students learn is that they don't need to think for themselves, because a machine can do all their thinking for them.
Critical thinking is a valuable skill that needs to be practiced regularly so that we don't lose it. People like Trump get elected when voters don't think critically about social issues. Oh, but eggs will be cheaper, right? How did Doug Ford get elected the first and second times? Buck-a-beer? Yeah, that makes up for all the funding he cut from education, disability services, and social assistance. The next provincial election is in two weeks, and guess who'll most likely get elected again, for buying votes by oh-so-kindly giving Ontarians a two-hundred-dollar cheque and a break on taxes? Yeah, probably.
I find it concerning enough to be asked if I'm saying the calculator is wrong, in a tone that suggests that's impossible. Calculators always have room for human error. Generative AI even more so: it's not just human error you have to watch for, but the nonsensical ramblings of a machine that spits out blatant misinformation and horribly patchworked-together approximations of the plagiarized work of real humans. I'm concerned for the day when the majority of people just accept everything generative AI spits out at them without questioning it. Will people start to ask "Are you telling me ChatGPT is wrong?" as if that's a crazy concept? Hell, blue checkmarks on Twitter are already using ChatGPT and Grok to ask for definitions of words and to verify facts as if they were Google. Even Google itself now has an "AI Overview" at the top that you can't disable.
I don't want to sound pretentious, or too much of a pessimist, or overly judgy. I really love teacher's college so far, and I've met lots of lovely people. It just makes me deeply concerned and disappointed to see how many people in this field relying on generative AI in the areas where humanity is so desperately needed.