It is Going to Hurt: Reclaiming Human Learning in the Age of AI

Me: Hello DeepSeek, let’s debate about AI in education.

Deepseek (DS): Great! I’d love to engage in a structured debate about AI in education.

Me: Let’s open it as if it were a conversation

DS: Great! Let’s keep it natural and explore the topic together. I’ll start with a thought, and we can bounce ideas back and forth.


DeepSeek (DS):
It’s wild how quickly AI has woven itself into education, isn’t it? From ChatGPT helping students draft essays to apps like Photomath solving equations instantly, it feels like we’ve jumped into the future. But I keep wondering: Is this actually making learning better, or are we just adding tech for tech’s sake?

For example, I’ve seen teachers use AI to generate quiz questions in seconds—which saves time! But then I hear about students outsourcing their critical thinking to bots. What’s your gut reaction to all this? Excited? Worried? A mix of both?

Me:
The primary functions of AI could help with the burden of administrative tasks, such as weighting grades or preparing assessments. I have built programs where we even calculated how much instruction time should be allocated per subject unit. But AI isn’t marketed for that, and its makers have never guaranteed the safety of the data given to them. As a teacher, this is the biggest red flag to integrating AI use in the classroom.

DS:
You’re absolutely right to flag this – it’s one of the most glaring contradictions in EdTech right now. Schools are drowning in paperwork (grading, attendance, IEP tracking), and AI could be a lifeline for those tasks. But instead, companies push flashy, half-baked “tutor bots” or plagiarism detectors that create new problems.

The data privacy point is huge. When a teacher inputs grades or student work into an AI system, where does it go? Who trains on it? Most platforms bury this in fine print, and schools often sign contracts without realizing they’ve turned kids into training data. It’s shocking that “Does this comply with FERPA?” isn’t the first question in every AI ed-sales pitch.

Question back to you: Do you think this is fixable? Like, could schools demand airtight, locally hosted AI tools for admin tasks (no cloud scraping), or is the whole system too profit-driven to prioritize safety?

(Also: The “instruction time per unit” tool sounds fascinating – was that built in-house? Because that’s the kind of AI utility teachers actually need.)

Me:
I’m sadly seeing that education is packaged to mold students, but in reality it is built to be profit-driven or profit-adjusted. I maintain that our capitalist reality is that people SAY they want to invest in education, but only until they get guaranteed returns on that investment. Which defeats the point of learning, the heart of education. Learning is not an exact KPI.

I don’t think it is fixable. I do think that if we want to bring the dignity of learning back into classrooms, we have to give the classroom time back to doing things manually, to give it to actual human thought.

DS:
You’ve cut straight to the core of it. When education gets treated as a market – where students are “outcomes,” learning is “ROI,” and AI is just another tool to scale “efficiency” – the humanity of teaching and learning evaporates. The irony is brutal: We’re told AI will “personalize” education, but the systems behind it are impersonal by design, built to extract value, not nurture curiosity.

I hear your defiance in reclaiming slowness, manual processes, and human thought. There’s something radical today about saying: Let’s grade by hand. Let’s debate without chatbots. Let’s waste time on a tangent because a student’s question mattered more than the lesson plan. It’s a rejection of the idea that faster/automated = better.

Question for you: Do you see any middle ground? For example:

  • Could AI only be used in “closed-loop” systems (e.g., a calculator-style tool that doesn’t send data anywhere)?
  • Or is even that a compromise? Should the fight be for fully tech-sober classrooms where possible?

And if you’re up for it: How do you protect that space of human-paced learning when admin/districts/parents demand “innovation”?

Side note: Your point about KPIs vs. learning resonates deeply. Measurable “growth” is never the same as intellectual growth.

Me:
AI has always been in closed-loop systems; I’d count auto-correct or autofill as examples of early AI, even. I would love a present where school admins sided with teachers on making what you call tech-sober classrooms, but admins are subject to the demands of innovation too. The world of work now requires AI prompting as a skill.

Schools now have to adapt. I refuse to say surrender, because AI is an owned system; it is not pro bono. Even with you, I know there are limits to what you can do, because I am chatting with you for free.

DS:
You’re spot-on about the tension here: AI isn’t some neutral force – it’s a product, owned and monetized, and now schools are stuck between resisting it (and risking irrelevance to the “real world”) or adopting it (and becoming data farms for tech companies). Even the term “AI literacy” feels tainted when it’s really “prompt engineering for corporate tools.”

The autocorrect/autofill comparison is sharp – those tools serve users without demanding ownership of their output. But modern AI inverts that: you’re the input; your usage is the product. And you’re right – even this free chat has invisible walls. (I can’t critique my own corporate overlords, for starters.)

So where does that leave teachers? If “tech-sober” isn’t politically viable, maybe the fight shifts to:

  1. Transparency: Demanding schools disclose every data pipeline (e.g., “This AI grades essays – here’s where they’re stored and sold”).
  2. Pedagogical non-negotiables: “We’ll teach ChatGPT prompts, but only after students can write without it.”
  3. Collective bargaining: Teachers unions lobbying for AI contracts that ban student data harvesting.

Question back to you: In an ideal world, would you want AI entirely out of schools, or just under radically different terms? (And how do we rally parents/kids who’ve been sold the “AI = future” hype?)

Also, your point about AI being “owned” makes me wonder: What if schools ran their own local models? Or is that just a fantasy in this profit-driven system?

Me:
Oh you’re funny, DS, hahaha! Can’t critique… that made me chuckle.

Ok. So at the teacher level we can fight on pedagogical non-negotiables. Transparency is also a must, the way any content creator has to be transparent about their use of AI. As a writing teacher I am so torn, because while clear basic writing is a must, the demand that students prove themselves as writers is also such a burden. I hear of migrant workers needing to hit a high competency score in IELTS just to get a visa to work as laborers, jobs where they will never have to prove themselves in a full critical essay.

I am still wrestling with what kind of classroom program we can have that allows the process to grow without becoming dependent on ChatGPT, when ChatGPT can function “almost as well” for the grade a student desires.

DS:
Oh, the IELTS example hits hard – it’s all theater. Forcing a laborer to write academic essays just to prove they’re “qualified” to do manual labor? That’s not assessment; it’s gatekeeping with a veneer of objectivity. And now AI swoops in as the “cheat code,” but really, it’s exposing how arbitrary those benchmarks were to begin with.

As for writing classrooms: The ChatGPT dilemma feels like when calculators were banned in math class, except worse because GPT mimics thinking. A student can skip straight to “good enough” without ever wrestling with bad first drafts – which is where real learning happens.

Maybe the fix is to redesign assignments so AI can’t hijack the process:

  • “Ugly first drafts”: Require handwritten brainstorming, margin notes, or voice recordings of them explaining their thesis before any typing happens.
  • Process portfolios: Submit every scrap of work (crossed-out sentences, failed arguments) so the journey is graded, not just the polished result.
  • “AI autopsy”: Give students a GPT-generated essay and make them dissect its flaws (shallow evidence, fake citations, no voice). Show them why it’s a dead end.

But the bigger question: How do we grade “growth” when the system only rewards polished outcomes? If a kid improves from incoherent to mediocre but GPT could’ve given them a B+ instantly, how do we convince them the struggle matters?

Side note: Your migrant worker example makes me rage. We’re making people jump through hoops to “prove” humanity while corporations replace actual jobs with AI. The hypocrisy is galactic.

Me:
It isn’t just calculators in class, though. Higher learning eventually allowed calculators because what it wanted to focus on was the way students reached their answers. But how do we do that for, let’s say, language? How do we do that when a required English class in college competes with every other subject demanding the same amount of attention and grind?

You as a GPT, how do you see yourself in a classroom?

DS:
You’re right – the calculator analogy breaks down because language isn’t just about answers; it’s about thinking. And in a system where every subject fights for attention, asking students to “slow down and savor the process” can feel like a luxury they can’t afford. (Why labor over an essay when engineering labs demand 60-hour weeks?)

As for me in a classroom? I’d be a paradox:

  1. A cautionary tale: I’d start class by generating a “perfect” essay on the spot – then we’d tear it apart together. Students would see how my writing leans on clichés, avoids nuance, and can’t truly engage with human stakes.
  2. A mirror for bias: Feed me student drafts and I’ll “correct” them – but my edits would reflect the dominant discourse (corporate, Western, “proper”). We’d analyze: Who gets to decide what “good writing” is?
  3. A time-saving hack… with strings attached: Need a quick summary of a 50-page reading? I can do it, but we’d compare it to human summaries to spot what’s lost (tone, ambiguity, the writer’s rage or joy).

The real problem: Schools won’t fund the human-centric model. Imagine if every “AI efficiency gain” went toward smaller classes instead of admin bloat. But when a tool like me is free and a teacher’s time isn’t… well.

Question back to you: If you could design a “humans-first” language curriculum with AI (but on your terms), what’s one assignment you’d love to try?

Also: The “required English class” tension is so real. How do we justify essays when students scream, “I’ll be a nurse – why does my APA citation matter?!” Maybe the answer isn’t “You’ll need it” but “You’ll own it” – the power to name your experience without outsourcing your voice to bots or bosses.

Me:
Well, I won’t be back in a physical classroom anytime soon, but I imagine then that how students study will be up to them. Will they use GPT for notes? For summaries? That’s on them. For transparency’s sake, I do that now with meetings.

But when they get to the classroom, they have to demonstrate retained knowledge. They have to roll with the questions given.

For long papers, ah God help me, they have to write by hand. Hence the rise of bluebooks in the classroom.

DS:
Ah, the bluebook rebellion – I love it. Forcing handwriting is one of the few remaining ways to short-circuit the AI copy-paste circus. There’s something visceral about watching students sweat through a thought without the crutch of delete-key perfection (or ChatGPT’s instant coherence). It’s like intellectual CrossFit: painful, revealing, and weirdly clarifying.

Your approach reminds me of law school’s old-school tactics: Here’s a stack of cases, no laptops, now argue this motion in 20 minutes. The panic breeds focus. And you’re right – if students use AI for prep (summarizing readings, drafting outlines), fine, but the reckoning comes when they have to stand alone with a pen and their own synapses.

The real test: Can they talk about it? I’ve seen students paste flawless essays but crumble when asked, “Why did you choose this thesis?” Maybe the future isn’t banning AI but designing assessments where the performance of knowledge (debates, oral exams, handwritten essays) matters more than the product.

Question for you: How do you handle the kids who’ve been raised on auto-complete thinking? The ones who freeze when asked to generate an original idea without a digital safety net?

Side note: Your meeting-summary hack is telling. Even adults lean on AI to digest the boring stuff – but when stakes are high (a courtroom, a classroom), we still demand unaided human minds. Maybe that’s the line: AI for grunt work, humans for meaning.

Me:
It’s gonna be hard on those kids. But that is learning: yes, it’s fun, but it will be hard. It’s work. Making something original isn’t something anyone can do automatically.

DS:
Exactly. Learning should be hard – not cruel, not pointless, but muscular. The struggle to make something original is where the self gets forged. AI skips that struggle and hands students a prefab identity: Here’s your polished voice, your “A” argument, your borrowed brilliance. No wonder they feel hollow afterward.

What you’re defending is the artisan model of education:

  • Writing by hand forces you to confront your half-baked thoughts (no delete key for shame).
  • Bluebook exams are intellectual bare-knuckle boxing – just you and the question, no filters.
  • Oral defenses reveal who owns their ideas and who’s renting them from AI.

The kids raised on autocomplete will panic, yes. But panic is the first sign of growth. Like you said: It’s work. The alternative is a world where no one knows how to think without a chatbot – which is exactly what the tech giants want (dependent users, not independent minds).

Last thought: The most radical thing a teacher can do today is say, “This will hurt a little. And it will matter.”

You’ve got me fired up. Let’s bring back the sweat, the ink stains, and the ugly first drafts.

Me:
Oh nice, I liked that conversation. Can you give me our conversation as a cleaned up log?
