AI Won’t Replace Teachers or Parents
But It Might Expose Us (3 of 6)

(Part 1 here) (Part 2 here)

One student I taught a decade ago – let’s call her Rina – was troubled by an almost crippling self-doubt. Though highly capable, she was never sure her work was good enough, despite high praise and high grades. I taught her for several years, and even knowing her well, nothing I said ever seemed to shift her view of herself. Her chronic lack of confidence was visible even in her rather hunched gait, and it was hard to see her be so persistently unkind to herself. When she graduated – crossing the stage as quickly as possible so she could get off it as soon as she could – I worried how she would cope beyond the relative safety of school.

I met Rina at an alumni reunion recently and I could see something was different as soon as she breezed in. She held her head high, made eye contact, moved with assurance and had a genuine smile. After a bit of small-talk I asked her how she was, and what she said amazed me. She could actually acknowledge, for example, that she had received several promotions at work and might even have deserved them; comfortable in her own skin, in other words – almost a different person. So I asked her what had changed – was she in a great relationship that gave her confidence? Had she had a breakthrough with a therapist? No: she had begun to ask an AI for its opinion of her diaries, where she kept a detailed daily account of what had happened and her reactions; she would ask the AI what a reasonable person would make of her worries, and whether she was being overly critical of herself. The answers reassured her, and gave her confidence.

What surprised me even more was what she said when I asked why she hadn’t turned to friends and family (humans, that is) for comfort. It turned out that it was precisely the lack of humanity that made the AI so attractive. Having instructed it to be neutral or even critical, Rina’s typical defences – you would say that, because you have to or you’re only saying that because you’re my friend – were simply not available. She trusted the machine more than the people who cared about her, precisely because it had no reason to care.

I didn’t know what to make of this at the time, and I’m still not sure I do, because something here runs against what I’ve argued in my previous two posts. Both pieces assumed, without quite saying it, that the human things – the warmth, the relationship, the conversation – are the bits the machines can’t touch. Rina’s experience suggests it’s not that simple. It wasn’t warmth or encouragement that helped her most. It was something colder, cleaner; something inhuman.

The Comfort Blanket

This unsettles me as an educator, because the claim that what matters is the human connection has become a kind of comfort blanket for people in my profession (and not only teachers; this applies to families too, arguably with even more force). AI will do the cognitive work, we say, but the relational work is ours. The mentoring, the pastoral care, the quality of the conversation – that’s where teachers are irreplaceable. I believe this, mostly. But Rina’s story doesn’t fit that narrative.

Rina couldn’t connect with the people around her because her self-doubt created a filter that blocked everything: every compliment was suspect, every reassurance was dismissed, every human gesture was explained away. She was surrounded by people but she couldn’t hear any of them. The AI broke through the filter not because it was warm, but precisely because it wasn’t. It created a space where she could hear clearly, without the static of human emotion.

I am tempted to say that what Rina found was closer to a very good diagnostic tool than to a transformative relationship. It helped her see herself more clearly, I could easily tell myself, which is valuable – but it’s not the same thing as being truly known by another person. I’m wary of this line of reasoning, though, because it’s a bit too comfortable. What Rina needed, at that moment in her life, was not to be truly known – it was to be accurately assessed. The human relationships had plenty of warmth and plenty of depth, but they couldn’t give her what she needed. The noise of human care was the problem, not the solution.

Parents and educators talk a lot about human connection, and we should. The best teaching I’ve ever seen – and the best parenting, for that matter – involves an encounter that changes both parties, where something alive passes between adult and child. I’ve watched it and experienced it and it’s unmistakable. And wonderful, by the way. But I’ve also watched and experienced a lot of teaching and parenting that feels like performance, or routine, or well-intentioned warmth that doesn’t actually reach the child. If we’re going to claim that human relationships are the irreducible core of education, then they have to be worth the trouble. They have to offer something the machine can’t. Otherwise, students will find the machine more honest, just as Rina did.

So Rina’s story suggests an even more uncomfortable threat from AI: might it displace more than just our cognitive capacities? If human presence isn’t automatically better than machine presence, we need to ask why anyone should choose it. Rina didn’t. This may not be a story about the triumph of technology after all, but a story about the limits of human care.

The best human relationships still offer something no machine can: the teacher who sees the student, the parent who listens without an agenda, the friend who challenges you because they care enough to risk the relationship. These work because both sides are vulnerable, both sides have something at stake. But the key word is ‘best’; AI raises the hard truth that human connection has to earn its claim to irreplaceability.

Facing what the machines expose

Reading what I’ve just written, I notice I’m doing something educators do instinctively – treating Rina’s experience as a problem to be solved rather than something to understand. My first impulse is to say: well, if the machine gave her clarity, we should learn from that and make our human interactions more honest. And that’s partly right, but it’s also partly a dodge, because the logical conclusion of Rina’s story isn’t just that we should be more like the machine (itself a rather scary idea). It’s that sometimes the machine is simply better suited to the task. We haven’t failed; the absence of human feeling is the point, not a limitation. You can’t make a human relationship feel inhuman and still call it human. Some forms of support may genuinely work better without warmth, without history, without the messy entanglement of caring.

If this is right, and if machines can provide not just cognitive output but emotional support that some people actually prefer, then the claim that human relationships are the irreducible core of a good life starts to look less like a timeless truth and more like a preference held by people who grew up before the alternative existed. In The Matrix, the choice between the real world and the simulation was meant to be obvious – of course you’d choose reality. But what if a generation raised with AI companions doesn’t find it obvious at all? That’s the real red pill moment – not the discovery that your world is fake, but the discovery that you don’t even care.

But maybe Rina’s story also has a benevolent reading. The AI was doing what a coach or therapist does at their best – holding up a mirror, asking the right question, refusing to flatter. It wasn’t competing with human relationships but doing something different, something the humans around her couldn’t do precisely because they cared. And once Rina could see herself more clearly, she could hear the people around her more clearly too. Her AI didn’t so much replace her human relationships as unblock them. So if the pattern is that AI handles clarity while humans provide encounter, then we might be looking at something that strengthens what matters most rather than threatens it. But that only works if we are holding up our end.

The question, therefore, for parents and educators, is not whether AI will take over human work. It’s whether the human work we’re doing is worthy of the name. If it is, AI doesn’t change anything that matters. If it isn’t, then the machine isn’t the problem. We are.

References

  • Images created with AI (OpenArt)
  • Wachowski, L. & Wachowski, L. (Directors). (1999). The Matrix. Warner Bros.
