13 Comments
Alan Mokbel

A very enjoyable read. I’m not an educator, but I deliver a lot of training to customers at work and I am a karate teacher. Learning how to learn and how to teach has helped me on both fronts.

The main message I got from your essay is intentionality. With intentionality comes dedication and effort. You take the time to think about this topic and find ways to make it work. Sadly, I feel not all teachers do so; many take the “lazy route” instead.

Roi Ezra

This is one of the most important education pieces I’ve read all year. You’re right to locate the challenge not just in tools, but in the design of thought itself.

I developed a practice I call Reflective Prompting: using AI not for speed or output, but for a metacognitive pause, returning to the “why” before the system fills in the “what.” It aligns closely with what you described: AI as a philosophical interface, not just a functional one.

Your idea of using flawed AI output as a “punching bag” to teach disciplinary reasoning is brilliant. It mirrors how I’ve started using AI in leadership development, not for acceleration, but for friction.

Dave

I’ve adapted this observation from a previous comment but it bears repeating.

The United States, like much of the developed world, operates on a two-tier intellectual infrastructure, modeled after British systems such as Oxford, Cambridge, and most notably the Royal College of London. These institutions formalized the separation between those who are trained to think and lead, and those who are trained to comply and serve.

Elite education (Ivy League and equivalent) teaches classical rhetoric, deep theory, abstract thinking, primary source analysis, foreign languages, and advanced mathematics. It trains students to critique systems, navigate power, and shape narratives. It produces the managers of society: policymakers, financiers, academics, legal architects.

Mass public education, by contrast, teaches basic compliance, test performance, surface-level civics, and compartmentalized, decontextualized knowledge. It trains students to follow rules, obey systems, and passively consume narratives.

This divergence wasn’t an accident.

What gets taught at Groton, Phillips Exeter, or St. Paul’s isn’t just more advanced; it’s differently structured: dialectical, open-ended, and rooted in learning how to think. What’s taught in most public schools is how to memorize, how to submit, and how to avoid thinking too hard.

Philosophy (classicism) has always been the foundation of education for the select. The fact that it never permeated public education isn’t negligence.

Oxford has been instrumental in maintaining the exact intellectual stratification that has kept philosophical thinking away from mass education. The classical education model wasn’t accidentally withheld from public schools; it was deliberately preserved as a tool of elite reproduction.

By looking to Oxford for inspiration for the design of education, you are literally relying on the perpetrators to steward literacy. It’s ironic that we are suddenly urgent about embedding philosophical inquiry into AI design, yet somehow couldn’t find the resources or will to embed it into public education for generations.

The real question isn’t whether philosophy can save our schools, but whether those who control educational policy actually want schools that produce citizens capable of the kind of systematic critique that threatens existing power structures.

The current AI philosophy movement seems to be less about empowering citizens and more about maintaining intellectual hierarchy through technological means.

If you can’t keep people from accessing powerful reasoning tools, at least you can shape how those tools think and what values they embed.

Whitney Whealdon

You and I have both been exploring “critical thinking” as of late. I love how you described it here toward the end. What’s implied and not directly stated in your description is the idea that the thinking is bound by knowledge. So, analyzing AI as a “non-example” is brilliant because it provides a model for how to apply knowledge critically. Kudos for that. This is golden.

Whitney Whealdon

Jamie, this inspired my latest post and I quoted you... https://whitneywhealdon.substack.com/p/wheres-the-beef?r=f8p8m

Jamie House

Awesome! I'll give it a good read soon!

Blackerthanmirrors

Thank you. I will need to read Sarkar, given your insights.

The AI / Human Epiphanist

I thoroughly enjoyed reading this. I do believe that, in general, we need a more operant and active form of philosophy in the day-to-day, whether for education or work.

Jamie House

It just seems like a requirement now. We have to think about outputs in ways that go beyond the outputs. Philosophy will be our saviour.

The AI / Human Epiphanist

Personally, I believe it has always been a requirement, a need. In 15+ years of project and change management, at the forefront of one industry’s desire to become digitally transformed, genuine transformation was never realized beyond more tech at the surface, and philosophy about change and transformation was missing from the daily discourse.

Now we are essentially going to be forced to come to terms with our delay in this realization.

Jamie House

I hear that!

Chris Potrebka

I distrust articles that use AI-generated images to promote themselves.
