Philosophy Is Saving AI. Can It Save Our Schools?
Designing Agentic Minds in an Age of Thinking Machines
I’m diving into the readings from the 2025 Oxford Seminar: ‘AI x Philosophy’—a forum where philosophers and computer scientists are exploring how to shift AI design to be more human-centered. As a K–12 educator, I’m tracking these discussions and offering key insights into what they could mean for our schools. Join me as I unpack these complex ideas and explore their implications for education.
I’m interested in these discussions for two reasons: first, I think the educational enterprise, from at least middle school through higher education (where everyone is apparently cheating), is completely unprepared for what AI currently is and what it will do to our system of education. Second, I believe if philosophy can help build better AI, then it can help keep schools relevant. The AI x Philosophy seminars aren’t focused on education but on how AI models will affect our decision-making, privacy, autonomy, and a host of other things. I am studying the papers through an educator’s lens to gain clarity on how these threats can inform our schools.
You probably agree with me on the first point—the unpreparedness for AI in our schools—especially if you see the hellfire of panic from educators on LinkedIn. And while I feel confident addressing this in my grade-level classroom, I’m uncertain how the broader educational system can prepare at a structural level as AI continues to evolve—both in its appeal and in its capacity to supplement our “thinking work”.
A parent recently asked me how I’m dealing with AI in my classroom. Without going into too much detail, I explained that my approach is a revival of the "flipped classroom" model, with students writing and discussing in response to media or secondary sources in the classroom. If any area of schooling is going to emerge relatively unscathed, it is probably kindergarten through grade 5 (I currently teach grade 5), where we are tasked more with development in literacy and numeracy. AI technology can certainly augment child development in both areas if ushered in with the right level of experience and expertise. For learning about the world (e.g. science, social studies, digital literacy/computer science, units of inquiry), I think we’ll shift towards a structure similar to my AAK Framework (AI to Authentic Knowledge), where we spend much more time cultivating or refining our concepts through local observation and application. Deeper investment in the creative arts and other hands-on, community-based projects will coincide with this more active and authentic approach.
But I am less confident that education can shift easily at a structural level. Read Kelly Schuster-Paredes' excellent post on how schools react to trends in the EdTech realm. In my view, much of this is because education is quite insular: someone can spend their whole life as a student, teacher, administrator, and academic without ever leaving the field, and I think that has narrowed our scope. It has certainly narrowed our perspective, as we have pursued better instructional practices within a subject-based paradigm at the expense of better outcomes for understanding the complexity of the world. I owe more explanation on this, but I will save it for another post before I digress.
Back to my second point: that philosophy can save our schools. This connects with this week’s AI x Philosophy Seminar readings, where the focus is on distilling philosophical practices and embedding them into AI design. To some degree, schools are already shifting to better thinking practices, but they will have to go further to develop thinkers who are more aware of the AI tools they will be forced to use. The readings from Week 1: Truth-Seeking AI and Week 2: The Inquiry Complex both discuss embedding philosophical principles into AI designed to assist our decision-making. I think both can inform how we deal with AI in education.
The main reading for both weeks is the 2025 article The Philosophic Turn for AI Agents: Replacing Centralized Digital Rhetoric with Decentralized Truth-Seeking by Philip Koralus from the HAI Lab at the Institute for Ethics in AI, University of Oxford. Additional readings include Advait Sarkar’s 2024 article AI Should Challenge, Not Obey and John Stuart Mill’s On Liberty, Chapter 2, Of Liberty of Thought and Discussion.
Starting with Mill’s On Liberty, written in the mid-19th century, the central argument is that freedom of opinion should never be suppressed and that contradiction is essential to intellectual and moral progress. He believed knowledge is always subject to revision through the re-examination of evidence and critical reflection. Even false or fallible ideas deserve to be heard, as their eventual refutation strengthens the pursuit of truth. While Mill was primarily focused on the state's suppression of opinion, his ideas resonate today in both governmental contexts and the murky, often toxic media ecosystem.
As it relates to AI, Mill’s message reminds us that our understanding of external reality is iterative. If AI systems are designed with static parameters based on their designers' perspectives or current state of knowledge, then true inquiry will be compromised by their inherent centralized bias. Truth-seeking must be embedded in the design itself, with systems that encourage questioning beyond their own static boundaries.
Sarkar’s AI Should Challenge, Not Obey argues that we should move away from AI as obedient assistants and toward Socratic tools that challenge users’ assumptions. Good AI should not just provide correct answers but help users ask better questions—fostering argumentation, critical analysis, and intellectual engagement.
Honestly, I’m optimistic about education after reading Sarkar’s paper. Many schools have already shifted from passive to active learning practices. Yet Sarkar also notes that “critical thinking” is a nebulous term, which makes it hard to embed in AI design—and arguably, it's just as ill-defined in education. Many teachers may struggle with it themselves, lacking training beyond their university disciplines, and even more so if they have spent their time only in education faculties.
Both papers complement the main reading, The Philosophic Turn for AI Agents, which warns that we are increasingly relying on AI decision-support systems that are not well suited to preserving human autonomy in decision-making. These systems are often designed in a centralized way by small groups of AI developers who, though well-intentioned, may not prioritize or even understand the philosophical dimensions of decision-making in an autonomy-preserving way.
To function in a future society where opting out of AI tools won’t be feasible, we’ll need to rely on these systems. But there's a tension: AI may end up steering us subtly, through nudging or framing techniques, toward decisions we half-consciously choose. Koralus calls this the “autocomplete for life” effect. We’ve all felt this when signing up for a free trial and forgetting to cancel the payment, since choosing that option requires time and effort. But imagine that multiplied across complex life decisions in different aspects of our lives. Over time, it's easy to become complacent and hand over judgment to the system.
Koralus argues that the solution lies in embedding philosophical inquiry into AI design. Rather than relying on hyper-centralized systems, he proposes decentralized adaptive learning, which roughly means that it is natural for us to judge our own needs in our own individual or community contexts, rather than have someone central, removed from that context, do it for us. Koralus points to how science and market economies function. In science, inquiry is decentralized through shared methods and peer review but driven by the individual or group. In markets, individuals make decisions about supply and demand based on local needs. Both preserve agency while supporting shared structures.
The problem, then, is this: if AI is going to support human judgment without supplanting it, we must build systems that are decentralized and guided by principles of philosophical inquiry. These systems should foster truth-seeking and sustain users’ autonomy in their own decision-making.
Koralus’ Inquiry Complex, influenced by his Erotetic Theory of Reasoning, proposes a shift from simple logical deduction to reasoning that is shaped by questions and context. Though I don’t fully grasp the entire theory—particularly how Koralus' Erotetic Theory of Reasoning operationalizes question-based logic in computational models—the gist is that we can build systems that ask better questions, questions that keep the user in the driver’s seat. This avoids manipulative nudges that exploit our laziness or blind spots, and it helps preserve autonomy.
One point I appreciate is Koralus’ view of philosophy as a rigorous discipline where consensus often emerges on what the main challenges are, not through authority but through an aggregation of individual philosophical work. Philosophers may disagree, but they agree on what the important questions are. That ethos, where open inquiry is guided by shared values, could inspire the next generation of AI design.
It is this last point that I think is gaining momentum in our K–12 schools, but it will need more traction if we are to build durability into our case to stay relevant as an institution. And so here are the implications.
Implications for K–12 Education
1. Cultivate Inquiry-Driven “Thinking”, Not Passive Learning
Many schools have made progress toward active learning models, but we can go further. We need to help students recognize different kinds of knowledge and the ways they are constructed. For instance, mathematical knowledge often builds through logical deduction and proof, while historical knowledge is shaped by interpretation of evidence and multiple perspectives. Scientific knowledge relies on repeated observation and experimentation, and artistic knowledge may be rooted in expression, metaphor, and critique.
In my classroom, for instance, when students study historical events, I prompt them to consider what the emotional experience would have been like for the actors involved. During science units, I prompt students to identify the type of evidence behind each claim, whether it's based on experimentation or observation, and what its limitations may be. These questions help students move beyond content to a more thoughtful examination of how knowledge is constructed at a conceptual level: How do I know this? What kind of evidence supports it? What assumptions or values are embedded in this claim? How might my knowledge be limited?
But even further, Western notions of knowledge tend to be reductive: they are very good at isolating and detailing certain aspects of the way the world is, but they tend to overlook relations between beings and objects, and the more holistic views valued in indigenous ways of knowing or systems thinking. Our approach to teaching knowledge also tends to exclude concepts that are culturally bound: How do people from this culture or time come to understand reality? How might people of a different class or level of privilege view the same event differently from me?
These habits of mind foster deeper intellectual engagement and prepare students for the kinds of discernment AI-era citizens will need when decisions and daily activities become fast-tracked. Schools, if they are to stay relevant, will be places to slow down as they rebalance towards understanding and away from productivity.
2. Integrate Philosophical Thinking Earlier to Develop Better Thinkers
While programs like the IB include Theory of Knowledge in the upper years, philosophy often arrives too late or not at all. Yet younger students are already asking big questions—they're natural philosophers. Introducing structures for philosophical thinking in the middle and primary years can develop reasoning, empathy, and ethical awareness. Schools that pair core academic skills with reflective, meaningful, interdisciplinary learning will prepare students to be stewards of the human condition in an AI-shaped world. Students should have the disposition to challenge AI outputs as those outputs become more automated and omnipresent. These habits do not have to be overly contemplative, just present; they uphold an ecosystem of critical thinking.
3. Use AI as a Punching Bag to Teach Critical Thinking
“Critical thinking” is often invoked but rarely unpacked. It has some general features common across disciplines, but teaching it skillfully has to be grounded in specific disciplines. If you are a subject expert, you can probably sniff out the bogus stuff that AI outputs. But students cannot. They have not earned an advanced degree in subject X or been reading about subject Y as a hobby for the past ten years.
Find ways to bring in AI outputs that are wrong, incomplete, biased, or too neat and make that the lesson. Ask students: What sounds plausible here, but isn’t? What’s missing? What assumptions are buried in this response? In history, it might be a glaring omission of perspective. In science, a lazy conclusion. In literature, a misread of tone. These are invitations for students to examine and articulate what makes an idea credible, coherent, or truthful within the norms of a discipline. The AI becomes not a shortcut but a foil—a source of friction that sharpens their reasoning. Critical thinking, then, becomes less about having skeptical instincts and more about developing the mental sharpness to scrutinize claims, weigh evidence, defend judgments, and, most importantly, prevent complacency. It’s not enough to tell students to “think critically”—we need to show students what that means in context, and flawed AI makes an excellent sparring partner.
As we move into the lower grades, subject expertise among teachers likely thins out. Cross-school curriculum meetings can therefore also be a training opportunity for high school, middle school, and elementary teachers, where critical thinking skills are aligned and developed appropriately at each level.
Conclusion
AI is not just a technological tool; it’s a philosophical and educational challenge for preserving the human condition. The readings from the AI x Philosophy seminar suggest that unless we intentionally build systems that reflect the values of inquiry, autonomy, and intellectual integrity, we risk creating a generation of passive users in a world of persuasive machines. Schools have a crucial role to play—not just in adapting to AI, but in shaping the human capacities we want to preserve in the face of it. If we embed philosophy and intentional thinking principles into both AI design and educational practice, we’ll be better equipped to help students become the thoughtful, autonomous decision-makers the future will demand.
Thanks for reading! Please comment below; I would love to know your thoughts. And subscribe if you would like to read my next post on the AI x Philosophy seminar, Week 3: Privacy and the Future of AI.
A very enjoyable read. I’m not an educator, but I give a lot of trainings to customers at work and I am a karate teacher. Learning how to learn and how to teach has helped me on both fronts.
The main message I got from your essay is intentionality. With intentionality comes dedication and effort. You take the time to think about this topic and find ways to make it work. Sadly, I feel not all teachers do so; some take the “lazy route”.
This is one of the most important education pieces I’ve read all year. You’re right to locate the challenge not just in tools, but in the design of thought itself.
I developed my Reflective Prompting, a practice of using AI not for speed or output, but for metacognitive pause: returning to the “why” before the system fills in the “what.” It aligns closely with what you described: AI as a philosophical interface, not just a functional one.
Your idea of using flawed AI output as a “punching bag” to teach disciplinary reasoning is brilliant. It mirrors how I’ve started using AI in leadership development, not for acceleration, but for friction.