Stephanie Schneider from SUNY Old Westbury examines how Artificial Intelligence is reshaping our understanding of knowledge and challenging traditional concepts as it becomes increasingly integrated into our daily lives
When you ask Siri for the weather, consult ChatGPT for writing advice, or rely on GPS to navigate an unfamiliar city, you’re participating in a profound shift in how knowledge works in our world. These everyday interactions with Artificial Intelligence raise questions that philosophers have grappled with for centuries: What does it mean to ‘know’ something? And when machines seem to know things too, can they possess knowledge in the same way humans do?
These aren’t merely abstract philosophical puzzles. As AI becomes increasingly integrated into our schools, workplaces, and daily lives, understanding how machine ‘knowledge’ differs from human knowledge becomes crucial for navigating our rapidly changing world.
Artificial Intelligence is reshaping our understanding of knowledge across several domains, including how we define knowledge, how we teach and learn, how we trust sources of information, and how we make decisions. Each of these areas reveals different ways in which AI challenges traditional epistemology while opening new possibilities for how we think, learn, and know.
The traditional view of knowledge
For over two thousand years, philosophers have generally agreed that knowledge requires three things: you must believe something, it must be true, and you must have good reasons for believing it. This is called ‘justified true belief.’ When you know that Paris is the capital of France, you believe it, it is true, and you have reliable sources supporting your belief.
This traditional view assumes that knowledge resides in human minds, shaped by our experiences, reasoning, and understanding. Knowledge isn’t just information; it involves comprehension, the ability to explain why something is true, and the capacity to apply that understanding in new or unfamiliar situations.
How AI challenges our understanding of knowledge
Artificial Intelligence systems operate very differently from human minds. They don’t form beliefs or develop understanding in the way we do. Instead, they process vast amounts of data, identify patterns, and make predictions based on statistical relationships. When an AI system tells you it’s going to rain tomorrow, it’s not ‘believing’ anything about the weather; it’s calculating probabilities based on atmospheric data.
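To make that contrast concrete, here is a minimal sketch in Python of what ‘calculating probabilities’ amounts to. The feature names and coefficients are invented for illustration; no real forecasting system is anywhere near this simple.

```python
import math

def rain_probability(humidity: float, pressure_drop: float, cloud_cover: float) -> float:
    """Toy logistic model. The weights are made up for illustration,
    not taken from any real forecasting system."""
    # Weighted sum of atmospheric features (hypothetical coefficients).
    score = 2.1 * humidity + 1.4 * pressure_drop + 0.8 * cloud_cover - 2.5
    # Squash the score into a probability between 0 and 1.
    return 1 / (1 + math.exp(-score))

# The system outputs a number, not a belief: roughly 0.70 here, meaning
# "rain is likely given these inputs", with no grasp of what rain is.
print(f"{rain_probability(0.9, 0.6, 0.75):.2f}")
```

The model never represents rain at all; it maps numbers to a number, which is precisely why ‘belief’ seems like the wrong word for what it does.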
This creates a fascinating puzzle. If knowledge requires belief and understanding, can AI systems ‘know’ anything? Or are they simply very sophisticated information processors that simulate knowledge without possessing it?
Consider a language model that can write poetry, answer complex questions, or even assist students in learning calculus. The AI might produce responses that demonstrate apparent knowledge, but it lacks conscious experience, intentional understanding, or the ability to reflect on what it ‘knows.’ It operates through pattern matching and statistical inference rather than comprehension.
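A deliberately tiny model makes this visible. The Python sketch below is a bigram model built from a ten-word corpus; real language models are incomparably larger, but the mechanism it illustrates, continuation driven by statistics over text rather than by comprehension, is the one at issue.

```python
import random
from collections import defaultdict

# Toy bigram "language model": record which word follows which, then sample.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def continue_text(word: str, length: int = 5) -> str:
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        # Duplicates in the list weight the choice toward frequent pairs.
        out.append(random.choice(options))
    return " ".join(out)

print(continue_text("the"))  # e.g. "the cat sat on the mat"
```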
Yet these systems can be remarkably effective at tasks we associate with knowledge and expertise. This forces us to reconsider whether our traditional definition of knowledge adequately captures all the ways reliable information can be generated and used.
The educational implications
These philosophical questions become practical concerns in educational settings, where AI is increasingly present in classrooms, online learning platforms, and assessment tools. Educational AI systems often embed assumptions about how learning works and what counts as knowledge, though these assumptions are rarely made explicit.
Many AI-powered educational platforms operate on behaviorist principles, treating learning as a process of accumulating correct responses to questions. They measure knowledge through performance on specific tasks rather than deeper understanding or the ability to transfer knowledge to new contexts. This approach can inadvertently narrow our conception of what it means to learn and know.
For instance, an AI tutoring system might help a student solve algebra problems by guiding them through step-by-step procedures. The student may achieve correct answers and even improve their test scores, but do they truly understand algebraic concepts? Have they developed the kind of mathematical thinking that allows them to approach novel problems creatively? The AI system, focused on measurable outcomes, might miss these deeper dimensions of mathematical knowledge.
The problem of the black box
One of the most significant challenges AI poses to traditional ideas about knowledge is what philosophers call the ‘black box’ problem. Many AI systems, particularly those using deep learning, operate in ways that are opaque even to their creators. They can produce accurate predictions or useful outputs, but we often cannot explain exactly how they arrived at their conclusions.
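The difficulty can be seen even at toy scale. In the Python sketch below (with arbitrary, made-up weights standing in for a trained network), every parameter is printable, yet the numbers never amount to a reason a person could give; deep learning systems multiply this by millions or billions of parameters.

```python
import numpy as np

# A tiny feed-forward network with invented weights, standing in for a
# deep model. Every parameter is fully visible; none of them reads as a reason.
W1 = np.array([[0.4, -1.2, 0.7],
               [0.9,  0.3, -0.5]])   # input layer -> hidden layer
W2 = np.array([1.1, -0.8, 0.6])      # hidden layer -> output score

def predict(x: np.ndarray) -> float:
    hidden = np.maximum(0, x @ W1)   # ReLU activations
    return float(hidden @ W2)        # a score, "justified" only by arithmetic

x = np.array([0.5, 1.5])
print(predict(x))        # the output
print(W1, W2, sep="\n")  # total transparency of parameters, no explanation
```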
This creates an epistemological dilemma. Throughout history, the ability to provide justifications and explanations has been crucial to the establishment of knowledge claims. We expect experts to be able to explain their reasoning, and we teach students to ‘show their work.’ However, AI systems are increasingly making important decisions, such as medical diagnoses, loan approvals, or educational recommendations, through processes that we cannot fully understand or explain.
This opacity doesn’t necessarily make AI outputs wrong or useless, but it does challenge traditional notions of justified belief. How should we evaluate knowledge claims from systems whose reasoning processes we cannot access or verify?
New forms of epistemic trust
As AI systems become more prevalent and powerful, we’re developing new forms of what philosophers call ‘epistemic trust’ – trust in sources of knowledge. Just as we learn to trust certain human experts, institutions, or publications, we’re now learning to trust (or distrust) various AI systems.
This trust operates differently from our confidence in human experts. When we trust a doctor’s diagnosis, we’re relying not just on their knowledge but also on their professional training, ethical commitments, and ability to explain their reasoning. With AI systems, we’re trusting complex algorithmic processes, the data used to train them, and the institutions that created and maintain them.
This shift has profound implications for education and for democratic society. If students increasingly rely on AI for information and analysis, how do we ensure they maintain the ability to think critically about sources, evaluate evidence, and develop independent judgment? How do we balance the genuine benefits of AI assistance with the need to preserve human epistemic agency – our capacity to think and know for ourselves?