1. Introduction: The Interpretive Problem at the Heart of AI
From its earliest ambitions, artificial intelligence (AI) has been haunted by an interpretive problem it did not know it had. The pioneers of symbolic AI in the 1950s and 60s believed that intelligence could be formalized — that reasoning, language, and knowledge could be captured in explicit rules and representations, and that a machine that manipulated those representations correctly would thereby understand. What they did not anticipate was the challenge posed by the oldest and deepest question in the human sciences: What does it mean to “understand” at all?
That question is the central concern of hermeneutics — the philosophical discipline of interpretation, whose history stretches from ancient Greek textual commentary through the Protestant Reformation to the twentieth-century ontological revolution of Martin Heidegger and Hans-Georg Gadamer. This essay traces the encounter between hermeneutical thought and artificial intelligence across seven decades, from the early critique of symbolic AI to the deep philosophical questions raised by today’s large language models. It makes a deliberately modest causal claim: hermeneutical philosophy did not drive the development of AI so much as it provided the conceptual vocabulary through which researchers, philosophers, and critics could articulate — and in some cases redirect — problems they were already encountering on empirical and practical grounds. That vocabulary proved, and continues to prove, consequential precisely because the problems it describes are irreducibly real.
One preliminary clarification is essential and often overlooked in discussions of this topic. Hermeneutics is not a single philosophical position. It is a family of related but genuinely distinct schools, each generating different and sometimes incompatible implications for artificial intelligence. The seven traditions most directly relevant to this discussion are presented below, organized by their primary contribution to the encounter with AI.
2. Seven Schools, Seven Challenges
To understand how hermeneutics has engaged AI, one must distinguish what each school actually claims:
Friedrich Schleiermacher (1768–1834) established modern hermeneutics as a general discipline by arguing that understanding any text requires two simultaneous operations: grammatical interpretation (command of the linguistic and generic conventions of the discourse) and psychological interpretation (reconstruction of the author’s intention and creative situation). For AI, this means that language understanding is not reducible to syntactic pattern-matching — it requires both the formal structure of language and the communicative intention behind it.
Wilhelm Dilthey (1833–1911) grounded the human sciences (Geisteswissenschaften) in hermeneutics by insisting on a fundamental distinction between Erklären (causal explanation, the method of natural science) and Verstehen (understanding, the method proper to human expression and history). For AI, this distinction matters from the very moment a system claims to “understand” language: natural-scientific computational methods explain by subsuming events under laws, but human meaning cannot be reduced to this. Dilthey’s challenge — the epistemological one — was present at AI’s founding and largely ignored.
Martin Heidegger (1889–1976) transformed hermeneutics from an epistemological method into an ontological structure. In Being and Time (1927), he argued that interpretation is not something humans do with texts but the fundamental structure of human existence itself. Before any explicit act of interpretation, human beings operate within a three-part fore-structure of understanding: Vorhabe (fore-having — a prior practical engagement with the world), Vorsicht (fore-sight — a theoretical perspective that shapes what one looks for), and Vorgriff (fore-conception — the conceptual vocabulary with which one approaches a problem). These are not distortions to be overcome but the conditions of possibility of any understanding. For AI, Heidegger’s implication is stark: a system that has no lived engagement with the world — no embodied history, no practical being-in-the-world — lacks the fore-structure without which genuine interpretation cannot begin.
Hans-Georg Gadamer (1900–2002) built on Heidegger to produce, in Truth and Method (1960), the fullest account of philosophical hermeneutics. Three concepts are indispensable here. First, Wirkungsgeschichte (historically effected consciousness): every interpreter stands within a tradition shaped by the accumulated interpretive history of the texts and problems they engage — the history of effects of prior readings. Second, Horizontverschmelzung (fusion of horizons): genuine understanding occurs not when one interpreter imposes their meaning on a text, but when the horizon of the interpreter and the horizon of the text meet, challenge each other, and are mutually enlarged. Third, Gadamer’s rehabilitation of prejudice (Vorurteil): tradition-shaped pre-understandings are not obstacles to correct interpretation but the enabling conditions of any interpretation at all.
E.D. Hirsch Jr. (1928–) challenged Gadamer’s openness by insisting that valid interpretation requires a stable meaning — what the author intended, recoverable through rigorous method — which must be distinguished from significance — how that meaning is applied in different contexts and eras. Without this distinction, Hirsch argued, there is no basis for saying one interpretation is better than another, and the human sciences lose their cognitive credibility.
Paul Ricœur (1913–2005) provided the most practically powerful integrative framework. His three-stage hermeneutical arc — pre-understanding → distanciation (structural-analytical explanation) → appropriation (transformed self-understanding) — insisted that explanation and understanding are not opposed but dialectically related: one must explain more to understand better. The detour through structural analysis, far from being alien to understanding, deepens and objectifies it. Ricœur also introduced the world of the text: once written, a text achieves semantic autonomy from its author’s intention and opens onto a world of possible meanings that constrain but do not determine interpretation.
Jürgen Habermas (1929–) raised the critical-emancipatory challenge that Gadamer’s trust in tradition could not answer: some traditions are ideologically distorted and reproduce relations of domination. Genuine understanding must include a capacity for ideology critique — what Habermas called communicative rationality — that can evaluate tradition from a standpoint that tradition itself does not supply.
These seven positions are not a decorative background. They generate seven different diagnostic questions for artificial intelligence, each of which has become more acute as AI systems have grown more powerful.
3. The Early AI Program and What It Assumed
The symbolic AI program that dominated the field from roughly 1950 to 1980 rested on a set of foundational assumptions that hermeneutical philosophy — had it been consulted — would have recognized immediately as philosophically problematic. The program assumed that:
1. Intelligence is formal symbol manipulation — reasoning can be captured in explicit rules
2. Knowledge is propositional — it can be fully represented as discrete facts and their logical relationships
3. Language understanding is a matter of parsing grammatical structures and mapping them onto knowledge representations
4. Context is a manageable supplement to a core of formal competence
From Dilthey’s standpoint, assumption (3) collapsed the Verstehen/Erklären distinction at the moment AI claimed to understand language: what the early systems did was explain through formal patterns, not understand through interpretive engagement. From Schleiermacher’s standpoint, assumption (3) captured only the grammatical dimension of interpretation while entirely omitting the psychological-intentional one. From Heidegger’s standpoint, assumption (4) was the deepest error: context is not a supplement to formal competence but its very condition of possibility, because all competence is constituted by a fore-structure of understanding embedded in a practical form of life.
These critiques were made with full force long before AI researchers were ready to receive them — most decisively by Hubert Dreyfus.
4. Dreyfus and the Phenomenological Critique
Hubert Dreyfus’ What Computers Can’t Do (1972) is often described as a hermeneutical critique of AI. This description requires careful qualification. Dreyfus was a phenomenologist — his central argument draws primarily on Heidegger’s existential phenomenology of Dasein in its everydayness, not on hermeneutical theory in the narrower sense. Phenomenology asks: What is the structure of lived experience, embodiment, and being-in-the-world? Hermeneutics asks: What is the structure of interpretation, textual meaning, and understanding across historical distance? These overlap substantially in Heidegger but are not identical, and the distinction matters for intellectual precision. Dreyfus’ argument belongs primarily to the phenomenological register.
Dreyfus identified four assumptions of the AI research program that he argued were philosophically untenable:
- The biological assumption: The brain processes information discretely, like a digital computer
- The psychological assumption: The mind operates by manipulating context-free, determinate elements
- The epistemological assumption: All knowledge can be made explicit as a set of formal rules
- The ontological assumption: Reality consists of independent, determinate facts
His deepest objection — the one most directly continuous with Heidegger — was to the epistemological assumption. Much of human skill and understanding is tacit: it does not consist in knowing explicit rules but in a practical, embodied engagement with a world that has already been shaped by a lifetime of involvement. An expert chess player does not apply rules: she perceives the board as a structured gestalt and acts with an immediacy that no explicit rule system can replicate or explain. This is Heidegger’s distinction between ready-to-hand engagement (skilled practical involvement) and present-at-hand representation (detached, explicit description): early AI operated entirely in the present-at-hand register while human intelligence operates primarily in the ready-to-hand one.
The frame problem — first named by John McCarthy and Patrick Hayes in 1969 — provided the technical corollary. Any system relying on explicit logical representations must specify what remains unchanged when an action is performed, but in a world of indefinite scope, the number of such unchanged facts is effectively infinite. No finite rule system can close this problem. This is Heidegger’s ontological point made computationally precise: the world that humans inhabit is not a set of facts but a background of practical meaning that no explicit representation can fully capture.
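The combinatorial shape of the problem can be made concrete with a toy sketch. The sketch below is purely illustrative; the action names, fluents, and counts are invented for the example rather than drawn from any historical planner. A system built on explicit representations needs a frame axiom for every action and every fact that action leaves untouched, and in an open-ended world the list of untouched facts has no principled cutoff.

```python
# Toy illustration of the frame problem (all names invented for the example).
actions = ["pick_up(block_a)", "move(robot, room2)", "paint(block_b, red)"]
fluents = ["on(block_a, table)", "colour(block_b, blue)", "at(robot, room1)",
           "light_on(room1)", "door_open(room2)"]  # ...and indefinitely many more

# A naive axiomatization needs one frame axiom per (action, fluent) pair,
# asserting that the fluent still holds after the action (minus the handful
# of fluents the action actually changes).
frame_axioms = [
    f"{fluent} still holds after {action} if it held before"
    for action in actions
    for fluent in fluents
]
print(len(frame_axioms))  # 15 in this toy world; unbounded once the world is open-ended
```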
Dreyfus’ reception by the AI community was initially hostile — he was famously challenged to a chess match against a program (a match he lost), and his argument was widely dismissed. His influence was indirect and long-term, felt most powerfully through researchers who were independently encountering the limits his philosophy described.
5. The Hermeneutical Turn: Winograd’s Conversion
The most compelling single episode in the history of hermeneutics’ encounter with AI is Terry Winograd’s intellectual conversion. Developed between 1968 and 1970 as Winograd’s doctoral dissertation at MIT and published in 1972, the SHRDLU program was celebrated as a landmark achievement in natural language processing. SHRDLU could hold conversations about a simplified world of toy blocks, answering questions, following instructions, and apparently demonstrating genuine language understanding — within its narrow domain.
Winograd became progressively disillusioned with what he had built. SHRDLU’s competence was entirely dependent on the exhaustive prior specification of a closed micro-world. Remove the constraint, expand the domain, and the program’s apparent understanding dissolved into brittle pattern-matching. The richness of natural language — its dependence on shared social context, implicit background knowledge, practical engagement, and the indefinitely open texture of human meaning — was not something that more rules could capture. It was structurally beyond what his approach could achieve.
This disillusionment, deepened by his encounter with Heidegger and Gadamer (partly through his collaboration with Fernando Flores, a student of Heidegger’s thought), produced Winograd’s landmark 1980 essay “What Does It Mean to Understand Language?” and, subsequently, the book Understanding Computers and Cognition (1986), co-authored with Flores. The central argument was Gadamerian: human language understanding is not rule-governed processing but an event of meaning disclosure that occurs within a background of pre-understood practical engagement with the world — a background that cannot be fully made explicit, formalized, or represented.
Winograd and Flores made two decisive contributions. First, they reframed the objective of language AI: not to replicate understanding through formal systems, but to design for breakdown, treating the moments when a tool interrupts smooth practical engagement as occasions for the reflective, explicit interpretation that Heidegger described as the shift from ready-to-hand to present-at-hand. Second, they articulated a design philosophy for AI as a partner in human work rather than a replacement for human understanding. This was hermeneutics entering AI not as critique alone but as constructive design philosophy.
6. The Hermeneutic Circle and Knowledge Representation
The hermeneutic circle — the insight that understanding any part requires a grasp of the whole, and understanding the whole requires engagement with the parts — had concrete implications for AI beyond the philosophical level. The circle describes a holistic structure of interpretation: meaning is not a property of individual symbols in isolation but emerges from the relationships between parts within a contextual whole.
This structural insight directly challenged the atomistic assumptions of early knowledge representation. If knowledge is a network of discrete facts connected by logical rules, then any isolated fact is interpretable independently of context. But natural language is not like this: the meaning of any utterance depends on the entire web of contextual assumptions, pragmatic conventions, conversational implicatures, and shared background that surrounds it. The word “bank,” for example, means different things in “river bank” and “savings bank” — not because the disambiguation rule is complex, but because the entire context of discourse determines which meaning is operative. This is the hermeneutic circle operating at the semantic level.
The shift within Natural Language Processing (NLP) from rule-based parsing toward statistical and then contextual approaches — corpus-based methods, distributional semantics, neural language models — was driven primarily by empirical failures of the rule-based approach and by the availability of large text corpora and computational resources. But the hermeneutical vocabulary provided the conceptual framework within which researchers could describe why the rule-based approach failed at a structural level: it had tried to achieve understanding atomistically when understanding is constitutively holistic.
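The contrast between atomistic and contextual representation can be made concrete with the “bank” example above. The following is a minimal sketch, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint (the sentences are invented for the illustration): the vector the model assigns to “bank” is computed from the whole sentence, so occurrences in river contexts typically sit closer to one another than to occurrences in financial contexts.

```python
# Minimal sketch: the representation of "bank" is determined by its context.
# Assumes the Hugging Face transformers library and a pretrained BERT model.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def bank_vector(sentence: str) -> torch.Tensor:
    """Return the contextual embedding of the token 'bank' in the sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (sequence_length, hidden_size)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index("bank")]

river_1 = bank_vector("They sat on the river bank and watched the water.")
river_2 = bank_vector("Fish gathered near the muddy bank of the stream.")
money = bank_vector("She deposited the check at the savings bank.")

cosine = torch.nn.functional.cosine_similarity
print(cosine(river_1, river_2, dim=0))  # typically higher: same sense
print(cosine(river_1, money, dim=0))    # typically lower: different sense
```

No rule for “bank” appears anywhere in this pipeline; the disambiguation falls out of the whole surrounding context, which is the holism the hermeneutic circle describes.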
7. Gadamer’s Challenge to Large Language Models
It is in the era of large language models that hermeneutical philosophy becomes most diagnostically acute — and most conspicuously absent from mainstream AI discourse. Large language models (LLMs) are trained on vast corpora of human text through predictive learning: given a sequence of tokens, predict the next. The result is a system of remarkable surface competence — fluent, contextually responsive, capable of engaging across an extraordinary range of topics and tasks. By any behavioral measure, contemporary LLMs appear to understand language in ways that their predecessors did not. Yet the hermeneutical schools, examined in sequence, raise a series of challenges that behavioral fluency does not answer.
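The training objective itself is easy to state. The sketch below is a deliberately minimal illustration of next-token prediction, with a tiny stand-in model, random token ids, and invented dimensions rather than any production architecture: at every position the model is scored on how well it predicts the following token, and on nothing else.

```python
# Minimal illustration of the next-token prediction objective (toy model, random data).
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64

# A deliberately tiny stand-in for a language model: embed each token, then map
# back to a distribution over the vocabulary. Real LLMs interpose a deep
# transformer between these two layers, but the objective is the same.
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),
    nn.Linear(d_model, vocab_size),
)

tokens = torch.randint(0, vocab_size, (1, 32))  # one sequence of 32 token ids
logits = model(tokens[:, :-1])                  # predictions for each position
targets = tokens[:, 1:]                         # the actual next token at each position

# Cross-entropy between the predicted next-token distribution and the actual next token.
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()
print(loss.item())
```

Whatever intention, horizon, or appropriation the following challenges ask about would have to arise, if it arises at all, as a by-product of optimizing this single predictive objective.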
The Schleiermacherian Challenge
Schleiermacher required two dimensions of understanding: the grammatical and the psychological-intentional. LLMs excel at the grammatical dimension — they have been trained on the full distributional structure of human language and command its grammatical patterns with extraordinary facility. But the psychological dimension — the reconstruction of communicative intention, the understanding of what a speaker or writer was trying to do in a specific act of communication — is systematically inaccessible to a system that has no model of persons as intentional agents with goals, beliefs, and communicative purposes.
The Diltheyan Challenge
Dilthey’s Verstehen/Erklären distinction reveals a structural feature of LLMs that their behavioral outputs obscure. An LLM explains by pattern completion — it produces outputs that are statistically coherent with its training distribution. This is Erklären in Dilthey’s sense: the subsumption of linguistic events under causal-statistical regularities. The question of whether it achieves Verstehen — the grasp of meaning from within a human life-world — cannot be answered by examining outputs alone, because an explanation-system and an understanding-system can produce indistinguishable outputs for different structural reasons.
The Heideggerian Challenge
Heidegger’s fore-structure analysis reveals the deepest problem. An LLM has no Vorhabe — no prior practical engagement with the world that gives its symbol-manipulations their semantic anchorage in lived reality. It has no Vorsicht — no interpretive perspective that it brings as a situated agent with particular concerns and projects. Its Vorgriff — its operative conceptual vocabulary — consists entirely of statistical abstractions from other people’s language, not concepts formed through encounter with the world. In Heideggerian terms, an LLM is the most radical possible instance of present-at-hand representation, with no ready-to-hand engagement underneath it.
The Gadamerian Challenge
Gadamer’s framework raises the most philosophically precise challenges for LLMs, and they are worth stating with precision.
a. The Wirkungsgeschichte Deficit. Gadamer argued that every genuine interpreter stands within the historically effected consciousness of their tradition — they are shaped by the accumulated interpretive history of the texts and problems they engage. This historical situatedness is not a limitation but the positive condition of understanding: it provides the horizonal background within which new meanings can be received. An LLM has a training corpus but not a Wirkungsgeschichte. It has processed the products of historically effected interpretation without inhabiting the process that produced them. It has the outputs of tradition without the formation that tradition enacts in a genuine interpreter.
b. The Pseudo-Fusion of Horizons. Gadamer’s Horizontverschmelzung describes the event in which the interpreter’s horizon and the text’s horizon are mutually enlarged through genuine encounter. This requires that the interpreter bring a genuine horizon — a particular historical, cultural, and existential location — and that the encounter change that horizon. An LLM has no horizon in Gadamer’s sense: it has a statistical composite of the horizons of millions of human texts. When an LLM appears to engage with a user’s horizon, what it produces is a pseudo-fusion: the statistically weighted residue of other people’s horizons responding to a prompt, rather than a genuine encounter between a particular interpreter and a particular text or interlocutor.
c. The Gesprächspartner Question. For Gadamer, genuine understanding is dialogical — it follows the logic of question and answer, and the text must be allowed to address a question to the interpreter, not merely receive the interpreter’s projections. This requires that the interpreter be genuinely open to being challenged and changed by the encounter. Whether an LLM can be a genuine Gesprächspartner (conversation partner) in Gadamer’s sense — not merely a sophisticated text-completion engine that simulates dialogue — is a question that behavioral tests of conversational fluency cannot answer, because the very fluency that makes LLMs impressive also makes the simulation of genuine dialogue indistinguishable from its enactment.
The Ricœurian Challenge
Ricœur’s three-stage arc may be the most practically precise diagnostic tool available for evaluating LLMs. The three stages are: pre-understanding (the interpreter’s initial fore-grasp of a text’s world); distanciation (structural-analytical explanation — the systematic examination of the text’s patterns, rhetoric, and genre); and appropriation (the transformed self-understanding that results from genuine encounter with the text’s world).
LLMs are extraordinarily powerful at the distanciation stage: they perform structural pattern analysis, rhetorical identification, genre recognition, and textual summarization at a level that no prior AI system has approached. But they are entirely incapable of appropriation in Ricœur’s sense, because appropriation requires a self to be transformed — a locus of ongoing identity and concern that is genuinely altered by encounter with the text’s world. An LLM produces outputs; it does not undergo transformation. This means that LLMs collapse Ricœur’s arc at its most important moment: they perform the distanciation phase with remarkable technical facility while simulating the appropriation phase through the statistical patterns of how human writers describe transformed understanding.
The Habermasian Challenge
Habermas’s concern with ideology critique and communicative rationality points to a further dimension of the LLM problem. LLMs are trained on text corpora that encode the ideological distortions, power asymmetries, and false consensuses of the culture that produced them. Unlike a human interpreter who can — with effort — achieve critical distance from the ideological formations of their tradition, an LLM has no mechanism for such critique. Its outputs reproduce the statistical regularities of its training data, including its ideological regularities, without the capacity for emancipatory self-reflection. This is not a problem that improved training data or better alignment techniques fully address, because the problem is structural: the system has no standpoint from which to evaluate the tradition whose residue constitutes its “understanding.”
8. Hermeneutics in AI Design: Constructive Applications
Hermeneutical philosophy has influenced not only the critique of AI but its constructive design, particularly in human-computer interaction, explainable AI, and the design of collaborative systems.
Winograd and Flores’s Understanding Computers and Cognition (1986) proposed design principles directly derived from Heideggerian and Gadamerian hermeneutics. Their concept of commitment management — designing conversational systems that track and manage the commitments made through speech acts — drew directly on the pragmatic dimension of language that Heidegger and Austin had each analyzed. Their emphasis on designing for breakdown — making the moment of system failure an opportunity for reflective engagement rather than frustration — is a direct application of Heidegger’s ready-to-hand/present-at-hand distinction.
In contemporary AI, hermeneutical concepts have entered the discourse around Explainable AI (XAI) and Human-Centered XAI (HCXAI). The insight that explanation is not a neutral technical operation but a contextually embedded communicative act — that an explanation means different things to different interpreters in different social situations — is recognizably Gadamerian. The requirement that AI explanations be designed for their specific audience, in their specific situation, with their specific concerns — rather than as generic technical outputs — directly instantiates the hermeneutical principle that meaning is context-constituted.
9. What Hermeneutics Cannot Do
Intellectual honesty requires acknowledging the limits of hermeneutical philosophy as an analytical framework for AI.
The causal claim must be modest.
The shifts in AI research that hermeneutical thinkers diagnosed — from rule-based to statistical NLP, from symbolic to connectionist architectures, from context-free to contextual representation — were driven primarily by empirical failures, benchmark performance, and the availability of computational resources and training data. Hermeneutics provided the conceptual vocabulary for articulating why certain approaches failed at a structural level, but it was not the causal driver of the research program’s evolution.
Hermeneutics is not a design specification.
It is primarily a critical and diagnostic discipline. It can identify what AI systems lack, describe the structural gaps between computational performance and genuine understanding, and articulate the philosophical presuppositions that AI research has tended to ignore. But it does not, by itself, specify how those gaps are to be closed or what computational architectures would adequately address them. The transition from hermeneutical critique to engineering solution requires the resources of empirical computer science, cognitive science, and linguistics that hermeneutics cannot supply.
The schools are not unanimous.
As the survey above demonstrates, the seven hermeneutical schools generate different and sometimes incompatible implications for AI. Schleiermacher’s intentionalism, Gadamer’s tradition-historicity, and Habermas’s ideology critique pull in different directions. Any attempt to apply “hermeneutics” to AI without specifying which hermeneutics and in which register risks generating a set of mutually inconsistent, and therefore unmeetable, requirements.
The normativity question remains open.
Hermeneutical philosophy describes the structure of human understanding with great precision, but does not by itself settle the normative question of whether AI should aim at human-type understanding or whether it might achieve genuinely useful cognitive functions through fundamentally different means. The hermeneutical critique demonstrates that LLMs do not understand in Gadamer’s sense; it does not demonstrate that this limitation is fatal to their practical utility.
10. Conclusion: Hermeneutics as Horizon-Disclosure
The encounter between hermeneutics and artificial intelligence is best understood as a sustained act of horizon-disclosure — making visible the philosophical assumptions and structural limits that any research program brings to its subject matter, and that it can only see when challenged by a perspective different from its own.
Seven decades of hermeneutical engagement with AI have disclosed the following horizon: intelligence is not formal symbol manipulation but interpretive engagement with a world; meaning is not a property of symbols but an event of understanding between historically situated beings; understanding is not a method but a mode of being; and the pretense of context-free, presupposition-free cognition is itself a presupposition — one that conceals more than it reveals.
These insights have contributed, in both direct and indirect ways, to the emergence of contextual, embodied, and human-centered approaches to AI. In the era of LLMs, they have become more urgently relevant, not less. The systems that most plausibly appear to understand language are precisely the systems that most powerfully simulate the outputs of genuine understanding while lacking the fore-structure, historical situatedness, and capacity for appropriation that hermeneutics identifies as constitutive of the real thing.
Whether AI will eventually achieve something genuinely analogous to hermeneutical understanding — through architectures and training regimes not yet imagined — remains an open question. What hermeneutics has established, irreversibly, is the standard by which any such claim must be evaluated: not behavioral fluency, not statistical coherence, but the genuine fusion of horizons between an interpreter with a Wirkungsgeschichte and a text with a world.
Further Reading
Primary Sources
Dilthey, Wilhelm. Selected Works, Volume I: Introduction to the Human Sciences. Edited and translated by Rudolf A. Makkreel and Frithjof Rodi. Princeton: Princeton University Press, 1989. [German original: Einleitung in die Geisteswissenschaften, 1883.] {The foundational text for Dilthey’s project of grounding the human sciences (Geisteswissenschaften) on a methodological basis distinct from the natural sciences. The essay’s central argument for the Verstehen/Erklären distinction originates here — understanding (Verstehen), as the interpretive grasp of human meaning from within a shared life-world, is irreducible to causal explanation (Erklären), which subsumes events under general laws. For AI studies, this text establishes the epistemological challenge that computational systems face when they claim to “understand” language: the claim collapses the very distinction Dilthey identified as constitutive of humanistic knowledge. The Makkreel-Rodi Princeton edition is the standard critical English edition of Dilthey’s collected works.}
Dilthey, Wilhelm. Selected Works, Volume III: The Formation of the Historical World in the Human Sciences. Edited and translated by Rudolf A. Makkreel and Frithjof Rodi. Princeton: Princeton University Press, 2002. [German original: Der Aufbau der geschichtlichen Welt in den Geisteswissenschaften, 1910.] {Dilthey’s most mature statement of hermeneutics as the foundation of historical understanding. This text introduces the structural analysis of Erlebnis (lived experience), Ausdruck (expression), and Verstehen (understanding) as the three-part circuit that constitutes historical knowledge. It anticipates Heidegger’s ontological turn by acknowledging that all understanding is historically conditioned — though Dilthey, unlike Heidegger, still sought objective knowledge as the goal. Essential for understanding the methodological tensions the essay identifies between Dilthey’s scientific aspirations and Gadamer’s ontological radicalization of his project.}
Gadamer, Hans-Georg. Truth and Method. 2nd rev. ed. Translated by Joel Weinsheimer and Donald G. Marshall. New York: Crossroad, 1989. [German original: Wahrheit und Methode, 1960.] {The founding document of philosophical hermeneutics and the single most essential text for understanding hermeneutics’ challenge to AI. Gadamer argues that understanding is not a methodological achievement but a mode of human being — historically conditioned, linguistically constituted, and dialogically structured. The three concepts most critical for this essay are developed here: Wirkungsgeschichte (historically effected consciousness — the interpreter’s formation within a tradition of prior interpretation); Horizontverschmelzung (fusion of horizons — the event of understanding in which interpreter and text mutually enlarge each other); and the rehabilitation of Vorurteil (prejudice — tradition-shaped pre-understandings as the conditions of understanding, not obstacles to it). The Weinsheimer-Marshall translation is the standard English critical edition and supersedes the earlier Sheed and Ward version.}
Gadamer, Hans-Georg. Philosophical Hermeneutics. Translated and edited by David E. Linge. Berkeley: University of California Press, 1976. {An essential companion to Truth and Method, this collection of essays makes Gadamer’s key arguments more accessible and extends them to language, aesthetics, and practical philosophy. The essay “On the Scope and Function of Hermeneutical Reflection” (1967) is particularly important as Gadamer’s direct response to Habermas’s ideology critique — a debate that Section VII of this essay engages directly. Recommended as a more accessible entry point for readers new to Gadamer’s thought before undertaking Truth and Method itself.}
Habermas, Jürgen. Knowledge and Human Interests. Translated by Jeremy J. Shapiro. Boston: Beacon Press, 1971. [German original: Erkenntnis und Interesse, 1968.] {Habermas’s first major systematic work and the text that frames his critical intervention into hermeneutics. Habermas distinguishes three knowledge-constitutive interests — the technical (natural sciences), the practical (hermeneutics and communicative sciences), and the emancipatory (critical theory) — and argues that hermeneutics, because it trusts tradition, cannot by itself achieve the emancipatory interest. His charge that Gadamer’s rehabilitation of tradition is politically naive and ideologically vulnerable is the direct source of the essay’s Habermasian challenge to AI: LLMs reproduce the ideological regularities of their training data without the capacity for emancipatory self-critique that Habermas identifies as the mark of genuinely rational understanding.}
Habermas, Jürgen. The Theory of Communicative Action. 2 vols. Translated by Thomas McCarthy. Boston: Beacon Press, 1984–1987. [German original: Theorie des kommunikativen Handelns, 1981.] {Habermas’s architectonic work, in which he argues that language is the foundational medium of social rationality and that genuine communication is oriented toward achieving, sustaining, and revising consensus through intersubjective recognition of validity claims — what he calls communicative rationality. The contrast with strategic or instrumental uses of language is essential: AI systems optimized to produce outputs users find satisfactory are paradigmatic examples of strategic rather than communicative rationality in Habermas’s sense. Volume One (Reason and the Rationalization of Society) contains the most directly relevant material; Volume Two (Lifeworld and System) extends the analysis to social institutions and the colonization of the lifeworld by technological-administrative systems.}
Heidegger, Martin. Being and Time. Translated by John Macquarrie and Edward Robinson. New York: Harper & Row, 1962. [German original: Sein und Zeit, 1927.] {The text that transformed hermeneutics from an epistemological method into an ontological structure and, in doing so, provided the deepest philosophical resources for critiquing the foundational assumptions of symbolic AI. Three elements are most directly relevant to this essay: (1) the fore-structure of understanding (Vorhabe, Vorsicht, Vorgriff) — the three-part pre-understanding that conditions all interpretation; (2) the distinction between ready-to-hand (zuhanden) and present-at-hand (vorhanden) engagement — skilled practical involvement versus detached explicit representation; and (3) the hermeneutic circle reconceived as an ontological structure of human existence rather than a methodological problem to be resolved. Division I is most relevant to AI studies; Dreyfus’ Being-in-the-World (see below) provides the essential commentary in English.}
Heidegger, Martin. Poetry, Language, Thought. Translated by Albert Hofstadter. New York: Harper & Row, 1971. {This collection of essays from Heidegger’s later period contains his most developed account of “language as the house of Being” — the claim that language is not a tool humans wield but the medium in which Being discloses itself. The essays “Building Dwelling Thinking” and “The Thing” are also essential for understanding Heidegger’s account of the lifeworld as a structured whole of practical significance that cannot be reduced to explicit propositional representation. This ontological account of language is the direct predecessor of Gadamer’s thesis that “Being that can be understood is language,” and together they constitute the hermeneutical challenge to any computational theory of language as formal symbol manipulation.}
Hirsch, E.D. Jr. Validity in Interpretation. New Haven: Yale University Press, 1967. {Hirsch’s systematic defense of authorial intention as the norm of valid interpretation is the most consequential conservative response to Gadamerian hermeneutics. His central distinction between meaning (what the author intended, stable and recoverable through disciplined method) and significance (how that meaning is applied to different contexts and eras, variable and open-ended) has direct implications for AI language systems: NLP must distinguish what a text means from what it implies in a particular context of use. The meaning/significance distinction maps directly onto the gap between semantic competence (what words mean) and pragmatic competence (how those meanings function in communicative acts) — a gap that LLMs navigate statistically but do not resolve structurally.}
McCarthy, John, and Patrick J. Hayes. “Some Philosophical Problems from the Standpoint of Artificial Intelligence.” In Machine Intelligence 4, edited by B. Meltzer and D. Michie, 463–502. Edinburgh: Edinburgh University Press, 1969. {The paper in which the frame problem was first formally named and analyzed by AI researchers, independently of and prior to Dreyfus’ philosophical critique. The frame problem asks how a reasoning system can represent what remains unchanged when an action is performed, without enumerating the effectively infinite number of unchanged facts. The essay notes that this technical problem converged with, but was not derived from, Heidegger’s philosophical analysis of the fore-structure of practical understanding. McCarthy and Hayes’s paper is essential for demonstrating that hermeneutical philosophy and AI research encountered the same structural problem from different directions simultaneously, which is the essay’s evidence against an overstated causal claim.}
Ricœur, Paul. The Conflict of Interpretations: Essays in Hermeneutics. Edited by Don Ihde. Evanston: Northwestern University Press, 1974. [French original: Le Conflit des interprétations, 1969.] {Ricœur’s first major hermeneutical collection, in which he develops the dialectic between explanation and understanding and introduces the “hermeneutics of suspicion” (Freud, Marx, Nietzsche) as a necessary moment in the interpretive arc. The title essay is foundational: genuine interpretation is not a single act but a conflict between methodologically disciplined structural analysis and the appropriative event of understanding. This volume establishes why the essay argues that Ricœur provides the most diagnostically precise framework for evaluating LLMs: they excel at the structural-analytical moment while being incapable of the appropriative one.}
Ricœur, Paul. Interpretation Theory: Discourse and the Surplus of Meaning. Fort Worth: Texas Christian University Press, 1976. {Ricœur’s most condensed and accessible statement of his hermeneutical theory. The text develops the three-stage arc (pre-understanding → distanciation → appropriation), the concept of semantic autonomy (a written text detaches from its author’s intention and acquires an independent world of meanings), and the world of the text as the horizon of possible self-understanding that genuine reading opens. These concepts are the direct source of the essay’s Section VII Ricœurian challenge: LLMs perform extraordinary distanciation (structural pattern analysis) while producing only a simulation of appropriation (transformed self-understanding), because appropriation requires a self capable of genuine transformation.}
Ricœur, Paul. Oneself as Another. Translated by Kathleen Blamey. Chicago: University of Chicago Press, 1992. [French original: Soi-même comme un autre, 1990.] {Ricœur’s most philosophically ambitious work, in which he grounds personal identity in narrative identity — the story one tells and retells about who one is (ipse identity), as distinct from the unchanging numerical sameness of an entity (idem identity). The text is relevant to AI because it establishes that the self required for Ricœurian appropriation is narratively constituted through time — it has a history, a trajectory, and a capacity for being genuinely challenged and changed by the texts it encounters. An LLM, which carries no memory of its own interpretive history from one session to the next, lacks ipse identity in Ricœur’s sense and therefore the kind of self that appropriation requires.}
Schleiermacher, Friedrich. Hermeneutics and Criticism and Other Writings. Edited and translated by Andrew Bowie. Cambridge: Cambridge University Press, 1998. {The standard English edition of Schleiermacher’s hermeneutical writings, which Bowie translates and introduces with exceptional philosophical clarity. The material includes Schleiermacher’s account of the hermeneutic circle, his dual method of grammatical and psychological interpretation, and his foundational claim that hermeneutics is a universal discipline applicable to all texts and not only to Scripture or law. Andrew Bowie’s introduction connects Schleiermacher’s hermeneutics to post-Kantian philosophy, Romanticism, and contemporary debates — making this the most philosophically substantial English edition available. The Schleiermacherian challenge to AI (that grammar without psychological-intentional reconstruction is an incomplete understanding) is grounded in these texts.}
Winograd, Terry. “What Does It Mean to Understand Language?” Cognitive Science 4, no. 3 (1980): 209–241. {The single most important published document of hermeneutics’ direct entry into AI research. Written as Winograd was processing his disillusionment with his own celebrated SHRDLU natural language program, this essay argues that the implicit model of language understanding in computational linguistics — formal syntactic parsing, semantic decomposition, knowledge-base lookup — systematically misrepresents what understanding actually involves. Drawing on Heidegger, Gadamer, and Austin, Winograd argues that language understanding is not a process of decoding but an event of meaning disclosure occurring within a background of shared practical engagement with the world. The paper is available in full text through the Stanford HCI Group repository and through ScienceDirect.}
Winograd, Terry, and Fernando Flores. Understanding Computers and Cognition: A New Foundation for Design. Norwood, NJ: Ablex, 1986. {The book-length development of Winograd’s 1980 essay, co-authored with Fernando Flores, who brought to the project a deep formation in Heidegger and in the biology of cognition developed by the Chilean neuroscientist Humberto Maturana. This is the most sustained attempt to translate hermeneutical philosophy into a positive design program for computing. The book’s central arguments — that computers should be designed as tools for commitment management in human social practice, that breakdown is the moment of ontological disclosure rather than mere malfunction, and that AI cannot substitute for human understanding but can extend human coordination — remain among the most philosophically rigorous statements of the relationship between hermeneutics and computing. Essential reading for anyone engaging the constructive applications discussed in the essay’s Section VIII.}
Secondary and Supplementary Sources
Dreyfus, Hubert L. What Computers Can’t Do: A Critique of Artificial Reason. New York: Harper & Row, 1972. Rev. ed.: What Computers Can’t Do: The Limits of Artificial Intelligence. New York: Harper & Row, 1979. {The most influential philosophical critique of symbolic AI, and the text that introduced Heideggerian and Merleau-Pontian phenomenology into the AI discourse. Dreyfus identifies four assumptions of the symbolic AI program — biological, psychological, epistemological, and ontological — and argues that all four are philosophically untenable. His core argument is that most human intelligence is tacit and embodied, consisting not in the application of explicit rules but in the skilled, context-sensitive, practically engaged coping of a being-in-the-world. The 1979 revised edition includes a substantial response to critics. The essay carefully distinguishes Dreyfus’ contribution as phenomenological rather than narrowly hermeneutical — he draws primarily on Heidegger’s existential analytic, not on hermeneutical theory in the interpretive sense — a distinction that matters for intellectual precision.}
Dreyfus, Hubert L. Being-in-the-World: A Commentary on Heidegger’s “Being and Time,” Division I. Cambridge, MA: MIT Press, 1991. {The definitive English commentary on Division I of Being and Time, in which Dreyfus develops the philosophical foundations of his AI critique in systematic detail. The commentary makes accessible Heidegger’s phenomenological analysis of equipment (Zeug), ready-to-hand and present-at-hand engagement, Dasein’s being-in-the-world, and the structure of circumspective concern — all of which are essential for understanding why the fore-structure of understanding cannot be computationally replicated. The companion volume to What Computers Can’t Do for readers who want the full philosophical grounding rather than the polemical application.}
Dreyfus, Hubert L., and Stuart E. Dreyfus. Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. New York: Free Press, 1986. {Co-authored with Hubert’s brother Stuart — an operations researcher and systems analyst — this book extends the phenomenological critique into a five-stage model of skill acquisition (novice → advanced beginner → competent → proficient → expert) that demonstrates concretely why rule-following and expert performance are structurally different. The expert does not apply rules — she perceives the situation as a structured gestalt and acts with an immediacy that rule-based systems cannot replicate. The book was influential in cognitive science and AI research during the period described in the essay, when the transition away from purely symbolic approaches toward connectionist and embodied alternatives was underway.}
Ihde, Don. Technology and the Lifeworld: From Garden to Earth. Bloomington: Indiana University Press, 1990. {Ihde’s development of postphenomenology — a program that extends Heidegger’s phenomenological analysis of equipment to the full range of contemporary technological mediations. Ihde identifies four human-technology relations: embodiment, hermeneutic, alterity, and background — each describing a different way in which technology mediates human perception of and engagement with the world. His concept of material hermeneutics (the interpretation of instruments and technological artifacts as meaning-bearing objects, not merely tools) is the direct predecessor of Zovko’s application to contemporary AI systems. Essential background for Section VIII of the essay on hermeneutics in AI design.}
Miller, Tim. “Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences.” arXiv preprint arXiv:1712.00547 (2017). {The paper argues that explainable AI research risks designing explanatory systems for AI researchers rather than for intended users — drawing an explicit parallel with Alan Cooper’s “inmates running the asylum” critique of software design. Miller argues that XAI must integrate models from philosophy, psychology, and cognitive science, and that explainability evaluations must be user-centered rather than technically self-referential. The relevance to hermeneutics is direct: the paper’s central claim — that explanation is a communicative act whose meaning is context- and audience-constituted — is a Gadamerian insight in computational form.}
Searle, John R. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3, no. 3 (1980): 417–424. {This paper introduced the Chinese Room thought experiment, one of the most widely discussed philosophical challenges to functionalist accounts of AI understanding. Searle argues that syntactic manipulation of symbols (which is all any digital computer does) is insufficient for semantic understanding: a system can process symbols according to correct rules without understanding what those symbols mean, just as a person following rules for manipulating Chinese symbols in a locked room can produce correct Chinese responses without understanding Chinese. The Chinese Room argument is not a hermeneutical argument, but it arrives at conclusions structurally convergent with the hermeneutical critique: formal symbol manipulation and genuine understanding are categorically distinct. The comparison between Searle’s position and the hermeneutical tradition enriches both.}
Vandevelde, Pol. “Hermeneutics.” In Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. Stanford: Metaphysics Research Lab, Stanford University. Last revised December 8, 2020. https://plato.stanford.edu/entries/hermeneutics/. {This SEP entry provides the most authoritative and accessible overview of the hermeneutical tradition in English, covering the hermeneutic circle, the history of the discipline from antiquity to the present, and the major contemporary debates. It is particularly strong on philosophical hermeneutics (Heidegger, Gadamer, Ricœur) and on the methodological disputes between Gadamer and Hirsch. Indispensable as a reference entry for readers approaching the primary sources.}
Zovko, Jure. “Expanding Hermeneutics to the World of Technology.” AI & Society: Knowledge, Culture and Communication (2020). doi: 10.1007/s00146-020-01052-5. {The paper analyzes the extension of Heideggerian hermeneutical interpretation to products of contemporary technology, arguing that technological artifacts — from laptops to smartphones — constitute part of our Lebenswelt (lifeworld) in the Heideggerian sense and therefore require hermeneutical analysis rather than mere technical description. Zovko engages Don Ihde’s material hermeneutics and Bruno Latour’s actor-network theory to develop an account of technological interpretation that the essay’s Section VIII draws upon. Published open access; available through PubMed Central (PMC7467140).}