Philosophy and the Vision of Language (Routledge Studies in Twentieth-Century Philosophy)

II. RADICAL TRANSLATION AND INTERSUBJECTIVE PRACTICE

Introductory: From Syntax to Semantics and Pragmatics

“The scientific world-conception serves life, and life receives it.”[^140]

-The Vienna Circle Manifesto

In the last two chapters, we have seen how the analytical projects of Frege and the early Wittgenstein already demonstrated some of the revolutionary implications of a determinative theoretical recourse to the structure of language in relation to its everyday practice. Although this recourse did not figure explicitly in Frege’s project of logical clarification, it was nevertheless, as we have seen, already strongly suggested by his application of the context principle to criticize psychologism. In Wittgenstein’s explicit formulation of a use-theory of meaningfulness in the Tractatus, this critical application became the basis of a methodologically radical reflection on the significance of the structure of signs in the ordinary and everyday contexts of their use. Both projects, indeed, insofar as they raised the question of the relationship of signs to their ordinary, intersubjective use, also suggested, at least implicitly, the pervasive and determinative instabilities of a structuralist picture of language in relation to the life of practice it aims to capture. Although it would take a long time yet for these implications to come clearly to light, the projects that immediately followed in the course of the developing tradition of analysis would nevertheless confirm them even as they redefined and broadened the practice of logical or conceptual “analysis” itself.

The first, and most methodologically significant, application of Wittgenstein’s program of logical syntax was, as we have seen, the Vienna Circle’s project of analysis. Carnap, Schlick, and other logical empiricists applied the methods of structural analysis to produce a wide-ranging critical and reformative project, conceived by at least some of its adherents as having radical and utopian social consequences as well.[^141] Especially in its pejorative application against “metaphysics,” the project involved, as recent scholarship has demonstrated, significant and central misunderstandings of Wittgenstein’s original project.[^142] Nevertheless it demonstrated the relevance of the specific methods of logical analysis to broader questions of philosophy of science, politics, and culture, and consolidated the legacy of these methods for the logically based styles of philosophical analysis and reflection that became more and more popular, especially in the USA and Britain, following World War II.

Around the same time, the continuation, by philosophers associated more or less directly with the Circle’s central project, of Frege’s original attempt to display the logical foundations of mathematics, produced a set of radical results, of a mostly negative character, that demonstrated in a fundamental way the inherent instabilities involved in the attempt to analyze their structure. Kurt Gödel’s 1931 “On Formally Undecidable Propositions of Principia Mathematica” reported what would become the best-known and most historically decisive of these results, the two famous “incompleteness” theorems, showing that any consistent axiomatic system powerful enough to describe the arithmetic of the natural numbers contains truths that cannot be proven within that system, and that no such system can prove its own consistency. The result was widely perceived as demonstrating the failure of the logicist program of reducing mathematics to logic that had been begun by Frege and continued by Russell, as well as of Hilbert’s related program of securing mathematics through finitary consistency proofs. It turned on the possibility of constructing, in any sufficiently strong system, a sentence asserting its own unprovability within that system. If the system is consistent, this sentence cannot be proven; and since it asserts precisely its own unprovability, it is therefore true. In reaching it, Gödel used the metalogical technique of “arithmetization” to represent the syntax of a formal system, including the notions of proof and consequence, within that system itself. Working independently with a similar metalogical technique, Alfred Tarski showed in 1933 the indefinability of arithmetical truth within a formal system of arithmetic.[^143] That is, he showed that it is impossible, in any system strong enough to capture the axioms and results of arithmetic, to define within it a formula which holds of all and only the sentences within it that are true (on its standard interpretation).
The result, like Gödel’s, again turned upon the possibility of constructing a “self-referential” sentence, in this case one saying of itself (given any putative truth predicate) that it is not true; to demonstrate this possibility of construction, Tarski depended, as Gödel had, on arithmetization to represent the formal syntax of a language within the language itself.[^144] Both results undermined intuitively plausible assumptions about the ability of formal systems to capture the basis of ordinary judgments about the truth of mathematical propositions.
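The shared self-referential device can be sketched schematically. What follows is a modern reconstruction in terms of the diagonal (fixed-point) lemma, not the notation of Gödel’s or Tarski’s original papers; S stands for any consistent formal system strong enough to arithmetize its own syntax.

```latex
% Diagonal lemma: for any formula \varphi(x) with one free variable,
% there is a sentence G such that
\[
  S \vdash \; G \leftrightarrow \varphi(\ulcorner G \urcorner),
\]
% where \ulcorner G \urcorner is the numeral coding G under the
% arithmetization of syntax.
%
% Goedel's case: take \varphi(x) := \neg\,\mathrm{Prov}_S(x), yielding
\[
  S \vdash \; G \leftrightarrow \neg\,\mathrm{Prov}_S(\ulcorner G \urcorner).
\]
% If S is consistent, G is unprovable in S; and since G asserts just
% this, G is true (on the standard interpretation) but unprovable.
%
% Tarski's case: suppose a truth predicate \mathrm{Tr}(x) were definable
% in S, satisfying \mathrm{Tr}(\ulcorner \psi \urcorner) \leftrightarrow \psi
% for every sentence \psi. Taking \varphi(x) := \neg\,\mathrm{Tr}(x)
% yields a "liar" sentence L with
\[
  S \vdash \; L \leftrightarrow \neg\,\mathrm{Tr}(\ulcorner L \urcorner),
\]
% contradicting the truth schema; hence no such predicate is definable in S.
```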

The results of Gödel and Tarski were to have a deep and determinative influence on the methodological assumptions of philosophers within the analytic tradition. Most decisive were their effects on the program, of which Frege, Russell, Carnap, Schlick, Wittgenstein and Hilbert had all been partisans, of seeking to clarify the logical structure of a language or a specialized portion thereof (for instance the language of arithmetic) purely through a syntactic description of its structure. In a later paper, published in 1944, Tarski presented his own earlier result as demanding that the purely syntactic description of language structures be supplemented with what he called “semantic” concepts of truth and designation. Semantics, he said,

is a discipline which, speaking loosely, deals with certain relations between expressions of a language and the objects (or ‘states of affairs’) ‘referred to’ by those expressions. As typical examples of semantic concepts we may mention the concepts of designation, satisfaction, and definition as these occur in the following examples:

the expression ‘the father of his country’ designates (denotes) George Washington;

snow satisfies the sentential function (the condition) ‘x is white’;

the equation ‘2x = 1’ defines (uniquely determines) the number ½.[^145]

Because it is impossible, as was shown by Tarski’s own earlier result, to give a consistent purely syntactical definition of truth for a language within that language itself, the theorist who wishes to give an account of truth must avail himself also of the semantic or “referential” relationships between the language’s terms and the objects they stand for. The distinction Tarski suggested between syntax and semantics was later to play a definitive role in the foundations of (what would come to be called) model theory. Even more broadly, it expressed the necessity, for a wide range of philosophers who followed, of supplementing the purely syntactical analysis of a language with a “world-directed” semantical analysis of the referential character of its terms and formulas.[^146]
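Tarski’s adequacy condition on any such semantic definition of truth, the famous “Convention T,” can be stated schematically; the following is a sketch of the 1944 formulation, with an object language L and a metalanguage assumed.

```latex
% Convention T (material adequacy): a definition of a predicate True(x)
% for an object language L, framed in a metalanguage, is adequate iff
% it entails every instance of the schema
\[
  \mathrm{True}(\ulcorner p \urcorner) \leftrightarrow p,
\]
% where p is replaced by any sentence of L and \ulcorner p \urcorner by
% a name of that sentence; e.g.
%
%   'snow is white' is true iff snow is white.
%
% By the indefinability theorem, such a definition cannot be given in L
% itself: the metalanguage must be essentially richer than L.
```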

In addition to the dual analysis of language in terms of rules of syntax and rules of semantics, practitioners of analytic reflection on language would soon have a third, explicitly formulated category of sign behavior with which to reckon as well. This was the category of “pragmatics” suggested by Charles Morris in 1938.[^147] Drawing on pragmatist philosophers such as James, Mead, Dewey and (especially) Peirce, Morris suggested that, in addition to the syntactic theory of the relations of signs to one another, and the semantic theory of their relations to their designata, pragmatics be added as a third explicit component of semiosis, or the total theory of sign function. Pragmatics could then be defined thus:

By ‘pragmatics’ is designated the science of the relation of signs to their interpreters…Since most, if not all, signs have as their interpreters living organisms, it is a sufficiently accurate characterization of pragmatics to say that it deals with the biotic aspects of semiosis, that is, with all the psychological, biological, and sociological phenomena which occur in the functioning of signs.[^148]

With explicit reference to Carnap’s “logical syntax” project and to the definition of semantics with which, it now seemed, it had to be supplemented, Morris held that the three dimensions of sign analysis could, jointly, comprise the basis for a complete program of logical analysis.[^149] With the clear separation of the three dimensions of semiosis, and the analysis of the ‘rules of usage’ of given sign vehicles in each of them, the potential objectivity of any sign, and so its utility for scientific description, could be verified. Indeed, with the intersubjective standardization of usage, such objectivity could actually be achieved. Even more generally, through this description of rules, the three-dimensional analysis could clarify, without residue, all the questions and problems that attach to the ordinary concept of “meaning,” showing the uselessness of this concept for logical analysis and the possibility of dropping it from scientific discussion.

These innovations of semantics and pragmatics clearly represent a widening and diversifying of the original, purely syntactical project that had defined the conception of analysis most broadly shared at the beginning of the period of logical positivism. In relation to this original project, they expressed the necessity of a broader set of theoretical categories to capture the referential and intersubjective complexities of sign functioning. Nevertheless, the difficulties and considerations that led to the supplementation of syntax with semantics and pragmatics did not cause any abandonment of the basic structuralist picture of language as a regular totality of signs wholly governed by rules of use. Indeed, as is clear in Morris’ text, the possibility of including the other, non-syntactic sign dimensions in this structuralist picture was even seen as strengthening it. On Morris’ conception, the rules of usage might have to comprise not only syntactical rules of formation and intercombination for signs and sign sequences and semantical rules connecting particular signs to their objects, but also pragmatic rules specifying the tendencies of language-users to employ, or expect the employment of, particular signs on particular occasions. But since all of these rules were “rules of usage” in the relevant sense, and all of them (or so Morris assumed) could be completely and exhaustively described within an analysis of a language as a whole, the introduction of the category of pragmatics posed no essential difficulty for this project of analysis or for the utility of its results. The structuralist picture of language that had originally been the basis of Carnap’s “syntax” project thus continued to characterize the aims and ambitions of analysis, even when the dimensions of semantics and pragmatics were explicitly brought in as well.
The results and tensions that could have demonstrated an inherent and general instability within the structuralist project of analysis were instead taken only to demand, within it, an expansion of the categories of analysis to include the other dimensions of sign functioning that had been ignored by the purely syntactic conception.

Thus, with the conclusion of the project of logical positivism, structuralism remained entrenched as an underlying theory of language; the results that could have led to a more general recognition of its underlying instabilities and inherent tensions were interpreted, instead, simply as requiring an expansion of its terms and categories of analysis. But the question of the relationships between the syntactical and semantical clarification of language and the “pragmatic” dimension of the structure and effects of its use would soon become deeply relevant to quite another development of the methods of philosophical reflection on language. In 1955, John Langshaw Austin, then White’s Professor of Moral Philosophy at Oxford, delivered at Harvard the William James Lectures that were later collected as How to Do Things With Words. The lectures expressed ideas that had occurred to Austin as early as 1939, and had also formed the basis for lecture courses and discussions at Oxford in the 1940s and early 50s.[^150] In the lectures, Austin set out, first of all, to criticize what he saw as a longstanding over-emphasis, in philosophical discussions of language, on the work of “statements” in “stating” or “describing” facts truly or falsely. The recent trend of submitting language to a new level of scrutiny, Austin said, had indeed clarified the fact that, in many cases, what appear to be propositions with sense are in fact either nonsensical or mean something quite different from what they at first appeared to.[^151] Austin followed Schlick and Carnap in proclaiming the new scrutiny a “revolution in philosophy”;[^152] but its further development, Austin suggested, would depend on the recognition of a type of utterance that the new criticism of language had not, as yet, considered.
As distinct from “constative” utterances whose work is to describe or otherwise state (and so can be evaluated as true or false), performative utterances can be defined, according to Austin, as those that, though they do not “describe” or “report”, nevertheless are such that their utterance “is, or is part of, the doing of an action, which again would not normally be described as, or as ‘just’, saying something.”[^153] As homespun examples, Austin offers the utterance of a vow in the course of a marriage ceremony, the naming of a ship while smashing a bottle on its bow, the bequest of an item in a will, or the placement of a bet. None of these utterances is true or false; yet they accomplish their work, the performance of some action, when uttered in the right circumstances and along with the right (normally conventional or traditional) accompaniments.

The question of the status of these circumstances and accompaniments emerges, in the subsequent analysis, as a particularly important and decisive one for the possibility of the general theory of performatives that Austin attempts to develop. For a performative utterance to succeed in accomplishing its ordinary effect, Austin argues, at least two kinds of conditions must normally be satisfied. First, there must be an “accepted conventional procedure,” for instance the marriage ceremony, and it must in fact be completely carried out in the right way and in appropriate circumstances by the right people; second, there are conditions concerning the feelings, intentions, and subsequent actions of the actors, including at least in some cases that they must “in fact have those thoughts and feelings” that the ordinary procedure demands.[^154] Austin devotes the next several lectures to the analysis of these two sets of conditions for the success or “felicity” of performatives. In connection with the second set of conditions in particular, Austin recognizes that the infelicities that can affect the utterances most obviously deserving the status of performatives can also, equally, preclude the success of some constatives, in particular those expressing belief. Here, as Austin admits, the original distinction between constatives and performatives threatens to break down.[^155] As becomes evident upon further analysis, and as Austin himself argues, there is, indeed, no purely grammatical or structural criterion sufficient to distinguish performatives from constatives in all cases.[^156] The underlying idea, of course, is that uttering a performative is doing something, whereas uttering a constative is not (or is, only insofar as what is done is that something is said). We may try, Austin suggests, to mark this difference by noticing the primacy of the first person singular present indicative in the ordinary utterance of performatives.
We might offer it as a criterion, for instance, that all genuine performatives can be put into this form.[^157] But this does not, as Austin says, settle the question. For many non-performatives can also be put into this form, and there are in-between cases as well. In describing the difference between performatives and constatives, we may be tempted to say that in the case of performatives, the person issuing the utterances (and so performing the action) is referred to in a special way, either explicitly in the first person, or, when this does not take place, by being the person who does the uttering or (in the case of writing) by appending a signature.[^158] But again, these criteria fail to distinguish performatives, since they may hold in the case of constatives as well.

The results of Austin’s analysis lead him to despair of finding a general, structurally motivated distinction between performatives and constatives in language as a whole; instead, he suggests that we undertake the analysis of the “speech act” as a “total speech situation” without prejudice to the question of its performative or constative character.[^159] In particular, within the analysis of such situations, “stating” and “describing” are to be seen simply as the names of two particular types of acts, with no essential priority in the large and diverse set of illocutionary accomplishments of which ordinary language is capable. The determination of the truth or falsity of sentences is, then, to be treated simply as one dimension, among many others, of their evaluation in terms of satisfactoriness, and the longstanding distinction between the “factual” and “normative” or “evaluative” thereby undermined in the course of a more comprehensive analysis of language and its effects.[^160]

The “speech act theory” that Austin inaugurated has enjoyed a long and influential career, both within and without the analytic tradition itself. John Searle’s influential development and schematization of Austin’s original distinctions represents perhaps the most direct continuance of the theoretical project of analysis that Austin had suggested; in a somewhat different direction, Paul Grice has developed Austin’s inspiration into a wide-ranging theoretical analysis of “speaker meaning” in terms of the intentions and maxims that are operative in determining and constraining ordinary communication. Some of the subsequent developments of speech act theory, and some contemporary contributions to the analysis of linguistic phenomena such as indexicality, continue to assume, following Morris’ gesture, a distinction of the pragmatic dimension of “speaker meaning” from the semantical and syntactical analysis of the meaning of sentences and words. But in relation to the analytic tradition’s longstanding project of structuralist analysis and reflection on the systematic structure of language, the most enduring and methodologically significant contribution of Austin’s analysis is not simply his development of the third, “pragmatic” dimension of language that Morris had already suggested. It is, rather, his demonstration of the essential inseparability of the pragmatic dimension from the other two, and hence of the insuperable entanglement of any philosophical account of the basis of meaning with the problems of the pragmatic application of signs.[^161]

Within the subsequent development of structuralist methods of analysis and reflection on language, the main effect of Austin’s work was, most of all, to make explicit what had long been implicit in discussions of the “rules of usage” governing a language: namely that such rules, if they exist at all, must be conceived as constraining or systematizing the ordinary, intersubjective behavior and action of individuals in a community. From this point on, and with very few exceptions, the tradition’s main projects devoted to the analysis and clarification of language and its structure all centrally involved reflection on the significance of public linguistic action and its relevance to the determination of meaning. After Austin, these projects almost universally took it for granted that the structure of meaning in a language is intelligible, if at all, in the regularities evident in the linguistic usage of terms and sentences across a variety of circumstances, and controlled by judgments or standards of what is “regular,” “normal,” or “ordinary” within a larger speech community.

Thus the structuralist picture of language, which had begun its philosophical career as the theoretical correlate of the early project of a purely logical or syntactic analysis, explicitly became the expression of a much broader and more varied project of analytical and structural reflection on the relationship of language to the ordinary life of its users. “Reductionist” or “foundationalist” attempts to analyze empirical language into the elementary constituents or sense-data that were earlier supposed to provide their ultimate basis were replaced by “naturalist” and holist projects that assumed no such foundation, instead tracing the meaning of empirical propositions to their public and intersubjectively observable conditions of verification or demonstration. Meanwhile, in the nascent field of “philosophy of mind,” the earlier analyses that had still accorded the subjective experience of an individual a basic and pre-linguistic status as an explanandum ceded to discussions of the use, in essentially public and intersubjective contexts, of the various terms and expressions of mental life. In many cases, the assumption underlying the shift was that those earlier analyses, tracing the phenomena of mental life to the closed interiority of the subject, had ignored or misplaced the significance of language to the question of their status.[^162] The confusion was to be corrected through insistence on the essential role of language in articulating our access to the concepts and terms involved, and of language itself as essentially “public.”

The new forms of analysis and analytic reflection, recognizing the indispensability of reflection on the use of terms and locutions in public, intersubjective contexts, were, however, bound to encounter essential questions and constitutive uncertainties in just the places that the difficulties of Austin’s original analysis already suggested they might lie. If, for instance, linguistic meaning is to be understood as a matter of the usage of terms or sentences across diverse contexts, then the question of the basis of the regularity of this usage is bound to come to the fore. Reference to the influence of a “community” or a set of conventionally established procedures or practices in determining or regulating usage does not solve the problem, but instead raises the additional questions of the nature and institution of such communal standards and the basis of their force in constraining or criticizing individual performances. Emphasis on the essential “publicity” or “intersubjectivity” of linguistic acts tends to make the agency of the individual, what Austin in fact found essential to any possibility of distinguishing performatives from constatives, look mysterious; perhaps more significantly (as we shall see in the next chapter), it poses deep prima facie problems for the analysis of the form and structure of reports of first-person, subjective experience. Finally, and most decisively for the ultimate fate of the specific project with which Austin most directly associated himself, the conception of linguistic meaning as grounded in regular and structurally interconnected patterns of “ordinary” usage makes the elucidation of meaning dependent on the systematic elucidation of these patterns, as they operate within, and define, a language as a whole.
The epistemological and methodological problems of the theoretician’s access to this usage, and his claim to distinguish between the “ordinary” uses of language and its non-ordinary (typically “metaphysical” or “philosophical”) extensions, would prove determinative in the reception, development, and eventual widespread repudiation of the philosophical practices that now came to represent the main stream of analytic philosophy.

The school of “ordinary language philosophy” that Austin and Ryle represented, and that flourished at Oxford after World War II, was initially influenced more by Moore and the early Wittgenstein than by the Vienna Circle. Nevertheless, like the philosophers of the Circle, its foremost proponents took it that reflection on the systematic interrelationships of terms and propositions, and the regularities of their use in various contexts, could provide the basis for a radical critique of the illusions and unclarities to which we can regularly (and especially when doing philosophy) be prone. One chief form of these errors was the tendency of language to appear to refer to pseudo-objects or fictitious entities which, upon analysis, could be seen to be eliminable within a clarified account of linguistic reference. As early as 1932, in the influential article “Systematically Misleading Expressions,” Ryle had argued for the utility of such an analysis of the reference of ordinary terms and phrases in demonstrating their misleading referential pretensions.[^163] Such analysis involved, as it had for Frege and Russell, demonstrating the real logical form of the terms and locutions in question, over against the tendency of ordinary language to obscure them. It therefore required the determination and specification of the logical or (as Ryle was inclined to put it) categorical structure of terms in a language as a whole.
The errors and confusions to which philosophical analysis most directly responds, Ryle argued in the 1938 article “Categories,” could uniformly be presented as categorical confusions, failures to understand or distinguish the categories or logical types to which, within the structure of a language as a whole, certain terms belong.[^164] Such analysis, Ryle followed Frege and Wittgenstein in holding, would trace the structure of terms in a language by reflecting on the inferential relations among propositions as a whole, for, as Ryle put it, the logical types into which terms in a language must be sorted “control and are controlled by the logical form of the propositions into which they can enter.”[^165] In accordance with this recognition, Ryle argued, logical analysis of propositions to show their categorical structure - to identify and analyze the simple concepts that comprise them - must begin with the identification of logical relationships of identity and difference of sense among whole propositions:

It has long been known that what a proposition implies, it implies in virtue of its form. The same is true of what it is compatible and incompatible with. Let us give the label ‘liaisons’ to all the logical relations of a proposition, namely what it implies, what it is implied by, what it is compatible with and what it is incompatible with. Now, any respect in which two propositions differ in form will be reflected in differences in their liaisons … Indeed the liaisons of a proposition do not merely reflect the formal properties of the proposition and, what this involves, those of all of its factors. In a certain sense, they are the same thing …[^166]

Like Wittgenstein in the Tractatus, Ryle thus held that a proposition’s logical relations with other propositions determine its logical form; and it is only by determining these relations that its simple terms can be isolated. Ryle followed Wittgenstein, as well, in identifying the simple terms thereby shown with symbols defined by their logical possibilities of significant use in propositions. The resulting segmentation of propositions into their constituent concepts would yield a categorial grammar for the language, a structure or system of categories whose possibilities of significant combination are the direct image of the logical relations of significant propositions.
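Ryle’s idea of a categorial grammar can be illustrated, anachronistically, in the fractional notation of Ajdukiewicz’s contemporaneous (1935) categorial grammar; the notation is not Ryle’s own, but it captures the thought that the categories of terms mirror, and are mirrored by, the forms of the propositions into which they can enter.

```latex
% Basic categories: n (name), s (sentence). A predicate receives the
% derived category s/n: it yields a sentence when combined with a name.
% Combination proceeds by cancellation, as with fractions:
\[
  \underbrace{\text{Socrates}}_{n}\;\;
  \underbrace{\text{is wise}}_{s/n}
  \quad\Longrightarrow\quad
  \frac{s}{n}\cdot n = s.
\]
% A "category mistake," on this picture, is a combination whose
% category indices fail to cancel to s: the result is not a false
% proposition but an absurdity.
```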

The doctrine of categories expounded in the 1938 article provides the setting for the notion of “category mistakes” that would become Ryle’s most pervasive critical tool in the widely influential reflection on the logical structure of the ordinary language of “mental life” that he undertook in The Concept of Mind. Such mistakes, he held there, stem from the failure of users of language to appreciate the systematic categorical structure of the terms they use. As a result, they tend to formulate propositions which are in fact absurdities, although they may not seem to be so at first glance. The analyst’s work, in criticizing the absurdities inherent in traditional philosophical theories, consists in elucidating the actual categorical structure inherent in ordinary usage in order to show the particular ways in which the traditional philosopher abuses it. Here, Ryle takes the theory of mind tracing back to Descartes, in particular, as a target of philosophical criticism. It is to be shown to consist in a single overarching category mistake subsuming a wide variety of smaller, more specific ones. The analysis and treatment of these individual mistakes sets the agenda for the specific analyses of the concepts of intelligence, thinking, perception, and intention that Ryle undertakes.[^167]

For Ryle as well as for Austin, therefore, the possibility of directing reflection on language against the errors of traditional philosophical theories depended on the theorist’s ability to elucidate the actual logical structure of the ordinary use of terms within a language as a whole. This ambition to characterize the actual logical structure of use was the basis of Ryle’s attempt at “rectifying the logical geography” of our concepts as well as Austin’s unsuccessful attempt to systematize the distinction between performative and constative. In both cases, even if a total or completed description of the overall structure of language was not in view, philosophical insight was seen as relying on the partial application of reflection on distinctions and implications of ordinary usage to especially problematic areas. The standard for such reflection was the patterns of regularity and difference of usage implicit in the speech of language users, as these could emerge upon a bit of systematic reflection.

In the 1953 article “Ordinary Language,” Ryle sought to explain the program and defend it against misinterpretation by defining its key concepts and characteristic methods.[^168] Philosophically relevant reflection on “the ordinary use” of various expressions does not, Ryle clarifies, restrict itself to “ordinary” or “vernacular” terms or demand the drawing of any adventitious line between terms in use in “everyday” contexts and those employed only in special theoretical or technical ones. Nor is the philosopher’s attention to the use of an expression correctly directed toward what Ryle calls a “usage” - namely a “custom, practice, fashion or vogue” of using it. Whereas to know how to use a term is always, for Ryle, to know how to do something, “knowing” a usage in this sense does not amount to such a knowing-how. For it makes sense to suppose that a term may be, in some context, misused, but “there cannot be a misusage any more than there can be a miscustom or a misvogue.” What the philosopher who attends to the uses of words takes interest in is not, therefore, the description of customs or practices of using them, but rather the distinction between what Ryle calls their “stock” or “standard” and “non-stock” or “non-standard” uses. He seeks to elucidate, in other words, in any particular case, what a term is doing when it does what it ordinarily does, what it accomplishes when it accomplishes the job it normally accomplishes. What is elucidated in such an elucidation, according to Ryle, is what earlier philosophers grasped as the nature of “ideas,” “concepts,” or “meanings”; we can understand it, in a more contemporary idiom, as capturing the “rules of logic” as well:

Learning to use expressions, like learning to use coins, stamps, cheques and hockey-sticks, involves learning to do certain things with them and not others; when to do certain things with them, and when not to do them. Among the things that we learn in the process of learning to use linguistic expressions are what we may vaguely call 'rules of logic'; for example, that though Mother and Father can both be tall, they cannot both be taller than one another; or that though uncles can be rich or poor, fat or thin, they cannot be male or female, but only male.[^169]

For a brief time immediately after World War II, the methods of “Oxford language analysis” enjoyed great popularity. During this period, as the methods of analytic philosophy developed most centrally by the Vienna Circle and its associates were being transmitted to other scenes and supplanted by their methodological descendants, the Oxford style of analysis was even routinely treated as capturing the claims of linguistic analysis tout court. For many of those who were just beginning to realize the philosophical implications of the reflection on language that Wittgenstein and the Vienna Circle had begun, the claims and practices of the Oxford analysts seemed to capture, especially well, the possibility of using such reflection to criticize traditional sources of philosophical error without, nevertheless, leading to (what was now being recognized as) the newer error of verificationism. But the vogue of ordinary language philosophy was brief. It was soon to become the subject of widespread doubts as well as brutal and almost wholly unjustified attacks on its basic methods and practices of philosophical clarification and analysis; these led, by the 1960s and 70s, to its general repudiation and replacement by other projects, in particular the methods of formal analysis and interpretation more directly associated with Quine and Davidson.

One of the most direct, if unfortunate, reasons for this repudiation was the attack launched by Ernest Gellner in his celebrated book Words and Things in 1959; the book, which was notably introduced by Russell, accused ordinary language philosophy and the whole methodology of linguistic analysis of pursuing an empty and essentially “conservative” project that substituted the “cult of common sense” (p. 32) for genuine insight into reality and thus blocked or precluded any possibility of criticizing socially entrenched practices or norms.[^170] The book became the cause of a notorious and public scandal when Ryle refused to allow a review of it to appear in Mind and Russell protested the refusal in the Times. The resulting exchange ran for several weeks and consolidated, in the popular imagination, the image of a bitter debate over the proper methods and results of philosophical analysis. As was recognized by most of the philosophers who weighed in on the debate, though, Gellner’s image of the practices of ordinary language philosophy had been, from the beginning, a caricature. His arguments against such supposed bases of ordinary language philosophy as the “paradigm case argument” and the “contrast theory of meaning” did not, in fact, address any recognizable component of the methods that Austin, Ryle, and Wisdom had in fact articulated and defended.[^171] But rather than producing a broader, more critical discussion of its methods and the implications of their recourse to language, Gellner’s attack led, for the most part, to the still-current tendency to discuss ordinary language philosophy as a bygone or superseded method, without gaining any clear understanding of why it is so or what makes the methods that replaced it any better.

More generally, the practice of ordinary language philosophy still represents one of the most detailed and methodologically articulated expressions of the reflective and critical implications of our knowledge of language for the traditional problems of philosophy. As such, it expresses in a determinate and methodologically sophisticated way the significance of this knowledge of language for the form of a human life, and of its clarification for the solution or resolution of its problems. There is a tendency, evident in Gellner’s attack and still unfortunately widespread, to take the inherent instabilities of our access to language to show the irrelevance of linguistic reflection to the problems of a philosophical inquiry. This tendency is, no doubt, partially responsible for the dissimulation or refusal of language as a specific source of philosophical insight in many of the current projects that nevertheless still persist in practicing modes of analysis or reflection first determined by the problems of our ordinary access to language. But it need not be taken to demand the wholesale refusal of the methods of ordinary language philosophy that are in fact responsible for some of the analytic tradition’s deepest and most penetrating insights into language, use, and our relationship to the words we speak. Recovered within a broader critical consideration, these methods could contribute substantially to a sharpening of these insights, and a consolidation of their significance for the future of philosophical inquiry.