
Bare-Naked Philosophy

Does knowing a word’s meaning reveal the nature of things?

What happens in philosophy when science discovers facts about the mind – or about the brain? Nothing much. To paraphrase most mainstream philosophers, “science has nothing to teach philosophy.” Really? Are we not talking about the same thing – the mind? Yes, but many philosophers in the last century assumed, and many still assume today, that philosophers have a different and much deeper method for getting at the Really Deep Truths. Conceptual analysis – reflecting on what words mean – was thought to be the philosophers’ special route to those Really Deep Truths. This might involve thought experiments, but not, of course, real experiments. Mere science could neither reveal nor challenge those Deep Truths. Not everyone bought into this. But Willard Van Orman Quine at Harvard was the first established philosopher to expose this swell emperor as merrily swaggering about in nothing but his birthday suit.

Quine’s books changed my intellectual life. He made me realize that the old philosophers, such as Aristotle and Hume, were right, while the new boys – Fodor and Kripke, for example – were wasting my time. Philosophy is about understanding the nature of things. Armchair declarations about word meaning, by contrast, are not about the actual nature of perception or choice or consciousness. Hence my shift to neurophilosophy.

Below is my foreword to the new edition of Quine’s classic, Word and Object, first published by MIT Press way back in 1960. Amazingly enough, the book’s message is still relevant. Speaking of inertia…

Foreword

In the winter of 1966 the Philosophy Department at the University of Pittsburgh ran a graduate seminar on a controversial book, Word and Object, by WVO Quine. Already contentious for having melted down the profession’s favorite tool – the analytic/synthetic distinction – in his much-ballyhooed “Two Dogmas of Empiricism,” Quine now went further.

The Pittsburgh seminar divided along these lines: those who adhered to the idea that conceptual analysis revealed necessary truths about the way things are and the way the mind works, and those who, siding with Quine, did not. The weekly meetings were scenes of fiercely fought battles, led mainly by the more senior graduate students, who well understood the stakes in the debate and who could draw deeply on the history of science and philosophy to make their points. Wilfrid Sellars was a powerful figure at Pittsburgh, and though he was skeptical about many claims of necessary truth, some still seemed to him defensible. Sellars’ students mounted a spirited defense.

It was a melee, a rhubarb, a brawl where no holds were barred. And the discussion was not confined to the seminar, but raged all week, over coffee, over beer, and in the common room. Are there any a priori truths or just highly probable, very strongly held beliefs? Is language essentially just a communicative tool, not a repository of conceptual truths? If concepts change as empirical discoveries are made by the developing sciences, does that hold also for deeply personal concepts like knowledge, free will and consciousness? Is metaphysics just a batch of questions not yet answered by science? Likewise for epistemology and philosophy of mind?

For all of us in that memorable seminar, these were questions at the heart of philosophy as practiced in the twentieth century. Quine, it was clear from Word and Object, fully realized the implications of his points. As he said, “And philosophy in turn, as an effort to get clearer on things, is not to be distinguished in its essential points of purpose and method from good and bad science” (Word and Object, pp. 3–4). Notice: purpose and method. He meant what he said.

Quine taught us that a “conceptual scheme” is a loose and dynamical organization of interconnected beliefs and meanings. He realized that separating beliefs from meanings was mainly a pragmatic, not a principled, business, yielding nothing interesting by way of necessary truths. When important beliefs about the world change, it is evident that meanings change too.1 In the brain, there would be no principled difference. Consequently, as fast as Sellarsians in our seminar contrived thought experiments to bolster claims about conceptual truths, the Quine faction dissected them as parochial, circular, or uninformed. They saw no respectable way to test a thought experiment except by an exercise of the imagination, a method sorely in need of a more reliable foundation if it is to tell us anything about how things actually are, as opposed to how someone happens to suppose they are.

The standing strategy among the seminar’s Quineans was to challenge in all contexts any claim to necessary truth – to conceptual truth. After all, the obvious necessary “truth” that space was Euclidean had been exposed by science as a falsehood. Other “necessary truths” -- such as that knowledge of one’s own mental states is incorrigible -- suffered comparable indignities. One earnest response was to complain that if the counter-examples were allowed, meaning would change and your conceptual scheme might fall apart (I am not making this up). Yes, meanings do change, was the reply, and that was precisely Quine’s point. Meanings are not essences in Plato’s heaven. And so it went.

It took a long time for the monumental significance of Quine’s work, including his later essays on naturalizing epistemology, to sink in. (Naturalizing epistemology meant using science to understand learning and memory, for example.) Surprisingly, many philosophers went on doing conceptual analysis and pushing alleged necessary truths as though Quine were irrelevant. His main arguments were not so much countered as sidestepped. Conceptual necessities remained fashionable, though as often as not they were merely convictions marketed as necessary truths. Meanwhile, barely noticed by the profession, the sciences of brain and behavior moved on.

Neuroscience made progress in understanding how brains construct perceptual images from retinal stimulation, how brains learn and remember things, and how brains make decisions, just as Quine had believed it probably would. The idea that bedrock meanings must correspond to sense data rather than to objects like dogs and dads fell apart, because early visual processing – in the retina, in the thalamus, and in cortical visual area V1 – is not conscious.

Clinical neurology produced striking patient profiles that implied the need for conceptual revision: for example, split-brain subjects in whom conscious awareness was not unified, cortically blind patients who were nevertheless utterly convinced they could see (Anton’s syndrome), and amnesic patients who maintained a sense of self despite having lost virtually all autobiographical memory. The logic seemed stark: either you deny the data, or you see your conceptual necessities concerning “self” or “consciousness” reduced to merely empirical claims whose truth was on the skids.

Psychologists began to study conceptual structure empirically, finding that workaday concepts are not defined in terms of necessary and sufficient conditions. Rather, they have a radial structure, with prototypes marking general agreement on what counts as an instance, and strong similarity to the prototype falling off with distance from the center. The boundaries are fuzzy, not sharp, meaning that sometimes there is no right answer to whether an instance falls under a category or not. This holds not only for categories like vegetable and friend, but also for knows and believes. Field linguists began to find that linguistic categories tend to reflect local ecology, a group’s history, and the way members of the group make their living. Linguistic universals, long the darlings of theorists, took a drubbing as, one by one, they fell to the disconfirming data of field linguists.2

Developmental psychologists began to discover what cognitive organization the newborn brings to its world, and how cognitive capacities develop and change over time. These discoveries did not yield necessary truths but rather empirical truths about how brains navigate their physical and social worlds.

Of course a priori truths with no epistemological heft could always be cooked up. As Quine pointedly acknowledged, sure, you could dig in your heels and refuse to allow a change in meaning consequent upon a discovery of fact. If you are stubborn enough, you could insist that fire is an element because by element ‘we’ mean earth, air, fire and water. Nonetheless, such heel-digging is unlikely to be rewarding. The method, alas, is ad hoc and problematic; it is more similar to bad than to good science. And anyhow, the project no longer looks like analysis of concepts actually in use, but a futile exercise in conceptual hygiene aimed at rescuing a discredited idea.

To many whose copy of Word and Object had become dog-eared and held together by rubber bands, a wide range of scientific achievements in the brain and behavioral sciences seemed to fit the idea of empirical progress in epistemology that Quine had broadly advocated. Thus at some point in the early 1970s, Paul Churchland and I looked at each other and agreed: it is pretty clear by now that the arguments concerning naturalizing epistemology favor Quine. So let us just get on with it. Neuroscience had become irresistible, and there was no reason to want to resist it. Ditto for psychology, behavioral economics, and computer science. Others who had initially viewed philosophy as a method for augmenting our understanding of the mind-brain also saw the fertility in the brain and behavioral sciences, and many left philosophy to pursue those sciences. The die-hard conceptual analysts waved us off, cheerfully predicting that nothing of philosophical significance would come from the advancing brain and behavioral sciences. Quine, by contrast, had rightly suspected where this was leading.

I have no doubt that Quine had to muster a great deal of courage in order to publish Word and Object, for he was bucking an overwhelmingly powerful tradition of conceptual analysis as a method for advancing knowledge. He was not just biting at its heels; he was rooting out the core. As he calmly noted, he wanted to view language as a physical phenomenon. There are mechanisms underlying language use, and there are productive ways to study those mechanisms. Conceptual analysis is not a productive way to address them. Suitable clarification is always welcome, of course, but forced or phony precision where none exists is counterproductive.

So what is a philosopher to do, if not troll his mind for conceptual truths? The Quinean answer is this: many things, including synthesizing across various subfields, and theorizing while immersed in and constrained by the available facts. Despite much hand-wringing by overwrought philosophers, Quine did not aim to put an end to philosophy, but to remind us of what the older philosophical tradition had always been: broad, encompassing, imaginative, and knowledgeable about everything relevant.

1. See also Roger Gibson’s excellent book, The Philosophy of WVO Quine: An Expository Essay (1982).

2. Daniel Everett, Language: The Cultural Tool (Random House/Pantheon, 2011).

Patricia S. Churchland is the author of the forthcoming Touching a Nerve: The Self as Brain, published by W. W. Norton.
