A decade ago, philosopher Peter Unger said the following in an interview:
To me, all this sort of stuff is parochial, or trivial. People who are signing up for philosophy don’t think they’re going to end up with this kind of stuff. They want to learn something about the ‘ultimate nature of reality’, and their position in relation to it. And when you’re doing philosophy, you don’t have a prayer of offering even anything close to a correct or even intelligible answer to any of these questions.
In a way, all I’m doing is detailing things that were already said aphoristically by Wittgenstein in Philosophical Investigations. I read it twice over in the sixties, pretty soon after it came out, when I was an undergraduate. I believed it all — well, sort of. I knew, but I didn’t want to know, and so it just went on.
… I should have known better as an undergraduate, with David Lewis in our sophomore year. We read Philosophical Investigations, twice over with yellow markers. We knew, but didn’t want to know. So we puffed around, and churned out a lot of pages.
This interview was done in part to promote his then-new book, though I suspect this kind of self-effacing bluntness isn’t the most effective way to get folks to read the pages you churn out.
But yes, much philosophy should have stopped at Wittgenstein’s Philosophical Investigations. Its relentless questioning had a point:
What use is this [philosophical] investigation? It seems to destroy what is interesting, great, and important, like wrecking buildings and leaving stone and rubble. But we destroyed only houses of cards, and cleared up the ground of language on which they stood.
The thesis was that so many of these philosophical things, which presented as rather profound or fundamental or transcendental or what have you, were really just lexical issues, whose resolutions were dictated strictly by opt-in, opt-out lexical choices.
Consider Die Hard. Is it a Christmas movie? As folks bat the question around, they refer to this or that feature observed in Die Hard, but alongside each such observation is, explicitly or implicitly, an assertion of whether that puts Die Hard within the category of “Christmas movie” or outside it.
As one observes these quibbles, one soon realizes that nobody’s arguing about Die Hard, really; they’re arguing about the category “Christmas movie,” which is to say, the litmus criteria by which the label “Christmas movie” should be applied. Some have narrow litmus criteria that require the film to be Christmas-focused, with an overt lesson about acknowledging the holiday and its meaning. Others have broad litmus criteria that take time of year, Christmas paraphernalia, mention of the holidays, etc. as qualifying features when their tally crosses some fuzzy “sorites”-like boundary.
Turns out that most attention-getting philosophy is just that. And nothing more.
Philosopher Hilary Putnam, in 2004:
Conceptual relativity… holds that the question as to which of these ways of using “exist” (and “individual,” and “object,” etc.) is right is one that the meanings of the words in the natural language, that is, the language that we all speak and cannot avoid speaking every day, simply leaves open. Both the set theory that developed in the 19th (and early 20th) century and the mereology that Leśniewski invented are what I will call optional languages (a term suggested by Jennifer Case)… The question of whether mereological sums “really exist” is a silly question. It is literally a matter of convention whether we decide to say it exists.
And for a spree of further examples, here’s some indulgent self-quotation from a few years back:
For example, I could say my body includes my mind. I could say my self includes my body. I could even say that my self is equivocal to my body. I could say my hair is a part of my body, or that it is not anymore. I could say my dead skin cells are a part of my body, or not. I could say that a tumor is part of my body, or not. I could say, even under naturalism, that my-self is something beyond my body at any given state, and more of a collection of patterns across a period of time. Then again, someone could say the same about my body. In a certain culture, "body" may be qualitatively limited to a small time period, whereas "self" is over my lifetime, and goes through many changes. In another, "body" may be understood as the thing that undergoes change over time as it grows and develops, while "self" is seen as a thing that is reborn, i.e., a person has many selves throughout their lifetime.
And how do you judge which definition is "right"? You can look for inconsistencies, of course, but some frameworks have as much internal consistency as others. You can accuse folks of being fuzzy, but sometimes fuzziness more precisely captures reality than a contrived pretense of precision. And so typically what you do is you judge them by their being "unintuitive" or "strange." Which, of course, is just an expression of what you're used to and what you aren't.
A key moment for me was in 2016 when I was driving to my daughter's daycare, and she asked where "the sky" starts.
And I realized that not only are there different senses we use meaningfully — sometimes we imagine it as equivocal to the atmosphere, but we usually don't call the air right next to my face "the sky"; sometimes we imagine it as more of a two-dimensional region in our viewports (whatever we see outside that isn't land or an airborne object); if I throw a ball vertically, two perfectly sane people could dispute whether it "truly" "went into the sky" depending on those senses and impressions and vibes — but there's no "referee" outlawing or permitting any particular usage, save for heuristics like consistency (which we often don't obey) and utility (particularly in communication; in brief, we want to avoid confusion as we share our thoughts).
And this applies to everything in philosophy of mind (terms like “mind,” “body,” “consciousness,” “thoughts,” “self,” “personality,” etc.), epistemology (terms like “know,” “justify,” “evidence,” and “warrant”), identity (terms like “is” and “self”), modality (terms like “possible,” “possible worlds,” “contingent,” “counterfactual,” and “necessary”), ethics (terms like “good,” “utile,” “flourishing,” “significant,” and “value”), agency (terms like “free will,” “responsible,” “agent,” “choice”), etc.
The mess is pervasive because metaphysics is just whatever we say "about the stuff" — i.e., it's language.
A metaphysic is a lexicon, a set of expressive choices (labels, framings, litmus criteria in various contexts, taxonomies, gestures, symbols, etc.). There isn’t just one of them. The available metaphysics are beyond count.
Nothing is safe. Even Aristotelian “actuality,” “potentiality,” “substance,” etc. can be interpreted in various ways. For each set of ways, that is a materially different metaphysic. Perhaps the ways in which they’re commonly used all have coherence problems. Could be!
Nothing is safe. “Simple” doesn’t even have a simple meaning, as Wittgenstein showed…
"We see we use the word 'composite' (and therefore the word 'simple') in an enormous number of different and differently related ways."
… and for each set of ways, that is a materially different metaphysic.
Nothing is safe. “Exist,” “illusion,” “right,” “wrong,” “rational,” “logical,” “probable,” “physical,” “significant,” “plausible,” “intuitive,” “seems,” “subjective,” “objective” — you’re seeing the pattern now, I hope. It’s everything. And this isn’t just an annoying situation; it’s a fraught one.
But knowing bad news is its own good news. This recognition, that all of our “philosophy words” are radioactive, leaves us finally capable of appreciating philosophy’s biggest problem: polysemy.
Polysemy is when a single term carries subtly different meanings, perhaps even a galaxy of subtly different meanings.
Now, we all know this can happen. We might even understand that it happens quite often, but we hope it largely gets addressed. Confusion happens, it’s messy, but people get back on the same page, and everything’s fine. Right?
Not right.
What actually happens: The philosophical arguments you hear most about are riddled with terms that are chronically under-defined and under-notated, and this may be part of the reason you hear so much about those arguments. These ambiguities allow for a strange syllogistic trick that drives both confusion-based engagement (good for surfacing) and the false appearance of a bold-yet-rational conclusion (good for surfacing).
The rundown, from a lovely discussion with philosopher Lance Bush earlier this year:
And this one weird trick is the root of the torrent of bathwater that continues to cascade through philosophy, obscuring the little baby worth holding onto.
That’s because, when it comes to engagement, the bug is a feature:
Yeah, problems of individuation are ubiquitous. And, as you say, there’s no privileged set. Philosophers are loath to admit this because they think it’ll destroy philosophy. But it’s actually the first step to enlightenment. You just have to ask the right questions. Like: why are we so interested in analysis — in reducing larger individuations (e.g. ideas and concepts) to smaller individuations? And why is there always a gap — a counterexample — such that the parts don’t quite add up to the whole?