When we speak of a property or trait instantiated by an object, we make two assumptions: the object in question has a property Bo which causes it to interact with the surrounding world in a particular way, and the perceiver has a property Bp which causes them to perceive those interactions in a particular way. This is an asymmetric relationship between perceiver and perceived.
Thus, when we talk about intensionality (what the concept of some predicate actually means), the perceiver must be taken into account: the intensionality expressed by “x is blue” can be written as Bx, but in fact it would be more accurately expressed as:
∃x∃y((Box ∧ Bpy)→Bxy)
Where Bo is the way in which x interacts with the world (e.g. the way in which the chemical properties of the object interact with light) and Bxy is the relational proposition for the way in which y perceives x. Or, in words: there exists an object x and there exists a perceiver y such that if x has a property Bo that interacts with the world in a certain way and y has the property Bp, then y will perceive x as B (i.e. blue).
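The formula above can be checked mechanically over a small finite domain. The following is a minimal sketch, assuming a toy world of two objects and two perceivers; all names (Bo, Bp, Bxy) are illustrative stand-ins for the symbols in the formula, not part of any library.

```python
# Toy finite model of ∃x∃y((Box ∧ Bpy) → Bxy).
objects = ["sky", "grass"]
perceivers = ["alice", "bob"]

# Bo: the object interacts with light in the "blue" way.
Bo = {"sky": True, "grass": False}
# Bp: the perceiver's visual system is disposed to register that interaction.
Bp = {"alice": True, "bob": False}  # bob lacks the blue-disposition

def Bxy(x, y):
    """y perceives x as blue exactly when both dispositions are present."""
    return Bo[x] and Bp[y]

def cond(x, y):
    """Material conditional (Box ∧ Bpy) → Bxy."""
    return (not (Bo[x] and Bp[y])) or Bxy(x, y)

# ∃x∃y: true if some object–perceiver pair satisfies the conditional.
exists = any(cond(x, y) for x in objects for y in perceivers)
print(exists)  # True
```

Because Bxy is defined here as requiring both Bo and Bp, the conditional holds for every pair in this toy model; the interesting cases arise once Bxy can come apart from the two dispositions.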
Intensionality is defined by Carnap as that aspect of a concept which is analytic and logically true: in “x is B”, the intensionality of B is the preservation of meaning when different predicates are substituted for B. But if we say that:
B ≔ Bo ∧ Bp
Bo ≠ Bp
What intensionality remains if one of the components of B, either Bo or Bp, is changed?
Perceptions in the imagination, in dreams, or in hallucinations could then be thought of as Bxy where x refers to nothing, making the conditional false (modus tollens). But this does not mean that y has not had a perceptual experience, only that the perceptual experience was missing part of the full intensionality of some property (i.e. B = blue).
But what is the relationship between Bo, Bp, and B? There is no necessary relationship: other things can cause B, and Bo does not require that B occur (for instance, if there is no y to perceive it, or if y has some other property Ap instead of Bp, such that Bo causes some other perceptual experience A, as in the case of color blindness). The intensionality of a predicate, then, requires a determinate x and a determinate y for the predicate to be true. This also implies that as long as ∃x∃y((Box ∧ Bpy)→Bxy) can be stated with universal quantification:
∀x∀y((Box ∧ Bpy)→Bxy)
Then it is taken as full intensionality. For, if it were only the case that:
∃x∀y((Box ∧ Bpy)→Bxy)
Then the dependence on x for Bo would be factual (contingent) and therefore could be different for each instance of x. For instance, we cannot say that it is the case that all cars are blue, and therefore we cannot predicate blue of all cars. This is somewhat trivial. More significant, though, is that if we say:
∀x∃y((Box ∧ Bpy)→Bxy)
Then there is no way of saying that a predicate B is true for some x, because there is still the dependence on y. The truth of the predicate is completely dependent on the particular y being instantiated: regardless of whether every x interacts with the world in the same way (and regardless of which way that is, be it Bo, Ao, Do, or whatever else), all that matters is the relationship Bxy for some perception B by y to be true. This also depends on the range of the quantifier for y: we could say, for instance, that it quantifies over all humans, but then it is not quantified over all possible perceivers.
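The difference the quantifier order makes can be seen by evaluating all three forms over one small model. This is an illustrative sketch, assuming a world where one object–perceiver pair fails to yield the perception even though both dispositions are present (say, odd viewing conditions); the names are placeholders.

```python
from itertools import product

objects = ["o1", "o2"]
perceivers = ["y1", "y2"]
Bo = {"o1": True, "o2": True}   # both objects interact with light the blue way
Bp = {"y1": True, "y2": True}   # both perceivers have the blue-disposition

# The actual perceptions: y2 fails to perceive o2 as blue.
B = {("o1", "y1"): True, ("o1", "y2"): True,
     ("o2", "y1"): True, ("o2", "y2"): False}

def cond(x, y):
    """Material conditional (Box ∧ Bpy) → Bxy."""
    return (not (Bo[x] and Bp[y])) or B[(x, y)]

forall_forall = all(cond(x, y) for x, y in product(objects, perceivers))
exists_forall = any(all(cond(x, y) for y in perceivers) for x in objects)
forall_exists = all(any(cond(x, y) for y in perceivers) for x in objects)

print(forall_forall, exists_forall, forall_exists)  # False True True
```

One failed pair is enough to refute ∀x∀y (full intensionality), while both weaker forms survive: ∃x∀y is witnessed by o1, and ∀x∃y holds because even o2 is perceived as blue by some y.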
One might argue that, for two different y’s, call them y1 and y2, even though we would predicate Bp of y1 and Ap of y2, both would still use the same word to describe their individual experiences Bxy1 and Axy2. But would the word used by both people for the predicate “x is B” still preserve the same synonymy (and therefore intensionality), given that the predicate B requires contributions from both Bo and Bp?
We begin by supposing that elsewhere in the universe there is a planet exactly like Earth in virtually all aspects, which we refer to as “Twin Earth”. (We should also suppose that the relevant surroundings are exactly the same as for Earth; it revolves around a star that appears to be exactly like our sun, and so on). On Twin Earth, there is a Twin equivalent of every person and thing here on Earth. The one difference between the two planets is that there is no water on Twin Earth. In its place there is a liquid that is superficially identical, but is chemically different, being composed not of H2O, but rather of some more complicated formula which we abbreviate as “XYZ”. The Twin Earthlings who refer to their language as “English” call XYZ “water”. Finally, we set the date of our thought experiment to be several centuries ago, when the residents of Earth and Twin Earth would have no means of knowing that the liquids they called “water” were H2O and XYZ respectively. The experience of people on Earth with water and that of those on Twin Earth with XYZ would be identical.
Now the question arises: when an Earthling (call him Oscar for simplicity’s sake) and his twin on Twin Earth say ‘water’, do they mean the same thing? (The twin is also called ‘Oscar’ on his own planet, of course. Indeed, the inhabitants of that planet call their own planet ‘Earth’. To avoid confusion, we extend the “Twin” naming convention to the objects and people that inhabit it, in this case referring to Oscar’s twin as Twin Oscar.) Ex hypothesi, they are in identical psychological states, with the same thoughts, feelings, etc. Yet, at least according to Putnam, when Oscar says ‘water’, the term refers to H2O, whereas when Twin Oscar says ‘water’ it refers to XYZ. The result of this is that the contents of a person’s brain are not sufficient to determine the reference of the terms they use, as one must also examine the causal history that led to this individual acquiring the term. (Oscar, for instance, learned the word ‘water’ in a world filled with H2O, whereas Twin Oscar learned ‘water’ in a world filled with XYZ.)
And so, if we could say that intensionality is identical between Oscar and Twin Oscar, yet they are referencing different things, then is intensionality preserved? One might object that Putnam’s argument gets at the ontological status of language more than the ontology of semantics: we might say that Oscar and Twin Oscar are speaking different languages simply by virtue of the fact that a single word, namely “water”, is understood differently. When Oscar and Twin Oscar speak, it is just an accident that the sounds they utter are the same for everything except “water”. The languages, ontologically speaking, are not numerically identical; the language as spoken by Oscar specifically and by Twin Oscar specifically are, ontologically speaking, tokens of two distinct types rather than tokens of the same type.
It’s similar to the Ship of Theseus. Suppose we take Twin Oscar’s language, which has only a single word that refers to something different than what we have here on Earth, and then change one other reference: say, on Twin Earth, what we would call air is actually made up of something else, call it W, yet Twin Earthlings still call it “air”, and by the standards of people several centuries ago, when this thought experiment takes place, there would be no way of distinguishing between air on Earth and air (i.e. W) on Twin Earth. Would we say, then, that Oscar and Twin Oscar still speak the same language? What if we did this for everything on Twin Earth, so that the physical makeup of everything on Twin Earth was completely different, and yet, by the standards of a few centuries ago, everything on Twin Earth would appear indistinguishable from everything on Earth? Now all of their referents are physically, chemically, biologically (and in whatever other way you can think of) different from ours. They are uttering the same sounds, but all the referents are different. At what point did Twin Earth’s language become different?
Conversely, we can look at how language works here on our Earth, where x is H2O. We English speakers say “x is water” while Germans say “x ist Wasser.” That doesn’t mean that Germans are talking about some different x than English speakers are. And so, the sounds being uttered are not deterministically related to the actual object we’re talking about: if Twin Earth has a completely different physical makeup, then although its inhabitants are uttering the same sounds, they are talking about something different, and so it is a different language; this is true even if “water” is the only word on Twin Earth that differs.
However, what Putnam’s argument is saying is that, if we only take the brain as a closed system, then we’re missing part of the information required to explain semantics, and causality is information. Intensionality, considered without the causal information that led to it, does not contain all of the relevant information. And so, going back to earlier in this post, I said that some predicate is defined as:
B ≔ Bo ∧ Bp
Bo ≠ Bp
But, given what I was just talking about, we would have to say that this is incomplete. We would need to further define Bp as:
Bp ≔ Si(G(v) ∧ Se)
B ≔ Bo ∧ Si(G(v) ∧ Se)
Where Si = internal semantics (the actual brain events leading one to perceive things a certain way), which is a function of: G(v) = contribution of genetics (which is a function of evolution, v, which I won’t go into here) and Se = external semantics (the information that led one particular person to acquire the semantic resources they bring to bear: in other words, perhaps, the Fregean sense of the intensionality of a concept). But we can further rearrange this, since Se is a function in which objects like Bo are within the domain of the variable – the objects we interact with (both inert and other people) are the source of the information that defines our external semantics. And so:
B ≔ Si(G(v) ∧ Se(O | Bo ∈ O))
Where O is the set of all objects (people, concrete things, abstract things) a person has ever been exposed to. And so, we’ve potentially turned our original problem into a different, yet familiar one: nature, G(v), vs. nurture, Se(O | Bo ∈ O).
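The decomposition B ≔ Si(G(v) ∧ Se(O | Bo ∈ O)) can be sketched as nested functions. This is a deliberately crude toy model, assuming a binary “genetic hardware” check and an exposure history; every function name here is made up for illustration, not drawn from the formalism above beyond the Si/G/Se labels.

```python
# Toy model: whether a perceiver can apply the predicate B depends on
# genetics (G) and on exposure to Bo-type objects (Se), combined by Si.

def G(genome):
    """Genetic contribution: does the visual system support the discrimination?"""
    return genome.get("blue_cones", False)

def Se(exposure_history):
    """External semantics: has the person encountered Bo-type objects,
    i.e. acquired the concept from the world?"""
    return any(obj["Bo"] for obj in exposure_history)

def Si(genetic_ok, external_ok):
    """Internal semantics: the predicate is available only when both the
    hardware and the acquired concept are in place."""
    return genetic_ok and external_ok

def applies_B(genome, exposure_history):
    return Si(G(genome), Se(exposure_history))

oscar = {"blue_cones": True}
history_with_Bo = [{"Bo": True}]    # learned the word around Bo-type objects
history_without_Bo = [{"Bo": False}]  # never encountered a Bo-type object

print(applies_B(oscar, history_with_Bo))   # True
print(applies_B(oscar, history_without_Bo))  # False
```

The second call is the Twin-Earth-style case: identical genome, different causal history of exposure, and therefore a different value for the predicate, which is the nature-vs-nurture split the last sentence points at.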