If Materialism Is True, the United States Is Probably Conscious

Eric Schwitzgebel
Department of Philosophy
University of California at Riverside
Riverside, CA 92521
eschwitz at domain: ucr.edu

November 8, 2012

If Materialism Is True, the United States Is Probably Conscious
Abstract:
If
you’re a materialist, you probably think that rabbits are conscious. And you ought to think that. After all, rabbits are a lot like us,
biologically and neurophysiologically. If you’re a materialist, you probably also
think that conscious experience would be present in a wide range of alien
beings behaviorally very similar to us even if they are physiologically very
different. And you ought to think that. After all, to deny it seems insupportable
Earthly chauvinism. But a materialist
who accepts consciousness in weirdly formed aliens ought also to accept
consciousness in spatially distributed group entities. If she then also accepts rabbit
consciousness, she ought to accept the possibility of consciousness even in
rather dumb group entities. Finally, the
United States would seem to be a rather dumb group entity of the relevant
sort. If we set aside our morphological
prejudices against spatially distributed group entities, we can see that the
United States has all the types of properties that materialists tend to regard
as characteristic of conscious beings.
Keywords:
metaphysics, consciousness, phenomenology, group mind, superorganism,
collective consciousness, metaphilosophy
If Materialism Is True, the United States Is Probably Conscious
If
materialism is true, the reason you
have a stream of conscious experience – the reason there’s something it’s like
to be you while there’s (presumably!) nothing it’s like to be a toy robot or a
bowl of chicken soup, the reason you possess what Anglophone philosophers call phenomenology – is that the material
stuff out of which you are made is organized the right way. You might find materialism attractive if you
reject the thought that people are animated by immaterial spirits or possess
immaterial properties.[1]
Here’s another thought you might reject:
The United States is literally, like you, phenomenally conscious. That is, the United States, conceived of as a
spatially distributed, concrete entity with people as some or all of its parts,
literally possesses a stream of conscious experience over and above the
experiences of its members considered individually. In this essay, I will argue that accepting
the materialist idea that you probably like (if you’re a typical early
21st-century philosopher) should draw you to accept some group consciousness
ideas you probably don’t like (if you’re a typical early 21st-century
philosopher) – unless you choose, instead, to accept some other ideas you
probably ought to like even less.
The argument in brief is this. If you’re a materialist, you probably think
that rabbits have conscious experience.
And you ought to think that.
After all, rabbits are a lot like us, biologically and neurophysiologically.
If you’re a materialist, you probably also think that conscious
experience would be present in a wide range of naturally-evolved alien beings
behaviorally very similar to us even if they are physiologically very
different. And you ought to think
that. After all, to deny it seems insupportable
Earthly chauvinism. But, I will argue, a
materialist who accepts consciousness in weirdly formed aliens ought also to
accept consciousness in spatially distributed group entities. If she then also accepts rabbit
consciousness, she ought to accept the possibility of consciousness even in
rather dumb group entities. Finally, the
United States would seem to be a rather dumb group entity of the relevant
sort. (Or maybe, even, it’s rather
smart, but that’s more than I need for my argument.) If we set aside our morphological prejudices
against spatially distributed group entities, we can see that the United States
has all the types of properties that materialists tend to regard as
characteristic of conscious beings.
Of course it’s utterly bizarre to suppose
that the United States is literally phenomenally conscious.[2] But how good an objection is that? Cosmology is bizarre. Microphysics is bizarre. Higher mathematics is bizarre. The more we discover about the fundamentals
of the world, the weirder things seem to become. Should metaphysics be so different? Our sense of strangeness is no rigorous index
of reality.[3]
My claim is conditional and gappy. If materialism is true, probably the United States is
conscious. Alternatively, if materialism
is true, the most natural thing to
conclude is that the United States is conscious.
1. Sirian Supersquids, Antarean Antheads, and Your Own Horrible Contiguism.
We are deeply prejudiced beings. Whites are prejudiced against blacks;
Gentiles against Jews; overestimators against underestimators.[4] Even when we intellectually reject such
prejudices, they permeate our behavior and our implicit assumptions.[5] If we ever meet interplanetary travelers
similar to us in overall intelligence and moral character, we will likely be
prejudiced against them too, especially if they look weird.
It’s hard to imagine a prejudice more
deeply ingrained than our prejudice against entities that are visibly spatially
discontinuous – a prejudice built, perhaps, even into the basic functioning of
our visual system.[6] Analogizing to racism, sexism, and speciesism, let’s call such prejudice contiguism.
You might think that so-called
contiguism is always justified and thus undeserving of a pejorative label. You might think, for example, that spatial
contiguity is a necessary condition of objecthood or
entityhood, so that it makes no more sense to speak of a spatially
discontinuous entity than it makes sense – unless you adopt some liberal views
about ontology[7]
– to speak of an entity composed of your left shoe, the Eiffel Tower, and the
rings of Saturn. If you’ll excuse me for
saying so, that attitude is foolish provincialism. Let me introduce you to two of my favorite
non-Earthly species.
The Sirian supersquids. In
the oceans of a planet around Sirius lives a naturally-evolved animal with a
central head and a thousand tentacles.
It’s a very smart animal – as smart, as linguistic, as artistic and
creative as human beings are, though the superficial forms of its language and
art differ enormously from ours. Let’s
call these animals “supersquids”.
The supersquid’s
brain is not centrally located like our own.
Rather, the supersquid brain is distributed mostly among nodes in its
thousand tentacles, while its head houses digestive and reproductive organs and
the like. Despite the spatial
distribution of its cognitive processes across its body, however, the supersquid’s cognition is fully integrated, and supersquids
report having a single, unified stream of experience. Part of what enables their cognitive and
phenomenal integration is this: Rather than having relatively slow
electrochemical nerves, supersquid nerves are reflective capillaries carrying
light signals, something like Earthly fiber optics. The speed of these signals ensures the tight
temporal synchrony of the cognitive activity shooting among its tentacular nodes.
The supersquids show all external signs
of consciousness. They have covertly
visited Earth, and one is a linguist who has mastered English well enough to ace
the Turing test (Turing 1950): He can be, when he wants to, indistinguishable
in verbal behavior from a normal adult human being. Like us, the supersquids have communities of
philosophers and psychologists who write eloquently about the metaphysics of
consciousness, about emotional phenomenology, about their imagery and dreams. Any unbiased alien observer looking at Earth
and looking at the supersquid home planet would see no good grounds for
ascribing consciousness to us but not them.
Although some supersquid philosophers doubt that Earthly beings are
genuinely phenomenally conscious, given our radically different physiological
structure, I’m glad to report that only a small minority holds that view.
Here’s another interesting feature of
supersquids: They can detach their limbs.
To be detachable, a supersquid limb must be able to maintain homeostasis
briefly on its own and suitable light-signal transceivers must appear on the
surface of the limb and on the bodily surface to which the limb is normally
attached. Once the squids started down
this evolutionary path, selective advantages nudged them farther along. Detachable limbs revolutionized their hunting
and foraging. Two major subsequent
adaptations were these: First, the nerve signals between the head and
limb-surface transceivers shifted to wavelengths less readily degraded by water
and obstacles. And second, the
limb-surface transceivers developed the ability to communicate directly among
themselves without needing to pass signals through the central head. Since light-speed transmission delays are negligible, supersquids can now detach their limbs arbitrarily and send them roving widely across
the sea with hardly any disruption of their cognitive processing. (The energetic costs are high, but they supplement
their diet and use technological aids.)
In this limb-roving condition, their
limbs are not wandering independently under local limb-only control, then
reporting back. Limb-roving squids
remain as cognitively integrated as do non-roving squids, and as intimately in
control of their entire spatially distributed selves. Despite all the spatial intermixing of their
limbs with those of other supersquids, each individual’s cognitive processes
remain private because each squid’s transceivers employ a distinctive signature
wavelength. If a limb is lost, new limbs
can be artificially grown and fitted, though losing too many at once can result
in substantially impaired memory and cognitive function. The supersquids are now starting to
experiment with limb exchange, including developing inter-individually compatible
transceiver signals. This has led them
toward more Parfitian views of personal identity than
one typically finds in human beings, and they are re-envisioning the
possibilities of marriage, team sports, and scientific collaboration.[8]
I hope you’ll agree with me, and with
the opinion universal among the supersquids, that supersquids are coherent
entities. Despite their spatial
discontinuity, they aren’t arbitrary collections. They are integrated systems that can be
treated as beings of the sort that might house consciousness. And if they might, they do. Or so you should probably say if you’re a
mainline philosophical materialist.
After all, supersquids are naturally evolved beings that act and speak
and write and philosophize just like we do.
Does it matter that this is only science
fiction? I hope you’ll agree that
supersquids, or entities relevantly similar, are at least physically possible. And if
such entities are physically possible, and if the universe is as large as most
cosmologists currently think it is – maybe even infinite, maybe even one among
an infinite number of infinite universes![9] –
then it might not be a bad bet that some such spatially distributed
intelligences are actual. Biology can be
provincial, maybe, but not metaphysics; you’d better have room in your
metaphysics for supersquids.
The Antarean antheads.
On the surface of a planet around Antares lives a species of animals who
look like woolly mammoths but who act much like human beings. I have gazed into my crystal ball and this is
what I see: Tomorrow, they visit Earth.
They watch our television shows, learn our language, and politely ask to
tour our lands. It turns out that they
are sanitary, friendly, excellent conversationalists, and well supplied with
rare metals for trade, so they are welcomed across the globe. They are quirky in a few ways, however. For example, their cognitive processes take on average about ten times longer than ours to execute.
This has no overall effect on their intelligence, but it does test the
patience of conversation partners unaccustomed to the Antareans’ slow
pace. They also find some tasks easy
that we find difficult and vice versa.
They are baffled and amused by our trouble with simple logic problems
like the Wason Selection Task (Wason
1968) and tensor calculus, but they are impressed by our skill in integrating
auditory and visual information.
Over time, some Antareans migrate
permanently down from their orbiting ship.
Patchy accommodations are made for their size and speed, and they start
to attend our schools and join our corporations. Some achieve political office and display
approximately the normal human range of vices.
Although Antareans don’t reproduce by coitus, they find some forms of
physical contact arousing and have broadly human attitudes toward
pair-bonding. Marriage equality is
achieved. What a model of interplanetary
harmony! Ordinary non-philosophers all
agree, of course, that Antareans are conscious.
Here’s why I call them “antheads”: Their
heads and humps contain not neurons but rather ten million squirming insects,
each a fraction of a millimeter across.
Each insect has a complete set of minute sensory organs and a nervous
system of its own, and the antheads’ behavior arises from complex patterns of
interaction among these individually dumb insects. These mammoth creatures are much-evolved
descendants of Antarean ant colonies that evolved in symbiosis with a
brainless, living hive. The interior
insects’ interactions are so informationally efficient that neighboring insects
can respond differentially to the behavioral or chemical effects of other
insects’ individual outgoing efferent nerve impulses. The individual ants vary in size, structure,
sensa, and mobility. Specialist ants
have various affinities, antagonisms, and predilections, but no ant
individually approaches human intelligence.
No individual ant, for example, has an inkling of Shakespeare despite
the Antareans’ great appreciation of Shakespeare’s work.
There seems to be no reason in principle
that such an entity couldn’t execute any computational function that a human
brain could execute or satisfy any high-level functional description that the
human organism could satisfy. All the
creativity of literary interpretation, all the cleverness of humor and
weirdness of visual art, should be available to the antheads on standard
materialist approaches to cognition.
Maybe there are little spatial gaps
between the ants. Does it matter? Maybe, in the privacy of their homes, the
ants sometimes disperse from the body, exiting and entering through the
mouth. Does it matter? Maybe if the exterior body is too severely
injured, the ants recruit a new body from nutrient tanks – and when they march
off to do this, they retain some cognitive coordination, able to remember and
report thoughts they had mid-transfer.
They reconvene and say, “Oh it’s such a free and airy feeling to be
without a body! And yet it’s a fearful
thing too. It’s good to feel again the
power of limbs and mouth. May this new
body last long and well. Shall we dance,
then, love?”
We humans are not so different perhaps. From one
perspective (e.g., Maynard Smith and Szathmáry 1995)
we ourselves are but symbiotic aggregates of simpler organisms that invested in
cooperation.
2. Anti-Nesting Principles.
You might object to the Antarean
antheads even if you’re okay with the Sirian supersquids. You might think that the individual ants
would or could be individually conscious and that it’s impossible for one
conscious organism to be constituted by other conscious organisms. Some theoreticians of consciousness have said
such things – though I’ve never seen a good justification of this view.
Hilary Putnam (1965), for example,
simply stipulates: No organism capable of feeling pain possesses a
decomposition into parts which are separately capable of feeling pain. Putnam offers no argument for this
stipulation apart from the fact that he wants to rule out the apparently absurd
possibility of “swarms of bees as single pain-feelers” (p. 163). Putnam doesn’t explain why this possibility
is absurd for actual swarms of bees, much less why no possible future
evolutionary development of a swarm of conscious bees could ever also be a
single pain-feeler. It seems a danglingly unjustified exception to his otherwise clean
functionalism.
Giulio Tononi (forthcoming) also
advances an anti-nesting principle. On
Tononi’s theory of consciousness, consciousness arises whenever information is
integrated; and whenever one informationally integrated system is nested in
another, consciousness occurs only at the level of organization that integrates
the most information – what he calls
the “exclusion principle”. Tononi
defends the exclusion principle by appeal to Occam’s razor, with intuitive
support from the apparent absurdity of supposing that group consciousness could
emerge from two people talking.[10] But it’s unclear why Tononi should put any
weight on intuitive resistance to group consciousness, given his near
panpsychism: He defends the idea that a photodiode or an OR-gate could have a
single bit’s worth of consciousness (Tononi 2004, 2008, 2012; Balduzzi and Tononi 2009).
Why not some such low-level consciousness from the group, too? And Occam’s razor is a tricky implement. Although admitting the existence of
unnecessary entities seems like a bad idea, what’s an “entity” and what’s
“unnecessary” is often unclear, especially in part-whole cases. Is a hydrogen atom unnecessary once one
admits the proton and electron into one’s ontology? What makes it necessary, or not, to admit the
existence of consciousness in the first place?
It’s obscure why the necessity of admitting consciousness to Antarean
antheads should depend on whether it’s also necessary to admit consciousness among
the individual ants.
Anti-nesting principles, though seemingly
designed to avoid counterintuitive implications of group consciousness, bring
different counterintuitive consequences in their train. As Ned Block (1978/1991) argues against
Putnam, such principles appear to have the unintuitive consequence that if ultra-tiny
conscious organisms were somehow to become incorporated into your brain –
perhaps, for reasons unbeknownst to you, each choosing to play the role of one
neuron or one part of one neuron – you would be rendered nonconscious, despite
the fact that all your behavior, including self-reports of consciousness, might
remain the same. Tononi’s principle also
seems to imply that if there were a large enough election, organized the right
way with enough different ballot measures, the resulting polity-level
informational integration would eclipse the informational integration of the
main conscious stream in the human brain, and thus the individual voters would
all lose consciousness. Perhaps we are
already on the verge of this in California?[11] Furthermore, since “greater than” is a
dichotomous property and not a matter of degree, there ought on Tononi’s view
to be an exact point at which polity-level integration causes human-level
consciousness suddenly to vanish (see esp. Tononi 2010, note 9). There ought to be a point at which the
addition of a single voter would cause the loss of consciousness in all other
voters – even without any detectable behavioral or self-report effects, or any
loss of integration, at the level of individual voters. It seems odd to suppose that so much, and
simultaneously so little, could turn on the discovery of a single mail-in
ballot.
3. Dumbing Down and Smarting Up.
If you’re a materialist, you probably
think that rabbits are phenomenally conscious – that is, that “there’s something
it’s like to be” a rabbit, that rabbits experience pain, have visual
experiences, and maybe have feelings like fear.
Some philosophers would deny rabbit consciousness; more on that later. For purposes of this section, I’ll assume
you’re on board. And if you accept
rabbit consciousness, you probably ought also to accept the possibility of
consciousness in the Sirian and Antarean equivalents of rabbits.
One such species is the Sirian
squidbits, a species with cognitive processing distributed among detachable
limbs but with approximately the intelligence of Earthly rabbits. When chased by predators, the squidbits will
sometimes eject their thousand limbs in different directions and hide their
central heads. Most Sirians regard
squidbits as conscious entities; whatever reasoning justifies attributing
consciousness to Earthly rabbits similarly justifies attributing consciousness
to Sirian squidbits. A similar story
also holds on Antares.
Let me tie Sirius, Antares, and Earth
more tightly together. Gazing into the
distant future on Sirius I see this: The central body of the squidbit becomes
smaller and smaller – thus easier to hide – and the limbs develop more
independent homeostatic and nutritional capacities, until the primary function
of the central body is just reproduction of these increasingly independent
limbs. Earthly entomologists come to
refer to these heads as “queens”. Still
later, squidbit queens enter into symbiotic relationship with brainless but
mobile hives, and the thousand bits learn to hide within for safety. These mobile hives look something like woolly
mammoths. Where is the sharp, principled
line between group and individual?
We can increase the size of the
Antareans and the intelligence of the ants.
Maybe they’re the size of houses and filled with naked mole rats rather
than ants. This wouldn’t seem to affect
the argument. What if the ants or rats
rise to human levels of intelligence, while the Antareans’ behavior still emerges
in roughly the same way from the system as a whole, through a complex,
informationally rich integration of the behavior of the ten million interior
individuals? I can feel the pull of treating
that last change as crucial to the case – the pull of thinking that humanlike
intelligence inside would somehow turn off the lights upstairs, rendering the
Antarean antheads entirely nonconscious despite their outward, seemingly
introspective reports – but the idea doesn’t seem to me to survive reflection. Would the thought be that as long as our
subprocesses are executed by sufficiently stupid entities, we are conscious,
but if the exact same processes were to be executed by more intelligent entities,
who might reserve most of their intellectual resources for other activities, we
would no longer be conscious? This
would, I suppose, just be a version of an anti-nesting principle – anti-intelligent nesting. The same worries arise as arise against
anti-nesting in general, and the same justifiable suspicion, I think, that our
commonsensical intuitions rest on unfounded prejudice.
The most natural way to develop the
materialist thesis, I am suggesting, based upon reflection on hypothetical
cases – cases which are presumably physically possible and probably even actual
in a large and diverse enough universe – is to suppose that entities that are
structured very differently from us can be conscious, as long as they have
mammal-like behavioral sophistication and mammal-like evolutionary
histories. This claim isn’t quite the
same as the claim that linguistic behavioral similarity alone is enough to establish
the existence of consciousness, for example via the Turing test. Requiring sophisticated linguistic behavior is
overkill if rabbits are conscious. On
the flip side, the right kind of evolutionary or developmental history might be
necessary, if we want to allow for naturally-acquired representational
functions of the sort championed by Millikan (1984), Dretske
(1988, 1995), and others.
The present view might seem to conflict
with “type-materialist” views that equate human consciousness with specific
biological processes.[12] I don’t think it does conflict, however. Most type-materialist accounts allow that
weird alien species might have conscious experiences. Maybe the phenomenal experience of feeling
pain, for example, is identical to different types of physical states in
different species. Or maybe the
phenomenal type pain really requires
Earthly neurons but Antareans have conscious experiences of schmain, which feels very
different but plays a broadly similar functional role. Or maybe radically different low-level
physical structures (neurons vs. light signals vs. squirming bugs) can still be
physically type-identical at a coarse grain of organization, or even have to be, if they are to play similar
enough roles in undergirding the behavioral patterns. To insist that consciousness requires our
familiar biology and no other – well, that’s a very substantial commitment, not
to be confused with type-identity materialism, a commitment I will discuss in
Section 7 under the heading of “neurochauvinism”.
4. A Telescopic View of the United States.
A planet-sized alien who squints might
see the United States as a single diffuse entity consuming bananas and
automobiles, wiring up communications systems, touching the moon, and
regulating its smoggy exhalations – an entity that can be evaluated for the
presence or absence of consciousness.
You might say: The United States is not
a biological organism. It doesn’t have a
life cycle. It doesn’t reproduce. It’s not biologically integrated and homeostatic. Therefore, it’s just not the right type of thing to be conscious.
To this concern I have two replies.
First, why should consciousness require
being an organism in the biological sense?
Properly-designed androids, brains in vats, gods – these things may not
be organisms in the biological sense and yet are sometimes thought to have
consciousness. (I’m assuming
materialism, but some materialists believe in actual or possible gods.) Having a distinctive mode of reproduction is
often thought to be a central, defining feature of organisms (e.g., Wilson
2005), but it’s unclear why reproduction should matter to consciousness. Human beings might vastly extend their lives
and cease reproduction, or they might conceivably transform themselves through
technology so that any specific condition on having a biological life cycle is
dispensed with, while our brains and behavior remain largely the same. Would we no longer be conscious? Being composed of cells and organs that share
genetic material might also be characteristic of an organism, but as with
reproduction it’s unclear what would justify regarding such composition as
essential to mentality, especially once we consider a variety of physically
possible non-Earthly creatures.
Second, it’s not clear that nations
aren’t biological organisms. The United
States is (after all) composed of
cells and organs that share genetic material, to the extent it is composed of
people who are composed of cells and organs and who share genetic
material. The United States also
maintains homeostasis. Farmers grow
crops to feed non-farmers, and these nutritional resources are distributed with
the help of other people via a network of roads. Groups of people organized as import
companies bring in food from the outside environment. Medical specialists help maintain the health
of their compatriots. Soldiers defend
their compatriots against potential threats.
Teachers educate future generations.
Home builders, textile manufacturers, telephone companies, mail
carriers, rubbish haulers, bankers, police, all contribute to the stable
well-being of the organism. Politicians
and bureaucrats work top-down to ensure that certain actions are coordinated,
while other types of coordination emerge spontaneously from the bottom up, just
as in ordinary animals. Viewed
telescopically, the United States is a pretty awesome animal.[13] Now some parts of the United States also are
individually sophisticated and awesome, but that subtracts nothing from the
awesomeness of the U.S. as a whole – no more than we should be less awed by
human biology as we discover increasing evidence of our dependence on
microscopic symbionts.
Nations also reproduce – not sexually
but by fission. The United States and
several other countries are fission products of Great Britain. In the 1860s, the United States almost
fissioned again. And fissioning nations
retain traits of the parent that influence the fitness of future fission
products – intergenerationally stable developmental resources, if you
will. As in cellular fission, there’s a
process by which subparts align into different sides and then separate
physically and functionally.
On Earth, at all levels, from the
molecular to the neural to the societal, there’s a vast array of competitive
and cooperative pressures; at all levels, there’s a wide range of actual and
possible modes of reproduction, direct and indirect; and all levels show
manifold forms of symbiosis, parasitism, partial integration, agonism, and
antagonism. There isn’t as radical a
difference in kind as people are inclined to think between our favorite level
of organization and higher and lower levels.
Okay, you might say, but is this
spatially distributed group entity or organism, or whatever it is, conscious? Well, what would it take for it to be so?
5. What Is So Special About Brains?
According to materialism, what’s really
special about us is our brains. Brains
are what make us conscious. Maybe brains
have this power on their own, so that even a lone brain in an otherwise empty
universe would have conscious experience if it were structured in the right
way; or maybe consciousness arises not strictly from the brain itself but
rather from a thoroughly entangled mix of brain, body, and environment.[14] But all materialists agree: Brains are
central to the story.
Now what is so special about brains, on
the materialist view? Why do they give
rise to conscious experience while a similar mixture of chemical elements in
chicken soup does not? It must be something
about how those elements are organized.
Two general features of brain organization stand out: their complex high
order / low entropy information processing, and their role in coordinating
sophisticated responsiveness to environmental stimuli. These two features are of course related. Brains also arise from an evolutionary and
developmental history, within an environmental context, which may play a
constitutive (and not merely a causal) role in determining function and
cognitive content.[15] According to a broad class of plausible
materialist views, any system with sophisticated enough information processing
and environmental responsiveness, and perhaps the right kind of historical and
environmental embedding, should have conscious experience. My central claim is: The United States seems
to have what it takes, if standard materialist criteria are straightforwardly
applied without post-hoc noodling. It is
mainly unjustified morphological prejudice that blinds us to this.
Consider, first, the sheer quantity of
information transfer among members of the United States. The human brain contains about 10^11 neurons exchanging information through an average of about 10^3 connections per neuron, firing at peak rates of about once every several milliseconds. The United States, in comparison, contains only about 3 x 10^8 people. But those people exchange a lot of
information. How much? We might begin by considering how much
information flows from one person to another via stimulation of the retina. The human eye contains about 10^8
photoreceptor cells. Most people in the
United States spend most of their time in visual environments that are largely
created by the actions of people (including their own past selves). If we count even 1/300 of this visual
neuronal stimulation as the relevant sort of person-to-person information
exchange, then the quantity of visual connectedness among people is similar to
the neuronal connectedness within the human brain (10^14 connections). Very little of the
exchanged information will make it past attentional filters for further
processing, but analogous considerations apply to information exchange among
neurons. Or here’s another way to think
about the issue: If at any time 1/300th of the U.S. population is viewing
internet video at 1 megabit per second, that’s a transfer rate between people
of 10^12 bits per second in this one minor activity alone.[16] Furthermore, it seems unlikely that conscious
experience requires achieving the degree of informational connectedness of the
entire neuronal structure of the human brain.
If mice are conscious, they manage it with under 10^8 neurons.
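To make the back-of-the-envelope arithmetic above explicit (the 1/300 figures are, of course, rough illustrative assumptions rather than measured values, and "connection" is used loosely):

\[
(3 \times 10^{8}\ \text{people}) \times (10^{8}\ \text{photoreceptors each}) \times \tfrac{1}{300} \approx 10^{14}\ \text{person-to-person connections}
\]
\[
(3 \times 10^{8}\ \text{people} \div 300) \times (10^{6}\ \text{bits/sec of video}) \approx 10^{12}\ \text{bits/sec}
\]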
A more likely source of concern, it
seems to me, is that information exchange among members of the U.S. population
isn’t of the right type to engender a
genuine stream of conscious experience.
A simple computer download, even if it somehow managed to involve 10^17 bits per second or more, presumably wouldn't by itself do the job. For consciousness, there presumably needs to
be some organization of the information in the service of coordinated,
goal-directed responsiveness; and maybe, too, there needs to be some sort of
sophisticated self-monitoring.
But the United States has these
properties too. Our information exchange
is not in the form of a simply-structured massive internet download. The United States is a goal-directed entity,
flexibly self-protecting and self-preserving.
The United States responds, intelligently or semi-intelligently, to
opportunities and threats – not less intelligently, I think, than a small
mammal. The United States expanded west
as its population grew, developing mines and farmland in traditionally Native
American territory. When Al Qaeda struck
New York, the United States responded in a variety of ways, formally and
informally, in many branches and levels of government and in the populace as a
whole. Saddam Hussein shook his sword
and the United States invaded Iraq. The
U.S. acts in part through its army, and the army’s movements involve perceptual
or quasi-perceptual responses to inputs: The army moves around the mountain,
doesn’t crash into it. Similarly, the spy
networks of the CIA detected the location of Osama bin Laden, whom the U.S.
then killed. Is there less information,
less coordination, less intelligence than in a hamster? The Pentagon monitors the actions of the
Army, and its own actions. The Census
Bureau counts us. The State Department
announces the U.S. position on foreign affairs.
The Congress passes a resolution declaring that we hate tyranny and love
apple pie. This is
self-representation. Isn’t it?
I am asking you to think of the United
States as a planet-sized alien might – or maybe as a planet-sized group
intelligence would think of us – that is, to evaluate the behaviors and
capacities of the United States as a concrete, spatially distributed entity
with people as some or all of its parts, an entity within which individual
people play roles somewhat analogous to the role that individual cells play in
your body. If you are willing to
jettison contiguism and other morphological prejudices, this is not, I think,
an intolerably radical perspective. As a
house for consciousness, a rabbit brain is not clearly more sophisticated.
The United States has long been embedded
in a natural and social environment, richly causally connected to the world
beyond – connected in a way that would seem to give meaning to its
representations and functional duties to its parts. It’s no randomly congealed “Swampman” that
lacks a content-giving history (Davidson 1987; Dretske
1995; Millikan 2010). The United States wars
against Germany, then reconciles, then wars again. It threatens Iran. It cooperates with other nations in
threatening Iran. The United States
monitors space for asteroids that threaten Earth and would presumably respond
cooperatively with other nations were one detected. The United States tracks climate change and
ozone levels and takes muted, semi-cooperative action. When the spy camera generates an image of bin
Laden’s compound, that triggers changes both in people and in artifacts such as
computers and photographic plates. We
might or might not wish to consider such artifacts as part of the body of the
United States. If they are part of the
body, they contribute substantially to the internal functional dynamics of the
United States, which now operate in loops both within and between people and
artifacts – loops that can potentially be incredibly fancy and serve a panoply
of functional roles. If such artifacts
are not part of the body of the United States, interaction with them
contributes substantially to the complexity of the functional dynamics between
the United States and its environment.
Some actions of the United States arise
via a fairly simple aggregation of individuals’ behaviors, with the individuals
knowledgeable about both the process and the results – as when the United
States elects a new President. Other
behaviors arise through less obvious processes.
For example, the United States might respond intelligently to changes in
corporate governance laws in other countries, e.g., by shifting its import-export
behavior, in ways that are unrecognizable to any individual member of the
United States. The attitudes of the
United States can also differ from the attitudes of its citizens. Hypothetically, the United States could be
angry about something, as reflected in group-level punitive behavior and in the
pronouncements of spokespeople, even if no individual person in the United
States is angry about that thing.[17]
Some actions of the United States are
executed primarily by small groups of individuals. Few individual members of the United States
are directly involved in importing oil, but it does not follow that importing
oil is done only by oil import companies and not by the U.S. as a whole. Few individual cells in your body, or even in
your brain, might be directly involved with your “hello” as you answer the
phone. It doesn’t follow that your
greeting is done only by your speech centers and vocal tract and not by you.
Maybe the actions, attitudes, and
representations of the United States are all ultimately reducible to the actions,
attitudes, and representations of U.S. citizens and residents, in some complex
combination. On that issue I take no
stand. (What does “reducible” mean?) All that is required for this part of my
argument is that the United States actually engages in actions, actually adopts
attitudes, and actually formulates representations, whether reducibly or not, of
at least a mammalian level of sophistication.
Perhaps, too, all your actions, attitudes, and representations are in
some sense reducible to other things; few philosophers conclude that this makes
them unreal. When the United States
imports oil from countries A and B, rather than countries C and D, intelligently
responding to represented price, it is always individual people, organized in
groups, and embedded in a larger environment, who together import the oil. It is also individual cells in your brain,
organized in groups, that together respond to the tilting angle of the visual
stimulus, that together match the incoming signal with the stored
representation of grandma’s face. Great
things emerge from contextually embedded groups. My argument does not turn on the
irreducibility of group behavior or any ineliminable explanatory
necessity of appeals to the group level.
Rather, my thought is this: There’s something awesomely special about
brains such that they give rise to consciousness, and considered from a
materialist perspective, the United States seems to be awesomely special in
just the same sorts of ways.
One might object that “the United
States” is an abstraction, like “the average Californian” or “the teenage
mind”. The average Californian may be
conscious, and the teenage mind may seethe with angst, but it would be an
absurd category mistake to suppose that therefore there exists some additional
stream of consciousness of the average Californian beyond the streams of
consciousness of each Californian considered individually, or some further bit
of angst in addition to all the individual angst of particular teenagers. This objection, however, forgets the
concreteness on which I have repeatedly insisted. I am willing to be somewhat flexible about
the best way to conceptualize the boundaries of the body of the United States
(are the roads included? ex-pat citizens?), but I insist that we consider the
matter concretely, as a planet-sized alien observer might. It’s not like seeing all the buildings around
campus and then seeking some additional ghostly building which is “the
university” (Ryle 1949). Rather, it’s
like seeing all the buildings around campus and then wondering what features,
like open space between the buildings, might be possessed by the campus as a
whole but neglected by someone with too narrow a focus on the individual parts.
What is it about brains, as hunks of
matter, that makes them special enough to give rise to consciousness? Looking in broad strokes at the types of
things materialists tend to say in answer – things like sophisticated
information processing and flexible, goal-directed environmental
responsiveness, things like representation, self-representation,
multiply-ordered layers of self-monitoring and information-seeking
self-regulation, rich functional roles, and a content-giving historical
embeddedness – it seems like the United States has all those same
features. In fact, it seems to have them
in a greater degree than do some beings, like rabbits, that we ordinarily
regard as conscious.
What could be missing?
6. What Could Be Missing.
In this section, I would have liked to
apply particular, detailed materialist metaphysical theories to the question at
hand. It seems like the natural next
step! Unfortunately, I face four
obstacles, in combination nearly insurmountable. First: Few materialist theoreticians
explicitly consider the possibility of literal group consciousness.[18] Thus, it is a matter of speculation how
properly to apply their theory to a case that might have been overlooked in the
theory’s design and presentation.
Second: Many theories, especially those constructed by neuroscientists
and psychologists, implicitly or explicitly limit themselves to human or at most vertebrate
consciousness, and thus are silent about how consciousness would work in other
sorts of entities (e.g., Baars 1988; Crick 1994). Third: Further limiting the pool of relevant
theories is the fact that few thinkers really engage the metaphysics from top
to bottom. For example, most
theoreticians advocating “higher order” models of consciousness don’t provide
sufficient detail on the nature of “lower order” mental states for me to
evaluate whether the United States would qualify as having such lower-order
states (though if it does, it would probably have higher-order states too).[19] Fourth: When I did arrive at what I thought
would be a representative sample of four prominent, metaphysically ambitious,
top-to-bottom theories of consciousness, the task of presenting each view in
enough detail to explore how it would plausibly apply to this new range of
cases proved quite complex – too complex to embed in an already long essay.[20] Thus, I think further progress on this issue
will require having some concrete counterproposals to evaluate. In this section, I will address three
objections, one inferred from remarks by Andy Clark on the extended mind
hypothesis and two derived from email correspondence with prominent materialist
philosophers. In the next section, I
will explore three other ways of escaping my conclusion – ways that involve
rejecting either rabbit consciousness, alien consciousness, or both.
Andy Clark (2009) has recently argued
that consciousness requires high bandwidth neural synchrony – a type of
synchrony that is not possible between the external environment and structures
interior to the human brain. Thus, he
says, consciousness stays in the head.
Now in the human case, and generally in the case of Earthly animals with
central nervous systems, perhaps Clark is right – and maybe such Earthly
animals are all he really has in view.
As a universal principle, maybe it has some appeal, too. The massive, swift parallelism of neural
processing is sufficiently impressive to invite the thought that the brain is
potentially qualitatively different in its integration, and thus perhaps
qualitatively different as a potential site for consciousness, than the
population of the United States. Nonetheless,
I think reflection on this principle shows it to be insufficiently motivated. If we were to discover that
some people, though outwardly very similar to us in behavior, or some alien
species, operated using brain processes that involved not synchronous, parallel
processing but instead incredibly swift serial processing, would we therefore
be justified in thinking they had no conscious experience? If we were to discover a species of
planet-sized aliens that behaved much as we do, only vastly more slowly, with
reaction times measured in hours not milliseconds, with substantial transfer
delays between their subprocesses, would we be justified in denying consciousness to
them too? I can understand our being
skeptical and wanting to withhold judgment about consciousness in such cases
(despite, presumably, those beings’ own indignant-seeming protests); and thus
maybe adopting a similar “who knows?” attitude toward the U.S. case. That would already be a major shift from the
standard “that’s ridiculous!” reaction.
But I do think the more natural move for the materialist is to go
farther than that – to deny that speed or synchrony per se is essential.
Fred Dretske,
in correspondence, has suggested that the United States could not be conscious
because its representational states depend on the conscious states of
others. Such dependence, he says,
renders its representations conventional
rather than natural – and a conscious
entity must have natural representations.[21]
In earlier work, Dretske
(1995) highlights the implausibility of supposing that an object that has no
intrinsic representational functions can become conscious simply because
outside users impose representational functions upon it. We don’t make a mercury column conscious by
calling it a thermometer, nor do we make a machine conscious by calling it a
robot and interpreting its outputs as speech acts. The machine either is or is not conscious, it
seems, independently of our intentions and labels. A wide range of materialists, I suspect, will
and should accept that an entity cannot be conscious if all its representations
depend in this way on external agents.
Focusing on such cases, Dretske’s independence criterion seems
appealing.
But the citizens and residents of the
United States are parts of the U.S. rather than external agents, and it’s not
clear that the dependency of consciousness on the intentions and purposes of internal agents is problematic in the
same way, if the internal agents’ behavior is properly integrated with the
whole. The internal and external cases,
at least, are sufficiently dissimilar that before accepting Dretske’s principle
in general form we should at least consider some potential internal-agent
cases. Antares seems to give us just
such a case. Furthermore, although
Dretske’s criterion is not exactly an anti-nesting principle in the sense of
Section 2, it is subject to the same concerns.
In its broad form it seems unmotivated, except by a desire to exclude
the very cases in dispute; it seems improperly to exclude the Antareans, at
least on the assumption that the individual Antarean ants are “others” in the
relevant sense; and it brings new counterintuitive consequences in its train,
such as loss of consciousness upon inhaling Planck-scale people whose actions
are smoothly incorporated into one’s brain functions. On Dretske’s proposed principle, as on the
anti-nesting principles of Section 2, entities that behave identically on a
large scale and have superficially similar evolutionary and developmental
histories might either have or lack consciousness depending on micro-level
differences that are seemingly unreportable (to them), unintrospectible (to
them), unrelated to what they say about Proust, and thus, it seems natural to
suppose, irrelevant.
Dretske
conceives his criterion as dividing “natural” representations from
“conventional” or artificial ones. Maybe
it is reasonable to insist that a conscious being have natural
representations. But from a telescopic
perspective national groups and their representational activities are eminently natural – as natural as
the structures and activities of groups of cells clustered into spatially
contiguous individual organisms. What
should matter on a broadly Dretskean approach, I’m
inclined to think, is that the representational functions emerge naturally from
within rather than being imposed artificially from outside, and that they are
properly ascribed to the whole entity rather than only to a subpart. Both Antarean opinions about Shakespeare and
the official U.S. position on Iran’s nuclear program meet these criteria.
Daniel Dennett, in correspondence,
offers a pragmatic objection: To the extent the United States is radically
unlike individual human beings, it’s unhelpful to ascribe consciousness to
it. Its behavior is impoverished
compared to ours and its functional architecture radically unlike our own. Ascribing consciousness to the United States
is not so much straightforwardly false, Dennett suggests, as it is misleading,
inviting the reader to too closely assimilate human architecture and group
architecture.
To this objection I respond, first, that
the United States is not behaviorally impoverished. It does lots of things, as described in
Sections 4 and 5 above – probably more than any individual human does. (In this way it differs from the aggregate of
the U.S., Germany, and South Africa, and maybe also from the aggregate of all
of humanity.) Second, to hang the
metaphysics of consciousness on specific details of architecture runs counter
to the spirit that admits the Sirians and Antareans to the realm of beings who
would (hypothetically) be conscious.
Thus it risks collapse into neurochauvinism (Section 7 below). And third, we can presumably dodge such
practical worries about leaping to assimilative inferences by being restrained
in our inferences. We can refrain from
assuming, for example, that when the U.S. is angry its anger is felt, phenomenologically, as anything like the anger of
individual human beings; we can even insist that “anger” is not a great word
but simply the best we can do with existing language. The U.S. can’t feel blood rush to its head; it
can’t feel tension in its arms; it can’t “see red”. It can muster its armies, denounce the
offender via spokespeople in Security Council meetings, and enforce an
embargo. What it feels like, if
anything, to enforce an embargo, defenders of U.S. consciousness can wisely
refrain from claiming to know.
Riffling through existing theories of
consciousness, we could try to find, or we could invent, some necessary
condition for consciousness that human beings meet, that the United States
fails to meet, and that sweeps in at least some of the more plausibly conscious
non-human entities. I would not object
to responding to my argument in that way, that is, as a challenge: Let’s work
out a criterion that delivers this appealing conclusion! However, I still worry, first, that this is
suspiciously post-hoc, and second, that the resulting theory will lack the kind
of elegant simplicity of a materialism that treats Earthly rabbits, Sirian
squidbits, Antarean antheads, and the United States as all on a par due to
their broad behavioral and functional similarities.
Alternatively, some readers – perhaps
especially empirically-oriented readers – might suggest that my argument does
little other than display the bankruptcy of metaphysical speculation about
bizarre cases. How could we hope to build
any serious theory on science-fictional intuitions? I sympathize with this reaction too. Perhaps we should abandon any aspiration for
a truly universal metaphysics that would cover the whole range of bizarre
possibilities. But this reaction wouldn’t
give us much guidance about the question of U.S. consciousness, if we are
suspicious enough of common sense to think that our commonsensical reactions do
not decisively settle the question.
Despite my sympathies with skepticism about the metaphysics of bizarre
cases, I want, and I think it’s reasonable to want, at least a conditional
assessment or best guess about whether we are parts of a larger conscious
entity, and I see no better way to
try to reach such a tentative assessment.
7. Three Ways Out.
Let’s briefly consider three more
conservative views about the distribution of consciousness in the universe, to
see if they can provide a suitable exit from the bizarre conclusion that the
United States is literally conscious.
Eliminativism. Maybe the United States isn’t conscious
because nobody is conscious – not
you, not me, not rabbits, not aliens.
Maybe “consciousness” is such a corrupt, broken concept, embedded in
such a radically false worldview, that we should discard it entirely, as we
discarded the concepts of demonic possession, the luminiferous
ether, and the fates.
In this essay, I have tried to use the
concept of consciousness in a plain
way, unadorned with dubious commitments like irreducibility, immateriality, and
infallible self-knowledge. Maybe I have
failed, but then I hope you will permit me to rephrase: Whatever it is about us
in virtue of which human beings and rabbits have appropriately unadorned
quasi-consciousness or consciousness*, the United States has that same thing.
The most visible philosophical
eliminativists about terms from folk psychology still seem to have room in
their theories for consciousness, suitably stripped of dubious commitments.[22] So if you tread this path, you’re going
farther than they. In fact, Paul
Churchland (1984/1988) says several things that seem, jointly, to commit him to
accepting the idea that cities or countries would be conscious (though he
doesn’t to my knowledge explicitly draw the conclusion).[23] Galen Strawson says that denying the
existence of conscious experience is “the strangest thing that has ever
happened in the whole history of human thought” (2006, p. 5). Strawson’s remark
underestimates, I suspect, the strangeness of religion; but still, radical eliminativism seems at least as bizarre as believing that
the United States is conscious.
Extreme sparseness. Here’s
another way out for the materialist: Argue that consciousness is rare, so that
really only very specific types of systems possess it, and then argue that the
United States doesn’t meet the restrictive criteria. If the criteria are specifically neural, this position is
neurochauvinism, which I will discuss shortly.
Setting aside neurochauvinism, the most commonly endorsed extreme
sparseness view is one on which language
is required for consciousness. Thus,
dogs, wild apes, and human infants aren’t conscious. There’s nothing it’s like to be such beings,
any more than there is something it’s like (most people think) to be a diode or
a fleck of dust. To a dog, all is dark
inside, or rather, not even dark. This
view is both highly counterintuitive and, I suspect, a gross overestimation of
the gulf between us and our nearest relatives.
However, it’s not clear that we get to
exclude U.S. consciousness by requiring language for consciousness, since the
United States does seemingly speak as a collective entity, as I’ve mentioned. It linguistically threatens and
self-represents, and these threats and self-representations influence the
linguistic and non-linguistic behavior of other nations. If the materialist is to deny U.S.
consciousness on grounds of a general commitment to the sparseness of
consciousness in the universe, then even more severe restrictions are required,
or at least different ones. Perhaps
phenomenal consciousness requires the ability to self-report the existence of
phenomenal consciousness? Then even
four-year-olds might not have it. This
seems a tough road.
Neurochauvinism. A third way out is to assume that
consciousness requires neurons –
neurons clumped together in the right way, communicating by ion channels and
all that, rather than by voice and gesture.
All the entities that we have actually met and that we normally regard
as conscious do have their neurons bundled in that way, and the roughly 3 x 10^19 neurons of the United States (about 10^11 neurons in each of some 3 x 10^8 people) are not as a whole bundled that way.
Examples from Ned Block (1978/1991) and
John Searle (1980, 1984) lend intuitive support to this view. Suppose we arranged the people of China into
a giant communicative network resembling the functional network instantiated by
the human brain. It would be absurd,
Block says, to regard such an entity as conscious (though see Lycan 1981). Similarly, Searle asserts that no arrangement
of beer cans, wire, and windmills, however cleverly arranged, could ever host a
genuine stream of conscious experience (though see Cuda
1985). According to Block and Searle,
what these entities are lacking isn’t a matter of large-scale functional
structure revealed in patterns of input-output relations. Consciousness requires not that, or not only
that; consciousness requires human biology.
Or rather, consciousness, on this view,
requires something like human
biology. In what way like? Here Block and Searle aren’t very
helpful. According to Searle, “any
system capable of causing consciousness must be capable of duplicating the
causal powers of the brain” (1992, p. 92).
In principle, Searle suggests, this could be achieved by “altogether
different” physical mechanisms. But what
mechanisms could do this and what mechanisms could not, Searle makes no attempt
to adjudicate, other than by excluding certain systems, like beer-can systems,
as plainly the wrong sort of thing.
Instead, Searle gestures hopefully at future science.
The reason for not strictly insisting on
neurons, I suspect, is this: If we’re playing the common sense game – that is,
if bizarreness by the standards of current common sense is our reason for
excluding beer-can systems and organized groups of people – then we’re going to
have to allow the possibility, at least in principle, of conscious beings from
other planets who operate other than by neural systems like our own. By whatever commonsense or intuitive
standards we judge beer-can systems nonconscious, by those very same standards,
it seems, we would judge hypothetical Martians, with different internal biology
but intelligent-seeming outward behavior, to be conscious.
From a cosmological perspective it would
be strange to suppose that of all the possible beings in the universe that are
capable of sophisticated, self-preserving, goal-directed environmental
responsiveness, beings that could presumably be (and in a vast enough universe
presumably actually are) constructed in myriad strange and diverse ways,
somehow only we with our neurons have genuine conscious experience, and all
else are mere automata that there is nothing it is like to be. Contrary to the “Copernican Principle” of
cosmological method,[24]
this view would suggest that we, as the sole possessors of consciousness, are
in a uniquely favored position in the universe.
How lucky we are! (The other
beings, I suppose, only say they’re
lucky, or only emit noises that we would mistakenly regard as having that
semantic content.) For this reason, it
seems not only unintuitive but also scientifically unjustified to suppose that
conscious experience requires Earthly biology.
It would be like supposing that life requires Earthly nucleotides.
If Block and Searle are to avoid un-Copernican
neuro-fetishism, the question must become: what feature of neurons, possibly also
possessed by non-neural systems, gives rise to consciousness? In other words, we are back with the question
of Section 5 – what is so special about brains? – and the only well-developed
answers on the near horizon seem to involve appeals to the sorts of features
that the United States has, features like massively complex informational
integration, functionally directed self-monitoring, and a long-standing history
of sophisticated environmental responsiveness.
8. Conclusion.
In sum, the argument is this. There seems to be no principled reason to
deny entityhood, or entityhood-enough, to spatially distributed but
informationally integrated beings. So
the United States is at least a candidate
for the literal possession of real psychological states, including
consciousness. Once we view the United
States in this way, the question then becomes whether it meets plausible
materialistic criteria for consciousness.
My suggestion is that if those criteria are liberal enough to include
both small mammals and highly intelligent aliens, then the United States
probably does meet those criteria.
Although that conclusion seems absurdly
bizarre, even a passing glance at contemporary physics and metaphysics suggests
that common sense is no sure guide to fundamental reality (a point I develop
farther in Schwitzgebel in draft). Large
things are hard to see properly when you’re in their midst. The homunculi in your head, the tourist in
Leibniz’s mill, they don’t see consciousness either.[25] Too vivid an appreciation of the local mechanisms
overwhelms their view.
The space between us is an airy synapse.
If the United States is conscious, is
Exxon-Mobil? Is an aircraft carrier?[26] Are the seven dwarfs of Snow White? And if such
entities are conscious, do they have rights?
Is dissolution murder? The bizarrenesses multiply, and I worry about the moral implications. But then, if such entities are
conscious (in some small way?), perhaps I should
worry. I don’t know.
In fact, I don’t even know whether I
have provided grounds for believing that the United States is conscious, or
instead a challenge to existing materialist theories of consciousness, or
instead reasons to be wary in general of ambitions toward a universal
metaphysics of mind. Whoops, I hear
something knocking on my office door....[27]
References:
Allen,
Colin (1995/2010). Animal
consciousness. Stanford Encyclopedia of Philosophy (Winter 2011 edition).
Arico,
Adam (2010). Folk psychology,
consciousness, and context effects. Review of Philosophy and Psychology, 1,
371-393.
Averroës
(Ibn Rushd) (12th
c./2009). Long commentary on the De Anima of Aristotle, trans. R.C.
Taylor. New Haven: Yale.
Baars,
Bernard J. (1988). A cognitive theory of consciousness. Cambridge: Cambridge.
Balduzzi,
David, and Giulio Tononi (2009). Qualia:
The geometry of integrated information. PLoS Computational Biology, 5: 8.
Barnett,
David (2008). The simplicity intuition
and its hidden influence on philosophy of mind.
Noûs, 42, 308-335.
Barnett,
David (2010). You are simple. In The
waning of materialism, ed. R.C. Koons and G. Bealer. Oxford:
Oxford.
Beisbart,
Claus, and Tobias Jung (2006).
Privileged, typical, or not even that?
Our place in the world according to the Copernican and Cosmological
Principles. Journal for General Philosophy of Science, 37, 225-256.
Bettencourt,
B. Ann, Marilynn B. Brewer, Marian Rogers Croak, and Norman Miller (1992). Cooperation and the reduction of intergroup
bias: The role of reward structure and social orientation. Journal
of Experimental Social Psychology, 28, 301-319.
Block,
Ned (1978/1991). Troubles with
functionalism. In The nature of mind, ed. D.M. Rosenthal. Oxford: Oxford.
Block,
Ned (2002). The harder problem of
consciousness. Journal of Philosophy, 99, 391-425.
Bondi,
Hermann (1952/1968). Cosmology, 2nd ed.
Cambridge: Cambridge.
Bratman,
Michael (1999). Faces of intention.
Cambridge: Cambridge.
Burge,
Tyler (1979). Individualism and the
mental. Midwest Studies in Philosophy, 4, 73-122.
Campbell,
Donald T. (1958). Common fate,
similarity, and other indices of the status of aggregates of persons as social
entities. Behavioral Science, 3, 14-25.
Carey,
Susan (2009). The origin of concepts. Oxford:
Oxford.
Carruthers,
Peter (1996). Language, thought and consciousness. Cambridge: Cambridge.
Carruthers,
Peter (1998). Animal subjectivity. Psyche,
4: 3.
Carruthers,
Peter (2001/2011). Higher-order theories
of consciousness. Stanford Encyclopedia of Philosophy (Fall 2011 edition).
Chomsky,
Noam (2009). The mysteries of nature:
How deeply hidden? Journal of Philosophy, 106, 167-200.
Churchland,
Patricia S. (1983). Consciousness: The
transmutation of a concept. Pacific Philosophical Quarterly, 64,
80-95.
Churchland,
Patricia S. (2002). Brain-wise. Cambridge, MA:
MIT.
Churchland,
Paul M. (1981). Eliminative materialism
and the propositional attitudes. Journal of Philosophy, 78, 67-90.
Churchland,
Paul M. (1984/1988). Matter and consciousness, rev. ed. Cambridge, MA: MIT.
Clark,
Andy (2009). Spreading the joy? Why the machinery of consciousness is
(probably) still in the head. Mind, 118, 963-993.
Clark,
Austen (1994). Beliefs and desires
incorporated. Journal of Philosophy, 91, 404-425.
Crick,
Francis (1994). The astonishing hypothesis.
New York: Charles Scribner’s Sons.
Cuda,
Tom (1985). Against neural
chauvinism. Philosophical Studies, 48, 111-127.
Davidson,
Donald (1987). Knowing one’s own mind. Proceedings and Addresses of the American
Philosophical Association, 61,
441–58.
Dennett,
Daniel C. (1991). Consciousness explained.
Boston: Little, Brown, and Company.
Dennett,
Daniel C. (1996). Kinds of minds. New York: BasicBooks.
Dennett,
Daniel C. (1998). Brainchildren. Cambridge,
MA: MIT.
Dennett,
Daniel C. (2005). Sweet dreams. Cambridge, MA:
MIT.
Descartes, René (1649/1991). Letter to More, 5 Feb. 1649. In The philosophical writings of Descartes, vol. 3, ed. J. Cottingham, R. Stoothoff, D. Murdoch, and A. Kenny. Cambridge: Cambridge.
Dretske,
Fred (1988). Explaining behavior.
Cambridge, MA: MIT.
Dretske,
Fred (1995). Naturalizing the mind.
Cambridge, MA: MIT.
Edelman, Shimon (2008). Computing the mind. Oxford: Oxford.
Egan, Greg (1992).
Closer. Eidolon, vol. 8. Available at http://www.eidolon.net/old_site/issue_09/09_closr.htm.
Elder,
Crawford (2011). Familiar objects and their shadows.
Cambridge: Cambridge.
Espinas,
Alfred (1877/1924). Des sociétés animales,
3rd ed. Paris: Félix Alcan.
Fodor, Jerry A. (1968). The appeal to tacit knowledge in psychological
explanation. Journal of Philosophy, 65, 627-640.
Frankish, Keith (2012). Quining diet qualia. Consciousness and Cognition, 21,
667-676.
Gendler, Tamar Szabó
(2008a). Alief
and belief. Journal of
Philosophy, 105, 634–663.
Gendler, Tamar Szabó (2008b). Alief in action,
and reaction. Mind & Language, 23, 552–585.
Gilbert,
Margaret (1989). On social facts. Princeton,
NJ: Princeton.
Godfrey-Smith,
Peter (2009). Darwinian populations and natural selection. Oxford: Oxford.
Gopnik,
Alison, and Eric Schwitzgebel (1998).
Whose concepts are they, anyway?
The role of philosophical intuition in empirical psychology. In Rethinking
intuition, ed. M.R. DePaul and W. Ramsey.
Lanham: Rowman and Littlefield.
Greene, Brian (2011). The
hidden reality. New York: Vintage.
Haslanger, Sally (2008). Changing the ideology and culture of philosophy: Not by reason
(alone). Hypatia, 23, 210-22.
Hilbert,
Martin, and Priscila López (2011). The world’s technological capacity to store,
communicate, and compute information. Science, 332, 60-65.
Hill,
Christopher S. (1991). Sensations. Cambridge: Cambridge.
Hill,
Christopher S. (2009). Consciousness. Cambridge: Cambridge.
Huebner,
Bryce (forthcoming). Macrocognition. Oxford: Oxford.
Huebner,
Bryce, Michael Bruno, and Hagop Sarkissian (2010). What does the nation of China think about
phenomenal states? Review of Philosophy and Psychology, 1, 225-243.
Hume,
David (1740/1978). A treatise of human nature, ed. L.A. Selby-Bigge
and P.H. Nidditch.
Oxford: Oxford.
Hurley,
Susan (1998). Consciousness in action.
Cambridge, MA: Harvard.
Hutchins,
Edwin (1995). Cognition in the wild.
Cambridge, MA: MIT.
Kant,
Immanuel (1781/1787/1998). Critique of pure reason, ed. and trans.
P. Guyer and A.W. Wood. Cambridge: Cambridge.
Knobe,
Joshua, and Jesse Prinz (2008). Intuitions about consciousness: Experimental
studies. Phenomenology and the Cognitive Sciences, 7, 67-83.
Koch,
Christof (2012).
Consciousness: Confessions of a
romantic reductionist. Cambridge,
MA: MIT.
Korman,
Daniel (2011). Ordinary objects. Stanford
Encyclopedia of Philosophy (Winter 2011 edition).
Kornblith,
Hilary (1998). The role of intuition in
philosophical inquiry: An account with no unnatural ingredients. In Rethinking
intuition, ed. M.R. DePaul and W. Ramsey.
Lanham: Rowman and Littlefield.
Kurzweil,
Ray (2005). The singularity is near. New
York: Penguin.
Ladyman,
James, and Don Ross (2007). Every thing must go. Oxford: Oxford.
Leibniz,
G.W. (1714/1989). The principles of
philosophy, or, the monadology. In Philosophical
Essays, ed. and trans. R. Ariew and D.
Garber. Indianapolis: Hackett.
Lewis,
David K. (1980). Mad pain and Martian
pain. In Readings in philosophy of psychology, ed. N. Block. Cambridge, MA: Harvard.
List,
Christian, and Philip Pettit (2011). Group agency. Oxford: Oxford.
Lycan,
William G. (1981). Form, function, and
feel. Journal of Philosophy, 78, 24-50.
Madden,
Rory (forthcoming). The naive topology
of the conscious subject. Noûs.
Mandik,
Pete, and Josh Weisberg (2008). Type Q
materialism. In Naturalism, reference, and ontology, ed. C.B. Wrenn. New York: Peter Lang.
Maynard
Smith, John, and Eors Szathmáry
(1995). The major transitions in evolution.
Oxford: Oxford.
McDougall,
William (1920). The group mind. New York:
Putnam.
McLaughlin,
Brian (2007). Type materialism for
phenomenal consciousness. In The Blackwell companion to consciousness,
ed. M. Velmans and S. Schneider. Malden, MA: Blackwell.
Metzinger,
Thomas (2003). Being no one. Cambridge, MA:
MIT.
Millikan,
Ruth Garrett (1984). Language, thought, and other biological
categories. Cambridge, MA: MIT.
Millikan,
Ruth Garrett (2010). On knowing the
meaning: With a coda on Swampman. Mind, 119, 43-81.
Montero,
Barbara (1999). The body problem. Noûs, 33, 183-200.
Moravec,
Hans (1997). When will computer hardware
match the human brain? Available at
http://www.transhumanist.com/volume1/moravec.htm (accessed June 1, 2012).
Noë,
Alva (2004). Action in perception.
Cambridge, MA: MIT.
Parfit,
Derek (1984). Reasons and persons. Oxford:
Oxford.
Petty, Richard E., Russell H. Fazio, and Pablo Briñol, eds. (2009).
Attitudes: Insights from the new implicit measures. New York: Taylor and Francis.
Phelan, Mark, Adam Arico, and Shaun Nichols
(forthcoming). Thinking
things and feeling things: On an alleged discontinuity in folk metaphysics of mind.
Polger,
Thomas W. (2004). Natural minds. Cambridge,
MA: MIT.
Putnam, Hilary (1965). Psychological predicates. In Art,
mind, and religion, ed. W.H. Capitan and D.D. Merrill. Pittsburgh: University of Pittsburgh Press.
Putnam,
Hilary (1975). Mind, language and reality.
London: Cambridge.
Rockwell,
Teed (2005). Neither brain nor ghost. Cambridge, MA: MIT.
Rupert, R.
(2005). Minding one’s own
cognitive system: When is a group of minds a single cognitive unit? Episteme,
1, 177-88.
Ryle,
Gilbert (1949). The concept of mind. New
York: Barnes & Noble.
Saul,
Jennifer (forthcoming). Implicit bias,
stereotype threat and women in philosophy.
In Women in philosophy, ed. F.
Jenkins and K. Hutchison.
Schäffle,
Albert E. F. (1875/1896). Bau und Leben des socialen
Körpers, 2nd ed. Tübingen: Laupp’schen.
Scholl,
Brian (2007). Object persistence in
philosophy and psychology. Mind & Language, 22, 563-591.
Schwitzgebel,
Eric (2010). Acting contrary to our
professed beliefs, or the gulf between occurrent
judgment and dispositional belief.
Pacific Philosophical Quarterly, 91, 531-553.
Schwitzgebel,
Eric (2012a). Why Dennett should think
that the United States is conscious.
Blog post at The Splintered Mind (http://schwitzsplinters.blogspot.com),
Feb. 9, 2012.
Schwitzgebel,
Eric (2012b). Why Dretske
should think that the United States is conscious. Blog post at The Splintered Mind
(http://schwitzsplinters.blogspot.com), Feb. 17, 2012.
Schwitzgebel,
Eric (2012c). Why Humphrey should think
that the United States is conscious.
Blog post at The Splintered Mind (http://schwitzsplinters.blogspot.com),
Mar. 8, 2012.
Schwitzgebel,
Eric (2012d). Why Tononi should think
that the United States is conscious.
Blog post at The Splintered Mind (http://schwitzsplinters.blogspot.com),
Mar. 23, 2012.
Schwitzgebel,
Eric (2012e). Why Tononi should allow
that conscious entities can have conscious parts. Blog post at The Splintered Mind
(http://schwitzsplinters.blogspot.com), June 6, 2012.
Schwitzgebel,
Eric (in draft). The crazyist
metaphysics of mind. Available at
http://faculty.ucr.edu/~eschwitz
Searle,
John (1980). Minds, brains, and
programs. Behavioral and Brain Sciences, 3, 417-457.
Searle,
John (1984). Minds, brains, and science.
Cambridge, MA: Harvard.
Searle,
John (1992). The rediscovery of the mind.
Cambridge, MA: MIT.
Searle,
John (2010). Making the social world.
Oxford: Oxford.
Sober,
Elliott, and David Sloan Wilson (1998). Unto others.
Cambridge, MA: Harvard.
Spelke,
Elizabeth S., Karen Breinlinger, Janet Macomber, and Kristen Jacobson (1992). Origins of knowledge. Psychological
Review, 99, 605-632.
Stich,
Stephen (1983). From folk psychology to cognitive science. Cambridge, MA: MIT.
Stich,
Stephen (2009). Five answers. In Mind
and consciousness, ed. P. Grim. Automatic Press.
Stock,
Gregory (1993). Metaman. Toronto: Doubleday Canada.
Stoljar,
Daniel (2010). Physicalism. Oxford: Routledge.
Strawson,
Galen (2006). Consciousness and its place in nature. Exeter: Imprint Academic.
Strawson,
P.F. (1959). Individuals. London:
Methuen.
Sytsma,
Justin M., and Edouard Machery (2010).
Two conceptions of subjective experience. Philosophical Studies, 151, 299-327.
Teilhard
de Chardin, Pierre (1955/1965). The
phenomenon of man, rev. English ed., trans. B. Wall. New York: Harper & Row.
Tononi,
Giulio (2004). An information
integration theory of consciousness. BMC Neuroscience, 5: 42.
Tononi,
Giulio (2008). Consciousness as
integrated information: A provisional manifesto. Biological
Bulletin, 215, 216-242.
Tononi,
Giulio (2010). Information integration:
Its relevance to brain function and consciousness. Archives
Italiennes de Biologie,
148, 299-322.
Tononi,
Giulio (2012). Phi. New York: Pantheon.
Tononi,
Giulio (forthcoming). The integrated
information theory of consciousness: An updated account. Archives
Italiennes de Biologie.
Tuomela,
Raimo (2007). The philosophy of sociality. Oxford: Oxford.
Turing,
A.M. (1950). Computing machinery and
intelligence. Mind, 59, 433-460.
Vinge,
Vernor (1992).
A fire upon the deep. New York: Tor.
Vinge,
Vernor (2011).
Children of the sky. New York: Tor.
Wason,
P.C. (1968). Reasoning about a
rule. Quarterly Journal of Experimental Psychology, 20, 273-281.
Wilson,
Robert A. (2004). Boundaries of the mind.
Cambridge: Cambridge.
Wilson,
Robert A. (2005). Genes and the agents of life.
Cambridge: Cambridge.
Wittenbrink, Bernd, and Norbert Schwarz, eds. (2007). Implicit measures of attitudes. New York: Guilford.
Wundt,
Wilhelm (1897/1897). Outlines of psychology, trans. C. H.
Judd. Leipzig: Wilhelm Engelmann.
[1] For
purposes of this essay, I’m going to assume that we know, at least roughly,
what “material stuff” is. I recognize
that this assumption might be problematic.
Discussions include Montero 1999; Chomsky 2009; Stoljar
2010.
[2] The
empirical literature on folk opinion about group consciousness is more equivocal
than I would have thought, however. See Knobe and Prinz 2008; Sytsma and Machery 2010; Arico 2010; Huebner, Bruno, and
Sarkissian 2010; Phelan, Arico, and Nichols forthcoming.
Few scholars have clearly endorsed the
possibility of literal group consciousness.
On group minds without literal consciousness see McDougall 1920; Wilson
2004; and the recent literature on collective intentionality (e.g., Gilbert
1989; Clark 1994; Bratman 1999; Rupert 2005; Tuomela 2007; Searle 2010; List and Pettit 2011; Huebner
forthcoming).
For more radical views of group minds
see Espinas 1877/1924; Schäffle
1875/1896; maybe Wundt 1897/1897; maybe Strawson 1959 (none of whom were
materialists). Perhaps the best
developed group consciousness view – with some affinities to the present view,
though again not materialist – is that of Teilhard de
Chardin 1955/1965.
See also Lewis & Viharo’s “Google
Consciousness”, TEDxCardiff (June 9, 2011); Vernor Vinge’s science fiction
portrayal of group minds in Vinge 1992, 2011; Averroës
(Ibn Rushd) on the active
intellect, 12th c./2009; Edelman 2008, p. 432; Koch 2012, p.
131-134.
[3] I
develop this idea farther in Schwitzgebel in draft. Some others who doubt common sense as a guide
to metaphysics are Churchland 1981; Stich 1983; Gopnik
and Schwitzgebel 1998; Kornblith 1998; Dennett 2005; Ladyman and Ross 2007; Mandik and Weisberg 2008. Hume 1740/1978 and Kant 1781/1787/1998 are
also interesting on this issue, of course.
[5] See,
for example, the essays collected in Wittenbrink and Schwarz, eds., 2007; Petty, Fazio, and Briñol, eds., 2009.
Philosophical discussions include Gendler 2008a-b; Haslanger
2008; Schwitzgebel 2010; Saul forthcoming.
[6] Especially
if those parts move on different trajectories.
See, for example, Campbell 1958; Spelke, Breinlinger, Macomber, and Jacobson
1992; Scholl 2007; Carey 2009. See
Barnett 2008 and Madden forthcoming for arguments that we do not intuitively
attribute consciousness to scattered objects.
[10] See
also Barnett 2008, 2010; Madden forthcoming.
Barnett, like Putnam, seems to rely simply on an intuitive sense of
absurdity (2010, p. 162). In an earlier
work, Tononi (2010, note 9) discusses an anti-nesting principle without
endorsing it. There he states that such
a principle is “in line with the intuitions that each of us has a single,
sharply demarcated consciousness”. In
his more recent article, Tononi does not repeat his appeal to that intuition.
[11] In
conversation, Tononi has resisted this suggestion. However, it remains unclear to me what might
justify his resistance on this point, especially given his principle that
spatial and temporal grain are not intrinsically meaningful but rather should
be chosen to maximize the measure of integrated information. (I choose a temporal grain of one day and a
spatial grain of one node per voter.)
[12] For a review of
“type materialism” see McLaughlin 2007.
For more detail on how some of the options described in this paragraph
might play out, see Lewis 1980; Bechtel and Mundale
1999; Polger 2004; Hill 2009. Block 2002 illustrates the skeptical
consequences of embracing type identity without committing to some possibility
of broadly this sort.
[14] E.g., Hurley
1998; Noë 2004; Wilson 2004; Rockwell 2005.
[15] E.g., Putnam
1975; Burge 1979; Millikan 1984; Davidson 1987; Dretske
1988, 1995; Wilson 2004.
[16] See also Moravec 1997; Kurzweil 2005; Hilbert and López 2011.
[17] Also see
discussions of collective intentionality in List and Pettit 2011 and other work
cited in note 2.
[18] Two notable
exceptions are Robert A. Wilson (2004) and Bryce Huebner (forthcoming). Huebner seems open to the possibility of
group consciousness while refraining from endorsing it. Wilson I am inclined to read as rejecting
group consciousness on the grounds that it has been advocated only sparsely and
confusedly, with no advocate meeting a reasonable burden of proof. Edelman (2008) and Koch (2012) make passing
but favorable remarks about group consciousness, at least hypothetically. Tononi and Putnam I discuss in Section 2.
[19] For a review of
higher-order theories, see Carruthers 2001/2011.
[20] The theories I
chose were Dretske’s, Dennett’s, Humphrey’s, and Tononi’s. You can see some of my preliminary efforts in
blog posts Schwitzgebel 2012a-e (compare also Koch’s sympathetic 2012 treatment
of Tononi). On the most natural
interpretations of these four test-case views, I thought that readers
sympathetic with any of these authors’ general approaches ought to accept that
the United States is conscious. And I
confess I still do think that, despite protests from Dretske,
Dennett, Humphrey, and Tononi themselves in personal communication. See the comments section of Schwitzgebel
2012c for Humphrey’s reaction, the remainder of the present section for Dretske and Dennett, and Section 2 for Tononi.
[21] In his 1995
book, Dretske says that a representation is natural
if it is not "derived from the intentions and purposes of its designers,
builders, and users" (p. 7), rather than the more general criterion, above, of
independence from "others". In light of
our correspondence on group consciousness, he says that he has modified this
aspect of his view.
[22] P.M. Churchland
1984/1988; P.S. Churchland 2002; Stich 2009.
Contrast skepticism about loaded versions of “consciousness” or “qualia” in P.S. Churchland 1983; Dennett 1991; Frankish
2012.
[23] Churchland
characterizes as a living being “any semiclosed
system that exploits the order it already possesses, and the energy flux
through it, in such a way as to maintain and/or increase its internal order”
(1984/1988, p. 173). By this definition,
Churchland suggests, beehives, cities, and the entire biosphere all qualify as
living beings (ibid.). Consciousness and
intelligence, Churchland further suggests, are simply sophistications of this
basic pattern – cases in which the semiclosed system
exploits energy to increase the information it contains, including information
about its own internal states and processes (1984/1988, p. 173 and 178).
[24] Bondi 1952/1968; Beisbart and
Jung 2006.
[25] On the
homunculi, see e.g., Fodor 1968. Leibniz
imagines entering into an enlarged brain as into a mill in his 1714/1989.
[26] Hutchins 1995
vividly portrays distributed cognition in a military vessel. I don’t know whether he would extend his
conclusions to phenomenal consciousness, however.
[27] For helpful
discussion of these issues in the course of writing, thanks to Rachel Achs,
Santiago Arango, Scott Bakker, Zachary Barnett, Mark Biswas, Ned Block, David
Daedalus, Dan Dennett, Fred Dretske, Louie Favela,
Kirk Gable, Chris Hill, Linus Huang, Nick Humphrey, Enoch Lambert, Bill Lycan,
Pete Mandik, Tori McGeer, Luke Roelofs,
Giulio Tononi, Vernor Vinge, and Rob Wilson; to
audiences at University of Cincinnati, Princeton University, Tufts University,
University of Basque Country, and Bob Richardson’s seminar on extended
cognition; and to the many readers who posted comments on relevant posts on my
blog, The Splintered Mind. My wife
Pauline sensibly worries that I am too passionate in defending the rights of
antheads who, she says, don’t even really exist. My thirteen-year-old son Davy, however,
thinks that for the first time in my career I’m now actually writing about
something interesting.