The Moral Behavior
of Ethicists and the Power of Reason
Joshua Rust
Department of Philosophy
Stetson University
421 North Woodland Boulevard
DeLand, Florida 32723
Eric Schwitzgebel
Department of Philosophy
University of California at Riverside
Riverside, CA 92521-0201
February 4, 2013
The Moral Behavior of Ethicists and the Power of Reason
Professional
ethicists behave no morally better, on average, than do other professors. At least that’s what we have found in a
series of empirical studies that we will summarize below. Our results create a prima facie challenge
for a certain picture of the relationship between intellectual reasoning and
moral behavior – a picture on which explicit, intellectual cognition has substantial
power to change the moral opinions of the reasoner and thereby to change the
reasoner’s moral behavior. Call this
picture the Power of Reason
view. One alternative view has been prominently
defended by Jonathan Haidt. We might
call it the Weakness of Reason view,
or more colorfully the Rational Tail
view, after the headline metaphor of Haidt’s seminal 2001 article, “The
emotional dog and its rational tail” (in Haidt’s later 2012 book, the emotional
dog becomes an “intuitive dog”). According
to the Rational Tail view (which comes in different degrees of strength), emotion
or intuition drives moral opinion and moral behavior, and explicit forms of
intellectual cognition function mainly post-hoc, to justify and socially
communicate conclusions that flow from emotion or intuition. Haidt argues that our empirical results favor
his view (2012, p. 89). After all, if
intellectual styles of moral reasoning don’t detectably improve the behavior
even of professional ethicists who build their careers on expertise in such
reasoning, how much hope could there be for the rest of us to improve by such
means? While we agree with Haidt that
our results support the Rational Tail view over some rationalistic rivals, we
believe that other models of moral psychology are also consistent with our
findings, and some of these models reserve an important
role for reasoning in shaping the reasoner’s behavior and attitudes. Part One summarizes our empirical
findings. Part Two explores five
different theoretical models, including the Rational Tail view, that are
more or less consistent with those findings.
Part One: Our Empirical Studies
Missing library books.
Our first study (Schwitzgebel 2009) examined the rates at which ethics
books were missing from 32 leading academic libraries, compared to other
philosophy books, according to those libraries’ online catalogs. The primary analysis was confined to
relatively obscure books likely to be borrowed mostly by specialists in the
field – 275 books reviewed in Philosophical
Review between 1990 and 2001, excluding titles cited five or more times in
the Stanford Encyclopedia of Philosophy. Among these books, we found ethics books
somewhat more likely to be missing
than non-ethics books: 8.5% of the ethics books that were off the shelf were
listed as missing or as more than one year overdue, compared to 5.7% of the
non-ethics philosophy books that were off the shelf. This result holds despite a similar total
number of copies of ethics and non-ethics books held, similar total overall
checkout rates of ethics and non-ethics books, and a similar average
publication date of the books. In
another study, we found that classic pre-20th-century ethics texts were more
likely to be missing than comparable non-ethics texts.
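In statistical terms, the headline comparison is a simple two-proportion
test. Here is a minimal Python sketch of that analysis; the counts are
hypothetical, chosen only to match the reported percentages, since the raw
counts are not repeated here.

    # A minimal sketch of the two-proportion comparison above. Counts are
    # HYPOTHETICAL, chosen only to match the reported rates (8.5% vs. 5.7%
    # missing among off-the-shelf books).
    from statsmodels.stats.proportion import proportions_ztest

    missing = [34, 34]       # ethics, non-ethics books missing or >1 yr overdue
    off_shelf = [400, 600]   # hypothetical totals of off-the-shelf books

    z, p = proportions_ztest(missing, off_shelf)
    print(f"ethics {missing[0]/off_shelf[0]:.1%} vs. "
          f"non-ethics {missing[1]/off_shelf[1]:.1%}: z = {z:.2f}, p = {p:.3f}")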
Peer ratings. Our second study examined peer opinion about
the moral behavior of professional ethicists (Schwitzgebel and Rust 2009). We set up a table in a central location at
the 2007 Pacific Division meeting of the American Philosophical Association and
offered passersby gourmet chocolate in exchange for taking a “5-minute
philosophical-scientific questionnaire”, which they completed on the spot. One version of the questionnaire asked
respondents their opinion about the moral behavior of ethicists in general,
compared to other philosophers and compared to non-academics of similar social
background (with parallel questions about the moral behavior of specialists in
metaphysics and epistemology). Opinion
was divided: Overall, 36% of respondents rated ethicists morally better behaved
on average than other philosophers, 44% rated them about the same, and 19%
rated them worse. When ethicists’
behavior was compared to that of non-academics, opinion was split 50%-32%-18%
between better, same, and worse. Another
version of the questionnaire asked respondents to rate the moral behavior of
the individual ethicist in their department whose last name comes next in
alphabetical order, looping back from Z to A if necessary, with a comparison
question about the moral behavior of a similarly alphabetically chosen specialist
in metaphysics and epistemology. Opinion
was again split: 44% of all respondents rated the arbitrarily selected ethics
specialist better than they rated the arbitrarily selected M&E specialist, 26%
rated the ethicist the same, and 30% rated the ethicist worse. In both versions of the questionnaire, the
skew favoring the ethicists was driven primarily by respondents reporting a
specialization or competence in ethics, who tended to avoid rating ethicists
worse than others. Non-ethicist
philosophers tended to split about evenly between rating the ethicists better,
same, or worse.
Voting rates. We assume that regular participation in
public elections is a moral duty, or at least that it is morally better than
non-participation (though see Brennan 2011).
In an opinion survey to be described below, we found that over 80% of
sampled U.S. professors share that view.
Accordingly, we examined publicly available voter participation records
from five U.S. states, looking for name matches between voter rolls and online
lists of professors in nearby universities, excluding common and
multiply-appearing names (Schwitzgebel and Rust 2010). In this way, we estimated the voting
participation rates of four groups of professors: philosophical ethicists,
philosophers not specializing in ethics, political scientists, and professors
in departments other than philosophy and political science. We found that all four groups of professors
voted at approximately the same rates, except for the political science
professors, who voted about 10-15% more often than did the other groups. This result survived examination for
confounds due to gender, age, political party, and affiliation with a
research-oriented vs. teaching-oriented university.
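The core matching procedure can be sketched in a few lines of Python. All
names and records below are hypothetical, and the sketch ignores
complications (nicknames, middle initials, multiple rolls) that the real
analysis had to confront.

    # A rough sketch of the name-matching described above: professor names
    # are matched against a public voter roll, dropping any name appearing
    # more than once in either list, a stand-in for the study's exclusion
    # of common and multiply-appearing names. All names are HYPOTHETICAL.
    from collections import Counter

    professors = ["ann chu", "bo diaz", "ann chu", "raj iyer"]  # faculty web lists
    voter_roll = ["ann chu", "raj iyer", "sue park"]            # state records

    def unique(names):
        """Keep only names that appear exactly once in the list."""
        counts = Counter(names)
        return {n for n, c in counts.items() if c == 1}

    profs, voters = unique(professors), unique(voter_roll)
    matched = profs & voters  # professors with a unique voting record
    print(f"estimated participation: {len(matched)}/{len(profs)}")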
Courtesy at philosophy
conferences. While some rules
of etiquette can be morally indifferent or even pernicious, we follow Confucius
(5th c. BCE/2003), Karen Stohr (2012), and others in seeing polite,
respectful daily behavior as an important component of morality. With this in mind, we examined courteous and
discourteous behavior at meetings of the American Philosophical Association,
comparing ethics sessions with non-ethics sessions (Schwitzgebel, Rust, Huang,
Moore, and Coates 2012). We used three
measures of courtesy – talking audibly during the formal presentation, allowing
the door to slam when entering or exiting mid-session, and leaving behind
litter at one’s seat – across 2800 audience-hours of sessions at four different
APA meetings. None of the three measures
revealed any statistically detectable differences in courtesy. Audible talking (excluding brief, polite
remarks like “thank you” for a handout) was rare: .010 instances per audience
hour in the ethics sessions vs. .009 instances per audience hour in the
non-ethics sessions (two-proportion z test, p = .77). The median rate of door-slamming per session
(compared to mid-session entries and exits in which the audience member
attempted to shut the door quietly) was 18.2% for the ethics sessions and 15.4%
for the non-ethics sessions (Mann-Whitney test, p = .95). Finally, ethicists were not detectably less likely
than non-ethicists to leave behind cups (16.8% vs. 17.8% per audience member,
two-proportion z test, p = .48) or trash (11.6% vs. 11.8%, two-proportion z
test, p = .87). The latter result
survives examination for confounds due to session size, time of day, and
whether paper handouts were provided.
However, we did find that the audience members in environmental ethics sessions left behind less trash than did the
audience in all other sessions combined (3.0% vs. 11.9%).
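Since door-slamming was compared as per-session rates rather than pooled
counts, a nonparametric test was appropriate. A minimal sketch follows; the
per-session rates are hypothetical stand-ins, and only the medians reported
above come from the study.

    # A minimal sketch of the session-level door-slam comparison, using a
    # Mann-Whitney test on per-session rates. Rates are HYPOTHETICAL
    # stand-ins; only the reported medians (18.2% vs. 15.4%) come from
    # the study.
    from scipy.stats import mannwhitneyu

    ethics = [0.00, 0.10, 0.182, 0.25, 0.40]      # slam rate, ethics sessions
    non_ethics = [0.00, 0.08, 0.154, 0.22, 0.33]  # slam rate, non-ethics sessions

    u, p = mannwhitneyu(ethics, non_ethics, alternative="two-sided")
    print(f"U = {u}, p = {p:.2f}")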
APA free
riding. We assume a prima facie duty
for program participants in philosophy conferences to pay the modest
registration fees that the organizers of those conferences typically
charge. However, until recently the
American Philosophical Association had no mechanism to enforce conference
registration, which resulted in a substantial free-riding problem. With this in mind, we examined the Pacific
Division APA programs from 2006-2008, classifying sessions into ethics,
non-ethics, or excluded. We then
examined the registration compliance of program participants in ethics sessions
vs. program participants in non-ethics sessions by comparing anonymously
encrypted lists of participants in those sessions (participants with common
names excluded) to similarly encrypted lists of people who had paid their
registration fees (Schwitzgebel forthcoming).
(Although the APA Pacific Division generously supplied the encrypted
data, this research was neither solicited by nor
conducted on behalf of the APA or the Pacific Division.) During the period under study, ethicists
appear to have paid their conference registration fees at about the same rate
as did non-ethicist philosophers (74% vs. 76%, two-proportion z test, p = .43). This result survives examination for
confounds due to gender, institutional prestige, program role, year, and status
as a faculty member vs. graduate student.
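The lists are described above only as anonymously encrypted; one way such
privacy-preserving matching can be implemented is with a keyed one-way hash
applied identically to both lists, as in the hypothetical sketch below. The
key, names, and scheme are illustrative only, not the method actually used.

    # One HYPOTHETICAL way to implement privacy-preserving list matching:
    # both lists pass through the same keyed one-way hash, so matches can
    # be counted without handling names in the clear.
    import hmac, hashlib

    KEY = b"shared-secret"  # hypothetical key held by the encrypting party

    def pseudonym(name: str) -> str:
        """Keyed one-way hash of a normalized name."""
        normalized = " ".join(name.lower().split())
        return hmac.new(KEY, normalized.encode(), hashlib.sha256).hexdigest()

    participants = {pseudonym(n) for n in ["A. Kim", "B. Osei"]}  # hypothetical
    registrants = {pseudonym(n) for n in ["B. Osei", "C. Ruiz"]}  # hypothetical

    paid = len(participants & registrants)
    print(f"registration compliance: {paid}/{len(participants)}")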
Responsiveness to student emails. Yet another study examined the rates at
which ethicists responded to brief email messages designed to look as though
written by undergraduates (Rust and Schwitzgebel forthcoming). We sent three email messages – one asking
about office hours, one asking for the name of the undergraduate advisor, and
one inquiring about an upcoming course – to ethicists, non-ethicist
philosophers, and a comparison group of professors in other departments,
drawing from online faculty lists at universities across several U.S.
states. All messages addressed the
faculty member by name, and some included additional specific information such
as the name of the department or the name of an upcoming course the professor
was scheduled to teach. The messages
were checked against several spam filters, and we had direct confirmation
through various means that over 90% of the target email addresses were actively
checked. Overall, ethicists responded to
62% of our messages, compared to a 59% response rate for non-ethicist
philosophers and 58% for non-philosophers – a difference that doesn’t approach
statistical significance despite (we’re somewhat embarrassed to confess) 3,109
total trials (χ² test, p = .18).
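The overall comparison amounts to a chi-square test on a 3×2 table of
response counts. In the sketch below the group sizes are hypothetical,
chosen only to match the reported rates and the 3,109-message total; the
reported p value comes from the study itself.

    # A minimal sketch of the overall comparison as a 3x2 chi-square test.
    # Group sizes are HYPOTHETICAL, chosen to match the reported response
    # rates (62%, 59%, 58%) and the 3,109-message total.
    from scipy.stats import chi2_contingency

    #         responded, no response
    table = [[642, 1036 - 642],   # ethicists
             [611, 1036 - 611],   # non-ethicist philosophers
             [601, 1037 - 601]]   # professors in other departments

    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")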
Self-reported attitudes and
behavior. Our most recent
study examined ethicists’, non-ethicist philosophers’, and non-philosophers’
self-reported attitudes and behavior on a number of issues including membership
in disciplinary societies, voting, staying in touch with one’s mother,
vegetarianism, organ and blood donation, responsiveness to student emails,
charity, and honesty in responding to survey questionnaires. The survey was sent to about a thousand professors
in five different U.S. states, with an overall response rate of 58% or about
200 respondents in each of the three groups.
Identifying information was encrypted for participants’ privacy. On some issues – voting, email
responsiveness, charitable donation, societal membership, and survey response
honesty – we also had direct, similarly encrypted, observational measures of
behavior that we could compare with self-report. Aggregating across the various measures, we
found no difference among the groups in overall self-reported moral behavior,
in the accuracy of the self-reports for those measures where we had direct
observational evidence, or in the correlation between expressed normative
attitude and either self-reported or directly observed behavior. The one systematic difference we did find was
this: Across several measures – vegetarianism, charitable donation, and organ
and blood donation – ethicists appeared to embrace more stringent moral views
than did non-philosophers, while non-ethicist philosophers held views of
intermediate stringency. However, this
increased stringency of attitude was not unequivocally reflected in ethicists’
behavior.
This
last point is best seen by examining the two measures on which we had the best
antecedent hope that ethicists would show moral differences from non-ethicists:
vegetarianism and charitable donation.
Both issues are widely discussed among ethicists, who tend to have
comparatively sophisticated philosophical opinions about these matters, and
professors appear to exhibit large differences in personal rates of charitable
donation and in meat consumption.
Furthermore, ethicists’ stances on these issues are directly connected
to specific, concrete behaviors that they can either explicitly implement or
not (e.g., to donate 10% annually to famine relief; to refrain from eating the
meat of such-and-such animals). This
contrasts with exhortations like “be a kinder person” that are difficult to
straightforwardly implement or to know if one has implemented. Looking, then, in more detail at our findings
on vegetarianism and charitable donation:
Self-reported attitude and behavior: eating
meat. We solicited normative
attitude about eating meat by asking respondents to rate “regularly eating the
meat of mammals such as beef or pork” on a nine-point scale from “very morally
bad” to “very morally good” with the midpoint marked “morally neutral”. On this normative question, there were large differences
among the groups: 60% of ethicist respondents rated meat-eating somewhere on
the bad side of the scale, compared to 45% of non-ethicist philosophers and
only 19% of professors from other departments.
Later in the survey we posed two behavioral questions. First, we asked “During about how many meals
or snacks per week do you eat the meat of mammals such as beef or pork?” Next, we asked “Think back on your last
evening meal, not including snacks. Did
you eat the meat of a mammal during that meal?”
On the meals-per-week question, we found a modest difference among the
groups: Ethicists reported a mean of 4.1 meals per week, compared to 4.6 for
non-ethicist philosophers and 5.3 for non-philosophers. We also found 27% of ethicists to report no
meat consumption (zero meat meals per week), compared to 20% of non-ethicist
philosophers and 13% of non-philosophers.
However, statistical evidence suggested that respondents were fudging
their meals-per-week answers: Self-reported meals per week was not
mathematically consistent with what one would expect given the numbers
reporting having eaten meat at the previous evening meal. And when asked about their previous evening
meal, the groups’ self-reports differed only marginally, with ethicists in the
intermediate group: 37% of ethicists reported having eaten the meat of a mammal
at their previous evening meal, compared to 33% of non-ethicist philosophers
and 45% of non-philosophers (χ² test, p = .06).
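One simplified version of such a consistency check, offered here purely for
illustration and not necessarily matching the exact method used: assume the
previous evening meal is a representative draw from the week's seven
dinners, and ask whether each group's two answers are mutually compatible.

    # A sketch of ONE possible consistency check (an illustration, not the
    # study's exact method). If the previous evening meal is a
    # representative draw from the week's seven dinners, a group reporting
    # a mean of z meat meals per week should report meat at that dinner at
    # a rate of roughly z/21 (meat spread over ~21 weekly meals) and at
    # most about min(z, 7)/7 (meat eaten only at dinner).
    groups = {  # (mean reported meat meals/week, % meat at last dinner)
        "ethicists":        (4.1, 0.37),
        "non-eth. phil.":   (4.6, 0.33),
        "non-philosophers": (5.3, 0.45),
    }
    for name, (z, observed) in groups.items():
        spread = z / 21              # benchmark: meat spread over all meals
        dinner_only = min(z, 7) / 7  # crude cap: all meat meals at dinner
        print(f"{name}: observed {observed:.0%}, "
              f"benchmarks {spread:.0%} to {dinner_only:.0%}")

On any such sketch, the two kinds of answer should at least move together
across groups; note, for instance, that non-ethicist philosophers report
more weekly meat meals than ethicists (4.6 vs. 4.1) yet fewer meaty previous
dinners (33% vs. 37%), tension of roughly the kind the fudging worry points
to.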
Self-reported attitude and behavior:
charity. We solicited normative opinion
about charity in two ways. First, we
asked respondents to rate “donating 10% of one’s income to charity” on the same
nine-point scale we used for the question about eating meat. Ethicists expressed the most approval, with
89% rating it as good and a mean rating of 7.5 on the scale, vs. 85% and 7.4
for non-ethicist philosophers and 73% and 7.1 for non-philosophers. Second, we asked what percentage of income
the typical professor should donate to charity (instructing participants to
enter “0” if they think it’s not the case that the typical professor should
donate to charity). 9% of ethicists
entered “0”, vs. 24% of non-ethicist philosophers and 25% of non-philosophers. Among those not entering “0”, the geometric
mean was 5.9% for the ethicists vs. 4.8% for both of the other groups. Later in the survey, we asked participants
what percentage of their income they personally had donated to charity in the
previous calendar year. Non-ethicist
philosophers reported having donated the least, but there was no statistically
detectable difference between the self-reported donation rates of the ethicists
and the non-philosophers. (Reporting
zero: 4% of ethicists vs. 10% of non-ethicist philosophers and 6% of
non-philosophers, χ² test, p = .052; geometric mean of the
non-zeros 3.7% vs. 2.6% vs. 3.6%, ANOVA, p = .004.) However, we also had one direct measure of
charitable behavior: Half of the survey recipients were given a charity
incentive to return the survey – $10 to be donated to their selection from
among Oxfam America, World Wildlife Fund, CARE, Make-a-Wish Foundation, Doctors
Without Borders, or American Red Cross. By this measure, the non-ethicist
philosophers showed up as the most
charitable, and in fact were the only group who responded at statistically detectably
higher rates when given the charity incentive (67% vs. 59%; compared to 59% on
both versions for ethicists and 55% vs. 52% for non-philosophers). While we doubt that this is a dependably
valid measure of charitable behavior overall, we are also somewhat suspicious
of the self-report measures. We judge
the overall behavioral results to be equivocal, and certainly not to decisively
favor the ethicists over both of the other groups.
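The donation summaries above use geometric rather than arithmetic means of
the non-zero responses, a standard choice for right-skewed percentage data.
A minimal sketch, with hypothetical responses:

    # A minimal sketch of the summary statistic used above: the share of
    # zeros plus the geometric mean of the non-zero self-reported donation
    # percentages (geometric means damp the influence of a few very large
    # responses). The responses below are HYPOTHETICAL.
    from scipy.stats import gmean

    reported = [0.0, 1.0, 2.0, 3.5, 5.0, 10.0, 0.0, 4.0]  # % of income
    nonzero = [x for x in reported if x > 0]

    print(f"zeros: {reported.count(0.0)}/{len(reported)}; "
          f"geometric mean of non-zeros: {gmean(nonzero):.1f}%")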
Conclusion. Across a wide variety of measures, it appears
that ethicists, despite expressing more stringent normative attitudes on some
issues, behave not much differently than do other professors. However, we did find some evidence that
philosophers litter less in environmental ethics sessions than in other APA
sessions, and we found some equivocal evidence that might suggest slightly
higher rates of charitable giving and slightly lower rates of meat-eating among
ethicists than among some other subsets of professors. On one measure – the return of library books
– it appears that ethicists might behave morally worse.
Part Two: Possible Explanations
The Rational Tail view. One
possibility is that Haidt’s Rational Tail view, as described in the
introduction, is correct. Emotion or
intuition is the dog; explicit reasoning is the tail; and this is so even among
professional ethicists, who, one might have thought, would be strongly
influenced by explicit moral reasoning if anyone is. Our opinions and behavior – even the opinions
and behavior of professional ethicists – are very little governed by our
reasoning. We do what we’re going to do,
we approve of what we’re going to approve of, and we concoct supporting reasons
only after the fact, as needed. Haidt
compares reasoning and intuition to a rider on an elephant, with the rider,
reasoning, generally compelled to travel in the direction favored by the
elephant. Haidt also compares the role
of reasoning to that of a lawyer rather than a judge: The lawyer does her best
to advocate for the positions given to her by her clients – in this case the
intuitions or emotions – producing whatever ideas and arguments are convenient
for the pre-determined conclusion. Reason
is not a neutral judge over moral arguments but rather, for the most part, a
paid-off advocate plumping for one side. Haidt cites our work as evidence for this
view (e.g., Haidt 2012, p. 89), and we’re inclined to agree that our results
fit nicely with his view and so lend it some support.
We have also recently found evidence that philosophers, perhaps especially
ethics PhDs, may be especially good at or especially prone toward embracing
moral principles as a form of post-hoc rationalization of covertly manipulated
moral judgments (Schwitzgebel and Cushman 2012).
It
would be rash, however, to adopt an absolutely extreme version of the Rational
Tail view (and Haidt himself does not).
At least sometimes, it seems, the tail can wag the dog and the elephant
can take direction from the rider.
Rawls’s (1971) picture of philosophical method as involving “reflective
equilibrium” between intuitive assessments of particular cases and rationally
appealing general principles is one model of how this might occur. The idea is that just as one sometimes
adjusts one’s general principles to match one’s pretheoretical intuitions about
particular cases, one also sometimes rejects one’s pretheoretical intuitions
about particular cases in light of one’s general principles. It seems both anecdotally and
phenomenologically compelling to suppose that explicit moral reasoning sometimes
prompts rejection of one’s initial intuitive moral judgments, and that when
this happens, changes in real-world moral behavior sometimes follow. How could there not be at least some truth in the Power of Reason view?
While
Haidt acknowledges that reasoning is not inert (particularly in social
contexts), the question remains, how much
power to influence behavior does explicit, philosophical-style reasoning in
fact have? And why does there seem to be
so little systematic evidence of that power – even when looking at what one
might think would be the best-case population for seeing its effects?
Without
directly arguing against Haidt’s version of the Rational Tail view, we present
four models of the relationship between explicit moral reasoning and real-world
moral behavior that permit explicit reasoning to play a substantial role in
shaping the reasoner’s moral behavior, compatibly with our empirical findings above. We focus on our own evidence, but we
recognize that a plausible interpretation of it must be contextualized with
other sorts of evidence from recent moral psychology that seems to support the
Rational Tail view – including Haidt’s own dumbfounding evidence (summarized in
his 2012); evidence that we have poor knowledge of the principles driving our
moral judgments about puzzle cases (e.g., Cushman, Young, and Hauser 2006; Mikhail
2011; Ditto and Liu 2012); and evidence about the diverse factors influencing
moral judgment (e.g., Hauser 2006; Greene 2008; Schnall et al. 2008).
Narrow principles. Professional
ethicists might have two different forms of expertise. One might concern the most general principles
and unusually clean hypothetical cases – the kinds of principles and cases at
stake when ethicists argue about deontological vs. consequentialist ethics
using examples of runaway trolleys and surgeons who can choose secretly to
carve up healthy people to harvest their organs. Expertise of that sort might have little
influence on one’s day-to-day behavior.
A second form of expertise might be much more concretely practical but
concern only narrow principles – principles like whether it’s okay to eat meat
and under what conditions, whether one should donate to famine relief and how
much, whether one has a duty to vote in public elections. An ethicist can devote serious,
professional-quality attention to only a limited number of such practical
principles; and once she does so, her behavior might be altered favorably as a
result. But such reflection would only
alter the ethicist’s behavior in those few domains that are the subject of
professional focus.
If
philosophical moral reasoning tends to improve moral behavior only in
specifically selected narrow domains, we might predict that ethicists would
show better behavior in just those narrow domains. For example, those who select environmental
ethics for a career focus might consequently pollute and litter less than they
otherwise would, in accord with our results.
(Though it is also possible, of course, that people who tend to litter
less are more likely to be attracted to environmental ethics in the first
place.) Ethicists specializing in issues
of gender or racial equality might succeed in mitigating their own sexist and
racist behavior. Perhaps, too, we will
see ethicists donating more to famine relief and being more likely to embrace
vegetarianism – issues that have received wide attention in recent Anglophone
ethics and on which we found some equivocal evidence of ethicists’ better
behavior.
Common
topics of professional focus tend to be interestingly difficult and nuanced. So, maybe intellectual forms of ethical
reflection do make a large difference in one’s personal behavior, but only in
hard cases, where our pre-reflective intuitions fail to be reliable guides: The
reason why ethicists are no more likely than non-ethicists to call their
mothers or answer student emails might be that the moral status of these
actions is not, for them, intuitively nonobvious, and so is not an
attractive subject of philosophical analysis on which they take a public
stand.
Depending
on other facts about moral psychology, the Narrow Principles hypothesis might
predict – as we seem to find for the vegetarianism and charity data – that
attitude differences will tend to be larger than behavioral differences. It
will do so because, on this model, the principle must be accepted before the
behavior changes, and behavioral change requires further exertion beyond
simply adopting a principle on intellectual grounds. Note that, in contrast,
a view on which people embrace attitudes wholly to
rationalize their existing behaviors or behavioral inclinations would probably
not predict that ethicists would show highly stringent attitudes where their
behavior is unexceptional.
The
Narrow Principles model, then, holds that professional focus on narrow
principles can make a behavioral difference. In their limited professional domains, ethicists
might then behave a bit more morally than they otherwise would. Whether they also therefore behave morally
better overall might then turn on whether the attention dedicated to one moral issue
results in moral backsliding on other issues, for example due to moral
licensing (the phenomenon in which acting well in one way seems to license
people to act worse in others; Merritt, Effron, and Monin 2010) or ego
depletion (the phenomenon according to which dedicating self-control in one
matter leaves fewer resources to cope with temptation in other matters; Mead,
Alquist, and Baumeister 2010).
Reasoning might lead one to behave more
permissibly but no better. Much
everyday practical moral reasoning seems to be dedicated not to figuring out
what is morally the best course – often we know perfectly well what would be
morally ideal, or think we do – but rather toward figuring out whether
something that is less than morally ideal is still permissible. Consider,
for example, sitting on the couch
relaxing while one’s spouse does the dishes.
A very typical occasion of moral reflection for some of us! One knows perfectly well that it would be morally better to get up and help. The topic of reflection is not that, but
instead whether, despite not being morally ideal, it is still permissible not to help: Did one have a
longer, harder day? Has one been doing
one’s fair share overall? Maybe explicit
moral reasoning can help one see one’s way through these issues. And maybe, furthermore, explicit moral
reasoning generates two different results approximately equally often: the
result that what one might have thought was morally permissible is not in fact
permissible (thus motivating one to avoid it, e.g., to get off the couch) and
the result that what one might have thought was morally impermissible is in
fact permissible (thus licensing one not to do the morally ideal thing, e.g.,
to stay on the couch). If reasoning does
generate these two results about equally often, people who tend to engage in
lots of moral reflection of this sort might be well calibrated to
permissibility and impermissibility, and thus behave more permissibly overall than do other people, despite not acting morally better overall. The Power of Reason view might work
reasonably well for permissibility even if not for goodness and badness. Imagine someone who tends to fall well short
of the moral ideal but who hardly ever does anything that would really qualify
as morally impermissible, contrasted
with a sometimes-sinner sometimes-saint.
This
model, if correct, could be straightforwardly reconciled with our data as long
as the issues we have studied – except insofar as they reveal ethicists behaving
differently – allow for cross-cutting patterns of permissibility, e.g., if it
is often but not always permissible not to vote. It would also be empirically convenient for
this view if it were more often permissible to steal library books than
non-ethicists are generally inclined to think, and if ethical reflection
tended to lead people to discover that fact.
Compensation for deficient
intuitions. Our empirical research can support the
conclusion that philosophical moral reflection is not morally improving only
given several background assumptions, such as (i.) that ethicists do in fact
engage in more philosophical moral reflection than do otherwise socially
similar non-ethicists and (ii.) that ethicists do not start out morally worse
and then use their philosophical reflection to bring themselves up to average. We might plausibly deny the latter
assumption. Here’s one way such a story
might go. Maybe some people, from the
time of early childhood or at least adolescence, tend to have powerful moral intuitions
and emotions across a wide range of cases while other people have less powerful
or less broad-ranging moral intuitions and emotions. Maybe some of the people in the latter group
tend to be drawn to intellectual and academic thought; and maybe those people
then use that intellectual and academic thought to compensate for their
deficient moral intuitions and emotions.
And maybe those people, then, are disproportionately drawn into
philosophical ethics. More or less, they
are trying to figure out intellectually what the rest of us are gifted with
effortlessly. These people have
basically made a career out of asking “What is this crazy ethics thing, anyway,
that everyone seems so passionate about?” and “Everyone else seems to have
strong opinions about donating to charity or not, and when to do so and how
much, but they don’t seem able to defend those opinions very well and I don’t
find myself with that same confidence; so let’s try to figure it out.” Clinical psychopathy isn’t what we’re
imagining here; nor do we mean to assume any particularly high uniformity in
ethicists’ psychological profile. All
this view requires is that whatever positive force moral reflection delivers to
the group as a whole is approximately balanced out by a somewhat weaker set of pre-theoretical
moral intuitions in the group as a whole.
If
this were the case, one might find ethicists, even though no better behaved
overall, to be morally better behaved than they would have been without
the crutch of intellectual reflection, and perhaps also morally better behaved
than non-ethicists are in cases where the ordinary intuitions of the majority
of people are in error. Conversely, one
might find ethicists morally worse behaved in cases where the ordinary
intuitions of the majority of people are a firmer guide than abstract
principle. We hesitate to conjecture
about what issues might fit this profile, but if, for example, ordinary intuition is a poorer guide than abstract
principle about issues like vegetarianism, charity, and environmentalism and a
better guide about the etiquette of day-to-day social interactions with one’s
peers, then one would expect ethicists to behave better than average on the
issues of the former sort and worse on issues of the latter sort.
Rationally driven moral
improvement plus toxic rationalization in equal measure. A final possibility is this: Perhaps the
Power of Reason view is entirely right some substantial proportion of the time,
but also a substantial proportion of the time explicit rational reflection is
actually toxic, leading one to behave worse; and these two tendencies approximately
cancel out in the long run. So perhaps we sometimes care about morality
for its own sake, think things through reasonably well, and then act on the
moral truths we thereby discover. And
maybe the tools and habits of professional ethics are of great service in this
enterprise. For
example: One might stop to think about whether one really does have an
obligation to go to the polls for the mayoral runoff election, despite a strong
preference to stay at home and a feeling that one’s vote will make no practical
difference to the outcome. And one might
decide, through a process of explicit intellectual reasoning (let’s suppose by
correctly applying Kant’s formula of universal law) that one does in fact have
the duty to vote on this particular occasion.
One rightly concludes that no sufficiently good excuse applies. As a result, one does something one would not
have done absent that explicit reasoning: With admirable civic virtue, one
overcomes one’s contrary inclinations and goes to the polls. But then suppose that also, in equal measure,
things go just as badly wrong: When one stops to reflect, what one does is
rationalize immoral impulses that one would otherwise not have acted on, generating a superficially plausible patina of
argument that licenses viciousness that would otherwise have been avoided. Robespierre convinces himself that forming
the Committee of Public Safety really is for the best, and consequently does evil
that he would have avoided had he not constructed that theoretical veil. Much less momentously, one might concoct a
superficial consequentialist or deontological story on which stealing that
library book really is just fine, and so do it.
The tools of moral philosophy might empower one all the more in this
noxious reasoning.
Again, one might make conditional
predictions, depending on what one takes to be the moral truth. For example, if common opinion and one’s
inclinations favor the permissibility of single-car commuting and yet single-car
commuting is in fact impermissible, one might predict more ethicist bus riders. If stealing library books is widely frowned
upon and not usually done, though tempting, we might expect ethicists to do so
at higher rates.
Conclusion. We
decline to choose among these five models.
There might be truth in all of them; and still other views are available
too. Maybe ethicists find themselves increasingly
disillusioned about the value of morality at the same time they improve their
knowledge of what morality in fact requires.
Or maybe ethicists learn to shield their personal behavior from the
influence of their professional reflections, as a kind of self-defense against the
apparent unfairness of being held to higher standards because of their choice
of profession. Or…. We believe the empirical evidence is
insufficient to justify even tentative conclusions. We recommend the issues for further empirical
study and for further armchair reflection.
References:
Brennan, Jason (2011). The ethics of voting.
Princeton: Princeton.
Confucius (5th
c. BCE/2003). The analects, trans. E.
Slingerland. Indianapolis: Hackett.
Cushman, Fiery, Liane Young, and Marc Hauser (2006). The role of conscious
reasoning and intuition in moral judgment: Testing three principles of
harm. Psychological Science, 17, 1082-1089.
Ditto, Peter H., and Brittany Liu (2012). Deontological dissonance and the
consequentialist crutch. In The social psychology of morality, ed. M.
Mikulincer and P.R. Shaver. Washington, DC: American Psychological
Association.
Greene, Joshua D. (2008). The secret joke of Kant’s
soul. In Moral psychology, vol. 3, ed. W.
Sinnott-Armstrong. Cambridge, MA:
MIT.
Haidt, Jonathan (2001). The emotional dog and its rational tail: A
social intuitionist approach to moral judgment.
Psychological Review, 108, 814-834.
Haidt, Jonathan (2012). The righteous mind.
New York: Random.
Hauser, Marc D. (2006). Moral minds. New
York: HarperCollins.
Mead, Nicole L., Jessica
L. Alquist, and Roy F. Baumeister (2010). Ego depletion and the limited resource model
of self-control. In Self-control in society, mind, and brain, ed. R. Hassin, K.
Ochsner, and Y. Trope. Oxford: Oxford.
Merritt, Anna C., Daniel
A. Effron, and Benoît Monin (2010).
Moral self-licensing: When being good frees us to be bad. Social
and Personality Psychology Compass, 4, 344-357.
Mikhail, John (2011). Elements of moral cognition.
Cambridge: Cambridge.
Rawls, John (1971). A theory of justice.
Cambridge, MA: Harvard.
Rust, Joshua, and Eric
Schwitzgebel (forthcoming).
Ethicists’ and non-ethicists’ responsiveness to student emails:
Relationships among self-reported behavior, expressed normative attitude, and
directly observed behavior. Metaphilosophy.
Schnall, Simone, Jonathan
Haidt, Gerald L. Clore, and Alexander H. Jordan (2008). Disgust as embodied moral
judgment. Personality and Social Psychology Bulletin, 34, 1096-1109.
Schwitzgebel, Eric (2009). Do ethicists steal more books? Philosophical
Psychology, 22, 711-725.
Schwitzgebel, Eric (forthcoming). Are ethicists any more likely to pay their
registration fees at professional meetings?
Economics &
Philosophy.
Schwitzgebel, Eric, and
Fiery Cushman (2012). Expertise in moral reasoning? Order effects on moral
judgment in professional philosophers and non-philosophers. Mind
& Language, 27, 135-153.
Schwitzgebel, Eric, and
Joshua Rust (2009). The moral
behaviour of ethicists: Peer opinion. Mind, 118, 1043-1059.
Schwitzgebel, Eric, and
Joshua Rust (2010). Do ethicists
and political philosophers vote more often than other professors? Review
of Philosophy and Psychology, 1, 189-199.
Schwitzgebel, Eric, and
Joshua Rust (forthcoming). The
moral behavior of ethics professors: Relationships among expressed normative
attitude, self-described behavior, and directly observed behavior. Philosophical Psychology.
Schwitzgebel, Eric, Joshua
Rust, Linus T. Huang, Alan T. Moore, and Justin Coates (2012). Ethicists’ courtesy at philosophy
conferences. Philosophical Psychology, 25, 331-340.
Stohr, Karen (2012). On manners. New York:
Routledge.