A very intriguing book, and very much on the topic of cognitive biases that I so love. Maybe a bit too dense to really recommend, though.
It’s hard to call this book ‘good’. It’s pop science, the good kind that brings together a lot of different studies, but not quite as readable or approachable as, say, Pinker or even Dawkins, which is saying something. But the topic is phenomenal, and Hauser pulls together an incredible amount of data. If you’re into cognitive psychology–which I think is one of the best-covered subjects in popular science today–this one’s not to be missed. It’s too important.
Hauser postulates, and provides significant evidence for, a basic moral grammar, akin to the universal linguistic grammar: a system of moral precepts from which moral systems can be built based on local societal norms, but which prevents certain moral systems from ever existing. Much like almost no known language uses OSV word order (I think, I am paraphrasing stuff I read a long time ago here), no moral system will allow the death penalty for a six-year-old fighting with their sibling.
The most important thing to me is the ample evidence that the reasoning we use to make moral decisions is not available to us. We cannot say why it’s okay to divert a speeding train to kill 1 and save 5, but not okay to kill 1 and harvest their organs to save 5. We have a moral engine in our brains, chugging away at the problem, but the plans for the engine are not available to us. We try to capture them, but much of morality is just rationalization.
This is very, very important for a lot of areas. Take public policy. We rationalize helping the needy at scale because donating money to poor children across the world seems logically equivalent, in a way, to helping an injured person right in front of you. But we have a lot of evidence that that kind of large-scale, institutional charity is actually harmful in the long run. Our attempts to extrapolate from moral reasoning produce public policy that is wildly ineffective, and we need to recognize that sometimes large-scale decisions cannot be made in a way that reconciles with our moral feelings.
It’s also got a lot of important ramifications for animal rights. We grant human rights to mentally disabled folks, but it sure looks like a big part of what makes us moral is that we can put together certain kinds of information in our brains and apply it to base biological systems–that is, we need to do logical processing on the history of our interactions with a person to perform the proper context analysis and come up with a modern, moral solution to a quandary. But an awful lot of animals–primates, dogs, dolphins, etc.–display certain kinds of moral systems that might become all but equivalent to our own the instant you added human-level information storage and processing. Lots of animals understand the concepts of teasing and unfairness. On the other hand, lots of animals do not seem to understand the concept of cruelty–in situations where violence is allowed (which is most of them, for most animals), there is little research to suggest that ‘enough’ violence is a concept animals understand. Submission poses seem to stop the violence, but not to actually engage moral reasoning.
The book was very worrisome for me. I have the moral license not to be a vegetarian because I don’t think that any animals contain the intellectual processing required to be truly moral creatures. But it seems that a lot of moral processing is more or less hard-coded, including human morality. I had not considered that we might be working with more or less the same reflexive system as other animals, just with the added ability to apply knowledge over long time spans.
What if I’m wrong and I have to give up meat? That would suck! Animals taste so goddamn good! Fortunately, I can already say that even if this is enough to make me re-think things, I would still be able to eat seafood.