“All I have seen teaches me to trust the Creator for all I have not seen.”
Ralph Waldo Emerson
Since at least the Enlightenment, when religious skepticism became widespread among the educated, philosophers have tried to explain why people believe. Many such theories have been simplistic and dismissive. Freud, for example, posited that religious belief grew out of family dynamics, particularly one’s relationship with one’s father, and that it was a delusional way to alleviate the fear of death. Many people who disdain religion find some variant of the “delusion” hypothesis appealing, probably more because it is insulting than because it is illuminating. Religious belief, however, is natural and indeed almost inevitable. It is not a delusion in the traditional sense of the word, and it is not a facile attempt to conquer a fear of death.
The first thing to understand, before specifically addressing religious belief, is that the mind is not a general-purpose processing system. Rather, it is composed of cognitive tools, “gadgets” that evolved to solve certain specific, recurrent problems and to help humans successfully to navigate the world. These cognitive tools are generally effective. If they weren’t, of course, then it’s hard to imagine that they would have evolved. But they also have byproducts, or effects that weren’t “designed,” but are just the natural, perhaps inevitable output of a system. Consider, for example, the heat that is produced by a light bulb. The purpose of a bulb is not to heat your room, but rather to illuminate it. But heat is a byproduct of the process that creates the light.
Humans likely have a “face processing system,” which makes us exceptionally good at recognizing and remembering human faces. But this system also makes us prone to see faces in the clouds or in swirls of cream in a cup of coffee. This is a subset of pareidolia, the tendency to perceive objects or patterns that don’t actually exist. The face processing system did not evolve so that humans would detect Marilyn Monroe’s face in a distribution of clouds. But that detection is a byproduct of a system that is generally good at discovering, cataloging, and remembering human faces, which is obviously a very important ability.
The main thesis here, then, is that these cognitive gadgets and their byproducts lead almost ineluctably to the belief in supernatural agents, ghosts, and ancestor spirits, and ultimately to gods. And then those supernatural beliefs are woven into complex narratives (religions) that spread and evolve, eventually creating social ideologies that help to encourage cooperation. In particular, three gadgets are worth exploring: an agent-detection device (ADD), a theory of mind (TOM), and an intuitive ontology.
Agent-detection device (ADD): This device detects agency in the world; that is, it detects things that propel themselves, that are moved by an internal set of causes. A wound clock is not an agent, because the mechanisms that lead to its telling time are explicable and “external”: the very definition of push-pull causality. A wolf, on the other hand, is an agent because the mechanisms that lead to its behavior are inexplicable and internal. (I leave aside the question of whether, ultimately, the wolf’s behavior is just the result of push-pull causality.) Humans use agency to explain myriad things. Imagine, for example, that you come home and somebody has left a cake on your counter that reads, “I love you.” You immediately know that an agent produced it. And this allows you to understand why and how the cake came to exist and what it “means.”
Sometimes, though, the ADD is promiscuous, positing agency where none exists; scholars have therefore called it a hyperactive agency detection device (HADD). Imagine, for example, that you are in the middle of writing a very important paper and your computer crashes. You might shout at your computer, insult it, swear at it, even negotiate with it. In other words, you treat it as an agent, as something that might listen and respond to your insults and inducements. We are prone, in short, to positing agency as an explanation for things. Often it is a perfectly reasonable explanation, but not always. Perhaps the most obvious case of applying agency is when we want to explain how something complicated came to exist. Think back to the cake example. This is exactly what William Paley, perhaps the most popular expositor of the so-called teleological argument, argued about the existence of plants and animals. They are evidence of an agent, a creator, in the same way that the cake is (or a watch, in his famous example).
Theory of mind (TOM): A TOM allows us to understand and theorize about other minds in the world. We know that people know things. We know that they see things. We know that they have perspectives. And we know that they can believe things that are wrong. Also, importantly, we can think about minds even when people are not present. For example, if your roommate, Jenny, leaves, you can still think about her mental states. Perhaps she would like to have a clean apartment, so you clean it for her before she comes back. Or perhaps she would like pizza, so you order it for her. Similarly, we can think about a person’s mental states even when they are no longer alive.
This happens often at funerals. “Steve would want us to be happy.” “Jenny would love this song.” In fact, much of a funeral’s ritual seems, at least in the contemporary West, to depend upon reflecting upon the interests and desires of the deceased. And, of course, this ability can be extended to gods — we can impute beliefs and desires to a deity. God wants us to uphold the law, to be charitable, to be loving, and on and on. Humans, therefore, unlike other animals, can create (or discover) and worship a detached mind, a mind with no physical, sensory presence.
TOM also allows us to search for “meaning,” because meaning is bestowed by a person who has imbued an object with his or her intentions. What’s the meaning of a flower, for example? Well, it depends upon the intention of the person who is using it. If Jill lays flowers on a coffin, the meaning is much different from if she gives them to her husband. Humans are obsessed with meaning. And they often want an answer to a most perplexing and anxiety-producing question: What is the meaning of life? Perhaps some modern people, learned in the philosophy of Camus, will answer, “there is no meaning.” But for many, that is an unappealing answer. We want a meaning, and the meaning should come from the intent of the agent who created the universe: what did he or she or it intend for our lives?
Intuitive Ontology: The intuitive ontology system carves the world into understandable ontological categories such as “inanimate object,” “artifact,” “animal,” and “human.” Each of these categories has important characteristics and therefore allows great inferential potential. If I tell Tommy that I have an inanimate object at my house, then he instantly knows many things about it. He knows that it doesn’t have feelings, desires, thoughts, or interests. He knows that it won’t move on its own. That it doesn’t have a conscious design. On the other hand, if I tell Tommy that Sarah is at my house, then he knows many things about her, even if he has never seen or talked to her. He knows that she has interests and desires. That she breathes. That she walks (or moves), talks, thinks.
The world would be chaotic and confusing without this system, which carves it up into digestible chunks. Instead of being perplexed by a moving furry thing running in the distance, we immediately know many things about it.
One byproduct of this system is that the mind appears fascinated by “minimally counterintuitive” (MCI) entities. These are entities that share most of the features of one category but violate expectations in one or two ways. A talking tree, for example. It preserves the features of a tree: it is wood, needs water, probably can’t walk, grows slowly. But it adds a “mind” and thus can reason, discourse, debate, et cetera. A ghost is another good example. It is basically a human minus the physical body. MCIs preserve enough of the features of a category that they aren’t too hard to understand, but they are unique and contrary to expectation in ways that make them memorable. A talking tube of toothpaste that can see the future, but only on Tuesday, and that barks like a dog on Wednesday definitely violates ontological expectations, but it is so complicated that it is difficult to remember.
On the other hand, the ghost of Hamlet’s father does not, and it is thus much easier to remember. Notice the importance of inferential potential. We do not have to be told that the ghost can get angry. We already know that because we apply our ontological expectations for “human” to the ghost.
We now have a framework for understanding why religious beliefs survive and are, in fact, intuitive and more likely to spread than alternatives, and, incidentally, why religious belief is natural and science is not.
Today’s ideas and beliefs, like today’s organisms, have battled in a long and ceaseless competition. They have survived. An idea survives by reproducing. And it reproduces by appealing to the human mind. Therefore, ceteris paribus, ideas that appeal more to the human mind will survive better than ideas that don’t. The mental gadgets described above lead to certain tendencies that make some ideas “feel right” and some ideas “feel wrong.” And those ideas that feel right spread like a catchy pop tune or a juicy piece of gossip, whereas those ideas that feel wrong don’t. Many supernatural ideas are catchy. They feel right. And therefore they spread.
Consider, for example, the idea that a dead person’s ancestral spirit keeps watch over the family. This feels right because of (1) our ability to think about other people’s minds and (2) our facility with and penchant for MCIs. An ancestral spirit is a mind minus a body. And it’s great for explaining otherwise puzzling phenomena. The roof fell in last year and killed Samantha. Why? Well, the scientifically minded person would say that there is absolutely no reason. It’s an accident. The roof was bound to fall at some point, and Samantha, unfortunately, just happened to be under it when it did. But the person who believes in ancestral spirits has a more appealing answer: because the family has been disobeying the code of its elders. It’s easy to imagine that the ancestral spirits can be angry, because that’s a trait we naturally impute to humans. And it provides meaning for the incident. It wasn’t an accident. It was an omen. A sign. A signal. A call to action.
Notice that many scientific explanations don’t feel right and, in fact, require long and laborious years of training to accept. Take, for example, the theory of evolution. Humans are likely natural creationists, because creationism is a more intuitive explanation than a purely mechanical one. It concords with our intentionalist and functionalist biases. When researchers ask children why big, jagged rocks exist, the children will generally provide a functional explanation: the rocks exist so that animals can scratch themselves or so that people can admire their beauty. And if rocks and animals exist so that something can be accomplished, then somebody or something must have created them for that purpose. Stripping humans of “so that” reasoning about the natural world is difficult precisely because mechanical reasoning does not satisfy the intuitive mind.
Culture is composed of thousands to millions of minds interacting with each other. Ideas that are slightly more likely to appeal to one mind are much more likely to spread in the aggregate. And so supernatural beliefs often flourish, even among populations that pride themselves on tough-minded skepticism. Meanwhile, scientific ideas often flounder unless they are supported by incredibly elaborate educational systems that constantly emphasize causal, mechanistic reasoning.
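The aggregate dynamic described above can be made concrete with a toy model. The sketch below assumes a simple adoption rule in which learners take up an idea in proportion to its current frequency weighted by its intuitive appeal; the specific appeal values (0.51 versus 0.49) are illustrative numbers chosen for this sketch, not figures from the text.

```python
def spread(freq_a=0.5, appeal_a=0.51, appeal_b=0.49, generations=100):
    """Toy model of biased cultural transmission.

    Each generation, learners adopt idea A or idea B in proportion to
    the idea's current frequency weighted by its intuitive appeal.
    Returns the history of idea A's frequency across generations.
    """
    history = [freq_a]
    for _ in range(generations):
        weight_a = freq_a * appeal_a          # appeal-weighted share of idea A
        weight_b = (1 - freq_a) * appeal_b    # appeal-weighted share of idea B
        freq_a = weight_a / (weight_a + weight_b)
        history.append(freq_a)
    return history

history = spread()
print(f"start: {history[0]:.2f}, after 100 generations: {history[-1]:.2f}")
```

Starting from an even split, a mere two-point edge in appeal carries idea A to more than 95 percent of the population within a hundred generations. That is the point of the paragraph above: biases that are tiny in any one mind compound into large outcomes at the level of a culture.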
Human mental gadgets, then, almost inevitably lead to belief in supernatural agents and to narratives about them. But they do not necessarily lead to belief in the moral “Big Gods” with which we are familiar. Those gods, the powerful and morally concerned gods of Judaism, Christianity, Islam, Hinduism, et cetera, likely arose through a long process of cultural evolution because they provided groups of believers with advantages over other groups.
Humans are uniquely cooperative and form groups to compete against other groups. But they are also self-interested. They will lie and cheat and steal if they can get away with it. Thus, they face a dilemma: Groups that cooperate more than others are likely to vanquish (or at least outcompete) those other groups. But individuals within groups are always tempted to cheat. Tools that facilitate cooperation are therefore desirable. And religions with morally interested gods might be one such tool. An all-powerful and all-knowing god, after all, sees you even when you are alone. And can punish you even if nobody in your social group can.
Because groups compete against each other, ideas that make groups successful are more likely to spread than other ideas. This might happen because the successful group completely destroys other groups, taking their territory and resources. Or it might happen because the successful group assimilates other groups, slowly converting their populations. Or it may happen because other groups imitate successful groups. They see a group flourishing with a morally interested god and import the idea into their group.
Hypereducated Westerners often ask, “Why would anybody believe in god?” But the better question is “Why would anybody not believe in god?”