Monday, July 23, 2018

The Believing Brain: From Ghosts and Gods to Politics and Conspiracies: How We Construct Beliefs and Reinforce Them as Truths

Book Review: The Believing Brain: From Ghosts and Gods to Politics and Conspiracies: How We Construct Beliefs and Reinforce Them as Truths – by Michael Shermer (Henry Holt, 2011)

This is an excellent book spanning the psychology, biology, and neuroscience of belief. Shermer has gone the rounds through religious and political belief and broken through to an appreciation of the science of belief itself. Shermer is a psychologist, the founding publisher of Skeptic magazine, and holds a Ph.D. in the History of Science.

Shermer sees the post-modernist, media-reinforced idea that ‘truth is relative’ as being taken too far and out of context, allowing more possibilities than is realistic. As the X-Files had it – we want to believe. ‘The truth is out there.’ He agrees but intervenes, saying that science is by magnitudes the best method of discerning truth. Many or most people, when asked, say they believe in the supernatural and the paranormal. Perhaps it is the persistence of life’s mysteries that leads to this, or, as he says, a misunderstanding of the scientific process. But why do people still believe regardless of what science says?

“Belief change comes from a combination of personal psychological readiness and a deeper social and cultural shift in the underlying zeitgeist, which is affected in part by education but is more the product of larger and harder-to-define political, economic, religious, and social changes.”

He notes that after we form our beliefs we defend, justify, and rationalize them. So, belief comes first, and our conception of reality is based on that belief. He calls this idea belief-dependent realism, modelled on the ‘model-dependent realism’ theory of physical reality put forth by Stephen Hawking and Leonard Mlodinow in their book, The Grand Design (also reviewed in this blog). He takes the idea a step further to say that all scientific models, including model-dependent realism, are in essence belief-dependent realism.

Our brains interpret sensory data to find patterns and then infuse those patterns with meaning:

“The first process I call patternicity: the tendency to find meaningful patterns in both meaningful and meaningless data. The second process I call agenticity: the tendency to infuse patterns with meaning, intention, and agency.”

The meaningful patterns become beliefs and shape how we view reality. Once beliefs are formed we look for confirmatory evidence, which strengthens beliefs with emotional support. It is rare, he notes, for humans to change their beliefs. We tend to hold onto them even in light of new evidence, probably because we have invested in them.

He begins with the story of retired bricklayer Chick D’Arpino, who had a mystical experience at 4 A.M. in 1966 where he heard the clear voice of the “source” (not determined whether God or something else) that was just 13 words involving love – the source knows us, loves us, and we can have a relationship with it. However, he won’t tell anyone the 13 words and says he never will. Why? I don’t know. He thought for sure the experience came from outside his mind but Shermer, who became friends with D’Arpino, thinks otherwise.

He talks about his undergrad experience in an Abnormal Psychology class where he visited clinics and hospitals for mental illness and had to read the famous experiment of psychologist David Rosenhan, whose associates clandestinely entered mental hospitals as patients after reporting hallucinations. Seven of the eight were diagnosed as schizophrenic and one as manic depressive. The hospital staff ‘believed’ the diagnoses were correct, treated the patients accordingly, and interpreted their behavior as symptomatic. However, quite a few of the real patients suspected the ruse, which is interesting. The power of expectation is significant. Part of the issue is the assumption that since the patient is there, he or she must have mental health issues – so the diagnostic bias or label is already in place. Rosenhan tried the experiment in reverse – to see if insane people would be judged sane under opposite circumstances. He told an institution that he would send fake patients. However, none were actually sent. Even so, the institution judged about 20% of newly admitted patients fake and suspected many others – so, yes, the bias worked both ways, though not as well.

“What you believe is what you see. The label is the behavior. Theory molds data. Concepts determine percepts. Belief-dependent realism.”

Shermer describes himself as a materialist. He thinks mind is generated solely by the activities of the brain.

Next, he tells the story of an atheistic scientist, the geneticist Francis Collins, M.D., Ph.D., who had an epiphany and became a born-again Christian. He was influenced by the writings of C.S. Lewis. After his epiphany his belief was formed and then his new reality unfolded – remember, belief-dependent realism. He was the head of the National Institutes of Health. He wrote a best-selling book called The Language of God in 2006 where he concluded that it is more rational to believe in God than to not believe in God. He is well-versed in science, supports evolution, and debunks intelligent design theory. He was raised very secular, noticed that some people found comfort in religion and faith, and was swayed by Lewis’s arguments at a critical point in his life, apparently. He calls himself a theistic evolutionist. He describes his conversion as a choice – Lewis said one needed to make a choice – and a leap. Shermer, a one-time believer now a non-believer, shares parts of his interview with Collins, a one-time non-believer now a believer. Shermer sees the conversion as having intellectual and emotional components. Collins even came to agree with Shermer, who thinks it’s becoming clear that our moral sense evolved along with our tendencies to be social, cooperative, and altruistic – they all increase our fitness. They agree to disagree, with Collins seeing our inner voice or moral sense as deriving ultimately from God and Shermer seeing it as derived solely from evolution.

Shermer also points to evidence that more educated people with higher IQs are more skilled at rationalizing beliefs. This leads to his rule of thumb:

“smart people believe weird things because they are skilled at defending beliefs they arrived at for nonsmart reasons.”

Another way he says it is that “reason’s bit is in the mouth of belief’s horse.”

Shermer became ‘born again’ while a senior in high school, so he recounts his journey from there to skepticism. He was a “Jesus freak,” or “bible thumper,” for seven years. He hung around with others like him. He recounts his slow and gradual ‘deconversion,’ becoming ‘unborn’ again. At college he was among more secular peers. That, along with philosophy classes, secular and behaviorist professors, science, and seeking a master’s degree in experimental psychology, helped him deconvert. Nowadays, Shermer doesn’t even believe in the existence of mind. He sees it as a form of dualism innate to our cognition – perhaps a convenient (and functional) illusion. In grad school he studied ethology (one precursor to evolutionary psychology) and cultural anthropology. Once deconversion was unstoppable he realized what a pain in the ass he must have seemed trying to evangelize and convert others. He understands the worldview of religious indoctrination because he was once under its spell. Perhaps that gives him great perspective to study belief. Again, he notes that reality follows belief. We choose a way to believe and our reality tends to accord with it. Everything one encounters is put through that belief framework – one sees through the lens of belief. The final straw for him was the problem of evil, which he could not reconcile. He wrote a book about it called The Science of Good and Evil. He also found that morality is not at all dependent on religion.

Continuing his own skeptic’s journey, Shermer turns to politics. Similar to his religious conversion, he had a political conversion sparked by the writings of Ayn Rand. He now acknowledges that Rand developed a cult-like following – she was venerated like a cult leader and thought to be right without question. He is still a fan of Rand’s ideas but not of her supposed infallibility. He began a study of economics and capitalism. He is basically a free-market-advocating libertarian.

He says that science has three legs: data, theory, and narrative. He splits narrative into formal (narrative of explanation) and informal (narrative of practice). The informal narrative of practice is messier, like life. While Shermer is a skeptic who does believe in science, he acknowledges that he might be wrong. He says, “Maybe. But I doubt it.” Regarding religion, he mentions the absolute absurdity that belief in a specific supernatural scenario could be the dividing line between intense joy and intense suffering in some afterlife.

Next, he addresses patternicity. The main example involves a hominid in the savanna encountering a rustling in the grass. It could be a dangerous predator or it could be the wind. Successful prediction could be a life and death matter. If one assumes predator but it turns out to be wind then that is called a ‘false positive,’ or Type I error in cognition. No major harm done. If one assumes wind but it turns out to be a dangerous predator then that is called a ‘false negative,’ a Type II cognitive error, and death or serious injury could be the result.
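The asymmetry between the two errors can be made concrete with a little expected-cost arithmetic. The numbers below are my own illustrative assumptions, not figures from the book:

```python
# Toy expected-cost model of the rustle-in-the-grass example.
# All numbers are illustrative assumptions, not from the book.
p_predator = 0.05            # chance the rustle really is a predator
cost_false_positive = 1      # fleeing from mere wind: a little wasted energy
cost_false_negative = 100    # ignoring a real predator: injury or death

# Expected cost per rustle of each fixed policy
always_flee = cost_false_positive * (1 - p_predator)
never_flee = cost_false_negative * p_predator

print(f"always flee: {always_flee:.2f}, never flee: {never_flee:.2f}")
# prints: always flee: 0.95, never flee: 5.00
```

Under cost asymmetries like this, a hair-trigger ‘assume predator’ rule beats a skeptical one even though it is wrong 95% of the time – which is the evolutionary logic behind patternicity.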

“Our brains are belief engines, evolved pattern-recognition machines that connect the dots and create meaning out of the patterns we think we see in nature. Sometimes A is connected to B; sometimes it is not.”

This is patternicity or ‘association learning.’ Our success in interpreting meaningful patterns aids our survival, so natural selection strengthens it over evolutionary time. All animals do it to some extent. He defines patternicity as “the tendency to find meaningful patterns in both meaningful and meaningless noise.” An earlier version of his theory (as he says) was presented by biologists Foster and Kokko (from Harvard and Helsinki respectively) in their 2008 paper “The Evolution of Superstitious and Superstitious-Like Behavior,” and was based on Hamilton’s idea of inclusive fitness and kinship. They determined that “whenever the cost of believing a false pattern is real is less than the cost of not believing a real pattern, natural selection will favor the patternicity.” Thus, they concluded that the evolutionary rationale for superstition is that “natural selection will favor strategies that make many incorrect causal associations in order to establish those that are essential for survival and reproduction.” True pattern recognition helps us survive, but false pattern recognition does not necessarily have negative consequences, so it tends to stick around. Shermer states it this way: “people believe weird things because of our evolved need to believe nonweird things.”

He notes that anecdotal association is an example of patternicity that often leads to “faulty conclusions.” Anecdotal thinking is part of our history and folklore, and likely our biology too, so it competes with the methods of science, which are only a few hundred years old and learned rather than innate. Shermer mentions B.F. Skinner’s experiments exploring the superstitious behavior of pigeons presented with a variable feeding schedule after a period of a regular feeding schedule. Skinner concluded that the pigeons were attempting to mimic their body positions and orientations during the previous feeding in order to bring on the next. Did they first scan for a pattern and then attempt to recreate it? Skinner was wholly convinced that their gestures were performed in order to get the food via pattern matching, albeit imagined pattern matching (aka superstition). Such false connections have also been termed “accidental learning.” Shermer notes that superstition can be extinguished in pigeons but that it is much more difficult to do so in humans.

Learning by imprinting, as many animals do, involves forming fixed and lasting memory patterns. This is what Shermer calls ‘hardwired patternicity,’ our instinctual imprinting. These are mostly stimulus-response sequences among animals: recognize a pattern and act according to the imprint learning or other associative learning. Responses and response-tendencies evolve. Facial recognition learning occurs early in humans and, he says, is likely a Sign Stimulus-Innate Releasing Mechanism-Fixed Action Pattern (SS-IRM-FAP) process, as first described by ethologists Tinbergen and Lorenz in the 1950s in herring gulls feeding their young. Recognizing faces offered evolutionary advantages to early humans. It also has an unconscious aspect – we actually recognize and begin to react to faces (and other hardwired patterns) unconsciously before we do so consciously. Our intentions even appear to be acted upon before we are aware of a conscious decision to act, as some experiments using EEG suggest.

Another form of patternicity occurs when an insect-eating predator avoids insects with colors similar to those of stinging insects. Thus, evolution has influenced our tendencies to consume or avoid. Evolution also favors non-poisonous snakes that resemble poisonous ones. What we look for in potential romantic partners is also evolutionarily hardwired to some extent. We tend to respond to ‘super-normal’ stimuli like unusual or enhanced features. These are examples of preprogrammed patternicities.

Regarding control of one’s environment, psychologists refer to an internal locus of control and an external locus of control. Those with a strong internal locus of control tend to think that they create their own situations and experiences, while those with a dominant external locus of control tend to think that things just happen to them. Skeptics, he says, tend to have a stronger internal locus of control while believers in the paranormal tend to have a stronger external locus of control, as measured by tests designed to do so. Where the environment is more certain (as in modern times in developed countries) the tendency toward an internal locus of control is higher. Magical thinking is more prevalent amid anxiety and uncertainty. He also recounts experimental psychologist Susan Blackmore’s change from a believer in the paranormal to a skeptic, and her experiments showing that believers were far more likely to see hidden patterns and messages, which led her and others toward skepticism. Believers and skeptics approach the data of experience differently. Perhaps it was uncertainty and lack of control that led to the conspiracy theories around the 9/11 terrorist attacks. Evidence suggests that negative events, especially unexpected ones, are more likely to be attributed to incorrect causes and conspiracies. Many of us have a tendency to make ‘illusory correlations’ or illusory pattern detections under certain circumstances. Experiments also suggest that a sense of control is associated with positive health and feelings of well-being.

Patternicity can be useful or damaging. People indulging delusional conspiracies have committed murder. Quackery and pseudo-science can also occasionally be harmful or even deadly.

Agenticity often involves the presumption of an ‘other’ that is also an intentional agent. Shermer defines agenticity as “the tendency to infuse patterns with meaning, intention, and agency.” Belief in spirits, ghosts, souls, gods, demons, aliens, government conspiracies, etc. in most forms are examples of agenticity. Patternicity and agenticity make up the cognitive basis of shamanism, paganism, animism, monotheism, polytheism, spiritualism, intelligent design, and New Age-ism, he says – in most forms (I add), since he does not consider here the possible metaphorical and psychological aspects of such beliefs, which may be given meaning without actually subscribing to the beliefs. He also considers, and I agree, that such ideas are or at least seem intuitive – that we often find patterns and ascribe agency to them in our everyday analysis of experience. Essentialism, or belief in a life force that can be transferred, strengthened, or weakened, is an example of such a seemingly intuitive idea, as is animism. We all tend to do it sometimes. Shermer has spent some of his scientific career (as a professional skeptic) debunking the attribution of agenticity to meaningless patterns. He has participated in experiments and debates related to ‘sensed presence,’ magnetically-induced OBEs, and many psychic and parapsychology experiments. He has even had some bizarre experiences himself, including a stress- and sleep-deprivation-induced alien-like visitation during a long cross-country bicycle race. Such experiences are also common among mountain climbers and other extreme sport enthusiasts. He interprets these experiences as stress-induced ‘sensed presence.’ He sees extreme conditions as the trigger-cause, with deeper causes in the brain as follows:

“1) an extension of our normal sense of presence of ourselves and others in our physical and social environments; 2) a conflict between the high road of controlled reason and the low road of automatic emotion; 3) a conflict within the body schema, or our physical sense of self, in which your brain is tricked into thinking that there is another you; or 4) a conflict within the mind schema, or our psychological sense of self, in which the mind is tricked into thinking that there is another mind.”

He goes into some of the possible neuroscience of these causes, such as controlled vs. automated brain functions and emotional circuits like the amygdala fight-flight-freeze circuit and the autonomic nervous system. He also mentions things like phantom limb syndrome as a learned component of paralysis based on past expectations and habits about our body schema. The sensed presence of another mind may have to do with our ‘theory of mind,’ which posits that there are other minds different from our own.

Shermer sees the mind as ‘what the brain does.’ It reduces to the level of the neuron. Here he reviews cognitive neuroscience, once referred to as physiological psychology. Neurons produce excitatory and inhibitory post-synaptic potentials. They communicate information by firing frequency, firing location, and firing number, and are considered similar to the binary 1-0 digits of a computer. Electrical signals course through neurons until they reach the synapses, where chemical transmitter substances (CTS) are the chemical signals that transfer information to subsequent neurons. Various drugs can affect CTS release and uptake processes. The CTS dopamine has been called the ‘belief drug,’ and is involved with the learning and reward systems of the brain (explored by Skinner in his operant conditioning experiments). Skinner called the reward ‘reinforcement’ and described the sequence of operant conditioning as Behavior-Reinforcement-Behavior. There is debate as to whether dopamine acts to stimulate pleasure or to motivate behavior. The dopamine system is involved in addiction, as the drugs or behaviors take over the role of reward signals. UCLA neuroscientist Russell Poldrack thinks that the dopamine system is more involved with motivation and the opioid system with pleasure. He says that blocking the dopamine system (in rats) will stop motivation but not enjoyment. Experiments have suggested that increased dopamine boosts the signal, or rather the signal-to-noise ratio (SNR), which can aid ‘error detection’ and other patternicity – dopamine enhances our responses to patterns by boosting our pattern-detection abilities. Schizophrenics and creative people may also develop enhanced pattern-detection abilities.
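The signal-to-noise idea can be sketched with a toy signal-detection simulation. This is my own illustration of the general trade-off, not a model from the book: treat ‘more dopamine’ as a looser detection criterion, and both real-pattern hits and false alarms go up.

```python
import random

random.seed(42)  # deterministic run for illustration

def run(criterion, trials=10000):
    """Each trial is noise alone or signal-plus-noise; report
    hit rate (real patterns detected) and false-alarm rate."""
    hits = false_alarms = n_signal = 0
    for _ in range(trials):
        has_signal = random.random() < 0.5
        n_signal += has_signal
        # evidence is noisy; signal shifts its mean up by 1
        evidence = random.gauss(1.0 if has_signal else 0.0, 1.0)
        if evidence > criterion:
            if has_signal:
                hits += 1
            else:
                false_alarms += 1
    return hits / n_signal, false_alarms / (trials - n_signal)

strict_hits, strict_fas = run(criterion=1.0)  # cautious detector
loose_hits, loose_fas = run(criterion=0.0)    # "dopamine-boosted" detector
# the looser criterion finds more real patterns and more false ones
```

The trade-off mirrors the point above: tuning the brain toward detecting more patterns necessarily means detecting more meaningless ones too.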

Shermer says he is a monist, meaning the brain is all, rather than a Descartes-style mind-body (mind-brain, body-soul) dualist. Some researchers think we are intuitively dualists, seeing mind and body as separate – it just seems that way. This perhaps reaches into our views about life after death. It seems plausible to many that the soul/mind lives on somehow, yet there is no evidence. Some have even said we have a belief instinct (see Jesse Bering’s book, The Belief Instinct). Neurologists like Oliver Sacks showed us that changes in the brain are often the cause of hallucinations, some of which are interpreted as real by the experiencer.

When we become aware that we and others have beliefs, desires, and intentions we engage in what is called Theory of Mind (ToM). ToM is the basis for agenticity. We realize we are an agent and, taken to a higher level, we recognize others as agents. ToM evolved out of the necessity to read the intentions of others to enhance our own survival. He says ToM is an automatic system that kicks in during social situations. ToM may be involved in learning through imitation, transferring the movements of others into our own movements when learning, possibly with the use of so-called ‘mirror neurons’ that fire during imitation learning. Shermer recounts 2007 experiments by neuroscientist Sam Harris and colleagues that suggest that it is easier to believe than to reject a belief – to accept appearances until proven false. Those experiments also looked for neural correlates of belief and found activity in the ventromedial prefrontal cortex, which links lower-order emotional responses with higher-order cognitive factual evaluations. Shermer says this supports “Spinoza’s conjecture: belief comes quickly and naturally, skepticism is slow and unnatural, and most people have a low tolerance for ambiguity.” Other experiments by Harris et al. suggested that there is no belief or disbelief module in the brain and that we tend to rely on feelings and convictions to support beliefs, especially as they decouple from reason and evidence. Shermer hopes that reason and evidence, offered in counterarguments, can re-couple with emotion and change beliefs.

He explores belief in an afterlife. After going through some stats he comes up with the following observations: 1) belief in an afterlife is a kind of agenticity; 2) it is also a kind of dualism; 3) it derives from Theory of Mind; 4) it is an extension of our body schema (we mentally project the body schema into the future); 5) afterlife belief is probably mediated by our left-hemisphere interpreter (this neural network/circuit is involved in creating narratives, which is how belief in afterlife scenarios seems to work); 6) it is an extension of our normal ability to imagine ourselves somewhere else in space and time.

He also says we are intuitive immortalists. Jesse Bering noted that we have a hard time fathoming what it would be like to not exist as we have no basis for understanding so we just assume we will always exist in some way.

Shermer notes that there are four lines of evidence often given by those who believe in life after death: 1) information fields and a universal life force – intuitive notions without evidence; 2) ESP and evidence of mind; 3) quantum consciousness; and 4) near-death experiences. On the first point he goes through the work of Rupert Sheldrake regarding information fields and concludes that it is mostly bunk. He does the same with ESP, having served as the skeptic in many experiments. It is the same with quantum consciousness and NDEs (and OBEs) – no real evidence. He talks about a 2009 episode of Larry King Live on which he appeared with Dr. Sanjay Gupta, Dr. Deepak Chopra, and conservative Christian apologist Dinesh D’Souza (recently pardoned by Trump for illegal political contributions), who also wrote a book arguing in favor of life after death. His “baloney detector” was going haywire. Their arguments were often simply: if one can’t provide a natural explanation then a supernatural one can suffice. Bull, he says. Chopra, it appears, simply wants to verify fuzzy-language New Age consciousness mumbo jumbo with some quantum mechanics and neuroscience thrown in. Shermer, for all his skepticism, says he would like to believe in some sort of afterlife, but there is simply no evidence.

Most people in the world believe in God or gods or some higher power. According to surveys, America has some of the highest percentages of believers. Darwin pondered whether evolution could account for the universality of religious beliefs. Shermer believes it is indeed a powerful influence, one of several. He defines religion as “a social institution to create and promote myths, to encourage conformity and altruism, and to signal the level of commitment to cooperate and reciprocate among members of a community.” He thinks that as human bands coalesced into larger tribes and eventually city-states, religion co-evolved with government to codify moral behavior into laws and principles. He thinks that specific human universals related to religion (belief in the supernatural, anthropomorphizing, ideas about fortune, etc.) are influenced by our genetic predispositions, which is why they have come to be recognized as human universals. He notes that small hunter-gatherer bands are often very egalitarian, likely because the needs of the group are favored over the needs of the individual through strong enforcement of moral rules with gossip, ridicule, shunning, and other forms of ostracization. Myths and supernatural beings are often employed to promote fairness in the social group, which also becomes a moral group. While our own culture gives us the specifics of religion, the desire to be religious itself is influenced by evolution. Studies of identical twins vs. fraternal twins have strongly suggested that genetics influences one’s religious activities and, to a lesser extent, one’s beliefs. However, it is doubtful that we possess a “God gene” as the title of geneticist Dean Hamer’s book suggests (apparently, he did not approve of the title chosen by the publisher), even though we may have genes that make us more predisposed to engaging in spiritual activities.

Shermer notes that man created gods rather than the other way around. Duh, this is obvious. Gods and myths often arise in response to the conditions and trials of the tribe. Shermer’s section – Theist, Atheist, Agnostic, and the Burden of Proof – goes through the various arguments for the existence of God. As an agnostic he favors the words of a bumper sticker he once saw: “Militant Agnostic: I Don’t Know and You Don’t Either.”

Shermer asks the odd yet compelling question that determines what he calls Shermer’s last law: “any sufficiently advanced extraterrestrial intelligence is indistinguishable from God.” He relates this idea to evolution, to SETI (the search for extraterrestrial intelligence), and to intelligent design theory. His arguments are interesting but inconclusive. He considers ‘Einstein’s God’ (which I see as more or less simply giving a name and creator rank to mystery itself) and whether Einstein meant his ideas literally or metaphorically. Einstein considered God to be beyond comprehension and was also influenced by his Jewish identity. He favored Spinoza’s God, the harmony of existence; however, he did not think that God was concerned with the fate or actions of humans.

Shermer describes the ‘supernatural’ as simply a term given to mysteries as of yet not understood fully. The history of science shows that we now understand many things naturally and scientifically that were once considered to have supernatural causes. Still mysteries enthrall us. Nonetheless, Shermer sees it this way:

“Flawed as they may be, science and the secular Enlightenment values expressed in Western democracies are our best hope for survival.” (I might add that many of those values are also expressed in many non-Western democracies)

Shermer next considers belief in aliens. As a skeptic he has debated several so-called alien abductees, including the famed Whitley Strieber. Shermer asked Strieber before one show what he does in his off time – he said he writes science fiction! He also considers other causes for perceived alien abductions, including hypnagogic hallucinations, sleep paralysis, hypnosis, sleep deprivation, stress, and lucid dreams – especially since the “visions” recounted are often similar. His own view of ETs is that they could exist, but their rarity combined with the vast distances makes encounters unlikely. He recounts a conversation with Richard Dawkins about what ETs are likely to look like, assuming evolution occurs in a similar way in other parts of the universe. Sci-fi writer Michael Crichton went so far as to describe SETI as a religion – having faith that there is ‘someone out there.’ While this may be the case, SETI is also science run by scientists, attempting to answer a question that may end up being more religious than scientific.

Conspiracy theories are next considered. Conspiracy theories are different from actual conspiracies. They are often highly improbable and illogical, tend to snowball, and yet can be held onto even in the face of heaps of refuting evidence. Shermer thinks that they are believed due to a failure to apply pattern-detection filters and are aided by confirmation bias and hindsight bias – fitting information to the narrative. There are patterns in the way they develop that are pretty easy to figure out. He goes through several in detail, including 9/11 conspiracies and JFK murder conspiracies.

Next is the politics of belief, which also includes economics and ideologies. This part was good, I thought. Psychologists have studied why people tend to lump into the liberal and conservative edges of the political spectrum. A 2003 Stanford study of conservatives concluded that they suffer from “uncertainty avoidance” and “terror management” and have a “need for order, structure, and closure” along with “dogmatism” and “intolerance of ambiguity,” which lead to “resistance to change” and “endorsement of inequality” in their actions. Many conservatives did not agree and dissed the study, which also associated some conservatives with Nazis. Shermer acknowledges that there has long been a liberal belief bias in academia, which Harvard psychologist Steven Pinker has written about and which bubbled up recently, especially in so-called alt-right circles. University of Virginia psychologist Jonathan Haidt considered that the standard liberal bias for why people vote Republican is that conservatives are “cognitively inflexible, fond of hierarchy, and inordinately afraid of uncertainty, change, and death.” Haidt encouraged his academic colleagues to move beyond such biases. Shermer flips the analysis to suggest what conservatives might say about liberals: a lack of moral compass, an inability to make clear ethical choices, a lack of certainty about social issues, a fear of clarity that leads to indecisiveness, a naïve belief in equal talent, and a belief that culture and environment are far more important than the influences of biological human nature (these last two mesh with Pinker’s “myth” of the Blank Slate – that we are all the same before culture takes over – not so, says Pinker). So, both liberals and conservatives tend to be biased, especially about the other.
The belief that “bleeding heart” liberals are more generous and that conservatives are “heartless” is not borne out by data, which show conservatives give more to charity (although religious motives and being wealthier in general may account for some of that). One reason this might be the case is that conservatives think charity should be private – provided by individuals, companies, and non-profits – while liberals think charity should be public – provided by the government.

A 2005 UCLA study suggested that the media have a liberal bias, and the current period of Trumpism says the same in a much over-the-top version. Of course, with Fox News we have conservative bias strongly manifested. The more biased media sources are also the most predictable. Moderates and libertarians tend to be less predictable. Liberals and conservatives stereotype each other, and such stereotypes tend to be emotionally charged. Haidt proposed that our ‘moral sense’ is based on five innate psychological systems: 1) Harm/care – empathy and sympathy; 2) Fairness/reciprocity – reciprocal altruism evolved into our sense of justice and morality; 3) In-group/loyalty – social evolution based in tribalism; 4) Authority/respect – based on social hierarchies developed from our primate histories onward; 5) Purity/sanctity – we evolved to equate morality and civility with cleanliness, and immorality and barbarism with filth. On Haidt’s survey liberals score higher on the first two and conservatives on the last three. In other words, liberals and conservatives emphasize different moral values.

Psychological experiments about generosity and rule of law suggest to Shermer that “in order for there to be social harmony society needs to have in place a system that both encourages generosity and punishes free riding.” Religion and government are two such systems, he says. When societies became too large for ostracizations such as gossip, ridicule, and shunning, then the institutions of government and religion developed to take over enforcement of moral codes. Conservatives tend to favor private regulation of behavior through religion while liberals tend to favor public regulation of behavior through government (Shermer adds – except for sexual mores where liberals tend not to want government to interfere). A perhaps confounding issue is that we also evolved tribally in-group and out-group biases that tend to make us competitive as ‘us vs. them’ team players. Shermer admits that he, as a civil libertarian, is conflicted politically. He hopes that identifying the moral values of liberals and conservatives will help bridge the political divide.

Shermer goes through economist Thomas Sowell’s ideas in his book, A Conflict of Visions, where he argues that conservatives have a constrained moral vision of human nature and liberals an unconstrained moral vision of human nature. The unconstrained vision is optimistic but perhaps overly idealistic. It suggests that all social problems can be solved with sufficient commitment. The constrained vision is pessimistic but also realistic in the sense that it acknowledges that all attempts to solve social problems have costs, can lead to other social ills, and there are always trade-offs. Stephen Pinker, in his 2002 book, The Blank Slate (which I plan to review here at some point) relabeled Sowell’s visions as the Tragic Vision and the Utopian Vision. The Tragic Vision emphasizes that things like bureaucracies can explode into self-interest for the implementers of the policies while the Utopian Vision seems to emphasize an increase in what is now invoked often as ‘social engineering.’ Issues like the size of government, the level of taxation, free trade vs. fair trade (oddly Trump and the conservatives who back him seem to have reversed this one). Shermer further alters this conflict of visions idea to say that it is more a spectrum. He calls it the Realistic Vision where on one end there is constraint and on the other no constraint and that in reality human nature is partly constrained – by genetics and evolution especially. It acknowledges that we have a dual nature of being both selfish and selfless, that people vary (ie. no blank slate), and that over-focus on equality could cause as many new problems as it would solve. He thinks political moderates on both left and right generally favor such a partially constrained Realistic Vision of human nature. 
He gives evidence in support of a partially constrained model: genetic differences among people that leads to different abilities, failed communist and socialist experiments, failed utopian experiments, the enduring power of family ties, the power of reciprocal altruism, the desire to punish cheaters, the ubiquity of hierarchical structures, and in-group/out-group dynamics.

Next, he explains why he is a libertarian. He invokes John Stuart Mill, who in his 1859 book, On Liberty, argued that it was democracy that defeated the tyranny of the magistrate characteristic of European monarchies, but that same democracy could also lead to the tyranny of the majority which can work against the rights of the individual. He notes that our Bill of Rights is intended to prevent a tyranny of the majority. He explains that libertarianism is based on the principle of freedom, without infringing on the freedom of others. He says that libertarianism incorporates moral principles embraced by both liberals and conservatives. He does not think Libertarians will ever be a viable third party in the U.S. though. I think he considers them a type of moderate, but one rooted in personal liberties. The party itself seems to produce both reasonable politicians and nutty ones seemingly overly obsessed with certain liberties such as gun rights or corporate rights, which is perhaps one reason it is not very popular.

Political beliefs are different than scientific beliefs. One might simply believe that a certain policy, at this time and place, is the most viable and useful. Timothy Ferris, author of The Science of Liberty, says that liberalism and science are methods rather than ideologies. Extreme Islamists and some fundamentalist Christians favor theocracies that restrict freedoms. Shermer has also written about free-market capitalism and offers this assessment of democracy and capitalism:

“Liberal democracy is not just the least bad political system compared to all others; it is the best system yet devised for giving people a chance to be heard, an opportunity to participate, and a voice to speak truth to power. Market capitalism is the greatest generator of wealth in the history of the world and it has worked everywhere that it has been tried. Combine the two and Idealpolitik may become Realpolitik.”

‘Confirmations of Belief’ is the next chapter title and is a summary of cognitive biases. He starts out with what he calls folk numeracy, a form of patternicity where we have a natural tendency to misperceive probabilities, to think anecdotally rather than statistically, and to focus on trends that confirm our own biases. Confirmation bias, where we tend to confirm our own beliefs by selecting data that conform to them, is, according to Shermer, the mother of all cognitive biases. We do this to confirm our beliefs. He defines confirmation bias as follows: “the tendency to seek and find confirmatory evidence in support of already existing beliefs and ignore or reinterpret disconfirming evidence.” Experiments have shown that people will often favor evidence that confirms their own beliefs over evidence that disconfirms them. He notes that confirmation bias is particularly powerful in political beliefs. We tend to have emotional reactions to data that conflicts with our beliefs (cognitive dissonance) and neuroscience has confirmed this somewhat. Our preconceptions about various subjects, people, and policies tend to be entrenched and the power of expectation is also in play – we tend to expect reality to fit our beliefs and if it doesn’t we tend to get emotional. Remember, in Shermer’s model beliefs come first, then reality = belief-dependent realism.

Next, the “hindsight bias is the tendency to reconstruct the past to fit with present knowledge.” After accidents or weather events and during war time the hindsight bias often appears. It is easy to conclude that we should have known or have been prepared for such events, that the clues were there. It can get conspiratorial. After 9/11 came much hindsight bias. A related bias is the self-justification bias. This is “the tendency to rationalize decisions after the fact to convince ourselves that what we did was the best thing we could have done.” Most biases involve “cherry-picking” data to conform to pre-existing beliefs. The justification bias is strong in politicians who spin things to depict themselves as seemingly right, in their opinions, even when they are wrong and have clearly made incorrect predictions.

Attribution bias is the tendency to attribute different causes for our own beliefs and actions than that of others.” We might attribute the success of others to luck, circumstances, having connections, or to some innate disposition they have. In contrast we tend to attribute our own successes to hard work and/or some positive disposition. Shermer and a colleague, Frank Sulloway, discovered and presented new forms of attribution bias they call intellectual attribution bias and emotional attribution bias. They noticed when asking people why they believe in God people tended to give intellectual reasons for their own belief, such as the harmonious design of the universe, but when they asked the same people why other people believe in God the same people tended to give emotional reasons, such as the fear of death. We tend to do the same in political hot button issues where we give rational reasons for our own beliefs and attribute emotional reasons to the beliefs of others, particularly to those whose beliefs are opposed to ours.

Sunk-cost bias is simply “the tendency to believe in something because of the cost sunk into that belief.” This often leads to the fallacy that we cannot abandon an idea simply because we have invested considerable resources into it. This is one reason why beliefs are difficult to change. It may also be why politicians are so hard-headed.

Status quo bias is similar. He defines it as “the tendency to opt for whatever it is we are used to, that is, the status quo.” This rewards our laziness! It is likely another reason why people don’t like to change their beliefs. The status quo bias is influenced by the endowment effect. Economist Richard Thaler defined the endowment effect as “the tendency to value what we own more than what we do not own.” Evolution is likely an influence here. Certain animals tend to mark and defend their chosen territories, even when other ones are available. Shermer notes that “beliefs are a type of private property – in the form of private thoughts with public expressions – and therefore the endowment effect applies to belief systems.” I think that the sunk-cost bias, the status quo bias, and the endowment effect are much about the energy required to overcome or redesign the past and about laziness.

Next are framing effects“the tendency to draw different conclusions based on how data are presented.” How data is presented or “pitched” can affect how we perceive it. This is one method of neuro-linguistic programming. It is also often used in behavioral economics and it is ubiquitous in sales.

The anchoring bias is “the tendency to rely too heavily on a past reference or on one piece of information when making decisions.” I see this in politics, among environmental activists, and among those opposed to environmental activists. They might overly rely on one particular study, or someone might reuse over and over a technique that once worked for them well in the past even though it doesn’t work so well now.

The availability heuristic refers to “the tendency to assign probabilities of potential outcomes, based on examples that are immediately available to us.” This is especially true of emotionally-charged situations. He gives the example that we especially notice every red light when we are late for an appointment. This is also a factor in how we tend to assess risk. If some disaster or epidemic happened recently, even though it is statistically rare, we will tend to see it as riskier than it really is.

Related to the availability heuristic is the representative bias, which was described by psychologists Amos Tversky and Daniel Kahneman as follows: “an event is judged probable to the extent that it represents the essential features of its parent population or generating process.” People tend to use shortcuts when they need to decide on something and those shortcuts often employ biases. We might throw out candidates for a job for biased reasons just to lighten the load.

Innattentional bias has more to do with our sensory perception and the automatic nature it has sometimes. Psychologists define it as “the tendency to miss something obvious and general while attending to something special and specific.” The classic experiment here has a guy in a gorilla suit walking through while the subjects are told to count the number of basketball passes by a team in black shirts and another in white shirts. A 1-minute video is shown and the gorilla walks in at 30-seconds, thumps his chest and walk out. Consistently (and amazingly) 50% of the subjects do not report seeing a gorilla-suited guy!

Shermer gives a long list of other cognitive biases which includes a bias to trust authority, to jump on bandwagons, to believe what seems believable, to over-rely on expectations, to conflate cause with correlation, to overvalue initial events, to overvalue events that are recent, and the ubiquitous overgeneralization known as stereotyping.

He also mentions the bias blind spot. This is “the tendency to recognize the power of cognitive biases in other people but to be blind to their influence upon our own beliefs.”

Shermer describes science as “the ultimate bias-detection machine.” Mechanisms such as double-blind controls in experiments are designed to weed out bias. The peer-review process is another bias-reduction technique. Skepticism and the ability to falsify are given importance in the scientific process. Scientists must defend their conclusions to the satisfaction of other scientists.

Science is our best means of separating meaningful patterns from meaningless ones. Shermer uses the model of exploration of new lands to explore the psychology of science here. Prevailing paradigms shape our perceptions. Explorers of the past used the prevailing paradigms of the past to describe their new discoveries. The set of beliefs about reality that make up science have changed as new discoveries have been made. Paradigms have shifted and will likely continue to shift. As Galileo found out, paradigms can be slow to shift when belief systems are entrenched. The shift from Aristotelian logic and deduction to Francis Bacon’s ‘observational method’ of induction took time but rewarded us with a less entrenched societal belief system. This is akin to what I call ‘cultivating the shiftable paradigm,’ or simply reminding oneself that what is or seems true today may be refuted at any time with better and more detailed experimental evidence. Of course, pure empiricism is not always perfect. We may be tricked by our eyes, even when great instruments are employed. If some new structure or function is revealed to us we might not recognize or value it if it doesn’t fit into our current paradigm based on empirical observation. Shermer notes Galileo’s mistaking of the rings of Saturn for three stars as an example. At the time there was no available concept of planetary rings and the resolution of the telescopes of the time was not enough to see to Saturn’s distance very well. Thus, even direct observation combined with the limits of the current paradigm, can fool us. He notes the triad data-theory-presentation as most important in combination for complete scientific understanding. Another way of expressing the triad is induction-deduction-communication, or what we see, what we think, and what we say. There is an interplay of the three. 
Stephen Jay Gould called this the ‘power and the poverty of pure empiricism.’ Observation can trick us into seeing something that is not really there but is partially based on previous paradigms. This is perhaps most true of the vast and the tiny – both of which are beyond our sensory ranges.

A history of astronomy and cosmology is perhaps a reminder that observations can change with new observing techniques (such as spectroscopy). Shermer gives a short history here which was unexpected and perhaps a bit of a digression but it is relevant to the philosophical aspects of astronomy. He notes Arthur C. Clarkes first law: “When a distinguished elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.” The history of science is one in which established theories have been upended to the astonishment of scientists and with much resistance. Orthodoxy can permeate science and this has been called “scientism.” However, the word scientism has also often been used as a charge, often a lame charge, by believers in the supernatural and less credible ideas to dis mainstream science. Time tends to resolve debates in science as new discoveries are made.

Mystery, paradox, and the inadequacy of language and the nature of meaning itself have kept us from uncovering the deeper picture of the nature of reality. Will it always be so? We don’t know but some mysteries have yielded little to no ground. Astronomy has meshed with philosophy to give us further curious angles to explore. There is what is called the fine-tuning problem – why our universe seems so finely-tuned toward certain conclusions including existence itself. The Big Bang is said to be “sensitive” to the ‘six cosmic numbers.’ (There are more but these are considered the most important). These are: 1) the amount of matter in the universe, 2) how firmly atomic nuclei bind together, 3) the number of dimensions in which we live, 4) the ratio of the strength of electromagnetism to that of gravity, 5) the fabric of the universe, and 6) the cosmological constant, or “antigravity” force. This fine-tuning has been dubbed the anthropic principle. There is a counter-principle known as the Copernican principle that concludes that we are not special. The universe’s apparent fine-tuning has given much energy to the intelligent design advocates, including biblical creationists. Shermer and many others argue that there can be many alternatives to the anthropic principle – as the notion that the universe was designed especially for us. Carl Sagan mentioned “carbon chauvinism,” the belief that life cannot be based on anything but carbon, and Shermer takes that a step further to call it “cosmic chauvinism,” the idea that the universe is not fine-tuned for us but rather that we are fine-tuned for it. There is still much not understood about relativity and quantum mechanics and the cosmic numbers may not be as constant as thought. They may also be all related in some other fashion. Shermer delves deeper here and mentions six types of theories of a multiverse of which our universe may be but one component. 
Stephen Hawking rejected any kind of intelligent design notion based on the anthropic principle. He and Leonard Mlodinow presented their ideas about this in the 2010 book The Grand Design (also reviewed in this blog). Their idea is called model-dependent realism. They stated that it is only useful to ask if a model agrees with observation. If it does, we may use it to describe reality. They model the universe with an extension of string theory called M-theory with eleven dimensions. If the universe is somehow determined to be finite, then M-theory would say the universe created itself. These ideas are conjectural, perhaps as conjectural as God. One may posit God but there is little reason to believe, especially in the case of the Gods of our typical religions.

In the epilogue he states simply that skepticism is science. Science has the null hypothesis which states that a hypothesis is false until proven true. In science the burden of proof is always on proving a hypothesis is true. The burden is not on the skeptics to disprove it. This is important to realize when dealing with the supernatural which has a history of claiming something is true simply because it can’t be conclusively disproven. Of course, in the final analysis many of the things we regard as true may not really be so, so much may be regarded as provincial truth in contrast to definitive truth. The opposite argument, from a perspective of negative evidence, might be something like – if you can’t prove it wasn’t God, spirits, UFOs, Jews, Rothschilds, Masons etc. etc., then it must have been them. That is a ridiculous argument.

Science might also proceed toward a ‘convergence of evidence.’ Here lines of inquiry from different inferential sciences converge to form the current scientific paradigm around a subject. This is typical of sciences that rely less on laboratory evidence and direct experimentation. It is called the convergence method. Geology, archaeology, and cosmology are examples where convergence of other sciences makes up their totality. History can often be tested through the ‘comparative method’ which was exemplified by Jared Diamond in his book Guns, Germs, and Steel, (also reviewed in this blog). By comparing the resources available to ancient peoples in different parts of the world and their geographical boundaries and constraints he realized that the variance of those resources and geographies accounted for much of the lopsided development. Both convergence and comparative methods are employed by paleontologists in testing hypotheses about evolution. “The principle of positive evidence “states that you must have positive evidence in favor of your theory and not just negative evidence against rival theories.” Thus, bunk creationist arguments are only against evolution as they have zero positive evidence of creationism. Shermer says man as homo rationalis probably never existed as we are never really purely rational but are always affected by emotion, pain, and the difficulty of life.

This is an excellent book – highly recommended. This is one reason I wanted to do a detailed review. I hope to read a few more of Shermer’s books as well. He also does short video segments on Big Think.

Friday, April 13, 2018

The Sixth Extinction: An Unnatural History

Book Review: The Sixth Extinction: An Unnatural History (Henry Holt & Company, 2014)

This was a fair to good book, mainly a history of mass extinctions and of extinction in general. Kolbert is a writer at The New Yorker and has written about environmental and science topics. I moved this up on my review list since the author just gave a talk about the book at the local university.

She begins with the recent evolutionary success of humans who have managed to vastly increase their population in the past 100 years and vastly affect their environments and planet to the point where we are directly influencing an unprecedented acceleration of the rate of extinction of many species, mainly through habitat loss. Humans began causing extinctions thousands of years ago by hunting animals to extinction, including isolated island species, especially flightless birds, and very likely megafauna as well. When we discovered and developed fossil fuels and subsequent technologies we expanded our population which also expanded human-caused extinctions.

The first chapter is about the Panamanian golden frogs, once extremely populous, now disappearing. She travels to the area to observe efforts to keep the frogs from going extinct. Frog and amphibian extinctions in general have been accelerated recently, even though some have been present on Earth for hundreds of millions of years and several made it through all of the previous mass extinctions. A fungus in the chytrid family, known as Bd for short, is the culprit killing the Panamanian golden frogs.

She talks about what is known as the ‘background extinction rate’ which is the normal rate of extinction for a class of organisms. For example, the current background extinction rate for mammals is 0.25 per million species-years, or roughly one lost mammalian species every 700 years. In contrast, mass extinctions cause substantial biodiversity losses rapidly and globally. They happen close enough in time to be called events, although an event may last hundreds of thousands of years. Mass extinctions often mark the boundaries of geologic periods. The Big Five mass extinctions were: 1) End of Ordovician, 2) End of Devonian, 3) End of Permian, 4) Late Triassic, and 5) End of Cretaceous. According to paleontologist David Raup: the history of life consists of “long periods of boredom interrupted occasionally by panic.” There have been many lesser extinction events as well. Amphibian background extinction rates have not been calculated due to a dearth of amphibian fossils, but it is thought to be less than that for mammals. However, in recent times, most herpetologists have seen several extinctions of amphibian species. In fact, now they are considered the world’s most endangered class of animals, possibly as much as 45,000 times the background extinction rate! Many reef-building corals, fresh-water mollusks, sharks and rays, and mammals are also endangered.

The Bd fungus was thought to have been introduced by importing frogs from other places that were less affected by it. Introduction of invasive species and new diseases is another result of human travel. Humans have managed to globally re-shuffle species, both purposely and accidentally, on an unprecedented scale.

Oddly, humans did not understand that species went extinct until fossils were interpreted by paleontologists. There were theories and ideas about fossils, often referred to the flood of Noah in Genesis. In the late 1700’s the French naturalist Nicolas-Frederic Cuvier interpreted Mastodon fossils from America as remnants from an extinct species. The author visits a paleontology museum in Paris that houses Cuvier’s specimens and sketches. It was Cuvier who established extinction as fact, she notes. He found many fossils in nearby gypsum quarries. The discovery and reconstruction of the bones of the Ohio mastodon by others would cement Cuvier’s idea of extinction. In 1812 Cuvier published a four-volume compendium on animal fossils. Cuvier was also involved in early study and identification of dinosaur fossils. Cuvier’s success (perhaps modest by modern standards) was based on his keen knowledge of anatomy. His senior colleague Jean-Baptiste Lamarck, who asserted that there was a force pushing beings toward complexity (an idea that later merged into Darwin’s evolution) opposed his idea of extinction.

Cuvier believed extinctions were caused by catastrophic events that happened very quickly. Not much later this idea would become known as ‘catastrophism’ and it was the basis of early geological theory in the early-mid 1800’s until Charles Lyell appeared on the scene. Lyell observed rock layers and concluded that geologic processes like sedimentation and erosion were very gradual over long periods of time. He also thought extinction was a gradual process. Darwin read his books as a young college student at 22 as he traveled on his famous voyage to Galapagos Islands and beyond to the South Pacific. He experienced a horrific earthquake while in Chile and measured the local ground uplift with surveying instruments to eight feet. These and other experiences convinced him of the truth of Lyell’s ideas as Lyell also talked about uplift and subsidence, noting that over long periods of time accumulated uplifts could make mountains. Darwin noticed that coral reefs and atolls were a result of the interplay between biology and geology in that sea shelves subsided (dropped) and reefs moved accordingly to waters of shallow depths. He presented the idea to Lyell who was delighted and revised his idea of reefs which erroneously thought underwater volcanoes were the underlying source. Lyell was wrong about other things – ie. catastrophes and destructive events do have a place in geology. Darwin’s key idea of natural selection was also rooted in Lyell’s ‘gradualism’ and followed Lyell’s famous principle that “the present is the key to the past.”

She visits a museum in Iceland that houses the last known specimen of the great auk, an extinct bird, large and flightless, last known from the mid-1800’s. Once numbering in the millions, the bird is one of many species of animals wiped out by humans for food. Darwin also acknowledged human-caused extinction, which in some cases could be directly observed as hunted species became more and more rare.

Next, she considers the ammonites and the work of the Alvarez’s (Luis and his son Walter) in determining their demise was caused by an asteroid impact that defines the end-Cretaceous mass extinction, the stratigraphic boundary being known as the K-T boundary. Paleontologists went around the world looking for the K-T boundary contact in outcrops and successive rock layers before and after, Geochemists got involved looking for evidence of asteroid impact minerals, mainly irridium. Finally, the impact crater was actually found at the Yucatan Peninsula of Mexico. It was not the impact itself that caused the ammonite extinction, wrote the Alvarez’s, but likely the dust which blocked out the sun which killed plants and animals. This is the event that wiped out the dinosaurs. Living things near the impact were simply vaporized. The atmosphere was altered and the ocean chemistry. Species that could live in deeper ocean water had better survival rates. Ammonites can be seen very well in the fossil record to decrease in variety and in amount but in this case the seemingly gradual was really precipitated by a single event. The end-Cretaceous (K-T) mass extinction is thus far the only one (probably “the” only one) confirmed to be from an impact event.

Next, she considers a history of the science of extinction in terms of Thomas Kuhn’s “paradigm shifts” in his historical/psychological study of scientific revolutions. The first paradigm shift was acknowledging extinction which happened with Cuvier and contemporaries in the 1800’s. Lyell’s focus on the gradual was another shift. That idea held up until evidence for asteroid impacts showed that mass extinctions could result from a single catastrophic event. Of course, in reality both gradual and catastrophic processes operate in geology with the catastrophic simply being much rarer than the gradual.  

She goes to Scotland where geologists are looking at graptolite fossils at the end-Ordovician (444 million years ago) boundary defined by the first major mass extinction. Life then was in the sea, having “exploded” in variation in the previous age, the Cambrian. They are looking for the record in the rocks where the sea went from habitable to inhabitable. After verification of the impact explanation for triggering the end-Cretaceous mass extinction, impact was a popular idea for other mass extinctions. However, lack of iridium and other factors favor other explanations. The current theory in favor for the end-Ordovician mass extinction is that of global cooling and glaciation of a previous greenhouse climate. The glaciation dropped global sea levels drastically which ruined sea life habitat and ocean chemistry changed drastically as well. This dropped previously high CO2 levels which cooled the planet further. One idea has it that it was the spread of mosses on the land that decreased CO2 levels and triggered the process. She considers this idea, that it was plants that caused the first major mass extinction. The end-Permian extinction also appears to be a result of changes in climate.  A massive increase in atmospheric CO2 happened then, 252 million years ago. The seas warmed. Reefs collapsed. Some scientists think the CO2 came from volcanos. Some also think that the conditions eventually favored bacteria that produced hydrogen sulfide and that this extended the die-off due to a poisoned atmosphere.

She explores the notion that we have entered a new geologic age informed by humans and the effects of their population growth and technology. Dutch chemist Paul Crutzen termed it the Anthropocene. He noted that humans have transformed between a third and a half of the land surface of the earth, dammed and diverted most of the rivers, added excess nitrogen to the biosphere through fertilization, overfished the oceans, and use half of the world’s freshwater runoff. We have also altered the chemical composition of the atmosphere and the ocean.

Next, she goes to an area in the Tyrrhenian Sea off the coast of Italy where there is an active underwater volcano system that spews CO2 from vents on the ocean floor. Scientists there can study the effects of increased ocean CO2 as one moves closer or further from the source. The extra CO2 produced by humans is partially absorbed by the ocean and this lowers its pH making it more acidic. Scientists there study the effects of more acidic ocean on sea life, particularly life that makes calcium carbonate shells. Some creatures are more adaptable than others, but few can live in the very low PH waters very near the vents. The same is true of a globally acidified ocean – some species will thrive and others suffer. The general prediction is of loss of biodiversity. Ocean acidification was involved in the end-Permian and end-Triassic extinctions and possibly the end-Cretaceous as well as the Toarcian Turnover 183 million years ago in the early Jurassic. Shelled organisms that use calcium carbonate, ie. calcifiers, need to adjust their internal chemistry to the ocean chemistry. Typically, they do that through evolution so the speed at which they can evolve will be a factor. There are no calcifiers near the vents.

Continuing with the effects of ocean acidification on calcifiers, she travels to the Great Barrier Reef region in the South Pacific. Corals are a calcifying superorganism. Darwin was right that subsidence is a major factor in reef building. She travels to One Tree Island to observe the work of atmospheric scientist Ken Caldeira and others. Caldeira thinks that the coming ocean acidification will surpass anything in the last 300 million years; thus coral reefs could be the most vulnerable of ecosystems, and they are already being affected. She explains pH and calcium carbonate saturation. The sea provides a buffering effect against acidification, but the CO2 being absorbed is producing carbonic acid faster than the buffering can keep up. In the geologic record, limestone-based reefs have come and gone, but those reefs consisted of different organisms, many now extinct. Modern coral reefs face other threats too: overfishing that increases algae, which compete with the corals; agricultural runoff, which also spurs algae growth; and siltation resulting from deforestation. Rising water temperatures cause corals to expel the symbiotic algae that give them their color. This is known as "coral bleaching" and can lead to the death of reefs.

Next, she heads to the Andes in Peru, where researchers are studying the effects of climate on the region's immense biodiversity by observing plots of trees confined to very specific elevations, based mainly on temperature. These species are very specific in their requirements. Some species can move upslope faster than others by projecting seeds, but others succumb more easily to rising temperatures. The worry here is that many of the less versatile species will go extinct. The logic is that since there is much greater species variation in the tropics, there will be much more climate-change-related extinction there. The trees also host other species, insects and the like, with rather exacting condition requirements of their own. One theory holds that the tropics have more species precisely because those species are more condition-specific. Another notes the age of the tropics: they have seen less disruption over time than the temperate and polar zones and so have had more time to diversify. New species are still being discovered in the tropical regions. Species do migrate, but if the rate of climate change (mainly rising temperatures, but also changing water conditions) exceeds the rate of migration, some species won't make it. She explains the species-area relationship, S = cA^z, where S is the number of species, A is the area, c is a constant, and z is an empirically fitted exponent. What humans change is mainly A, the area, altering the availability of certain types of land through development and agriculture. One prediction is that by 2050, 9-13% of species will be committed to extinction under the minimum warming scenario and 21-32% under the maximum. Others disagree, saying species are better at adapting and moving. Most do not think that climate change and habitat destruction will cause a major extinction like the big five, though it may approach some of the minor ones. (Thus, the book's title may be deceptive. Even the book's subtitle is questionable, since extinction is quite natural: 99.5% of species once in existence have gone extinct. A person asked about this at her talk and she conceded that it may not have been the best subtitle.) The tropics face other threats as well: illegal logging, illegal ranching, and illegal mining. Some have noted that in a warmer world, overall biodiversity may eventually increase rather than decrease, as evidenced by the warm Eocene of 50 million years ago. The problem now is that warming is happening much faster than species can keep up.
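The species-area relationship she describes can be turned into a back-of-the-envelope extinction estimate: since S = cA^z, the constant c cancels when comparing species counts before and after habitat loss, so the fraction of species retained is just (A_new/A_old)^z. A rough sketch in Python, assuming an illustrative z of 0.25 (a commonly cited ballpark figure, not one from the book):

```python
def species_retained_fraction(area_fraction, z=0.25):
    """Species-area relationship S = c * A**z: the constant c cancels
    when comparing species counts before and after habitat loss."""
    return area_fraction ** z

# e.g. halving the available habitat, with an assumed z of ~0.25
retained = species_retained_fraction(0.5)
print(f"species retained: {retained:.1%}, lost: {1 - retained:.1%}")
```

Under these assumptions, halving habitat area costs roughly a sixth of species, not half, which is why the predicted losses in the text are smaller than the predicted habitat losses.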

Next, she moves over to the Amazon in Brazil to observe some of the "reserves" kept from development: patches or fragments of rainforest. There she visits famed conservationist Tom Lovejoy, who convinced the Brazilian government to preserve parts of the area for scientific study. The experiment started in the 1980s. Here too, new species continue to be discovered. The preserved plots are essentially islands among logged-out areas. Like the Andes plots, these areas are megadiverse, and species are very specific in their behavior and in the conditions they require to thrive. High species diversity also means low population density, so populations become isolated by distance. Such populations are much more susceptible to extinction. She notes that while primary forest is declining in the Amazon, the amount of secondary forest is growing, which may somewhat slow the high predicted extinction rate. Timbered forests will regrow if not further developed. There are so many species in the tropics that it is hard to count them, let alone count how many have gone extinct.

Next, she considers what has been called the "New Pangaea": the globalization of species through human introduction, both deliberate and accidental, which has hugely expanded the geographic distribution of some species. Here she focuses on the loss of bats in eastern North America to a fungus that causes what is called "white-nose syndrome." This was first observed in 2007 and is killing bats by the millions, especially when they hibernate in caves and mine shafts in the winter. Ballast water of ships, rides on people and cargo in planes, and people's belongings and shoes during car travel are some of the ways species expand their distribution. Invasive species have become problematic in many places, although some are more problematic than others and some may even be beneficial; i.e., introduced species are not always invasive species, but they often are. Here she explores some of the stories of introduced species. They often become invasive because their usual predators may be absent in the new area. Introduced species can hunt other species to extinction, as happened with the brown tree snake, accidentally introduced from Papua New Guinea to Guam, where it drove several birds and bats extinct. It merely did what humans do: succeeded at the expense of other species. Just as the American chestnut succumbed to a fungus that was harmless to the Asian chestnut trees it evolved with, the fungus killing American bats is not harmful to the European bats from which it was likely introduced. The same is true of the chytrid fungus killing the Panamanian frogs. Novelty can kill. She goes through a surprising list of species introduced to North America: dandelions, honeybees, earthworms, Queen Anne's lace, burdock, plantain, etc. Currently, the emerald ash borer is a problem here in Ohio; I have about 40 or 50 ash trees dying or dead within about 500 ft of my house. Some will fall a limb at a time or maybe from the base.
Others nearer the house will have to be cut down in the next few years. Zebra mussels and Asian carp are aquatic species that have wreaked havoc. Of course, humans have been introducing species from time immemorial as they travelled to new areas; it is just that in recent times the process has been vastly accelerated. The result is that while local diversity has been increasing, global diversity has been decreasing.

Next, she visits the Cincinnati Zoo, which houses (or did at the time) a Sumatran rhinoceros named Suci. Sumatran rhinos are the oldest and smallest of the five living rhino species, and highly endangered. Zoos are attempting to mate captive individuals in what are known as "captive breeding" programs. The Sumatran rhino, once common from Bhutan to Indonesia, is a victim of habitat destruction and forest fragmentation. There are only a few hundred left in the wild. In their case, captive breeding efforts initially made the problem worse, as many died faster in captivity. A few have since been born in captivity, so there is some hope. Other rhinos have been overhunted for their horns, which are used as an aphrodisiac in Chinese medicine and apparently as a party drug, powdered and snorted like cocaine.

While in Cincinnati she also visits the nearby museum at Big Bone Lick, where the old mastodon fossils that Cuvier interpreted were found. She considers whether the North American megafauna were driven to extinction by human hunters (the leading theory by far), by climatic changes, or possibly both. Similar losses of fauna occurred in Australia, New Zealand, Madagascar, and elsewhere, and every loss coincided with the arrival and persistence of humans in the area. She shows through the scientific evidence that prehistoric humans almost certainly caused these extinctions. One issue with the big megafauna is that they reproduce slowly, one baby at a time, so that even a small number killed by humans could significantly reduce their population in a relatively short time. Simulations have shown that the megafauna were very vulnerable to humans: as few as a few hundred humans could have wiped them out over a millennium.

Next, she visits Germany to consider the fate of our close evolutionary cousins, the Neanderthals. The likelihood is that modern humans, Homo sapiens, both killed off and interbred with them. Genetic projects are underway to map the Neanderthal genome, compare it to the genome of modern humans, and find out where and possibly when the lineages diverged. It is apparently a slow process, since getting DNA from Neanderthal bones is not easy. We now know that Europeans and Asians share more Neanderthal DNA than Africans do; all non-Africans carry between 1% and 4% Neanderthal DNA. Only modern humans, not Neanderthals, used projectiles and made it (by boat) to Australia, Madagascar, and other remote places. Pääbo, the researcher there in Germany, is a leading figure in extracting ancient human DNA. He tried and failed to extract DNA from a bone fragment of the 17,000-year-old Homo floresiensis, first discovered in 2004 in Indonesia and identified as a new species. Then in 2010 the first bones of the Denisovans were discovered in a cave in Siberia. Pääbo named the species and was able to extract DNA from the finger bone discovered there. It was found that modern people from Papua New Guinea share about 6% of their DNA with Denisovans but that modern Siberians and Asians do not, likely due to ancient migration patterns. The Neanderthals used tools, buried their dead, and, some think, made art and adorned themselves, but after living in Europe for 100,000 years there is little to show of their "culture."

Finally, she visits the San Diego Zoo, specifically the facility there where the DNA of extinct and critically endangered species is preserved. One resident is a rare male alala, or Hawaiian crow, named Kinohi. These crows can imitate human speech somewhat like parrots. Researchers are trying to collect his sperm to inseminate one of the few female alala left. She talked about him in her talk.

Overall, this was a pretty good book and a nice overview of extinction in general. As I said before, I don't think the title or the subtitle is ideal, since this may not become a "mass" extinction, although the rate of extinction has certainly accelerated significantly, and extinction itself is not at all unnatural. I also think she spent a bit too much of her talk on the effects of climate change and on how unhelpful the current American political climate is; I would rather have explored more of the biology in general.

Saturday, March 31, 2018

Living Downstream: An Ecologist's Personal Investigation of Cancer and the Environment

Book Review: Living Downstream: An Ecologist’s Personal Investigation of Cancer and the Environment – by Sandra Steingraber, Ph.D. (Da Capo Press 1997, 2nd Ed. 2010)

This is an interesting account of the possible relationships between chemicals, particularly synthetic organic chemicals, and the incidence of cancer. By nature, it is difficult to discern prime causes of cancer from mere influences; the environment a body encounters and imbibes is certainly a factor. She does a good job of presenting evidence and trying to tease out relationships, in light of both her scientific prowess and her long and thus far successful personal battle with bladder cancer, diagnosed when she was 20. The book is also good as a personal account of the anxieties, hopes, and fears of having cancer, quite touching at times. She is a veteran of over 70 cystoscopic exams, in which tubes were inserted into her bladder.

In the intro to the 2nd edition she notes that it had been thirty years since she was first diagnosed. At her diagnosis, her urologist asked whether she had ever worked in a tire factory, in the aluminum industry, or with textile dyes; bladder cancer is the cancer most strongly thought to be due to exposure to hazardous chemicals. Of all the chemicals used in our society, very few, about 2%, have been specifically tested for carcinogenicity, although that figure increases if we extrapolate and categorize by type. Many chemicals known to cause cancer in animals are used in food and consumer products. Genetic factors are extremely important in susceptibility to cancer, so it makes sense that the genetically susceptible are especially sensitive to carcinogens in the environment.

Steingraber’s information sources include the Harvard Medical School library where she did post-doctoral research, right-to-know laws, cancer registries, published studies, and reports about levels of environmental contaminants like pesticides and other chemicals and air pollution. She notes the 1986 federal right-to-know law which requires industrial interests to keep databases of the release of initially some 650 toxic substances into the environment. The database became the Toxics Release Inventory (TRI). This allowed researchers to compare where those releases occurred with local cancer rates and patterns. From 2001 to 2008 the TRI was scaled back and thousands of facilities were no longer required to report.

Steingraber does acknowledge that cancer causation is complex; a recent analysis from Johns Hopkins University notes that most cancers do not have a discernible cause. Cancer causation used to be divided among three variables: genes, lifestyle, and environment. Newer analyses indicate that those variables often intermingle in complex ways. Genetic factors may involve epigenetic factors; a simple genetic predisposition to cancer is too simplistic a picture. Substances, natural and synthetic, may alter gene expression and change gene behavior. She also mentions endocrine disruption, whereby certain chemicals interfere with or mimic hormones, affecting hormone production, metabolism, and reproduction. She notes the old toxicology adage, "the dose makes the poison," but adds that in endocrine disruption the timing is often just as important, particularly exposure early in life. Another complicating factor is chemical mixtures and how they interact with one another and with the body as a whole. Steingraber is an advocate of the Precautionary Principle, which is favored in Europe but has always been a hard sell in the U.S. I don't agree with her on this: sometimes being overly cautious can cause more harm than good, and I think each case should be evaluated separately rather than fall under a single regulatory principle. She favors "green chemistry," but there is as yet much to work out with it. She notes that petroleum and coal are often the sources of carcinogenic synthetic substances and so favors green energy. Of course, petroleum is also the source of many synthetic chemicals that improve health and make our lives safer and more convenient. The bigger part of toxic chemical releases comes from coal-burning power plants and vehicle emissions. She notes that the death rate from cancer has actually fallen, primarily due to the success of smoking-cessation programs.
However, childhood cancer has slowly but steadily increased over the years. She notes that certain industrial chemicals have been proven to cause cancer among those who work with them so that precautions are needed, including outright banning in some cases.

Steingraber recounts her childhood in Central Illinois prairieland/farmland, where part of her family farmed. Illinois is 87% farmland. It has been farmed for a long time, and pesticide use is abundant, including the use of atrazine. Atrazine levels in the environment are high during spring planting and lower in winter. It and its byproducts are found in surface water, air, and groundwater as well. A 1992 study found that one quarter of private wells in Central Illinois contained agricultural chemicals, typically in trace amounts. How much reaches groundwater varies with how much is applied, how much runs off, and the local geology and groundwater configuration. Even long-banned DDT and PCBs are still found in the environment due to their chemical stability. She notes that:

“Atrazine remains the most frequently detected pesticide [as of publication 2010] in water throughout the United States, found in three of every four American streams and rivers and 40% of all groundwater samples.”

She gives some data and anecdotes about DDT, PCBs, and atrazine and introduces Rachel Carson, who succumbed to cancer and whose work led to DDT being banned. Throughout the book it is clear that Steingraber venerates Carson and follows in her footsteps. She recounts her visits to the library at Yale University that houses Carson's papers, and reproduces some of Carson's notes about her own cancer and impending death, alongside narrative stories about Steingraber's friend Jeannie, who developed an aggressive form of cancer in her thirties and died from it. Carson's famous 1962 book, Silent Spring, led to the banning or restricted use of several dangerous pesticides, although the chemical industry fought her. In a few isolated cases these pesticides can be useful, according to many, such as in very specific applications to prevent malaria, which kills many children in tropical countries with abundant mosquitos. Many say DDT could prevent those deaths but that access is limited since it is banned internationally. DDT, PCBs, and possibly atrazine (not banned in the U.S. but banned in Europe) are in some ways associated with cancer, although data and conclusions have been inconsistent. Thus, even with these powerful poisons it is difficult to reach incontrovertible conclusions, which makes the much less powerful pesticides in use today, like glyphosate, seem more benign by comparison. Steingraber also recounts Carson's public appearances to fight the chemical/pesticide industry after Silent Spring was published, defending her work even as she visibly struggled with cancer. Carson argued that pesticides and other chemicals caused cancer before it was generally conceded that this was the case.
Steingraber notes Carson's ode to citizen activists as helping her speak out, and sees herself in the same light, as an activist as well as a scientist. Steingraber has been vocal in recent years in opposition to oil and gas industry activity near Ithaca, NY, where she lives, although I think her focus may be misplaced, since the fracking revolution likely produces far more benefit than harm and the fears about water contamination are overblown.

She recounts the experience of having cancer throughout the book: the boredom, the anxiety, the fear, the frustration. She also notes cancer trends and tries to tease out patterns from the available data. She pores over state and federal cancer registries and compares them to TRI data. She looks at cancer incidence rates (the number of new cancer cases per 100,000 people per year). Tracking changes in cancer incidence can lead to discoveries that point to sources. Unfortunately, there are often multiple possible sources, so that many of them may correlate with cancer incidence without being causally related. The adage "correlation does not equal causation" is often relevant to these statistical epidemiological approaches, and teasing out clear relationships from the data can be difficult. She acknowledges these problems.

Incidence rates can change when new detection technologies appear, such as mammography for breast cancer. She explores the trends in breast cancer, noting that breast cancer has been dropping irregularly since it peaked in the 1990s. There are several possible reasons: a decline in women taking hormone-replacement drugs, a decline in women getting mammograms, disproportional under-reporting, and declining exposure to causative agents. She notes that breast cancer kills 41,000 women in the U.S. yearly. Another reason cancer trends are hard to track is that cancer is a slow disease and people move between localities, making cancer rates by region difficult to calculate evenly. She notes that the overall cancer incidence rate is 463 per 100,000. This is more than twice the cancer mortality rate, so more people are surviving cancer. Over 11 million people in the U.S. have cancer, are in remission, or are cured. The cancers that are rising are leukemia, non-Hodgkin lymphoma, soft tissue cancers, kidney cancer, and brain and nervous system tumors. Childhood cancers are rising as well, which suggests environmental factors. She notes that children receive a higher dose of any poisons in air and water relative to body weight, and they don't have the lifestyle factors adults do. She notes that cigarette smoking causes 85-90% of lung cancer, which has a very high fatality rate, and is thus the largest preventable cause of cancer.
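Rates per 100,000 convert to absolute counts straightforwardly. A quick sketch in Python, using the book's incidence figure of 463 per 100,000 and an assumed U.S. population of roughly 300 million (my illustrative figure for the era, not one from the book):

```python
def expected_cases(rate_per_100k, population):
    """Convert an annual rate per 100,000 people into an expected count."""
    return rate_per_100k / 100_000 * population

# Incidence figure from the book; the ~300 million population is an
# assumption for illustration.
incidence = expected_cases(463, 300_000_000)
print(f"~{incidence:,.0f} new cancer cases per year")
```

Under those assumptions that is on the order of 1.4 million new cases a year, which squares with the book's claim that over 11 million Americans have had cancer in some form.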

Steingraber lauds calls to fund more cancer-incidence research, as well as research into more potential chemical carcinogens. She notes cancer studies that have grouped people by birth year, racial/ethnic background, gender, or all of the above. She focuses on the data about non-Hodgkin lymphoma and notes that people in certain occupations, such as farm workers and dry-cleaning workers, tend to get it. Solvents, PCBs, and certain pesticides (phenoxy herbicides) are suspected sources or triggers. She studies cancer distributions across space and time. One might find cancer clusters and compare them to nearby potential sources of toxins, although one would have to prove that those toxins are actually present and know something about their toxicological effects. She notes throughout the book that cancer study results are often unclear and inconclusive and can only suggest where and what to study further. She thinks there is a general correlation between industrialization and rising cancer rates, suggesting that increased coal-burning in China and living near a Soviet petrochemical complex in Azerbaijan correlate well with increasing cancer rates in those places. She implicates coal and petroleum in particular; many synthetic chemicals derive from petroleum. However, it is hard to know how much cancer, or influence on cancer, derives from petroleum chemicals. We also know that UV light from the sun causes cancer in the susceptible and that plant substances can be carcinogenic. Lifestyle factors may also stack the deck for or against cancer. Household chemicals, cosmetics, cleaning chemicals, paints, and solvents may be factors; it is hard to know how much for each of these without large, long-lasting, well-planned studies. More people die of heart disease than cancer (especially now, as more and more cancers become treatable), and lifestyle is also a large factor in heart disease, though toxin exposure could be a contributing factor there as well.
She notes that increasing cancer rates among migrants to a new place certainly suggest an environmental influence.

She calls for a nationwide cancer registry. The National Cancer Institute keeps an atlas of cancer mortality but not incidence (cancer mortality has dropped due to better treatment and earlier detection). She notes a good correlation between cancer mortality and industrial areas. However, she also notes that quality of treatment is a factor and that cancer diagnoses do not seem to correlate as well with industrial activity. She does say that cancer rates seem to match industry better than any other health problem does. She mentions a study in the UK that correlated leukemia very well with industrial facilities, particularly those using chemical solvents at high temperatures. Cancer rates in certain occupations have long been studied: farmers, chemists, dental workers, barbers, hairdressers, firefighters, painters, welders, asbestos workers, miners, printers, fabric and dye workers, certain electronics workers, and plastics manufacturers. Exposure to dumped chemicals and wastes is also considered, particularly at the many Superfund sites. She does consider methodology and the difficulty of getting from correlation to causation; here she mentions the "ecological fallacy," the error of inferring individual-level causation from group-level correlations. She complains that uncertainty has been used to delay corrective action on environmental pollution. That can work both ways, though: those who favor strong regulation of potential toxins also use uncertainty to argue their position, which is the basis of the Precautionary Principle. It is basically a "prove it safe" vs. a "prove it harmful" debate. I would argue that since many of these toxins are, or derive from, substances that do much good in the world, including making people healthier and enabling many things, the burden should for most things be on those who would prove them harmful; usefulness to society, cost, and disruption also need to be taken into account.

She goes through a type of epidemiology known as "ecological studies," which attempts to discern disease trends in large groups. One might compare populations where exposure to a toxin is likely with populations where it is unlikely. Investigation of cancer clusters can be tricky, and legitimate requests for such studies can be dismissed by health workers. We are all exposed to various levels of "probable carcinogens," such as the metal degreaser trichloroethylene (TCE), which is in most water supplies; in such cases it is difficult to find a comparison population with no exposure to the toxin. Cancer is also more difficult to study because it may take a long time after exposure for the disease to develop, and people move between exposure and onset. Cancer may also have numerous causes, so pinpointing a single one is not easy. She notes that GIS (geographic information systems) can be very helpful. Environmental epidemiology is fraught with difficulties and prone to inconclusive results that may suggest causation where there is none, and vice versa.

Steingraber focuses on synthetic organic chemicals as potential sources of cancer, although she does not mention that there are natural carcinogens as well, and that some synthesized chemicals also occur in nature, though often in different forms. Crude oil is a natural substance that is toxic if ingested. She notes that:

“Synthetic organic molecules are chemically similar enough to substances naturally found in the bodies of living organisms that, as a group, they tend to be biologically active.”

This is problematic, she says. Many organic chemicals are inert in their final forms but active in intermediate forms during manufacture. If enough of some chemicals get into our systems, they may mimic natural body chemicals, which is how endocrine disruptors are thought to act. She also mentions chloroform, considered a probable carcinogen. It is used in many chemical processes and appears in wastewater; it also appears as a byproduct of water chlorination, so removing it entirely may be impossible. Later in the book she mentions favoring alternatives to chlorine, but water companies still prefer chlorine based on cost and feasibility. She laments the ineffectiveness of the 1976 Toxic Substances Control Act (TSCA), which does not require testing of the vast majority of new chemicals on the market. Of course, massive new studies on every new chemical would mean massive animal testing, and such tests often involve giving animals massive, lethal doses. She notes that barely a handful of chemicals have ever been taken off the market, and none in the nineteen years before publication. Pesticides are regulated differently, under the Federal Food, Drug, and Cosmetic Act (FFDCA) and the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). In 1986 the Emergency Planning and Community Right-to-Know Act (EPCRA) passed Congress over massive industry opposition. It spurred the Toxics Release Inventory (TRI), which requires companies to report the total amounts of about 650 toxic chemicals released as waste, by-products, and spills every year. The TRI was scaled back in 2008 and 2009, citing homeland-security concerns, since knowledge of where toxins are stored could be used by those wishing to unleash them for terroristic reasons. However, in order to explore whether environmental toxins are influencing cancer incidence, one would need to know what is actually in the environment from day to day, or at least on average.

Toxics and pollutants may be concentrated at places like landfills, particularly at those that accept toxic waste such as heavy metals that have been implicated as carcinogens. Steingraber does her own investigative analysis of industrial toxics released into her childhood area of Tazewell County, Illinois.

She returns to endocrine disruptors, mainly those that mimic the hormone estrogen. While this is true of several industrial chemicals, just today I read an article about naturally occurring chemicals in the essential oils used in many soaps, shampoos, and lotions doing the same thing. The substances in the oils are also known to bioaccumulate rather than fully metabolize. Apparently, there are several ways substances (both synthetic and natural) interfere with hormones. Phthalates, used in PVC plastic and added to perfumes and lotions, are known disruptors, as is a rather toxic chemical group known as organochlorines, which includes PCBs, TCE, DDT, dioxin, and several others. Organochlorines tend to persist in the environment; burning plastic produces dioxin and other organochlorines. She finds the UN Stockholm Convention on Persistent Organic Pollutants (POPs) inspiring, since it seeks to eliminate the use of the most toxic POPs, and several of the worst organochlorines are on its list.

Steingraber favors so-called green chemistry over petroleum chemistry, but it is quite difficult to compete with petroleum (or natural gas liquids and derivatives) as feedstocks. As one green-chemistry success story she mentions the development of a soy-based adhesive that replaces formaldehyde in plywood. She argues that green chemistry should be mandated, like smoking cessation and exercise, as health-promoting. However, she doesn't acknowledge that some of nature's own chemicals, when concentrated and exposed to creatures, can cause health problems too; I think she focuses overly on the synthetics. Too many wood ashes concentrated in one spot can contain carcinogenic heavy metals. Wood smoke is highly toxic. A USDA-organic-approved fungicide like copper sulfate can be more toxic than commonly used pesticides, as can other highly concentrated natural substances. She does acknowledge that pre-synthetic substances like celluloid and castor oil are also environmentally harmful, and so does not advocate banning all synthetic chemicals, only the most harmful ones. While that may seem reasonable, some are difficult to replace, and her activism to ban fracking and an underground propane storage facility suggests that she does advocate bans.

She goes through some case studies of possible relationships between certain pesticides and breast cancer. She also explains why assays (evaluations of biological or chemical substances) can be expensive, messy, and complex. Assays for potential carcinogens need to involve large numbers of animals evaluated over years, as cancer often takes a long time to appear. For complete understanding, such long, complex assays would ideally be needed for each potential chemical carcinogen. Some researchers have advocated for new chemical-screening tools, since the time constraints and inconclusiveness of animal assays leave a huge backlog of chemicals whose toxicity is unknown. Better knowledge of the interactions of networks of genes, proteins, and receptors has shown that certain chemicals and classes of chemicals disrupt the pathways of these cell functions.

She discovers that her own cancer, a type of bladder cancer called transitional cell carcinoma, also occurs among beluga whales from the St. Lawrence estuary in Canada. Many workers from a nearby aluminum smelting operation also got that particular type of cancer. PCBs, DDT, chlordane, and other toxins are found in the waters and sediment of the estuary. There is also benzo[a]pyrene, a combustion product classed as a polycyclic aromatic hydrocarbon (PAH). Liver cancer in fish has also been linked to toxic substances.

She laments the change from large numbers of small family farms to fewer but bigger, more industrialized ones. Using pesticides reduced the need for crop rotation. She recounts her childhood farm experiences in Illinois. While some Illinois corn and soybeans are sold for export, most go to feed livestock. A significant amount also becomes snacks and corn sugar. She also advocates taxing foods of lower nutritional value, such as soda, to deal with the obesity epidemic. Obesity and weight gain are risk factors for cancer, probably related to hormones and inflammation. She sees the farm belt as overproducing the two crops, corn and soy, but they make the best animal feed and snacks and the best grain nutrition for export. Also, larger, more industrialized farms are far more efficient and reduce overall land use quite significantly. Corn and soy also account for the largest share of herbicide use. Of course, there is also organic farming, which uses organically approved rather than synthetic pesticides, though these may also have some ill effects. Weeds used to be removed by plowing, hoeing, and disking, but these methods were time- and labor-intensive. They also released more carbon from soils. Modern no-till methods are better for soil health and retain more carbon; herbicide-resistant crop varieties developed through genetic engineering made them practical. Though she laments that by 2004 a third of Illinois corn was GMO (probably much more now), this is now widely seen as a good thing, since overall pesticide use is down. She does not mention this. She invokes herbicide-resistant weeds, which can be problematic but have yet to become a huge problem, especially as herbicides are more targeted in place and time to reduce runoff. She laments the continued use of atrazine, banned in the EU, because of its water solubility and ability to spread throughout the environment. She also laments the dead zones caused by nitrogen and phosphorus fertilizer runoff overload.
She notes that manure is used far less than it used to be (although she doesn't mention that manure runoff also contributes significantly to the runoff that creates dead zones). Synthetic nitrogen fertilizers are made from natural gas in fertilizer plants. Although she sees that as a problem, it is really the basis for improved yields and for preventing more global hunger, as well as for allowing farmers to make a profit and food to be plentiful and cheap. Fertilizer can also be targeted in place and time to reduce runoff, which can also save farmers money. She goes on to advocate organic farming and agroecology. These are of course good things, but in terms of yields and reducing the land needed for agriculture they are still far behind modern mechanized agriculture using synthetic fertilizer. She seems to think organic farming reduces carbon footprints, but more recent analysis suggests the opposite, mainly because organic's lower yields require more land, so higher-yield methods mean less deforestation and more reforestation. More modern scientific analysis suggests that organic "methods" combined with efficient, smart use of synthetic fertilizers and genetic engineering will be the best overall solution. More recently, CRISPR gene-editing may make GMOs more versatile. Her call to go back to the old ways of farming on a large scale seems rather anachronistic and naive in light of the massive success of modern methods.

Next, she considers airborne toxins that follow the weather and distribute themselves across the globe, even to remote parts of the world such as the Arctic. She mentions that in 2007 one-third of toxins released into the environment went into the air. However, much of that probably ends up not heavily concentrated. Chemical reactions can also create new air contaminants from combustion products, as when nitrogen oxides and volatile organic compounds (VOCs) combine to form photochemical smog (ground-level ozone). This is a well-known pollutant in many urban areas and is thought to shorten lives where smog is chronic. She thinks the increase of lung cancer among non-smokers may be attributable to particulates and smog. Oncologists and pathologists have also suggested that air pollutants like nitrogen dioxide may help cancer spread from other areas to the lungs, where it is difficult to treat.

Next, she considers water pollution, noting that it may be responsible for habitat destruction for many riverine species, including waterfowl. As in many places, she notes that water quality improved in the Illinois rivers due to the requirements of the 1972 Clean Water Act. The 1974 Safe Drinking Water Act set limits on certain chemicals allowable in drinking water. She notes the concept of "enforceable limits" of a few parts per billion for substances like benzene and TCE: any amount is considered dangerous, but water need only be "cleaned" to those enforceable limits. Most limits are in single-digit parts per billion. She notes that as of 2009, enforceable limits had been established for only 90 contaminants. Any device that heats water (showers, dishwashers, washing machines) can also release VOCs from the water, so water can contribute to airborne toxics as well. Some studies have suggested that showering can deliver a larger dose of some toxins than drinking the water. She again considers chlorinated water, noting that chlorine combines with contaminants in water to make toxic by-products, some of which are organochlorines. She notes that about 600 of those by-products have been discovered, with few tested for carcinogenicity. A few are monitored and regulated (trihalomethanes and haloacetic acids), with chloroform being the most common. She favors alternative water disinfection strategies, although chlorine has proven quite effective and removing it has proved deadly in a few cases. She favors activated charcoal and ozonation, but it is unclear how they compare to chlorine in effectiveness and cost. Manure from farms (whose decline she laments) is the most widely implicated source of water contamination, so protection of source water can be key to preventing contamination.
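The arithmetic behind those parts-per-billion limits is simple, and a minimal sketch (my own illustration, not from the book) may help make the numbers concrete. For water, 1 ppb is effectively 1 microgram per liter, since a liter of water weighs about a kilogram; the 5 ppb figure used below is the EPA's enforceable limit (MCL) for benzene, and the 2 liters per day is a commonly assumed daily drinking-water consumption, not a figure from the book.

```python
# Illustrative sketch: what a parts-per-billion drinking-water limit means
# as a daily ingested dose. For water, 1 ppb ~= 1 microgram per liter.

def daily_intake_ug(concentration_ppb: float, liters_per_day: float = 2.0) -> float:
    """Micrograms ingested per day at a given concentration (ppb = ug/L)."""
    return concentration_ppb * liters_per_day

# EPA's enforceable limit (MCL) for benzene is 5 ppb; at an assumed
# 2 liters of drinking water per day, that permits up to:
print(daily_intake_ug(5.0))  # 10.0 micrograms per day
```

Note that this counts only ingestion; as the author points out, inhalation of VOCs volatilized from heated water can add to the total dose.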
She does mention that using activated charcoal, then aeration, then chlorine as the final (rather than the first) stage can reduce trihalomethanes, although aeration can make them airborne. She also considers groundwater contamination through time and in different parts of aquifers (groundwater moves slowly in some aquifers, faster in others). She notes that contamination in groundwater recharge areas, typically upland, is more problematic than in discharge areas, typically lowland. Thus, protection of recharge areas is emphasized. Contaminated groundwater is difficult to remediate.

Next, she considers the effects of garbage incinerators, mainly regarding airborne contamination. These waste-to-energy (WTE) plants vary in effect based on how the waste stream is sorted and how effective the pollution-control systems are. In modern times some systems claim to remove 99% of contaminants (although the remaining 1% still worries nearby residents). Places like Sweden treat their WTE plants as a source of pride in the use of renewable energy, while places in the U.S. may consider them sources of industrial toxicity. Perhaps it depends on how they are marketed and promoted. There is a long-standing debate about whether landfills or WTE plants are better for the environment. Her analysis here is a few decades old, so I won't dwell on it. The bottom line today is how much pollution control is implemented, or in the case of landfills, how sophisticated the leachate collection systems, groundwater monitoring wells, and methane collection and pumping systems are. Each project should be evaluated separately. Dioxin is one major toxin produced, and clearly those who live nearest are most affected. She documents studies on dioxin and how it may work to lead to cancer, but the jury is still out on its effects and what an acceptable level should be. Burning most things, including wood, produces some dioxins. She favors recycling, but that too has costs, and it is difficult for recyclers to make money and to get people to participate on a large scale. Zero waste is a nice concept, but in reality it is far from achievable without massive changes in social habits.

Next, she considers ‘body burden,’ the sum total of contaminants taken in by ingestion, inhalation, and absorption through the skin, which gives an idea of ‘cumulative exposure.’ We can measure the amounts of different contaminants in different parts of the body. The highest amounts of DDT, PCBs, and chlordane were found when those chemicals were most in production and use. Measuring levels of pollutants in people is known as “biomonitoring.” When lead was phased out of gasoline, blood levels of lead in children began to decrease, and they ended up decreasing more than the models predicted, so we know that reducing the levels of some chemicals in the environment can lead to less of them in our bodies within a reasonable amount of time. Biomonitoring has also shown that banning smoking in public places has resulted in less “smoke” in our bodies. In 1999, the CDC began monitoring a group of 5,000 people in 15 geographic locations for up to 148 chemicals. One surprise was the amount of flame retardants in our bodies; these are potentially dangerous endocrine-disrupting POPs, and Americans carry far more of them than Europeans do. She notes that advances in chemistry have made biomonitoring more effective and cheaper. California was the first state to embrace biomonitoring, but most states have followed suit.

She clearly understands how cancer works:

“Destroying healthy tissue and clogging vital passageways, metastases are what make cancer deadly.”

“… tumors are not just homogenous balls of bad cells. Rather, they are composite tissues, with cancerous and normal cells coexisting in a complex society. But the malignant cells are the ones running the casino.”

“They are Cells Gone Wild. They are defiant, disobedient, unstable, chaotic, and in the view of many cancer biologists, almost purposeful in the ways they disrupt cellular biochemistry.”

She goes through the stages of cancer development in detail, noting the three overlapping stages (initiation, promotion, and progression) and how contaminants may affect each stage. More recently, two processes, chronic inflammation and abnormal epigenetic regulation, have been implicated in transforming cells. Obesity can increase chronic inflammation. Genes affect one’s susceptibility to cancer, and so does the environment; it is not one or the other but how the two interact. Environmental epigenetics is a new avenue of research investigating how contaminants affect epigenetics, the switches that turn genes on and off or otherwise regulate them. Strikingly, she notes that the Inuit people of Greenland, via their food chain and the way airborne contaminants fall on their region due to weather patterns (a contaminant-transport process called global distillation), have the highest body burdens of POPs (persistent organic pollutants).

She mentions studies of adoptees and ‘epigenetic drift’ among twins (the notion that as twins separate geographically, their epigenetic factors diverge). She is an adoptee and wishes she had access to her genetic history. She thinks the reverse may happen among adoptees: their epigenetic factors converge with those of non-adopted siblings due to similar environmental factors. One study among identical twins in Scandinavia suggested that the chance of one twin developing the same cancer as the other was 11-18%, which shows that genetics is a factor but not as strong a factor as expected. A recent Johns Hopkins study has suggested that cancer is so complex that determining the primary “cause” of most cancers is simply not possible. While cancer may be initiated by accumulations of genetic errors, it seems more recently that abnormal regulation of genes by epigenetic factors is the reason. She notes that the Human Genome Project revealed that we have fewer genes than was thought before the mapping was done, but that more of those genes are implicated in cancer development than previously thought. She cites the Swedish Family-Cancer Database, the largest dataset of its kind in the world, which suggests that family history of cancer plays a modest role in cancer development. She talks about oncogenes, DNA adducts, and an enzyme-based chemical detoxification process called acetylation as factors in the likelihood of developing cancer if exposed to carcinogens, particularly early in life. People who are “slow acetylators” are more susceptible, and that includes more than half of Europeans and Americans.

She compares a U.S. Dept. of Health and Human Services brochure to a genetics textbook regarding the environmental factors of cancer development. The textbook considers environmental factors, including smoking, lifestyle habits, and obesity, to be responsible for most (as much as 90% of) cancers. Is it mainly a problem of behavior or of exposure? We now know that one dietary factor, eating more fruits and vegetables, decreases cancer incidence. She thinks the focus on behavioral and lifestyle factors tends to hide the environmental roots of cancer. We do know that occupational exposures to certain contaminants, typically higher than exposures among the general population, have led to increased cancer rates. She asks whether the obesity factor is also related to greater retention of pollutants, presumably along with greater levels of chronic inflammation. Epidemiologists have cautioned against attributing cancers to single causes, and biomonitoring studies and knowledge of how toxins interact in the body suggest that complex causes involving many factors could be at play in most cancers.

She calls for green chemistry and the Precautionary Principle, but in several cases the Precautionary Principle has proven more harmful than beneficial. For instance, the banning of genetic engineering, biotech, and gene editing in Europe has led to their banning in parts of Africa, where people could directly benefit from them through less hunger, better nutrition, and more successful and cheaper farming and food. Synthetic chemicals have done a lot of good in the world. Green chemistry is a good idea but may only be marginally applicable. Natural chemicals can also lead to cancer. There are trade-offs and no easy answers. There are extreme costs to reorganizing society on greener principles, and there are unknowns. It’s easy to say “let’s have green energy now,” but there are toxins associated with these sources as well, along with tremendous costs and logistical problems. She favors “alternatives assessments,” and I can agree that we should explore alternatives to toxic solutions when possible. She also favors “full-cost accounting,” where the health costs of toxic solutions are added in. Apparently, judging from her recent activism, she favors the Precautionary Principle in banning fracking as well. Fracking has resulted in massive decreases in particulate pollution and carbon emissions, better air quality, and cheaper energy, all due to replacing coal with natural gas in power plants. That would not have occurred if the Precautionary Principle had reigned as it has in areas where the process is banned. Most things that involve risk also have benefits, and these need to be evaluated carefully. There is also what is called “risk perception,” which among humans has a very strong emotional component due to our evolutionary neurological development. Perception of risk versus real statistical risk can vary considerably. Uncertainty is often exploited both by those who favor avoiding risks and by those who favor taking risks.
Studies have shown, however, that people are more willing to exploit uncertainty and emotionality to promote avoiding risks. Heart disease is a bigger problem than cancer, and yet people worry more about cancer, seeing it as more of a risk, perhaps because it is thought that we can reverse heart disease with lifestyle changes more readily than we can reverse cancer the same way.

Overall, this is a very good book: detailed and honestly written. She has worked hard to understand the issue of the environmental factors in the development of cancers. She is no fool. While I disagree with her anti-fracking activism I do understand and agree with her advocacy for better evaluation of chemicals and her call for more studies of environmental factors in cancer as well as getting rid of the most dangerous of chemicals.