Wednesday, March 27, 2019

Sync: How Order Emerges from Chaos in the Universe, Nature, and Daily Life

Book Review: Sync: How Order Emerges from Chaos in the Universe, Nature, and Daily Life – by Steven Strogatz (Hyperion, 2003)

This one is an interesting foray into chaos and complexity theories and tendencies toward synchronization and self-organization. Strogatz is a mathematician. The book is ‘dry’ in parts but is not overly complex for a ‘slow’ reader like myself. Synchronization occurs in nature at all scales, from the atomic nucleus to the cosmos. 

Spontaneous order is mysterious, he says. Synchronization is a kind of order, in time. He distinguishes accidental, temporary sync from persistent, long-lasting sync. We tend to like sync, such as the rhythm of music, and we tend to interpret persistent sync as a sign of planning, choreography, or intelligence. Seeing sync in schools of fish, in synced fireflies, or in my case in noticing that my geese and ducks (and one rooster) can be herded in sync as one unit, is fascinating. Sync among non-intelligent entities like cells and electrons is even more mind-boggling. 

The science of synchrony (sync) studies “coupled oscillators.” Oscillators are “entities that cycle automatically, that repeat themselves over and over again at more or less regular intervals.” Two or more oscillators are said to be coupled if they influence one another physically or chemically. Coupled oscillators may be many things: planets, heart cells sending electrical signals, or various units of life and matter. Strogatz studies sync mathematically and notes that there are practical applications present and future, many of them medical and safety oriented. 

He goes through the history of the study of synchronized firefly flashing. Biologist John Buck and colleagues discovered that the rhythm of flashing was regulated by an internal oscillator that could reset. Somehow all this internal resetting, adjusting to the flashing of others, without intelligence, accounts for the well-timed synchrony observed in many firefly species. Strogatz goes so far as to call the tendency to synchronize one of the most pervasive drives in the universe. Different syncs – say the moon’s ability to spin at the exact same rate that it orbits the earth so that we only ever see one side of it from here (caused by tidal effects), or the synchronized swimming of sperm on the way to the egg, or the pacemaker cells of the heart – are linked by mathematical relationships. In many ways nature is collectively precise without a leader.

The author was inspired by a book by biologist Art Winfree called The Geometry of Biological Time. Impulses often did not change smoothly; instead they tended to jump, which makes them harder to study with calculus and algebra. With multiple oscillators, mathematical analysis becomes unwieldy and nearly impossible. Simulations are another method, but far less precise than math. Charles Peskin’s stroboscopic simulation method was more satisfying for the author, who described the synchronization in his own experimental simulations as “spooky.” 

Strogatz and a grad student describe the tendency to synchronize as due to what they call “absorption,” where one oscillator “absorbs” another in the sense that once the absorbed oscillator becomes synchronized with the absorber, they stay in sync irrevocably (that is, once they hit a threshold). They may be changed by other oscillators but will change together. Absorption is how oscillators “clump” together, eventually resulting in a fully synchronized system or unit. This all happens according to mathematical proofs and logic. The synchronized firing of neurons and the way an earthquake happens after stresses cross certain thresholds are other examples of sync; there are many more. Another idea, physicist Per Bak’s ‘self-organized criticality,’ was found to be synonymous with sync. Even though the study of sync has been mocked as frivolous by some politicians, there have been practical benefits: early internet routers were plagued by synchronized pulses that caused congestion, so engineers had to devise a means to “clock” computer circuits more efficiently.
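The threshold-and-absorption idea can be sketched with a toy pulse-coupled oscillator model in the spirit of the Mirollo–Strogatz analysis. The curve shape and kick size below are my own illustrative choices, not the book's:

```python
import math

def pulse_coupled(phases, eps=0.05, b=3.0, steps=2000):
    """Toy pulse-coupled oscillators (illustrative parameters, not the
    book's). Phases live in [0, 1); when the leading oscillator fires,
    every other oscillator gets a small kick through a concave
    voltage-like curve f. Once an oscillator is pushed past the firing
    threshold during someone else's firing, it is "absorbed": from then
    on the two fire together forever."""
    f = lambda p: math.log(1 + (math.exp(b) - 1) * p) / b       # concave rise
    f_inv = lambda x: (math.exp(b * x) - 1) / (math.exp(b) - 1)
    phases = list(phases)
    for _ in range(steps):
        dt = 1.0 - max(phases)            # advance until the leader fires
        phases = [p + dt for p in phases]
        updated = []
        for p in phases:
            if p >= 1.0 - 1e-12:          # this one fires: reset to zero
                updated.append(0.0)
            else:                         # absorb a kick; fire if pushed past 1
                x = f(p) + eps
                updated.append(0.0 if x >= 1.0 else f_inv(x))
        phases = updated
    return phases
```

Starting two oscillators well out of phase (say at 0.0 and 0.3), after a handful of firings the follower's kicked state crosses the threshold during a firing and is absorbed; from then on both fire as one.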

Only male fireflies flash in sync, and the proposed explanations have much to do with mating: to advertise the number of males available, to improve a male’s odds of getting lucky by being mistaken for another male, and to avoid standing out as prey to predators. In humans, females sometimes show sync when their menstrual cycles synchronize after they spend long periods together. One of the leading proposed mechanisms involves pheromones, chemicals that may signal cycles to sync. Some experiments showed that something in women’s sweat (pheromones) may signal the menstrual cycles to sync, but other experiments did not verify that, so the explanation is still unproven. Even where pheromones do influence the cycles of other women, those cycles do not always end up synchronized. Thus, the behavior is more complex than the sync of fireflies, complex enough that synchronized menstruation is difficult to predict. Some have theorized that menstrual synchrony lets females share some child-rearing and breast-feeding duties, which can result in healthier offspring among mammals. Strogatz notes that the sheer complexity of some systems makes mathematical modeling of them an art as well as a science. 

Next is an ode to the work of Norbert Wiener, the founder of cybernetics. Wiener was the first to point out the pervasiveness of sync in the universe. He dove into the study of brain waves, suspecting they indicated some internal clock mechanism coordinating brain activities that occur many times a second. He speculated that oscillators in the brain pulled on each other’s frequencies, speeding some up and slowing others down to achieve synchronization. So the brain waves are a kind of consensus of the mental state. He believed that this ‘frequency pulling’ was a key mechanism of ‘self-organizing.’ He failed to adequately describe the situation before his death in 1964, but a year later Art Winfree would do so. Winfree focused on the ability of oscillators to send and receive signals. Coupled oscillators influence one another, and an oscillator’s sensitivity to being influenced changes over the course of its cycle. Winfree stated that: 

“At any instant, an oscillator’s speed is determined by three contributions: its preferred pace, which is proportional to its natural frequency; its current sensitivity to any incoming influences (which depends on where it is in its cycle); and the total influence exerted by all the other oscillators (which depends on where they all are in their cycles)”

That makes the mathematics very complex. In principle the future can be predicted from the present by differential equations, i.e. calculus. Linear differential equations are solvable, but non-linear ones, including those involving competition or cooperation, generally are not. Winfree used computer simulations to attack the equations. Once some clumps begin to synchronize they can be “heard” over the background, and this can lead to synchrony of the whole system. Group sync was not hierarchical, he discovered, but it was not democratic either. Winfree realized that group sync was analogous to a phase transition (like the transition from liquid water to solid ice). In a phase transition, at a certain temperature there is a reorganization that results in a new structure. Sync is similar, but in time rather than in space. This was an “unexpected link between biology and physics.” Non-linear dynamics and statistical mechanics could now be hybridized into a new theory. In 1975 the Japanese physicist Yoshiki Kuramoto finally solved the differential equations related to sync, which had been considered possibly unsolvable. He defined the ‘order parameter,’ which gives a value of 1 to perfect sync and a value of zero to no sync. In his analysis he noted that only total non-sync, partial sync, or complete sync were possibilities. He realized that the oscillators must be similar enough to each other in order to synchronize. Strogatz set out in 1986 to study the Kuramoto model, eventually using techniques developed by the plasma physicist Lev Landau. Wiener’s frequency pulling turned out to be not as clear-cut as thought, but it was established that oscillators do affect the frequency of other oscillators. 
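Kuramoto's model and his order parameter can be sketched in a few lines of Python. This is a toy simulation; the population size, frequency spread, and coupling strength are my own illustrative choices:

```python
import math
import random

def kuramoto_order(n=100, K=2.0, dt=0.01, steps=3000, seed=1):
    """Toy Kuramoto simulation: n oscillators with slightly different
    natural frequencies, coupled all-to-all with strength K. Returns
    the order parameter r (0 = no sync, 1 = perfect sync)."""
    rng = random.Random(seed)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    omega = [rng.gauss(0.0, 0.2) for _ in range(n)]   # natural frequencies
    for _ in range(steps):
        # mean field: r * e^{i psi} = (1/n) * sum of e^{i theta_j}
        cx = sum(math.cos(t) for t in theta) / n
        sx = sum(math.sin(t) for t in theta) / n
        r, psi = math.hypot(cx, sx), math.atan2(sx, cx)
        # each oscillator runs at its own pace but is pulled toward
        # the mean phase, harder when the group is more coherent
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    cx = sum(math.cos(t) for t in theta) / n
    sx = sum(math.sin(t) for t in theta) / n
    return math.hypot(cx, sx)
```

With coupling well above the critical value the order parameter ends up near 1; with the coupling switched off (K = 0) it stays near zero, the incoherent state.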

Next, he delves into the sync of the human body in sleep-wake cycles. We are tuned through evolution to the day-night cycle, and being out of sync with it due to things like working the night shift can wreak havoc. Apparently, we have a ‘circadian pacemaker,’ a “neural cluster of thousands of clock cells in the brain, themselves synchronized into a coherent unit.” This cluster influences other cells and organs to do what they do at the right times. Sync in the body occurs at three levels, he notes: sync among cells within an organ; sync between organs, where they ‘period match’; and sync between our bodies and the world around us. This last one, rooted in the day-night cycle, is called external synchronization, or entrainment. There is still much to learn about these circadian rhythms. Hormone fluctuations, digestion, alertness, dexterity, and cognitive performance are all related to these daily rhythms. Experiments with people kept from the sun have shown that their body temperature cycle (which varies over a 1.5 deg F range) will sync up with their sleep-wake cycle. Some circadian rhythms were found to be 26 hours, some closer to 22 hours, so they vary. However, some people desynchronized radically after long periods away from the sun, typically with long wake and long sleep periods seemingly randomly thrown in, while their body temperature cycle remained the same. This became known as ‘spontaneous internal desynchronization’: only the sleep-wake cycle varied, while the temperature and hormone-secretion cycles stayed roughly daily. Even though the desynchronization seemed random, there was logic in the data as expressed in raster plots. The beginning of long sleeps coincided with higher body temperature and the beginning of short sleeps with lower body temperature, a strong correlation showing that sleep length is related to the phase of the temperature cycle. Many other physiological and cognitive processes were also linked to the phase of the temperature cycle.

It has also been found that the REM cycle during sleep is entrained with the body temperature cycle. REM sleep is most likely to be initiated just after the body is coldest, which is why it more often occurs near the end of the sleep cycle. Other cycles, such as the short-term memory cycle, the release of the hormone melatonin, and other cognitive and physiological functions, maintain phase relationships with the body temperature cycle and with one another. The biological clock ties everything together. The cells of organs also display circadian rhythms. Eventually, the suprachiasmatic nuclei, two clusters of neurons at the front of the hypothalamus, were identified as where the circadian pacemaker resides. Built into our daily cycles are times of drowsiness corresponding to the siesta in the day (1-4 PM) and the zombie zone at night (3-5 AM). These are times when accidents are likely to occur. Times of maximum wakefulness were also found, with peaks at 10 AM and 9 PM. Night-shift workers tend to have trouble with synchronization, and there are some things they can do to help. Light has a strong synchronizing effect. 80% of blind people suffer from some form of sleep disorder; the other 20% likely have intact circadian photoreceptors in their retinas, even if they can’t see.

An example of circadian sync is that of leaves of plants opening in the day and closing at night. Several trees do this. In 1665 Dutch physicist Christiaan Huygens noticed that two pendulum clocks (he invented them) would synchronize their pendulum swings within a half-hour no matter where they started from. There are many other examples of non-living things spontaneously synchronizing. Lasers, utilized in many things including CDs, laser surgery, and supermarket scanners, rely on synchronized light emissions. Even our regional power grids utilizing different power generators or power plants end up operating in sync. 

Atomic clocks, the most accurate clocks we have, rely on sync. They “count the transition of a cesium atom as it flits back and forth between two of its energy levels.” Atomic clocks made possible GPS systems which can pinpoint positions in space from far away with accuracy. GPS allows synchronization better than a millionth of a second which is also useful for coordinating financial transactions. Each of the 24 global positioning satellites carries 4 atomic clocks synchronized within a billionth of a second of one another by a master clock in Boulder, Colorado.

In the wider universe, another example of inanimate sync is orbital resonance. In the case of two linked planets orbiting a star, one version is where one orbits the star at exactly twice the rate of the other. Even more remarkable is the case of our own moon, which spins on its axis at exactly the same rate it orbits the earth, which is why we only ever see one side of the moon. In that case the earth’s gravitational pull on the moon is balanced by the centrifugal force at the moon’s center, and the moon’s weight distribution (it is bottom-heavy) provides the corrective torque to bring it back into sync. Another example of astronomical sync (orbital resonance) is the calculated orbital periods of asteroids in the asteroid belt between Mars and Jupiter, which are precisely mathematically related to the orbital period of Jupiter. The point of closest approach of an asteroid to Jupiter always occurs in the same place in both of their orbits, similar to Huygens’ pendulum clocks synchronizing. 

Quantum choruses is the next chapter. Superconductivity showed that perpetual motion was possible near, but slightly above, absolute zero, which defies the laws of classical physics. The new theoretical science of quantum mechanics would mathematically solve this riddle and many others: electrons pairing up and cooperating in sync would be the key to superconductivity. In 1995 physicists at a lab in Boulder, Colorado got temperatures down to less than a millionth of a degree above absolute zero (mind-boggling), and at that point atoms began behaving as one super-atom. This is ‘quantum phase coherence,’ also the basis of the laser. Electrical resistance drops to zero at a certain low temperature, which is the basis for hopes that superconductivity could enable a much more efficient form of electrification. Apparently, this has to do with the “communal behavior” of paired electrons. Materials research in the search for superconductivity, a major research push in the 1980s, found that some materials could be coaxed into superconducting behavior at much higher temperatures, but unfortunately not high enough to be feasible in the real world. There are many other hurdles as well. 

A young grad student, Brian Josephson, discovered in the early 1960s that “supercurrent” could have a counterintuitive mathematical relationship (like many quantum-level processes). Physicist Richard Feynman soon realized that these “Josephson effects” in superconductivity could theoretically occur in many “phase-coherent” systems, and in 1997 one was found: superfluid helium. Strange quantum effects, like quantum tunneling and quantum sync, account for the Josephson effects. “All liquids become highly ordered when cooled to very low temperatures.” Josephson’s theory involved sandwiched superconducting materials that later became known as “Josephson junctions.” They have led to the most sensitive detectors in science, SQUIDs (superconducting quantum interference devices), which have been used to great success in medical imaging and show potential for supercomputers, or rather superconducting computers. Josephson received a Nobel Prize in 1973 but soon thereafter devoted his work to paranormal research, thinking that one day quantum theory could explain telepathy. This was not well received by his physicist colleagues, but Josephson still believes it is possible. It was noticed that the behavior of Josephson junctions, like the motion of pendulums, is non-linear. The motion of a pendulum is affected by gravity, angles, and torque; Josephson junctions are affected by phase. Breakthroughs in chaos theory and the development of non-linear dynamics aided in the study of sync.

The author began a collaboration in 1990 with Kurt Wiesenfeld, studying the non-linear dynamics of Josephson junctions. They developed a method of study and representation involving two-dimensional graphs that traced interesting geometrical shapes. They were shocked to find that “every solution is periodic” and suspected a “secret symmetry” in the equations. What they discovered was essentially the Kuramoto model! The Millennium Bridge opened in London in 2000, but when hundreds of people began walking on both sides of it, it began to sway, and the swaying grew to the point where the bridge was shut down. Apparently, people adjusting their steps to catch their balance in response to the sway were amplifying it. It was Josephson who figured out the sync mechanism that was causing the amplified swaying. 

Next, he delves more into chaos theory and non-linear dynamics with accounts of Lorenz coming up with his equations back in 1963. Chaos theory overlaps with complexity theory as chaotic systems are mathematically complex. He refers to the “second wave” of chaos theory where it was discovered that chaotic systems exhibit a new kind of order. Chaos now had laws. James Gleick’s 1987 book, Chaos, brought chaos theory to the masses (I have read most of it and one day may finish it for a review here). Chaotic systems mostly defy predictability, but it has been found that two chaotic systems can sync up. Synchronized chaos shows that chaotic systems only appear to be random. In reality they are subject to certain laws. 

“These, then, are the defining features of chaos: erratic, seemingly random behavior in an otherwise deterministic system; predictability in the short run, because of deterministic laws; and unpredictability in the long run, because of the butterfly effect.”

The butterfly effect is simply the observation that in chaotic systems small discrepancies or disturbances can end up changing the whole dynamics, rendering the system unpredictable. A chaotic system requires precise measurement of its initial conditions to be predictable at all, and even then only in the short term. 
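The butterfly effect is easy to demonstrate numerically. Below is a crude Euler-stepped sketch of the Lorenz equations (the step size and starting points are my own choices) comparing two trajectories that begin a hundred-millionth apart: indistinguishable in the short run, completely decorrelated in the long run.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz equations (crude, but enough to
    show sensitive dependence on initial conditions)."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def divergence(steps, eps=1e-8, dt=0.01):
    """Largest coordinate gap between two trajectories whose starting
    points differ by eps, after the given number of steps."""
    a, b = (1.0, 1.0, 1.0), (1.0, 1.0, 1.0 + eps)
    for _ in range(steps):
        a, b = lorenz_step(a, dt), lorenz_step(b, dt)
    return max(abs(p - q) for p, q in zip(a, b))
```

After 100 steps the gap is still microscopic; after 3000 steps it has grown to the scale of the attractor itself, and the two trajectories tell you nothing about each other.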

“Just as a circle is the shape of periodicity, a strange attractor is the shape of chaos.”

In both cases dynamics are converted into geometry. On a practical level, chaotic systems provided the means for “chaotic encryption” of electronic communications, which is unpredictable enough to defy decryption. 

Next, the author explores sync in three dimensions. He goes back to 1982, when he accepted a summer job with Art Winfree at Purdue University to study topology (the study of continuous shape), among other topics. Winfree was the author of many scientific papers relating biology and mathematics, particularly geometry. Another topic of their study was the chemical waves produced in a “Zhabotinsky soup,” a chemical reaction that supports excitatory waves much like those that trigger the heartbeat. Chemical waves are like neurons in that they have three states: quiescent, excited, and refractory (incapable of being excited for a time). One might also compare them to the human sexual response. Zhabotinsky soup (more accurately known as the BZ reaction) allows the unfettered study of wave propagation in excitable media. This led to the discovery of a new kind of rotating, self-sustaining wave, shaped like a spiral. Such waves are responsible for tachycardia and the ventricular fibrillation that can result in sudden cardiac death. The waves tend to annihilate on collision with other waves. Strogatz and Winfree were studying these spiral waves in 3D. They helped define scroll waves, scroll rings, and twisted scroll rings, and the rules of such structures. Knots were more difficult. With modern supercomputers much more is now known about spiral waves and scroll waves and their twisted and knotted forms. They continue to be studied for their role in cardiac arrhythmias.

The next subject is small-world networks. We know that networks have organizing principles and seek to discover them. Even the corpus of scientific knowledge is a network of sorts. Networks are made up of individuals but exhibit network properties, group properties. One version is the so-called “six degrees of separation” that connects us to one another and to others. 

“Whenever nonlinear elements are hooked together in gigantic webs, the wiring diagram has to matter. It’s a basic principle: Structure always affects function. The structure of social networks affects the spread of information and disease; the structure of the power grid affects the stability of power transmission. The same must be true for species in an ecosystem, companies in the global marketplace, cascades of enzyme reactions in living cells. The layout of the web must profoundly shape its dynamics.”

Networks are made up of nodes, or connection points. Studying networks mathematically involves calculating the number of links between nodes. Strogatz and grad students designed simulations to study network connectivity. They defined a term to address a network’s evolving structure: the average path length, the number of links in the shortest path between two nodes, averaged over all pairs of nodes. They found, counterintuitively, that what they call small-world networks are both highly clustered (like regular lattices) and ‘small’ in average path length (like random networks), whereas regular lattices are clustered but large, and random networks are small but not highly clustered. The power grid and the nervous system both qualify as small-world networks. Social networks are also likely to be small-world networks, as the experiments in ‘six degrees of separation’ suggest. 
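The average path length idea is simple to compute directly. Here is a small stdlib-only sketch (the graph size and number of shortcuts are my own choices) showing how a few random long-range links shrink a ring lattice's average path length, the heart of the small-world effect:

```python
import random
from collections import deque

def avg_path_length(adj):
    """Average number of links in the shortest path between two nodes,
    taken over all pairs (computed by breadth-first search)."""
    nodes = list(adj)
    total = pairs = 0
    for s in nodes:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for t in nodes:
            if t != s:
                total += dist[t]
                pairs += 1
    return total / pairs

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbors on each
    side: highly clustered, but a 'big world' to traverse."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

# A few random long-range shortcuts shrink the world dramatically.
rng = random.Random(0)
world = ring_lattice(100, 2)
before = avg_path_length(world)
for _ in range(5):
    a, b = rng.randrange(100), rng.randrange(100)
    world[a].add(b)
    world[b].add(a)
after = avg_path_length(world)
```

For a 100-node ring where each node knows only its four nearest neighbors, the average path length starts near 13 links; even five random shortcuts pull it down noticeably while leaving the local clustering almost untouched.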

“The importance of small-world connectivity is even clearer for processes of contagion. Anything that can spread – infectious diseases, computer viruses, ideas, rumors – will spread much more easily and quickly in a small world.”

Small-world networks have a tendency to self-organize. Statistically speaking, there are also networks whose pattern of connections looks the same regardless of scale. These are called scale-free networks, and they have self-organizing properties similar to those of small-world networks. 

“At an anatomical level – the level of pure, abstract connectivity – we seem to have stumbled upon a universal pattern of complexity. Disparate networks show the same three tendencies: short-chains, high clustering, and scale-free link distributions. The coincidences are eerie, and baffling to interpret.”

Scale-free networks have been shown to be resistant to random failures yet vulnerable to attacks on their hubs. In a study of the network of protein interactions in yeast it was found that the most highly connected proteins are the most important ones for the cell’s survival.

The last chapter addresses the human side of sync. The author was contacted by the actor Alan Alda, who had read his Scientific American article about sync. Alda had long been fascinated by fads. Likely spurred by Richard Dawkins’ idea of memes as a cultural equivalent to genes, he sensed mysteries of group human behavior to be discovered in the study of fads, possibly as some form of sync. Mobs, riots, traffic, and music or sports spectators all exhibit group behavior that sometimes seems to sync up. Sociologists and behavioral economists study group behavior too. Sometimes it’s called herd behavior, since the behavioral choices of others influence one’s own: we tend to do what our neighbors do. Companies tend to do what their competitors do, often to avoid falling behind or losing market share or profitability. There seems to be a threshold where, if enough of one’s neighbors adopt a behavior, one will adopt it as well. Explanations involve ideas like ‘tipping points’ and ‘vulnerable clusters.’ Complexity theory has even been applied to highway traffic, where sync does indeed happen when enough vehicles are confined to a certain space: we tend to adjust our speed to the traffic around us. Audiences clapping in unison is another example of social sync; the synchronized marching of German Nazis is another, not so flattering. Some people see coincidences as a form of sync, but the evidence is lacking, or perhaps just harder to find. Some suspect sync is even involved in how the brain gives rise to the mind, a major problem in brain science and psychology. Now that we can correlate human thoughts and emotions with activity in different parts of the brain, we can arrive at neural correlates of consciousness. Cognition has been linked to brief outbursts of neural synchrony, and sync may well be a way of binding things together in our minds. Experiments have found that:

“… synchronized neural activity is consistently associated with primitive forms of cognition, memory, and perception.”

The question is perhaps whether sync is essential to cognition or simply just associated with it. Recognition of faces hidden in otherwise meaningless pictures has been definitely associated with synchronized neural activity. 

Strogatz sees science as changing from the excessive study of parts to a new holistic study of whole systems. Crafting parts into a whole often involves apparent choreography, and that of course suggests sync. The non-linear sciences (cybernetics, sync, complexity theory, chaos theory, etc.) are systems sciences. The chemist Ilya Prigogine thinks thermodynamics will come (somehow) to explain the non-linear subjects. Metabolism, as optimal use of energy, does indeed explain some processes.

As noted, this book was tough to grasp and a little boring in parts but overall quite fascinating. The final paragraph of the book goes like this:

“For reasons I wish I understood, the spectacle of sync strikes a chord in us, somewhere deep in our souls. It’s a wonderful and terrifying thing. Unlike many other phenomena, the witnessing of it touches people at a primal level. Maybe we instinctively realize that if we ever find the source of spontaneous order, we will have discovered the secret of the universe.”

Sunday, March 17, 2019

Bottled Lightning: Superbatteries, Electric Cars, and the New Lithium Economy

Book Review: Bottled Lightning: Superbatteries, Electric Cars, and the New Lithium Economy – by Seth Fletcher (Hill & Wang, 2011)

This is a great history of the development of battery technology and electric vehicles. Fletcher was a senior editor at Popular Science magazine when this was published. It seems likely that the 2020’s will see major adoption of EVs with the 2030’s being when they will come to dominate. As of now (2019) they are still a small percentage of total vehicles. 

This book begins with the beginnings of electricity itself. The development of the battery arose from a dispute about the nature of electricity between two Italian scientists, Galvani and Volta. Volta, after studying experiments involving the torpedo fish, came up with the first battery in 1800: piles of metal, like sandwich cookies, made of zinc, copper, and brine-soaked cardboard. Later in that century came Faraday and Maxwell, the discovery of the relationship between electricity and magnetism (now known as electromagnetism), and the development of bigger and more powerful batteries. Lead-acid batteries are of course still used to start our gasoline-powered cars. At the beginning of powered transport electric cars did share the road with early gasoline vehicles but eventually proved impractical compared to them. Thomas Edison joined the search for a better battery. His nickel-iron battery with a potassium electrolyte performed better than the lead-acid batteries of his day, and it powered some of the vehicles on the road in the first decade of the 20th century. But the batteries began to leak, and Edison was getting old, with deteriorating health. Finally, the internal combustion engine improved enough that battery-powered vehicles lost feasibility. Even though the leaking was fixed, his competitor ESB introduced a new battery, the Ironclad-Exide, using it to start gasoline engines. Thus the battery was relegated to a supporting role in an increasingly petro-based world. Edison added nickel and lithium hydroxide to his battery in 1908, which gave it 10% more capacity and extended the time it could hold a charge. Lithium was the third element produced in the Big Bang, after hydrogen and helium.

“Composed of three neutrons, three protons, and three electrons, lithium is the third element in the periodic table …”

Lithium was used in the 19th century to treat some illnesses. Lithium citrate was used in early formulations of the lemon-lime soda known as 7UP. The use of lithium chloride for heart patients in the 1940s turned out to be harmful: people overdosed and died, but the data made lethal doses well defined. This slowed implementation of the promising results of lithium salts in treating mania. Lithium carbonate was approved as a psychiatric medication in 1970. It’s now considered one of the most effective medicines for mental illness, particularly bipolar disorder. It is used as a mood stabilizer, and the mechanism by which it works involves its effect on neurotransmitters and cell signaling; it increases serotonin production. The pharmaceutical industry uses only a tiny fraction of the lithium mined. Metal alloys, ceramics, lubricating greases, devices used to absorb CO2 on spacecraft and submarines, rocket propellant, and certain types of nuclear reactors are other uses, though batteries are set to become the main use for lithium. Lithium’s low atomic weight allows a battery to store more electricity in a smaller space and be lighter than one made with other materials such as lead, and its eagerness to shed its outer electron makes for a more powerful battery. Lithium is unstable and thus reactive in its pure form; under certain conditions a lithium battery can be an explosive, but separating the electrodes with electrolyte bridges tames the explosive tendencies. Electricity has many advantages over other power sources such as burning fossil fuels (although much generated electric power comes from doing just that in big power plants), hydrogen, or biofuels like ethanol. The challenge for batteries is to store more energy at reasonable cost. Incremental improvements have happened over the years and continue to do so, but high costs still limit wider implementation. 

California especially has had a problem with internal combustion engine (ICE) vehicles due to the susceptibility of the cities in the southern part of the state to smog. Anti-ICE advocates in California were given another boost by the Middle East energy crisis in the 1970’s. These situations revived interest in electric vehicle (EV) development. Battery research was ongoing at Stanford and at the Ford Motor Company. Solid state electro-chemistry, aka solid state ionics, took off in the early 70’s with the search for better anodes, cathodes, and insulating materials in batteries. By 1972, Chevy, GM, Ford, Chrysler, AMC, and Toyota were all working on EVs. Exxon was also deeply involved in battery research. In fact, Exxon had the only battery model that used lithium that functioned at room temperature, which was a big advantage. Exxon’s early lithium batteries were dangerous if not handled correctly. Gas buildup would lead to explosions. By 1976 the US government was supporting battery research for EVs. There were fears then of oil supplies running out and Exxon wanted to diversify. The first Exxon rechargeable lithium battery to reach the market was very small. The button cell battery was to be used for a solar powered watch. The digital watch thus became the first wearable battery, a trend that we now all know well with our smart phones. The recession of 1979-1980 crashed the battery and EV momentum. Exxon wanted out of battery R&D and ended up licensing their discoveries to other companies like Eveready, then Union Carbide. With the election of Reagan US government interest in alternative energy waned. The rebound of oil supply in the 1980’s also kept EV and battery research underfunded. Getting things to market is more challenging when oil and gasoline prices are low. 

Bell Labs research in the 1970s led to breakthroughs in cellular communications. Motorola and AT&T were vying for a prototype. Motorola conducted the first cell phone call in New York City in 1973, when a call was made to its competitor, Bell Labs. The story of Oxford's John Goodenough is told (the author gives detailed descriptions of the different contributors to battery storage, along with their academic and work histories). Goodenough first decided to replace the lithium sulfides with lithium oxides, which could reach a higher voltage. Goodenough also proved that a battery did not have to be built fully charged, as previously believed; it could use compounds stable in ambient air and be assembled in the discharged state. Essential to getting the lithium battery into the marketplace was the lithium-cobalt-oxide cathode, which Goodenough notes is what started the wireless revolution. He published in 1980. Motorola started selling cell phones in 1984, but they would be quite rare for years to come. Japanese companies in the 1980s were very interested in battery research, maybe especially because at the time they made a lot of battery-operated digital devices. Sony had planned a joint venture with Union Carbide's Eveready when the disaster in Bhopal, India hit and Union Carbide, mired in litigation, was split up. Sony was able to buy out the venture at a bargain. By 1987 it was focused on developing a mass-market rechargeable lithium battery. Sony came up with a carbon anode to go with the lithium-cobalt-oxide cathode. The voltage was higher yet, at 3.6 volts. This meant more power at less weight, in a smaller package that could fit into smaller gadgets. Another benefit of Sony's new battery was that it lasted longer, thanks to a reversible chemical reaction that caused little damage from recharges. The cells could also be recharged before being run completely dead, unlike nickel-cadmium (NiCad) batteries.
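The significance of that 3.6-volt figure falls out of the basic relation that a cell's energy is its voltage times its charge capacity. A minimal sketch, using an assumed equal capacity for both chemistries purely for comparison (the capacities are my illustration, not the book's):

```python
# Cell energy (Wh) = nominal voltage (V) * capacity (Ah).
def cell_energy_wh(voltage_v, capacity_ah):
    return voltage_v * capacity_ah

# Assumed 1.0 Ah capacity for both cells, to isolate the effect of voltage.
nicad = cell_energy_wh(1.2, 1.0)  # NiCad cell: 1.2 V nominal
liion = cell_energy_wh(3.6, 1.0)  # Sony's Li-ion cell: 3.6 V nominal

# At equal capacity the 3.6 V chemistry stores three times the energy per cell,
# so a device needs fewer cells, less weight, and less volume.
print(nicad, liion)  # 1.2 3.6
```

This is why the higher cell voltage translated directly into smaller, lighter gadgets.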
Sony announced them ready for use in 1990, calling them lithium-ion batteries, perhaps to distinguish them from Exxon's flaming lithium batteries of the 1970s. Another company, Moli, had failed at mass-marketing lithium batteries in the late 1980s due to fires caused by unforeseen charging-discharging scenarios.

In 1990 the nickel-metal-hydride battery replaced the NiCad and was mass-marketed. However, in 1992 Sony offered a $60 optional lithium-ion battery for a camcorder that was smaller and lighter than the NiCad and stored more energy than nickel-metal-hydride. In 1994 Motorola released the first small cell phone with 45 minutes of talk time, and the first phone with voice mail. It was now realized that the small, light lithium-ion battery was enabling a portable electronics revolution. Japanese companies dominated the industry. RF power amplification allowed phones and devices to become even smaller and lighter, and by 1996 small cell phones were the icon of power that big ones used to be. By 1999 the cell phone began transforming into the smartphone. Constant connectivity, with its advantages and disadvantages (the never-ending work day), was being realized as well. By 2002, 95% of cell phones used lithium-ion batteries.

By the early 2000s lithium-ion tech was a hope of EV entrepreneurs. In July of 2003, Silicon Valley entrepreneurs Martin Eberhard and Marc Tarpenning incorporated Tesla Motors. The idea of Tesla was to compete on performance rather than price, and to a considerable extent that is still true today with Elon Musk at the helm. Their first battery-pack idea consisted of 6,831 laptop cells. Musk joined as a major investor and chairman in 2004. The first launch of a Tesla vehicle (earlier than they would have preferred) came in July 2006. Of course, Toyota had been mass-producing its hybrid Prius (then powered by nickel-metal-hydride batteries) a few years before that. It first went on limited sale in 2001. I bought one the first week of January 2006, and 13 years and almost 500,000 miles later it is still on the road doing fine (although the battery was replaced a few years ago). The Tesla was 100% plug-in electric and built for power and luxury. In 2006 the movie Who Killed the Electric Car? premiered at Sundance. It painted GM as insincere in its quest for alt-fuel vehicles like hydrogen cars and EVs. At first, GM did not consider Toyota's Prius a threat but eventually had to because of its sales success. GM's Bob Lutz was in charge of coming up with GM's EV, the Chevy Volt. GM and others had built other unmarketable EV prototypes from the early 1900s through the 1970s and 1990s. From 1996 to 1999 GM had leased 800 EV1 cars but eventually collected and destroyed them in the desert, thus "killing" the electric car. They were neither affordable nor profitable, and GM did not want to lose more money maintaining them; it had lost about $1 billion on the EV1.
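That 6,831-cell figure is easy to sanity-check. Assuming typical mid-2000s laptop 18650 cells (roughly 3.7 V nominal and 2.1 Ah each; these per-cell figures are my assumptions, not the book's), the pack energy works out to the low-50s of kilowatt-hours, in line with what was reported for the original Roadster:

```python
# Rough energy of a pack built from 6,831 laptop-style 18650 cells.
n_cells = 6831
cell_voltage_v = 3.7    # nominal Li-ion cell voltage (assumed)
cell_capacity_ah = 2.1  # typical laptop 18650 capacity of the era (assumed)

pack_kwh = n_cells * cell_voltage_v * cell_capacity_ah / 1000.0
print(f"~{pack_kwh:.0f} kWh")  # ~53 kWh
```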

There was another battery safety issue that began in December 2005 with Sony laptop batteries catching fire. By August 2006, 4.1 million laptop batteries had been recalled by Dell. By October, Sony had recalled 10 million batteries worldwide. The explanation was that during the manufacture of a certain batch, some metal fragments got into the electrolyte and caused short circuiting.

By 2007 Eberhard was out as CEO of Tesla and Musk was in. GM was still making the case for the Chevy Volt: that electrification would be better than other alternative fuels and vehicles.

In the late 1990s and early 2000s it was discovered that adding carbon improved conductivity in lithium iron phosphate. This later became known as "doping." Doped lithium iron phosphate could make a battery cathode that would discharge completely and very quickly, a desired feature for EV batteries. There were disagreements about how the doping actually worked to increase conductivity: was it the doped metals that led to it, or carbon contamination from the jars used in the experiments? Later, as the early experiments were reproduced, it was found that doping did work, even though the idea of a highly electrically conductive phosphate challenged conventional wisdom. Then the effect was attributed to a coating of lithium phosphide contaminating the experiment. There were arguments between potential patent holders and companies that could be made or broken by what was determined to be the process. These became known as the "lithium wars." Litigation and patent disputes marred the battery industry as they had in the late 1800s. These disputes were still ongoing to some extent when this book was published, so I'm not sure what has happened since then. There are several stories of early developers of technologies, both in academia and industry, not getting any royalties when later developers did. John Goodenough did not profit from his work, but the marketers of his work did.

The development and successive unveilings of the Chevy Volt are recounted in detail. The first unveiling was in 2007, as a concept car. After considerable redesign it was unveiled again in September 2008. Meanwhile, GM and other car companies were struggling through the economic downturn amidst government bailouts and restructuring. The early Obama administration was very interested in EVs and battery research, and billions of stimulus dollars would be invested; one might think of it as a 'Green New Deal' of sorts. Chevy had spent billions getting the Volt ready, and even nowadays (2019) EVs are barely profitable for carmakers. The Volt's competition emerged in 2008, when Japan's Nissan announced plans to build the Nissan Leaf. In late 2010 both cars were being sold. The year before, Elon Musk had appeared on David Letterman, trashing the Volt. Of course, his competitor, the Tesla Roadster, was then priced at a hundred grand. At that price it sure would have to be better than the Volt at $40K.

Aided by the economic stimulus and venture capitalists, many new battery start-ups were popping up. Obama supported the battery industry and wanted more American companies to build batteries, as Japan and Korea had cornered a big segment of the market. Some companies designed in the U.S. and manufactured in China to take advantage of cheap labor. Big mergers and acquisitions in the battery business were becoming commonplace globally. An EV revolution would feed battery manufacturers indefinitely. Batteries are heavy, so shipping costs are an issue; that is why some carmakers strategized that battery manufacturing plants near car production plants were a good idea. As important as cost and weight, and related to both, is energy density: with too little energy density, the weight gets too high to make a battery pack that delivers range. Cost per kWh (kilowatt-hour) is another benchmark for batteries. Mass production of batteries will probably eventually reduce costs for consumers, as will better energy density. One hurdle for hydrogen fuel cells, a competing alternative fuel, is the cost of the platinum catalyst, which is unlikely to drop or be replaced by something else.
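The energy-density constraint can be sketched with simple arithmetic: pack weight is the range you want, times the car's consumption per mile, divided by the Wh/kg the cells deliver. All of the numbers below are illustrative assumptions (and packaging overhead is ignored), but they show why low-density chemistries like lead-acid never made a long-range EV practical:

```python
# Pack weight needed for a given range, consumption, and cell energy density.
def pack_weight_kg(range_miles, wh_per_mile, wh_per_kg):
    """Weight of a pack sized for `range_miles` at `wh_per_mile` consumption
    and `wh_per_kg` cell-level energy density (packaging overhead ignored)."""
    return range_miles * wh_per_mile / wh_per_kg

# A 200-mile car consuming 300 Wh/mile (assumed figures):
lead_acid = pack_weight_kg(200, 300, 35)   # ~35 Wh/kg (lead-acid ballpark)
li_ion    = pack_weight_kg(200, 300, 150)  # ~150 Wh/kg (Li-ion ballpark)
print(f"lead-acid: {lead_acid:.0f} kg, li-ion: {li_ion:.0f} kg")
```

The lead-acid pack comes out well over a and a half tonne, heavier than the rest of the car, while the lithium-ion pack is a workable 400 kg.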

Lithium reserves and mining are the next subject. Lithium is abundant and should not shoot up in price, although it is concentrated in certain areas and somewhat subject to the potential political upheavals of those areas. A 2006 paper by energy analyst William Tahil, "The Trouble with Lithium," suggested that lithium was another finite resource we would become dependent on, like oil. The border region between Chile and Bolivia, extending somewhat into Argentina, has the most concentrated lithium-bearing brines in the world. China also has a decent amount. There are other places as well, but the high concentrations of the "lithium triangle" in South America make it the world's lowest-cost lithium. The Salar de Atacama is the largest of the salt flats containing the concentrated lithium. At the time its reserves were counted at just under 30 million tons of lithium in the form of lithium carbonate. While lithium may be relatively abundant, other resources used in EVs (and wind turbines, military applications, and other modern conveniences) are not so abundant: rare earth minerals and other minerals such as cobalt. The rare earth minerals required for the magnets in electric-drive vehicles are mostly mined in China, although they exist elsewhere at higher development costs. Cobalt is extensively mined in Africa, sometimes by child miners and other exploited workers.

There are basically three companies that supply the majority of the world's lithium; they became known as the Oligopoly. One is a Chilean company, one a German company, and one was originally an American company. They all have some reserves in the lithium triangle. Bolivia has been more protective of its reserves, being suspicious of the capitalist impulses of would-be developers; however, it may have the largest reserves of all. While there are other supplies, the Oligopoly's is the cheapest and is likely to remain so because of the high concentration of lithium. I am not sure if things have changed much since 2011, when this book was published. The company SQM extracts 30% of the world's lithium supply from a single salt flat in Chile, the Salar de Atacama, in the high Andean desert. The US has some lithium reserves in the west, mostly in Nevada, but they would be costlier to develop than buying from South America. However, there is some value in having a domestic source, so development is likely. A company called Western Lithium was planning mines in Nevada at the time.

The author traveled to the South American salt flats to see the lithium mining operations for himself. The lithium carbonate is extracted via brine evaporation pools. First he goes to Bolivia, one of the poorest nations in South America, with a history of exploitation by mineral-extractive industries that has led to massive distrust of any proposed developments. Despite having the continent's largest natural gas reserves and possibly the world's largest lithium reserves, both remain undeveloped. Leader Evo Morales, an indigenous Bolivian, nationalized the natural gas industry and notoriously shunned the U.S. and Chile in a potential pipeline deal to bring Bolivian gas to Chile for export. He wanted any company leading lithium extraction in Bolivia to also start an EV industry there. Morales is known to rail about the evils of neoliberalism and transnational companies. But Bolivia's reserves are huge and it has time. With its attitude toward resources, it is virtually assured that Bolivian lithium will be the slowest and most complicated to develop. A French company was trying to develop the lithium but had been bogged down by Bolivian politics.

Potassium, magnesium, and boron are other valuable components of the salt flats, so the development of all evaporite minerals needs to be considered. In Bolivia, potassium is actually more valuable than lithium. At press time, the first lithium production plant was scheduled to come online in 2014; thus, Bolivia is years behind Chile and Argentina. Lithium availability and price are basically a direct function of how fast EV production takes off, and that has been pretty slow so far.

Lithium mining, or evaporite mining in general, involves evaporation pools of considerable areal extent. After the mineral salts evaporate out, they are raked into piles and trucked off for further processing. Lithium, however, is transported to processing as a concentrated brine. The lifeless Atacama Desert is the driest place on earth. The SQM lithium mining sites the author visited in Chile were far more developed and professional than those across the border in Bolivia. The Chilean operations are also lower in elevation and closer to the coast, which gives economic advantages. At the time, SQM supplied 31% of the world's lithium, yet that accounted for only 8% of the company's revenues. SQM held (at the time) a 50% market share of the world's specialty plant nutrition through its nitrogen-based fertilizers derived from caliche and saltpeter (potassium nitrate), along with iodine; it supplies 25% of the world's iodine. In the desert, however, the products are lithium, potassium, and boron. At the time SQM was producing 40,000 metric tons of lithium carbonate per year. The lithium concentration in the salt of the Salar de Atacama averages about 2,700 parts per million. Another advantage of the desert is that since there is essentially no rain (millimeters per decade), it makes an ideal environment for extracting evaporites; it has one of the highest evaporation rates on the planet, three times higher than in the flats of Bolivia. In Bolivia there is a wet season that floods the salt flats. Chilean lithium in the Atacama also has a very low magnesium-to-lithium ratio, which makes extraction easier and cheaper than in Bolivia. SQM also benefitted from its existing copper mining infrastructure in the area, and it thinks it can easily triple its extraction rates to meet rising demand.

The evaporites evaporate out in an orderly way: first, sodium chloride (halite) settles to the bottom. Then the brine is transferred to other pools, where potassium chloride (potash), a fertilizer, precipitates out. Next, a mineral called carnallite, a magnesium-potassium salt, settles out, followed by bischofite, another magnesium salt. The brines, now higher in lithium, are transferred to new pools, where they turn yellow due to the magnesium and lithium. Finally a 6% lithium brine is produced, which is yellow-green. It takes about 14 months to get this 6% lithium brine (from 0.2% lithium brine); any higher concentration and the lithium would begin to precipitate. The 6% lithium brine is trucked to processing plants, where it is processed into lithium carbonate, a white powder.
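The 0.2% to 6% figures imply a striking amount of evaporation. As a rough sketch, ignoring the mass of the salts that precipitate out along the way, the brine must shrink by the same factor the lithium concentration rises:

```python
# Concentrating brine from 0.2% to 6% lithium by evaporation (simplified:
# the mass of precipitated salts is ignored, so this slightly overstates
# the water that must be removed).
start_pct, end_pct = 0.2, 6.0
concentration_factor = end_pct / start_pct          # 30x
brine_mass_remaining = 1.0 / concentration_factor   # fraction of original brine left
water_removed = 1.0 - brine_mass_remaining

print(f"{concentration_factor:.0f}x concentration, "
      f"~{water_removed:.1%} of the brine evaporated")  # 30x, ~96.7%
```

Roughly 97% of the brine has to evaporate away over those 14 months, which is why the desert's extreme evaporation rate matters so much.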

The challenges for battery makers, and battery power in general, include making it cheaper and more efficient so that it can compete better with oil and gasoline, eventually without government subsidies. This basically means increasing energy density. Improvements have been steady but incremental and small. There is still considerable research going on looking for breakthroughs. Assembling new batteries for research involves glove boxes: sealed chambers filled with an inert gas such as argon or helium to prevent chemical reactions, with the work done through built-in gloves. Sealed-in cathodes make coin cells, which can be tested with a lithium anode and electrolyte. The cells are repeatedly charged and discharged so that their performance can be observed. How these things go may result in better zero-to-sixty times and highway passing power for EVs; in fact, EVs, once considered low power, can now have immense power. Research continues into the possibilities of a lithium-sulfur battery, which does not need carbon:

“If a lithium-sulfur battery could be made to work correctly, it could store hundreds of watt-hours per kilogram, enough to jump up into the realm of the several-hundred-mile electric car.”

However, since sulfur is a poor conductor, the limits have yet to be overcome. Nanoscale engineering is being used to address these issues. Lithium batteries that use silicon may be on the market now (they were set to be sold by Panasonic by 2013), giving a 30% increase in energy capacity. Silicon may be tweaked even further, and again nanoengineering (silicon nanowires) is being used for such tweaks. As always with lithium, there are safety concerns due to its reactivity. Lithium-air batteries are another area of research, with plain old air being part of the battery cell structure.
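The repeated charge/discharge testing of coin cells described a couple of paragraphs up can be sketched as a toy model of capacity fade. The 0.05%-per-cycle fade rate and the 2 mAh cell size are arbitrary illustrative assumptions, not measurements from the book:

```python
# Toy model of cycle testing: a cell loses a fixed fraction of its
# remaining capacity on every charge/discharge cycle.
def capacity_after_cycles(initial_mah, fade_per_cycle, cycles):
    return initial_mah * (1.0 - fade_per_cycle) ** cycles

c0 = 2.0  # mAh, a small coin cell (assumed)
after_500 = capacity_after_cycles(c0, 0.0005, 500)  # 0.05% fade per cycle (assumed)
retention = after_500 / c0

print(f"~{retention:.1%} capacity retained after 500 cycles")  # ~78%
```

Cycle-life curves like this are one of the main things researchers read off those long coin-cell tests: a chemistry that fades too fast per cycle is useless no matter how high its initial energy density.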

Many other materials and techniques are involved in battery research, and it is perhaps one of the most promising tech sectors, but even in recent years improvements have been small; "incremental" is the word usually used. Battery use for grid storage is happening all over the world. However, it still is not competitive with natural gas "peaker" plants, which can be started quickly to provide back-up power for intermittent wind and solar resources. Those peaker plants are forced to run inefficiently and so increase the costs of wind and solar considerably; they are one of the main "hidden costs" of wind and solar, often referred to as grid integration costs. More recently, gas peaker plants can utilize small battery assists for instantaneous start-up for grid balancing.

Other hurdles remain for mass adoption of electric vehicles. Charging infrastructure is inadequate in many areas, and charging times are an issue. These days one can have a level 2 charger (which still takes hours) installed at a home or business. Some states charge a road-use tax on EVs; if EVs were adopted en masse, the revenue from gasoline and diesel taxes would drop dramatically, and such revenue funds many things at federal, state, and local levels. The bottom line is that higher energy density solutions are needed, and the outlook is not great at the moment. Pure EVs still have a long way to go to compete with the ICE, but many products on the market, hybrids and plug-in hybrids, can be quite economical and advantageous. I just bought a new PHEV (plug-in hybrid) last week. Other advantages include quietness and needing to stop for fuel far less often due to longer overall range. The tax credit helps the economics a lot, as does EV power, which is currently much cheaper than gas or diesel power per mile. Another advantage is less wear and tear on the gasoline engine, fewer oil changes (none in a pure EV), and other parts that wear out more slowly than in a gas or diesel engine. Of course, depending on where one lives, EVs may be powered by whatever powers the local electrical grid, be it coal, natural gas, nuclear, or renewables. Aside from hardcore "greener-than-thou" enthusiasts who like to power their EVs with solar panels and can afford it, it is more likely that many EVs are powered mostly by fossil fuels.
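The claim that electric power is much cheaper per mile is easy to check with back-of-envelope numbers. The prices and efficiencies below are my own rough 2019-era U.S. assumptions, not figures from the book:

```python
# Fuel cost per mile: electric vs. gasoline drive.
def ev_cost_per_mile(price_per_kwh, wh_per_mile):
    return price_per_kwh * wh_per_mile / 1000.0

def gas_cost_per_mile(price_per_gallon, mpg):
    return price_per_gallon / mpg

ev  = ev_cost_per_mile(0.12, 300)  # $0.12/kWh, 300 Wh/mile (assumed) -> $0.036/mile
gas = gas_cost_per_mile(2.60, 30)  # $2.60/gal, 30 mpg (assumed)      -> ~$0.087/mile

print(f"EV: ${ev:.3f}/mile, gas: ${gas:.3f}/mile")
```

Under these assumptions, electricity comes in at well under half the per-mile fuel cost of gasoline, before counting the reduced maintenance.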

Great book for the historical perspective and the overall perspective of electrification.