Saturday, September 24, 2016

Under a Green Sky: Global Warming, the Mass Extinctions of the Past, and What They Can Tell Us About Our Future



Book Review: Under A Green Sky: Global Warming, the Mass Extinctions of the Past, and What They Can Tell Us About Our Future – by Peter Ward, Ph.D. (Harper Collins Publishers, 2007)

This is a fun foray into the scientific worlds of paleontology, paleoclimatology, geology, and mass extinctions. The book reads like an adventure story, or rather a detective story – piecing together geologic clues to determine what caused the mass extinctions of the past, what processes preceded and followed them, and how they compare to today’s global warming challenges. The author and his colleagues visit outcrops and sedimentary sequences all over the world, sometimes in isolated places and harsh environments.

The first locality visited is the Muller Canyon area of Nevada, where rocks from the end of the Triassic period are exposed. I have done some rock-hounding and geologic mapping in central Nevada in the much older rocks of the Basin and Range in the sagebrush desert areas. It’s a great place to look at rocks. At the time geologists were looking for evidence of asteroid impact at the end of the Triassic, since such evidence had been found at the end of the Cretaceous in the mass extinction that wiped out the dinosaurs. No convincing evidence of an impact at the end of the Triassic has been found there, only a loss of many fossil species and a thick siltstone nearly bereft of fossils. If it wasn’t asteroid impact, he asks, was it climate change? Eventually he builds up a model, a case, that it was indeed fast climate change, with rapid global warming and strong positive feedbacks that led to massive amounts of CO2, methane, and eventually other toxic gases like H2S bubbling out of the ocean and accumulating in the atmosphere, raising temperatures and making it hard for many species to survive. About 60% of all species on earth were lost in the mass extinction event at the end of the Triassic.

Next he takes us to the summer of 1982 in the Basque region, in the Pyrenees Mountains between France and Spain. Here he meets up with another geologist, Jost Wiedmann, a biostratigrapher cataloging, correlating, and dating fossil assemblages throughout the world. Wiedmann held that the extinction of ammonites in the fossil record near the K-T (Cretaceous-Tertiary) boundary was gradual, playing out over about 20 million years, rather than immediate. Ward, with a fresh Ph.D., was interested in why the ammonite cephalopods went extinct at the K-T event after a 360-million-year biological success story while their cousins, the chambered nautiluses, survived. He also studied wild nautilus by diving in the Pacific off the coasts of New Caledonia and Fiji.

In 1980, Luis and Walter Alvarez, a father-and-son team from the University of California, Berkeley, published a paper strongly advocating that the K-T extinction event was the result of an asteroid impact. Catastrophic environmental changes, particularly a long-lasting “blackout” from massive amounts of particulate matter in the air, they proposed, were the mechanism of the mass extinction. Ward and Wiedmann found no ammonites within 15 meters of the proposed impact layer.

Mass extinctions were recognized in the fossil record in the 19th century but were attributed to “catastrophism,” typically worldwide floods like the biblical flood. Such ideas were tossed out as the science of paleontology developed further. The two largest mass extinctions divide the stratigraphic record into three main eras: the Paleozoic, the Mesozoic, and the Cenozoic. There are five main mass extinction events noted in the geologic record – from oldest to youngest: 1) end of the Ordovician, 2) Late Devonian, 3) end of the Permian (Permian-Triassic), 4) end of the Triassic (Triassic-Jurassic), and 5) end of the Cretaceous (Cretaceous-Tertiary, or K-T).

Ward talks about a split among vertebrate and invertebrate paleontologists in the 1970’s in which views on mass extinction were a factor: the vertebrate paleontologists did not think the mass extinctions had occurred, only that parts of the fossil record were missing. Evidence is now much stronger that the mass extinctions did indeed occur, and there is little dissent from that view. Two types of mass extinction were proposed: slow and gradual ones due to climate change, changing sea levels, disease, and predation; and rapid catastrophic ones characterized by the sudden disappearance of a large number of fossil biota in the record. The slow extinctions could not really be tested, only theorized. When asteroid impact became seen as a plausible mechanism for extinction there was at least something to look for – iridium and altered (“shocked”) quartz, which are associated with impacts.

The Alvarezes’ paper began a new paradigm, or revolution, in thinking about mass extinctions – that they weren’t slow and gradual and due to climate change, but fast and due to asteroid impact and its after-effects, which include climate change. He puts this in the context of Thomas Kuhn’s “structure of scientific revolutions.” Much evidence for a K-T boundary impact was accumulated: iridium, “shocked quartz,” spherules, and carbon isotope ratio changes which indicated a rapid loss of plant life, presumably due to fire. However, some other geologists had another explanation: volcanism involving “flood basalts” and associated ash and lava flows. The impact vs. volcanism battle went on for over a decade. Flood basalts correlate strongly with all the mass extinctions and even minor extinctions. Iridium, shocked quartz, and spherules could also be associated with volcanism. Ward suggests that the geochemical evidence for impact was strong because they found what they were looking for in the impact layer, but the fossil evidence required looking before and after in different places where those intervals were preserved.

He tells of an odd experience stalking the Cretaceous-Tertiary boundary in France at a beachside outcrop where a large group of tanned, naked, frolicking gay men lounged while he hammered rocks in geologist garb! Here he finds 12 species of ammonites in abundance right up to the boundary, whereas in other places they had seemed to die off gradually – here they did not disappear until the actual boundary layer, which is further evidence of the asteroid impact. Ward showed that the impact killed off not just the organisms that would become microfossils but macrofossils as well. He presents his findings at a conference where Jost Wiedmann was in attendance, Wiedmann having earlier asserted that impact was not the cause and that the extinction of the ammonites came slowly. Wiedmann listened to his talk, then left and never spoke to Ward again – dying a few years later, as Ward explains, his life’s work disproved by an apprentice. Science can indeed be a sad world. By the end of the 1980’s the evidence for impact as the cause of the K-T extinction was very strong. The 120-mile-wide impact crater was found (in the Yucatan peninsula of Mexico) and both the geochemical and paleontological evidence supported a very rapid mass extinction. The problem, notes Ward, is that now all the other mass extinctions were assumed to have been caused by impact, as the new “paradigm” took hold.

Ward’s further studies in the French Pyrenees examined the quick (geologically speaking) recovery of life in the Paleocene epoch of the Tertiary Period, the first several million years after the K-T extinction event. The new fossils are of groups still around today and indicate the area was warmer, as they were tropical forms. Oxygen isotope ratios preserved in shell material provide a very good record of the temperatures at which the shells were made. Analysis of oxygen isotope ratios from bottom-dwelling (benthic) organisms from the Antarctic showed that the bottom water there had anomalously warmed over a short period of time. The warmer water in the polar high latitudes (both Arctic and Antarctic) was also found to be more depleted of oxygen, which caused an extinction of benthic organisms there at the Paleocene-Eocene boundary, roughly ten million years after the K-T asteroid impact. The benthic organisms had not been affected directly by the impact. The suggestion was that the oceanic conveyor belt, which transfers heat to and from depth in the ocean, was somehow shut down – presumably by the warm surface temperatures. This became known as the Paleocene thermal event. The event was confirmed to have occurred on land as well by comparing patterns of carbon and oxygen isotope ratios in well-measured fossil assemblage sections in Wyoming. Here many exotic forms of mammals were found, many now extinct. The Paleocene thermal event is considered a minor extinction event. More evidence was sought in aeolian (wind-blown) deposits – basically dust that made it to the ocean floor. The amount of dust dropped sharply at the event horizon, suggesting low-wind conditions – typically a result of prolonged arid weather. Also found was volcanic ash, and indeed a great uptick in volcanic activity 58-56 million years ago. Estimates of the seawater temperature difference from equator to poles (now about 45 deg C) shifted from 17 deg C to a mere 6 deg C at the event, suggesting an unusually homogeneous ocean temperature. The basic mechanism of the Paleocene thermal event is thought to have been volcanoes spewing carbon dioxide, with the CO2 heating up the surface of the planet and later the ocean, shutting down the deep-water circulation conveyor belt. The event ended after the volcanism subsided and, later, when the CO2 levels finally dropped. By 2000, other minor extinctions began to show similarities to the Paleocene event.
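
As a side note (not from the book), here is a minimal sketch of how such oxygen isotope paleothermometry works, using an Epstein-type carbonate calibration; the coefficients and example values are generic approximations I am assuming for illustration, not the specific calibration used in the studies Ward describes.

```python
# Minimal sketch of carbonate oxygen-isotope paleothermometry.
# Coefficients are the commonly cited approximate Epstein-type values;
# treat them and the example inputs as illustrative assumptions.

def carbonate_paleotemperature(delta_c: float, delta_w: float = 0.0) -> float:
    """Estimate water temperature (deg C) from the d18O of shell carbonate
    (delta_c, per mil) and of the seawater it grew in (delta_w)."""
    d = delta_c - delta_w
    return 16.5 - 4.3 * d + 0.14 * d ** 2

# A shell enriched in 18O (more positive d18O) records colder water.
for d18o_shell in (-2.0, 0.0, 2.0):
    temp = carbonate_paleotemperature(d18o_shell)
    print(f"d18O = {d18o_shell:+.1f} per mil -> ~{temp:.1f} C")
```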

He ends up in the southern Tunisian desert in 2000 at one of the best exposures of the K-T boundary. This time they took small cores with the goal of working out the magnetic stratigraphy, as Alvarez and colleagues had done in other sections. Here there is a six-foot layer of black rock in an otherwise 100-foot-thick cliff of white limestone. This black layer can also be found in Italy, England, Wyoming, Colorado, California, offshore British Columbia, and Alaska. It represents an abrupt change to anoxic (oxygen-depleted) water. This extinction and others were now firmly linked to warming oceans.

Next he explores the Permian mass extinction – the “mother of all extinctions,” also known as the Great Dying – along the Caledon River in South Africa. After ten years of studying the K-T boundary, Ward was now hunting terrestrial fossils of land animals near the Permian-Triassic boundary. The P-T mass extinction resulted in the loss of up to 90% of species on earth. He found one of the best outcrop sections of the transition and noted the difference between the K-T and P-T boundaries’ fossil losses: the P-T losses were more gradual and seemed to be the result of many small events and one big one, rather than one abrupt big one as in the K-T asteroid impact. No asteroid impact was implicated here, even though at the time he was looking for one. The P-T boundary was associated with global warming, an anoxic ocean, and volcanic activity via flood basalts from the massive Siberian Traps – a source of CO2 to heat everything up. However, the impact advocates also found what they thought was evidence – so-called “bucky balls,” or “fullerenes,” geodesic-dome-shaped carbon molecules named after Buckminster Fuller, that were thought to be of extraterrestrial origin – thus suggesting impact. However, no iridium was found. In 2003, NASA scientists reported that they might have found an impact crater that caused the P-T extinction. In 2006, scientists at Ohio State University reported a large impact crater deep under Antarctic ice, detected with gravity anomaly measurements, but it could not be seen or dated. Ideas of a comet impact initiating the volcanism also came about, but these ideas were all vague and difficult to confirm. Eminent paleontologists and geochemists got together to discuss the ideas and re-examine the evidence. They later found that the bucky balls did not come from the Permian but from much younger rocks in the Triassic, and so did not correlate with the loss of species.

Another aspect of P-T boundary time was increased atmospheric methane, a greenhouse gas which would have heated things up. Many plant species went extinct, with subsequent increases in sedimentation rates. Tropical species appeared where there had previously been temperate species. Increased volcanism, repeated changes in oceanic circulation, and presumed pulses of methane hydrate melting are also in evidence. Impact as a possible cause for the Permian extinction has been rejected by the majority of scientists.

 A group led by Harvard paleobotanist Andrew Knoll beginning in 1996 proposed that the Permian extinction was similar to the Precambrian extinction of 600 million years ago. Similarities were a stratified ocean with oxygen near the surface but depleted at depth and large amounts of organic material as bottom sediments. When this changed, possibly due to plate tectonics, the deep ocean carbon began to be liberated to surface water and then to the atmosphere through large bubbles. The P-T boundary isotope changes showed a series of perturbations rather than a single one as the K-T had shown. This suggested multiple events over several million years.

In 2001 Ward ends up in the Queen Charlotte Islands off the coast of British Columbia to study well-exposed sections of the Triassic-Jurassic boundary, the T-J extinction being responsible for the loss of about half of earth’s species. They wanted auger cores taken stratigraphically through the boundary to compare isotope signatures with the other extinctions. They had done so in 1996 and found a single main event, but had not gotten very far into the Jurassic section; now they hoped to see whether there were multiple perturbations, as there had been in the Permian. That is indeed what they found. However, iridium had recently been found at several localities in some of the best T-J boundary exposures, in the Newark Basin of New Jersey and the Connecticut River valley, which suggested impact. However, the amount of iridium was quite small compared to the K-T iridium. The proposed impact crater in Quebec was later dated to be about 15 million years too old to have caused the event. The impact from that massive crater apparently did not cause any significant extinctions – which suggests that the effects of asteroid impact may have been overestimated.

In 2004 he returns to the Queen Charlotte Islands to look at older rocks on the more distant islands to see whether the extinction was single or multiple, gradual or abrupt. He collects ammonites from beds beginning about 12 million years before the extinction and notes a classic slow, gradual decrease in the number of ammonite and other fossil species. He notes that while his early career was spent showing that what was once thought to be a gradual extinction at the K-T boundary was actually abrupt, now he was showing that what many presumed to be a sudden extinction at the T-J boundary was actually a slow, gradual one. The progression seemed to be that the ammonites first lost variety as some species died out, then a new species of clam, Monotis, appeared in abundance, only to be reduced as the extinction got worse. Monotis may have been adapted to lower-oxygen sea bottoms. Better dating, anchored to a volcanic ash bed, revealed that the Rhaetian stage of the late Triassic, with low-oxygen seas largely devoid of life, lasted up to 11 million years. The Rhaetian followed the Norian stage, and it was at the end of the Rhaetian that the rest of the bivalves and ammonites died out, so Ward sees this as two extinctions: one quite gradual, culminating at the end of the Norian, and one more abrupt but still gradual, ending at the close of the Rhaetian. Subsequent fossil work in other places showed extinction pulses occurring into the Jurassic as well. To sum up, it was now thought that most extinctions were gradual and only one, the K-T, was definitively associated with impact, the others being logically ruled out. Thus the ‘extinctions were caused by asteroids’ paradigm was given up, except for the K-T.

The next chapter finds Ward diving on a pristine coral reef near Palau in tropical Pacific Micronesia, back in 1983. Ward was a long-experienced diver. He had lost a fellow diver in the past who passed out during a deep dive, and Ward got a serious case of the bends attempting to save his life by bringing him up fast. His friend died, and Ward suffered chronic bodily pain and a permanent limp from his own injuries. Here they were studying the nautiluses, cephalopod cousins of the ammonites. The ammonites survived many extinctions but were wiped out at the K-T boundary at the end of the Cretaceous. They tagged the nautiloids and found that they dived deep during the day and came closer to the surface at night. That may have been why they survived the K-T while the ammonites, which stayed in shallow water, did not. It seems that while the Permian, Paleocene, and Tertiary extinctions wiped out bottom dwellers, the K-T extinction wiped out the surface dwellers.

It was still unclear exactly how a slow, gradual change of climate could have killed so many species several times in the past. New ideas were forming. Microbiologists studying anoxic lakes found some new fossils – chemical fossils, known as biomarkers. These organisms did not leave behind skeletal remains but chemical remains in the lake sediment. Toxic hydrogen sulfide gas (H2S) was one chemical marker, and calculations by one author, Kump, suggested that the amount of H2S was significant in the Permian – 2000 times that produced by volcanoes. The Kump hypothesis also noted that the H2S would have destroyed the ozone layer, and evidence from Greenland of fossils damaged by ultraviolet light suggests this may have occurred. Destruction of the ozone layer would mean a decrease in phytoplankton, the base of the food chain. Another hypothesis suggests the ozone layer could have been destroyed by particles from a supernova. With increased CO2 and methane bubbling up from the sea in a hot Permian, the H2S would have been more toxic, as it is in a warmer environment. Evidence of H2S-producing microbes in the Permian was found throughout the world. Since sea level was low at the time, they also looked for evidence of eroding phosphorus, which would have been a nutrient accelerating microbial growth.

Next he ends up near his hometown, Seattle, looking at fossils in non-bedded limestones deposited in a “mixed” ocean with little oxygen variation, cold at the poles and warm in the tropics – as the ocean has been since the Oligocene, about 30 million years ago. Older rocks are black and bedded, deposited on an anoxic ocean bottom. Pyrite is common in these rocks. Anoxic bottoms are recorded by black shales, present since 3.5 billion years ago and sometimes containing very well-preserved fossils of life forms that fell into the sediment with their forms preserved. The famous Burgess Shale is one example. There are two types of stratified oceans, he notes: one with low-oxygen bottoms, which supports some life, mostly microbial; and one entirely devoid of oxygen, which supports only microbes that utilize sulfur for food and give off H2S as a waste product. The latter is known as a Canfield ocean. Canfield oceans were toxic to life. They are thought to have existed in the Precambrian, inhibiting the development of life. Eukaryotes require microbes to fix nitrogen, a needed nutrient, for them; the sulfur-metabolizing microbes do not fix nitrogen and instead inhibit its fixation. Chemical biomarkers also suggest that the T-J extinction is associated with pulses of short-lived Canfield ocean conditions. The oceanic circulation, the conveyor belt, may be the key to the changing ocean states. There is strong evidence that the conveyor belt shut down (or shifted) in the Paleocene, and now it appears that this happened in the Permian as well. Of course, the continents were in different places in these past times due to plate tectonics, so the actual circulation patterns were different than today, but a similar mechanism is still likely to have been in play. The shift in ocean circulation in the Permian is thought to have brought anoxic water to the deep ocean, which allowed the H2S-producing microbes to thrive, and to have driven upwelling of poisonous bottom waters. If the Paleocene had H2S-producing microbes, they were at far lower concentrations than in the Permian. He compares extinctions cataloged in Anthony Hallam’s and Paul Wignall’s 1997 book, Mass Extinctions and Their Aftermath, which was written when impact was still thought to be associated with most or all extinctions. Even so, their data revealed that of the 14 mass extinctions cataloged, 12 were associated with poorly oxygenated oceans as a major cause. The three “kill mechanisms” are now thought to be heat, low oxygen, and perhaps H2S.

Next he ends up in Namibia in southern Africa, where the scorching hot Kalahari Desert is flanked by a foggy, very cold Atlantic Ocean. Models of atmospheric CO2 and O2 concentrations of the past can be made using changes in sediment burial rates. The main modeling setups for such paleoclimatological studies are GEOCARB for CO2 and GEOCARBSULF for oxygen. Modeling indicates that CO2 levels were very high in the Precambrian and fell through the Paleozoic – from about 5,000 ppm down to about 300 ppm by the early Permian – then rose back up to about 3,000 ppm near the Permian extinction. Modeling also indicates that all of the mass extinctions of the past, with the exception of the impact-caused K-T extinction, are associated with maximum or ‘rising toward maximum’ atmospheric CO2 concentrations. Thus rapid rises in CO2 correlate strongly with mass extinctions. This implicates our anthropogenic CO2 increase as a potential cause as well – if it were to rise ever higher, though likely far beyond current projections. Another way to estimate past CO2 concentrations is through fossil plant leaves: readings of stomatal density on fossil leaves confirmed the modeled estimates.
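
To illustrate the kind of comparison being described (again, not from the book), here is a toy sketch that checks whether modeled CO2 was rising toward a maximum at given extinction ages; the CO2 values are made-up placeholders, not GEOCARB output.

```python
# Toy comparison of a modeled CO2 history against extinction ages.
# All CO2 numbers are hypothetical, for illustration only.

co2_history = {  # age (millions of years ago) -> modeled CO2 (ppm), hypothetical
    260: 400, 255: 900, 252: 3000, 250: 2800,
    205: 1200, 202: 2000, 201: 2200, 199: 2100,
}

extinction_ages = [252, 201]  # end-Permian, end-Triassic (approximate ages)

ages = sorted(co2_history, reverse=True)  # oldest first
for event in extinction_ages:
    older = [a for a in ages if a > event]
    if older:
        before = co2_history[older[-1]]   # nearest older sample
        at_event = co2_history[event]
        trend = "rising" if at_event > before else "falling"
        print(f"{event} Ma: CO2 ~{at_event} ppm, {trend} relative to {older[-1]} Ma")
```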

Ward summarizes the sequence of events thought to have taken place in these mass extinctions: 1) the world warms due to an increase in greenhouse gases, initially from volcanoes; 2) the ocean circulation system is disrupted or shut down; 3) the deep ocean becomes de-oxygenated, then shallow water suffers the same fate; 4) deoxygenated shallow bottoms with some light penetration allow green sulfur bacteria to grow and produce H2S, which rises into the atmosphere and breaks down the ozone layer, with the resulting UV light killing off phytoplankton. The high heat and H2S also cause mass extinction on land. He notes significant variability in each extinction and calls the model the ‘conveyor disruption hypothesis.’ He envisions seas full of gelatinous bacterial mats – stromatolites – which would become food for terrestrial herbivores as (very slow and weak) waves brought them in. The ocean would look serene and waveless and be purple due to floating bacteria. Thick bubbles of various sizes filled with poisonous H2S would belch from the sea, giving the sky a green tint – thus the book’s title. The bottom line is perhaps the realization that it is mainly increased atmospheric CO2 and other greenhouse gases like methane that serve as the trigger for mass extinctions.

Next he talks about bridging all the varying scientific disciplines involved in modern climatology and paleoclimatology. For much of the book he also addresses motivations for reward and prestige among scientists and how that can affect their work. 

He goes into the dating work of Minze Stuiver of the Quaternary Research Center, who dated the Greenland ice cores year-by-year back some 200,000 years. Using mass spectrometers they were able to closely approximate temperatures and CO2 levels. What they found is that the current climate on Earth is quite aberrant even for recent geological history. Temperature changes of up to 18 deg F over a few decades were more common in the past. Before 10,000 years ago it is thought that storms the size of major hurricanes occurred several times a year. At about 10,000 years ago a period of unprecedented calm apparently set in. Humans settled and mastered agriculture during this new period of calm. The records of the ice cores match quite well the planetary and orbital cycles proposed by Milankovitch, with those cycles being the triggers for glacial and interglacial periods. One of the unknowns that Ward emphasizes is how much CO2 and global warming it would take to alter the oceanic circulation system. Wally Broecker thinks it could slow down but is unlikely to shut down, even at, say, 1000 ppm CO2. It may be changing now. Fresh water from melting northern ice could be a prime trigger for changing the conveyor belt. Ward goes through shorter-period climate cycles like the Dansgaard-Oeschger cycles and the episodes of floating, melting ice dropping the cobbles it carried – now called Heinrich events – seen in ocean floor sediments. For 90% of the last 100,000 years the earth has been in an ice age, so these are anomalous times indeed. Before 8000 years ago the conveyor belt is thought to have been less stable. The current stable period is a precarious stability, scientists suggest. Biodiversity strongly correlates with this stability. The implication is that the “on-off” conveyor belt tips the earth’s climate into one of two stable states: the cold one that takes up 90% of the last 100,000 years and the warm one we are in now.

He next visits Mauna Loa in Hawaii, where atmospheric CO2 has been dutifully measured since the 1950’s, as part of a Canadian TV documentary about climate change. In addressing the climate history of the last 8000 years, Ward gives the data from William Ruddiman which show that humans have been affecting CO2 and methane levels since the advent of agriculture, through forest burning to clear land, flooding for rice paddies (a major source of methane), and livestock agriculture (another major methane source). The CO2 range of the last 200,000 years has been between 180 and 280 ppm, with most of that time spent at the low end of the range since most of it was ice age time. At the beginning of the Industrial Age CO2 levels were at 280 ppm and now they are above 400 ppm, a level unprecedented in the last 200,000 years. CO2 can also directly cause limited extinctions of certain species by increasing ocean acidity, and this is happening now, as seen in struggling coral reefs. The changes in ocean pH will likely persist for thousands of years, he notes, thus changing life patterns. While there may have been times of high ocean acidity in the past, he suggests that acidity has not been as high as it is expected to get soon for quite some time – perhaps 100 million years – since certain species were better adapted to higher acidity in the past; however, the abrupt changes now due to anthropogenic CO2 are too fast for many species to evolve adaptations. The present rate of rise in CO2 seems to be faster than at any period in the past, and global average temperatures have not been this warm since the Eocene epoch, roughly 56-34 million years ago, which followed a mass extinction.

Next he delves into the Eocene epoch, looking at fossils along the Pacific coast of North America. He notes that this hot time was a time of very high sea levels compared to today. This area was tropical during the Eocene, as evidenced by abundant palm and crocodile fossils found as far north as the Arctic Circle. He explores the climatic features of the Eocene and compares them to what a 1000 ppm atmospheric CO2 world might be like if we humans create it. First he notes that the tropics are the source of many of the human diseases that affect us. He suggests that tropical peoples in particular have developed coping mechanisms for the heat in the form of various local drugs. I am not so sure they have a monopoly on that. He mentions widespread use of betel nut, kava root, and khat. Of course, the same could be said for alcohol and cannabis in the temperate climes. The prevalence of mosquitoes makes malaria and other diseases more likely as well. He goes through all the typical scenarios of global warming effects: melting ice, rising sea levels, changing weather patterns, submerged cities, storm surges, changes in habitat patterns, etc. He notes that the temperature rise in the Arctic has been 20 times that of other places on earth and is quite worrying to scientists. Are effects underestimated? Overestimated? No one knows for sure, but some attribute a significant number of deaths now to global warming in the form of malaria and malnutrition. He invokes the view, popular at the time of publication, that hurricanes will worsen in both magnitude and frequency. However, that has not clearly occurred and may end up being a misattributed global warming effect; the increase in hurricanes from 1990-2004 may be part of a natural cycle. Heat waves are another effect that has increased. Suggestions of war and famine are speculative. Cereal grain crops may not yield well in a more tropical climate.

Next he discusses climate and the possibility of re-entering an Eocene-like epoch with famed University of Washington climate scientist David Battisti: windless, tropical conditions in some temperate areas, with super-hurricanes pounding the equatorial tropics. The conveyor might change into a form where warm water from the tropics sinks much further south in the Atlantic, which would freeze Western Europe, perhaps giving some the false impression of an impending ice age. Then, if the sinking low-salinity fresh water did not sink deep enough, a situation of lower oxygen could develop at ocean depths, resulting in the next link in the chain of mass extinctions that have occurred in the past.

He goes through some more speculative scenarios at different CO2 levels, but it really is hard to know how things will play out, and significant uncertainties remain.

Great book overall by a geologist who wears the scars of his work and his craft through an adventurous but often lonely existence in far off corners of the world as well as in the academic realms.
     
           

Wednesday, September 14, 2016

Smart Power: Climate Change, the Smart Grid, & the Future of Electric Utilities



Book Review: Smart Power: Climate Change, the Smart Grid, & the Future of Electric Utilities: Anniversary Edition – by Peter Fox-Penner (Island Press, 2010, 2014)

This is an excellent book that delves into the changing utility models taking shape with new sources of distributed energy, often renewable, and smart grid technologies that can make the electric grid more efficient and more responsive, yet still keep it stable and reliable.

The new edition begins with four long forewords, the first by retired Duke Energy CEO and chairman Jim Rogers. He credits the “pressures of climate change” and the “capabilities of emerging technologies” with forcing the utility industry to begin to change its outdated business models and regulatory structures. Trends that have been occurring will continue: less coal, more natural gas, probably not much change in nuclear, more wind, more solar, more distributed resources, more battery storage, and more renewables-based peak load shifting. Rogers echoes Dr. Fox-Penner’s two main conclusions: changes are needed in utility business models and in utility regulations.

The next foreword is by Daniel Esty, a professor of environmental law and policy at Yale. He notes the unexpected rise of shale gas and oil through fracking and horizontal drilling, which brought vast new and abundant gas supply onto the market, quickly making gas cheaper than oil and coal, which had not been the case before. Now gas is the cheapest and lowest-emissions fossil fuel, and this will likely remain the case for at least the next decade or two, he notes. He mentions state incentives, including making private capital available for solar startups and standardizing energy service company (ESCO) contracts. His state, Connecticut, has implemented these programs to make renewable energy cheaper. It has also begun the buildout of a system of microgrids powered with small gas turbines, fuel cells, and biogas from anaerobic digesters. These, along with most renewable sources (especially those with battery backup), are forms of distributed generation capable of off-grid “islanding.” He also notes the need for a change in utility business models, with other sources of revenue besides selling energy. One is selling energy services, where energy efficiency is commodified. Another is managing demand fluctuation (demand response services). Others are managing microgrids and providing backup power for distributed resources.

The third foreword is by Daniel Dobbeni, a principal in several companies. He notes that utilities and power sources are affected by events and policies (Fukushima in Japan, carbon markets and laws in Europe, etc.). He notes that the effects of renewables on the grid (the need for backup/peaking generation and for equipment to balance supply and demand) have been largely ignored until now. He notes the potential for stranded assets in the form of transmission lines that could come about as new technologies replace them. He notes that the focus of energy policy in Europe shifted from competitiveness to sustainability, culminating with the 2007 renewable energy directive, with new infrastructure from 2013 onward focused on connecting Europe as a whole. Europe has adopted a common market model for the power industry, unlike other areas such as the U.S. Power ‘flexibility,’ such as demand-side management, is required in increasingly distributed energy systems. The European model is to be watched and studied so better models can follow it.

The final foreword is by Lyndon Rive, co-founder and chief executive of SolarCity. He focuses on the potential for economic catastrophe due to climate change. Making the grid cleaner and more efficient should be vastly accelerated, he notes (as any solar executive would). His focus is on doing away with the old utility business models faster to save money, since the transition to renewables is inevitable and doing otherwise would be wasteful. He sees midday “overgeneration” from solar more as an opportunity than as a problem – one that might be used for water desalination and electrification of transportation. He says the old U.S. utility monopoly model of “sole source/cost plus,” where competition is suppressed and profits are guaranteed, is economically undesirable and unsustainable. NASA migrated away from such a model quite successfully, and so should U.S. utilities, he says.

Fox-Penner starts out with an account of the first electric revolution. Electric machines and electromagnets in foundries led to drastic increases in work output in the late 1800’s. He talks about the industry developed by Samuel Insull, one of Thomas Edison’s staff, who became the CEO of Commonwealth Edison. Power was aggregated into what we know as the “grid” and sold to take advantage of economies of scale. The model was a monopoly in which the more power consumed, the cheaper the cost per unit for the consumer. Insull also argued that regulation of the industry would provide both stability and protections. It was Insull’s vision that shaped the investor-owned utility (IOU) model that has been the standard in the U.S. until recently.

The second electric revolution is in progress. Two of the main motives are the need to reduce greenhouse gas emissions and the need for energy security. Renewable energy, energy efficiency, and replacing high-carbon fossil fuels (coal) with low-carbon fossil fuels (gas) are the three main ways of reducing carbon emissions in the electric sector. Securing domestic energy supply (oil, gas, coal, nuclear, and renewables) helps reduce our dependence on foreign suppliers, particularly OPEC countries and oligarchies like Russia that use energy as political leverage. Electric transport via EVs can help with both. Such needed changes, says Fox-Penner, will be both challenging and expensive for utilities and their ratepayers. Implementation of the so-called “smart grid,” with software-embedded technologies to optimize efficiency, balance local grids, and buy and sell energy at different rates, will happen more and more over the next few decades. Less coal, more gas, and more renewables will be on the grid. More battery and other energy storage projects will be added. Grid management will become more complex, yet more efficient overall. Fox-Penner mentions three objectives: “creating a decentralized control paradigm, re-tooling the system for low-carbon supplies, and finding a business model that promotes much more efficiency.” The consumption model where utilities profit strictly from sales of power is not sustainable. Electric power demand in the U.S. has been flat for a while and is expected to stay that way, due to efficiency increases and reduction of wasteful usage on both sides of the meter. One question is how investor-owned utilities can remain viable – can remain profitable – while meeting all these challenges. New business models are required. Some utilities have been and will remain resistant, while others have adopted or are adopting new economic paradigms.

Electric deregulation is the next issue. In 1990 the consensus was that the electric utility industry should follow airlines, natural gas suppliers, telephone companies, and trucking firms in allowing markets rather than regulators to set prices. Deregulation was oversold, says Fox-Penner, and executed poorly, resulting in the California energy crisis. Early problems have been fixed with better oversight and better market designs but there are still problems. Twenty-three states originally implemented deregulation but eight of those have suspended it or scaled it back. Analysis indicates some regulation is required for successful power markets. 

The three vertical stages of power production from an engineering perspective are: generation, transmission, and distribution. When a company owns all three of these aspects it is said to be vertically integrated. Only some of the U.S. power industry is vertically integrated. Wholesale, or “bulk” power trading between generators and distributors is subject to pricing set by the Federal Energy Regulatory Commission (FERC). This includes control of high-voltage transmission systems. The authority to build transmission systems resides in the individual states through their public service commissions (PSCs). The PSCs set rates for transporting power over these systems. Residential, small-commercial, and large-commercial pricing is at different rates. There are also generators owned by federal, state, and local governments which are subject to less wholesale and transmission regulation as are government and customer owned (co-ops) distribution systems. All the government and customer owned utilities are thought to be less likely to charge unfair prices so they are less regulated. In a deregulated market it is the market itself that sets prices for the wholesale market – transmission and distribution are still regulated by FERC or PSCs.  In 1994 FERC was permitted to create an “open access” system in which any generator could use anyone else’s transmission system (first-come, first-served) to deliver power to a state-regulated system. FERC also began allowing some generators to sell wholesale power to other utilities, not end users, at deregulated rates. In order to develop fair competition in all power markets there are three requirements: 1) enough competing power generators (deconcentration), 2) a transmission system large enough to accommodate all generators, and 3) “open access” rules. If markets can allow buyers to react to price changes then they will be more functional. Power prices tend to vary hour to hour. Unfortunately, there are very few places where all these conditions occur currently but some regional power markets are testing new dynamic pricing, or ‘time-of-use’ models.

Deregulation in the states that adopted it was framed as a consumer choice, so that consumers could choose a provider or choose not to participate in the deregulated market. This was likely a reaction to the earlier deregulation of telephone service, where consumers were forced to choose a provider. Since deregulated prices were expected to undercut the non-deregulated (provider of last resort, or POLR) prices, the POLR prices were set 10% lower and frozen for five years. However, in many areas deregulated providers could not compete with POLR rates. When fuel costs rose, deregulated providers had to raise their rates but POLRs could not, which made POLRs cheaper than deregulated sellers – the opposite of what was intended. This was not the case everywhere, and some deregulated providers were/are able to offer lower prices. In May of 2000 hourly electric prices in California began spiking. Energy was in short supply, especially natural gas. This caused intentional blackouts and rolling blackouts for consumers. Prices up to 10 times the previous averages continued in the Northwest. FERC stepped in to provide caps on prices and other measures. By July 2001 the crisis was over. It cost Californians $33 billion more than the previous year’s cost! This gave deregulation a bad reputation. By 2006-2008, when the POLR 10%-off rates expired, prices went up considerably, further undermining the claims of customer advantage for a deregulated power industry. Fox-Penner suggests that the relative failures and turbulent history of deregulation in the power industry have galvanized resistance to further “changes,” some of which are necessary to adapt to changing energy sources and business paradigms. Deregulation was adopted by only some states, so it adds further to the already significant variability of generation owner-types and regulation models. The U.S. power industry as a result is quite heterogeneous, which makes adopting new regulations and business models and getting new projects going difficult, especially as smart grid technology advances.

Next he goes through some early (2005) dynamic pricing/time-of-use pricing experiments where energy consumers could alter their daily power use to take advantage of the best prices through software-embedded smart grid technology. The results were excellent at keeping power use down, with the possible application of decreasing the demand spikes that require extra generation, usually provided by idling, ready power plants (peaking plants) built for demand peaks.

He explains how second-by-second balancing of power supply and demand on the grid happens with system operators constantly monitoring it at a control center called the ‘balancing authority.’ System operators typically have ‘reserve capacity’ in the form of demand “peaking” plants. Typically for every 100 MW of baseload capacity there is 15 MW of reserve capacity, so 15%. About 5 MW, or 5%, will be ready and idling at any given time, burning fuel and producing emissions. 
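
As a quick illustration of that rule of thumb (my own back-of-the-envelope sketch, with an arbitrary load figure):

```python
# Sketch of the reserve rule of thumb described above: roughly 15% of load
# carried as reserve capacity, with about 5% ready and idling (spinning)
# at any given time. The load figure is an arbitrary example.

def reserves(load_mw: float, reserve_frac: float = 0.15, spinning_frac: float = 0.05):
    """Return (total reserve MW, spinning/idling MW) for a given load."""
    return load_mw * reserve_frac, load_mw * spinning_frac

load = 10_000  # MW, hypothetical balancing-area load
total, spinning = reserves(load)
print(f"For {load} MW of load: ~{total:.0f} MW reserve, ~{spinning:.0f} MW spinning")
```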

Another feature of power use assessment is its one-way nature. The old “dumb” meters measure how much power we use over the course of a month, the billing period. Pricing may vary by season but that is about it. New smart meters keep track of hourly power use and may charge according to times, the price per kilowatt-hour varying with established demand periods. The technology is capable of responding to “dynamic pricing” which may change minute-by-minute and communicating back and forth through high-speed internet. In the past utilities encouraged more power use by dropping prices as power use increased through the month, so-called “declining block rates.” Dumb meters are like charging for groceries by weight rather than by item as one utility executive puts it.

Fox-Penner gives a kind of average scenario where the cost for a utility to generate one kWh varies from 2-3 cents in the dead of night to 8-20 cents in high-demand periods on hot or cold days, when old and inefficient plants held in reserve are turned on. Another potential feature of smart meters is that they can work with smart appliances to respond directly to price changes, or simply be programmed to run at lower-demand, lower-price times of the day or night. Smart meters are also more useful for integrating renewables and other distributed sources and battery storage. Thus end users can self-balance their own power supply and demand. During high-demand times, if enough users reduce their power usage with smart appliances, there may be no need to turn on expensive reserve capacity. Fox-Penner says that while system operators do employ demand response software, they will still balance the grid manually for a long time to come.
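
A minimal sketch of the “run when power is cheap” idea described above; the hourly prices are hypothetical numbers roughly in the spirit of the 2-3 cent nighttime and 8-20 cent peak figures, and the scheduling logic is a simple illustration, not an actual smart-appliance algorithm.

```python
# Pick the cheapest contiguous window long enough to run a flexible load
# (e.g. a dishwasher), given an hourly price schedule. Prices are hypothetical.

hourly_price_cents = [3, 3, 2, 2, 2, 3, 5, 8, 10, 9, 8, 8,
                      9, 10, 12, 15, 20, 18, 14, 10, 8, 6, 4, 3]  # 24 hours

def cheapest_window(prices, hours_needed):
    """Return (start_hour, total_price) of the cheapest run window."""
    best_start, best_cost = None, float("inf")
    for start in range(len(prices) - hours_needed + 1):
        cost = sum(prices[start:start + hours_needed])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

start, cost = cheapest_window(hourly_price_cents, hours_needed=3)
print(f"Cheapest 3-hour run starts at hour {start}, avg {cost / 3:.1f} cents/kWh")
```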

The smart grid offers three basic advantages: 1) greater customer control over energy use and costs, 2) enabling of local small-scale power production, and 3) a more reliable and secure grid. However, he notes, there are complications. Smart meter technology that regulates the downstream distribution end of the grid is different from the high-voltage transmission coming from the upstream generation end, where many power plants feed in. Hourly dynamic pricing and trading has long been a feature at this generation-transmission end, as has smart computerized control and switching. Even so, he notes, new software technologies can still improve that end. Such technologies will allow better “situational awareness” of grid issues and disruptions than previously. Smart grid technology will make it easier to incorporate storage. Having energy storage available means the grid can respond to demand and balance itself instantaneously and at low cost with the stored energy. Utility-scale storage as well as storage associated with distributed resources on the other end can both be utilized and eventually optimized. However, implementation cost, battery life, and other issues keep storage from being deployed widely. Local distributed generation allows two-way flow on the grid, which can be advantageous to the local generators and to the grid as a whole. The smart grid can provide self-activating tools to increase grid reliability and prevent blackouts.

Next he goes into real-time pricing, or time-of-use (TOU) rates, a feature likely to appear more and more in power markets as smart grid tech advances. In TOU pricing the utility may set the daily variable rates in advance. In ‘critical peak pricing’ (CPP) the utility may warn users, say, a day in advance of expected high demand and high prices so they can plan accordingly. In real-time pricing (RTP) new rates may be set every hour. Such variable rates are designed to be neutral to the utilities in terms of power sales (prices are shifted so that the utility makes the same amount of money no matter how much power is used) but beneficial to consumers. Dynamic pricing advocate Ahmad Faruqui found that typical TOU rates can reduce demand peaks by about 5% and CPP can reduce them by up to 20%. Fox-Penner notes that if this could be sustained as a national average then about 200 medium-sized power plants could be eliminated – along with their carbon emissions and pollution. This can happen in a fully developed smart grid automatically, by programming devices to respond to expected and short-term price changes. Kurt Yeager, formerly of the Electric Power Research Institute, calls it ‘prices to devices.’ Smart thermostats are now widely available and many other devices are being outfitted with these ‘enabling technologies.’ Customers can save over a hundred dollars a year while helping the utilities save money (by not building and running more reserve capacity and associated grid) and helping a little to mitigate climate change and pollution. By not having to turn on expensive reserve capacity, overall pricing can be reduced as well, which benefits not only those who provide the ‘demand response’ by reducing their usage, but all users on the system. It takes only about a 5% drop in demand through demand response to lower prices for all, so these benefits should be quite achievable as the tech advances. Those who provide the demand response by lowering their usage through smart tech will of course save much more than the other users. Avoiding the building of more reserve capacity and associated grid is called avoided capacity cost, as the reserve capacity is now provided by built-in demand response. This may easily become the largest benefit of DR as it is advanced. Another pricing mechanism is increasing, or inclining, block rates – simply raising per-kWh prices with excess use per customer. This has long been done and was first implemented by Insull – no smart tech required.
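
To see roughly how a few percent of peak reduction turns into hundreds of avoided plants, here is some back-of-the-envelope arithmetic; the national peak demand and “medium plant” size are my own ballpark assumptions, not figures from the book.

```python
# Rough arithmetic behind the 'hundreds of avoided plants' claim.
# The peak demand and plant size below are assumed ballpark values.

national_peak_gw = 780      # assumed U.S. peak demand, GW
tou_peak_reduction = 0.05   # ~5% peak reduction from typical TOU rates
cpp_peak_reduction = 0.20   # up to ~20% from critical peak pricing
medium_plant_mw = 200       # assumed size of a 'medium' peaking plant, MW

for label, frac in [("TOU", tou_peak_reduction), ("CPP", cpp_peak_reduction)]:
    avoided_gw = national_peak_gw * frac
    plants = avoided_gw * 1000 / medium_plant_mw
    print(f"{label}: ~{avoided_gw:.0f} GW of peak avoided, "
          f"roughly {plants:.0f} plants of {medium_plant_mw} MW")
```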

The barriers and resistance to DR and dynamic pricing come from the utilities, who have to pay for and implement the smart tech. Changing tens of millions of meters from dumb to smart – installing ‘advanced metering infrastructure’ (AMI) – is quite expensive. However, it does allow utilities to eliminate meter readers, as meters can now be read electronically. As smart grid tech advances, the cost scenarios will become more familiar to utilities and regulators, so business cases can be more predictable. Some users are not able to change their use patterns much, such as those who do most of their business at times of high demand, so they argue that dynamic pricing could hurt them – though the overall price drop due to DR would mitigate this somewhat. In fact, research indicates that only a few percent of users would end up with higher bills under well-designed dynamic pricing. Right now the U.S. is at roughly 50%, give or take, of smart meter (AMI) implementation. The shift from flat power rates to variable ones will save customers quite a bit of money and a little carbon as well.

Next we come to the regulatory realm. The main goals of utility regulation are to keep utilities from making excessive profits or from incurring excessive losses. Regulators and utility executives face significant uncertainty regarding the results of smart grid investments, and it will take time to evaluate them. The benefits are large, mostly external to utilities, but not easy to measure quantitatively. Large upfront capital investments are required. Regulators will have a lot to consider. Both DG and DR can add significantly to ‘avoided capital costs’ for utilities, costs which are much greater than the cost of the energy saved. However, figuring avoided costs is not easy. Congress dictated in the 1980’s that states must determine avoided costs every few years, and there are many different methods of determining them. Part of the difficulty is that the grid is highly connected, and determining avoided costs requires making detailed and extensive hypothetical cases. Wholesale prices are also regional now, as different regional markets sell electricity at different rates. The value of transmission can be taken as the difference in prices between the two markets it connects; this is also how natural gas is regionally priced, and it is considered a reasonable approximation of value. However, there are requirements: knowledge of daily price fluctuations, knowing the value of scheduled power, requiring DG and DR to keep safety margins of power, and planning for blackouts. Fluctuating hourly prices in wholesale markets benefit small DG and renewables generators, say DG advocates. At press time, 42 states had net metering, so that distributed generators, including rooftop solar owners, could sell their power to the grid at the same rate as they buy it. In terms of avoided costs it is actually best to install DG or DR in the middle, between generation and end-user distribution. The author mentions Amory Lovins’s work on DG and his 2002 book, Small is Profitable, as a monumental text on DG. Lovins catalogued many other side benefits of DG; however, most are nearly impossible to measure. For instance, having DG would make terrorist attacks less disruptive.

More downstream distributed energy sources also mean that the distribution end of the grid will need to be re-engineered, which will cost money and raise local issues. Fox-Penner gives a scenario where a new solar DG generator temporarily enjoys selling energy at a high price, with those around them also having to pay that high price until the utility upgrades the system in that area, after which the DG owner profits less and the neighbors get relief. Such seemingly unfair, shifting pricing may cause disputes and resentment. Similar situations have occurred on the high-voltage end, slowing development.

Two-way communication between grid operators and customers and their devices will be necessary for full smart grid implementation, and software platform standards will have to be developed. This is happening now. It is a requirement for “plug and play” interoperability. Non-profits like the Institute of Electrical and Electronics Engineers (IEEE, or I-triple-E) will likely write the standards. Agreeing on standards has been no easy matter in the past, and there are many standards to consider in expanding the smart grid. Cyber-security is another issue for smart grids. With many different languages and protocols in ‘supervisory control and data acquisition’ (SCADA) systems, the smart grid can be vulnerable to hackers. Russian and Chinese hackers are said to be already mapping U.S. power systems.

The next topic addressed is total electricity sales, or power usage. This has remained flat for a while in the U.S. and is projected to stay that way, due to vastly increasing efficiency in all parts of the system; LED lights are an example from the user end. Utilities, particularly IOUs, have long planning horizons and must make significant investments to build generation. If planned power plants are built and later found to be unneeded, there is a huge sunk cost; if they are not built and there are blackouts, the consequences can be equally severe. Lower sales make it harder for the utilities to invest in low-carbon energy and the smart grid – there is little incentive for them. An earlier example is the nuclear plant cost overruns of the 1980’s, which caused electric rates to skyrocket and led to bond defaults.

Renewables expansion will require more transmission lines, particularly from areas with rich wind resources that in the U.S. are typically remote from populations. People have resisted large transmission expansions in scenic areas. Transmission regulation is a mix of states, FERC, and regional transmission organizations. Different types of lines have different purposes and abilities. He compares AC and DC high-voltage lines. DC is unidirectional and AC is two-way, or bidirectional. DC lines lose less energy as they traverse long distances (in one direction only) so they can be more economical in certain situations. DC lines move hydroelectric power from the Oregon-California border to Los Angeles and from Canada into New England. They are better for moving power under water and can be used for offshore wind. Apparently, there are only about 15-20 of these DC lines currently in the U.S. Another new transmission technology is superconducting cables which can handle 3-4 times as much current as regular wires. Grid expansion planning is generally difficult in the deregulated markets due to jurisdictions and rules. There are often debates about who should pay for new lines, especially those traversing multiple states and power market regions. New lines will be needed, especially with utility-scale renewable power plants – which will be required to help meet carbon reduction goals and state RPSs. Unfortunately, the best wind, geothermal, and solar resources in the U.S. tend to be in areas with poor transmission infrastructure in addition to being far away from population centers where the power is needed. The bottom line is that a low-carbon future requires more transmission per kWh than our current high-carbon grid. A new transmission superhighway is in the planning with large power volume lines, both AC and DC proposed. Even so, a “supergrid” does raise some reliability and security issues if large amounts of power are coming from single sources. DERs can provide downstream reliability enhancements but upstream disruptions would still be problematic.  
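
As a rough physics-level illustration (not from the book) of why long lines are run at very high voltage, here is a simple resistive-loss sketch; the power level and line resistance are arbitrary example values, and AC-specific effects that also favor DC on very long runs are ignored.

```python
# For a fixed power transfer, current (and hence I^2*R resistive loss)
# falls as voltage rises. Power and resistance values are arbitrary examples.

def line_loss_fraction(power_mw: float, voltage_kv: float, resistance_ohm: float) -> float:
    """Fraction of transmitted power lost to resistive heating."""
    current_ka = power_mw / voltage_kv            # I = P / V  (MW / kV -> kA)
    loss_mw = (current_ka ** 2) * resistance_ohm  # P_loss = I^2 * R  (kA^2 * ohm -> MW)
    return loss_mw / power_mw

for kv in (230, 500, 765):
    frac = line_loss_fraction(power_mw=1000, voltage_kv=kv, resistance_ohm=9)
    print(f"{kv} kV: ~{frac:.1%} resistive loss on a 9-ohm line carrying 1000 MW")
```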

Availability, cost, and reliability are the three main issues with low-carbon energy sources. Natural gas peaking plants used to back up renewables are the most cost-effective means of doing that. Combined cycle gas turbines (CCGT) are the most efficient power plants, with energy conversion efficiency over 60%. Their cost per kWh is also the lowest of all power sources. They emit far less carbon and far fewer pollutants than coal. Thus coal-to-gas switching is happening on a big scale and is by far the main reason U.S. greenhouse gas emissions have dropped. The big risk for these plants is high gas prices. Gas prices will likely remain quite low, but gas needs to be available to new plants in order for more switching to occur, so pipeline expansions are required. Many have been delayed recently due to public opposition. Low-carbon coal plants that use gasification and/or carbon sequestration are very few and far between and, after many years, are still not widely deployed. I think CCGT gas plants with some carbon sequestration will be nearly as clean as renewables and a better deal than any coal plant. Carbon capture and sequestration (CCS) is simply quite expensive and will only be used in a world with carbon prices. It may be better to switch to gas, then incorporate some sequestration of the gas plants’ exhaust. Due to the scattered nature of power plants, large networks of CO2 pipelines would be required, which people tend not to like, and CO2 leaks at high enough concentrations in certain areas could be poisonous.
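
Some rough arithmetic (my own, not the book’s) behind why coal-to-gas switching cuts emissions so much; the emission factors and plant efficiencies are approximate generic values assumed for illustration.

```python
# CO2 emitted per kWh depends on the fuel's carbon content and the plant's
# efficiency. Emission factors and efficiencies below are approximate,
# generic values assumed for illustration.

BTU_PER_KWH = 3412  # thermal equivalent of one kWh

def co2_per_kwh(fuel_kg_co2_per_mmbtu: float, plant_efficiency: float) -> float:
    """kg of CO2 emitted per kWh of electricity generated."""
    heat_rate_mmbtu = (BTU_PER_KWH / plant_efficiency) / 1_000_000
    return heat_rate_mmbtu * fuel_kg_co2_per_mmbtu

coal = co2_per_kwh(fuel_kg_co2_per_mmbtu=95, plant_efficiency=0.35)  # typical coal steam plant
ccgt = co2_per_kwh(fuel_kg_co2_per_mmbtu=53, plant_efficiency=0.60)  # modern combined-cycle gas
print(f"Coal: ~{coal:.2f} kg CO2/kWh, CCGT: ~{ccgt:.2f} kg CO2/kWh "
      f"(about {ccgt / coal:.0%} of coal's emissions)")
```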

About 20% of U.S. power comes from nuclear plants, and most are set to be retired over the next 40-50 years. Will more nuclear replace them? Maybe, maybe not. Cost, safety, security, waste storage, and decommissioning are the major issues.

Wind power will continue to grow, mostly onshore but some offshore as well. Currently wind makes up about 4.5% of U.S. electricity; that could double in the next decade. The availability and cost of transmission limit wind, especially from its best resources in the Great Plains, where transmission is scarce. Variability, and thus reliability, is also an issue: wind is often most available when least needed. Storage of excess wind power would be ideal, but storage technologies are currently far too expensive, even though there are many projects operating and some mandates in place. When these “grid integration” costs are added, the economics of wind power look significantly worse.
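As a rough check on what doubling wind's share would take, here is the arithmetic with round-number assumptions (about 4,000 TWh of annual U.S. generation and a 35% average wind capacity factor, both my own assumptions rather than the book's figures):

```python
# Rough arithmetic: how much capacity would doubling wind's ~4.5% share require?
# U.S. annual generation and capacity factor are round-number assumptions.

us_generation_twh = 4000          # rough annual U.S. electricity generation
current_share, target_share = 0.045, 0.09
capacity_factor = 0.35            # assumed average for onshore wind
hours_per_year = 8760

added_twh = us_generation_twh * (target_share - current_share)
added_gw = added_twh * 1e12 / (capacity_factor * hours_per_year) / 1e9
print(f"~{added_twh:.0f} TWh of new wind output, roughly {added_gw:.0f} GW of new capacity")
```

Under these assumptions that is on the order of 60 GW of new turbines, much of it in places that would also need new transmission.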

PV solar has similar availability problems but is significantly less predictable than wind due to clouds, and there is far less sun in winter. It does have the advantage that peak generation is close to times of high electric demand. Even so, without demand or storage to absorb the excess supply during peak generation times, there will be grid integration issues with massive solar deployment. Solar economics, whether rooftop, utility-scale, or solar thermal, do not come close to the economics of CCGT gas, even with significant direct subsidies. Concentrated solar power (CSP), also called solar thermal, has been built in the U.S. Southwest, but the economics and the performance have been poor so far. It also tends to be in areas that require transmission upgrades.
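A toy “net load” calculation makes the grid-integration point concrete: subtracting solar output from demand hour by hour leaves a steep ramp that other generators must cover as the sun goes down. The hourly profiles below are invented for illustration only.

```python
# Toy net-load example: demand minus solar output, hour by hour, to show why
# large solar deployment creates a late-day ramp. Both profiles are made up.

demand = [60, 58, 57, 58, 62, 70, 78, 82, 84, 85, 86, 88,
          90, 91, 92, 94, 96, 98, 95, 88, 80, 72, 66, 62]   # GW by hour
solar  = [0, 0, 0, 0, 0, 1, 4, 10, 18, 24, 28, 30,
          30, 28, 24, 18, 10, 4, 1, 0, 0, 0, 0, 0]          # GW by hour

net_load = [d - s for d, s in zip(demand, solar)]
ramp = max(net_load[h] - net_load[h - 1] for h in range(1, 24))
print("peak one-hour net-load ramp:", ramp, "GW (other generators must cover it)")
```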

Biomass power comes from four sources: wood waste from paper and furniture makers, forestry residue, agricultural residue, and methane from landfills and anaerobic digesters. The attribution of biomass as “net-zero carbon,” while technically close to being true (it is carbon-lean rather than carbon-zero), should be caveated with the fact that the carbon enters the atmosphere much faster than it would have naturally. This is also true of biogas from anaerobic digestion.

Geothermal and hydroelectric power will see some growth, but overall the places where they can be developed are limited, and both carry some environmental risks. Hydrokinetic power from ocean tides and waves is also in this category of making small contributions to the energy picture.

California defines DG as sources of less than 20 MW that connect to the local distribution grid rather than to the high-voltage grid. The four sources are combined heat and power (CHP, also called cogeneration), usually small gas microturbines; wind; small PV solar plants; and fuel cells. CHP utilizes waste heat to heat buildings. Factories and residential complexes around the world utilize these 5-20 MW sources, with the heat often delivered through steam tunnels. Utilities have not been cooperative with CHP developments since it decreases their sales, but more CHP means lower carbon emissions, so he argues there should be no opposition. Small, home-sized CHP gas microturbines are being used more and more by property developers. They currently cost more than grid power but are expected to drop in price as the technology advances. Siting the energy-using equipment is a challenge in these systems. They can also be used stand-alone as off-grid islands in many cases. Small-scale wind is simply too expensive, although there is some development; the same is true of fuel cells. Costs could come down in the far future, but for now they are not at all economic. The costs of DG are often better than they appear due to the avoided costs they provide to the grid utilities, which are hard to measure and often depend on siting and local grid circumstances. CHP is the cheapest current form of DG. “Observable” costs of DG (without accounting for avoided costs) are still 2-3 times those of conventional large-scale power sources. Regulation and subsidies help, but not enough. Currently there are 76-85 GW of CHP plants at 3,300 sites, and this could be expanded by an additional 80-100 GW by 2030. He doesn’t predict the DG revolution will truly begin until about 2030 and suggests most of it will be CHP, small and large. He gives two possible scenarios for future power: ‘small-scale wins,’ which favors DG, and ‘traditional triumphs,’ which favors utility-scale centralized power projects on a more traditional grid. Either way the smart grid will be built.
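A minimal sketch of the CHP fuel-savings logic: counting recovered heat as useful output raises the effective efficiency well above electricity-only generation. The efficiency and energy figures below are typical-looking assumptions of mine, not numbers from the book.

```python
# Sketch of why CHP (cogeneration) saves fuel: recovered heat counts as useful output.
# The figures below are assumed, typical-looking values for illustration.

def chp_total_efficiency(fuel_in_mwh, elec_out_mwh, useful_heat_mwh):
    return (elec_out_mwh + useful_heat_mwh) / fuel_in_mwh

# Hypothetical 10 MW gas-turbine CHP unit over one hour:
fuel, power, heat = 30.0, 10.0, 14.0   # MWh of fuel in, electricity out, heat recovered
print(f"electric-only efficiency: {power/fuel:.0%}")                               # ~33%
print(f"combined efficiency:      {chp_total_efficiency(fuel, power, heat):.0%}")  # ~80%
```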

Fox-Penner shows comparison charts in which gas is the cheapest of all sources as long as gas costs less than $6 per MCF. Current projections put gas prices at $3-4.50 through 2020, so gas is by far our least expensive power source. Solar PV remains the most expensive energy source even with tax credits. Even with a price on carbon, gas would still be the least expensive and solar the most expensive; only if the carbon price were above about $50 per ton would that change. Nuclear and carbon sequestration require long lead times, and by 2030, he says, we will know whether or not they will be a big part of the picture.
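To see how a carbon price shifts such a comparison, a simple sketch: add (emission rate × carbon price) to each source's base cost. The base costs and emission rates below are rough assumed figures for a CCGT and a typical coal plant, not values from Fox-Penner's charts.

```python
# How a carbon price shifts a gas-vs-coal cost comparison.
# Base costs and emission rates are rough, assumed figures for illustration only.

def cost_with_carbon(base_cost_per_mwh, tons_co2_per_mwh, carbon_price):
    return base_cost_per_mwh + tons_co2_per_mwh * carbon_price

gas  = {"base": 45.0, "tons": 0.37}   # $/MWh, tCO2/MWh (assumed CCGT)
coal = {"base": 60.0, "tons": 0.95}   # $/MWh, tCO2/MWh (assumed coal plant)

for price in (0, 25, 50):
    g = cost_with_carbon(gas["base"], gas["tons"], price)
    c = cost_with_carbon(coal["base"], coal["tons"], price)
    print(f"carbon ${price}/ton: gas ~${g:.0f}/MWh vs coal ~${c:.0f}/MWh")
```

Because gas emits far less CO2 per MWh, a carbon price widens its advantage over coal; it takes a fairly high price before more expensive low-carbon sources start to displace gas itself.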

Problems with scenarios of complete renewables are many and include increasing costs per kWh as more renewables are added to the grid. Even in the distant future, mixed energy sources will likely be used, including gas, hydro, and nuclear, all of which can provide baseload power. Thus most power providers, now and in the future, hold diverse power source portfolios used to satisfy demand, reliability, and low-carbon and pollution mandates such as the Clean Power Plan. The shift toward renewables and smaller sources is indeed inevitable, but how much and how fast are the questions.

Next he explores in detail new power provider business models. First is finding a way to commodify and incentivize energy efficiency (EE) so that providers can profit from it as well as from power sales. EE is a straightforward investment for someone who buys power but not for those who sell power. EE also offers the best potential source of carbon emissions and pollution reductions, so these investments are also emissions reduction investments, which makes them doubly valuable; carbon pricing would further incentivize them. He goes through the barriers to EE adoption: 1) information, 2) capital availability, 3) transaction costs, and 4) inaccurate prices. First, accurate information must be obtained about the comparative efficiencies of energy sources and technologies big and small on each system. This is true of utility-scale as well as home-scale end-user efficiencies; people especially need to understand efficiency and its potential benefits before they invest in it. Capital availability is simply whether the upfront money can be found: efficiency is an all-capital cost where 100% must be paid out before savings are realized. Utilities have many competing projects, like new plants, transmission build-out, and smart grid expansion, so EE must compete for capital. Transaction costs refer to the disruption of contractors adding the new measures, which may cause occupancy delays in new housing and building projects. Inaccurate price signals refer to the common situation in which the builder will not be the one paying the bills but will be the one choosing the energy systems and appliances, and so has no incentive to consider potential energy costs. One help for some of these problems is free energy audits, and sometimes installation advice, provided by organizations or utility companies. Appliance efficiency standards have saved massive amounts of energy and have saved consumers massive amounts of money. However, the construction and real estate industries as well as appliance manufacturers routinely oppose them because they increase their costs and change construction and manufacturing practices. Building efficiency codes and appliance efficiency standards are surefire ways to save energy and emissions and need to be pursued further. Utility efficiency programs can also be effective, offering free energy audits, low-interest loans, rebates, and free technical assistance. State government incentives and financing have also worked to enable EE. Private sector EE has also been fairly successful through energy service companies (ESCOs) using ‘shared savings’ business models, whereby the ESCO pays for the EE upgrades and recoups its money plus a certain profit over time, while the customer enjoys cheaper energy costs that drop even more once the ESCO is paid off (see the sketch after the next paragraph). This is a great model for companies since the ESCO does all the work, but ESCOs often perform only partial upgrades with rapid paybacks rather than full ones; the exception is government installations, where full, slower-payback EE upgrades are mandated, which are better in the long run. To mandate utility EE there is the Energy Efficiency Resource Standard (EERS), whereby the utility or seller of electric power must meet a certain percentage of sales growth through energy savings from efficiency upgrades; thus they are mandated to save a certain amount of energy in addition to selling it. About half of U.S. states have some version of an EERS.
Utilities can often raise capital at lower interest rates; they are creditworthy because they provide an essential service. This gives them an advantage in investing in EE by lending to their customers, and they are an ideal low-hassle financing entity. On the downside, utilities face ‘divided incentives’: selling energy vs. saving energy. Government control of EE programs also has downsides: changing administrations change the rules, and because EE is capital intensive there are complaints about excessive government spending (as with the DOE paying for weatherization upgrades). Further government control could also require unpopular tax increases. He gives three approaches: 1) requiring utilities to deliver EE against their own financial interests (as is mostly done today), 2) letting the government do it, and 3) changing regulation and the utility business model to give utilities an incentive to implement EE. He notes that none of these is done well enough today. He thinks utilities can do it best and would find more opportunities for EE if they had an incentive to do so.
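Here is a minimal sketch of the ESCO ‘shared savings’ arrangement described above, with assumed dollar figures and an assumed savings split; it shows how the ESCO recovers its capital plus a profit while the customer still comes out ahead from day one.

```python
# Sketch of an ESCO 'shared savings' deal: the ESCO pays the upfront cost and keeps
# a share of the bill savings until it is repaid. All figures are assumptions.

upfront_cost   = 200_000   # ESCO pays for the efficiency retrofit
annual_savings = 50_000    # customer's yearly bill reduction
esco_share     = 0.80      # assumed share of savings the ESCO keeps until paid back

recovered, year = 0.0, 0
while recovered < upfront_cost * 1.15:      # ESCO targets its cost plus ~15% profit
    year += 1
    recovered += annual_savings * esco_share
    customer_keeps = annual_savings * (1 - esco_share)

print(f"ESCO recovers its cost plus profit in ~{year} years;")
print(f"the customer keeps ${customer_keeps:,.0f}/yr meanwhile and the full "
      f"${annual_savings:,.0f}/yr afterwards")
```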

Finally, he gets to business models. First he describes the two ‘triad’ business models (referring to generation, transmission, and distribution) in place today; then he describes the two models he thinks could dominate the future. The first model of today is the traditional vertically integrated utility that owns generation, transmission, and distribution; this is the integrated, regulated public/cooperative structure of 36 states. The second model is the de-integrated structure with deregulated generation and a regulated grid, which occurs in 14 states. The FERC-regulated transmission is open-access: any company can put energy onto it if there is capacity. Vertically integrated companies have probably been the most economically successful, but that was before the advent of the smart grid and the necessity to reduce emissions. After deregulation and de-integration in the U.S. and Europe, the percentage of vertically integrated companies dropped, but after a while it came back up as many companies re-integrated for the economic advantages. Will integration, with its economic efficiencies, survive the smart grid? Regulation arose from the belief that power companies were natural monopolies with economies of scale. These days only transmission and distribution are considered natural monopolies; generation and retailing are not. Competition in generation could lead to cost benefits for consumers, so it was generation that was deregulated in the 1990s. With deregulation, the owners of transmission and distribution would be required to use the lowest-cost generation whether they owned it or not, or they would not be allowed to own generation at all. Many now say that as generation becomes less centralized and more distributed, there will be more benefits to deregulated competition, and theoretically competitive markets should work even better with dynamic pricing. Fox-Penner goes into detail comparing these models. Smartphone-controlled home energy management responding to dynamic price signals will be possible when the smart grid is fully deployed and will likely be a major feature of demand response and peak shaving (a simple example is sketched below). Such innovations will likely be easier if regulated distributors have less control and there is more deregulated competition. Competition and dynamic pricing are a good match, but traditional utilities may be able to manage it as well.
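As a simple illustration of dynamic pricing and peak shaving, the sketch below schedules a flexible load (say an EV charge or a water heater) into the cheapest hours of an assumed hourly price curve; all prices and load sizes are invented for illustration.

```python
# Toy demand-response example: a controller shifts a flexible load to the cheapest
# hours of a dynamic price schedule. Prices and load sizes are invented.

hourly_price = [0.08, 0.07, 0.06, 0.06, 0.07, 0.09, 0.12, 0.18,   # $/kWh, hours 0-23
                0.20, 0.18, 0.15, 0.14, 0.13, 0.14, 0.16, 0.22,
                0.30, 0.35, 0.32, 0.25, 0.18, 0.14, 0.11, 0.09]

flexible_kwh = 12            # energy that can be scheduled at any time of day
hours_needed = 4             # it must run for 4 one-hour blocks (3 kWh each)

cheapest = sorted(range(24), key=lambda h: hourly_price[h])[:hours_needed]
peak     = sorted(range(24), key=lambda h: hourly_price[h], reverse=True)[:hours_needed]

def cost(hours):
    return sum(hourly_price[h] * flexible_kwh / hours_needed for h in hours)

print("scheduled in cheapest hours:", sorted(cheapest), f"cost ${cost(cheapest):.2f}")
print("scheduled at the evening peak:", sorted(peak), f"cost ${cost(peak):.2f}")
```

The same logic, run across millions of homes, is what lets price signals shave the system peak without anyone manually turning things off.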

Now we come to his two (and a half) business models for the future of utilities. First is the Smart Integrator (SI) model, “a utility that operates a regulated smart grid offering independent power and other services at market prices.” Decoupling sales from profits can remove disincentives. The second model is the Energy Services Utility (ESU), which “is vertically integrated, regulated, and must have strong EE incentives built into its regulatory structure to offset its regulated profit motive.” He notes that these two models are really not much different from the models in place today; it is just that sales and profits are decoupled in the SI and efficiency is set as a core mission in the ESU. The other differences are that the ESU may own some or most of its power supply and is regulated, with regulators setting prices, while in the SI the market sets prices. The half model (of the two and a half) is a Smart Integrator in which distributed generators (DG) are owned mainly by communities rather than by individuals. Mid-scale generation may be owned by individuals, businesses, DG management companies, utilities, or communities. Community energy systems (CES) may become a public power model, not unlike municipal power companies, owning mid-scale DG.

The mission of the Smart Integrator is to deliver power with superb reliability and maintain mostly downstream wires and assets, but it does not own or sell the power. The SI will work mainly as a distribution company (distco) controlling the two-way flow of electrons to and from DG in response to prices, and so must have an ‘open architecture’ format to let more sources in when needed and turn them off when not needed. The whole two-way system will be managed with software, sending price signals and switching sources on and off in real time. Hourly spot prices will have to be determined, and this management mechanism can get both expensive and complex; Germany is currently having to deal with this as millions of new DG resources that send and receive energy must be managed. Geographic situations where local DG providers gain too much of an advantage may have to be regulated in some way. Determining the value of DG and DR services in avoiding expansion costs is not easy, nor is deciding how to reward them for those services. Software and software platform evaluation, costs, and agreement are currently big issues, and these problems will have to be worked out. Information management will become more important as more DG is added to systems; the data will need to be queried and analyzed to provide decision support, and more expertise in IT and regulatory economics will be needed in addition to electrical engineering. Regulated utilities set rates according to ‘cost of service’: their costs plus an agreed-upon rate of return. This is the sales component by which customer rates are set every few years, and fixed costs per kWh set in this way do not encourage EE. The sales incentive exists even for the SI. ‘Decoupling’ is the current solution for mitigating sales incentives and replacing them with energy savings incentives, where the SI or other utility is paid for saving energy (see the sketch below). This is a short-term fix, he says: investors will see declining sales, which is not what Wall Street likes to see, especially for companies that own generation. If SIs just deliver power, the drop in sales won’t affect their profit. Another issue is who provides customer service: will it be the SI itself in a business-to-customer (B2C) format or a third-party, software-vendor-type provider in a business-to-business (B2B) format?
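A minimal sketch of the revenue decoupling true-up mentioned above, using assumed revenues and sales: the regulator fixes an allowed revenue, and a small rate adjustment keeps the utility whole when efficiency programs cut sales.

```python
# Sketch of revenue decoupling: rates are trued-up so that selling fewer kWh does
# not reduce the utility's allowed revenue. All figures are assumptions.

allowed_revenue    = 120_000_000   # $ per year set in the rate case
forecast_sales_mwh = 1_000_000
base_rate = allowed_revenue / (forecast_sales_mwh * 1000)   # $/kWh

actual_sales_mwh = 950_000         # efficiency programs cut sales by 5%
collected = base_rate * actual_sales_mwh * 1000
shortfall = allowed_revenue - collected
surcharge_per_kwh = shortfall / (actual_sales_mwh * 1000)   # next-period true-up

print(f"base rate {base_rate*100:.1f} cents/kWh, revenue shortfall ${shortfall:,.0f}")
print(f"true-up adds {surcharge_per_kwh*100:.2f} cents/kWh so allowed revenue is met")
```

The point is that the utility's income no longer depends on selling more kWh, which removes the penalty for helping customers use less, though, as the text notes, investors may still read flat or falling sales as a bad sign.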

The Energy Services Utility (ESU) will differ from the SI in two main respects. “The ESU will not necessarily have an incentive to cooperate with local generators who want to connect and sell power into its smart system”; it may view them as competition. Second is the disincentive to help customers reduce their power use due to the ESU’s ownership of generation. These two issues will require regulatory interventions. Letting in local generators (open access) and incentivizing such access will require regulation; the key difference between the SI and the ESU here is that the ESU owns generation, so other generators would be seen as competitors. Another issue is that utilities will often do the bare minimum to comply with EE mandates rather than go beyond them. EE is a public goal and needs to be incentivized in a way that ensures it is pursued as fully as possible rather than to the bare minimum. If utilities get to keep a higher percentage of the value of the energy they help their customers save, then they will be encouraged to save more, simple as that. This has been successful with PG&E in California. Thus the public goal of saving energy is aligned with the business interests of the utility. Even so, PG&E’s and others’ sales profits still far outweigh (by nearly 10 times) their efficiency profits. Next he considers Duke Energy’s Jim Rogers’ “Save-a-Watt” program, in which home efficiency improvements would be provided automatically by the utility, the costs avoided by deferring new plants would become part of the utility’s profit, and the heavily regulated planning and approval cycles would be eliminated. The problem is that regulators were (and are) averse to ceding control of money allocation and investment to the utilities, and Rogers’ plan was seen as too lucrative for the utility. It was approved in four states after some tweaking and a slight lowering of the profit. Rogers’ plan called for the possibility of charging customers not for kWh but for heat, light, and other units, what he called “value billing.” Utility executive Ralph Izzo calls a similar idea “universal access to energy efficiency.” For an ESU to pitch investors on its ability to paradoxically create more value by selling less energy, it will have to sell its energy services well enough to out-compete an SI, size and site its investments in new generation and transmission more precisely, and manage the smart grid effectively.
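A simple sketch of a shareholder EE incentive of the kind described above, where the utility keeps a fraction of the net benefits of its efficiency programs; the dollar amounts and the share are assumptions for illustration, not PG&E’s or Duke’s actual figures.

```python
# Sketch of a shareholder EE incentive: the utility keeps a fraction of the net
# value of the energy its programs save. All numbers are assumed for illustration.

avoided_supply_cost = 40_000_000   # value of energy/capacity the programs avoided
program_cost        = 25_000_000   # what the utility spent running the programs
utility_share       = 0.12         # assumed share of net benefits the utility keeps

net_benefits      = avoided_supply_cost - program_cost
utility_incentive = utility_share * net_benefits
customer_benefit  = net_benefits - utility_incentive

print(f"net benefits ${net_benefits:,.0f}: utility earns ${utility_incentive:,.0f}, "
      f"customers keep ${customer_benefit:,.0f}")
```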

Defining utilities’ mission as selling energy services rather than energy is not new: Edison first sold light, not power. While Amory Lovins and others promoted selling energy services back in the 1980s, it was impractical then because the services could not be measured; now, with IT technologies, dynamic pricing, and automation, we can measure them much better. The idea can be extended to other realms: instead of buying products we can rent them, with the providers taking them back to be recycled. Many products are already sold this way, and software-as-a-service models are standard in some industries.

Many of the regulatory hurdles to overcome will revolve around how to measure the benefit of investments, how to allocate system costs, and how to blend markets and regulations. Changes in regulation, the smart grid, more decentralized distributed generation, more demand response provided by distributed generation, more focus on decarbonization, and more regional, state, and grid-to-grid cooperation will be the requirements of the future as smart power arrives in full. He notes that public utility commissions have in the past been unduly criticized for some decisions, which has dampened their propensity to innovate and take risks with new technology, something that may be required in the new environment. The California energy crisis and the perceived failure of deregulation have also had negative effects on experimentation with new models and technology integration. State commissions, he says, need their independence restored and less government oversight. He advocates for regulator training programs and accreditation for commissioners; they need to be well educated on current trends and technologies. Basically, the public utility commissions need an upgrade, he says. Decentralized DG will increasingly include community-owned sources, cooperatives, and municipal utility-owned sources, and their ESU formats will be slightly different from those of investor-owned utilities (IOUs).

Fox-Penner suggests that the SI and ESU models will likely both occur; if one proves better it will win out, but it may also be that both will remain. He calls for better national policy on EE to better inform state regulatory policies, but other than that he sees no need for new federal laws.

In the Afterword he gives four pillars of the new power paradigm: adequacy, reliability, universally affordable service, and rapid decarbonization. These changes will be paid for by customers and financed by investors. There is the grid and the increasingly important “grid edge,” the decentralized distribution and microgrid technologies that will grow. So far, he says, Germany, California, New York, and Hawaii have been at the forefront of these changes to new models. The changes have not been without problems: traditional utilities in Germany have been hit with revenue losses and low spot prices, and when this has happened in the U.S., utilities have requested higher rates for all customers. While they do see some savings from DG, these have not been enough to offset revenue losses. Such revenue losses will push distribution companies toward becoming Smart Integrators, while government- and customer-owned power companies have moved toward the Energy Services Utility model.

This book is essential reading for anyone wanting to understand the dynamic state of the current and future power industry. New developments happen frequently. Battery and other energy storage is now entering the picture more and more, at both home scale and utility scale. State battles are occurring over net metering and feed-in tariff rules. There are new renewables mandates, and some have been scaled back. Gas continues to replace coal. Wind continues to grow. Future nuclear and CCS-equipped coal continue to be uncertain but may happen in the 2020s. More microgrids, community energy, and dynamic pricing experiments are happening. Regional carbon pricing mechanisms have been established and are functional. EVs are set to take off in the 2020s and 2030s. Efficiency improvements continue at all levels. I enjoyed this one, as energy is a big interest of mine.