Bringing Power and Progress to Africa

For the past year or so, I’ve been privileged to lead a team at the Institute for Sustainable Energy at Boston University to develop a synthesized perspective on the current state of and future prospects for the African electricity sector.  I’m pleased to announce that our final report is now available to the public, and a webinar presenting a brief summary of its findings can be seen here.

This report became a labor of love for me:  although it admittedly took a lot longer to complete than I had expected, the effort in reviewing the ever-growing body of information on electricity in Africa rewarded me by providing additional evidence in support of my hypothesis that the natural long-run state of the sector involves a “grid-of-grids” architecture.

The challenges impeding improvement in the African electricity sector are daunting.  Summarizing the main points:

  • Nearly half of the continent’s 1.3 billion population has no access whatsoever to electricity. The other half, although nominally connected to the grid, is generally subject to poor reliability of service and frequent outages.  Without much broader and better availability of electricity, it will be very difficult for Africa to gain a meaningful place in the digital 21st Century economy.
  • Correspondingly, electricity infrastructure in Africa is woefully underdeveloped. Bringing improved energy access to Africans will require extraordinary amounts of capital – on the order of $1 trillion over the next few decades, by our estimates.  Alas, low purchasing power in most African economies means that it will be very difficult to attract the necessary capital.
  • Moreover, most electricity sector activity in Africa is undertaken by state-owned utilities that are not financially self-sustaining. The governments that control African utilities are often weak and/or corrupt.  The resulting risk-reward profile for non-relocatable infrastructure investments in such locations is often insufficiently attractive to induce investment.
  • To the extent that sizable sums of capital can be attracted, it is also important that it be invested in a way that is suitable for Africa. New large-scale power projects – either generation or transmission – are difficult to make work because they typically require sharing across multiple countries, but cross-border interconnection and coordination are limited in Africa.
  • And, if all the required power generation expansion is achieved through the addition of capacity reliant upon fossil fuels, then Africa’s currently minimal contribution to global carbon emissions will quickly rise – just as the rest of the world needs to dramatically reduce emissions to mitigate climate change.

As a result of these observations, the case for small-scale grids based on distributed energy resources (DER) – most prominently, photovoltaics supported by energy storage – becomes compelling for many places in Africa.  Not only are such solutions often the most cost-effective, but they also help mitigate the risks that investors face, for several reasons:

  • They can be deployed in modest-sized increments, thus reducing capital outlays and time-to-completion before cash returns can begin to be achieved.
  • They are conducive to Pay-As-You-Go business models that improve the likelihood of ongoing financial viability from customer revenues rather than subsidies.
  • They often involve dealing with local counterparties – often individual customers – rather than state-owned utilities or national governments where bureaucratic (or worse) forces can stymie progress.

As capital flows to African electricity infrastructure in the coming decades, we see the grid-of-grids slowly emerging, wherein new DER-based small grids tend to serve rural communities while pre-existing grids in urban areas get reinforced.  Over time, as economically-viable interconnection opportunities emerge as grids expand, the lattice of electricity delivery infrastructure in Africa can become better integrated, thereby improving the quantity and quality of electricity supply for all Africans.

In our view, this is the pathway to bring power and progress to Africa in a more environmentally and financially sustainable manner.

Posted in Industry perspectives, Thought leadership, Uncategorized

Electric Industry Transformation: Pathway to the “Grid of Grids”

In connection with my role as a non-resident Senior Fellow at the Institute for Sustainable Energy at Boston University, I was recently asked to speak in their “Utility of the Future” series.

The resulting talk wasn’t recorded, but I was subsequently asked if I could upload it to the Internet to preserve it for posterity.  Since I didn’t think the slides could stand on their own without commentary, I have written a blog post on the ISE website to summarize my remarks.

In short, I believe the electricity industry (at least in the U.S. and other countries with already-developed electricity infrastructure) is gradually evolving to a “grid-of-grids”, in which the electricity delivery network will increasingly be able to split apart and recombine instantly and seamlessly for high resilience while being powered by large quantities of intermittent renewable energy resources (e.g., wind and solar) backed up by storage devices.

My talk, entitled “Pathway to the ‘Grid of Grids’”, describes this vision in further detail, although it leaves open both the pace and specific nature of the transition.

While I wouldn’t hold my breath for this vision to come fully to fruition (probably not in my lifetime), the presentation was intended to express a view of the industry’s logical end-state — something for us all to be aiming for when figuring out how best each of us can make a difference in driving towards a reliable zero-carbon electricity system.

Posted in Industry perspectives, Thought leadership

A Taxonomy for Improved Understanding of Microgrids

These days, microgrids are one of the hottest topics in the energy sector.  Capitalizing on the declining costs of many distributed energy resource (DER) options, microgrids are emerging as an increasingly viable business model for augmenting the local electricity grid to improve operational resilience during emergencies.

“Resilience” is the fundamental concept driving much of the rising interest in microgrids.  Between the vivid examples of social chaos and economic losses caused by massive power outages in the wake of recent natural disasters – from Hurricane Sandy plunging large swaths of New Jersey and New York into prolonged darkness in 2012, through Hurricane Maria of 2017 wiping out the Puerto Rico electricity grid for months on end – and the envisioned threats posed by physical or cyber terrorism, civic and business leaders alike are pondering ways to make their electricity supplies more robust.  If not the only or entire answer, microgrids are increasingly seen as a key part of the solution.

At the Microgrid Knowledge 2018 conference last May in Chicago, speakers from the full spectrum of vantage points offered their perspectives about recent microgrid efforts in which they had participated.

As is often the case at events where microgrids are discussed, the narrative flow was at times difficult to follow.  This is because the subject matter bounced freely across many divergent microgrid types.

Different Microgrids Are…Really Very Different

Given that microgrids cover such a wide variety of applications, considering any two microgrids in juxtaposition to each other is fraught with challenges.

A microgrid supplying electricity to a remote but critical military base is completely different from a microgrid that provides assured power to a community center during emergencies, and both of these have few similarities to a microgrid built to support a lightly populated rural town subject to outages when a long/thin transmission line fails in stormy conditions.

As multiple speakers admitted during the course of the Microgrid Knowledge conference, “If you’ve seen one microgrid…you’ve seen one microgrid.”

Yet, highly dissimilar microgrids often get lumped together during conversations.  When this happens, it becomes very hard to develop useful insights about microgrids.  Significant conscious effort is required to maintain intellectual focus on something related to microgrids while at the same time shifting one’s frame of reference from one extreme of microgrids to the other extreme.

Moreover, thinking can easily become confused – or worse, lead to invalid conclusions – when inappropriate examples are processed through conceptual frameworks designed for other purposes.  Without careful and nuanced reasoning, the lessons learned from one microgrid experience might not translate well at all to another microgrid.

Among electricity sector observers, it is commonly noted that a standard industry definition for microgrids is lacking.  One attendee at the conference noted that as many as 25 different definitions for microgrids have been found in the literature.

Because there are already too many possibilities in circulation, I don’t propose any new definition of microgrids.  Rather, I propose a taxonomy for microgrids, with the aim of providing greater clarity when communicating or jointly problem-solving in the microgrid arena.

First Question to Ask:  Microgrid or Minigrid?

When considering a case study that is said to be a microgrid, the initial distinction to assess is whether the example is a separately-operable subset of a larger grid or rather is a small stand-alone electrical system unto itself.

If the latter, as is the case for remote isolated villages or tiny islands in the middle of the sea, then the microgrid is essentially just a very small utility – with all the corresponding functional requirements of a conventional utility, except writ much smaller.

True, the operational and hence planning needs of a small utility are often more challenging than those of a large utility, since there is less diversity and correspondingly less ability to take statistical advantage of the law of large numbers on both the supply side and the demand side of the electrical system.  But, be that as it may, the issues facing very small utilities are essentially the same issues that electric utilities face everywhere.  So, this constitutes not so much a microgrid as a “micro-utility”, and is increasingly called a “minigrid” by industry experts and observers.

Minigrids are of growing interest in rural areas of developing economies, where the grid has never reached.  In contrast to these situations, when the word “microgrid” is used in the U.S., it’s usually in a context in which a local portion of the much larger grid can seamlessly disconnect to operate independently – a concept often called “islanding”.

Within the islanding-capable set of microgrids, arguably the most important differentiating factor – at least when considering commercial implications – is whether the system involves one customer or multiple customers.

Single-User Microgrids:  Familiar Concept, Better Technologies

Single-building microgrids capable of islanding are fairly straightforward.  In fact, they’re nothing new:  many buildings have long had the ability to disassociate from the grid and remain electrified.  For decades, most hospitals, other critical emergency services (e.g., police, fire), and telecommunications facilities have been able to immediately switch over to self-reliance in the event of a generalized power outage, preserving operational functionality even when everyone else in the neighboring area is dark.

The only novel aspect of single-building microgrids is the newer and hence greatly-advanced set of technologies and products that can be considered for commercial adoption, such as highly-sophisticated control systems and lower-cost DER alternatives.

Microgrids Serving One Customer with Multiple Buildings Are Similar to Single-Building Microgrids

With these advancements, it’s becoming more practical to consider islanding multiple buildings, not just one.  Indeed, campuses involving multiple buildings but just one customer are currently a common target for microgrid development activities, enabling the entire campus to operate independently from the grid in an emergency.  Examples include:

  • Military bases
  • Universities

While a microgrid covering multiple contiguous buildings is technically more complex to build and manage than one involving only a single building, from a commercial standpoint, it isn’t that much different provided that all the buildings are owned by the same entity.  In other words, as long as there’s just one customer, it doesn’t much matter how many buildings are involved in the microgrid.

In a campus microgrid, the “point of common coupling” (PCC) – that is, the interface between the microgrid and the main grid – becomes essentially the meter associated with a master account through which the utility serves the campus owner.  True, a decision will need to be made on who owns and operates the microgrid “behind” the PCC:  the campus owner, the local electric utility, or a third party.  But whoever owns/operates it, from the perspective of the larger grid, a campus microgrid looks just like a single building that can be islanded.

As a result, at least when considering the commercial (as distinct from the technical) issues of microgrids, it’s probably appropriate in most contexts to categorize multi-building microgrids along with single-building microgrids into an overall “single-user” microgrid classification.

It’s when microgrids involve multiple customers that things get…really “interesting”.

The Nuances and Complexities of Multi-User Microgrids

To date, most microgrids developed in the U.S. have been single-user microgrids.  This dominance stems from the fact that it is relatively straightforward for electricity customers, electric utilities, and vendors to the industry to commercially structure a single-user microgrid.

A single customer can implement an islandable microgrid on their own, requiring no coordination with other stakeholders nor any changes to any pre-existing regulatory structures governing electric utility service.  The economics of single-user microgrids are straightforward to evaluate and monetarily structure, because the single entity incurs all the costs of microgrid development, operations and maintenance – and gains all of the benefits that the microgrid affords.

In contrast, only relatively few multi-user microgrids have been developed, because the obstacles to their successful development are numerous and significant.

Since multiple parties with frequently differing objectives are stakeholders in multi-user microgrid development, all aspects of planning, managing, and monetizing these systems involve reaching agreement on complex and nuanced matters.  Multi-user microgrids raise questions about what services companies that develop and/or own microgrids — companies that are not regulated utilities — are allowed to sell to other parties, and about how the microgrid interfaces, both operationally and commercially, with the utility that owns the distribution grid in the area.

A newly-released report, produced by a team I joined through my affiliation with the Institute for Sustainable Energy at Boston University, may be the first comprehensive review of the obstacles to multi-user microgrid development.

The Special Case of District CHP Systems

Since the earliest days of the electricity industry, combined heat and power (CHP) systems – also known as cogeneration – have been providing steam and electricity to large, locally-concentrated energy demands in dense urban centers.

As the electricity grid we now take for granted filled in around them, these so-called “district CHP” systems interconnected with the local electricity distribution network, yet retained the ability to disconnect from the grid when necessary to maintain steam/heat service for their dedicated customers.

Fundamentally, these were the first microgrids:  they are fully capable of stand-alone operation islanded from the larger grid, providing electricity to a set of customers in their immediate vicinity.  Indeed, for the most part, they pre-dated the larger grid within which they would later become a microgrid.  As a result, there’s probably not a lot to be learned about future microgrid development from these historical examples.

However, to the extent that district heating represents an untapped but attractive economic opportunity in a location that is also considering a microgrid (e.g., for resilience reasons), a district CHP system is highly likely to be a very attractive option worth serious consideration.  Given that they are usually designed to serve multiple customers within a well-defined geographic area, district CHP systems will likely face most of the same development and implementation issues as a multi-user microgrid.
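Pulling the threads together, the taxonomy above can be sketched as a simple decision function.  This is purely an illustration of the classification logic; the type names and function are my own, not industry-standard terms:

```python
from enum import Enum

class GridType(Enum):
    MINIGRID = "stand-alone micro-utility"
    SINGLE_USER_MICROGRID = "islandable, one customer (one or many buildings)"
    MULTI_USER_MICROGRID = "islandable, multiple customers (incl. district CHP)"

def classify(connected_to_larger_grid: bool, customer_count: int) -> GridType:
    """First question: separately-operable subset of a larger grid, or a
    small stand-alone system?  If islandable, next: one customer or many?"""
    if not connected_to_larger_grid:
        # Remote village or small island: essentially a very small utility
        return GridType.MINIGRID
    if customer_count == 1:
        # Single building or campus: commercially similar either way
        return GridType.SINGLE_USER_MICROGRID
    # Multiple customers: where things get "interesting"
    return GridType.MULTI_USER_MICROGRID

print(classify(False, 300))   # remote village
print(classify(True, 1))      # university campus
print(classify(True, 40))     # district CHP system
```

Note that the number of buildings never appears as an input: as argued above, it matters technically but not commercially.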

Getting Clear on Microgrids

The old adage says, “If all you have is a hammer, everything looks like a nail.”  In the microgrid arena, if all you’ve worked on are single-building microgrids, don’t assume that what you’ve learned and know will necessarily apply well to a multi-user microgrid.

The point of this essay boils down to one simple message:  I urge microgrid professionals to employ standardized terminology more rigorously when discussing microgrid projects, and humbly propose the above taxonomy for consideration.  If not this taxonomy, then I implore the microgrid community to agree on something at least as workable and spread its usage, as it will help everyone – especially newcomers to the field – avoid misapplying lessons learned across dissimilar microgrid examples.

Posted in Industry perspectives, Thought leadership

Regional Energy Archetypes and Their Associated Pathways to Shift the Global Energy Mix

Across the world, there is near-unanimous agreement on the long-run vision for the global energy mix. This vision includes:

  • Widespread adoption of solar and wind (both onshore and offshore), augmenting hydro where available
  • Backstopped by various forms of storage (perhaps including electrolyzed hydrogen for long-duration needs)
  • Powering a society based on electricity for almost all stationary and transportation energy needs
  • Fossil fuels relegated to only selected niches (e.g., aviation)

Any regional variations in this vision relate primarily to the path taken towards this destination.

Simply put, different regions will navigate the energy transition to this end-state differently.  This is due to regional differences in economic maturity, the energy infrastructure inherited from previous generations, available renewable resources, and prevailing political attitudes.

Most regions of the world fall into one of six archetypes, which in turn establishes how their energy sectors will evolve in the coming decades:

  1. “Lucky”

In certain places, the incumbent electricity systems have long been able to leverage copious quantities of inherent renewable energy resources that are both low-cost and dispatchable. Cases in point: geothermal power for Iceland, and hydroelectricity in the cases of Scandinavia, New Zealand, Brazil, Quebec and the Pacific Northwest in the U.S.

For economies such as these, carbon emission reduction goals for the power sector are almost moot, and the remaining move to a zero-carbon economy is limited to shifting vehicles to electricity from petroleum-based fuels. While this remains a major challenge, at least the power generation sources that will be used to electrify vehicles in these markets are already suitable for the 22nd Century.

  2. “Earnest”

By contrast, several regions seeking to be among the leaders in the clean energy transition — most prominently Germany, California and the Northeast U.S. (New York and New England), but also many island economies facing severe consequences from climate change, such as Hawaii — entered the 21st Century saddled with significant reliance on fossil-fueled generation capacity.

In these regions, fossil electricity is rapidly being replaced primarily with new solar and wind energy installations. Fortunately, wind and solar costs have fallen so dramatically in the past two decades that what once would have been prohibitive is now often highly cost-effective. However, because these types of renewable energy are intermittent in nature, large additions of supply are producing significant grid operations challenges. Because of the forceful political and civic commitment in these regions, it is likely that the drive for carbon reductions will continue largely undimmed despite the challenges.

  3. “Booming”

Some economies are growing so fast that virtually all economically-viable energy additions — based both on renewable energy and fossil fuel — will occur. In the face of such rapid growth, retirement of any existing fossil fuel assets is out of the question. Prime examples of this archetype include China, India, and the Arabian Peninsula.

While notable additions of clean energy infrastructure will occur here, and the per-unit carbon intensity of these economies will decline, the rate of economic growth may dominate such that absolute emission volumes will continue rising.

  4. “Overlooked”

Some of the most deprived economies of the world, including most “failed state” countries and some large borderline cases like Indonesia and South Africa, are beset by extreme poverty (and often-associated social ills such as corruption and tribal warfare) that prevents investment in renewable energy — or any form of energy, for that matter.

In economies of this type, performance on carbon metrics is a “nice to have” but not a high priority. Consequently, the energy transition will proceed painfully slowly here, and any progress will be easily overlooked.

  5. “Uncertain”

A number of regions would logically be expected to proceed promptly through the energy transition: places like Australia, France, South Korea, Japan, Ontario, and the Midwest U.S. All have added large quantities of renewables in the past two decades, and are populated with citizens that like to think of themselves as environmentally progressive.

While they may not yet realize it, these regions are at a crossroads: most of the “easy” inexpensive carbon emission reductions have been achieved, and incremental reductions will be much harder and thus more costly to achieve — especially as advocates push for retirement of low-emitting non-renewable energy sources. When this occurs, it is uncertain whether environmental aspirations will triumph over economic pragmatism in these areas.

  6. “Indifferent”

Alas, due to insufficient belief in the need to take action to mitigate climate change, some regions won’t take any substantive emission-reduction step that carries negative economic consequences. Put Russia, Eastern Europe, and the Southern U.S. in this camp.

The energy transition in these areas will clearly happen much more slowly, driven primarily by economics as lower-carbon energy approaches become cheaper than higher-carbon ones. While policy won’t drive the transition in these regions, technology innovation will: with the introduction of fracking technology to shale resources, ample supplies of low-priced natural gas are driving dirtier coal power out of the marketplace. Where renewable resources are inexpensive due to abundant sun, wind and land (e.g., Texas), they too will be added with vigor — but where that’s not the case, such as in the Deep South of the U.S., growth in renewables will be very slow.

The world cleaves into these six archetypes relatively neatly.  From a population standpoint, the planet is disproportionately weighted towards “Booming” and “Overlooked”.  In net, the global energy mix may reach its envisioned end-state by the year 2200, but given the sizable fraction of the world’s population in some of the slower-moving archetypes, unlikely much sooner than that.

Posted in Uncategorized

Voodoo Energy Economics: Understanding the Haitian Electricity System

As some of you may know, in addition to serving as President of Future Energy Advisors, I am also a non-resident Senior Fellow at the Institute for Sustainable Energy (ISE) at Boston University (BU).

Resulting from my fellowship at ISE, I was recently asked to lead a donor-funded research effort to profile the current status of the electricity sector in Haiti, including an assessment of the national utility Electricite d’Haiti (EDH), independent power producers supplying electricity to EDH, and microgrids in rural villages not served by EDH.

The resulting report, Assessment of Haiti’s Electricity Sector, is now available here for public review.

For those who are interested in energy in Haiti, I believe you will find the report to be quite useful, as an overarching public assessment of the electricity industry in Haiti was heretofore lacking.  True, many good studies are available covering certain aspects of Haitian electricity, and we certainly benefited from them.  However, we found little that integrated the various topics into one cohesive perspective.  With this report, we think this gap is fairly well closed — at least for now.

At ISE, work continues on improving electricity access in Haiti, where less than 40% of households have access to electricity — and even that access is far from reliably available 24 hours a day.  A BU student team is currently investigating options to increase adoption of electric cookstoves to build load on underutilized grids, and a next phase of effort involves estimating the economic drivers of microgrid development and operations.

Posted in Uncategorized

Is Demand Response Dead?

Earlier this month, I spoke at ACI’s Next-Generation Demand Response conference in San Diego, where attendees mingled to consider future trends of demand response (DR) activity in the electricity sector.

In the mid-afternoon of the first day, in the wake of several presentations (including mine) on the state-of-play in U.S. DR markets, I sat on a panel that was convened on the spot to deliberate a question that hung in the air:

“Is Demand Response Dead?”

Reintroduction to Demand Response

Emerging as a force in the electricity industry in the early 2000’s, DR can be viewed as a logical successor to load management (also known as load control) programs of the 1980’s.  These programs involved the following quid pro quo between electric utilities and participating customers:

  • The utility remotely modulated energy-consuming devices on the customer’s premises – air conditioning, refrigeration, or lighting – to reduce electricity volumes for brief periods at times of highest demand on the electricity system.
  • In exchange, the utility provided the customer a discount or credit on the monthly electricity bill.

For the customer, the inherent value proposition of load management is straightforward:  economic savings with minimal negative consequences (e.g., room temperatures slightly different from optimal for short durations).

For the utility, the benefit would seem less obvious.  After all, what business wants to sell less of its product?

As a regulated monopoly, an electric utility must ensure that it can provide reliable service to all of its customers at all times.  And, because of the dearth of storage on the power grid – note that nature doesn’t like keeping electrons still – the electricity system has generally been sized to meet the moments of highest collective demand, plus additional amounts to account for adverse contingencies.

Accordingly, utilities historically built power plants that ran for very few hours per year – or bought power from a neighboring utility that had done so – to meet peak demands from customers.  Electricity supplied from such marginal “peaker” plants is inevitably quite expensive, usually much more costly than the average price of electricity sold.

It was only in the 1980’s that utilities and regulators came to the realization it was both more cost-effective for customers and more profitable for utilities to install radio-controlled switches at customer premises to briefly turn off or turn down appliances, thereby modestly reducing electricity consumption during transitory peaks in demand.  Thus, load management programs came into being.

These programs were generally effective and satisfactory to all parties.  However, as standardized pre-set tariffs with non-negotiable terms, they were not price-based.  In large part, this is because electricity market prices were not transparent – not only to the end customer, but even to the electric utility itself.

Only since the 1990’s have electricity prices become reasonably straightforward to obtain.  In many regions (in the U.S. and in other countries), the electricity industry was restructured in several important respects, with the goal of creating more transparent markets and thereby introducing competitive forces to induce greater efficiencies in the sector.

As energy markets developed during the late 1990’s and early 2000’s, and competitors emerged to participate, certain new entrants discovered an entrepreneurial opportunity when prices became sufficiently high at peak moments:  customers could be paid to use less energy at that instant, and the resulting “negative demand” could then be sold into the market as a supply resource.

Thus was demand response born.

Of course, to be accepted as a legitimate supply resource by wholesale power market administrators and operators, DR requires a firm measurable commitment by the customer to instantaneously reduce demand.  Although turning down or turning off appliances is theoretically viable as a supply resource, in practice, transaction costs are prohibitive in verifying and aggregating demand reductions from many small-scale loads.

Consequently, the most common approach for DR has involved contracts with commercial or institutional customers in which standby diesel generators – installed in many buildings to maintain electricity service during grid outages – are turned on during peak periods.  Not only is the resulting supply resource easily measurable, but also it usually comes in relatively sizable increments:  at least hundreds of kilowatts, often at megawatt-scale.

Thus, portfolios of generator-based DR contracts were cost-effective to assemble and dispatch into regional power markets.  When this was discovered in the first few years of this century, the DR markets quickly took off.
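The measurement problem noted above — verifying that a customer actually reduced demand — is typically handled by comparing metered load during a DR event against a “customer baseline load” estimated from prior non-event days.  Actual tariff methodologies vary by market and are considerably more refined (eligible-day selection, weather adjustment, and so on); the following is only a simplified sketch of the idea:

```python
def baseline_kw(prior_day_loads_kw, hour):
    """Estimate the customer baseline load (CBL) for one hour as the average
    of the same hour across prior comparable (non-event) days.  Real baseline
    rules are more refined; this is a deliberately simplified sketch."""
    return sum(day[hour] for day in prior_day_loads_kw) / len(prior_day_loads_kw)

def dr_reduction_kw(prior_day_loads_kw, actual_kw, hour):
    """Verified demand reduction = baseline minus metered load during the event."""
    return baseline_kw(prior_day_loads_kw, hour) - actual_kw

# Three prior days of (hour -> load in kW); only hour 15 matters for this event
prior = [
    {15: 950.0},
    {15: 1000.0},
    {15: 1050.0},
]

# Customer fires up its standby generator during the event hour, dropping
# metered grid load to 400 kW against a 1,000 kW baseline
print(dr_reduction_kw(prior, actual_kw=400.0, hour=15))  # 600.0 kW reduction
```

This also shows why generator-based DR is so attractive commercially: the reduction is large, firm, and easy to verify from a single meter.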

Why Would Anyone Think DR Is Dead?

Today, DR accounts for about 22 GW of supply resource in the U.S.  While this represents only about 2% of the nation’s capacity base and peak demand levels, note that it’s the most critical 2%:  the 2% that occurs at the moments of greatest market tightness and highest prices.  In other words, DR is the marginal supply – and in economic markets, prices are typically set at the margin.  At peak periods, electricity markets would be tighter and prices higher – perhaps much higher – without DR.
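The marginal-price point can be illustrated with a toy merit-order dispatch.  All numbers below are invented for illustration, not actual market data:

```python
def clearing_price(offers, demand_mw):
    """Dispatch offers (price $/MWh, quantity MW) in merit order, cheapest
    first; the last offer needed to meet demand sets the clearing price."""
    dispatched = 0.0
    for price, qty in sorted(offers):
        dispatched += qty
        if dispatched >= demand_mw:
            return price
    raise ValueError("insufficient supply")

# Hypothetical supply stack; the last two tranches represent costly peakers
offers = [(25, 500), (40, 300), (90, 100), (300, 50)]
dr = [(120, 60)]  # DR offered into the market as a supply resource
peak_demand = 920

print(clearing_price(offers + dr, peak_demand))  # with DR: DR is marginal at $120/MWh
print(clearing_price(offers, peak_demand))       # without DR: the $300/MWh peaker sets price
```

Even though the DR tranche is a small share of total supply, removing it forces the most expensive peaker to set the price for every megawatt-hour sold in that interval.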

So, the advent of DR would seem to be a good story.  And while presenters and attendees at the DR conference I recently attended were not deeply depressed, nevertheless the pervading sense was that the U.S. DR markets are not healthy.

Upon further investigation, the data does indicate DR market stagnation.  Estimates from Navigant presented at the conference suggest that the U.S. DR markets have been essentially flat since 2011, and that 2015 represented a peak in volume that may not be surpassed until well after 2020.

Two primary factors have thwarted recent growth in the U.S. DR markets:

  • Some of the most active regional wholesale market operators managing DR auctions – for instance, PJM and NYISO – changed rules in ways that made it more difficult for service providers to schedule or dispatch DR resources.
  • Also, to reduce local emissions (especially in urban areas with air quality challenges), the U.S. EPA tightened restrictions on the use of standby diesel generators – the primary source of DR capacity that aggregators had been contracting in the marketplace from customers.

These dampening factors on growth were consequential.  In 2017, many DR veterans were saddened, as two of the most notable pioneers of the DR marketplace were acquired:  Comverge by Itron and EnerNOC by Enel.

Both Comverge and EnerNOC had ridden the early growth wave of DR in the early 2000’s and achieved very good exits through IPOs for their original investors.  Such successes are rare in the energy venture arena, and these two former start-ups became accustomed to the spotlight as poster children for entrepreneurship, innovation and wealth-creation.

Alas, in recent years, for the reasons discussed above, Comverge and EnerNOC had begun struggling to maintain profitable growth stories as independent entities.  Each hit the wall and was snapped up by a larger player seeking to expand its list of services to retail electricity customers.  With that, the two primary superstars of the DR market no longer shone.

Reflecting upon these facts, it’s not hard to understand why a singular question was top-of-mind for a gathering of market practitioners:  is DR dead, or dying?

DR Is Neither Dead Nor Dying, But It Is Changing

And so I found myself on a panel amongst DR thought-leaders confronting this existential question.

I confess now as I confessed then that I’m not an expert on DR markets.  Indeed, I was invited to speak at ACI’s DR conference not about DR, but rather on the topic of behind-the-meter (BTM) storage involving energy storage devices located at a customer’s premises – one of the newer and hotter games in the energy services arena.

But, I felt strongly that the subject of my talk – energy storage – was indicative of the true answer to the question being posed.  So, I threw it out for discussion:

No, DR is not dead or even dying.  At worst, its position is secured and stable.  However, I think the nature of DR is actually expanding at a fundamental level – provided that its definition is also expanded.

In reflecting further on the phrase “demand response”, I am struck by the customer-centricity of the concept.  This orientation is relatively alien for the electricity industry, whose architecture was designed a century ago to provide electricity service to customers whatever their demands might be.  In other words, the electricity sector has generally considered customers to be an exogenous factor, like weather, to be accommodated.

DR is a scheme to leverage the communal electricity grid – and its economics – to the potential benefit of particular customers on the grid.  In ceasing to think about costs and instead focusing on customer value and market prices, it was a great leap forward for the electricity industry.

But now the electricity industry is boldly entering a new phase with dramatic consequences.

Rooftop solar is enabling millions of customers to produce electricity on-site, and sell surpluses back to the grid.  And, BTM storage is allowing customers to store produced power or buy power from the grid at times of low prices either for later use or sale back to the grid at times of high prices.

In short, customers are becoming “prosumers” – sometimes producers, sometimes consumers – of energy.
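The buy-low, use-or-sell-high behavior of a prosumer with BTM storage can be sketched with a back-of-the-envelope calculation. All figures below (battery size, prices, round-trip efficiency) are hypothetical, chosen only to show the arithmetic.

```python
# A minimal sketch of behind-the-meter storage arbitrage: charge when grid
# prices are low, discharge (use on-site or sell back) when they are high.
# All prices, sizes, and the efficiency figure are hypothetical.

def daily_arbitrage_value(low_price, high_price, usable_kwh, round_trip_eff):
    """Value of one charge/discharge cycle, in dollars.

    Energy bought at the low price loses some fraction to conversion
    losses before it can be delivered at the high price.
    """
    energy_bought = usable_kwh / round_trip_eff  # kWh drawn from the grid
    cost = energy_bought * low_price
    revenue = usable_kwh * high_price            # kWh delivered at peak
    return revenue - cost

# A 10 kWh battery, 90% round-trip efficient, charging at $0.08/kWh
# off-peak and discharging against a $0.30/kWh on-peak price:
value = daily_arbitrage_value(0.08, 0.30, 10, 0.90)
print(round(value, 2))  # 2.11 per cycle
```

Modest per-cycle value at retail rates, but aggregated across thousands of such systems and dispatched against wholesale peak prices, this behavior starts to look, from the grid operator’s vantage point, very much like DR.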

Some industry observers think this will mean lots of customers “cutting the cord”, discontinuing electric utility service akin to the many who have gotten rid of wireline phone service and rely now solely on mobile phones.

While this may eventually become true for a few customers, I’m skeptical that many will ever actually leave the grid, for four reasons:

  • Critically important to most customers, electricity quality can only decrease (perhaps substantially) by disconnecting from the grid – in ways that are much less satisfactory than “dropped calls” due to poor cell service.
  • Unlike in telephony, there is no additional functionality and value (such as the ability to use intelligent apps at any time and place) that a grid-independent version of electricity service can provide.
  • For many customers, it will be cost-prohibitive or otherwise impractical to acquire enough generation and storage capability to become completely self-sufficient in electricity – an irrelevant concern for telephone service.
  • Unless and until wireless electricity transmission becomes viable (highly doubtful in my lifetime, and probably yours too), the full value of on-site energy assets – which are capital intensive – can only be monetized by transacting with other parties through the grid.

With the grid remaining a vital part of electricity supply for most customers, recall what DR truly represents:  the ability for the customer to provide services (through reduced demand) to the grid at certain times for certain prices under certain conditions.

As more customers have on-site energy equipment that needs to be optimized relative to the grid, the spectrum of what looks (to the grid) like DR also broadens.

In particular, from the standpoint of the wholesale market and bulk power system operator, BTM storage is essentially the same as backup standby generators.  Thus, companies offering BTM storage services that include aggregation and dispatch of capacity back to the grid are already de facto offering DR.

If/when other forms of distributed generation (e.g., fuel cells) become economically viable, these too will need aggregation and dispatch into the grid and wholesale markets, just as is done with DR (and BTM storage) today.

Accordingly, I think that DR-type offerings are likely to in fact grow with greater penetration of customer-sited generation and storage assets.  DR won’t just be limited to supply resources provided by price-based load control or backup generators.

So, DR is neither dead nor dying, just changing – albeit perhaps with a new semantic term warranted.  This is not just my view:  I was pleased to learn that my fellow panelists shared this general perspective.  The discussion complete, we adjourned to mingle with an audience in somewhat better spirits.

Posted in Thought leadership

The Evolving Opportunities for Fossil Power Generation

U.S. wholesale electricity markets are in the midst of significant disruption.  One author has called it “The Breakdown of the Merchant Generation Business Model”.

Due to ever-improving economics, large quantities of new natural gas and renewables capacity are being added to the power system, quickly capturing large gains in market share of the nation’s generation mix.  With the resulting surpluses of lower-cost supplies, prices for both energy and capacity are declining in most parts of the country.  In turn, many coal and nuclear power plants have become uncompetitive at these low prices.  Many of the largest owners of merchant (i.e., unregulated) generation portfolios have experienced significant financial distress, and many of the most economically-vulnerable plants are being driven into early retirement.

Through the Department of Energy, the Trump Administration in September 2017 submitted a Notice of Proposed Rulemaking (NOPR) to the Federal Energy Regulatory Commission (FERC), seeking to implement market structure changes that would provide preferential power prices to owners of generation facilities capable of storing more than 90 days of fuel inventory — in other words, coal and nuclear plants — but in early January the FERC firmly said no to such an obvious subsidization scheme for propping up mature assets.  Meanwhile, regional marketplaces across the U.S. — including PJM and ISO-NE — are considering changes to capacity pricing mechanisms, aiming to strengthen market signals to ensure adequate quantities of dispatchable resources are always available to maintain high grid reliability, especially in the wake of mass retirements.

In recognition of this market turbulence, GE Power recently asked me to write a series of short essays describing the implications of these changes on fossil power plants.  During the last quarter of 2017, the following articles were published to GE’s Transform website:

Together, I refer to this set of writings as “Mission Possible”.  For many owners and operators of fossil power generation assets, the moral is that all is not lost, notwithstanding how challenging the market environment has become.  Although repositioning may be required to enable operations quite different from historical experience, productive and profitable futures may still exist for many existing fossil power plants.

Posted in Industry perspectives, Thought leadership

Creating Shared Value: The Emerging Synthesis of Corporate Sustainability and Strategy

Major corporations find themselves at the center of conflicting demands from virtually every direction:

  • Activist investors are clamoring more loudly than ever for increasing shareholder value, often pushing for major changes
  • Customers are becoming increasingly demanding and sophisticated in selecting among an ever-growing list of choices
  • New technologies are constantly emerging, many of which are disrupting established ways of doing things — if not entire industries
  • As other companies innovate, competitive forces accelerate every day, driving out costs and driving down prices
  • Employees are managing their careers more aggressively, seeing themselves as free-agents and taking their skills elsewhere when more attractive opportunities emerge
  • Media channels and NGOs are eager to pounce on any slip-ups and convey them worldwide over the Internet, damaging reputations instantly

As innovation thought-leader Rowan Gibson says in his seminars, “the world won’t change this slowly ever again.”  Every day, the dynamism is only accelerating, and the resulting pressures are only getting more intense.

For corporations in the energy sector, facing multi-billion dollar bets on assets with lifetimes of several decades, the challenge of navigating this minefield of competing concerns is particularly brutal.

How did we find ourselves here?

The Evolution of Corporate Strategy

Although fraught with geopolitical peril as the Cold War combatants rattled their nuclear sabers, the era immediately after World War II was in many respects a wonderful time for the corporate world.  Between pent-up demand from the pre-war Great Depression and the need to rebuild much of Europe, conditions for growth were robust.

Accordingly, it was often adequate for companies to utilize simple deterministic concepts from the early days of the discipline of corporate strategy — such as the “five-forces” framework or the “structure-conduct-performance” model — to derive insights for setting strategic direction.  This is not to disparage these tools:  they remain useful, but their limitations are now better recognized.

It was the OPEC oil embargo in the early 1970’s that served as the first shot across the bow.  The resulting spike in oil prices sent companies in many industries reeling:  having unduly relied upon just one forecast of a key input that turned out to be woefully inaccurate, their day-to-day operations as well as their longer-term business strategies weren’t robust to a market environment that they had not even remotely anticipated.

In response to this situation, Royal Dutch Shell pioneered the use of scenario planning to facilitate stress-testing of corporate decisions across a wide range of potential futures that might transpire.

Since then, corporate strategy development approaches have continued to become much more sophisticated in dealing with uncertainty and complexity, which have only magnified in recent decades.

In retrospect, in that postwar afterglow, the private sector may have lost sight of a profound truth:  companies weren’t in full control of their own destinies.  Success wasn’t assured merely by continued operational optimization and minor product enhancements foisted on naive customers; outside forces applied by other parties could have significant impact on business success.

This was especially powerfully illustrated by the emergence of environmental concerns in the late 1960s — a decade of immense social change on multiple fronts.

In the U.S., global dominance in heavy industrial activity and the rise of automotive culture spawned many clearly identifiable pollution problems adversely affecting citizens, ranging from the omnipresent smog in the Los Angeles Basin to the burning Cuyahoga River in Cleveland.  A population empowered by the rebellious culture of rock’n’roll, the hard-earned successes of the civil rights movement and ongoing protests against the Vietnam War unleashed a wave of consumer boycotts and civic opposition against prominent polluters.

Ultimately, this social pressure culminated in the establishment of environmental protection regulators and regulations with real clout.  No longer could companies freely discharge their waste streams with impunity.

The Rise of Corporate Sustainability

Facing a new set of constraints on their activities, companies launched dedicated environmental initiatives — even if with reluctance, in some cases.

Initially, the corporate response was primarily tactical, oriented towards least-cost environmental compliance:  meeting government-set requirements at the lowest expense.  Frequently, this also involved public advocacy — a.k.a. lobbying — in the aim of getting policymakers to set environmental standards at more lenient levels that would consequently cost less to meet.

For a while at least, this modest approach to managing environmental matters was generally met with tacit acceptance by stakeholders.  However, because many heavy industrial activities intrinsically carry operational risks, this approach could not and did not completely eliminate the potential for exceptional pollution events.

Following incidents such as the Union Carbide Bhopal gas leak in 1984 and the Exxon Valdez oil spill in Alaska in 1989, owners and operators of major industrial activities began to recognize that simply satisfying minimum requirements — especially in developing economies where environmental standards were particularly weak or barely enforced — did not eliminate the risk of financial penalties, tightening regulations, and customer backlash if (or when) something went wrong.

Put simply, companies began to realize that they would benefit from building public goodwill through a clear pattern of proactive, more-than-required attention to environmental concerns.  Should a major problem occur, the accumulated goodwill would be of immense value.

Gradually, the notion of “corporate social responsibility” (CSR) was born.  Corporations with significant impact on their local communities began to acknowledge that part of maximizing profitability meant enhancing their “social license to operate”.  Spending a little bit extra to do more than the minimum necessary became increasingly viewed as a better path to long-term profit maximization, by increasing the willingness of local stakeholders to allow industrial activities in their backyards.

Over time, CSR morphed into the concept of “corporate sustainability”, in which corporations admitted that they needed to own up to the sometimes-negative implications of their operations on society, and to take actions to minimize these impacts — and in some cases, offset them — to alleviate the accumulation of social pressures against their business.

Today, virtually every major corporation has a substantial sustainability program, usually led by an executive officer and reported on in a glossy brochure touting the many activities being undertaken by the company to preserve environmental conditions.  In so doing, these sustainability efforts often burnish the company’s credentials sufficiently so as to meet the tests of socially-responsible investors, who otherwise decline to own shares in non-compliant companies.

Admittedly, some corporate sustainability activities can be philanthropic in nature:  spending money in ways that are not necessarily strategic to the business, but that improve perceived corporate citizenship.  Funding of community centers, academic scholarships or vocational training programs does help the local population and is appreciated, but often has weak strategic linkages to advancing the interests of the business.

And therein lies the rub.  An increasingly discerning and cynical public is beginning to see through blatant examples in which corporations attempt to “buy” good public relations with million-dollar investments to better enable billion-dollar business opportunities.  The phenomenon of “astroturfing” — in which a hitherto-unknown advocacy organization, created and funded by corporate interests, misleadingly suggests grassroots support for a for-profit agenda — is a particularly egregious practice in this vein that can badly backfire.

At bottom, CSR and corporate sustainability initiatives remain subject to a critical precept of traditional corporate strategy, wherein the company first defines its preferred interests, and then develops plans — including plans to engage with the public — that will maximize prospects to produce desired outcomes.  This approach is intrinsically sequential in nature:  putting the company first, working in isolation to determine the most appealing path(s) forward, and only subsequently working with others outside the firm.

Framed this way, a provocative question naturally emerges:  what would happen if a company and its external stakeholders worked together in developing and pursuing plans to produce a mutually-beneficial future?

An Emerging Synthesis:  Creating Shared Value

In the past decade, thought-leaders have begun forging a new synthesis addressing this question:  an integrative strategy and sustainability approach called “creating shared value” (CSV).

Since CSV is a nascent theory, some of its basic precepts are still open to interpretation.  Also because it’s still early days, the jury is out on the effectiveness and impact achieved when a company commits to CSV.

Even so, in my view at least, CSV opens the door to what could be a tectonic shift in the way business operates.  If that turns out to be the case, early corporate adopters of CSV could find themselves riding a long wave of success and enduring leadership, whereas those who fail to make the shift could be at risk.

Through the millennia, as a basic element of any contest between subsets of humans, strategy has always been developed in-house as a means of gaining an advantage to beat other competitors.  It is thus historically unnatural to conduct “open-source” strategy.

But the accelerating pace of change in our world today is forcing new ways of thinking.  Open innovation, in which companies co-create a business future with outsiders, is becoming more commonplace.

CSV takes this to another dimension, extending beyond for-profit collaborators to bring external stakeholders into strategic alignment with the business.

Companies have always had to care about and create value for their customers and their employees — or else they simply wouldn’t last very long as enterprises.  CSV expands the set of relevant parties to include constituencies that are neither customers nor employees.  And CSV pushes companies to weigh the concerns of these constituencies alongside, not subservient or secondary to, the company’s own concerns.

Successful corporate pursuit of CSV should result in an enduring bond between the company and its associated social vectors so that all stakeholders benefit when one party benefits, with no party benefitting at another’s expense.

CSV is thus the grandest of grand strategy, while at the same time taking sustainability to its logical extreme:  a company’s prospects are only sustainable over the long-term if the prospects for the communities within which it operates are also sustainable for the long-term.

Recently, I was privileged to assist a large energy company in establishing the primary parameters of its CSV strategy:  the Board had already firmly decided to adopt CSV, but wanted additional detail on what this would mean for the business.

In addition to better defining the general societal themes in which the company would focus its CSV efforts for joint pursuit with external stakeholders, the engagement quickly made apparent that pursuing CSV implied ongoing reinforcing activities up and down and across the corporation, including:

  • Extensive internal and external communications of the intentions and rationale underlying the decision to adopt CSV
  • Tracking and measurement of key progress indicators, since corporate accountability is essential for CSV to be deemed credible
  • Allocation of considerable resources to work constructively and intensively with many outside parties, who may be pursuing differing goals with radically different measures of success

Most crucially, CSV demands a degree of openness with outsiders that many corporations have historically found highly uncomfortable.  The adoption of CSV quickly separates companies with merely good intentions from companies actually dedicated to allowing other parties “inside the tent”, potentially including participation in sensitive deliberations and decision-making activities.  With CSV, there is virtually no room for “greenwashing”.

Consequently, CSV is not something that every company will want to undertake. Then again, CSV offers the potential to create much more long-term value for some companies than for others.

The best candidates for CSV are large industrial corporations with long-lived operations that are intrinsically prone to producing significant environmental impacts.  Most major players in the energy sector fit this description.  Such companies can more easily afford the resources required for a meaningful CSV initiative, bearing the certainty of additional short-term costs in exchange for the likelihood of long-term benefits.  They also have an enduring stake in their communities, and for better or worse can easily become viewed as the “bad guys” if not careful in their dealings with the public.

From our review of activities to date in the corporate world, those visibly pushing ahead on CSV include:

  • BASF (FWB: BAS), whose “AgBalance” methodology provides a balanced holistic scorecard spanning economic, environmental and sociological concerns for application to agricultural activities in the aim of “protecting the future of food production”.
  • Microsoft (NASDAQ: MSFT), whose “Cloud for Global Good” initiative involves a multi-pronged effort involving investments and advocacy with various parties to bring “trusted, responsible, inclusive” cloud computing to everyone.
  • Nestle (SIX: NESN), whose shared value framework consists of 42 corporate commitments spanning three priority areas where the company “can create the most value and make the most difference”:  nutrition, rural development and water.
  • Schneider Electric (Euronext: SU), whose “Access to Energy” program entails the investment of both capital and staff time to nurture the success of emerging ventures aiming to electrify rural villages in developing economies worldwide.

As these examples illustrate, CSV has ample potential to generate substantial financial benefits to the companies, while also addressing important social concerns.

A New Era of Shared Value Creation in Energy?

More strikingly, these CSV examples hint at the possibility for a fundamental change in the role of business in society, wherein companies transcend solely financial interests and go far beyond mere philanthropy to meaningfully address a broad spectrum of fundamental challenges facing humankind in the 21st Century.

In the face of declining citizen trust in governmental and social institutions, CSV represents an opportunity for corporations to elevate their position in society.  In stark contrast to the robber-barons of the late 19th Century, exploiting less advantaged constituencies in the relentless maximization of near-term profits, 21st Century corporations can utilize CSV to become a reliable civic partner that aids society in navigating an increasingly uncertain and volatile future.

CSV can thus be seen as a corporate embodiment of an ancient aphorism:  with great companies, comes great responsibility; to great companies, go great rewards.

This is especially the case for corporations in the energy sector, where the difference between valued social contributor and reviled social predator can be vast.

Over the coming years, it will be interesting to follow which companies in the energy arena will aspire to such greatness and most earnestly pursue CSV as a means of securing a place that can take them (and us) to the 22nd Century.

It will be even more interesting to see which companies come to view that decision as critical for putting them on a better path for enduring viability.  Almost certainly, those companies will point to the following key success factors:

  • Clarity of vision on which areas of social impact are declared as the focal point of CSV activities
  • Sincerity of commitment to CSV, as reflected by adequacy of dedicated resources
  • Superior execution on the tangible initiatives being pursued under the umbrella of CSV

Only time will tell, but the emergence of CSV as a management principle gaining traction among multinationals is intriguing — and cause for cautious optimism.

Posted in Thought leadership

A Roaring Li-ion: Reflections on the State of Battery Technology

In mid-March, I escaped from snowy Boston and traveled to hot and sunny Phoenix to join about 200 others attending the annual meeting of NAATBatt, a trade association that promotes advanced electrochemical battery technologies.

Given my interests in energy storage, I wanted to go upstream to get a better handle on the current state of battery technologies, as innovations on this front are at the root of the rapid increase in grid-connected energy storage activity.

The conference provided an interesting cross-section of perspectives ranging from national laboratories engaging in early research, through start-ups aiming to commercialize developing technologies, up to established battery manufacturers.  While the exhibition floor was small and held few booths, the presentations were generally illuminating.

The key insights I gleaned from my time at NAATBatt were:

  • Lithium-ion (Li-ion) batteries dominated the discussion. Upon reflection, this shouldn’t be that surprising:  with significant cost declines in recent years, owing to the advancement of electric vehicle (EV) markets, Li-ion sales have grown rapidly.  And under the expectation of continuing cost reductions for the foreseeable future, Li-ion will expand beyond EVs to capture ever-increasing shares in an ever-increasing number of grid-connected market segments.  To the extent that other battery chemistries were discussed (e.g., vanadium, nickel-iron, sodium-sulfur, zinc-air), presenters were often thrust into the defensive position of explaining how their battery technology is superior to Li-ion for the particular application being targeted.

  • Founded just a few years ago, the little-known Chinese firm CATL stands poised to leapfrog many of the current market leaders (e.g., Tesla/Panasonic, LG, Samsung, BYD) to soon become the world’s largest producer of Li-ion batteries. A video of CATL’s massive new manufacturing facility revealed not only the scale of the commitment, but also the advanced technologies – mostly robotic, with minimal human labor input – involved in precision fabrication and assembly of batteries at high volumes to minimize costs.

  • There is an enormous base of intellectual property available to be licensed in the battery sector. While there are probably a few nuggets lying in the rubble, I would guess that most of the patents for license or sale are of low value, as anything profound or easily commercialized has probably already been exploited, leaving only the dregs unclaimed.  Moreover, it would seem that the patent space is extremely crowded, implying that patents need to be drawn extremely narrowly in order to be issued – in turn suggesting that any issued patent might be circumvented with clever engineering.

  • Start-up companies continue to emerge in the battery sector, despite the downturn in availability of venture capital from investors that are purely motivated by attractive financial returns. Given that trend, and with the expected decline in government funding under the Trump Administration, these companies will be pressed to raise capital either from (1) patient investors with below-market return expectations that are committed to the environmental benefits afforded by increased utilization of batteries, or (2) strategic investors at corporations who stand to benefit substantially in their core businesses from the success of a new battery company.  Even so, raising enough capital will be a challenge, because succeeding with a new venture commercializing a novel battery chemistry is likely to require much more capital than the typical digital/software start-up.

  • The continued exponential growth in Li-ion battery demand will dramatically affect global markets for lithium and other metals (e.g., cobalt), which heretofore have been niche. Prices for these commodities will generally experience upward pressure, especially during periods of tightness when supply expansion lags demand increases.  As prices rise, expansion of mining activities for these commodities will require major influxes of capital, and well-positioned low-cost producers stand to earn attractive returns.

  • In theory, the challenges of satisfying lithium supplies for new battery production could represent an opportunity for recycling of aging Li-ion batteries as they reach the end of their useful lives in the first generation of electric vehicles, which will likely face retirement in the 2020s. In practice, recycling of Li-ion batteries may be prohibitively expensive, as challenging disassembly steps would first be required before the content could be sufficiently sorted for reprocessing.  So-called “second-life” utilization – involving reconditioning of Li-ion batteries after their “first lives” – might be practical in certain circumstances, but only if the duty cycle during the first life is well-understood.

  • Although Li-ion captures virtually all the growth and attention in today’s battery marketplace, good old lead-acid batteries still account for most of the $65 billion annual battery industry – at least for the next few years. While Li-ion will likely take over the leading position by the 2020s, lead-acid batteries are simply so inexpensive and so well-proven that they will remain viable for years to come – especially in applications for which depth of discharge, high energy density, or high power density are not critical.

It’s impossible to attend all of the various battery conferences held annually around the world, but if you can only attend just one, NAATBatt is a good candidate.  The NAATBatt event planners are savvy:  they pick very nice venues (next March’s meeting will be at the Hyatt Regency Hill Country Resort outside San Antonio), do a great job on catering food and drinks, and organize an extensive program of events for spouses, so that you can plausibly make a vacation of it.

By spring of 2018, I suspect that the state of battery technologies and markets will have changed appreciably, as things are happening quickly.  However, I’m willing to bet that Li-ion chemistry will still be attracting the lion’s share of attention.

Posted in Conference summaries

Hydro: The Forgotten Renewable

If you were to ask random Americans on the street, “What is renewable energy?”, most of those who respond with something other than “I don’t know” would almost certainly reference solar or wind energy.

Wind and solar are clearly the forms of renewable energy that have most prominently captured the public’s imagination.  In the late 1970s and early 1980s – in the wake of the oil crises, the rise of environmentalism, and the backlash against nuclear power – wind and solar promised superior high-tech solutions to the economic and ecological challenges posed by continued reliance on the energy approaches that had powered the emergence of the modern lifestyle we now take for granted.

In short, “renewable energy” became synonymous with “new energy”, or even more commonly, “alternative energy”.

But, “alternative” to what?  Alternative to all the forms of energy that had long been used to supply our ever-growing demands for energy:  oil, coal, natural gas, nuclear, and…

Hydroelectricity?

Well, what about hydro?  It certainly doesn’t fit the descriptor “alternative” – dams have been used to generate electricity since about 1880 – but it clearly qualifies as “renewable” energy by any reasonable definition:  zero-emission energy production from naturally-occurring, replenishable sources.

Yet, when it’s semantically important to be considered “renewable” energy, hydro often doesn’t get included.

The “renewable” distinction has been very important in the development of the U.S. solar and wind sectors.  For the past two decades, solar and wind energy projects developed in the U.S. have critically benefitted from two sets of policies:

  • Federal tax incentives – most notably the production tax credit (PTC) for wind and the investment tax credit (ITC) for solar – which substantially reduce the effective cost of new projects.
  • State renewable portfolio standards (RPS), which require utilities to procure a specified share of their electricity from qualifying renewable sources.

In contrast, new hydro projects have not been afforded comparably favorable tax treatment and, for the most part, do not qualify as “renewable energy” additions for the purposes of RPS compliance.  As a result, hydro hasn’t experienced the construction growth that the wind and solar markets have enjoyed:  for the last twenty years, the installed base of U.S. hydro has remained essentially flat, at about 100 gigawatts.

I used to think that this lack of growth in U.S. hydro generation was because virtually all attractive sites had already been developed, having been extensively picked over in the first half of the 20th Century.

I was wrong.  In early 2016, the Department of Energy released an overarching assessment of the U.S. hydro industry called Hydropower Vision.  The report’s punchline:  nearly 50 gigawatts of new hydro potential could reasonably be developed in the U.S. by 2050, creating substantial economic and environmental benefits for Americans.
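To put that 50-gigawatt figure in perspective, here’s a back-of-the-envelope calculation of the annual generation it implies.  The 40% average capacity factor is my own illustrative assumption (roughly typical of the U.S. hydro fleet), not a number from the report:

```python
# Back-of-the-envelope estimate of annual generation from the
# Hydropower Vision growth potential.  The 40% capacity factor is an
# illustrative assumption, not a figure from the DOE report.
new_capacity_gw = 50        # developable U.S. hydro potential by 2050
capacity_factor = 0.40      # assumed average capacity factor
hours_per_year = 8760

annual_twh = new_capacity_gw * hours_per_year * capacity_factor / 1000
print(f"~{annual_twh:.0f} TWh per year")
```

With total U.S. electricity consumption on the order of 4,000 TWh per year, roughly 175 TWh would amount to a few percent of national demand – a meaningful prize.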

Hydro Beset by Onerous Approval Process

Alas, the prospects for new hydro development in the U.S. are daunting.  Rather than being aided by supportive policy to encourage the development of new renewable energy, the hydro sector has in fact faced a litany of obstacles impeding additional hydro generation.

No obstacle is more fundamental than the onerous process now required to obtain government approval for constructing a new hydro project in the U.S.

Admittedly, a new hydro project can significantly impact a local ecosystem, and it is only proper that potential impacts be duly considered before construction begins.  A hydro project can affect fish migration and other wildlife activity, and can submerge sizable swaths of land for decades.  This flooding of valleys can also cause substantial human displacement:  more than a million people are estimated to have been forced to relocate by Chinese authorities to make way for the massive Three Gorges Dam.

However, the key word in that last sentence is “massive”.  Yes, megadams involving gigawatt-scale development usually have a major footprint.  In contrast, smaller-scale and lower-head dams, especially those of run-of-river design, have much more limited effects.  With the blessing of several environmental advocacy groups, the Low Impact Hydropower Institute exists to certify that new hydro projects adhere to basic principles minimizing adverse effects on the local ecosystem.

Even when developers have amply demonstrated concerted efforts to mitigate impact, most opportunities to construct new dams in the U.S. remain subject to a highly convoluted approval process involving multiple state and local authorities as well as several Federal parties – most centrally the Federal Energy Regulatory Commission (FERC), which licenses non-federal hydro projects, along with agencies such as the U.S. Army Corps of Engineers and the U.S. Fish and Wildlife Service.

Although this diverse set of authorities has acted with the best of intentions – seeking to ensure that stakeholder concerns are reasonably considered before a project is launched – the resulting arrangement is a byzantine, overlapping, and interlocking lattice of regulatory requirements.

In turn, this makes new hydro development in the U.S. incredibly time-consuming – think decades, on average – and hence prohibitively expensive.

By comparison, new wind and solar projects – and even new gas-fired powerplants – can typically be sited, permitted, licensed and constructed in the space of a couple of years.

Facing this major economic disadvantage, it’s no wonder that power project developers have generally shied away from tackling new hydro opportunities in the U.S.

To overcome the obstacles to new project development and capture the multi-gigawatt prize of undeveloped hydro opportunities in the U.S., two things must happen:

  • New hydro projects must become less expensive.
  • Hydro generation must transcend low-priced energy markets.

On the first point:  reducing the costs of new hydro projects…

Clearly, as suggested above, a high priority for debottlenecking new hydro development in the U.S. is regulatory reform.  This has long been recognized as an important issue offering many opportunities for improvement – witness the Hydropower Regulatory Efficiency Act of 2013, signed by President Obama.  Given its general stance of reducing regulatory burdens on business, the Trump Administration could be a catalyst for more all-encompassing reforms that dramatically streamline hydro development.

In addition, there are likely to be significant technological advancements that could reduce the physical costs of building new hydro projects in the U.S.

Beyond bringing advanced materials and improved manufacturing techniques to otherwise very mature hydro products and equipment, real opportunities exist to standardize hydro site design and management.  Note that most of the investment associated with a conventional new hydro project is related to civil works – building dams custom-designed for a particular site – meaning that hydroelectric approaches less reliant on expensive dam infrastructure offer the promise of radically lower cost.

Regarding the second need:  realizing higher value associated with hydroelectric generation…

Realizing the Full Economic Value of Hydro

A very important transition in U.S. electricity markets may well be in the offing.  For the hydro sector, it will be vital for owners and operators of hydroelectric assets to be well-positioned for those changes.

Wholesale markets for power generation are regional, and while each region has defined its market rules to match its particular circumstances, the primary product is generally “energy” (measured in kilowatt-hours), representing power delivered to the grid over a period of time.  Secondary products – such as “capacity” (measured in kilowatts, megawatts or gigawatts) and a series of technically-nuanced products necessary for grid operations, collectively called “ancillary services” – are primarily instantaneous in nature.
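To make the distinction concrete, here’s a minimal sketch of how a single plant’s annual revenue might split between the “energy” and “capacity” products.  All plant parameters and prices below are hypothetical round numbers, not actual market data:

```python
# Hypothetical illustration of the two main wholesale products.
# All figures are invented round numbers, not real market data.
capacity_mw = 100            # nameplate capacity of the plant
capacity_factor = 0.40       # fraction of the year it actually runs
energy_price = 30.0          # $/MWh: the "energy" product
capacity_price = 50_000.0    # $/MW-year: the "capacity" product

energy_mwh = capacity_mw * 8760 * capacity_factor
energy_revenue = energy_mwh * energy_price        # paid per MWh delivered
capacity_revenue = capacity_mw * capacity_price   # paid for being available

print(f"Energy revenue:   ${energy_revenue:,.0f}")
print(f"Capacity revenue: ${capacity_revenue:,.0f}")
```

The point of the split:  energy revenue depends on how much the plant actually runs, while capacity revenue is earned simply for being available – which is why declines in either price stream squeeze generators’ returns.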

At the risk of oversimplification, “energy” is a commodity product, whose price is pushed down by the regional presence of lots of power generation capability with low variable costs.  In general, energy prices across the U.S. have been on the decline, primarily driven by two factors:

  • The shift from coal-fired to gas-fired power generation, coupled with the glut of low-cost natural gas in the wake of the shale boom.
  • The addition of large amounts of wind and solar energy, which have zero variable cost.

Also in general, prices in U.S. regional capacity markets have been declining, due to increasing surpluses in installed generation.

Between declining capacity prices and declining energy prices, it is becoming difficult for owners of existing power generation assets – much less developers of new power generation assets, who need their investment costs recovered – to earn decent returns.

In the meantime, grid operators in many regional power markets are facing a growing list of operational difficulties.  In particular, as more solar and wind energy is added to the power mix, the volatility and uncertainty of generation supply increases – for the simple reason that sometimes the wind doesn’t blow and the sun doesn’t shine.

Dispatchability – the ability to modulate power supply from a generator up or down at an operator’s discretion – is becoming more valuable.
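As a toy illustration of what dispatchability means in practice – using invented hourly numbers, not real grid data – a dispatchable unit like hydro can be ramped at the operator’s discretion to fill whatever gap variable wind and solar output leaves behind:

```python
# Toy illustration of dispatchability: a dispatchable unit (e.g. hydro)
# is ramped up or down to cover the residual ("net") load left after
# variable generation.  All values are invented hourly MW figures.
load = [80, 85, 95, 100, 90, 85]       # demand each hour
wind_solar = [60, 70, 40, 20, 55, 65]  # variable renewable output
hydro_capacity = 90                    # max output of the dispatchable unit

dispatch = []
for demand, variable in zip(load, wind_solar):
    gap = demand - variable            # residual load to be covered
    # the operator sets hydro output at its discretion, within limits
    dispatch.append(min(max(gap, 0), hydro_capacity))

print(dispatch)  # → [20, 15, 55, 80, 35, 20]
```

Note how the dispatchable unit’s output swings hour to hour as renewable output varies – exactly the flexibility that wind and solar themselves cannot provide.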

Yet, prices for the ancillary services associated with the notion of dispatchability are generally not increasing.  This is because these services are, for the most part, not subject to market forces but rather have been fixed by prior agreement.  And, these prices were set in a bygone era, when dispatchability was much less scarce and hence less valuable.

At root, hydro occupies a unique position:  unlike wind and solar, it is dispatchable; unlike gas-fired electricity, it is renewable.  Hydro should thus be a premium product, commanding higher prices than competing generation alternatives.

Today, that is not the case.  But it is increasingly clear that wholesale energy markets will be transformed in the coming years.  A number of prominent power generation owners and developers are struggling financially.  Without change, there won’t be sufficient financial viability to maintain – much less build – generating assets over the long haul.  Something’s gotta give.

Those responsible for designing and implementing wholesale electricity markets – especially the aforementioned FERC – are already beginning to contemplate entirely new approaches to accommodate and promote the desired electricity grid of the future:  one with more renewables, more resilience and more reliability.

Earlier this month, I was fortunate to be in D.C. to attend the annual gathering of the U.S. hydro sector, Waterpower Week, organized by the National Hydropower Association.  Many of the speakers mentioned the need for hydro to be able to access more lucrative revenue streams reflecting hydroelectricity’s truly higher value.  Meanwhile, across town, many experts were then congregating at a technical conference convened by FERC to discuss possible changes to wholesale market structures.

This is a topic about which I intend to write a future blog post, as it is too complex and lengthy to delve into here.

But for those active in the hydro sector, it’s clear that this will be an important front to pursue actively.  With success in redesigning wholesale market pricing and in reducing the costs of new projects, hydroelectricity could see growth it hasn’t experienced in the U.S. in fifty years.

Posted in Industry perspectives