Abstract

This article analyses how a forced transition to low-carbon energy impacts the innovation of new energy technologies. We apply the insights to nuclear fusion, potentially a large provider of carbon-free energy, currently attracting billions in private investment. We discuss the ‘fastest-feasible-growth (FFG)’ curve for transitions: exponential growth followed by linear growth, where the rate of the latter is limited by the inverse lifetime of the installation. We analyse how innovation is affected if, during rapid deployment, a technology progresses through several generations. We identify key timescales: the learning time, the generation time, the build time, and the exponential growth time of the early deployment phase, and compare these for different energy technologies. We distinguish learning rate-limited and generation time-limited innovation. Applying these findings to fusion energy, we find that a long build time may slow deployment, slow learning, and promote early technology lock-in. Slow learning can be remedied by developing multiple concepts in parallel, and a probabilistic analysis of value shows that parallelizing the development of many concepts is indeed the optimal strategy. This concurs with the present surge of private investment in multiple concepts. For this strategy to be successful, the build time of the power plant must be minimized. This requirement favours concepts that lend themselves to modularization and parallelization of production and assembly.

Lay Summary

The energy transition must be realized fast compared to the lifetime of energy systems. Innovation often comes as a sequence of generations: wind turbines, for example, become larger in steps. But if the build time of a single unit is many years, the upscaling cannot wait for lessons learned, resulting in poor innovation. To accelerate learning, multiple concepts must therefore be developed in parallel. That is expensive, but it is the economically rational approach. For nuclear fusion, this means that instead of ‘picking the winner’, five to ten concepts must be developed in parallel. Fusion start-ups, backed by private investment, are collectively doing exactly that.

Introduction

The world is in the early phase of the transition from a fossil fuel-driven economy to one based on low-carbon power generation. The question whether the world is going to meet the goals of the Paris agreement [1] of 2015 becomes more urgent with every year that passes without the required action and progress in decarbonization (The impact of climate change is in the news every day. To give one official quote: in his closing speech at the COP27 conference (Nov 2022, Sharm el Sheikh), the UN Secretary-General stated: “But let’s be clear. Our planet is still in the emergency room. We need to drastically reduce emissions now—and this is an issue this COP did not address. A fund for loss and damage is essential—but it’s not an answer if the climate crisis washes a small island state off the map—or turns an entire African country to desert. The world still needs a giant leap on climate ambition. The red line we must not cross is the line that takes our planet over the 1.5° temperature limit. To have any hope of keeping to 1.5, we need to massively invest in renewables and end our addiction to fossil fuels.”). Whereas the important role foreseen for photovoltaic (PV) and wind power is undisputed [2], that of nuclear fission remains a subject of debate. Independence from other nations or blocs for the energy supply has moved to the top of the political agenda, and its value to society is evidenced by the premium price paid for it (see e.g. the unprecedented hike of the gas price following the start of the Russian invasion of Ukraine). In this context, it is meaningful that the USA, in a speech by special envoy Kerry at the COP28 summit, announced that it foresees an important role for nuclear fusion and that it wants to lead that development [3].

Unlike nuclear fission, nuclear fusion is a still undemonstrated energy source. While it has the theoretical potential to provide a sizeable share of the global energy demand [4], it has played a minor role in scenarios for the decarbonization of the energy system, being projected too far into the future. This is a valid reasoning for the government-sponsored programmes, which plan for demonstrator projects by mid-century [5], despite recent scientific breakthroughs (On March 9, 2022 the EUROfusion organization announced a new record of fusion energy generated in the Joint European Torus, which received worldwide media coverage: see https://euro-fusion.org/eurofusion-news/european-researchers-achieve-fusion-energy-record/ retrieved 26 April 2024; on 13 Dec 2022, Lawrence Livermore National Laboratory announced it had achieved fusion ignition in the National Ignition Facility: see https://www.llnl.gov/news/lawrence-livermore-national-laboratory-achieves-fusion-ignition retrieved 26 April 2024, and Zylstra, A. B., Hurricane, O. A., Callahan, D. A., Kritcher, A. L., Ralph, J. E., Robey, H. F., … Zimmerman, G. B. (2022). Burning plasma achieved in inertial fusion. Nature, 601(7894), 542–548. doi:10.1038/s41586-021-04281-w; see also e.g. Wurzel, S. E. and Scott, C. H., Progress toward fusion energy breakeven and gain as measured against the Lawson criterion, Physics of Plasmas 29, 062103 (2022)). But today we see an upsurge of private companies that promise to commercialize fusion power on a much shorter timescale [6]. The private funding of these companies now exceeds that by governments, with several companies having raised hundreds of millions of dollars, some in excess of a billion [7]. Following suit, several governments, including the USA, the UK, and Germany, have recently announced significant steps to accelerate the introduction of fusion power (https://www.whitehouse.gov/ostp/news-updates/2023/12/02/international-partnerships-in-a-new-era-of-fusion-energy-development/; https://assets.publishing.service.gov.uk/media/65301b78d06662000d1b7d0f/towards-fusion-energy-strategy-2023-update.pdf; https://www.bmbf.de/SharedDocs/Publikationen/de/bmbf/7/775804_Positionspapier_Fusionsforschung.html).

These developments raise the question whether a technology that is still pre-demonstrator can be scaled up fast enough to make a meaningful contribution to the energy transition. And, more particularly, how it can still learn and innovate during such rapid upscaling.

When a new technology enters the market, it must grow fast and innovate at the same time. An interesting example is the introduction of the smartphone: there are generations (Apple conveniently numbers them: iPhone 1, iPhone 2, ...) that got more advanced while the market was rapidly expanding. Three characteristic times in this process are (i) the time between two generations; (ii) the doubling time during the early exponential growth at the start of the S-curve; and (iii) the lifetime of the product. In the case of the smartphone these times are all similar, at 1–2 years. That means that if every year a more advanced generation is launched, demand is guaranteed, because the previous generation is about to retire.

There is a fourth determining timescale, the learning time, i.e. the time needed, after the launch of generation N, to develop new ideas, prototype them, and evaluate an improved concept. For smartphones, the innovation cycle time is 4–6 weeks [8], hence an improved model can be developed and taken into production in time for the launch of the next generation. This situation is ideal for spurring innovation as well as commercial success. How does this work out for energy technologies and the energy transition?

It is sometimes suggested that a new energy technology such as solar PV, once it reaches grid parity, will conquer the market as fast as the smartphone did. But if we look at the characteristic times, these situations are not comparable at all. Energy technologies—wind turbines, solar panels, hydro- or nuclear power plants—typically have a long lifetime. This implies that, eventually, the replacement market is limited, which places an upper bound on the required industrial capacity, and therefore on the speed with which the transition can be realized. This limit does not come into play in the case of the smartphone, owing to its short lifetime.

Looking at the other determinants, we see that the time between generations varies greatly between energy technologies. For PV, generations form almost a continuum. Wind has stepped up the unit size of series-produced turbines every few years. Generally, a small unit size is conducive to efficient learning [9]. Fission, on the other hand, characterized by large unit size and bespoke power plant designs, has seen only three generations in 70 years [10], the ubiquitous Gen2 having dominated production between 1965 and 2000. Hence, for several decades, in essence the same reactor concepts have been used, and this technology exhibited little learning [11, 12].

The innovation cycle time, too, differs greatly between technologies. For PV, incremental improvements of the production method can be tested on a batch-to-batch basis. For nuclear technologies, the long build time and the large unit size (and hence small numbers) mean that innovations can only be tested once a new model has been built, commissioned and operated for a few years, which severely limits the pace of innovation [13]. As we saw, fission Gen2 has been the industry standard for decades.

The aim of this article is to analyze what a good innovation strategy for nuclear fusion looks like. Fusion is an energy technology whose inter-generation time and lifetime are both long, yet which needs to scale up fast enough to realize, within a reasonable time, the roughly 10 000 plants needed for a significant contribution to the energy system. How can the looming risk of technology lock-in in such a scenario be mitigated? How, in such a situation, do ‘platform’ innovation strategies (such as that of SpaceX) compare to the ‘bespoke’ [14] innovation processes typical of government-run development?

This question is particularly relevant now, because of the upsurge of private parties who—using a wide spectrum of different approaches—aim to bring fusion energy to the market within a decade. Should governments and private industry work together in a mission-driven program [15], as advocated by Mazzucato [16]? Should governments ‘pick the winner’, as has been the strategy for the past decades, or rather work together with private initiatives in a portfolio management approach—a strategy that appears to have been embraced by the USA and the UK in 2023?

To answer these questions, this article is structured as follows. In ‘The fastest feasible transition path’ section, we discuss the shape of the S-curve that describes the introduction and deployment of new energy sources. We show that the total industrial capacity places constraints on the shape of the S-curve of deployment, and we describe the fastest feasible transition curve. This curve shows exponential growth over several orders of magnitude before the deployment becomes linear and eventually saturates. Importantly, the initial exponential growth phase is when—according to Wright’s law—most of the learning needs to happen. This raises the question how innovation during rapid exponential growth can best be realized.

In ‘Innovation during rapid growth’ section, we consider the different time scales that characterize the development of a new technology. This provides a basis for a systematic comparison of technologies such as wind, PV and nuclear, and indeed others such as the smartphone. We observe that, depending on the relative time scales, learning can be limited by the generation time (if that is long, as it was for e.g. fission) or by the learning rate itself (e.g. if the innovation cycle takes a long time).

In ‘Estimation of the optimal parallelisation: Case study of nuclear fusion’ section, we apply this analysis to the case of nuclear fusion. Using the fastest feasible growth curve as a guideline, we optimize the value of fusion power as a function of the number of different technological concepts that are explored in parallel. This results in a value-driven rather than a science-driven strategy.

In the discussion, we reflect on the assumptions underlying the analysis and on the consequences of our findings for the strategy of fusion deployment.

The ‘fastest feasible’ transition path

Forced transition

Climate change demands that the transition to low-carbon energy be realized by 2050. In that time, fossil fuels need to be phased out by force, while all required low-carbon energy technologies are deployed at the fastest possible rate. This creates a situation of virtually ‘infinite market pull’: there are no regular market dynamics; society will buy the product in whatever quantity is made available until the transition is realized. It is a transition from one quasi-steady state to another, in the sense that all time derivatives before and after are much smaller than during the transition. The transition must be faster than the typical lifetime of the existing energy infrastructure, which means that natural replacement will not meet the required pace.

To illustrate this, we refer to the pledge at COP28 by 20 countries to triple the total installed nuclear (fission) power by 2050 [17]. That would entail bringing ~1000 new fission power plants online within 25 years, an average production rate of 40 per year, and well in excess of that by the end of the 2030s. The present capacity of the nuclear industry is about five plants per year, hence this industry needs to grow by an order of magnitude in the coming decade. But the industrial capacity built up by that time will be large enough to maintain, in steady state, a fleet of thousands of power plants. If these are not realized, the industry will have to fold back again, only to be rebuilt a few decades later.

Generic ‘S-curve’ growth will lead to oscillating industrial capacity after a forced transition

The introduction of a new product or technology is commonly described by a so-called S-curve: a slow start, followed by a linear growth phase that rolls over into a steady state. Indeed, S-curves have been used widely in the literature to describe and analyze the energy transition (see e.g. [18]). These analyses consider factors that influence the speed of the transition—and there can be many, such as the market demand, the availability of raw materials, the workforce, legislation, etc—and sometimes take an empirical and/or probabilistic approach to account for the uncertainty of such factors [2, 19].

Figure 1. The logistic function (black dashed line) describes an S-curve type transition. The industrial capacity (the other curves) required to realize it depends on the lifetime of the product. Due to the shape of the S-curve, the industrial capacity will go into oscillation with a period equal to the product lifetime. The faster the transition is compared to the product lifetime, the larger the oscillation will be. The ‘fastest feasible growth curve’ discussed in ‘The fastest transition path that avoids oscillation’ section realizes the transition fastest without resulting in oscillating production.

However, when it comes to a forced transition, we can also ask the much simpler question: ‘how fast can we make the transition happen if all these factors can be neglected?’ Because, as we saw in the example of the pledged tripling of fission power above, there are still limiting factors, having to do with the speed at which an industry can grow and with the industrial capacity needed after the transition.

Figure 1 shows the industrial capacity as derived from a generic S-curve, for which we took the logistic function, factoring in the lifetime of the product. We compare three conditions: the product lifetime shorter than the transition time, similar to it, or two times longer. We see that if the transition time is two times longer than the lifetime, the required industrial capacity grows smoothly to the saturation level, with a small residual oscillation. However, if the transition time is similar to or shorter than the lifetime, large-amplitude oscillations result, with a period equal to the lifetime. This stands to reason: after the transition has been achieved, the demand for new products wanes and only picks up when the oldest products need replacement. Such oscillations are undesirable; fluctuating demand, at this scale, would destabilize an industry that employs tens of millions of people.
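This mechanism is easy to reproduce numerically. The sketch below (our own illustration, with assumed round numbers, not the authors' code) derives the build rate implied by a logistic deployment curve when every unit is replaced after a fixed lifetime; with a lifetime comparable to the transition time, the build rate oscillates with the period of the lifetime, as in Fig. 1.

```python
import numpy as np

# Industrial capacity required to realize a logistic S-curve when every
# unit is replaced after a finite lifetime. Units retired at time t are
# those built at t - lifetime, so the required build rate obeys
#   c(t) = dN/dt + c(t - lifetime).
dt = 0.1                                   # years per step
t = np.arange(0.0, 200.0, dt)
t_mid, tau = 50.0, 5.0                     # logistic midpoint and width (assumed)
N = 1.0 / (1.0 + np.exp(-(t - t_mid) / tau))   # installed base, normalized

lifetime = 25.0                            # product lifetime in years
lag = int(round(lifetime / dt))
dNdt = np.gradient(N, dt)

c = np.zeros_like(t)                       # required industrial capacity
for i in range(len(t)):
    replacement = c[i - lag] if i >= lag else 0.0
    c[i] = dNdt[i] + replacement

# With lifetime comparable to the transition width, c(t) oscillates
# with period ~ lifetime instead of settling at the replacement rate.
print(f"capacity max: {c.max():.3f}, "
      f"min after transition: {c[t > t_mid + lifetime / 2].min():.3f}")
```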

Figure 2. The capacity of the fission industry initially grew to >30 plants per year, sufficient to build and sustain about 2000 plants at a lifetime of 60 years. But when the demand abruptly fell in the 1980s, the industry had to scale down to almost zero. Today the world capacity of the fission industry is barely sufficient to keep the number of operational plants constant. Data: IAEA [20].

The development of nuclear fission illustrates this pattern. Figure 2 shows the historical development of the number of fission power plant construction starts, as a proxy for the industrial capacity. The total installed fission power flatlined around 1980. This was not planned: it was forced upon the industry as a consequence of the accident at Three Mile Island (1979), followed by the Chernobyl disaster (1986). The graph shows that the industrial capacity scaled back after this plateau was reached, in agreement with the 50–60 year lifetime of the power plants. Today, the fission industry can build about five power plants per year. To realize the COP28 pledge, this capacity has to grow to a level well above that of the 1970s, and we see the oscillation appear, with a period that approximates the lifetime of the power plants.

The fastest transition path that avoids oscillation

We addressed the question how fast a new technology can be introduced while avoiding the oscillatory behaviour in [21, 22]. This led to the mathematical description of what we will call the ‘fastest feasible growth (FFG)’ curve. The model starts from the observation that the rate of deployment, e.g. the number of solar panels installed globally each year, is equivalent to the industrial capacity, i.e. the number of solar panels the industry can produce, transport, and install per year. It is important to note here that our definition of ‘industrial capacity’ includes the production of raw materials, the workforce, logistics, installation, etc. The model then makes one Ansatz:

The industrial capacity shall develop continuously and monotonically. Continuously, i.e. without jumps, because it is not possible to create industrial capacity overnight; monotonically, because it is economically undesirable to build up an industrial capacity only to let it shrink again. In other words: we require that the aforementioned oscillations are avoided.

In a situation characterized by fast growth towards a quasi-steady state (market saturation), the fastest growth trajectory is linear growth towards saturation at a rate equivalent to the replacement rate in the saturated state. For energy infrastructure, with a typical lifetime of 25–50 years, that corresponds to an industrial capacity capable of replacing 2–4% of the infrastructure annually. Hence, the duration of the linear growth equals the lifetime of the installation—any faster growth will result in the oscillations described above. However, this industrial capacity will not be available at the start of the development. It needs to be built up before the linear growth can start. For this build-up phase, exponential growth is assumed. Figure 3 provides a graphical explanation of the FFG curve. The mathematical formulation reads as follows (see [21]):

Figure 3. In the FFG model, the fundamental pattern is linear growth with a slope that equals the industrial capacity needed to sustain the final installation. This industrial capacity needs to be built first, in a phase of exponential growth. The soft start is reflected in the soft roll-over.

P = Psat (τexp/τlife) {exp[(t − ttrans)/τexp] − exp[(t − ttrans − τlife)/τexp]}  for t < ttrans

P = Psat (τexp/τlife) {1 + (t − ttrans)/τexp − exp[(t − ttrans − τlife)/τexp]}  for ttrans ≤ t ≤ tsat

P = Psat  for t > tsat

where P denotes the total effective installed power in the case of a power technology, or more generally the total number of a product that is operational; Psat the asymptotic value in the saturated state; τexp the characteristic time of the exponential growth; τlife the lifetime of the power-generating installations; t the time; and ttrans the time at which the transition from exponential to linear growth occurs. The saturation time is tsat = ttrans + τlife, at which point the linear phase ends.
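For concreteness, the following is a direct transcription of these equations (a minimal sketch; the parameter values in the example are assumed for illustration, not fits from this paper):

```python
import numpy as np

def ffg_curve(t, P_sat, tau_exp, tau_life, t_trans):
    """Fastest-feasible-growth curve: exponential build-up, then linear
    growth lasting one lifetime (t_sat = t_trans + tau_life), then saturation."""
    t = np.asarray(t, dtype=float)
    t_sat = t_trans + tau_life
    pref = P_sat * tau_exp / tau_life
    exp_tail = np.exp((t - t_trans - tau_life) / tau_exp)
    branch_exp = pref * (np.exp((t - t_trans) / tau_exp) - exp_tail)
    branch_lin = pref * (1.0 + (t - t_trans) / tau_exp - exp_tail)
    return np.where(t < t_trans, branch_exp,
                    np.where(t <= t_sat, branch_lin, P_sat))

# Example: doubling time 3 y (tau_exp = 3/ln 2), lifetime 30 y.
t = np.linspace(0.0, 120.0, 1201)
P = ffg_curve(t, P_sat=1.0, tau_exp=3.0 / np.log(2), tau_life=30.0, t_trans=50.0)
# At t_trans, the installed base is ~ P_sat * tau_exp / tau_life (consequence ii below).
print(round(ffg_curve(50.0, 1.0, 3.0 / np.log(2), 30.0, 50.0).item(), 3))
```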

The mathematical consequences of the Ansatz are that

  • i) the linear growth rate is limited by the replacement rate in the final, saturated, state; and

  • ii) the transition from exponential to linear growth occurs when the installed power has reached a fraction of the final level given by the ratio of the characteristic time for the exponential growth and the lifetime of the infrastructure.

For energy technologies, exponential growth is typically seen with a doubling time of 2–4 years. Combining that with a lifetime of 25–50 years shows that the transition to linear growth occurs at a few percent of the saturation level, in agreement with observations made by Kramer and Haigh [23].

The FFG curve is a limiting curve, not a prediction of a likely evolution

The FFG curve is a limiting curve in the sense that it is unlikely that a transition will proceed faster. Whether this fastest transition path is realized or not depends entirely on investments, policy measures, geopolitical factors, and the market. All of these external factors can slow down or even halt the transition. But they are unlikely to boost the transition rate beyond the FFG curve.

We stress that the Ansatz is not a law of nature. In a ‘war economy’ it is possible to scale up faster, but society then has to accept post-war overcapacity. Below we’ll discuss the example of the introduction of LED lighting. This was managed in such a way that the transition happened on the time scale of the lifetime of the incoming, not the outgoing, technology: incandescent lamps could have been, but were not, banned overnight.

Two growth phases during the transition: First exponential, then linear

The exponential phase is required to build industrial capacity, and to learn how to build in large volume and at reasonable cost. Energy technologies must typically grow several orders of magnitude during this phase, with a corresponding drop in cost, according to Wright’s law. Wind and PV are pertinent examples of this exponential growth and cost reduction. In comparison, the linear growth that follows spans only ~1 order of magnitude, hence the learning and cost reduction in that phase are limited. Therefore, it is early in the exponential growth phase that investments should go to maximizing learning. We’ll return to this important point in ‘Estimation of the optimal parallelisation: Case study of nuclear fusion’ section.
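To give a feel for the numbers, here is a back-of-the-envelope sketch of Wright's law (the 20% learning rate is assumed for illustration; it is of the order reported for PV-like technologies):

```python
import numpy as np

# Wright's law: cost ~ (cumulative production)^(-b); a 'learning rate'
# of 20% means cost falls 20% for every doubling of cumulative output.
learning_rate = 0.20                       # assumed value, for illustration
b = -np.log2(1.0 - learning_rate)          # Wright exponent, ~0.32

doublings = np.arange(0, 11)
relative_cost = (1.0 - learning_rate) ** doublings
# Three orders of magnitude of growth is ~10 doublings -> cost down ~90%.
print(f"cost after 10 doublings: {relative_cost[-1]:.2f} of initial")
```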

Application to non-energy cases: LED lighting and smartphones

The replacement of incandescent lamps with LED lighting is interesting, because the outgoing technology was characterized by a short lifetime (<1 year) whereas the incoming technology has a long lifetime (~10 years). Figure 4a shows a fit of the FFG curve to the data of the LED market share. The fit corresponds to an LED lifetime of 12 years, which seems reasonable. This case study shows that in a replacement transition with, to a good approximation, infinite market pull, the replacement still takes a lifetime of the new technology, not of the replaced technology. This is important to keep in mind for the energy transition. We also note that, in the case of LED lighting, the exponential growth was fast, which, combined with the long lifetime, resulted in an early transition to linear growth, all in agreement with the FFG model. This, too, is a characteristic that we must expect in the energy transition.

Figure 4. (a) The replacement of incandescent light by LED lighting is well described by the FFG curve, in both the exponential and the linear phases. This observation shows that the transition is dominated by the lifetime of the replacing, not that of the replaced technology. [Data source: Goldman Sachs]; (b) the introduction of the smartphone showed exponential growth of the industrial capacity (sales) followed by a sudden levelling off, in agreement with the FFG model: here it is the short lifetime of the product that allows the industrial capacity to grow exponentially until saturation is reached. [Data retrieved from https://www.statista.com/statistics/263437/global-smartphone-sales-to-end-users-since-2007/].

We chose the introduction of the smartphone as another interesting case, as here the lifetime of the product is short. In the FFG logic, this allows exponential growth nearly all the way until the plateau is reached. Figure 4b shows that this behaviour is indeed observed: smartphone sales first grew exponentially until there was an almost abrupt stagnation in the growth of sales (hence, of industrial capacity). Given the short lifetime of a smartphone, this implies that a fully developed market was reached a few years later.

In summary, we have observed that the logistic function, often used to describe S-curves, does not satisfy the requirement of a smooth and monotonic development of industrial capacity during a transition. The FFG curve is a special S-curve constructed in such a way that it does satisfy this requirement. It shows that the fastest pace of a transition is determined by the lifetime of the incoming technology. Next, we’ll use this model to find the limiting curve for the energy transition, restricting the analysis to wind and PV.

Figure 5. (a) Application of the FFG model to onshore wind and PV. The same data are plotted on a log scale (left axis) as well as a linear scale (right axis) to better bring out the two growth phases. The historical data of both wind and PV appear to follow an FFG curve. Wind appears to have transitioned to linear growth in 2012; PV shows signs of such a roll-over from 2016. If this trend is not broken, saturation will occur at a level of 350–400 GW, due to the finite lifetime. This is almost an order of magnitude too low to achieve the energy transition. (b) Reversing the logic, we take the IEA net zero emission (NZE) scenario and make the FFG model match the 2050 target, taking the present installed power as the starting point and keeping the same values for the exponential growth rate and lifetime. The IEA scenario is compatible with the FFG logic but does require the industry to resume exponential growth in the short term, for a few more years. (c) Application of the model to the exponential phase and matching it to the NZE values in 2050 shows that until the exponential growth slowed, in 2012 and 2016, the deployment of wind and PV appeared to follow an FFG curve that would bring them to the NZE targets in time. (Graphs based on historic data from the IEA data explorer, reworked by the authors; see ref [25].)

The FFG curve applied to on-shore wind and solar PV

We applied the FFG curve to the development of on-shore wind and solar PV, taking three different approaches. In the first, we fit the model to the historical data. Figure 5 shows that a fair description of the data is obtained for both the exponential growth and the transition to linear growth. The fit is obtained with 25 years for the lifetime of the installed systems, and 1.9 and 3.2 years for the doubling times of PV and wind, respectively. Both appear to have transitioned to linear growth, in 2012 for wind and 2016 for PV, where it is noted that the transition is clear for wind, while for PV it could be argued that there is still exponential growth, albeit slowing down (Footnote: These fits have four parameters, of which one, the lifetime of the wind turbines or solar installations, represents a physical property of the technology. For both wind and PV, it is around 20–25 years. As the metric for the goodness of the fit, the sum of squared differences in the log plot was used, to obtain a balance between the exponential and linear phases. The fits are robust for the exponential growth phase, for which the data span 2–3 orders of magnitude. The transition time, too, has an uncertainty of less than 0.5 year. The linear phase is well determined for wind, but for PV the few data points leave room for variation: varying the lifetime between the reasonable bounds of 20–25 years results in corresponding saturation levels between 300 and 400 GW.). We note that the investments in wind and PV also stalled at an almost constant level between 2012 and 2020 (see e.g. investment data at IRENA [24]). The fact that both wind and PV appear to have reached linear growth, coupled to the fact that after one lifetime all industrial capacity is needed for replacement, results in a saturation level—if no policy changes are implemented—of 300–400 GW. This is almost an order of magnitude too low to achieve the energy transition.
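As an illustration of the fitting procedure described in the footnote, the sketch below fits the FFG curve in log space to synthetic data standing in for the IEA deployment series (it reuses ffg_curve() from the earlier sketch; all numbers are illustrative, not this paper's fit results):

```python
import numpy as np
from scipy.optimize import curve_fit

# Least-squares fit in log space, as in the footnote. Synthetic noisy
# data replaces the IEA series; ffg_curve() is defined above.
rng = np.random.default_rng(1)
years = np.arange(0.0, 40.0)
true_gw = ffg_curve(years, P_sat=350.0, tau_exp=2.7, tau_life=25.0, t_trans=30.0)
data_gw = true_gw * np.exp(rng.normal(0.0, 0.1, years.size))  # ~10% log noise

def log_model(t, P_sat, tau_exp, t_trans):
    # Lifetime fixed at 25 y, treated as a known physical property;
    # the floor guards against log(0) during the parameter search.
    return np.log(np.maximum(ffg_curve(t, P_sat, tau_exp, 25.0, t_trans), 1e-300))

popt, _ = curve_fit(log_model, years, np.log(data_gw),
                    p0=[300.0, 3.0, 28.0], bounds=([10, 0.5, 5], [2000, 10, 60]))
print(dict(zip(["P_sat", "tau_exp", "t_trans"], np.round(popt, 2))))
```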

As said, the FFG model is not predictive other than providing a limiting growth curve. Therefore, the fact that the deployment data of PV and wind to date appear to exhibit the characteristics of the model, does not mean that this trend cannot be broken. In fact, it must be broken to realize the energy transition in time.

It is therefore interesting to analyse whether, following the FFG logic, the transition goals of 2050 can be reached, provided there are no other factors limiting the deployment. To that end, we take the installed power to be reached by 2050 according to the IEA NZE2050 scenario and construct the FFG curve leading to that goal, starting from the actual installed power today, while keeping the characteristic times for wind and PV the same as in the fit to historic data. Figure 5b shows that with these parameters the 2050 goals are feasible but require a radical increase in the deployment rate. This calls for a strong increase of investments in the coming years. Such an increase can indeed be observed for PV, with annual investments almost doubling from 2021 to 2023. For wind, such a pronounced upturn of investments is not yet observed. Finally, in Fig. 5c the FFG curve is again made to fit the NZE 2050 target, but here we only matched the exponential phases. This plot illustrates how, by departing from the FFG curve, time is lost that cannot be recovered.

Conclusion: energy deployment has a long exponential phase

The most important aspect of the FFG curve for the analysis in this paper is that the transition from exponential to linear growth happens at a level—relative to the final market share—that is given by the ratio of the exponential and linear growth times. For energy technologies, which have a lifetime of decades while the exponential growth should be fast, this is typically when the contribution to the energy market reaches the percent level. This means that, on the one hand, the phase in which a meaningful contribution to the energy system is built up takes about one lifetime of the installation. On the other hand, as we have seen in the case of PV and wind, this linear growth is preceded by decades of exponential growth. For nuclear fusion, which we’ll discuss in ‘Estimation of the optimal parallelisation: Case study of nuclear fusion’ section, it means that the exponential growth phase should take it from the first few power plants to hundreds of power plants. This requires more than 20 years at a doubling time of 3 years. It is during this exponential phase that the largest relative growth takes place, and since, according to Wright’s law, learning is proportional to the logarithm of the cumulative production, most learning will happen during that phase, too. Therefore, we must address the question how learning, or innovation, is best organized during exponential growth. That is not obvious, as during the fast scale-up there is little time to learn, invent, try out and implement innovations. This is especially the case if the innovation cycle is longer than the exponential growth time.

Innovation during rapid growth

Growth and learning by generation

It can be useful to think of an innovation process as a sequence of generations, similar to the launch of the generations of smartphones, or PC operating systems, or of a particular model car. Innovation is then measured as progress from generation to generation.

In analogy to natural evolution [26], the dynamic of innovation is governed by two processes: learning, i.e. the spawning of new ideas followed by prototyping, testing and evaluation; and the implementation of the innovations in a new generation of the product. After the new generation of a product has been launched, a new round of learning starts. Learning needs time, but it also saturates after some time, when all that could be learned from model N has been learned and the time is ripe for a new version to be launched as generation N + 1. We’ll assume, for the sake of argument, that we can characterize the learning process by a characteristic time constant, τL.

If the time between generations (τG) is long compared to τL, the sequence of generations slows down the evolution/innovation. We’ll call the innovation ‘generation time-limited’ in that case. If, on the other hand, the generation time is too short compared to the learning time, the innovation is ‘learning rate-limited’. If a new generation is started before learning could take place, it is essentially the same as the previous one, and we would not call it a new generation. Therefore, the natural ordering is τL ≤ τG.

If an innovation process is learning rate-limited, the rate of innovation can only be increased by speeding up the learning. This may not always be possible, especially if the learning leans on external technology developments. If an innovation process is generation time-limited, then the frequency of the generations must be increased to speed up innovation. Whether this is possible or not depends on the build time.

The role of the build time

An important addition to this logic is that it takes time (τB) to build a new model or generation. Once the design of a device has been frozen and construction has started, learning will no longer impact its performance. Learning during the construction phase will feed into the design of the next generation. However, the integration of all the components into the new device, followed by its operation, is essential to complete the learning process. For these lessons to feed into the design and construction of the next generation, there must be time between the start of operation and the start of construction of the next generation, and naturally the generation time has to be longer than the build time: τB < τL ≤ τG. This underlines the importance of minimizing the build time.

Figure 6 categorizes different technologies by their lifetime and build time, including LED lighting and smartphones for context. Effective learning can only be achieved when the build time is much shorter than the generation time. A long lifetime limits the maximum rate of deployment. The generation time must be longer than the build time to allow learning to have an effect on the next generation, but should ideally be kept short to allow a smooth build-up of the industry. Technologies in the lower left quadrant, with the smartphone as the example, have ideal conditions for fast learning and swift deployment. Technologies that have a long build time and a long lifetime are constrained by both the generation time and the learning rate. Unless strategies are applied that are aimed at abating exactly those unfavourable conditions, these technologies will show slow deployment and little learning. The present development of the small modular fission reactor is an example of such a strategy.

Figure 6. Different technologies can be categorized by their lifetime and build time. A short lifetime allows swift deployment, a short build time allows quick exploration of innovative ideas. Technologies that are characterized by a long build time, such as nuclear fission, face unfavourable conditions for learning.

Learning during exponential growth

The difference between learning rate-limited and generation time-limited innovation changes fundamentally under rapid exponential growth. If the growth factor from one generation to the next is large—say two orders of magnitude, as was the case in fission deployment—it clearly pays to get as much learning as possible from the first generation. It is better to invest in 10 different concept developments in Gen1, of which only a few make it to Gen2, than to risk building 100 reactors of a type that is not as good as it might have been.

Taking as a starting point that, in order to have effective learning, τG must be greater than τL, we see that innovation is hampered when τL/τexp is too large: when fast growth is required, the learning time, and hence the build time, must be reduced.

Cost of accelerated learning and de-risking by parallelisation of concept trials

The learning rate can only be increased by trying out multiple concepts in parallel. The faster learning, or the greater probability that at least one successful concept or improvement is found within a generation time, comes at the cost of increased spending. There is, however, a finite number of ideas that can be tested, and if each of them has an equal probability (p) of success, the probability that at least one successful innovation will be found typically has a functional profile as depicted in Fig. 7. Here the number of ideas tried in parallel is a proxy for the required investment. For the trial of each individual concept, the probability of success itself will depend on the time available. This relation is non-linear: in practice there is a minimum time within which a project can be realized, even with an unlimited budget. But if there is sufficient reward for achieving success earlier, it can be worth the extra spending. At the other end of the scale, dragging out a project will increase the cost without increasing the probability of success.
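In formula form, anticipating Eq. (3) below, the probability of at least one success among N independent trials is 1 − (1 − p)^N, which saturates quickly; a quick check with an assumed p = 0.3:

```python
# Probability of at least one success among N independent concept trials,
# each with assumed success probability p (cf. Fig. 7 and Eq. (3)).
p = 0.3
for N in (1, 2, 5, 10, 20):
    print(N, round(1 - (1 - p) ** N, 3))   # 0.3, 0.51, 0.832, 0.972, 0.999
```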

Figure 7. Sketch of the generic relationships between the investment in learning and the reward: the expected reward increases in value if the probability that it will be realized is increased, but there is a limit to the amount of risk reduction money can buy; likewise, reducing the time of the R&D program will increase the value of the expected reward, but there is a limit to the acceleration that can be achieved by increased spending. The optimum (reward − cost) is indicated by arrows and depends, apart from the shape of the curves, on the absolute values. Both the risk-reduction curve and the (future) reward can only be estimated, using assumptions. In ‘Estimation of the optimal parallelisation: Case study of nuclear fusion’ section this is done for the case of nuclear fusion.

The rapid development of vaccines during the Covid pandemic may serve as an illustration of these generic principles: many concepts were developed and tried in parallel (with only a few winners), and the companies and governments involved were willing to spend the budget needed for maximum acceleration of the development programs, because the benefits of reaching success were clear. The reward warranted the spend. All of these arguments hold a fortiori for the climate crisis and the need to accelerate the transition to clean energy.

The optimum parallelization will depend on several factors. These include factors that we can use as input, such as the desired exponential growth rate and the cost of trying a single concept; and factors that must be estimated, most notably the expected future revenues and the discount rate. In ‘Estimation of the optimal parallelisation: Case study of nuclear fusion’ section, we shall carry out this optimization for the case of nuclear fusion, in an effort to understand why there is such a surge of private investment in fusion start-ups.

Estimation of the optimal parallelisation: case study of nuclear fusion

The multitude of private companies: parallel exploration of concepts

Nuclear fusion is characterized by a long build time and a high cost of prototyping: the experimental reactor ITER costs more than 20 billion Euro and will take more than 35 years for construction and commissioning until full performance [27]. While future commercial fusion power plants are projected to be less costly, the construction cost of a fusion power plant will be an important component of the cost of electricity [28, 29].

The roadmap of the government-sponsored R&D program foresees the start of the construction of a demonstrator, DEMO, after ITER has reached full performance. This puts full-performance operation of DEMO towards 2060. The initial exponential growth can only start after DEMO has established a sufficiently mature technology basis.

These long timelines have motivated private parties to propose faster tracks towards the realisation of fusion power. In the past few years their number has grown to >40 worldwide, with an accumulated private investment in excess of $6 billion as of 2022 [6]. These companies typically promise to deliver a demonstrator within a decade. And, whereas the government-sponsored R&D program has largely focussed on a single concept (the so-called tokamak, a machine in which a hot plasma is confined by strong magnetic fields), the private companies together explore a wide variety of concepts. These include more compact versions of the tokamak, but also fundamentally different concepts.

As a result, the private companies taken together represent an innovation path in which several concepts are tried out in parallel, while at the same time a large effort is being made to reduce the build time. As we saw in ‘Innovation during rapid growth’ section, this is an effective way to accelerate learning. However, exploring different concepts in parallel is expensive.

This brings us back to the question whether there is an optimum parallelization, and how it depends on factors such as the build time, the growth rate, the projected revenues, and the discount rate.

Accelerated learning by parallel trials: The cost, the gain, the optimum

To model the learning in the prototyping generation of fusion power plants and its consequences for the first generation of commercial plants, we take a probabilistic approach. We consider a fixed time t0 in which N concepts are tried simultaneously. We’ll call this Gen0, and we assume that Gen0 will not generate revenues. We denote by I0 the investment needed to realize revenues (R1) in Gen1. The function to optimize therefore is

F = <R1> − I0    (1)

where <·> denotes the expectation value.

Trying out more concepts in parallel goes at the cost of a larger I0 but will increase the probability of success, and therefore increase the expectation value <R1>. In this idealized model, we assume that there is initially a large number of independent concepts to choose from, each of which has a probability p to succeed; and we assume that the probabilities of success of the different concepts are uncorrelated. We’ll examine and discuss the validity of this approach later. Within this frame of assumptions, I0 is proportional to N:

I0 = N · C0,s    (2)

where C0,s denotes the cost of trying out a single concept, and the probability to find a successful concept in a batch of N parallel tests is given by

PN = 1 − (1 − p)^N    (3)

The expectation value <R1> in Gen1 is proportional to the number (M) of reactors in that generation, the expected revenue of a single plant (R1,s), and the probability that they will be realized at all, PN. Moreover, costs and revenues in the future must be discounted by the factor exp(−t/τdisc), where τdisc is the e-folding time of the discounting (Footnote: The discount rate is commonly expressed as an annual percentage (x · 100%). This is related to the e-folding time by (1 + x)^t = e^(t/τ), so that τ = 1/ln(1 + x). For x ≪ 1 this can be approximated by τ ≈ 1/x.)
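As a quick check of the conversion in the footnote (7% being close to the discount rate used later in the text):

```python
import numpy as np

x = 0.07                                   # annual discount rate
tau_disc = 1.0 / np.log(1.0 + x)           # e-folding time of the discounting
print(round(tau_disc, 1))                  # ~14.8 years, i.e. tau_disc ~ 15 y
```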

Next, we must make an assumption about the number of active reactors as function of time. For this, we’ll stay with the logic developed earlier in this paper and assume that the deployment proceeds according to the FFG curve.

With these elements we can estimate the future revenues. In a first approach, we can integrate the discounted cash flow over time, equating the annual revenue of a single reactor to R1,s/τlife. This integral converges owing to the discounting. In particular, due to the long lifetime of fusion power plants, the linear growth phase, which in the FFG model has a duration of τlife, is much longer than the discount time constant τdisc. In that case, the integrated cash flow is largely determined by this linear growth phase, the integral over which can be approximated by

<R> ≈ Msat · R1,s · (τdisc/τlife)^2    (4)

where Msat denotes the eventual saturation level of the FFG curve, which should be of order 10^4 for fusion to make a sizable contribution to the energy mix. In this approach, however, we make no distinction between generations, while in the linear growth phase the number of reactors already runs into the thousands.

We therefore propose a different estimation, in which we consider a finite number (M) of plants, which we may think of as one generation. Instead of integrating the cash flow, we estimate the total revenues over the lifetime of these M power plants. Since we are looking at the early phase of deployment, the number of operational reactors grows exponentially, with characteristic time τexp. Hence, the number of plants M and the time needed to realize them, and therefore the time over which the discounting is to be applied, are interdependent: the time needed to realize M plants is t = τexp ln M, so the discount factor becomes exp(−t/τdisc) = M^(−τexp/τdisc). In other words, the exponentially growing number of plants is multiplied by a less steep, decreasing exponential that represents the discounting; the result is still exponential, but with a reduced rate.

The exponential growth only starts after t0, the time used to select the best concept and build the first batch of power plants, which brings in an additional discounting factor exp(−t0/τdisc).

Next, we express the net revenue R1,s of a single reactor in terms of the cost of the reactor at the time it is built, using the economic payback time τpb:

R1,s = (τlife/τpb − 1) · K · C0,s    (5)

We chose to express the revenues in terms of the payback time, because this is more robust than an estimation of the competitive cost of electricity—made up of both the intrinsic cost and the market—decades into the future. Upon introduction, a new energy source will generally not be competitive, if only because it hasn’t gone through its learning curve yet. This market failure can be fixed by government intervention—if the new source features in their policy—by subsidies or other policy measures, in such a way that the new source is interesting for investors. That means that the payback time must be much shorter than the lifetime, typically 1–2 decades. It does not need to be shorter than that—once that is the case, government support is no longer needed.

With this assumption, the lifetime revenue is related to the initial cost of the reactor and need not be discounted over the lifetime. The factor K·C0,s expresses the cost of the reactor in terms of the cost C0,s of building and evaluating a prototype, which connects the future revenues to the cost of development today, in today’s money. The expectation is that the cost of the power plants is less than that of the demonstrator, i.e. K < 1.

The expectation value (in a statistical sense) of the future revenues of a generation comprising M power plants then takes the generic form

<R1> = PN · R1,s · exp(−t0/τdisc) · M^(1 − τexp/τdisc)    (6)

Since we are interested in the value (Nopt) of N for which the function F = <R1> − N·C0,s reaches its optimum (Fopt), we express the expectation value of the revenue in units of the cost of a single concept trial as

<R1>/C0,s = k · PN = k · [1 − (1 − p)^N]    (7)

where all input variables have been absorbed in the numerical constant k:

k = (R1,s/C0,s) · exp(−t0/τdisc) · M^(1 − τexp/τdisc)    (8)

Now

F/C0,s = k · [1 − (1 − p)^N] − N    (9)

The numbers of interest are the value Nopt for which this function reaches its optimum, and the value of the function at that optimum. Setting the derivative of (9) with respect to N to zero yields

Nopt = ln[k · ln(1/(1 − p))] / ln(1/(1 − p))    (10)

where p denotes the probability of success of a single concept trial, as before. The value of k must be estimated, but we can indicate a range in which the result can be expected. As central values, we take t0 = 15 years, τexp = 5 years, R1,s = 2·K·C0,s (i.e. the payback time is one third of the lifetime) and τdisc = 15 years (corresponding to an annual discount rate of ~7%). The latter effectively places the economic horizon at about 30 years from now (a factor 10 reduction of value), i.e. around 2050, the crucial time for climate action. For energy technologies, this would seem a sensible horizon for the evaluation of future value, from both an economic and a societal perspective.

For the number of plants in Gen1 we consider the range 30–500, bearing in mind that one needs to build a minimum number to warrant the cost of development; that for innovation and the avoidance of technology lock-in the scale jump between generations should not be too large; and that in view of the target of 10^4 power plants, the exponential growth should transition to linear growth at 500–1000 plants. (For comparison: fission Gen2 comprises ~400 plants.)

Inserting these numbers in (8), k comes out in the range [7–50].

Figure 8a shows the optimization curve for the numerical factor k = 30 and a single-trial success probability p = 0.3. A broad optimum is found for N in the range 5 to 8, for which the investment in R&D amounts to about 30% of the expectation value of the revenues. Figure 8b plots Nopt and the corresponding revenues as a function of k, showing that for a broad range of the latter, the number of parallel tries warranted by the revenues is in the range 5–15, the higher values applying to the cases with lower probability of success. For k > 20 the value of Nopt becomes quite insensitive to k. Finally, we must bear in mind that in practice the total number of independent concepts may be limited to only a handful.
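The numbers quoted above can be reproduced in a few lines; the sketch below assumes K = 1 and uses the stated central values:

```python
import numpy as np

# Sketch of the optimization behind Fig. 8, using the central values quoted
# above (t0 = 15 y, tau_exp = 5 y, tau_disc = 15 y, R1,s = 2*K*C0,s) and
# assuming K = 1. All monetary quantities are in units of C0,s.
t0, tau_exp, tau_disc, K = 15.0, 5.0, 15.0, 1.0

def k_factor(M):
    """Revenue multiplier k of Eq. (8) for a Gen1 comprising M plants."""
    return 2.0 * K * np.exp(-t0 / tau_disc) * M ** (1.0 - tau_exp / tau_disc)

print([round(k_factor(M), 1) for M in (30, 500)])   # ~[7.1, 46.4] -> k in [7-50]

def net_value(N, k, p):
    """F/C0,s of Eq. (9): discounted expected revenue minus trial cost."""
    return k * (1.0 - (1.0 - p) ** N) - N

N = np.arange(1, 31)
F = net_value(N, k=30.0, p=0.3)
print(int(N[np.argmax(F)]))                          # optimum near N = 7
# Analytic optimum from Eq. (10): ~6.6, consistent with the broad 5-8 range.
print(round(np.log(30.0 * np.log(1 / 0.7)) / np.log(1 / 0.7), 1))
```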

Figure 8. (a) Model optimization of the number of parallel trials of different concepts. The cost of the trial process is proportional to the number of concepts tried, whereas the probability of (at least one) successful trial levels off. Taking the latter as a multiplier of the expected revenues, an optimum is found which depends on the probability of success of a single concept (P = 0.3 in this example) and the lifetime revenue of a single power plant, which is discounted, because it is in the future. Discount percentage, rate of deployment and the profitability of a single plant determine the absolute value of the revenue curve, represented by the factor k; (b) the optimum N (full lines) and corresponding net revenue (dashed lines) for P = 0.15, 0.2, and 0.3 respectively.

The bottom line of this analysis, then, is that in any scenario in which fusion power is to become a serious contributor to the energy landscape, the best strategy today is to try as many different concepts in parallel as possible.

The key to increasing the value of fusion energy, i.e. increasing the multiplier k, lies in acceleration. By reducing the time until deployment starts, the discount due to the factor exp(−t0disc) can be reduced, whereas a faster exponential growth reduces the impact of discounting during deployment.

From this attempt at quantifying the economics of learning in the development phase, we see that:

  • There is an economic incentive to shorten the time to demonstration, even if that means significant extra spending on an annual basis.

  • There is a clear economic advantage in trying multiple concepts in parallel. The exact number depends on the probability of success assigned to each concept, but typically more than five parallel tracks are warranted, which means in practice that it is an economically sound strategy to try all reasonable options on the table.

The optimization depends, though not critically, on the estimation of the future revenues. Here it matters where we place the financial horizon. In the analysis above we have adopted the logic that the revenues of Gen1 are taken into consideration, where the number of Gen1 plants M is a variable to be chosen. This fits the logic of deployment in a sequence of generations: during the deployment of Gen1, new investments must be made to develop a better and cheaper Gen2. In that second round a similar analysis could be done.

To recoup the investment made in the development phase, the number of plants that contribute to the generation of revenues must be significantly larger than 10. Here we see the danger of technology lock-in: if Gen1 consists of a large number of power plants of the same design, it will be difficult for a radically different design to enter the arena in Gen2. It would therefore be advantageous to pursue multiple different options in Gen1 and postpone down-selection to Gen2. In the analysis in Section 3 we saw that it is advantageous to limit the scale jump between generations, and hence the intergeneration time, to allow innovation during exponential growth. But the intergeneration time should also allow time to learn, which means it must exceed the build time by at least a few years in which reactors of the new generation are operational. Ergo, the build time emerges as the crucial determinant in the development, and reducing it as much as possible should drive the design of fusion power plants.
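Stated compactly, in a notation that is ours rather than taken from the text, the requirement on the timescales reads

$$\tau_{\mathrm{generation}} \;\gtrsim\; \tau_{\mathrm{build}} + \tau_{\mathrm{learn}},$$

with the learning time of the order of a few years of operation, so that for a given generation time every year cut from the build time is a year gained for learning.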

Summary and discussion

In summary, we have discussed the building blocks leading up to an analysis of the innovation strategy for nuclear fusion. To start, we have shown that the pace at which the energy transition can be realized is limited by the lifetime of the supplanting technologies, not that of the incumbents. Breaking this relationship to achieve a 'forced transition' inevitably leads to an overshoot in industrial capacity. We illustrated this logic with the examples of the introduction of LED lighting and smartphones, and showed that the lifetime of wind and PV power is just short enough to be compatible with the goals of the IEA Net Zero Emissions scenario. For infrastructure with a significantly longer lifetime, such as nuclear fission or fusion, a significant contribution by 2050 is unlikely to be realisable, which is not to say that it cannot play an important role in the second half of the century.

We then analysed how the build time, learning time and generation time are related. In relation to innovation, i.e. the possibility to achieve efficient learning, we identified the learning rate-limited and the generation time-limited regimes. Here, too, technologies that are characterized by a large unit size and a correspondingly long build time stand apart. These are apt to suffer from slow learning due to the limited time between the start of operation of a new generation and the launch of the next. This intrinsic conflict of time scales is exacerbated when fast exponential growth is required. In that case, there is either very limited learning from one generation to the next, or there is a very large jump in volume between generations. The latter entails a large technological risk, quite apart from the fact that it is difficult to make an industry at that scale grow in quantum jumps. The history of nuclear fission bears witness to these issues, having stayed with the Gen2 reactor design for several decades while showing limited learning. Nuclear fusion shows the same unfavourable characteristics.

Two strategies can be followed to alleviate these principal drawbacks. First, in order to accelerate learning, fusion should adopt a strategy of testing as many potential technical solutions in parallel as possible. In the present phase of development, this applies to fundamentally different reactor concepts. By extension, once one or more viable reactor concepts have been identified and demonstrated, this strategy should be applied at the subsystem level. In this way the amount of learning in a given time is increased, i.e. the learning rate is enhanced. Second, to lift the generation-time limitation, the build time must be reduced as much as possible. This means that concepts or designs that are modular or linear are favoured over those with an intrinsically complex build-up, and that an accelerated assembly process is worth an extra cost.

Both strategies require increased funding levels compared to a linear 'pick the winner' approach. But as we argued in the 'Estimation of the optimal parallelisation: case study of nuclear fusion' section on the basis of a Net Present Value analysis, both steps follow a sound economic logic. Parallelisation of trials, within a given amount of time, de-risks the development through accelerated learning. This results in a higher valuation of the future revenues. Acceleration of the sequence of generations, made possible by a reduction of the build time, has many advantages; the fact alone that it brings the revenues forward and thereby reduces the discounting warrants significant spending on shortening the build time. In addition, a shortened build time, for a given generation time, leaves more time for learning per generation.

Returning to the observation that in the past few years the number of fusion start-ups has grown quickly, to more than 40 worldwide in 2023, we note that together they effectively implement the strategy outlined above. They are all committed to drastically shortening build times; several of them are already actively organizing their supply chains and aim for designs and manufacturing processes that allow modularization and parallelization of production and assembly. Whereas each works on the acceleration and de-risking of its own project, on the macro level, too, these companies together de-risk and accelerate the development of a fusion reactor. Effectively they move fusion towards the lower left corner of Fig. 7. Investors recognize this portfolio aspect, and several of them back multiple private fusion companies.

In the analysis in the 'Estimation of the optimal parallelisation: case study of nuclear fusion' section, it was assumed that all concepts have an equal probability of success, and that these probabilities are uncorrelated. Neither condition will be fulfilled in practice, but the effects of relaxing them are not fundamental. If the probabilities are unequal, we could sort the concepts by probability of success and limit the parallel trials to the most promising ones, imagining for the moment that we possess the prescience to do so. With reference to Fig. 8a, this would result in a probability curve that rises more steeply to saturation, where the level of saturation will not quite reach unity. This does not affect the estimation of Nopt in a fundamental way; it merely shifts Nopt to lower values, whereas the absolute value of ⟨R1⟩ will come out lower, too. In terms of spending the available budget wisely, one could argue that the less likely concepts may require (much) less money for a proof of principle. This is reflected in the distribution of private funding for the fusion start-ups, of which only a handful receive by far the largest share of the total funding [10]. The assumption that the probabilities of success are uncorrelated is certainly not fully realistic: after all, the concepts aim at overcoming the same obstacles that make nuclear fusion such a hard problem. However, despite these common factors, the different approaches make different choices. Magnetic confinement systems have fundamentally different issues than inertial confinement approaches; in some systems the neutron activation of the 'first wall' material is a crucial problem, which is avoided altogether in concepts whose reactor design does not have a first wall; yet other concepts are based on an aneutronic fusion reaction, avoiding the issues of neutron damage and activation at the cost of a smaller reaction rate. Therefore, while there is certainly some degree of correlation between the probabilities of success, there is also ample variability that decorrelates them.
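The effect of unequal probabilities can be illustrated with a minimal extension of the model behind Fig. 8a. The individual probabilities below are hypothetical values chosen for illustration only, not estimates for actual concepts.

```python
import numpy as np

# Hypothetical per-concept success probabilities, sorted from most
# to least promising (illustration only, not estimates of real concepts).
p_sorted = np.array([0.35, 0.30, 0.20, 0.15, 0.10, 0.08, 0.05])

def p_success(n):
    """Probability that at least one of the n most promising concepts succeeds."""
    return 1.0 - np.prod(1.0 - p_sorted[:n])

for n in range(1, len(p_sorted) + 1):
    print(f"N = {n}: P(at least one success) = {p_success(n):.3f}")
```

Compared with the equal-probability case, the resulting curve rises more steeply for small N and saturates below unity (at about 0.76 for these numbers), as argued above.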

These considerations leave the main conclusion unaltered: the best strategy is to try all reasonable options. Notably, trying to 'pick the winner' does not appear to be a good strategy in any scenario. A portfolio approach in which the risk is spread over 5–15 different concepts is the rational approach, where ideally a few concepts are carried forward for further development in Gen1.

Conclusion

Based on considerations of the speed of the required energy transition, the characteristic times of learning and innovation during such a transition, and the concept of growth through generations, we argue that the parallel development of as many fusion concepts as are available, even those with a modest probability of success, is an economically sound strategy. It reduces the risk of the development of fusion power and brings its deployment forward, factors that increase its value both in terms of monetary return on investment and in terms of its contribution to a sustainable energy system. In addition, we have given arguments why a reduction of the build time of fusion reactors is key to successful deployment and generation-on-generation learning.

Acknowledgements

The authors would like to acknowledge the discussions with Guido Lange, and with undergraduate students who did projects in the TU/e techno-economics-of-fusion-energy team over the past few years: Polle van Berlo, Ruben Wierda, Dilys Mertens, Pavan Teki, Noel Weerensteyn, Alexander Lugtenberg, Mustahsan Majeed, Sophie Broekers, Thimo Gubbels, and Shaughn Prickarts.

Funding

This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No. 101052200 — EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them.

Conflict of Interest

The authors do not have any conflicts of interest.

Authors' contributions

The paper is the result of a close collaboration between both authors. The writing was mostly done by NLC.

Credit author statement

Niek Lopes Cardozo (Conceptualization [lead], Data curation [lead], Formal analysis [lead], Funding acquisition [lead], Investigation [lead], Methodology [lead], Project administration [lead], Resources [lead], Software [lead], Supervision [equal], Validation [lead], Visualization [lead], Writing—original draft [lead]) and Samuel Ward (Formal analysis [supporting], Investigation [supporting], Methodology [supporting], Validation [supporting], Writing—original draft [supporting]).

Data availability

All data used is from publicly available sources, as referenced. The SSG model and the analysis in Section 4 are analytical.

Footnotes


3

On 9 March 2022, the EUROfusion organization announced a new record of fusion energy generated in the Joint European Torus, which received worldwide media coverage: https://euro-fusion.org/eurofusion-news/european-researchers-achieve-fusion-energy-record/ (26 April 2024, date last accessed). On 13 December 2022, Lawrence Livermore National Laboratory announced that it had achieved fusion ignition in the National Ignition Facility: https://www.llnl.gov/news/lawrence-livermore-national-laboratory-achieves-fusion-ignition (26 April 2024, date last accessed); see also Zylstra AB, Hurricane OA, Callahan DA et al. Burning plasma achieved in inertial fusion. Nature 2022;601:542–8. doi:10.1038/s41586-021-04281-w, and Wurzel SE, Hsu SC. Progress toward fusion energy breakeven and gain as measured against the Lawson criterion. Phys Plasmas 2022;29:062103.

References

2. Way R, Ives MC, Mealy P et al. Empirically grounded technology forecasts and the energy transition. Joule 2022;6:2057–82.

4. Schwartz JA, Ricks W, Kolemen E et al. The value of fusion energy to a decarbonized United States electric grid. Joule 2023;7:675–99.

5. The EUROfusion Roadmap. https://euro-fusion.org/eurofusion/roadmap/. In June 2023 EUROfusion proposed an acceleration of this roadmap: https://euro-fusion.org/eurofusion-news/we-need-to-change-gears/ (26 April 2024, date last accessed).

7. Fusion Industry Association. The Global Fusion Industry in 2022. 2022. https://www.fusionindustryassociation.org/_files/ugd/202e0f_4c69219a702646929d8d45ee358d9780.pdf (26 April 2024, date last accessed).

9. Sweerts B, Detz RJ, van der Zwaan B. Evaluating the role of unit size in learning-by-doing of energy technologies. Joule 2020;4:967–70.

11. Grubler A. The costs of the French nuclear scale-up: a case of negative learning by doing. Energy Policy 2010;38:5174–88.

12. Lovering JR, Yip A, Nordhaus T. Historical construction costs of global nuclear power reactors. Energy Policy 2016;91:371–82.

13. Rubin ES, Azevedo IML, Jaramillo P et al. A review of learning rates for electricity supply technologies. Energy Policy 2015;86:198–218.

14. Ansar A, Flyvbjerg B. How to solve big problems: bespoke versus platform strategies. Oxf Rev Econ Policy 2022;38:338–68.

15. Pearson RJ, Costley AE, Phaal R et al. Technology roadmapping for mission-led agile hardware development: a case study of a commercial fusion energy start-up. Technol Forecast Soc Chang 2020;158:120064.

16. Mazzucato M. Mission Economy: A Moonshot Guide to Changing Capitalism. New York, NY: Harper Business, 2021.

18. Vinichenko V, Jewell J, Jacobsson J et al. Historical diffusion of nuclear, wind and solar power in different national contexts: implications for climate mitigation pathways. Environ Res Lett 2023;18. https://doi.org/10.1088/1748-9326/acf47a

19. Odenweller A, Ueckerdt F, Nemet GF et al. Probabilistic feasibility space of scaling up green hydrogen supply. Nat Energy 2022;7:854–65.

20. International Atomic Energy Agency. Nuclear Power Reactors in the World, Reference Data Series No. 2. Vienna: IAEA, 2022.

21. Lopes Cardozo NJ, Lange AGG, Kramer GJ. Fusion: expensive and taking forever? J Fusion Energ 2016;35:94–101.

22. Lopes Cardozo NJ. Economic aspects of the deployment of fusion energy: the valley of death and the innovation cycle. Philos Trans R Soc A Math Phys Eng Sci 2019;377:20170444.

23. Kramer G, Haigh M. No quick switch to low-carbon energy. Nature 2009;462:568–9.

26. The comparison to evolution has often been made, see e.g. Solé RV, Valverde S, Casals MR et al. The evolutionary ecology of technological innovations. Complexity 2013;18:15–27. doi:10.1002/cplx.21436

28. Entler S, Horacek J, Dlouhy T et al. Approximation of the economy of fusion energy. Energy 2018;152:489–97.

29. Maisonnier D, Campbell D, Cook I et al. Power plant conceptual studies in Europe. Nucl Fusion 2007;47:1524–32.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.