The Double Fantasy of Net Zero


Ventura Photonics Climate Post 29, VPCP 0029.2

October 26, 2023

Oct. 31, V2: Figures added, minor text changes.


Roy Clark







A ‘CO2 doubling’ from 300 to 600 ppm produces a maximum decrease in the tropospheric cooling rate of 0.08 °C per day. The change in temperature during each integration step in the MW67 model is too small to accumulate over time. It took a year of model time for the model to reach a steady state. M&W failed to explain how this warming could be detected in the normal daily and seasonal temperature fluctuations.



Summary


The energy fantasy of Net Zero is built on the equilibrium climate fantasy land created by the early steady state climate models. The idea that an increase in the atmospheric concentration of CO2 will cause global warming is a fantasy, and the idea that the world can be saved from the fictional global warming apocalypse by reducing CO2 emissions is a double fantasy.


There are three parts to the climate fraud. First, climate energy transfer was oversimplified using the equilibrium climate assumption. This created global warming as a mathematical artifact when the CO2 concentration was increased in the early ‘steady state air column’ climate models. Later models were simply ‘tuned’ to match a global mean temperature record using a contrived set of radiative forcings. Second, there was ‘mission creep’. As funding was reduced for NASA space exploration and US Department of Energy (DOE) nuclear programs, climate modeling became an alternative source of revenue. The simplified climate models were accepted without question. Third, there was a deliberate decision by various outside interests, including environmentalists and politicians, to exploit the fictional climate apocalypse to further their own causes. The climate models used to perpetuate the climate fraud are no longer based on science. They are political models based on the pseudoscience of radiative forcings, feedbacks and climate sensitivity that are ‘tuned’ to meet political goals. The climate modelers are paid to provide the climate lies and propaganda needed to justify public policy that restricts the use of fossil fuels. Climate science has degenerated beyond dogma into the Imperial Cult of the Global Warming Apocalypse.


Surface energy transfer is considered in more detail in the book Finding simplicity in a complex world – The role of the diurnal temperature cycle in climate energy transfer and climate change, Clark and Rörsch, 2023. Further information is available at ClarkRorschPublications.com.


A more detailed description of the climate fraud is provided in the Ventura Photonics Climate Post VPCP30 The equilibrium climate fraud [VPCP30.ClimateFraud].



The Steady State Climate Fantasy Land


Non-equilibrium climate energy transfer was described by Fourier in his memoirs (reviews) from 1824 and 1827. However, this work was ignored, and the myth of the equilibrium climate was introduced by Pouillet in 1836. Physical reality was abandoned in favor of mathematical simplicity. In the early 1860s, Tyndall speculated that changes in the atmospheric concentration of CO2 could cycle the earth through an Ice Age. This was gradually transformed into an irrational belief that the CO2 produced by fossil fuel combustion could change the temperature of the earth. The basic steady state fantasy land was introduced by Arrhenius in 1896. He simplified his ‘climate’ to a uniform air volume with a partially reflective blackbody surface illuminated by a 24-hour average sun. When the CO2 concentration is increased in this model, there is an initial decrease in the long wave IR flux returned to space. The equilibrium assumption requires that the surface temperature must increase to restore the energy balance. This is just a mathematical artifact of the calculation. In the real atmosphere, such temperature changes are too small to measure in the normal daily and seasonal temperature fluctuations.


Starting in the early 1960s, it was decided that the rather primitive global circulation models (GCMs) used for weather forecasting at that time could be modified using the equilibrium climate assumption and radiative transfer calculations to predict ‘climate’. The first step was the development of a one dimensional radiative convective (1-D RC) model that could calculate a temperature profile of the atmosphere for incorporation into a GCM. This work is described in four key papers, Manabe and Möller [1961], Manabe and Strickler [1964], Manabe and Wetherald (M&W) [1967] and M&W [1975], (MM61, MS64, MW67 and MW75).


A radiative transfer algorithm is a mathematical tool that can be used to calculate the IR radiation field in the atmosphere using the temperature and species profiles and spectroscopic data as inputs. It simply provides a ‘snapshot’ of the IR flux in the atmosphere for the conditions specified. In the early climate models, the radiative transfer calculations were extended to include the heating effects of the absorbed solar flux by the near IR (NIR) overtone bands of H2O and CO2 and by the UV/vis absorption of ozone. The net heating and cooling fluxes at each atmospheric level in the model were then converted to rates of heating and cooling by dividing by the heat capacity of each air layer.


This calculation was incorporated into an iterative finite difference or ‘time marching’ procedure. The temperature change for each atmospheric level was calculated by multiplying the rates of heating and cooling by a finite time interval, usually 8 hours. These temperature changes were then added to the temperatures from the previous iteration and used to obtain a new set of air layer temperatures. This process was repeated until the model reached a steady state where there was almost no further temperature change. This typically took a year or more of model time (step time multiplied by the number of iterations), although the computer calculation time was much less. Manabe and his group never explained how these small changes in temperature could be measured in a real atmosphere with much larger daily and seasonal temperature changes. The temperature change in each step is ‘buried in the noise’ and does not accumulate. The ‘time marching’ procedure introduced in MM61 is invalid.
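The iterative scheme described above can be sketched in a few lines. This toy model is illustrative only: the layer count, relaxation rate and target profile are invented, not the MM61/MW67 values; the relaxation rate is chosen so that, as the text notes, convergence takes over a year of model time.

```python
# Toy sketch of the 'time marching' procedure: an 8 hour step is used to
# relax an isothermal air column toward an assumed steady state profile.
# All numbers here are invented for illustration, not MM61/MW67 values.

N_LAYERS = 9
DT_DAYS = 8.0 / 24.0                     # 8 hour time step, in days
RATE = 0.02                              # assumed relaxation rate, per day

temps = [288.0] * N_LAYERS               # initial isothermal column (K)
target = [288.0 - 6.5 * z for z in range(N_LAYERS)]  # assumed lapse profile

def step(temps):
    """One iteration: heating/cooling rate (K/day) times the step length."""
    return [T + (-RATE * (T - Teq)) * DT_DAYS
            for T, Teq in zip(temps, target)]

days = 0
while max(abs(T - Teq) for T, Teq in zip(temps, target)) > 0.01:
    for _ in range(3):                   # three 8 hour steps per model day
        temps = step(temps)
    days += 1

print(f"steady state reached after about {days} model days")
```

The per-step temperature change is a tiny fraction of a degree; only the assumption that every increment accumulates without loss drives the column to the steady state.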


In MW67 a fixed relative humidity (RH) distribution was added to the air layers. This created a ‘water vapor feedback’ in the model. As the air layer temperature increased, the absolute water vapor concentration increased to maintain the fixed RH distribution. This amplified the initial mathematical warming artifact created when the CO2 concentration was increased in the model. M&W did not understand that surface temperature is determined by the downward LWIR flux from the air layers closest to the surface. Half of this flux originates from within the first 100 m layer. Here the RH and the absolute humidity can vary significantly over the daily and seasonal cycles. Almost all of the downward LWIR flux from the troposphere to the surface originates from within the first 2 km layer. The atmospheric pressure at 2 km altitude is near 800 mb.


M&W used a hypothetical doubling of the CO2 concentration to evaluate their model. The increase in surface temperature from such a doubling became accepted as a ‘benchmark’ that could be used to compare climate models. This became known as the ‘equilibrium climate sensitivity’. For the MW67 model, the sensitivity was 2.9 °C. For the later H81 it was ‘tuned’ to 2.8 °C. The correct value is ‘too small to measure’.


MW67 is reviewed in more detail in the Ventura Photonics Climate Post VPCP 27.


In MW75, M&W incorporated the MW67 model into each unit cell of a ‘highly simplified’ GCM. When the CO2 concentration was increased, this GCM had to create global warming, by definition from the mathematical artifacts in the MW67 algorithms. This established a ‘benchmark warming’ that later models had to meet in order to justify continued funding.



Mission Creep 1: The NASA Copycats


The climate modelers at NASA started out by studying radiative transfer in planetary atmospheres, mainly Mars and Venus. On both planets, the atmospheric composition is approximately 95% CO2. As NASA funding was reduced towards the end of the Apollo (moon landing) program that finished in 1972, these modelers began to expand their work and analyze the earth’s climate [Hansen, 2000]. They had no understanding of climate energy transfer on a rotating water planet. Melodramatic claims about climate change related to ‘runaway’ greenhouse effects or ‘air pollution’ were used to justify the extension of their radiative transfer studies to the earth’s atmosphere. They failed to conduct any model validation or ‘due diligence’ and blindly accepted the 1-D RC equilibrium air column model and the CO2 warming dogma. They just wanted funding to continue their work on atmospheric energy transfer. They followed M&W into the equilibrium climate fantasy land and have never left.


During the 1970s there was a global cooling scare related to the coupling of the cooling phase of the Atlantic Multi-decadal Oscillation (AMO) to the weather station record. Since ocean cooling was not part of the climate change narrative, Rasool and Schneider [1971] claimed that an increase in aerosol concentration could override any CO2 induced warming and produce atmospheric cooling. If this continued, then it could trigger an Ice Age. At the time, both authors were with NASA Goddard. In 1975, Ramanathan at NASA Langley claimed that an increase in the atmospheric concentration of chlorofluorocarbons (CFCs) could produce an increase in surface temperature. This was later recognized as the first use of radiative forcing, although the term ‘radiative forcing’ was not introduced until later [Ramaswamy et al, 2019]. Both of these papers used a 1-D equilibrium model and claimed that this could predict the earth’s ‘climate’. The calculated increases in surface temperature produced by increases in ‘greenhouse gas’ concentration were just mathematical artifacts of the modeling assumptions.


In H76, a group from NASA Goddard that included Hansen extended MW67 to include additional ‘minor species’ including N2O, CH4, NH3, HNO3, C2H4, SO2, CCl2F2, CCl3F, CH3Cl and CCl4. The foundation of the equilibrium fantasy land was completed with the publication of H81. This added a slab ocean model, the CO2 doubling ritual and the calculation of the global temperature record using a contrived set of ‘radiative forcings’ to the 1-D RC model. H81 created the prototype political climate model. The complexities of the earth’s climate were reduced to the single time series of numbers in the global average temperature record and the climate model used a contrived set of pseudoscientific radiative forcings to match these numbers. There are 9 fundamental scientific errors or groups of errors in H81. These are discussed in detail in the Ventura Photonics Climate Post VPCP 17 A review of the 1981 paper by Hansen et al [VPCP12.H81].


The early climate modeling work at NASA created a ‘pipeline’ for the growth of the climate fraud. Graduate students trained in radiative transfer calculations became equilibrium climate modelers at NASA without any understanding of time dependent climate energy transfer. Their expertise was in the computer algorithms needed for radiative transfer and later, fluid dynamics. As these researchers moved on to other positions outside of NASA, they took their climate modeling expertise with them. This established a closed group of ‘climate cronies’ that ‘peer reviewed’ each other’s publications and grant proposals.



Radiative Forcing and ‘Extreme Weather Events’


H81 (figure 5) claimed that the ‘global temperature record’ could be simulated using a combination of increasing CO2 concentration, changes to the solar flux and variations in aerosol concentrations. As computer technology improved, coupled atmosphere-ocean GCMs were used to simulate the global mean climate record using a contrived set of radiative forcings. The 1-D RC algorithms were hidden inside the unit cells of the GCMs. In the Third IPCC Climate Assessment Report (TAR) [IPCC, 2001], the number of ‘forcing agents’ used in the climate models had increased to 15. In addition, starting with the TAR, the radiative forcings were separated into ‘anthropogenic’ and ‘natural’ forcings. This was used as a political tool to control the energy supply by claiming an anthropogenic cause for ‘extreme weather events’. Most of the initial work was performed at the UK Hadley Climate Center [Stott et al, 2000, Tett et al, 2000]. This center was established in 1990 to provide climate propaganda for Margaret Thatcher, the UK prime minister. It continues to support the climate fraud, even today. Starting in 2012, the Bulletin of the American Meteorological Society has published an annual supplement ‘Explaining extreme climate events of [Year] from a climate perspective’. Most of the papers published in this series claim that the ‘anthropogenic’ radiative forcings used in the climate models have led to an increase in ‘extreme weather events’ of one kind or another [Herring et al, 2022 and prior publications in this series]. All of this, of course, is pseudoscientific nonsense.


This is discussed in more detail in VPCP 25 A greenhouse gas forcing does not produce a measurable change in the surface temperature of the earth [VPCP25.RadiativeForcing].



Mission Creep 2: Climate Model Intercomparison


The Atomic Energy Commission was made part of the newly formed Department of Energy in 1977. As funding for nuclear programs was reduced, the National Labs were no longer restricted to their nuclear mission. They were allowed to jump on the climate bandwagon and became ‘climate modelers for hire - with supercomputers’. Lawrence Livermore Labs became a major center for the Coupled Model Intercomparison Project, CMIP. This allowed one fraudulent climate model to be compared with another without the need for model validation using measured data. The underlying pseudoscience was accepted without question. CMIP has now grown to at least 49 modeling groups. The CMIP has become a major source of fraudulent climate model data for the IPCC [Hausfather, 2019].



The Exploitation of the Climate Fraud


The exploitation of the climate modeling fraud by outside groups started in the 1970s [Hecht, 2007]. However, nature did not cooperate: the warming phase of the AMO was not detected in the climate record until 1985, by Wigley et al. The UN Intergovernmental Panel on Climate Change (IPCC) was established in 1988 and the Global Change Research Program (USGCRP) was established by presidential initiative in 1989 and mandated by Congress in 1990. In the UK, the Hadley Climate Center was established in 1990.


The mission of the IPCC is to assess “the scientific, technical and socioeconomic information relevant for the understanding of the risk of human-induced climate change.” This is based on the a priori assumption that human activities are causing CO2 induced global warming. There never was an attempt to objectively evaluate the scientific evidence of the cause of climate change. The mission of the USGCRP is ‘to coordinate federal research and investments in understanding the forces shaping the global environment, both human and natural, and their impacts on society’. Here, the USGCRP has failed in its mission to find the ‘natural forces’ such as ocean oscillations, downslope winds and high pressure domes that are responsible for climate change and extreme weather events such as fires, floods and droughts. The USGCRP has blindly copied the IPCC climate assessment reports and accepted the climate model results as real without any attempt at validation. Few, if any, of the analysts associated with the USGCRP have any expertise in climate energy transfer and many are not scientists at all.


A good example of this is the 2020 US Geological Survey Report, ‘Using information from global climate models to inform policymaking-The role of the U.S. Geological Survey’ [Terando et al, 2020]. Figure 1 from this report shows the global mean temperature record divided into ‘anthropogenic’ and ‘natural’ components and ‘attributes’ the anthropogenic warming to ‘human causes’. This figure was copied from the Fourth US National Climate Assessment (NCA4) and this in turn was copied from the Fifth IPCC climate assessment (AR5). The original article was published by Jones, Stott and Christidis in 2013. These authors are with the UK Met Office and Stott was one of the authors of the 2001 paper that started the extreme weather ‘attribution’ fraud. Terando et al is considered in more detail in VPCP 26 The corruption of climate science [VPCP26.Corruption].


Various political and environmental groups have been very successful at exploiting the climate fraud to further their own interests. Further details are provided in VPCP 30 The equilibrium climate modeling fraud, Section 12 [VPCP30.ClimateFraud].


The USGCRP and the attribution fraud are discussed in more detail in three posts, VPCP 07, Greenwashing Congress with NCA5: Refilling the pork barrel for the deep state [VPCP07.Greenwash], VPCP18 Down the rabbit hole [VPCP18.RabbitHole] and VPCP24 Follow the yellow brick road [VPCP24.YellowBrickRoad].



The Evidence for the Climate Modeling Fraud


The basic requirement of any scientific or engineering model is that it should correctly predict the measured variables of the physical process or system it is configured to simulate. If the model is wrong, it should be modified or replaced.


The ‘equilibrium’ climate models are also fraudulent, by definition, because of the assumptions used before a single line of computer code is even written. When the atmospheric concentration of a greenhouse gas such as CO2 is increased, there is a small decrease in the LWIR flux emitted to space at the top of the atmosphere within the spectral emission regions of the greenhouse gas of interest. The radiative transfer calculation is correct. It is then claimed that this ‘radiative forcing’ changes the energy balance of the earth and that the surface ‘adjusts’ to a new ‘equilibrium state’ with a higher temperature. This is simply wrong.


1) The earth is never in equilibrium.

2) An LWIR ‘greenhouse gas’ radiative forcing does not change the energy balance of the earth, nor does it produce a measurable change in the surface temperature.

3) There can be no ‘CO2 signal’ in the global mean temperature record.

4) There is no ‘water vapor feedback’.

5) There is no ‘climate sensitivity’ to CO2.

6) Climate GCMs have no predictive capabilities over climate time scales.


The evidence for the climate modeling fraud in these six areas will now be summarized. Links to posts with more details are included. Additional information is also provided in Clark and Rörsch [2023] (CR23). A more detailed discussion of the climate fraud is given in VPCP30.ClimateFraud.



1) The Earth is Never in Equilibrium


There are significant diurnal and seasonal time delays or phase shifts between the peak solar flux and the surface temperature response. These delays are clear evidence for a non-equilibrium thermal response. The seasonal subsurface phase shifts over land were described by Fourier in 1824. At mid latitudes, there are well defined seasonal phase shifts in the ocean surface temperature response. These are often coupled to the weather station record by weather systems that form over the ocean and then move over land. This is discussed in more detail in VPCP 028 Time Dependent Energy Transfer: The Forgotten Legacy of Joseph Fourier [VPCP 028 Fourier]. Further information is available in CR23.


2) An LWIR ‘Greenhouse Gas’ Radiative Forcing does not Change the Energy Balance of the Earth, nor does it Produce a Measurable Change in the Surface Temperature


Since 1800, the atmospheric CO2 concentration has increased by about 140 ppm from 280 to 420 ppm. This has produced a decrease in the LWIR flux at TOA near 2 W m-2. In addition to the small decrease in LWIR flux at TOA, there is a similar increase in downward LWIR flux to the surface from the lower troposphere [Harde, 2017]. This is shown in Figure 1.
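The ~2 W m-2 figure is consistent with the widely used logarithmic approximation for the change in LWIR flux at TOA, ΔF ≈ 5.35 ln(C/C0) W m-2 [Myhre et al, 1998]. A quick check (the 5.35 coefficient is that standard approximation, not a value taken from this post):

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified logarithmic CO2 forcing (W m-2): 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

delta_f = co2_forcing(420.0)     # 280 -> 420 ppm increase since ~1800
print(f"TOA LWIR flux change for 280 -> 420 ppm: {delta_f:.2f} W m-2")
```

The result is close to 2.2 W m-2, in line with the value quoted above.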





Figure 1: a) the measured increase in atmospheric CO2 concentration from 1800 (Keeling curve) and b) calculated changes in atmospheric LWIR flux produced by an increase in atmospheric CO2 concentration from 0 to 760 ppm.



The decrease in LWIR flux emitted at the top of the atmosphere (TOA) produced by an increase in atmospheric greenhouse gas concentration is decoupled from the surface by molecular line broadening. Almost all of the downward LWIR flux from the atmosphere to the surface is emitted from within the first 2 km layer of troposphere. Approximately half of this is emitted from within the first 100 m layer. In the troposphere, the radiative heating and cooling processes in a local air parcel are fully coupled to the turbulent convective motion. Any small amount of heat released into the troposphere by an increase in the atmospheric concentration of ‘greenhouse gases’ is dissipated by wideband LWIR emission. This is illustrated in Figure 2 [Wijngaarden and Happer, 2022, Gibert et al, 2007, CR23].
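The narrowing of pressure-broadened lines with altitude can be illustrated with the barometric formula: the Lorentz half-width scales approximately with ambient pressure. The surface half-width and scale height below are generic round numbers for illustration, not values from Figure 2:

```python
import math

SCALE_HEIGHT_KM = 7.6    # assumed pressure scale height of the atmosphere
HWHM_SURFACE = 0.07      # assumed Lorentz half-width at the surface, cm-1

def pressure_atm(z_km):
    """Approximate pressure (atm) from the barometric formula."""
    return math.exp(-z_km / SCALE_HEIGHT_KM)

def lorentz_hwhm(z_km):
    """Pressure-broadened half-width narrows in proportion to pressure."""
    return HWHM_SURFACE * pressure_atm(z_km)

for z in (0.0, 5.0, 10.0):   # the altitudes shown in Figure 2c
    print(f"{z:4.0f} km: p = {pressure_atm(z):.3f} atm, "
          f"HWHM = {lorentz_hwhm(z):.4f} cm-1")
```

By 10 km the half-width has fallen to roughly a quarter of its surface value, which is the decoupling mechanism described in the text.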





Figure 2: a) HITRAN linestrengths at 296 K for H2O, CO2, O3, N2O and CH4 plotted vs. wavenumber from 0 to 2500 cm-1. The smooth black line is the blackbody emission at 296 K. The number of lines plotted is indicated for each species. Because of the large number of lines, only 10% of the O3 lines, selected randomly, are plotted. The main atmospheric absorption bands of interest for CO2 are circled in red [Wijngaarden and Happer, 2022]. b) Transition from absorption-emission to free photon flux as the linewidth decreases with altitude. Single H2O line near 231 cm-1. c) Linewidths for H2O and CO2 lines in the 590 to 600 cm-1 spectral region for altitudes of 0, 5 and 10 km. d) Cumulative fraction of the downward flux at the surface vs. altitude for surface temperatures of 272 and 300 K, each with 20 and 70% relative humidity (RH). Almost all of the downward flux reaching the surface originates from within the first 2 km layer. e) The energy transfer processes for a local tropospheric air parcel (in a plane-parallel atmosphere). f) The dissipation of the absorbed heat from a ‘CO2 doubling’ by the normal tropospheric energy transfer processes (schematic). The wavelength specific increase in absorption in the CO2 P and R bands is dissipated as small changes in broadband LWIR emission and gravitational potential energy (CR23). g) The vertical velocity profile in the turbulent boundary layer recorded over 10 hours at the École Polytechnique, south of Paris, July 10th 2005 using Doppler heterodyne LIDAR [Gibert et al, 2007]. The change in vertical velocity is ±2 m s-1.



In a non-equilibrium system, a flux defines a rate of heating or cooling of a thermal reservoir. At low to mid latitudes, the total LWIR cooling rate in the troposphere is between -2.0 and -2.5 °C per day [Feldman et al, 2008]. For a CO2 doubling from 300 to 600 ppm, the maximum change in this cooling rate is +0.08 °C per day [Iacono et al, 2008]. At a lapse rate of -6.5 °C km-1, an increase in temperature of +0.08 °C is produced by a decrease in altitude of 12 meters. This is equivalent to riding an elevator down four floors. The small amount of additional heat generated in the troposphere by this CO2 doubling is dissipated by wideband LWIR emission and does not couple to the surface. An LWIR ‘greenhouse gas’ forcing does not change the energy balance of the earth. This is illustrated in Figure 3. At higher altitudes in the stratosphere, near 50 km, this increase in CO2 concentration produces an increase in the cooling rate of approximately -3 °C per day as shown in Figure 3f. However, the pressure here is low, near 1 mb (0.001 atm), so the change in LWIR flux is small, near -40 μW m-2. In addition, these changes in flux do not couple downwards into the lower troposphere because of line broadening effects at lower altitudes.
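The elevator analogy follows directly from the numbers in the text (the ~3 m per floor figure is an assumed round number):

```python
LAPSE_RATE = -6.5    # degrees C per km, tropospheric lapse rate from the text
DELTA_T = 0.08       # degrees C per day, max cooling rate change for a doubling

# Altitude decrease that produces the same +0.08 degree C warming
descent_m = DELTA_T / (-LAPSE_RATE / 1000.0)   # metres
floors = descent_m / 3.0                       # assuming ~3 m per floor
print(f"descent: {descent_m:.1f} m, about {floors:.0f} floors")
```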





Figure 3: a) the spectrally resolved LWIR emission to space for 0, 400 and 800 ppm CO2 concentrations [Wijngaarden and Happer, 2022], b) the CO2 emission band on an enlarged scale, c) the difference between the 800 and 400 ppm CO2 emission, d) the total and band resolved cooling rates vs. altitude for a), e) the changes in the rate of cooling in the troposphere for a CO2 doubling from 287 to 574 ppm and f) the corresponding changes in the rate of cooling for the stratosphere.



Over the oceans, the penetration depth of the LWIR flux is less than 100 micron (0.004 inches) [Hale and Querry, 1973]. Here it is fully coupled to the much larger and more variable wind driven evaporation. Within the ±30° latitude bands the sensitivity of the latent heat to the wind speed is at least 15 W m-2/m s-1 [Yu et al, 2008]. This is illustrated in Figure 4 (CR23). For each increase in wind speed of 1 meter per second, the latent heat flux increases by 15 W m-2. The entire increase of 2 W m-2 in downward flux from the 140 ppm increase in CO2 concentration is dissipated by an increase in wind speed of 13 centimeters per second. At present, the average annual increase in CO2 concentration is near 2.4 ppm per year. This produces an increase in the downward LWIR flux to the surface of 0.034 W m-2. This is dissipated by an increase in wind speed of approximately 2 millimeters per second. An LWIR ‘greenhouse gas’ forcing does not change the temperature of the oceans.
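The wind-speed arithmetic in this paragraph can be checked directly from the quoted sensitivity and flux values:

```python
SENSITIVITY = 15.0      # W m-2 per (m s-1): latent heat flux vs. wind speed
FORCING_TOTAL = 2.0     # W m-2, downward LWIR increase for 280 -> 420 ppm
FORCING_ANNUAL = 0.034  # W m-2 per year at ~2.4 ppm per year

dv_total = FORCING_TOTAL / SENSITIVITY    # wind speed increase, m s-1
dv_annual = FORCING_ANNUAL / SENSITIVITY

print(f"total forcing dissipated by {dv_total * 100:.0f} cm/s, "
      f"annual increment by {dv_annual * 1000:.1f} mm/s")
```

This reproduces the ~13 cm s-1 and ~2 mm s-1 figures given above.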





Figure 4: a) Penetration depth (micron) of LWIR radiation into the ocean surface for 99% attenuation, 1200 to 200 cm-1. The approximate locations of the CO2 P and R branches and the overtone bands are indicated. b) The change in ocean latent heat flux per unit wind speed based on zonal averages.



Over land, almost all of the absorbed solar flux is dissipated within the same diurnal cycle. The net LWIR cooling flux is insufficient to remove the absorbed heat. As the surface warms and cools during the day, the excess heat is dissipated by moist convection. There is a convection transition temperature each evening when the surface and air temperatures equalize and convection stops. The surface then cools more slowly overnight by net LWIR emission. The convection transition temperature is reset each day by the local weather system passing through. These day to day temperature changes are much larger than any change in temperature that can be produced by the increase in downward LWIR flux from CO2. An LWIR ‘greenhouse gas’ forcing does not change the land surface temperature. This is discussed in more detail in VPCP 016 The radiation balance of the earth [VPCP16.RadiationBalance] and in CR23.



3) There can be no ‘CO2 Signal’ in the Global Mean Temperature Record


When the global mean temperature record, such as the HadCRUT4 data set, is evaluated, the dominant term is found to be the Atlantic Multi-decadal Oscillation (AMO). The AMO is a long term quasi-periodic oscillation in the surface temperature of the N. Atlantic Ocean from 0° to 60° N. Superimposed on the oscillation is a linear increase in temperature related to the recovery from the Little Ice Age (LIA) or Maunder minimum. Before 1970, the AMO and HadCRUT4 track quite closely. This includes both the long period oscillation and short term fluctuations. There is an offset that starts near 1970 with HadCRUT4 approximately 0.3 °C higher than the AMO. The short term fluctuations are still similar. The correlation coefficient between the two data sets is 0.8. The influence of the AMO extends over large areas of N. America, Western Europe and parts of Africa. The weather systems that form over the oceans and move overland couple the ocean surface temperature to the weather station data through the diurnal convection transition temperature (CR23). The 1940 AMO peak in the global temperature record has long been misunderstood or ignored.
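The 0.8 quoted above is a Pearson correlation coefficient. As a sketch of how such a number is computed, the example below uses synthetic stand-ins for the AMO and the station record: a shared quasi-periodic oscillation plus independent noise. The period, noise levels and seed are invented for illustration, not fitted to the real data:

```python
import math
import random

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Synthetic series: a shared ~65 year oscillation plus independent noise.
random.seed(1)
amo = [math.sin(2 * math.pi * y / 65.0) + random.gauss(0, 0.3)
       for y in range(140)]
record = [a + random.gauss(0, 0.3) for a in amo]

r = pearson_r(amo, record)
print(f"r = {r:.2f}")
```

A shared oscillation with modest independent noise readily produces correlations in the 0.8 to 0.9 range.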


There is still an additional part of the recent HadCRUT4 warming that is not included in the AMO signal. This may be explained as a combination of three factors. First there are urban heat islands related to population growth that were not part of the earlier record. Second, the mix of urban and rural weather stations used to create the global record has changed. Third, there are so called ‘homogenization’ adjustments that have been made to the raw temperature data. These include the ‘infilling’ of missing data and adjustments to correct for ‘bias’ related to changes in weather station location and instrumentation. It has been estimated that half of the warming in the ‘global record’ has been created by such adjustments.


This is discussed in more detail in VPCP 020 There can be no CO2 signal in the global mean temperature record [VPCP20.NoCO2Signal]


4) There is no ‘Water Vapor Feedback’


In order to match the average atmospheric temperature profile in MW67, Manabe and Wetherald added a fixed relative humidity (RH) distribution to their 1-D RC model. When the CO2 concentration was increased, the initial mathematical warming artifact was amplified by a ‘water vapor feedback’. As the temperature increased, so did the absolute water vapor pressure at fixed RH. This increased the downward LWIR flux to the surface which in turn increased the surface temperature. M&W assumed that the average of measured temperatures in a non-equilibrium atmosphere could be simulated using a steady state 1-D RC model. However, because of molecular line broadening, the downward LWIR flux to the surface is dominated by the emission from the air layers closest to the surface. As the temperature and evaporation change during the diurnal and seasonal cycles, both the relative and absolute humidities change. Any small increase in downward LWIR flux produced by a doubling of the CO2 concentration is too small to produce a measurable effect on the RH or the surface temperature. In signal processing terms, there is too much noise for the CO2 signal to be detected. As discussed in Section 2, the maximum change in the daily tropospheric cooling rate produced by a CO2 doubling is +0.08 °C. The change in temperature during each step in the time marching procedure used by M&W does not accumulate in the turbulent boundary layer near the surface. All of the 1-D RC models are therefore invalid, starting with MM61. Water vapor feedback is a mathematical artifact of the MW67 model. This is discussed in more detail in VPCP 28 A review of the 1967 paper by Manabe and Wetherald [VPCP28.MW67Review].
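The fixed-RH mechanism can be quantified with the Magnus approximation for saturation vapor pressure: at constant RH, each 1 °C of warming raises the absolute humidity by roughly 6-7%. The 77% RH and 15 °C starting point below are arbitrary illustrative values, not MW67 inputs:

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Magnus approximation for saturation vapor pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

RH = 0.77                        # assumed fixed relative humidity
for t in (15.0, 16.0):           # an illustrative 1 degree C warming
    e = RH * saturation_vapor_pressure(t)
    print(f"T = {t:.0f} C: water vapor partial pressure = {e:.2f} hPa")

increase = (saturation_vapor_pressure(16.0)
            / saturation_vapor_pressure(15.0) - 1.0) * 100.0
print(f"absolute humidity rises ~{increase:.1f}% per degree C at fixed RH")
```

In the model, this extra water vapor raises the downward LWIR flux and the surface temperature, which raises the humidity again: the amplification loop described above.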



5) There is no ‘Climate Sensitivity’ to CO2


As discussed above, an LWIR radiative forcing produced by an increase in greenhouse gas concentration does not change the energy balance of the earth, nor does it produce a measurable change in surface temperature. The LWIR flux emitted at TOA is decoupled from the surface by molecular line broadening. The change in the LWIR cooling rate produced by a greenhouse gas forcing is too small to accumulate in the diurnal variations in flux and temperature at the surface and in the turbulent surface boundary layer. In the early 1-D RC climate models, the climate sensitivity was created by the water vapor and other feedbacks. MW67 had a sensitivity of 2.9 °C for ‘clear sky’. H81 used a climate sensitivity of 2.8 °C. Various feedbacks were incorporated into the later coupled ocean-atmosphere GCM climate models. These are ‘tuned’ to match the global temperature record using a contrived set of radiative forcings. The climate sensitivity is the increase in surface temperature produced in these models by a doubling of the CO2 concentration. Any climate model that has a climate sensitivity other than ‘too small to measure’ is, by definition, fraudulent.


The earlier global mean temperature record is dominated by the coupling of the AMO to the weather station record. Later warming includes urban heat island effects, changes to the rural/urban station mix and significant ‘homogenization’ adjustments.


This is discussed in more detail in VPCP 020 There can be no CO2 signal in the global mean temperature record [VPCP20.NoCO2Signal].


6) Climate GCMs have no Predictive Capabilities over Climate Time Scales


The primary interest of Manabe’s group was the mathematical challenge of developing a global circulation climate model. They did not understand that this was impossible. The climate GCMs require the solution of very large numbers of coupled non-linear equations. The errors associated with these solutions increase over time. In addition, the solutions may become unstable. This was demonstrated by Lorenz in 1963. A practical limit to weather forecasting models is about 12 days ahead [Lorenz, 1973]. Furthermore, there was no equilibrium average climate that could be used to force a solution to their equations. The GCM climate models may be considered as quasi-stable pseudorandom number generators all tuned to the same starting seed – to match the global mean temperature record. This is discussed in more detail in VPCP 019 Explaining the Climate Fraud Section 10.0 Lorenz Instabilities.
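The sensitivity to initial conditions demonstrated by Lorenz in 1963 is easy to reproduce. The sketch below integrates the Lorenz system with a simple forward-Euler step (the step size and perturbation size are illustrative choices, not from Lorenz's paper) and shows that a perturbation of one part in a billion grows to order unity:

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)       # perturbed by one part in a billion

for _ in range(6000):            # 30 time units at dt = 0.005
    a, b = lorenz_step(a), lorenz_step(b)

separation = max(abs(p - q) for p, q in zip(a, b))
print(f"separation after 6000 steps: {separation:.3f}")
```

The two trajectories decorrelate completely over a few tens of time units, which is the behavior behind the roughly 12 day practical limit on weather forecasts noted above.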



The Imperial Cult of the Global Warming Apocalypse


Eisenhower’s warning about the corruption of science by government funding has come true. Climate science has been thoroughly corrupted by a deluge of money. The scientific dogma of an equilibrium average climate became accepted in the nineteenth century. The steady state climate model fantasy introduced by Arrhenius was copied without question by Manabe’s group starting in the 1960s. The NASA climate modelers blindly copied the MW67 model. They used melodramatic predictions of the global warming apocalypse as an excuse to obtain funding for their radiative transfer calculations and later extended their work to the fluid dynamics of a global circulation model. They were soon trapped in a web of lies of their own making. M&W set the benchmark for climate sensitivity at 2.9 °C with their water vapor feedback. H81 used 2.8 °C. Little has changed in over 40 years. For the 42 models in CMIP6, the climate sensitivity was 3.8 ±1.1 °C.


The exploitation of the fraudulent climate models for political purposes increased significantly after the formation of the IPCC in 1988. Climate modeling degenerated past dogma into The Imperial Cult of the Global Warming Apocalypse. Continued funding requires that the climate modelers keep on playing computer games in the equilibrium climate fantasy land. They have become prophets of the Imperial Cult. They must believe in the pseudoscience of radiative forcings, feedbacks and climate sensitivity. Scientific reason will not prevail. The climate modelers are paid to provide the climate lies and propaganda needed to justify public policy to restrict the use of fossil fuels. In order to stop the Net Zero fantasy, it is also necessary to expose and stop the underlying climate modeling fantasy.


The Imperial Cult of the Global Warming Apocalypse believes that it has the divine right to save the world from a non-existent problem created using the pseudoscientific fantasy of radiative forcings, feedbacks and climate sensitivity. There is no cost or technical justification for The Green Energy Fantasy. The utility scale use of wind and solar power must fail. The raw materials and battery technology do not exist. The fire risk from high energy density lithium batteries is too great. Hydrogen is an explosion waiting to happen.