WHAT PART OF ‘TOO SMALL TO MEASURE’ DON’T YOU UNDERSTAND?


Ventura Photonics Climate Post 21, VPCP 0021.01


Roy Clark





SUMMARY

The surface temperature of the earth is determined by the time dependent, interactive energy transfer processes that are coupled to the surface thermal reservoir. In order to evaluate the effect of an increase in atmospheric CO2 concentration on the surface temperature it is necessary to determine the change in heat content or enthalpy of this reservoir produced by the increase in downward longwave IR (LWIR) flux to the surface by CO2 over the diurnal and seasonal temperature cycles. This is discussed in detail in the recent book Finding Simplicity in a Complex World – The Role of the Diurnal Cycle in Climate Energy Transfer and Climate Change by Clark and Rörsch, Amazon, 2023. The results of such thermal engineering calculations show that any CO2 induced changes in surface temperature are ‘too small to measure’. Climate model results from the CMIP6 model ensemble show a ‘climate sensitivity’ or increase in global average temperature in the range 1.8 to 5.6 °C produced by a doubling of the atmospheric CO2 concentration from 280 to 560 parts per million (ppm). This is based on the assumption that the decrease in LWIR flux produced at the top of the atmosphere (TOA) by an increase in the atmospheric CO2 concentration perturbs the energy balance of the earth. The global surface temperature is then presumed to warm to a new ‘equilibrium state’ that restores the energy balance. The elaborate scheme of radiative forcings, feedbacks and climate sensitivity used by the climate models is pseudoscientific nonsense. Physical reality has been abandoned in favor of mathematical simplicity. Eisenhower’s warning about the corruption of science by government funding has come true. The climate modelers are no longer scientists. They have become prophets of the Imperial Cult of the Global Warming Apocalypse.


The technical part of the climate fraud began in the nineteenth century with the ‘equilibrium air column’ used by Arrhenius. His oversimplified climate model had to create an increase in ‘equilibrium surface temperature’ when the atmospheric CO2 concentration was increased. Later, radiative transfer algorithms and a ‘water vapor feedback’ were added to the equilibrium air column by Manabe and Wetherald (M&W) to create a one dimensional radiative convective (1-D RC) model. The CO2 warming artifacts were now amplified by water vapor. M&W then went on to incorporate the 1-D RC model into every unit cell of a ‘highly simplified’ global circulation model. Other groups, notably Hansen et al at NASA Goddard ‘improved’ the 1-D RC model with additional greenhouse gases, various aerosols and a ‘slab’ ocean. They claimed that this model could simulate a ‘global average temperature’ derived from weather station and related data. The obvious contribution from the Atlantic Multi-decadal Oscillation (AMO) was ignored. By 1981, the foundation of the climate fraud using forcings, feedbacks and climate sensitivity had been established.


The number of groups using large scale climate models has increased from the 2 listed in the Charney report in 1979 to about 50 today. All of these models use a contrived set of ‘radiative forcings’ to match a ‘global mean temperature record’ based on a ‘homogenized’ temperature record derived from weather station and ocean surface temperature data. In addition, satellite based radiometers are used to generate a highly simplified ‘radiation balance’ of the earth. The radiative forcings are also combined with the ‘global mean temperature record’ to create a ‘measured’ climate sensitivity that can be compared to the climate model results.


Two external factors contributed to the growth of the climate fraud. The first was ‘mission creep’. As funding was reduced for NASA space exploration and for DOE nuclear programs, climate modeling became an alternative source of revenue. There was also a deliberate decision by various outside interests, including environmentalists and politicians, to exploit the fictional climate apocalypse to further their own causes. The World Meteorological Organization (WMO) and the United Nations Environmental Program (UNEP) were used to promote the global warming scare. The UN Intergovernmental Panel on Climate Change (UN IPCC) was established in 1988 and the US Global Change Research Program (USGCRP) was established by Presidential initiative in 1989 and mandated by Congress in 1990. The IPCC has used the fraudulent climate models to create the illusion that a dangerous climate warming is being created by the increase in CO2 concentration. The USGCRP has blindly copied the IPCC reports.


This article describes the evolution of the climate modeling fraud from the nineteenth century equilibrium air column to the massive, multi-trillion dollar fraud we have today. The main focus is the evolution of the technical fraud and the failure to detect the climate fraud through peer review or government oversight.


Keywords: Carbon dioxide, climate change, climate sensitivity, greenhouse gas, ocean oscillations, radiation balance, radiative forcing, radiative transfer, surface temperature, water vapor feedback.



INTRODUCTION

The earth is a rotating water planet that has an atmosphere with an infrared radiation field. The troposphere functions as an open cycle heat engine that removes part of the heat from the surface by moist convection and transfers it to the middle troposphere. From here it is radiated to space, mainly by the water emission bands. The temperature at the surface-air interface is determined by the time dependent, interactive energy transfer processes that are coupled to the surface thermal reservoir. A change in temperature is determined by a change in heat content or enthalpy of this reservoir divided by the local heat capacity. The dominant surface energy transfer processes are the absorbed solar flux, the net LWIR flux, the moist convection or evapotranspiration and the subsurface thermal transport. (This does not include rainfall or freeze/thaw effects.) Over the oceans, approximately 90% of the solar flux is absorbed within the first 10 meter layer. The heat capacity of the oceans stabilizes the climate and reduces the temperature variations. There is no requirement for an exact flux balance between the solar heating and the surface cooling terms. The quasi-periodic ocean oscillations provide a natural ‘noise floor’ for the surface temperature. There are also significant time delays or phase shifts between the peak solar flux and the peak temperature response. These are clear evidence of a non-equilibrium thermal system [Clark and Rörsch, 2023] (CR23).
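The temperature change of the reservoir follows directly from the enthalpy change divided by the heat capacity. A minimal numerical sketch of this relationship is given below; it is an illustration added here, not a calculation from CR23, and the 1 W m-2 imbalance and one year interval are hypothetical inputs chosen only to show the scale of the numbers.

```python
# Illustrative only (not from CR23): a change in surface temperature is the
# change in heat content (enthalpy) of the thermal reservoir divided by its
# heat capacity.  Assumed values: a 10 m ocean layer (within which ~90% of
# the solar flux is absorbed) and a hypothetical sustained net flux
# imbalance of 1 W m-2.
RHO_WATER = 1000.0   # sea water density, kg m-3 (rounded)
CP_WATER = 4186.0    # specific heat of water, J kg-1 K-1
LAYER_DEPTH = 10.0   # m, assumed absorbing layer

def delta_t_reservoir(flux_w_m2: float, seconds: float) -> float:
    """Temperature change (K) = enthalpy change / heat capacity, per unit area."""
    heat_capacity = RHO_WATER * CP_WATER * LAYER_DEPTH  # J m-2 K-1
    delta_enthalpy = flux_w_m2 * seconds                # J m-2
    return delta_enthalpy / heat_capacity

SECONDS_PER_YEAR = 365.25 * 24 * 3600
print(delta_t_reservoir(1.0, SECONDS_PER_YEAR))  # ~0.75 K per year
```

The same division applies to any reservoir; over land the effective heat capacity is much smaller and the diurnal temperature swing correspondingly larger.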


Since the start of the Industrial Revolution about 200 years ago, the atmospheric concentration of CO2 has increased by approximately 140 parts per million (ppm), from 280 to 420 ppm [Keeling, 2023]. This has produced a decrease near 2 W m-2 in the longwave IR (LWIR) flux emitted to space at the top of the atmosphere (TOA) within the spectral range of the CO2 emission bands. There is also a similar increase in the downward LWIR flux from the lower troposphere to the surface [Harde, 2017, Clark, 2013]. At present, the annual average increase in CO2 concentration is near 2.4 ppm. This produces an annual increase in the downward LWIR flux to the surface of approximately 0.034 W m-2. In order to evaluate the effect of this increase in atmospheric CO2 concentration on the surface temperature it is necessary to determine the change in enthalpy of the surface thermal reservoir produced by the increase in downward LWIR flux to the surface over the diurnal and seasonal temperature cycles. The results of such thermal engineering calculations [CR23, Chapter 8] show that:


1) The additional absorption of 2 W m-2 by the CO2 bands has not changed the temperature of the troposphere. Nor does it change the energy balance of the earth. The additional heat is simply reradiated to space as wideband LWIR emission.

2) The 2 W m-2 increase in downward LWIR flux to the surface has not produced a measurable change to land or ocean surface temperatures.

3) The annual increase of 0.034 W m-2 in downward LWIR flux to the surface cannot increase the ‘frequency and intensity’ of ‘extreme weather events’.


Any temperature increases produced by these changes in LWIR flux are ‘too small to measure’. In addition, CO2 is a good plant fertilizer, so an increase in CO2 concentration provides a major agricultural benefit – enhanced crop production [CO2 Science, 2023].
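For scale, the flux changes quoted above can be compared with the widely used logarithmic approximation for CO2 forcing, ΔF ≈ 5.35 ln(C/C0). The short sketch below is an illustration added here for a consistency check; the formula is the standard logarithmic fit, not a calculation taken from this article.

```python
import math

# Consistency check (illustration only): the common logarithmic fit for the
# change in LWIR flux at TOA from a change in CO2 concentration,
# dF = 5.35 * ln(C / C0), reproduces the approximate figures quoted above.
def co2_flux_change(c_ppm: float, c0_ppm: float) -> float:
    """Approximate change in LWIR flux (W m-2) for a CO2 concentration change."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_flux_change(420.0, 280.0))  # ~2.2 W m-2 since the Industrial Revolution
print(co2_flux_change(422.4, 420.0))  # ~0.03 W m-2 for one year's 2.4 ppm rise
print(co2_flux_change(560.0, 280.0))  # ~3.7 W m-2 for a CO2 doubling
```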


Climate model results from the CMIP6 model ensemble show a ‘climate sensitivity’ to CO2 in the range 1.8 to 5.6 °C [IPCC 2021, Chap. 7, Zelinka et al, 2020, Hausfather, 2019]. This climate sensitivity is the increase in ‘global average temperature’ calculated by the climate models for a doubling of the CO2 concentration, usually from 280 to 560 ppm. The climate model results have been used to establish the 2 or 1.5 °C temperature limit incorporated into the Paris Climate Accord.


The introduction to Chapter 7 of the Working Group 1 Report in the latest UN Intergovernmental Panel on Climate Change (IPCC) Climate Assessment, AR6, WG1 The Earth’s energy budget, climate feedbacks, and climate sensitivity [IPCC, 2021] starts:


This chapter assesses the present state of knowledge of Earth’s energy budget, that is, the main flows of energy into and out of the Earth system, and how these energy flows govern the climate response to a radiative forcing. Changes in atmospheric composition and land use, like those caused by anthropogenic greenhouse gas emissions and emissions of aerosols and their precursors, affect climate through perturbations to Earth’s top-of-atmosphere energy budget. The effective radiative forcings (ERFs) quantify these perturbations, including any consequent adjustment to the climate system (but excluding surface temperature response). How the climate system responds to a given forcing is determined by climate feedbacks associated with physical, biogeophysical and biogeochemical processes. These feedback processes are assessed, as are useful measures of global climate response, namely equilibrium climate sensitivity (ECS) and the transient climate response (TCR).


A more concise summary was provided by Knutti and Hegerl [2008]:


When the radiation balance of the Earth is perturbed, the global surface temperature will warm and adjust to a new equilibrium state.


This description of climate energy transfer in terms of radiative forcings, feedbacks and climate sensitivity in an equilibrium average climate is pseudoscientific nonsense. The climate energy transfer processes have been oversimplified using the equilibrium climate assumption. When the atmospheric concentration of ‘greenhouse gases’ is increased, the climate models have been ‘tuned’ to create an increase in surface temperature that is supposed to match a ‘global mean temperature change’. The climate sensitivity is used as a ‘benchmark’ to evaluate climate model performance. The climate models are fraudulent, by definition, before any software code is written because of the equilibrium assumption and other simplifications incorporated into the models.


The use of the equilibrium climate assumption to oversimplify climate energy transfer began in the nineteenth century and became accepted scientific dogma. It was adopted without question when the first computer climate models were developed, starting in the 1960s. As funding for space exploration and then nuclear energy diminished in the 1970s, various groups at NASA and the old Atomic Energy Commission [part of the Department of Energy (DOE) since 1977], decided to jump on the climate bandwagon. The mathematical artifacts created by the ‘equilibrium air column’ assumptions were accepted without question. Most of the expertise needed for climate studies involves mathematical analysis and computer programming. The physics of the surface energy transfer was neglected. The climate modelers soon became trapped in a web of lies of their own making. Melodramatic prophesies of climate warming became such a lucrative source of funding that the scientific process of hypothesis and discovery collapsed. Various political and environmental groups also decided to exploit global warming to further their own interests. The equilibrium assumption has degenerated beyond scientific dogma and has become part of the creed of the Imperial Cult of the Global Warming Apocalypse.


The technical foundation of the multi-trillion dollar climate fraud that we have today was established by the work of Manabe and Wetherald at NOAA and the ‘planetary atmospheres’ groups at NASA between 1967 and 1981. They blindly accepted the equilibrium climate assumption and used their fraudulent model results to create the illusion of a dangerous climate warming produced by fossil fuel combustion. Physical reality was abandoned in favor of mathematical simplicity. The climate modelers have been playing computer games in an equilibrium climate fantasy land since the 1960s. Eisenhower’s warning about the corruption of science by government funding has come true. The climate modelers are no longer scientists. They have become prophets of the Imperial Cult of the Global Warming Apocalypse. They have chosen to worship the sacred spaghetti plots generated by the climate models. The simplistic flux balance equations have become the Divine Text of the Apocalypse. There is a Holy Trinity of fraud. The radiative forcings must change the temperature of the earth. The satellite radiometers must create the desired climate energy imbalance. The ‘global mean temperature change’ must reveal the climate sensitivity. There has to be an equilibrium climate state that is perturbed by CO2 or the entire climate pyramid scheme collapses. Physical reality must not be allowed to interfere. Instead of a flat earth, there has to be a flat ocean where there are no wind driven oscillations and no phase shifts.


In order to understand how this fraud evolved, it is necessary to start with a brief review of the history of climate science from the nineteenth century to the publication of the first one dimensional radiative convective (1-D RC) model by Manabe and Wetherald [1967] (MW67). The assumptions and errors introduced in MW67 are then considered in detail. The mathematical warming artifacts created by MW67 were exploited to establish two research bandwagons. The first was the development of large scale global circulation climate models and the second was the addition of other ‘greenhouse gases’ to the MW67 model that established the pseudoscience of radiative forcing. Starting with the work of Hansen et al in 1981, a contrived set of radiative forcings has been used to create an approximate match to a ‘global mean temperature change’ [Hansen et al, 1981]. As computer technology improved, the climate models became more complex and the equilibrium assumption shifted from the air column to a planetary average energy balance. The equilibrium air column approach was incorporated into each unit cell of the GCM. The climate models are still evaluated using a climate sensitivity and tuned to match the global mean temperature record. Radiative forcings, feedbacks and climate sensitivity are still the foundation of the climate fraud today [IPCC, 2021]. Little has changed in over 40 years. These areas will now be considered in more detail.



HISTORICAL BACKGROUND

The temperature of the earth was discussed by Joseph Fourier in two similar memoires (reviews) published in 1824 and 1827 [Fourier, 1824, 1827]. He correctly described the time dependent heating of the earth’s land surface by the solar flux. He also described ocean solar heating and atmospheric cooling by convection. However, he did not use the term ‘greenhouse effect’. Instead he discussed a solar calorimeter with glass windows that had been developed by Saussure. An important and long neglected part of Fourier’s work was the description of the seasonal time delay or phase shift in the subsurface heat transfer. Here he was able to quantitatively explain the observed temperature changes using his theory of heat, published in 1822 [Fourier, 1822].


At a moderate depth, as three or four meters, the temperature observed does not vary during each day, but the change is very perceptible in the course of a year, it rises and falls alternately. The extent of these variations, that is, the difference between the maximum and minimum of temperature, is not the same at all depths, it is inversely as the distance from the surface. The different points of the same vertical line do not arrive at the same time at the extreme temperatures. …

The results observed are in accordance with those furnished by the theory, no phenomenon is more completely explained.

Fourier (1824, p. 144)


The equilibrium climate assumption was first introduced by Pouillet in 1836. As a hypothesis, it had already been disproved by Fourier at least 12 years before. In 1840, Agassiz proposed the existence of an Ice Age based on observations of the glaciers in the Alps [Agassiz, 1840]. The climate debate then shifted from surface temperature to the cause of an Ice Age. This led Tyndall in the early 1860s to speculate that changes in the atmospheric CO2 concentration could alter the earth’s climate [Tyndall, 1861, 1863]. This in turn was the motivation for Arrhenius [1896] to try to calculate changes in surface temperature produced by CO2. Arrhenius used an ‘equilibrium air column’ in his calculations, so his results were invalid. He replaced the time dependence with 24 hour average solar and LWIR fluxes and neglected the effects of convection, evaporation and subsurface transport. When the CO2 concentration was increased, this approach had to produce an increase in surface temperature as a mathematical artifact of the calculation. Arrhenius repeated his calculations in 1906 and obtained smaller temperature changes [Arrhenius, 1906].


III. Thermal Equilibrium on the Surface and in the Atmosphere of the Earth

All authors agree in the view that there prevails an equilibrium in the temperature of the earth and of its atmosphere.

Arrhenius 1896, p. 254

V. Geological Consequences

I should certainly not have undertaken these tedious calculations if an extraordinary interest had not been connected with them. In the Physical Society of Stockholm there have been occasionally very lively discussions of the cause of the Ice Age.

Arrhenius 1896, p. 267


The first person to claim a measurable effect on surface temperature from an increase in CO2 concentration due to fossil fuel combustion was Callendar [1938]. He assumed that an increase in LWIR absorption and emission in the CO2 band near 650 cm-1 could cause a change in surface temperature. He found a slight increase in both CO2 concentration and meteorological temperatures, particularly in the Northern Hemisphere. He was probably the first person to find the signal from the AMO in the weather station data. His period of record included the warming phase of the AMO from about 1915 to 1935 (See Figure 9).


In the mid 1950s, improved spectroscopic measurements and computer calculations allowed Plass [1956a] to provide updated estimates of possible heating effects from CO2. He calculated a cooling rate for CO2 of 0.2 to 0.3 K per day in the troposphere. He was still using the equilibrium assumption, and estimated changes in surface temperature of +3.6 °C and -3.8 °C for a doubling and a halving of the CO2 concentration from 330 ppm. In a different paper, he discussed a ‘CO2 Theory of Climate Change’ [Plass, 1956b]. Here it is clear that he was interested mainly in changes in CO2 concentration as the cause of an Ice Age cycle, although fossil fuel combustion was also discussed. He speculated that when all of the known coal and oil reserves were used up in less than 1000 years, the equilibrium climate temperature rise could be 12 °C with the CO2 concentration increasing to 3000 ppm.


Interest in the effects of CO2 from fossil fuel combustion on climate was revived in the late 1950s with the work of Bert Bolin and Roger Revelle on the distribution of CO2 between the atmosphere and the oceans [Bolin and Eriksson, 1959, Bolin, 1960, Revelle and Suess, 1957]. They had a new technique that they could use. This was the measurement of the carbon isotope ratios 14C/12C and 13C/12C. However, this only provided information on the amount of CO2 in the atmosphere that could be attributed to combustion. There was no new information on the relationship between CO2 and surface temperature. They also used exaggerated claims of climate warming to obtain research funds. The mass spectrometer analysis needed for these studies was expensive. They made no attempt to validate their claims using any thermal engineering calculations of the surface temperature.



THE EARLY CLIMATE MODELS

One of the earliest uses of computers was for weather forecasting, pioneered by a group led by John von Neumann [Harper, 2004]. However, the global circulation models (GCMs) used in this application require the solution of large numbers of coupled nonlinear equations. Lorenz [1963, 1973] found that such solutions were unstable, even for a simple convection model with 3 equations. A practical limit for weather forecasting was 12 days ahead. This work should have made it clear that such GCMs had no predictive capabilities over the time scales associated with climate change. Similarly, the time delays or phase shifts found in the surface temperature data are irrefutable evidence for a non-equilibrium climate (CR23). Unfortunately, by the early 1960s, the equilibrium climate assumption had become firmly entrenched as scientific dogma. The idea that an increase in atmospheric CO2 concentration produced by fossil fuel combustion could cause an increase in surface temperature was accepted without question.


The development of a computer climate model required two main steps. First, radiative transfer algorithms had to be improved so that the IR radiation field in the atmosphere, including the cooling rates, could be calculated. Second, these algorithms had to be incorporated into each unit cell of a GCM modified for calculations over a climate time scale. A radiative transfer calculation simply provides a ‘snapshot’ of the IR radiation field for the temperature and species profile specified in the analysis. In the one dimensional radiative transfer models, the equilibrium climate assumption was imposed. A fixed 24 hour average solar flux was used as a model input. The LWIR flux emitted at the top of the model atmosphere (TOMA) had to be equal to the absorbed solar flux. The model was adjusted iteratively so that each atmospheric level reached an equilibrium state where the absorbed and emitted fluxes balanced and further iterations of the calculation did not change the temperature profile. A one dimensional (1-D) radiative equilibrium model was described by Manabe and Möller [1961] and a 1-D radiative convective (1-D RC) model was described by Manabe and Strickler [1964]. However, the first generally accepted 1-D RC model was that of Manabe and Wetherald (MW67) [1967]. The errors introduced by the simplifying assumptions made in this model will now be considered in more detail.
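The iterative adjustment procedure described above can be illustrated with a toy gray atmosphere, far simpler than the spectral band models used by Manabe and co-workers. This sketch is added here for illustration only; the fully opaque layer scheme and the 240 W m-2 average absorbed solar flux are my assumptions, not values from the original papers.

```python
# Toy 'equilibrium air column': a gray atmosphere with N fully opaque
# layers, iterated to a steady state in the same spirit as the 1-D models
# described above.  Illustration only, not the MW67 scheme itself.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m-2 K-4

def equilibrate(n_layers: int, solar: float, n_iter: int = 2000):
    """Relax the surface and layer temperatures until every level is in
    radiative balance (absorbed flux = emitted flux)."""
    t_surf = 255.0
    t = [255.0] * n_layers
    for _ in range(n_iter):
        # Surface: absorbs the solar flux plus downward emission from layer 1.
        t_surf = ((solar + SIGMA * t[0] ** 4) / SIGMA) ** 0.25
        for i in range(n_layers):
            below = t_surf if i == 0 else t[i - 1]
            above4 = t[i + 1] ** 4 if i + 1 < n_layers else 0.0
            # Each opaque layer absorbs from both sides and emits 2*sigma*T^4.
            t[i] = ((below ** 4 + above4) / 2.0) ** 0.25
    return t_surf, t

t_surf, t = equilibrate(2, 240.0)
# At steady state the top layer emits the absorbed solar flux back to space
# and the surface satisfies T_surf^4 = (N + 1) * S / SIGMA.
```

Adding a layer (or making the layers more absorbing) forces the iteration to a higher surface temperature, which is exactly why the equilibrium air column must warm when the greenhouse gas concentration is increased.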



THE 1967 M&W MODEL

The MW67 model was an ‘improved’ version of the equilibrium air column used by Arrhenius with radiative transfer through 9 or 18 air layers added. It provided a mathematical platform for the development and evaluation of radiative transfer and related algorithms. The assumptions used to build the model had to create climate warming as a mathematical artifact of the calculations even before the first line of model code was written. They were clearly stated on the second page of their paper:


1) At the top of the atmosphere, the net incoming solar radiation should be equal to the net outgoing long wave radiation.

2) No temperature discontinuity should exist.

3) Free and forced convection and mixing by the large scale eddies prevent the lapse rate from exceeding a critical lapse rate equal to 6.5 C km-1.

4) Whenever the lapse rate is subcritical, the condition of local radiative equilibrium is satisfied.

5) The heat capacity of the earth’s surface is zero.

6) The atmosphere maintains the given vertical distribution of relative humidity (new requirement).


The model used radiative transfer and related algorithms to iteratively adjust the temperature of an atmospheric air column to a steady state condition. A fixed, 24 hour average solar flux was coupled to a 9 or 18 layer static air column. The surface was a blackbody with an adjustable reflectivity and zero heat capacity. The known spectral properties of H2O, CO2 and O3 were used to simulate the radiative transfer. In order to make their model work, M&W used a fixed distribution of relative humidity. As the temperature changed in the model, the water vapor concentration changed. It was determined by definition as a fraction of the temperature dependent saturated vapor pressure. When the CO2 concentration was increased, the LWIR flux absorbed by the atmospheric layers of the model increased and the LWIR flux at TOMA decreased. The surface temperature and the temperatures of the atmospheric layers were then adjusted iteratively until a new equilibrium state was reached with a higher surface temperature. This restored the LWIR flux at TOMA to its equilibrium value. The step iteration process in the model required a year of simulated time (the number of steps multiplied by the step time) to reach equilibrium, although the computation time was much less. The model is illustrated in Figure 1. The results from such calculations have no relationship to the earth’s climate. The time dependence, evapotranspiration and surface thermal storage effects were ignored. At the time, such calculations using the available computer technology were a significant achievement. Unfortunately, the MW67 model created global warming, by definition, as a result of its oversimplified input assumptions.
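The fixed relative humidity assumption has a simple quantitative consequence. Using the Magnus approximation for the saturation vapor pressure (my choice of formula for illustration, not the saturation data used in MW67), the water vapor partial pressure at constant relative humidity increases by roughly 6% for each kelvin of warming, which is the mechanism behind the 'water vapor feedback'.

```python
import math

# The fixed relative humidity assumption, quantified with the Magnus
# approximation (illustration only; MW67 used their own saturation data).
def saturation_vapor_pressure_hpa(t_celsius: float) -> float:
    """Magnus approximation to the saturation vapor pressure over water (hPa)."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

RH = 0.5  # assumed fixed relative humidity
# With RH held fixed, the water vapor partial pressure tracks the
# saturation value as the model air column warms:
e_20 = RH * saturation_vapor_pressure_hpa(20.0)
e_21 = RH * saturation_vapor_pressure_hpa(21.0)
print(e_21 / e_20)  # ~1.06: roughly 6% more water vapor per kelvin of warming
```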





Figure 1: The 9 or 18 layer M&W model. Three separate model runs to steady state were required to generate the three temperature distributions with different CO2 concentrations.



The use of such a simple model as a mathematical platform for the development of radiative transfer algorithms is entirely reasonable, provided that the limitations are understood and clearly stated. Anyone who uses a climate model that incorporates the MW67 assumptions should add the caveat ‘does not apply to planet earth’. Has anyone seen a ‘24 hour average sun’ shining in the sky at night? The equilibrium assumption created a false connection between the decrease in LWIR flux at the top of the atmosphere (TOA) produced by an increase in ‘greenhouse gas’ concentration and the surface temperature. These are decoupled by molecular line broadening effects in the troposphere. This becomes clear when the radiative transfer analysis is extended to include the change in the rates of cooling at different levels in the atmosphere. The maximum change in the rate of LWIR cooling of the troposphere produced by a doubling of the CO2 concentration from 280 to 560 ppm is +0.08 K per day [Iacono et al, 2008]. At the -6.5 K km-1 lapse rate used in MW67, this temperature change requires a decrease in altitude of 12 meters. This is equivalent to riding an elevator down four floors. The cooling rates in MW67 were discussed by Stone and Manabe [1968]. LWIR cooling rates were also discussed by Lacis and Oinas [1991]. Unfortunately, the climate warming artifacts created by the MW67 model soon became a lucrative source of research funds. The limitations of the MW67 model, including the tropospheric LWIR cooling rates, were conveniently overlooked. This is still the case today. The MW67 calculations were recently repeated by Kluft [2020]. He simply copied the errors found in MW67.
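The 12 meter figure follows from a one-line calculation, dividing the change in the cooling rate by the lapse rate:

```python
# One-line check of the 12 meter figure quoted above.
delta_cooling_rate = 0.08  # K per day, from Iacono et al [2008] as cited
lapse_rate = 6.5           # K per km, the MW67 critical lapse rate
equivalent_descent_m = delta_cooling_rate / lapse_rate * 1000.0
print(equivalent_descent_m)  # ~12.3 meters
```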



THE FIRST CLIMATE BANDWAGONS

MW67 created two ‘bandwagons’ that could be used to obtain research funding. First, the MW67 model could be incorporated into a general circulation model (GCM) with well over a thousand ‘units’ coupled together within a modified weather forecasting program to make ‘improved’ climate ‘predictions’. Second, the radiative transfer algorithms could be improved with better spectroscopic constants and more greenhouse gases. In addition to CO2, melodramatic claims of warming by other ‘greenhouse gases’ could now be made. This gradually led to the concept of radiative forcing. None of this required any change to the underlying MW67 model assumptions.


The prevailing scientific dogma required that the climate should warm as the atmospheric CO2 concentration increased. Nature had other plans. From 1940 to 1970 the climate, as measured by the weather station record, showed cooling [AMO, 2022]. The reason for this was that the AMO was in its negative or cooling phase. In many regions of the world, the prevailing weather systems form over the oceans and then move overland. The ocean surface temperature is coupled to the bulk surface air temperature of the weather system and this is carried over land and becomes part of the weather station record. This is discussed in more detail in CR23, Chapter 7. The climate modelers decided instead that something was over-riding the greenhouse gas warming and created a global cooling scare. Aerosols from ‘air pollution’ were cooling the earth. Another Ice Age was coming. They reverted to CO2 induced warming in about 1985 when the next AMO warming phase could be detected in the climate record [Wigley et al, 1985; Jones et al, 1986].



THE 1975 M&W MODEL

M&W chose to ignore the errors that they introduced in the MW67 model and went on to incorporate the 1967 mathematical warming artifacts into every unit cell of a ‘highly simplified’ global circulation model [M&W, 1975] (MW75). The 1967 model was now described as a ‘global average climate model’. Although the MW75 GCM did not contain any real climate effects such as ocean transport, and the cloud cover was fixed, claims of global warming from a ‘CO2 doubling’ were still made, even though the source was the invalid 1967 assumptions. The 1975 model also created a ‘hot spot’ in the upper troposphere at low and middle latitudes. This is an artifact of the fixed relative humidity assumption. The temperature increases produced by a ‘CO2 doubling’ and the ‘hot spot’ are shown in Figure 2.





Figure 2: The effect of a CO2 doubling in the 1975 M&W GCM, a) The increase in surface air temperature and b) the tropospheric ‘hot spot’ near 10 km altitude at low and middle latitudes.



In their conclusions, M&W stated:

In evaluating these results, one should recall that the current study is based upon a model with fixed cloudiness. The results may be altered significantly if we use a model with the capability to predict cloudiness. Other major characteristics of the model which can affect the sensitivities of the model climate are idealized geography, swamp ocean and no seasonal variation. Because of the various simplifications of the model, it is advisable not to take too seriously the quantitative aspect of the results obtained in this study.

The MW75 paper set a benchmark for climate warming by CO2. The equilibrium air column was now hidden inside the unit cell of the GCM. Funding for additional GCM development work by M&W or others required similar warming effects. The bandwagon was rolling and there was no turning back.



FROM PLANETARY ATMOSPHERES TO RADIATIVE FORCING

During the 1970s, various groups that started out studying planetary atmospheres, particularly Venus and Mars, began climate related studies. This work was justified in part by making melodramatic claims about climate change related to ‘runaway’ greenhouse effects or ‘air pollution’. The idea that an increase in CO2 concentration would increase the surface temperature was the expected result. The mathematical artifacts created by the 1-D RC modeling approach were accepted without question. Continued research funding required a dangerous warming from CO2 or other ‘greenhouse gases’. Alternatively, there could be an equally dangerous cooling from aerosols. Rasool and Schneider [1971] claimed that an increase in aerosol concentration could over-ride any CO2 induced warming and produce atmospheric cooling. If this continued then it could trigger an Ice Age. At the time, both authors were with NASA Goddard.


In 1974, Ramanathan at NASA Langley claimed that an increase in the atmospheric concentration of chlorofluorocarbons (CFCs) could produce an increase in surface temperature. This was later recognized as the first use of radiative forcing, although the term ‘radiative forcing’ was not introduced until later [Ramaswamy et al, 2019]. Ramanathan simply used the available spectral data to calculate the decrease in average LWIR flux at TOA for CF2Cl2 and CFCl3. He then assumed that a sensitivity of the surface temperature to the solar flux of 1.425 W m-2 K-1, derived from the work of Budyko [1969], could be used to convert the change in LWIR flux at TOA to a change in temperature. In reality, the small decrease in LWIR flux at TOA produced by the increase in the atmospheric concentration of CFCs does not couple to the surface because of increased molecular line broadening at lower altitudes. The corresponding increase in LWIR flux from the lower troposphere to the surface is too small to produce a measurable change in surface temperature when it is coupled to the time dependent interactive flux terms at the surface [CR23 Chapter 8].
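Ramanathan’s conversion is simple enough to sketch. The short Python calculation below applies the linear Budyko sensitivity quoted above; the function name and the example forcing value are illustrative, not taken from his paper.

```python
# Sketch of the flux-to-temperature conversion described above.
# The sensitivity value (1.425 W m-2 K-1, from Budyko, 1969) is taken
# from the text; the example forcing value below is illustrative only.

BUDYKO_SENSITIVITY = 1.425  # W m-2 K-1, assumed surface response per unit flux

def flux_change_to_temperature(delta_flux_toa):
    """Convert a change in LWIR flux at TOA (W m-2) into the presumed
    surface temperature change (K), following the linear assumption."""
    return delta_flux_toa / BUDYKO_SENSITIVITY

# Example: an illustrative 0.1 W m-2 decrease in LWIR flux at TOA
print(flux_change_to_temperature(0.1))  # ~0.07 K
```

The entire procedure rests on the assumed linearity between TOA flux and surface temperature, which is the step the text disputes.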


Later, in 1976 a group from NASA Goddard that now included James Hansen claimed an increase in the ‘greenhouse effect’ from the ‘trace atmospheric constituents’ N2O, CH4, NH3, HNO3, C2H4, SO2, CCl2F2, CCl3F, CH3Cl and CCl4 as well as the species H2O, CO2 and O3 used in MW67 [Wang et al, 1976]. These authors copied the MW67 model approach:

Iteration is continued through the time marching procedure until energy balance is achieved at each level in the atmosphere. After the atmospheric composition is perturbed, typically 300 to 440 simulated days are required to reestablish equilibrium to an accuracy of 0.01 K.

Wang et al, 1976

The equilibrium assumption was also clearly stated by Ramanathan and Coakley (RC78) in their 1978 review paper on radiative convective models:

For radiative-convective equilibrium the net outgoing longwave radiative flux at the top of the atmosphere, Fn0, must equal the net solar radiative flux Sn0. Likewise, because the stratosphere is in radiative equilibrium, the net longwave radiative flux at the base of the stratosphere, Fn1, must equal the net solar radiative flux into the troposphere, Sn1. For any perturbation the stratosphere and the atmosphere as a whole seek a new state of radiative equilibrium.

Ramanathan and Coakley, 1978

For a given location at TOA, the solar flux is changing on both a daily and a seasonal time scale. An average solar flux is simply a mathematical construct. RC78 did not consider the limitations of the equilibrium assumption nor did it provide any thermal engineering calculations of the surface temperature.
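The point can be illustrated with a short calculation of the daily average TOA solar flux. The sketch below assumes a point on the equator at equinox and ignores the small sun-earth distance correction; the function and sampling scheme are mine, not from RC78.

```python
import math

# Sketch of why an 'average solar flux' is a mathematical construct:
# the instantaneous TOA flux at a fixed point swings from zero at night
# to the full solar constant at local noon. Equator at equinox assumed.

S = 1361.0  # W m-2, total solar irradiance

def toa_flux(hour):
    """Instantaneous TOA solar flux (W m-2) at the equator at equinox."""
    hour_angle = math.pi * (hour - 12.0) / 12.0  # radians, 0 at local noon
    return max(0.0, S * math.cos(hour_angle))

n = 24 * 60  # one-minute samples over the day
daily_mean = sum(toa_flux(24.0 * i / n) for i in range(n)) / n
print(round(daily_mean))  # ~433 W m-2 (S/pi), far below the 1361 W m-2 noon peak
```

The ‘average’ flux is never actually incident at any time of day; the real forcing is the full diurnal swing.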



‘IMPROVEMENTS’ TO THE 1967 M&W MODEL

The early 1-D RC models used a partially reflective blackbody surface with zero heat capacity. Several groups then began to consider a 1-D RC model that included a ‘slab’ ocean model [Cess and Goldenberg, 1981, Dickinson, 1981, Hansen et al, 1981 (H81)]. H81 also included several other modifications to the 1967 M&W model that completed the foundation of the climate modeling fraud. These have propagated through all of the ‘equilibrium’ climate models developed since then, including the large scale GCMs.


H81 started with the concept of a climate sensitivity, which is the temperature increase produced by the model when the CO2 concentration is doubled, usually from 280 or 300 ppm to 560 or 600 ppm. Then they described a two-layer ‘slab’ ocean added to the MW67 1-D RC model. However, they conveniently neglected to consider the surface energy transfer. Next, they ‘perturbed’ their model by changing the concentrations of various ‘greenhouse gases’ and aerosols and claimed that the changes in model temperature were real. Then they introduced the ‘CO2 doubling ritual’: the response of a 1-D RC model to a ‘CO2 doubling’ as it ‘adjusts’ to a new ‘equilibrium state’. Then they performed a ‘bait and switch’ and claimed that their model could simulate the weather station temperature record. Here they ignored the role of the ocean oscillations in setting the surface temperature, particularly the AMO. Finally, they used a contrived set of three ‘radiative forcings’ to simulate the weather station record using their model.


The H81 model could be ‘tuned’ by adjusting the ‘feedback’ processes that amplified the surface temperature increase. The climate sensitivity of their model to various processes is shown in Figure 3. Here the climate sensitivity is the increase in surface temperature produced by a doubling of the CO2 concentration from 300 to 600 ppm. This is just the pseudoscientific mathematical artifact created by the M&W assumptions. Model 4, with a ‘sensitivity’ of 2.8 K, was selected for additional analysis. The decrease in LWIR flux at TOA for a ‘CO2 doubling’ is 3.9 or 4.0 W m-2. The increase in downward LWIR flux to the surface is similar [Harde, 2017].
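As an illustration of how such a ‘climate sensitivity’ number is assembled, the sketch below combines the commonly quoted logarithmic CO2 forcing approximation (the Myhre et al, 1998 fit, not necessarily the H81 radiative transfer result) with a no-feedback response and a feedback factor chosen to reproduce the 2.8 K figure. All numbers are illustrative round values.

```python
import math

# The logarithmic forcing fit below is the commonly quoted Myhre et al
# (1998) approximation, used here only for illustration; H81 used its
# own radiative transfer scheme. The no-feedback response and feedback
# factor are illustrative round numbers.

def co2_forcing(c, c0):
    """Approximate change in TOA LWIR flux (W m-2) for a CO2 change."""
    return 5.35 * math.log(c / c0)

no_feedback_dT = 1.2   # K, roughly the no-feedback (Model 1) response
feedback_factor = 2.3  # f, chosen so that f * 1.2 K ~ 2.8 K (Model 4)

dF = co2_forcing(600.0, 300.0)         # ~3.7 W m-2 for a doubling
dT = feedback_factor * no_feedback_dT  # ~2.8 K 'climate sensitivity'
print(round(dF, 2), round(dT, 2))
```

The feedback factor f is a free parameter: the sensitivity is whatever the chosen feedbacks make it.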





Figure 3: Hansen et al 1981, Table 1 - Equilibrium surface temperature increase due to doubled CO2 (from 300 to 600 ppm) in 1D-RC models. Model 1 has no feedbacks affecting the atmosphere’s radiative properties. Feedback factor f specifies the effect of each added process on model sensitivity to doubled CO2. F is the equilibrium thermal flux into the ground if Ts is held fixed (infinite heat capacity) when CO2 is doubled. Abbreviations: FRH, fixed relative humidity; FAH, fixed absolute humidity; 6.5LR, 6.5 °C km-1 limiting lapse rate; MALR, moist adiabatic lapse rate; FCT, fixed cloud temperature; FCA, fixed cloud altitude; SAF, snow/ice albedo feedback; and VAF, vegetation albedo feedback.



H81 then introduced a two-layer ‘slab’ ocean model with an upper mixed layer 100 m thick and a thermocline layer below this. The surface energy transfer was ignored and only the time delays related to the increase in heat capacity were considered. The penetration depth of the LWIR flux into the ocean surface is less than 100 microns [Hale and Querry, 1973]. Here it is fully coupled to the larger and more variable wind driven surface evaporation. Any change in surface temperature produced by a ‘CO2 doubling’ is too small to measure. The estimated increases in ocean surface temperature for various ocean model conditions as calculated in H81 are shown in Figure 4.
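The sub-100 micron penetration claim follows directly from the Beer-Lambert law. The sketch below uses an absorption coefficient of order 1e5 m-1 for water in the LWIR near 10 microns, a magnitude consistent with the Hale and Querry data; the exact value is illustrative.

```python
# Sketch of the Beer-Lambert e-folding (penetration) depth for LWIR
# flux in water. The absorption coefficient is an illustrative order
# of magnitude near 10 um (see Hale and Querry, 1973).

alpha = 1.0e5  # m-1, illustrative LWIR absorption coefficient of water

penetration_depth_m = 1.0 / alpha  # depth at which the flux falls to 1/e
penetration_depth_um = penetration_depth_m * 1e6
print(round(penetration_depth_um, 1))  # ~10 um, well under the 100 micron bound
```

An e-folding depth of tens of microns places all of the absorbed LWIR flux within the evaporating surface skin layer, which is the basis for the coupling argument above.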





Figure 4: Hansen et al 1981, figure 1 - Dependence of CO2 warming on ocean heat capacity. Heat is rapidly mixed in the upper 100 m of the ocean and diffused to 1000 m with diffusion coefficient k. The CO2 abundance is 293 ppm in 1880, 335 ppm in 1980 and 373 ppm in 2000. Climate model sensitivity is 2.8 °C for doubled CO2.



With their model ‘tuned’ so that a ‘CO2 doubling’ produced an increase in ‘equilibrium’ surface temperature of 2.8 °C, the authors of H81 went on to calculate the temperature changes produced by various ‘radiative perturbations’. These are shown in Figure 5.





Figure 5: (Hansen, 1981 figure 2) Effects of various ‘radiative perturbations’ on surface temperature calculated using a 1D RC climate model. The changes in ‘surface temperature’ are mathematical artifacts produced by the simplifying assumptions used in the model.



The authors then discussed the presumed effects of ‘volcanic aerosols’ related to the volcanic eruption of Mount Agung in 1963. However, this discussion is based on the mathematical artifacts created by their 1-D RC model. There is no reason to expect the model results for aerosols to be any better than those for CO2. The authors then described the changes in flux produced in their 1-D RC model when the CO2 concentration is doubled from 300 to 600 ppm and their model responds by ‘adjusting’ to a new ‘equilibrium state’ with a higher surface temperature. This is shown in Figure 6 (Hansen et al, figure 4). Again, the temperature changes are just mathematical artifacts of the 1-D RC model. In reality, any small amount of heat released in the troposphere is re-emitted as wideband LWIR emission or dissipated by turbulent convection. There is no change to the energy balance of the earth and no change in surface temperature [CR23, Chapter 8]. Unfortunately, the concept of radiative forcing has become accepted as part of the doctrine of the Imperial Cult of the Global Warming Apocalypse [Ramaswamy et al, 2019]. A very similar argument to Figure 6 was used in Chapter 8 of the Fifth IPCC Assessment WG1 Report [IPCC, 2013] over 30 years later. Figure 7 shows the equilibrium climate ‘adjustment’ to a radiative forcing from figure 8.1 of the IPCC AR5 report.





Figure 6: The effects of a hypothetical ‘CO2 doubling’ from 300 to 600 ppm on an equilibrium average climate





Figure 7: (Figure 8.1 AR5, WG1 [2013]). Cartoon comparing (a) instantaneous RF, (b) RF, which allows stratospheric temperature to adjust, (c) flux change when the surface temperature is fixed over the whole Earth (a method of calculating ERF), (d) the ERF calculated allowing atmospheric and land temperature to adjust while ocean conditions are fixed and (e) the equilibrium response to the climate forcing agent. The methodology for calculation of each type of forcing is also outlined. ΔT0 represents the land temperature response, while ΔTs is the full surface temperature response. (Updated from Hansen et al., 2005.)



Next the authors described the long term surface air temperature averages derived from weather station data. Figure 8 (H81, figure 3, lower plot) shows the long term five year global average from 1880 to 1980. This includes the well-defined Atlantic Multi-decadal Oscillation (AMO) peak near 1940 [AMO, 2022]. The change in CO2 concentration (Keeling curve [2023]) is also shown.





Figure 8: The global mean temperature, 5 year running average from Hansen et al, 1981 with the Keeling curve (atmospheric CO2 concentration) overlaid. The broad peak centered near 1940 is the AMO.



The role of the AMO in setting the surface air temperature has been misunderstood or ignored for a long time. The first person to claim a measurable warming from an increase in CO2 concentration was Callendar in 1938. He used weather station temperatures up to 1935 that included most of the 1910 to 1940 warming phase of the AMO [Callendar, 1938]. The warming that he observed was from the AMO, not CO2. During the 1970s there was a ‘global cooling’ scare that was based on the cooling phase of the AMO from 1940 to 1970 [McFarlane, 2018, Peterson et al, 2008, Douglas, 1975, Bryson and Dittberner, 1976]. As shown in Figure 8, Hansen et al [1981] chose to ignore the 1940 AMO peak in their analysis of the effects of CO2 on the weather station record. Similarly, Jones et al conveniently overlooked the 1940 AMO peak when they started to ramp up the modern global warming scare in 1986 [Jones et al, 1986]. This is illustrated in Figure 9. The AMO and the periods of record used are shown in Figure 9a. The AMO is plotted with the HadCRUT4 global temperature record [HadCRUT4, 2022]. The two are aligned from 1860 to 1970. The temperature records used by Callendar, Douglas, Jones et al and Hansen et al are shown in Figures 9b through 9e. The Keeling curve showing the increase in atmospheric CO2 concentration is also plotted in Figures 9d and 9e [Keeling, 2022].





Figure 9: a) AMO anomaly and HadCRUT4 global temperature anomaly, aligned from 1860 to 1970, b) temperature anomaly for N. temperate stations from Callendar [1938], c) global cooling from Douglas [1975], d) global temperature anomaly from Hansen et al, [1981] and e) global temperature anomaly from Jones et al, [1986]. The changes in atmospheric CO2 concentration (Keeling curve) are also shown in c and d. The periods of record for the weather station data are also indicated.



The authors then used a contrived mix of increasing CO2 concentration, volcanic aerosols and variations in solar flux to adjust their 1-D RC model and create a fit to the weather station record. This is shown in Figure 10 from H81, figure 5. In reality, they have simply ‘tuned’ their model to match a temperature record dominated by the AMO.





Figure 10: (H81, figure 5) Global temperature trend obtained from climate model with sensitivity 2.8 °C for doubled CO2. The results in (a) are based on a 100 m mixed layer ocean for heat capacity, those in (b) include diffusion of heat into the thermocline to 1000 m.



INTO A BLIND ALLEY

Starting in about 1982, a CO2 research program was initiated by the US Department of Energy (DOE) with a major report published in 1985 [MacCracken and Luther, 1985a, 1985b]. The climate model results were accepted without question. The issue was how to detect the CO2 signal in the surface temperature record. In their analysis of the temperature record, Wigley et al [1985] concluded that “unequivocal, statistically rigorous detection of the effects of changing CO2 levels on atmospheric temperatures is not yet possible”. No quantitative thermal engineering analysis of the changes in surface temperature was presented. In the following year, using the same data set, the Climate Research Unit (CRU) at the University of East Anglia started to ramp up the warming claims: "the data show a long timescale warming trend, with the three warmest years being 1980, 1981 and 1983 and five of the nine warmest years in the entire 134 year record occurring after 1978” [Jones et al, 1986]. In a slightly later paper, Jones et al [1988] concluded: “Nevertheless, the persistent surface and tropospheric warmth of the 1980s which, together with the ENSO, gave the exceptional warmth of 1987 could indicate the consequences of increased concentrations of CO2 and other radiatively active gases in the atmosphere”. There was no attempt to perform any thermal engineering analysis of the surface temperature or to apply signal processing (Nyquist) theory to the temperature record.


H81 is one of the earliest examples of the use of a contrived set of ‘radiative forcings’ to fraudulently ‘tune’ an ‘equilibrium’ climate model to match the climate record. This process was accepted by the IPCC and the US Global Change Research Program (USGCRP) [Ramaswamy et al, 2019, Wuebbles et al, 2017, Melillo et al, 2014]. Climate modeling had entered a blind alley and remains there even today. An updated version of Figure 10 may be found in a 1993 review paper by Hansen et al. This is shown in Figure 11 [Hansen et al, 1993]. In addition to CO2, other ‘well mixed’ greenhouse gases have been included in the model. The aerosol terms have been expanded and cloud aerosol interactions have also been added.





Figure 11: (Composite from figures 19 and 15 of Hansen et al, 1993) a) and b) simulated global temperature change for 3 climate sensitivities. Successive forcings are added cumulatively. The zero point of observations and model is 1866-1880 mean. c) and d) climate forcings used in the GCM simulations.



The same pseudoscientific approach using forcings, feedbacks and climate sensitivity is still used today. Figure 12 shows the estimated temperature increases from 1750 to 2019, the related radiative forcings, the time series of the radiative forcings, the estimated increase in the ‘global mean temperature anomaly’, and the CMIP6 ensemble equilibrium climate sensitivities. Figures 12a through 12d are from Chapter 7 of the IPCC WG1 AR6 Report (figures 7.7, 7.6, 7.8 and Box 7.1a) [IPCC 2021] and Figure 12e is from Hausfather [2019]. Little has changed since 1981. Figure 12a is an update of Figure 5 (H81 figure 2). Figures 12b, 12c and 12d follow from Figure 10 (H81 figure 5) and Figures 11b and 11c (Hansen et al, 1993). ‘Efficacies’ were added to further ‘tune’ the radiative forcings by Hansen et al in 2005 [Hansen et al, 2005]. The role of the AMO in setting the surface temperature is still ignored. Figure 12e still brackets the climate sensitivity of 2.8 °C used by Hansen et al in 1981. The main change is that the 1-D RC model has been buried inside the unit cells of the more complex GCMs, which now assume an average planetary equilibrium state. However, the models are still ‘tuned’ using a contrived set of radiative forcings so that they appear to match the mean global temperature record.





Figure 12: a) Simulated temperature increases from 1750 to 2019, b) changes in radiative forcings since 1750, c) time dependence of the temperature changes derived from the radiative forcings, d) ‘tuned’ temperature record using a contrived set of radiative forcings that appear to simulate the global mean temperature record (IPCC AR6, WG1, figures 7.7, 7.6, 7.8 and Box 7.1a) and e) CMIP6 climate model sensitivities from Hausfather [2019]. The range, from 1.8 to 5.6 °C, still brackets the 2.8 °C value used by Hansen et al in 1981.



THE GROWTH OF THE CLIMATE MODELING FRAUD

The basic technical foundation of the climate modeling fraud was established with the publication of H81. The climate model results were officially ‘sanctified’ by the Charney report published in 1979. It concluded in part:

When it is assumed that the CO2 content of the atmosphere is doubled and statistical thermal equilibrium is achieved, the more realistic of the modeling efforts predict a global surface warming between 2 °C and 3.5 °C with greater increases at higher latitudes. The primary effect of an increase of CO2 is to cause more absorption in the troposphere and thus to increase the air temperature in the troposphere. A strong positive feedback mechanism is the accompanying increase of moisture which is an even more powerful absorber of terrestrial radiation.

Charney Report, 1979

This report was very narrow in scope and ignored the large body of evidence that was available to show that the climate equilibrium assumption was invalid and that an increase in the atmospheric CO2 concentration could not change the surface temperature of the earth. There was no quantitative discussion of the surface energy transfer processes that determine the surface temperature. For example, detailed flux and temperature measurements were available from the Great Plains Turbulence Field Program conducted in 1953 [Lettau and Davidson, 1957]. Ocean surface energy transfer was discussed by Bunker [1976]. Natural wind driven ocean oscillations including the Southern Oscillation Index and the North Atlantic Oscillation were also ignored [Julian and Chervin, 1978, Stephenson et al, 2003, Lamb, 1972].


The causes of an Ice Age were finally explained in 1976 by Hays et al. Subtle changes in the distribution of the solar flux over the earth’s surface related to the Milankovitch cycles – orbital eccentricity, axial tilt and precession – were sufficient to change the balance between the rates of heating and cooling of the earth [Hays et al, 1976, Imbrie and Imbrie, 1979]. Changes in the atmospheric concentration of CO2 followed the ocean temperature changes. The mathematical warming artifacts created by the equilibrium air column had been revealed to anyone who cared to look. Physical reality had been abandoned in favor of mathematical simplicity. The climate modelers were blinded by the equilibrium assumption. They continued to play computer games in their equilibrium climate fantasy land. They ignored the details of their own radiative transfer calculations. The LWIR flux emitted to space was decoupled from the downward LWIR flux to the surface by molecular line broadening effects. The maximum warming, or decrease in the rate of LWIR cooling, of the troposphere produced by a ‘CO2 doubling’ was +0.08 °C per day. The increase in downward LWIR flux to the surface was fully coupled to the wind driven ocean latent heat flux. Any CO2 induced temperature changes were too small to measure [CR23 Chapter 8].
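The magnitude of the quoted tropospheric heating rate can be checked with a simple column calculation. The sketch below spreads a nominal ~3.7 W m-2 flux change over the full tropospheric column; since the +0.08 °C per day figure is a local maximum, only order-of-magnitude agreement should be expected, and all inputs are illustrative round numbers.

```python
# Order-of-magnitude check on the quoted tropospheric heating rate.
# Spreading a nominal 'CO2 doubling' flux change over the tropospheric
# column gives a mean rate; the +0.08 C/day figure in the text is a
# local maximum, so only the same order of magnitude is expected.
# All input values are illustrative round numbers.

g = 9.81         # m s-2, gravitational acceleration
cp = 1004.0      # J kg-1 K-1, specific heat of air at constant pressure
delta_p = 8.0e4  # Pa, approximate pressure depth of the troposphere
delta_F = 3.7    # W m-2, nominal CO2 doubling flux change

heating_K_per_s = g * delta_F / (cp * delta_p)
heating_K_per_day = heating_K_per_s * 86400.0
print(round(heating_K_per_day, 3))  # ~0.04 K/day, same order as the +0.08 maximum
```

Either value is small compared with the several-degree diurnal swing of the troposphere, which is the point being made here.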


The Charney report included the initial results from two climate modeling groups using five primitive GCMs. By 1995, 18 coupled climate models were available from seven different countries [Meehl et al, 1997]. The modeling effort used by the IPCC is now coordinated through the Coupled Model Intercomparison Project (CMIP). In the US, one of the main CMIP centers is Lawrence Livermore National Labs (LLNL) [Taylor et al, 2012, Stauffer et al, 2017]. This is a good example of ‘mission creep’. In 2019 there were 49 modeling groups with approximately 100 different models involved in CMIP6. All of these groups are ‘tuning’ their models to match the global mean temperature record using a contrived set of radiative forcings (see Figure 12). There has been no attempt at independent validation. There has been no comparison of climate model results to thermal engineering calculations of the surface temperature.


Two external factors contributed to the growth of the climate fraud. As funding was reduced for NASA space exploration and for DOE nuclear programs, climate modeling became an alternative source of revenue. The National Laboratories had ‘supercomputer services for hire’. There was also a deliberate decision by various outside interests, including environmentalists and politicians, to exploit the fictional climate apocalypse to further their own causes [Hecht, 2007, Mead and Kellogg, 1976]. The World Meteorological Organization (WMO) and the United Nations Environmental Program (UNEP) were used to promote the global warming scare. The UN Intergovernmental Panel on Climate Change (UN IPCC) was established in 1988 and the US Global Change Research Program (USGCRP) was established by Presidential initiative in 1989 and mandated by Congress in 1990. In the UK, Margaret Thatcher became a proponent of the global warming scare. The UK Hadley Centre was established in 1990 and became a major supporter of the IPCC [Courtney, 2012, Folland et al, 2004]. In the US, one of the leading political advocates of climate change was Al Gore. He was US vice president from 1993 to 2001.


It must be emphasized that the Intergovernmental Panel on Climate Change (IPCC) is a political body, not a scientific one [McLean, 2010, 2009, Bolin, 2007]. Its mission is to assess “the scientific, technical and socioeconomic information relevant for the understanding of the risk of human-induced climate change.” This is based on the a priori assumption that human activities are causing CO2 induced global warming. The IPCC was established to exploit global warming as a way of inducing economic disruption based on the population growth and sustainability concerns raised by the Club of Rome [Darwall, 2017, Zubrin, 2013, Klaus, 2007, Dewar 1995]. The IPCC has published six major assessment reports: FAR (1990), SAR (1995), TAR (2001), AR4 (2007), AR5 (2013) and AR6 (2021). While the reports may contain a useful compendium of scientific references, material that does not conform to the global warming dogma has usually been omitted. Authors and editors were selected based on their willingness to find CO2 induced global warming whether it existed or not. The primary focus of these reports has been on the use of modeling ‘scenarios’ to predict future global warming using invalid computer models. These reports should not be cited as scientific references.


Any scientific caution about the attribution of temperature increases to global warming was abandoned with the second IPCC Assessment Report in 1995. This was altered at the last minute at the request of the US State Department [FM, 2012]. The science had to agree with the ‘Summary for Policymakers’ written for the politicians. Similarly, the notorious ‘Hockey Stick’ temperature series based on fraudulent tree ring data was featured prominently in the 2001 Assessment Report [Mann et al, 1998, 1999, Montford, 2010, Steyn, 2015, Wegman et al, 2010]. This was an attempt to eliminate the Medieval Warm Period and the Maunder Minimum from the climate record. The fraud here was the deliberate manipulation of the measured data to create the desired outcome. In November 2009, and again in November 2011, a large archive of e-mails and other files from the Climate Research Unit of the University of East Anglia was released on the Internet. A third round was released in March 2013. This archive revealed to many people outside of the close-knit climate community that there had been an ongoing fraud for many years to promote the global warming agenda and prevent the publication of material that did not support the prevailing global warming dogma. The peer review process in climate science had collapsed and been replaced by blatant cronyism. Climate science had become detached from its foundation in physical science and degenerated into a quasi-religious Imperial Cult of the Global Warming Apocalypse. Belief in global warming was a prerequisite for funding in climate science. The release of this climate archive became known as Climategate. The information provided has been analyzed in detail by several authors [Monckton, 2009, Montford 2010, Mosher and Fuller, 2010].


The USGCRP has simply copied the IPCC reports for over 30 years. The climate model results produced by the groups at NOAA, NASA, NSF and DOE (including the National Labs) have been accepted without question by the rest of the 13 agencies involved in the USGCRP. The fictional warming created by the climate models has been used to drive US energy policy and force the unnecessary adoption of solar and wind based electrical power generation and the use of electric vehicles.



A SATELLITE BALANCING ACT

As computer technology improved, the climate models shifted from the equilibrium air column to the ‘energy balance of the earth’ as determined by the large scale GCMs. The 1-D RC mathematical warming artifacts were incorporated into each unit cell of the GCM. The climate was now determined by three numbers: the total solar irradiance (TSI), the albedo or reflectivity, and the average LWIR flux returned to space. This established another climate bandwagon, the use of satellite radiometers to determine the energy balance of the earth.


The earth is an isolated planet that is heated by shortwave (SW) radiation from the sun and cooled by the outgoing longwave radiation (OLR) back to space. Climate stability only requires an approximate long term planetary energy balance between the absorbed solar flux and the OLR. There is no requirement for an exact flux balance at the ocean-air interface between the absorbed solar flux and the surface cooling flux. Natural variations in wind speed produce quasi-periodic oscillations in ocean surface temperature [CR23]. These provide a ‘noise floor’ for the climate temperatures and for the LWIR flux returned to space. There is no unique solution to the surface flux balance equations that defines a single ‘surface temperature’. Any ‘radiation imbalance’ is accounted for as a change in energy stored in the climate system. Most of this energy is stored as heat by the oceans, but some is stored as gravitational potential energy in the troposphere.
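The statement that most of any imbalance is stored as ocean heat can be illustrated with a simple heat capacity calculation. The imbalance value and mixed layer depth below are illustrative round numbers, not measured values.

```python
# Sketch: how a small 'radiation imbalance' maps to stored ocean heat,
# illustrating why most of any imbalance ends up in the ocean.
# The imbalance and mixed layer depth are illustrative round numbers.

imbalance = 0.6       # W m-2, illustrative global radiation imbalance
rho_water = 1000.0    # kg m-3, density of water
cp_water = 4186.0     # J kg-1 K-1, specific heat of water
mixed_layer = 100.0   # m, nominal ocean mixed layer depth
seconds_per_year = 3.156e7

dT_per_year = imbalance * seconds_per_year / (rho_water * cp_water * mixed_layer)
print(round(dT_per_year, 3))  # ~0.045 K/yr if all the heat stayed in the mixed layer
```

The large heat capacity of the ocean converts even a sustained imbalance into a temperature drift that sits within the natural wind driven ‘noise floor’ described above.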


Figure 13 shows an IR image of the earth recorded March 18, 2011 using the CERES instrument on the NASA Aqua satellite [CERES, 2011]. The intensity of the LWIR emission varies from 150 to 350 W m-2. The low intensity white areas near the center of the image are the LWIR emission from cloud tops. For the ‘radiation balance’ all of this information is lost and replaced by a single number.





Figure 13: CERES image of the LWIR emission to space from the earth, recorded March 18, 2011.



Figure 14 shows the zonal average of the net flux (absorbed solar flux minus LWIR flux) for March, June, September and December [Kandel and Viollier, 2010]. Near equinox, in March and September, the net flux is positive with a net energy flow of up to 100 W m-2 within the ±30° latitude bands. There is net cooling at higher latitudes. In June, near summer solstice in the N. Hemisphere, the heating occurs in the N. Hemisphere and this reverses in December for the S. Hemisphere summer. Figure 15 shows maps of the monthly average of the net flux for March, June, September and December 2000 recorded using the CERES instrument on the NASA Terra satellite. This illustrates the seasonal shift in solar heating (orange/red band) [CERES, 2004]. Any ‘radiation balance’ requires the accurate determination of small differences between large numbers. The accurate calibration of the radiometers used to measure the radiation balance is a difficult undertaking. The residual imbalance is close to the limits of the measurements. The result may be compared to the description of an average family with 1.9 cars and 2.4 children. It is a mathematical construct with little useful meaning. In addition, the two hemispheres are weakly coupled to each other, so the concept of a single planetary energy balance is a drastic oversimplification of the energy flow. Furthermore, any ocean heating is related to changes in the surface energy balance that have nothing to do with LWIR radiative forcings by ‘greenhouse gases’. The decrease in LWIR flux at TOA related to the ‘greenhouse gas’ forcings is decoupled from the surface by molecular line broadening in the troposphere. The detailed analysis of the energy flows that establish the earth’s radiation balance does not support the radiative forcing narrative.
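The ‘small differences between large numbers’ problem can be made concrete with the cos(latitude) area weighting used to reduce zonal fluxes to a single global mean. The zonal profile below is a crude synthetic stand-in for illustration, not CERES data.

```python
import math

# Sketch of the cos(latitude) area weighting used to reduce zonal net
# fluxes to a single global 'radiation balance' number. The zonal
# profile is a crude illustrative stand-in, not CERES data: its offset
# is chosen so tropical heating and polar cooling nearly cancel.

lats = list(range(-85, 90, 10))  # zone centers in degrees

def illustrative_net_flux(lat):
    # positive (net heating) in the tropics, negative at high latitudes
    return 100.0 * math.cos(math.radians(lat)) - 78.0

weights = [math.cos(math.radians(lat)) for lat in lats]  # relative zone areas
fluxes = [illustrative_net_flux(lat) for lat in lats]
global_mean = sum(w * f for w, f in zip(weights, fluxes)) / sum(weights)
print(round(global_mean, 1))  # ~0.4 W m-2: a small residual of large opposing terms
```

A residual of a few tenths of a W m-2, extracted from zonal flows of order 100 W m-2, is exactly the kind of small difference that sits at the calibration limit of the radiometers.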





Figure 14: Zonal averages of the net flux (absorbed solar minus emitted LWIR flux), for March, June, September and December, five year average CERES values.





Figure 15: Spatially resolved CERES Terra monthly average net radiation balance at TOA for March, June, September and December 2000.



Another issue is that the spectral distribution of the LWIR flux emitted at TOA is not that of a blackbody radiator. Such spectra were available from the Michelson interferometer (Fourier transform IR spectrometer) on the Nimbus 4 satellite [Hanel et al, 1971]. The concept of an ‘effective emission temperature’ near 255 K based on just the average intensity of the LWIR flux at TOA is invalid [Möller, 1964]. This should not be combined with an ‘average surface temperature’ to give a ‘greenhouse effect temperature’ of 33 K [CR23, Chapter 2, Taylor, 2006].
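For reference, the ‘effective emission temperature’ calculation being criticized here is the standard Stefan-Boltzmann estimate. The sketch below uses the usual textbook round numbers for the solar constant and albedo.

```python
# Sketch of the standard 'effective emission temperature' estimate that
# the text argues is invalid: T_eff = (S * (1 - albedo) / (4 * sigma))**0.25.
# The solar irradiance and albedo are the usual textbook round numbers.

SIGMA = 5.67e-8  # W m-2 K-4, Stefan-Boltzmann constant
S = 1361.0       # W m-2, total solar irradiance
albedo = 0.30    # nominal planetary albedo

T_eff = (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25
print(round(T_eff))  # ~255 K; subtracting from ~288 K gives the 33 K figure
```

The calculation treats the earth as a uniform blackbody emitter, which is precisely the simplification the Nimbus 4 spectra contradict.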



THE GLOBAL MEAN TEMPERATURE RECORD

The global mean temperature record is the area weighted average of weather station and ocean temperature records after they have been extensively processed [Morice et al, 2012]. Usually this is presented as a temperature anomaly with the mean subtracted. When a global mean temperature record such as the HadCRUT4 data set is evaluated, the dominant term is found to be the Atlantic Multi-decadal Oscillation (AMO) [HadCRUT4, 2022]. This is illustrated above in Figure 9a. The AMO is a long term quasi-periodic oscillation in the surface temperature of the N. Atlantic Ocean from 0° to 60° N [AMO, 2022]. Superimposed on the oscillation is a linear increase in temperature related to the recovery from the Little Ice Age (LIA) or Maunder minimum [Akasofu, 2010]. Before 1970, the AMO and HadCRUT4 track quite closely. This includes both the long period oscillation and short term fluctuations. There is an offset that starts near 1970, with HadCRUT4 approximately 0.3 °C higher than the AMO. The short term fluctuations are still similar. The correlation coefficient between the two data sets is 0.8.
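The 0.8 figure is an ordinary Pearson correlation coefficient. The sketch below shows the calculation on two synthetic stand-in series (a shared multidecadal oscillation plus independent noise); it does not use the real AMO or HadCRUT4 data.

```python
import math
import random

# Sketch of the Pearson correlation used to compare the AMO index with
# the HadCRUT4 anomaly. The two series are synthetic stand-ins sharing
# a multidecadal oscillation with independent noise, not the real data.

random.seed(42)
years = range(1860, 1971)
amo      = [math.sin(2 * math.pi * (y - 1860) / 65.0) + random.gauss(0, 0.3) for y in years]
hadcrut4 = [math.sin(2 * math.pi * (y - 1860) / 65.0) + random.gauss(0, 0.3) for y in years]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(pearson(amo, hadcrut4), 2))  # high, driven by the shared oscillation
```

A shared low frequency oscillation dominates the coefficient even with substantial independent noise, which is consistent with the close AMO/HadCRUT4 tracking described above.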


There is an additional part of the recent HadCRUT4 warming that is not included in the AMO signal. This may be explained as a combination of three factors. First, there are urban heat islands related to population growth that were not part of the earlier record. Second, the mix of urban and rural weather stations used to create the global record has changed. Third, there are so called ‘homogenization’ adjustments that have been made to the raw temperature data. These include the ‘infilling’ of missing data and adjustments to correct for ‘bias’ related to changes in weather station location and instrumentation. It has been estimated that half of the warming in the ‘global record’ has been created by such adjustments. This has been considered in more detail by Andrews [2017a, 2017b and 2017c] and by D’Aleo and Watts [2010]. Adjustments to the Australian temperature record have been discussed by Berger and Sherrington [2022].



FORCING THE CLIMATE SENSITIVITY

The climate models are ‘tuned’ to create the global mean temperature record using a contrived set of radiative forcings. The same set of forcings is also combined with the global mean temperature record to create a climate sensitivity. A good example of this is Otto et al [2013]. They defined the climate sensitivities as:


ECS = F2xΔT/(ΔF - ΔQ)          (Eqn. 1a)

TCR = F2xΔT/ΔF                 (Eqn. 1b)


Here, F2x is the radiative forcing produced by a doubling of the atmospheric CO2 concentration, set in this case to 3.44 W m-2 for a doubling from ‘preindustrial levels’, 280 to 560 ppm, ΔF is the change in radiative forcing (W m-2), ΔT (°C) is the change in global mean temperature and ΔQ is the change in the ‘earth system heat content’, also given in W m-2. The change in temperature is taken from the HadCRUT4 global temperature anomaly and the radiative forcings are taken from the CMIP5/RCP4.5 model ensemble. The change in heat content is dominated by ocean heat uptake. The decadal temperature and forcing estimates from data given by Otto et al are shown in Figures 16a and 16b. The 1910 AMO cycle minimum and the 1940 maximum are indicated. The increase in the downward LWIR flux related to the ‘radiative forcing’ shown in Figure 16b cannot couple below the ocean surface and cause any measurable change in ocean temperature. Using the data from Figures 16a and 16b combined with estimates of ΔQ from various sources, Otto et al assume that their net radiative forcing estimates are responsible for the observed heating effects and that the temperature response to the change in LWIR flux is linear. Plots of ΔT vs (ΔF - ΔQ) and ΔT vs ΔF are therefore presumed to be linear, with a slope that changes with the value of ECS or TCR. The results generated by Otto et al are shown in Figures 16c and 16d. Using the data for 2000 to 2010, they create an ECS of 2.0 °C with a 5-95% confidence interval of 1.2 to 3.9 °C and a TCR of 1.3 °C with a confidence interval of 0.9 to 2.0 °C.
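
Equations 1a and 1b are simple enough to evaluate directly. The sketch below uses the F2x value quoted above with illustrative decadal values for ΔT, ΔF and ΔQ, chosen as placeholders near the Otto et al results rather than taken from their published data:

```python
# Energy budget climate sensitivity estimates, Eqns. 1a and 1b.
F2X = 3.44  # W m-2, forcing for a CO2 doubling, as used by Otto et al [2013]

def ecs(dT, dF, dQ):
    """Equilibrium climate sensitivity, Eqn. 1a: F2x*dT/(dF - dQ)."""
    return F2X * dT / (dF - dQ)

def tcr(dT, dF):
    """Transient climate response, Eqn. 1b: F2x*dT/dF."""
    return F2X * dT / dF

# Illustrative decadal changes (deg C and W m-2), not the published values.
dT, dF, dQ = 0.75, 1.95, 0.65
ecs_est = ecs(dT, dF, dQ)  # close to 2 deg C
tcr_est = tcr(dT, dF)      # close to 1.3 deg C
```

Note that the whole calculation hinges on the assumed linearity between ΔT and the forcing terms.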





Figure 16: a) Decadal mean temperature estimates derived from the HadCRUT4 global mean temperature series. b) Decadal mean forcing with standard errors from the CMIP5/RCP4.5 ensemble. c) Estimates of ECS and d) TCR from Otto et al [2013].



THE ‘ATTRIBUTION’ OF ‘HUMAN FACTORS’

The contrived set of radiative forcings used in the climate models to calculate the global mean temperature record is then manipulated to ‘attribute’ the observed warming to ‘human factors’. The ‘human factors’, mainly the LWIR forcings from the increase in ‘greenhouse gases’, are turned off and the climate models are rerun with ‘natural factors’ only. In reality, the most recent warming phase of the AMO and the various bias terms are removed from the temperature record. This is illustrated in Figure 17. This may be regarded as a ‘flat ocean’ assumption. The global mean temperature change and the results from the CMIP5 model ensemble are shown in Figure 17a. The temperature record is taken from three sources, HadCRUT4, NASA GISS and NCDC. The radiative forcings used in the models are shown above in Figure 12c. The 1940 AMO peak and the 1910 AMO minimum are also indicated. The models are then rerun without the ‘human factors’. This is the blue line in Figure 17b. The approximate contribution of the AMO, including the warm phases of the oscillation and the linear recovery from the LIA, is indicated in Figure 18. The box enclosing the 1910 to 1940 warm phase has simply been copied over to show the recent AMO warming. The rest of the warming may be accounted for by UHI effects, changes to the rural/urban mix of the weather stations used in the global average and the ‘homogenization’ adjustments. Figures 17a and 17b are from a report by Terando et al, Using information from global climate models to inform policymaking - The role of the U.S. Geological Survey [2020]. Here the authors have blindly copied the figure from the NCA4, the Fourth USGCRP National Climate Assessment Report [Knutson et al, 2017]. Similar figures with the ‘attribution to human factors’ were published in NCA3, the Third USGCRP National Climate Assessment Report [Melillo, 2014]. They were also used in the Working Group 1 Report from AR5, the Fifth IPCC Assessment Report [IPCC, 2013].
The original work was published by Jones et al, [2013, figure 4].





Figure 17: a) GISTEMP, HadCRUT4.5 and NOAA climate records and the CMIP5 model ensemble results, b) model ensemble with ‘anthropogenic forcings’ removed. The 1910 AMO minimum and the 1940 AMO peak are indicated.





Figure 18: Figure 17b with the positive phases of the AMO, the temperature recovery from the LIA and residual ‘adjustments’, UHI etc. indicated.



EXTREME WEATHER EVENTS

After the contrived set of radiative forcings used to create the ‘global mean temperature’ is divided into ‘natural’ and ‘human’ or anthropogenic factors, it is used to make fraudulent claims that ‘human factors’ are causing increases in ‘extreme weather’. One of the more egregious examples of this is the annual supplement to the Bulletin of the American Meteorological Society, ‘Explaining Extreme Events of [Year] from a Climate Perspective’ [Herring et al, 2022]. The series has been published annually since 2012. The BAMS publication guidelines state:


‘Each paper will start with a 30 word capsule summary that includes, if possible, how anthropogenic climate change contributed to the magnitude and/or likelihood of the event’.


The climate sensitivities created by the CMIP5 and CMIP6 model ensembles and other climate models are used without question to ‘explain’ the observed ‘extreme weather events’ for the year of interest. Natural climate changes related, for example, to ocean oscillations and blocking high pressure systems have to be ‘enhanced’ by the pseudoscience of radiative forcings.


At present the average atmospheric CO2 concentration is increasing by approximately 2.4 ppm per year. This produces an increase near 0.034 W m-2 per year in the downward LWIR flux to the surface. Such a change is far too small to have any measurable effect on surface temperature or any form of ‘extreme weather’. The energy transfer processes related to ocean oscillations, downslope winds and blocking high pressure systems are discussed in more detail in CR23.
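
The magnitude of this annual flux change can be checked against the widely quoted logarithmic approximation ΔF = 5.35 ln(C/C0) W m-2 [Myhre et al, 1998]. This is an approximation to the TOA forcing, used here only for scale, and the concentration value is illustrative:

```python
# Scale check using the logarithmic approximation dF = 5.35*ln(C/C0) W m-2.
from math import log

def co2_forcing(c_new, c_old):
    """Approximate forcing change for a CO2 change from c_old to c_new ppm."""
    return 5.35 * log(c_new / c_old)

c0 = 420.0   # ppm, an illustrative present day concentration
dc = 2.4     # ppm per year
annual = co2_forcing(c0 + dc, c0)      # a few hundredths of a W m-2
doubling = co2_forcing(560.0, 280.0)   # about 3.7 W m-2
```

The annual increment is roughly a hundredth of the flux change assigned to a full CO2 doubling.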



WHERE WAS THE OVERSIGHT?

The equilibrium air column was introduced in its basic form by Arrhenius in 1896. It was modified by Manabe and Wetherald in 1967 to include radiative transfer and a ‘water vapor feedback’. It became a 1-D RC model. When the CO2 concentration was increased, an increase in ‘equilibrium’ surface temperature was produced by the increase in CO2 absorption and then ‘amplified’ by ‘water vapor feedback’ as a mathematical artifact of the simplifying assumptions used to build the model. The radiative transfer algorithms were then ‘improved’ by NASA modelers to include other ‘greenhouse gases’ and more detailed aerosol effects. This gradually led to the pseudoscientific concepts of radiative forcing, additional feedbacks and a climate sensitivity to CO2. The molecular line broadening that decoupled the upward and downward LWIR fluxes was ignored. Similarly, the small changes in LWIR cooling rates as the CO2 concentration was increased were also ignored. A ‘slab’ ocean model was added that was magically heated by a small increase in the downward LWIR flux to the surface produced by an increase in atmospheric ‘greenhouse gases’. The wind driven ocean surface evaporation was ignored. The CO2 doubling ritual was introduced. The 1-D RC model was ‘tuned’ to match a ‘global mean temperature change’ using a contrived set of radiative forcings. The signal from the ocean oscillations was ignored. The temperature record was ‘adjusted’ to better match the models. As computer technology improved, the 1-D RC model disappeared into the unit cells of the larger scale global circulation models (GCMs). These GCMs required the solution to very large numbers of coupled non-linear equations. Such solutions are unstable and have no predictive capabilities over the time scales required for climate change. When the IPCC was formed in 1988 it accepted the pseudoscience of radiative forcings, feedbacks and climate sensitivity in a perturbed climate equilibrium state without question. 
When the USGCRP was formed in 1989/90 it simply followed the lead of the IPCC. Now, thirty five years later, we have a massive, multitrillion dollar fraud that is still based on the pseudoscience of radiative forcings, feedbacks and climate sensitivity. Eisenhower’s warning about the corruption of science by government funding has come true. Where was the oversight?
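
The instability described by Lorenz in 1963 is easy to reproduce. The sketch below integrates the Lorenz equations with a simple forward-Euler scheme (step size and initial perturbation chosen for illustration) and shows that two trajectories starting one part in a billion apart end up on entirely different parts of the attractor:

```python
# Sensitivity to initial conditions in the Lorenz [1963] system.
def lorenz_step(state, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 equations."""
    x, y, z = state
    dx = s * (y - x)
    dy = x * (r - z) - y
    dz = x * y - b * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

p = (1.0, 1.0, 1.0)
q = (1.0 + 1e-9, 1.0, 1.0)   # perturbed by one part in a billion
for _ in range(6000):        # 30 model time units
    p = lorenz_step(p)
    q = lorenz_step(q)

# The tiny initial perturbation has been amplified to macroscopic size.
separation = sum((u - v) ** 2 for u, v in zip(p, q)) ** 0.5
```

This is the amplification of initial condition error that limits weather forecasting.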


• Time delays or phase shifts between the peak solar flux and the surface temperature response provide irrefutable evidence of a non-equilibrium thermal response. This was described by Fourier in 1824.

• The instability in the solution of the coupled non-linear equations for convection was described by Lorenz in 1963. The 12 day limit to weather forecasting was established in 1973.

• Why did Manabe and Wetherald decide to use an equilibrium air column model?

• Why were Manabe and Wetherald allowed to start the development of their GCM?

• Why did the planetary atmosphere modelers at NASA fail to detect the mathematical warming artifacts in the 1-D RC model?

• Why did NASA fail to detect the errors in the H81 slab ocean model?

• Why did NASA allow the use of global circulation models with unstable solutions?

• Where was the NASA review? NASA has a process of Technology Readiness Levels (TRLs) for evaluating technical maturity. Why was this not applied to the climate models? Why has NASA failed to detect the climate fraud in the mission analysis for more recent satellite missions such as OCO?

• Why have ‘equilibrium’ climate modelers with no understanding of climate energy transfer been allowed to serve as climate advisors to the president?

• Why was the ‘flip’ from a global cooling scare to a global warming scare not investigated for fraud?

• Why was the neglect of the AMO signal in the global average temperature record not investigated for fraud?

• As more climate modeling groups were established, why was there no attempt at independent model validation? Where was the independent thermal engineering analysis?

• Why did the weather forecasting groups at the National Center for Atmospheric Research (NCAR) fail to recognize the limitations imposed by Lorenz instabilities on the climate GCMs? Why was the climate equilibrium assumption still accepted?

• The National Labs are typically operated as independent corporations under contract to DOE. For many years, the prime contractor for Los Alamos and Lawrence Livermore Labs was the University of California (UC). More recently, UC was replaced by a consortium of companies and universities that still includes UC. Where was the administrative oversight of the climate modeling activities at the National Labs, including both the prime contractor and DOE?

• There are similar climate modeling oversight issues at other National Labs.

• There is close cooperation between the NOAA Geophysical Fluid Dynamics Lab and Princeton University. Why have the radiative forcing errors associated with the climate group, including the work of Ramaswamy not been corrected?

• The NASA Jet Propulsion Laboratory is administered by the California Institute of Technology (CIT). Why has the climate modeling fraud not been investigated by CIT?

• Why has NOAA allowed its climate modeling group to publish work on ‘extreme weather’? Why was the work by Herring et al published by the American Meteorological Society?

• Why has the USGCRP failed to detect the climate modeling fraud by NOAA, NASA, DOE and NSF? Why has it followed the IPCC reports without any independent validation? A good example of this is shown in Figures 17 and 18. Here, Terando et al from the USGS have blindly copied the USGCRP climate assessment report and the USGCRP in turn has blindly copied the IPCC.

• In addition to the failure to provide oversight to climate modeling activities, there has also been a lack of oversight of the experimental determination of the climate sensitivity and the satellite radiation balance measurements.

• The detailed satellite radiometer data used to determine a ‘planetary energy imbalance’ is reduced to three numbers, the average TSI, the albedo and the average OLR. Any imbalance is supposed to be caused by ‘radiative forcing’. Where is the oversight for these activities?

• It has been assumed, incorrectly, that all of the warming found in the global mean temperature record can be explained using the same contrived set of radiative forcings as used in the climate models. This is illustrated by the work of Otto et al as shown in Figure 16. Where is the oversight for these activities?
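
The three-number reduction of the satellite radiometer data can be written down in one line. The TSI, albedo and OLR values below are illustrative round numbers, not the actual satellite data products:

```python
# The radiation balance reduced to three numbers: TSI, albedo and OLR.
def energy_imbalance(tsi, albedo, olr):
    """Net downward flux (W m-2): absorbed solar minus outgoing LWIR."""
    absorbed_solar = tsi / 4.0 * (1.0 - albedo)  # spherical average
    return absorbed_solar - olr

tsi = 1361.0     # W m-2, total solar irradiance (illustrative)
albedo = 0.29    # dimensionless (illustrative)
olr = 240.0      # W m-2, average outgoing longwave radiation (illustrative)
imbalance = energy_imbalance(tsi, albedo, olr)
```

With these round numbers the ‘imbalance’ is between 1 and 2 W m-2, a small residual of two much larger flux terms, each carrying its own measurement uncertainty.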



CONCLUSIONS

Any temperature changes produced by the observed increase in the atmospheric concentration of CO2 are too small to measure. The radiative forcing or decrease in LWIR flux at TOA produced by an increase in greenhouse gas concentration does not couple to the surface and change the surface temperature. The upward and downward LWIR fluxes are decoupled by molecular line broadening effects. There is no equilibrium, so the small atmospheric heating effects produced by increases in greenhouse gas concentration have to be analyzed as changes to the rate of LWIR cooling. Any heat generated in the troposphere is simply radiated back to space as wideband LWIR emission, mainly by the water bands. There is no change to the energy balance of the earth. At the surface, any small increase in downward LWIR flux has to be added to the interactive, time dependent flux terms coupled to the surface thermal reservoir. A thermal engineering analysis then shows that any change in surface temperature is ‘too small to measure’. The climate modelers have abandoned physical reality in favor of mathematical simplicity. They are playing expensive computer games in an equilibrium climate fantasy land.


There are three parts to the climate fraud. First, the climate energy transfer processes that determine the surface temperature were oversimplified and replaced by an equilibrium air column. When the CO2 concentration is increased, the surface temperature must increase as a mathematical artifact of the calculation. Radiative transfer algorithms and a ‘water vapor feedback’ were added to the basic air column and this 1-D RC model was incorporated into the unit cells of the larger climate GCMs. ‘Improvements’ to the 1-D RC model led to the pseudoscience of radiative forcings, feedbacks and climate sensitivity still used in the climate models today. The climate modelers have trapped themselves in a web of lies of their own making. Second, as funding decreased for government agencies such as NASA and the nuclear programs at DOE there was ‘mission creep’ and some of those with skills in areas such as mathematical analysis and computer programming jumped onto the climate bandwagon. A paycheck was more important than model validation. Third, various environmental and political groups decided to exploit the climate fraud to further their own agendas.


The pseudoscience of computer climate fiction was established between 1967 and 1981, mainly by the work of Manabe and Wetherald at NOAA and Hansen’s group at NASA Goddard. The technical fraud was spread in part by ‘academic incest’. Graduate students became postdoctoral fellows and then moved on to other positions in universities and government agencies - taking their computer models with them. This established a closed group of ‘climate theologians’ who were trapped in their own web of lies about radiative forcings, feedbacks and climate sensitivity. This expanded into a ‘triangle of fraud’ that included satellite studies of the ‘radiation balance of the earth’ and the creation of a climate sensitivity from a contrived set of radiative forcings and the homogenized ‘global mean temperature record’. The growth of the climate modeling fraud coincided with the warming phase of the AMO that was first detected in the weather station record in 1985.


Climate modelers are not scientists. They are no longer capable of logical deduction based on observation and measurement. They have become the prophets of the Imperial Cult of the Global Warming Apocalypse. Instead of a flat earth, they have chosen to believe in a flat ocean where wind driven oscillations and non-equilibrium phase shifts do not exist. The climate must be controlled by radiative forcings and feedbacks. Eisenhower’s warning about the corruption of science by government funding has come true. The sacred spaghetti plots generated by the computer models, the immaculate radiation balance of the earth and the holy climate sensitivity form a triangle of fraud that is part of the creed of the Imperial Cult of the Global Warming Apocalypse. The climate believers have claimed the Divine Right to save the world from a non-existent problem. The new indulgences require us to give up fossil fuels to save us from a climate apocalypse that is too small to measure. We can ride an elevator down four floors to the new climate hell - with a change in LWIR heating rate of +0.08 K per day.





The Triangle of Fraud: the radiation balance, the climate sensitivity and a contrived set of radiative forcings are used to support the climate modeling fraud.






ACKNOWLEDGEMENT

This work was performed as independent research by the author. It was not supported by any grant awards and none of the work was conducted as a part of employment duties for any employer. The views expressed are those of the author. He hopes that you will agree with them.

REFERENCES

Normally, the references given in an article of this nature would be almost exclusively to the peer reviewed literature, with limited references to websites that provide access to climate data. Unfortunately, climate science has been thoroughly corrupted by the global warming fraud. The peer review process has collapsed and been replaced by blatant cronyism. Many of the publications in ‘prestigious’ journals such as Nature, Science, PNAS and others that relate to climate modeling predictions of global warming are fraudulent and should never have been published. Consequently many of the important references given here are to website publications. This should not detract from the integrity of the information provided. Many of these website publications have received a more thorough review than they might have received through the traditional peer review process.


Note: The text editor for web builder does not allow certain text characters including semicolon. URLs with semicolons will have to be loaded manually by the reader with {semicolon} replaced by the semicolon character.



Agassiz, L. (1840) Etudes sur les Glaciers, Neuchatel

Akasofu, S.-I. (2010), “On the recovery from the Little Ice Age” Natural Science 2(11) pp. 1211-1224. Akasofu

AMO (2022) AMO

Andrews, R. (2017a), “Adjusting Measurements to Match the Models – Part 3: Lower Troposphere Satellite Temperatures” Energy Matters Sept 14. Andrews.a

Andrews, R. (2017b), “Making the Measurements Match the Models – Part 2: Sea Surface Temperatures” Energy Matters Aug 2. Andrews.b

Andrews, R. (2017c), “Adjusting Measurements to Match the Models – Part 1: Surface Air Temperatures” Energy Matters July 27. Andrews.c

Arrhenius, S. (1896), “On the influence of carbonic acid in the air upon the temperature of the ground” The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 41 pp. 237-276. Arrhenius

Arrhenius, S. (2014), “The Probable Cause of Climate Fluctuations: –Svante Arrhenius, A Translation of his 1906 Amended View of “Global Warming” Friends of Science, Original title:“Die vermutliche Ursache der Klimaschwankungen” Meddelanden från K. Vetenskapsakademiens Nobelinstitut Band 1 No 2. Arrhenius.1906

Berger, T. and G. Sherrington, (2022), “Uncertainty of Measurement of Routine Temperatures–Part Three” WUWT Oct 14. Berger

Bolin, B. (2007), A History of the Science and Politics of Climate Change. The Role of the Intergovernmental Panel on Climate Change, Cambridge, Cambridge University Press.

Bolin, B. (1960), “On the Exchange of Carbon Dioxide between the Atmosphere and the Sea” Tellus 12 pp. 274-281. Bolin

Bolin, B., and Eriksson, E. (1959). “Changes in the carbon dioxide content of the atmosphere and sea due to fossil fuel combustion”, in B. Bolin, (Ed.), The atmosphere and the sea in motion pp. 130-142. New York: The Rockefeller Institute and Oxford University Press. Bolin.Eriksson

Bryson, R. A. and G. J. Dittberner (1976), “A non-equilibrium model of hemispheric mean surface temperature” J. Atmos. Sci. 33(11) pp. 2094-2106. Bryson

Budyko, M. I. (1969) “The effect of solar radiation variations on the climate of the Earth” Tellus 21(5) pp. 611-619. Budyko

Bunker, A. F. (1976) “Computations of Surface Energy Flux and Annual Air–Sea Interaction Cycles of the North Atlantic Ocean” Monthly Weather Review 104(9) pp. 1122-1140.

[https://doi.org/10.1175/1520-0493(1976)104<1122:COSEFA>2.0.CO{semicolon}2]

Callendar, G. S. (1938), “The artificial production of carbon dioxide and its influence on temperature” J. Roy. Met. Soc. 64 pp. 223-240. Callendar Also available at Callendar.1

CERES Team (2011) NASA Langley, Press Release, OLR Image, March 18 2011. CERES.2011

CERES Team (2004) Earth Radiation Budget: Seasonal cycles in Net Radiation, 3/2000 to 2/2001, CERES/Terra, Release Date: 07/12/2004 CERES.2004

Cess, R. D. and S. D. Goldenberg (1981) “The effect of ocean heat capacity upon global warming due to increasing atmospheric carbon dioxide” J. Geophysical Res. 86 pp. 498-502. Cess

Charney, J. G., A. Arakawa, D. J. Baker, B. Bolin, R. E. Dickinson, R. M. Goody, C. E. Leith, H. M. Stommel and C. I. Wunsch (1979), Carbon Dioxide and Climate: A Scientific Assessment, Report of an ad hoc study group on carbon dioxide and climate, Woods Hole, MA July 23-27. Charney

Clark, R. (2013), “A dynamic, coupled thermal reservoir approach to atmospheric energy transfer Part I: Concepts” Energy and Environment 24(3, 4) pp. 319-340. Clark.I

“A dynamic, coupled thermal reservoir approach to atmospheric energy transfer Part II: Applications” Energy and Environment 24(3, 4) pp. 341-359. Clark.II

Clark, R. and A. Rörsch, (2023) Finding Simplicity in a Complex World - The Role of the Diurnal Temperature Cycle in Climate Energy Transfer and Climate Change, Clark Rörsch Publications, Thousand Oaks, CA. Available from Amazon.

Paperback: Clark.2023a ebook: Clark.2023b

CO2 Science (2023) CO2.Sci

Courtney, R. (1999) “Global Warming: How it all began” John.Daly.Waiting for greenhouse Post 1999. Courtney

D’Aleo, J. and A. Watts (Aug. 27, 2010) “Surface temperature records: policy driven deception? SPPI [http://scienceandpublicpolicy.org/images/stories/papers/originals/surface_temp.pdf] (Link not working) Available at: D’Aleo

Darwall, R. (2017), ‘Green Tyranny’ Encounter Books, NY, NY.

Dewar, E. (1995), Cloak of Green: The Links between Key Environmental Groups, Government and Big Business, Lorimer Press.

Dickinson, R. E. (1981) “Convergence rate and stability of ocean-atmosphere coupling schemes with a zero-dimensional climate model” J. Atmos. Sci 38(10) pp 2112-2120. [https://doi.org/10.1175/1520-0469(1981)038<2112:CRASOO>2.0.CO{semicolon}2]

Douglas, J. H. (March 1, 1975) “Climate change: chilling possibilities” Science News 107 pp. 138-140. Douglas

FM (2012) Fabius Maximus FM

Folland, C. K., D. J. Griggs and J. T. Houghton (2004), “History of the Hadley Centre for Climate Prediction and Research” Weather 59(11) pp. 317-323. Folland Also available at: Folland.1

Fourier, J.-B.-J. (1824), “Remarques générales sur les températures du globe terrestre et des espaces planétaires” Annales de Chimie et de Physique 27, pp. 136–167. Fourier.1824.Fr English Translation: Fourier.1824.Eng

Fourier, B.-J.-B. (1827), “Mémoire sur les températures du globe terrestre et des espaces planétaires” Mém. Acad. R. Sci. Inst. Fr., 7 pp. 527-604. Fourier.1827.Fr English Translation: Fourier.1827.Eng

Fourier, J. -B. -J. (1822) Theorie Analytique de la Chaleur, Didot, Paris.

HadCRUT4 (2022) HadCRUT4



Hale, G. M. and M. R. Querry (1973), “Optical constants of water in the 200 nm to 200 µm wavelength region” Applied Optics 12(3) pp. 555-563. Hale

Hanel, R. A., B. Schlachman, D. Rogers and D. Vanous (1971) “Nimbus 4 Michelson Interferometer” Applied Optics 10(6) pp. 1376-1382. Hanel

Hansen, J. et al., (45 authors), (2005), “Efficacy of climate forcings” J. Geophys Research 110 D18104 pp.1-45. Hansen.2005

Hansen, J., D. Johnson, A. Lacis, S. Lebedeff, P. Lee, D. Rind and G. Russell (1981), “Climate impact of increasing carbon dioxide” Science 213 pp. 957-966. Hansen.1981

Hansen, J., A. Lacis, R. Ruedy M. Sato and H. Wilson (1993), “How sensitive is the world's climate?” National Geographic Research and Exploration 9(2) pp. 142-158. Hansen.1993

Harde, H. (2017), “Radiation Transfer Calculations and Assessment of Global Warming by CO2” Int. J. Atmos. Sci. 9251034 pp. 1-30. Harde

Harper, K. C. (2004) “The Scandinavian tag team: Providers of atmospheric reality to numerical weather prediction efforts in the U. S. (1948-1955)” Proc. Int. Commission on History of Meteorology 1.1 pp. 84-91. Harper

Hausfather, Z. (2019), “CMIP6: The next generation of climate models explained” Carbon Brief Hausfather

Hays, J. D., J. Imbrie, and N. J. Shackleton (1976), “Variations in the Earth's Orbit: Pacemaker of the Ice Ages” Science 194 Dec. 10, pp 1121-1132. Hays

Hecht, M. M. (2007), “Where the global warming hoax was born” 21st Century Science and Technology, pp.64-68, Fall Issue. Hecht

Herring, S. C., N. Christidis, A. Hoell and P. A. Stott (2022), “Explaining Extreme Events of 2020 from a Climate Perspective” Bull. Amer. Meteor. Soc. 101 (1), pp. S1–S128, (and prior years in this series). Herring

Iacono, M. J., J. S. Delamere, E. J. Mlawer, M. W. Shephard, S. A. Clough, and W. D. Collins (2008), “Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer models” J. Geophys. Res. 113, D13103 pp. 1-8. Iacono

Imbrie, J. and K. P. Imbrie (1979), Ice Ages: Solving the Mystery, Harvard University Press, Cambridge, Mass.

IPCC, Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA. (2021). In Press. doi:10.1017/9781009157896.

IPCC.2021

IPCC, Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, (2014)1535 pp. ISBN 9781107661820. IPCC.2013

Jones, G. S., P. A. Stott and N. Christidis (2013), “Attribution of observed historical near surface temperature variations to anthropogenic and natural causes using CMIP5 simulations” J. Geophys. Res. Atmos. 118(10) pp. 4001-4024. Jones.2013

Jones, P. D., T. M. L. Wigley, C. K. Folland, D. E. Parker, J. K. Angell, S. Lebedeff and J. E. Hansen (1988) “Evidence for global warming in the past decade” Nature 332, p. 790. Jones.1988

Jones, P. D., T. M. Wigley and P. B Wright (1986), “Global temperature variations between 1861 and 1984” Nature 323(31) pp. 430-434. Jones.1986

Julian, P. R. and R. M. Chervin (1978) “A Study of the Southern Oscillation and Walker Circulation Phenomenon” Monthly Weather Review 106(10) pp 1433-1451. [https://doi.org/10.1175/1520-0493(1978)106<1433:ASOTSO>2.0.CO{semicolon}2]

Kandel, R. and M. Viollier (2010), “Observation of the Earth's radiation budget from space” Comptes Rendus Geoscience 342(4-5) pp. 286-300. Kandel

Keeling (2022), The Keeling Curve. Keeling

Klaus, V. (2007), Blue Planet in Green Shackles. What Is Endangered: Climate or Freedom? Competitive Enterprise Institute.

Kluft, L. (2020) “Benchmark Calculations of the Climate Sensitivity of Radiative-Convective Equilibrium” Reports on Earth System Science / Max Planck Institute for Meteorology 239 pp. 1-90. Kluft

Knutson, T., J.P. Kossin, C. Mears, J. Perlwitz and M.F. Wehner (2017), “Detection and attribution of climate change” In: Climate Science Special Report: Fourth National Climate Assessment, Volume I, Wuebbles, D.J., D.W. Fahey, K.A. Hibbard, D.J. Dokken, B.C. Stewart, and T.K. Maycock (eds.). U.S. Global Change Research Program, Washington, DC, USA, pp. 114-132. Knutson

Knutti, R. and G. C. Hegerl (2008), “The equilibrium sensitivity of the Earth’s temperature to radiation changes” Nature Geoscience 1 pp. 735-743. Knutti

Lacis, A. A. and V. Oinas (1991), “A description of the correlated k distributing method for modeling nongray gaseous absorption, thermal emission and multiple scattering in vertically inhomogeneous atmospheres” J. Geophys. Res. 96(D5) pp. 9027-9063. Lacis

Lamb, H. H. (1972) British Isles weather types and a register of the daily sequence of circulation patterns 1861 – 1971 Geophys Memoirs, 116 (#2, volume XVI), 85 pp. HMSO, London.

Lettau, H.H. and B. Davidson (1957), Exploring the Atmosphere’s First Mile. Proceedings of the Great Plains Turbulence Field Program, 1 August to 8 September 1953 Volume II, Site Description and Data Tabulation, Oxford, Pergamon Press. (Google digital book)

Lorenz, E. N. (1973), “On the Existence of Extended Range Predictability” J. Applied Meteorology and Climatology 12(3) pp. 543-546. Lorenz

Lorenz, E.N. (1963), “Deterministic nonperiodic flow” Journal of the Atmospheric Sciences 20(2) pp. 130-141. Lorenz.1963



MacCracken, M. C. and F. M. Luther (Eds.) (1985) Detecting the climatic effects of increasing carbon dioxide US Department of Energy Report DOE/ER-0235. MacCracken.1985.a

MacCracken, M. C. and F. M. Luther (Eds.) (1985) Projecting the climatic effects of increasing carbon dioxide US Department of Energy Report DOE/ER-0237. MacCracken.1985.b

Manabe, S. and R. F. Strickler (1964) “Thermal Equilibrium of the Atmosphere with a Convective Adjustment” J. Atmospheric Sciences 21 pp. 361-385. Manabe.Strickler

Manabe, S. and F. Möller (1961) “On the radiative equilibrium and heat balance of the atmosphere” Monthly Weather Review 89(12) pp. 503-532. [https://doi.org/10.1175/1520-0493(1961)089<0503:OTREAH>2.0.CO{semicolon}2]

Manabe, S. and R. T. Wetherald (1975) “The effects of doubling the CO2 concentration in the climate of a general circulation model” J. Atmos. Sci. 32(1) pp. 3-15. Manabe.Wetherald.1975

Manabe, S. and R. T. Wetherald (1967) “Thermal equilibrium of the atmosphere with a given distribution of relative humidity” J. Atmos. Sci. 24 pp. 241-249. Manabe.Wetherald.1967

Mann M. E., R S. Bradley and M. K. Hughes (1999) “Northern Hemisphere temperatures during the past millennium: Inferences, uncertainties, and limitations” Geophys Res Lett. 26:759-762. Mann.1999

Mann, M. E., R. E. Bradley and M. K. Hughes (1998) “Global-scale temperature patterns and climate forcing over the past six centuries” Nature 392, pp. 779-787. Mann.1998

McFarlane, F. (2018), “The 1970s Global Cooling Consensus was not a Myth” Watts Up With That, 11.19.2018. McFarlane

McLean, J. (2010), “We have been conned – an independent review of the IPCC” SPPI 2010 [http://scienceandpublicpolicy.org/originals/we_have_been_conned.html] (Link not working) Available at: McLean.2010

McLean, J. (2009), “Climate Science Corrupted” SPPI 2009. [http://scienceandpublicpolicy.org/images/stories/papers/originals/climate_science_corrupted.pdf] (Link not working) Available at: McLean.2009

Mead, M. and W. W. Kellogg, Eds. (1976), The Atmosphere: Endangered and Endangering, Fogarty International Center Proceedings No. 39, (Washington, D.C.: U.S. Government Printing Office, DHEW Publication No. [NIH] 77-1065). (Google Digital Book)

Meehl, G. A., G. J. Boer, C. Covey, M. Latif and R. J. Stouffer (1997) “Intercomparison Makes for a Better Climate Model” Eos 78(41) pp. 445-451 October 14. Meehl

Melillo, J. M., T. C. Richmond, and G. W. Yohe, eds., (2014) Climate Change Impacts in the United States: The Third National Climate Assessment. U.S. Global Change Research Program, 841 pp. Melillo.pdf On line Melillo.OL

Möller, F. (1964) “Optics of the lower atmosphere” Applied Optics 3(2) pp. 157-166. Möller

Monckton, C. (2009), ‘Climategate: caught green-handed’, SPPI [http://scienceandpublicpolicy.org/monckton/climategate.html] (Link not working) Available at: Monckton

Montford, A. W. (2010), ‘The Hockey Stick Illusion’, Stacey International.

Morice, C. P., J. J. Kennedy, N. A. Rayner and P. D. Jones (2012) “Quantifying uncertainties in global and regional temperature change using an ensemble of observational estimates: The HadCRUT4 data set” J. Geophysical Res. Atmospheres 117 D08101 pp. 1-22. Morice

Mosher, S. and T. W. Fuller (2010), Climategate: The Crutape Letters, Create Space.

Otto, A., F. E. L. Otto, O. Boucher, J. Church, G. Hegerl, P. M. Forster, N. P. Gillett, J. Gregory, G. C. Johnson, R. Knutti, N. Lewis, U. Lohmann, J. Marotzke, G. Myhre, D. Shindell, B. Stevens and M. R. Allen (2013) “Energy budget constraints on climate response” Nature Geoscience 6(6) pp. 415-416. ISSN 1752-0894. Otto

ibid., Supplementary material. Otto.Suppl

Peterson, T. C., W. M. Connolley and J. Fleck (2008) “The myth of the 1970s global cooling consensus” Bull. Amer. Meteor. Soc. 89 pp. 1325-1337. Peterson

Plass, G. N. (1956a) “The influence of the 15-micron carbon dioxide band on the atmospheric infrared cooling rate” Quarterly Journal of the Royal Meteorological Society 82 pp. 310-324. Plass.a Also available at: Plass.a1

Plass, G. N. (1956b) “The carbon dioxide theory of climatic change” Tellus 8(2) pp. 140-154. Plass.b

Pouillet, M. (1837), “Memoir on the solar heat, on the radiating and absorbing powers of the atmospheric air and on the temperature of space” in: Scientific Memoirs selected from the Transactions of Foreign Academies of Science and Learned Societies, edited by Richard Taylor, 4 pp. 44-90. Pouillet.Eng

Original publication: (1836), “Mémoire sur la chaleur solaire: sur les pouvoirs rayonnants et absorbants de l'air atmosphérique et sur la température de l'espace” Comptes Rendus des Séances de l'Académie des Sciences, Paris. 7, pp. 24-65.

Ramanathan, V. (1975) “Greenhouse effect due to chlorofluorocarbons: Climatic implications” Science 190, pp. 50-52. Ramanathan

Ramanathan, V. and J. A. Coakley (1978), “Climate modeling through radiative convective models” Rev. Geophysics and Space Physics 16(4) pp. 465-489. Ramanathan.Coakley Also available at: Ramanathan.Coakley.1

Ramaswamy, V., W. Collins, J. Haywood, J. Lean, N. Mahowald, G. Myhre, V. Naik, K. P. Shine, B. Soden, G. Stenchikov and T. Storelvmo (2019) “Radiative Forcing of Climate: The Historical Evolution of the Radiative Forcing Concept, the Forcing Agents and their Quantification, and Applications” Meteorological Monographs Volume 59 Chapter 14. Ramaswamy

Rasool, S. I. and S. H. Schneider (1971) “Atmospheric carbon dioxide and aerosols: Effects of large increases on global climate” Science 173 pp. 138-141. Rasool

Revelle, R. and H. E. Suess (1957) “Carbon dioxide exchange between atmosphere and ocean and the question of an increase of atmospheric CO2 during the past decades” Tellus 9 pp. 18-27. Revelle

Stephenson, D. B., H. Wanner, S. Brönnimann and J. Luterbacher (2003) “The History of Scientific Research on the North Atlantic Oscillation” Geophysical Monograph Series 134 The North Atlantic Oscillation: Climatic Significance and Environmental Impact, J. W. Hurrell, Y. Kushnir, G. Ottersen and M. Visbeck (eds) Chapter 1. Stephenson Also Stephenson.1

Steyn, M. (2015), A Disgrace to the Profession, Amazon. Steyn

Stone, H. M. and S. Manabe (1968) “Comparison among various numerical models designed for computing IR cooling” Monthly Weather Review 96(10) pp. 735-741. [https://doi.org/10.1175/1520-0493(1968)096<0735:CAVNMD>2.0.CO;2]

Stouffer, R. J., V. Eyring, G. A. Meehl, S. Bony, C. Senior, B. Stevens and K. E. Taylor (2017) “CMIP5 scientific gaps and recommendations for CMIP6” Bull. Amer. Met. Soc. 98(1) pp. 95-105. Stouffer

Taylor, F. W. (2006), Elementary Climate Physics, Oxford University Press, Oxford, Chapter 7.

Taylor, K. E., R. J. Stouffer and G. A. Meehl (2012) “An overview of CMIP5 and the experiment design” Bull. Amer. Met. Soc. 93(4) pp. 485-498. Taylor

Terando, A., D. Reidmiller, S. W. Hostetler, J. S. Littell, T. D. Beard, Jr., S. R. Weiskopf, J. Belnap and G. S. Plumlee (2020) “Using information from global climate models to inform policymaking—The role of the U.S. Geological Survey” U.S. Geological Survey Open-File Report 2020–1058, 25 pp. Terando

Tyndall, J., (1861) “On the Absorption and Radiation of Heat by Gases and Vapours, and on the Physical Connexion of Radiation, Absorption, and Conduction” Philosophical Transactions of the Royal Society of London 151 pp. 1-36. Tyndall

Tyndall, J. (1863), “On radiation through the Earth's atmosphere” Proc. Roy Inst. Jan 23 pp. 200-206.

Wang, W. C., Y. L. Yung, A. A. Lacis, T. Mo and J. E. Hansen (1976), “Greenhouse effects due to man-made perturbations of trace gases” Science 194 pp. 685-690. Wang

Wegman, E. J., D. W. Scott and Y. H. Said, (2010) “Ad hoc committee report on the 'hockey stick' global climate reconstruction”. [http://scienceandpublicpolicy.org/reprint/ad_hoc_report.html] (Link not working) Available at: Wegman

Wigley, T. M. L., J. K. Angell and P. D. Jones (1985), “Analysis of the temperature record” in Detecting the climatic effects of increasing carbon dioxide, M. C. MacCracken and F. M. Luther, Eds. US Department of Energy Report DOE/ER-0235, pp. 55-90. Wigley

Wuebbles, D.J., D.W. Fahey, K.A. Hibbard, D.J. Dokken, B.C. Stewart, and T.K. Maycock (eds.) (2017) Climate Science Special Report: Fourth National Climate Assessment, Volume I. U.S. Global Change Research Program, Washington, DC, USA, pp. 114-132. Wuebbles

Zelinka, M. D., T. A. Myers, D. T. McCoy, S. Po-Chedley, P. M. Caldwell, P. Ceppi, S. A. Klein and K. E. Taylor (2020) “Causes of Higher Climate Sensitivity in CMIP6 Models” Geophysical Research Letters 47 e2019GL085782 pp. 1-12. Zelinka

Zubrin, R. (2013), ‘Merchants of Despair’, Encounter Books, NY, NY.