KaiserScience


Category Archives: Environment

Tidal water level changes in the Merrimack River

I knew about significant water level changes, due to the tides, out at the mouth of the Merrimack River, Massachusetts, but didn’t realize that they were so pronounced even miles inland. So when I heard that the water levels in Haverhill would be low, I had to take a drive out to the river to see what it would be like.

Haverhill River low tide 10 7 2019

So here I am, after I walked out into the middle of the river! GPS clearly shows how far I walked out.

Haverhill River on GPS low tide

I just looked at the NOAA (National Oceanic and Atmospheric Administration) Tides and Currents pages for Newburyport, MA, Merrimack River, Station ID: 8440466

tidesandcurrents.noaa.gov, 8440466

This graph shows the significant differences between the river level at high and low tide, where the Merrimack meets the Atlantic Ocean, in Newburyport.

Merrimack River Entrance Massachusetts Tide Chart

So now I am looking at the station likely closest to where I was standing: Riverside, Merrimack River, Haverhill, MA – Station ID: 8440889

tidesandcurrents.noaa.gov, 8440889

This graph shows the differences between the river level at high and low tide, further upriver, in Haverhill, MA.

Merrimacport, Merrimack River, Haverhill MA Tides

This brings up the question: how are tides created? Check out our resource, the origin of tides.

GIF Tides lighthouse

Beach in the UK

 

Environmental Science Syllabus

Environmental Science

Weekly guide to what we’re doing in class

 

 

Nuclear power

Content objective:

What are we learning? Why are we learning this?

content, procedures, skills

Vocabulary objective

Tier II: High frequency words used across content areas. Key to understanding directions, understanding relationships, and for making inferences.

Tier III: Low frequency, domain specific terms

Building on what we already know

What vocabulary & concepts were learned in earlier grades?
Make connections to prior lessons from this year.
This is where we start building from.

Here we see the glow of Cherenkov radiation as a nuclear fission reactor starts up.

In principle, there are many ways that we can generate nuclear power:

1. nuclear fission of uranium

The nuclei of the heaviest atoms are not stable. They can spontaneously break apart (“fission”) into smaller pieces. But the fascinating thing is that when you add up the mass of the smaller pieces, it almost – but doesn’t quite – equal the mass of the parent atom!

Where did the missing mass go? We think mass could never “just disappear” – that would violate the law of conservation of mass. Isn’t that a “law of nature”?

Turns out that there is no law of conservation of mass. Sure, mass is usually conserved in everyday life, but it isn’t always conserved. So what’s the real deal?

The missing mass has been converted into energy – mostly the kinetic energy of the fission fragments, plus high-energy photons (particles of light).

Under very specific conditions, mass can turn into energy, and vice-versa.  So there’s no absolute ‘law of conservation of mass’ or ‘law of conservation of energy’. Rather, these are just two aspects of a higher order law of nature: ‘the law of conservation of mass & energy.’
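To make the mass-to-energy idea concrete, here is a rough back-of-the-envelope calculation written as a short Python snippet. The 0.2 atomic-mass-unit mass defect is an assumed, textbook-style round number for a single uranium fission, used only to show how E = mc² turns a tiny amount of missing mass into a large amount of energy.

# A rough sketch: converting a small mass defect into energy with E = m * c^2.
# The 0.2 u mass defect below is an assumed round number for one uranium-235 fission,
# not a measured value from this lesson.
c = 2.998e8              # speed of light, meters per second
u = 1.6605e-27           # one atomic mass unit, kilograms
mass_defect = 0.2 * u    # assumed mass "lost" in a single fission event, kilograms
energy_joules = mass_defect * c**2
energy_MeV = energy_joules / 1.602e-13   # 1 MeV = 1.602e-13 joules
print(f"{energy_joules:.2e} J, or about {energy_MeV:.0f} MeV, released per fission")

That works out to roughly 200 MeV per fission, millions of times more energy per atom than a typical chemical reaction releases, which is why a small amount of uranium fuel goes such a long way.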

Scientists have discovered how to use isotopes of uranium to create large amounts of power, which we use to generate electricity.

2. nuclear fission of thorium

The general idea here is the same as for uranium: nuclear fission of a heavy, radioactive metal to produce power. Thorium is far more abundant, easier to process, and much safer to use. It doesn’t sustain the kind of reactions that occur in an atomic bomb, so thorium reactors can’t blow up. They make very little radioactive waste, and the little they do make decays safely over a much shorter period of time. And their waste can’t be used to make nuclear weapons, so there is no fear of nuclear weapons proliferation. Thorium has long been recognized as safer, cheaper, and better all around. So why aren’t we using it?

… research into the mechanization of nuclear reactions was initially driven not by the desire to make energy, but by the desire to make [atomic] bombs. The $2 billion Manhattan Project that produced the atomic bomb sparked a worldwide surge in nuclear research, most of it funded by governments embroiled in the Cold War. And here we come to it: Thorium reactors do not produce plutonium, which is what you need to make a nuke. How ironic. The fact that thorium reactors could not produce fuel for nuclear weapons meant the better reactor fuel got short shrift, yet today we would love to be able to clearly differentiate a country’s nuclear reactors from its weapons program.

… Thorium’s advantages start from the moment it is mined and purified, in that all but a trace of naturally occurring thorium is Th232, the isotope useful in nuclear reactors. That’s a heck of a lot better than the 3% to 5% of uranium that comes in the form we need.

Then there’s the safety side of thorium reactions. Unlike U235, thorium is not fissile. That means no matter how many thorium nuclei you pack together, they will not on their own start splitting apart and exploding. If you want to make thorium nuclei split apart, though, it’s easy: you simply start throwing neutrons at them. Then, when you need the reaction to stop, simply turn off the source of neutrons and the whole process shuts down, simple as pie….

… There are at least seven types of reactors that can use thorium as a nuclear fuel, five of which have entered into operation at some point. Several were abandoned not for technical reasons but because of a lack of interest or research funding (blame the Cold War again). So proven designs for thorium-based reactors exist and need but some support.
– The Thing About Thorium: Why The Better Nuclear Fuel May Not Get A Chance, by Marin Katusa, Forbes, 2/16/2012

Here we see the difference between a uranium fission and a thorium fission nuclear power plant.

Thorium nuclear power

Thorium – World Nuclear Association

The Thing About Thorium: Why The Better Nuclear Fuel May Not Get A Chance. Forbes.

3. nuclear fusion (several types)

tba

in the sun

Sun animation

Inside a star, gravity pulls billions of tons of matter towards the center. Atoms are pushed very close together. So close that sometimes two atoms will fuse into one heavier atom.

The mass of this new atom is slightly less than the mass of the pieces that it was made from. Where did the missing mass go? It effectively becomes energy – which we see as photons, or as the heat/motion energy of other particles.

As an example, here we see deuterium fusing with tritium. The resulting products have less mass than the parts going into the collision. That missing mass becomes about 17.6 million electron-volts (MeV) of energy, with 3.5 MeV carried by the helium nucleus and 14.1 MeV by the neutron.
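We can check that figure ourselves with a short Python sketch. The particle masses below are rounded textbook values in atomic mass units (assumed here, not taken from this page), and the conversion 1 u ≈ 931.5 MeV comes from E = mc².

# Estimating the deuterium + tritium fusion energy from the mass defect.
# The masses are rounded textbook values (assumed), in atomic mass units (u).
m_deuterium = 2.014102
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665
mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)   # in u
energy_MeV = mass_defect * 931.494   # 1 u of mass is equivalent to about 931.494 MeV
print(f"mass defect of {mass_defect:.6f} u releases about {energy_MeV:.1f} MeV per reaction")

The result is close to 17.6 MeV per reaction, matching the figure above.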

For more details see Stars are powered by nuclear fusion.

How can we possibly replicate the energy of stars here on Earth? For the last 70 years people have been steadily working on creating and sustaining nuclear fusion in the laboratory, and the process actually works! Not surprisingly, it has been extremely challenging to do this.

In this torus-shaped device, engineers have designed extremely powerful electromagnets. These create a super-powerful magnetic field, strong enough to contain the hot plasma. We see the plasma contained inside as a glowing blue gas.

nuclear fusion magnetic containment gif

At the present time we cannot use nuclear fusion as a practical way to produce energy, but research is continuing at a steady rate.

Advantages of nuclear fusion – ITER

Conclusion:

In practice we are only using nuclear fission of uranium. Research on thorium fission reactors is slowly proceeding, and we expect to see such reactors operating within the next 20 years. (We could do it much sooner if governments sustainably funded more research.) Research on fusion reactors is also proceeding slowly, but we don’t expect to see such reactors operating soon. It is unclear at the moment when such reactors will be practical.

 

tba

How does nuclear fission power work?

tba

https://kaiserscience.wordpress.com/biology-the-living-environment/human-impact-on-ecosystems/human-industrialization-affects-the-earth/

What are the benefits of nuclear power?

tba

What are the risks of nuclear power?

tba

https://kaiserscience.wordpress.com/biology-the-living-environment/disease/cancer/

What about the nuclear power plant disasters?

What about the radiation release from not using nuclear power?

https://kaiserscience.wordpress.com/2017/02/02/coal-releases-more-radioactivity-than-nuclear-power/

World’s Worst Energy Accidents in Environmental Perspective

Further discussion

 

 

Climate Change Could Make Clouds Disappear, Triggering Cataclysmic Warming

Article archive for my students

A World Without Clouds, by Natalie Wolchover, Quanta Magazine, February 25, 2019

A state-of-the-art supercomputer simulation indicates that a feedback loop between global warming and cloud loss can push Earth’s climate past a disastrous tipping point in as little as a century.

On a 1987 voyage to the Antarctic, the paleoceanographer James Kennett and his crew dropped anchor in the Weddell Sea, drilled into the seabed, and extracted a vertical cylinder of sediment. In an inch-thick layer of plankton fossils and other detritus buried more than 500 feet deep, they found a disturbing clue about the planet’s past that could spell disaster for the future.

Lower in the sediment core, fossils abounded from 60 plankton species. But in that thin cross-section from about 56 million years ago, the number of species dropped to 17. And the planktons’ oxygen and carbon isotope compositions had dramatically changed. Kennett and his student Lowell Stott deduced from the anomalous isotopes that carbon dioxide had flooded the air, causing the ocean to rapidly acidify and heat up, in a process similar to what we are seeing today.

While those 17 kinds of plankton were sinking through the warming waters and settling on the Antarctic seabed, a tapir-like creature died in what is now Wyoming, depositing a tooth in a bright-red layer of sedimentary rock coursing through the badlands of the Bighorn Basin. In 1992, the finder of the tooth fossil, Phil Gingerich, and collaborators Jim Zachos and Paul Koch reported the same isotope anomalies in its enamel that Kennett and Stott had presented in their ocean findings a year earlier. The prehistoric mammal had also been breathing CO2-flooded air.

More data points surfaced in China, then Europe, then all over. A picture emerged of a brief, cataclysmic hot spell 56 million years ago, now known as the Paleocene-Eocene Thermal Maximum (PETM). After heat-trapping carbon leaked into the sky from an unknown source, the planet, which was already several degrees Celsius hotter than it is today, gained an additional 6 degrees. The ocean turned jacuzzi-hot near the equator and experienced mass extinctions worldwide. On land, primitive monkeys, horses and other early mammals marched northward, following vegetation to higher latitudes. The mammals also miniaturized over generations, as leaves became less nutritious in the carbonaceous air. Violent storms ravaged the planet; the geologic record indicates flash floods and protracted droughts. As Kennett put it, “Earth was triggered, and all hell broke loose.”

The PETM doesn’t only provide a past example of CO2-driven climate change; scientists say it also points to an unknown factor that has an outsize influence on Earth’s climate. When the planet got hot, it got really hot. Ancient warming episodes like the PETM were always far more extreme than theoretical models of the climate suggest they should have been. Even after accounting for differences in geography, ocean currents and vegetation during these past episodes, paleoclimatologists find that something big appears to be missing from their models — an X-factor whose wild swings leave no trace in the fossil record.

Evidence is mounting in favor of the answer that experts have long suspected but have only recently been capable of exploring in detail. “It’s quite clear at this point that the answer is clouds,” said Matt Huber, a paleoclimate modeler at Purdue University.

Clouds currently cover about two-thirds of the planet at any moment. But computer simulations of clouds have begun to suggest that as the Earth warms, clouds become scarcer. With fewer white surfaces reflecting sunlight back to space, the Earth gets even warmer, leading to more cloud loss. This feedback loop causes warming to spiral out of control.

For decades, rough calculations have suggested that cloud loss could significantly impact climate, but this concern remained speculative until the last few years, when observations and simulations of clouds improved to the point where researchers could amass convincing evidence.

Now, new findings reported today in the journal Nature Geoscience make the case that the effects of cloud loss are dramatic enough to explain ancient warming episodes like the PETM — and to precipitate future disaster. Climate physicists at the California Institute of Technology performed a state-of-the-art simulation of stratocumulus clouds, the low-lying, blankety kind that have by far the largest cooling effect on the planet.

The simulation revealed a tipping point: a level of warming at which stratocumulus clouds break up altogether. The disappearance occurs when the concentration of CO2 in the simulated atmosphere reaches 1,200 parts per million — a level that fossil fuel burning could push us past in about a century, under “business-as-usual” emissions scenarios. In the simulation, when the tipping point is breached, Earth’s temperature soars 8 degrees Celsius, in addition to the 4 degrees of warming or more caused by the CO2 directly.

Once clouds go away, the simulated climate “goes over a cliff,” said Kerry Emanuel, a climate scientist at the Massachusetts Institute of Technology. A leading authority on atmospheric physics, Emanuel called the new findings “very plausible,” though, as he noted, scientists must now make an effort to independently replicate the work.

To imagine 12 degrees of warming, think of crocodiles swimming in the Arctic and of the scorched, mostly lifeless equatorial regions during the PETM. If carbon emissions aren’t curbed quickly enough and the tipping point is breached, “that would be truly devastating climate change,” said Caltech’s Tapio Schneider, who performed the new simulation with Colleen Kaul and Kyle Pressel.

Huber said the stratocumulus tipping point helps explain the volatility that’s evident in the paleoclimate record. He thinks it might be one of many unknown instabilities in Earth’s climate. “Schneider and co-authors have cracked open Pandora’s box of potential climate surprises,” he said, adding that, as the mechanisms behind vanishing clouds become clear, “all of a sudden this enormous sensitivity that is apparent from past climates isn’t something that’s just in the past. It becomes a vision of the future.”

The Cloud Question

Clouds come in diverse shapes — sky-filling stratus, popcorn-puff cumulus, wispy cirrus, anvil-shaped nimbus and hybrids thereof — and span many physical scales. Made of microscopic droplets, they measure miles across and, collectively, cover most of the Earth’s surface. By blocking sunlight from reaching the surface, clouds cool the planet by several crucial degrees. And yet, they are insubstantial, woven into greatness by complicated physics. If the planet’s patchy white veil of clouds descended to the ground, it would make a watery sheen no thicker than a hair.

Clouds seem simple at first: They form when warm, humid air rises and cools. The water vapor in the air condenses around dust grains, sea salt or other particles, forming droplets of liquid water or ice — “cloud droplets.” But the picture grows increasingly complicated as heat, evaporation, turbulence, radiation, wind, geography and myriad other factors come into play.

Physicists have struggled since the 1960s to understand how global warming will affect the many different kinds of clouds, and how that will influence global warming in turn. For decades, clouds have been seen as by far the biggest source of uncertainty over how severe global warming will be — other than what society will do to reduce carbon emissions.

Kate Marvel contemplates the cloud question at the NASA Goddard Institute for Space Studies in New York City. Last spring, in her office several floors above Tom’s Restaurant on the Upper West Side, Marvel, wearing a cloud-patterned scarf, pointed to a plot showing the range of predictions made by different global climate models. The 30 or so models, run by climate research centers around the world, program in all the known factors to predict how much Earth’s temperature will increase as the CO2 level ticks up.

Each climate model solves a set of equations on a spherical grid representing Earth’s atmosphere. A supercomputer is used to evolve the grid of solutions forward in time, indicating how air and heat flow through each of the grid cells and circulate around the planet.

By adding carbon dioxide and other heat-trapping greenhouse gases to the simulated atmosphere and seeing what happens, scientists can predict Earth’s climate response. All the climate models include Earth’s ocean and wind currents and incorporate most of the important climate feedback loops, like the melting of the polar ice caps and the rise in humidity, which both exacerbate global warming. The models agree about most factors but differ greatly in how they try to represent clouds.

The least sensitive climate models, which predict the mildest reaction to increasing CO2, find that Earth will warm 2 degrees Celsius if the atmospheric CO2 concentration doubles relative to preindustrial times, which is currently on track to happen by about 2050. (The CO2 concentration was 280 parts per million before fossil fuel burning began, and it’s above 410 ppm now. So far, the average global temperature has risen 1 degree Celsius.)

But the 2-degree prediction is the best-case scenario. “The thing that really freaks people out is this upper end here,” Marvel said, indicating projections of 4 or 5 degrees of warming in response to the doubling of CO2. “To put that in context, the difference between now and the last ice age was 4.5 degrees.”
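To see what those "per doubling" numbers imply, here is a quick, hedged calculation in Python. It uses the standard rule of thumb that the warming from CO2 grows with the logarithm of its concentration; the 280 ppm, 410 ppm, and 2-to-5-degree figures come from the paragraph above, and the result is eventual (equilibrium) warming, not the roughly 1 degree observed so far.

import math

# Rough use of the "degrees per CO2 doubling" range quoted above.
# Assumes the standard logarithmic approximation; an illustration, not a climate model.
C0 = 280.0   # pre-industrial CO2, ppm (from the article)
C  = 410.0   # present-day CO2, ppm (from the article)
doublings = math.log2(C / C0)
for sensitivity in (2.0, 5.0):   # degrees Celsius of eventual warming per doubling of CO2
    print(f"sensitivity {sensitivity} C per doubling: about {sensitivity * doublings:.1f} C of eventual warming")

Even at today's concentration, the gap between the low and high ends of the model range is already more than a degree of eventual warming, which is why pinning down the cloud response matters so much.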

The huge range in the models’ predictions chiefly comes down to whether they see clouds blocking more or less sunlight in the future. As Marvel put it, “You can fairly confidently say that the model spread in climate sensitivity is basically just a model spread in what clouds are going to do.”

Climate Change clouds feedback

Image from Lucy Reading-Ikkanda/Quanta Magazine

The problem is that, in computer simulations of the global climate, today’s supercomputers cannot resolve grid cells that are smaller than about 100 kilometers by 100 kilometers in area. But clouds are often no more than a few kilometers across. Physicists therefore have to simplify or “parameterize” clouds in their global models, assigning an overall level of cloudiness to each grid cell based on other properties, like temperature and humidity.

But clouds involve the interplay of so many mechanisms that it’s not obvious how best to parameterize them. The warming of the Earth and sky strengthens some mechanisms involved in cloud formation, while also fueling other forces that break clouds up. Global climate models that predict 2 degrees of warming in response to doubling CO2 generally also see little or no change in cloudiness. Models that project a rise of 4 or more degrees forecast fewer clouds in the coming decades.
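As a concrete, deliberately toy illustration of what "parameterizing" clouds means, the Python sketch below assigns a cloud fraction to a coarse grid cell from its average relative humidity alone. The 0.8 threshold and the linear ramp are made-up assumptions for illustration; real climate models use far more elaborate schemes.

# Toy cloud parameterization: estimate a grid cell's cloudiness from one bulk property.
# The 0.8 humidity threshold and the linear ramp are illustrative assumptions only.
def cloud_fraction(relative_humidity, rh_threshold=0.8):
    if relative_humidity <= rh_threshold:
        return 0.0
    # ramp linearly from no cloud at the threshold to full cover at saturation
    return min(1.0, (relative_humidity - rh_threshold) / (1.0 - rh_threshold))

for rh in (0.50, 0.85, 0.95, 1.00):
    print(f"relative humidity {rh:.2f} gives cloud fraction {cloud_fraction(rh):.2f}")

The research described in this article exists precisely because simple rules like this one are too crude for stratocumulus decks, whose behavior depends on turbulence far smaller than a grid cell.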

The climatologist Michael Mann, director of the Earth System Science Center at Pennsylvania State University, said that even 2 degrees of warming will cause “considerable loss of life and suffering.” He said it will kill coral reefs whose fish feed millions, while also elevating the risk of damaging floods, wildfires, droughts, heat waves, and hurricanes and causing “several feet of sea-level rise and threats to the world’s low-lying island nations and coastal cities.”

At the 4-degree end of the range, we would see not only “the destruction of the world’s coral reefs, massive loss of animal species, and catastrophic extreme weather events,” Mann said, but also “meters of sea-level rise that would challenge our capacity for adaptation. It would mean the end of human civilization in its current form.”

It is difficult to imagine what might happen if, a century or more from now, stratocumulus clouds were to suddenly disappear altogether, initiating something like an 8-degree jump on top of the warming that will already have occurred. “I hope we’ll never get there,” Tapio Schneider said in his Pasadena office last year.

The Simulated Sky

In the last decade, advances in supercomputing power and new observations of actual clouds have attracted dozens of researchers like Schneider to the problem of global warming’s X-factor. Researchers are now able to model cloud dynamics at high resolution, generating patches of simulated clouds that closely match real ones. This has allowed them to see what happens when they crank up the CO2.

First, physicists came to grips with high clouds — the icy, wispy ones like cirrus clouds that are miles high. By 2010, work by Mark Zelinka of Lawrence Livermore National Laboratory and others convincingly showed that as Earth warms, high clouds will move higher in the sky and also shift toward higher latitudes, where they won’t block as much direct sunlight as they do nearer the equator. This is expected to slightly exacerbate warming, and all global climate models have integrated this effect.

But vastly more important and more challenging than high clouds are the low, thick, turbulent ones — especially the stratocumulus variety. Bright-white sheets of stratocumulus cover a quarter of the ocean, reflecting 30 to 70 percent of the sunlight that would otherwise be absorbed by the dark waves below. Simulating stratocumulus clouds requires immense computing power because they contain turbulent eddies of all sizes.

Chris Bretherton, an atmospheric scientist and mathematician at the University of Washington, performed some of the first simulations of these clouds combined with idealized climate models in 2013 and 2014. He and his collaborators modeled a small patch of stratocumulus and found that as the sea surface below it warmed under the influence of CO2, the cloud became thinner. That work and other findings — such as NASA satellite data indicating that warmer years are less cloudy than colder years — began to suggest that the least sensitive global climate models, the ones predicting little change in cloud cover and only 2 degrees of warming, probably aren’t right.

Bretherton, whom Schneider calls “the smartest person we have in this area,” doesn’t only develop some of the best simulations of stratocumulus clouds; he and his team also fly through the actual clouds, dangling instruments from airplane wings to measure atmospheric conditions and bounce lasers off of cloud droplets.

In the Socrates mission last winter, Bretherton hopped on a government research plane and flew through stratocumulus clouds over the Southern Ocean between Tasmania and Antarctica. Global climate models tend to greatly underestimate the cloudiness of this region, and this makes the models relatively insensitive to possible changes in cloudiness.

Bretherton and his team set out to investigate why Southern Ocean clouds are so abundant. Their data indicate that the clouds consist primarily of supercooled water droplets rather than ice particles, as climate modelers had long assumed. Liquid-water droplets stick around longer than ice droplets (which are bigger and more likely to fall as rain), and this seems to be why the region is cloudier than global climate models predict. Adjusting the models to reflect the findings will make them more sensitive to cloud loss in this region as the planet heats up. This is one of several lines of evidence, Bretherton said, “that would favor the range of predictions that’s 3 to 5 degrees, not the 2- to 3-degree range.”

Schneider’s new simulation with Kaul and Pressel improved on Bretherton’s earlier work primarily by connecting what happens in a small patch of stratocumulus cloud to a simple model of the rest of Earth’s climate. This allowed them to investigate for the first time how these clouds not only respond to, but also affect, the global temperature, in a potential feedback loop.

Their simulation, which ran for 2 million core-hours on supercomputers in Switzerland and California, modeled a roughly 5-by-5-kilometer patch of stratocumulus cloud much like the clouds off the California coast. As the CO2 level ratchets up in the simulated sky and the sea surface heats up, the dynamics of the cloud evolve. The researchers found that the tipping point occurs, and stratocumulus clouds suddenly disappear, because of two dominant factors that work against their formation. First, when higher CO2 levels make Earth’s surface and sky hotter, the extra heat drives stronger turbulence inside the clouds. The turbulence mixes moist air near the top of the cloud, pushing it up and out through an important boundary layer that caps stratocumulus clouds, while drawing dry air in from above. Entrainment, as this is called, works to break up the cloud.

Secondly, as the greenhouse effect makes the upper atmosphere warmer and thus more humid, the cooling of the tops of stratocumulus clouds from above becomes less efficient. This cooling is essential, because it causes globs of cold, moist air at the top of the cloud to sink, making room for warm, moist air near Earth’s surface to rise into the cloud and become it. When cooling gets less effective, stratocumulus clouds grow thin.

Countervailing forces and effects eventually get overpowered; when the CO2 level reaches about 1,200 parts per million in the simulation — which could happen in 100 to 150 years, if emissions aren’t curbed — more entrainment and less cooling conspire to break up the stratocumulus cloud altogether.

To see how the loss of clouds would affect the global temperature, Schneider and colleagues inverted the approach of global climate models, simulating their cloud patch at high resolution and parameterizing the rest of the world outside that box. They found that, when the stratocumulus clouds disappeared in the simulation, the enormous amount of extra heat absorbed into the ocean increased its temperature and rate of evaporation.

Water vapor has a greenhouse effect much like CO2, so more water vapor in the sky means that more heat will be trapped at the planet’s surface. Extrapolated to the entire globe, the loss of low clouds and rise in water vapor leads to runaway warming — the dreaded 8-degree jump. After the climate has made this transition and water vapor saturates the air, ratcheting down the CO2 won’t bring the clouds back.

“There’s hysteresis,” Schneider said, where the state of the system depends on its history. “You need to reduce CO2 to concentrations around present day, even slightly below, before you form stratocumulus clouds again.”

Paleoclimatologists said this hysteresis might explain other puzzles about the paleoclimate record. During the Pliocene, 3 million years ago, the atmospheric CO2 level was 400 ppm, similar to today, but Earth was 4 degrees hotter. This might be because we were cooling down from a much warmer, perhaps largely cloudless period, and stratocumulus clouds hadn’t yet come back.

Past, Present and Future

Schneider emphasized an important caveat to the study, which will need to be addressed by future work: The simplified climate model he and his colleagues created assumed that global wind currents would stay as they are now. However, there is some evidence that these circulations might weaken in a way that would make stratocumulus clouds more robust, raising the threshold for their disappearance from 1,200 ppm to some higher level. Other changes could do the opposite, or the tipping point could vary by region.

To better “capture the heterogeneity” of the global system, Schneider said, researchers will need to use many simulations of cloud patches to calibrate a global climate model. “What I would love to do, and what I hope we’ll get a chance to do, is embed many, many of these [high-resolution] simulations in a global climate model, maybe tens of thousands, and then run a global climate simulation that interacts with” all of them, he said. Such a setup would enable a more precise prediction of the stratocumulus tipping point or points.

There’s a long way to go before we reach 1,200 parts per million, or thereabouts. Ultimate disaster can be averted if net carbon emissions can be reduced to zero — which doesn’t mean humans can’t release any carbon into the sky. We currently pump out 10 billion tons of it each year, and scientists estimate that Earth can absorb about 2 billion tons of it a year, in addition to what’s naturally emitted and absorbed. If fossil fuel emissions can be reduced to 2 billion tons annually through the expansion of solar, wind, nuclear and geothermal energy, changes in the agricultural sector, and the use of carbon-capture technology, anthropogenic global warming will slow to a halt.

What does Schneider think the future will bring? Sitting in his office with his laptop screen open to a mesmerizing simulation of roiling clouds, he said, “I am pretty — fairly — optimistic, simply because I think solar power has gotten so much cheaper. It’s not that far away from the cost curve for producing electricity from solar power crossing the fossil fuel cost curve. And once it crosses, there will be an exponential transformation of entire industries.”

Kerry Emanuel, the MIT climate scientist, noted that possible economic collapse caused by nearer-term effects of climate change might also curtail carbon emissions before the stratocumulus tipping point is reached.

But other unforeseen changes and climate tipping points could accelerate us toward the cliff. “I’m worried,” said Kennett, the pioneering paleoceanographer who discovered the PETM and unearthed evidence of many other tumultuous periods in Earth’s history. “Are you kidding? As far as I’m concerned, global warming is the major issue of our time.”

During the PETM, mammals, newly ascendant after the dinosaurs’ downfall, actually thrived. Their northward march led them to land bridges that allowed them to fan out across the globe, filling ecological niches and spreading south again as the planet reabsorbed the excess CO2 in the sky and cooled over 200,000 years. However, their story is hardly one we can hope to emulate. One difference, scientists say, is that Earth was much warmer then to begin with, so there were no ice caps to melt and accelerate the warming and sea-level rise.

“The other big difference,” said the climatologist Gavin Schmidt, director of the Goddard Institute, “is, we’re here, and we’re adapted to the climate we have. We built our cities all the way around the coasts; we’ve built our agricultural systems expecting the rain to be where it is and the dry areas to be where they are.” And national borders are where they are. “We’re not prepared for those things to shift,” he said.

https://www.quantamagazine.org/cloud-loss-could-add-8-degrees-to-global-warming-20190225/

This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (added pub. l 94-553, Title I, 101, Oct 19, 1976, 90 Stat 2546)

 

California wildfire disasters

A wildfire or wildland fire is a fire in an area of combustible vegetation occurring in rural areas. Depending on the type of vegetation present, a wildfire can also be classified more specifically as a brush fire, bushfire, desert fire, forest fire, grass fire, hill fire, peat fire, vegetation fire, and veld fire.

California wildfire

Photo of the Delta Fire, California, 2018. Social media/Reuters.

Fossil charcoal indicates that wildfires began soon after the appearance of terrestrial plants 420 million years ago. Wildfire’s occurrence throughout the history of terrestrial life invites conjecture that fire must have had pronounced evolutionary effects on most ecosystems’ flora and fauna. Earth is an intrinsically flammable planet owing to its cover of carbon-rich vegetation, seasonally dry climates, atmospheric oxygen, and widespread lightning and volcanic ignitions.

The Camp Fire may have been sparked by a power line, but the California wildfire disaster was years in the making.

Living on the Edge: Just as coastal communities must learn to live with hurricanes, communities that edge up against forests are going to have to learn to live with fire.

By JAMES B. MEIGS, Slate, November 20, 2018

It’s hard to look at the images of what used to be Paradise. On Nov. 8, California’s Camp Fire tore through the Sierra Nevada foothills town of 27,000 people with little advance warning. It destroyed homes, incinerated cars—many of which were abandoned on roads that had become gridlocked by fleeing residents—and left a death toll of 77 people and climbing. Nearly 1,000 remain unaccounted for. But if you look closely at photos and video of the aftermath, you’ll notice something surprising. The buildings are gone, but most of the trees are still standing—many with their leaves or needles intact.

The Camp Fire is generally referred to as a forest fire or, to use the term preferred by firefighting professionals, a wildfire. As the name suggests, wildfires are mostly natural phenomena—even when initially triggered by humans—moving through grasslands, scrub, and forest, consuming the biomass in their paths, especially litter and deadwood. Visiting the disaster area, President Donald Trump blamed poor forestry practices and suggested California’s forests should be managed more like Finland’s, where they spend “a lot of time on raking and cleaning.”

But the photos tell a different story. Within Paradise itself, the main fuel feeding the fire wasn’t trees, nor the underbrush Trump suggested should have been raked up. It was buildings. The forest fire became an infrastructure fire. Fire researchers Faith Kearns and Max Moritz describe what can happen when a wildfire approaches a suburban neighborhood during the high-wind conditions common during the California fall: First, a “storm of burning embers” will shower the neighborhood, setting some structures on fire.

“Under the worst circumstances, wind driven home-to-home fire spread then occurs, causing risky, fast-moving ‘urban conflagrations’ that can be almost impossible to stop and extremely dangerous to evacuate.” The town of Paradise didn’t just experience a fast-moving wildfire; its own layout, building designs, and city management turned that fire into something even scarier.

At first glance, the cause of the Camp Fire seems obvious: Sparks from a power line ignited a brush fire, which grew and grew as high winds drove it toward the town (there were also reports of a possible second ignition point). Pacific Gas and Electric, the regional utility, is already facing extensive lawsuits and the threat of financial liabilities large enough to bankrupt the company. And yet, like almost every disaster that kills large numbers of people and damages communities, the causes of the tragedy in Paradise are more complex than they first appear. The failure of the power line was the precipitating factor, but other factors came into play as well: zoning laws and living patterns, building codes and the types of construction materials used, possibly even the forestry management practices Trump inelegantly referenced. (Many residents of Finland got a chuckle out of Trump’s “raking and cleaning” comment, but Trump isn’t alone in calling for more aggressive management of California woodlands.)

A number of environmental, political, and economic trends converged in Butte County in just a few hours on Nov. 8 to spark this fire. But the tragedy was the result of many longer-term decisions, decades in the making.

Paradise sits in the picturesque foothills of the Sierra Nevada range. Its streets bump up against the forest. The surrounding Butte County is less densely populated but still has many homes on lots of 1 to 5 acres. (Some 46,000 people were displaced by the fire overall.) That makes Butte County a prime example of what planners call the wildland-urban interface. A recent Department of Agriculture study defined the WUI as “the area where structures and other human development meet or intermingle with undeveloped wildland.” The report estimated that nearly a third of California’s residents lived in such regions in 2010. And their numbers are growing.

It’s easy to see why. These are lovely places to live, attractive to longtime residents as well as retirees and people moving out of cities. But they are also dangerous, especially in California. The state is subject to several conditions that make fires particularly threatening. One is drought. California summers have always been dry, but records show that they’ve been getting hotter and drier. Fire season is getting longer. Climate models show that that trend is likely to get worse.

Another is wind. Each fall, hot, dry air flows westward from the state’s higher elevations toward the coast. These Santa Ana or “diablo” winds can blow at high speeds for days on end. (On the morning of the Camp Fire, wind speeds as high as 72 miles per hour were recorded.) Like a giant hair dryer, the wind desiccates everything in its path. The night before the fire, local meteorologist Rob Elvington warned: “Worse than no rain is negative rain.” The winds were literally sucking moisture out of the ground.

Those hot, dry conditions make fires terrifyingly easy to start—a hot car muffler, a cigarette ash, a downed power line, almost anything can do it. And the wind makes them almost impossible to stop. As it barreled toward Paradise, the Camp Fire grew at the rate of roughly 80 football fields per minute. “California is a special case,” fire historian Stephen J. Pyne recently wrote in Slate. “It’s a place that nature built to burn, often explosively.” Even if no one lived in them, California’s hills would burn regularly, Pyne notes. But humans and their infrastructure make the problem worse.

One of the biggest risk factors is electric power. Utilities like PG&E don’t have the option of not serving rural or semirural residents. And every power line that crosses dry, flammable terrain could spark a wildfire. The culprit in these cases is, once again, the interplay between human-built infrastructure and the natural environment. Vegetation is constantly growing in power-line corridors, and if a tree falls on a line, or merely touches it, that can cause a short circuit that might spark a fire.

Cal Fire, the California fire management agency, estimates that problems with power lines caused at least 17 major wildfires in Northern California last year. Under an unusual feature of California law known as “inverse condemnation,” a utility can be forced to pay damages for fires that involve its equipment, even if the company hasn’t been proven negligent in its operations. Even before the massive Camp Fire, PG&E announced that it expects its liabilities from 2017’s large wine-country fires to exceed $2.5 billion. (California Gov. Jerry Brown recently signed a bill offering some financial relief to utilities grappling with wildfire costs, but it did not do away with inverse condemnation.)

As more and more people move into wildland-urban zones, these new arrivals will need to be served with electric power. Which means that, not only will there be more people living in the zones threatened by wildfires, but more power lines will need to be built, increasing the risk of fires. Disaster researchers call this the expanding bull’s-eye effect. Also, as more people move into vulnerable regions—and then build expensive infrastructure in those areas—the costs of natural disasters increase. This effect has been shown dramatically in coastal areas such as Houston that have seen the damage estimates associated with hurricanes skyrocket. The expanding bull’s-eye means the costs of rebuilding will keep climbing even if the frequency and severity of natural disasters doesn’t change.

So, California’s fire country faces a double-barreled threat: More lives and infrastructure lie in the path of potential fires than ever before. And the fires are getting bigger. That combination explains why 6 out of the 10 most destructive fires in California history have occurred in the past three years.

So far, California is not doing much to discourage people from moving into its danger zones. Moritz, Naomi Tague, and Sarah Anderson, researchers at the University of California, Santa Barbara, maintain that “people must begin to pay the costs for living in fire-prone landscapes.” They argue that currently, “the relative lack of disincentives to develop in risky areas—for example, expecting state and federal payments for [fire] suppression and losses—ensures that local decisions will continue to promote disasters for which we all pay.” (Disaster experts make a similar argument about how federal flood insurance and other programs encourage people to live in hurricane-prone areas.) One financial analyst who works closely with California utilities believes the inverse condemnation rule is part of this problem: “These communities are very dangerous to supply power to,” he says. “But the utility is forced to carry all the risk. They can’t charge their customers a premium for fire risk.”

Of course, when fires do occur, the residents of these areas suffer the most. The question is how to provide the right incentives for people so that we limit the chances of this happening again. Looking ahead, “We need to ensure that prospective homeowners can make informed decisions about the risks they face in the WUI,” Moritz, Tague, and Anderson say.

What else can be done? Building and zoning codes can be changed to make towns less fire prone. Homes that are built or retrofitted with fireproof materials—and landscaped to keep shrubbery away from structures—can usually survive typical wildfires. In new developments, homes can be clustered and surrounded by fire-resistant buffer zones, such as orchards. And, no matter how well designed, communities in fire zones need realistic evacuation plans and better emergency communications. (Poor communications and evacuation plans that did not account for how fast a fire could move were among the many failures in Paradise.)

There’s even a grain of truth to Trump’s comments that better forest management can reduce the ferocity of wildfires, though it’s not clear it would have helped in the case of the Camp Fire. The Santa Barbara researchers recommend increasing “fuel management such as controlled burns, vegetation clearing, forest thinning, and fire breaks.”

But no amount of fire-proofing or woodland management is going to eliminate fires.

If global warming models hold true, fire seasons are going to be hotter and last longer. Just as people in coastal areas need to adapt to hurricanes, residents of fire country need to learn to live with fire. In both cases, the states and the federal government need to reconsider policies that encourage people to move into these vulnerable areas. It’s easy to see why people love living in mountain foothills and forests—just as it’s easy to see why they love living on beaches. But overdevelopment of fire-prone landscapes means multiplying the inherent hazards of these regions. People need to accept that the problem isn’t just fire—it’s us.

https://slate.com/technology/2018/11/camp-fire-disaster-causes-urban-wildland-interface.html

 

Global warming isn’t natural, and here’s how we know

This is a copy of an article for our students from thelogicofscience.com

The cornerstone argument of climate change deniers is that our current warming is just a natural cycle, and this claim is usually accompanied by the statement, “the planet has warmed naturally before.” This line of reasoning is, however, seriously flawed both logically and factually. Therefore, I want to examine both the logic and the evidence to explain why this argument is faulty and why we are actually quite certain that we are the cause of our planet’s current warming.

The fact that natural climate change occurred in the past does not mean that the current warming is natural.

I cannot overstate the importance of this point. Many people say, “but the planet has warmed naturally before” as if that automatically means that our current warming is natural, but nothing could be further from the truth. In technical terms, this argument commits a logical fallacy known as non sequitur (this is the fallacy that occurs whenever the conclusion of a deductive argument does not follow necessarily from the premises). The fact that natural warming has occurred before only tells us that it is possible for natural warming to occur. It does not indicate that the current warming is natural, especially given the evidence that it is anthropogenic (man-made).

To put this another way, when you claim that virtually all of the world’s climatologists are wrong and the earth is actually warming naturally, you have just placed the burden of proof on you to provide evidence for that claim. In other words, simply citing previous warming events does not prove that the current warming is natural. You have to actually provide evidence for a natural cause of the current warming, but (as I’ll explain shortly) no such mechanism exists.

Natural causes of climate change

Now, let’s actually take a look at the natural causes of climate change to see if any of them can account for our current warming trend (spoiler alert, they can’t).

Sun

The sun is an obvious suspect for the cause of climate change. The sun is clearly an important player in our planet’s climate, and it has been responsible for some warming episodes in the past. So if, for some reason, it was burning hotter now than in the past, that would certainly cause our climate to warm. There is, however, one big problem: it’s not substantially hotter now than it was in the recent past. Multiple studies have looked at whether or not the output from the sun has increased and whether or not the sun is responsible for our current warming, and the answer is a resounding “no” (Meehl et al. 2004; Wild et al. 2007; Lockwood and Frohlich 2007, 2008; Lean and Rind 2008; Imbers et al. 2014).

It likely caused some warming in the first half of the 20th century, but since then, the output from the sun does not match the rise in temperatures (in fact it has decreased slightly; Lockwood and Frohlich 2007, 2008). Indeed, Foster and Rahmstorf (2011) found that after correcting for solar output, volcanoes, and El Niños, the warming trend was even more clear, which is the exact opposite of what we would expect if the sun was driving climate change (i.e., if the sun was the cause, then removing the effect of the sun should have produced a flat line, not a strong increase).

Finally, the most compelling evidence against the sun hypothesis and for anthropogenic warming is (in my opinion) the satellite data. Since the 70s, we have been using satellites to measure the energy leaving the earth (specifically, the wavelengths of energy that are trapped by CO2).

Thus, if global warming is actually caused by greenhouse gasses trapping additional heat, we should see a fairly constant amount of energy entering the earth, but less energy leaving it. In contrast, if the sun is driving climate change, we should see that both the energy entering and leaving the earth have increased.

Do you want to guess which prediction came true? That’s right, there has been very little change in the energy from the sun, but there has been a significant decrease in the amount of energy leaving the earth (Harries et al. 2001; Griggs and Harries 2007). That is about as close to “proof” as you can get in science, and if you are going to continue to insist that climate change is natural, then I have one simple question for you: where is the energy going? We know that the earth is trapping more heat now than it did in the past. So if it isn’t greenhouse gasses that are trapping the heat, then what is it?

Milankovitch cycles

Other important drivers of the earth’s climate are long-term cycles called Milankovitch cycles, which involve slow shifts in the shape of the earth’s orbit, the wobble of its axis, and its axial tilt (eccentricity, precession, and obliquity, if you prefer). In fact, these appear to be one of the biggest initial causes of prominent natural climate changes (like the ice ages). So it is understandable that people would suspect that they are driving the current climate change, but there are several reasons why we know that isn’t the case.

First, Milankovitch cycles are very slow, long-term cycles. Depending on which of the three cycles we are talking about, they take tens of thousands of years, or even a hundred thousand years, to complete. So changes from them occur very slowly. In contrast, our current change is very rapid (happening over a few decades as opposed to a few millennia). So the rate of our current change is a clear indication that it is not being caused by Milankovitch cycles.

Second, you need to understand how Milankovitch cycles affect the temperature. The eccentricity cycle could, in concept, directly cause global warming by changing the earth’s position relative to the sun; however, that would cause the climate to warm or cool by affecting how much energy from the sun hits the earth. In other words, we are back to the argument that climate change is caused by increased energy from the sun, which we know isn’t happening (see the section above).

The other cycles (precession and obliquity) affect the part of the earth that is warmed and the season during which the warming takes place, rather than affecting the total amount of energy entering the earth. Thus, they initially just cause regional warming. However, that regional warming leads to global warming by altering the oceans’ currents and warming the oceans, which results in the oceans releasing stored CO2 (Martin et al. 2005; Toggweiler et al. 2006; Schmittner and Galbraith 2008; Skinner et al. 2010).

That CO2 is actually the major driver of past climate changes (Shakun et al. 2012). In other words, when we study past climate changes, what we find is that CO2 levels are a critically important factor, and, as I’ll explain later, we know that the current increase in CO2 is from us. Thus, when you understand the natural cycles, they actually support anthropogenic global warming rather than refuting it.

Volcanoes

At this point, people generally resort to claiming that volcanoes are actually the thing that is emitting the greenhouse gasses. That argument sounds appealing, but in reality, volcanoes usually emit less than 1% of the CO2 that we emit each year (Gerlach 2011). Also, several studies have directly examined volcanic emissions to see if they can explain our current warming, and they can’t (Meehl et al. 2004; Imbers et al. 2014).

Carbon dioxide (CO2)

A final major driver of climate change is, in fact, CO2. Let’s get a couple of things straight right at the start. First, we know that CO2 traps heat and we know that increasing the amount of CO2 in an environment will result in the temperature increasing (you can find a nice list of papers on the heat trapping abilities of CO2 here).

Additionally, everyone (even climate “skeptics”) agrees that CO2 plays a vital role in maintaining the earth’s temperature. From those facts, it is intuitively obvious that increasing the CO2 in the atmosphere will result in the temperature increasing. Further, CO2 appears to be responsible for a very large portion of the warming during past climate changes (Lorius et al. 1990; Shakun et al. 2012). Note: For past climate changes, the CO2 does lag behind the temperature initially, but as I explained above, the initial warming triggers an increase in CO2, and the CO2 drives the majority of the climate change.
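To make that reasoning concrete, here is a minimal Python sketch of the classic one-layer greenhouse model. All parameter values are assumed for illustration (they are not taken from any of the studies cited above); raising the atmosphere's effective emissivity, a very crude stand-in for adding greenhouse gases, raises the equilibrium surface temperature.

# Minimal one-layer greenhouse model (a textbook toy; illustrative values only).
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
S_ABS = 240.0     # globally averaged absorbed sunlight, W/m^2 (typical textbook value)

def surface_temperature(emissivity):
    # Energy balance of the one-layer model gives:
    # T_surface = [ S_abs / (sigma * (1 - emissivity/2)) ]^(1/4)
    return (S_ABS / (SIGMA * (1.0 - emissivity / 2.0))) ** 0.25

for eps in (0.75, 0.78):   # the second case is a slightly "thicker" greenhouse
    print(f"atmospheric emissivity {eps}: surface temperature about {surface_temperature(eps):.1f} K")

Even this toy model reproduces the qualitative point: hold the incoming sunlight fixed, trap outgoing heat a little more effectively, and the surface must warm to restore balance.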

At this point, you may be thinking, “fine, it’s CO2, but the CO2 isn’t from us, nature produces way more than we do.” It is true that nature emits more CO2 than us, but prior to the industrial revolution, nature was in balance, with the same amount of CO2 being removed as was emitted. Thus, there was no net gain. We altered that equation by emitting additional CO2.

Further, the increase that we have caused is no little thing. We have raised atmospheric CO2 by roughly 45% over pre-industrial levels, and the current concentration of CO2 in the atmosphere is higher than it has been at any point in the past 800,000 years. So, yes, we only emit a small fraction of the total CO2 each year, but we are emitting more CO2 than nature can remove, and a little bit each year adds up to a lot over several decades.

Additionally, we know that the current massive increase in CO2 is from us because of the C13 levels. Carbon has two stable isotopes (C12 and C13), but C13 is heavier than C12. Thus, when plants take carbon from the air and use it to make carbohydrates, they take up a disproportionate amount of C12. As a result, plants, animals (which get their carbon from eating plants), and fossil fuels (which formed from plants and animals) all have lower C13/C12 ratios (relatively more C12) than the atmosphere does.

Ratio C13 to C12 in coral

Therefore, if burning fossil fuels is responsible for the current increase in CO2, we should see the C13/C12 ratio in the atmosphere shift closer to that of fossil fuels (i.e., relatively more C12), and, guess what, that is exactly what we see (Bohm et al. 2002; Ghosh and Brand 2003; Wei et al. 2009). This is unequivocal evidence that we are the cause of the current increase in CO2.
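A toy mixing calculation shows why this fingerprint points at fossil fuels. The delta-C13 values below are rough assumed figures (an atmosphere historically near -6.5 per mil, fossil-fuel carbon near -28 per mil); the studies cited above use measured records, but the direction of the shift is the same.

# Toy carbon-isotope mixing: adding C13-poor fossil-fuel CO2 lowers the atmosphere's
# C13/C12 signature (delta-C13). Input values are rough assumptions for illustration.
def mixed_delta13C(delta_air, delta_fossil, fossil_fraction):
    # mass-weighted average of pre-existing air and added fossil-fuel carbon
    return (1.0 - fossil_fraction) * delta_air + fossil_fraction * delta_fossil

delta_air_preindustrial = -6.5    # per mil, assumed pre-industrial atmosphere
delta_fossil_fuels      = -28.0   # per mil, assumed average for coal, oil, and gas
for fraction in (0.00, 0.05, 0.10):   # fraction of atmospheric CO2 that is fossil-derived
    d = mixed_delta13C(delta_air_preindustrial, delta_fossil_fuels, fraction)
    print(f"fossil fraction {fraction:.2f} gives atmospheric delta-C13 of about {d:.1f} per mil")

The more fossil carbon is mixed in, the lower the atmosphere's C13/C12 ratio, which is exactly the direction of the measured trend.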

Finally, we can construct all of this information into a deductive logical argument (as illustrated on the left). If CO2 traps heat, and we have increased the CO2 in the atmosphere, then more heat will be trapped. To illustrate how truly inescapable that conclusion is, here is an analogous argument:

1). Insulation traps heat
2). You doubled the insulation of your house
3). Therefore, your house will trap more heat

You cannot accept one of those arguments and reject the other (doing so is logically inconsistent).

Note: Yes, I know that the situation is much more complex than simply CO2 trapping heat, and there are various feedback mechanisms at play, but that does not negate the core argument.

Putting the pieces together

So far, I have been talking about all of the drivers of climate change independently, which is clearly an oversimplification, because, in all likelihood, several mechanisms are all acting together. Therefore, the best way to test whether or not the current warming is natural is actually to construct statistical models that include both natural and man-made factors. We can then use those models to see which factors are causing climate change.

hansen-et-al-2005 Global Climate Forcings (warming)

Hansen et al. 2005. Earth’s energy imbalance: confirmation and implications. Science, 308:1431–1435.

We have constructed many such models, and they consistently show that natural factors alone cannot explain the current warming (Stott et al. 2001; Meehl et al. 2004; Allen et al. 2006; Lean and Rind 2008; Imbers et al. 2014).

In other words, including human greenhouse gas emissions in the models is the only way to get the models to match the observed warming. This is extremely clear evidence that the current warming is not entirely natural. To be clear, natural factors do play a role and are contributing, but human factors are extremely important, and most of the models show that they account for the majority of the warming.
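Schematically, the attribution exercise looks like the sketch below: express the observed temperature record as a weighted combination of natural and human-caused forcing histories and ask which weights the data demand. The tiny arrays here are made-up placeholders standing in for real forcing and temperature records, so only the structure of the calculation, not the numbers, should be taken seriously.

import numpy as np

# Schematic attribution: fit temperature as a weighted sum of forcing series.
# All arrays below are placeholder values, not real climate data.
solar_forcing = np.array([0.10, 0.12, 0.11, 0.10, 0.09, 0.10])   # roughly flat "natural" series
ghg_forcing   = np.array([0.30, 0.45, 0.65, 0.90, 1.20, 1.55])   # steadily rising "human" series
observed_temp = np.array([0.05, 0.10, 0.22, 0.35, 0.50, 0.68])   # warming record (placeholder)

# Least-squares fit: observed_temp is modeled as a*solar + b*ghg + constant
X = np.column_stack([solar_forcing, ghg_forcing, np.ones(len(observed_temp))])
coeffs, *_ = np.linalg.lstsq(X, observed_temp, rcond=None)
print("fitted weights (solar, greenhouse, constant):", np.round(coeffs, 2))

In the real studies cited above, a fit that leaves out the greenhouse term simply cannot reproduce the late-20th-century warming; that is the precise sense in which natural factors alone cannot explain it.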

Correlation vs. causation

It is usually about now that opponents of anthropogenic climate change start to argue that scientists are actually committing a correlation fallacy: simply showing a correlation between temperature and the CO2 that we produce does not mean that the CO2 is causing the temperature increase. There are, however, several problems with that argument.

First, correlation can indicate causation under certain circumstances, namely situations where you have controlled all confounding factors. In other words, if you can show that Y is the only thing that is changing significantly with X, then you can reach a causal conclusion (even placebo-controlled drug trials are really just showing correlations between taking the drug and recovery, but because of the control group, that correlation supports a causal conclusion).

In the case of climate change, of course, we have examined the confounding factors. As I explained in the previous section, we have constructed statistical models with the various drivers of climate change, and anthropogenic greenhouse gasses are necessary to account for the current warming. In other words, we have controlled for the other causes of climate change, therefore we can reach a causal conclusion.

Second, and perhaps more importantly, there is nothing wrong with using correlation to show a particular instance of causation if a causal relationship between X and Y has already been established. Let me give an example. The figure to the right shows the smoking rates and lung/bronchial cancer rates in the US. There is an obvious correlation between the two (P < 0.0001), and I don’t think that anyone is going to disagree with the notion that the decrease in smoking is largely responsible for the decrease in lung cancers.

Indeed, there is nothing wrong with reaching that conclusion, and it does not commit a correlation fallacy. This is the case because a causal relationship between smoking and cancer has already been established. In other words, we know that smoking causes cancer because of other studies. Therefore, when you see that the two are correlated over time, there is nothing wrong with inferring that smoking is driving the cancer rates. Likewise, we know from laboratory tests and past climate data that CO2 traps heat and that increasing it results in more heat being trapped. In other words, a causal relationship between CO2 and temperature has already been established. Therefore, there is nothing fallacious about looking at a correlation between CO2 and temperature over time and concluding that the CO2 is causing the temperature change.
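
For what it is worth, a statement like "P < 0.0001" in the smoking example comes from a simple correlation test on two time series. The sketch below uses invented numbers purely for illustration; the statistical point is that computing the correlation and its p-value is trivial, and that the causal reading is justified only because the causal link was established by independent evidence.

```python
from scipy.stats import pearsonr

# Invented, illustrative time series (not real public-health data):
smoking_rate = [42, 40, 37, 35, 33, 30, 28, 25, 23, 21, 19, 17]   # percent of adults
cancer_rate  = [66, 65, 64, 62, 61, 59, 57, 55, 53, 50, 48, 46]   # cases per 100,000

r, p = pearsonr(smoking_rate, cancer_rate)
print(f"r = {r:.3f}, p = {p:.2g}")
# A strong correlation like this supports a causal reading only because
# independent evidence (lab studies, cohort studies) already established
# that smoking causes cancer.
```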

Ad hoc fallacies and the burden of proof

At this point, I often find that people are prone to proposing that some unknown mechanism exists that scientists haven’t found yet. This is, however, a logical fallacy known as an ad hoc rescue. You can’t just make up an unknown mechanism whenever it suits you. If that were valid, then you could always reject any scientific result that you wanted, because it is always possible to propose some unknown mechanism.

Similarly, you can’t use the fact that scientists have been wrong before as evidence, nor can you argue that, “there are still things that we don’t understand about the climate, so I don’t have to accept anthropogenic climate change” (that’s an argument from ignorance fallacy). Yes, there are things that we don’t understand, but we understand enough to be very confident that we are causing climate change, and, once again, you can’t just assume that all of our current research is wrong.

The key problem here is the burden of proof. By claiming that there is some other natural mechanism out there, you have just placed the burden of proof squarely on your shoulders. In other words, you must provide actual evidence of such a mechanism. If you cannot do that, then your argument is logically invalid and must be rejected.

Summary/Conclusion

Let’s review, shall we?

  • We know that it’s not the sun
  • We know that it’s not Milankovitch cycles
  • We know that it’s not volcanoes
  • We know that even when combined, natural causes cannot explain the current warming
  • We know that CO2 traps heat
  • We know that increasing CO2 causes more heat to be trapped
  • We know that CO2 was largely responsible for past climate changes
  • We know that we have raised atmospheric CO2 roughly 45 percent above pre-industrial levels
  • We know that the earth is trapping more heat now than it used to
  • We know that including anthropogenic greenhouse gasses in the models is the only way to explain the current warming trend

When you look at that list of things that we have tested, the conclusion that we are causing the planet to warm is utterly inescapable. For some baffling reason, people often act as if scientists have never bothered to look for natural causes of climate change, but the exact opposite is true. We have carefully studied past climate changes and looked at the natural causes of climate changes, but none of them can explain the current warming.

The only way to account for our current warming is to include our greenhouse gasses in the models. This is extremely clear evidence that we are causing the climate to warm, and if you want to continue to insist that the current warming is natural, then you must provide actual evidence for the existence of a mechanism that scientists have missed, and you must provide evidence that it is a better explanation for the current warming than CO2. Additionally, you are still going to have to refute the deductive argument that I presented earlier (i.e., show that a premise is false or that I committed a logical fallacy), because finding a previously unknown mechanism of climate change would not discredit the importance of CO2 or the fact that we have sharply increased it. Finally, you also need to explain why the earth is trapping more heat than it used to. If you can do all of that, then we’ll talk, but if you can’t, then you must accept the conclusion that we are causing the planet to warm.


 Literature cited

  • Allen et al. 2006. Quantifying anthropogenic influence on recent near-surface temperature change. Surveys in Geophysics 27:491–544.
  • Bohm et al. 2002. Evidence for preindustrial variations in the marine surface water carbonate system from coralline sponges. Geochemistry, Geophysics, Geosystems 3:1–13.
  • Foster and Rahmstorf. 2011. Global temperature evolution 1979–2010. Environmental Research Letters 7:011002.
  • Gerlach 2011. Volcanic versus anthropogenic carbon dioxide. EOS 92:201–202.
  • Ghosh and Brand. 2003. Stable isotope ratio mass spectrometry in global climate change research. International Journal of Mass Spectrometry 228:1–33.
  • Griggs and Harries. 2007. Comparison of spectrally resolved outgoing longwave radiation over the tropical Pacific between 1970 and 2003 using IRIS, IMG, and AIRS. Journal of Climate 20:3982–4001.
  • Hansen et al. 2005. Earth’s energy imbalance: confirmation and implications. Science 308:1431–1435.
  • Harries et al. 2001. Increases in greenhouse forcing inferred from the outgoing longwave radiation spectra of the Earth in 1970 and 1997. Nature 410:355–357.
  • Imbers et al. 2014. Sensitivity of climate change detection and attribution to the characterization of internal climate variability. Journal of Climate 27:3477–3491.
  • Lean and Rind. 2008. How natural and anthropogenic influences alter global and regional surface temperatures: 1889 to 2006. Geophysical Research Letters 35:L18701.
  • Lockwood and Frohlich. 2007. Recent oppositely directed trends in solar climate forcings and the global mean surface air temperature. Proceedings of the Royal Society A 463:2447–2460.
  • Lockwood and Frohlich. 2008. Recent oppositely directed trends in solar climate forcings and the global mean surface air temperature. II. Different reconstructions of the total solar irradiance variation and dependence on response time scale. Proceedings of the Royal Society A 464:1367–1385.
  • Lorius et al. 1990. The ice-core record: climate sensitivity and future greenhouse warming. Nature 347:139–145.
  • Martin et al. 2005. Role of deep sea temperature in the carbon cycle during the last glacial. Paleoceanography 20:PA2015.
  • Meehl, et al. 2004. Combinations of natural and anthropogenic forcings in the twentieth-century climate. Journal of Climate 17:3721–3727.
  • Schmittner and Galbraith 2008. Glacial greenhouse-gas fluctuations controlled by ocean circulation changes. Nature 456:373–376.
  • Shakun et al. 2012. Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation. Nature 484:49–54.
  • Skinner et al. 2010. Ventilation of the deep Southern Ocean and deglacial CO2 rise. Science 328:1147–1151.
  • Stott et al. 2001. Attribution of twentieth century temperature change to natural and anthropogenic causes. Climate Dynamics 17:1–21.
  • Toggweiler et al. 2006. Mid-latitude westerlies, atmospheric CO2, and climate change during the ice ages. Paleoceanography 21:PA2005.
  • Wei et al. 2009. Evidence for ocean acidification in the Great Barrier Reef of Australia. Geochimica et Cosmochimica Acta 73:2332–2346.
  • Wild et al. 2007. Impact of global dimming and brightening on global warming. Geophysical Research Letters

https://thelogicofscience.com/2016/06/06/global-warming-isnt-natural-and-heres-how-we-know/

______________________
This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (added pub. l 94-553, Title I, 101, Oct 19, 1976, 90 Stat 2546)