
Author Archives: New England Blogger

Should schools have Blizzard Bags during snow days?

The idea behind “blizzard bags” and similar programs is to provide an alternative to making up school days missed due to weather disruptions or other unplanned school closures. The MTA Board has some serious concerns about blizzard bags.


Image from wwjnewsradio.radio.com

Share your thoughts on ‘blizzard bags’, MTA

In February, we asked MTA members for their thoughts on what the Department of Elementary and Secondary Education refers to as “alternative structured learning day programs” — otherwise known as “blizzard bags.” Your input will help guide our activism on this matter.

We asked, you answered: Your ‘blizzard bag’ responses. MTA

The “Blizzard Bags” program that allowed Massachusetts students to do class work at home during a winter storm and not have to make up the day in the summer comes to an end with this academic year.

The Massachusetts Department of Elementary and Secondary Education announced in June that it was discontinuing the Alternative Structured Learning Day Program, commonly known as “Blizzard Bags,” in fall 2020. It based its decision on a review of the “development and implementation of these programs.”

Some parents argued that “Blizzard Bags” could not take the place of a full day of school with face-to-face instruction or adequately address the needs of students on Individualized Education Programs.

Also, the Massachusetts Teachers Association voiced “serious concerns” about “Blizzard Bags” as a means of making up for a lost day of classroom instruction.

In the fall of 2018, the state Department of Elementary and Secondary Education established a working group to review the policy. Representatives from the Massachusetts Teachers Association participated, along with representatives of administrators from 10 Massachusetts school districts.

“The decision to discontinue the use of Alternative Structured Learning Day Programs is based upon a variety of factors, including concerns about equitable access for all students,” Jeffrey C. Riley, commissioner of Elementary and Secondary Education, stated on the state DESE website on June 27. “In addition to making every attempt to reschedule school days lost due to inclement weather, leaders should consider holding the first day of school prior to Labor Day. Other possibilities include scheduling a one-week vacation in March instead of week-long vacations in February and April.”

‘Blizzard Bags’ to be dropped by Massachusetts schools after this winter. MassLive.com

But here’s a question that almost no one seemed to even ask: Do snow days actually affect a student’s learning? This study claims that they don’t:

“Snow days don’t subtract from learning”

School administrators may want to be even more aggressive in calling for weather-related closures. A new study conducted by Harvard Kennedy School Assistant Professor Joshua Goodman finds that snow days do not impact student learning. In fact, he finds, keeping schools open during a storm is more detrimental to learning than a closure.

The findings are “consistent with a model in which the central challenge of teaching is coordination of students,” Goodman writes. “With slack time in the schedule, the time lost to closure can be regained. Student absences, however, force teachers to expend time getting students on the same page as their classmates.”

Goodman, a former school teacher, began his study at the behest of the Massachusetts Department of Education, which wanted to know more about the impact of snow days on student achievement. He examined reams of data in grades three through 10 from 2003 to 2010. One conclusion — that snow days are less detrimental to student performance than other absences — can be explained by the fact that school districts typically plan for weather-related disruptions and tack on extra days in the schedule to compensate. They do not, however, typically schedule make-up days for other student absences.

The lesson for administrators might be considered somewhat counterintuitive. “They need to consider the downside when deciding not to declare a snow day during a storm — the fact that many kids will miss school regardless, either because of transportation issues or parental discretion. And because those absences typically aren’t made up in the school calendar, those kids can fall behind.”

Goodman, an assistant professor of public policy, teaches empirical methods and the economics of education. His research interests include labor and public economics, with a particular focus on education policy.

Snow days don’t subtract from learning. The Harvard Gazette

Flaking Out: Snowfall, Disruptions of Instructional Time, and Student Achievement, by Joshua Goodman, Harvard Kennedy School of Government, April 30, 2012


Survivorship bias and why people think cavemen were a thing

One thing that almost everyone knows about the evolution of humans is that our ancestors were “cavemen.” They lived in caves; after all, we have found many of their fossils there, so obviously, right?


Cavemen at National Museum of Natural History by mbell1975, Flickr

From a discussion on Quora

Is the idea of cavemen exaggerated? Because it seems like there are not a lot of caves in the world. Cavemen couldn’t live in caves that don’t exist.

Jeff Lewis, an aerospace engineer, replies, enlightening us both on the evolution of humans and on the concept of survivorship bias. (This is one kind of cognitive bias.)

Let me start with a story that at first might not seem related. Back in WWII, the U.S. Army Air Force wanted to add armor to airplanes to increase their survivability. But, since airplanes need to be light enough to fly and carry a useful load, they had to figure out the best locations to add armor, where they could get the most ‘bang for the buck’. So, when airplanes returned to their air bases, they started recording where all the bullet holes and damage were located. If you compiled it all onto a single map, you’d end up with something like this:

Diagram: bullet-hole distribution recorded on returning WWII aircraft

The intuitive answer that most people seem to jump to is that the areas with the most bullet holes are the areas getting shot the most, and that’s where the armor should go.

But the real answer is that the distribution of hits across all the planes was fairly uniform, but planes that were getting hit in the engines or cockpit never made it back home.

The data being compiled back at the air bases was biased towards the survivors, and so this phenomenon is known, appropriately enough, as survivorship bias. The best places to add armor were in fact the engines and cockpits.
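The logic of survivorship bias is easy to check for yourself. Here is a minimal simulation (the section names, hit counts, and the assumption that any engine or cockpit hit downs the plane are illustrative, not historical data): hits land uniformly across the airframe, but only planes without a fatal hit contribute to the data recorded back at base.

```python
import random

random.seed(42)

SECTIONS = ["engine", "cockpit", "fuselage", "wings", "tail"]
FATAL = {"engine", "cockpit"}  # assumed: any hit here downs the plane

def fly_missions(n_planes=10_000, hits_per_plane=3):
    """Return (all hits taken, hits recorded on planes that made it home)."""
    true_hits = {s: 0 for s in SECTIONS}
    observed_hits = {s: 0 for s in SECTIONS}
    for _ in range(n_planes):
        # hits are distributed uniformly across the airframe
        hits = [random.choice(SECTIONS) for _ in range(hits_per_plane)]
        for s in hits:
            true_hits[s] += 1
        # only surviving planes contribute to the data back at base
        if not any(s in FATAL for s in hits):
            for s in hits:
                observed_hits[s] += 1
    return true_hits, observed_hits

true_hits, observed_hits = fly_missions()
# the recorded data shows zero engine/cockpit damage, even though
# those sections were hit just as often as the wings and fuselage
```

In this toy model the bias is total: the compiled map shows no engine or cockpit hits at all, which is exactly why the undamaged-looking areas were the ones that needed armor.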

So, when it comes to finding bones from our own ancestors/cousins, the finds are subject to the same type of bias. Our ancestors lived and died all over their habitats, but most of their remains have since decomposed. You can’t find remains that no longer exist. The remains that have persisted to the present day are those in locations that shielded them from scavengers and decomposition. One such location is caves.

So, we find so many human and pre-human remains in caves not because they were particularly fond of living there, but because caves were better at preserving their remains than other locations our ancestors lived. However, early finds of ancestral human remains in caves led to the impression that that’s where they were living, and that initial impression has persisted in the general public ever since.

See also

The Counterintuitive World

The nature of time

What is time?

What is time? Where does time come from?

In what way is time really something objective? (something actually out there?)

In what ways is time not objective? (so it would be just a way that humans use to describe our perception of the universe)

What is time?


Why does time never go backward?

The answer apparently lies not in the laws of nature, which hardly distinguish between past and future, but in the conditions prevailing in the early universe.

The Arrow of Time, Scientific American article. David Layzer

Is there a relationship between time and the second law of thermodynamics?

Before reading further, understand that these topics require at least some familiarity with the laws of thermodynamics.

“According to many, there might be a link between what we perceive as the arrow of time and a quantity called entropy…. [but] as far as we can tell, the second law of thermodynamics is true: entropy never decreases for any closed system in the Universe, including for the entirety of the observable Universe itself. It’s also true that time always runs in one direction only, forward, for all observers. What many don’t appreciate is that these two types of arrows — the thermodynamic arrow of entropy and the perceptive arrow of time — are not interchangeable.”

No, Thermodynamics Does Not Explain Our Perceived Arrow Of Time, Starts With A Bang, Ethan Siegel, Forbes

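The statistical statement in the quote, that entropy tends upward for a closed system, can be illustrated with a classic toy model. This is a minimal sketch of the Ehrenfest urn model (the particle count, step count, and entropy formula are standard textbook choices, not from the article), and it illustrates only the second law, not the perceptive arrow of time the article distinguishes from it:

```python
import math
import random

random.seed(0)

# Ehrenfest urn model: N particles in two boxes; each step, one particle
# chosen at random hops to the other box. The Boltzmann-style entropy
# S = ln C(N, k), where k is the count in the left box, drifts toward its
# maximum at k = N/2, though it can fluctuate downward along the way.
N = 100

def entropy(k):
    return math.log(math.comb(N, k))

k = N          # start with every particle in the left box: S = ln 1 = 0
s_start = entropy(k)
for _ in range(5000):
    if random.randrange(N) < k:
        k -= 1  # the chosen particle was in the left box
    else:
        k += 1
s_end = entropy(k)
# s_end ends up near the maximum ln C(100, 50), far above s_start = 0
```

The low-entropy starting state is overwhelmingly likely to evolve toward the high-entropy equilibrium simply because there are vastly more ways to be near fifty-fifty than all-on-one-side.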

Is time (and perhaps space) quantized?

Ethan Siegel leads us in a fascinating discussion:

The idea that space (or space and time, since they’re inextricably linked by Einstein’s theories of relativity) could be quantized goes way back to Heisenberg himself.

Famous for the Uncertainty Principle, which fundamentally limits how precisely we can measure certain pairs of quantities (like position and momentum), Heisenberg realized that certain quantities diverged, or went to infinity, when you tried to calculate them in quantum field theory….

It’s possible that the problems that we perceive now, on the other hand, aren’t insurmountable problems, but are rather artifacts of having an incomplete theory of the quantum Universe.

It’s possible that space and time are really continuous backgrounds, and even though they’re quantum in nature, they cannot be broken up into fundamental units. It might be a foamy kind of spacetime, with large energy fluctuations on tiny scales, but there might not be a smallest scale. When we do successfully find a quantum theory of gravity, it may have a continuous-but-quantum fabric, after all.

Are Space And Time Quantized? Maybe Not, Says Science

Is time quantized? In other words, is there a fundamental unit of time that could not be divided into a briefer unit?

Even In A Quantum Universe, Space And Time Might Be Continuous, Not Discrete

Time’s Arrow (may be) Traced to Quantum Source: A new theory explains the seemingly irreversible arrow of time while yielding insights into entropy, quantum computers, black holes, and the past-future divide.

Theoretical physics: The origins of space and time


Why learn math?

Image: mathematics and geometry, by Inga Nielsen and Alex Landa, Shutterstock


Why learn math? When are we going to use this in real life?

When students ask “when will we ever need this in real life?” they often aren’t actually being curious about their future; they are simply unhappy with being assigned work in the present. But some students truly do want an answer to this question, and teachers, one would hope, should know the answers as well. And there are several answers to this question, not just one.

I. First we should recognize that this is an unfair question. Douglas Corey, at Brigham Young University, writes:

In truth, the when-will-I-use-this question is unfair for the teacher. She doesn’t know when you will (or even might) use it (except on the exam and in the next course in the sequence). She might explain how other people have used it, but, as we saw above, that response is not convincing. The difficulty in answering this question lies with an implicit assumption hidden beneath the question. The student has an idea of the kinds of situations that she will encounter in her life, and when the response from the teacher doesn’t apply to any of these situations, the mathematics seems useless. But it is fraudulent to assume that we know at a moment of reflection the kinds of situations in which we might use something. Why? Because we typically don’t know what we don’t know.

– When Will I Ever Use This? An Essay for Students Who Have Ever Asked This Question in Math Class

II. Does a football team go onto the field and lift weights? Of course not. However, if the players didn’t lift weights in training, they certainly wouldn’t have a chance to win.

III. We’re actually not learning the hard math that mathematicians study in university. For every subject that you think you are studying – algebra, trigonometry, calculus, etc. – you’re really just learning the introduction to that subject! Yes, even after a year of high school calculus, all you have done is scratch the surface of that field of math.

So why learn any of these math topics at all in grades K-12? Because children don’t know what they are going to be 10 or 20 years from now. So consider: If we don’t teach students how to be fluent and literate in English, then how can they read and learn anything? How can they communicate using the written word? They literally would be unable to even consider a career, right?

Now realize that the same is true for math. If we don’t teach students how to be fluent and literate in mathematics and logical thinking, then how could they ever even have a chance to consider a career in medicine, engineering, coding, chemistry, artificial intelligence, astronomy, physics, or math? No one ever would even be able to consider such a career.

IV. Here I’m excerpting some thoughts from Al Sweigart.

A math teacher is giving a lesson on logarithms or the quadratic equation or whatever and is asked by a student, “When will I ever need to know this?”
“Most likely never,” replies the teacher without hesitation. “Most jobs and even a lot of professions won’t require you to know any math beyond basic arithmetic or a little algebra.”
“But,” the teacher continues, “let me ask you this. Why do people go to the gym and lift weights? Do they all plan on becoming Olympic weight lifters or professional bodybuilders? Do they think they’ll one day find an old lady trapped under a 200-pound barbell and say, ‘This is what I’ve been training for’?”
“No, they lift weights because it makes them stronger. Learning math is important because it makes you smarter. It forces your brain to think in a way that it normally wouldn’t: a way that requires precision, discipline, and abstract thought. It’s more than rote memorization, or making beautiful things, or figuring out someone’s expectations and how to appease them.”
“Doing your math homework is practice for the kind of disciplined thinking where there are objective right and wrong answers. And math is ubiquitous: it comes up in a lot of other subjects and is universal across cultures. All this is practice for thinking in a new way. And being able to think in new ways, more than anything, is what will prepare you for an unpredictable, even dangerous, future.”

V. We learn mathematics without realizing its ramifications and applications

This attitude comes partly from ignorance and partly from our faulty education system. We learn mathematics without realizing its ramifications and applications. You have been led to believe that it is useless but it is not. Look around the world in which you live. Almost everything that you experience and enjoy is possible because of mathematics.

You drive a car. A car company uses CAD software, which lets it design and model components with absurd ease. Do you know how CAD software works? It uses rigorous mathematics, from geometry to matrices.

That’s one part of it: the calculation part. Displaying a model on your computer screen is yet another story. Processes are set; algorithms are developed and executed. But merely developing an algorithm is not sufficient. You have to optimize it. To develop and optimize an algorithm you need mathematics. Somebody has to develop the optimization algorithm, and an optimization algorithm is an algorithm that optimizes a different algorithm. To pull off such a feat you would probably need to master functions, graphs and calculus. To perform stress analysis on such a component you would need yet other algorithms; to develop them you would probably need to study finite element analysis and matrices. This is true for any industry, not just the car industry.

Consider a security firm. It needs to be able to identify a person’s face, so it needs a face recognition algorithm. Now, some geeks have developed many such algorithms. Some are simple and less accurate, while others are highly accurate but difficult to employ.

Development of each such algorithm requires extensive knowledge of matrices, probabilities, and a hundred other things, but do you know what is beautiful? The security firm may audit itself and, using yet another mathematical process, assess exactly what type of algorithm it needs. Mathematics. Again. This is true for any forensic analysis: fingerprint matching, face matching, pattern recognition and what not. Many private and public security firms, law enforcement agencies and spy agencies are using and developing such specialized tools, thanks to mathematics.

Let’s come to gaming. You will be thrilled to know that while playing combat games, you are actually fighting an algorithm that can ‘learn’ your behavior. Genetic algorithms, neural networks and such things. Google it. Imagine yourself at a scene in a game. You can’t see what’s behind you, but as you turn to look, the scene develops. Special compression algorithms keep the scene’s information in compressed form when nobody is looking at it. I guess I don’t have to repeat it now, but still, I will. Mathematics.

Investment funds, hedge funds and other financial institutions predict the market and make decisions using mathematical software. Again, this requires number crunching, statistics, pattern recognition (which itself requires a lot of mathematics), optimization, functions, graphs and calculus (for effective predictions). Insurance companies need probabilistic models of customers to come up with new policies, and they invest money in the stock market. Now reread this paragraph, substituting insurance companies for investment funds.

Kedar Marathe, Tata Technologies, answering on Quora.


VI. Kalid Azad, of BetterExplained, writes in How to Develop a Mindset for Math

Math uses made-up rules to create models and relationships. When learning, I ask:

  • What relationship does this model represent?

  • What real-world items share this relationship?

  • Does that relationship make sense to me?

They’re simple questions, but they help me understand new topics. If you liked my math posts, this article covers my approach to this oft-maligned subject. Many people have left insightful comments about their struggles with math and resources that helped them.

Math Education

Textbooks rarely focus on understanding; it’s mostly solving problems with “plug and chug” formulas. It saddens me that beautiful ideas get such a rote treatment:

  • The Pythagorean Theorem is not just about triangles. It is about the relationship between similar shapes, the distance between any set of numbers, and much more.

  • e is not just a number. It is about the fundamental relationships between all growth rates.

  • The natural log is not just an inverse function. It is about the amount of time things need to grow.
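The growth relationships behind e and the natural log can be checked numerically. A minimal Python sketch (the specific numbers chosen here are just for illustration): e emerges as the limit of compounding ever more often, and ln answers “how long to grow this much at 100% continuous growth?”

```python
import math

# e as the limit of compound growth: (1 + 1/n) ** n approaches
# e = 2.71828... as the compounding frequency n grows
for n in (1, 10, 1000, 100_000):
    print(n, (1 + 1/n) ** n)

# the natural log as "time to grow": at 100% continuous growth,
# ln(10) is the time needed to grow from 1 to 10
t = math.log(10)
print(math.exp(t))  # continuous growth for time t recovers 10 (up to rounding)
```

Running it shows the compounding sequence climbing from 2.0 toward 2.71828, which is the “fundamental relationship between all growth rates” the bullet points at.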

Elegant, “a ha!” insights should be our focus, but we leave that for students to randomly stumble upon themselves. I hit an “a ha” moment after a hellish cram session in college; since then, I’ve wanted to find and share those epiphanies to spare others the same pain.

But it works both ways — I want you to share insights with me, too. There’s more understanding, less pain, and everyone wins.

Math Evolves Over Time

I consider math as a way of thinking, and it’s important to see how that thinking developed rather than only showing the result. Let’s try an example.

Imagine you’re a caveman doing math. One of the first problems will be how to count things. Several systems have developed over time:

Image: number systems (unary, Roman, decimal, binary), from Kalid Azad, BetterExplained

No system is right, and each has advantages:

  • Unary system: Draw lines in the sand — as simple as it gets. Great for keeping score in games; you can add to a number without erasing and rewriting.

  • Roman Numerals: More advanced unary, with shortcuts for large numbers.

  • Decimals: Huge realization that numbers can use a “positional” system with place and zero.

  • Binary: Simplest positional system (two digits, on vs off) so it’s great for mechanical devices.

  • Scientific Notation: Extremely compact, can easily gauge a number’s size and precision (1E3 vs 1.000E3).

Think we’re done? No way. In 1000 years we’ll have a system that makes decimal numbers look as quaint as Roman Numerals (“By George, how did they manage with such clumsy tools?”).
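The “many systems, one number” point is easy to make concrete. A short Python sketch (the choice of 14 as the example number is arbitrary) renders the same count in each of the systems listed above:

```python
def to_roman(n):
    """Roman numerals: a refined unary system with subtractive shortcuts."""
    vals = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
            (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
            (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for v, sym in vals:
        while n >= v:
            out.append(sym)   # greedily take the largest value that fits
            n -= v
    return "".join(out)

n = 14
print("unary     :", "|" * n)          # one mark per item counted
print("Roman     :", to_roman(n))      # XIV
print("decimal   :", n)                # positional, base ten
print("binary    :", format(n, "b"))   # 1110: positional, base two
print("scientific:", f"{n:.3E}")       # 1.400E+01: size and precision at a glance
```

Five notations, one quantity; each representation makes a different operation easy, which is exactly why no single system is “right.”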

Negative Numbers Aren’t That Real

Let’s think about numbers a bit more. The example above shows our number system is one of many ways to solve the “counting” problem.

The Romans would consider zero and fractions strange, but that doesn’t mean “nothingness” and “part to whole” aren’t useful concepts. Notice how each system incorporated new ideas.

Fractions (1/3), decimals (.234), and complex numbers (3 + 4i) are ways to express new relationships. They may not make sense right now, just like zero didn’t “make sense” to the Romans. We need new real-world relationships (like debt) for them to click.

Even then, negative numbers may not exist in the way we think, as you convince me here:

You: Negative numbers are a great idea, but don’t inherently exist. It’s a label we apply to a concept.

Me: Sure they do.

You: Ok, show me -3 cows.

Me: Well, um… assume you’re a farmer, and you lost 3 cows.

You: Ok, you have zero cows.

Me: No, I mean, you gave 3 cows to a friend.

You: Ok, he has 3 cows and you have zero.

Me: No, I mean, he’s going to give them back someday. He owes you.

You: Ah. So the actual number I have (-3 or 0) depends on whether I think he’ll pay me back. I didn’t realize my opinion changed how counting worked. In my world, I had zero the whole time.

Me: Sigh. It’s not like that. When he gives you the cows back, you go from -3 to 3.

You: Ok, so he returns 3 cows and we jump 6, from -3 to 3? Any other new arithmetic I should be aware of? What does sqrt(-17) cows look like?

Me: Get out.

Negative numbers can express a relationship:

  • Positive numbers represent a surplus of cows

  • Zero represents no cows

  • Negative numbers represent a deficit of cows that are assumed to be paid back

But the negative number “isn’t really there” — there’s only the relationship they represent (a surplus/deficit of cows). We’ve created a “negative number” model to help with bookkeeping, even though you can’t hold -3 cows in your hand. (I purposefully used a different interpretation of what “negative” means: it’s a different counting system, just like Roman numerals and decimals are different counting systems.)
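The bookkeeping interpretation can be made literal with a few lines of code (a toy sketch; the `describe` helper and the cow-ledger framing are my illustration, not Azad's):

```python
def describe(herd):
    """Read the surplus/deficit bookkeeping model back into plain language."""
    if herd > 0:
        return f"surplus of {herd} cows"
    if herd < 0:
        return f"owed {-herd} cows"
    return "no cows"

herd = 0
herd -= 3              # lent 3 cows to a friend: a deficit on the ledger,
print(describe(herd))  # not a pile of negative cows in the field
herd += 3              # the friend pays them back
print(describe(herd))  # the ledger resolves to "no cows"
```

The sign is pure bookkeeping: the number -3 never exists in the pasture, only in the relationship the ledger records.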

By the way, negative numbers weren’t accepted by many people, including Western mathematicians, until the 1700s. The idea of a negative was considered “absurd”. Negative numbers do seem strange unless you can see how they represent complex real-world relationships, like debt.

Why All The Philosophy?

I realized that my **mindset is key to learning.** It helped me arrive at deep insights, specifically:

  • Factual knowledge is not understanding. Knowing “hammers drive nails” is not the same as the insight that any hard object (a rock, a wrench) can drive a nail.

  • Keep an open mind. Develop your intuition by allowing yourself to be a beginner again.

A university professor went to visit a famous Zen master. While the master quietly served tea, the professor talked about Zen. The master poured the visitor’s cup to the brim, and then kept pouring. The professor watched the overflowing cup until he could no longer restrain himself. “It’s overfull! No more will go in!” the professor blurted. “You are like this cup,” the master replied. “How can I show you Zen unless you first empty your cup?”

  • Be creative. Look for strange relationships. Use diagrams. Use humor. Use analogies. Use mnemonics. Use anything that makes the ideas more vivid. Analogies aren’t perfect but help when struggling with the general idea.

  • Realize you can learn. We expect kids to learn algebra, trigonometry and calculus that would astound the ancient Greeks. And we should: we’re capable of learning so much, if explained correctly. Don’t stop until it makes sense, or that mathematical gap will haunt you. Mental toughness is critical — we often give up too easily.

So What’s The Point?

I want to share what I’ve discovered, hoping it helps you learn math:

  • Math creates models that have certain relationships

  • We try to find real-world phenomena that have the same relationship

  • Our models are always improving. A new model may come along that better explains that relationship (Roman numerals to the decimal system).

Sure, some models appear to have no use: “What good are imaginary numbers?”, many students ask. It’s a valid question, with an intuitive answer.

The use of imaginary numbers is limited by our imagination and understanding — just like negative numbers are “useless” unless you have the idea of debt, imaginary numbers can be confusing because we don’t truly understand the relationship they represent.

Math provides models; understand their relationships and apply them to real-world objects.

Developing intuition makes learning fun — even accounting isn’t bad when you understand the problems it solves. I want to cover complex numbers, calculus and other elusive topics by focusing on relationships, not proofs and mechanics.

But this is my experience — how do you learn best?

This section by Kalid Azad was made under a Creative Commons Attribution-NonCommercial-ShareAlike license.



Fraud in science

This is a work in progress.

My First Fraud Kit

Image by Aurich Lawson, Ars Technica, Epic fraud: How to succeed in science (without doing any)

Science is a self-correcting enterprise.

But science is generally about investigating nature – not investigating the human investigators themselves. Scientists don’t assume that everyone else’s research is always correct, but realistically they operate on the presumption that research is earnest and honest. When a scientist does decide to engage in fraud, the fake results can therefore be hard to detect, especially in some disciplines.


Lysenko, Russia, and genetics-denial

Lysenkoism was named for Soviet agronomist Trofim Denisovich Lysenko. It arose in Joseph Stalin’s Soviet Union. Lysenkoism mandated that all biological research conducted in the USSR conform to a modified Lamarckian evolutionary theory. Communists wanted this to be true because it promised a biology based on a moldable view of life, consistent with Marxist-Leninist dogma.

Lysenkoists employed a form of political correctness to instill terror in anyone who disagreed with their dogma. People who disagreed with them faced public denunciation, loss of Communist Party membership, loss of employment, and even arrest by the secret police. Between Lysenko’s grip on power and the “disappearances” of many of his opponents, it would be years before the Soviet biology program recovered. – adapted from RationalWiki.

“It was an ugly picture of what happens when science is subservient to ideology, arguably the most extreme example in history. As a result of Lysenko’s crank ideas, the famine that was already underway was worsened. Lysenkoism was also exported to other communist countries like China, which also experienced horrible famine. Millions of people starved due to Lysenko’s crank ideas, making him arguably the scientist with the largest body count in human history.” – The Return of Lysenkoism

Supposed link between personality types and cancer

A remarkable series of fraudulent papers attempted to convince people that lung cancer wasn’t caused by cigarettes. This fake research turns out to have been funded by the cigarette lobby.

“In 1992, Anthony Pelosi voiced concerns in the British Medical Journal about controversial findings from Hans Eysenck – one of the most influential British psychologists of all time – and German researcher Ronald Grossarth-Maticek. Those findings claimed personality played a bigger part in people’s chances of dying from cancer or heart disease than smoking. Almost three decades later, Eysenck’s institution have recommended these studies be retracted from academic journals. Hannah Devlin speaks to Pelosi about the twists and turns in his ultimately successful journey. And to the Guardian’s health editor, Sarah Boseley, about how revelations from tobacco industry documents played a crucial role.”

Taking on Eysenck: one man’s mission to challenge a giant of psychology

Fake link between vaccines and autism

Andrew Wakefield claimed that he had shown a link between vaccines and autism.

“He was found guilty of dishonesty in his research and banned from medicine by the UK General Medical Council following an investigation by Brian Deer of the London Sunday Times.” – Wikipedia

Anesthesiology research fraud

Yoshitaka Fujii (Japan), researcher in anesthesiology, fabricated data in at least 183 scientific papers, setting what is believed to be a record. A committee reviewing 212 papers published by Fujii over a span of 20 years found that 126 were entirely fabricated, with no scientific work done. – Wikipedia


Corrupted Science: Fraud, Ideology and Politics in Science


Retraction Watch: Reports on retractions of scientific papers and on related topics

Yoshihiro Sato: Researcher at the center of an epic fraud remains an enigma to those who exposed him

Yoshitaka Fujii: Epic fraud: How to succeed in science (without doing any)

Scientific Misconduct (Wikipedia)


The thinking error at the root of science denial

Excerpted from The thinking error at the root of science denial

Characteristics of science denial

Image: five characteristics of science denial, from de.wikipedia.org

By Jeremy P. Shapiro, Adjunct Assistant Professor of Psychological Sciences, Case Western Reserve University May 8, 2018, theconversation.com

As a psychotherapist, I see a striking parallel between a type of thinking involved in many mental health disturbances and the reasoning behind science denial. As I explain in my book “Psychotherapeutic Diagrams,” dichotomous thinking, also called black-and-white and all-or-none thinking, is a factor in depression, anxiety, aggression and, especially, borderline personality disorder.

In this type of cognition, a spectrum of possibilities is divided into two parts, with a blurring of distinctions within those categories. Shades of gray are missed; everything is considered either black or white. Dichotomous thinking is not always or inevitably wrong, but it is a poor tool for understanding complicated realities because these usually involve spectrums of possibilities, not binaries.

Spectrums are sometimes split in very asymmetric ways, with one-half of the binary much larger than the other.

For example, perfectionists categorize their work as either perfect or unsatisfactory; good and very good outcomes are lumped together with poor ones in the unsatisfactory category.

In borderline personality disorder, relationship partners are perceived as either all good or all bad, so one hurtful behavior catapults the partner from the good to the bad category.

It’s like a pass/fail grading system in which 100 percent correct earns a P and everything else gets an F.

In my observations, I see science deniers engage in dichotomous thinking about truth claims. In evaluating the evidence for a hypothesis or theory, they divide the spectrum of possibilities into two unequal parts: perfect certainty and inconclusive controversy. Any bit of data that does not support a theory is misunderstood to mean that the formulation is fundamentally in doubt, regardless of the amount of supportive evidence.

Similarly, deniers perceive the spectrum of scientific agreement as divided into two unequal parts: perfect consensus and no consensus at all. Any departure from 100 percent agreement is categorized as a lack of agreement, which is misinterpreted as indicating fundamental controversy in the field.

There is no ‘proof’ in science

In my view, science deniers misapply the concept of “proof.”

Proof exists in mathematics and logic but not in science. Research builds knowledge in progressive increments. As empirical evidence accumulates, there are more and more accurate approximations of ultimate truth but no final end point to the process.

Deniers exploit the distinction between proof and compelling evidence by categorizing empirically well-supported ideas as “unproven.” Such statements are technically correct but extremely misleading, because there are no proven ideas in science, and evidence-based ideas are the best guides for action we have.

I have observed deniers use a three-step strategy to mislead the scientifically unsophisticated. First, they cite areas of uncertainty or controversy, no matter how minor, within the body of research that invalidates their desired course of action. Second, they categorize the overall scientific status of that body of research as uncertain and controversial. Finally, deniers advocate proceeding as if the research did not exist.

For example, climate change skeptics jump from the realization that we do not completely understand all climate-related variables to the inference that we have no reliable knowledge at all. Similarly, they give equal weight to the 97 percent of climate scientists who believe in human-caused global warming and the 3 percent who do not, even though many of the latter receive support from the fossil fuels industry.

This same type of thinking can be seen among creationists. They seem to misinterpret any limitation or flux in evolutionary theory to mean that the validity of this body of research is fundamentally in doubt.


This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (Added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546)

Why is the Earth still hot?

I. The formation of the Earth created a huge amount of heat

The Earth is thought to have formed from the collision of many rocky asteroids, perhaps hundreds of kilometers in diameter, in the early solar system.

Formation of Solar System

As the proto-Earth gradually bulked up, continuing asteroid collisions and gravitational collapse kept the planet molten.

Heavier elements – in particular iron – would have sunk to the core within 10 to 100 million years, carrying with them other elements that bind to iron.

Radioactive potassium may be major heat source in Earth’s core,  Robert Sanders, UC Berkeley News, 12/13/2003

II. More heat generated when dense material sank down towards the center of Earth

When the Earth was first formed, this material was not solid; some was hot enough to become viscous (like Silly Putty) or even liquid (like lava).

The denser material was mostly iron and some radioactive metals.

This dense metal slowly sank towards the center, while less dense rock floated upwards.

This process itself created a lot of friction, which created a lot of heat.

“Gradually, however, the Earth would have cooled off and become a dead rocky globe with a cold iron ball at the core if not for the continued release of heat by the decay of radioactive elements like:

potassium-40, uranium-238 and thorium-232, which have half-lives of 1.25 billion, 4 billion and 14 billion years, respectively.

About one in every thousand potassium atoms is radioactive.”
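The half-lives quoted above determine how much of each isotope has survived since the Earth formed, via the standard decay law N(t) = N₀ · (1/2)^(t / t½). A minimal sketch (taking the Earth's age as roughly 4.5 billion years):

```python
def fraction_remaining(t_byr: float, half_life_byr: float) -> float:
    """Fraction of a radioactive isotope left after t billion years."""
    return 0.5 ** (t_byr / half_life_byr)

EARTH_AGE_BYR = 4.5  # approximate age of the Earth, billions of years

# Half-lives from the text: K-40 = 1.25, U-238 = 4, Th-232 = 14 billion years
for name, half_life in [("K-40", 1.25), ("U-238", 4.0), ("Th-232", 14.0)]:
    print(f"{name}: {fraction_remaining(EARTH_AGE_BYR, half_life):.0%} remaining")
```

Running this shows why these three isotopes still heat the planet: even the shortest-lived of them, potassium-40, retains a non-trivial fraction of its original abundance, while most of the uranium-238 and thorium-232 is still there.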

III. Heat from the decay of radioactive elements.

Most metals we know are stable. Think of nickel, iron, copper and gold. If you seal them in a box so that they aren't exposed to oxygen, they don't rust and never change. Millions of years from now they will still be around.

What’s inside metal atoms? Electrons, protons and neutrons. In a metal atom, the number of these particles will normally never change.

Example: iron-56 has 26 protons, 30 neutrons and 26 electrons.

But some very large atoms are special: they are not stable – they change all by themselves. These are called radioactive elements.

Uranium-238: 92 protons, 146 neutrons, 92 electrons

-> spontaneously changes into

Thorium-234: 90 protons, 144 neutrons, 90 electrons + an alpha particle (2 protons, 2 neutrons) + heat
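Alpha decay bookkeeping can be checked mechanically: the parent nucleus's protons and neutrons must equal the daughter's plus the alpha particle's (2 protons, 2 neutrons). A minimal sketch, using uranium-238's actual first decay step, alpha decay to thorium-234:

```python
def is_valid_alpha_decay(parent: tuple, daughter: tuple) -> bool:
    """Check nucleon conservation for alpha decay.
    Each nuclide is a (protons, neutrons) pair; an alpha particle is (2, 2)."""
    alpha = (2, 2)
    return (parent[0] == daughter[0] + alpha[0]
            and parent[1] == daughter[1] + alpha[1])

# Uranium-238 (92 p, 146 n) alpha-decays to thorium-234 (90 p, 144 n):
print(is_valid_alpha_decay((92, 146), (90, 144)))  # True
```

The same check rules out impossible decays: a nucleus cannot spontaneously gain protons, so no bookkeeping of this kind can turn U-238 into a heavier element without an outside particle (as happens with neutron capture in a reactor).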
– – –

In sum, there was no shortage of heat in the early Earth, and the planet's inability to cool off quickly is what keeps its interior hot today. Not only do the Earth's plates act as a blanket on the interior, but even convective heat transport in the solid mantle is not a particularly efficient mechanism for heat loss.

The planet does lose some heat through the processes that drive plate tectonics, especially at mid-ocean ridges. For comparison, smaller bodies such as Mars and the Moon show little evidence for recent tectonic activity or volcanism.

We derive our primary estimate of the temperature of the deep earth from the melting behavior of iron at ultrahigh pressures.

We know that the earth’s core, which extends from a depth of 2,886 kilometers down to the center at 6,371 kilometers (1,794 to 3,960 miles), is predominantly iron, with some contaminants.

How? The speed of sound through the core (as measured from the velocity at which seismic waves travel across it) and the density of the core are quite similar to those of iron at high pressures and temperatures, as measured in the laboratory. Iron is the only element that closely matches the seismic properties of the earth’s core and is also present in sufficient abundance in the universe to make up the approximately 35 percent of the mass of the planet contained in the core.

The earth’s core is divided into two separate regions: the liquid outer core and the solid inner core, with the transition between the two lying at a depth of 5,156 kilometers (3,204 miles).

Therefore, if we can measure the melting temperature of iron at the extreme pressure of the boundary between the inner and outer cores, then this lab temperature should closely approximate the real temperature at this liquid-solid interface. Scientists in mineral physics laboratories use lasers and high-pressure devices called diamond-anvil cells to re-create these hellish pressures and temperatures as closely as possible.

Those experiments provide a stiff challenge, but our estimates for the melting temperature of iron at these conditions range from about 4,500 to 7,500 kelvins (about 7,600 to 13,000 degrees F).
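The kelvin and Fahrenheit figures quoted throughout this section can be cross-checked with the standard conversion °F = (K − 273.15) × 9/5 + 32:

```python
def kelvin_to_fahrenheit(k: float) -> float:
    """Convert a temperature from kelvins to degrees Fahrenheit."""
    return (k - 273.15) * 9 / 5 + 32

# The estimated melting range of iron at inner/outer-core pressure:
for k in (4500, 7500):
    print(f"{k} K = {kelvin_to_fahrenheit(k):,.0f} °F")
```

This yields about 7,640 °F and 13,040 °F, matching the rounded "about 7,600 to 13,000 degrees F" in the text.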

As the outer core is fluid and presumably convecting (and with an additional correction for the presence of impurities in the outer core), we can extrapolate this range of temperatures to roughly 3,500 to 5,500 kelvins (5,800 to 9,400 degrees F) at the base of the earth’s mantle (the top of the outer core).

The bottom line here is simply that a large part of the interior of the planet (the outer core) is composed of somewhat impure molten iron alloy. The melting temperature of iron under deep-earth conditions is high, thus providing prima facie evidence that the deep earth is quite hot.

Gregory Lyzenga is an associate professor of physics at Harvey Mudd College. He provided some additional details on estimating the temperature of the earth’s core:

How do we know the temperature? The answer is that we really don’t–at least not with great certainty or precision. The center of the earth lies 6,400 kilometers (4,000 miles) beneath our feet, but the deepest that it has ever been possible to drill to make direct measurements of temperature (or other physical quantities) is just about 10 kilometers (six miles).

Ironically, the core of the earth is far less accessible to direct probing than the surface of Pluto would be. Not only do we not have the technology to “go to the core,” but it is not at all clear how it will ever be possible to do so.

As a result, scientists must infer the temperature in the earth’s deep interior indirectly. Observing the speed at which seismic waves pass through the earth allows geophysicists to determine the density and stiffness of rocks at depths inaccessible to direct examination.

If it is possible to match up those properties with the properties of known substances at elevated temperatures and pressures, it is possible (in principle) to infer what the environmental conditions must be deep in the earth.

The problem with this is that the conditions are so extreme at the earth’s center that it is very difficult to perform any kind of laboratory experiment that faithfully simulates conditions in the earth’s core.

Nevertheless, geophysicists are constantly trying these experiments and improving on them, so that their results can be extrapolated to the earth’s center, where the pressure is more than three million times atmospheric pressure.

The bottom line of these efforts is that there is a rather wide range of current estimates of the earth’s core temperature. The “popular” estimates range from about 4,000 kelvins up to over 7,000 kelvins (about 7,000 to 12,000 degrees F).

If we knew the melting temperature of iron very precisely at high pressure, we could pin down the temperature of the Earth’s core more precisely, because it is largely made up of molten iron. But until our experiments at high temperature and pressure become more precise, uncertainty in this fundamental property of our planet will persist.

What will happen when the Earth cools?

When the Earth’s core finally does cool – billions of years from now – the planet’s interior will solidify and plate tectonics will cease. Therefore there will be

  1. No more earthquakes

  2. No more volcanic eruptions

  3. No more island building

  4. No more mountain building

The Earth’s surface will eventually be eroded down to a flatter surface, marred only by new impact craters. Earth will then be a geologically dead planet, like the Moon.

Some scientists estimate that “The planet is now cooling about 100°C every 1 billion years, so eventually, maybe several billions of years from now, the waning rays of a dying sun will shine down on a tectonically dead planet whose continents are frozen in place.”
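Taking the quoted cooling rate at face value, a deliberately naive linear extrapolation shows what that timescale implies. The starting temperature of ~5,000 K is an assumed round figure from the core-temperature range discussed above, and real thermal evolution would not follow a constant rate:

```python
def temp_after(current_k: float, byr: float, rate_k_per_byr: float = 100.0) -> float:
    """Temperature after `byr` billion years, assuming a constant
    cooling rate (a crude linear model, for illustration only)."""
    return current_k - rate_k_per_byr * byr

# Starting from an assumed ~5,000 K core temperature:
for byr in (1, 10, 30):
    print(f"after {byr} billion years: {temp_after(5000, byr):,.0f} K")
```

Even this crude model makes the point: at ~100 K per billion years, tens of billions of years pass before the interior approaches solidification, which is why the text speaks of a tectonically dead Earth only in the very distant future.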


How do we know what lies at the Earth’s core?

How we know what lies at the Earth’s core. BBC

Addressing misconceptions

If the Earth’s core is radioactive why is there no radiation at the surface?

Click the link to read the article, but the short version is that there is indeed radioactivity here on the Earth’s surface!

External resources and discussions

What percent of the Earth’s core is uranium? earthscience.stackexchange.com

Claim: Radioactive decay accounts for half of Earth’s heat, and related, What Keeps the Earth Cooking? Berkeley Lab scientists join their KamLAND colleagues to measure the radioactive sources of Earth’s heat flow

A fascinating although somewhat controversial article: Andrault, D., Monteux, J., Le Bars, M., & Samuel, H. (2016). The deep Earth may not be cooling down. Earth and Planetary Science Letters, 443. doi:10.1016/j.epsl.2016.03.020


