The Physics of Interstellar Travel
Why should humanity eventually colonize the stars?
http://www.mccallstudios.com/the-prologue-and-the-promise/
“Ask ten different scientists about the environment, population control, genetics and you’ll get ten different answers, but there’s one thing every scientist on the planet agrees on. Whether it happens in a hundred years or a thousand years or a million years, eventually our Sun will grow cold and go out. When that happens, it won’t just take us. It’ll take Marilyn Monroe and Lao-Tzu, Einstein, Morobuto, Buddy Holly, Aristophanes .. and all of this .. all of this was for nothing unless we go to the stars.”
– Writer J. Michael Straczynski, from a character’s speech (Commander Sinclair) in Babylon 5, season 1, “Infection”
This is a resource on possible ways humans could achieve interstellar travel.
How to use this resource
Can be read as enrichment.
Resource for a science club project.
Use space travel as an NGSS phenomenon or to create a storyline; one may teach about chemistry topics:
chemical reactions
practical use of reactions – chemical rockets
ions versus atoms
practical use of ions – ion drives for space travel
atoms and anti-atoms: basic subatomic particles of matter/antimatter
energy levels/quantum jumps
Use space travel as an NGSS phenomenon or to create a storyline: one may teach about modern physics topics:
nuclear fission
nuclear fusion
magnetic fields – practical uses of fields (Bussard ramjet)
black holes and wormholes
quantum jumps (chemistry/physics)
Einstein’s theory of relativity (relates to warp drive)
Introduction
Realistically, we currently have no technology that would let us send unmanned, let alone manned, spacecraft to even the nearest star. The Voyager spacecraft – launched in 1977 – is traveling away from our Sun at a rate of 17.3 km per second.
If Voyager were to travel to our nearest star, Proxima Centauri, it would take over 73,000 years to arrive.
Yes, if we built such a probe today, we could – with some effort – bring it to a speed ten times faster, but it would still take about 7,300 years to reach another star.
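These travel times are simple arithmetic: distance divided by speed. Here is a rough back-of-the-envelope check as a Python sketch, using the approximate figures quoted above (the light-year conversion is a standard value):

```python
# Rough travel-time estimate at Voyager-class speeds (all figures are approximate).
LIGHT_YEAR_KM = 9.461e12          # kilometers in one light year
distance_ly = 4.24                # Proxima Centauri is about 4.24 light years away
speed_km_s = 17.3                 # Voyager's speed relative to the Sun, km/s

distance_km = distance_ly * LIGHT_YEAR_KM
seconds = distance_km / speed_km_s
years = seconds / (60 * 60 * 24 * 365.25)

print(f"At {speed_km_s} km/s: about {years:,.0f} years")    # ~73,000 years
print(f"At 10x that speed: about {years/10:,.0f} years")    # ~7,300 years
```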
To understand the size of this space probe, here is an image of it under construction.
What do we think about, when we think of interstellar travel?
We’re all familiar with FTL (faster than light) space travel in Star Trek…
or from movies like Star Wars.
But nothing like this currently exists. We’re not even sure if anything like warp drive or hyperspace could exist – although we’ll get to those ideas at the end of this unit. So we need to start with what we currently have. What kinds of space travel technology do we have right now? All of our rocketships are powered by chemical reactions.
These are the manned rocketships that we have used from the 1960s up to today.
First, we need to know – What are chemical reactions?
We then need to know what combustion is.
Here we see a SpaceX Falcon 9 rocket lifting off, carrying a Crew Dragon reusable manned spacecraft (seen in the image above).
Chemical reaction powered rockets are good for manned or unmanned missions within our solar system. But they are relatively slow and require huge amounts of fuel.
Solar sail spaceships
These are an application of Newton’s laws of motion and conservation of momentum.
Solar sails feel the pressure of sunlight – the push of photons – in much the same way that traditional sailboats capture the force of the wind.
The first spacecraft to make use of the technology was IKAROS, launched in 2010.
The force of sunlight on the ship’s mirrors is akin to a sail being blown by the wind. High-energy laser beams could be used as a light source to exert much greater force than would be possible using sunlight.
Solar sail craft offer the possibility of low-cost operations combined with long operating lifetimes.
These are very low-thrust propulsion systems, and they use no propellant. They are very slow, but very affordable.
Newton’s laws of motion
Momentum
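To get a feel for just how gentle this thrust is, here is a rough sketch of the radiation-pressure force on an ideal, perfectly reflective sail near Earth’s orbit. The sail area and spacecraft mass below are my own illustrative assumptions, not IKAROS specifications:

```python
# Radiation-pressure force on a perfectly reflective sail at 1 AU: F = 2 * I * A / c
SOLAR_INTENSITY = 1361.0   # W/m^2, the solar constant near Earth
C = 3.0e8                  # speed of light, m/s

sail_area = 200.0          # m^2 -- an assumed, roughly IKAROS-scale sail
craft_mass = 300.0         # kg  -- an assumed spacecraft mass

force = 2 * SOLAR_INTENSITY * sail_area / C    # ~1.8 millinewtons
acceleration = force / craft_mass              # ~6e-6 m/s^2

print(f"Thrust: {force*1000:.1f} mN, acceleration: {acceleration:.1e} m/s^2")
# Tiny -- but it never runs out, because no propellant is being spent.
```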
Ionic propulsion spacecraft
We first learn What are atoms? and What are ions?
These ideas are then related to Newton’s laws of motion and conservation of momentum.
Ionic rockets have low acceleration, and it takes a long time for a spacecraft to build up much speed. However they are extremely efficient.
Ion propulsion uses engines such as the Hall-effect thruster (HET), which was used in the European Space Agency’s (ESA) SMART-1 mission. Ion drives are good for unmanned missions within our solar system.
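To see what “extremely efficient” buys you, here is a sketch using the Tsiolkovsky rocket equation, Δv = g₀ · Isp · ln(m₀/m_f). The masses and specific-impulse values are illustrative assumptions, not figures from any real mission:

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def delta_v(isp_seconds, mass_initial, mass_final):
    """Tsiolkovsky rocket equation: total velocity change for a given propellant load."""
    return G0 * isp_seconds * math.log(mass_initial / mass_final)

# The same spacecraft (1,000 kg dry) carrying 1,000 kg of propellant:
print(delta_v(450, 2000, 1000))    # chemical engine, Isp ~450 s  -> about 3.1 km/s
print(delta_v(3000, 2000, 1000))   # ion thruster,    Isp ~3000 s -> about 20 km/s
```

Same propellant load, roughly six times the total velocity change – that is what high specific impulse means in practice.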
Nuclear propulsion (working engines already designed!)
These systems have already been built and tested here on Earth.
Nuclear Electric propulsion – In this kind of system, thermal energy from a nuclear fission reactor is converted to electrical energy. This is then used to drive an ion thruster.
Nuclear Thermal Rocket – Heat from a nuclear fission reactor adds energy to a fluid. This fluid is then expelled out of a rocket nozzle, creating thrust.
Here is where we may learn about nuclear fission
Matt Williams writes
In a Nuclear Thermal Propulsion (NTP) rocket, uranium or deuterium reactions are used to heat liquid hydrogen inside a reactor, turning it into ionized hydrogen gas (plasma), which is then channeled through a rocket nozzle to generate thrust.
A Nuclear Electric Propulsion (NEP) rocket involves the same basic reactor converting its heat and energy into electrical energy, which would then power an electrical engine. In both cases, the rocket would rely on nuclear fission or fusion to generate propulsion rather than chemical propellants, which has been the mainstay of NASA and all other space agencies to date.
Although no nuclear-thermal engines have ever flown, several design concepts have been built and tested over the past few decades, and numerous concepts have been proposed. These have ranged from the traditional solid-core design – such as the Nuclear Engine for Rocket Vehicle Application (NERVA) – to more advanced and efficient concepts that rely on either a liquid or a gas core.
However, despite these advantages in fuel-efficiency and specific impulse, the most sophisticated NTP concept has a maximum specific impulse of 5000 seconds (50 kN·s/kg). Using nuclear engines driven by fission or fusion, NASA scientists estimate it could take a spaceship only 90 days to get to Mars when the planet was at “opposition” – i.e. as close as 55,000,000 km from Earth.
But adjusted for a one-way journey to Proxima Centauri, a nuclear rocket would still take centuries to accelerate to the point where it was flying at a fraction of the speed of light. It would then require several decades of travel time, followed by many more centuries of deceleration before reaching its destination. All told, we’re still talking about 1,000 years before it reaches its destination. Good for interplanetary missions, not so good for interstellar ones.
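Here is a quick sketch of why even a 5,000-second specific impulse falls far short of interstellar speeds, reusing the rocket equation above. The mass ratio of 10 is my own illustrative assumption:

```python
import math

G0 = 9.81   # standard gravity, m/s^2
C = 3.0e8   # speed of light, m/s

isp = 5000          # seconds -- the advanced NTP figure quoted above
mass_ratio = 10     # assume 90% of the launch mass is propellant (illustrative)

dv = G0 * isp * math.log(mass_ratio)
print(f"delta-v ~ {dv/1000:.0f} km/s, which is only {dv/C:.2%} of the speed of light")
# Roughly 113 km/s, about 0.04% of c -- hence the multi-century travel times.
```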
Torchships
“Have you simply had it up to here with these impotent little momma’s-boy rockets that take almost a year to crawl to Mars? Then you want a herculean muscle-rocket, with rippling titanium washboard abs and huge geodesic truck-nuts! You want a Torchship! Who cares if the exhaust can evaporate Rhode Island? You wanna rocket with an obscenely high delta V, one that can crank out one g for days at a time. Say goodbye to all that fussy Hohmann transfer nonsense, the only navigation you need is point-and-shoot.” – Winchell D. Chung Jr.
Torchships are what we think of from many classic science fiction stories.
Shockingly, we already have the technology to build a torchship powered by multiple, small nuclear-fission explosions – Project Orion.
Project Orion was a study conducted between the 1950s and 1960s by the United States Air Force, DARPA, and NASA – [it would be a spaceship] propelled by a series of explosions of atomic bombs behind the craft via nuclear pulse propulsion. Early versions of this vehicle were proposed to take off from the ground; later versions were presented for use only in space. Six non-nuclear tests were conducted using models.
The Orion concept offered high thrust and high specific impulse at the same time. Orion would have offered performance greater than the most advanced conventional or nuclear rocket engines then under consideration. Supporters of Project Orion felt that it had potential for cheap interplanetary travel, but it lost political approval over concerns about fallout from its propulsion. The Partial Test Ban Treaty of 1963 is generally acknowledged to have ended the project.
Designs were considered that would actually allow us to build interstellar spacecraft! An Orion torchship could achieve about 10% of the speed of light. At this speed such a ship could reach the closest star system, Alpha Centauri, in just 44 years.
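That 44-year figure is straightforward arithmetic – distance divided by speed – taking Alpha Centauri’s distance of roughly 4.37 light years (my input, not a figure from the sources below):

```python
distance_ly = 4.37   # Alpha Centauri, in light years (approximate)
speed_c = 0.10       # 10% of the speed of light

print(distance_ly / speed_c)   # ~44 years of travel time, ignoring acceleration and braking phases
```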
Our SpaceFlight Heritage: Project Orion, a nuclear bomb and rocket – all in one.
Project Orion
Realistic Designs: Atomic Rockets
Project Orion. Medium.com
Project Orion: The Spaceship Propelled By Nuclear Bombs
The Nuclear Bomb Powered Spaceship – Project Orion
And there’s more – Project Orion was just the first torchship designed, and it only uses 1960s-level nuclear fission. In the last generation, more flexible and safer methods using nuclear fission have been developed. Similarly, we have made many advances in nuclear fusion – see the next section.
Torchships – nuclear fusion
Nuclear fusion is the process that powers our sun, and all stars in the universe. Inside a star, gravity pulls billions of tons of matter towards the center. Atoms are pushed very close together. Two atoms are fused into one, heavier atom.
Yet the mass of this new atom is slightly less than the mass of the pieces that it was made of in the first place. Where did the missing mass go? It became energy – which we see as photons, or as the heat/motion energy of other particles. This is also the process by which hydrogen (thermonuclear) bombs work.
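As a worked illustration of mass turning into energy via E = mc², here is a sketch using the deuterium–tritium reaction favored in fusion research (my own example; stars mostly fuse ordinary hydrogen through a different chain):

```python
# Deuterium + tritium -> helium-4 + neutron: about 0.4% of the rest mass becomes energy.
C = 3.0e8                      # speed of light, m/s
mass_in_kg = 1.0               # fuse one kilogram of D-T fuel
fraction_converted = 0.0037    # ~0.37% mass defect for the D-T reaction

energy_joules = fraction_converted * mass_in_kg * C**2
print(f"{energy_joules:.2e} J")   # ~3e14 J, roughly the energy of ~80 kilotons of TNT
```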
How can we possibly replicate the energy of stars here on Earth? For the last 70 years people have been working on this. It has been extremely challenging to do this, but progress is slowly being made.
Read more here about nuclear power.
Here is a great article about Torchships that realistically are possible.
and Torch Drives: An Overview
Very speculative technologies
Fusion (Bussard) Ramjet
Proposed by physicist Robert W. Bussard in 1960. It uses nuclear fusion. An enormous electromagnetic funnel “scoops” hydrogen from the interstellar medium and dumps it into the reactor as fuel.
As the ship picks up speed, the reactive mass is forced into a progressively constricted magnetic field, compressing it until thermonuclear fusion occurs. The magnetic field then directs the energy as rocket exhaust through an engine nozzle, thereby accelerating the vessel.
Without any fuel tanks to weigh it down, a fusion ramjet could achieve speeds approaching 4% of the speed of light and travel anywhere in the galaxy.
However, the potential drawbacks of this design are numerous. For instance, there is the problem of drag. The ship relies on increased speed to accumulate fuel, but as it collides with more and more interstellar hydrogen, it may also lose speed – especially in denser regions of the galaxy.
Second, deuterium and tritium (used in fusion reactors here on Earth) are rare in space, whereas fusing regular hydrogen (which is plentiful in space) is beyond our current methods.
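To get a sense of the scale involved, here is a rough sketch of how much interstellar hydrogen a scoop could sweep up per second. Every input here – gas density, scoop radius, ship speed – is my own illustrative assumption:

```python
import math

# How much interstellar hydrogen does a Bussard-style scoop sweep up per second?
PROTON_MASS = 1.67e-27      # kg
gas_density = 1.0e6         # hydrogen atoms per m^3 (~1 per cm^3, a typical interstellar value)
scoop_radius = 1.0e6        # m -- an assumed 1,000 km radius electromagnetic funnel
ship_speed = 3.0e6          # m/s -- assumed 1% of the speed of light

area = math.pi * scoop_radius**2
mass_per_second = gas_density * PROTON_MASS * area * ship_speed
print(f"{mass_per_second*1000:.0f} grams of hydrogen per second")   # only ~16 g/s
# Even a funnel 2,000 km across gathers fuel at a trickle -- hence the enormous scoop.
```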
See http://www.projectrho.com/public_html/rocket/slowerlight3.php
Antimatter-Matter annihilation powered rocket
What is antimatter?
https://www.symmetrymagazine.org/article/april-2015/ten-things-you-might-not-know-about-antimatter
https://sciencenotes.org/what-is-antimatter-definition-and-examples/
https://www.facebook.com/theuniqueknowledge/posts/935607460207906
Find source for the next quote
Fans of science fiction are sure to have heard of antimatter. But in case you haven’t, antimatter is essentially material composed of antiparticles, which have the same mass but opposite charge as regular particles. An antimatter engine, meanwhile, is a form of propulsion that uses interactions between matter and antimatter to generate power, or to create thrust.
In short, an antimatter engine involves particles of hydrogen and antihydrogen being slammed together. This reaction unleashes as much energy as a thermonuclear bomb, along with a shower of subatomic particles called pions and muons. These particles, which would travel at one-third the speed of light, are then channeled by a magnetic nozzle to generate thrust.
The advantage to this class of rocket is that a large fraction of the rest mass of a matter/antimatter mixture may be converted to energy, allowing antimatter rockets to have a far higher energy density and specific impulse than any other proposed class of rocket. What’s more, controlling this kind of reaction could conceivably push a rocket up to half the speed of light.
Pound for pound, this class of ship would be the fastest and most fuel-efficient ever conceived. Whereas conventional rockets require tons of chemical fuel to propel a spaceship to its destination, an antimatter engine could do the same job with just a few milligrams of fuel. In fact, the mutual annihilation of a half pound of hydrogen and antihydrogen particles would unleash more energy than a 10-megaton hydrogen bomb.
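The half-pound claim checks out with a quick E = mc² sketch (the unit conversions are standard values; reading the quote as half a pound of each reactant):

```python
# E = mc^2 check of the "half pound of antihydrogen" claim above.
C = 3.0e8                     # speed of light, m/s
POUND_KG = 0.4536
MEGATON_J = 4.184e15          # energy released by one megaton of TNT

mass_kg = 0.5 * POUND_KG * 2  # half a pound of hydrogen plus half a pound of antihydrogen
energy = mass_kg * C**2
print(f"{energy / MEGATON_J:.1f} megatons")   # roughly 10 megatons of TNT equivalent
```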
It is for this exact reason that NASA’s Institute for Advanced Concepts (NIAC) has investigated the technology as a possible means for future Mars missions. Unfortunately, when contemplating missions to nearby star systems, the amount of fuel needed to make the trip is multiplied exponentially, and the cost involved in producing it would be astronomical (no pun intended!)
How Long Would It Take To Travel To The Nearest Star?
NASA PDF PowerPoint: Realistic Interstellar Travel
Ask Ethan: Is Interstellar Travel Possible? Forbes
Technologies at the very edge of possibility
Wormhole (traversable black holes)
Some sci-fi novels postulate a technology called a jump drive – this allows a starship to be instantaneously teleported between two points. The specific way this is done is usually glossed over.
Some physicists have offered tentative ideas about how it might be possible. In Stargate, and the science fiction story Contact, the characters use a traversable wormhole – a connection between two distant black holes.
So let’s start with this – What are black holes?

H. K. Wimmer’s rendition of a black hole modified by Attractor321, for Wikipedia. “Black-hole continuum and its gravity well”
Here is one hypothesis about how one might create a traversable wormhole.

A wormhole connects distant locations in space: two mouths connected by a tunnel, called a throat.
and see https://kardashev.fandom.com/wiki/Wormhole
Hyperspace
In Star Wars and Babylon 5 spaceships have a hyperdrive, to send a ship through hyperspace.
From Star Wars, here is a view from the cockpit of hyperspace.
Hyperspace is a very different concept from warp drive. Hyperspace is a speculative, separate dimension in which faster-than-light speeds are possible. In this idea, a spaceship would somehow jump out of our universe and into this realm.
No form of hyperspace has ever been discovered by science; its existence was initially merely supposed by science fiction writers. In recent years, however, theoretical physics work on superstrings has led to something called brane theory, which suggests the possible existence of hyperspaces of various sorts.
Presumably a spaceship would travel to a point in hyperspace that corresponds to its desired destination in our space; at that point it would jump out of hyperspace and back into our space.
https://starwars.fandom.com/wiki/Hyperspace
https://en.wikipedia.org/wiki/Hyperspace
https://babylon5.fandom.com/wiki/Hyperspace
What realistic way could limit an FTL drive to only travelling between stars?
Warp drive
You are likely familiar with methods of interstellar travel that currently only exist in science fiction. For instance, in Star Trek, spaceships have a warp drive, which allows a ship to travel through our own, regular space at FTL (faster-than-light) speeds. Could this potentially be possible according to the laws of physics?
Possibility of a real life warp drive, The Alcubierre drive. (KaiserScience)
Warp Drive Research Key to Interstellar Travel, Scientific American
External resources and articles
Physics of interstellar travel Michio Kaku
Space.com articles on interstellar travel
Pros and Cons of Various Methods of Interstellar Travel, Universe Today
Space.StackExchange – [interstellar-travel]
“Concepts for Deep Space Travel: From Warp Drives and Hibernation to World Ships and Cryogenics“, Current Trends in Biomedical Engineering and Biosciences
Videos
The Big Problem With Interstellar Travel, YouTube, RealLifeLore
Interstellar Travel: Approaching Light Speed. Jimiticus
Interstellar Travel – Speculative Documentary HD
Learning Standards
Massachusetts Curriculum Frameworks Science and Technology/Engineering (2016)
6.MS-ESS1-5(MA). Use graphical displays to illustrate that Earth and its solar system are one of many in the Milky Way galaxy, which is one of billions of galaxies in the universe.
A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas (2012)
By the end of grade 8. Patterns of the apparent motion of the sun, the moon, and stars in the sky can be observed, described, predicted, and explained with models. The universe began with a period of extreme and rapid expansion known as the Big Bang. Earth and its solar system are part of the Milky Way galaxy, which is one of many galaxies in the universe.
Next Generation Science Standards
4-PS3 Energy, Disciplinary Core Ideas, ETS1.A: Defining Engineering Problems
Possible solutions to a problem are limited by available materials and resources (constraints). The success of a designed solution is determined by considering the desired features of a solution (criteria). Different proposals for solutions can be compared on the basis of how well each one meets the specified criteria for success or how well each takes the constraints into account. (secondary to 4-PS3-4)
Common Core State Standards Connections: ELA/Literacy
RST.6-8.8 Distinguish among facts, reasoned judgment based on research findings, and speculation in a text. (MS-LS2-5)
RI.8.8 Trace and evaluate the argument and specific claims in a text, assessing whether the reasoning is sound and the evidence is relevant and sufficient to support the claims. (MS-LS-4),(MS-LS2-5)
WHST.6-8.2 Write informative/explanatory texts to examine a topic and convey ideas, concepts, and information through the selection, organization, and analysis of relevant content. (MS-LS2-2)
Mad science
No science class is complete without considering mad scientists and some of their potentially-plausible mad science inventions.

It’s all about the presentation, of course 😉
Actual proposed mad science projects
Nikola Tesla
Tesla and wireless power transmission
Project Habbakuk
Project Habbakuk: Britain’s secret attempt to build an ice warship. CNN.
Project Habbakuk: Britain’s Secret Ice “Bergship” Aircraft Carrier
Project Habakkuk (Wikipedia)
Project Orion
Our SpaceFlight Heritage: Project Orion, a nuclear bomb and rocket – all in one.
Project Orion
Realistic Designs: Atomic Rockets
Project Orion. Medium.com
Extreme Engineering
Extreme Engineering: Tokyo’s Sky City, Transatlantic Tunnel, and the Space Elevator
Not so-actual proposed mad science projects
Evil environmental engineering
Elizabeth @leafcrunch offers this mad plan
“My plan would involve hollowing out West Virginia and using the slag to fill in Lake Ontario, completing a diagonal chain of now saltwater lakes across Turtle island and linking the Arctic & Atlantic seas. This would benefit no one & cause untold damage. I will take no questions.”
https://twitter.com/leafcrunch/status/1232097934503796736/photo/1
Mad Mathematicians
I’m still looking for more examples, but this is a good start:
The Mad Genius Mystery, Alexander Grothendieck, Kaja Perina, Psychology Today, 7/4/2017
Mad Sociology
Maybe? Nah.
Selected mad scientists (real and fictional)
Dr. Walter Bishop, Fringe
Ernst Stavro Blofeld (James Bond series)
Dr Emmett Brown (Back to the Future)
Dr Bruce Banner (Marvel comics and films)
Vladimir Petrovich Demikhov (1916 – 1998)
Real life mad Soviet scientist, organ transplantation pioneer, performed frightening head transplants on dogs and monkeys.
Sir Hugo Drax (James Bond: Moonraker)
Doctor Evil (from Austin Powers)
Amy Farrah Fowler, The Big Bang Theory
John Hays Hammond Jr. (1888-1965)
“The Father of Radio Control”. Had the mad idea that he could guide or control submarines, torpedoes, and boats – remotely. This was considered quackery and impossible – until he actually developed such technology.
His developments in electronic remote control are the foundation for today’s modern radio remote control devices, including modern missile guidance systems, unmanned aerial vehicles (UAVs), and unmanned combat aerial vehicles (UCAVs). He held over 400 patents.
And of course he built a giant castle with a hidden laboratory, secret passageways, and hidden doors, on the coast of Gloucester MA, because every mad scientist needs a secret castle lab.
Way back in 1922 he created a light-sensing automated driving machine (“the electric dog”), a predecessor to today’s automated machines.
Yes, I would love to live here.
Lex Luthor (from Superman, DC comics)
Black Manta, David Kane (DC comics)
Victor Frankenstein
Felonius Gru (Despicable Me)
Professor James Moriarty (Sherlock Holmes)
Jim Moriarty (Sherlock)
Captain Nemo – Jules Verne, Twenty Thousand Leagues Under the Seas, The Mysterious Island
Dr. Julius No
Q (James Bond)
Louise G. Robinovitch 1869-1940s
These are actual news headlines:
USE ELECTRICITY TO REINSTILL LIFE; Experiments by Which an Animal Which Died Under Anaesthetics Was Resuscitated.
HUMAN PATIENTS NEXT Dr. Louise G. Rabinovitch Pursuing Experiments in Inducing Electric Sleep as Substitute for Anesthetics.
Article, The New York Times, 9/27/1908
This next image is about her work, from Technical World Magazine published in 1910.
From Alexander Pope to “Splice”: a Short History of the Female Mad Scientist
Nenad Sestan, Yale Neuroscientist – reviving decapitated heads.

Photo: Sestan and Zvonimir Vrselja (left) and Stefano Daniele (right), the two co-first authors of the paper highlighted by Nature, Yale News
Scientists Revived Cells in Dead Pig Brains, Jason Daley, Smithsonian magazine, 4/18/2019
Scientists Partially Restore Function in Dead Pigs’ Brains Katherine J. Wu, PBS Nova Next, 4/17/2019
Scientists Restore Some Function In The Brains Of Dead Pigs, Nell Greenfieldboyce, NPR All Things Considered, 4/17/2019
Restoration of brain circulation and cellular functions hours post-mortem, Nenad Sestan et al., Nature Vol. 568, 4/18/2019
Dr. Strangelove (Merkwürdigliebe)
https://deutschesoldaten.fandom.com/wiki/Merkw%C3%BCrdigliebe
Nikola Tesla
Wernher von Braun
Herbert West, Reanimator, H. P. Lovecraft
Board games
SPECTRE: The Board Game from Modiphius Entertainment
Compete to become Number 1 of the Special Executive for Counter-intelligence, Terrorism, Revenge, and Extortion (SPECTRE)
Are you simply in the game to acquire gold bullion, or are your aspirations more philosophical, safe in the knowledge that the world would be better off with you running it?
Articles
From Alexander Pope to “Splice”: a Short History of the Female Mad Scientist
Jess Nevins, io9 Gizmodo, 4/21/2011
Learning Standards
2016 Massachusetts Science and Technology/Engineering Curriculum Framework
2016 High School Technology/Engineering
HS-ETS1-1. Analyze a major EVIL global challenge to specify a design problem that can be improved. Determine necessary qualitative and quantitative criteria and constraints for solutions, including any requirements set by society.
HS-ETS1-2. Break a complex real-world EVIL problem into smaller, more manageable problems that each can be solved using scientific and engineering principles.
HS-ETS1-3. Evaluate a solution to a complex real-world EVIL problem based on prioritized criteria and trade-offs that account for a range of constraints, including cost, safety, reliability, aesthetics, and maintenance, as well as social, cultural, and environmental impacts.
Next Generation Science Standards: Science & Engineering Practices
● Ask questions that arise from careful observation of EVIL phenomena, or unexpected results, to clarify and/or seek additional information.
● Ask questions that arise from examining EVIL models or a theory, to clarify and/or seek additional information and relationships.
● Ask questions to clarify and refine an EVIL model, an explanation, or an engineering problem.
● Evaluate an EVIL question to determine if it is testable and relevant.
● Ask and/or evaluate EVIL questions that challenge the premise(s) of an argument, the interpretation of a data set, or the suitability of the design
Basic chemistry rules are actually magic number approximations
What is Lewis Theory?
This lesson is from Mark R. Leach, meta-synthesis.com, Lewis_theory
Lewis theory is the study of the patterns that atoms display when they bond and react with each other.
The Lewis approach is to look at many chemical systems, study patterns, count the electrons in the patterns. After that, we devise simple rules to explain what is happening.
Lewis theory makes no attempt to explain how or why these empirically derived numbers of electrons – these magic numbers – arise.
It is striking, though, that the magic numbers are generally (but not exclusively) small even integers: 0, 2, 4, 6, 8.
For example:
- Atoms and atomic ions show particular stability when they have a full outer or valence shell of electrons and are isoelectronic with He, Ne, Ar, Kr & Xe: Magic numbers 2, 10, 18, 36, 54.
- Atoms have a shell electronic structure: Magic numbers 2, 8, 8, 18, 18.
- Sodium metal reacts to give the sodium ion, Na+, a species that has a full octet of electrons in its valence shell: Magic number 8.
- A covalent bond consists of a shared pair of electrons: Magic number 2.
- Atoms have valency, the number of chemical bonds formed by an element, which is the number of electrons in the valence shell divided by 2: Magic numbers 0 to 8.
- Ammonia, H3N:, has a lone pair of electrons in its valence shell: Magic number 2.
- Ethene, H2C=CH2, has a double covalent bond: Magic numbers (2 + 2)/2 = 2.
- Nitrogen, N2, N≡N, has a triple covalent bond: Magic numbers (2 + 2 + 2)/2 = 3.
- The methyl radical, H3C•, has a single unpaired electron in its valence shell: Magic number 1.
- Lewis bases (proton abstractors & nucleophiles) react via an electron pair: Magic number 2.
- Electrophiles, Lewis acids, accept a pair of electrons in order to fill their octet: Magic numbers 2 + 6 = 8.
- Oxidation involves loss of electrons, reduction involves gain of electrons. Every redox reaction involves concurrent oxidation and reduction: Magic number 0 (overall).
- Curly arrows represent the movement of an electron pair: Magic number 2.
- Ammonia, NH3, and phosphine, PH3, are isoelectronic in that they have the same Lewis structure. Both have three covalent bonds and a lone pair of electrons: Magic numbers 2 & 8.
- Aromaticity in benzene is associated with the species having 4n+2 π-electrons: Magic number 6. Naphthalene is also aromatic: Magic number 10.
- Etc.
Lewis theory is numerology.
Lewis theory is electron accountancy: look for the patterns and count the electrons.
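Because Lewis theory really is electron accountancy, the bookkeeping is simple enough to sketch in a few lines of code. The valence-electron counts below are the standard main-group values; this only illustrates the counting step, not bonding itself:

```python
# Count valence electrons for a few simple molecules -- the raw material of Lewis "accountancy".
VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6, "P": 5, "S": 6, "Cl": 7}

def valence_electrons(formula):
    """formula is a dict of element -> atom count, e.g. {"N": 1, "H": 3} for NH3."""
    return sum(VALENCE[element] * count for element, count in formula.items())

print(valence_electrons({"N": 1, "H": 3}))    # NH3  -> 8  (three bonding pairs + one lone pair)
print(valence_electrons({"C": 2, "H": 4}))    # C2H4 -> 12 (ethene, including its double bond)
print(valence_electrons({"P": 1, "Cl": 5}))   # PCl5 -> 40 (10 around P: the "expanded octet")
```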
Lewis theory is also highly eclectic in that it greedily begs/borrows/steals/assimilates numbers from deeper, predictive theories and incorporates them into itself, as we shall see.
Ernest Rutherford famously said that “all science is either physics or stamp collecting.”
Patterns
Consider the pattern shown in Diagram-1:

Now expand the view slightly and look at Diagram-2

You may feel that the right hand side “does not fit the pattern” of Diagram-1 and so is an anomaly.
So, is it an anomaly?
Zoom out a bit and look at the pattern in Diagram-3: the anomaly disappears.

But then look at Diagram-4. The purple patch on the upper right hand side does not seem to fit the pattern, and so it may represent an anomaly.

But zooming right out to Diagram-5 we see that everything is part of a larger regular pattern.

Image from dryicons.com, digital-flowers-pattern
When viewing the larger scale the overall pattern emerges and everything becomes clear. Of course, the Digital Flowers pattern is trivial, whereas the interactions of electrons and positive nuclei are astonishingly subtle.
This situation is exactly like learning about chemical structure and reactivity using Lewis theory. First we learn about the ‘Lewis octet’, and we come to believe that the pattern of chemistry can be explained in terms of the very useful Lewis octet model.
Then we encounter phosphorus pentachloride, PCl5, and discover that it has 10 electrons in its valence shell. Is PCl5 an anomaly? No! The fact is that the pattern generated through the Lewis octet model is just too simple.
As we zoom out and look at more examples of chemical structure and reactivity, we see that the pattern is more complicated than indicated by the Lewis octet magic number 8.
Our problem is that although the patterns of electrons in chemical systems are in principle predictable, new patterns always come as a surprise when they are first discovered:
- The periodicity of the chemical elements
- The 4n + 2 rule of aromaticity
- The observation that sulfur exists in S8 rings
- The discovery of neodymium magnets in the 1980s
- The serendipitous discovery of how to make the fullerene C60 in large amounts
While these observations can be explained after the fact, they were not predicted beforehand. We do not have the mathematical tools to predict the nature of the quantum patterns with absolute precision.
The chemist’s approach to understanding structure and reactivity is to count the electrons and take note of the patterns. This is Lewis theory.
As chemists we attempt to ‘explain’ many of these patterns in terms of electron accountancy and magic numbers.
Caught In The Act: Theoretical Theft & Magic Number Creation
The crucial period for our understanding of chemical structure & bonding came in the busy chemistry laboratories at UC Berkeley under the leadership of G. N. Lewis in the early years of the 20th century.
Lewis and colleagues were actively debating the new ideas about atomic structure, particularly the Rutherford & Bohr atoms, and postulated how these might give rise to models of chemical structure, bonding & reactivity.
Indeed, the Lewis model uses ideas directly from the Bohr atom. The Rutherford atom shows electrons whizzing about the nucleus, but to the trained eye, there is no structure to the whizzing. Introduced by Niels Bohr in 1913, the Bohr model is a quantum physics modification of the Rutherford model and is sometimes referred to as the Rutherford–Bohr model. (Bohr was working under Rutherford at the time.) The model’s key success lay in explaining (correlating with) the Rydberg formula for the spectral emission lines of atomic hydrogen.
[Greatly simplifying both the history & the science:]
In 1916 atomic theory forked or bifurcated into physics and chemistry streams:
- The physics fork was initiated and developed by Bohr, Pauli, Sommerfeld and others. Research involved studying atomic spectroscopy, and this led to the discovery of the four quantum numbers – principal, azimuthal, magnetic & spin – and their selection rules. More advanced models of chemical structure, bonding & reactivity are based upon the Schrödinger equation, in which the electron is treated as a resonant standing wave. This has developed into molecular orbital theory and the discipline of computational chemistry.
- Note: quantum numbers and their selection rules are not ‘magic’ numbers. The quantum numbers represent deep symmetries that are entirely self-consistent across all of quantum mechanics.
- The chemistry fork started when Lewis published his first ideas about the patterns he saw in chemical bonding and reactivity in 1916, and later in a more advanced form in 1923. Lewis realised that electrons could be counted and that there were patterns associated with structure, bonding and reactivity behaviour. These early ideas have been extensively developed and are now taught to chemistry students the world over. This is Lewis theory.
_____________________________________________________
Lewis Theory and Quantum Mechanics
Quantum mechanics and Lewis theory are both concerned with patterns. However, quantum mechanics actively causes the patterns whereas Lewis theory is passive and it only reports on patterns that are observed through experiment.
We observe patterns of structure & reactivity behaviour through experiment.
Lewis theory looks at the empirical evidence, identifies patterns in behaviour, and classifies the patterns in terms of electron accountancy & magic numbers. Lewis theory gives no explanation for the patterns.
In large part, chemistry is about the behaviour of electrons, and electrons are quantum mechanical entities. Quantum mechanics causes chemistry to be the way it is. The quantum mechanical patterns can be:
- Observed using spectroscopy.
- Seen as echoes of the underlying quantum mechanics in the patterns of chemical structure & reactivity behaviour.
- Calculated, although the mathematics is not trivial.
Tragic Decline of Music Literacy and Quality
Archived articles:
The Tragic Decline of Music Literacy (and Quality)
Jon Henschen, intellectualtakeout.org, August 16, 2018
Throughout grade school and high school, I was fortunate to participate in quality music programs. Our high school had a top Illinois state jazz band; I also participated in symphonic band, which gave me a greater appreciation for classical music. It wasn’t enough to just read music. You would need to sight read, meaning you are given a difficult composition to play cold, without any prior practice. Sight reading would quickly reveal how fine-tuned your playing “chops” really were. In college I continued in a jazz band and also took a music theory class. The experience gave me the ability to visualize music. (If you play by ear only, you will never have that same depth of understanding of musical construction.)
Both jazz and classical art forms require not only music literacy, but for the musician to be at the top of their game in technical proficiency, tonal quality and, in the case of the jazz idiom, creativity. Jazz masters like John Coltrane would practice six to nine hours a day, often cutting practice short only because the inner lower lip would be bleeding from the friction caused by the mouthpiece against gums and teeth.
His ability to compose and create new styles and directions for jazz was legendary. With few exceptions such as Wes Montgomery or Chet Baker, if you couldn’t read music, you couldn’t play jazz. In the case of classical music, if you can’t read music you can’t play in an orchestra or symphonic band. Over the last 20 years, musical foundations like reading and composing music are disappearing with the percentage of people that can read music notation proficiently down to 11 percent, according to some surveys.

Two primary sources for learning to read music are school programs and at home piano lessons. Public school music programs have been in decline since the 1980’s, often with school administrations blaming budget cuts or needing to spend money on competing extracurricular programs. Prior to the 1980’s, it was common for homes to have a piano with children taking piano lessons.
Even home architecture incorporated what was referred to as a “piano window” in the living room which was positioned above an upright piano to help illuminate the music. Stores dedicated to selling pianos are dwindling across the country as fewer people take up the instrument. In 1909, piano sales were at their peak when more than 364,500 were sold, but sales have plunged to between 30,000 and 40,000 annually in the US. Demand for youth sports competes with music studies, but also, fewer parents are requiring youngsters to take lessons as part of their upbringing.
Besides the decline of music literacy and participation, there has also been a decline in the quality of music, which has been demonstrated scientifically by Joan Serra, a postdoctoral scholar at the Artificial Intelligence Research Institute of the Spanish National Research Council in Barcelona. Joan and his colleagues looked at 500,000 pieces of music recorded between 1955 and 2010, running songs through a complex set of algorithms examining three aspects of those songs:
1. Timbre – sound color, texture and tone quality
2. Pitch – harmonic content of the piece, including its chords, melody, and tonal arrangements
3. Loudness – volume variance adding richness and depth
The results of the study revealed that timbral variety went down over time, meaning songs are becoming more homogeneous. Translation: most pop music now sounds the same. Timbral quality peaked in the 60’s and has since dropped steadily with less diversity of instruments and recording techniques.
Today’s pop music is largely the same with a combination of keyboard, drum machine and computer software greatly diminishing the creativity and originality.
Pitch content has also decreased, with the number of chords and different melodies declining, as musicians today are less adventurous in moving from one chord or note to another, opting instead for the well-trod paths of their predecessors.
Loudness was found to have increased by about one decibel every eight years. Music loudness has been manipulated by the use of compression. Compression boosts the volume of the quietest parts of the song so they match the loudest parts, reducing dynamic range. With everything now loud, it gives music a muddled sound, as everything has less punch and vibrancy due to compression.
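For readers curious what “compression” means mechanically, here is a minimal sketch of a dynamic-range compressor. It is purely illustrative (real mastering chains work in decibels and add attack/release behavior), but it shows how the gap between quiet and loud passages gets squeezed:

```python
def compress(samples, threshold=0.5, ratio=4.0):
    """Downward compression plus make-up gain: peaks above the threshold are reduced,
    then everything is boosted so the loudest sample returns to its original level.
    The net effect is that quiet passages end up louder relative to the peaks,
    i.e. the dynamic range shrinks."""
    squashed = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            level = threshold + (level - threshold) / ratio   # only 1/ratio of the excess gets through
        squashed.append(level if s >= 0 else -level)
    gain = max(abs(s) for s in samples) / max(abs(s) for s in squashed)   # make-up gain
    return [s * gain for s in squashed]

print(compress([0.05, 0.2, 0.6, 1.0]))   # quiet samples rise, peaks stay put, range shrinks
```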
In an interview, Billy Joel was asked what has made him a standout. He responded that his ability to read and compose music made him unique in the music industry – which, as he explained, is troubling for the industry when being musically literate makes you stand out. An astonishing amount of today’s popular music is written by two people: Lukasz Gottwald of the United States and Max Martin from Sweden, who are both responsible for dozens of songs in the top 100 charts. You can credit Max and Dr. Luke for most of the hits of these stars:
Katy Perry, Britney Spears, Kelly Clarkson, Taylor Swift, Jessie J., KE$HA, Miley Cyrus, Avril Lavigne, Maroon 5, Taio Cruz, Ellie Goulding, NSYNC, Backstreet Boys, Ariana Grande, Justin Timberlake, Nicki Minaj, Celine Dion, Bon Jovi, Usher, Adam Lambert, Justin Bieber, Domino, Pink, Pitbull, One Direction, Flo Rida, Paris Hilton, The Veronicas, R. Kelly, Zebrahead
With only two people writing much of what we hear, is it any wonder music sounds the same, using the same hooks, riffs and electric drum effects?
Lyric intelligence was also studied over the last 10 years using several metrics, such as the “Flesch Kincaid Readability Index,” which reflects how difficult a piece of text is to understand and the quality of the writing. Results showed lyric intelligence has dropped by a full grade, with lyrics getting shorter and tending to repeat the same words more often.
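For reference, the Flesch–Kincaid grade level is just a weighted formula over word, sentence, and syllable counts. Here it is as a small sketch; the lyric counts in the example are made up, and syllable counting itself (the fiddly part) is left out:

```python
def flesch_kincaid_grade(total_words, total_sentences, total_syllables):
    """Standard Flesch-Kincaid grade-level formula."""
    return (0.39 * (total_words / total_sentences)
            + 11.8 * (total_syllables / total_words)
            - 15.59)

# e.g. a lyric with 120 words, 12 sentences, and 150 syllables (made-up numbers):
print(round(flesch_kincaid_grade(120, 12, 150), 1))   # ~3.1, i.e. roughly a third-grade reading level
```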
Artists that write the entirety of their own songs are very rare today. When artists like Taylor Swift claim they write their own music, it is partially true, insofar as she writes her own lyrics about her latest boyfriend breakup, but she cannot read music and lacks the ability to compose what she plays. (Don’t attack me Tay-Tay Fans!)
Music electronics are another aspect of musical decline, as many of the untalented people we hear on the radio can’t live without autotune. Autotune artificially stretches or slurs sounds in order to get them closer to center pitch. Many of today’s pop musicians and rappers could not survive without autotune, which has become a sort of musical training wheels. But unlike a five-year-old riding a bike, they never take the training wheels off to mature into a better musician. Dare I even bring up the subject of U2’s guitarist “The Edge,” who has popularized rhythmic digital delays synchronized to the tempo of the music? You could easily argue he’s more an accomplished sound engineer than a talented guitarist.
Today’s music is designed to sell, not inspire. Today’s artist is often more concerned with producing something familiar to mass audience, increasing the likelihood of commercial success (this is encouraged by music industry execs, who are notoriously risk-averse).
In the mid-1970s, most American high schools had a choir, orchestra, symphonic band, jazz band, and music appreciation classes. Many of today’s schools limit you to a music appreciation class because it is the cheapest option. D.A. Russell wrote in the Huffington Post, in an article titled “Cancelling High School Elective, Arts and Music—So Many Reasons—So Many Lies,” that music, arts and electives teachers have to face the constant threat of their courses being eliminated entirely. The worst part is knowing that cancellation is almost always based on two deliberate falsehoods peddled by school administrators: 1) cancellation is a funding issue (the big lie); 2) music and the arts are too expensive (the little lie).
The truth: Elective class periods have been usurped by standardized test prep. Administrators focus primarily on protecting their positions and the school’s status by concentrating curricula on passing the tests, rather than by helping teachers be freed up from micromanaging mandates so those same teachers can teach again in their classrooms, making test prep classes unnecessary.
What can be done? First, musical literacy should be taught in our nation’s school systems. In addition, parents should encourage their children to play an instrument because it has been proven to help in brain synapse connections, learning discipline, work ethic, and working within a team. While contact sports like football are proven brain damagers, music participation is a brain enhancer.
Where did all the key changes go?
Mallika Seshadri, 11/30/2022
Many of the biggest hits in pop music used to have something in common: a key change, like the one you hear in Whitney Houston’s “I Wanna Dance With Somebody.” But key changes have become harder to find in top hits.
Chris Dalla Riva, a musician and data analyst at Audiomack, wanted to learn more about what it takes to compose a top hit. He spent the last few years listening to every number one hit listed on the Billboard Hot 100 since 1958 – more than 1100 songs.
“I just started noticing some trends, and I set down to writing about them,” says Dalla Riva, who published some of those findings in an article for the website Tedium. He found that about a quarter of those songs from the 1960s to the 1990s included a key change. But from 2010 to 2020, there was just one number one hit with a key change: Travis Scott’s 2018 track, “Sicko Mode.”
According to Dalla Riva, changing the key – or shifting the base scale of a song – is a tool used across musical genres to “inject energy” into a pop number. There are two common ways to place a key change into a top hit, he says. The first is to take the key up toward the end of a number, like Beyoncé does in her 2011 song “Love on Top,” which took listeners through four consecutive key changes. This placement helps a song crescendo to its climax.
The second common placement, Dalla Riva says, is in the middle of a song to signal a change in mood. The Beach Boys took this approach in their 1966 release “Good Vibrations,” as did Scott’s “Sicko Mode.” “The key is just a tool,” Dalla Riva says. “And like all tools and music, the idea is to evoke emotion.”
…. In the absence of key changes – and in a time where hip-hop and electronic music have gained popularity – composers have turned to varying rhythmic patterns and more evocative lyrics. And if you’re one of those folks who wants the key change to come back, Charnas believes there’s one way to do it: fund music education. “You want to know why Motown was such an incredible font of composition? Three words: Detroit Public Schools.”
Decline of key changes in popular music
This image from Decline of key changes in popular music
This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (added Pub. L. 94-553, Title I, 101, Oct 19, 1976, 90 Stat 2546)
California wildfire disasters: Causes, Damage, and what to do next
I. What is a wildfire?
II. What causes wildfires?
III. Is there any truth in what President Donald Trump says about the refusal to clean the forest floor as a major contributing factor to the spread of these fires?
IV. Many coastal communities had to learn to live with hurricanes. Will California communities have to learn how to live with such fires?
__________________________________________
I. What is a wildfire?
A wildfire or wildland fire is an uncontrolled fire in an area of combustible vegetation, typically occurring in rural areas.
Depending on the type of vegetation present, a wildfire can also be classified more specifically as a brush fire, bushfire, desert fire, forest fire, grass fire, hill fire, peat fire, vegetation fire, and veld fire.

Photo of the Delta Fire, California, 2018. Social media/Reuters.
II. What causes wildfires?
Sadly, if you watch the news, there apparently is nothing that happens without the Jews being blamed.
Marjorie Taylor Greene (R), QAnon congresswoman from Georgia, has her own theory about what caused the 2018 California wildfires:
That’s right. Jewish space lasers.
Marjorie Taylor Greene’s space laser and the age-old problem of blaming the Jews
GOP Congresswoman Blamed Wildfires on Secret Jewish Space Laser
She falsely speculated that space lasers caused the Camp Fire. Now she’s a congresswoman.
Well, as much as my friend and I would like to take credit for this –
I’m afraid that the actual cause of forest fires is a bit more mundane:
Nearly 85 percent of wildland fires in the United States are caused by:
campfires left unattended, campers burning debris, equipment use and malfunctions, negligently discarded cigarettes, acts of arson, and lightning.
Wildfire Causes and Evaluations, NPS (National Park Service)
III. Is there any truth in what President Donald Trump says about the refusal to clean the forest floor as a major contributing factor to the spread of these fires?
Yes, there is some truth to this. The issue takes more than one sentence to explain.
Erin Ross, writer and researcher for Oregon Public Broadcasting, writes
This is a good question with a longish answer. But the short answer is no. The long answer (thread) Forest management practices (which have nothing to do with ‘raking forests’) have absolutely contributed to the size and intensity of wildfires over the last 100 or so years.
Basically, for a long time, if there was a fire you did one thing: put the fire out, ASAP. But fire is a natural part of forest ecosystems, so that led to fuel buildup, which increased the intensity of fires. What a healthy forest looks like depends on the ecosystem.
A healthy ponderosa forest, for example, is open and park-like. Regular fires clear the underbrush, downed limbs, and young trees. You can walk through these forests with outstretched arms without touching a tree.
There will be stands of denser, younger trees where old trees fell, opening up the ground to light and growth. They’ll thin with time. An unhealthy ponderosa forest is nothing *but* dense stands of young trees and brush.
In a healthy ponderosa forest, fire rips across the understory. But ponderosas are adapted for fire, so very few trees actually catch.
In an unhealthy forest, with lots of brush to fuel flames and smaller trees to reach the fire up towards the canopy, the whole forest can burn.
In Oregon, most ponderosa forests are on the east side of the Cascade mountain range. Historically, they would have many small, brief fires. Now, because of decades of fire suppression, they have frequent massive fires.
So… should we clean the forest floor? No. Should we log all the trees? Not that, either.
You can’t just rake a forest. Forest ecosystems are more than trees and bushes. Insects, mammals, birds and plants rely on fire to exist. Some types of plant seeds won’t even germinate without fire.
If we just “raked the forest”, we’d wreck the forest. So instead we use controlled burning. When fire danger is low, crews go into the woods and light small fires, reducing fuel and simulating the fires that would have burned in the past.
But some forests — particularly ones that were logged or over-suppressed – are full of dense small trees and large trees. They’re not safe to controlled burn, and it wouldn’t reduce the fuel. So humans need to do even more.
Unfortunately, you can’t just go thin trees. That leaves all that fuel sitting on the ground. Some studies have found that forests that were thinned but not burned had *worse* fires than those with no thinning at all.
The thinned trees helped wind carry the fires.
So, you need a combination of thinning and controlled burning in these ponderosa forests. We have to undo some damage before we can return fire to the landscape.
More here at her Discussion on Twitter
IV. Many coastal communities had to learn to live with hurricanes.
Will California communities have to learn how to live with such fires?
In an article on Slate, James B. Meigs writes
Fossil charcoal indicates that wildfires began soon after the appearance of terrestrial plants 420 million years ago. Wildfire’s occurrence throughout the history of terrestrial life invites conjecture that fire must have had pronounced evolutionary effects on most ecosystems’ flora and fauna.
Earth is an intrinsically flammable planet owing to its cover of carbon-rich vegetation, seasonally dry climates, atmospheric oxygen, and widespread lightning and volcanic ignitions.
The Camp Fire may have been caused by one, but the California wildfire was years in the making.
It’s hard to look at the images of what used to be Paradise. On Nov. 8, California’s Camp Fire tore through the Sierra Nevada foothills town of 27,000 people with little advance warning. It destroyed homes, incinerated cars—many of which were abandoned on roads that had become gridlocked by fleeing residents—and left a death toll of 77 people and climbing. Nearly 1,000 remain unaccounted for.
But if you look closely at photos and video of the aftermath, you’ll notice something surprising. The buildings are gone, but most of the trees are still standing—many with their leaves or needles intact.
The Camp Fire is generally referred to as a forest fire or, to use the term preferred by firefighting professionals, a wildfire. As the name suggests, wildfires are mostly natural phenomena – even when initially triggered by humans – moving through grasslands, scrub, and forest, consuming the biomass in their paths, especially litter and deadwood.
Visiting the disaster area, President Donald Trump blamed poor forestry practices and suggested California’s forests should be managed more like Finland’s where they spend “a lot of time on raking and cleaning.”
But the photos tell a different story. Within Paradise itself, the main fuel feeding the fire wasn’t trees, nor the underbrush Trump suggested should have been raked up. It was buildings. The forest fire became an infrastructure fire.
Fire researchers Faith Kearns and Max Moritz describe what can happen when a wildfire approaches a suburban neighborhood during the high-wind conditions common during the California fall: First, a “storm of burning embers” will shower the neighborhood, setting some structures on fire.
“Under the worst circumstances, wind driven home-to-home fire spread then occurs, causing risky, fast-moving ‘urban conflagrations’ that can be almost impossible to stop and extremely dangerous to evacuate.”
The town of Paradise didn’t just experience a fast-moving wildfire; its own layout, building designs, and city management turned that fire into something even scarier.
At first glance, the cause of the Camp Fire seems obvious: Sparks from a power line ignited a brush fire, which grew and grew as high winds drove it toward the town (there were also reports of a possible second ignition point).
Pacific Gas and Electric, the regional utility, is already facing extensive lawsuits and the threat of financial liabilities large enough to bankrupt the company.
And yet, like almost every disaster that kills large numbers of people and damages communities, the causes of the tragedy in Paradise are more complex than it first appears.
The failure of the power line was the precipitating factor, but other factors came into play as well: zoning laws and living patterns, building codes and the types of construction materials used, possibly even the forestry management practices Trump inelegantly referenced.
(Many residents of Finland got a chuckle out of Trump’s “raking and cleaning” comment, but Trump isn’t alone in calling for more aggressive management of California woodlands.)
A number of environmental, political, and economic trends converged in Butte County in just a few hours on Nov. 8 to spark this fire. But the tragedy was the result of many longer-term decisions, decades in the making.
Paradise sits in the picturesque foothills of the Sierra Nevada range. Its streets bump up against the forest. The surrounding Butte County is less densely populated but still has many homes on lots of between 1 to 5 acres. (Some 46,000 people were displaced by the fire overall.) That makes Butte County a prime example of what planners call the wildland-urban interface.
A recent Department of Agriculture study defined the WUI as “the area where structures and other human development meet or intermingle with undeveloped wildland.” The report estimated that nearly a third of California’s residents lived in such regions in 2010. And their numbers are growing.
Photo: State of California, Dept of Insurance
It’s easy to see why. These are lovely places to live, attractive to longtime residents as well as retirees and people moving out of cities. But they are also dangerous, especially in California.
The state is subject to several conditions that make fires particularly threatening. One is drought. California summers have always been dry, but records show that they’ve been getting hotter and drier. Fire season is getting longer. Climate models show that that trend is likely to get worse.
Another is wind. Each fall, hot, dry air flows westward from the state’s higher elevations toward the coast. These Santa Ana or “diablo” winds can blow at high speeds for days on end. (On the morning of the Camp Fire, wind speeds as high as 72 miles per hour were recorded.)
Like a giant hair dryer, the wind desiccates everything in its path. The night before the fire, local meteorologist Rob Elvington warned: “Worse than no rain is negative rain.” The winds were literally sucking moisture out of the ground.
Those hot, dry conditions make fires terrifyingly easy to start—a hot car muffler, a cigarette ash, a downed power line, almost anything can do it. And the wind makes them almost impossible to stop. As it barreled toward Paradise, the Camp Fire grew at the rate of roughly 80 football fields per minute.
“California is a special case,” fire historian Stephen J. Pyne recently wrote in Slate. “It’s a place that nature built to burn, often explosively.” Even if no one lived in them, California’s hills would burn regularly, Pyne notes. But humans and their infrastructure make the problem worse.
One of the biggest risk factors is electric power. Utilities like PG&E don’t have the option of not serving rural or semirural residents. And every power line that crosses dry, flammable terrain could spark a wildfire.
The culprit in these cases is, once again, the interplay between human-built infrastructure and the natural environment. Vegetation is constantly growing in the corridors, and if a tree falls on a line, or merely touches it, that can cause a short circuit that might spark a fire.
Cal Fire, the California fire management agency, estimates that problems with power lines caused at least 17 major wildfires in Northern California last year. Under an unusual feature of California law known as “inverse condemnation,” a utility can be forced to pay damages for fires that involve its equipment, even if the company hasn’t been proven negligent in its operations.
Even before the massive Camp Fire, PG&E announced that it expects its liabilities from 2017’s large wine-country fires to exceed $2.5 billion. (California Gov. Jerry Brown recently signed a bill offering some financial relief to utilities grappling with wildfire costs, but it did not do away with inverse condemnation.)
As more and more people move into wildland-urban zones, these new arrivals will need to be served with electric power. Which means that, not only will there be more people living in the zones threatened by wildfires, but more power lines will need to be built, increasing the risk of fires. Disaster researchers call this the expanding bull’s-eye effect.
Also, as more people move into vulnerable regions—and then build expensive infrastructure in those areas—the costs of natural disasters increase. This effect has been shown dramatically in coastal areas such as Houston that have seen the damage estimates associated with hurricanes skyrocket. The expanding bull’s-eye means the costs of rebuilding will keep climbing even if the frequency and severity of natural disasters doesn’t change.
So, California’s fire country faces a double-barreled threat: More lives and infrastructure lie in the path of potential fires than ever before. And the fires are getting bigger. That combination explains why 6 out of the 10 most destructive fires in California history have occurred in the past three years.
So far, California is not doing much to discourage people from moving into its danger zones. Max Moritz, Naomi Tague, and Sarah Anderson, researchers at the University of California, Santa Barbara, maintain that “people must begin to pay the costs for living in fire-prone landscapes.”
They argue that currently, “the relative lack of disincentives to develop in risky areas—for example, expecting state and federal payments for [fire] suppression and losses—ensures that local decisions will continue to promote disasters for which we all pay.”
(Disaster experts make a similar argument about how federal flood insurance and other programs encourage people to live in hurricane-prone areas.)
One financial analyst who works closely with California utilities believes the inverse condemnation rule is part of this problem: “These communities are very dangerous to supply power to,” he says. “But the utility is forced to carry all the risk. They can’t charge their customers a premium for fire risk.”
Of course, when fires do occur, the residents of these areas suffer the most. The question is how to provide the right incentives for people so that we limit the chances of this happening again. Looking ahead, “We need to ensure that prospective homeowners can make informed decisions about the risks they face in the WUI,” Moritz, Tague, and Anderson say.
What else can be done? Building and zoning codes can be changed to make towns less fire prone. Homes that are built or retrofitted with fireproof materials—and landscaped to keep shrubbery away from structures—can usually survive typical wildfires. In new developments, homes can be clustered and surrounded by fire-resistant buffer zones, such as orchards.
And, no matter how well designed, communities in fire zones need realistic evacuation plans and better emergency communications. (Poor communications and evacuation planning that did not account for how fast the fire could move were among the many failures in Paradise.)
There’s even a grain of truth to Trump’s comments that better forest management can reduce the ferocity of wildfires, though it’s not clear it would have helped in the case of the Camp Fire. The Santa Barbara researchers recommend increasing “fuel management such as controlled burns, vegetation clearing, forest thinning, and fire breaks.”
But no amount of fire-proofing or woodland management is going to eliminate fires.
If global warming models hold true, fire seasons are going to be hotter and last longer. Just as people in coastal areas need to adapt to hurricanes, residents of fire country need to learn to live with fire.
In both cases, the states and the federal government need to reconsider policies that encourage people to move into these vulnerable areas. It’s easy to see why people love living in mountain foothills and forests—just as it’s easy to see why they love living on beaches.
But overdevelopment of fire-prone landscapes means multiplying the inherent hazards of these regions. People need to accept that the problem isn’t just fire—it’s us.
https://slate.com/technology/2018/11/camp-fire-disaster-causes-urban-wildland-interface.html
Global warming isn’t natural, and here’s how we know
This is an archived copy of an article for our students from thelogicofscience.com
The cornerstone argument of climate change deniers is that our current warming is just a natural cycle, and this claim is usually accompanied by the statement, “the planet has warmed naturally before.” This line of reasoning is, however, seriously flawed both logically and factually. Therefore, I want to examine both the logic and the evidence to explain why this argument is faulty and why we are actually quite certain that we are the cause of our planet’s current warming.
The fact that natural climate change occurred in the past does not mean that the current warming is natural.
I cannot overstate the importance of this point. Many people say, “but the planet has warmed naturally before” as if that automatically means that our current warming is natural, but nothing could be further from the truth. In technical terms, this argument commits a logical fallacy known as non sequitur (this is the fallacy that occurs whenever the conclusion of a deductive argument does not follow necessarily from the premises). The fact that natural warming has occurred before only tells us that it is possible for natural warming to occur. It does not indicate that the current warming is natural, especially given the evidence that it is anthropogenic (man-made).
To put this another way, when you claim that virtually all of the world’s climatologists are wrong and the earth is actually warming naturally, you have just placed the burden of proof on you to provide evidence for that claim. In other words, simply citing previous warming events does not prove that the current warming is natural. You have to actually provide evidence for a natural cause of the current warming, but (as I’ll explain shortly) no such mechanism exists.
Natural causes of climate change
Now, let’s actually take a look at the natural causes of climate change to see if any of them can account for our current warming trend (spoiler alert, they can’t).
Sun
The sun is an obvious suspect for the cause of climate change. The sun is clearly an important player in our planet’s climate, and it has been responsible for some warming episodes in the past. So if, for some reason, it was burning hotter now than in the past, that would certainly cause our climate to warm. There is, however, one big problem: it’s not substantially hotter now than it was in the recent past. Multiple studies have looked at whether or not the output from the sun has increased and whether or not the sun is responsible for our current warming, and the answer is a resounding “no” (Meehl, et al. 2004; Wild et al. 2007; Lockwood and Frohlich 2007, 2008; Lean and Rind 2008; Imbers et al. 2014).
It likely caused some warming in the first half of the 20th century, but since then, the output from the sun does not match the rise in temperatures (in fact it has decreased slightly; Lockwood and Frohlich 2007, 2008). Indeed, Foster and Rahmstorf (2011) found that after correcting for solar output, volcanoes, and El Niños, the warming trend was even more clear, which is the exact opposite of what we would expect if the sun was driving climate change (i.e., if the sun was the cause, then removing the effect of the sun should have produced a flat line, not a strong increase).
Finally, the most compelling evidence against the sun hypothesis and for anthropogenic warming is (in my opinion) the satellite data. Since the 70s, we have been using satellites to measure the energy leaving the earth (specifically, the wavelengths of energy that are trapped by CO2).
Thus, if global warming is actually caused by greenhouse gasses trapping additional heat, we should see a fairly constant amount of energy entering the earth, but less energy leaving it. In contrast, if the sun is driving climate change, we should see that both the energy entering and leaving the earth have increased.
Do you want to guess which prediction came true? That’s right, there has been very little change in the energy from the sun, but there has been a significant decrease in the amount of energy leaving the earth (Harries et al. 2001; Griggs and Harries. 2007). That is about as close to “proof” as you can get in science, and if you are going to continue to insist that climate change is natural, then I have one simple question for you: where is the energy going? We know that the earth is trapping more heat now than it did in the past. So if it isn’t greenhouse gasses that are trapping the heat, then what is it?
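To put a rough number on “trapping more heat,” here is a minimal sketch using the widely used simplified approximation for CO2 radiative forcing, ΔF ≈ 5.35 ln(C/C₀) W/m²; the two concentrations are round values I'm assuming for pre-industrial and recent conditions, not figures taken from this article.

```python
import math

# Simplified CO2 radiative forcing: delta_F ≈ 5.35 * ln(C / C0) W/m²
# (a standard approximation; the concentrations are assumed round numbers)
C0 = 280.0   # approx. pre-industrial CO2, ppm
C = 410.0    # approx. recent CO2, ppm

delta_F = 5.35 * math.log(C / C0)
print(f"Extra energy trapped by the CO2 increase alone: ~{delta_F:.1f} W per square meter")
```

Roughly two extra watts over every square meter of the planet, all the time, is the sort of imbalance those satellite measurements are detecting.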
Milankovitch cycles
Other important drivers of the earth's climate are long-term cycles called Milankovitch cycles, which involve shifts in the earth's orbit, tilt, and axial wobble (or eccentricity, obliquity, and precession, if you prefer). In fact, these appear to be one of the biggest initial causes of prominent natural climate changes (like the ice ages). So it is understandable that people would suspect that they are driving the current climate change, but there are several reasons why we know that isn't the case.
First, Milankovitch cycles are very slow, long-term cycles. Depending on which of the three cycles we are talking about, they take tens of thousands of years or even 100 thousand years to complete. So changes from them occur very slowly. In contrast, our current change is very rapid (happening over a few decades as opposed to a few millennia). So the rate of our current change is a clear indication that it is not being caused by Milankovitch cycles. A rough rate comparison is sketched below.
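The numbers in this sketch are round assumptions for illustration, not values from the article: roughly 5 °C of warming spread over about 10,000 years coming out of the last ice age, versus roughly 1 °C over the last several decades.

```python
# Ballpark comparison of warming rates (assumed round numbers, illustration only)
deglacial_warming_C = 5.0     # ~5 °C of warming coming out of the last ice age
deglacial_years = 10_000      # spread over roughly 10,000 years

modern_warming_C = 1.0        # ~1 °C of recent warming
modern_years = 60             # over roughly the last 60 years

deglacial_rate = deglacial_warming_C / deglacial_years   # °C per year
modern_rate = modern_warming_C / modern_years            # °C per year

print(f"Deglacial rate: {deglacial_rate:.5f} °C/yr")
print(f"Modern rate:    {modern_rate:.5f} °C/yr")
print(f"The modern warming is roughly {modern_rate / deglacial_rate:.0f} times faster")
```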
Second, you need to understand how Milankovitch cycles affect the temperature. The eccentricity cycle could, in concept, directly cause global warming by changing the earth’s position relative to the sun; however, that would cause the climate to warm or cool by affecting how much energy from the sun hits the earth. In other words, we are back to the argument that climate change is caused by increased energy from the sun, which we know isn’t happening (see the section above).
The other cycles (precession and obliquity) affect the part of the earth that is warmed and the season during which the warming takes place, rather than affecting the total amount of energy entering the earth. Thus, they initially just cause regional warming. However, that regional warming leads to global warming by altering the oceans' currents and warming the oceans, which results in the oceans releasing stored CO2 (Martin et al. 2005; Toggweiler et al. 2006; Schmittner and Galbraith 2008; Skinner et al. 2010).
That CO2 is actually the major driver of past climate changes (Shakun et al. 2012). In other words, when we study past climate changes, what we find is that CO2 levels are a critically important factor, and, as I’ll explain later, we know that the current increase in CO2 is from us. Thus, when you understand the natural cycles, they actually support anthropogenic global warming rather than refuting it.
Volcanoes
At this point, people generally resort to claiming that volcanoes are actually the thing that is emitting the greenhouse gasses. That argument sounds appealing, but in reality, volcanoes usually emit less than 1% of the CO2 that we emit each year (Gerlach 2011). Also, several studies have directly examined volcanic emissions to see if they can explain our current warming, and they can’t (Meehl, et al. 2004; Imbers et al. 2014).
Carbon dioxide (CO2)
A final major driver of climate change is, in fact, CO2. Let’s get a couple of things straight right at the start. First, we know that CO2 traps heat and we know that increasing the amount of CO2 in an environment will result in the temperature increasing (you can find a nice list of papers on the heat trapping abilities of CO2 here).
Additionally, everyone (even climate “skeptics”) agrees that CO2 plays a vital role in maintaining the earth's temperature. From those facts, it is intuitively obvious that increasing the CO2 in the atmosphere will result in the temperature increasing. Further, CO2 appears to be responsible for a very large portion of the warming during past climate changes (Lorius et al. 1990; Shakun et al. 2012). Note: For past climate changes, the CO2 does lag behind the temperature initially, but as I explained above, the initial warming triggers an increase in CO2, and the CO2 drives the majority of the climate change.
At this point, you may be thinking, “fine, it’s CO2, but the CO2 isn’t from us, nature produces way more than we do.” It is true that nature emits more CO2 than us, but prior to the industrial revolution, nature was in balance, with the same amount of CO2 being removed as was emitted. Thus, there was no net gain. We altered that equation by emitting additional CO2.
Further, the increase that we have caused is no little thing. We have raised atmospheric CO2 by roughly 45% over pre-industrial levels (from about 280 ppm to over 400 ppm), and the current concentration of CO2 in the atmosphere is higher than it has been at any point in the past 800,000 years. So, yes, we only emit a small fraction of the total CO2 each year, but we are emitting more CO2 than nature can remove, and a little bit each year adds up to a lot over several decades.
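Here is a minimal sketch of why “a little bit each year adds up.” The figures (annual human emissions near 36 Gt of CO2, roughly 45% of which stays in the air, and about 7.8 Gt of CO2 per ppm of atmospheric concentration) are approximate values I'm assuming for illustration.

```python
# How small annual additions accumulate in the atmosphere.
# All numbers are approximate values assumed for illustration.
annual_emissions_GtCO2 = 36.0   # rough recent human CO2 emissions per year
airborne_fraction = 0.45        # share that stays in the atmosphere (rest goes to ocean/land)
GtCO2_per_ppm = 7.8             # ~7.8 Gt of CO2 raises the concentration by ~1 ppm

ppm_per_year = annual_emissions_GtCO2 * airborne_fraction / GtCO2_per_ppm
print(f"≈ {ppm_per_year:.1f} ppm added per year")
print(f"≈ {ppm_per_year * 50:.0f} ppm added over 50 years")   # on the order of the observed rise
```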
Additionally, we know that the current massive increase in CO2 is from us because of the C13 levels. Carbon has two stable isotopes (C12 and C13), but C13 is heavier than C12. Thus, when plants take carbon from the air and use it to make carbohydrates, they take up a disproportionate amount of C12.
As a result, plants, animals (which get their carbon from eating plants), and fossil fuels (which formed from ancient plants and animals) have lower C13/C12 ratios than the atmosphere does.
Therefore, if burning fossil fuels is responsible for the current increase in CO2, we should see the ratio of C13/C12 in the atmosphere shift to be closer to that of fossil fuels (i.e., contain relatively more C12), and, guess what, that is exactly what we see (Bohm et al. 2002; Ghosh and Brand 2003; Wei et al. 2009). This is unequivocal evidence that we are the cause of the current increase in CO2.
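A naive mixing sketch shows the direction of that shift. The delta-13C values and quantities below are approximate assumptions (pre-industrial atmosphere near −6.5‰, fossil-fuel carbon near −28‰), and the calculation deliberately ignores exchange with the ocean and biosphere.

```python
# Naive two-end-member mixing sketch (ignores ocean/biosphere exchange).
# The delta-13C values and amounts are approximate assumptions.
atm_carbon = 280.0      # "parts" of pre-industrial atmospheric carbon (ppm-equivalent)
atm_d13C = -6.5         # approx. pre-industrial atmospheric delta-13C, per mil

added_carbon = 120.0    # roughly the ppm-equivalent added since pre-industrial times
fossil_d13C = -28.0     # typical delta-13C of fossil-fuel carbon, per mil

mixed_d13C = (atm_carbon * atm_d13C + added_carbon * fossil_d13C) / (atm_carbon + added_carbon)
print(f"Naive mixed atmospheric delta-13C: {mixed_d13C:.1f} per mil")
# The observed shift (from about -6.5 to roughly -8.5 per mil) is smaller than this
# naive estimate because much of the added carbon exchanges with the ocean and
# biosphere, but it moves in exactly the direction a fossil-fuel source predicts.
```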
Finally, we can assemble all of this information into a deductive logical argument. If CO2 traps heat, and we have increased the CO2 in the atmosphere, then more heat will be trapped. To illustrate how truly inescapable that conclusion is, here is an analogous argument:
1). Insulation traps heat
2). You doubled the insulation of your house
3). Therefore, your house will trap more heat
You cannot accept one of those arguments and reject the other (doing so is logically inconsistent).
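Both arguments share the same two-premise form; written out schematically (a sketch of the logical structure only):

```latex
% Shared logical form of the CO2 argument and the insulation analogy
\[
\begin{array}{ll}
P_1: & \text{If the amount of a heat-trapping material increases, more heat is trapped.} \\
P_2: & \text{The amount of that material (CO$_2$ in the air, or insulation in the house) has increased.} \\
\therefore & \text{More heat is trapped.}
\end{array}
\]
```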
Note: Yes, I know that the situation is much more complex than simply CO2 trapping heat, and there are various feedback mechanisms at play, but that does not negate the core argument.
Putting the pieces together
So far, I have been talking about all of the drivers of climate change independently, which is clearly an oversimplification, because, in all likelihood, several mechanisms are all acting together. Therefore, the best way to test whether or not the current warming is natural is actually to construct statistical models that include both natural and man-made factors. We can then use those models to see which factors are causing climate change.
Figure: Hansen et al. 2005. Earth's energy imbalance: confirmation and implications. Science 308:1431–1435.
We have constructed many such models, and they consistently show that natural factors alone cannot explain the current warming (Stott et al. 2001; Meehl et al. 2004; Allen et al. 2006; Lean and Rind 2008; Imbers et al. 2014).
In other words, including human greenhouse gas emissions in the models is the only way to get the models to match the observed warming. This is extremely clear evidence that the current warming is not entirely natural. To be clear, natural factors do play a role and are contributing, but human factors are extremely important, and most of the models show that they account for the majority of the warming.
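To make the logic of that test concrete, here is a toy, synthetic-data version of the exercise (an illustrative sketch, not a reproduction of the cited studies; the forcing shapes and coefficients are invented): build a fake temperature record from known natural and human-caused pieces, then see which combination of predictors can reproduce its trend.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2021)
n = years.size

# Synthetic "forcings" (arbitrary shapes chosen only for illustration)
solar    = 0.05 * np.sin(2 * np.pi * (years - 1900) / 11)   # 11-year cycle, no trend
volcanic = -0.3 * (rng.random(n) < 0.03)                    # occasional cooling spikes
enso     = 0.1 * rng.standard_normal(n)                     # short-term variability
anthropo = 0.008 * np.maximum(years - 1950, 0)              # slow upward ramp

temperature = solar + volcanic + enso + anthropo + 0.05 * rng.standard_normal(n)

def fitted_trend(predictors):
    """OLS fit of temperature on the given predictors (plus an intercept);
    returns the linear trend of the fitted series in degrees per decade."""
    X = np.column_stack([np.ones(n)] + predictors)
    coef, *_ = np.linalg.lstsq(X, temperature, rcond=None)
    fit = X @ coef
    return 10 * np.polyfit(years, fit, 1)[0]

obs_trend = 10 * np.polyfit(years, temperature, 1)[0]
nat_trend = fitted_trend([solar, volcanic, enso])
all_trend = fitted_trend([solar, volcanic, enso, anthropo])

print(f"Observed trend           : {obs_trend:.3f} deg/decade")
print(f"Natural-only model trend : {nat_trend:.3f} deg/decade")   # falls well short
print(f"Natural + human model    : {all_trend:.3f} deg/decade")   # matches the observations
```

With only the natural predictors, the fitted series has essentially no long-term trend; adding the human-caused term is what lets the regression match the warming, which is the same qualitative result the attribution studies report with real data.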
Correlation vs. causation
It is usually about now that opponents of climate change start to argue that scientists are actually committing a correlation fallacy, and simply showing a correlation between temperature and the CO2 that we produce does not mean that the CO2 is causing the temperature increase. There are, however, several problems with that argument.
First, correlation can indicate causation under certain circumstances. Namely, situations where you have controlled all confounding factors. In other words, if you can show that Y is the only thing that is changing significantly with X, then you can reach a causal conclusion (even placebo controlled drug trials are really just showing correlations between taking the drug and recovery, but because they used the control, they can use that correlation to reach a causal conclusion).
In the case of climate change, of course, we have examined the confounding factors. As I explained in the previous section, we have constructed statistical models with the various drivers of climate change, and anthropogenic greenhouse gasses are necessary to account for the current warming. In other words, we have controlled for the other causes of climate change, therefore we can reach a causal conclusion.
Second, and perhaps more importantly, there is nothing wrong with using correlation to show a particular instance of causation if a causal relationship between X and Y has already been established. Let me give an example. Consider the smoking rates and lung/bronchial cancer rates in the US over recent decades. There is an obvious correlation between the two (P < 0.0001), and I don't think that anyone is going to disagree with the notion that the decrease in smoking is largely responsible for the decrease in lung cancers.
Indeed, there is nothing wrong with reaching that conclusion, and it does not commit a correlation fallacy. This is the case because a causal relationship between smoking and cancer has already been established. In other words, we know that smoking causes cancer because of other studies.
Therefore, when you see that the two are correlated over time, there is nothing wrong with inferring that smoking is driving the cancer rates. Likewise, we know from laboratory tests and past climate data that CO2 traps heat and increasing it results in more heat being trapped. In other words, a causal relationship between CO2 and temperature has already been established. Therefore, there is nothing fallacious about looking at a correlation between CO2 and temperature over time and concluding that the CO2 is causing the temperature change.
Ad hoc fallacies and the burden of proof
At this point, I often find that people are prone to proposing that some unknown mechanism exists that scientists haven’t found yet. This is, however, a logical fallacy known as ad hoc. You can’t just make up an unknown mechanism whenever it suits you. If that was valid, then you could always reject any scientific result that you wanted, because it is always possible to propose some unknown mechanism.
Similarly, you can’t use the fact that scientists have been wrong before as evidence, nor can you argue that, “there are still things that we don’t understand about the climate, so I don’t have to accept anthropogenic climate change” (that’s an argument from ignorance fallacy). Yes, there are things that we don’t understand, but we understand enough to be very confident that we are causing climate change, and, once again, you can’t just assume that all of our current research is wrong.
The key problem here is the burden of proof. By claiming that there is some other natural mechanism out there, you have just placed the burden of proof squarely on your shoulders. In other words, you must provide actual evidence of such a mechanism. If you cannot do that, then your argument is logically invalid and must be rejected.
Summary/Conclusion
Let’s review, shall we?
- We know that it's not the sun
- We know that it's not Milankovitch cycles
- We know that it's not volcanoes
- We know that even when combined, natural causes cannot explain the current warming
- We know that CO2 traps heat
- We know that increasing CO2 causes more heat to be trapped
- We know that CO2 was largely responsible for past climate changes
- We know that we have substantially increased the CO2 in the atmosphere (by roughly 45% over pre-industrial levels)
- We know that the earth is trapping more heat now than it used to
- We know that including anthropogenic greenhouse gasses in the models is the only way to explain the current warming trend
When you look at that list of things that we have tested, the conclusion that we are causing the planet to warm is utterly inescapable. For some baffling reason, people often act as if scientists have never bothered to look for natural causes of climate change, but the exact opposite is true. We have carefully studied past climate changes and looked at the natural causes of climate changes, but none of them can explain the current warming.
The only way to account for our current warming is to include our greenhouse gasses in the models. This is extremely clear evidence that we are causing the climate to warm, and if you want to continue to insist that the current warming is natural, then you must provide actual evidence for the existence of a mechanism that scientists have missed, and you must provide evidence that it is a better explanation for the current warming than CO2.
Additionally, you are still going to have to refute the deductive argument that I presented earlier (i.e., show that a premise is false or that I committed a logical fallacy), because finding a previously unknown mechanism of climate change would not discredit the importance of CO2 or the fact that we have substantially increased it. Finally, you also need to explain why the earth is trapping more heat than it used to. If you can do all of that, then we'll talk, but if you can't, then you must accept the conclusion that we are causing the planet to warm.
Related posts
- Basics of Global Climate Change: A Logical Proof That it is Our Fault
- Do we need more studies on vaccines, GMOs, climate change, etc.?
- “Follow the money”: the finances of global warming, vaccines, and GMOs
- Global warming hasn’t paused
- Yes, there is a strong consensus on climate change
Literature cited
- Allen et al. 2006. Quantifying anthropogenic influence on recent near-surface temperature change. Surveys in Geophysics 27:491–544.
- Bohm et al. 2002. Evidence for preindustrial variations in the marine surface water carbonate system from coralline sponges. Geochemistry, Geophysics, Geosystems 3:1–13.
- Foster and Rahmstorf. 2011. Global temperature evolution 1979–2010. Environmental Research Letters 7:011002.
- Gerlach 2011. Volcanic versus anthropogenic carbon dioxide. EOS 92:201–202.
- Ghosh and Brand. 2003. Stable isotope ratio mass spectrometry in global climate change research. International Journal of Mass Spectrometry 228:1–33.
- Griggs and Harries. 2007. Comparison of spectrally resolved outgoing longwave radiation over the tropical Pacific between 1970 and 2003 Using IRIS, IMG, and AIRS. Journal of Climate 20:3982-4001.
- Hansen et al. 2005. Earth's energy imbalance: confirmation and implications. Science 308:1431–1435.
- Harries et al. 2001. Increases in greenhouse forcing inferred from the outgoing longwave radiation spectra of the Earth in 1970 and 1997. Nature 410:355–357.
- Imbers et al. 2014. Sensitivity of climate change detection and attribution to the characterization of internal climate variability. Journal of Climate 27:3477–3491.
- Lean and Rind. 2008. How natural and anthropogenic influences alter global and regional surface temperatures: 1889 to 2006. Geophysical Research Letters 35:L18701.
- Lockwood and Frohlich. 2007. Recent oppositely directed trends in solar climate forcings and the global mean surface air temperature. Proceedings of the Royal Society A 463:2447–2460.
- Lockwood and Frohlich. 2008. Recent oppositely directed trends in solar climate forcings and the global mean surface air temperature. II. Different reconstructions of the total solar irradiance variation and dependence on response time scale. Proceedings of the Royal Society A 464:1367–1385.
- Lorius et al. 1990. The ice-core record: climate sensitivity and future greenhouse warming. Nature 347:139–145.
- Martin et al. 2005. Role of deep sea temperature in the carbon cycle during the last glacial. Paleoceanography 20:PA2015.
- Meehl, et al. 2004. Combinations of natural and anthropogenic forcings in the twentieth-century climate. Journal of Climate 17:3721–3727.
- Schmittner and Galbraith 2008. Glacial greenhouse-gas fluctuations controlled by ocean circulation changes. Nature 456:373–376.
- Shakun et al. 2012. Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation. Nature 484:49–54.
- Skinner et al. 2010. Ventilation of the deep Southern Ocean and deglacial CO2 rise. Science 328:1147-1151.
- Stott et al. 2001. Attribution of twentieth century temperature change to natural and anthropogenic causes. Climate Dynamics 17:1–21.
- Toggweiler et al. 2006. Mid-latitude westerlies, atmospheric CO2, and climate change during the ice ages. Paleoceanography 21:PA2005.
- Wei et al. 2009. Evidence for ocean acidification in the Great Barrier Reef of Australia. Geochimica et Cosmochimica Acta 73:2332–2346.
- Wild et al. 2007. Impact of global dimming and brightening on global warming. Geophysical Research Letters
https://thelogicofscience.com/2016/06/06/global-warming-isnt-natural-and-heres-how-we-know/
______________________
This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (added pub. l 94-553, Title I, 101, Oct 19, 1976, 90 Stat 2546)
What does it mean to divide a fraction by a fraction?
This lesson from Virtual Nerd clearly explains the meaning of this.
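As a quick worked example (my own, not taken from the linked lesson): dividing by a fraction asks how many copies of that fraction fit into the other quantity, which is the same as multiplying by its reciprocal.

```latex
\frac{3}{4} \div \frac{1}{8}
  = \frac{3}{4} \times \frac{8}{1}
  = \frac{24}{4}
  = 6
```

That is, six one-eighth pieces fit inside three quarters.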
Thanks for visiting. See our articles on
Astronomy, Biology, Chemistry, Earth Science, Mathematics, Physics
Francis Cabot Lowell and the industrial revolution
Before the 1760s, textile production was a cottage industry using mainly flax and wool. A typical weaving family would own one hand loom, which would be operated by the man with the help of a boy; the wife, girls, and other women could make sufficient yarn for that loom.
The knowledge of textile production had existed for centuries. India had long had a textile industry that manufactured cotton cloth, and when raw cotton was exported to Europe it could be used to make fustian.
Two systems had developed for spinning: the simple wheel, which used an intermittent process, and the more refined Saxony wheel, which drove a differential spindle and flyer, with a heck that guided the thread onto the bobbin, as a continuous process. This was satisfactory for use on hand looms, but neither of these wheels could produce enough thread for the looms after John Kay's invention of the flying shuttle in 1734, which made the loom twice as productive.
Cloth production moved away from the cottage into manufactories. The first moves towards manufactories called mills were made in the spinning sector. The move in the weaving sector was later. By the 1820s, all cotton, wool and worsted was spun in mills; but this yarn went to outworking weavers who continued to work in their own homes. A mill that specialised in weaving fabric was called a weaving shed.
This section has been adapted from “Textile manufacture during the British Industrial Revolution,” Wikipedia.
Francis Cabot Lowell
Samuel Slater had established factories in the 1790s after building textile machinery. Francis Cabot Lowell took it a step further. In 1810, Francis Cabot Lowell visited the textile mills in England. He took note of the machinery in England that was not available in the United States, and he sketched and memorized details.
One machine in particular, the power loom, could weave thread into cloth. He took his ideas back to the United States and formed the Boston Manufacturing Company in 1812. With the money he made from this company, he built a water-powered mill. Francis Cabot Lowell is credited with building the first factory where raw cotton could be made into cloth under one roof.
This process, also known as the “Waltham-Lowell System,” reduced the cost of producing cotton cloth. By turning out cheaper cloth, Lowell's company quickly became successful. After Lowell brought the power loom to the United States, the new textile industry boomed, and by 1832 textile manufacturing had become one of the largest industries in the United States.
Lowell also found a specific workforce for his textile mills. He employed single young women, daughters of New England farm families, also known as the Lowell Girls. Many women were eager to work to show their independence. Lowell found this convenient because he could pay women lower wages than he would have to pay men. Women also worked more efficiently than men did and were more skilled when it came to cotton production. This way, he got his work done efficiently, with the best results, and it cost him less. The success of the Lowell mills symbolizes the success and technological advancement of the Industrial Revolution.
– This has been excerpted from The industrial revolution – The textile industry
Note that the analysis above, while correct, is incomplete: this system is an example of how powerful factory owners, combined with inequitable social and legal norms, allowed one group (in this case, rich land and factory owners) to profit at the expense of the people whose labor actually produced items of value (in this case, native-born and immigrant women).
Ethical issues
This imbalance of power kept people who worked 40 to 60 hours a week poor, by depriving them of a fair share of the profits from their own labor. It also caused much injury, and sometimes death, from unsafe factory conditions. Factory conditions in America and Europe did not substantially improve until the development of labor unions. If you or people you know are able to work 40 hours or less a week, without living in poverty, in a safe environment, without fear of death, that is largely due to labor unions.
Labor is prior to and independent of capital. Capital is only the fruit of labor, and could never have existed if labor had not first existed. Labor is the superior of capital, and deserves much the higher consideration.
– Abraham Lincoln, First Annual Message, 12/3/1861
“If capitalism is fair then unionism must be. If men have a right to capitalize their ideas and the resources of their country, then that implies the right of men to capitalize their labor.”
— Frank Lloyd Wright
Learning Standards
Massachusetts Science and Technology/Engineering Curriculum Framework
HS-ETS4-5(MA). Explain how a machine converts energy, through mechanical means, to do work. Collect and analyze data to determine the efficiency of simple and complex machines.
Massachusetts History and Social Science Curriculum Framework
Grade 6: HISTORY AND GEOGRAPHY Interpret geographic information from a graph or chart and construct a graph or chart that conveys geographic information (e.g., about rainfall, temperature, or population size data)
INDUSTRIAL REVOLUTION AND SOCIAL AND POLITICAL CHANGE IN EUROPE, 1800–1914 WHII.6 Summarize the social and economic impact of the Industrial Revolution… population and urban growth
Benchmarks, American Association for the Advancement of Science
In the 1700s, most manufacturing was still done in homes or small shops, using small, handmade machines that were powered by muscle, wind, or moving water. 10J/E1** (BSL)
In the 1800s, new machinery and steam engines to drive them made it possible to manufacture goods in factories, using fuels as a source of energy. In the factory system, workers, materials, and energy could be brought together efficiently. 10J/M1*
The invention of the steam engine was at the center of the Industrial Revolution. It converted the chemical energy stored in wood and coal into motion energy. The steam engine was widely used to solve the urgent problem of pumping water out of coal mines. As improved by James Watt, Scottish inventor and mechanical engineer, it was soon used to move coal; drive manufacturing machinery; and power locomotives, ships, and even the first automobiles. 10J/M2*
The Industrial Revolution developed in Great Britain because that country made practical use of science, had access by sea to world resources and markets, and had people who were willing to work in factories. 10J/H1*
The Industrial Revolution increased the productivity of each worker, but it also increased child labor and unhealthy working conditions, and it gradually destroyed the craft tradition. The economic imbalances of the Industrial Revolution led to a growing conflict between factory owners and workers and contributed to the main political ideologies of the 20th century. 10J/H2
Today, changes in technology continue to affect patterns of work and bring with them economic and social consequences. 10J/H3*
The Greatest Mistake In The History Of Physics
In optics, the Poisson spot (also called the Arago or Fresnel spot) is an unexpected bright point that appears at the center of a circular object’s shadow – something that common sense would imply is impossible. The spot turns out to be due to the wave nature of light, specifically Fresnel diffraction.
This phenomenon played an important role in the discovery of the wave nature of light. There's a great article on this: The Greatest Mistake In The History Of Physics, Ethan Siegel, Forbes, 8/26/2018.
French educational card, late 19th/early 20th century.
We all love our most cherished ideas about how the world and the Universe works. Our conception of reality is often inextricably intertwined with our ideas of who we are. But to be a scientist is to be prepared to doubt all of it each and every time we put it to the test. All it takes is one observation, measurement, or experiment that conflicts with the predictions of your theory, and you have to consider revising or throwing out your picture of reality.
If you can reproduce that scientific test and show, convincingly, that it is inconsistent with the prevailing theory, you’ve set the stage for a scientific revolution. But if you aren’t willing to put your theory or assumption to the test, you might just make the greatest mistake in the history of physics.
Which is why, in the early 19th century, the young French scientist, Augustin-Jean Fresnel, should have expected the trouble he was about to get into.
Although this work isn't as well known today as his mechanics or gravitation, Newton was also one of the pioneers in explaining how light worked. He explained reflection and refraction, absorption and transmission, and even how white light was composed of colors. Light rays bent when they went from air into water and back again, and at every surface there was a reflective component and a component that was transmitted through.
Newton's “corpuscular” theory treated light as a stream of particles, and his idea that light traveled in rays agreed with a wide variety of experiments.
Although there was a wave theory of light that was contemporary with Newton’s, put forth by Christiaan Huygens, it couldn’t explain the prism experiments. Newton’s Opticks, like his mechanics and gravitation, was a winner.
But right around the dawn of the 19th century, it started to run into trouble. Thomas Young ran a now-classic experiment where he passed light through a double slit: two narrow slits separated by an extremely small distance.
Instead of light behaving like a corpuscle, where it would either pass through one slit or the other, it displayed an interference pattern: a series of light-and-dark bands.
This shows a typical experimental set-up.
Moreover, the pattern of the bands was determined by two tunable experimental parameters: the spacing between the slits and the color of the light.
If red light corresponded to long-wavelength light and blue corresponded to short-wavelength light, then light behaved exactly as you’d expect if it were a wave.
Young’s double-slit experiments only made sense if light had a fundamentally wavelike nature.
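The standard small-angle double-slit relation captures that dependence: the fringe spacing is roughly λL/d, where λ is the wavelength, L the distance to the screen, and d the slit separation. Here is a minimal numerical sketch; the slit separation, screen distance, and wavelengths are assumed example values, not Young's.

```python
# Double-slit fringe spacing: delta_y ≈ (wavelength * screen_distance) / slit_separation
# (small-angle approximation; all numbers are assumed example values)
slit_separation = 0.2e-3    # 0.2 mm between the slits
screen_distance = 1.0       # 1 m from the slits to the screen

for name, wavelength in [("red", 650e-9), ("blue", 450e-9)]:
    fringe_spacing = wavelength * screen_distance / slit_separation
    print(f"{name:>4} light ({wavelength*1e9:.0f} nm): fringes ≈ {fringe_spacing*1e3:.2f} mm apart")
```

Longer (redder) wavelengths give wider fringes, and a wider slit separation gives narrower ones, exactly the dependence Young observed.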
Still, Newton’s successes couldn’t be ignored. The nature of light became a controversial topic in the early 19th century among scientists.
In 1818, the French Academy of Sciences sponsored a competition to explain light. Was it a wave? Was it a particle? How can you test it, and how can you verify that test?
Augustin-Jean Fresnel entered this competition despite being trained as a civil engineer, not as a physicist or mathematician. He had formulated a new wave theory of light that he was tremendously excited about, largely based on Huygens’ 17th century work and Young’s recent experimental results.
The stage was set for the greatest mistake in all of physics to occur.
After submitting his entry, one of the judges, the famed physicist and mathematician Simeon Poisson, investigated Fresnel’s theory in gory detail.
If light were a corpuscle, as Newton would have it, it would simply travel in a straight line through space.
But if light were a wave, it would have to interfere and diffract when it encountered a barrier, a slit, or an “edge” to a surface.
Different geometric configurations would lead to different specific patterns, but this general rule holds.
Poisson imagined monochromatic light: a single wavelength in Fresnel's theory. Imagine this light forming a cone-like shape and encountering a spherical object.
In Newton’s theory, you get a circle-shaped shadow, with light surrounding it.
But in Fresnel’s theory, as Poisson demonstrated, there should be a single, bright point at the very center of the shadow. This prediction, Poisson asserted, was clearly absurd.
Poisson attempted to disprove Fresnel's theory by reductio ad absurdum: derive a prediction from the light-as-a-wave theory whose consequence would be so absurd that the theory would have to be false.
If the prediction was absurd, the wave theory of light must be false. Newton was right; Fresnel was wrong. Case closed.
Except, that itself is the greatest mistake in the history of physics! You cannot draw a conclusion, no matter how obvious it seems, without performing the crucial experiment.
Physics is not decided by elegance, by beauty, by the straightforwardness of arguments, or by debate. It is settled by appealing to nature itself, and that means performing the relevant experiment.
Image credit: Thomas Reisinger, CC-BY-SA 3.0; E. Siegel.
Thankfully, for Fresnel and for science, the head of the judging committee would have none of Poisson’s shenanigans. Standing up for not only Fresnel but for the process of scientific inquiry in general, François Arago, who later became much more famous as a politician, abolitionist, and even prime minister of France, performed the deciding experiment himself.
He fashioned a spherical obstacle and shone monochromatic light around it, checking for the wave theory’s prediction of constructive interference. Right at the center of the shadow, a bright spot of light could easily be seen.
Even though the predictions of Fresnel’s theory seemed absurd, the experimental evidence was right there to validate it. Absurd or not, nature had spoken.
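For anyone who wants to see the prediction fall out of the math, here is a minimal numerical sketch: it propagates a plane wave past an opaque disc using the angular-spectrum (Fresnel) method and checks the intensity at the centre of the geometric shadow. The wavelength, disc size, grid, and distance are assumed example values, not Arago's actual setup.

```python
import numpy as np

# Minimal angular-spectrum (Fresnel) simulation of the Poisson/Arago spot.
# A plane wave of unit intensity hits an opaque disc; we propagate the field a
# distance z and look at the intensity inside the geometric shadow.
wavelength = 633e-9      # m (red light)
N, L = 1024, 30e-3       # grid points and physical grid width (30 mm)
disc_radius = 2e-3       # 2 mm radius opaque disc
z = 1.0                  # propagation distance, m

x = np.linspace(-L/2, L/2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
r = np.sqrt(X**2 + Y**2)

field = np.ones((N, N), dtype=complex)
field[r <= disc_radius] = 0.0            # the disc blocks the wave

# Fresnel transfer function in the spatial-frequency domain
fx = np.fft.fftfreq(N, d=L/N)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))

intensity = np.abs(np.fft.ifft2(np.fft.fft2(field) * H))**2

centre = N // 2
spot = intensity[centre, centre]
shadow = intensity[(r > 0.3 * disc_radius) & (r < 0.7 * disc_radius)].mean()

print(f"Intensity at the shadow's centre : {spot:.2f}   (unobstructed beam = 1.00)")
print(f"Average elsewhere in the shadow  : {shadow:.2f}")
# Ray optics predicts zero light anywhere in the shadow, yet the wave calculation
# puts a bright point, about as bright as the unobstructed beam, at its exact centre.
```

The grid width is chosen so the Fresnel transfer function is adequately sampled (roughly, the pixel size should be at least λz/L); shrinking the grid without adjusting the other parameters can introduce aliasing artifacts.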
A great mistake you can make in physics is to assume you know what the answer is going to be in advance. An even greater mistake is to assume that you don’t even need to perform a test, because your intuition tells you what is or isn’t acceptable to nature itself.
But physics is not always an intuitive science, and for that reason, we must always resort to experiments, observations, and measurable tests of our theories.
Without that approach, we would never have overthrown Aristotle’s view of nature. We never would have discovered special relativity, quantum mechanics, or our current theory of gravity: Einstein’s General Relativity. And, quite certainly, we would never have discovered the wave nature of light without it, either.
History, context, and the end of classical physics
Arago later noted that the phenomenon had already been observed by Joseph-Nicolas Delisle (1715) and Giacomo Maraldi (1723) a century earlier. However, those scientists had not worked out the math and were not trying to use this experiment to distinguish between the different interpretations of physics.
They had made good, solid scientific observations, absolutely. Yet this is a good example of the fact that data, by itself, is only of limited usefulness without a theory to put it in context. Data needs an interpretation to have meaning.
It turned out much later (in one of Albert Einstein’s Annus Mirabilis papers) that light can be equally described as a particle. Normally, this would be a paradox – surely light must either be a particle, or a wave. It certainly shouldn’t be both at the same time.
However, the indisputable experimental proof eventually was revealed:
light absolutely does have wave-like properties, and they are clearly predictable and observable in certain circumstances.
Yet light also absolutely does have particle-like properties, which are likewise predictable and observable in other circumstances.
This at first paradoxical result led to perhaps the greatest development in the history of physics – the overturning of classical physics and the push into the modern, quantum understanding of reality. See articles on wave–particle duality and quantum mechanics.
From The greatest mistake in the history of physics
See our articles on light, on waves, and on the scientific method.
Special Education MCAS accommodations
For teachers in Massachusetts: Special Education MCAS accommodations
Graphic Organizers, Checklists, and Supplemental Reference Sheets, for use by students with disabilities
The approved graphic organizers, checklists, and supplemental reference sheets listed in the table below are for use by students with disabilities who have this MCAS accommodation (A9 from the Accessibility and Accommodations Manual for the 2018–2019 MCAS Tests/Retests) listed in their IEPs or 504 plans.
The Department encourages schools to familiarize students with these tools, since students should be comfortable using their graphic organizer or reference sheet during MCAS testing.
Only the approved organizers and supplemental reference sheets listed below may be used for next-generation ELA and Mathematics MCAS testing, and text or graphics may not be added. It is permissible to remove selected text or graphics.
The sample Science and Technology/Engineering (STE) reference sheets listed below may be used as is, or may be used with selected text and graphics removed; however, additional Department approval is required if any text or graphics are added, or if a different reference sheet is created.
| Approved ELA Graphic Organizers | Approved Supplemental Mathematics Reference Sheets | Sample STE Reference Sheets |
|---|---|---|
Note: If you have a problem printing a graphic organizer please call Student Assessment at 781-338-3625.
MCAS Test accommodations
Here are both the standard and non-standard MCAS test accommodations. The IEP team should work with the parent to set up the accommodations that best fit the student's needs.
MCAS test accommodations for students with disabilities
MCAS Alternate Assessment (MCAS-Alt)
MCAS is designed to measure a student’s knowledge of key concepts and skills outlined in the Massachusetts Curriculum Frameworks.
A small number of students with the most significant disabilities who are unable to take the standard MCAS tests even with accommodations participate in the MCAS Alternate Assessment (MCAS-Alt).
MCAS-Alt consists of a portfolio of specific materials collected annually by the teacher and student.
Samples!
Here are some samples of alternate assessments, and how teachers would grade them:
Examples of MCAS Alternate Assessment
Evidence for the portfolio may include work samples, instructional data, videotapes, and other supporting information.
- Commissioner’s Memo: Information and Resources for MCAS-Alt and the Every Student Succeeds Act (ESSA)
- Learn about the MCAS-Alt. View an overview and frequently asked questions.
- Access resources for conducting MCAS-Alt and on upcoming training sessions, including MCAS-Alt Newsletters, the Resource Guide, Educator’s Manual, MCAS-Alt Forms and Graphs, and registration information.
- See sample portfolio strands from students’ MCAS-Alt portfolios.
- Find information on scoring portfolios and view reports of results. Also view information on the MCAS-Alt score appeals process.