There Was No Big Bang Singularity
Backup articles for students
There Was No Big Bang Singularity, Ethan Siegel, Forbes, 7/27/2018
https://www.forbes.com/sites/startswithabang/2018/07/27/there-was-no-big-bang-singularity/amp/
Almost everyone has heard the story of the Big Bang. But if you ask anyone, from a layperson to a cosmologist, to finish the following sentence, “In the beginning, there was…” you’ll get a slew of different answers. One of the most common ones is “a singularity,” which refers to an instant where all the matter and energy in the Universe was concentrated into a single point. The temperatures, densities, and energies of the Universe would be arbitrarily, infinitely large, and could even coincide with the birth of time and space itself.
But this picture isn’t just wrong, it’s nearly 40 years out of date! We are absolutely certain there was no singularity associated with the hot Big Bang, and there may not have even been a birth to space and time at all. Here’s what we know and how we know it.
When we look out at the Universe today, we see that it’s full of galaxies in all directions at a wide variety of distances. On average, we also find that the more distant a galaxy is, the faster it appears to be receding from us. This isn’t due to the actual motions of the individual galaxies through space, though; it’s due to the fact that the fabric of space itself is expanding.
This was a prediction that was first teased out of General Relativity in 1922 by Alexander Friedmann, and was observationally confirmed by the work of Edwin Hubble and others in the 1920s. It means that, as time goes on, the matter within the Universe spreads out and becomes less dense, since the volume of the Universe increases. It also means that, if we look to the past, the Universe was denser, hotter, and more uniform.
If you were to extrapolate back farther and farther in time, you’d begin to notice a few major changes to the Universe. In particular:
- you’d come to an era where gravitation hasn’t had enough time to pull matter into large enough clumps to have stars and galaxies,
- you’d come to a place where the Universe was so hot you couldn’t form neutral atoms,
- and then where even atomic nuclei were blasted apart,
- where matter-antimatter pairs would spontaneously form,
- and where individual protons and neutrons would be dissociated into quarks and gluons.
Each step represents the Universe when it was younger, smaller, denser, and hotter. Eventually, if you kept on extrapolating, you’d see those densities and temperatures rise to infinite values, as all the matter and energy in the Universe was contained within a single point: a singularity.
The hot Big Bang, as it was first conceived, wasn’t just a hot, dense, expanding state, but represented an instant where the laws of physics break down. It was the birth of space and time: a way to get the entire Universe to spontaneously pop into existence. It was the ultimate act of creation: the singularity associated with the Big Bang.
Yet, if this were correct, and the Universe had achieved arbitrarily high temperatures in the past, there would be a number of clear signatures of this we could observe today. There would be temperature fluctuations in the Big Bang’s leftover glow that would have tremendously large amplitudes. The fluctuations that we see would be limited by the speed of light; they would only appear on scales of the cosmic horizon and smaller. There would be leftover, high-energy cosmic relics from earlier times, like magnetic monopoles.
And yet, the temperature fluctuations are only 1-part-in-30,000, thousands of times smaller than a singular Big Bang predicts. Super-horizon fluctuations are real, robustly confirmed by both WMAP and Planck. And the constraints on magnetic monopoles and other ultra-high-energy relics are incredibly tight. These missing signatures have a huge implication: the Universe never reached these arbitrarily large temperatures.

Instead, there must have been a cutoff. We cannot extrapolate back arbitrarily far, to a hot-and-dense state that reaches whatever energies we can dream of. There’s a limit to how far we can go and still validly describe our Universe.
In the early 1980s, it was theorized that, before our Universe was hot, dense, expanding, cooling, and full of matter and radiation, it was inflating. A phase of cosmic inflation would mean the Universe was:
- filled with energy inherent to space itself,
- which causes a rapid, exponential expansion,
- that stretches the Universe flat,
- gives it the same properties everywhere,
- with small-amplitude quantum fluctuations,
- that get stretched to all scales (even super-horizon ones),
and then inflation comes to an end.
When it does, it converts that energy, which was previously inherent to space itself, into matter and radiation, which leads to the hot Big Bang. But it doesn’t lead to an arbitrarily hot Big Bang, but rather one that achieved a maximum temperature that’s at least hundreds of times smaller than the scale at which a singularity could emerge. In other words, it leads to a hot Big Bang that arises from an inflationary state, not a singularity.
The information that exists in our observable Universe, that we can access and measure, only corresponds to the final ~10^-33 seconds of inflation, and everything that came after. If you want to ask the question of how long inflation lasted, we simply have no idea. It lasted at least a little bit longer than 10^-33 seconds, but whether it lasted a little longer, a lot longer, or for an infinite amount of time is not only unknown, but unknowable.
So what happened to start inflation off? There’s a tremendous amount of research and speculation about it, but nobody knows. There is no evidence we can point to; no observations we can make; no experiments we can perform. Some people (wrongly) say something akin to:
Well, we had a Big Bang singularity give rise to the hot, dense, expanding Universe before we knew about inflation, and inflation just represents an intermediate step. Therefore, it goes: singularity, inflation, and then the hot Big Bang.
There are even some very famous graphics put out by top cosmologists that illustrate this picture. But that doesn’t mean this is right.

Image credit: National Science Foundation (NASA, JPL, Keck Foundation, Moore Foundation, related)
In fact, there are very good reasons to believe that this isn’t right! One thing that we can mathematically demonstrate, in fact, is that it’s impossible for an inflating state to arise from a singularity.
Here’s why: space expands at an exponential rate during inflation. Think about how an exponential works: after a certain amount of time goes by, the Universe doubles in size. Wait twice as long, and it doubles twice, making it four times as large. Wait three times as long, it doubles thrice, making it 8 times as large. And if you wait 10 or 100 times as long, those doublings make the Universe 2^10 or 2^100 times as large.
Which means if we go backwards in time by that same amount, or twice, or thrice, or 10 or 100 times, the Universe would be smaller, but would never reach a size of 0. Respectively, it would be half, a quarter, an eighth, 2^-10, or 2^-100 times its original size. But no matter how far back you go, you never achieve a singularity.

Image by E. Siegel
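To make that point concrete, here is a minimal numerical sketch (ours, not part of the original article): step backwards in time by 1, 2, 3, 10, or 100 doubling-times and watch the size shrink geometrically without ever hitting zero.

```python
# Extrapolating an inflating region backwards in time: each step back
# by one doubling-time halves the size, but never makes it zero.
size = 1.0  # size of the region "now", in arbitrary units
for doublings_back in (1, 2, 3, 10, 100):
    past_size = size * 2.0 ** (-doublings_back)
    print(f"{doublings_back:>3} doubling-times ago: size = {past_size:.3e}")
# Output ends with 7.889e-31: tiny, but still finite, never zero.
```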
There is a theorem, famous among cosmologists, showing that an inflationary state is past-timelike-incomplete. What this means, explicitly, is that if you have any particles that exist in an inflating Universe, they will eventually meet if you extrapolate back in time.
This doesn’t, however, mean that there must have been a singularity, but rather that inflation doesn’t describe everything that occurred in the history of the Universe, like its birth. We also know, for example, that inflation cannot arise from a singular state, because an inflating region must always begin from a finite size.
Every time you see a diagram, an article, or a story talking about the “big bang singularity” or any sort of big bang/singularity existing before inflation, know that you’re dealing with an outdated method of thinking.
The idea of a Big Bang singularity went out the window as soon as we realized we had a different state — that of cosmic inflation — preceding and setting up the early, hot-and-dense state of the Big Bang.
There may have been a singularity at the very beginning of space and time, with inflation arising after that, but there’s no guarantee. In science, there are the things we can test, measure, predict, and confirm or refute, like an inflationary state giving rise to a hot Big Bang. Everything else? It’s nothing more than speculation.
Related articles by Ethan Siegel
The Big Bang Wasn’t The Beginning, After All 9/2017
What Was It Like When The Universe Was Inflating? 6/2018
How Well Has Cosmic Inflation Been Verified? 5/2019
The science wars: postmodernism as a threat against truth and reason
The science wars were an intellectual conflict between scientific realists and postmodernist critics.
The debate was about whether anything that humans could learn or talk about actually has meaning – or whether all words (even for science and math) ultimately only conveyed internal biases and feelings. Thus, in this view, nothing could ever be objectively said about the world.
Misunderstanding the debate
The science wars were often misunderstood by observers. Outsiders imagined that the debate was about whether the intellectual paradigms of a culture affect the way data is interpreted. After all, the same data can lead investigators to different conclusions, depending on their internal biases.
However, this had nothing to do with the science wars. Scientists acknowledge that all people operate within intellectual paradigms, and that this of course affects how people might interpret data.
Rather, in the science wars, deconstructionists and postmodernists went much further: Many held that science tells us nothing about the real world. Some said things such as “DNA molecules are a myth of Western culture;” “the idea that 2 + 2 = 4 is white colonialist thinking,” etc. Some in this group denied that math and science had any more existence or legitimacy than “other ways of thinking” about subjects.
Ironically, this kind of thinking was foreseen by George Orwell.
In the end, the Party would announce that two and two made five, and you would have to believe it. It was inevitable that they should make that claim sooner or later: the logic of their position demanded it. Not merely the validity of experience, but the very existence of external reality, was tacitly denied by their philosophy.
– George Orwell, Nineteen Eighty-Four
Scientific realists (such as Norman Levitt, Paul R. Gross, Jean Bricmont and Alan Sokal) understand and explain that scientific knowledge is real.
In contrast, many postmodernists and deconstructionists openly reject the reality and usefulness of science itself. Many openly reject scientific objectivity, the scientific method, empiricism, and scientific knowledge.
Postmodernists and deconstructionists interpret Thomas Kuhn‘s ideas about scientific paradigms to mean that scientific theories are only social constructs, and not actual descriptions of reality.
Some philosophers like Paul Feyerabend argued that other, non-realist forms of knowledge production were just as valid. Therefore, for example:
a Native American thinking about nature would come up with his or her own ideas that are different from ideas in supposed “colonialist” science textbooks, and that those ideas – even when never backed by experiment – would literally be just as “true” as the ideas found by science (ideas which actually have been tested, and found to be true no matter the ethnicity of the person involved.)
a woman thinking about nature would come up with her own ideas that are different from ideas in supposed “male” science textbooks, and that those ideas – even when never backed by experiment – would literally be just as “true” as the ideas found by science (ideas which actually have been tested, and found to be true no matter the gender of the person involved.)
There were attempts to bring postmodernism/deconstructionism into science back in the 1990s. There is a new attempt to do so today in the 2020s under the misleading motto “decolonize the curriculum.”
Some of these postmodernist attempts to do so at first look like a parody, but it turns out that the authors are serious.
For example, an increasing number of postmodernists claim that math itself is “colonialist.” The example shown below is becoming increasingly common.

Can you imagine what would happen if we allowed people to “decolonize” math, science, and engineering practices? Every piece of technology created by people indoctrinated with this view would be dangerous.

In the 1990s, scientific realists were quick to realize the danger. Large swaths of deconstructionist and postmodernist writings rejected any possibility of objectivity and realism. This undercut not only the entire foundation of mathematics and all of science, but also of philosophy and human rights.
The works of Jacques Derrida, Gilles Deleuze, Jean-François Lyotard and others claimed to say something about reality, but realists (scientists and anyone who believed in rational thought) recognized that such postmodern writings were deliberately incomprehensible or meaningless.
Example of how postmodernists understand basic logic
Some people misunderstand (or deliberately misrepresent) images like this to promote the idea that “truth is relative.” They say things like “The object is a triangle when viewed by one person, but a square when viewed by someone else, and a circle when seen by yet another person. So reality is relative, not absolute.”
The problem of course is that their claims are not only false, they are irrational.
In this example there is an actual three dimensional object (a fact in the real world.) The geometric projection of this object contains only a small part of information about the object as a whole.
Thus, a viewer who only looks at the object from one direction only receives some of the information, and does not yet know about the rest. Yet that lack of knowledge doesn’t change the reality of what the three dimensional object actually is.
If a postmodernist concluded, “I see a circle, therefore it is a circle,” and then made a mathematical model of the object as a circle or sphere, their model would make predictions which immediately turn out to be wrong. Not “wrong” from one culture’s point of view, or from one religion’s point of view, or one gender’s point of view, but actually objectively wrong in reality.
Related articles on this website
Why does science matter?
Relativism Truth and Reality
Science denialism
Suggested reading (articles)
Campus Craziness: A New War on Science, Skeptic Magazine, Volume 22 Number 4
Suggested reading (books)
Science Wars: The Next Generation (Science for the People)
Higher Superstition: The Academic Left and Its Quarrels with Science, Paul R. Gross and Norman Levitt, 1994
Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science, Alan Sokal and Jean Bricmont, 1999
In 1996, Alan Sokal published an essay in the hip intellectual magazine Social Text parodying the scientific but impenetrable lingo of contemporary theorists. Here, Sokal teams up with Jean Bricmont to expose the abuse of scientific concepts in the writings of today’s most fashionable postmodern thinkers.
From Jacques Lacan and Julia Kristeva to Luce Irigaray and Jean Baudrillard, the authors document the errors made by some postmodernists using science to bolster their arguments and theories. Witty and closely reasoned, Fashionable Nonsense dispels the notion that scientific theories are mere “narratives” or social constructions, and explores the abilities and the limits of science to describe the conditions of existence.
Book reviews
Richard Dawkins’ review of Intellectual Impostures by Alan Sokal and Jean Bricmont.
Lenz’s law
Lenz’s law demonstration

Lenz’s law is named after the physicist Heinrich Friedrich Emil Lenz (pronounced /ˈlɛnts/) who formulated it in 1834.
The direction of the electric current induced in a conductor by a changing magnetic field is such that the magnetic field created by the induced current opposes the initial changing magnetic field.
It is a qualitative law that specifies the direction of induced current.
This law tells us nothing about the current’s magnitude.
Lenz’s law predicts the direction of many effects in electromagnetism, such as:
- the direction of voltage induced in an inductor or wire loop by a changing current,
- the drag force that eddy currents exert on moving objects in a magnetic field.
Lenz’s law is not really a law of physics on its own. It is a phenomenon which can be predicted from a more general law of physics, Faraday’s law of induction.
Faraday’s law of induction is itself a subset of the even more fundamental Maxwell’s equations.
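For reference, Faraday’s law can be written compactly (this is the standard textbook form, added here for context); the minus sign in it is exactly Lenz’s law:

```latex
% Faraday's law of induction: the induced EMF equals the negative
% rate of change of magnetic flux through the loop. The minus sign
% is Lenz's law: the induced current opposes the change in flux.
\mathcal{E} = -\frac{d\Phi_B}{dt},
\qquad
\Phi_B = \int_S \mathbf{B} \cdot d\mathbf{A}
```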
Step-by-step explanation
Take a copper tube (conductive but non-magnetic). Drop a piece of steel down through the tube.
The piece of steel will fall through, as you might expect.
It accelerates very close to the acceleration due to gravity.
Only air friction and possible rubbing against the inside of the tube prevent it from reaching the acceleration due to gravity.

Now take the same copper tube and drop a strong magnet through it.
Neodymium or other rare-earth magnets work best. Now the magnet falls very slowly.
This is because the copper tube experiences a changing magnetic field from the falling magnet.
This changing magnetic field induces a current in the copper tube.

The induced current in the copper tube creates its own magnetic field,
one that opposes the change in the magnetic field that created it!
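To see how this slows the fall, here is a minimal simulation sketch (our illustration, not part of the archived lesson). It assumes the eddy-current drag on the magnet is proportional to its speed, F = −k·v, with made-up values for the magnet mass m and damping constant k:

```python
# Toy model of a magnet falling through a copper tube.
# Assumption: eddy-current drag is proportional to speed, F = -k*v.
# The values of m and k below are illustrative, not measured.
g = 9.8        # gravitational acceleration, m/s^2
m = 0.05       # magnet mass, kg (assumed)
k = 2.0        # eddy-current damping constant, kg/s (assumed)
dt = 0.001     # time step, s

v = 0.0
for step in range(3000):   # simulate 3 seconds of falling
    a = g - (k / m) * v    # net acceleration: gravity minus drag
    v += a * dt

print(f"speed after 3 s: {v:.3f} m/s")
print(f"terminal speed m*g/k: {m * g / k:.3f} m/s")
# With no magnet (k = 0) the speed after 3 s would be ~29.4 m/s;
# with eddy-current drag it levels off near 0.245 m/s.
```

With the drag turned on, the magnet settles almost immediately at the terminal speed m·g/k instead of accelerating at g.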

This lesson has been archived from ScienceJoyWagon and from regentsprep.org, Oswego City School District, NY.
TBA – create link to this in Electromagnetic Induction
Graveyard Spiral
In aviation, a graveyard spiral, or death spiral, is a dangerous spiral dive entered into accidentally by a pilot who is not trained or not proficient in instrument flight when flying in instrument meteorological conditions.

Graveyard spiral diagram from Figure 16-5 of the Federal Aviation Administration handbook, “Pilot’s Handbook of Aeronautical Knowledge”, 2008 edition
Graveyard spirals are most common in nighttime or poor weather conditions where no horizon exists to provide visual correction for misleading inner-ear cues.
Graveyard spirals are the result of several sensory illusions in aviation which may occur when the pilot is in IMC – instrument meteorological conditions. That means flying in bad weather, when one can’t see the ground or even the horizon, and thus must fly solely by using instruments.
In such conditions, it is possible to experience spatial disorientation and lose awareness of the aircraft’s attitude. In other words, the pilot loses the ability to judge the orientation of their aircraft due to the brain’s misperception of spatial cues.
The graveyard spiral consists of both physiological and physical components.
What is supposed to happen:
We think of our ear as an organ for hearing, but that’s only one small part of what it does. Your inner ear has a series of fluid-filled tubes which sense orientation, acceleration, and up from down. It lets you tell whether you are standing up or upside down, even if your eyes are closed.
Notice the three sets of fluid-filled tubes (semicircular canals). They are like the motion detectors in a Wii controller: since they are all perpendicular to each other, they tell your brain about motion in the X, Y, or Z direction.
Here you see what happens when you tilt your head down:
How does a pilot get disoriented, and tricked into performing a graveyard spiral?
These three sets of tubes are the equivalent of gyroscopes located in the X, Y and Z plane.
Each corresponds to the rolling, pitching, or yawing motions of an aircraft.
Ideally, as your airplane and body move, your inner ear sends correct signals to the brain, which then correctly interprets them. Thus you should feel whether you are right side up or upside down; whether you are banking right, or are flying level.
But this system evolved in our ancestors: primates who lived on the ground or spent some time in trees. The vast majority of their motion happened during the day, or at night when the moon was out (which offers plenty of light). Most motion of our ancestors was done with sight, not blind. But here we are dealing with pilots flying in IMC – instrument meteorological conditions – and evolution didn’t prepare our species for this kind of motion.
So when flying blind, our inner ear & brain don’t work perfectly. They can get tricked. People can end up feeling like they are level, when they are really turning, or even feel right-side-up when they are upside-down! You can read more details here.
There is a solution. A pilot must consciously override the instinct to judge orientation based on what they feel, and instead rely on the visual cues of the horizon and of the instruments in the airplane, until the brain once again adjusts.
Perception vs reality

Learning Standards
tba
The mechanics of the Nazaré Canyon wave
The Portuguese town of Nazaré can deliver 100-foot (30.4 meters) waves.
How can we explain the Nazaré Canyon geomorphologic phenomenon?
In the 16th century, the Portuguese people and army defended Nazaré from pirate attacks from the Promontório do Sítio, the cliff-top area located 110 meters above the beach.

A screenshot from the short film “Nazaré – Entre a Terra e o Mar”, showing what the canyon would look like if the sea were very clear and transparent.
Today, from this unique site, it is possible to watch the power of the Atlantic Ocean. If you face the salt water from the nearby castle, you can easily spot the famous big waves that pump the quiet village.
What are the mechanics of the Nazaré Canyon? Is there a clear explanation for the size of the local waves? First of all, let us underline the most common swell direction in the region: West and Northwest.
A few miles off the coast of Nazaré, there are drastic differences of depth between the continental shelf and the canyon. When swell heads to shore, it is quickly amplified where the two geomorphologic variables meet, causing the formation of big waves.
Furthermore, a water current is channeled by the shore – from North to South – in the direction of the incoming waves, additionally contributing to wave height. Nazaré holds the Guinness World Record for the largest wave ever surfed.
In conclusion, the difference in depths increases wave height, the canyon amplifies and converges the swell, and the local water current helps build the biggest wave in the world. Add a perfect wind speed and direction and welcome to Nazaré.
The Mechanics of the Nazaré Canyon Wave:
1. Swell refraction: the difference of depths between the continental shelf and the canyon changes swell speed and direction;
2. Rapid depth reduction: wave size builds gradually;
3. Converging wave: the wave from the canyon and the wave from the continental shelf meet and form a higher one;
4. Local water channel: a seashore channel drives water towards the incoming waves to increase their height (a rough shoaling sketch follows below the diagram).

a) Wave fronts, b) Head of the Nazaré Canyon, c) Praia do Norte
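A rough feel for the depth effect comes from shallow-water wave theory (standard textbook results, not from the Surfer Today article): waves travel at c = √(g·h), and Green’s law says wave height grows like h^(-1/4) as the depth h shrinks. The depths and heights below are illustrative numbers, not survey data:

```python
import math

# Shallow-water shoaling sketch (illustrative depths, not survey data).
# Wave speed: c = sqrt(g * h). Green's law: H2 = H1 * (h1 / h2) ** 0.25.
g = 9.8
h1, H1 = 200.0, 5.0   # assumed depth (m) and wave height (m) over the canyon head
for h2 in (100.0, 50.0, 20.0, 10.0):
    c = math.sqrt(g * h2)
    H2 = H1 * (h1 / h2) ** 0.25
    print(f"depth {h2:>5.0f} m: speed {c:5.1f} m/s, height {H2:4.1f} m")
# As the water shallows, the wave slows and steepens; where the fast deep-canyon
# swell meets the slower shelf swell, the two can add together.
```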
Article from Surfer Today, surfertoday.com/surfing/8247-the-mechanics-of-the-nazare-canyon-wave
____________________________
This section from telegraph.co.uk/news/earth/earthnews/10411252/How-a-100-foot-wave-is-created.html
Currents through the canyon combine with swell driven by winds from further out in the Atlantic to create waves that propagate at different speeds.
They converge as the canyon narrows and drive the swell directly towards the lighthouse that sits on the edge of Nazaré.
From the headwall to the coastline, the seabed rises gradually from around 32 feet to become shallow enough for the swell to break. Tidal conditions also help to increase the wave height.
According to Mr McNamara’s website charting the project he has been conducting, the waves produced here are “probably the biggest in all the world” for a sandy sea bed.
On Monday, the 80-mile-an-hour winds created by the St Jude’s Atlantic storm whipped up the swell to monstrous proportions, leading to waves of up to 100 feet tall.
The previous day as the storm gathered pace, waves of up to 80 feet high formed and British surfer Andrew Cotton managed to ride one of these.

Image from How a 100 foot wave is created, The Telegraph (UK).
_____________________________
This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (added Pub. L. 94-553, Title I, §101, Oct 19, 1976, 90 Stat 2546)
Blueberry Earth
Here’s a gedankenexperiment (that’s German for “thought experiment”) that ought to interest you.
A gedankenexperiment is a way that physicists ask questions about how something in our universe works, for the joy of working out its consequences. The experiments don’t need to be practical, although many do lead to advances in physics. Famous examples of gedankenexperiments that led to new ideas in physics include Schrödinger’s cat and Maxwell’s demon.
Blueberry Earth: The Delicious Thought Experiment That’s Roiling Planetary Scientists
“A roaring ocean of boiling jam, with the geysers of released air and steam likely ejecting at least a few berries into orbit.”
Sarah Zhang, The Atlantic, 8/2/2018

Image from pxhere.com, 517756, CC0 Public Domain
Sarah Zhang, in The Atlantic, 8/2/2018, writes
Can I offer you a thought experiment on what would happen if the Earth were replaced by “an equal volume of closely packed but uncompressed blueberries”? When Anders Sandberg saw this question, he could not let it go. The asker was one “billybodega,” who posted the scenario on Physics Stack Exchange. (Though the question was originally posed on Twitter by writer Sandra Newman.)
A moderator of the usually staid forum closed the discussion before Sandberg could reply. That didn’t matter. Sandberg, a researcher at Oxford’s Future of Humanity Institute, wrote a lengthy answer on his blog and then an even lengthier paper that he posted to arxiv.org, a repository for physics preprints that have not yet been peer reviewed. The result is a brilliant explanation of how planets form.
To begin: The 1.5 × 10^25 pounds of “closely packed but uncompressed” berries will start to collapse onto themselves and crush the berries deeper than 11.4 meters – or 37 feet – into a pulp. “Enormous amounts of air will be pushing out from the pulp as bubbles and jets, producing spectacular geysers,” writes Sandberg. What’s more, this rapid shrinking will release a huge amount of gravitational energy—equal to, according to Sandberg’s calculations, the energy output of the sun over 20 minutes. It’s enough to make the pulp boil. Behold:
“The result is that blueberry earth will turn into a roaring ocean of boiling jam, with the geysers of released air and steam likely ejecting at least a few berries into orbit. As the planet evolves a thick atmosphere of released steam will add to the already considerable air from the berries. It is not inconceivable that the planet may heat up further due to a water vapour greenhouse effect, turning into a very odd Venusian world.”
Deep under the roiling jam waves, the pressure is high enough that even the warm jam will turn to ice. Blueberry Earth will have an ice core 4,000 miles wide, by Sandberg’s calculations. “The end result is a world that has a steam atmosphere covering an ocean of jam on top of warm blueberry granita,” he writes.
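You can reproduce the order of magnitude of that “sun over 20 minutes” figure with a back-of-the-envelope estimate. The sketch below is our rough version, not Sandberg’s actual derivation: it treats blueberry earth as a uniform sphere whose density jumps from an assumed ~700 kg/m³ (packed berries plus trapped air) to ~1000 kg/m³ (jam), and uses the uniform-sphere binding energy U = (3/5)GM²/R:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
R_earth = 6.371e6  # Earth's radius, m
rho_berry = 700.0  # assumed density of packed blueberries + air, kg/m^3
rho_jam = 1000.0   # assumed density of the collapsed jam, kg/m^3

M = (4.0 / 3.0) * math.pi * R_earth**3 * rho_berry       # total berry mass
R_new = R_earth * (rho_berry / rho_jam) ** (1.0 / 3.0)   # radius after collapse

def binding_energy(mass, radius):
    """Gravitational binding energy of a uniform sphere: (3/5) G M^2 / R."""
    return 0.6 * G * mass**2 / radius

released = binding_energy(M, R_new) - binding_energy(M, R_earth)
sun_20_min = 3.8e26 * 20 * 60  # solar luminosity (W) times 20 minutes

print(f"energy released by the collapse: {released:.1e} J")
print(f"sun's output over 20 minutes:    {sun_20_min:.1e} J")
# Both come out near 5e29 J, the same ballpark as Sandberg's figure.
```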
The process is not so different from the birth of a planet out of a disc of rotating debris. The coalescing, the emergence of an atmosphere, the formation of a dense core—all of these happened at one point to the real Earth. And it is currently happening elsewhere in the universe, as exoplanets are forming around other stars in other galaxies.
What happens if the Earth instantly turned into a mass of blueberries? The Atlantic
An interview with the author on Slate.com
Blueberry Earth by Anders Sandberg, on arXiv
___________________
Why Old Physics Still Matters
By Chad Orzel, Forbes, 7/30/18
(The following is an approximation of what I will say in my invited talk at the 2018 Summer Meeting of the American Association of Physics Teachers. They encourage sharing of slides from the talks, but my slides for this talk are done in what I think of as a TED style, with minimal text, meaning that they’re not too comprehensible by themselves. So, I thought I would turn the talk into a blog post, too, maximizing the ratio of birds to stones…
(The full title of the talk is Why “Old Physics” Still Matters: History as an Aid to Understanding, and the abstract I sent in is:
A common complaint about physics curricula is that too much emphasis is given to “old physics,” phenomena that have been understood for decades, and that curricula should spend less time on the history of physics in order to emphasize topics of more current interest. Drawing on experience both in the classroom and in writing books for a general audience, I will argue that discussing the historical development of the subject is an asset rather than an impediment. Historical presentation is particularly useful in the context of quantum mechanics and relativity, where it helps to ground the more exotic and counter-intuitive aspects of those theories in a concrete process of observation and discovery.
The title of this talk refers to a very common complaint made about the teaching of physics, namely that we spend way too much time on “old physics,” and never get to anything truly modern. This is perhaps best encapsulated by Henry Reich of MinutePhysics, who made a video open letter to Barack Obama after his re-election noting that the most modern topics on the AP Physics exam date from about 1905.
This is a reflection of the default physics curriculum, which generally starts college students off with a semester of introductory Newtonian physics, which was cutting-edge stuff in the 1600s. The next course in the usual sequence is introductory E&M, which was nailed down in the 1800s, and shortly after that comes a course on “modern physics,” which describes work from the 1900s.
Within the usual “modern physics” course, the usual approach is also historical: we start out with the problem of blackbody radiation, solved by Max Planck in 1900, then move on to the photoelectric effect, explained by Albert Einstein in 1905, and then to Niels Bohr’s model of the hydrogen atom from 1913, and eventually matter waves and the Schrödinger equation, bringing us all the way up to the late 1920s.
It’s almost become cliché to note that “modern physics” richly deserves to be in scare quotes. A typical historically-ordered curriculum never gets past 1950, and doesn’t deal with any of the stuff that is exciting about quantum physics today.
This is the root of the complaint about “old physics,” and it doesn’t necessarily have to be this way. There are approaches to the subject that are, well, more modern. John Townsend’s textbook, for example, starts with the quantum physics of two-state systems, using electron spins as an example, and works things out from there. This is a textbook aimed at upper-level majors, but Leonard Susskind and Art Friedman’s Theoretical Minimum book uses essentially the same approach for a non-scientific audience. Looking at its table of contents, you can see that it deals with the currently hot topic of entanglement a few chapters before getting to particle-wave duality, flipping the historical order around and getting to genuinely modern approaches earlier.
There’s a lot to like about these books that abandon the historical approach, but when I sat down and wrote my forthcoming general-audience book on quantum physics, I ended up taking the standard historical approach: if you look at the table of contents, you’ll see it starts with Planck’s blackbody model, then Einstein’s introduction of photons, then the Bohr model, and so on.
This is not a decision made from inertia or ignorance, but a deliberate choice, because I think the historical approach offers some big advantages, not only in terms of making the specific physics content more understandable, but for boosting science more broadly. While there are good things to take away from the ahistorical approaches, they have to open with blatant assertions regarding the existence of spins. They’re presenting these as facts that simply have to be accepted as a starting point, and I think that not only loses some readers who will get hung up on that, it also goes a bit against the nature of science as a process for generating knowledge, not a collection of facts.
This historical approach gets to the weird stuff, but grounds it in very concrete concerns. Planck didn’t start off by asserting the existence of quantized energy; he started with a very classical attack on a universal phenomenon, namely the spectrum of light emitted by a hot object. Only after he failed to explain the spectrum by classical means did he resort to the quantum, assigning a characteristic energy to light that depends on the frequency. At high frequencies, the heat energy available to produce light is less than one “quantum” of light, which cuts off the light emitted at those frequencies, rescuing the model from the “ultraviolet catastrophe” that afflicted classical approaches to the problem.
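To make that cutoff concrete, here is a short sketch (ours, not from Orzel’s talk) comparing the classical Rayleigh-Jeans radiance, B = 2ν²kT/c², against Planck’s formula, B = (2hν³/c²)/(e^(hν/kT) − 1), for a 5000 K object; the temperature is just an illustrative choice:

```python
import math

h = 6.626e-34   # Planck constant, J*s
k = 1.381e-23   # Boltzmann constant, J/K
c = 2.998e8     # speed of light, m/s
T = 5000.0      # temperature of the glowing object, K (illustrative)

def rayleigh_jeans(nu):
    """Classical prediction: grows as nu^2 forever (ultraviolet catastrophe)."""
    return 2.0 * nu**2 * k * T / c**2

def planck(nu):
    """Planck's quantum formula: exponentially suppressed at high frequency."""
    return (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

for nu in (1e13, 1e14, 1e15, 1e16):  # infrared through ultraviolet
    print(f"nu = {nu:.0e} Hz: RJ = {rayleigh_jeans(nu):.2e}, "
          f"Planck = {planck(nu):.2e} W m^-2 Hz^-1 sr^-1")
# The two agree at low frequency and diverge wildly in the ultraviolet.
```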
Planck used this quantum idea as a desperate trick, but Einstein picked it up and ran with it, arguing that the quantum hypothesis Planck resorted to from desperation could explain another phenomenon, the photoelectric effect. Einstein’s simple “heuristic” works brilliantly, and was what officially won him the Nobel Prize. Niels Bohr took these quantum ideas and applied them to atoms, making the first model that could begin to explain the absorption and emission of light by atoms, which used discrete energy states for electrons within atoms, and light with a characteristic energy proportional to the frequency. And quantum physics was off and running.
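Einstein’s photoelectric relation is simple enough to check in a few lines: the maximum kinetic energy of an ejected electron is K_max = hf − φ, where φ is the metal’s work function. This sketch (ours) uses sodium’s commonly quoted work function of about 2.28 eV as an illustrative value:

```python
h_eV = 4.136e-15   # Planck constant in eV*s
phi = 2.28         # work function of sodium, eV (approximate textbook value)

for wavelength_nm in (650.0, 540.0, 400.0, 300.0):
    f = 2.998e8 / (wavelength_nm * 1e-9)   # frequency, Hz
    K_max = h_eV * f - phi                 # Einstein's photoelectric equation
    if K_max > 0:
        print(f"{wavelength_nm:.0f} nm light: electrons ejected, K_max = {K_max:.2f} eV")
    else:
        print(f"{wavelength_nm:.0f} nm light: no electrons, photon energy below phi")
# Below the threshold frequency no electrons emerge at all, no matter how
# bright the light: the key fact classical wave theory couldn't explain.
```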
This history is useful because it grounds an exceptionally weird subject in concrete solutions to concrete problems. Nobody woke up one morning and asserted the existence of particles that behave like waves and vice versa. Instead, physicists were led to the idea, somewhat reluctantly but inevitably, by rigorously working out the implications of specific experiments. Going through the history makes the weird end result more plausible, and gives future physicists something to hold on to as they start on the journey for themselves.
This historical approach also has educational benefits when applied to the other great pillar of “modern physics” classes, namely Einstein’s theory of special relativity. This is another subject that is often introduced in very abstract ways – envisioning a universe filled with clocks and meter sticks and pondering the meaning of simultaneity, or considering the geometry of spacetime. Again, there are good things to take away from this – I learned some great stuff from Takeuchi’s Illustrated Guide to Relativity and Cox and Forshaw’s Why Does E=mc²?. But for a lot of students, the abstraction of this approach leads to them thinking “Why in hell are we talking about this nonsense?”
Some of those concerns can be addressed by a historical approach. The most standard way of doing this is to go back to the Michelson-Morley experiment, started while Einstein was in diapers, which showed that the speed of light was constant. But more than that, I think it’s useful to bring in some actual history – I’ve found it helpful to draw on Peter Galison’s argument in Einstein’s Clocks, Poincaré’s Maps.
Galison notes that the abstract concerns about simultaneity that connect to relativity arise very directly from considering very concrete problems of timekeeping and telegraphy, used in surveying the planet to determine longitude, and establishing the modern system of time zones to straighten out the chaos that multiple incompatible local times created for railroads.
Poincaré was deeply involved in work on longitude and timekeeping, and these practical issues led him to think very philosophically about the nature of time and simultaneity, several years before Einstein’s relativity. Einstein, too, was in an environment where practical timekeeping issues would’ve come up with some regularity, which naturally leads to similar thoughts. And it wasn’t only those two – Hendrik Lorentz and George FitzGerald worked out much of the necessary mathematics for relativity on their own.
So, adding some history to discussions of relativity helps both ground what is otherwise a very abstract process and also helps reinforce a broader understanding of science as a process. Relativity, seen through a historical perspective, is not merely the work of a lone genius who was bored by his job in the patent office, but the culmination of a process involving many people thinking about issues of practical importance.
Bringing in some history can also have benefits when discussing topics that are modern enough to be newsworthy. There’s a big argument going on at the moment about dark matter, with tempers running a little high. On the one hand, some physicists question whether it’s time to consider alternative explanations; on the other, new observations continue to bolster the theory.
Dark matter is a topic that might very well find its way into classroom discussions, and it’s worth introducing a bit of the history to explore this. Specifically, it’s good to go back to the initial observations of galaxy rotation curves. The spectral lines emitted by stars and hot gas are redshifted by the overall motion of the galaxy, but also bent into a sort of S-shape by the fact that stars on one side tend to be moving toward us due to the galaxy’s rotation, and stars on the other side tend to be moving away. The difference between these lets you find the velocity of rotation as a function of distance from the center of the galaxy, and this turns out to be higher than can be explained by the mass we can see and the normal behavior of gravity.
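A quick sketch shows why those rotation curves were so surprising. If the visible mass were all there is, stars far from the center should orbit at the Keplerian speed v = √(GM/r), which falls off with distance; the observed curves stay roughly flat instead. The mass and radii below (ours, not Orzel’s) are illustrative round numbers, not fits to any real galaxy:

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
kpc = 3.086e19         # one kiloparsec in meters

M_visible = 1e11 * M_sun   # assumed visible mass interior to the orbits

for r_kpc in (5, 10, 20, 40):
    r = r_kpc * kpc
    v = math.sqrt(G * M_visible / r) / 1000.0  # Keplerian orbital speed, km/s
    print(f"r = {r_kpc:>2} kpc: expected v = {v:5.1f} km/s")
# Expected: speeds dropping like 1/sqrt(r) at large radius.
# Observed: curves that stay roughly flat (around 200 km/s) far out,
# the discrepancy that dark matter was proposed to explain.
```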
This work is worth introducing not only because these galaxy rotations are the crux of the matter for the current argument, but because they help make an important point about science in context. The initial evidence for something funny about these rotation curves came largely from work by Vera Rubin, who was a remarkable person. As a woman in a male-dominated field, she had to overcome many barriers along the course of her career.
Bringing up the history of dark matter observations is a natural means to discuss science in a broader social context, and the issues that Rubin faced and overcame, and how those resonate today. Talking about her work and history allows both a better grounding for the current dark matter fights, and also a chance to make clear that science takes place within and is affected by a larger societal context. That’s probably at least as important an issue to drive home as any particular aspect of the dark matter debate.
So, those are some examples of areas in which a historical approach to physics is actively helpful to students, not just a way to delay the teaching of more modern topics. By grounding abstract issues in concrete problems, making the collaborative and cumulative nature of science clear, and placing scientific discoveries in a broader social context, adding a bit of history to the classroom helps students get a better grasp on specific physics topics, and also on science as a whole.
About the author: Chad Orzel is Associate Professor in the Department of Physics and Astronomy at Union College
_______________________________________________________
The Momentum Principle Vs Newton’s 2nd Law
Practical problem solving: When do we use conservation of momentum to solve a problem? When do we use Newton’s laws of motion?

Sometimes we need only one or the other; other times either works equally well; and some problems require both approaches. Rhett Allain on Wired.com discusses this in “Physics Face Off: The Momentum Principle Vs Newton’s 2nd Law”.
__________________________
CONSIDER THE FOLLOWING physics problem.
An object with a mass of 1 kg and a velocity of 1 m/s in the x-direction has a net force of 1 Newton pushing on it (also in the x-direction). What will the velocity of the object be after 1 second? (Yes, I am using simple numbers—because the numbers aren’t the point.)
Let’s solve this simple problem two different ways. For the first method, I will use Newton’s Second Law. In one dimension, I can write this as:
F_net,x = m a_x
Using this equation, I can get the acceleration of the object (in the x-direction). I’ll skip the details, but it should be fairly easy to see that it would have an acceleration of 1 m/s^2. Next, I need the definition of acceleration (in the x-direction). Oh, and just to be clear—I’m trying to be careful about these equations since they are inherently vector equations.
a_x = Δv_x / Δt
The article continues here:
Physics Face Off: The Momentum Principle Vs Newton’s 2nd Law
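Both routes to the answer fit in a few lines of code. Here is a minimal sketch (ours, not Allain’s) of the example problem solved both ways:

```python
# Allain's example: m = 1 kg, v_x = 1 m/s, F_net_x = 1 N, for t = 1 s.
m, v0, F, t = 1.0, 1.0, 1.0, 1.0

# Method 1: Newton's second law. Find a = F/m, then update v = v0 + a*t.
a = F / m
v_newton = v0 + a * t

# Method 2: the momentum principle. p_final = p_initial + F*t, then v = p/m.
p_final = m * v0 + F * t
v_momentum = p_final / m

print(f"Newton's 2nd law:   v = {v_newton} m/s")
print(f"Momentum principle: v = {v_momentum} m/s")
# Both give 2.0 m/s, as they must: for constant mass the momentum
# principle and F = ma are the same statement.
```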
3D Color X-rays
What if X-rays could produce three dimensional color images?

This is now a reality, thanks to a New Zealand company that scanned, for the first time, a human body using a breakthrough colour medical scanner based on the Medipix3 technology developed at CERN. Father-and-son scientists Professors Phil and Anthony Butler from Canterbury and Otago Universities spent a decade building and refining their product.
Medipix is a family of read-out chips for particle imaging and detection. The original concept of Medipix is that it works like a camera, detecting and counting each individual particle hitting the pixels when its electronic shutter is open. This enables high-resolution, high-contrast, very reliable images, making it unique for imaging applications, particularly in the medical field.
Hybrid pixel-detector technology was initially developed to address the needs of particle tracking at the Large Hadron Collider, and successive generations of Medipix chips have demonstrated over 20 years the great potential of the technology outside of high-energy physics.
They use the spectroscopic information generated by the detector with mathematical algorithms to generate 3D images. The colours represent different energy levels of the X-ray photons as recorded by the detector. Hence, colors identify different components of body parts such as fat, water, calcium, and disease markers.
First 3D colour X-ray of a human using CERN technology, by Romain Muller, CERN.
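The color assignment can be pictured as photon counting per energy window. The sketch below is a toy illustration of that idea only, not the actual Medipix3 processing pipeline; the energy windows and tissue labels are invented for illustration:

```python
# Toy illustration of energy-binned photon counting (NOT the real
# Medipix3 pipeline). Each photon's energy (keV) falls into a window,
# and each window maps to a display color. Windows are illustrative.
ENERGY_BINS = [
    (10, 30, "soft tissue / water (red)"),
    (30, 60, "fat (yellow)"),
    (60, 90, "calcium / bone (white)"),
]

def classify(photon_energies_kev):
    """Count photons per energy window for one detector pixel."""
    counts = {label: 0 for _, _, label in ENERGY_BINS}
    for e in photon_energies_kev:
        for low, high, label in ENERGY_BINS:
            if low <= e < high:
                counts[label] += 1
                break
    return counts

# One pixel's worth of simulated photon hits, in keV:
hits = [22, 25, 41, 65, 70, 72, 80, 15, 33]
for label, n in classify(hits).items():
    print(f"{label}: {n} photons")
```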
How to teach AP physics
It’s easy to teach physics in a wordy and complicated way – but taking a concept and breaking it down into simple steps, and presenting ideas in a way that is easily comprehensible to the eager student, is more challenging.
Yet that is what Nobel prize winning physicist Richard Feynman excelled at. The same skills that make someone a good teacher also lead them to understand the topic more fully themselves. This was Feynman’s basic method of learning.

1) Develop an array of hands-on labs that allow one to study basic phenomena.
You can also use many wonderful online simulations, such as PhET or Physics Aviary.
2) Each day go over several problems in class. Students need to see a master teacher take what appears to be a complex word problem and turn it into equations.
3) Ensure that students take good notes. One way of doing this is to have an occasional surprise graded notebook check (say, twice per month).
4) Each week assign homework. Each day randomly call a few students to put one of their solutions on the board. Recall that the goal is not to get the correct numerical answer. (That can sometimes come by luck or cheating.) Focus on the derivation. Does the student understand which basic principles are involved?
5) Keep track of strengths and weaknesses: Is there a weakness in algebra, trigonometry, or geometry? When you see a pattern emerge, assign problem sets that require mastering the weak area – not to punish them, but to build skills. Start with a few very easy problems, and slowly build in complexity. Let them work in groups if you like.
6) Don’t drown yourself in paperwork: Don’t grade every problem, from every student, every day. You could easily work 24 hours a day and still have more work to do. Only collect & grade some percent of the homework.
7) Focus on simple drawings – or, for classes that use programming to simulate physics phenomena, simple animations (see the sketch after this list). Are the students capable of sketching free-body diagrams that strip away extraneous info? Can they diagram out all the forces on an object?
8) Give frequent assessments that are easy to grade.
9) Get books such as TIPERs for Physics, or Ranking Task Exercises in Physics. They are diagnostic tools to check for misconceptions. Call publishers for free sample textbooks and resources. For a textbook, I happen to like Giancoli Physics; its teacher solution manual is very well thought out.
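For point 7, a short projectile-motion script is the kind of “simple animation” a class can build and extend; this sketch (ours) uses arbitrary classroom numbers for the launch speed and angle:

```python
import math

# Simple projectile-motion simulation a class can build and extend
# (e.g., add air resistance). Initial values are arbitrary classroom numbers.
g = 9.8                      # m/s^2
speed, angle = 20.0, 45.0    # launch speed (m/s) and angle (degrees), assumed
vx = speed * math.cos(math.radians(angle))
vy = speed * math.sin(math.radians(angle))

x = y = t = 0.0
dt = 0.01                    # time step, s
while y >= 0.0:              # step until the projectile returns to the ground
    x += vx * dt
    vy -= g * dt
    y += vy * dt
    t += dt

print(f"flight time ~ {t:.2f} s, range ~ {x:.1f} m")
# Analytic check: t = 2*v*sin(theta)/g = 2.89 s; range = v^2*sin(2*theta)/g = 40.8 m
```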




