The mechanics of the Nazaré Canyon wave
The Portuguese town of Nazaré can deliver 100-foot (30-meter) waves.
How can we explain the Nazaré Canyon geomorphologic phenomenon?
In the 16th century, Portuguese townspeople and soldiers defended Nazaré from pirate attacks from the Promontório do Sítio, the cliff-top area located 110 meters above the beach.

A screenshot from the short film “Nazaré – Entre a Terra e o Mar”, showing what the canyon would look like if the sea were very clear and transparent.
Today, from this unique site, it is possible to watch the power of the Atlantic Ocean. Facing the water from the nearby castle, you can easily spot the famous big waves that pound the shore of the quiet village.
What are the mechanics of the Nazaré Canyon? Is there a clear explanation for the size of the local waves? First of all, let us underline the most common swell direction in the region: West and Northwest.
A few miles off the coast of Nazaré, there are drastic differences in depth between the continental shelf and the canyon. When a swell heads toward shore, it is quickly amplified where the two geomorphologic features meet, causing the formation of big waves.
Furthermore, a water current channeled along the shore – running from north to south – flows into the incoming waves, contributing additional wave height. Nazaré holds the Guinness World Record for the largest wave ever surfed.
In conclusion, the difference in depths increases wave height, the canyon focuses and converges the swell, and the local water current helps build the biggest waves in the world. Add the perfect wind speed and direction, and welcome to Nazaré.
The Mechanics of the Nazaré Canyon Wave:
1. Swell refraction: the difference in depth between the continental shelf and the canyon changes the swell's speed and direction;
2. Rapid depth reduction: as the seabed shallows abruptly near shore, the slowing swell builds in height (shoaling);
3. Converging wave: the wave from the canyon and the wave from the continental shelf meet and form a higher one;
4. Local water channel: a seashore channel drives water towards the incoming waves to increase their height;
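The mechanics above can be sketched quantitatively. A long wave in shallow water travels at c = √(g·d), so the deep canyon and the shallow shelf carry the same swell at very different speeds, which bends (refracts) the wave fronts; Green's law (H ∝ d^(−1/4)) shows how shoaling grows the height. A minimal Python sketch, treating the swell as a long wave throughout for illustration — the depths and offshore height below are assumptions, not surveyed values:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def shallow_water_speed(depth_m):
    """Phase speed of a long (shallow-water) wave: c = sqrt(g * d)."""
    return math.sqrt(g * depth_m)

# Illustrative depths (assumptions, not surveyed values):
canyon_depth = 1000.0   # m, inside the canyon head
shelf_depth = 50.0      # m, on the adjacent continental shelf

c_canyon = shallow_water_speed(canyon_depth)  # ~99 m/s
c_shelf = shallow_water_speed(shelf_depth)    # ~22 m/s

# The speed difference bends the wave fronts; Green's law
# (H proportional to d^(-1/4)) estimates how shoaling grows the height:
H_deep = 5.0  # m, assumed offshore wave height
H_shallow = H_deep * (canyon_depth / shelf_depth) ** 0.25
print(f"canyon speed {c_canyon:.0f} m/s, shelf speed {c_shelf:.0f} m/s")
print(f"shoaled height ~{H_shallow:.1f} m")
```

The ~5x speed difference is what steers canyon-borne energy into the slower shelf wave, and the two meeting is what the converging-wave step in the list describes.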

a) Wave fronts, b) Head of the Nazaré Canyon, c) Praia do Norte
Article from Surfer Today, surfertoday.com/surfing/8247-the-mechanics-of-the-nazare-canyon-wave
____________________________
This section from telegraph.co.uk/news/earth/earthnews/10411252/How-a-100-foot-wave-is-created.html
Currents through the canyon combine with swell driven by winds from further out in the Atlantic to create waves that propagate at different speeds.
They converge as the canyon narrows and drive the swell directly towards the lighthouse that sits on the edge of Nazaré.
From the headwall to the coastline, the seabed rises gradually from around 32 feet to become shallow enough for the swell to break. Tidal conditions also help to increase the wave height.
According to Mr McNamara’s website charting the project he has been conducting, the waves produced here are “probably the biggest in all the world” for a sandy sea bed.
On Monday the 80-mile-an-hour winds created by the St Jude’s Atlantic storm whipped the swell up to monstrous proportions, leading to waves of up to 100 feet tall.
The previous day as the storm gathered pace, waves of up to 80 feet high formed and British surfer Andrew Cotton managed to ride one of these.

Image from How a 100 foot wave is created, The Telegraph (UK)
_____________________________
This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546)
Blueberry Earth
Here’s a gedankenexperiment (that’s German for “thought experiment”) that ought to interest you.
A gedankenexperiment is a way that physicists ask questions about how something in our universe works, for the joy of working out its consequences. The experiments don’t need to be practical, although many have led to advances in physics. Famous examples of gedankenexperiments that led to new ideas in physics include Schrödinger’s cat and Maxwell’s demon.
Blueberry Earth: The Delicious Thought Experiment That’s Roiling Planetary Scientists
“A roaring ocean of boiling jam, with the geysers of released air and steam likely ejecting at least a few berries into orbit.”
Sarah Zhang, The Atlantic, 8/2/2018

Image from pxhere.com, 517756, CC0 Public Domain
Sarah Zhang, in The Atlantic, 8/2/2018, writes
Can I offer you a thought experiment on what would happen if the Earth were replaced by “an equal volume of closely packed but uncompressed blueberries”? When Anders Sandberg saw this question, he could not let it go. The asker was one “billybodega,” who posted the scenario on Physics Stack Exchange. (Though the question was originally posed on Twitter by writer Sandra Newman.)
A moderator of the usually staid forum closed the discussion before Sandberg could reply. That didn’t matter. Sandberg, a researcher at Oxford’s Future of Humanity Institute, wrote a lengthy answer on his blog and then an even lengthier paper that he posted to arxiv.org, a repository for physics preprints that have not yet been peer reviewed. The result is a brilliant explanation of how planets form.
To begin: The 1.5 × 10²⁵ pounds of “closely packed but uncompressed” berries will start to collapse onto themselves and crush the berries deeper than 11.4 meters – or 37 feet – into a pulp. “Enormous amounts of air will be pushing out from the pulp as bubbles and jets, producing spectacular geysers,” writes Sandberg. What’s more, this rapid shrinking will release a huge amount of gravitational energy—equal to, according to Sandberg’s calculations, the energy output of the sun over 20 minutes. It’s enough to make the pulp boil. Behold:
“The result is that blueberry earth will turn into a roaring ocean of boiling jam, with the geysers of released air and steam likely ejecting at least a few berries into orbit. As the planet evolves a thick atmosphere of released steam will add to the already considerable air from the berries. It is not inconceivable that the planet may heat up further due to a water vapour greenhouse effect, turning into a very odd Venusian world.”
Deep under the roiling jam waves, the pressure is high enough that even the warm jam will turn to ice. Blueberry Earth will have an ice core 4,000 miles wide, by Sandberg’s calculations. “The end result is a world that has a steam atmosphere covering an ocean of jam on top of warm blueberry granita,” he writes.
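The 11.4-meter crush depth quoted above follows from simple hydrostatics: berries are pulped below the depth where the weight of the berry column above exceeds their strength. A minimal sketch, where the bulk density and crushing pressure are assumed values chosen only to illustrate the calculation, not figures from Sandberg's paper:

```python
g = 9.81        # m/s^2
rho = 700.0     # kg/m^3 -- assumed bulk density of packed blueberries
p_crush = 78e3  # Pa -- assumed pressure at which a blueberry is crushed

# Hydrostatic pressure grows linearly with depth: P = rho * g * h,
# so berries are pulped below the depth where P exceeds their strength.
crush_depth = p_crush / (rho * g)
print(f"crush depth ~ {crush_depth:.1f} m")  # ~11.4 m
```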
The process is not so different from the birth of a planet out of a disc of rotating debris. The coalescing, the emergence of an atmosphere, the formation of a dense core—all of these happened at one point to the real Earth. And it is currently happening elsewhere in the universe, as exoplanets are forming around other stars in other galaxies.
What happens if the Earth instantly turned into a mass of blueberries? – The Atlantic
An interview with the author on Slate.com
Blueberry Earth by Anders Sandberg, on Arxiv
___________________
What did Earth look like millions of years ago?
Ever wonder what the Earth looked like before humans came along?

The 3D interactive website called Ancient Earth Globe lets you glimpse the world from space during the age of the dinosaurs — and more. Seeing the Earth at various points in geological history, from 750 million years ago to today, is an eye-opening activity to say the least. The website allows you to see the entire globe as it slowly rotates, or zoom in to see closer details of land and oceans. There’s also an option to remove clouds for an even better look.
(Text by Bonnie Burton, Cnet, 8/7/18, See what Earth looked like from space when it was ruled by dinosaurs)
You’re better than your last report card
Check out this disastrous report card. Yet John Gurdon went on to do well in college, and later became a Nobel Prize winner in Physiology or Medicine!

That’s because he had grit, moxie, steadfastness, backbone. When you tap into those qualities, you can achieve great things! Nick Collins writes:
At the age of 15, Prof Sir John Gurdon ranked last out of the 250 boys in his Eton year group at biology, and was in the bottom set in every other science subject.
Sixty-four years later he has been recognised as one of the finest minds of his generation after being awarded the £750,000 annual prize, which he shares with Japanese stem cell researcher Shinya Yamanaka.
Speaking after learning of his award in London on Monday, Sir John revealed that his school report still sits above his desk at the Gurdon Institute in Cambridge, which is named in his honour. While it might be less than complimentary, noting that for him to study science at University would be a “sheer waste of time”, Sir John said it is the only item he has ever framed.
… After receiving the report Sir John said he switched his attention to classics and was offered a place to study at Christ Church, Oxford, but was allowed to switch courses and read zoology instead because of a mix-up in the admissions office.
It was at Oxford as a postgraduate student that he published his groundbreaking research on genetics and proved for the first time that every cell in the body contains the same genes. He did so by taking a cell from an adult frog’s intestine, removing its genes and implanting them into an egg cell, which grew into a clone of the adult frog.
The idea was controversial at the time because it contradicted previous studies by much more senior scientists, and it was a decade before the then-graduate student’s work became widely accepted. But it later led directly to the cloning of Dolly the Sheep by Prof Ian Wilmut in 1996, and to the subsequent discovery by Prof Yamanaka that adult cells can be “reprogrammed” into stem cells for use in medicine. This means that cells from someone’s skin can be made into stem cells which in turn can turn into any type of tissue in the body, meaning they can replace diseased or damaged tissue in patients.
– The Telegraph (UK), Nick Collins, Oct 8, 2012
Great things happened because John had indefatigability – sustained enthusiastic action with unflagging vitality.

Jonathan Player. Rex Features/AP, 2003
Why Old Physics Still Matters
By Chad Orzel, Forbes, 7/30/18
(The following is an approximation of what I will say in my invited talk at the 2018 Summer Meeting of the American Association of Physics Teachers. They encourage sharing of slides from the talks, but my slides for this talk are done in what I think of as a TED style, with minimal text, meaning that they’re not too comprehensible by themselves. So, I thought I would turn the talk into a blog post, too, maximizing the ratio of birds to stones…)
(The full title of the talk is Why “Old Physics” Still Matters: History as an Aid to Understanding, and the abstract I sent in is:
A common complaint about physics curricula is that too much emphasis is given to “old physics,” phenomena that have been understood for decades, and that curricula should spend less time on the history of physics in order to emphasize topics of more current interest. Drawing on experience both in the classroom and in writing books for a general audience, I will argue that discussing the historical development of the subject is an asset rather than an impediment. Historical presentation is particularly useful in the context of quantum mechanics and relativity, where it helps to ground the more exotic and counter-intuitive aspects of those theories in a concrete process of observation and discovery.
The title of this talk refers to a very common complaint made about the teaching of physics, namely that we spend way too much time on “old physics,” and never get to anything truly modern. This is perhaps best encapsulated by Henry Reich of MinutePhysics, who made a video open letter to Barack Obama after his re-election noting that the most modern topics on the AP Physics exam date from about 1905.
This is a reflection of the default physics curriculum, which generally starts college students off with a semester of introductory Newtonian physics, which was cutting-edge stuff in the 1600s. The next course in the usual sequence is introductory E&M, which was nailed down in the 1800’s, and shortly after that comes a course on “modern physics,” which describes work from the 1900s.
Within the usual “modern physics” course, the usual approach is also historical: we start out with the problem of blackbody radiation, solved by Max Planck in 1900, then move on to the photoelectric effect, explained by Albert Einstein in 1905, and then to Niels Bohr’s model of the hydrogen atom from 1913, and eventually matter waves and the Schrödinger equation, bringing us all the way up to the late 1920s.
It’s almost become cliche to note that “modern physics” richly deserves to be in scare quotes. A typical historically-ordered curriculum never gets past 1950, and doesn’t deal with any of the stuff that is exciting about quantum physics today.
This is the root of the complaint about “old physics,” and it doesn’t necessarily have to be this way. There are approaches to the subject that are, well, more modern. John Townsend’s textbook for example, starts with the quantum physics of two-state systems, using electron spins as an example, and works things out from there. This is a textbook aimed at upper-level majors, but Leonard Susskind and Art Friedman’s Theoretical Minimum book uses essentially the same approach for a non-scientific audience. Looking at the table of contents of this, you can see that it deals with the currently hot topic of entanglement a few chapters before getting to particle-wave duality, flipping the historical order of stuff around, and getting to genuinely modern approaches earlier.
There’s a lot to like about these books that abandon the historical approach, but when I sat down and wrote my forthcoming general-audience book on quantum physics, I ended up taking the standard historical approach: if you look at the table of contents, you’ll see it starts with Planck’s blackbody model, then Einstein’s introduction of photons, then the Bohr model, and so on.
This is not a decision made from inertia or ignorance, but a deliberate choice, because I think the historical approach offers some big advantages, not only in making the specific physics content more understandable, but in boosting science more broadly. While there are good things to take away from the ahistorical approaches, they have to open with blatant assertions regarding the existence of spins. They present these as facts that simply have to be accepted as a starting point, and I think that not only loses some readers who will get hung up on that, it goes a bit against the nature of science as a process for generating knowledge, not a collection of facts.
This historical approach gets to the weird stuff, but grounds it in very concrete concerns. Planck didn’t start off by asserting the existence of quantized energy, he started with a very classical attack on a universal phenomenon, namely the spectrum of light emitted by a hot object. Only after he failed to explain the spectrum by classical means did he resort to the quantum, assigning a characteristic energy to light that depends on the frequency. At high frequencies, the heat energy available to produce light is less than one “quantum” of light, which cuts off the light emitted at those frequencies, rescuing the model from the “ultraviolet catastrophe” that afflicted classical approaches to the problem.
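The ultraviolet catastrophe, and Planck's cure for it, is easy to see numerically: the classical Rayleigh-Jeans formula grows without bound as ν², while Planck's formula matches it at low frequency and is exponentially cut off at high frequency. A short sketch (the temperature is an arbitrary example, not tied to any experiment in the text):

```python
import math

h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
kB = 1.381e-23  # Boltzmann constant, J/K

def planck(nu, T):
    """Planck spectral radiance B(nu, T): finite at all frequencies."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (kB * T))

def rayleigh_jeans(nu, T):
    """Classical prediction: grows as nu^2 without bound (the UV catastrophe)."""
    return 2 * nu**2 * kB * T / c**2

T = 5000.0  # K, an arbitrary example temperature
for nu in (1e13, 1e14, 1e15):  # infrared -> near-visible -> ultraviolet
    print(f"nu={nu:.0e} Hz  Planck={planck(nu, T):.3e}  RJ={rayleigh_jeans(nu, T):.3e}")
```

At the lowest frequency the two formulas agree to within a few percent; at the highest, Planck's exponential factor suppresses the radiance by orders of magnitude — exactly the cutoff described above.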
Planck used this quantum idea as a desperate trick, but Einstein picked it up and ran with it, arguing that the quantum hypothesis Planck resorted to from desperation could explain another phenomenon, the photoelectric effect. Einstein’s simple “heuristic” works brilliantly, and was what officially won him the Nobel Prize. Niels Bohr took these quantum ideas and applied them to atoms, making the first model that could begin to explain the absorption and emission of light by atoms, which used discrete energy states for electrons within atoms, and light with a characteristic energy proportional to the frequency. And quantum physics was off and running.
This history is useful because it grounds an exceptionally weird subject in concrete solutions to concrete problems. Nobody woke up one morning and asserted the existence of particles that behave like waves and vice versa. Instead, physicists were led to the idea, somewhat reluctantly but inevitably, by rigorously working out the implications of specific experiments. Going through the history makes the weird end result more plausible, and gives future physicists something to hold on to as they start on the journey for themselves.
This historical approach also has educational benefits when applied to the other great pillar of “modern physics” classes, namely Einstein’s theory of special relativity. This is another subject that is often introduced in very abstract ways– envisioning a universe filled with clocks and meter sticks and pondering the meaning of simultaneity, or considering the geometry of spacetime. Again, there are good things to take away from this– I learned some great stuff from Takeuchi’s Illustrated Guide to Relativity and Cox and Forshaw’s Why Does E=mc²?. But for a lot of students, the abstraction of this approach leads to them thinking “Why in hell are we talking about this nonsense?”
Some of those concerns can be addressed by a historical approach. The most standard way of doing this is to go back to the Michelson-Morley experiment, started while Einstein was in diapers, that proved that the speed of light was constant. But more than that, I think it’s useful to bring in some actual history– I’ve found it helpful to draw on Peter Galison’s argument in Einstein’s Clocks, Poincaré’s Maps.
Galison notes that the abstract concerns about simultaneity that connect to relativity arise very directly from considering very concrete problems of timekeeping and telegraphy, used in surveying the planet to determine longitude, and establishing the modern system of time zones to straighten out the chaos that multiple incompatible local times created for railroads.
Poincaré was deeply involved in work on longitude and timekeeping, and these practical issues led him to think very philosophically about the nature of time and simultaneity, several years before Einstein’s relativity. Einstein, too, was in an environment where practical timekeeping issues would’ve come up with some regularity, which naturally leads to similar thoughts. And it wasn’t only those two – Hendrik Lorentz and George FitzGerald worked out much of the necessary mathematics for relativity on their own.
So, adding some history to discussions of relativity helps both ground what is otherwise a very abstract process and also helps reinforce a broader understanding of science as a process. Relativity, seen through a historical perspective, is not merely the work of a lone genius who was bored by his job in the patent office, but the culmination of a process involving many people thinking about issues of practical importance.
Bringing in some history can also have benefits when discussing topics that are modern enough to be newsworthy. There’s a big argument going on at the moment about dark matter, with tempers running a little high. On the one hand, some physicists question whether it’s time to consider alternative explanations, while other observations bolster the theory.
Dark matter is a topic that might very well find its way into classroom discussions, and it’s worth introducing a bit of the history to explore this. Specifically, it’s good to go back to the initial observations of galaxy rotation curves. The spectral lines emitted by stars and hot gas are redshifted by the overall motion of the galaxy, but also bent into a sort of S-shape by the fact that stars on one side tend to be moving toward us due to the galaxy’s rotation, and stars on the other side tend to be moving away. The difference between these lets you find the velocity of rotation as a function of distance from the center of the galaxy, and this turns out to be higher than can be explained by the mass we can see and the normal behavior of gravity.
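The discrepancy in the rotation curves is easy to illustrate. If all the visible mass sat near the galactic center, Newtonian gravity would predict orbital speeds falling off as v = √(GM/r); observed curves stay roughly flat instead. A toy sketch with an assumed visible mass, purely for illustration:

```python
import math

G = 6.674e-11     # gravitational constant, SI units
M_sun = 1.989e30  # kg
kpc = 3.086e19    # meters per kiloparsec

# Assumed toy number: all visible mass (5e10 solar masses) concentrated
# near the center; outside it, Newton predicts a Keplerian decline.
M_visible = 5e10 * M_sun

def keplerian_speed(r_m):
    """v = sqrt(G M / r): the orbital speed visible mass alone predicts."""
    return math.sqrt(G * M_visible / r_m)

for r_kpc in (5, 10, 20, 40):
    v = keplerian_speed(r_kpc * kpc) / 1000  # km/s
    print(f"r = {r_kpc:4d} kpc  predicted v = {v:5.1f} km/s")
# Observed curves instead stay roughly flat (order 200 km/s) out to large
# radii -- the discrepancy that dark matter was proposed to explain.
```

The predicted speed halves every time the radius quadruples; the measured S-shaped spectral lines described above show no such decline, which is the crux of the current argument.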
This work is worth introducing not only because these galaxy rotations are the crux of the matter for the current argument, but because they help make an important point about science in context. The initial evidence for something funny about these rotation curves came largely from work by Vera Rubin, who was a remarkable person. As a woman in a male-dominated field, she had to overcome many barriers along the course of her career.
Bringing up the history of dark matter observations is a natural means to discuss science in a broader social context, and the issues that Rubin faced and overcame, and how those resonate today. Talking about her work and history allows both a better grounding for the current dark matter fights, and also a chance to make clear that science takes place within and is affected by a larger societal context. That’s probably at least as important an issue to drive home as any particular aspect of the dark matter debate.
So, those are some examples of areas in which a historical approach to physics is actively helpful to students, not just a way to delay the teaching of more modern topics. By grounding abstract issues in concrete problems, making the collaborative and cumulative nature of science clear, and placing scientific discoveries in a broader social context, adding a bit of history to the classroom helps students get a better grasp on specific physics topics, and also on science as a whole.
About the author: Chad Orzel is Associate Professor in the Department of Physics and Astronomy at Union College
_______________________________________________________
The Momentum Principle Vs Newton’s 2nd Law
Practical problem solving: When do we use conservation of momentum to solve a problem? When do we use Newton’s laws of motion?

Sometimes only one approach will do; other times either works equally well; and some problems require both. Rhett Allain discusses this on Wired.com in “Physics Face Off: The Momentum Principle Vs Newton’s 2nd Law.”
__________________________
CONSIDER THE FOLLOWING physics problem.
An object with a mass of 1 kg and a velocity of 1 m/s in the x-direction has a net force of 1 Newton pushing on it (also in the x-direction). What will the velocity of the object be after 1 second? (Yes, I am using simple numbers—because the numbers aren’t the point.)
Let’s solve this simple problem two different ways. For the first method, I will use Newton’s Second Law. In one dimension, I can write this as:
F_net,x = m · a_x
Using this equation, I can get the acceleration of the object (in the x-direction). I’ll skip the details, but it should be fairly easy to see that it would have an acceleration of 1 m/s². Next, I need the definition of acceleration (in the x-direction). Oh, and just to be clear—I’m trying to be careful about these equations since they are inherently vector equations.
a_x = Δv_x / Δt
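Both routes give the same answer for this simple problem. A minimal sketch of the two calculations side by side, using the numbers from the problem statement:

```python
# Same 1-D problem solved two ways (numbers from the article).
m, v1, F, dt = 1.0, 1.0, 1.0, 1.0  # kg, m/s, N, s

# Method 1: Newton's second law, then the definition of acceleration.
a = F / m                # a_x = F_net,x / m = 1 m/s^2
v2_newton = v1 + a * dt  # v2 = v1 + a * dt

# Method 2: the momentum principle, p2 = p1 + F * dt.
p2 = m * v1 + F * dt
v2_momentum = p2 / m

print(v2_newton, v2_momentum)  # both give 2.0 m/s
```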
The article continues here:
Physics Face Off: The Momentum Principle Vs Newton’s 2nd Law
3D Color X-rays
What if X-rays could produce three dimensional color images?

This is now a reality, thanks to a New Zealand company that scanned, for the first time, a human body using a breakthrough colour medical scanner based on the Medipix3 technology developed at CERN. Father-and-son scientists Professors Phil and Anthony Butler from Canterbury and Otago Universities spent a decade building and refining their product.
Medipix is a family of read-out chips for particle imaging and detection. The original concept of Medipix is that it works like a camera, detecting and counting each individual particle hitting the pixels when its electronic shutter is open. This enables high-resolution, high-contrast, very reliable images, making it unique for imaging applications in particular in the medical field.
Hybrid pixel-detector technology was initially developed to address the needs of particle tracking at the Large Hadron Collider, and successive generations of Medipix chips have demonstrated over 20 years the great potential of the technology outside of high-energy physics.
They use the spectroscopic information generated by the detector with mathematical algorithms to generate 3D images. The colours represent different energy levels of the X-ray photons as recorded by the detector. Hence, colours identify different components of body parts such as fat, water, calcium, and disease markers.
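The core idea of spectral (energy-resolved) photon counting can be sketched in a few lines: each detected photon's measured energy is dropped into an energy bin, and the per-pixel bin counts become color channels. The bin edges and photon energies below are invented for illustration and are not Medipix3 specifics:

```python
# Toy sketch of energy-resolved photon counting. Each detected photon's
# energy is assigned to an energy bin; per-pixel bin counts then serve
# as color channels. All numbers here are invented for illustration.
bin_edges = [20, 40, 60, 80]  # keV thresholds -> three energy channels

def channel(energy_kev):
    """Return the index of the energy channel a photon falls into."""
    for i in range(len(bin_edges) - 1):
        if bin_edges[i] <= energy_kev < bin_edges[i + 1]:
            return i
    return None  # outside the measured range

# Photons hitting one pixel (assumed energies, keV):
hits = [25, 33, 47, 52, 71, 78, 39]
counts = [0, 0, 0]
for e in hits:
    ch = channel(e)
    if ch is not None:
        counts[ch] += 1
print(counts)  # per-channel counts, mapped to e.g. R, G, B intensities
```

Because materials like fat, water, and calcium attenuate different photon energies differently, the ratio between channels is what lets the reconstruction distinguish them.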
First 3D colour X-ray of a human using CERN technology, by Romain Muller. CERN.
How to teach AP physics
It’s easy to teach physics in a wordy and complicated way – but taking a concept and breaking it down into simple steps, presenting ideas in a way that is easily comprehensible to the eager student, is more challenging.
Yet that is what Nobel prize-winning physicist Richard Feynman excelled at. The same skills that make one a good teacher also lead one to understand the topic more fully oneself. This was Feynman’s basic method of learning.

1) Develop an array of hands-on labs that allow one to study basic phenomena.
You can also use many wonderful online simulations, such as PhET or Physics Aviary.
2) Each day, go over several problems in class. Students need to see a master teacher take what appears to be a complex word problem and turn it into equations.
3) Ensure that students take good notes. One way of doing this is the occasional surprise graded notebook check (say, twice per month).
4) Each week, assign homework. Each day, randomly call on a few students to put one of their solutions on the board. Recall that the goal is not to get the correct numerical answer. (That can sometimes come by luck or cheating.) Focus on the derivation: does the student understand which basic principles are involved?
5) Keep track of strengths and weaknesses: Is there a weakness in algebra, trigonometry, or geometry? When you see a pattern emerge, assign problem sets that require mastering the weak area – not to punish them, but to build skills. Start with a few very easy problems, and slowly build in complexity. Let them work in groups if you like.
6) Don’t drown yourself in paperwork: Don’t grade every problem, from every student, every day. You could easily work 24 hours a day and still have more work to do. Only collect & grade some percent of the homework.
7) Focus on simple drawings – or, for classes that use programming to simulate physical phenomena, simple animations. Are the students capable of sketching free-body diagrams that strip away extraneous info? Can they diagram all the forces on an object?
8) Give frequent assessments that are easy to grade.
9) Get books such as TIPERs for Physics or Ranking Task Exercises in Physics. They are diagnostic tools to check for misconceptions. Call publishers for free sample textbooks and resources. For a textbook, I happen to like Giancoli’s Physics; its teacher solution manual is very well thought out.
Graboids
In this lesson students view scenes from the Tremors series of movies. Students take notes on the animal’s biology: external anatomy, internal anatomy, lifecycle and behavior.
We then use scientific reasoning to infer the evolution and anatomy of these creatures.

Lifecycle
Based on movie scenes students can explain the graboid lifecycle.

Graboids and sound waves
How do graboids navigate underground and detect food sources?
Sonar is the use of sound to navigate, to communicate, or to detect objects – such as another vessel – on or under the surface of the water.
Active sonar uses a sound transmitter and a receiver. Active sonar creates a pulse of sound, often called a “ping”, and then listens for reflections (echo) of the pulse.
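The ranging arithmetic behind a ping is simple: the echo's round-trip travel time, times the speed of sound, halved. A minimal sketch (the sound speed is a typical seawater value, an assumption for illustration):

```python
# Active sonar ranging: a ping travels out and back, so the one-way
# distance is (sound speed * round-trip time) / 2.
c_water = 1500.0  # m/s, typical speed of sound in seawater (assumed)

def target_range(echo_delay_s):
    """Distance to the reflecting object from the ping's round-trip time."""
    return c_water * echo_delay_s / 2

print(target_range(2.0))  # a 2-second echo delay -> 1500 m away
```

A graboid "listening" for ground vibrations would face the same geometry, only with the much faster speed of sound in rock and soil.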
Here we see an animation from the US Navy, made in the 1940s, showing how sonar works.

Some animals have natural sonar, such as bats and whales.
The Tremors movies imply that graboids have a similar way of detecting prey.

Internal anatomy
As this is a science fiction movie, these creatures aren’t real. But the filmmakers made it clear that these animals would have realistic internal as well as external anatomy.
In this section we ask students to speculate what kind of organs a creature like this would or wouldn’t have, based on the available information.
Students work in groups to come up with answers – and they have to justify their conclusions.
For instance, they might claim that the animal has no skeleton – if so, they must explain why they conclude this. Or they might claim it does have a skeleton – if so, they must likewise explain that conclusion.

from deviantart.com/christopher-stoll
Evolution of graboids
Based on the observed characteristics, what animals are graboids most closely related to?
What animals in the past might they have evolved from?
Could students make a speculative family tree/cladogram, showing the possible evolution of graboids?
clades & phylogenies
Rotating clades around a node produces equivalent phylogenies.
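If students build their cladograms digitally, a speculative family tree can be represented as nested tuples and printed in Newick format, the plain-text notation most phylogenetics tools read. The groupings below are illustrative assumptions for classroom discussion, not established taxonomy:

```python
# Represent a cladogram as nested tuples; leaves are strings.
# Newick format writes each clade as a parenthesized, comma-separated group.

def to_newick(node):
    """Recursively convert a nested-tuple tree into a Newick string."""
    if isinstance(node, str):
        return node
    return "(" + ",".join(to_newick(child) for child in node) + ")"

# Assumed tree: graboids as distant relatives of cephalopods,
# with annelids and arthropods as successively more distant outgroups.
tree = ("arthropods", ("annelids", (("squid", "octopus"), "graboid")))

print(to_newick(tree) + ";")
# (arthropods,(annelids,((squid,octopus),graboid)));
```

Because rotating clades around a node yields an equivalent phylogeny, students can check that `(("octopus", "squid"), "graboid")` encodes the same relationships.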
Gradualism vs. Punctuated Equilibrium
Introductory material
Create a packet to be given to students.
Graboids reference material
This material has been archived from the SciFi.com website.
You may choose to make some of this material available to students during or after viewing the scenes. However, withhold the majority of this material until after the students finish the section in which they speculate and justify their conclusions.
Archive.org Notes from Tremors, The Series (SciFi.com)
1.0 Introduction
2.0 External Anatomy
2.1 – Graboids
2.2 – Shriekers
2.3 – AssBlasters
3.0 Internal Anatomy
4.0 Ecology
5.0 Evolutionary Overview
5.1 – Evolutionary History
5.2 – Issues of Reproduction
5.3 – Development of Shrieker Legs
5.4 – AssBlasters’ Reproductive Role
5.5 – AssBlaster Biochemistry
6.0 Hypothetical Taxonomy
7.0 Historical and Mythological References
8.0 Threat Assessment
Monster Guide (Tremors/monsters)
Introduction
External anatomy
Internal anatomy
Ecology
Conjectural Evolutionary History
Taxonomy/Classification

The proper taxonomical classification of Graboids, Shriekers and AssBlasters was a curious challenge because the Graboid species does not clearly belong to any previously known Family grouping.
To complete its zoological nomenclature, we were forced to look much deeper into the evolutionary tree than we had expected.
Graboids have been described by some witnesses as being “reptilian,” but this is probably no more accurate than describing the AssBlaster as a bird because it flies or the Shrieker as a frog because it undergoes a metamorphosis.
The Graboid does not appear to possess any of the features of true reptiles, though the Shrieker and AssBlaster, curiously, each possess some, such as clawed toes. However, they share just as many similarities with birds and mammals, so a reptilian classification was not indicated.
In fact, Graboids, Shriekers and AssBlasters do not appear to belong to any existing class of vertebrates. They clearly are not fish, and it takes only a slightly more professional observer to see that they are also neither amphibians nor reptiles, neither birds nor mammals.
It is doubtful that they are even vertebrates, although they do seem to possess endoskeleton-like structures. Vertebrates, it should be stressed, derive from a line of creatures possessing a notochord, which gave rise to fish.
Also descended from notochords are amphibians, reptiles, birds and mammals. All these different forms share a heritage of organs and anatomy, ranging from bilateral symmetry to a similarity of organ/tissue types and functions.
The three known forms of genus Caederus lack many of the features inherent to members of the vertebrate line. Most obviously, they lack eyes. Their multistage life cycle is similarly dissociated from known vertebrate reproductive models. In fact, research has not yet yielded any proof that the Graboid species is connected to the vertebrate line.
Regardless, the Graboid, the Shrieker and the AssBlaster are all highly sophisticated lifeforms, which implies that they represent the culmination of a long evolutionary history. Only three other non-vertebrate lines of animal life on Earth have reached a similar level of sophistication: arthropods, annelids and mollusks.
Arthropods (including insects, arachnids, crustaceans and other forms) typically have hard, segmented or jointed exoskeletons, and generally remain small in size when compared with vertebrates. Most arthropods evolved with multiple external limbs and some form of eyes. All these traits are inconsistent with the speculated evolution of C. americana.
Available evidence suggests the Graboid also is not a member of the phylum Annelida. Annelids (earthworms and their relatives) share some traits with the Graboid, such as an underground habitat, stiff hairs in the skin to assist in locomotion, and an ability to extract nutrients directly from the soil.
No annelid, however, has ever possessed anything resembling an endoskeleton or semirigid support system, which C. americana is believed to possess.
In addition, C. americana and C. mexicana possess other features not found in annelids: segmented jaws; prehensile mouth tentacles; a multiphase life cycle; and thermal sensors.
The Graboid is also larger and more sophisticated than any known annelid, making it highly unlikely that genus Caederus belongs in this subphylum.
Genus Caederus might be unique, in a class of its own. It might even be extraterrestrial. More likely, though, it is a form of mollusk.
The phylum Mollusca is one of the oldest, most diversified and successful on Earth. It includes clams, mussels, snails, slugs, cuttlefish, nautili, squids and octopi. The most advanced mollusks are the cephalopods (octopi and squids), which share many important features with the Graboid.
Cephalopods have multiple tentacles, ranging from eight to dozens, all surrounding a mouth or gullet — an arrangement that resembles the Graboid’s tentacled mouth structure. Furthermore, some cephalopods (such as the prehistoric ammonites or the modern nautilus) have external shells or carapaces, as does the Graboid.
At least one cephalopod, the cuttlefish, has a rigid internal bony structure, the cuttlebone, comparable to the Graboid's carapace. In addition, octopi have enough control over the muscles of their skin to change their texture from craggy to smooth, suggesting a skin musculature similar to that of the Graboid, although of different degree.
The “wing structure” of the AssBlaster bears at least a passing resemblance to the rippling “fins” of the cuttlefish.
Although no known aquatic cephalopod ejects combustible compounds, it is a compelling similarity that several eject prodigious clouds of ink as a defensive mechanism, and some have a hydrojet-like propulsive organ that resembles the AssBlaster’s dramatically fiery self-launching ability.
Cephalopods are water-breathers, but other mollusks, including snails and slugs, exist on dry land. Many cephalopods, as well as certain bivalve mollusks, are able to survive for short durations out of the water.
Cephalopods are the most intelligent non-vertebrate animals known to exist. Studies have indicated that they might possess a capacity for memory, learning and problem-solving, and witnesses have reported signs of social behavior among groups of squid and octopi.
Cephalopods might well be as intelligent as some species of birds or mammals; certainly, they seem to show a level of “smart” behavior similar to that of genus Caederus.
Finally, cephalopods have managed to achieve significant size and mass in aquatic habitats. The giant squid, for instance, is a deep-ocean-dweller that might rival the Graboid in size. The largest known giant squid have weighed several tons and stretched up to 55 feet from their flukes to the extremity of their longest tentacle.
Although the Graboid and its related forms possess features previously undocumented among cephalopods (such as jointed limbs, endoskeletons and a multiphase life cycle), these differences do not disqualify their categorization as mollusks.
For example, bivalve mollusks (clams and mussels) possess hinged shells; it is not unreasonable to assume that the Graboid family of mollusks may have developed hinged internal shells and eventually evolved more complex internal skeletons.
However, no mollusk has evolved anything resembling the thermal sensors of the Shrieker and AssBlaster; likewise, the incendiary metabolism of the AssBlaster is unique to the Graboid species. Furthermore, no cephalopod or other mollusk possesses a life cycle nearly as complex as that of genus Caederus.
Still, the shared traits documented above and elsewhere in this document are significant enough to justify a tentative classification of the Graboid, the Shrieker and the AssBlaster as distant, terrestrial relatives of class Cephalopoda.
Historical and mythological references
Concluding thoughts and threat assessment
Additional resources
Graboid article (Tremors.Fandom.Com)
Learning Standards
This unit addresses critical thinking skills in the Next Generation Science Standards, which are based on "A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas," by the National Research Council of the National Academies. In this document we read:
“Through discussion and reflection, students can come to realize that scientific inquiry embodies a set of values. These values include respect for the importance of logical thinking, precision, open-mindedness, objectivity, skepticism, and a requirement for transparent research procedures and honest reporting of findings.”
Next Generation Science Standards: Science & Engineering Practices
● Ask questions that arise from careful observation of phenomena, or unexpected results, to clarify and/or seek additional information.
● Ask questions that arise from examining models or a theory, to clarify and/or seek additional information and relationships.
● Ask questions to determine relationships, including quantitative relationships, between independent and dependent variables.
● Ask questions to clarify and refine a model, an explanation, or an engineering problem.
● Evaluate a question to determine if it is testable and relevant.
● Ask questions that can be investigated within the scope of the school laboratory, research facilities, or field (e.g., outdoor environment) with available resources and, when appropriate, frame a hypothesis based on a model or theory.
● Ask and/or evaluate questions that challenge the premise(s) of an argument, the interpretation of a data set, or the suitability of the design
Science and engineering practices: NSTA National Science Teacher Association
Next Gen Science Standards Appendix F: Science and engineering practices
Galvanic cell
A galvanic cell is a device in which chemical energy is converted into electric energy through the transfer of electrons. This is accomplished through a redox reaction.
The reduction half-reaction of the redox reaction occurs at the cathode (RED CAT).
The oxidation half-reaction occurs at the anode (AN OX).
To maintain the flow of electrons, something is needed to transfer positive charge. This can be accomplished in two ways:
(1) A salt bridge. This allows the transfer of positive charge through the movement of positive ions. In the example below:
Copper is the cathode in the cathode half-cell.
Here is where the reduction of Cu2+ ion to Cu metal occurs.
Zn metal is the anode in the anode half-cell.
Here is where the oxidation of Zn to Zn2+ ion occurs.
Other ions are present for charge neutralization, ionic conduction, and completion of the circuit.
This is the basis for most batteries.
You can usually see the + marked on the battery’s cathode, while the other end is the anode.
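As a minimal sketch of the arithmetic behind this, the standard cell potential of the Zn/Cu (Daniell) cell can be computed from tabulated standard reduction potentials; the values below are common textbook figures (volts, at 25 °C):

```python
# Standard cell potential: E°(cell) = E°(cathode) - E°(anode),
# using standard *reduction* potentials for both half-reactions.

STANDARD_REDUCTION_POTENTIAL = {
    "Cu2+ + 2e- -> Cu": +0.34,  # reduction at the cathode (RED CAT)
    "Zn2+ + 2e- -> Zn": -0.76,  # runs in reverse at the anode (AN OX)
}

def cell_potential(e_cathode, e_anode):
    """E°cell = E°cathode - E°anode; positive means a spontaneous reaction."""
    return e_cathode - e_anode

e_cell = cell_potential(
    STANDARD_REDUCTION_POTENTIAL["Cu2+ + 2e- -> Cu"],
    STANDARD_REDUCTION_POTENTIAL["Zn2+ + 2e- -> Zn"],
)
print(round(e_cell, 2))  # 1.1 (volts)
```

A positive cell potential is why this pairing works as a battery: the reaction runs spontaneously and drives electrons through the external wire.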

As we said above, to maintain the flow of electrons something is needed to transfer positive charge.
Another way to do this is to use a porous disk:

Electrodes are not stable
Electrodes slowly corrode.
Here is an example from a Zinc and Copper Galvanic cell
Here, electrons flow through the wire (above the solution) from Zn to Cu.

Because Zn is a more active metal than Cu, it tends to lose electrons.
So the Zn electrode is oxidized: each original Zn atom produces one Zn2+ ion and 2 free electrons.
The Zn2+ ion breaks away from the electrode and floats off into the solution.

tba
Pure metals corrode because they aren’t stable
Why do the electrodes corrode? Well, the real question is, "Why don't all metals corrode?"
Look around you: which metals don't corrode (rust)? Only gold, platinum and a few others. Every other metal does.
Look for pure metals in nature... good luck, you won't find any. They're all already chemically bound to other substances. Instead of finding copper, we find copper ore. The same goes for iron, or anything else.
How do we get pure metals, then? We need to expend a lot of energy to separate the metal that we want from the other atoms.
Here's the physics explanation of why this is so. It has been excerpted and adapted from Corrosion of metals (author unknown).
Pure metals contain more bound energy, representing a higher energy state than the one they have in nature as sulphides or oxides.

All material in the universe strives to return to its lowest energy state.
Same for metals. They tend to revert to their lowest energy state which they had as sulphides or oxides. They revert to a low energy level by corrosion.
For batteries, we see electrochemical corrosion, which takes place in an aqueous environment.
All metals in dry air are covered by a very thin layer of oxide, about 100 Å (10⁻² µm) thick. This layer is built up by chemical corrosion with the oxygen in the air. At very high temperatures, the reaction with the oxygen in the air can continue without restraint and the metal will rapidly be transformed into an oxide.

At room temperature the reaction stops when the layer is thin. These thin layers of oxide can protect the metal against continued attack, e.g. in a water solution. In actual fact, it is these layers of oxide and/or products of corrosion formed on the surface of the metal that protect the metal from continued attack, to a far greater extent than the corrosion resistance of the metal itself.
These layers of oxide may be more or less durable in water, for instance. We know that plain carbon steel corrodes faster in water than stainless steel. The difference depends on the composition and the penetrability of their respective oxide layers. The following description of the corrosion phenomenon will only deal with electrochemical corrosion, i.e. wet corrosion.
Corrosion cells
How do metals corrode in liquids? Let us illustrate this using a corrosion phenomenon called bimetal corrosion, or galvanic corrosion. The bimetal corrosion cell can, for example, consist of a steel plate and a copper plate in electrical contact with one another and immersed in an aqueous solution (electrolyte).
The electrolyte contains dissolved oxygen from the air and dissolved salt. If a lamp is connected between the steel plate and the copper plate, it will light up. This indicates that current is flowing between the metal plates. The copper will be the positive electrode and the steel will be the negative electrode.

The driving force of the current is the difference in electrical potential between the copper and the steel. The circuit must be closed and current will consequently flow in the liquid (electrolyte) from the steel plate to the copper plate. The flow of current takes place by the positively charged iron atoms (iron ions) leaving the steel plate and the steel plate corrodes.
The corroding metal surface is called the anode. Oxygen and water are consumed at the surface of the copper plate and hydroxyl ions (OH-), which are negatively charged, are formed. The negative hydroxyl ions “neutralize” the positively charged iron atoms. The iron and hydroxyl ions form ferrous hydroxide (rust).

In the corrosion cell described above, the copper metal is called the cathode. Both metal plates are referred to as electrodes and the definition of the anode and the cathode are given below.
Anode: Electrode from which positive current flows into an electrolyte.
Cathode: Electrode through which positive electric current leaves an electrolyte.
When positive iron ions go into solution from the steel plate, electrons remain in the metal and flow through the external circuit in the direction opposite to the conventional (positive) current.
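A minimal sketch of how tabulated standard reduction potentials predict which electrode in a bimetal couple corrodes: the metal with the more negative potential becomes the anode. The values below are common textbook figures in volts:

```python
# In a galvanic (bimetal) corrosion cell, the metal with the lower
# (more negative) standard reduction potential is oxidized: it is the anode.

REDUCTION_POTENTIAL = {
    "Fe": -0.44,  # Fe2+ + 2e- -> Fe
    "Cu": +0.34,  # Cu2+ + 2e- -> Cu
    "Zn": -0.76,  # Zn2+ + 2e- -> Zn
}

def anode_of(metal_a, metal_b):
    """Return the metal that corrodes (the anode) in the couple."""
    return min((metal_a, metal_b), key=REDUCTION_POTENTIAL.get)

print(anode_of("Fe", "Cu"))  # Fe -> the steel plate corrodes, as above
print(anode_of("Zn", "Fe"))  # Zn -> why zinc coatings sacrificially protect steel
```

The same comparison explains the steel-and-copper cell described above: iron sits below copper, so the steel plate is the anode and rusts while the copper plate stays intact.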

Videos
https://www.youtube.com/watch?v=C26pH8kC_Wk
Learning Standards
HS-PS1-10(MA). Use an oxidation-reduction reaction model to predict products of reactions given the reactants, and to communicate the reaction models using a representation that shows electron transfer (redox). Use oxidation numbers to account for how electrons are redistributed in redox processes used in devices that generate electricity or systems that prevent corrosion.*
