Category Archives: Physics
The mechanics of the Nazaré Canyon wave
The Portuguese town of Nazaré can deliver 100-foot (30.5-meter) waves.
How can we explain the Nazaré Canyon geomorphologic phenomenon?
In the 16th century, the Portuguese people and army defended Nazaré from pirate attacks at the Promontório do Sítio, the cliff-top area located 110 meters above the beach.

A screenshot from the short film “Nazaré – Entre a Terra e o Mar”, showing what the canyon would look like if the sea were very clear and transparent.
Today, from this unique site, it is possible to watch the power of the Atlantic Ocean. If you face the salt water from the nearby castle, you can easily spot the famous big waves that pound the coast of the quiet village.
What are the mechanics of the Nazaré Canyon? Is there a clear explanation for the size of the local waves? First of all, let us underline the most common swell direction in the region: West and Northwest.
A few miles off the coast of Nazaré, there are drastic differences of depth between the continental shelf and the canyon. When swell heads to shore, it is quickly amplified where the two geomorphologic features meet, causing the formation of big waves.
Furthermore, a longshore water current – running from north to south – is driven into the incoming waves, further contributing to wave height. Nazaré holds the Guinness World Record for the largest wave ever surfed.
In conclusion, the difference of depths increases wave height, the canyon amplifies and converges the swell, and the local water current helps build the biggest waves in the world. Add the perfect wind speed and direction, and welcome to Nazaré.
The Mechanics of the Nazaré Canyon Wave:
1. Swell refraction: difference of depths between the continental shelf and the canyon change swell speed and direction;
2. Rapid depth reduction: wave height builds as the seabed quickly shallows near shore;
3. Converging wave: the wave from the canyon and the wave from the continental shelf meet and form a higher one;
4. Local water channel: a seashore channel drives water towards the incoming waves to increase their height;
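The depth effect in items 1–3 can be roughed out with Green's law for wave shoaling, H ∝ h^(-1/4), a shallow-water scaling that the article does not name; the depths below are illustrative assumptions, not measured values for Nazaré.

```python
# Rough shoaling estimate using Green's law: in shallow water, wave height
# scales as H ~ h^(-1/4), where h is the water depth. The depths below are
# illustrative assumptions, not figures from the article.

def greens_law_amplification(depth_start_m: float, depth_end_m: float) -> float:
    """Factor by which wave height grows as depth decreases."""
    return (depth_start_m / depth_end_m) ** 0.25

# Swell arriving over the deep canyon (~5000 m) vs. over the continental
# shelf (~150 m), both shoaling to ~10 m near the beach:
canyon_factor = greens_law_amplification(5000, 10)   # ~4.7x
shelf_factor = greens_law_amplification(150, 10)     # ~2.0x

print(f"canyon amplification: {canyon_factor:.1f}x")
print(f"shelf amplification:  {shelf_factor:.1f}x")
```

The point of the toy numbers is only that the two paths amplify the same swell by very different factors, so where they meet the wave fronts converge at different heights and speeds.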

a) Wave fronts, b) Head of the Nazaré Canyon, c) Praia do Norte
Article from Surfer Today, surfertoday.com/surfing/8247-the-mechanics-of-the-nazare-canyon-wave
____________________________
This section from telegraph.co.uk/news/earth/earthnews/10411252/How-a-100-foot-wave-is-created.html
Currents through the canyon combine with swell driven by winds from further out in the Atlantic to create waves that propagate at different speeds.
They converge as the canyon narrows and drive the swell directly towards the lighthouse that sits on the edge of Nazaré.
From the headwall to the coastline, the seabed rises gradually from around 32 feet to become shallow enough for the swell to break. Tidal conditions also help to increase the wave height.
According to Mr McNamara’s website charting the project he has been conducting, the waves produced here are “probably the biggest in all the world” for a sandy sea bed.
On Monday, the 80-mile-an-hour winds created by the St Jude’s Atlantic storm whipped up the swell to monstrous proportions, leading to waves of up to 100 feet tall.
The previous day as the storm gathered pace, waves of up to 80 feet high formed and British surfer Andrew Cotton managed to ride one of these.

Image from How a 100-foot wave is created, The Telegraph (UK).
_____________________________
This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (added pub. l 94-553, Title I, 101, Oct 19, 1976, 90 Stat 2546)
Blueberry Earth
Here’s a gedankenexperiment (that’s German for “thought experiment”) that ought to interest you.
A gedankenexperiment is a way that physicists ask questions about how something in our universe works, for the joy of working out its consequences. The experiments don’t need to be practical, although many do lead to advances in physics. Famous examples of gedankenexperiments that led to new ideas in physics include Schrödinger’s cat and Maxwell’s demon.
Blueberry Earth: The Delicious Thought Experiment That’s Roiling Planetary Scientists
“A roaring ocean of boiling jam, with the geysers of released air and steam likely ejecting at least a few berries into orbit.”
Sarah Zhang, The Atlantic, 8/2/2018

Image from pxhere.com, 517756, CC0 Public Domain
Sarah Zhang, in The Atlantic, 8/2/2018, writes
Can I offer you a thought experiment on what would happen if the Earth were replaced by “an equal volume of closely packed but uncompressed blueberries”? When Anders Sandberg saw this question, he could not let it go. The asker was one “billybodega,” who posted the scenario on Physics Stack Exchange. (Though the question was originally posed on Twitter by writer Sandra Newman.)
A moderator of the usually staid forum closed the discussion before Sandberg could reply. That didn’t matter. Sandberg, a researcher at Oxford’s Future of Humanity Institute, wrote a lengthy answer on his blog and then an even lengthier paper that he posted to arxiv.org, a repository for physics preprints that have not yet been peer reviewed. The result is a brilliant explanation of how planets form.
To begin: The 1.5 x 10^25 pounds of “closely packed but uncompressed” berries will start to collapse onto themselves and crush the berries deeper than 11.4 meters – or 37 feet – into a pulp. “Enormous amounts of air will be pushing out from the pulp as bubbles and jets, producing spectacular geysers,” writes Sandberg. What’s more, this rapid shrinking will release a huge amount of gravitational energy—equal to, according to Sandberg’s calculations, the energy output of the sun over 20 minutes. It’s enough to make the pulp boil. Behold:
“The result is that blueberry earth will turn into a roaring ocean of boiling jam, with the geysers of released air and steam likely ejecting at least a few berries into orbit. As the planet evolves a thick atmosphere of released steam will add to the already considerable air from the berries. It is not inconceivable that the planet may heat up further due to a water vapour greenhouse effect, turning into a very odd Venusian world.”
Deep under the roiling jam waves, the pressure is high enough that even the warm jam will turn to ice. Blueberry Earth will have an ice core 4,000 miles wide, by Sandberg’s calculations. “The end result is a world that has a steam atmosphere covering an ocean of jam on top of warm blueberry granita,” he writes.
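Sandberg's "sun over 20 minutes" figure can be sanity-checked with the gravitational binding energy of a uniform sphere, U = 3GM²/(5R). The blueberry density (~700 kg/m³) and the guess that the sphere settles to about 90% of its original radius are rough illustrative assumptions, not numbers from the article.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
R_EARTH = 6.371e6    # initial radius, m
V_EARTH = (4 / 3) * math.pi * R_EARTH**3

# Assumptions (not from the article): packed blueberries at ~700 kg/m^3,
# and the sphere settling to roughly 90% of its original radius.
rho_berry = 700.0
M = rho_berry * V_EARTH              # ~7.6e23 kg
R_final = 0.9 * R_EARTH

def binding_energy(mass, radius):
    """Gravitational binding energy of a uniform sphere, U = 3GM^2 / (5R)."""
    return 3 * G * mass**2 / (5 * radius)

# Energy released by shrinking from the initial to the final radius:
released = binding_energy(M, R_final) - binding_energy(M, R_EARTH)

# Solar output over 20 minutes, for comparison:
solar_20min = 3.8e26 * 20 * 60       # luminosity (W) x seconds

print(f"energy released: {released:.1e} J")
print(f"sun, 20 minutes: {solar_20min:.1e} J")
```

With these assumed numbers the released energy comes out within a factor of two of twenty minutes of solar output, so the claim is at least the right order of magnitude.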
The process is not so different from the birth of a planet out of a disc of rotating debris. The coalescing, the emergence of an atmosphere, the formation of a dense core—all of these happened at one point to the real Earth. And it is currently happening elsewhere in the universe, as exoplanets are forming around other stars in other galaxies.
What happens if the Earth instantly turned into a mass of blueberries?, The Atlantic
An interview with the author on Slate.com
Blueberry Earth by Anders Sandberg, on Arxiv
___________________
Why Old Physics Still Matters
By Chad Orzel, Forbes, 7/30/18
(The following is an approximation of what I will say in my invited talk at the 2018 Summer Meeting of the American Association of Physics Teachers. They encourage sharing of slides from the talks, but my slides for this talk are done in what I think of as a TED style, with minimal text, meaning that they’re not too comprehensible by themselves. So, I thought I would turn the talk into a blog post, too, maximizing the ratio of birds to stones…
(The full title of the talk is Why “Old Physics” Still Matters: History as an Aid to Understanding, and the abstract I sent in is:
A common complaint about physics curricula is that too much emphasis is given to “old physics,” phenomena that have been understood for decades, and that curricula should spend less time on the history of physics in order to emphasize topics of more current interest. Drawing on experience both in the classroom and in writing books for a general audience, I will argue that discussing the historical development of the subject is an asset rather than an impediment. Historical presentation is particularly useful in the context of quantum mechanics and relativity, where it helps to ground the more exotic and counter-intuitive aspects of those theories in a concrete process of observation and discovery.
The title of this talk refers to a very common complaint made about the teaching of physics, namely that we spend way too much time on “old physics,” and never get to anything truly modern. This is perhaps best encapsulated by Henry Reich of MinutePhysics, who made a video open letter to Barack Obama after his re-election noting that the most modern topics on the AP Physics exam date from about 1905.
This is a reflection of the default physics curriculum, which generally starts college students off with a semester of introductory Newtonian physics, which was cutting-edge stuff in the 1600s. The next course in the usual sequence is introductory E&M, which was nailed down in the 1800s, and shortly after that comes a course on “modern physics,” which describes work from the 1900s.
Within the usual “modern physics” course, the usual approach is also historical: we start out with the problem of blackbody radiation, solved by Max Planck in 1900, then move on to the photoelectric effect, explained by Albert Einstein in 1905, and then to Niels Bohr’s model of the hydrogen atom from 1913, and eventually matter waves and the Schrödinger equation, bringing us all the way up to the late 1920s.
It’s almost become cliché to note that “modern physics” richly deserves to be in scare quotes. A typical historically-ordered curriculum never gets past 1950, and doesn’t deal with any of the stuff that is exciting about quantum physics today.
This is the root of the complaint about “old physics,” and it doesn’t necessarily have to be this way. There are approaches to the subject that are, well, more modern. John Townsend’s textbook for example, starts with the quantum physics of two-state systems, using electron spins as an example, and works things out from there. This is a textbook aimed at upper-level majors, but Leonard Susskind and Art Friedman’s Theoretical Minimum book uses essentially the same approach for a non-scientific audience. Looking at the table of contents of this, you can see that it deals with the currently hot topic of entanglement a few chapters before getting to particle-wave duality, flipping the historical order of stuff around, and getting to genuinely modern approaches earlier.
There’s a lot to like about these books that abandon the historical approach, but when I sat down and wrote my forthcoming general-audience book on quantum physics, I ended up taking the standard historical approach: if you look at the table of contents, you’ll see it starts with Planck’s blackbody model, then Einstein’s introduction of photons, then the Bohr model, and so on.
This is not a decision made from inertia or ignorance, but a deliberate choice, because I think the historical approach offers some big advantages not only in terms of making the specific physics content more understandable, but for boosting science more broadly. While there are good things to take away from the ahistorical approaches, they have to open with blatant assertions regarding the existence of spins. They’re presenting these as facts that simply have to be accepted as a starting point, and I think that not only loses some readers who will get hung up on that call, it goes a bit against the nature of science, as a process for generating knowledge, not a collection of facts.
This historical approach gets to the weird stuff, but grounds it in very concrete concerns. Planck didn’t start off by asserting the existence of quantized energy, he started with a very classical attack on a universal phenomenon, namely the spectrum of light emitted by a hot object. Only after he failed to explain the spectrum by classical means did he resort to the quantum, assigning a characteristic energy to light that depends on the frequency. At high frequencies, the heat energy available to produce light is less than one “quantum” of light, which cuts off the light emitted at those frequencies, rescuing the model from the “ultraviolet catastrophe” that afflicted classical approaches to the problem.
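Planck's rescue of the "ultraviolet catastrophe" can be seen numerically by comparing the classical Rayleigh–Jeans radiance with Planck's formula; a minimal sketch (the temperature and frequencies are arbitrary illustrative choices):

```python
import math

H = 6.626e-34   # Planck constant, J s
K = 1.381e-23   # Boltzmann constant, J/K
C = 2.998e8     # speed of light, m/s

def rayleigh_jeans(nu, T):
    """Classical spectral radiance: grows without bound at high frequency."""
    return 2 * nu**2 * K * T / C**2

def planck(nu, T):
    """Planck's spectral radiance: the exponential cuts off high frequencies."""
    return (2 * H * nu**3 / C**2) / math.expm1(H * nu / (K * T))

T = 5000.0               # temperature of the hot object, K
low, high = 1e11, 1e16   # Hz

# At low frequency (h*nu << k*T) the two formulas agree; at high frequency
# the classical one diverges while Planck's cuts off toward zero.
print(planck(low, T) / rayleigh_jeans(low, T))    # ~1
print(planck(high, T) / rayleigh_jeans(high, T))  # vanishingly small
```

The ratio at high frequency is the quantum cutoff in action: when one quantum hν costs far more than the available thermal energy kT, essentially no light is emitted at that frequency.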
Planck used this quantum idea as a desperate trick, but Einstein picked it up and ran with it, arguing that the quantum hypothesis Planck resorted to from desperation could explain another phenomenon, the photoelectric effect. Einstein’s simple “heuristic” works brilliantly, and was what officially won him the Nobel Prize. Niels Bohr took these quantum ideas and applied them to atoms, making the first model that could begin to explain the absorption and emission of light by atoms, which used discrete energy states for electrons within atoms, and light with a characteristic energy proportional to the frequency. And quantum physics was off and running.
This history is useful because it grounds an exceptionally weird subject in concrete solutions to concrete problems. Nobody woke up one morning and asserted the existence of particles that behave like waves and vice versa. Instead, physicists were led to the idea, somewhat reluctantly but inevitably, by rigorously working out the implications of specific experiments. Going through the history makes the weird end result more plausible, and gives future physicists something to hold on to as they start on the journey for themselves.
This historical approach also has educational benefits when applied to the other great pillar of “modern physics” classes, namely Einstein’s theory of special relativity. This is another subject that is often introduced in very abstract ways– envisioning a universe filled with clocks and meter sticks and pondering the meaning of simultaneity, or considering the geometry of spacetime. Again, there are good things to take away from this– I learned some great stuff from Takeuchi’s Illustrated Guide to Relativity and Cox and Forshaw’s Why Does E=mc2?. But for a lot of students, the abstraction of this approach leads to them thinking “Why in hell are we talking about this nonsense?”
Some of those concerns can be addressed by a historical approach. The most standard way of doing this is to go back to the Michelson-Morley experiment, started while Einstein was in diapers, which showed that the measured speed of light is the same in every direction. But more than that, I think it’s useful to bring in some actual history– I’ve found it helpful to draw on Peter Galison’s argument in Einstein’s Clocks, Poincaré’s Maps.
Galison notes that the abstract concerns about simultaneity that connect to relativity arise very directly from considering very concrete problems of timekeeping and telegraphy, used in surveying the planet to determine longitude, and establishing the modern system of time zones to straighten out the chaos that multiple incompatible local times created for railroads.
Poincare was deeply involved in work on longitude and timekeeping, and these practical issues led him to think very philosophically about the nature of time and simultaneity, several years before Einstein’s relativity. Einstein, too, was in an environment where practical timekeeping issues would’ve come up with some regularity, which naturally leads to similar thoughts. And it wasn’t only those two– Hendrik Lorentz and George FitzGerald worked out much of the necessary mathematics for relativity on their own.
So, adding some history to discussions of relativity helps both ground what is otherwise a very abstract process and also helps reinforce a broader understanding of science as a process. Relativity, seen through a historical perspective, is not merely the work of a lone genius who was bored by his job in the patent office, but the culmination of a process involving many people thinking about issues of practical importance.
Bringing in some history can also have benefits when discussing topics that are modern enough to be newsworthy. There’s a big argument going on at the moment about dark matter, with tempers running a little high. On the one hand, some physicists question whether it’s time to consider alternative explanations, while other observations bolster the theory.
Dark matter is a topic that might very well find its way into classroom discussions, and it’s worth introducing a bit of the history to explore this. Specifically, it’s good to go back to the initial observations of galaxy rotation curves. The spectral lines emitted by stars and hot gas are redshifted by the overall motion of the galaxy, but also bent into a sort of S-shape by the fact that stars on one side tend to be moving toward us due to the galaxy’s rotation, and stars on the other side tend to be moving away. The difference between these lets you find the velocity of rotation as a function of distance from the center of the galaxy, and this turns out to be higher than can be explained by the mass we can see and the normal behavior of gravity.
This work is worth introducing not only because these galaxy rotations are the crux of the matter for the current argument, but because they help make an important point about science in context. The initial evidence for something funny about these rotation curves came largely from work by Vera Rubin, who was a remarkable person. As a woman in a male-dominated field, she had to overcome many barriers along the course of her career.
Bringing up the history of dark matter observations is a natural means to discuss science in a broader social context, and the issues that Rubin faced and overcame, and how those resonate today. Talking about her work and history allows both a better grounding for the current dark matter fights, and also a chance to make clear that science takes place within and is affected by a larger societal context. That’s probably at least as important an issue to drive home as any particular aspect of the dark matter debate.
So, those are some examples of areas in which a historical approach to physics is actively helpful to students, not just a way to delay the teaching of more modern topics. By grounding abstract issues in concrete problems, making the collaborative and cumulative nature of science clear, and placing scientific discoveries in a broader social context, adding a bit of history to the classroom helps students get a better grasp on specific physics topics, and also on science as a whole.
About the author: Chad Orzel is Associate Professor in the Department of Physics and Astronomy at Union College
_______________________________________________________
The Momentum Principle Vs Newton’s 2nd Law
Practical problem solving: When do we use conservation of momentum to solve a problem? When do we use Newton’s laws of motion?

Sometimes we need only one approach or the other; other times both are equally useful; and some problems require using both. Rhett Allain on Wired.com discusses this in “Physics Face Off: The Momentum Principle Vs Newton’s 2nd Law”
__________________________
CONSIDER THE FOLLOWING physics problem.
An object with a mass of 1 kg and a velocity of 1 m/s in the x-direction has a net force of 1 Newton pushing on it (also in the x-direction). What will the velocity of the object be after 1 second? (Yes, I am using simple numbers—because the numbers aren’t the point.)
Let’s solve this simple problem two different ways. For the first method, I will use Newton’s Second Law. In one dimension, I can write this as:
F_net,x = m · a_x
Using this equation, I can get the acceleration of the object (in the x-direction). I’ll skip the details, but it should be fairly easy to see that it would have an acceleration of 1 m/s². Next, I need the definition of acceleration (in the x-direction). Oh, and just to be clear—I’m trying to be careful about these equations since they are inherently vector equations.
a_x = Δv_x / Δt
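Carrying both methods through with the article's numbers (a sketch of the arithmetic, not Allain's own code):

```python
# The problem's numbers: m = 1 kg, v0 = 1 m/s, F = 1 N (all in the
# x-direction), over t = 1 s.
m, v0, F, t = 1.0, 1.0, 1.0, 1.0

# Method 1: Newton's second law. a = F/m, then v = v0 + a*t.
a = F / m
v_newton = v0 + a * t

# Method 2: the momentum principle. p_final = p_initial + F*t, then v = p/m.
p_final = m * v0 + F * t
v_momentum = p_final / m

print(v_newton, v_momentum)  # both give 2.0 m/s
```

Both routes land on the same 2 m/s, which is the article's point: for constant mass and constant force they are the same physics written two ways.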
The article continues here:
Physics Face Off: The Momentum Principle Vs Newton’s 2nd Law
3D Color X-rays
What if X-rays could produce three-dimensional color images?

This is now a reality, thanks to a New Zealand company that scanned, for the first time, a human body using a breakthrough colour medical scanner based on the Medipix3 technology developed at CERN. Father and son scientists Professors Phil and Anthony Butler from Canterbury and Otago Universities spent a decade building and refining their product.
Medipix is a family of read-out chips for particle imaging and detection. The original concept of Medipix is that it works like a camera, detecting and counting each individual particle hitting the pixels when its electronic shutter is open. This enables high-resolution, high-contrast, very reliable images, making it unique for imaging applications, particularly in the medical field.
Hybrid pixel-detector technology was initially developed to address the needs of particle tracking at the Large Hadron Collider, and successive generations of Medipix chips have demonstrated over 20 years the great potential of the technology outside of high-energy physics.
They use the spectroscopic information generated by the detector with mathematical algorithms to generate 3D images. The colours represent different energy levels of the X-ray photons as recorded by the detector. Hence, colors identify different components of body parts such as fat, water, calcium, and disease markers.
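A toy sketch of the photon-counting idea: each detected photon's energy is tallied into one of several energy bins per pixel, and the per-bin counts become the color channels. The bin edges and the simulated hits below are invented for illustration; real Medipix3 thresholds are configurable hardware settings.

```python
from collections import Counter

# Assumed energy thresholds (keV) separating the bins -- illustrative only.
BIN_EDGES_KEV = [20, 40, 60, 80]

def bin_index(energy_kev: float) -> int:
    """Return the index of the energy bin a photon falls into."""
    for i, edge in enumerate(BIN_EDGES_KEV):
        if energy_kev < edge:
            return i
    return len(BIN_EDGES_KEV)

def pixel_spectrum(photon_energies_kev):
    """Count photons per energy bin for one pixel."""
    return Counter(bin_index(e) for e in photon_energies_kev)

# Simulated photon energies (keV) hitting one pixel:
hits = [12.0, 35.5, 33.0, 55.0, 71.2, 90.0, 38.8]
print(pixel_spectrum(hits))
```

Because different materials attenuate different energies differently, the shape of each pixel's per-bin spectrum is what lets the reconstruction distinguish fat, water, calcium, and so on.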
First 3D colour X-ray of a human using CERN technology, by Romain Muller, CERN.
How to teach AP physics
It’s easy to teach physics in a wordy and complicated way – but taking a concept and breaking it down into simple steps, and presenting ideas in ways that are easily comprehensible to the eager student, is more challenging.
Yet that is what Nobel prize-winning physicist Richard Feynman excelled at. The same skills that make someone a good teacher also lead them to understand the topic more fully themselves. This was Feynman’s basic method of learning.

1) Develop an array of hands-on labs that allow students to study basic phenomena.
You can also use many wonderful online simulations, such as PhET or Physics Aviary.
2) Each day go over several problems in class. Students need to see a master teacher take what appears to be a complex word problem and turn it into equations.
3) Ensure that students take good notes. One way of doing this is the occasional surprise graded notebook check (say, twice per month).
4) Each week assign homework. Each day randomly call a few students to put one of their solutions on the board. Recall that the goal is not to get the correct numerical answer. (That can sometimes come by luck or cheating.) Focus on the derivation. Does the student understand which basic principles are involved?
5) Keep track of strengths and weaknesses: Is there a weakness in algebra, trigonometry, or geometry? When you see a pattern emerge, assign problem sets that require mastering the weak area – not to punish them, but to build skills. Start with a few very easy problems, and slowly build in complexity. Let them work in groups if you like.
6) Don’t drown yourself in paperwork: Don’t grade every problem, from every student, every day. You could easily work 24 hours a day and still have more work to do. Only collect & grade some percent of the homework.
7) Focus on simple drawings – or for classes that use programming to simulate physical phenomena – simple animations. Are the students capable of sketching free-body diagrams that strip away extraneous info? Can they diagram all the forces on an object?
8) Give frequent assessments that are easy to grade.
9) Get books such as TIPERs for Physics, or Ranking Task Exercises in Physics. They are diagnostic tools to check for misconceptions. Call publishers for free sample textbooks and resources. For a textbook, I happen to like Giancoli Physics; its teacher solution manual is very well thought out.
Ferris wheel physics
A Ferris wheel is a large structure consisting of a rotating upright wheel, with multiple passenger cars.
The cars are attached to the rim in such a way that as the wheel turns, they are kept upright by gravity.

The original Ferris Wheel was designed and constructed by George Washington Gale Ferris Jr. as a landmark for the 1893 World’s Columbian Exposition in Chicago.
The generic term Ferris wheel is now used for all such structures, which have become the most common type of amusement ride at state fairs in the United States.
Forces in the wheel
The wheel keeps its circular shape by the tension of the spokes, pulling upward against the lower half of the framework and downward against the huge axle.
Also see
Classical relativity
This animation shows simultaneous views of a ball tossed up and then caught by a Ferris wheel rider – from one inertial point of view and from two non-inertial points of view.
P. Fraundorf writes
Although Newton’s predictions are easier to track from the inertial point of view, it turns out that they still work locally in accelerated frames and curved spacetime if we consider “geometric accelerations and forces” that act on every ounce of an object’s being and can be made to disappear by a suitable vantage point change.

Created by P. Fraundorf, licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.
Net work done on you while on the wheel
If you are on a Ferris wheel rotating at constant speed, the total work done by all the forces acting on you is zero: your speed, and hence your kinetic energy, does not change.
https://www.physicsforums.com/threads/ferris-wheel-work-done-by-net-force.715905/
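That claim holds because at constant rotation rate the net force is purely centripetal, always perpendicular to the velocity, so the power F·v is zero at every instant. A quick numerical check (the rider mass, wheel radius, and angular speed are made-up illustrative values):

```python
import math

# Rider on a wheel at constant angular speed: m = 70 kg, R = 20 m,
# omega = 0.2 rad/s (illustrative values). Integrate power = F . v
# over one full revolution.
m, R, omega = 70.0, 20.0, 0.2
steps = 10000
dt = (2 * math.pi / omega) / steps

work = 0.0
for i in range(steps):
    t = i * dt
    # velocity and net (centripetal) force, in Cartesian components
    vx = -R * omega * math.sin(omega * t)
    vy = R * omega * math.cos(omega * t)
    fx = -m * R * omega**2 * math.cos(omega * t)
    fy = -m * R * omega**2 * math.sin(omega * t)
    work += (fx * vx + fy * vy) * dt   # F . v = 0 at every instant

print(f"net work over one revolution: {work:.2e} J")  # effectively zero
```

Note the qualifier: a wheel that is speeding up or slowing down has a tangential force component, and the net work is then nonzero.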
External resources
https://www.real-world-physics-problems.com/ferris-wheel-physics.html
https://physics.stackexchange.com/questions/205918/centripetal-force-on-a-ferris-wheel
How products are made: http://www.madehow.com/Volume-6/Ferris-Wheel.html
AP Physics problems: Ferris wheels and rotational motion
Build A Big Wheel, by Try Engineering, Lesson plan
AP Physics problem solving
http://faculty.washington.edu/boynton/114AWinter08/LectureNotes/Le8.pdf
How records work
How records work (private for now)
https://kaiserscience.wordpress.com/physics/waves/how-records-work/
Miniaturization
Many sci-fi stories depend upon a technology called miniaturization: Isaac Asimov’s classic Fantastic Voyage; his more scientifically rigorous sequel, Fantastic Voyage II; DC Comics featuring The Atom; and Marvel Comics featuring Ant-Man and The Wasp.
Is miniaturization real? Could it be real? What would happen if it were real?

Scene from the 1966 movie Fantastic Voyage. The medical ship, inside a blood vessel, is under attack from antibodies!
Miniaturization in movies and TV
1940 movie – Dr. Cyclops. People are reduced to less than a foot in size by the titular mad scientist and subjected to his whims.
1957 movie – The Incredible Shrinking Man inspired a boom in science fiction films that made use of size-alteration.
1961 The Atom, Silver Age DC Comics superhero Dr. Ray Palmer.
1960s Ant-Man, Marvel Comics superhero.
1966 Fantastic Voyage
1976 Dr. Shrinker, from the ABC network’s The Krofft Supershow
1987 Innerspace, starring Dennis Quaid, Martin Short and Meg Ryan.
1989 Honey, I Shrunk the Kids, 1997 Honey, We Shrunk Ourselves
2015 Ant-Man, and 2018 Ant-Man and the Wasp
2016 – DC’s Legends of Tomorrow (featuring The Atom)
What would happen if we compressed someone?
Neil deGrasse Tyson shows us the real physics.
Although he probably shouldn’t write any more comics 😉

By Clay Yount, Hamlet’s Danish, 12/9/2014
Physics: How would one try to do this?
There are no practical ways to actually do this. However, science fiction stories speculate on how this could be done.
Interestingly, sustained thought and speculation about science fiction technologies has sometimes led scientists to develop real-world technologies.
A. Compression / increasing density
“Why are you so certain miniaturization is impossible?”
“If you reduce a man to the dimensions of a fly, then all the mass of a man would be crowded into the volume of a fly. You’d end up with a density of something like -” he paused to think – “a hundred and fifty thousand times that of platinum. “
From Fantastic Voyage II
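The novel's figure checks out to order of magnitude. Assuming a ~70 kg person and a housefly volume of roughly 20 cubic millimeters (both assumptions, not numbers from the book):

```python
# Cram an assumed ~70 kg person into an assumed ~20 mm^3 fly volume and
# compare the resulting density to platinum (21,450 kg/m^3).
mass_kg = 70.0
fly_volume_m3 = 20e-9          # 20 mm^3 in cubic meters
rho_platinum = 21450.0         # kg/m^3

density = mass_kg / fly_volume_m3
ratio = density / rho_platinum

print(f"density: {density:.1e} kg/m^3")
print(f"ratio to platinum: {ratio:.0f}x")   # on the order of 150,000x
```

With these round numbers the ratio lands near 160,000, so Asimov's "a hundred and fifty thousand times that of platinum" is the right ballpark.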
B. Removing atoms
“But what if the mass were reduced in proportion?” – “Then you end up with one atom in the miniaturized man for every three million in the original. The miniaturized man would not only have the size of a fly but the brainpower of a fly as well. “
From Fantastic Voyage II:
C. Changing Planck’s constant
This is a major science-plot point in Fantastic Voyage II (1988)
“And if the atoms are reduced, too?”
“If it is miniaturized atoms you are speaking of, then Planck’s constant, which is an absolutely fundamental quantity in our Universe, forbids it. Miniaturized atoms would be too small to fit into the graininess of the Universe.”
“And if I told you that Planck’s constant was reduced as well, so that a miniaturized man would be encased in a field in which the graininess of the Universe was incredibly finer than it is under normal conditions?”
“Then I wouldn’t believe you.”
“Without examining the matter? You would refuse to believe it as a result of preconceived convictions, as your colleagues refuse to believe you?”
And at this, Morrison was, for a moment, silent….
…Well over half an hour had passed before Morrison felt convinced that the objects he could see outside the ship were shrinking and were receding perceptibly toward their normal size.
Morrison said, “I am thinking of a paradox.”
“What’s that?” said Kalinin, yawning. She had obviously taken her own advice about the advisability of relaxing.
“The objects outside the ship seemed to grow larger as we shrink. Ought not the wavelengths of light outside the ship also grow larger, becoming longer in wavelength, as we shrink? Should we not see everything outside turn reddish, since there can scarcely be enough ultraviolet outside to expand and replace the shorter-wave visible light?”
Kalinin said, “If you could see the light waves outside, that would indeed be how they would appear to you. But you don’t. You see the light waves only after they’ve entered the ship and impinged upon your retina. And as they enter the ship, they come under the influence of the miniaturization field and automatically shrink in wavelength, so that you see those wavelengths inside the ship exactly as you would see them outside.”
“If they shrink in wavelength, they must gain energy.”
“Yes, if Planck’s constant were the same size inside the miniaturization field as it is outside. But Planck’s constant decreases inside the miniaturization field — that is the essence of miniaturization. The wavelengths, in shrinking, maintain their relationship to the shrunken Planck’s constant and do not gain energy. An analogous case is that of the atoms. They also shrink and yet the interrelationships among atoms and among the subatomic particles that make them up remain the same to us inside the ship as they would seem to us outside the ship.”
“But gravity changes. It becomes weaker in here.”
“The strong interaction and the electroweak interaction come under the umbrella of the quantum theory. They depend on Planck’s constant. As for gravitation?” Kalinin shrugged. “Despite two centuries of effort, gravitation has never been quantized. Frankly, I think the gravitational change with miniaturization is evidence enough that gravitation cannot be quantized, that it is fundamentally nonquantum in nature.”
“I can’t believe that,” said Morrison. “Two centuries of failure can merely mean we haven’t managed to get deep enough into the problem yet. Superstring theory nearly gave us our unified field at last.” (It relieved him to discuss the matter. Surely he couldn’t do so if his brain were heating in the least.)
“Nearly doesn’t count,” said Kalinin. “Still, Shapirov agreed with you, I think. It was his notion that once we tied Planck’s constant to the speed of light, we would not only have the practical effect of miniaturizing and deminiaturizing in an essentially energy-free manner, but that we would have the theoretical effect of being able to work out the connection between quantum theory and relativity and finally have a good unified field theory. And probably a simpler one than we could have imagined possible, he would say.”
“Maybe,” said Morrison. He didn’t know enough to comment beyond that.
Surely this is complete fantasy, correct? Well, probably. But there is some room in physics to believe that the constants of nature, even Planck’s constant, may not be quite constant:
Could Fundamental Constants Be Neither Fundamental nor Constant?
Are Nature’s Laws Really Universal? Dr. Michael Murphy, Centre for Astrophysics and Supercomputing,
Swinburne University of Technology
The Variability of the ‘Fundamental Constants’
The Constants of Nature: From Alpha to Omega – The Numbers That Encode the Deepest Secrets of the Universe (book), by John D. Barrow
D. Nanotechnology as miniaturization
“…The ideas and concepts behind nanoscience and nanotechnology started with a talk entitled “There’s Plenty of Room at the Bottom” by physicist Richard Feynman at an American Physical Society meeting at the California Institute of Technology (CalTech) on December 29, 1959, long before the term nanotechnology was used.
In his talk, Feynman described a process in which scientists would be able to manipulate and control individual atoms and molecules. Over a decade later, in his explorations of ultraprecision machining, Professor Norio Taniguchi coined the term nanotechnology. It wasn’t until 1981, with the development of the scanning tunneling microscope that could “see” individual atoms, that modern nanotechnology began.”
Nano.gov What is nanotechnology?
Nanotechnology isn’t so impossible. We have already developed techniques to image and manipulate atoms, one atom at a time, through technologies such as atomic force microscopy and scanning tunneling microscopy.
References
Miniaturization: Technovelgy article
Excerpt from Fantastic Voyage II: Destination Brain, a novel by Isaac Asimov (1987)
Learning Standards
Next Generation Science Standards: Science & Engineering Practices
● Ask questions that arise from careful observation of phenomena, or unexpected results, to clarify and/or seek additional information.
● Ask questions that arise from examining models or a theory, to clarify and/or seek additional information and relationships.
● Ask questions to determine relationships, including quantitative relationships, between independent and dependent variables.
● Ask questions to clarify and refine a model, an explanation, or an engineering problem.
● Evaluate a question to determine if it is testable and relevant.
● Ask questions that can be investigated within the scope of the school laboratory, research facilities, or field (e.g., outdoor environment) with available resources and, when appropriate, frame a hypothesis based on a model or theory.
● Ask and/or evaluate questions that challenge the premise(s) of an argument, the interpretation of a data set, or the suitability of the design
MA 2016 Science and technology
Appendix I Science and Engineering Practices Progression Matrix
Science and engineering practices include the skills necessary to engage in scientific inquiry and engineering design. It is necessary to teach these so students develop an understanding and facility with the practices in appropriate contexts. The Framework for K-12 Science Education (NRC, 2012) identifies eight essential science and engineering practices:
1. Asking questions (for science) and defining problems (for engineering).
2. Developing and using models.
3. Planning and carrying out investigations.
4. Analyzing and interpreting data.
5. Using mathematics and computational thinking.
6. Constructing explanations (for science) and designing solutions (for engineering).
7. Engaging in argument from evidence.
8. Obtaining, evaluating, and communicating information.
Scientific inquiry and engineering design are dynamic and complex processes. Each requires engaging in a range of science and engineering practices to analyze and understand the natural and designed world. They are not defined by a linear, step-by-step approach. While students may learn and engage in distinct practices through their education, they should have periodic opportunities at each grade level to experience the holistic and dynamic processes represented below and described in the subsequent two pages… http://www.doe.mass.edu/frameworks/scitech/2016-04.pdf
Facts and Fiction of the Schumann Resonance
This has been excerpted from Facts and Fiction of the Schumann Resonance, by Brian Dunning, Skeptoid Podcast #352
It’s increasingly hard to find a web page dedicated to the sales of alternative medicine products or New Age spirituality that does not cite the Schumann resonances as proof that some product or service is rooted in science. … Today we’re going to see what the Schumann resonances actually are, how they formed and what they do, and see if we can determine whether they are, in fact, related to human health.
In physics, Schumann resonances are the name given to the resonant frequency of the Earth’s atmosphere, between the surface and the densest part of the ionosphere.

Image from nasa.gov/mission_pages/sunearth/news/gallery
They’re named for the German physicist Winfried Otto Schumann (1888-1974), who worked briefly in the United States after WWII and predicted that the Earth’s atmosphere would resonate at certain electromagnetic frequencies.
[What is a resonant frequency? Here is a common example. When you blow on a glass bottle at a certain frequency, you can get the bottle to vibrate at the same frequency]

from acs.psu.edu/drussell/Demos/BeerBottle/beerbottle.html
This glass bottle has a resonant frequency of about 196 Hz.
That’s the frequency of the sound waves that most efficiently bounce back and forth inside the bottle, traveling at the speed of sound and propagating via the air molecules.
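A blown bottle behaves approximately as a Helmholtz resonator, whose frequency depends on the neck geometry and cavity volume. As an illustration only (the dimensions below are assumptions, not measurements of the pictured bottle), the standard Helmholtz formula gives a frequency in the right range:

```python
import math

def helmholtz_frequency(speed_of_sound, neck_area, cavity_volume, neck_length, neck_radius):
    """Helmholtz resonator frequency: f = (v / 2*pi) * sqrt(A / (V * L_eff)).

    L_eff adds an end correction (~1.7 * neck radius) to the physical neck length.
    """
    l_eff = neck_length + 1.7 * neck_radius
    return (speed_of_sound / (2 * math.pi)) * math.sqrt(neck_area / (cavity_volume * l_eff))

# Assumed, plausible bottle dimensions (illustrative only):
v = 343.0              # speed of sound in air, m/s (~20 degrees C)
r = 0.0095             # neck radius, m
A = math.pi * r**2     # neck cross-sectional area, m^2
V = 4.0e-4             # cavity volume, m^3 (about 400 mL)
L = 0.05               # neck length, m

f = helmholtz_frequency(v, A, V, L, r)
print(f"Resonant frequency: {f:.0f} Hz")
```

With these made-up dimensions the formula lands in the high-100s of Hz, the same ballpark as the ~196 Hz quoted above; the exact value depends on the real bottle's geometry.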
Electromagnetic radiation – like light, and radio waves – is similar, except the waves travel at the speed of light, and do not require a medium like air molecules.
The speed of light is a lot faster than the speed of sound, but the electromagnetic waves have a lot further to go between the ground and the ionosphere than do the sound waves between the sides of the bottle.
This atmospheric electromagnetic resonant frequency is 7.83 Hz, which is near the bottom of the ELF frequency range, or Extremely Low Frequency.
The atmosphere has its own radio equivalent of someone blowing across the top of the bottle: lightning.

Lightning is constantly flashing all around the world, many times per second; and each bolt is a radio source. This means our atmosphere is continuously resonating with a radio frequency of 7.83 Hz, along with progressively weaker harmonics at 14.3, 20.8, 27.3 and 33.8 Hz.
These are the Schumann resonances.
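For an idealized, lossless spherical cavity between a conducting Earth and ionosphere, the resonant frequencies follow f_n = (c / 2πa) · √(n(n+1)), where a is Earth's radius. A minimal sketch of that textbook formula (the real ionospheric cavity is lossy, which pulls the observed values down to 7.83, 14.3, ... Hz):

```python
import math

c = 299_792_458.0   # speed of light, m/s
a = 6.371e6         # mean Earth radius, m

def ideal_schumann(n):
    """Ideal (lossless spherical cavity) Schumann frequency for mode n, in Hz."""
    return (c / (2 * math.pi * a)) * math.sqrt(n * (n + 1))

for n in range(1, 5):
    print(f"n={n}: ideal {ideal_schumann(n):.1f} Hz")
# Ideal values (~10.6, 18.3, 25.9, 33.5 Hz) sit above the observed
# 7.83, 14.3, 20.8, 27.3 Hz because the real cavity absorbs energy.
```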
It’s nothing to do with the Earth itself, or with life, or with any spiritual phenomenon;
it’s merely an artifact of the physical dimensions of the space between the surface of the Earth and the ionosphere.
Every planet and moon that has an ionosphere has its own set of Schumann resonances defined by the planet’s size.

Biggest point: this resonated radio from lightning is a vanishingly small component of the electromagnetic spectrum to which we’re all naturally exposed.
The overwhelming source is the sun, blasting the Earth with infrared, visible light, and ultraviolet radiation.
All natural sources from outer space, and even radioactive decay of naturally occurring elements on Earth, produce wide-spectrum radio noise. Those resonating in the Schumann cavity are only a tiny, tiny part of the spectrum.

Nevertheless, because the Schumann resonance frequencies are defined by the dimensions of the Earth, many New Age proponents and alternative medicine advocates have come to regard 7.83 Hz as some sort of Mother Earth frequency, asserting the belief that it’s related to life on Earth.
The most pervasive of all the popular fictions surrounding the Schumann resonance is that it is correlated with the health of the human body.

There are a huge number of products and services sold to enhance health or mood, citing the Schumann resonance as the foundational science.
A notable example is the Power Balance bracelets. Tom O’Dowd, formerly the Australian distributor, said that the mylar hologram resonated at 7.83 Hz.
When the bracelet was placed within the body’s natural energy field, the resonance would [supposedly] “reset” your energy field to that frequency.
Well, there were a lot of problems with that claim.
First of all, 7.83 Hz has a wavelength of about 38,000 kilometers. This is about the circumference of the Earth, which is why its atmospheric cavity resonates at that frequency. 38,000 kilometers is WAY bigger than a bracelet!
There’s no way that something that tiny could resonate such an enormous wavelength. O’Dowd’s sales pitch was implausible, by a factor of billions, to anyone who understood resonance.
This same fact also applies to the human body. Human beings are so small, relative to a radio wavelength of 38,000 kilometers, that there’s no way our anatomy could detect or interact with such a radio signal in any way.
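The scale mismatch is easy to quantify from wavelength = c / f; a minimal sketch, with assumed round-number sizes for the hologram and a person:

```python
c = 299_792_458.0     # speed of light, m/s
f = 7.83              # Schumann fundamental, Hz

wavelength_m = c / f
wavelength_km = wavelength_m / 1000.0
print(f"Wavelength: {wavelength_km:,.0f} km")

# Compare to a ~2 cm bracelet hologram and a ~2 m human (assumed sizes):
print(f"vs. bracelet: {wavelength_m / 0.02:.1e}x larger")   # factor of billions
print(f"vs. human:    {wavelength_m / 2.0:.1e}x larger")    # tens of millions
```

The wavelength comes out near 38,000 km, so the bracelet is smaller than the wave by roughly a factor of two billion, matching the "implausible by a factor of billions" point above.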
Proponents of binaural beats cite the Schumann frequency as well. These are audio recordings which combine two slightly offset frequencies to produce a third phantom beat frequency that is perceived from the interference of the two.
Some sellers claim these recordings change your brain’s electroencephalogram (EEG), which they say is a beneficial thing to do. Brain waves range from near zero up to about 100 Hz during normal activity, with a typical reading near the lower end of the scale.
This range happens to include 7.83 Hz, suggesting the aforementioned pseudoscientific connection between humans and the Schumann resonance, but with a critical difference. An audio recording is audio, not radio. It’s the physical oscillation of air molecules, not the propagation of electromagnetic waves. The two have virtually nothing to do with each other.
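The phantom beat is simply the difference between the two tones: the identity sin a + sin b = 2 sin((a+b)/2) cos((a-b)/2) shows the sum oscillating at the average frequency inside a slow envelope set by the difference. A minimal numerical check (the 200 Hz tone is an assumed example, chosen so the difference is 7.83 Hz):

```python
import math

f_left, f_right = 200.0, 207.83   # Hz, assumed tones sent to each ear
beat = abs(f_right - f_left)      # perceived beat frequency
print(f"Beat frequency: {beat:.2f} Hz")

# Verify the sum-to-product identity at an arbitrary instant t:
t = 0.0137
a = 2 * math.pi * f_left * t
b = 2 * math.pi * f_right * t
lhs = math.sin(a) + math.sin(b)
rhs = 2 * math.sin((a + b) / 2) * math.cos((a - b) / 2)
assert abs(lhs - rhs) < 1e-9
```

Note that everything here is acoustic arithmetic; nothing in it produces or couples to a 7.83 Hz radio wave.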
[Other salespeople claim] that our bodies’ energy fields need to interact with the Schumann resonance, but can’t because of all the interference from modern society [and so they try to sell devices that supposedly connect our body to the Schumann resonance.]
It’s all complete and utter nonsense. Human bodies do not have an energy field: in fact there’s not even any such thing as an energy field. Fields are constructs in which some direction or intensity is measured at every point: gravity, wind, magnetism, some expression of energy.
Energy is just a measurement; it doesn’t exist on its own as a cloud or a field or some other entity. The notion that frequencies can interact with the body’s energy field is, as the saying goes, so wrong it’s not even wrong.
Another really common New Age misconception about the Schumann resonance is that it is the resonant frequency of the Earth. But there’s no reason to expect the Earth’s electromagnetic resonant frequency to bear any similarity to the Schumann resonance.
Furthermore, the Earth probably doesn’t even have a resonant electromagnetic frequency. Each of the Earth’s many layers is a very poor conductor of radio; combined all together, the Earth easily absorbs just about every frequency it’s exposed to. If you’ve ever noticed that your car radio cuts out when you drive through a tunnel, you’ve seen an example of this.
Now the Earth does, of course, conduct low-frequency waves of other types. Earthquakes are the prime example of this. The Earth’s various layers propagate seismic waves differently, but all quite well. Seismic waves are shockwaves, a physical oscillation of the medium. Like audio waves, these are unrelated to electromagnetic radio waves.
Each and every major structure within the Earth — such as a mass of rock within a continent, a particular layer of magma, etc. — does have its own resonant frequency for seismic shockwaves, but there is (definitively) no resonant electromagnetic frequency for the Earth as a whole.
So our major point today is that you should be very skeptical of any product that uses the Schumann resonance as part of a sales pitch.
The Earth does not have any particular frequency. Life on Earth is neither dependent upon, nor enhanced by, any specific frequency.
Source: skeptoid.com/episodes/4352

