Respiratory viruses (influenza, rhinoviruses, coronaviruses, etc.)
What is a respiratory virus?
They are viruses that affect your breathing passages.
They cause common cold and flu-like symptoms.
They can cause shortness of breath and in more severe cases cause pneumonia.
Some infect mostly the upper respiratory tract – the larynx, vocal cords, and above.
Others infect mostly the lower respiratory tract – below the larynx and vocal cords.
Symptoms
These vary significantly from person to person and may include:
Aching muscles and joints
Cough and sputum
Fever
Headache
Runny nose
Sneezing
Sore throat
Tiredness
Common complications of respiratory viruses include:
Bronchiolitis – inflammation of small air passages in the lungs
Croup – inflammation and swelling of the voice box (larynx), the windpipe (trachea) and the airways (bronchi)
Pneumonia – lung infection with inflammation
Sinusitis – infection or inflammation of the sinuses
What are the common respiratory viruses?
Influenza virus – “the flu”
The CDC estimates that since 2010, influenza has annually resulted in between 9 million and 45 million illnesses, between 140,000 and 810,000 hospitalizations, and between 12,000 and 61,000 deaths.
Respiratory syncytial virus
RSV is a common respiratory virus that usually causes mild, cold-like symptoms. Most people recover in a week or two, but RSV can be serious, especially for infants and older adults.
RSV is the most common cause of bronchiolitis (inflammation of the small airways in the lung) and pneumonia (infection of the lungs) in children younger than 1 year of age in the United States. – CDC
Human parainfluenza viruses (HPIVs)
Human parainfluenza viruses (HPIVs) commonly cause respiratory illnesses in infants and young children. But anyone can get HPIV illness.
Symptoms may include fever, runny nose, and cough. Patients usually recover on their own. However, HPIVs can also cause more severe illness, such as croup or pneumonia.
Metapneumovirus
MPV is associated with 5% to 40% of respiratory tract infections in hospitalized and outpatient children. It is distributed worldwide and, in temperate regions, has a seasonal distribution generally following that of RSV and influenza virus during late winter and spring.
By the age of five, virtually all children worldwide have been exposed. Despite near universal infection during early life, reinfections are common in older children and adults. They may cause mild upper respiratory tract infection (the common cold).
However, premature infants, immunocompromised persons, and older adults >65 years are at risk for severe disease and hospitalization.
– from Wikipedia, Human metapneumovirus
Rhinovirus
The most common viral infectious agent in humans. Main cause of the common cold. Exists in three species with at least 160 recognized types.
Coronaviruses
Coronaviruses are a group of related RNA viruses that cause diseases in mammals and birds. In humans and birds, they cause respiratory tract infections that can range from mild to lethal.
Mild illnesses in humans include some cases of the common cold (which is also caused by other viruses, predominantly rhinoviruses), while more lethal varieties can cause SARS, MERS, and COVID-19.
Adenoviruses
Adenoviruses are non-enveloped viruses (they lack an outer lipid bilayer) with a double-stranded DNA genome. More than 50 distinct types have been found in people.
They usually cause mild respiratory infections, such as the common cold. But they can cause life-threatening multi-organ disease in people with a weakened immune system.
Human bocavirus (HBoV)
HBoV1, discovered in 2005, is strongly implicated in causing some cases of lower respiratory tract infection, especially in young children.
It is the fourth most common virus found in respiratory samples, behind rhinoviruses, respiratory syncytial virus, and adenoviruses. It usually causes the common cold, although it can also cause very dangerous illness.
Several versions of this virus have been linked to gastroenteritis.
The full role of this emerging infectious disease remains to be determined.
– Wikipedia
Main method of transmission is through the air
See this infographic

Image from the paper by Jianjian Wei and Yuguo Li, “Airborne spread of infectious agents in the indoor environment.”
In Deep Cleaning Isn’t a Victimless Crime we read
These days, Goldman is extending his crusade against fomite fear from COVID-19 to other diseases. The old story is that if you make contact with a surface that a sick person touched, and then you touch your eyes or lips, you’ll infect yourself.
While Goldman acknowledges that many diseases, especially bacterial diseases, spread easily from surfaces, he now suspects that most respiratory viruses spread primarily through the air, like SARS-CoV-2 does.
“For most respiratory viruses, the evidence for fomite transmission looks pretty weak,” Goldman said. “With the exception of RSV [respiratory syncytial virus], there are few other respiratory viruses where fomite transmission has been conclusively shown.”
For example, rhinovirus, one of the most common viruses in the world and the predominant cause of the common cold, is probably overwhelmingly spread via aerosols. The same may be true of influenza.
Many experiments that suggest surface transmission of respiratory viruses stack the deck by studying unrealistically large amounts of virus using unrealistically ideal (cold, dry, and dark) conditions for their survival. Based on our experience with SARS-CoV-2, these may not be trustworthy studies.
Deep Cleaning Isn’t a Victimless Crime The CDC has finally said what scientists have been screaming for months: The coronavirus is overwhelmingly spread through the air, not via surfaces. Derek Thompson, The Atlantic, 4/13/2021
Thompson also writes:
It’s quite possible that ALMOST ALL respiratory viruses mostly spread through the air—including rhinovirus (lots of common colds) and the flu. That means the best way to avoid getting sick isn’t power-washing strategies, but ventilation strategies. Think windows over Windex.
Articles
Science Brief: SARS-CoV-2 and Surface (Fomite) Transmission for Indoor Community Environments, CDC, 4/5/2021
People can be infected with SARS-CoV-2 through contact with surfaces. However, based on available epidemiological data and studies of environmental transmission factors, surface transmission is not the main route by which SARS-CoV-2 spreads, and the risk is considered to be low. The principal mode by which people are infected with SARS-CoV-2 is through exposure to respiratory droplets carrying infectious virus.
and
Aerosol Transmission of Rhinovirus Colds, Elliot C. Dick et al., The Journal of Infectious Diseases, Volume 156, Issue 3, September 1987, Pages 442–448, https://doi.org/10.1093/infdis/156.3.442
and
Exaggerated risk of transmission of COVID-19 by fomites, Emanuel Goldman, The Lancet Infectious Diseases, Vol. 20(8), p.892-893, 8/1/2020
Do data (information) have mass?
Is data – information – something physically real? As it turns out, yes. All data has physical reality.
What about performing a computation? Computation is a process on data. Since data has physical reality then all computations are physical processes.
This connects with the laws of thermodynamics! In the 1960s Rolf Landauer realized that whenever we manipulate or erase information, entropy increases.
Data can be stored on any physical object.
Here’s an example in which data is stored with real objects, each of which has mass, but changing data adds no weight:
Get 256 coins. Lay them out on a table.
Use heads to represent a ‘1’ and tails to represent a ‘0’.
Arrange the coins to create a sequence of 1’s and 0’s.
This then encodes 256 bits (32 bytes) of data.
We can erase this data by flipping all of the coins to ‘0’.
The mass of the coins does not change.
Flip the heads and tails all you like. New data, but no added mass.
But now let’s change our storage system. Get 256 empty glasses, lay them out on a table, in a grid. Use an empty glass to represent a 0 and add water to the glass to represent a 1. Here different data would have different mass.
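The contrast between the two storage systems can be sketched in a few lines of Python. The particular masses below (coin, glass, water) are arbitrary assumptions chosen only to make the point concrete:

```python
# Toy model of the two storage schemes described above.
# Coins: flipping heads/tails changes the data but never the total mass.
# Glasses: pouring water in to represent a '1' changes data AND mass.

COIN_MASS_G = 2.5        # assumed mass of one coin, in grams
EMPTY_GLASS_G = 200.0    # assumed mass of one empty glass, in grams
WATER_PER_ONE_G = 100.0  # assumed grams of water added for each '1'

def coin_mass(bits):
    """256 coins weigh the same no matter what pattern they show."""
    return len(bits) * COIN_MASS_G

def glass_mass(bits):
    """Each '1' glass holds water, so the pattern changes the total mass."""
    return len(bits) * EMPTY_GLASS_G + sum(bits) * WATER_PER_ONE_G

zeros = [0] * 256        # all data "erased"
mixed = [1, 0] * 128     # an alternating 1010... pattern

print(coin_mass(zeros) == coin_mass(mixed))   # coins: same mass either way
print(glass_mass(mixed) > glass_mass(zeros))  # glasses: more ones, more mass
```

Changing the data rearranges the coins but adds nothing, while each extra ‘1’ in the glass scheme adds the mass of the water poured in.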
Ways of storing data
Info in ancient cuneiform tablets
(Contract for the sale of a field and a house, in the wedge-shaped cuneiform adapted for clay tablets, Shuruppak, Sumer, c. 2600 BCE)
Letters written on parchment – or any kind of ink on paper, is data.
This is the earliest type of punch card, a way to store data as zeroes and ones.
and here is the Babbage computer, the Difference Engine.
1970s punch card, storing data for a Fortran computer program.
Data can be created and stored with paper and pencil.
Data can be recorded as a series of indentations on a vinyl circular disc, such as a vinyl LP record.
Here we see the needle of a vinyl LP record player going through the grooves.
Hard drives – patterns of magnetized particles
CD, DVD and Blu-Ray disks – patterns of pits burned into a disk.
A Flash drive stores information in patterns of electrical charges
Transistors can store information. They trap electrons like a capacitor.
Does adding data to a hard drive or floppy disk drive change its mass?
For the most part, no. Data is stored by switching the magnetic polarity of tiny particles on one part of the disk. No mass is added or taken away.
Switching the polarity of tiny particles on a disk is just analogous to picking up a magnet and turning it around. Nothing added or taken away.
But, if we look at the physics much more closely –
“Hard drives store data by flipping poles in magnetic domains on the disk – at first glance this means nothing is added or subtracted and thus no weight.
However, that’s not the whole picture. The orientation of those domains matter.
There is less total field energy when the domains are 1010101010 than when they are 11111111 or 00000000. I’m sure everyone is familiar with e=mc^2.
Putting energy into the domains DOES mean mass, albeit an incredibly small amount of it. My physics isn’t up to even trying to estimate the mass but I’m sure it’s beyond anything the most sensitive scale could possibly measure.”
from How much does a gigabyte worth of data physically weigh on a hard disk?
Does adding data to RAM change its mass?
Some forms of RAM store data by adding electrons into certain ultra tiny parts. For all practical purposes there is not any noticeable gain in mass. But since when have we ever restricted ourselves to just practical purposes? 🙂
=== begin quote ===
In RAM, however, bits are comprised of electrons (or lack thereof) and they do have a mass which is about 9.10938215 × 10−31 kg. So for a gigabyte of memory, assuming equal distribution for zero and one bits, we get around
4294967296 × n × 9.10938215 × 10−31 kg
where 4294967296 is the number of one bits in memory (assumed to be 50%) and n is the number of electrons that are, on average, in one bit.
So we can give an estimate of how much mass 1 GiB (gibibyte) of memory would have:
1 GiB, half filled with ones ≈ 3.91 × 10−16 kg = 391 femtograms
1 GiB, completely filled with ones ≈ 7.82 × 10−16 kg = 782 femtograms
1 GB, half filled with ones ≈ 3.64 × 10−16 kg = 364 femtograms
1 GB, completely filled with ones ≈ 7.29 × 10−16 kg = 729 femtograms
So in general you can assume that weight to be pretty unnoticeable (or, with hard disks to be downright nonexistent).
This explanation is from How much does a gigabyte worth of data physically weigh on a hard disk?
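The quoted figures are easy to reproduce. The answer never states n, the average number of electrons per one-bit; the femtogram results it reports are consistent with n = 100,000, which is assumed in this sketch:

```python
# Reproduce the quoted RAM-mass estimates.
ELECTRON_MASS_KG = 9.10938215e-31
N_ELECTRONS_PER_BIT = 1e5   # assumption implied by the quoted figures

def ram_mass_kg(total_bits, fraction_ones=0.5):
    """Mass of the electrons representing the one-bits in memory."""
    one_bits = total_bits * fraction_ones
    return one_bits * N_ELECTRONS_PER_BIT * ELECTRON_MASS_KG

GIB_BITS = 8 * 2**30   # 1 GiB = 8,589,934,592 bits
GB_BITS = 8 * 10**9    # 1 GB  = 8,000,000,000 bits

# 1 kg = 1e18 femtograms
print(f"1 GiB, half ones: {ram_mass_kg(GIB_BITS, 0.5) * 1e18:.0f} fg")
print(f"1 GiB, all ones:  {ram_mass_kg(GIB_BITS, 1.0) * 1e18:.0f} fg")
print(f"1 GB,  half ones: {ram_mass_kg(GB_BITS, 0.5) * 1e18:.0f} fg")
print(f"1 GB,  all ones:  {ram_mass_kg(GB_BITS, 1.0) * 1e18:.0f} fg")
```

Running this gives roughly 391, 782, 364, and 729 femtograms – matching the quoted values.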
All computation is a physical process
The Fundamental Physical Limits of Computation
What constraints govern the physical process of computing? Is a minimum amount of energy required, for example, per logic step? There seems to be no minimum, but some other questions are open.
Charles H. Bennett and Rolf Landauer, Scientific American, v253 n1, p. 48-56, 7/1985
http://web.eecs.umich.edu/~taustin/EECS598-HIC/public/Physical-Limits.pdf
Does Quantum Mechanics Flout the Laws of Thermodynamics?
Vlatko Vedral, Scientific American, 6/1/2011
Everyone who has ever worked with a computer knows that they get hotter the more we use them. Physicist Rolf Landauer argued that this needs to be so, elevating the observation to the level of a principle. The principle states that in order to erase one bit of information, we need to increase the entropy of the environment by at least as much. In other words we need to dissipate at least one bit of heat into the environment (which is just equal to the bit of entropy times the temperature of the environment).
Landauer’s erasure principle has been considered controversial in physics ever since he proposed it in the early ’60s. Was it a new law of physics or just a consequence of some already existing laws? Our new paper argues that in quantum physics, you can, in fact, erase information and cool the environment at the same time. For many physicists, this is tantamount to saying that perpetual motion is possible! What makes it possible is entanglement, but let me not get too far ahead of myself…
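Landauer’s bound is easy to put numbers on. A minimal sketch, assuming a room temperature of 300 K (the temperature is my assumed value; the Boltzmann constant is the exact 2019 SI value):

```python
import math

# Landauer's principle: erasing one bit must dissipate at least
# k_B * T * ln(2) of heat into an environment at temperature T.
K_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T = 300.0            # assumed room temperature, kelvin

energy_per_bit = K_B * T * math.log(2)
print(f"Minimum heat to erase one bit at {T:.0f} K: {energy_per_bit:.3g} J")

# Erasing a full gigabyte (8e9 bits) at this theoretical minimum:
print(f"Minimum heat to erase 1 GB: {8e9 * energy_per_bit:.3g} J")
```

The per-bit figure comes out to about 2.9 × 10⁻²¹ joules – vastly less than real hardware dissipates, which is why actual computers run so much hotter than Landauer’s limit requires.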
Misuse or misunderstanding of science
Science isn’t a position or a person. Rather, science is a method that allows us to test claims.
In science we approach claims skeptically. That doesn’t mean that we don’t believe anything. Rather, to be skeptical means we don’t accept a claim unless we are given compelling evidence.
So while the process of science itself can’t be disingenuous or harmful, certain people have used the word “science” to promote questionable or harmful products.
There are some products which were marketed in the 1800s or early to mid 1900s, which were claimed to be “proven safe by science.” In later years it was found that many of these products never really did what they promised, and that some were even harmful. When this was discovered many people began to think that science was unreliable. They would say things like “Science changes its mind all the time, so why should we believe it?”
That’s a legitimate question, but someone who asks it should listen to the answers – and there are several:
(A) Very often the advertised product simply wasn’t ever proven scientifically to be effective or safe. Salespeople simply lied. American laws on advertising have always been very loose; in many ways the laws on some products are still quite loose today.
So if someone lies about a “scientifically proven product,” this doesn’t mean that science is unreliable. It means that the salespeople were unreliable.
(B) Just as often, when a product is first invented, people have only incomplete information. They may have done some testing, they may have involved some doctors, engineers, or scientists, and they may truly believe that their product is a good one. Sometimes positive effects are apparent immediately but harmful effects take time to show up. When this happens, that’s not fraud. It’s the inevitable result of people developing new things. We don’t always know how they will turn out long down the road.
(C) Some scientists knew of the danger, but it wasn’t made clear to the public at that time. For many years newspapers and radio stations didn’t employ writers with a scientific background. Writers and editors were told about science-related stories, or occasionally investigated such stories, but without a highly trained staff they often couldn’t recognize a story worth pursuing and giving to the public. This was true, for example, of the radium used in many popular products, such as watch dials. I should add that in many ways the situation is repeating itself today: many social media sites used to disseminate news don’t have scientifically educated employees.
As such, issues like this are very important. We need to be very clear in how we discuss them.
This next image is surprising: we see here four advertisements, casually foisting harmful products on the American consumer. Three of these ads are absolutely real. The fourth is technically “fake” – it was created for a popular videogame – but it is (sadly) based on genuine ads touting the supposed medical safety of the product. Can you figure out which is which?
“These ads were not deliberately used to harm people initially, but highlight the consequences of not knowing – of not using science, or of lacking testing technologies that have been developed since the creation of these products.”
Cigarettes
For many years, from the 1800s to the mid-1900s, cigarettes were promoted as a great weight-loss tool. They were said to relieve stress and to help one better digest a meal. Cigarette companies paid medical doctors to endorse certain brands of cigarettes as safe and healthy. A number of supposedly scientific research papers were paid for, conducted, and published by cigarette companies themselves, and their conclusions were always the same: cigarette smoking is safe.
But by the 1950s doctors had observed a huge increase in lung and throat cancer that seemed to correlate with cigarette usage. Slowly, over time, more and more scientific studies were done on this topic. A simple and clear trend emerged:
Every scientific study done by impartial scientists, with all the data open to reviewers, showed that cigarette smoking was strongly linked to cancer.
Every scientific study paid for by cigarette companies, with hand chosen doctors working for those companies, with much data kept hidden, showed that cigarette smoking was safe.
The conclusions were obvious and undeniable: Cigarette smoking really was causing cancer, this was clearly proven by science, and the few individuals who said otherwise were all on the payroll of cigarette companies.
As such we may conclude – science never claimed or proved that cigarette smoking was safe. It was only for-profit cigarette companies that made this claim.
When Cigarette Companies Used Doctors to Push Smoking
“The Doctors’ Choice Is America’s Choice” : The Physician in US Cigarette Advertisements, 1930-1953
When Smoking Was Just What the Doctor Ordered
Over the counter heroin use
Heroin is an opioid. It was first developed in 1895 as a medicine to help treat respiratory diseases.
In some countries, in a highly regulated way, it is used medically to relieve pain, such as during childbirth or a heart attack. It is often used illegally and dangerously, as a recreational drug for its euphoric effects.
For many years, many nations allowed heroin to be sold over-the-counter (without a prescription) as a way to treat pain. Since it was discovered by scientists and sold legally, some people concluded that science had decided it was a safe drug.
However, at this time there were very few, if any, peer-reviewed studies which showed the long term effects of unregulated heroin use.
The Bayer pharmaceutical company started making diacetylmorphine, and its marketing name was heroin. At this point, heroin was available over-the-counter. Heroin was viewed as a cure-all for everything from headaches to the common cold…. At the time, heroin was viewed as a safe alternative to morphine because it was seen as less addictive
By the mid-1800s, opium had become extremely popular, with opium dens located around the world, including in the United States…. Around the 1850s, morphine became available in the U.S. and its use was popular in medicine…. following the Civil War, it started to become clear that morphine had a serious side effect: addiction.
Heroin History Timeline in the U.S., Megan Hull, The Recovery Village, 12/20/2019
By the early 1900s scientists and doctors began to realize that this drug was far more dangerous than initially believed. As data accumulated, people lobbied the government to regulate this substance. The first major law to do so was the Harrison Narcotics Tax Act of 1914. It controlled the sale and distribution of opioids, though it still allowed opioids to be prescribed and sold for medical purposes.
By 1924, the US Congress banned its sale, importation, or manufacture. Heroin is now a Schedule I substance, which makes it illegal for non-medical use in signatory nations of the Single Convention on Narcotic Drugs treaty, including the United States.
Some see this as an example of science falsely saying that a substance was safe, and then changing its mind. That is not so. Sure, in politics changing your position is seen as a weakness; people call it “flip-flopping.” But in science, being open to new ideas is a positive value. Science encourages us to change our minds when evidence reveals a better way of understanding something.
Asbestos
“Asbestos has been mined and used in a variety of materials since Ancient Greece. It wasn’t until around the 1950s that the connection to mesothelioma and lung cancer was made.”
Automobile industry
From the 1900s to the 1960s, the industry falsely claimed that cars were so safe that they didn’t need seatbelts and other safety features.
Radium
Radium was used in lotions, toothpaste, and cosmetics, and sold as a healthy glowing elixir.
How We Realized Putting Radium in Everything Was Not the Answer, Taylor Orci, The Atlantic, 3/7/2013
and this is of great historical interest: Radium Historical Items Catalog, By Buchholz and Cervera, Oak Ridge Institute
In the years following the discovery of radium-226 in 1898 by Madame Curie, radium became a novelty product used in everything from medicinal “cures” to children’s toys. At the time, radium was believed to pose negligible risk due to the radiation, and in fact was believed by many to have health benefits. However, over time the risks became apparent, and the use of radium in consumer products was gradually phased out, with the last common consumer application being in luminescent timepieces during the 1960s.
While radium-containing consumer products are no longer generally produced, many of the historically produced items are in circulation, sold in antique stores, held in private collections and displayed in museums. Record keeping by the manufacturers at the time was poor, and most companies that manufactured the products are no longer in existence. This makes identification of these items and finding applicable information difficult.
Under contract with the U.S. Nuclear Regulatory Commission (NRC), Oak Ridge Associated Universities (ORAU) has compiled this catalog of historical items known to or claimed to contain radium, either as a component of uranium ore or as purified radium.
Facts and ideas from anywhere: The Radium Girls, William Clifford Roberts, Proceedings (Baylor University. Medical Center). 2017 Oct; 30(4): 481–490
DDT
DDT (dichlorodiphenyltrichloroethane) is a colorless, tasteless, and almost odorless crystalline compound. Its insecticidal action was discovered in 1939. It was used to limit the spread of insect-borne diseases like malaria and typhus.
The DDT issue is complex: Many people today believe that (a) DDT is terribly dangerous, (b) its use was only due to profit induced pseudoscience, and (c) Rachel Carson exposed the danger of this compound, and called for it to be banned.
The problem with those ideas is that none of them are quite correct.
The overriding theme of Rachel Carson’s Silent Spring is the powerful—and often negative—effect humans have on the natural world. Carson’s main argument is that pesticides have detrimental effects on the environment; she says these are more properly termed “biocides” because their effects are rarely limited to the target pests.
DDT is a prime example, but other synthetic pesticides—many of which are subject to bioaccumulation—are scrutinized. Carson accuses the chemical industry of intentionally spreading disinformation and public officials of accepting industry claims uncritically.
Most of the book is devoted to pesticides’ effects on natural ecosystems, but four chapters detail cases of human pesticide poisoning, cancer, and other illnesses attributed to pesticides. About DDT and cancer, Carson says only:
In laboratory tests on animal subjects, DDT has produced suspicious liver tumors. Scientists of the Food and Drug Administration who reported the discovery of these tumors were uncertain how to classify them, but felt there was some “justification for considering them low grade hepatic cell carcinomas.” Dr. Hueper [author of Occupational Tumors and Allied Diseases] now gives DDT the definite rating of a “chemical carcinogen.”
Carson predicts increased consequences in the future, especially since targeted pests may develop resistance to pesticides, and weakened ecosystems fall prey to unanticipated invasive species.
The book closes with a call for a biotic approach to pest control. Carson never called for an outright ban on DDT. She said in Silent Spring that even if DDT and other insecticides had no environmental side effects, their indiscriminate overuse was counterproductive because it would create insect resistance to pesticides, making them useless in eliminating the target insect populations:
No responsible person contends that insect-borne disease should be ignored. The question that has now urgently presented itself is whether it is either wise or responsible to attack the problem by methods that are rapidly making it worse.
The world has heard much of the triumphant war against disease through the control of insect vectors of infection, but it has heard little of the other side of the story – the defeats, the short-lived triumphs that now strongly support the alarming view that the insect enemy has been made actually stronger by our efforts. Even worse, we may have destroyed our very means of fighting.
Carson also said that “Malaria programmes are threatened by resistance among mosquitoes”, and quoted the advice given by the director of Holland’s Plant Protection Service: “Practical advice should be ‘Spray as little as you possibly can’ rather than ‘Spray to the limit of your capacity’ … Pressure on the pest population should always be as slight as possible.”
Excerpted from http://en.wikipedia.org/wiki/Silent_Spring
None of this is meant to suggest that all DDT use is safe. Rather, the point is that how it is used and in what quantities matters. There indeed are dangerous consequences to overuse or inappropriate disposal of DDT.
How a shocking environmental disaster was uncovered off the California coast after 70 years, Jeff Berardelli, CBS News, 4/12/2021
Thalidomide
Podcast – Thalidomide: Justice Delayed, Justice Denied, Erin Welsh and Erin Allman Updyke, 9/29/2020
All of this is why the DEA and the EPA were created.
Books
Trust Us We’re Experts: How Industry Manipulates Science and Gambles with Your Future, 2002, Sheldon Rampton and John Stauber
Public relations firms and corporations know well how to exploit your trust to get you to buy what they have to sell: Let you hear it from a neutral third party, like a professor or a pediatrician or a soccer mom or a watchdog group. The problem is, these third parties are usually anything but neutral. They have been handpicked, cultivated, and meticulously packaged in order to make you believe what they have to say—preferably in an “objective” format like a news show or a letter to the editor. And in some cases, they have been paid handsomely for their “opinions.”
Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming
A 2010 non-fiction book by American historians of science Naomi Oreskes and Erik M. Conway. It identifies parallels between the global warming controversy and earlier controversies over tobacco smoking, acid rain, DDT, and the hole in the ozone layer. Oreskes and Conway write that in each case “keeping the controversy alive” by spreading doubt and confusion after a scientific consensus had been reached was the basic strategy of those opposing action. In particular, they show that Fred Seitz, Fred Singer, and a few other contrarian scientists joined forces with conservative think tanks and private corporations to challenge the scientific consensus on many contemporary issues.
Additional sources
Fact check: ‘Trust the science’ critique includes 3 real ads – and one from a video game, Nayeli Lomeli, USA TODAY, 6/30/2021
A “Nico Time” advertisement that promotes smoking during pregnancy is fake. It was posted in a Fandom page called “BioShock Wiki,” which is dedicated to the video game series BioShock. The site wrote that the advertisement was designed by Kat Berkley, a concept artist who worked on the game.
https://www.reddit.com/r/Bioshock/comments/1qiqhg/good_ol_rapture_advertising/
https://bioshock.fandom.com/wiki/Nico-Time
Learning Standards
A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas (2012)
Implementation: Curriculum, Instruction, Teacher Development, and Assessment
“Through discussion and reflection, students can come to realize that scientific inquiry embodies a set of values. These values include respect for the importance of logical thinking, precision, open-mindedness, objectivity, skepticism, and a requirement for transparent research procedures and honest reporting of findings.”
Next Generation Science Standards: Science & Engineering Practices
● Ask questions that arise from careful observation of phenomena, or unexpected results, to clarify and/or seek additional information.
● Ask questions that arise from examining models or a theory, to clarify and/or seek additional information and relationships.
● Ask questions to determine relationships, including quantitative relationships, between independent and dependent variables.
● Ask questions to clarify and refine a model, an explanation, or an engineering problem.
● Evaluate a question to determine if it is testable and relevant.
● Ask questions that can be investigated within the scope of the school laboratory, research facilities, or field (e.g., outdoor environment) with available resources and, when appropriate, frame a hypothesis based on a model or theory.
● Ask and/or evaluate questions that challenge the premise(s) of an argument, the interpretation of a data set, or the suitability of the design
2016 Massachusetts Science and Technology/Engineering Standards
Students will be able to:
* apply scientific reasoning, theory, and/or models to link evidence to the claims and assess the extent to which the reasoning and data support the explanation or conclusion;
* respectfully provide and/or receive critiques on scientific arguments by probing reasoning and evidence and challenging ideas and conclusions, and determining what additional information is required to solve contradictions
* evaluate the validity and reliability of and/or synthesize multiple claims, methods, and/or designs that appear in scientific and technical texts or media, verifying the data when possible.
Vaccines – what does 95% efficacy actually mean?
Covid-19 is one of many types of respiratory viruses.
The mRNA coronavirus vaccines are 95% effective. What does that mean?
Does this mean that you have a 5% chance of being infected and getting very sick (or dying), and a 95% chance of being okay?
Not at all! That “95%” figure really means that a vaccinated person is 95% less likely to be infected than someone who hasn’t been vaccinated at all.
How about a vaccine that is “only” 92% effective? As the math below shows, in the Pfizer trial vaccinated participants had just a 0.04 percent chance of getting COVID!
That’s just a 4 in 10,000 chance – and even then, most of those people won’t get seriously ill. They’ll just have minor symptoms for a few days.
Let’s see the math!
This section excerpted from Coronavirus FAQs, Sheila Mulrooney Eldred, NPR Goats and Soda, 3/12/2021
…The tendency to oversimplify has led many people to the same — mistaken — conclusion that an efficacy rate of 92 percent would mean that of 100 vaccinated people, 8 of them would get sick during a pandemic.
But that’s not the case. Fortunately, a vaccine with a 92 percent efficacy rate actually means your chances of getting the disease is much, much less than 8 percent.
It means that if you were exposed to the disease, your chances of getting infected would be 92 percent less if you were vaccinated than if you weren’t.
…Say you originally had a 10% chance of getting sick without being vaccinated. If you got that vaccine with an efficacy rate of 92%, your chance of getting sick would drop from 10% to less than 1% — 0.8%, to be exact.
In reality, the trials found that the probability of getting sick in the placebo groups was much less than 10%. In the Pfizer trial, for example, it was 0.79% — or less than one per 100 people.
Participants who got the real vaccine had just a .04 percent chance of getting COVID … that’s 4 in 10,000 people.
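A minimal sketch of that arithmetic, using the two numbers quoted above from the Pfizer trial:

```python
# How baseline risk and vaccine efficacy combine (the NPR example above).
placebo_risk = 0.0079        # ~0.79% of the placebo group got COVID
efficacy = 0.95              # 95% relative risk reduction
vaccinated_risk = placebo_risk * (1 - efficacy)
print(f"{vaccinated_risk:.2%}")   # 0.04% – about 4 in 10,000
```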
Coronavirus FAQs: What Is ‘Vaccine Efficacy’?
What does 95% efficacy actually mean? CDC
Vaccine efficacy and vaccine effectiveness measure the proportionate reduction in cases among vaccinated persons.
Vaccine efficacy is used when a study is carried out under ideal conditions, for example, during a clinical trial.
Vaccine effectiveness is used when a study is carried out under typical field (that is, less than perfectly controlled) conditions.
Vaccine efficacy/effectiveness (VE) is measured by calculating the risk of disease among vaccinated and unvaccinated persons and determining the percentage reduction in risk of disease among vaccinated persons relative to unvaccinated persons.
The greater the percentage reduction of illness in the vaccinated group, the greater the vaccine efficacy/effectiveness. The basic formula is written as:
VE = (risk among unvaccinated − risk among vaccinated) ÷ (risk among unvaccinated) × 100
The numerator (risk among unvaccinated − risk among vaccinated) is sometimes called the risk difference or excess risk.
Vaccine efficacy/effectiveness is interpreted as the proportionate reduction in disease among the vaccinated group.
So a VE of 90% indicates a 90% reduction in disease occurrence among the vaccinated group, or a 90% reduction from the number of cases you would expect if they had not been vaccinated.
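Here is the CDC formula applied to some hypothetical trial-style counts (the group sizes and case numbers below are invented purely to illustrate the arithmetic; they are not from any actual trial):

```python
# VE = (risk among unvaccinated - risk among vaccinated) / risk among unvaccinated
# Hypothetical counts, for illustration only.
cases_unvax, n_unvax = 160, 20_000
cases_vax, n_vax = 8, 20_000

risk_unvax = cases_unvax / n_unvax    # 0.80% attack rate in the unvaccinated group
risk_vax = cases_vax / n_vax          # 0.04% attack rate in the vaccinated group
ve = (risk_unvax - risk_vax) / risk_unvax
print(f"VE = {ve:.0%}")               # VE = 95%
```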
from CDC Principles of Epidemiology, Measures of Risk
Related articles
This section has been excerpted from COVID-19 vaccines: What does 95% efficacy actually mean?, Anna Nowogrodzki, Live Science, 2/11/2021
“I think it’s important for people to understand that this is an extremely effective vaccine,” said Brianne Barker, a virologist at Drew University in New Jersey, referring to the Pfizer vaccine. “This is much more effective than you might think.”
One common misunderstanding is that 95% efficacy means that in the Pfizer clinical trial, 5% of vaccinated people got COVID. But that’s not true; the actual percentage of vaccinated people in the Pfizer (and Moderna) trials who got COVID-19 was about a hundred times less than that: 0.04%.
What the 95% actually means is that vaccinated people had a 95% lower risk of getting COVID-19 compared with the control group participants, who weren’t vaccinated.
In other words, vaccinated people in the Pfizer clinical trial were 20 times less likely than the control group to get COVID-19.
That makes the vaccine “one of the most effective vaccines that we have,” Barker told Live Science. For comparison, the two-dose measles, mumps and rubella (MMR) vaccine is 97% effective against measles and 88% effective against mumps, according to the Centers for Disease Control and Prevention (CDC).
The seasonal flu vaccine is between 40% and 60% effective (it varies from year to year, depending on that year’s vaccine and flu strains), but it still prevented an estimated 7.5 million cases of the flu in the U.S. during the 2019-2020 flu season, according to the CDC.
So, if efficacy means some percent fewer cases of COVID-19, what counts as a “case of COVID”? Both Pfizer and Moderna defined a case as having at least one symptom (however mild) and a positive COVID-19 test.
Johnson & Johnson defined a “case” as having a positive COVID-19 test plus at least one moderate symptom (such as shortness of breath, abnormal blood oxygen levels or abnormal respiratory rate) or at least two milder symptoms (such as fever, cough, fatigue, headache, or nausea).
Someone with a moderate case of COVID-19 by this definition could either be mildly affected or be incapacitated and feel pretty sick for a few weeks.
… And none of the three vaccine trials looked at all for asymptomatic COVID-19. “All these efficacy numbers are protection from having symptoms, not protection from being infected,” Barker said.
… But all three trials also used a second, potentially more important, definition of “cases.” What we care most about is protecting people from the worst outcomes of COVID-19: hospitalization and death.
So Moderna, Pfizer and Johnson & Johnson also measured how their vaccines performed against severe disease (which meant severely affected heart or respiratory rate, the need for supplemental oxygen, ICU admission, respiratory failure or death).
All three vaccines were 100% effective at preventing severe disease six weeks after the first dose (for Moderna) or seven weeks after the first dose (for Pfizer and Johnson & Johnson, the latter of which requires only one dose). Zero vaccinated people in any of the trials were hospitalized or died of COVID-19 after the vaccines had fully taken effect.
Beware misleading terminology! The base rate fallacy
Dr. Katelyn Jetelina writes about how newspaper articles and social media discussions about science can be misleading. This is a great example: in countries where many people are already vaccinated against covid-19, some people nonetheless do become infected with various variants of covid-19.
Does this mean that the vaccines are useless? Not at all, in fact quite the opposite. She explains the simple logic of the math:
In her post “In Israel, 50% of infected are vaccinated, and base rate bias,” she writes:
The delta variant of covid-19 has arrived in Israel, and with its arrival, cases are increasing (albeit relatively small). And, this is expected. We’ve seen it in the UK. and India. and Indonesia. And South Africa. And Russia. No country is 100% vaccinated. And this coupled with Delta being more transmissible and preliminary evidence suggesting its ability to escape natural immunity, unvaccinated people and populations are in trouble.
The statistic that’s concerning most (and that’s in the news) is a detail the Director General of the Health Ministry of Israel (Professor Chevy Levy) said during a radio interview. When asked how many of the new COVID19 cases had been vaccinated, Levy said that, “we are looking at a rate of 40 to 50%”.
This must mean the Delta variant is escaping our vaccines, right? When I started digging into the numbers, though, this might not be as alarming as it seems.
This is likely an example of base rate bias in epidemiology (it’s called base rate fallacy in other fields).
Professor Levy said that “half of infected people were vaccinated”. This language is important because it’s very different than “half of vaccinated people were infected”. And this misunderstanding happens all. the. time.
The more vaccinated a population, the more we’ll hear of the vaccinated getting infected. For example, say there’s a community that’s 100% vaccinated. If there’s transmission, we know breakthrough cases will happen. So, by definition, 100% of outbreak cases will be among the vaccinated. It will just be 100% out of a smaller number.
Cue Israel. They are one of the global leaders in vaccinations; 85% of Israeli adults are vaccinated. So, say we have the following scenario:
-100 adult community
-4 COVID19 cases
-50% of cases were among the vaccinated
With an infection rate among the vaccinated of 2% and infection rate of 13% among the unvaccinated, this would give us an efficacy rate of 85%. This is pretty darn close to the clinical trial efficacy rate, meaning the Pfizer vaccine is still working against Delta.
Unfortunately, this gets more complicated. We know the original Israeli outbreaks were in two schools. Because the vast majority of kids in Israel are not vaccinated (only 2-4% because they were just approved), imbalance is introduced. But, I ran the numbers and as long as at least 90% of the adults in the original outbreak were vaccinated, we know the vaccine is still working against Delta. 91% isn’t a farfetched number as teachers (at least in the US) are vaccinated at a much higher rate than the general public.
We need other fundamental details before we start to worry too. Like…
1. What did these outbreaks look like? How many people were at risk? How many people infected? What proportion of the infected were adults vs. kids?
2. How were the cases caught? Was there surveillance testing at the schools? In other words, were these asymptomatic cases? If not, what was the severity of the cases? What was the severity of the vaccinated cases?
3. Were vaccinated cases fully or partially vaccinated? We know 1 dose of vaccines doesn’t work well against Delta.
Bottom Line: I have more questions than answers. And we will (hopefully) get answers to these questions soon. But, there’s a strong possibility that this is a textbook example of base rate bias. Which means I’m optimistic that this is just further evidence the vaccine works against Delta on an individual level. However, this does NOT mean that we should…
About the author of this section: Katelyn Jetelina has a Masters in Public Health and PhD in Epidemiology and Biostatistics. She is an Assistant Professor at a School of Public Health where her research lab resides.
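Dr. Jetelina’s 100-adult scenario can be checked with a few lines of arithmetic (a sketch using the numbers quoted in the excerpt):

```python
# Israel base-rate example: community of 100 adults, 85% vaccinated,
# 4 COVID cases, half of them among the vaccinated.
n_vax, n_unvax = 85, 15
cases_vax, cases_unvax = 2, 2

risk_vax = cases_vax / n_vax        # ~2.4% of the vaccinated were infected
risk_unvax = cases_unvax / n_unvax  # ~13.3% of the unvaccinated were infected
ve = 1 - risk_vax / risk_unvax
print(f"VE = {ve:.0%}")             # 82%; with her rounded rates (2% vs 13%) it comes out near 85%
```

So even with half of the cases occurring among the vaccinated, any given vaccinated person was far less likely to be infected. The base rate (85% of the community vaccinated) is what makes the headline number misleading.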
High effectiveness of covid-19 vaccines, breakthrough cases and the base rate fallacy
Here’s the good news, take-home message:
The vast majority of fully vaccinated persons who have subsequently been infected are either asymptomatic or have very mild symptoms. Only one or two recently infected vaccinated persons have been hospitalized. That’s great news – and exactly what was predicted. This in fact is how all vaccines work. None of them works literally 100% of the time. There will always be a tiny percentage of people who can get infected and then sick. And even then, when such people do get sick, most of the time it is very mild and they just stay home for a day or two. Very few become seriously ill.
More to the point, merely being infected is, by itself, not very meaningful: in fact, a vaccine only does its work when a person *is* infected. A vaccine (for any virus) doesn’t necessarily prevent the virus from entering the body; it primes the immune system so that the person doesn’t succumb to the infection. It’s not a magical protective bubble surrounding a person.
Thinking rationally: examining the base rate fallacy
Why do we rely on specific information over statistics? Base Rate Fallacy explained.
______________________________________________________
This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (added Pub. L. 94-553, Title I, 101, Oct 19, 1976, 90 Stat 2546)
El Nino and La Nina
El Niño is the name given to the periodic warming of the ocean that occurs in the central and eastern Pacific.
At irregular intervals of three to seven years, warm countercurrents in this region become unusually strong and replace normally cold offshore waters with warm equatorial waters.
A major El Niño episode can cause extreme weather in many parts of the world.
What is La Niña? When surface temperatures in the eastern Pacific are colder than average, a La Niña event is triggered that has a distinctive set of weather patterns.
How does it form?
How do the ocean and atmosphere come together to create this? This problem took nearly fifty years to solve, even after all of the basic ingredients were uncovered.
The Rise of El Niño and La Niña
How this affects the USA – SciJinks – what is La Niña?
How this affects Africa – Weather conditions over the Pacific, including an unusually strong La Niña, interrupted seasonal rains for two consecutive seasons. Between July 2011 and mid-2012, a severe drought affected the entire East African region. Said to be “the worst in 60 years”, it caused a severe food crisis across Somalia, Djibouti, Ethiopia and Kenya that threatened the livelihood of 9.5 million people. Many refugees from southern Somalia fled to neighboring Kenya and Ethiopia. Other countries in East Africa, including Sudan, South Sudan and parts of Uganda, were also affected by a food crisis. Many people died.
Live video of El Nino
El Niño, Chris Farley, on Saturday Night Live
The full skit is here 🙂 NBC Saturday Night Live classic clip
Links
19.3 Regional wind systems breezes El Nino PowerPoint
Chap 19 Air Pressure Coriolis Global winds El Nino
19.3 regional wind systems PDF worksheets
19.3 Regional Wind Systems Teacher chapter
Learning Standards
NGSS
HS-ESS2-2. Analyze geoscience data to make the claim that one change to Earth’s surface can create feedbacks that cause changes to other Earth systems.
Disciplinary Core Ideas – ESS2.A: Earth Materials and Systems
Earth’s systems, being dynamic and interacting, cause feedback effects that can increase or decrease the original changes.
Crosscutting concepts: stability and change – Feedback (negative or positive) can stabilize or destabilize a system.
HS-ESS2-4. Use a model to describe how variations in the flow of energy into and out of Earth’s systems result in changes in climate
DCI – ESS2.D: Weather and Climate – ESS2.A: Earth Materials and System
The foundation for Earth’s global climate systems is the electromagnetic radiation from the sun, as well as its reflection, absorption, storage, and redistribution among the atmosphere, ocean, and land systems, and this energy’s re-radiation into space.
Seaspiracy – documentary or propaganda?
Seaspiracy is a 2021 documentary about the impact of fishing on marine wildlife, directed by Ali Tabrizi. The film investigates the effects of plastic marine debris and overfishing around the world and argues that commercial fisheries are the main driver of marine ecosystem destruction. Is this an even-handed piece of journalism, based on science, or is this propaganda? Here we go through criticism of this movie by scientists working in the field, who see this film as misleading and intellectually dishonest.
They also discuss implicit (albeit unintended) racism, with upper-middle-class white people making movies that demand huge numbers of people in the Pacific Islands and off the coasts of Asia and Africa lose their jobs, in order to make everyone vegan.
What teachers should say to our students
Be aware that all documentaries come with some view and bias.
Students should take care to see whether the filmmakers make specific claims, citing peer-reviewed sources, or whether they just more broadly make statements of fact that are hard (or impossible) to source.
If the filmmakers do cite a peer-reviewed article, is that article representative of the field, or is it an outlier that the great majority of scientists disagree with?
Are the filmmakers citing papers that were later retracted?
We should discuss how the peer review process is all part of science. This includes papers being revised, or occasionally withdrawn. Revising or withdrawing a paper, by the way, isn’t a sign of a problem: that is exactly what one would expect to find in an open, transparent system. Problems only arise if flaws are discovered but the author refuses to revise, or if a paper is retracted but a documentarian nonetheless cites the retracted paper without noting that it is no longer considered correct.
Global Aquaculture Alliance rebuttal to Seaspiracy
Seaspiracy film assails fishing and aquaculture sectors that seem ready for a good fight, Lauren Kramer, Global Aquaculture Alliance, 3/26/2021
“We know the producer is trying to convince an audience not to eat seafood. He’s gone into filmmaking with a desired outcome for his audience, and that’s not documentary making, it’s propaganda,” Gavin Gibbons, VP, communications at NFI, told the Advocate. “We know from Tabrizi’s previous movies, Cowspiracy and What The Health, that the facts are very relative when it comes to this filmmaker.”
Soon after its release, NFI began debunking some of the key arguments the film makes. “The idea that the oceans will be empty by 2048 is based on a completely debunked 2006 statistic, refuted by the author of the original study. The 2048 statistic was put to rest by a follow-up report in the journal Science released in 2009 under the title New hope for fisheries,” it noted.
New hope for fisheries. Scientists document prospects for recovery, call for more global action, AAAS, 7/30/2009
Seaspiracy director Tabrizi interviews Richard Oppenlander, owner of a vegan company and animal rescue sanctuary, who endorses the call to ban fishing in 30 percent of the oceans by 2030 based on his calculation that “less than 1 percent of our oceans are being regulated,” a point that NFI retorted is “not only inaccurate, it’s nonsensical.”
In his coverage of illegal, unregulated and unreported (IUU) fishing in Africa, Tabrizi claims that one in every three wild-caught fish imported into the United States was caught illegally and therefore sold illegally, a statistic that prominent U.S. fisheries researcher Ray Hilborn wrote was not credible; the retraction of the paper behind it has been a long, drawn-out process.
Pramod et al. methods to estimate IUU are not credible, Ray Hilborn et al., Marine Policy, Volume 108, October 2019, 103632
https://www.sciencedirect.com/science/article/abs/pii/S0308597X19303318
and
Retraction drama continues, Max Mossler, Sustainable Fisheries, University of Washington, 7/14/2019
Response by Christina Hicks
Environmental social scientist at Lancaster Environment Centre, adjunct at JCU
Unnerving to discover your cameo in a film slamming an industry you love & have committed your career to. I’ve a lot to say about #seaspiracy- but won’t. Yes there are issues but also progress & fish remain critical to food & nutrition security in many vulnerable geographies.
Absolutely they raise important issues that need addressing, but there was no real conversation (intersectional or otherwise) and their conclusion-to stop eating fish a) doesn’t address the systematic injustices & b) threatens livelihoods and food security.
Here is a resource put together by academics at UW. I work on fisheries contributions to food and nutrition security. There are important messages in the film. And we do need to challenge corporate control. I just don’t think all fishers are the villains
Rebuttal by Josette Emlen Genio
Sustainable Markets Consultant at Sustainable Fisheries Partnership (SFP)
She writes
No scientist would support the assertion that all fish stocks will be collapsed by 2048. There are threats, however.
https://sustainablefisheries-uw.org/fisheries-2048/
“The latest FAO State of World Fisheries and Aquaculture report (25) indicates that the fraction of overfished stocks has increased since 2000 (from 27% to 33%), while this study suggests that abundance of stocks is increasing.”
“Effective fisheries management instrumental in improving fish stock status,” Ray Hilborn et al., Proc Natl Acad Sci USA, 2020 Jan 28;117(4):2218-2224. doi: 10.1073/pnas.1909726116. Epub 2020 Jan 13.
Thoughts by Francisco Blaha
An institutional fisheries advisor. IUU, PSM & Labor issues for FFA/FAO/NZMFAT & others.
Here it is, for all of those who tell me to watch “Seaspiracy”: I started it and got fed up very soon… It is a kick in the guts for most of the people I work with here in the WCPO who are doing the right thing and managing their fisheries. Stand-outside-and-point-fingers stuff. Of course, there are many problems! No one doubts that. But things are also progressively working in many regions, like the WCPO, and I choose to focus my work on those. “Gloom sells but does not help.”
Furthermore, to be totally honest: I’m over the setup where the “bad guys” are predominantly Asian, the “victims” predominantly black/brown, and the “good guys” talking about it and saving the ocean are predominantly white. While I’m sure it is well intended, it still drags in cliché stereotypes and racist overtones.
As for the science background research of the film… as an example… a couple of minutes on google would have shown him that even the lead author of the paper retracted the claim.
Yes, I understand you may choose to not eat fish for whatever reasons you choose to believe. It is your privilege to have a choice. Yet all food production systems have impacts, and it is easy to dismiss one when your livelihood does not depend on it, as it does for most Pacific Islanders.
These countries are managing their fisheries sustainably because they are capable and understand, better than anyone else, the implications of a failure. They don’t need the uncalled-for opinions of privileged people telling them that what they can scientifically prove doesn’t matter.
FFA 2019 Tuna Economics Indicators Brochure
Click to access FFA%202019%20Tuna%20Economic%20Indicators%20Brochure%202019.pdf
The western and central Pacific tuna fishery: 2019 overview and status of stocks, Fisheries, Aquaculture and Marine Ecosystems
https://fame1.spc.int/en/component/content/article/251
Rebuttal by Josette Emlen Genio
I renewed my Netflix subscription just for this, and I was disappointed. Besides the many inaccuracies, I have a gazillion thoughts and sentiments about this documentary, but I’ll be more interested to hear what my fellow colleagues in marine conservation NGOs, many of which have been discredited in the film, have to say😞 But here’s my 2 cents (beware of spoilers!):
While I agree on several of the cases they presented, you cannot ask people to just “stop eating fish and go vegan” (yes that’s exactly the docu’s message) WITHOUT considering the socio-economic impacts of this in MANY fisheries-dependent, food-deficit communities. Overfishing and Illegal, Unreported and Unregulated (IUU) fishing are not only driven by greed, but also by POVERTY.
There’s a distinct line between industrial and artisanal fishing. It’s easy to say “boycott seafood” when options are afforded to you, or when you do not understand the complexities of the struggles and plight of fishermen. In many coastal communities in Asia and Latin America, the oceans are their LIFEBLOOD, providing their MAIN if not ONLY source of livelihood and food security. More than 90% of the world’s fishers are NOT from industrial fishing fleets – they are smallholder, subsistence fishers – and they stand to benefit when consumers choose more sustainable and responsibly caught seafood.
When it comes to sustainability, the type, size, source, and harvest method of fish always matter. Eating matang-baka or tanigue vs “industrial” salmon or tuna will GREATLY VARY in terms of impacts. And marine conservation NGOs are working hard, so consumers have informed choices. Drastic, blanket recommendations will have drastic, unimaginable consequences. Remember that.
Accusations of racism
Magnus Johnson writes:
I dare them to tell small-scale fishers, especially the ones in developing countries, that they must stop eating fish and doing the very thing that keeps them alive. This kind of approach is just another example of the white-savior complex. I am still enraged!
The documentary outright says that the large-scale fishing fleets are taking the food from the small-scale fishers and causing hunger. The dumb “just don’t eat fish” message is obviously made for viewers who are 90% first-world rich people NOT dependent on fish at all.
Le Chatelier’s principle
In the early parts of a chemistry class we think of a chemical reaction as a one-time event: either the compounds react, or they don’t.
But quite often the reality is dynamic: Chemical A and B combine to make AB…. but AB breaks apart back into A and B. Then those individual A and B can eventually recombine again into AB.
So on a microscopic level, individual reactions never cease.
Yet at the macroscopic level, the reaction seems to have come to a stop.
What does happen is that, at any given pressure and temperature, we end up with an equilibrium: there will be a constant, certain amount of separate pieces, and a constant, certain amount of combined pieces.
We can write a ratio of [combined pieces] compared to [separate pieces].
This ratio is called an equilibrium constant.
Here is a visual of an analogous situation, not about chemical reactions but about locations: we create a ratio of how much is on one side compared to how much is on the other side.
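As a sketch (not tied to any real reaction, with made-up rate constants), we can simulate A + B ⇌ AB and watch the concentrations settle to a ratio that stops changing, even though the individual forward and reverse reactions never cease:

```python
# Toy simulation of A + B <-> AB reaching dynamic equilibrium.
# kf and kr are illustrative rate constants, not real data.
kf, kr = 1.0, 0.25      # forward and reverse rate constants
A = B = 1.0             # starting concentrations of the separate pieces
AB = 0.0                # starting concentration of the combined pieces
dt = 0.001              # small time step

for _ in range(100_000):
    forward = kf * A * B * dt   # A + B -> AB
    reverse = kr * AB * dt      # AB -> A + B
    A += reverse - forward
    B += reverse - forward
    AB += forward - reverse

K = AB / (A * B)        # equilibrium constant: combined over separate
print(round(K, 2))      # converges to kf/kr = 4.0
```

At equilibrium the forward rate kf·[A][B] equals the reverse rate kr·[AB], so the ratio [AB]/([A][B]) settles to kf/kr no matter where the concentrations start.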
Online lessons
CK-12 Chemistry LeChatelier’s Principle
CK-12 LeChatelier’s Principle and the Equilibrium Constant
Dynamic Equilibrium and Le Chatelier’s Principle
Opentextbc.ca Shifting Equilibria: Le Chatelier’s Principle
Libretexts Chemistry – Le Chatelier’s Principle
Here is a fantastic infographic by Compound Interest
Apps & interactives
PhET apps – Reactions & Rates, and Reversible Reactions
interactives CK-12 Scroll down to “Flat vs Fizzy Soda”
elearning at Cal Poly Pomona – Kinetics, Equilibrium, and then Le Chatelier.
PLIX Le Châtelier’s Principle and the Equilibrium Constant
The Law of Mass Action, Wolfram
Le Chatelier’s Principle in Chemical Equilibrium, Wolfram
Constructing an equilibrium expression
See the lesson here Dynamicscience.com equilibrium4
Learning Standards
NGSS
HS-PS1-6. Refine the design of a chemical system by specifying a change in conditions that would produce increased amounts of products at equilibrium. Clarification Statement: Emphasis is on the application of Le Chatelier’s Principle and on refining designs of chemical reaction systems, including descriptions of the connection between changes made at the macroscopic level and what happens at the molecular level.
Massachusetts
HS-PS1-6. Design ways to control the extent of a reaction at equilibrium (relative amount of products to reactants) by altering various conditions using Le Chatelier’s principle. Make arguments based on kinetic molecular theory to account for how altering conditions would affect the forward and reverse rates of the reaction until a new equilibrium is established.*
Massachusetts State Assessment Boundaries:
• Calculations of equilibrium constants or concentrations are not expected in state assessment.
• State assessment will be limited to simple reactions in which there are only two reactants and to specifying the change in only one variable at a time.
Who invented the…Engine, Auto, Radio, TV, Computer, Smartphone, GPS?
Who invented the …
power loom? telephone?
internal combustion engine? automobile?
radio? television? computer?
smartphone? GPS?
technology for organ transplantation?
modern light bulb?
Myth – Each of these was invented by a single person.
Reality – None of these were developed by just one person. Instead, each technology developed over time – with contributions from many people.
Consider a recent meme shared on social media about Dr. Gladys West. It is well-intentioned, but ends up concealing as much as it reveals.
While doing important work, she didn’t invent GPS – no one person did.
Instead, we follow the contributions of many people. Here, from left to right are Friedwardt Winterberg, Bill Guier, Frank McClure, and George Weiffenbach.
And here are Roger Easton, Ivan Getting, Bradford Parkinson, and Gladys West.
Let’s look at the story more deeply, which covers decades:
One of the fathers of GPS was Friedwardt Winterberg. Back in 1955 he proposed a test of Einstein’s theory of general relativity.
Winterberg realized that it should be possible to detect the predicted slowing of time in a strong gravitational field; this could be done by using atomic clocks placed in Earth orbit inside artificial satellites.
Contrary to the predictions of classical physics, relativity predicts that the clocks on GPS satellites would be seen by observers on Earth to run about 38 microseconds faster per day than clocks on the ground.
His proposal was eventually verified experimentally by Hafele and Keating in 1971, by flying atomic clocks on commercial jets.
Without taking such relativistic corrections into account, any position calculated from satellite technology – such as GPS – would quickly drift into error. The error in estimated position would be as much as 10 kilometers (6 miles) per day.
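The size of that drift is easy to estimate: GPS positions come from radio signal travel times, so a clock error translates into a distance error at the speed of light. A back-of-envelope sketch:

```python
# Daily position drift from the uncorrected 38-microsecond relativistic clock offset.
c = 299_792_458               # speed of light, m/s
clock_offset_per_day = 38e-6  # seconds of clock drift accumulated per day
drift_m = c * clock_offset_per_day
print(f"{drift_m / 1000:.1f} km per day")   # 11.4 km per day
```

That lines up with the roughly 10 kilometers per day quoted above.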
The next people who helped create what would become GPS were William Guier and George Weiffenbach. They worked at Johns Hopkins University’s Applied Physics Laboratory (APL).
When the Soviet Union launched the first artificial satellite (Sputnik 1) in 1957, they decided to monitor its radio transmissions.
Guier and Weiffenbach realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit.
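To first order, the received frequency is shifted by f₀·v/c, where v is the satellite’s velocity toward or away from the listener. Sputnik’s beacon transmitted near 20 MHz; the radial velocity below is an illustrative value, not a measurement:

```python
# Sketch of the Doppler shift Guier and Weiffenbach exploited.
c = 299_792_458      # speed of light, m/s
f0 = 20e6            # beacon frequency, Hz (Sputnik transmitted near 20 MHz)
v_radial = 7000.0    # illustrative radial velocity, m/s (sweeps through zero at closest approach)
shift = f0 * v_radial / c
print(f"{shift:.0f} Hz")   # 467 Hz
```

As the satellite passes overhead, the radial velocity sweeps from approaching to receding, and the shape of that frequency-versus-time curve is enough to pin down the orbit.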
In 1958, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem – pinpointing the user’s location, given the satellite’s location.
At the time, the US Navy was developing the submarine-launched Polaris missile, which required them to know the submarine’s location.
This led Guier and Weiffenbach, along with other scientists at APL to develop the TRANSIT system. Transit was used by the U.S. Navy to provide location information to its Polaris ballistic missile submarines.
It was also used as a navigation system by Navy surface ships, as well as for surveying. This system went online in 1960.
The next father of GPS would be Roger L. Easton of the Naval Research Laboratory. During the 1960s and early 1970s he developed a navigational system with passive ranging, circular orbits, and space-borne high precision clocks placed in satellites.
Ivan A. Getting of The Aerospace Corporation
In the 1950s, as head of research and engineering at Raytheon Corp., Waltham, Mass., Getting led a project to develop a mobile ballistic missile guidance system called Mosaic, which was to work like the Loran system.
But Getting envisioned another concept. Though the railroad mobile version of the intercontinental ballistic missile was cancelled, he realized that if a similar system were used, one that based the transmitters on satellites, and if enough satellites were lofted so that four were always in sight, it would be possible to pinpoint locations in three dimensions anywhere on earth. This theory led to Navstar.
For GPS, Also Thank Ivan Getting; He Got “the Damn Thing” Funded, Tekla Perry, IEEE Spectrum, 4/19/2018
Bradford Parkinson of the Applied Physics Laboratory was the lead architect, advocate and developer of GPS. He was given full, direct control of the development of the demonstration system, which included satellites, a global ground control system, nine types of user receivers, and an extensive land, sea and air test program.
Gladys West analyzed data from satellites, putting together altimeter models of the Earth’s shape. She became project manager for the Seasat radar altimetry project, the first satellite that could remotely sense oceans.
From the mid-1970s through the 1980s, West worked on precise calculations to model the shape of the Earth – a geoid – an ellipsoid with irregularities.
Generating an extremely accurate model required her to employ complex algorithms to account for variations in gravitational, tidal, and other forces that distort Earth’s shape. This was essential for the Global Positioning System (GPS).
Whew…. and all that is just the short version of who invented the GPS. The longer version would literally take a book, a dozen hours of video, and include dozens more people.
Student project
Students will work individually or in groups, researching and then creating a presentation on the evolution of any of these technologies.
You may propose another technology to investigate; clear it with your teacher first.
power loom? telephone?
internal combustion engine? automobile?
radio? television? computer?
smartphone? GPS?
technology for organ transplantation?
modern light bulb?
Many ways to create your report!
Select one of these options
Create a written report using MS Word/Google Docs. This will have images, text, perhaps short animations if you like. You can use the built-in voice-to-text tool to transcribe your words.
Create a video, using your favorite software & apps. This will have images, text, perhaps short animations if you like. You’ll narrate it. Share the project as a video file with us.
Create a PowerPoint/Google Slides presentation. This will have images, text, perhaps short animations if you like.
Create an Infographic. There are many websites and apps out there to do this; choose your favorite. This will have images, text, perhaps short animations if you like.
Resources
Engineering & Technology History, People, and Milestones PBS Learning Media
Learning Standards
NGSS Science
HS-PS3-3. Design, build, and refine a device that works within given constraints to convert one form of energy into another form of energy.*
Crosscutting concepts – Influence of Science, Engineering and Technology on Society and the Natural World. Modern civilization depends on major technological systems. Engineers continuously modify these technological systems by applying scientific knowledge and engineering design practices to increase benefits while decreasing costs and risks.
Disciplinary Core Idea Progression Matrix – ETS2.B Manufacturing
Grade 6-8. The design and structure of any particular technology product reflects its function. Products can be manufactured using common processes controlled by either people or computers.
Grade 9-10 – Manufacturing processes can transform material properties to meet a need. Particular manufacturing processes are chosen based on the product design, materials used, precision needed, and safety.
History C3 Framework and the National Social Studies Standards
D2.Eco.13.9-12. Explain why advancements in technology and investments in capital goods and human capital increase economic growth and standards of living.
D2.Geo.7.6-8. Explain how changes in transportation and communication technology influence the spatial connections among human settlements and affect the diffusion of ideas and cultural practices.
D2.His.1.9-12. Evaluate how historical events and developments were shaped by unique circumstances of time and place as well as broader historical contexts.
Common Core
CCSS.ELA-LITERACY.W.8.4
Produce clear and coherent writing in which the development, organization, and style are appropriate to task, purpose, and audience.
CCSS.ELA-Literacy.WHST.9-10.6
CCSS.ELA-Literacy.WHST.11-12.6
Use technology, including the Internet, to produce, publish, and update individual or shared writing products, taking advantage of technology’s capacity to link to other information and to display information flexibly and dynamically.

Types of time travel
Before discussing the physics and science of time travel, we first have to define what we mean by time travel. Here I am presenting several possible types of time travel.
I. What is time? Does time really even exist?
What is time? Where does time come from? In what way is time objective, something actually out there? In what ways is time not real, but just a way that humans use to describe our perception of the universe?
To some extent, understanding these questions requires knowing something about thermodynamics, especially the second law of thermodynamics. Beyond that, a deeper understanding of the nature of time may rely on understanding modern physics, especially quantum mechanics.
The following article discusses entropy and the thermodynamic arrow of time; the question of whether time really flows; the concept of a block universe; and presentism versus eternalism – What is time?
II. Types of time Travel
This text is currently based on the 2008 version of the Wikipedia article.
1. There is a single fixed history, which is self-consistent and unchangeable.
In this view, everything happens on a single timeline which doesn’t contradict itself.
1.1 This can be simply achieved by applying the Novikov self-consistency principle, named after Dr. Igor Dmitrievich Novikov, Professor of Astrophysics at Copenhagen University. The principle states that the timeline is totally fixed, and any actions taken by a time traveler were part of history all along, so it is impossible for the time traveler to “change” history in any way.
The time traveler’s actions may be the cause of events in their own past though, which leads to the potential for circular causation and the predestination paradox; for examples of circular causation, see Robert A. Heinlein’s story “By His Bootstraps”.
The Novikov consistency principle assumes certain conditions about what sort of time travel is possible. Specifically, it assumes either that there is only one timeline, or that any alternative timelines (such as those postulated by the many-worlds interpretation of quantum mechanics) are not accessible.
Given these assumptions, the constraint that time travel must not lead to inconsistent outcomes could be seen merely as a tautology, a self-evident truth that cannot possibly be false.
However, the Novikov self-consistency principle is intended to go beyond just the statement that history must be consistent, making the additional nontrivial assumption that the universe obeys the same local laws of physics in situations involving time travel that it does in regions of space-time that lack closed timelike curves. This is clarified in the paper “Cauchy problem in spacetimes with closed timelike curves”, where the authors write:
That the principle of self-consistency is not totally tautological becomes clear when one considers the following alternative: The laws of physics might permit CTCs; and when CTCs occur, they might trigger new kinds of local physics which we have not previously met. … The principle of self-consistency is intended to rule out such behavior. It insists that local physics is governed by the same types of physical laws as we deal with in the absence of CTCs: the laws that entail self-consistent single valuedness for the fields. In essence, the principle of self-consistency is a principle of no new physics. If one is inclined from the outset to ignore or discount the possibility of new physics, then one will regard self-consistency as a trivial principle.
1.2 Alternatively, new physical laws take effect regarding time travel that thwarts attempts to change the past (contradicting the assumption mentioned in 1.1 above that the laws that apply to time travelers are the same ones that apply to everyone else).
These new physical laws can be as unsubtle as rejecting time travelers who try to change the past by pulling them back to the point from which they came, as in Michael Moorcock’s The Dancers at the End of Time, or rendering the traveler a noncorporeal phantom unable to physically interact with the past, as in some pre-Crisis Superman stories and Michael Garrett’s “Brief Encounter” in Twilight Zone Magazine, May 1981.
2. History is flexible and is subject to change (Plastic Time)
2.1 Changes to history are easy and can impact the traveler, the world, or both
Examples include Back to the Future, Back to the Future II, and Doctor Who.
In some cases (such as Doctor Who) any resulting paradoxes can be devastating, threatening the very existence of the universe. In other cases the traveler simply cannot return home. The extreme version of this (Chaotic Time) is that history is very sensitive to changes, with even small changes having large impacts, as in Ray Bradbury’s A Sound of Thunder.
2.2 History is change-resistant in direct proportion to the importance of the event; small, trivial events can be readily changed, but large ones take great effort.
In the Twilight Zone episode “Back There,” a traveler tries to prevent the assassination of President Lincoln and fails, but his actions turn the man who had originally been the butler of the traveler’s club into a rich tycoon.
In The Time Machine (2002 film), it is explained via a vision why Hartdegen could not save his sweetheart Emma: doing so would have meant he never developed the time machine he used to try to save her.
In The Saga of Darren Shan, major events in the past cannot be changed, but minor events can be affected. Under this model, if a time traveler were to go back in time and kill Hitler, another Nazi would simply take his place and commit the same actions, leaving the broader course of history unchanged.
3. Many-worlds interpretation and Parallel universe (fiction)
These terms are often used interchangeably in fiction but mechanically they differ:
The many-worlds interpretation says time travel creates a new, coexisting alternate history, while the parallel-universe idea says that the traveler goes to an already existing parallel world.
In either case the traveler’s original home reality continues to exist unaffected. These versions of time travel are sometimes placed under one of the two above categories.
James P. Hogan’s The Proteus Operation explains parallel-universe time travel in chapter 20, where Einstein explains that all the outcomes already exist, and all time travel does is change which already existing branch you will experience.
Star Trek has a long tradition of using the 2.1 mechanic, as seen in the episodes City on the Edge of Forever, Tomorrow is Yesterday, Time and Again (Star Trek: Voyager), Future’s End, Before and After (Star Trek: Voyager), Endgame (Star Trek: Voyager), and as late as Enterprise’s Temporal Cold War.
The Star Trek episode Parallels had an example of what Data called quantum realities. His exact words on the matter were “But there is a theory in quantum physics that all possibilities that can happen do happen in alternate quantum realities,” leaving it up to the viewer as to the exact nature of these quantum realities.
Michael Crichton’s novel Timeline takes the approach that time travel is really travel to an already existing parallel universe where time passes more slowly than in our own, but changes in any of these parallel universes affect the main timeline, making it behave as if it were a type 2 universe.
Discussion
While a Type 1 universe will prevent a grandfather paradox, it doesn’t prevent paradoxes in other aspects of physics, such as the predestination paradox and the ontological paradox (which GURPS Infinite Worlds calls the Free Lunch Paradox).
The predestination paradox is where the traveler’s actions create some type of causal loop, in which some event A in the future helps cause event B in the past via time travel, and the event B in turn is one of the causes of A.
For instance, a time traveler might go back to investigate a specific historical event like the Great Fire of London, and their actions in the past could then inadvertently end up being the original cause of that very event.
Examples of this kind of causal loop are found in Timemaster, a novel by Dr. Robert Forward, the Twilight Zone episode “No Time Like the Past”, the 1980 Jeannot Szwarc film Somewhere In Time (based on Richard Matheson’s novel Bid Time Return), the Michael Moorcock novel Behold the Man, and Harry Potter and the Prisoner of Azkaban.
The Novikov self-consistency principle can also result in an ontological paradox (also known as the knowledge or information paradox) where the very existence of some object or information is a time loop.
The philosopher Kelley L. Ross argues in Time Travel Paradoxes that in an ontological paradox scenario involving a physical object, there can be a violation of the second law of thermodynamics. Ross uses Somewhere in Time as an example where Jane Seymour’s character gives Christopher Reeve’s character a watch she has owned for many years, and when he travels back in time he gives the same watch to Jane Seymour’s character 60 years in the past.
Time travel to the future in standard physics
There are various ways in which a person could “travel into the future” in a limited sense: the person could set things up so that in a small amount of his own subjective time, a large amount of subjective time has passed for other people on Earth.
For example, an observer might take a trip away from the Earth and back at relativistic velocities, with the trip only lasting a few years according to the observer’s own clocks, and return to find that thousands of years had passed on Earth.
This form of “travel into the future” is theoretically allowed using the following methods:
Using velocity-based time dilation under the theory of special relativity, for instance:
Traveling at almost the speed of light to a distant star, then slowing down, turning around, and traveling at almost the speed of light back to Earth. (see the Twin paradox)
Using gravitational time dilation under the theory of general relativity, for instance:
Residing inside of a hollow, high-mass object;
Residing just outside of the event horizon of a black hole, or sufficiently near an object whose mass or density causes the gravitational time dilation near it to be larger than the time dilation factor on Earth.
Additionally, it might be possible to see the distant future of the Earth using methods which do not involve relativity at all, although it is even more debatable whether these should be deemed a form of “time travel”:
Hibernation
Suspended animation
Time Dilation
Time dilation is permitted by Albert Einstein’s special and general theories of relativity. These theories state that, relative to a given observer, time passes more slowly for bodies moving quickly relative to that observer, or bodies that are deeper within a gravity well. For example, a clock which is moving relative to the observer will be measured to run slow in that observer’s rest frame; as a clock approaches the speed of light it will almost slow to a stop, although it can never quite reach light speed so it will never completely stop.

stars rotating overhead camping timelapse
http://i.imgur.com/SLf5dW1.gifv
For two clocks moving inertially (not accelerating) relative to one another, this effect is reciprocal, with each clock measuring the other to be ticking slower. However, the symmetry is broken if one clock accelerates, as in the twin paradox where one twin stays on Earth while the other travels into space, turns around (which involves acceleration), and returns—in this case both agree the traveling twin has aged less.
General relativity states that time dilation effects also occur if one clock is deeper in a gravity well than the other, with the clock deeper in the well ticking more slowly; this effect must be taken into account when calibrating the clocks on the satellites of the Global Positioning System, and it could lead to significant differences in rates of aging for observers at different distances from a black hole.
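Both effects can be estimated with simple first-order formulas. A rough back-of-the-envelope calculation for a GPS satellite clock (the constants and orbital radius below are standard published values; the approximations assume v is much less than c and a weak gravitational field):

```python
import math

c  = 299_792_458.0      # speed of light, m/s
GM = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
r_earth = 6.371e6       # mean Earth radius, m
r_sat   = 2.656e7       # GPS orbital radius, m

v_sat = math.sqrt(GM / r_sat)  # circular orbital speed, about 3.9 km/s

day = 86400.0  # seconds
# Special relativity: the moving satellite clock runs slow, by roughly v^2/(2c^2).
sr_us_per_day = v_sat**2 / (2 * c**2) * day * 1e6
# General relativity: the clock higher in the gravity well runs fast.
gr_us_per_day = GM / c**2 * (1 / r_earth - 1 / r_sat) * day * 1e6

print(f"velocity effect: satellite clock loses {sr_us_per_day:.1f} microseconds/day")
print(f"gravity effect:  satellite clock gains {gr_us_per_day:.1f} microseconds/day")
print(f"net: the satellite clock gains about {gr_us_per_day - sr_us_per_day:.1f} microseconds/day")
```

The two effects pull in opposite directions, but the gravitational one wins: the net drift is roughly 38 microseconds per day, which would accumulate into kilometers of position error if the GPS system did not correct for it.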
Time perception
Time perception can be apparently sped up for living organisms through hibernation, where the body temperature and metabolic rate of the creature are reduced. A more extreme version of this is suspended animation, where the rates of chemical processes in the subject would be severely reduced.
Time dilation and suspended animation only allow “travel” to the future, never the past, so they do not violate causality, and it’s debatable whether they should be called time travel.
However time dilation can be viewed as a better fit for our understanding of the term “time travel” than suspended animation, since with time dilation less time actually does pass for the traveler than for those who remain behind, so the traveler can be said to have reached the future faster than others, whereas with suspended animation this is not the case.
Mutable timelines
Time travel in a Type 2 universe is much more complex. The biggest problem is how to explain changes in the past. One method of explanation is that once the past changes, so too do the memories of all observers. This would mean that no observer would ever observe the changing of the past (because they will not remember changing the past).
This would make it hard to tell whether you are in a Type 1 universe or a Type 2 universe.
You could, however, infer such information by knowing if a) communication with the past were possible or b) it appeared that the time line had never been changed as a result of an action someone remembers taking, although evidence exists that other people are changing their time lines fairly often.
An example of this kind of universe is presented in Thrice Upon a Time, a novel by James P. Hogan. The Back to the Future trilogy films also seem to feature a single mutable timeline (see the Back to the Future FAQ for details on how the writers imagined time travel worked in the movies’ world). By contrast, the short story “Brooklyn Project” by William Tenn provides a sketch of life in a Type 2 world where no one even notices as the timeline changes repeatedly.
In type 2.1, attempts to change the timeline succeed only in altering the manner in which decisive events occur; the final outcome in the bigger scheme cannot be brought to a different conclusion.
As an example, the movie Deja Vu depicts a paper note sent to the past with vital information to prevent a terrorist attack. However, the vital information results in the killing of an ATF agent, but does not prevent the terrorist attack; the very same agent died in the previous version of the timeline as well, albeit under different circumstances. Finally, the timeline is changed by sending a human into the past, arguably a “stronger” measure than simply sending back a paper note, which results in preventing both a murder and the terrorist attack. As in the Back to the Future movie trilogy, there seems to be a ripple effect too as changes from the past “propagate” into the present, and people in the present have altered memory of events that occurred after the changes made to the timeline.
The science fiction writer Larry Niven suggests in his essay “The Theory and Practice of Time Travel” that in a type 2.1 universe, the most efficient way for the universe to “correct” a change is for time travel to never be discovered, and that in a type 2.2 universe, the very large (or infinite) number of time travelers from the endless future will cause the timeline to change wildly until it reaches a history in which time travel is never discovered.
However, many other “stable” situations might also exist in which time travel occurs but no paradoxes are created; if the changeable-timeline universe finds itself in such a state, no further changes will occur, and to the inhabitants of the universe it will appear identical to the type 1.1 scenario. This is sometimes referred to as the “Time Dilution Effect”.
Few if any physicists or philosophers have taken seriously the possibility of “changing” the past except in the case of multiple universes, and in fact many have argued that this idea is logically incoherent, so the mutable timeline idea is rarely considered outside of science fiction.
Also, deciding whether a given universe is of Type 2.1 or 2.2 cannot be done objectively, as the categorization of timeline-invasive measures as “strong” or “weak” is arbitrary and up to interpretation: an observer can disagree about a measure being “weak” and might, in the absence of context, argue instead that simply a mishap occurred which then led to no effective change.
An example would be the paper note sent back to the past in the film Deja Vu, as described above. Was it a “too weak” change, or was it just a local-time alteration which had no extended effect on the larger timeline? As the universe in Deja Vu seems not entirely immune to paradoxes (some arguably minute paradoxes do occur), both versions seem to be equally possible.
Alternate histories
In Type 3, any event that appears to have caused a paradox has instead created a new time line. The old time line remains unchanged, with the time traveler or information sent simply having vanished, never to return. A difficulty with this explanation, however, is that conservation of mass-energy would be violated for the origin timeline and the destination timeline.
A possible solution to this is to have the mechanics of time travel require that mass-energy be exchanged in precise balance between past and future at the moment of travel, or to simply expand the scope of the conservation law to encompass all timelines. Some examples of this kind of time travel can be found in David Gerrold’s book The Man Who Folded Himself and The Time Ships by Stephen Baxter, plus several episodes of the TV show Star Trek: The Next Generation.
Molecular Orbitals
Content objective (What are we learning & why?)
Lewis theory and the octet rule are not enough to describe the shapes of molecules and many of their properties.
To go beyond such limitations we learn molecular orbital theory.
Prerequisites (What do we need to know before starting this unit?)
Lewis structures; the octet rule; covalent & ionic bonds
sub-atomic particles; s, p, d, and f orbitals
the wave nature of matter; Schrödinger model of the atom
Shorthand notation reminder
e- = electron
Introduction
By this time it may be no surprise to you that the name of this theory – molecular orbitals – is a misnomer. There are no orbitals involved.
We should really call this
“Three dimensional electron-clouds, overlapping with other three dimensional electron-clouds, to make even more complicated and pretty electron-clouds theory”
But that’s way too many words. So “molecular orbitals” it is 😉
Remember, electrons are not solid objects like billiard balls.
And e- don’t really orbit an atom’s nucleus.
Electrons are better described as rippling waves.
How does the Schrödinger equation create orbitals?
When we interact with e- in certain ways, sure they have particle-like properties.
But most of the time they have wave-like properties.
If you feel like it, you can learn a bit about quantum mechanics here.
What does this mean? When atoms get close to each other, the 3D wave function of one e- overlaps with the 3D wave function of another e-.
This creates constructive interference and destructive interference:
High parts of one wave combine with high parts of another wave to make even higher waves.
A high part of one wave can be canceled out by hitting a low point of another wave.
Electrons work like this – except they have three dimensional waves (the GIF above is only 2D.)
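We can sketch this combining of waves numerically. Here is a toy one-dimensional model, using Gaussian bumps as stand-ins for real atomic wave functions (the shapes and the spacing of the two “nuclei” are made up for illustration):

```python
import numpy as np

x  = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]

def orbital(center):
    """Toy 1s-like wave function: a normalized Gaussian centered on a nucleus."""
    psi = np.exp(-(x - center) ** 2)
    return psi / np.sqrt(np.sum(psi ** 2) * dx)

psi_a = orbital(-1.0)  # "atom A" at x = -1
psi_b = orbital(+1.0)  # "atom B" at x = +1

bonding     = psi_a + psi_b  # constructive: amplitude piles up between the nuclei
antibonding = psi_a - psi_b  # destructive: the waves cancel at the midpoint

mid = len(x) // 2  # index of x = 0, halfway between the two "nuclei"
print("e- density at midpoint, bonding:    ", round(bonding[mid] ** 2, 3))
print("e- density at midpoint, antibonding:", round(antibonding[mid] ** 2, 3))
```

The in-phase sum puts electron density between the two nuclei; the out-of-phase difference puts exactly zero density at the midpoint, which is the 1D version of the nodal plane we will meet below.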
In this unit we’re going to see what happens to the shape of orbitals when atoms come close enough to bond with each other.
===========================================
This next section has been adapted from Prentice Hall Chemistry by Wilbraham, Staley, Matta and Waterman.
Sigma Bonds
Atomic orbitals can combine to form a molecular orbital that is symmetrical around the axis connecting atomic nuclei. This is called a sigma bond.
We use the Greek letter sigma (σ).
Covalent bonding results from an imbalance between the attractions and repulsions of the nuclei and e- involved.
This next image is from Valence Bond Theory, LibreTexts
Because their charges have opposite signs, the nuclei and e- attract each other.
Because their charges have the same sign, nuclei repel other nuclei, and e- repel other e-.
In a hydrogen molecule (H2), the nuclei repel each other, as do the e-.
In a bonding molecular orbital of hydrogen, however, the attractions between the H nuclei and the e- are stronger than the repulsions.
The balance of all the interactions between the H atoms is thus tipped in favor of holding the atoms together.
The result is a stable, diatomic molecule of H2.
Atomic p orbitals can also overlap to form molecular orbitals.
A fluorine atom, for example, has a half-filled 2p orbital.
When two fluorine atoms combine, the p orbitals overlap to produce a bonding molecular orbital.
There is a high probability of finding a pair of e- between the positively charged nuclei of the two fluorines.
The fluorine nuclei are attracted to this region of high e- density.
This attraction holds the atoms together in the fluorine molecule (F2).
The overlap of the 2p orbitals produces a bonding molecular orbital that is symmetrical when viewed around the F⎯F bond axis connecting the nuclei.
Therefore, the F⎯F bond is a sigma bond.
Pi bonds, π bonds
“Pi” is symbolized by the Greek letter π.
In the sigma bond of the F2 molecule, the p atomic orbitals overlap end-to-end.
In some molecules, however, orbitals can overlap side-by-side.
The side-by-side overlap of atomic p orbitals produces pi molecular orbitals.
When a pi molecular orbital is filled with two electrons, a pi bond results.
In a pi bond, the bonding e- are most likely to be found in sausage-shaped regions above and below the bond axis of the bonded atoms.
It is not symmetrical around the F⎯F bond axis.
Atomic orbitals in pi bonding overlap less than in sigma bonding.
Therefore, pi bonds tend to be weaker than sigma bonds.
===========================================
Bonding and antibonding
When orbitals interact, the result can be bonding or antibonding.
Bonding molecular orbitals
These form when the interactions between the orbitals are constructive (in-phase).
They are lower in energy than the orbitals that combine to produce them.
Antibonding molecular orbitals
These form when the interactions between the orbitals are destructive (out-of-phase).
The destructive interference creates a long, thin, region where the probability of finding an e- is effectively zero. We call this region a nodal plane.
In an antibonding orbital, the e- density is concentrated outside the region between the two nuclei.
They are higher in energy than the orbitals that combine to produce them.
Do both bonding and antibonding orbitals exist in the same molecule at the same time?
Yes. They both can develop as atoms come together to form a molecule. Both exist at the same time.
The resultant behavior of the molecule depends on how all the orbitals – bonding and antibonding – add together.
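The energy split between a bonding and an antibonding orbital can be shown with the simplest possible model: a two-orbital Hamiltonian in the style of Hückel theory. The alpha and beta values below are made-up illustrative numbers, not measured data:

```python
import numpy as np

# Two atomic orbitals, psi_A and psi_B, interacting.
alpha = -11.0  # energy of an e- sitting in either atomic orbital (eV, illustrative)
beta  = -2.5   # interaction energy between the two overlapping orbitals (eV, illustrative)

H = np.array([[alpha, beta],
              [beta,  alpha]])

energies, coeffs = np.linalg.eigh(H)  # eigenvalues come back in ascending order

# Lower energy (alpha + beta): the in-phase, bonding combination.
# Higher energy (alpha - beta): the out-of-phase, antibonding combination.
print("bonding MO energy:    ", energies[0])
print("antibonding MO energy:", energies[1])
```

Two atomic orbitals in, two molecular orbitals out: one pushed below the atomic energy (bonding) and one pushed above it (antibonding), which is why both always exist in the same molecule at the same time.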
Let’s watch Pi orbitals develop
Here we see the pi bonding orbital forming as p orbitals, from two atoms moving closer, slowly come together.
Here we see two p orbitals come together to form what is known as the antibonding pi orbital. Notice that a nodal plane develops!
These two animations were created by Mohammad Alhudaithi using Wolfram Alpha. See Visualizing Molecular Orbitals for One Electron Diatomic Molecules.
Example: two O atoms bonding
Here we see 2 O atoms bonding together to create an O2 molecule.
Each atom has its own three-dimensional e- orbitals.
As the atoms get closer the wave functions overlap. The subsequent constructive and destructive interference creates a new three dimensional shape, one for the molecule as a whole.
The original 2s and 2p atomic orbitals merge to create Sigma and Pi orbitals. These bind the atoms together.
The 1s orbitals do not combine and still show the individual atoms.
This GIF is from O2 Molecular Orbitals Animation at Wikimedia by Kilohn Limahn.
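Filling the O2 molecular orbital diagram also lets us compute a bond order: half the difference between bonding and antibonding electrons. The filling order below is the standard one for O2; note the two lone e- left in the antibonding pi* orbitals, which is how MO theory explains why O2 is paramagnetic, something a simple Lewis structure cannot do:

```python
# O2 has 12 valence electrons to place in the molecular orbitals.
# Standard filling order for O2: sigma2s, sigma*2s, sigma2p, pi2p, pi*2p
bonding_e     = 2 + 2 + 4  # sigma2s + sigma2p + the two pi2p orbitals
antibonding_e = 2 + 2      # sigma*2s + one e- in each of the two pi*2p orbitals

bond_order = (bonding_e - antibonding_e) / 2
print(bond_order)  # -> 2.0, the familiar O=O double bond
```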
________________________________________
Deep thoughts
Because arguments based on atomic orbitals focus on the bonds formed between valence electrons on an atom, they are often said to involve a valence-bond theory.
The valence-bond model can’t adequately explain the fact that some molecules contain two equivalent bonds with a bond order between that of a single bond and a double bond.
The best it can do is suggest that these molecules are mixtures, or hybrids, of the two Lewis structures that can be written for these molecules.
This problem, and many others, can be overcome by using a more sophisticated model of bonding based on molecular orbitals.
Molecular orbital theory is more powerful than valence-bond theory because the orbitals reflect the geometry of the molecule to which they are applied. But this power carries a significant cost in terms of the ease with which the model can be visualized.
Molecular Orbital Theory, Purdue, Chemical Education Division Groups, Bodner Research Web, General Chemistry Help, The Covalent bond
________________________________________
Deep thoughts
Molecular Orbital theory (MO) is the most important quantum mechanical theory for describing bonding in molecules. It is an approximate theory (as any theory that utilizes “orbitals”), but it is a very good approximation of the bonding.
The MO perspective on electrons in molecules is very different from that of a localized bonding picture such as valence bond (VB) theory.
In VB we describe particular bonds as coming from the overlap of orbitals on atomic centers.
In MO this idea is not completely gone, but now rather than just looking at individual bonds, MO describes the whole molecule as one big system.
The orbitals from MO theory are spread out over the entire molecule rather than being associated with a bond between only two atoms.
Each MO can have a particular shape such that some orbitals have greater electron density in one place or another, but in the end the orbitals now “belong” to the molecule rather than any particular bond.
For diatomic molecules (which we look at a lot), the VB picture and the MO picture are very similar. This is because the whole molecule is simply two atoms bonded together. The differences become more apparent when we look at MO in larger molecules.
Molecular orbitals, Chemistry 301 , Univ of Texas
________________________________________
Teaching molecular orbitals with the relationships analogy:
This is a great lesson which starts off simple and then builds a series of analogies that eventually lets you understand the topic:
From the introduction – “A lot of people say they’re happy being single, and I believe that many likely are. But in the back of their mind of many single people is the thought that if they just found the right person, they might be even happier – or less unhappy, which is a crappy way to look at it psychologically but necessary if you wish to draw a diagram where a “happy couple” is occupying a “potential energy well”, below.”
and then analogies and diagrams grow from here…
Bonding And Antibonding Pi Orbitals, by James Ashenhurst, Master organic chemistry
________________________________________
Relating molecular orbital theory to quantum mechanics and standing waves
The Lewis Structure approach provides an extremely simple method for determining the electronic structure of many molecules. It is a bit simplistic, however, and does have trouble predicting structures for a few molecules.
Nevertheless, it gives a reasonable structure for many molecules and its simplicity to use makes it a very useful tool for chemists.
A more general, but slightly more complicated approach is the Molecular Orbital Theory. This theory builds on the electron wave functions of Quantum Mechanics to describe chemical bonding.
To understand MO Theory let’s first review constructive and destructive interference of standing waves starting with the full constructive and destructive interference that occurs when standing waves overlap completely.
Molecular Orbital Theory by Philip J. Grandinetti
________________________________________