KaiserScience

Start here


Understanding Modern Physics: Electromagnetism to Relativity

I’m linking to some rather excellent lessons on modern physics from the School of Physics – The University of New South Wales, Sydney, Australia.

website – newt.phys.unsw.edu.au/einsteinlight/index.html

[Image: gravity warping space-time]

From “The Elegant Universe”, PBS series NOVA, 2003.

1. GALILEO – Mechanics and Galilean relativity (Multimedia above right, smaller html version here)

Related Links

2. MAXWELL – Electricity, magnetism and relativity (Multimedia above right, smaller html version here)

Related Links

3. EINSTEIN – The principle of Special Relativity (Multimedia above right, smaller html version here)

Related Links

4. TIME DILATION – How relativity implies time dilation and length contraction (Multimedia above right, smaller html version here)

Related Links

5. E = mc² – How relativistic mechanics leads to E = mc² (Multimedia above right, smaller html version here)

Related Links

6. BEYOND RELATIVITY (Multimedia version, or smaller html version)

Related Links

 

 


How did we develop modern physics?

For over 2,000 years, great thinkers have studied the Heavens and the Earth, searching for unifying principles that explain how our universe works.

By the time of the scientific revolution and the Enlightenment, we began to discover a pattern of interconnected principles that apparently explained all phenomena ever observed in our universe.

This tested, reliable description of our universe has come to be known as classical physics. By any measure, classical physics has been an extraordinary success. It includes the laws of optics, electromagnetic radiation, Newton’s laws of motion and the law of gravity, and the laws of thermodynamics.

By the late 1800s classical physics had been so successful at describing nearly everything observed, that many scientists had come to believe that we had discovered all that could be known, and that physics was nearly at an end.

The universe followed a set of basically comprehensible, classical laws, which worked like clockwork.

[Image: a kinetic orrery sculpture]

All that was left was for physicists to make ever-more-accurate measurements, and tidy up a few “loose ends” that couldn’t yet be explained.

A few scientists discovered, however, that these loose ends simply couldn’t be explained by any of the known laws of physics. No amount of ingenious thinking could find a way to explain these odd phenomena.

For instance:

  • According to classical physics, electrons should not be able to orbit an atom’s nucleus: they should give off radiation, lose energy, and spiral into the nucleus, causing every atom in the universe to collapse in a nanosecond. Yet this obviously doesn’t happen.

  • Light was proven to travel in the form of waves, yet Einstein’s explanation of the photoelectric effect proved that light travelled as discrete particles (“photons”).  How could light be both a wave and a particle at the same time?

  • Radioactive elements would spontaneously break down into lighter weight elements, but in a random process that could only be described statistically, not deterministically.

  • When elements are heated, they give off only certain frequencies of light (“spectra”).  Why would some frequencies be given off, but not others?

  • Electrons around an atom could absorb or release certain amounts of radiation, or multiples of these amounts, but not any amount in between. How could particles have one amount of energy, or a higher amount, but nothing in between? (See the sketch after this list.)

    • That’s like saying that a car can travel at 50 mph or 100 mph, but not at any speed in-between.  Cars would magically jump from 50 to 100 mph, without any speed in-between. Wouldn’t this be nonsense?  Yet for electrons it was observed to be true!
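Here is a small, optional sketch (in Python) of what “only certain frequencies” looks like in practice. It uses the standard result, not derived in the text above, that hydrogen’s allowed electron energies are E_n = −13.6 eV / n²; a photon is emitted only when the electron jumps between two of these levels, so only a handful of sharp spectral lines appear:

```python
# Allowed electron energies in hydrogen (electron-volts): E_n = -13.6 / n^2
# A photon is emitted when the electron drops from a higher level to a lower one.
H_TIMES_C = 1239.84   # Planck's constant x speed of light, in eV*nm

def photon_wavelength_nm(n_high, n_low):
    """Wavelength of the photon emitted in the n_high -> n_low jump."""
    energy_eV = 13.6 * (1.0 / n_low**2 - 1.0 / n_high**2)
    return H_TIMES_C / energy_eV

# The visible (Balmer) lines: jumps that end on level n = 2
for n_high in (3, 4, 5, 6):
    print(f"{n_high} -> 2 : {photon_wavelength_nm(n_high, 2):.0f} nm")

# Output: 656, 486, 434 and 410 nm -- a few sharp lines, and nothing in between.
```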

Three Failures of Classical Physics, Dr. Bradley Carroll

All of these odd phenomena were observed more and more often. No explanation consistent with the known laws of physics could account for any of them. Over a 40-year period, between 1880 and 1920, there was a tremendous revolution in science that produced what we now call modern physics: quantum mechanics, special relativity, and general relativity.

Modern physics reveals a radically new understanding of the universe that, at its core, shows us that our everyday perception of reality is entirely wrong.

Yet do apples now fall up when dropped?  Does electricity no longer flow in circuits?  Of course not.  Since the universe still operates as it always has, there must be some link between the classical world and the relativistic, quantum world.

Consider Newton’s laws of motion. In Newtonian (classical) physics, the momentum of a moving object equals its mass times its velocity.

p = m·v

Let’s say we have an electron moving at 0.98 c (98% of the speed of light.)  What is its momentum? According to classical physics it must be:

p = m·v = (9.11 × 10⁻³¹ kg) × 0.98 × (3 × 10⁸ m/s)

   = 2.68 × 10⁻²² kg·m/s

In particle accelerators we use very strong electric and magnetic fields to accelerate and steer charged particles; we really can make electrons travel at such ultra-high speeds! When we do so, we find that the momentum of an electron traveling at 0.98 c is about five times greater than this! This sort of thing has been tested again and again. For very high speeds, Newton’s laws of physics fail to give accurate results! Yet Newton’s laws obviously work extremely well for all practical purposes. What is going on?

As discovered by Albert Einstein, Newton’s laws of physics are actually a mathematical subset of a more general law of physics, relativistic physics.  Relativistic physics always gives the correct answer at all speeds, while Newton’s laws are seen to be an approximation of relativity, an approximation that works extremely well.

The relativistic formula for momentum turns out to be this:

p = m·v / √(1 − v²/c²)

(Formula image from the HyperPhysics website.)

That looks very different from p = m·v. But look more closely. What happens when v (the velocity of an object) is much less than c, the speed of light? (And note that even 10,000 miles an hour is very small compared to c!) In this case, v²/c² becomes very, very small. It becomes so small that we can treat it as zero.

Then the denominator for this equation becomes 1, and the equation reduces to the classical momentum equation!  So in this sense, Newton’s laws of physics are included in relativity!
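To see both limits at once, here is a minimal numerical sketch in Python. The electron mass, the rounded speed of light, and the 0.98 c speed are the values used above; everything else is straightforward arithmetic:

```python
import math

c = 3e8                 # speed of light, m/s (rounded value used above)
m_electron = 9.11e-31   # electron mass, kg

def p_classical(m, v):
    return m * v

def p_relativistic(m, v):
    return m * v / math.sqrt(1 - v**2 / c**2)

# An electron at 98% of the speed of light:
v_fast = 0.98 * c
print(p_classical(m_electron, v_fast))     # ~2.68e-22 kg*m/s, the value computed above
print(p_relativistic(m_electron, v_fast))  # ~1.35e-21 kg*m/s -- about 5 times larger

# An everyday speed: 10,000 miles per hour is about 4,470 m/s
v_slow = 4470.0
print(p_relativistic(m_electron, v_slow) / p_classical(m_electron, v_slow))
# ~1.0000000001 -- at ordinary speeds the two formulas agree almost perfectly.
```

The ratio in the last line is just the factor 1/√(1 − v²/c²), so it does not depend on the mass; the same near-perfect agreement holds for baseballs and spacecraft, which is why Newton’s laws work so well in everyday life.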

Newton’s laws have a domain in which they are applicable (i.e. give correct results) and outside of this domain they don’t function. So we look for a wider theory that encompasses Newton’s laws, but with a wider domain of applicability.

Special relativity is the set of physical laws that include Newton’s laws of motion, but work in a wider domain of applicability.

General relativity is the set of physical laws that include Special Relativity, but work in a wider domain of applicability.

Quantum mechanics is the set of physical laws that include Newton’s laws of motion, and optics, but work in a wider domain of applicability.

Problem: The descriptions of reality given by general relativity and quantum mechanics are incompatible with each other. That’s bad. But within their domains of applicability, both have been fantastically accurate! That’s good. This means that there must be an even deeper, more fundamental law of physics that includes both relativity and QM.

Quantum gravity is the postulated fundamental law that includes both QM and relativity.   The search for a theory of quantum gravity is one of the great quests of 21st century science.

Quotes

“If quantum mechanics hasn’t profoundly shocked you, you haven’t understood it yet.” – Niels Bohr

“We know nothing except through logical analysis, and if we reject that sole connection with reality, we might as well stop trying to be adults and retreat into the capricious dream-world of infantility.”

H.P. Lovecraft, in a letter sent to Robert E. Howard, 8/16/1932

Catapult and Trebuchet build project

A catapult is any one of a number of non-handheld mechanical devices used to throw a projectile a great distance without the aid of an explosive substance—particularly various types of ancient and medieval siege engines.

[Image: a catapult and a trebuchet]

The name is the Latinized form of the Ancient Greek καταπέλτης – katapeltes, from κατά – kata (downwards, into, against) and πάλλω – pallo (to poise or sway a missile before it is thrown.) [from Wikipedia]

Ideas on how to build them at home

KnightForHire: How to build simple catapults

Quotes

Today’s Latin lesson:

“Cum catapultae proscriptae erunt tum soli proscripti catapultas habebunt.”
( “When catapults are outlawed, only outlaws will have catapults.” )

“Catapultam habeo. Nisi pecuniam omnem mihi dabis, ad caput tuum saxum immane mittam”
( “I have a catapult. Give me all your money, or I will fling an enormous rock at your head.” )

“If you lived in the Dark Ages, and you were a catapult operator, I bet the most common question people would ask is, ‘Can’t you make it shoot farther?’ No. I’m sorry. That’s as far as it shoots.”
– Jack Handey, Deep Thoughts, Saturday Night Live

Build an onager, ballista or trebuchet.

Grading rubric. The project is worth 100 points.

Timeliness: Late projects lose 5 points per day.

A. Catapults use torsion (energy stored in a twisted rope or other material). Do not merely use a stretched elastic (e.g., a rubber band).

If you build a trebuchet then you will need to use a pivoting beam and a counterweight.

B. It will have some kind of trigger or switch. (Without such a trigger, you would merely have a large slingshot.)

C. The payload range will be nearly constant (each payload lands within 15% of the other payloads).

D. It will have adjustable firing: one setting will yield a shorter range (at least 4 feet), while another setting yields a longer range (at least 8 feet). (A rough range-estimate sketch appears after this list.)

E. The weight limit is 10 pounds.

F. The longest allowable dimensions of height, length and width are 50 centimeters for each.
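Not part of the rubric, but a planning aid: if you treat the payload as a simple projectile launched from near ground level and ignore air resistance, its range is R = v²·sin(2θ)/g for launch speed v and launch angle θ. The short Python sketch below inverts that formula to estimate the launch speed needed for the 4-foot and 8-foot settings; the 45-degree angle and the flat-ground assumption are illustrative, not requirements:

```python
import math

g = 9.8           # gravitational acceleration, m/s^2
FT_PER_M = 3.281  # feet per meter (approximate)

def launch_speed(range_ft, angle_deg=45):
    """Speed (m/s) needed to reach range_ft, assuming level ground and no air drag."""
    range_m = range_ft / FT_PER_M
    return math.sqrt(range_m * g / math.sin(math.radians(2 * angle_deg)))

for target_ft in (4, 8):   # the short-range and long-range settings above
    print(f"{target_ft} ft needs about {launch_speed(target_ft):.1f} m/s at 45 degrees")
# Roughly 3.5 m/s for 4 feet and 4.9 m/s for 8 feet. Changing the launch angle,
# the counterweight, or how far the torsion skein is wound is how you make the
# range adjustable.
```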

Scoring

100 points Machine built according to the above characteristics

– 20 points Minimum range is not met.

– 20 points Too large or too heavy.

– 10 points Firing range is not adjustable.

– 10 points Uses a stretched elastic material (e.g. rubber band) as the only source of power. (Not applicable for trebuchets, of course.)

– 10 points No trigger.

– 5 points Payload range is not constant

Catapult animations

Redstone projects.com: Catapult animations

[Animation: a trebuchet firing]

Earth’s Magnetic Field

The Earth has a magnetic field. When we use a compass, we make use of this field. So we’re tempted to view the Earth as a big rock with a giant bar magnet stuck through it.

But this isn’t at all how it really works: The Earth has a solid inner core surrounded by a molten outer core of iron and nickel. Electric charges move through this conducting metal – and the motion of charges, as we will learn in this chapter, creates a magnetic field!

The Earth itself is slowly spinning, so we end up with slow-moving currents within the Earth. These currents affect the flow of electrons, thus affecting the resulting magnetic field.

Here is the “obvious” model of Earth’s magnetic field (it’s wrong)

[Image: a compass needle aligning with Earth’s magnetic field]

The red pointer in a compass is attracted by Earth’s own magnetism (sometimes called the geomagnetic field—”geo” simply means Earth).

As English scientist William Gilbert explained about 400 years ago, Earth behaves like a giant bar magnet with one pole up in the Arctic (near the north pole) and another pole down in Antarctica (near the south pole).

Earth’s magnetic field is actually quite weak compared to the “macho” forces like gravity and friction that really dominate our lives.

For a compass to be able to show up the relatively tiny effects of Earth’s magnetism, we have to minimize the effects of these other forces.

That’s why compass needles are:

* lightweight (so gravity has less effect on them)

* mounted on frictionless bearings (so less resistance for the magnetic force to overcome)

http://www.explainthatstuff.com/how-compasses-work.html
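To get a feel for how weak this field is, here is a rough back-of-the-envelope sketch in Python. It models Earth as the simple bar-magnet dipole described above, using the standard textbook value of about 8 × 10²² A·m² for Earth’s dipole moment (an outside number, not from the article); the answer, a few tens of microteslas at the surface, is thousands of times weaker than a typical refrigerator magnet:

```python
import math

MU_0 = 4 * math.pi * 1e-7   # permeability of free space, T*m/A
M_EARTH = 8.0e22            # Earth's magnetic dipole moment, A*m^2 (approximate)
R_EARTH = 6.371e6           # Earth's mean radius, m

def dipole_field_tesla(r, magnetic_latitude_deg):
    """Field strength of an ideal dipole at distance r and the given magnetic latitude."""
    lat = math.radians(magnetic_latitude_deg)
    return (MU_0 * M_EARTH) / (4 * math.pi * r**3) * math.sqrt(1 + 3 * math.sin(lat)**2)

print(dipole_field_tesla(R_EARTH, 0) * 1e6)    # ~31 microtesla at the magnetic equator
print(dipole_field_tesla(R_EARTH, 90) * 1e6)   # ~62 microtesla near the magnetic poles
```

Measured surface values run from roughly 25 to 65 microteslas, so the ideal-dipole picture gets the size of the real field about right, even though (as the excerpt below explains) the field is actually generated by motion in the molten outer core rather than by a literal bar magnet.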

 ____________________________

From “Where the Earth’s magnetic field comes from,” by Chris Rowan:

The Earth’s magnetic field may approximate to a simple dipole, but explaining precisely how that dipole is generated and maintained is not simple at all. The field originates deep in the Earth, where temperatures are far too high for any material to maintain a permanent magnetisation.

The dynamism that is apparent from the wandering of the magnetic poles with respect to the spin axis (secular variation), and the quasi-periodic flips in field polarity, also suggest that some process is actively generating and maintaining the geomagnetic field. Geophysicists therefore look to the most dynamic region in the planetary depths, the molten outer core, as the source of the force that directs our compass needles…

The Earth’s interior generates a magnetic field. It reaches out into space.


This magnetic field protects us from some types of radiation

Earth’s North geographic pole is actually a south magnetic pole

The “north” pole of a compass – by definition – is pulled to a “south” magnetic pole.

If we hold a compass in our hands, and call the part pointing to the land of Polar bears “north”, then we’d have to call the part attracting it “south.”

[Image: Earth’s north geographic pole is a south magnetic pole]
_______________________________

How a planet becomes a magnet

Earth’s magnetic field single-handedly protects life on this planet from a deadly case of solar wind-burn, By Bernie Hobbs

http://www.abc.net.au/science/articles/2011/11/09/3359365.htm

Magnetic field reversals

The magnetic field of the Earth is not stable; it has flip-flopped throughout geologic time.

Evidence: (to be added)

“In the meantime, scientists are working to understand why the magnetic field is changing so dramatically. Geomagnetic pulses, like the one that happened in 2016, might be traced back to ‘hydromagnetic’ waves arising from deep in the core [1]. And the fast motion of the north magnetic pole could be linked to a high-speed jet of liquid iron beneath Canada [2].”

Earth’s magnetic field is acting up and geologists don’t know why. Nature, January 2019.

[1] Geomagnetic acceleration and rapid hydromagnetic wave dynamics in advanced numerical simulations of the geodynamo, Aubert, Julien, Geophys. J. Int. 214, 531–547 (2018).

[2] An accelerating high-latitude jet in Earth’s core, Livermore, P. W., Hollerbach, R. & Finlay, C. C., Nature Geosci. 10, 62–68 (2017).

_______________________________

App: The solar wind and Earth’s magnetic field

http://esamultimedia.esa.int/multimedia/edu/PlanetaryMagneticFields.swf

Learning about Earth’s magnetic field: ESA’s Swarm mission

http://www.esa.int/Our_Activities/Observing_the_Earth/The_Living_Planet_Programme/Earth_Explorers/Swarm/ESA_s_magnetic_field_mission_Swarm

Elements necessary for life

Major elements – CHONSP

Carbon – Used as the major building unit of all organic molecules.

Hydrogen – major component of water. Major component of all organic molecules.

Oxygen – major component of water. Must be transported by our red blood cells.

Nitrogen – needed in all amino acids and proteins. Needed in chlorophyll, which is necessary for photosynthesis.

Sulphur – Used in fats, body fluids, skeletal minerals, and most proteins.

Phosphorus – Necessary to make DNA and RNA. Also a component of bones and teeth.

[Image: periodic table highlighting the elements used by living organisms]

From: Microbial Genomics and the Periodic Table, Lawrence P. Wackett, Anthony G. Dodge and Lynda B. M. Ellis

 

There are many essential trace elements in humans

Arsenic – “Despite its poisonous reputation, [arsenic] may be a necessary ultratrace element for humans. It is a necessary ultratrace element for red algae, chickens, rats, goats, and pigs. A deficiency results in inhibited growth.” (*)

Boron – essential for cell membrane characteristics and transmembrane signaling

Calcium ions are essential for muscle contractions and the clotting of blood. Necessary for cell walls, and bones.

Chlorine – Digestive juices in the stomach contain hydrochloric acid.

Chromium – essential trace element that potentiates insulin action and thus influences carbohydrate, lipid and protein metabolism.

“Chromium is an essential trace element and has a role in glucose metabolism. It seems to have an effect in the action of insulin. In anything other than trace amounts, chromium compounds should be regarded as highly toxic.” (*)

“Cobalt salts in small amounts are essential to many life forms, including humans. It is at the core of a vitamin called vitamin-B12. “ (*)

“Copper is essential for all life, but only in small quantities. It is the key component of redox enzymes and of haemocyanin.” (*)

Fluorine forms a salt with calcium. This salt makes the teeth and bones stronger.

“Iodine is an essential component of the human diet and in fact appears to be the heaviest required element in the diet. Iodine compounds are useful in medicine.” (*)

Iron – used in hemoglobin molecules, which allow your blood to carry oxygen. Iron is only about 0.004 percent of your body mass.

Magnesium – “Chlorophylls (responsible for the green colour of plants) are based upon magnesium. Magnesium is required for the proper working of some enzymes.” (*)

Manganese – essential for the action of some enzymes

“Molybdenum is a necessary element, apparently for all species. … plays a role in nitrogen fixation, enzymes, and nitrate reduction enzymes.” (*)

Nickel is an essential trace element for many species. It is unknown whether it is essential for humans.

“Potassium salts are essential for both animals and plants. The potassium cation (K+) is the major cation in intracellular (inside cells) fluids (sodium is the main extracellular cation). It is essential for nerve and heart function.” (*)

Selenium – essential component of one of the antioxidant defense systems of the body.
“Essential to mammals and higher plants, but only in small amounts…. may help protect against free radical oxidants and against some heavy metals.” (*)

Silicon – probably essential for healthy connective tissue and bone

Sodium (Na+) and potassium (K+) ions – transmission of nerve impulses between your brain and all parts of the body.

Tin – expected to have a function in the tertiary structure of proteins

Tungsten is needed in very tiny amounts in some enzymes (oxidoreductases)

Vanadium – possible role as an enzyme cofactor and in hormone, glucose, lipid, bone and tooth metabolism.

“Zinc is the key component of many enzymes. The protein hormone insulin contains zinc.” (*)

(*) WebElements: THE periodic table on the WWW
https://www.webelements.com/arsenic/biology.html

Use of marijuana (cannabis)

Article archive for my students

Marijuana is not anywhere near as harmless as most students believe it to be.

This is a drug that literally changes how the brain works. As such, one should expect that long term unregulated use could have harmful consequences.

=========================================================

Is Marijuana as Safe as We Think?
Permitting pot is one thing; promoting its use is another.

By Malcolm Gladwell, The New Yorker, January 14, 2019 Issue

A few years ago, the National Academy of Medicine convened a panel of sixteen leading medical experts to analyze the scientific literature on cannabis. The report they prepared, which came out in January of 2017, runs to four hundred and sixty-eight pages. It contains no bombshells or surprises, which perhaps explains why it went largely unnoticed. It simply stated, over and over again, that a drug North Americans have become enthusiastic about remains a mystery.

For example, smoking pot is widely supposed to diminish the nausea associated with chemotherapy. But, the panel pointed out, “there are no good-quality randomized trials investigating this option.” We have evidence for marijuana as a treatment for pain, but “very little is known about the efficacy, dose, routes of administration, or side effects of commonly used and commercially available cannabis products in the United States.” The caveats continue. Is it good for epilepsy? “Insufficient evidence.” Tourette’s syndrome? Limited evidence. A.L.S., Huntington’s, and Parkinson’s? Insufficient evidence. Irritable-bowel syndrome? Insufficient evidence. Dementia and glaucoma? Probably not. Anxiety? Maybe. Depression? Probably not.

Then come Chapters 5 through 13, the heart of the report, which concern marijuana’s potential risks. The haze of uncertainty continues. Does the use of cannabis increase the likelihood of fatal car accidents? Yes. By how much? Unclear. Does it affect motivation and cognition? Hard to say, but probably. Does it affect employment prospects? Probably. Will it impair academic achievement? Limited evidence. This goes on for pages.

We need proper studies, the panel concluded, on the health effects of cannabis on children and teen-agers and pregnant women and breast-feeding mothers and “older populations” and “heavy cannabis users”; in other words, on everyone except the college student who smokes a joint once a month. The panel also called for investigation into “the pharmacokinetic and pharmacodynamic properties of cannabis, modes of delivery, different concentrations, in various populations, including the dose-response relationships of cannabis and THC or other cannabinoids.”

Figuring out the “dose-response relationship” of a new compound is something a pharmaceutical company does from the start of trials in human subjects, as it prepares a new drug application for the F.D.A. Too little of a powerful drug means that it won’t work. Too much means that it might do more harm than good. The amount of active ingredient in a pill and the metabolic path that the ingredient takes after it enters your body—these are things that drugmakers will have painstakingly mapped out before the product comes on the market, with a tractor-trailer full of supporting documentation.

With marijuana, apparently, we’re still waiting for this information. It’s hard to study a substance that until very recently has been almost universally illegal. And the few studies we do have were done mostly in the nineteen-eighties and nineties, when cannabis was not nearly as potent as it is now. Because of recent developments in plant breeding and growing techniques, the typical concentration of THC, the psychoactive ingredient in marijuana, has gone from the low single digits to more than twenty per cent—from a swig of near-beer to a tequila shot.

Are users smoking less, to compensate for the drug’s new potency? Or simply getting more stoned, more quickly? Is high-potency cannabis more of a problem for younger users or for older ones? For some drugs, the dose-response curve is linear: twice the dose creates twice the effect. For other drugs, it’s nonlinear: twice the dose can increase the effect tenfold, or hardly at all. Which is true for cannabis? It also matters, of course, how cannabis is consumed. It can be smoked, vaped, eaten, or applied to the skin. How are absorption patterns affected?

Last May, not long before Canada legalized the recreational use of marijuana, Beau Kilmer, a drug-policy expert with the RAND Corporation, testified before the Canadian Parliament. He warned that the fastest-growing segment of the legal market in Washington State was extracts for inhalation, and that the mean THC concentration for those products was more than sixty-five per cent. “We know little about the health consequences—risks and benefits—of many of the cannabis products likely to be sold in nonmedical markets,” he said. Nor did we know how higher-potency products would affect THC consumption.

When it comes to cannabis, the best-case scenario is that we will muddle through, learning more about its true effects as we go along and adapting as needed—the way, say, the once extraordinarily lethal innovation of the automobile has been gradually tamed in the course of its history. For those curious about the worst-case scenario, Alex Berenson has written a short manifesto, “Tell Your Children: The Truth About Marijuana, Mental Illness, and Violence.”

Berenson begins his book with an account of a conversation he had with his wife, a psychiatrist who specializes in treating mentally ill criminals. They were discussing one of the many grim cases that cross her desk—“the usual horror story, somebody who’d cut up his grandmother or set fire to his apartment.” Then his wife said something like “Of course, he was high, been smoking pot his whole life.”

Of course? I said.

Yeah, they all smoke.

Well . . . other things too, right?

Sometimes. But they all smoke.

Berenson used to be an investigative reporter for the Times, where he covered, among other things, health care and the pharmaceutical industry. Then he left the paper to write a popular series of thrillers. At the time of his conversation with his wife, he had the typical layman’s view of cannabis, which is that it is largely benign. His wife’s remark alarmed him, and he set out to educate himself. Berenson is constrained by the same problem the National Academy of Medicine faced—that, when it comes to marijuana, we really don’t know very much. But he has a reporter’s tenacity, a novelist’s imagination, and an outsider’s knack for asking intemperate questions. The result is disturbing.

The first of Berenson’s questions concerns what has long been the most worrisome point about cannabis: its association with mental illness. Many people with serious psychiatric illness smoke lots of pot. The marijuana lobby typically responds to this fact by saying that pot-smoking is a response to mental illness, not the cause of it—that people with psychiatric issues use marijuana to self-medicate. That is only partly true. In some cases, heavy cannabis use does seem to cause mental illness. As the National Academy panel declared, in one of its few unequivocal conclusions, “Cannabis use is likely to increase the risk of developing schizophrenia and other psychoses; the higher the use, the greater the risk.”

Berenson thinks that we are far too sanguine about this link. He wonders how large the risk is, and what might be behind it. In one of the most fascinating sections of “Tell Your Children,” he sits down with Erik Messamore, a psychiatrist who specializes in neuropharmacology and in the treatment of schizophrenia.

Messamore reports that, following the recent rise in marijuana use in the U.S. (it has almost doubled in the past two decades, not necessarily as the result of legal reforms), he has begun to see a new kind of patient: older, and not from the marginalized communities that his patients usually come from. These are otherwise stable middle-class professionals. Berenson writes, “A surprising number of them seemed to have used only cannabis and no other drugs before their breaks. The disease they’d developed looked like schizophrenia, but it had developed later—and their prognosis seemed to be worse. Their delusions and paranoia hardly responded to antipsychotics.”

Messamore theorizes that THC may interfere with the brain’s anti-inflammatory mechanisms, resulting in damage to nerve cells and blood vessels. Is this the reason, Berenson wonders, for the rising incidence of schizophrenia in the developed world, where cannabis use has also increased?

In the northern parts of Finland, incidence of the disease has nearly doubled since 1993. In Denmark, cases have risen twenty-five per cent since 2000. In the United States, hospital emergency rooms have seen a fifty-per-cent increase in schizophrenia admissions since 2006. If you include cases where schizophrenia was a secondary diagnosis, annual admissions in the past decade have increased from 1.26 million to 2.1 million.

Berenson’s second question derives from the first. The delusions and paranoia that often accompany psychoses can sometimes trigger violent behavior. If cannabis is implicated in a rise in psychoses, should we expect the increased use of marijuana to be accompanied by a rise in violent crime, as Berenson’s wife suggested?

Once again, there is no definitive answer, so Berenson has collected bits and pieces of evidence. For example, in a 2013 paper in the Journal of Interpersonal Violence, researchers looked at the results of a survey of more than twelve thousand American high-school students. The authors assumed that alcohol use among students would be a predictor of violent behavior, and that marijuana use would predict the opposite. In fact, those who used only marijuana were three times more likely to be physically aggressive than abstainers were; those who used only alcohol were 2.7 times more likely to be aggressive. Observational studies like these don’t establish causation. But they invite the sort of research that could.

Berenson looks, too, at the early results from the state of Washington, which, in 2014, became the first U.S. jurisdiction to legalize recreational marijuana. Between 2013 and 2017, the state’s murder and aggravated-assault rates rose forty per cent—twice the national homicide increase and four times the national aggravated-assault increase. We don’t know that an increase in cannabis use was responsible for that surge in violence. Berenson, though, finds it strange that, at a time when Washington may have exposed its population to higher levels of what is widely assumed to be a calming substance, its citizens began turning on one another with increased aggression.

His third question is whether cannabis serves as a gateway drug. There are two possibilities. The first is that marijuana activates certain behavioral and neurological pathways that ease the onset of more serious addictions. The second possibility is that marijuana offers a safer alternative to other drugs: that if you start smoking pot to deal with chronic pain you never graduate to opioids.

Which is it? This is a very hard question to answer. We’re only a decade or so into the widespread recreational use of high-potency marijuana. Maybe cannabis opens the door to other drugs, but only after prolonged use. Or maybe the low-potency marijuana of years past wasn’t a gateway, but today’s high-potency marijuana is. Methodologically, Berenson points out, the issue is complicated by the fact that the first wave of marijuana legalization took place on the West Coast, while the first serious wave of opioid addiction took place in the middle of the country. So, if all you do is eyeball the numbers, it looks as if opioid overdoses are lowest in cannabis states and highest in non-cannabis states.

Not surprisingly, the data we have are messy. Berenson, in his role as devil’s advocate, emphasizes the research that sees cannabis as opening the door to opioid use. For example, two studies of identical twins—in the Netherlands and in Australia—show that, in cases where one twin used cannabis before the age of seventeen and the other didn’t, the cannabis user was several times more likely to develop an addiction to opioids. Berenson also enlists a statistician at N.Y.U. to help him sort through state-level overdose data, and what he finds is not encouraging: “States where more people used cannabis tended to have more overdoses.”

The National Academy panel is more judicious. Its conclusion is that we simply don’t know enough, because there haven’t been any “systematic” studies. But the panel’s uncertainty is scarcely more reassuring than Berenson’s alarmism. Seventy-two thousand Americans died in 2017 of drug overdoses. Should you embark on a pro-cannabis crusade without knowing whether it will add to or subtract from that number?

Drug policy is always clearest at the fringes. Illegal opioids are at one end. They are dangerous. Manufacturers and distributors belong in prison, and users belong in drug-treatment programs. The cannabis industry would have us believe that its product, like coffee, belongs at the other end of the continuum.

“Flow Kana partners with independent multi-generational farmers who cultivate under full sun, sustainably, and in small batches,” the promotional literature for one California cannabis brand reads. “Using only organic methods, these stewards of the land have spent their lives balancing a unique and harmonious relationship between the farm, the genetics and the terroir.”

But cannabis is not coffee. It’s somewhere in the middle. The experience of most users is relatively benign and predictable; the experience of a few, at the margins, is not. Products or behaviors that have that kind of muddled risk profile are confusing, because it is very difficult for those in the benign middle to appreciate the experiences of those at the statistical tails.

Low-frequency risks also take longer and are far harder to quantify, and the lesson of “Tell Your Children” and the National Academy report is that we aren’t yet in a position to do so. For the moment, cannabis probably belongs in the category of substances that society permits but simultaneously discourages. Cigarettes are heavily taxed, and smoking is prohibited in most workplaces and public spaces. Alcohol can’t be sold without a license and is kept out of the hands of children. Prescription drugs have rules about dosages, labels that describe their risks, and policies that govern their availability. The advice that seasoned potheads sometimes give new users—“start low and go slow”—is probably good advice for society as a whole, at least until we better understand what we are dealing with.

Late last year, the commissioner of the Food and Drug Administration, Scott Gottlieb, announced a federal crackdown on e-cigarettes. He had seen the data on soaring use among teen-agers, and, he said, “it shocked my conscience.” He announced that the F.D.A. would ban many kinds of flavored e-cigarettes, which are especially popular with teens, and would restrict the retail outlets where e-cigarettes were available.

In the dozen years since e-cigarettes were introduced into the marketplace, they have attracted an enormous amount of attention. There are scores of studies and papers on the subject in the medical and legal literature, grappling with the questions raised by the new technology. Vaping is clearly popular among kids. Is it a gateway to traditional tobacco use? Some public-health experts worry that we’re grooming a younger generation for a lifetime of dangerous addiction. Yet other people see e-cigarettes as a much safer alternative for adult smokers looking to satisfy their nicotine addiction. That’s the British perspective.

Last year, a Parliamentary committee recommended cutting taxes on e-cigarettes and allowing vaping in areas where it had previously been banned. Since e-cigarettes are as much as ninety-five per cent less harmful than regular cigarettes, the committee argued, why not promote them? Gottlieb said that he was splitting the difference between the two positions—giving adults “opportunities to transition to non-combustible products,” while upholding the F.D.A.’s “solemn mandate to make nicotine products less accessible and less appealing to children.” He was immediately criticized.

“Somehow, we have completely lost all sense of public-health perspective,” Michael Siegel, a public-health researcher at Boston University, wrote after the F.D.A. announcement:

Every argument that the F.D.A. is making in justifying a ban on the sale of electronic cigarettes in convenience stores and gas stations applies even more strongly for real tobacco cigarettes: you know, the ones that kill hundreds of thousands of Americans each year. Something is terribly wrong with our sense of perspective when we take the e-cigarettes off the shelf but allow the old-fashioned ones to remain.

Among members of the public-health community, it is impossible to spend five minutes on the e-cigarette question without getting into an argument. And this is nicotine they are arguing about, a drug that has been exhaustively studied by generations of scientists. We don’t worry that e-cigarettes increase the number of fatal car accidents, diminish motivation and cognition, or impair academic achievement. The drugs through the gateway that we worry about with e-cigarettes are Marlboros, not opioids. There are no enormous scientific question marks over nicotine’s dosing and bio-availability. Yet we still proceed cautiously and carefully with nicotine, because it is a powerful drug, and when powerful drugs are consumed by lots of people in new and untested ways we have an obligation to try to figure out what will happen.

A week after Gottlieb announced his crackdown on e-cigarettes, on the ground that they are too enticing to children, Siegel visited the first recreational-marijuana facility in Massachusetts. Here is what he found on the menu, each offering laced with large amounts of a drug, THC, that no one knows much about:

Strawberry-flavored chewy bites
Large, citrus gummy bears
Delectable Belgian dark chocolate bars
Assorted fruit-flavored chews
Assorted fruit-flavored cubes
Raspberry flavored confection
Raspberry flavored lozenges
Chewy, cocoa caramel bite-sized treats
Raspberry & watermelon flavored lozenges
Chocolate-chip brownies.

He concludes, “This is public health in 2018?”

This article appears in the print edition of the January 14, 2019, issue, with the headline “Unwatched Pot.”

=========================================================

Prenatal Exposure to Cannabis Affects the Developing Brain

Children born to moms who smoked or ingested marijuana during pregnancy suffer higher rates of depression, hyperactivity, and inattention.

By Andrew Scheyer, The Scientist, 1/1/2019

Excerpt

A Lifetime of Consequences?

Large-scale, longitudinal studies of humans whose mothers smoked marijuana once or more per week and experimental work on rodents exposed to cannabinoids in utero have yielded remarkably consistent intellectual and behavioral correlates of fetal exposure to this drug. Some exposed individuals exhibit deficits in memory, cognition, and measures of sociability.

These aberrations appear during infancy and persist through adulthood and are tied to changes in the expression of multiple gene families, as well as more global measures of brain responsiveness and plasticity. Researchers currently consider these perturbations to be mediated by changes to the endocannabinoid system caused by the active compounds in cannabis.

[Image: effects of prenatal cannabis exposure on the developing brain]

How Cannabis Affects the Function of Neurons

The human body contains two primary cannabinoid receptors: CB1R and CB2R. CB1R is present in the human fetal cerebrum by the first weeks of the second trimester, and is the brain’s most abundant G-protein coupled receptor. Located at the presynaptic terminal of neurons, CB1R is activated by endocannabinoids, which are synthesized from fatty acids in the postsynaptic neuron.

The receptors’ activation modulates the presynaptic release of neurotransmitters, thereby affecting synaptic function and a range of downstream signaling agents, from glutamate, dopamine, and serotonin to neuropeptides and hormones. The function of CB2Rs in the brain is still poorly understood, but there is some evidence that they exist both pre- and post-synaptically, as well as on glia and astrocytes. One recent paper suggests that, like CB1Rs, CB2Rs regulate neurotransmitter release (Synapse, 72:e22061, 2018).

When people smoke or ingest marijuana, exogenous cannabinoids enter the nervous system and activate these receptors. Stimulation by these high-affinity agonists results in stronger binding and greater activation of CB1R, triggering the process of receptor downregulation. Specifically, the greater binding causes the receptors to be internalized and degraded, such that they are no longer as available for cannabinoid signaling, and can thereby alter neuronal firing and other downstream events.

[Image: cannabinoid receptors at the presynaptic terminal of a neuron]

Research articles

Cannabis use and the risk of developing a psychotic disorder, World Psychiatry, 2008 Jun; 7(2): 68–71.

Cannabis Users Have 500% Increased Risk for Schizophrenia, Increased Risk from Alcohol and Other illegal Drugs Too

BDNF overexpression prevents cognitive deficit elicited by adolescent cannabis exposure and host susceptibility interaction. Human Molecular Genetics Vol 26

Cannabis and schizophrenia: New evidence unveiled. Medical News Today.

Cannabis use and risk of schizophrenia: a Mendelian randomization study. Molecular Psychiatry volume 23, pages 1287–1292 (2018)

Impact of Cannabis Use on the Development of Psychotic Disorders. Current Addiction Reports

Samuel T. Wilkinson, Rajiv Radhakrishnan, and Deepak Cyril D’Souza write:
The link between cannabis use and psychosis comprises three distinct relationships: acute psychosis associated with cannabis intoxication; acute psychosis that lasts beyond the period of acute intoxication; and persistent psychosis not time-locked to exposure. Experimental studies reveal that cannabis, delta-9-tetrahydrocannabinol (THC) and synthetic cannabinoids reliably produce transient positive, negative, and cognitive symptoms in healthy volunteers. Case studies indicate that cannabinoids can induce acute psychosis that lasts beyond the period of acute intoxication but resolves within a month. Exposure to cannabis in adolescence is associated with a risk for later psychotic disorder in adulthood; this association is consistent, temporally related, shows a dose response, and is biologically plausible. However, cannabis is neither necessary nor sufficient to cause a persistent psychotic disorder. More likely, it is a component cause that interacts with other factors to result in psychosis. The link between cannabis and psychosis is moderated by age at onset of cannabis use, childhood abuse, and genetic vulnerability. While more research is needed to better characterize the relationship between cannabinoid use and the onset and persistence of psychosis, clinicians should be mindful of the potential risk of psychosis, especially in vulnerable populations, including adolescents and those with a psychosis diathesis.

{next article tba}

 

Related archived articles

Neuroscientist argues that addiction is not a disease

Learning Standards

Massachusetts Comprehensive Health Curriculum Framework

PreK–12 Standard 10: Tobacco, Alcohol, & Substance Use/Abuse Prevention

Students will acquire the knowledge and skills to be competent in making health-enhancing decisions regarding the use of medications and avoidance of substances, and in communicating about substance use/abuse prevention for healthier homes, schools, and communities.

Through the study of Effects on the Body students will
10.5 Describe addictions to alcohol, tobacco, and other drugs, and methods for intervention, treatment, and cessation

10.6 List the potential outcomes of prevalent early and late adolescent risk behaviors related to tobacco, alcohol, and other drugs, including the general pattern and continuum of risk behaviors involving substances that young people might follow
Students generate ideas of what the term “gateway” means in relation to substance abuse and map out a series of behaviors that begin with such “gateway” behaviors

Through the study of Healthy Decisions students will

10.7 Identify internal factors (such as character) and external factors (such as family, peers, community, faith-based affiliation, and media) that influence the decision of young people to use or not to use drugs

10.8 Demonstrate ways of refusing and of sharing preventive health information about tobacco, alcohol, and other drugs with peers. Students research and give an oral report on the effects of second-hand smoke.

By the end of grade 12

Through the study of Effects on the Body students will

10.9 Describe the relationship between multi-drug use and the increased negative effects on the body, including the stages of addiction, and overdose. Students research the increased chances of death from alcohol poisoning when alcohol is combined with marijuana.

10.10 Describe the harmful effects of tobacco, alcohol, and other substances on pregnant women and their unborn children.
+++++++++++++++++++++++++++++++++++++++++++++++++

This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.

§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (added pub. l 94-553, Title I, 101, Oct 19, 1976, 90 Stat 2546)

How Space and Time Could Be a Quantum Error-Correcting Code

Article backup for my students

The same codes needed to thwart errors in quantum computers may also give the fabric of space-time its intrinsic robustness.

Natalie Wolchover, Quanta magazine

In 1994, a mathematician at AT&T Research named Peter Shor brought instant fame to “quantum computers” when he discovered that these hypothetical devices could quickly factor large numbers — and thus break much of modern cryptography. But a fundamental problem stood in the way of actually building quantum computers: the innate frailty of their physical components.

Unlike binary bits of information in ordinary computers, “qubits” consist of quantum particles that have some probability of being in each of two states, designated |0⟩ and |1⟩, at the same time. When qubits interact, their possible states become interdependent, each one’s chances of |0⟩ and |1⟩ hinging on those of the other. The contingent possibilities proliferate as the qubits become more and more “entangled” with each operation. Sustaining and manipulating this exponentially growing number of simultaneous possibilities are what makes quantum computers so theoretically powerful.

But qubits are maddeningly error-prone. The feeblest magnetic field or stray microwave pulse causes them to undergo “bit-flips” that switch their chances of being |0⟩ and |1⟩ relative to the other qubits, or “phase-flips” that invert the mathematical relationship between their two states. For quantum computers to work, scientists must find schemes for protecting information even when individual qubits get corrupted. What’s more, these schemes must detect and correct errors without directly measuring the qubits, since measurements collapse qubits’ coexisting possibilities into definite realities: plain old 0s or 1s that can’t sustain quantum computations.

In 1995, Shor followed his factoring algorithm with another stunner: proof that “quantum error-correcting codes” exist. The computer scientists Dorit Aharonov and Michael Ben-Or (and other researchers working independently) proved a year later that these codes could theoretically push error rates close to zero. “This was the central discovery in the ’90s that convinced people that scalable quantum computing should be possible at all,” said Scott Aaronson, a leading quantum computer scientist at the University of Texas — “that it is merely a staggering problem of engineering.”

Now, even as small quantum computers are materializing in labs around the world, useful ones that will outclass ordinary computers remain years or decades away. Far more efficient quantum error-correcting codes are needed to cope with the daunting error rates of real qubits. The effort to design better codes is “one of the major thrusts of the field,” Aaronson said, along with improving the hardware.

But in the dogged pursuit of these codes over the past quarter-century, a funny thing happened in 2014, when physicists found evidence of a deep connection between quantum error correction and the nature of space, time and gravity. In Albert Einstein’s general theory of relativity, gravity is defined as the fabric of space and time — or “space-time” — bending around massive objects. (A ball tossed into the air travels along a straight line through space-time, which itself bends back toward Earth.) But powerful as Einstein’s theory is, physicists believe gravity must have a deeper, quantum origin from which the semblance of a space-time fabric somehow emerges.

That year — 2014 — three young quantum gravity researchers came to an astonishing realization. They were working in physicists’ theoretical playground of choice: a toy universe called “anti-de Sitter space” that works like a hologram. The bendy fabric of space-time in the interior of the universe is a projection that emerges from entangled quantum particles living on its outer boundary. Ahmed Almheiri, Xi Dong and Daniel Harlow did calculations suggesting that this holographic “emergence” of space-time works just like a quantum error-correcting code. They conjectured in the Journal of High Energy Physics that space-time itself is a code — in anti-de Sitter (AdS) universes, at least. The paper has triggered a wave of activity in the quantum gravity community, and new quantum error-correcting codes have been discovered that capture more properties of space-time.

John Preskill, a theoretical physicist at the California Institute of Technology, says quantum error correction explains how space-time achieves its “intrinsic robustness,” despite being woven out of fragile quantum stuff. “We’re not walking on eggshells to make sure we don’t make the geometry fall apart,” Preskill said. “I think this connection with quantum error correction is the deepest explanation we have for why that’s the case.”

The language of quantum error correction is also starting to enable researchers to probe the mysteries of black holes: spherical regions in which space-time curves so steeply inward toward the center that not even light can escape. “Everything traces back to black holes,” said Almheiri, who is now at the Institute for Advanced Study in Princeton, New Jersey. These paradox-ridden places are where gravity reaches its zenith and Einstein’s general relativity theory fails. “There are some indications that if you understand which code space-time implements,” he said, “it might help us in understanding the black hole interior.”

As a bonus, researchers hope holographic space-time might also point the way to scalable quantum computing, fulfilling the long-ago vision of Shor and others. “Space-time is a lot smarter than us,” Almheiri said. “The kind of quantum error-correcting code which is implemented in these constructions is a very efficient code.”

So, how do quantum error-correcting codes work? The trick to protecting information in jittery qubits is to store it not in individual qubits, but in patterns of entanglement among many.

As a simple example, consider the three-qubit code: It uses three “physical” qubits to protect a single “logical” qubit of information against bit-flips. (The code isn’t really useful for quantum error correction because it can’t protect against phase-flips, but it’s nonetheless instructive.) The |0⟩ state of the logical qubit corresponds to all three physical qubits being in their |0⟩ states, and the |1⟩ state corresponds to all three being |1⟩’s. The system is in a “superposition” of these states, designated |000⟩ + |111⟩. But say one of the qubits bit-flips. How do we detect and correct the error without directly measuring any of the qubits?

The qubits can be fed through two gates in a quantum circuit. One gate checks the “parity” of the first and second physical qubit — whether they’re the same or different — and the other gate checks the parity of the first and third. When there’s no error (meaning the qubits are in the state |000⟩ + |111⟩), the parity-measuring gates determine that both the first and second and the first and third qubits are always the same. However, if the first qubit accidentally bit-flips, producing the state |100⟩ + |011⟩, the gates detect a difference in both of the pairs. For a bit-flip of the second qubit, yielding |010⟩ + |101⟩, the parity-measuring gates detect that the first and second qubits are different and first and third are the same, and if the third qubit flips, the gates indicate: same, different. These unique outcomes reveal which corrective surgery, if any, needs to be performed — an operation that flips back the first, second or third physical qubit without collapsing the logical qubit. “Quantum error correction, to me, it’s like magic,” Almheiri said.
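Here is a minimal classical sketch, in Python, of the parity-check logic just described (this is an added illustration, not code from the article). The superposition is tracked as a dictionary of basis strings and amplitudes; for a code state with at most one bit-flip, every branch gives the same parity answers, which is what lets the checks run without collapsing the logical qubit:

```python
import math

def bit_flip(state, qubit):
    """Flip the given qubit (0-indexed) in every branch of the superposition."""
    return {b[:qubit] + ('1' if b[qubit] == '0' else '0') + b[qubit + 1:]: amp
            for b, amp in state.items()}

def parity(state, q1, q2):
    """0 if qubits q1 and q2 agree, 1 if they differ (same answer in every branch here)."""
    branch = next(iter(state))
    return int(branch[q1] != branch[q2])

# Logical qubit encoded as (|000> + |111>)/sqrt(2)
state = {'000': 1 / math.sqrt(2), '111': 1 / math.sqrt(2)}

state = bit_flip(state, 1)   # an accidental bit-flip hits the second qubit

# Syndrome: parity of (first, second) and of (first, third), as in the article
syndrome = (parity(state, 0, 1), parity(state, 0, 2))
flipped = {(0, 0): None, (1, 1): 0, (1, 0): 1, (0, 1): 2}[syndrome]
if flipped is not None:
    state = bit_flip(state, flipped)   # corrective flip

print(syndrome)   # (1, 0): "different, same" points at the second qubit
print(state)      # back to the original |000> + |111> superposition
```

A real quantum computer measures these parities with auxiliary qubits and gates rather than by peeking at the state, but the bookkeeping of which syndrome points at which qubit is exactly the pattern described above.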

[Image: how error-correcting codes quash qubit errors in a quantum computer]

The best error-correcting codes can typically recover all of the encoded information from slightly more than half of your physical qubits, even if the rest are corrupted. This fact is what hinted to Almheiri, Dong and Harlow in 2014 that quantum error correction might be related to the way anti-de Sitter space-time arises from quantum entanglement.

It’s important to note that AdS space is different from the space-time geometry of our “de Sitter” universe. Our universe is infused with positive vacuum energy that causes it to expand without bound, while anti-de Sitter space has negative vacuum energy, which gives it the hyperbolic geometry of one of M.C. Escher’s Circle Limit designs. Escher’s tessellated creatures become smaller and smaller moving outward from the circle’s center, eventually vanishing at the perimeter; similarly, the spatial dimension radiating away from the center of AdS space gradually shrinks and eventually disappears, establishing the universe’s outer boundary. AdS space gained popularity among quantum gravity theorists in 1997 after the renowned physicist Juan Maldacena discovered that the bendy space-time fabric in its interior is “holographically dual” to a quantum theory of particles living on the lower-dimensional, gravity-free boundary.

[Image: M.C. Escher’s Circle Limit, an illustration of hyperbolic geometry]

In exploring how the duality works, as hundreds of physicists have in the past two decades, Almheiri and colleagues noticed that any point in the interior of AdS space could be constructed from slightly more than half of the boundary — just as in an optimal quantum error-correcting code. 

In their paper conjecturing that holographic space-time and quantum error correction are one and the same, they described how even a simple code could be understood as a 2D hologram. It consists of three “qutrits” — particles that exist in any of three states — sitting at equidistant points around a circle. The entangled trio of qutrits encode one logical qutrit, corresponding to a single space-time point in the circle’s center. The code protects the point against the erasure of any of the three qutrits.

Of course, one point is not much of a universe. In 2015, Harlow, Preskill, Fernando Pastawski and Beni Yoshida found another holographic code, nicknamed the HaPPY code, that captures more properties of AdS space. The code tiles space in five-sided building blocks — “little Tinkertoys,” said Patrick Hayden of Stanford University, a leader in the research area. Each Tinkertoy represents a single space-time point. “These tiles would be playing the role of the fish in an Escher tiling,” Hayden said.

In the HaPPY code and other holographic error-correcting schemes that have been discovered, everything inside a region of the interior space-time called the “entanglement wedge” can be reconstructed from qubits on an adjacent region of the boundary. Overlapping regions on the boundary will have overlapping entanglement wedges, Hayden said, just as a logical qubit in a quantum computer is reproducible from many different subsets of physical qubits. “That’s where the error-correcting property comes in.”

“Quantum error correction gives us a more general way of thinking about geometry in this code language,” said Preskill, the Caltech physicist. The same language, he said, “ought to be applicable, in my opinion, to more general situations” — in particular, to a de Sitter universe like ours. But de Sitter space, lacking a spatial boundary, has so far proven much harder to understand as a hologram.

For now, researchers like Almheiri, Harlow and Hayden are sticking with AdS space, which shares many key properties with a de Sitter world but is simpler to study. Both space-time geometries abide by Einstein’s theory; they simply curve in different directions. Perhaps most importantly, both kinds of universes contain black holes. “The most fundamental property of gravity is that there are black holes,” said Harlow, who is now an assistant professor of physics at the Massachusetts Institute of Technology. “That’s what makes gravity different from all the other forces. That’s why quantum gravity is hard.”

The language of quantum error correction has provided a new way of describing black holes. “The presence of a black hole is defined by the breakdown of correctability,” Hayden said: “When there are so many errors that you can no longer keep track of what’s going on in the bulk [space-time] anymore, you get a black hole. It’s like a sink for your ignorance.”

Ignorance invariably abounds when it comes to black hole interiors. Stephen Hawking’s 1974 epiphany that black holes radiate heat, and thus eventually evaporate away, triggered the infamous “black hole information paradox,” which asks what happens to all the information that black holes swallow. Physicists need a quantum theory of gravity to understand how things that fall in black holes also get out. The issue may relate to cosmology and the birth of the universe, since expansion out of a Big Bang singularity is much like gravitational collapse into a black hole in reverse.

AdS space simplifies the information question. Since the boundary of an AdS universe is holographically dual to everything in it — black holes and all — the information that falls into a black hole is guaranteed never to be lost; it’s always holographically encoded on the universe’s boundary. Calculations suggest that to reconstruct information about a black hole’s interior from qubits on the boundary, you need access to entangled qubits throughout roughly three-quarters of the boundary. “Slightly more than half is not sufficient anymore,” Almheiri said. He added that the need for three-quarters seems to say something important about quantum gravity, but why that fraction comes up “is still an open question.”

In Almheiri’s first claim to fame in 2012, the tall, thin Emirati physicist and three collaborators deepened the information paradox. Their reasoning suggested that information might be prevented from ever falling into a black hole in the first place, by a “firewall” at the black hole’s event horizon.

Like most physicists, Almheiri doesn’t really believe black hole firewalls exist, but finding the way around them has proved difficult. Now, he thinks quantum error correction is what stops firewalls from forming, by protecting information even as it crosses black hole horizons. In his latest, solo work, which appeared in October, he reported that quantum error correction is “essential for maintaining the smoothness of space-time at the horizon” of a two-mouthed black hole, called a wormhole. He speculates that quantum error correction, as well as preventing firewalls, is also how qubits escape a black hole after falling in, through strands of entanglement between the inside and outside that are themselves like miniature wormholes. This would resolve Hawking’s paradox.

This year, the Department of Defense is funding research into holographic space-time, at least partly in case advances there might spin off more efficient error-correcting codes for quantum computers.

On the physics side, it remains to be seen whether de Sitter universes like ours can be described holographically, in terms of qubits and codes. “The whole connection is known for a world that is manifestly not our world,” Aaronson said. In a paper last summer, Dong, who is now at the University of California, Santa Barbara, and his co-authors Eva Silverstein and Gonzalo Torroba took a step in the de Sitter direction, with an attempt at a primitive holographic description. Researchers are still studying that particular proposal, but Preskill thinks the language of quantum error correction will ultimately carry over to actual space-time.

“It’s really entanglement which is holding the space together,” he said. “If you want to weave space-time together out of little pieces, you have to entangle them in the right way. And the right way is to build a quantum error-correcting code.”

https://www.quantamagazine.org/how-space-and-time-could-be-a-quantum-error-correcting-code-20190103/

_____________________

Related: How does gravity work in the quantum regime? A holographic duality from string theory offers a powerful tool for unraveling the mystery.
