How Space and Time Could Be a Quantum Error-Correcting Code
Article backup for my students
The same codes needed to thwart errors in quantum computers may also give the fabric of spacetime its intrinsic robustness.
Natalie Wolchover, Quanta Magazine
In 1994, a mathematician at AT&T Research named Peter Shor brought instant fame to “quantum computers” when he discovered that these hypothetical devices could quickly factor large numbers — and thus break much of modern cryptography. But a fundamental problem stood in the way of actually building quantum computers: the innate frailty of their physical components.
Unlike binary bits of information in ordinary computers, “qubits” consist of quantum particles that have some probability of being in each of two states, designated |0⟩ and |1⟩, at the same time. When qubits interact, their possible states become interdependent, each one’s chances of |0⟩ and |1⟩ hinging on those of the other. The contingent possibilities proliferate as the qubits become more and more “entangled” with each operation. Sustaining and manipulating this exponentially growing number of simultaneous possibilities is what makes quantum computers so theoretically powerful.
But qubits are maddeningly error-prone. The feeblest magnetic field or stray microwave pulse causes them to undergo “bit flips” that switch their chances of being |0⟩ and |1⟩ relative to the other qubits, or “phase flips” that invert the mathematical relationship between their two states. For quantum computers to work, scientists must find schemes for protecting information even when individual qubits get corrupted. What’s more, these schemes must detect and correct errors without directly measuring the qubits, since measurements collapse qubits’ coexisting possibilities into definite realities: plain old 0s or 1s that can’t sustain quantum computations.
In 1995, Shor followed his factoring algorithm with another stunner: proof that “quantum error-correcting codes” exist. The computer scientists Dorit Aharonov and Michael Ben-Or (and other researchers working independently) proved a year later that these codes could theoretically push error rates close to zero. “This was the central discovery in the ’90s that convinced people that scalable quantum computing should be possible at all,” said Scott Aaronson, a leading quantum computer scientist at the University of Texas — “that it is merely a staggering problem of engineering.”
Now, even as small quantum computers are materializing in labs around the world, useful ones that will outclass ordinary computers remain years or decades away. Far more efficient quantum error-correcting codes are needed to cope with the daunting error rates of real qubits. The effort to design better codes is “one of the major thrusts of the field,” Aaronson said, along with improving the hardware.
But in the dogged pursuit of these codes over the past quarter-century, a funny thing happened in 2014, when physicists found evidence of a deep connection between quantum error correction and the nature of space, time and gravity. In Albert Einstein’s general theory of relativity, gravity is defined as the fabric of space and time — or “spacetime” — bending around massive objects. (A ball tossed into the air travels along a straight line through spacetime, which itself bends back toward Earth.) But powerful as Einstein’s theory is, physicists believe gravity must have a deeper, quantum origin from which the semblance of a spacetime fabric somehow emerges.
That year — 2014 — three young quantum gravity researchers came to an astonishing realization. They were working in physicists’ theoretical playground of choice: a toy universe called “anti-de Sitter space” that works like a hologram. The bendy fabric of spacetime in the interior of the universe is a projection that emerges from entangled quantum particles living on its outer boundary. Ahmed Almheiri, Xi Dong and Daniel Harlow did calculations suggesting that this holographic “emergence” of spacetime works just like a quantum error-correcting code. They conjectured in the Journal of High Energy Physics that spacetime itself is a code — in anti-de Sitter (AdS) universes, at least. The paper has triggered a wave of activity in the quantum gravity community, and new quantum error-correcting codes have been discovered that capture more properties of spacetime.
John Preskill, a theoretical physicist at the California Institute of Technology, says quantum error correction explains how spacetime achieves its “intrinsic robustness,” despite being woven out of fragile quantum stuff. “We’re not walking on eggshells to make sure we don’t make the geometry fall apart,” Preskill said. “I think this connection with quantum error correction is the deepest explanation we have for why that’s the case.”
The language of quantum error correction is also starting to enable researchers to probe the mysteries of black holes: spherical regions in which spacetime curves so steeply inward toward the center that not even light can escape. “Everything traces back to black holes,” said Almheiri, who is now at the Institute for Advanced Study in Princeton, New Jersey. These paradox-ridden places are where gravity reaches its zenith and Einstein’s general relativity theory fails. “There are some indications that if you understand which code spacetime implements,” he said, “it might help us in understanding the black hole interior.”
As a bonus, researchers hope holographic spacetime might also point the way to scalable quantum computing, fulfilling the long-ago vision of Shor and others. “Spacetime is a lot smarter than us,” Almheiri said. “The kind of quantum error-correcting code which is implemented in these constructions is a very efficient code.”
So, how do quantum error-correcting codes work? The trick to protecting information in jittery qubits is to store it not in individual qubits, but in patterns of entanglement among many.
As a simple example, consider the three-qubit code: It uses three “physical” qubits to protect a single “logical” qubit of information against bit flips. (The code isn’t really useful for quantum error correction because it can’t protect against phase flips, but it’s nonetheless instructive.) The |0⟩ state of the logical qubit corresponds to all three physical qubits being in their |0⟩ states, and the |1⟩ state corresponds to all three being |1⟩’s. The system is in a “superposition” of these states, designated |000⟩ + |111⟩. But say one of the qubits suffers a bit flip. How do we detect and correct the error without directly measuring any of the qubits?
The qubits can be fed through two gates in a quantum circuit. One gate checks the “parity” of the first and second physical qubit — whether they’re the same or different — and the other gate checks the parity of the first and third. When there’s no error (meaning the qubits are in the state |000⟩ + |111⟩), the parity-measuring gates determine that both the first and second and the first and third qubits are always the same. However, if the first qubit accidentally bit flips, producing the state |100⟩ + |011⟩, the gates detect a difference in both of the pairs. For a bit flip of the second qubit, yielding |010⟩ + |101⟩, the parity-measuring gates detect that the first and second qubits are different and first and third are the same, and if the third qubit flips, the gates indicate: same, different. These unique outcomes reveal which corrective surgery, if any, needs to be performed — an operation that flips back the first, second or third physical qubit without collapsing the logical qubit. “Quantum error correction, to me, it’s like magic,” Almheiri said.
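The same/different outcomes just described can be sketched classically: a bit flip corrupts both branches of the superposition a|000⟩ + b|111⟩ identically, so tracking one basis string per branch is enough to illustrate the syndrome logic. This is an illustrative sketch (the function names are mine), not a simulation of real quantum hardware:

```python
# Classical sketch of the three-qubit bit-flip code's syndrome measurement.

def syndrome(bits):
    """Parity checks: (qubit 1 vs qubit 2, qubit 1 vs qubit 3)."""
    return (bits[0] ^ bits[1], bits[0] ^ bits[2])

# Map each syndrome to the qubit that must be flipped back (None = no error).
CORRECTION = {
    (0, 0): None,  # same, same           -> no error
    (1, 1): 0,     # different, different -> first qubit flipped
    (1, 0): 1,     # different, same      -> second qubit flipped
    (0, 1): 2,     # same, different      -> third qubit flipped
}

def correct(bits):
    """Apply the corrective flip indicated by the parity checks."""
    bits = list(bits)
    qubit = CORRECTION[syndrome(bits)]
    if qubit is not None:
        bits[qubit] ^= 1
    return tuple(bits)

# A flip of the second qubit corrupts both branches of a|000> + b|111>
# in the same way, and the same correction restores both:
assert correct((0, 1, 0)) == (0, 0, 0)  # |010> branch back to |000>
assert correct((1, 0, 1)) == (1, 1, 1)  # |101> branch back to |111>
```

The key point the sketch captures is that the parities are measured without ever reading out an individual qubit's value, so the logical superposition survives.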
The best error-correcting codes can typically recover all of the encoded information from slightly more than half of your physical qubits, even if the rest are corrupted. This fact is what hinted to Almheiri, Dong and Harlow in 2014 that quantum error correction might be related to the way anti-de Sitter spacetime arises from quantum entanglement.
It’s important to note that AdS space is different from the spacetime geometry of our “de Sitter” universe. Our universe is infused with positive vacuum energy that causes it to expand without bound, while anti-de Sitter space has negative vacuum energy, which gives it the hyperbolic geometry of one of M.C. Escher’s Circle Limit designs. Escher’s tessellated creatures become smaller and smaller moving outward from the circle’s center, eventually vanishing at the perimeter; similarly, the spatial dimension radiating away from the center of AdS space gradually shrinks and eventually disappears, establishing the universe’s outer boundary. AdS space gained popularity among quantum gravity theorists in 1997 after the renowned physicist Juan Maldacena discovered that the bendy spacetime fabric in its interior is “holographically dual” to a quantum theory of particles living on the lower-dimensional, gravity-free boundary.
In exploring how the duality works, as hundreds of physicists have in the past two decades, Almheiri and colleagues noticed that any point in the interior of AdS space could be constructed from slightly more than half of the boundary — just as in an optimal quantum error-correcting code.
In their paper conjecturing that holographic spacetime and quantum error correction are one and the same, they described how even a simple code could be understood as a 2D hologram. It consists of three “qutrits” — particles that exist in any of three states — sitting at equidistant points around a circle. The entangled trio of qutrits encodes one logical qutrit, corresponding to a single spacetime point in the circle’s center. The code protects the point against the erasure of any of the three qutrits.
Of course, one point is not much of a universe. In 2015, Harlow, Preskill, Fernando Pastawski and Beni Yoshida found another holographic code, nicknamed the HaPPY code, that captures more properties of AdS space. The code tiles space in five-sided building blocks — “little Tinkertoys,” said Patrick Hayden of Stanford University, a leader in the research area. Each Tinkertoy represents a single spacetime point. “These tiles would be playing the role of the fish in an Escher tiling,” Hayden said.
In the HaPPY code and other holographic error-correcting schemes that have been discovered, everything inside a region of the interior spacetime called the “entanglement wedge” can be reconstructed from qubits on an adjacent region of the boundary. Overlapping regions on the boundary will have overlapping entanglement wedges, Hayden said, just as a logical qubit in a quantum computer is reproducible from many different subsets of physical qubits. “That’s where the error-correcting property comes in.”
“Quantum error correction gives us a more general way of thinking about geometry in this code language,” said Preskill, the Caltech physicist. The same language, he said, “ought to be applicable, in my opinion, to more general situations” — in particular, to a de Sitter universe like ours. But de Sitter space, lacking a spatial boundary, has so far proven much harder to understand as a hologram.
For now, researchers like Almheiri, Harlow and Hayden are sticking with AdS space, which shares many key properties with a de Sitter world but is simpler to study. Both spacetime geometries abide by Einstein’s theory; they simply curve in different directions. Perhaps most importantly, both kinds of universes contain black holes. “The most fundamental property of gravity is that there are black holes,” said Harlow, who is now an assistant professor of physics at the Massachusetts Institute of Technology. “That’s what makes gravity different from all the other forces. That’s why quantum gravity is hard.”
The language of quantum error correction has provided a new way of describing black holes. The presence of a black hole is defined by “the breakdown of correctability,” Hayden said: “When there are so many errors that you can no longer keep track of what’s going on in the bulk [spacetime] anymore, you get a black hole. It’s like a sink for your ignorance.”
Ignorance invariably abounds when it comes to black hole interiors. Stephen Hawking’s 1974 epiphany that black holes radiate heat, and thus eventually evaporate away, triggered the infamous “black hole information paradox,” which asks what happens to all the information that black holes swallow. Physicists need a quantum theory of gravity to understand how things that fall in black holes also get out. The issue may relate to cosmology and the birth of the universe, since expansion out of a Big Bang singularity is much like gravitational collapse into a black hole in reverse.
AdS space simplifies the information question. Since the boundary of an AdS universe is holographically dual to everything in it — black holes and all — the information that falls into a black hole is guaranteed never to be lost; it’s always holographically encoded on the universe’s boundary. Calculations suggest that to reconstruct information about a black hole’s interior from qubits on the boundary, you need access to entangled qubits throughout roughly three-quarters of the boundary. “Slightly more than half is not sufficient anymore,” Almheiri said. He added that the need for three-quarters seems to say something important about quantum gravity, but why that fraction comes up “is still an open question.”
In Almheiri’s first claim to fame in 2012, the tall, thin Emirati physicist and three collaborators deepened the information paradox. Their reasoning suggested that information might be prevented from ever falling into a black hole in the first place, by a “firewall” at the black hole’s event horizon.
Like most physicists, Almheiri doesn’t really believe black hole firewalls exist, but finding the way around them has proved difficult. Now, he thinks quantum error correction is what stops firewalls from forming, by protecting information even as it crosses black hole horizons. In his latest, solo work, which appeared in October, he reported that quantum error correction is “essential for maintaining the smoothness of spacetime at the horizon” of a two-mouthed black hole, called a wormhole. He speculates that quantum error correction, as well as preventing firewalls, is also how qubits escape a black hole after falling in, through strands of entanglement between the inside and outside that are themselves like miniature wormholes. This would resolve Hawking’s paradox.
This year, the Department of Defense is funding research into holographic spacetime, at least partly in case advances there might spin off more efficient errorcorrecting codes for quantum computers.
On the physics side, it remains to be seen whether de Sitter universes like ours can be described holographically, in terms of qubits and codes. “The whole connection is known for a world that is manifestly not our world,” Aaronson said. In a paper last summer, Dong, who is now at the University of California, Santa Barbara, and his coauthors Eva Silverstein and Gonzalo Torroba took a step in the de Sitter direction, with an attempt at a primitive holographic description. Researchers are still studying that particular proposal, but Preskill thinks the language of quantum error correction will ultimately carry over to actual spacetime.
“It’s really entanglement which is holding the space together,” he said. “If you want to weave spacetime together out of little pieces, you have to entangle them in the right way. And the right way is to build a quantum errorcorrecting code.”
https://www.quantamagazine.org/how-space-and-time-could-be-a-quantum-error-correcting-code-20190103/
_____________________
Related: How does gravity work in the quantum regime? A holographic duality from string theory offers a powerful tool for unraveling the mystery.
This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (Added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546.)
The basic rules of chemistry are magic number approximations
What is Lewis Theory?
This lesson is from Mark R. Leach, meta-synthesis.com, Lewis_theory
Lewis theory is the study of the patterns that atoms display when they bond and react with each other.
The Lewis approach is to look at many chemical systems, study the patterns, and count the electrons in those patterns. After that, we devise simple rules to explain what is happening.
Lewis theory makes no attempt to explain how or why these empirically derived numbers of electrons – these magic numbers – arise.
It is striking, though, that the magic numbers are generally (but not exclusively) small integers of even parity: 0, 2, 4, 6, 8.
For example:

Atoms and atomic ions show particular stability when they have a full outer or valence shell of electrons and are isoelectronic with He, Ne, Ar, Kr & Xe: Magic numbers 2, 10, 18, 36, 54.

Atoms have a shell electronic structure: Magic numbers 2, 8, 8, 18, 18.

Sodium metal reacts to give the sodium ion, Na^{+}, a species that has a full octet of electrons in its valence shell. Magic number 8.

A covalent bond consists of a shared pair of electrons: Magic number 2.

Atoms have valency, the number of chemical bonds formed by an element, which is the number of electrons in the valence shell divided by 2: Magic numbers 0 to 8.

Ammonia, H3N:, has a lone pair of electrons in its valence shell: Magic number 2.

Ethene, H2C=CH2, has a double covalent bond: Magic number (2 + 2)/2 = 2.

Nitrogen, N2, N≡N, has a triple covalent bond: Magic number (2 + 2 + 2)/2 = 3.

The methyl radical, H3C•, has a single unpaired electron in its valence shell: Magic number 1.

Lewis bases (proton abstractors & nucleophiles) react via an electron pair: Magic number 2.

Electrophiles, Lewis acids, accept a pair of electrons in order to fill their octet: Magic numbers 2 + 6 = 8.

Oxidation involves loss of electrons, reduction involves gain of electrons. Every redox reaction involves concurrent oxidation and reduction: Magic number 0 (overall).

Curly arrows represent the movement of an electron pair: Magic number 2.

Ammonia, NH3, and phosphine, PH3, are isoelectronic in that they have the same Lewis structure. Both have three covalent bonds and a lone pair of electrons: Magic numbers 2 & 8.

Aromaticity in benzene is associated with the species having 4n + 2 π-electrons: Magic number 6. Naphthalene is also aromatic: Magic number 10.

Etc.
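The electron accountancy in the list above can be checked by straightforward counting. A minimal Python sketch (the names are illustrative; the shell capacities, bond-order arithmetic and 4n + 2 counts are exactly the ones quoted in the list):

```python
from itertools import accumulate

# Shell capacities filled in period order reproduce the noble gas
# magic numbers for He, Ne, Ar, Kr, Xe.
SHELL_CAPACITIES = [2, 8, 8, 18, 18]
noble_gas_counts = list(accumulate(SHELL_CAPACITIES))
assert noble_gas_counts == [2, 10, 18, 36, 54]

# Bond order as shared electrons divided by two.
def bond_order(shared_electrons):
    return shared_electrons // 2

assert bond_order(4) == 2  # ethene, H2C=CH2: (2 + 2)/2
assert bond_order(6) == 3  # nitrogen, N≡N: (2 + 2 + 2)/2

# Hueckel 4n + 2 rule: benzene (n = 1), naphthalene (n = 2).
assert [4 * n + 2 for n in (1, 2)] == [6, 10]
```

Nothing here explains the magic numbers, of course; like Lewis theory itself, the sketch only counts and checks them.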
Lewis theory is numerology.
Lewis theory is electron accountancy: look for the patterns and count the electrons.
Lewis theory is also highly eclectic in that it greedily begs/borrows/steals/assimilates numbers from deeper, predictive theories and incorporates them into itself, as we shall see.
Ernest Rutherford famously said

Patterns
Consider the pattern shown in Diagram 1:
Now expand the view slightly and look at Diagram 2.
You may feel that the right-hand side “does not fit the pattern” of Diagram 1 and so is an anomaly.
So, is it an anomaly?
Zoom out a bit and look at the pattern in Diagram 3: the anomaly disappears.
But then look at Diagram 4. The purple patch on the upper right-hand side does not seem to fit the pattern, and so it may represent an anomaly.
But zooming right out to Diagram 5, we see that everything is part of a larger regular pattern.
When viewed at the larger scale, the overall pattern emerges and everything becomes clear. Of course, the Digital Flowers pattern is trivial, whereas the interactions of electrons and positive nuclei are astonishingly subtle.
This situation is exactly like learning about chemical structure and reactivity using Lewis theory. First we learn about the ‘Lewis octet’, and we come to believe that the pattern of chemistry can be explained in terms of the very useful Lewis octet model.
Then we encounter phosphorus pentachloride, PCl5, and discover that it has 10 electrons in its valence shell. Is PCl5 an anomaly? No! The fact is that the pattern generated through the Lewis octet model is just too simple.
As we zoom out and look at more chemical structure and reactivity examples, we see that the pattern is more complicated than indicated by the Lewis octet magic number 8.
Our problem is that although the patterns of electrons in chemical systems are in principle predictable, new patterns always come as a surprise when they are first discovered:

The periodicity of the chemical elements

The 4n + 2 rule of aromaticity

The observation that sulfur exists in S8 rings

The discovery of neodymium magnets in the 1980s

The serendipitous discovery of how to make the fullerene C60 in large amounts
While these observations can be explained after the fact, they were not predicted beforehand. We do not have the mathematical tools to predict the nature of the quantum patterns with absolute precision.
The chemist’s approach to understanding structure and reactivity is to count the electrons and take note of the patterns. This is Lewis theory.
As chemists we attempt to ‘explain’ many of these patterns in terms of electron accountancy and magic numbers.
Caught In The Act: Theoretical Theft & Magic Number Creation
The crucial period for our understanding of chemical structure & bonding occurred in the busy chemistry laboratories at UC Berkeley under the leadership of G. N. Lewis in the early years of the 20th century.
Lewis and colleagues were actively debating the new ideas about atomic structure, particularly the Rutherford & Bohr atoms and postulated how they might give rise to models of chemical structure, bonding & reactivity.
Indeed, the Lewis model uses ideas directly from the Bohr atom. The Rutherford atom shows electrons whizzing about the nucleus, but to the trained eye, there is no structure to the whizzing. Introduced by Niels Bohr in 1913, the Bohr model is a quantum physics modification of the Rutherford model and is sometimes referred to as the Rutherford–Bohr model. (Bohr was Rutherford’s student at the time.) The model’s key success lay in explaining (correlating with) the Rydberg formula for the spectral emission lines of atomic hydrogen.
[Greatly simplifying both the history & the science:]
In 1916 atomic theory forked or bifurcated into physics and chemistry streams:

The physics fork was initiated and developed by Bohr, Pauli, Sommerfeld and others. Research involved studying atomic spectroscopy, and this led to the discovery of the four quantum numbers – principal, azimuthal, magnetic & spin – and their selection rules. More advanced models of chemical structure, bonding & reactivity are based upon the Schrödinger equation, in which the electron is treated as a resonant standing wave. This has developed into molecular orbital theory and the discipline of computational chemistry.

Note: quantum numbers and their selection rules are not ‘magic’ numbers. The quantum numbers represent deep symmetries that are entirely self-consistent across all quantum mechanics.

The chemistry fork started when Lewis published his first ideas about the patterns he saw in chemical bonding and reactivity in 1916, and later in a more advanced form in 1923. Lewis realised that electrons could be counted and that there were patterns associated with structure, bonding and reactivity behaviour. These early ideas have been extensively developed and are now taught to chemistry students the world over. This is Lewis theory.
_____________________________________________________
Lewis Theory and Quantum Mechanics
Quantum mechanics and Lewis theory are both concerned with patterns. However, quantum mechanics actively causes the patterns whereas Lewis theory is passive and it only reports on patterns that are observed through experiment.
We observe patterns of structure & reactivity behaviour through experiment.
Lewis theory looks down on the empirical evidence, identifies patterns in behaviour and classifies the patterns in terms of electron accountancy & magic numbers. Lewis theory gives no explanation for the patterns.
In large part, chemistry is about the behaviour of electrons, and electrons are quantum mechanical entities. Quantum mechanics causes chemistry to be the way it is. The quantum mechanical patterns can be:
* Observed using spectroscopy.
* Seen as echoes of the underlying quantum mechanics in the chemical structure & reactivity behaviour patterns.
* Calculated, although the mathematics is not trivial.
Four types of multiverses
Most people believe that the universe began at the Big Bang, and that our universe is the only one that has ever existed. Others believe that the universe is cyclical, and that universes existed before ours: those universes, it is hypothesized, collapsed and were replaced by later universes.
When Georges Lemaître, a Belgian physicist and Roman Catholic priest, first began to develop the Big Bang Theory (in 1927), many scientists assumed the former – that this is the only universe that has ever existed. In this view, it makes no sense to ask “what happened before the Big Bang?” as there was no before.
In more recent years, scientists have studied the possibility of a multiverse. Our universe may not be the only one that has existed; perhaps others existed before our own, and others may exist after our own. Also, perhaps other universes – in some way removed from our own – simultaneously exist. In this view, one indeed may ask “what happened before the Big Bang?” as there was a time before our universe.
Is there evidence of a multiverse?
At the present time, most scientists say that we don’t have any direct evidence. However, astronomical and physics evidence, as interpreted through quantum mechanics and general relativity, may suggest that other universes may exist.
As such, physicists have developed models of how our universe may have been created, perhaps from the destruction of a previous universe, or perhaps ours branched off from some other.
On the other hand, some physicists hold that certain results of quantum mechanics experiments are, indeed, direct evidence of our universe physically interfering with other “nearby” universes in the quantum multiverse.
Two of the best-known adherents of this view are Max Tegmark (whose work is the basis of this article) and David Deutsch. See The Fabric of Reality by David Deutsch (Penguin, 1998).
A Physicist Explores the Multiverse Quantum Computers Predict Parallel Worlds by Susan Barber
David Deutsch’s multiverse carries us beyond the realms of imagination
New evidence for the multiverse—and its implications
Pilot wave theory really implies multiverse theory
Max Tegmark article – the four types of multiverse are:
LEVEL I: REGIONS BEYOND OUR COSMIC HORIZON
Summary: The simplest type of parallel universe is simply a region of space that is too far away for us to have seen yet. The farthest that we can observe is currently about 4 × 10^26 meters, or 42 billion light-years — the distance that light has been able to travel since the big bang. (The distance is greater than 14 billion light-years because cosmic expansion has lengthened distances.) Each of the Level I parallel universes is basically the same as ours. All the differences stem from variations in the initial arrangement of matter.
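As a quick sanity check on the figures in the summary, 4 × 10^26 meters does convert to roughly 42 billion light-years. The meters-per-light-year constant below is the standard Julian-year value:

```python
# Convert the quoted horizon distance, 4 x 10^26 meters, into light-years.
METERS_PER_LIGHT_YEAR = 9.461e15  # distance light travels in one Julian year

horizon_ly = 4e26 / METERS_PER_LIGHT_YEAR
assert 41e9 < horizon_ly < 43e9  # about 42 billion light-years, as quoted
```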
LEVEL II: OTHER POST-INFLATION BUBBLES
Summary: A somewhat more elaborate type of parallel universe emerges from the theory of cosmological inflation. The idea is that our Level I multiverse—namely, our universe and contiguous regions of space—is a bubble embedded in an even vaster but mostly empty volume. Other bubbles exist out there, disconnected from ours. They nucleate like raindrops in a cloud. During nucleation, variations in quantum fields endow each bubble with properties that distinguish it from other bubbles.
LEVEL III: THE MANY WORLDS OF QUANTUM PHYSICS
Summary: Quantum mechanics predicts a vast number of parallel universes by broadening the concept of “elsewhere.” These universes are located elsewhere, not in ordinary space but in an abstract realm of all possible states. Every conceivable way that the world could be (within the scope of quantum mechanics) corresponds to a different universe.
Yet these parallel universes might make their presence felt in laboratory experiments, such as wave interference and quantum computation.
LEVEL IV: MATHEMATICAL UNIVERSE HYPOTHESIS
Existing outside of our space and time, they are almost impossible to visualize; the best one can do is to think of them abstractly. We can at least create static sculptures that represent the mathematical structure of the physical laws that govern them.
For example, consider a simple universe: Earth, moon and sun, obeying Newton’s laws. To an objective observer, this universe looks like a circular ring (Earth’s orbit smeared out in time) wrapped in a braid (the moon’s orbit around Earth).
Other shapes would embody other laws of physics.
According to Max Tegmark, this paradigm solves various problems concerning the foundations of physics.
A Level IV multiverse comes from the idea that our physical world is a mathematical structure. It means that mathematical equations describe not merely some limited aspects of the physical world, but all aspects of it.
External resources
Max Tegmark multiverse website, MIT.edu
Scientific American article: Parallel Universes
Infographic: Inflation Meets Many Worlds
The Physics of David Deutsch (older version)
David Deutsch homepage, Centre for Quantum Computation, Oxford Univ
Related articles
What Is (And Isn’t) Scientific About The Multiverse, by Ethan Siegel, Forbes
Learning Standards
2016 Massachusetts Science and Technology/Engineering Standards
Students will be able to:
* respectfully provide and/or receive critiques on scientific arguments by probing reasoning and evidence and challenging ideas and conclusions, and determining what additional information is required to solve contradictions
Next Generation Science Standards: Science & Engineering Practices
● Ask questions that arise from careful observation of phenomena, or unexpected results, to clarify and/or seek additional information.
● Ask questions that arise from examining models or a theory, to clarify and/or seek additional information and relationships.
Newfound Wormhole Allows Info to Escape Black Holes
By Natalie Wolchover, Senior Writer, Quanta Magazine
October 23, 2017
In 1985, when Carl Sagan was writing the novel Contact, he needed to quickly transport his protagonist Dr. Ellie Arroway from Earth to the star Vega. He had her enter a black hole and exit light-years away, but he didn’t know if this made any sense. The Cornell University astrophysicist and television star consulted his friend Kip Thorne, a black hole expert at the California Institute of Technology (who won a Nobel Prize earlier this month). Thorne knew that Arroway couldn’t get to Vega via a black hole, which is thought to trap and destroy anything that falls in. But it occurred to him that she might make use of another kind of hole consistent with Albert Einstein’s general theory of relativity: a tunnel or “wormhole” connecting distant locations in spacetime.
While the simplest theoretical wormholes immediately collapse and disappear before anything can get through, Thorne wondered whether it might be possible for an “infinitely advanced” sci-fi civilization to stabilize a wormhole long enough for something or someone to traverse it.
He figured out that such a civilization could in fact line the throat of a wormhole with “exotic material” that counteracts its tendency to collapse. The material would possess negative energy, which would deflect radiation and repel spacetime apart from itself. Sagan used the trick in Contact, attributing the invention of the exotic material to an earlier, lost civilization to avoid getting into particulars. Meanwhile, those particulars enthralled Thorne, his students and many other physicists, who spent years exploring traversable wormholes and their theoretical implications. They discovered that these wormholes can serve as time machines, invoking time-travel paradoxes — evidence that exotic material is forbidden in nature.
Now, decades later, a new species of traversable wormhole has emerged, free of exotic material and full of potential for helping physicists resolve a baffling paradox about black holes. This paradox is the very problem that plagued the early draft of Contact and led Thorne to contemplate traversable wormholes in the first place; namely, that things that fall into black holes seem to vanish without a trace. This total erasure of information breaks the rules of quantum mechanics, and it so puzzles experts that in recent years, some have argued that black hole interiors don’t really exist — that space and time strangely end at their horizons.
The flurry of findings started last year with a paper that reported the first traversable wormhole that doesn’t require the insertion of exotic material to stay open. Instead, according to Ping Gao and Daniel Jafferis of Harvard University and Aron Wall of Stanford University, the repulsive negative energy in the wormhole’s throat can be generated from the outside by a special quantum connection between the pair of black holes that form the wormhole’s two mouths. When the black holes are connected in the right way, something tossed into one will shimmy along the wormhole and, following certain events in the outside universe, exit the second.
Remarkably, Gao, Jafferis and Wall noticed that their scenario is mathematically equivalent to a process called quantum teleportation, which is key to quantum cryptography and can be demonstrated in laboratory experiments.
John Preskill, a black hole and quantum gravity expert at Caltech, says the new traversable wormhole comes as a surprise, with implications for the black hole information paradox and black hole interiors. “What I really like,” he said, “is that an observer can enter the black hole and then escape to tell about what she saw.” This suggests that black hole interiors really exist, he explained, and that what goes in must come out.
The new wormhole work began in 2013, when Jafferis attended an intriguing talk at the Strings conference in South Korea. The speaker, Juan Maldacena, a professor of physics at the Institute for Advanced Study in Princeton, New Jersey, had recently concluded, based on various hints and arguments, that “ER = EPR.” That is, wormholes between distant points in spacetime, the simplest of which are called Einstein-Rosen or “ER” bridges, are equivalent (albeit in some ill-defined way) to entangled quantum particles, also known as Einstein-Podolsky-Rosen or “EPR” pairs. The ER = EPR conjecture, posed by Maldacena and Leonard Susskind of Stanford, was an attempt to solve the modern incarnation of the infamous black hole information paradox by tying spacetime geometry, governed by general relativity, to the instantaneous quantum connections between far-apart particles that Einstein called “spooky action at a distance.”
The paradox has loomed since 1974, when the British physicist Stephen Hawking determined that black holes evaporate — slowly giving off heat in the form of particles now known as “Hawking radiation.” Hawking calculated that this heat is completely random; it contains no information about the black hole’s contents. As the black hole blinks out of existence, so does the universe’s record of everything that went inside. This violates a principle called “unitarity,” the backbone of quantum theory, which holds that as particles interact, information about them is never lost, only scrambled, so that if you reversed the arrow of time in the universe’s quantum evolution, you’d see things unscramble into an exact recreation of the past.
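The unitarity principle just described can be made concrete in a few lines of linear algebra. This is a generic textbook illustration, not a calculation from the article: any unitary evolution, however much it scrambles a state, is exactly reversible, so the information is never lost.

```python
import numpy as np

# Unitarity demo: a random unitary U "scrambles" a state, and applying its
# conjugate transpose (running time backwards) recovers the state exactly.
rng = np.random.default_rng(0)

# Build a random unitary by QR-decomposing a random complex matrix.
dim = 8
m = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
u, _ = np.linalg.qr(m)

# An initial state with all of its information in the first amplitude.
psi = np.zeros(dim, dtype=complex)
psi[0] = 1.0

scrambled = u @ psi                  # evolution scrambles the amplitudes
recovered = u.conj().T @ scrambled   # reversing the evolution unscrambles them

assert np.allclose(recovered, psi)   # nothing was lost, only rearranged
```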
Almost everyone believes in unitarity, which means information must escape black holes — but how? In the last five years, some theorists, most notably Joseph Polchinski of the University of California, Santa Barbara, have argued that black holes are empty shells with no interiors at all — that Ellie Arroway, upon hitting a black hole’s event horizon, would fizzle on a “firewall” and radiate out again.
Many theorists believe in black hole interiors (and gentler transitions across their horizons), but in order to understand them, they must discover the fate of information that falls inside. This is critical to building a working quantum theory of gravity, the longsought union of the quantum and spacetime descriptions of nature that comes into sharpest relief in black hole interiors, where extreme gravity acts on a quantum scale.
The quantum gravity connection is what drew Maldacena, and later Jafferis, to the ER = EPR idea, and to wormholes. The implied relationship between tunnels in spacetime and quantum entanglement posed by ER = EPR resonated with a popular recent belief that space is essentially stitched into existence by quantum entanglement. It seemed that wormholes had a role to play in stitching together spacetime and in letting black hole information worm its way out of black holes — but how might this work? When Jafferis heard Maldacena talk about his cryptic equation and the evidence for it, he was aware that a standard ER wormhole is unstable and nontraversable. But he wondered what Maldacena’s duality would mean for a traversable wormhole like the ones Thorne and others played around with decades ago. Three years after the South Korea talk, Jafferis and his collaborators Gao and Wall presented their answer. The work extends the ER = EPR idea by equating, not a standard wormhole and a pair of entangled particles, but a traversable wormhole and quantum teleportation: a protocol discovered in 1993 that allows a quantum system to disappear and reappear unscathed somewhere else.
When Maldacena read Gao, Jafferis and Wall’s paper, “I viewed it as a really nice idea, one of these ideas that after someone tells you, it’s obvious,” he said. Maldacena and two collaborators, Douglas Stanford and Zhenbin Yang, immediately began exploring the new wormhole’s ramifications for the black hole information paradox; their paper appeared in April. Susskind and Ying Zhao of Stanford followed this with a paper about wormhole teleportation in July. The wormhole “gives an interesting geometric picture for how teleportation happens,” Maldacena said. “The message actually goes through the wormhole.”
In their paper, “Diving Into Traversable Wormholes,” published in Fortschritte der Physik, Maldacena, Stanford and Yang consider a wormhole of the new kind that connects two black holes: a parent black hole and a daughter one formed from half of the Hawking radiation given off by the parent as it evaporates. The two systems are as entangled as they can be. Here, the fate of the older black hole’s information is clear: It worms its way out of the daughter black hole.
During an interview this month in his tranquil office at the IAS, Maldacena, a reserved Argentinian-American with a track record of influential insights, described his radical musings. On the right side of a chalk-dusty blackboard, Maldacena drew a faint picture of two black holes connected by the new traversable wormhole.
On the left, he sketched a quantum teleportation experiment, performed by the famous fictional experimenters Alice and Bob, who are in possession of entangled quantum particles a and b, respectively.
Say Alice wants to teleport a qubit q to Bob. She prepares a combined state of q and a, measures that combined state (collapsing it to a pair of classical bits, each 0 or 1), and sends the result of this measurement to Bob. He can then use this as a key for operating on b in a way that recreates the state q. Voilà, a unit of quantum information has been teleported from one place to the other.
Maldacena turned to the right side of the blackboard. “You can do operations with a pair of black holes that are morally equivalent to what I discussed [about quantum teleportation]. And in that picture, this message really goes through the wormhole.”
Say Alice throws qubit q into black hole A. She then measures a particle of its Hawking radiation, a, and transmits the result of the measurement through the external universe to Bob, who can use this knowledge to operate on b, a Hawking particle coming out of black hole B. Bob’s operation reconstructs q, which appears to pop out of B, a perfect match for the particle that fell into A. This is why some physicists are excited: Gao, Jafferis and Wall’s wormhole allows information to be recovered from black holes. In their paper, they set up their wormhole in a negatively curved spacetime geometry that often serves as a useful, if unrealistic, playground for quantum gravity theorists. However, their wormhole idea seems to extend to the real world as long as two black holes are coupled in the right way: “They have to be causally connected and then the nature of the interaction that we took is the simplest thing you can imagine,” Jafferis explained. If you allow the Hawking radiation from one of the black holes to fall into the other, the two black holes become entangled, and the quantum information that falls into one can exit the other.
The quantum-teleportation format precludes using these traversable wormholes as time machines. Anything that goes through the wormhole has to wait for Alice’s message to travel to Bob in the outside universe before it can exit Bob’s black hole, so the wormhole doesn’t offer any superluminal boost that could be exploited for time travel. It seems traversable wormholes might be permitted in nature as long as they offer no speed advantage. “Traversable wormholes are like getting a bank loan,” Gao, Jafferis and Wall wrote in their paper: “You can only get one if you are rich enough not to need it.”
A Naive Octopus
While traversable wormholes won’t revolutionize space travel, according to Preskill the new wormhole discovery provides “a promising resolution” to the black hole firewall question by suggesting that there is no firewall at black hole horizons. Preskill said the discovery rescues “what we call ‘black hole complementarity,’ which means that the interior and exterior of the black hole are not really two different systems but rather two very different, complementary ways of looking at the same system.” If complementarity holds, as is widely assumed, then in passing across a black hole horizon from one realm to the other, Contact’s Ellie Arroway wouldn’t notice anything strange; under certain conditions, she could even slide all the way through a Gao-Jafferis-Wall wormhole.
The wormhole also safeguards unitarity — the principle that information is never lost — at least for the entangled black holes being studied. Whatever falls into one black hole eventually exits the other as Hawking radiation, Preskill said, which “can be thought of as in some sense a very scrambled copy of the black hole interior.”
Taking the findings to their logical conclusion, Preskill thinks it ought to be possible (at least for an infinitely advanced civilization) to influence the interior of one of these black holes by manipulating its radiation. This “sounds crazy,” he wrote in an email, but it “might make sense if we can think of the radiation, which is entangled with the black hole — EPR — as being connected to the black hole interior by wormholes — ER. Then tickling the radiation can send a message which can be read from inside the black hole!” He added, “We still have a ways to go, though, before we can flesh out this picture in more detail.”
Indeed, obstacles remain in the quest to generalize the new wormhole findings to a statement about the fate of all quantum information, or the meaning of ER = EPR.
In Maldacena and Susskind’s paper proposing ER = EPR, they included a sketch that’s become known as the “octopus”: a black hole with tentacle-like wormholes leading to distant Hawking particles that have evaporated out of it.
The authors explained that the sketch illustrates “the entanglement pattern between the black hole and the Hawking radiation. We expect that this entanglement leads to the interior geometry of the black hole.”
But according to Matt Visser, a mathematician and general-relativity expert at Victoria University of Wellington in New Zealand who has studied wormholes since the 1990s, the most literal reading of the octopus picture doesn’t work. The throats of wormholes formed from single Hawking particles would be so thin that qubits could never fit through. “A traversable wormhole throat is ‘transparent’ only to wave packets with size smaller than the throat radius,” Visser explained. “Big wave packets will simply bounce off any small wormhole throat without crossing to the other side.”
Stanford, who co-wrote the recent paper with Maldacena and Yang, acknowledged that this is a problem with the simplest interpretation of the ER = EPR idea, in which each particle of Hawking radiation has its own tentacle-like wormhole.
However, a more speculative interpretation of ER = EPR that he and others have in mind does not suffer from this failing. “The idea is that in order to recover the information from the Hawking radiation using this traversable wormhole,” Stanford said, one has to “gather the Hawking radiation together and act on it in a complicated way.”
This complicated collective measurement reveals information about the particles that fell in; it has the effect, he said, of “creating a large, traversable wormhole out of the small and unhelpful octopus tentacles. The information would then propagate through this large wormhole.” Maldacena added that, simply put, the theory of quantum gravity might have a new, generalized notion of geometry for which ER equals EPR. “We think quantum gravity should obey this principle,” he said. “We view it more as a guide to the theory.”
In his 1994 popular science book, Black Holes and Time Warps, Kip Thorne celebrated the style of reasoning involved in wormhole research. “No type of thought experiment pushes the laws of physics harder than the type triggered by Carl Sagan’s phone call to me,” he wrote; “thought experiments that ask, ‘What things do the laws of physics permit an infinitely advanced civilization to do, and what things do the laws forbid?’”
Arxiv paper: Cool horizons for entangled black holes Juan Maldacena and Leonard Susskind
Related articles
Wormholes Untangle a Black Hole Paradox
How Quantum Pairs Stitch Space-Time
Interactive: What Is Space?
Alice and Bob Meet the Wall of Fire
Theoretical physics: The origins of space and time
Many researchers believe that physics will not be complete until it can explain not just the behaviour of space and time, but where these entities come from.
Zeeya Merali, Nature, 28 August 2013
“Imagine waking up one day and realizing that you actually live inside a computer game,” says Mark Van Raamsdonk, describing what sounds like a pitch for a science-fiction film. But for Van Raamsdonk, a physicist at the University of British Columbia in Vancouver, Canada, this scenario is a way to think about reality. If it is true, he says, “everything around us — the whole three-dimensional physical world — is an illusion born from information encoded elsewhere, on a two-dimensional chip”. That would make our Universe, with its three spatial dimensions, a kind of hologram, projected from a substrate that exists only in lower dimensions.
This ‘holographic principle’ is strange even by the usual standards of theoretical physics. But Van Raamsdonk is one of a small band of researchers who think that the usual ideas are not yet strange enough. If nothing else, they say, neither of the two great pillars of modern physics — general relativity, which describes gravity as a curvature of space and time, and quantum mechanics, which governs the atomic realm — gives any account of the existence of space and time. Neither does string theory, which describes elementary threads of energy.
Van Raamsdonk and his colleagues are convinced that physics will not be complete until it can explain how space and time emerge from something more fundamental — a project that will require concepts at least as audacious as holography. They argue that such a radical reconceptualization of reality is the only way to explain what happens when the infinitely dense ‘singularity’ at the core of a black hole distorts the fabric of spacetime beyond all recognition, or how researchers can unify atomiclevel quantum theory and planetlevel general relativity — a project that has resisted theorists’ efforts for generations.
“All our experiences tell us we shouldn’t have two dramatically different conceptions of reality — there must be one huge overarching theory,” says Abhay Ashtekar, a physicist at Pennsylvania State University in University Park.
Finding that one huge theory is a daunting challenge. Here, Nature explores some promising lines of attack — as well as some of the emerging ideas about how to test these concepts.
NIK SPENCER/NATURE; Panel 4 adapted from Budd, T. & Loll, R. Phys. Rev. D 88, 024015 (2013)
Gravity as thermodynamics
One of the most obvious questions to ask is whether this endeavour is a fool’s errand. Where is the evidence that there actually is anything more fundamental than space and time?
A provocative hint comes from a series of startling discoveries made in the early 1970s, when it became clear that quantum mechanics and gravity were intimately intertwined with thermodynamics, the science of heat.
In 1974, most famously, Stephen Hawking of the University of Cambridge, UK, showed that quantum effects in the space around a black hole will cause it to spew out radiation as if it were hot. Other physicists quickly determined that this phenomenon was quite general. Even in completely empty space, they found, an astronaut undergoing acceleration would perceive that he or she was surrounded by a heat bath. The effect would be too small to be perceptible for any acceleration achievable by rockets, but it seemed to be fundamental. If quantum theory and general relativity are correct — and both have been abundantly corroborated by experiment — then the existence of Hawking radiation seemed inescapable.
A second key discovery was closely related. In standard thermodynamics, an object can radiate heat only by decreasing its entropy, a measure of the number of quantum states inside it. And so it is with black holes: even before Hawking’s 1974 paper, Jacob Bekenstein, now at the Hebrew University of Jerusalem, had shown that black holes possess entropy.
But there was a difference. In most objects, the entropy is proportional to the number of atoms the object contains, and thus to its volume. But a black hole’s entropy turned out to be proportional to the surface area of its event horizon — the boundary out of which not even light can escape. It was as if that surface somehow encoded information about what was inside, just as a twodimensional hologram encodes a threedimensional image.
In 1995, Ted Jacobson, a physicist at the University of Maryland in College Park, combined these two findings, and postulated that every point in space lies on a tiny ‘black-hole horizon’ that also obeys the entropy–area relationship. From that, he found, the mathematics yielded Einstein’s equations of general relativity — but using only thermodynamic concepts, not the idea of bending spacetime^{1}.
“This seemed to say something deep about the origins of gravity,” says Jacobson. In particular, the laws of thermodynamics are statistical in nature — a macroscopic average over the motions of myriad atoms and molecules — so his result suggested that gravity is also statistical, a macroscopic approximation to the unseen constituents of space and time.
In 2010, this idea was taken a step further by Erik Verlinde, a string theorist at the University of Amsterdam, who showed^{2} that the statistical thermodynamics of the spacetime constituents — whatever they turned out to be — could automatically generate Newton’s law of gravitational attraction.
And in separate work, Thanu Padmanabhan, a cosmologist at the Inter-University Centre for Astronomy and Astrophysics in Pune, India, showed^{3} that Einstein’s equations can be rewritten in a form that makes them identical to the laws of thermodynamics — as can many alternative theories of gravity. Padmanabhan is currently extending the thermodynamic approach in an effort to explain the origin and magnitude of dark energy: a mysterious cosmic force that is accelerating the Universe’s expansion.
Testing such ideas empirically will be extremely difficult. In the same way that water looks perfectly smooth and fluid until it is observed on the scale of its molecules — a fraction of a nanometre — estimates suggest that spacetime will look continuous all the way down to the Planck scale: roughly 10^{−35} metres, or some 20 orders of magnitude smaller than a proton.
But it may not be impossible. One often-mentioned way to test whether spacetime is made of discrete constituents is to look for delays as high-energy photons travel to Earth from distant cosmic events such as supernovae and γ-ray bursts. In effect, the shortest-wavelength photons would sense the discreteness as a subtle bumpiness in the road they had to travel, which would slow them down ever so slightly.
Giovanni Amelino-Camelia, a quantum-gravity researcher at the University of Rome, and his colleagues have found^{4} hints of just such delays in the photons from a γ-ray burst recorded in April. The results are not definitive, says Amelino-Camelia, but the group plans to expand its search to look at the travel times of high-energy neutrinos produced by cosmic events. He says that if theories cannot be tested, “then to me, they are not science. They are just religious beliefs, and they hold no interest for me.”
Other physicists are looking at laboratory tests. In 2012, for example, researchers from the University of Vienna and Imperial College London proposed^{5} a tabletop experiment in which a microscopic mirror would be moved around with lasers. They argued that Planck-scale granularities in spacetime would produce detectable changes in the light reflected from the mirror (see Nature http://doi.org/njf; 2012).
Loop quantum gravity
Even if it is correct, the thermodynamic approach says nothing about what the fundamental constituents of space and time might be. If spacetime is a fabric, so to speak, then what are its threads?
One possible answer is quite literal. The theory of loop quantum gravity, which has been under development since the mid-1980s by Ashtekar and others, describes the fabric of spacetime as an evolving spider’s web of strands that carry information about the quantized areas and volumes of the regions they pass through^{6}. The individual strands of the web must eventually join their ends to form loops — hence the theory’s name — but have nothing to do with the much better-known strings of string theory. The latter move around in spacetime, whereas strands actually are spacetime: the information they carry defines the shape of the spacetime fabric in their vicinity.
Because the loops are quantum objects, however, they also define a minimum unit of area in much the same way that ordinary quantum mechanics defines a minimum ground-state energy for an electron in a hydrogen atom. This quantum of area is a patch roughly one Planck scale on a side. Try to insert an extra strand that carries less area, and it will simply disconnect from the rest of the web. It will not be able to link to anything else, and will effectively drop out of spacetime.
One welcome consequence of a minimum area is that loop quantum gravity cannot squeeze an infinite amount of curvature onto an infinitesimal point. This means that it cannot produce the kind of singularities that cause Einstein’s equations of general relativity to break down at the instant of the Big Bang and at the centres of black holes.
In 2006, Ashtekar and his colleagues reported^{7} a series of simulations that took advantage of that fact, using the loop quantum gravity version of Einstein’s equations to run the clock backwards and visualize what happened before the Big Bang. The reversed cosmos contracted towards the Big Bang, as expected. But as it approached the fundamental size limit dictated by loop quantum gravity, a repulsive force kicked in and kept the singularity open, turning it into a tunnel to a cosmos that preceded our own.
This year, physicists Rodolfo Gambini at the Uruguayan University of the Republic in Montevideo and Jorge Pullin at Louisiana State University in Baton Rouge reported^{8} a similar simulation for a black hole. They found that an observer travelling deep into the heart of a black hole would encounter not a singularity, but a thin spacetime tunnel leading to another part of space. “Getting rid of the singularity problem is a significant achievement,” says Ashtekar, who is working with other researchers to identify signatures that would have been left by a bounce, rather than a bang, on the cosmic microwave background — the radiation left over from the Universe’s massive expansion in its infant moments.
Loop quantum gravity is not a complete unified theory, because it does not include any other forces. Furthermore, physicists have yet to show how ordinary spacetime would emerge from such a web of information. But Daniele Oriti, a physicist at the Max Planck Institute for Gravitational Physics in Golm, Germany, is hoping to find inspiration in the work of condensed-matter physicists, who have produced exotic phases of matter that undergo transitions described by quantum field theory. Oriti and his colleagues are searching for formulae to describe how the Universe might similarly change phase, transitioning from a set of discrete loops to a smooth and continuous spacetime. “It is early days and our job is hard because we are fishes swimming in the fluid at the same time as trying to understand it,” says Oriti.
Causal sets
Such frustrations have led some investigators to pursue a minimalist programme known as causal set theory. Pioneered by Rafael Sorkin, a physicist at the Perimeter Institute in Waterloo, Canada, the theory postulates that the building blocks of spacetime are simple mathematical points that are connected by links, with each link pointing from past to future. Such a link is a bare-bones representation of causality, meaning that an earlier point can affect a later one, but not vice versa. The resulting network is like a growing tree that gradually builds up into spacetime. “You can think of space emerging from points in a similar way to temperature emerging from atoms,” says Sorkin. “It doesn’t make sense to ask, ‘What’s the temperature of a single atom?’ You need a collection for the concept to have meaning.”
In the late 1980s, Sorkin used this framework to estimate^{9} the number of points that the observable Universe should contain, and reasoned that they should give rise to a small intrinsic energy that causes the Universe to accelerate its expansion. A few years later, the discovery of dark energy confirmed his guess. “People often think that quantum gravity cannot make testable predictions, but here’s a case where it did,” says Joe Henson, a quantum-gravity researcher at Imperial College London. “If the value of dark energy had been larger, or zero, causal set theory would have been ruled out.”
Causal dynamical triangulations
That hardly constituted proof, however, and causal set theory has offered few other predictions that could be tested. Some physicists have found it much more fruitful to use computer simulations. The idea, which dates back to the early 1990s, is to approximate the unknown fundamental constituents with tiny chunks of ordinary spacetime caught up in a roiling sea of quantum fluctuations, and to follow how these chunks spontaneously glue themselves together into larger structures.
The earliest efforts were disappointing, says Renate Loll, a physicist now at Radboud University in Nijmegen, the Netherlands. The spacetime building blocks were simple hyperpyramids — four-dimensional counterparts to three-dimensional tetrahedrons — and the simulation’s gluing rules allowed them to combine freely. The result was a series of bizarre ‘universes’ that had far too many dimensions (or too few), and that folded back on themselves or broke into pieces. “It was a free-for-all that gave back nothing that resembles what we see around us,” says Loll.
But, like Sorkin, Loll and her colleagues found that adding causality changed everything. After all, says Loll, the dimension of time is not quite like the three dimensions of space. “We cannot travel back and forth in time,” she says. So the team changed its simulations to ensure that effects could not come before their cause — and found that the spacetime chunks started consistently assembling themselves into smooth four-dimensional universes with properties similar to our own^{10}.
Intriguingly, the simulations also hint that soon after the Big Bang, the Universe went through an infant phase with only two dimensions — one of space and one of time. This prediction has also been made independently by others attempting to derive equations of quantum gravity, and even some who suggest that the appearance of dark energy is a sign that our Universe is now growing a fourth spatial dimension. Others have shown that a two-dimensional phase in the early Universe would create patterns similar to those already seen in the cosmic microwave background.
Holography
Meanwhile, Van Raamsdonk has proposed a very different idea about the emergence of spacetime, based on the holographic principle. Inspired by the hologram-like way that black holes store all their entropy at the surface, this principle was first given an explicit mathematical form by Juan Maldacena, a string theorist at the Institute for Advanced Study in Princeton, New Jersey, who published^{11} his influential model of a holographic universe in 1998. In that model, the three-dimensional interior of the universe contains strings and black holes governed only by gravity, whereas its two-dimensional boundary contains elementary particles and fields that obey ordinary quantum laws without gravity.
Hypothetical residents of the three-dimensional space would never see this boundary, because it would be infinitely far away. But that does not affect the mathematics: anything happening in the three-dimensional universe can be described equally well by equations in the two-dimensional boundary, and vice versa.
In 2010, Van Raamsdonk studied what that means when quantum particles on the boundary are ‘entangled’ — meaning that measurements made on one inevitably affect the other^{12}. He discovered that if every particle entanglement between two separate regions of the boundary is steadily reduced to zero, so that the quantum links between the two disappear, the three-dimensional space responds by gradually dividing itself like a splitting cell, until the last, thin connection between the two halves snaps. Repeating that process will subdivide the three-dimensional space again and again, while the two-dimensional boundary stays connected. So, in effect, Van Raamsdonk concluded, the three-dimensional universe is being held together by quantum entanglement on the boundary — which means that in some sense, quantum entanglement and spacetime are the same thing.
Or, as Maldacena puts it: “This suggests that quantum is the most fundamental, and spacetime emerges from it.”
Nature 500, 516–519 (29 August 2013) doi:10.1038/500516a
http://www.nature.com/news/theoretical-physics-the-origins-of-space-and-time-1.13613
__________
This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use
Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include:
the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
the nature of the copyrighted work;
the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
the effect of the use upon the potential market for or value of the copyrighted work. (Added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546)
__________________________________________________
Time’s Arrow Traced to Quantum Source
A new theory explains the seemingly irreversible arrow of time while yielding insights into entropy, quantum computers, black holes, and the past-future divide.
Natalie Wolchover, Senior Writer, Quanta Magazine, April 16, 2014
Coffee cools, buildings crumble, eggs break and stars fizzle out in a universe that seems destined to degrade into a state of uniform drabness known as thermal equilibrium. The astronomer-philosopher Sir Arthur Eddington in 1927 cited the gradual dispersal of energy as evidence of an irreversible “arrow of time.”
But to the bafflement of generations of physicists, the arrow of time does not seem to follow from the underlying laws of physics, which work the same going forward in time as in reverse. By those laws, it seemed that if someone knew the paths of all the particles in the universe and flipped them around, energy would accumulate rather than disperse: Tepid coffee would spontaneously heat up, buildings would rise from their rubble and sunlight would slink back into the sun.
“In classical physics, we were struggling,” said Sandu Popescu, a professor of physics at the University of Bristol in the United Kingdom. “If I knew more, could I reverse the event, put together all the molecules of the egg that broke? Why am I relevant?”
Surely, he said, time’s arrow is not steered by human ignorance. And yet, since the birth of thermodynamics in the 1850s, the only known approach for calculating the spread of energy was to formulate statistical distributions of the unknown trajectories of particles, and show that, over time, the ignorance smeared things out.
Now, physicists are unmasking a more fundamental source for the arrow of time: Energy disperses and objects equilibrate, they say, because of the way elementary particles become intertwined when they interact — a strange effect called “quantum entanglement.”
“Finally, we can understand why a cup of coffee equilibrates in a room,” said Tony Short, a quantum physicist at Bristol. “Entanglement builds up between the state of the coffee cup and the state of the room.”
Popescu, Short and their colleagues Noah Linden and Andreas Winter reported the discovery in the journal Physical Review E in 2009, arguing that objects reach equilibrium, or a state of uniform energy distribution, within an infinite amount of time by becoming quantum mechanically entangled with their surroundings. Similar results by Peter Reimann of the University of Bielefeld in Germany appeared several months earlier in Physical Review Letters.
Short and a collaborator strengthened the argument in 2012 by showing that entanglement causes equilibration within a finite time. And, in work that was posted on the scientific preprint site arXiv.org in February, two separate groups have taken the next step, calculating that most physical systems equilibrate rapidly, on time scales proportional to their size. “To show that it’s relevant to our actual physical world, the processes have to be happening on reasonable time scales,” Short said.
The tendency of coffee — and everything else — to reach equilibrium is “very intuitive,” said Nicolas Brunner, a quantum physicist at the University of Geneva. “But when it comes to explaining why it happens, this is the first time it has been derived on firm grounds by considering a microscopic theory.”
If the new line of research is correct, then the story of time’s arrow begins with the quantum mechanical idea that, deep down, nature is inherently uncertain. An elementary particle lacks definite physical properties and is defined only by probabilities of being in various states. For example, at a particular moment, a particle might have a 50 percent chance of spinning clockwise and a 50 percent chance of spinning counterclockwise. An experimentally tested theorem by the Northern Irish physicist John Bell says there is no “true” state of the particle; the probabilities are the only reality that can be ascribed to it.
Quantum uncertainty then gives rise to entanglement, the putative source of the arrow of time.
When two particles interact, they can no longer even be described by their own, independently evolving probabilities, called “pure states.” Instead, they become entangled components of a more complicated probability distribution that describes both particles together. It might dictate, for example, that the particles spin in opposite directions. The system as a whole is in a pure state, but the state of each individual particle is “mixed” with that of its acquaintance. The two could travel light-years apart, and the spin of each would remain correlated with that of the other, a feature Albert Einstein famously described as “spooky action at a distance.”
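The opposite-spins example can be made concrete in a few lines of NumPy (an illustrative sketch, not from the article): the singlet Bell state of two qubits is pure, yet tracing out either particle leaves the remaining one in a maximally mixed state, exactly the pure-versus-mixed distinction described above.

```python
import numpy as np

# Singlet Bell state (|01> - |10>)/sqrt(2): the two spins are guaranteed opposite.
bell = np.zeros(4)
bell[1], bell[2] = 1 / np.sqrt(2), -1 / np.sqrt(2)
rho = np.outer(bell, bell)              # 4x4 density matrix of the pair

def purity(r):
    """tr(r @ r): 1 for a pure state, 1/2 for a maximally mixed qubit."""
    return np.trace(r @ r).real

# Partial trace over particle B leaves particle A's state on its own.
rho_a = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(f"pair together: {purity(rho):.3f}")   # -> pair together: 1.000
print(f"one particle:  {purity(rho_a):.3f}") # -> one particle:  0.500
```

The reduced state `rho_a` comes out as the identity matrix divided by two: each particle alone carries no definite spin information, even though the pair together is in a single definite (pure) quantum state.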
“Entanglement is in some sense the essence of quantum mechanics,” or the laws governing interactions on the subatomic scale, Brunner said. The phenomenon underlies quantum computing, quantum cryptography and quantum teleportation.
The idea that entanglement might explain the arrow of time first occurred to Seth Lloyd about 30 years ago, when he was a 23-year-old philosophy graduate student at Cambridge University with a Harvard physics degree. Lloyd realized that quantum uncertainty, and the way it spreads as particles become increasingly entangled, could replace human uncertainty in the old classical proofs as the true source of the arrow of time.
Using an obscure approach to quantum mechanics that treated units of information as its basic building blocks, Lloyd spent several years studying the evolution of particles in terms of shuffling 1s and 0s. He found that as the particles became increasingly entangled with one another, the information that originally described them (a “1” for clockwise spin and a “0” for counterclockwise, for example) would shift to describe the system of entangled particles as a whole. It was as though the particles gradually lost their individual autonomy and became pawns of the collective state. Eventually, the correlations contained all the information, and the individual particles contained none. At that point, Lloyd discovered, particles arrived at a state of equilibrium, and their states stopped changing, like coffee that has cooled to room temperature.
“What’s really going on is things are becoming more correlated with each other,” Lloyd recalls realizing. “The arrow of time is an arrow of increasing correlations.”
The idea, presented in his 1988 doctoral thesis, fell on deaf ears. When he submitted it to a journal, he was told that there was “no physics in this paper.” Quantum information theory “was profoundly unpopular” at the time, Lloyd said, and questions about time’s arrow “were for crackpots and Nobel laureates who have gone soft in the head,” he remembers one physicist telling him.
“I was darn close to driving a taxicab,” Lloyd said.
Advances in quantum computing have since turned quantum information theory into one of the most active branches of physics. Lloyd is now a professor at the Massachusetts Institute of Technology, recognized as one of the founders of the discipline, and his overlooked idea has resurfaced in a stronger form in the hands of the Bristol physicists. The newer proofs are more general, researchers say, and hold for virtually any quantum system.
“When Lloyd proposed the idea in his thesis, the world was not ready,” said Renato Renner, head of the Institute for Theoretical Physics at ETH Zurich. “No one understood it. Sometimes you have to have the idea at the right time.”
In 2009, the Bristol group’s proof resonated with quantum information theorists, opening up new uses for their techniques. It showed that as objects interact with their surroundings — as the particles in a cup of coffee collide with the air, for example — information about their properties “leaks out and becomes smeared over the entire environment,” Popescu explained. This local information loss causes the state of the coffee to stagnate even as the pure state of the entire room continues to evolve. Except for rare, random fluctuations, he said, “its state stops changing in time.”
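One way to visualize this leaking of information is a toy “collision model” (a hypothetical sketch for illustration; it is not the Bristol group’s actual calculation): a single “coffee” qubit interacts, through random entangling unitaries, with a stream of fresh environment qubits that are then discarded. Its purity, tr(ρ²), drops from 1 as correlations with the discarded environment build up.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(dim):
    """Approximately Haar-random unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))          # fix column phases

# "Coffee" qubit starts pure in |0>; so does each fresh environment qubit.
rho_sys = np.array([[1, 0], [0, 0]], dtype=complex)
purities = []
for _ in range(5):
    rho_env = np.array([[1, 0], [0, 0]], dtype=complex)
    joint = np.kron(rho_sys, rho_env)   # joint state before the collision
    u = random_unitary(4)               # a random entangling interaction
    joint = u @ joint @ u.conj().T
    # Discard (trace out) the environment qubit, keeping only the system.
    rho_sys = joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    purities.append(np.trace(rho_sys @ rho_sys).real)

# Purity 1 means pure; 0.5 means maximally mixed. It drops below 1 as
# information about the system leaks into the discarded environment.
print([round(p, 3) for p in purities])
```

The system qubit’s reduced state stagnates near a mixed state even though each joint system-plus-environment step is a perfectly reversible unitary, mirroring the article’s point that the coffee’s state stops changing while the whole room keeps evolving.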
Consequently, a tepid cup of coffee does not spontaneously warm up. In principle, as the pure state of the room evolves, the coffee could suddenly become unmixed from the air and enter a pure state of its own. But there are so many more mixed states than pure states available to the coffee that this practically never happens — one would have to outlive the universe to witness it. This statistical unlikelihood gives time’s arrow the appearance of irreversibility. “Essentially entanglement opens a very large space for you,” Popescu said. “It’s like you are at the park and you start next to the gate, far from equilibrium. Then you enter and you have this enormous place and you get lost in it. And you never come back to the gate.”
In the new story of the arrow of time, it is the loss of information through quantum entanglement, rather than a subjective lack of human knowledge, that drives a cup of coffee into equilibrium with the surrounding room. The room eventually equilibrates with the outside environment, and the environment drifts even more slowly toward equilibrium with the rest of the universe. The giants of 19th-century thermodynamics viewed this process as a gradual dispersal of energy that increases the overall entropy, or disorder, of the universe. Today, Lloyd, Popescu and others in their field see the arrow of time differently. In their view, information becomes increasingly diffuse, but it never disappears completely. So, they assert, although entropy increases locally, the overall entropy of the universe stays constant at zero.
“The universe as a whole is in a pure state,” Lloyd said. “But individual pieces of it, because they are entangled with the rest of the universe, are in mixtures.”
One aspect of time’s arrow remains unsolved. “There is nothing in these works to say why you started at the gate,” Popescu said, referring to the park analogy. “In other words, they don’t explain why the initial state of the universe was far from equilibrium.” He said this is a question about the nature of the Big Bang.
Despite the recent progress in calculating equilibration time scales, the new approach has yet to make headway as a tool for parsing the thermodynamic properties of specific things, like coffee, glass or exotic states of matter. (Several traditional thermodynamicists reported being only vaguely aware of the new approach.) “The thing is to find the criteria for which things behave like window glass and which things behave like a cup of tea,” Renner said. “I would see the new papers as a step in this direction, but much more needs to be done.”
Some researchers expressed doubt that this abstract approach to thermodynamics will ever be up to the task of addressing the “hard nitty-gritty of how specific observables behave,” as Lloyd put it. But the conceptual advance and new mathematical formalism are already helping researchers address theoretical questions about thermodynamics, such as the fundamental limits of quantum computers and even the ultimate fate of the universe.
“We’ve been thinking more and more about what we can do with quantum machines,” said Paul Skrzypczyk of the Institute of Photonic Sciences in Barcelona. “Given that a system is not yet at equilibrium, we want to get work out of it. How much useful work can we extract? How can I intervene to do something interesting?”
Sean Carroll, a theoretical cosmologist at the California Institute of Technology, is employing the new formalism in his latest work on time’s arrow in cosmology. “I’m interested in the ultra-long-term fate of cosmological spacetimes,” said Carroll, author of “From Eternity to Here: The Quest for the Ultimate Theory of Time.” “That’s a situation where we don’t really know all of the relevant laws of physics, so it makes sense to think on a very abstract level, which is why I found this basic quantum-mechanical treatment useful.”
Twenty-six years after Lloyd’s big idea about time’s arrow fell flat, he is pleased to be witnessing its rise and has been applying the ideas in recent work on the black hole information paradox. “I think now the consensus would be that there is physics in this,” he said.
Not to mention a bit of philosophy.
According to the scientists, our ability to remember the past but not the future, another historically confounding manifestation of time’s arrow, can also be understood as a buildup of correlations between interacting particles. When you read a message on a piece of paper, your brain becomes correlated with it through the photons that reach your eyes. Only from that moment on will you be capable of remembering what the message says. As Lloyd put it: “The present can be defined by the process of becoming correlated with our surroundings.”
The backdrop for the steady growth of entanglement throughout the universe is, of course, time itself. The physicists stress that despite great advances in understanding how changes in time occur, they have made no progress in uncovering the nature of time itself or why it seems different (both perceptually and in the equations of quantum mechanics) than the three dimensions of space. Popescu calls this “one of the greatest unknowns in physics.”
“We can discuss the fact that an hour ago, our brains were in a state that was correlated with fewer things,” he said. “But our perception that time is flowing — that is a different matter altogether. Most probably, we will need a further revolution in physics that will tell us about that.”
https://www.quantamagazine.org/20150428-how-quantum-pairs-stitch-space-time/
The Quantum Thermodynamics Revolution
As physicists extend the 19th-century laws of thermodynamics to the quantum realm, they’re rewriting the relationships among energy, entropy and information.
Natalie Wolchover, Senior Writer, Quanta Magazine, May 2, 2017
https://www.quantamagazine.org/quantum-thermodynamics-revolution/
In his 1824 book, Reflections on the Motive Power of Fire, the 28-year-old French engineer Sadi Carnot worked out a formula for how efficiently steam engines can convert heat — now known to be a random, diffuse kind of energy — into work, an orderly kind of energy that might push a piston or turn a wheel. To Carnot’s surprise, he discovered that a perfect engine’s efficiency depends only on the difference in temperature between the engine’s heat source (typically a fire) and its heat sink (typically the outside air). Work is a byproduct, Carnot realized, of heat naturally passing to a colder body from a warmer one.
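In modern notation, Carnot’s result is the efficiency limit η = 1 − T_cold/T_hot, with both temperatures measured in kelvin, so strictly speaking it is the ratio of the absolute temperatures that matters. A minimal sketch (the 450 K / 300 K numbers are an illustrative choice, not from the text):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of heat an ideal engine converts to work,
    for reservoir temperatures in kelvin (Carnot's limit)."""
    if t_cold >= t_hot:
        raise ValueError("the heat source must be hotter than the sink")
    return 1.0 - t_cold / t_hot

# A boiler at 450 K exhausting to 300 K outside air:
print(f"{carnot_efficiency(450.0, 300.0):.1%}")  # -> 33.3%
```

Note that when the two reservoirs reach the same temperature the limit is zero: no work can be extracted, which is Carnot’s point that a temperature difference is what makes an engine go.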
Carnot died of cholera eight years later, before he could see his efficiency formula develop over the 19th century into the theory of thermodynamics: a set of universal laws dictating the interplay among temperature, heat, work, energy and entropy — a measure of energy’s incessant spreading from more- to less-energetic bodies. The laws of thermodynamics apply not only to steam engines but also to everything else: the sun, black holes, living beings and the entire universe. The theory is so simple and general that Albert Einstein deemed it likely to “never be overthrown.”
Yet since the beginning, thermodynamics has held a singularly strange status among the theories of nature.
“If physical theories were people, thermodynamics would be the village witch,” the physicist Lídia del Rio and coauthors wrote last year in Journal of Physics A. “The other theories find her somewhat odd, somehow different in nature from the rest, yet everyone comes to her for advice, and no one dares to contradict her.”
Unlike, say, the Standard Model of particle physics, which tries to get at what exists, the laws of thermodynamics only say what can and can’t be done. But one of the strangest things about the theory is that these rules seem subjective. A gas made of particles that in aggregate all appear to be the same temperature — and therefore unable to do work — might, upon closer inspection, have microscopic temperature differences that could be exploited after all. As the 19th-century physicist James Clerk Maxwell put it, “The idea of dissipation of energy depends on the extent of our knowledge.”
In recent years, a revolutionary understanding of thermodynamics has emerged that explains this subjectivity using quantum information theory — “a toddler among physical theories,” as del Rio and coauthors put it, that describes the spread of information through quantum systems. Just as thermodynamics initially grew out of trying to improve steam engines, today’s thermodynamicists are mulling over the workings of quantum machines. Shrinking technology — a single-ion engine and three-atom fridge were both experimentally realized for the first time within the past year — is forcing them to extend thermodynamics to the quantum realm, where notions like temperature and work lose their usual meanings, and the classical laws don’t necessarily apply.
They’ve found new, quantum versions of the laws that scale up to the originals. Rewriting the theory from the bottom up has led experts to recast its basic concepts in terms of its subjective nature, and to unravel the deep and often surprising relationship between energy and information — the abstract 1s and 0s by which physical states are distinguished and knowledge is measured. “Quantum thermodynamics” is a field in the making, marked by a typical mix of exuberance and confusion.
“We are entering a brave new world of thermodynamics,” said Sandu Popescu, a physicist at the University of Bristol who is one of the leaders of the research effort. “Although it was very good as it started,” he said, referring to classical thermodynamics, “by now we are looking at it in a completely new way.”
Entropy as Uncertainty
In an 1867 letter to his fellow Scotsman Peter Tait, Maxwell described his now-famous paradox hinting at the connection between thermodynamics and information. The paradox concerned the second law of thermodynamics — the rule that entropy always increases — which Sir Arthur Eddington would later say “holds the supreme position among the laws of nature.” According to the second law, energy becomes ever more disordered and less useful as it spreads to colder bodies from hotter ones and differences in temperature diminish. (Recall Carnot’s discovery that you need a hot body and a cold body to do work.) Fires die out, cups of coffee cool and the universe rushes toward a state of uniform temperature known as “heat death,” after which no more work can be done.
The great Austrian physicist Ludwig Boltzmann showed that energy disperses, and entropy increases, as a simple matter of statistics: There are many more ways for energy to be spread among the particles in a system than concentrated in a few, so as particles move around and interact, they naturally tend toward states in which their energy is increasingly shared.
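Boltzmann’s counting argument can be made concrete with a toy model (an illustrative sketch, not from the article): count the ways identical energy quanta can be distributed among particles, the stars-and-bars multiplicity of an Einstein solid. Configurations that share energy between two halves of a system vastly outnumber ones that concentrate it all in one half.

```python
from math import comb

def multiplicity(n_particles, n_quanta):
    """Number of ways to distribute identical energy quanta among
    distinguishable particles (stars-and-bars count)."""
    return comb(n_quanta + n_particles - 1, n_particles - 1)

# 100 quanta in a solid made of two 50-particle halves: compare all the
# energy piled into one half against an even 50/50 split.
concentrated = multiplicity(50, 100) * multiplicity(50, 0)
spread = multiplicity(50, 50) * multiplicity(50, 50)
print(f"{spread / concentrated:.1e}")  # roughly 4e17 times more spread-out states
```

Even for this tiny system the evenly shared arrangements outnumber the concentrated ones by a factor of around 10^17, which is why a randomly wandering system is overwhelmingly likely to be found with its energy dispersed.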
But Maxwell’s letter described a thought experiment in which an enlightened being — later called Maxwell’s demon — uses its knowledge to lower entropy and violate the second law. The demon knows the positions and velocities of every molecule in a container of gas. By partitioning the container and opening and closing a small door between the two chambers, the demon lets only fast-moving molecules enter one side, while allowing only slow molecules to go the other way. The demon’s actions divide the gas into hot and cold, concentrating its energy and lowering its overall entropy. The once useless gas can now be put to work.
Maxwell and others wondered how a law of nature could depend on one’s knowledge — or ignorance — of the positions and velocities of molecules. If the second law of thermodynamics depends subjectively on one’s information, in what sense is it true?
A century later, the American physicist Charles Bennett, building on work by Leo Szilard and Rolf Landauer, resolved the paradox by formally linking thermodynamics to the young science of information. Bennett argued that the demon’s knowledge is stored in its memory, and memory has to be cleaned, which takes work. (In 1961, Landauer calculated that at room temperature, it takes at least 2.9 zeptojoules of energy for a computer to erase one bit of stored information.) In other words, as the demon organizes the gas into hot and cold and lowers the gas’s entropy, its brain burns energy and generates more than enough entropy to compensate. The overall entropy of the gas-demon system increases, satisfying the second law of thermodynamics.
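The figure quoted here is Landauer’s bound, k_B·T·ln 2 per erased bit (standard physics, not spelled out in the article), which a two-line calculation reproduces:

```python
from math import log

K_B = 1.380649e-23        # Boltzmann constant in J/K (exact SI value)

def landauer_bound(temperature_k):
    """Minimum energy in joules to erase one bit: k_B * T * ln 2."""
    return K_B * temperature_k * log(2)

zj = landauer_bound(300.0) * 1e21       # convert joules to zeptojoules
print(f"{zj:.2f} zJ")                   # -> 2.87 zJ, i.e. about 2.9 zeptojoules
```

The bound grows linearly with temperature, which is one reason cooling a computer lowers the fundamental energy cost of forgetting.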
The findings revealed that, as Landauer put it, “Information is physical.” The more information you have, the more work you can extract. Maxwell’s demon can wring work out of a single-temperature gas because it has far more information than the average user.
But it took another half century and the rise of quantum information theory, a field born in pursuit of the quantum computer, for physicists to fully explore the startling implications.
Over the past decade, Popescu and his Bristol colleagues, along with other groups, have argued that energy spreads to cold objects from hot ones because of the way information spreads between particles. According to quantum theory, the physical properties of particles are probabilistic; instead of being representable as 1 or 0, they can have some probability of being 1 and some probability of being 0 at the same time. When particles interact, they can also become entangled, joining together the probability distributions that describe both of their states. A central pillar of quantum theory is that the information — the probabilistic 1s and 0s representing particles’ states — is never lost. (The present state of the universe preserves all information about the past.)
Over time, however, as particles interact and become increasingly entangled, information about their individual states spreads and becomes shuffled and shared among more and more particles. Popescu and his colleagues believe that the arrow of increasing quantum entanglement underlies the expected rise in entropy — the thermodynamic arrow of time. A cup of coffee cools to room temperature, they explain, because as coffee molecules collide with air molecules, the information that encodes their energy leaks out and is shared by the surrounding air.
Understanding entropy as a subjective measure allows the universe as a whole to evolve without ever losing information. Even as parts of the universe, such as coffee, engines and people, experience rising entropy as their quantum information dilutes, the global entropy of the universe stays forever zero.
Renato Renner, a professor at ETH Zurich in Switzerland, described this as a radical shift in perspective. Fifteen years ago, “we thought of entropy as a property of a thermodynamic system,” he said. “Now in information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”
Moreover, the idea that energy has two forms, useless heat and useful work, “made sense for steam engines,” Renner said. “In the new way, there is a whole spectrum in between — energy about which we have partial information.”
Entropy and thermodynamics are “much less of a mystery in this new view,” he said. “That’s why people like the new view better than the old one.”
Thermodynamics From Symmetry
The relationship among information, energy and other “conserved quantities,” which can change hands but never be destroyed, took a new turn in two papers published simultaneously last July in Nature Communications, one by the Bristol team and another by a team that included Jonathan Oppenheim at University College London. Both groups conceived of a hypothetical quantum system that uses information as a sort of currency for trading between the other, more material resources.
Imagine a vast container, or reservoir, of particles that possess both energy and angular momentum (they’re both moving around and spinning). This reservoir is connected to both a weight, which takes energy to lift, and a turning turntable, which takes angular momentum to speed up or slow down. Normally, a single reservoir can’t do any work — this goes back to Carnot’s discovery about the need for hot and cold reservoirs. But the researchers found that a reservoir containing multiple conserved quantities follows different rules. “If you have two different physical quantities that are conserved, like energy and angular momentum,” Popescu said, “as long as you have a bath that contains both of them, then you can trade one for another.”
In the hypothetical weight-reservoir-turntable system, the weight can be lifted as the turntable slows down, or, conversely, lowering the weight causes the turntable to spin faster. The researchers found that the quantum information describing the particles’ energy and spin states can act as a kind of currency that enables trading between the reservoir’s energy and angular momentum supplies. The notion that conserved quantities can be traded for one another in quantum systems is brand new. It may suggest the need for a more complete thermodynamic theory that would describe not only the flow of energy, but also the interplay between all the conserved quantities in the universe.
The fact that energy has dominated the thermodynamics story up to now might be circumstantial rather than profound, Oppenheim said. Carnot and his successors might have developed a thermodynamic theory governing the flow of, say, angular momentum to go with their engine theory, if only there had been a need. “We have energy sources all around us that we want to extract and use,” Oppenheim said. “It happens to be the case that we don’t have big angular momentum heat baths around us. We don’t come across huge gyroscopes.”
Popescu, who won a Dirac Medal last year for his insights in quantum information theory and quantum foundations, said he and his collaborators work by “pushing quantum mechanics into a corner,” gathering at a blackboard and reasoning their way to a new insight after which it’s easy to derive the associated equations. Some realizations are in the process of crystallizing. In one of several phone conversations in March, Popescu discussed a new thought experiment that illustrates a distinction between information and other conserved quantities — and indicates how symmetries in nature might set them apart.
“Suppose that you and I are living on different planets in remote galaxies,” he said, and suppose that he, Popescu, wants to communicate where you should look to find his planet. The only problem is, this is physically impossible: “I can send you the story of Hamlet. But I cannot indicate for you a direction.”
There’s no way to express in a string of pure, directionless 1s and 0s which way to look to find each other’s galaxies because “nature doesn’t provide us with [a reference frame] that is universal,” Popescu said. If it did — if, for instance, tiny arrows were sewn everywhere in the fabric of the universe, indicating its direction of motion — this would violate “rotational invariance,” a symmetry of the universe. Turntables would start turning faster when aligned with the universe’s motion, and angular momentum would not appear to be conserved. The early-20th-century mathematician Emmy Noether showed that every symmetry comes with a conservation law: The rotational symmetry of the universe reflects the preservation of a quantity we call angular momentum. Popescu’s thought experiment suggests that the impossibility of expressing spatial direction with information “may be related to the conservation law,” he said.
The seeming inability to express everything about the universe in terms of information could be relevant to the search for a more fundamental description of nature. In recent years, many theorists have come to believe that spacetime, the bendy fabric of the universe, and the matter and energy within it might be a hologram that arises from a network of entangled quantum information. “One has to be careful,” Oppenheim said, “because information does behave differently than other physical properties, like spacetime.”
Knowing the logical links between the concepts could also help physicists reason their way inside black holes, mysterious spacetime-swallowing objects that are known to have temperatures and entropies, and which somehow radiate information. “One of the most important aspects of the black hole is its thermodynamics,” Popescu said. “But the type of thermodynamics that they discuss in the black holes, because it’s such a complicated subject, is still more of a traditional type. We are developing a completely novel view on thermodynamics.” It’s “inevitable,” he said, “that these new tools that we are developing will then come back and be used in the black hole.”
What to Tell Technologists
Janet Anders, a quantum information scientist at the University of Exeter, takes a technology-driven approach to understanding quantum thermodynamics. “If we go further and further down [in scale], we’re going to hit a region that we don’t have a good theory for,” Anders said. “And the question is, what do we need to know about this region to tell technologists?”
In 2012, Anders conceived of and co-founded a European research network devoted to quantum thermodynamics that now has 300 members. With her colleagues in the network, she hopes to discover the rules governing the quantum transitions of quantum engines and fridges, which could someday drive or cool computers or be used in solar panels, bioengineering and other applications. Already, researchers are getting a better sense of what quantum engines might be capable of. In 2015, Raam Uzdin and colleagues at the Hebrew University of Jerusalem calculated that quantum engines can outpower classical engines. These probabilistic engines still follow Carnot’s efficiency formula in terms of how much work they can derive from energy passing between hot and cold bodies. But they’re sometimes able to extract the work much more quickly, giving them more power. An engine made of a single ion was experimentally demonstrated and reported in Science in April 2016, though it didn’t harness the power-enhancing quantum effect.
Popescu, Oppenheim, Renner and their cohorts are also pursuing more concrete discoveries. In March, Oppenheim and his former student, Lluis Masanes, published a paper deriving the third law of thermodynamics — a historically confusing statement about the impossibility of reaching absolute-zero temperature — using quantum information theory. They showed that the “cooling speed limit” preventing you from reaching absolute zero arises from the limit on how fast information can be pumped out of the particles in a finite-size object. The speed limit might be relevant to the cooling abilities of quantum fridges, like the one reported in a preprint in February. In 2015, Oppenheim and other collaborators showed that the second law of thermodynamics is replaced, on quantum scales, by a panoply of second “laws” — constraints on how the probability distributions defining the physical states of particles evolve, including in quantum engines.
As the field of quantum thermodynamics grows quickly, spawning a range of approaches and findings, some traditional thermodynamicists see a mess. Peter Hänggi, a vocal critic at the University of Augsburg in Germany, thinks the importance of information is being oversold by ex-practitioners of quantum computing, who he says mistake the universe for a giant quantum information processor instead of a physical thing. He accuses quantum information theorists of confusing different kinds of entropy — the thermodynamic and information-theoretic kinds — and using the latter in domains where it doesn’t apply. Maxwell’s demon “gets on my nerves,” Hänggi said. When asked about Oppenheim and company’s second “laws” of thermodynamics, he said, “You see why my blood pressure rises.”
While Hänggi is seen as too old-fashioned in his critique (quantum information theorists do study the connections between thermodynamic and information-theoretic entropy), other thermodynamicists said he makes some valid points. For instance, when quantum information theorists conjure up abstract quantum machines and see if they can get work out of them, they sometimes sidestep the question of how, exactly, you extract work from a quantum system, given that measuring it destroys its simultaneous quantum probabilities. Anders and her collaborators have recently begun addressing this issue with new ideas about quantum work extraction and storage. But the theoretical literature is all over the place.
“Many exciting things have been thrown on the table, a bit in disorder; we need to put them in order,” said Valerio Scarani, a quantum information theorist and thermodynamicist at the National University of Singapore who was part of the team that reported the quantum fridge. “We need a bit of synthesis. We need to understand: your idea fits there; mine fits here. We have eight definitions of work; maybe we should try to figure out which one is correct in which situation, not just come up with a ninth definition of work.”
Oppenheim and Popescu fully agree with Hänggi that there’s a risk of downplaying the universe’s physicality. “I’m wary of information theorists who believe everything is information,” Oppenheim said. “When the steam engine was being developed and thermodynamics was in full swing, there were people positing that the universe was just a big steam engine.” In reality, he said, “it’s much messier than that.” What he likes about quantum thermodynamics is that “you have these two fundamental quantities — energy and quantum information — and these two things meet together. That to me is what makes it such a beautiful theory.”
__________________________
This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use
Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include:
(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
(2) the nature of the copyrighted work;
(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
(4) the effect of the use upon the potential market for or value of the copyrighted work. (Added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546)