In what they call the largest study ever done, researchers found using marijuana while pregnant may increase the risk that a child will develop autism.
“Women who used cannabis during pregnancy were 1.5 times more likely to have a child with autism,” said study author Dr. Darine El-Chaâr, a maternal fetal medicine specialist and clinical investigator at Ottawa Hospital Research Institute in Canada.
“These are not reassuring findings. We highly discourage use of cannabis during pregnancy and breastfeeding,” she said.
Past studies have shown the use of marijuana during pregnancy is linked to low birth weight, impulsivity, hyperactivity, attention issues and other cognitive and behavioral issues in children, according to the US Centers for Disease Control and Prevention. Pregnant women who use marijuana, one study found, have a 2.3 times greater risk of stillbirth.
“Based on that, I’m not too surprised by these findings,” El-Chaâr said. “Fetal brain development occurs throughout all gestational ages.”
…Use of marijuana by pregnant women has been growing in the United States in recent decades. An analysis last year of over 450,000 pregnant American women ages 12 to 44 by the National Institute on Drug Abuse found cannabis use more than doubled between 2002 and 2017. The vast majority of marijuana use was during the first three months of pregnancy, the study found, and was predominantly recreational rather than medical.
Yet the first trimester may be one of the most sensitive times for the developing brain of a fetus, when it’s most susceptible to damage, El-Chaâr said.
Is Marijuana as Safe as We Think? Permitting pot is one thing; promoting its use is another.
Malcolm Gladwell, The New Yorker, January 14, 2019 Issue
A few years ago, the National Academy of Medicine convened a panel of sixteen leading medical experts to analyze the scientific literature on cannabis. The report they prepared, which came out in January of 2017, runs to four hundred and sixty-eight pages. It contains no bombshells or surprises, which perhaps explains why it went largely unnoticed. It simply stated, over and over again, that a drug North Americans have become enthusiastic about remains a mystery.
For example, smoking pot is widely supposed to diminish the nausea associated with chemotherapy. But, the panel pointed out, “there are no good-quality randomized trials investigating this option.” We have evidence for marijuana as a treatment for pain, but “very little is known about the efficacy, dose, routes of administration, or side effects of commonly used and commercially available cannabis products in the United States.” The caveats continue. Is it good for epilepsy? “Insufficient evidence.” Tourette’s syndrome? Limited evidence. A.L.S., Huntington’s, and Parkinson’s? Insufficient evidence. Irritable-bowel syndrome? Insufficient evidence. Dementia and glaucoma? Probably not. Anxiety? Maybe. Depression? Probably not.
Then come Chapters 5 through 13, the heart of the report, which concern marijuana’s potential risks. The haze of uncertainty continues. Does the use of cannabis increase the likelihood of fatal car accidents? Yes. By how much? Unclear. Does it affect motivation and cognition? Hard to say, but probably. Does it affect employment prospects? Probably. Will it impair academic achievement? Limited evidence. This goes on for pages.
We need proper studies, the panel concluded, on the health effects of cannabis on children and teen-agers and pregnant women and breast-feeding mothers and “older populations” and “heavy cannabis users”; in other words, on everyone except the college student who smokes a joint once a month. The panel also called for investigation into “the pharmacokinetic and pharmacodynamic properties of cannabis, modes of delivery, different concentrations, in various populations, including the dose-response relationships of cannabis and THC or other cannabinoids.”
Figuring out the “dose-response relationship” of a new compound is something a pharmaceutical company does from the start of trials in human subjects, as it prepares a new drug application for the F.D.A. Too little of a powerful drug means that it won’t work. Too much means that it might do more harm than good. The amount of active ingredient in a pill and the metabolic path that the ingredient takes after it enters your body—these are things that drugmakers will have painstakingly mapped out before the product comes on the market, with a tractor-trailer full of supporting documentation.
With marijuana, apparently, we’re still waiting for this information. It’s hard to study a substance that until very recently has been almost universally illegal. And the few studies we do have were done mostly in the nineteen-eighties and nineties, when cannabis was not nearly as potent as it is now. Because of recent developments in plant breeding and growing techniques, the typical concentration of THC, the psychoactive ingredient in marijuana, has gone from the low single digits to more than twenty per cent—from a swig of near-beer to a tequila shot.
Are users smoking less, to compensate for the drug’s new potency? Or simply getting more stoned, more quickly? Is high-potency cannabis more of a problem for younger users or for older ones? For some drugs, the dose-response curve is linear: twice the dose creates twice the effect. For other drugs, it’s nonlinear: twice the dose can increase the effect tenfold, or hardly at all. Which is true for cannabis? It also matters, of course, how cannabis is consumed. It can be smoked, vaped, eaten, or applied to the skin. How are absorption patterns affected?
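To make that distinction concrete, here is a purely illustrative Python sketch contrasting a linear dose-response with a nonlinear (Hill-type) one; the function names and parameter values are invented for illustration and are not pharmacological data for cannabis or any other drug.

```python
# Purely illustrative: contrasts a linear dose-response with a nonlinear
# (Hill-type) one. All numbers are invented for illustration and are not
# pharmacological data for THC or any real drug.

def linear_response(dose, slope=1.0):
    """Linear curve: twice the dose gives twice the effect."""
    return slope * dose

def hill_response(dose, emax=100.0, ec50=10.0, n=4):
    """Sigmoidal (Hill) curve: steep rise near ec50, then saturation."""
    return emax * dose**n / (ec50**n + dose**n)

for dose in (5, 10, 20, 40):
    print(f"dose={dose:>2}  linear={linear_response(dose):6.1f}  "
          f"hill={hill_response(dose):6.1f}")

# Doubling the dose from 5 to 10 raises the Hill effect roughly ninefold
# (about 6 -> 50), while doubling it from 20 to 40 barely moves it
# (about 94 -> 100); the linear effect simply doubles each time.
```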
Last May, not long before Canada legalized the recreational use of marijuana, Beau Kilmer, a drug-policy expert with the RAND Corporation, testified before the Canadian Parliament. He warned that the fastest-growing segment of the legal market in Washington State was extracts for inhalation, and that the mean THC concentration for those products was more than sixty-five per cent. “We know little about the health consequences—risks and benefits—of many of the cannabis products likely to be sold in nonmedical markets,” he said. Nor did we know how higher-potency products would affect THC consumption.
When it comes to cannabis, the best-case scenario is that we will muddle through, learning more about its true effects as we go along and adapting as needed—the way, say, the once extraordinarily lethal innovation of the automobile has been gradually tamed in the course of its history. For those curious about the worst-case scenario, Alex Berenson has written a short manifesto, “Tell Your Children: The Truth About Marijuana, Mental Illness, and Violence.”
Berenson begins his book with an account of a conversation he had with his wife, a psychiatrist who specializes in treating mentally ill criminals. They were discussing one of the many grim cases that cross her desk—“the usual horror story, somebody who’d cut up his grandmother or set fire to his apartment.” Then his wife said something like “Of course, he was high, been smoking pot his whole life.”
Of course? I said.
Yeah, they all smoke.
Well . . . other things too, right?
Sometimes. But they all smoke.
Berenson used to be an investigative reporter for the Times, where he covered, among other things, health care and the pharmaceutical industry. Then he left the paper to write a popular series of thrillers. At the time of his conversation with his wife, he had the typical layman’s view of cannabis, which is that it is largely benign. His wife’s remark alarmed him, and he set out to educate himself. Berenson is constrained by the same problem the National Academy of Medicine faced—that, when it comes to marijuana, we really don’t know very much. But he has a reporter’s tenacity, a novelist’s imagination, and an outsider’s knack for asking intemperate questions. The result is disturbing.
The first of Berenson’s questions concerns what has long been the most worrisome point about cannabis: its association with mental illness. Many people with serious psychiatric illness smoke lots of pot. The marijuana lobby typically responds to this fact by saying that pot-smoking is a response to mental illness, not the cause of it—that people with psychiatric issues use marijuana to self-medicate. That is only partly true. In some cases, heavy cannabis use does seem to cause mental illness. As the National Academy panel declared, in one of its few unequivocal conclusions, “Cannabis use is likely to increase the risk of developing schizophrenia and other psychoses; the higher the use, the greater the risk.”
Berenson thinks that we are far too sanguine about this link. He wonders how large the risk is, and what might be behind it. In one of the most fascinating sections of “Tell Your Children,” he sits down with Erik Messamore, a psychiatrist who specializes in neuropharmacology and in the treatment of schizophrenia.
Messamore reports that, following the recent rise in marijuana use in the U.S. (it has almost doubled in the past two decades, not necessarily as the result of legal reforms), he has begun to see a new kind of patient: older, and not from the marginalized communities that his patients usually come from. These are otherwise stable middle-class professionals. Berenson writes, “A surprising number of them seemed to have used only cannabis and no other drugs before their breaks. The disease they’d developed looked like schizophrenia, but it had developed later—and their prognosis seemed to be worse. Their delusions and paranoia hardly responded to antipsychotics.”
Messamore theorizes that THC may interfere with the brain’s anti-inflammatory mechanisms, resulting in damage to nerve cells and blood vessels. Is this the reason, Berenson wonders, for the rising incidence of schizophrenia in the developed world, where cannabis use has also increased?
In the northern parts of Finland, incidence of the disease has nearly doubled since 1993. In Denmark, cases have risen twenty-five per cent since 2000. In the United States, hospital emergency rooms have seen a fifty-per-cent increase in schizophrenia admissions since 2006. If you include cases where schizophrenia was a secondary diagnosis, annual admissions in the past decade have increased from 1.26 million to 2.1 million.
Berenson’s second question derives from the first. The delusions and paranoia that often accompany psychoses can sometimes trigger violent behavior. If cannabis is implicated in a rise in psychoses, should we expect the increased use of marijuana to be accompanied by a rise in violent crime, as Berenson’s wife suggested?
Once again, there is no definitive answer, so Berenson has collected bits and pieces of evidence. For example, in a 2013 paper in the Journal of Interpersonal Violence, researchers looked at the results of a survey of more than twelve thousand American high-school students. The authors assumed that alcohol use among students would be a predictor of violent behavior, and that marijuana use would predict the opposite. In fact, those who used only marijuana were three times more likely to be physically aggressive than abstainers were; those who used only alcohol were 2.7 times more likely to be aggressive. Observational studies like these don’t establish causation. But they invite the sort of research that could.
Berenson looks, too, at the early results from the state of Washington, which, in 2014, became the first U.S. jurisdiction to legalize recreational marijuana. Between 2013 and 2017, the state’s murder and aggravated-assault rates rose forty per cent—twice the national homicide increase and four times the national aggravated-assault increase. We don’t know that an increase in cannabis use was responsible for that surge in violence. Berenson, though, finds it strange that, at a time when Washington may have exposed its population to higher levels of what is widely assumed to be a calming substance, its citizens began turning on one another with increased aggression.
His third question is whether cannabis serves as a gateway drug. There are two possibilities. The first is that marijuana activates certain behavioral and neurological pathways that ease the onset of more serious addictions. The second possibility is that marijuana offers a safer alternative to other drugs: that if you start smoking pot to deal with chronic pain you never graduate to opioids.
Which is it? This is a very hard question to answer. We’re only a decade or so into the widespread recreational use of high-potency marijuana. Maybe cannabis opens the door to other drugs, but only after prolonged use. Or maybe the low-potency marijuana of years past wasn’t a gateway, but today’s high-potency marijuana is. Methodologically, Berenson points out, the issue is complicated by the fact that the first wave of marijuana legalization took place on the West Coast, while the first serious wave of opioid addiction took place in the middle of the country. So, if all you do is eyeball the numbers, it looks as if opioid overdoses are lowest in cannabis states and highest in non-cannabis states.
Not surprisingly, the data we have are messy. Berenson, in his role as devil’s advocate, emphasizes the research that sees cannabis as opening the door to opioid use. For example, two studies of identical twins—in the Netherlands and in Australia—show that, in cases where one twin used cannabis before the age of seventeen and the other didn’t, the cannabis user was several times more likely to develop an addiction to opioids. Berenson also enlists a statistician at N.Y.U. to help him sort through state-level overdose data, and what he finds is not encouraging: “States where more people used cannabis tended to have more overdoses.”
The National Academy panel is more judicious. Its conclusion is that we simply don’t know enough, because there haven’t been any “systematic” studies. But the panel’s uncertainty is scarcely more reassuring than Berenson’s alarmism. Seventy-two thousand Americans died in 2017 of drug overdoses. Should you embark on a pro-cannabis crusade without knowing whether it will add to or subtract from that number?
Drug policy is always clearest at the fringes. Illegal opioids are at one end. They are dangerous. Manufacturers and distributors belong in prison, and users belong in drug-treatment programs. The cannabis industry would have us believe that its product, like coffee, belongs at the other end of the continuum.
“Flow Kana partners with independent multi-generational farmers who cultivate under full sun, sustainably, and in small batches,” the promotional literature for one California cannabis brand reads. “Using only organic methods, these stewards of the land have spent their lives balancing a unique and harmonious relationship between the farm, the genetics and the terroir.”
But cannabis is not coffee. It’s somewhere in the middle. The experience of most users is relatively benign and predictable; the experience of a few, at the margins, is not. Products or behaviors that have that kind of muddled risk profile are confusing, because it is very difficult for those in the benign middle to appreciate the experiences of those at the statistical tails.
Low-frequency risks also take longer and are far harder to quantify, and the lesson of “Tell Your Children” and the National Academy report is that we aren’t yet in a position to do so. For the moment, cannabis probably belongs in the category of substances that society permits but simultaneously discourages. Cigarettes are heavily taxed, and smoking is prohibited in most workplaces and public spaces. Alcohol can’t be sold without a license and is kept out of the hands of children. Prescription drugs have rules about dosages, labels that describe their risks, and policies that govern their availability. The advice that seasoned potheads sometimes give new users—“start low and go slow”—is probably good advice for society as a whole, at least until we better understand what we are dealing with.
Late last year, the commissioner of the Food and Drug Administration, Scott Gottlieb, announced a federal crackdown on e-cigarettes. He had seen the data on soaring use among teen-agers, and, he said, “it shocked my conscience.” He announced that the F.D.A. would ban many kinds of flavored e-cigarettes, which are especially popular with teens, and would restrict the retail outlets where e-cigarettes were available.
In the dozen years since e-cigarettes were introduced into the marketplace, they have attracted an enormous amount of attention. There are scores of studies and papers on the subject in the medical and legal literature, grappling with the questions raised by the new technology. Vaping is clearly popular among kids. Is it a gateway to traditional tobacco use? Some public-health experts worry that we’re grooming a younger generation for a lifetime of dangerous addiction. Yet other people see e-cigarettes as a much safer alternative for adult smokers looking to satisfy their nicotine addiction. That’s the British perspective.
Last year, a Parliamentary committee recommended cutting taxes on e-cigarettes and allowing vaping in areas where it had previously been banned. Since e-cigarettes are as much as ninety-five per cent less harmful than regular cigarettes, the committee argued, why not promote them? Gottlieb said that he was splitting the difference between the two positions—giving adults “opportunities to transition to non-combustible products,” while upholding the F.D.A.’s “solemn mandate to make nicotine products less accessible and less appealing to children.” He was immediately criticized.
“Somehow, we have completely lost all sense of public-health perspective,” Michael Siegel, a public-health researcher at Boston University, wrote after the F.D.A. announcement:
Every argument that the F.D.A. is making in justifying a ban on the sale of electronic cigarettes in convenience stores and gas stations applies even more strongly for real tobacco cigarettes: you know, the ones that kill hundreds of thousands of Americans each year. Something is terribly wrong with our sense of perspective when we take the e-cigarettes off the shelf but allow the old-fashioned ones to remain.
Among members of the public-health community, it is impossible to spend five minutes on the e-cigarette question without getting into an argument. And this is nicotine they are arguing about, a drug that has been exhaustively studied by generations of scientists. We don’t worry that e-cigarettes increase the number of fatal car accidents, diminish motivation and cognition, or impair academic achievement. The drugs through the gateway that we worry about with e-cigarettes are Marlboros, not opioids. There are no enormous scientific question marks over nicotine’s dosing and bio-availability. Yet we still proceed cautiously and carefully with nicotine, because it is a powerful drug, and when powerful drugs are consumed by lots of people in new and untested ways we have an obligation to try to figure out what will happen.
A week after Gottlieb announced his crackdown on e-cigarettes, on the ground that they are too enticing to children, Siegel visited the first recreational-marijuana facility in Massachusetts. Here is what he found on the menu, each offering laced with large amounts of a drug, THC, that no one knows much about:
Prenatal Exposure to Cannabis Affects the Developing Brain
Children born to moms who smoked or ingested marijuana during pregnancy suffer higher rates of depression, hyperactivity, and inattention.
By Andrew Scheyer, The Scientist, 1/1/2019
Excerpt
A Lifetime of Consequences?
Large-scale, longitudinal studies of humans whose mothers smoked marijuana once or more per week and experimental work on rodents exposed to cannabinoids in utero have yielded remarkably consistent intellectual and behavioral correlates of fetal exposure to this drug. Some exposed individuals exhibit deficits in memory, cognition, and measures of sociability.
These aberrations appear during infancy and persist through adulthood and are tied to changes in the expression of multiple gene families, as well as more global measures of brain responsiveness and plasticity. Researchers currently consider these perturbations to be mediated by changes to the endocannabinoid system caused by the active compounds in cannabis.
How Cannabis Affects the Function of Neurons
The human body contains two primary cannabinoid receptors: CB1R and CB2R. CB1R is present in the human fetal cerebrum by the first weeks of the second trimester, and is the brain’s most abundant G-protein coupled receptor. Located at the presynaptic terminal of neurons, CB1R is activated by endocannabinoids, which are synthesized from fatty acids in the postsynaptic neuron.
The receptors’ activation modulates the presynaptic release of neurotransmitters, thereby affecting synaptic function and a range of downstream signaling agents, from glutamate, dopamine, and serotonin to neuropeptides and hormones. The function of CB2Rs in the brain is still poorly understood, but there is some evidence that they exist both pre- and post-synaptically, as well as on glia and astrocytes. One recent paper suggests that, like CB1Rs, CB2Rs regulate neurotransmitter release (Synapse, 72:e22061, 2018).
When people smoke or ingest marijuana, exogenous cannabinoids enter the nervous system and activate these receptors. Stimulation by these high-affinity agonists results in stronger binding and greater activation of CB1R, triggering the process of receptor downregulation. Specifically, the greater binding causes the receptors to be internalized and degraded, such that they are no longer as available for cannabinoid signaling, and can thereby alter neuronal firing and other downstream events.
As the drug becomes more popular, concerns have been raised that its use can lead to psychotic disorders. Here’s what scientists know for sure, and what they don’t.
By Benedict Carey, The New York Times, 1/17/2019
Nearly a century after the film “Reefer Madness” alarmed the nation, some policymakers and doctors are again becoming concerned about the dangers of marijuana, although the reefers are long gone.
Experts now distinguish between the “new cannabis” — legal, highly potent, available in tabs, edibles and vapes — and the old version, a far milder weed passed around in joints. Levels of T.H.C., the chemical that produces marijuana’s high, have been rising for at least three decades, and it’s now possible in some states to buy vape cartridges containing little but the active ingredient.
The concern is focused largely on the link between heavy usage and psychosis in young people. Doctors first suspected a link some 70 years ago, and the evidence has only accumulated since then. In a forthcoming book, “Tell Your Children,” Alex Berenson, a former Times reporter, argues that legalization is putting a generation at higher risk of schizophrenia and other psychotic syndromes. Critics, including leading researchers, have called the argument overblown, and unfaithful to the science.
Can heavy use cause schizophrenia or other syndromes?
That is the big question, and so far the evidence is not strong enough to answer one way or the other. Even top scientists who specialize in marijuana research are divided, drawing opposite conclusions from the same data.
“I’ve been doing this research for 25 years, and it’s polarizing even among academics,” said Margaret Haney, a professor of neurobiology at Columbia University Medical Center. “This is what the marijuana field is like.”
The debate centers on the distinction between correlation and causation. People with psychotic problems often use cannabis regularly; this is a solid correlation, backed by numerous studies. But it is unclear which came first, the cannabis habit or the psychoses. Children who later develop schizophrenia often seem to retreat into their own world, stalked periodically by bizarre fears and fantasies well outside the range of usual childhood imagination, and well before they are exposed to cannabis. Those who go on to become regular marijuana users often use other substances as well, including alcohol and cigarettes, making it more difficult for researchers to untangle causation.
Consider cigarettes, the least mind-altering of these substances. In a 2015 study, a team led by Dr. Kenneth S. Kendler of Virginia Commonwealth University analyzed medical data on nearly two million people in Sweden. The data followed the individuals over time, from young adulthood, when most schizophrenia diagnoses occur, to middle age. Smoking was a predictor for later development of the disorder, and in what doctors call a dose-response relationship: the more a person smoked, the higher the risk.
Yet nicotine attracts nowhere near the concern that cannabis does, in part because the two drugs are so different in their everyday effects: mildly stimulated versus stoned. Indeed, some scientists have studied nicotine as a partial treatment for schizophrenia, to blunt the disorder’s effects on thinking and memory.
Is it biologically plausible that cannabis could cause a psychotic disorder?
Yes. Brain scientists know very little about the underlying biology of psychotic conditions, other than that hundreds of common gene variants are likely involved. Schizophrenia, for instance, is not a uniform disorder but an umbrella term for an array of unexplained problems involving recurrent psychosis, and other common symptoms.
Even so, there is circumstantial evidence for a biological mechanism. Psychotic disorders tend to emerge in late adolescence or early adulthood, during or after a period of rapid brain development. In the teenage years, the brain strips away unneeded or redundant connections between brain cells, in a process called synaptic pruning. This editing is concentrated in the prefrontal cortex, the region behind the forehead where thinking and planning occur — and the region that is perturbed in psychotic conditions.
The region is rich with so-called CB1 receptors, which are involved in the pruning, and are engaged by cannabis use. And alterations to the pruning process may well increase schizophrenia risk, according to recent research at the Broad Institute of M.I.T. and Harvard. In a 2016 analysis, scientists there found that people with the disorder often have a gene variant that appears to accelerate the pruning process.
What does this mean for me?
Experts may debate whether cannabis use can lead to psychotic disorders, but they mostly agree on how to minimize one’s risk.
Psychotic conditions tend to run in families, which suggests there is an inherited genetic vulnerability. Indeed, according to some studies, people prone to or at heightened risk of psychosis seem to experience the effects of cannabis differently than peers without such a history. The users experience a more vivid high, but they also are more likely to experience psychosis-like effects such as paranoia.
The evidence so far indicates that one’s familial risk for psychotic disorders outweighs any added effect of cannabis use. In a 2014 study, a team led by Ashley C. Proal and Dr. Lynn E. DeLisi of Harvard Medical School recruited cannabis users with and without a family history of schizophrenia, as well as non-users with and without such a history. The researchers made sure the cannabis users did not use other drugs in addition, a factor that muddied earlier studies. The result: there was a heightened schizophrenia risk among people with a family history, regardless of cannabis use.
“My study clearly shows that cannabis does not cause schizophrenia by itself,” said Dr. DeLisi. “Rather, a genetic predisposition is necessary. It is highly likely, based on the results of this study and others, that cannabis use during adolescence through to age 25, when the brain is maturing and at its peak of growth in a genetically vulnerable individual, can initiate the onset of schizophrenia.”
Because marijuana has been illegal for so long, research that could settle the question has been sorely lacking, although that has begun to change. The National Institutes of Health have launched a $300 million project that will track thousands of children from the age of 9 or 10 through adolescence, and might help clarify causation.
For the near future, expert opinions likely will be mixed. “Usually it is the research types who are doing ‘the sky is falling’ bit, but here it is switched,” said Dr. Jay Giedd, a professor of psychiatry at the University of California, San Diego. “The researchers are wary of overselling the dangers, as was clearly done in the past. However, clinicians overwhelmingly endorse seeing many more adolescents with ‘paranoia’” of some kind.
In short: Regularly using the new, high-potency cannabis may indeed be a risk for young people who are related to someone with a psychotic condition. On that warning, at least, most experts seem to agree.
Daily Marijuana Use And Highly Potent Weed Linked To Psychosis
NPR, 3/19/2019, by Rhitu Chatterjee
Several past studies have found that more frequent use of pot is associated with a higher risk of psychosis — that is, when someone loses touch with reality. Now a new study published Tuesday in The Lancet Psychiatry shows that consuming pot on a daily basis and especially using high-potency cannabis increases the odds of having a psychotic episode later.
“This is more evidence that the link between cannabis and psychosis matters,” says Krista M. Lisdahl, a clinical neuropsychologist at the University of Wisconsin, Milwaukee, who wasn’t involved in the study.
The study authors consider high-potency cannabis to be products with more than 10 percent tetrahydrocannabinol, or THC, the compound responsible for the drug’s psychoactive effects. The fact that consuming high-THC cannabis products carries a greater risk is concerning, Lisdahl says, because these products are more common in the market now.
The study also shows that three European cities — London, Paris and Amsterdam — where high-potency weed is most commonly available actually have higher rates of new cases of psychosis than the other cities in the study.
The researchers identified 901 people aged 18 to 64 who were diagnosed with their first episode of psychosis between May 2010 and April 2015 at mental health facilities in 11 sites, including London, Paris, Amsterdam, Barcelona and other cities across Europe, as well as one site in Brazil.
The researchers then asked these individuals and a control group of 1,200-plus other healthy people about their habits, including their use of weed. “We asked people if they used cannabis, when did they start using it and what kind of cannabis,” explains study author Marta Di Forti, a psychiatrist and clinician scientist at King’s College London.
People reported the names of weed strains they used, such as skunk in the U.K. or the Dutch Nederwiet, which allowed the researchers to identify the THC content in each product through data gathered by the European Monitoring Center for Drugs and Drug Addiction and national data from the different countries.
The study found that those who used pot daily were three times more likely to have a psychotic episode compared with someone who never used the drug.
Those who started using cannabis at 15 or younger had a slightly more elevated risk than those who started using in later years.
Use of high-potency weed almost doubled the odds of having psychosis compared with someone who had never smoked weed, explains Di Forti.
And for those who used high-potency pot on a daily basis, the risk of psychosis was even greater — four times greater than those who had never used.
The easy availability of high-THC weed is a recent phenomenon, she notes. “Almost 20 years ago, there wasn’t much high-potency cannabis available [in the market].”
One recent study showed that high-potency cannabis is increasingly dominating markets. It found that the average potency of weed in Europe and the U.S. in 2017 was 17.1 percent, up from 8.9 percent in 2008.
And some products can be even more potent. For example, in the Netherlands, the THC content of one product that’s gained popularity, locally produced Dutch resin Nederhasj, can be as high as 67 percent.
“What this paper has done that’s really nice is they look at rates of psychosis and cannabis use in lots of different places where underlying rates of psychosis are different,” says Suzanne Gage, a psychologist and epidemiologist at the University of Liverpool, who wrote a commentary linked to the study in The Lancet Psychiatry.
This allowed the researchers to compare incidence of psychosis with the availability and use of high-THC cannabis in the different cities, she says.
The study found that the three European cities — London, Paris and Amsterdam — had the highest rates of new diagnoses of psychosis — 45.7 per 100,000 person-years in London, 46.1 in Paris and 37.9 in Amsterdam.
These are also cities where high-potency weed is most easily available and commonly used.
In other European cities in Spain, Italy and France, on the other hand, the most popular cannabis products on the market contain less than 10 percent THC. These cities also have lower rates of new psychosis diagnoses, according to the study.
“One of the things that’s really novel is that they could show that variation of use and potency of cannabis was related to rates of first-episode psychosis,” Lisdahl says.
One critique of the theory that weed contributes to psychosis risk has been that while more people are using weed worldwide, there hasn’t been a corresponding rise in rates of psychosis, Gage explains. But the new study shows that cities with more easily available high-THC weed do have a higher rate of new diagnoses of psychosis.
“That’s a really interesting finding, and that’s not something anyone has done before,” she adds.
However, the study doesn’t prove causality, cautions Dr. Diana Martinez, a psychiatrist and addiction researcher at Columbia University. “You can’t say that cannabis causes psychosis,” she says. “It’s simply not supported by the data.”
Lisdahl agrees. In order to show causality, one would have to follow people over time — before they started using weed to years later when they have their psychotic episodes, she says. “You need twins in the studies, you need genetic information,” among all other kinds of data, she says.
Psychotic disorders such as schizophrenia and bipolar disorder are complicated, “multifaceted disorders,” Gage notes.
“In all psychotic disorders, there is this multiple hit hypothesis,” Martinez says. Many factors influence whether and how these disorders manifest.
Genetics is known to play a major role, as are a host of environmental factors. “Children who have risk of schizophrenia but grow up in stable homes … they may not go on to develop schizophrenia,” she adds.
The Adolescent Brain Cognitive Development study, which is funded by the U.S. National Institutes of Health, is attempting to tease out the various influences, Lisdahl says. “The NIH has now invested in that question.”
In the meantime, the new findings should be of interest to anyone using cannabis, says study author Di Forti. “There are people across the world who use cannabis for a variety of reasons,” she says. “Some of them recreationally, some of them for medicinal purposes.” They should be aware that using high-potency cannabis comes with a risk, she says.
“They need to know what to look for and ask for help, if they come across characteristics of a psychotic disorder,” she adds.
— – – — – – – – –
In response to “The contribution of cannabis use to variation in the incidence of psychotic disorder across Europe (EU-GEI): a multicentre case-control study,” Suzanne H. Gage, in “Cannabis and psychosis: triangulating the evidence,” writes:
…It is perfectly possible that the association between cannabis and psychosis is bidirectional, as suggested by other work using genetic variables as proxies for the exposures of interest in a Mendelian randomisation design. Di Forti and colleagues’ study adds a new and novel study design to the evidence available, which consistently indicates that for some individuals there is an increased risk of psychosis resulting from daily use of high potency cannabis. Given the changing legal status of cannabis across the world, and the associated potential for an increase in use, the next priority is to identify which individuals are at risk from daily potent cannabis use, and to develop educational strategies and interventions to mitigate this.
Samuel T. Wilkinson, Rajiv Radhakrishnan, and Deepak Cyril D’Souza write:
The link between cannabis use and psychosis comprises three distinct relationships: acute psychosis associated with cannabis intoxication; acute psychosis that lasts beyond the period of acute intoxication; and persistent psychosis not time-locked to exposure. Experimental studies reveal that cannabis, delta-9-tetrahydrocannabinol (THC) and synthetic cannabinoids reliably produce transient positive, negative, and cognitive symptoms in healthy volunteers. Case studies indicate that cannabinoids can induce acute psychosis that lasts beyond the period of acute intoxication but resolves within a month. Exposure to cannabis in adolescence is associated with a risk for later psychotic disorder in adulthood; this association is consistent, temporally related, shows a dose response, and is biologically plausible. However, cannabis is neither necessary nor sufficient to cause a persistent psychotic disorder. More likely, it is a component cause that interacts with other factors to result in psychosis. The link between cannabis and psychosis is moderated by age at onset of cannabis use, childhood abuse, and genetic vulnerability. While more research is needed to better characterize the relationship between cannabinoid use and the onset and persistence of psychosis, clinicians should be mindful of the potential risk of psychosis, especially in vulnerable populations, including adolescents and those with a psychosis diathesis.
PreK–12 Standard 10: Tobacco, Alcohol, & Substance Use/Abuse Prevention
Students will acquire the knowledge and skills to be competent in making health-enhancing decisions regarding the use of medications and avoidance of substances, and in communicating about substance use/abuse prevention for healthier homes, schools, and communities.
Through the study of Effects on the Body students will
10.5 Describe addictions to alcohol, tobacco, and other drugs, and methods for intervention, treatment, and cessation
10.6 List the potential outcomes of prevalent early and late adolescent risk behaviors related to tobacco, alcohol, and other drugs, including the general pattern and continuum of risk behaviors involving substances that young people might follow
Students generate ideas of what the term “gateway” means in relation to substance abuse and map out a series of behaviors that begin with such “gateway” behaviors
Through the study of Healthy Decisions students will
10.7 Identify internal factors (such as character) and external factors (such as family, peers, community, faith-based affiliation, and media) that influence the decision of young people to use or not to use drugs
10.8 Demonstrate ways of refusing and of sharing preventive health information about tobacco, alcohol, and other drugs with peers. Students research and give an oral report on the effects of second-hand smoke.
By the end of grade 12
Through the study of Effects on the Body students will
10.9 Describe the relationship between multi-drug use and the increased negative effects on the body, including the stages of addiction, and overdose. Students research the increased chances of death from alcohol poisoning when alcohol is combined with marijuana.
10.10 Describe the harmful effects of tobacco, alcohol, and other substances on pregnant women and their unborn children.
+++++++++++++++++++++++++++++++++++++++++++++++++
This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (Added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546)
The same codes needed to thwart errors in quantum computers may also give the fabric of space-time its intrinsic robustness.
Natalie Wolchover, Quanta magazine
In 1994, a mathematician at AT&T Research named Peter Shor brought instant fame to “quantum computers” when he discovered that these hypothetical devices could quickly factor large numbers — and thus break much of modern cryptography. But a fundamental problem stood in the way of actually building quantum computers: the innate frailty of their physical components.
Unlike binary bits of information in ordinary computers, “qubits” consist of quantum particles that have some probability of being in each of two states, designated |0⟩ and |1⟩, at the same time. When qubits interact, their possible states become interdependent, each one’s chances of |0⟩ and |1⟩ hinging on those of the other. The contingent possibilities proliferate as the qubits become more and more “entangled” with each operation. Sustaining and manipulating this exponentially growing number of simultaneous possibilities are what makes quantum computers so theoretically powerful.
But qubits are maddeningly error-prone. The feeblest magnetic field or stray microwave pulse causes them to undergo “bit-flips” that switch their chances of being |0⟩ and |1⟩ relative to the other qubits, or “phase-flips” that invert the mathematical relationship between their two states. For quantum computers to work, scientists must find schemes for protecting information even when individual qubits get corrupted. What’s more, these schemes must detect and correct errors without directly measuring the qubits, since measurements collapse qubits’ coexisting possibilities into definite realities: plain old 0s or 1s that can’t sustain quantum computations.
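As a concrete illustration of those two error types, here is a minimal Python/NumPy sketch of my own (not code from the researchers discussed here), assuming only the standard vector representation of a qubit: a single qubit a|0⟩ + b|1⟩ is a two-component vector, a bit-flip is the Pauli X matrix, and a phase-flip is the Pauli Z matrix.

```python
# A minimal sketch (assumed NumPy representation, not from the article):
# a single qubit a|0> + b|1> is a 2-component complex vector, and the
# bit-flip and phase-flip errors described above are the Pauli X and Z
# matrices acting on it.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit-flip: swaps the |0> and |1> amplitudes
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # phase-flip: inverts the sign of the |1> amplitude

psi = 0.6 * ket0 + 0.8 * ket1                   # an arbitrary normalized single-qubit state

print("original   :", psi)        # [0.6, 0.8]
print("bit-flip   :", X @ psi)    # [0.8, 0.6]   -- amplitudes swapped
print("phase-flip :", Z @ psi)    # [0.6, -0.8]  -- relative sign inverted
```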
In 1995, Shor followed his factoring algorithm with another stunner: proof that “quantum error-correcting codes” exist. The computer scientists Dorit Aharonov and Michael Ben-Or (and other researchers working independently) proved a year later that these codes could theoretically push error rates close to zero. “This was the central discovery in the ’90s that convinced people that scalable quantum computing should be possible at all,” said Scott Aaronson, a leading quantum computer scientist at the University of Texas — “that it is merely a staggering problem of engineering.”
Now, even as small quantum computers are materializing in labs around the world, useful ones that will outclass ordinary computers remain years or decades away. Far more efficient quantum error-correcting codes are needed to cope with the daunting error rates of real qubits. The effort to design better codes is “one of the major thrusts of the field,” Aaronson said, along with improving the hardware.
But in the dogged pursuit of these codes over the past quarter-century, a funny thing happened in 2014, when physicists found evidence of a deep connection between quantum error correction and the nature of space, time and gravity. In Albert Einstein’s general theory of relativity, gravity is defined as the fabric of space and time — or “space-time” — bending around massive objects. (A ball tossed into the air travels along a straight line through space-time, which itself bends back toward Earth.) But powerful as Einstein’s theory is, physicists believe gravity must have a deeper, quantum origin from which the semblance of a space-time fabric somehow emerges.
That year — 2014 — three young quantum gravity researchers came to an astonishing realization. They were working in physicists’ theoretical playground of choice: a toy universe called “anti-de Sitter space” that works like a hologram. The bendy fabric of space-time in the interior of the universe is a projection that emerges from entangled quantum particles living on its outer boundary. Ahmed Almheiri, Xi Dong and Daniel Harlow did calculations suggesting that this holographic “emergence” of space-time works just like a quantum error-correcting code. They conjectured in the Journal of High Energy Physics that space-time itself is a code — in anti-de Sitter (AdS) universes, at least. The paper has triggered a wave of activity in the quantum gravity community, and new quantum error-correcting codes have been discovered that capture more properties of space-time.
John Preskill, a theoretical physicist at the California Institute of Technology, says quantum error correction explains how space-time achieves its “intrinsic robustness,” despite being woven out of fragile quantum stuff. “We’re not walking on eggshells to make sure we don’t make the geometry fall apart,” Preskill said. “I think this connection with quantum error correction is the deepest explanation we have for why that’s the case.”
The language of quantum error correction is also starting to enable researchers to probe the mysteries of black holes: spherical regions in which space-time curves so steeply inward toward the center that not even light can escape. “Everything traces back to black holes,” said Almheiri, who is now at the Institute for Advanced Study in Princeton, New Jersey. These paradox-ridden places are where gravity reaches its zenith and Einstein’s general relativity theory fails. “There are some indications that if you understand which code space-time implements,” he said, “it might help us in understanding the black hole interior.”
As a bonus, researchers hope holographic space-time might also point the way to scalable quantum computing, fulfilling the long-ago vision of Shor and others. “Space-time is a lot smarter than us,” Almheiri said. “The kind of quantum error-correcting code which is implemented in these constructions is a very efficient code.”
So, how do quantum error-correcting codes work? The trick to protecting information in jittery qubits is to store it not in individual qubits, but in patterns of entanglement among many.
As a simple example, consider the three-qubit code: It uses three “physical” qubits to protect a single “logical” qubit of information against bit-flips. (The code isn’t really useful for quantum error correction because it can’t protect against phase-flips, but it’s nonetheless instructive.) The |0⟩ state of the logical qubit corresponds to all three physical qubits being in their |0⟩ states, and the |1⟩ state corresponds to all three being |1⟩’s. The system is in a “superposition” of these states, designated |000⟩ + |111⟩. But say one of the qubits bit-flips. How do we detect and correct the error without directly measuring any of the qubits?
The qubits can be fed through two gates in a quantum circuit. One gate checks the “parity” of the first and second physical qubit — whether they’re the same or different — and the other gate checks the parity of the first and third. When there’s no error (meaning the qubits are in the state |000⟩ + |111⟩), the parity-measuring gates determine that both the first and second and the first and third qubits are always the same. However, if the first qubit accidentally bit-flips, producing the state |100⟩ + |011⟩, the gates detect a difference in both of the pairs. For a bit-flip of the second qubit, yielding |010⟩ + |101⟩, the parity-measuring gates detect that the first and second qubits are different and first and third are the same, and if the third qubit flips, the gates indicate: same, different. These unique outcomes reveal which corrective surgery, if any, needs to be performed — an operation that flips back the first, second or third physical qubit without collapsing the logical qubit. “Quantum error correction, to me, it’s like magic,” Almheiri said.
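The logic of that parity-check circuit is simple enough to simulate classically. The sketch below is an illustration of the general scheme described above, not code from any of the researchers quoted: it encodes a logical qubit as a|000⟩ + b|111⟩, applies a bit-flip to one physical qubit, reads the two parity checks as eigenvalues of the stabilizers Z⊗Z⊗I and Z⊗I⊗Z, and uses the resulting syndrome to undo the error.

```python
# A minimal simulation of the three-qubit bit-flip code described above.
# The logical state a|000> + b|111> lives in an 8-dimensional vector; the
# two parity checks are the stabilizers Z(x)Z(x)I and Z(x)I(x)Z, whose
# +/-1 outcomes (the "syndrome") identify which physical qubit, if any,
# was bit-flipped.
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)   # bit-flip
Z = np.array([[1, 0], [0, -1]], dtype=float)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

# Encode an arbitrary logical qubit a|0>_L + b|1>_L as a|000> + b|111>.
a, b = 0.6, 0.8
logical = np.zeros(8)
logical[0b000] = a
logical[0b111] = b

# Parity-check (stabilizer) operators.
ZZI = kron3(Z, Z, I)   # compares qubits 1 and 2
ZIZ = kron3(Z, I, Z)   # compares qubits 1 and 3

# Single-qubit bit-flip errors, plus "no error".
errors = {
    "none":    kron3(I, I, I),
    "qubit 1": kron3(X, I, I),
    "qubit 2": kron3(I, X, I),
    "qubit 3": kron3(I, I, X),
}

# The syndrome (pair of parities) tells us which correction to apply:
# applying the same bit-flip again undoes it.
recovery = {
    (+1, +1): errors["none"],
    (-1, -1): errors["qubit 1"],
    (-1, +1): errors["qubit 2"],
    (+1, -1): errors["qubit 3"],
}

for name, E in errors.items():
    corrupted = E @ logical
    # a|000>+b|111> and its bit-flipped versions are eigenstates of ZZI
    # and ZIZ, so the parity outcomes are deterministic expectation values.
    syndrome = (int(np.sign(corrupted @ ZZI @ corrupted)),
                int(np.sign(corrupted @ ZIZ @ corrupted)))
    recovered = recovery[syndrome] @ corrupted
    print(f"error on {name:7s} syndrome={syndrome}  "
          f"recovered ok: {np.allclose(recovered, logical)}")
```

Note that the syndrome never reveals the amplitudes a and b themselves, which is why the correction can be made without collapsing the logical qubit.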
The best error-correcting codes can typically recover all of the encoded information from slightly more than half of your physical qubits, even if the rest are corrupted. This fact is what hinted to Almheiri, Dong and Harlow in 2014 that quantum error correction might be related to the way anti-de Sitter space-time arises from quantum entanglement.
It’s important to note that AdS space is different from the space-time geometry of our “de Sitter” universe. Our universe is infused with positive vacuum energy that causes it to expand without bound, while anti-de Sitter space has negative vacuum energy, which gives it the hyperbolic geometry of one of M.C. Escher’s Circle Limit designs. Escher’s tessellated creatures become smaller and smaller moving outward from the circle’s center, eventually vanishing at the perimeter; similarly, the spatial dimension radiating away from the center of AdS space gradually shrinks and eventually disappears, establishing the universe’s outer boundary. AdS space gained popularity among quantum gravity theorists in 1997 after the renowned physicist Juan Maldacena discovered that the bendy space-time fabric in its interior is “holographically dual” to a quantum theory of particles living on the lower-dimensional, gravity-free boundary.
In exploring how the duality works, as hundreds of physicists have in the past two decades, Almheiri and colleagues noticed that any point in the interior of AdS space could be constructed from slightly more than half of the boundary — just as in an optimal quantum error-correcting code.
In their paper conjecturing that holographic space-time and quantum error correction are one and the same, they described how even a simple code could be understood as a 2D hologram. It consists of three “qutrits” — particles that exist in any of three states — sitting at equidistant points around a circle. The entangled trio of qutrits encode one logical qutrit, corresponding to a single space-time point in the circle’s center. The code protects the point against the erasure of any of the three qutrits.
Of course, one point is not much of a universe. In 2015, Harlow, Preskill, Fernando Pastawski and Beni Yoshida found another holographic code, nicknamed the HaPPY code, that captures more properties of AdS space. The code tiles space in five-sided building blocks — “little Tinkertoys,” said Patrick Hayden of Stanford University, a leader in the research area. Each Tinkertoy represents a single space-time point. “These tiles would be playing the role of the fish in an Escher tiling,” Hayden said.
In the HaPPY code and other holographic error-correcting schemes that have been discovered, everything inside a region of the interior space-time called the “entanglement wedge” can be reconstructed from qubits on an adjacent region of the boundary. Overlapping regions on the boundary will have overlapping entanglement wedges, Hayden said, just as a logical qubit in a quantum computer is reproducible from many different subsets of physical qubits. “That’s where the error-correcting property comes in.”
“Quantum error correction gives us a more general way of thinking about geometry in this code language,” said Preskill, the Caltech physicist. The same language, he said, “ought to be applicable, in my opinion, to more general situations” — in particular, to a de Sitter universe like ours. But de Sitter space, lacking a spatial boundary, has so far proven much harder to understand as a hologram.
For now, researchers like Almheiri, Harlow and Hayden are sticking with AdS space, which shares many key properties with a de Sitter world but is simpler to study. Both space-time geometries abide by Einstein’s theory; they simply curve in different directions. Perhaps most importantly, both kinds of universes contain black holes. “The most fundamental property of gravity is that there are black holes,” said Harlow, who is now an assistant professor of physics at the Massachusetts Institute of Technology. “That’s what makes gravity different from all the other forces. That’s why quantum gravity is hard.”
The language of quantum error correction has provided a new way of describing black holes. The presence of a black hole is defined by “the breakdown of correctability,” Hayden said: “When there are so many errors that you can no longer keep track of what’s going on in the bulk [space-time] anymore, you get a black hole. It’s like a sink for your ignorance.”
Ignorance invariably abounds when it comes to black hole interiors. Stephen Hawking’s 1974 epiphany that black holes radiate heat, and thus eventually evaporate away, triggered the infamous “black hole information paradox,” which asks what happens to all the information that black holes swallow. Physicists need a quantum theory of gravity to understand how things that fall in black holes also get out. The issue may relate to cosmology and the birth of the universe, since expansion out of a Big Bang singularity is much like gravitational collapse into a black hole in reverse.
AdS space simplifies the information question. Since the boundary of an AdS universe is holographically dual to everything in it — black holes and all — the information that falls into a black hole is guaranteed never to be lost; it’s always holographically encoded on the universe’s boundary. Calculations suggest that to reconstruct information about a black hole’s interior from qubits on the boundary, you need access to entangled qubits throughout roughly three-quarters of the boundary. “Slightly more than half is not sufficient anymore,” Almheiri said. He added that the need for three-quarters seems to say something important about quantum gravity, but why that fraction comes up “is still an open question.”
In Almheiri’s first claim to fame in 2012, the tall, thin Emirati physicist and three collaborators deepened the information paradox. Their reasoning suggested that information might be prevented from ever falling into a black hole in the first place, by a “firewall” at the black hole’s event horizon.
Like most physicists, Almheiri doesn’t really believe black hole firewalls exist, but finding the way around them has proved difficult. Now, he thinks quantum error correction is what stops firewalls from forming, by protecting information even as it crosses black hole horizons. In his latest, solo work, which appeared in October, he reported that quantum error correction is “essential for maintaining the smoothness of space-time at the horizon” of a two-mouthed black hole, called a wormhole. He speculates that quantum error correction, as well as preventing firewalls, is also how qubits escape a black hole after falling in, through strands of entanglement between the inside and outside that are themselves like miniature wormholes. This would resolve Hawking’s paradox.
This year, the Department of Defense is funding research into holographic space-time, at least partly in case advances there might spin off more efficient error-correcting codes for quantum computers.
On the physics side, it remains to be seen whether de Sitter universes like ours can be described holographically, in terms of qubits and codes. “The whole connection is known for a world that is manifestly not our world,” Aaronson said. In a paper last summer, Dong, who is now at the University of California, Santa Barbara, and his co-authors Eva Silverstein and Gonzalo Torroba took a step in the de Sitter direction, with an attempt at a primitive holographic description. Researchers are still studying that particular proposal, but Preskill thinks the language of quantum error correction will ultimately carry over to actual space-time.
“It’s really entanglement which is holding the space together,” he said. “If you want to weave space-time together out of little pieces, you have to entangle them in the right way. And the right way is to build a quantum error-correcting code.”
This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (added pub. l 94-553, Title I, 101, Oct 19, 1976, 90 Stat 2546)
“Ask ten different scientists about the environment, population control, genetics and you’ll get ten different answers, but there’s one thing every scientist on the planet agrees on. Whether it happens in a hundred years or a thousand years or a million years, eventually our Sun will grow cold and go out. When that happens, it won’t just take us. It’ll take Marilyn Monroe and Lao-Tzu, Einstein, Morobuto, Buddy Holly, Aristophanes .. and all of this .. all of this was for nothing unless we go to the stars.”
This is a resource on possible ways humans could achieve interstellar travel.
How to use this resource
Can be read as enrichment.
Resource for a science club project.
Use space travel as an NGSS phenomenon or to create a storyline; one may teach about chemistry topics:
chemical reactions
practical use of reactions – chemical rockets
ions versus atoms
practical use of ions – ion drives for space travel
atoms and anti-atoms: basic subatomic particles of matter/antimatter
energy levels/quantum jumps
Use space travel as an NGSS phenomenon or to create a storyline; one may teach about modern physics topics:
nuclear fission
nuclear fusion
magnetic fields – practical uses of fields (Bussard ramjet)
black holes and wormholes
quantum jumps (chemistry/physics)
Einstein’s theory of relativity (relates to warp drive)
Introduction
Realistically, we currently have no technology that would let us send unmanned, let alone manned, spacecraft to even the nearest star. The Voyager spacecraft – launched in 1977 – is traveling away from our Sun at a rate of 17.3 km per second.
Yes, if we built this today, we could – with some effort – bring it to a speed ten times faster, but that would still take about 7,300 years to reach another star.
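A quick back-of-the-envelope check of that number; the only figure added here is the distance to the nearest star, Proxima Centauri, about 4.25 light-years:

```python
# Rough check of the travel-time claim above (all figures approximate).
LIGHT_YEAR_KM = 9.461e12              # kilometers in one light-year
distance_ly = 4.25                    # Proxima Centauri, the nearest star
voyager_speed_km_s = 17.3             # Voyager's speed, from the text
speed_km_s = 10 * voyager_speed_km_s  # "ten times faster"

seconds = distance_ly * LIGHT_YEAR_KM / speed_km_s
years = seconds / (3600 * 24 * 365.25)
print(f"about {years:,.0f} years")    # roughly 7,400 years, the order quoted above
```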
What do we think about, when we think of interstellar travel?
We’re all familiar with FTL (faster than light) space travel in Star Trek…
or from movies like Star Wars.
Star Wars The Force Awakens, Millennium Falcon
But nothing like this currently exists. We’re not even sure if anything like warp drive or hyperspace could exist – although we’ll get to those ideas at the end of this unit. So we need to start with what we currently have. What kinds of space travel technology do we have right now? All of our rocketships are powered by chemical reactions.
These are the manned rocketships that we have used from the 1960s up to today.
Here we see a SpaceX Falcon 9 rocket lifting off, carrying a Crew Dragon reusable manned spacecraft (shown in the image above).
public domain pxhere.com/en/photo/1080045
Chemical reaction powered rockets are good for manned or unmanned missions within our solar system. But they are relatively slow and require huge amounts of fuel.
Solar sail spaceships
These are applications of Newton’s laws of motion and conservation of momentum.
Solar sails feel the photon wind of our sun in much the same way that traditional sailboats capture the force of the wind.
The first spacecraft to make use of the technology was IKAROS, launched in 2010.
The force of sunlight on the ship’s mirrors is akin to a sail being blown by the wind. High-energy laser beams could be used as a light source to exert much greater force than would be possible using sunlight.
Solar sail craft offer the possibility of low-cost operations combined with long operating lifetimes.
These are very low-thrust propulsion systems, and they use no propellant. They are very slow, but very affordable.
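Just how low-thrust? A short sketch using the intensity of sunlight near Earth gives a feel for it; the formula assumes a perfectly reflecting sail, and the sail area below is an illustrative assumption:

```python
# Radiation pressure on a perfectly reflecting sail at Earth's distance from the Sun.
SOLAR_CONSTANT = 1361.0   # W per square meter at 1 AU
C = 2.998e8               # speed of light, m/s

pressure = 2 * SOLAR_CONSTANT / C   # about 9 millionths of a newton per square meter
sail_area_m2 = 1000.0               # assumed sail size, for illustration only
force_newtons = pressure * sail_area_m2

print(f"radiation pressure: {pressure * 1e6:.1f} micronewtons per square meter")
print(f"thrust on a {sail_area_m2:.0f} square-meter sail: {force_newtons * 1000:.1f} millinewtons")
```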
These ideas are then related to Newton’s laws of motion and conservation of momentum.
Ion rockets have low acceleration, and it takes a long time for a spacecraft to build up much speed. However, they are extremely efficient.
They use engines such as the Hall-effect thruster (HET), which was used in the European Space Agency’s (ESA) SMART-1 mission. They are good for unmanned missions within our solar system.
These systems have already been built and tested here on Earth.
Nuclear Electric propulsion – In this kind of system, thermal energy from a nuclear fission reactor is converted to electrical energy. This is then used to drive an ion thruster.
Nuclear Thermal Rocket – Heat from a nuclear fission reactor adds energy to a fluid. This fluid is then expelled out of a rocket nozzle, creating thrust.
In a Nuclear Thermal Propulsion (NTP) rocket, uranium or deuterium reactions are used to heat liquid hydrogen inside a reactor, turning it into ionized hydrogen gas (plasma), which is then channeled through a rocket nozzle to generate thrust.
A Nuclear Electric Propulsion (NEP) rocket involves the same basic reactor converting its heat and energy into electrical energy, which would then power an electrical engine. In both cases, the rocket would rely on nuclear fission or fusion to generate propulsion rather than chemical propellants, which have been the mainstay of NASA and all other space agencies to date.
Although no nuclear-thermal engines have ever flown, several design concepts have been built and tested over the past few decades, and numerous concepts have been proposed. These have ranged from the traditional solid-core design – such as the Nuclear Engine for Rocket Vehicle Application (NERVA) – to more advanced and efficient concepts that rely on either a liquid or a gas core.
However, despite these advantages in fuel efficiency and specific impulse, the most sophisticated NTP concept has a maximum specific impulse of 5,000 seconds (50 kN·s/kg). Using nuclear engines driven by fission or fusion, NASA scientists estimate it could take a spaceship only 90 days to get to Mars when the planet was at “opposition” – i.e. as close as 55,000,000 km from Earth.
But adjusted for a one-way journey to Proxima Centauri, a nuclear rocket would still take centuries to accelerate to the point where it was flying at a fraction of the speed of light. It would then require several decades of travel time, followed by many more centuries of deceleration before reaching its destination. All told, we’re still talking about 1,000 years before it reaches its destination. Good for interplanetary missions, not so good for interstellar ones.
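To see roughly why, here is a sketch using the Tsiolkovsky rocket equation with the 5,000-second specific impulse quoted above. The mass ratio is an assumed, illustrative value, not a figure from the text:

```python
# Why even a high-specific-impulse nuclear rocket stays far below light speed.
import math

g0 = 9.81                             # standard gravity, m/s^2
isp_seconds = 5000                    # specific impulse quoted above
exhaust_velocity = isp_seconds * g0   # about 49 km/s

mass_ratio = 10                       # assumed: fueled mass / empty mass
delta_v = exhaust_velocity * math.log(mass_ratio)   # Tsiolkovsky rocket equation

c = 2.998e8
print(f"exhaust velocity: {exhaust_velocity / 1000:.0f} km/s")
print(f"achievable delta-v: {delta_v / 1000:.0f} km/s, about {delta_v / c:.3%} of light speed")
# A few hundredths of a percent of light speed is why trip-time estimates for this
# class of rocket run to centuries or millennia.
```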
Torchships
“Have you simply had it up to here with these impotent little momma’s-boy rockets that take almost a year to crawl to Mars? Then you want a herculean muscle-rocket, with rippling titanium washboard abs and huge geodesic truck-nuts! You want a Torchship! Who cares if the exhaust can evaporate Rhode Island? You wanna rocket with an obscenely high delta V, one that can crank out one g for days at a time. Say goodbye to all that fussy Hohmann transfer nonsense, the only navigation you need is point-and-shoot. – Winchell D. Chung Jr.“
Torchships are what we think of from many classic science fiction stories.
Shockingly, we already have the technology to build a Torchship powered by multiple, small nuclear-fission explosions – Project Orion.
Project Orion was a study conducted between the 1950s and 1960s by the United States Air Force, DARPA, and NASA – [it would be a spaceship] propelled by a series of explosions of atomic bombs behind the craft via nuclear pulse propulsion. Early versions of this vehicle were proposed to take off from the ground; later versions were presented for use only in space. Six non-nuclear tests were conducted using models.
The Orion concept offered high thrust and high specific impulse at the same time. Orion would have offered performance greater than the most advanced conventional or nuclear rocket engines then under consideration. Supporters of Project Orion felt that it had potential for cheap interplanetary travel, but it lost political approval over concerns about fallout from its propulsion. The Partial Test Ban Treaty of 1963 is generally acknowledged to have ended the project.
Designs were considered that would actually allow us to build interstellar spacecraft! An Orion torchship could achieve about 10% of the speed of light. At this speed such a ship could reach the nearest star system, Alpha Centauri, in just 44 years.
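That figure checks out with simple arithmetic; the 4.37-light-year distance to the Alpha Centauri system is the only number added here:

```python
# Travel time at a constant 10% of light speed (ignoring acceleration and deceleration).
distance_ly = 4.37          # Alpha Centauri system, approximately
speed_fraction_of_c = 0.10  # Orion torchship estimate from the text
years = distance_ly / speed_fraction_of_c
print(f"about {years:.0f} years")   # about 44 years
```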
And there’s more – Project Orion was just the first torchship designed, and it only uses 1960s-level nuclear fission. In the last generation, more flexible and safer methods using nuclear fission have been developed. Similarly, we have made many advances in nuclear fusion – see the next section.
Torchships – nuclear fusion
Nuclear fusion is the process that powers our sun, and all stars in the universe. Inside a star, gravity pulls billions of tons of matter towards the center. Atoms are pushed very close together. Two atoms are fused into one, heavier atom.
Yet the mass of this new atom is slightly less than the mass of the pieces that it was made of in the first place. Where did the missing mass go? It became energy – which we see as photons, or as the heat/motion energy of other particles. This is also the process by which hydrogen (thermonuclear) bombs work.
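Here is a worked example of that mass-to-energy bookkeeping, using the deuterium-tritium reaction targeted by most fusion reactor designs; the standard atomic masses below are the only inputs added:

```python
# E = m c^2 for deuterium + tritium -> helium-4 + neutron (masses in atomic mass units).
m_deuterium = 2.014102
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

mass_defect_u = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)   # about 0.0189 u

U_TO_KG = 1.66054e-27
C = 2.998e8
energy_joules = mass_defect_u * U_TO_KG * C**2
energy_mev = energy_joules / 1.602e-13

print(f"mass defect: {mass_defect_u:.5f} u")
print(f"energy released per reaction: {energy_mev:.1f} MeV")   # about 17.6 MeV
```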
How can we possibly replicate the energy of stars here on Earth? For the last 70 years people have been working on this. It has been extremely challenging to do this, but progress is slowly being made.
The Bussard ramjet was proposed by physicist Robert W. Bussard in 1960. It uses nuclear fusion. An enormous electromagnetic funnel “scoops” hydrogen from the interstellar medium and dumps it into the reactor as fuel.
As the ship picks up speed, the reactive mass is forced into a progressively constricted magnetic field, compressing it until thermonuclear fusion occurs. The magnetic field then directs the energy as rocket exhaust through an engine nozzle, thereby accelerating the vessel.
Without any fuel tanks to weigh it down, a fusion ramjet could achieve speeds approaching 4% of the speed of light and travel anywhere in the galaxy.
However, the potential drawbacks of this design are numerous. For instance, there is the problem of drag. The ship relies on increased speed to accumulate fuel, but as it collides with more and more interstellar hydrogen, it may also lose speed – especially in denser regions of the galaxy.
Second, deuterium and tritium (used in fusion reactors here on Earth) are rare in space, whereas fusing regular hydrogen (which is plentiful in space) is beyond our current methods.
Design by writer Brice Cassenti, artwork by Winchell Chung
Fans of science fiction are sure to have heard of antimatter. But in case you haven’t, antimatter is essentially material composed of antiparticles, which have the same mass but opposite charge as regular particles. An antimatter engine, meanwhile, is a form of propulsion that uses interactions between matter and antimatter to generate power, or to create thrust.
In short, an antimatter engine involves particles of hydrogen and antihydrogen being slammed together. This reaction unleashes as much energy as a thermonuclear bomb, along with a shower of subatomic particles called pions and muons. These particles, which would travel at one-third the speed of light, are then channeled by a magnetic nozzle to generate thrust.
The advantage to this class of rocket is that a large fraction of the rest mass of a matter/antimatter mixture may be converted to energy, allowing antimatter rockets to have a far higher energy density and specific impulse than any other proposed class of rocket. What’s more, controlling this kind of reaction could conceivably push a rocket up to half the speed of light.
Pound for pound, this class of ship would be the fastest and most fuel-efficient ever conceived. Whereas conventional rockets require tons of chemical fuel to propel a spaceship to its destination, an antimatter engine could do the same job with just a few milligrams of fuel. In fact, the mutual annihilation of a half pound of hydrogen and antihydrogen particles would unleash more energy than a 10-megaton hydrogen bomb.
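A rough check of that claim with E = mc²; the unit conversions are the only numbers added here:

```python
# Annihilating half a pound of hydrogen with half a pound of antihydrogen converts
# about one pound of rest mass entirely into energy.
POUND_KG = 0.4536
C = 2.998e8
MEGATON_OF_TNT_J = 4.184e15

energy_joules = POUND_KG * C**2   # E = m c^2
print(f"{energy_joules:.2e} J, about {energy_joules / MEGATON_OF_TNT_J:.0f} megatons of TNT")
# On the order of a 10-megaton hydrogen bomb, from one pound of fuel.
```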
It is for this exact reason that NASA’s Institute for Advanced Concepts (NIAC) has investigated the technology as a possible means for future Mars missions. Unfortunately, when contemplating missions to nearby star systems, the amount of fuel needed to make the trip is multiplied exponentially, and the cost involved in producing it would be astronomical (no pun intended!)
Some sci-fi novels postulate a technology called a jump drive, which allows a starship to be instantaneously teleported between two points. The specific way this is done is glossed over.
Some physicists have offered tentative ideas about how it might be possible. In Stargate and the science fiction story Contact, the characters use a traversable wormhole – a connection between two distant black holes.
In Star Wars and Babylon 5 spaceships have a hyperdrive, to send a ship through hyperspace.
From Star Wars, here is a view of hyperspace from the cockpit.
Star Wars The Force Awakens, Millennium Falcon
Hyperspace is a very different concept than warp drive. Hyperspace is a speculative, different dimension in which faster-than-light speeds are possible. So, in this idea, a spaceship would somehow jump out of our universe and into this realm.
No form of hyperspace has ever been discovered by science; its existence was initially merely supposed by science fiction writers. In recent years, however, theoretical physics work on superstrings has led to something called brane theory, which suggests the possible existence of hyperspaces of various sorts.
Presumably a spaceship would reach a point in hyperspace that corresponds to the destination in our space that they want; at this point they need to jump out of hyperspace and back into our space.
You are likely familiar with methods of interstellar travel that currently only exist in science fiction. For instance, in Star Trek, spaceships have a warp drive. Warp drive allows a spaceship to travel through our space, regular space, at FTL (faster than light) speeds.
Many people are familiar with warp drive as a form of FTL (faster-than-light) travel. Its most popular use is in the science-fiction series Star Trek. According to the laws of physics, could this potentially be possible?
“Concepts for Deep Space Travel: From Warp Drives and Hibernation to World Ships and Cryogenics“, Current Trends in Biomedical Engineering and Biosciences
6.MS-ESS1-5(MA). Use graphical displays to illustrate that Earth and its solar system are one of many in the Milky Way galaxy, which is one of billions of galaxies in the universe.
By the end of grade 8. Patterns of the apparent motion of the sun, the moon, and stars in the sky can be observed, described, predicted, and explained with models. The universe began with a period of extreme and rapid expansion known as the Big Bang. Earth and its solar system are part of the Milky Way galaxy, which is one of many galaxies in the universe.
Possible solutions to a problem are limited by available materials and resources (constraints). The success of a designed solution is determined by considering the desired features of a solution (criteria). Different proposals for solutions can be compared on the basis of how well each one meets the specified criteria for success or how well each takes the constraints into account. (secondary to 4-PS3-4)
Common Core State Standards Connections: ELA/Literacy
RST.6-8.8 Distinguish among facts, reasoned judgment based on research findings, and speculation in a text. (MS-LS2-5)
RI.8.8 Trace and evaluate the argument and specific claims in a text, assessing whether the reasoning is sound and the evidence is relevant and sufficient to support the claims. (MS-LS-4),(MS-LS2-5)
WHST.6-8.2 Write informative/explanatory texts to examine a topic and convey ideas, concepts, and information through the selection, organization, and analysis of relevant content. (MS-LS2-2)
Why has no one else done this? Nothing makes the hard work of learning science more fun than using it for evil! We can rewrite actual national high school learning standards – as if from the James Bond villain organization SPECTRE (Special Executive for Counter-intelligence, Terrorism, Revenge and Extortion) 😂
We can then teach a unit of physics (or any science) with real science labs & quizzes, but as if we’re a mad scientist. Keep it real and make it fun: bring in real history and engineering. Show actual, proposed mad science projects. Stuff that’s absolutely real that most people never heard of.
Have a day of class with mini bios of real & fictional mad scientists – because all kids deserve good role models 😜
Challenge our students to come up with a science-based demo, presentation, or plan based on, you know, the usual: world domination, that sort of thing.
“The 1960s Project Orion examined the feasibility of building a nuclear-pulse rocket powered by nuclear fission. It was carried out by physicist Theodore Taylor and others over a seven-year period, beginning in 1958, with United States Air Force support. … it suggested releasing atomic bombs behind a spacecraft, followed by disks made of solid propellant. The bombs would explode, vaporizing the material of the disks and converting it into hot plasma. As this plasma rushed out in all directions, some of it would catch up with the spacecraft, impinge upon a pusher plate, and so drive the vehicle forward.”
“My plan would involve hollowing out West Virginia and using the slag to fill in Lake Ontario, completing a diagonal chain of now saltwater lakes across Turtle island and linking the Arctic & Atlantic seas. This would benefit no one & cause untold damage. I will take no questions.”
Real life mad Soviet scientist, organ transplantation pioneer, performed frightening head transplants on dogs and monkeys.
Sir Hugo Drax (James Bond: Moonraker)
Doctor Evil (from Austin Powers)
Amy Farrah Fowler, The Big Bang Theory
John Hays Hammond Jr. (1888-1965)
“The Father of Radio Control”. Had the mad idea that he could guide or control submarines, torpedoes, and boats – remotely. This was considered quackery and impossible – until he actually developed such technology. His developments in electronic remote control are the foundation for today’s modern radio remote control devices, including modern missile guidance systems, unmanned aerial vehicles (UAVs), and unmanned combat aerial vehicles (UCAVs). Over 400 patents. And of course he built a giant castle with a hidden laboratory, secret passageways, and hidden doors, on the coast of Gloucester, MA, because every mad scientist needs a secret castle lab.
Way back in 1922 he created a light-sensing automated driving machine (“the electric dog,”) a predecessor to today’s automated machines.
A character created by the French novelist Jules Verne (1828–1905). Nemo appears in two of Verne’s science-fiction books, Twenty Thousand Leagues Under the Seas (1870) and The Mysterious Island (1875) and in many books, comic books, and movies based on this character.
Nemo is a mysterious figure. Though of unknown nationality in the first book, he is described as the son of an Indian raja in the second book. A scientific visionary, he roams the depths of the seas in his submarine, the Nautilus, which was assembled from parts manufactured in several different countries, then shipped to a cover address. The captain is consumed by a hunger for vengeance and hatred of imperialism; Verne included references to anti-imperialist uprisings, including the Kościuszko Uprising and Indian Rebellion of 1857, in the various backstories of Nemo.
Dr. Julius No, James Bond villain
Q (James Bond) – head of Q Branch (later Q Division), the fictional research and development division of the British Secret Service charged with oversight of top-secret field technologies.
Louise G. Robinovitch 1869-1940s, another real mad scientist. These are actual news headlines:
USE ELECTRICITY TO REINSTILL LIFE; Experiments by Which an Animal Which Died Under Anesthetics Was Resuscitated.
HUMAN PATIENTS NEXT Dr. Louise G. Rabinovitch Pursuing Experiments in Inducing Electric Sleep as Substitute for Anesthetics.
SPECTRE: The Board Game from Modiphius Entertainment: Compete to become Number 1 of the Special Executive for Counter-intelligence, Terrorism, Revenge, and Extortion (SPECTRE) Are you simply in the game to acquire gold bullion, or are your aspirations more philosophical, safe in the knowledge that the world would be better off with you running it?
HS-ETS1-1. Analyze a major EVIL global challenge to specify qualitative and quantitative criteria and constraints for solutions that account for societal needs and wants.
HS-ETS1-2. Design an EVIL solution to a complex real-world problem by breaking it down into smaller, more manageable problems that can be solved through engineering.
HS-ETS1-3. Evaluate an EVIL solution to a complex real-world problem based on prioritized criteria and trade-offs that account for a range of constraints, including cost, safety, reliability, and aesthetics as well as possible social, cultural, and environmental impacts.
HS-ETS1-4. Use a computer simulation to model the impact of proposed EVIL solutions to a complex real-world problem with numerous criteria and constraints on interactions within and between systems relevant to the problem.
Next Generation Science Standards: Science & Engineering Practices
● Ask questions that arise from careful observation of EVIL phenomena, or unexpected results, to clarify and/or seek additional information.
● Ask questions that arise from examining EVIL models or a theory, to clarify and/or seek additional information and relationships.
● Ask questions to clarify and refine an EVIL model, an explanation, or an engineering problem.
● Evaluate an EVIL question to determine if it is testable and relevant.
● Ask and/or evaluate EVIL questions that challenge the premise(s) of an argument, the interpretation of a data set, or the suitability of the design
HS-ETS1-1. Analyze a major EVIL global challenge to specify a design problem that can be improved. Determine necessary qualitative and quantitative criteria and constraints for solutions, including any requirements set by society.
HS-ETS1-2. Break a complex real-world EVIL problem into smaller, more manageable problems that each can be solved using scientific and engineering principles.
HS-ETS1-3. Evaluate a solution to a complex real-world EVIL problem based on prioritized criteria and trade-offs that account for a range of constraints, including cost, safety, reliability, aesthetics, and maintenance, as well as social, cultural, and environmental impacts.
Behaviors that scientists engage in as they investigate and build models and theories about the natural world and the key set of engineering practices that engineers use as they design and build models and systems.
Although engineering design is similar to scientific inquiry, there are significant differences. For example, scientific inquiry involves the formulation of a question that can be answered through investigation, while EVIL engineering design involves the formulation of a problem that can be solved through design for the purposes of counter-intelligence, terrorism, revenge and extortion. Obviously.
The basic rules of chemistry are magic number approximations
What is Lewis Theory?
This lesson is from Mark R. Leach, meta-synthesis.com, Lewis_theory
Lewis theory is the study of the patterns that atoms display when they bond and react with each other.
The Lewis approach is to look at many chemical systems, study patterns, count the electrons in the patterns. After that, we devise simple rules to explain what is happening.
Lewis theory makes no attempt to explain how or why these empirically derived numbers of electrons – these magic numbers – arise.
Although, it is striking that the magic numbers are generally (but not exclusively) small even integers: 0, 2, 4, 6, 8
For example:
Atoms and atomic ions show particular stability when they have a full outer or valence shell of electrons and are isoelectronic with He, Ne, Ar, Kr & Xe: Magic numbers 2, 10, 18, 36, 54.
Atoms have a shell electronic structure: Magic numbers 2, 8, 8, 18, 18.
Sodium metal reacts to give the sodium ion, Na+, a species that has a full octet of electrons in its valence shell. Magic number 8.
A covalent bond consists of a shared pair of electrons: Magic number 2.
Atoms have valency, the number of chemical bonds typically formed by an element: Magic numbers 0 to 8.
Ammonia, H3N:, has a lone pair of electrons in its valence shell: Magic number 2.
Ethene, H2C=CH2, has a double covalent bond: Magic numbers (2 + 2)/2 = 2.
Nitrogen, N2, N≡N, has a triple covalent bond: Magic numbers (2 + 2 + 2)/2 = 3.
The methyl radical, H3C•, has a single unpaired electron in its valence shell: Magic number 1.
Lewis bases (proton abstractors & nucleophiles) react via an electron pair: Magic number 2.
Electrophiles, Lewis acids, accept a pair of electrons in order to fill their octet: Magic numbers 2 + 6 = 8.
Oxidation involves loss of electrons, reduction involves gain of electrons. Every redox reaction involves concurrent oxidation and reduction: Magic number 0 (overall).
Curly arrows represent the movement of an electron pair: Magic number 2.
Ammonia, NH3, and phosphine, PH3, are isoelectronic in that they have the same Lewis structure. Both have three covalent bonds and a lone pair of electrons: Magic numbers 2 & 8.
Lewis theory is electron accountancy: look for the patterns and count the electrons.
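As a small illustration of what electron accountancy looks like in practice, here is a toy Python helper, not part of the original lesson, that totals the valence electrons for a species, the first step in drawing a Lewis structure. The tiny element table is an illustrative assumption:

```python
# Toy "electron accountancy": count valence electrons for a main-group species.
VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6, "F": 7, "P": 5, "S": 6, "Cl": 7}

def valence_electrons(atoms, charge=0):
    """atoms: dict of element symbol -> count; charge: overall ionic charge."""
    return sum(VALENCE[element] * n for element, n in atoms.items()) - charge

print(valence_electrons({"N": 1, "H": 3}))              # NH3     -> 8
print(valence_electrons({"C": 1, "H": 4}))              # CH4     -> 8
print(valence_electrons({"C": 1, "O": 3}, charge=-2))   # CO3^2-  -> 24
print(valence_electrons({"P": 1, "Cl": 5}))             # PCl5    -> 40 (10 around P)
```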
Lewis theory is also highly eclectic in that it greedily begs/borrows/steals/assimilates numbers from deeper, predictive theories and incorporates them into itself, as we shall see.
Ernest Rutherford famously said
“Physics is the only real science. The rest are just stamp collecting”
Imagine an alien culture trying to understand planet Earth using only a large collection of postage stamps. The aliens would see all sorts of patterns and would be able to deduce the existence of: countries, national currencies, pricing strategies, differential exchange rates, inflation, the existence of heads of state, what stamps are used for, etc., and – importantly – they would be able to make predictions about missing stamps.
But the aliens would be able to infer little about the biology of life on our planet by only studying stamps, although there would be hints in the data: various creatures & plants, males & females, etc.
So it is with atoms, ions, molecules, molecular ions, materials, etc. As chemists we see many patterns in chemical structure and reactivity, and we try to draw conclusions and make predictions using these patterns:
This is Lewis theory. But this Lewis approach is not complete and it only gives hints about the underlying quantum mechanics, a world observed through spectroscopy and mathematics.
Patterns
Consider the pattern shown in Diagram-1:
Now expand the view slightly and look at Diagram-2
You may feel that the right hand side “does not fit the pattern” of Diagram-1 and so is an anomaly.
So, is it an anomaly?
Zoom out a bit and look at the pattern in Diagram-3: the anomaly disappears.
But then look at Diagram-4. The purple patch on the upper right hand side does not seem to fit the pattern, and so it may represent an anomaly.
But zooming right out to Diagram-5 we see that everything is part of a larger regular pattern.
Image from dryicons.com, digital-flowers-pattern
When viewing the larger scale the overall pattern emerges and everything becomes clear. Of course, the Digital Flowers pattern is trivial, whereas the interactions of electrons and positive nuclei are astonishingly subtle.
This situation is exactly like learning about chemical structure and reactivity using Lewis theory. First we learn about the ‘Lewis octet’, and we come to believe that the pattern of chemistry can be explained in terms of the very useful Lewis octet model.
Then we encounter phosphorus pentachloride, PCl5, and discover that it has 10 electrons in its valence shell. Is PCl5 an anomaly? No! The fact is that the pattern generated through the Lewis octet model is just too simple.
As we zoom out and look at more examples of chemical structure and reactivity, we see that the pattern is more complicated than indicated by the Lewis octet magic number 8.
Our problem is that although the patterns of electrons in chemical systems are in principle predictable, new patterns always come as a surprise when they are first discovered:
The serendipitous discovery of how to make the fullerene C60 in large amounts
While these observations can be explained after the fact, they were not predicted beforehand. We do not have the mathematical tools to predict the nature of the quantum patterns with absolute precision.
The chemist’s approach to understanding structure and reactivity is to count the electrons and take note of the patterns. This is Lewis theory.
As chemists we attempt to ‘explain’ many of these patterns in terms of electron accountancy and magic numbers.
Caught In The Act: Theoretical Theft & Magic Number Creation
The crucial period for our understanding of chemical structure & bonding came in the busy chemistry laboratories at UC Berkeley under the leadership of G. N. Lewis in the early years of the 20th century.
Lewis and colleagues were actively debating the new ideas about atomic structure, particularly the Rutherford & Bohr atoms, and postulated how these might give rise to models of chemical structure, bonding & reactivity.
Indeed, the Lewis model uses ideas directly from the Bohr atom. The Rutherford atom shows electrons whizzing about the nucleus, but to the trained eye, there is no structure to the whizzing. Introduced by Niels Bohr in 1913, the Bohr model is a quantum physics modification of the Rutherford model and is sometimes referred to as the Rutherford–Bohr model. (Bohr was Rutherford’s student at the time.) The model’s key success lay in explaining (correlating with) the Rydberg formula for the spectral emission lines of atomic hydrogen.
[Greatly simplifying both the history & the science:]
In 1916 atomic theory forked or bifurcated into physics and chemistry streams:
The physics fork was initiated and developed by Bohr, Pauli, Sommerfeld and others. Research involved studying atomic spectroscopy, and this led to the discovery of the four quantum numbers – principal, azimuthal, magnetic & spin – and their selection rules. More advanced models of chemical structure, bonding & reactivity are based upon the Schrödinger equation, in which the electron is treated as a resonant standing wave. This has developed into molecular orbital theory and the discipline of computational chemistry.
Note: quantum numbers and their selection rules are not ‘magic’ numbers. The quantum numbers represent deep symmetries that are entirely self consistent across all quantum mechanics.
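As an aside, not part of the original lesson, the capacity of each electron shell follows directly from counting quantum numbers, which a few lines of Python make explicit:

```python
# For principal quantum number n, l runs 0..n-1, m_l runs -l..l, and spin doubles
# the count, so each shell holds 2n^2 electrons.
def shell_capacity(n):
    return sum(2 * (2 * l + 1) for l in range(n))   # equals 2 * n**2

for n in range(1, 5):
    print(n, shell_capacity(n))   # 2, 8, 18, 32

# Note the running totals (2, 10, 28, 60) are not the noble-gas numbers (2, 10, 18,
# 36, 54): the aufbau filling order interleaves subshells, which is why the chemical
# magic numbers differ from the naive shell capacities.
```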
The chemistry fork started when Lewis published his first ideas about the patterns he saw in chemical bonding and reactivity in 1916, and later in a more advanced form in 1923. Lewis realised that electrons could be counted and that there were patterns associated with structure, bonding and reactivity behaviour. These early ideas have been extensively developed and are now taught to chemistry students the world over. This is Lewis theory.
Quantum mechanics and Lewis theory are both concerned with patterns. However, quantum mechanics actively causes the patterns whereas Lewis theory is passive and it only reports on patterns that are observed through experiment.
We observe patterns of structure & reactivity behaviour through experiment.
Lewis theory looks at the empirical evidence, identifies patterns in behaviour, and classifies the patterns in terms of electron accountancy & magic numbers. Lewis theory gives no explanation for the patterns.
In large part, chemistry is about the behaviour of electrons, and electrons are quantum mechanical entities. Quantum mechanics causes chemistry to be the way it is. The quantum mechanical patterns can be:
Observed using spectroscopy.
Seen as echoes of the underlying quantum mechanics in chemical structure & reactivity behaviour patterns.
Calculated, although the mathematics is not trivial.
The Tragic Decline of Music Literacy (and Quality)
Jon Henschen, intellectualtakeout.org, August 16, 2018
Throughout grade school and high school, I was fortunate to participate in quality music programs. Our high school had a top Illinois state jazz band; I also participated in symphonic band, which gave me a greater appreciation for classical music. It wasn’t enough to just read music. You would need to sight read, meaning you are given a difficult composition to play cold, without any prior practice. Sight reading would quickly reveal how fine-tuned playing “chops” really were. In college I continued in a jazz band and also took a music theory class. The experience gave me the ability to visualize music. (If you play by ear only, you will never have that same depth of understanding of musical construction.)
Both jazz and classical art forms require not only music literacy, but for the musician to be at the top of their game in technical proficiency, tonal quality and, in the case of the jazz idiom, creativity. Jazz masters like John Coltrane would practice six to nine hours a day; Coltrane often cut his practice short only because his inner lower lip would be bleeding from the friction of his mouthpiece against his gums and teeth.
His ability to compose and create new styles and directions for jazz was legendary. With few exceptions such as Wes Montgomery or Chet Baker, if you couldn’t read music, you couldn’t play jazz. In the case of classical music, if you can’t read music you can’t play in an orchestra or symphonic band. Over the last 20 years, musical foundations like reading and composing music are disappearing with the percentage of people that can read music notation proficiently down to 11 percent, according to some surveys.
Two primary sources for learning to read music are school programs and at-home piano lessons. Public school music programs have been in decline since the 1980s, often with school administrations blaming budget cuts or needing to spend money on competing extracurricular programs. Prior to the 1980s, it was common for homes to have a piano, with children taking piano lessons.
Even home architecture incorporated what was referred to as a “piano window” in the living room which was positioned above an upright piano to help illuminate the music. Stores dedicated to selling pianos are dwindling across the country as fewer people take up the instrument. In 1909, piano sales were at their peak when more than 364,500 were sold, but sales have plunged to between 30,000 and 40,000 annually in the US. Demand for youth sports competes with music studies, but also, fewer parents are requiring youngsters to take lessons as part of their upbringing.
Besides the decline of music literacy and participation, there has also been a decline in the quality of music which has been proven scientifically by Joan Serra, a postdoctoral scholar at the Artificial Intelligence Research Institute of the Spanish National Research Council in Barcelona. Joan and his colleagues looked at 500,000 pieces of music between 1955-2010, running songs through a complex set of algorithms examining three aspects of those songs:
1. Timbre – sound color, texture and tone quality
2. Pitch – harmonic content of the piece, including its chords, melody, and tonal arrangements
3. Loudness – volume variance adding richness and depth
The results of the study revealed that timbral variety went down over time, meaning songs are becoming more homogeneous. Translation: most pop music now sounds the same. Timbral quality peaked in the 60’s and has since dropped steadily with less diversity of instruments and recording techniques.
Today’s pop music is largely the same with a combination of keyboard, drum machine and computer software greatly diminishing the creativity and originality.
Pitch content has also decreased, with the number of chords and different melodies declining, as musicians today are less adventurous in moving from one chord or note to another, opting for well-trod paths laid down by their predecessors.
Loudness was found to have increased by about one decibel every eight years. Music loudness has been manipulated by the use of compression. Compression boosts the volume of the quietest parts of the song so they match the loudest parts, reducing dynamic range. With everything now loud, the music takes on a muddled sound, as everything has less punch and vibrancy due to compression.
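To make the mechanism concrete, here is a toy sketch, not from the article, of a crude compressor acting on a made-up two-level signal; the threshold and ratio are illustrative assumptions, and the point is only that the gap between loud and quiet shrinks:

```python
# Crude illustration of how compression reduces dynamic range.
import numpy as np

rng = np.random.default_rng(1)
quiet = 0.05 * rng.standard_normal(1000)   # a quiet passage
loud = 0.80 * rng.standard_normal(1000)    # a loud passage

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def compress(x, threshold_db=-20.0, ratio=8.0):
    """Very crude per-sample compressor: reduce any level above the threshold."""
    level_db = 20 * np.log10(np.abs(x) + 1e-12)
    overshoot = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -overshoot * (1 - 1 / ratio)
    return x * 10 ** (gain_db / 20)

before = rms_db(loud) - rms_db(quiet)
after = rms_db(compress(loud)) - rms_db(compress(quiet))
print(f"loud-vs-quiet gap before: {before:.1f} dB, after: {after:.1f} dB")
```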
In an interview, Billy Joel was asked what has made him a standout. He responded that his ability to read and compose music made him unique in the music industry, which, as he explained, is troubling for the industry when being musically literate makes you stand out. An astonishing amount of today’s popular music is written by two people: Lukasz Gottwald of the United States and Max Martin from Sweden, who are both responsible for dozens of songs in the top 100 charts. You can credit Max and Dr. Luke for most of the hits of these stars:
Katy Perry, Britney Spears, Kelly Clarkson, Taylor Swift, Jessie J., KE$HA, Miley Cyrus, Avril Lavigne, Maroon 5, Taio Cruz, Ellie Goulding, NSYNC, Backstreet Boys, Ariana Grande, Justin Timberlake, Nicki Minaj, Celine Dion, Bon Jovi, Usher, Adam Lambert, Justin Bieber, Domino, Pink, Pitbull, One Direction, Flo Rida, Paris Hilton, The Veronicas, R. Kelly, Zebrahead
With only two people writing much of what we hear, is it any wonder music sounds the same, using the same hooks, riffs and electric drum effects?
Lyric intelligence was also studied by Joan Serra over the last 10 years, using several metrics such as the Flesch–Kincaid Readability Index, which reflects how difficult a piece of text is to understand and the quality of the writing. Results showed lyric intelligence has dropped by a full grade level, with lyrics getting shorter and tending to repeat the same words more often.
Artists that write the entirety of their own songs are very rare today. When artists like Taylor Swift claim they write their own music, it is partially true, insofar as she writes her own lyrics about her latest boyfriend breakup, but she cannot read music and lacks the ability to compose what she plays. (Don’t attack me Tay-Tay Fans!)
Music electronics are another aspect of musical decline, as many of the untalented people we hear on the radio can’t live without autotune. Autotune artificially stretches or slurs sounds to bring them closer to center pitch. Many of today’s pop musicians and rappers could not survive without autotune, which has become a sort of musical training wheels. But unlike a five-year-old riding a bike, they never take the training wheels off to mature into better musicians. Dare I even bring up the subject of U2’s guitarist “The Edge,” who has popularized rhythmic digital delays synchronized to the tempo of the music? You could easily argue he’s more an accomplished sound engineer than a talented guitarist.
Today’s music is designed to sell, not inspire. Today’s artist is often more concerned with producing something familiar to a mass audience, increasing the likelihood of commercial success (this is encouraged by music industry execs, who are notoriously risk-averse).
In the mid-1970s, most American high schools had a choir, orchestra, symphonic band, jazz band, and music appreciation classes. Many of today’s schools limit you to a music appreciation class because it is the cheapest option. D.A. Russell wrote in the Huffington Post, in an article titled “Cancelling High School Elective, Arts and Music—So Many Reasons—So Many Lies,” that music, arts and electives teachers face the constant threat of having their courses eliminated entirely. The worst part is knowing that cancellation is almost always based on two deliberate falsehoods peddled by school administrators: 1) cancellation is a funding issue (the big lie); 2) music and the arts are too expensive (the little lie).
The truth: Elective class periods have been usurped by standardized test prep. Administrators focus primarily on protecting their positions and the school’s status by concentrating curricula on passing the tests, rather than by helping teachers be freed up from micromanaging mandates so those same teachers can teach again in their classrooms, making test prep classes unnecessary.
What can be done? First, musical literacy should be taught in our nation’s school systems. In addition, parents should encourage their children to play an instrument because it has been proven to help in brain synapse connections, learning discipline, work ethic, and working within a team. While contact sports like football are proven brain damagers, music participation is a brain enhancer.
Where did all the key changes go?
Mallika Seshadri, 11/30/2022
Many of the biggest hits in pop music used to have something in common: a key change, like the one you hear in Whitney Houston’s “I Wanna Dance With Somebody.” But key changes have become harder to find in top hits.
Chris Dalla Riva, a musician and data analyst at Audiomack, wanted to learn more about what it takes to compose a top hit. He spent the last few years listening to every number one hit listed on the Billboard Hot 100 since 1958 – more than 1100 songs.
“I just started noticing some trends, and I set down to writing about them,” says Dalla Riva, who published some of those findings in an article for the website Tedium. He found that about a quarter of those songs from the 1960s to the 1990s included a key change. But from 2010 to 2020, there was just one top song with a key change: Travis Scott’s 2018 track, “Sicko Mode.”
According to Dalla Riva, changing the key – or shifting the base scale of a song – is a tool used across musical genres to “inject energy” into a pop number. There are two common ways to place a key change into a top hit, he says. The first is to take the key up toward the end of a number, like Beyoncé does in her 2011 song “Love on Top,” which took listeners through four consecutive key changes. This placement helps a song crescendo to its climax.
The second common placement, Dalla Riva says, is in the middle of a song to signal a change in mood. The Beach Boys took this approach in their 1966 release “Good Vibrations,” as did Scott’s “Sicko Mode.” “The key is just a tool,” Dalla Riva says. “And like all tools and music, the idea is to evoke emotion.”
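For readers who want to see what “shifting the base scale” means in the simplest terms, here is a toy sketch, not from the article, that moves a common chord progression up a whole step:

```python
# A key change transposes every pitch by the same interval. Here: C major up to D major.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose(chord_roots, semitones):
    return [NOTES[(NOTES.index(root) + semitones) % 12] for root in chord_roots]

progression_in_c = ["C", "G", "A", "F"]    # I - V - vi - IV in C major (the vi is A minor)
print(transpose(progression_in_c, 2))      # ['D', 'A', 'B', 'G']: the same progression in D
```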
…. In the absence of key changes – and in a time where hip-hop and electronic music have gained popularity – composers have turned to varying rhythmic patterns and more evocative lyrics. And if you’re one of those folks who wants the key change to come back, Charnas believes there’s one way to do it: fund music education. “You want to know why Motown was such an incredible font of composition? Three words: Detroit Public Schools.”
I. What is a wildfire?
II. What causes wildfires?
III. Is there any truth in what President Donald Trump says about the refusal to clean the forest floor as a major contributing factor to the spread of these fires?
IV. Many coastal communities had to learn to live with hurricanes. Will California communities have to learn how to live with such fires?
__________________________________________
I. What is a wildfire?
A wildfire or wildland fire is a fire in an area of combustible vegetation occurring in rural areas.
Depending on the type of vegetation present, a wildfire can also be classified more specifically as a brush fire, bushfire, desert fire, forest fire, grass fire, hill fire, peat fire, vegetation fire, and veld fire.
Photo of the Delta Fire, California, 2018. Social media/Reuters.
II. What causes wildfires?
Sadly, if you watch the news, there apparently is nothing that happens without the Jews being blamed.
Marjorie Taylor Greene (R), QAnon congresswoman from Georgia, has her own theory about what caused the 2020 California wildfires:
III. Is there any truth in what President Donald Trump says about the refusal to clean the forest floor as a major contributing factor to the spread of these fires?
Yes, there is some truth to this. The issue takes more than one sentence to explain.
Erin Ross, writer and researcher for Oregon Public Broadcasting, writes
This is a good question with a longish answer. But the short answer is no. The long answer (thread): Forest management practices (which have nothing to do with ‘raking forests’) have absolutely contributed to the size and intensity of wildfires over the last 100 or so years.
Basically, for a long time, if there was a fire you did one thing: put the fire out, ASAP. But fire is a natural part of forest ecosystems, so that led to fuel buildup, which increased the intensity of fires. What a healthy forest looks like depends on the ecosystem.
A healthy ponderosa forest, for example, is open and park-like. Regular fires clear the underbrush, downed limbs, and young trees. You can walk through these forests with outstretched arms without touching a tree.
There will be stands of denser, younger trees where old trees fell, opening up the ground to light and growth. They’ll thin with time. An unhealthy ponderosa forest is nothing *but* dense stands of young trees and brush.
In a healthy ponderosa forest, fire rips across the understory. But ponderosas are adapted for fire, so very few trees actually catch.
In an unhealthy forest, with lots of brush to fuel flames and smaller trees to reach the fire up towards the canopy, the whole forest can burn.
In Oregon, most ponderosa forests are on the east side of the Cascade mountain range. Historically, they would have many small, brief fires. Now, because of decades of fire suppression, they have frequent massive fires.
So… should we clean the forest floor? No. Should we log all the trees? Not that, either.
You can’t just rake a forest. Forest ecosystems are more than trees and bushes. Insects, mammals, birds and plants rely on fire to exist. Some types of plant seeds won’t even germinate without fire.
If we just “raked the forest”, we’d wreck the forest. So instead we use controlled burning. When fire danger is low, crews go into the woods and light small fires, reducing fuel and simulating the fires that would have burned in the past.
But some forests — particularly ones that were logged or over-suppressed – are full of dense small trees and large trees. They’re not safe to controlled burn, and it wouldn’t reduce the fuel. So humans need to do even more.
Unfortunately, you can’t just go thin trees. That leaves all that fuel sitting on the ground. Some studies have found that forests that were thinned but not burned had *worse* fires than those with no thinning at all.
The thinned trees helped wind carry the fires.
So, you need a combination of thinning and controlled burning in these ponderosa forests. We have to undo some damage before we can return fire to the landscape.
Photo of the Delta Fire, California, 2018. Social media/Reuters.
IV. Many coastal communities had to learn to live with hurricanes.
Will California communities have to learn how to live with such fires?
In an article on Slate, James B. Meigs writes
Fossil charcoal indicates that wildfires began soon after the appearance of terrestrial plants 420 million years ago. Wildfire’s occurrence throughout the history of terrestrial life invites conjecture that fire must have had pronounced evolutionary effects on most ecosystems’ flora and fauna.
Earth is an intrinsically flammable planet owing to its cover of carbon-rich vegetation, seasonally dry climates, atmospheric oxygen, and widespread lightning and volcanic ignitions.
The Camp Fire may have been caused by one, but the California wildfire was years in the making.
It’s hard to look at the images of what used to be Paradise. On Nov. 8, California’s Camp Fire tore through the Sierra Nevada foothills town of 27,000 people with little advance warning. It destroyed homes, incinerated cars—many of which were abandoned on roads that had become gridlocked by fleeing residents—and left a death toll of 77 people and climbing. Nearly 1,000 remain unaccounted for.
But if you look closely at photos and video of the aftermath, you’ll notice something surprising. The buildings are gone, but most of the trees are still standing—many with their leaves or needles intact.
The Camp Fire is generally referred to as a forest fire or, to use the term preferred by firefighting professionals, a wildfire. As the name suggests, wildfires are mostly natural phenomena – even when initially triggered by humans – moving through grasslands, scrub, and forest, consuming the biomass in their paths, especially litter and deadwood.
Visiting the disaster area, President Donald Trump blamed poor forestry practices and suggested California’s forests should be managed more like Finland’s where they spend “a lot of time on raking and cleaning.”
But the photos tell a different story. Within Paradise itself, the main fuel feeding the fire wasn’t trees, nor the underbrush Trump suggested should have been raked up. It was buildings. The forest fire became an infrastructure fire.
Fire researchers Faith Kearns and Max Moritz describe what can happen when a wildfire approaches a suburban neighborhood during the high-wind conditions common during the California fall: First, a “storm of burning embers” will shower the neighborhood, setting some structures on fire.
“Under the worst circumstances, wind driven home-to-home fire spread then occurs, causing risky, fast-moving ‘urban conflagrations’ that can be almost impossible to stop and extremely dangerous to evacuate.”
The town of Paradise didn’t just experience a fast-moving wildfire; its own layout, building designs, and city management turned that fire into something even scarier.
At first glance, the cause of the Camp Fire seems obvious: Sparks from a power line ignited a brush fire, which grew and grew as high winds drove it toward the town (there were also reports of a possible second ignition point).
Pacific Gas and Electric, the regional utility, is already facing extensive lawsuits and the threat of financial liabilities large enough to bankrupt the company.
And yet, like almost every disaster that kills large numbers of people and damages communities, the tragedy in Paradise has causes that are more complex than they first appear.
The failure of the power line was the precipitating factor, but other factors came into play as well: zoning laws and living patterns, building codes and the types of construction materials used, possibly even the forestry management practices Trump inelegantly referenced.
A number of environmental, political, and economic trends converged in Butte County in just a few hours on Nov. 8 to spark this fire. But the tragedy was the result of many longer-term decisions, decades in the making.
Paradise sits in the picturesque foothills of the Sierra Nevada range. Its streets bump up against the forest. The surrounding Butte County is less densely populated but still has many homes on lots of between 1 and 5 acres. (Some 46,000 people were displaced by the fire overall.) That makes Butte County a prime example of what planners call the wildland-urban interface.
A recent Department of Agriculture study defined the WUI as “the area where structures and other human development meet or intermingle with undeveloped wildland.” The report estimated that nearly a third of California’s residents lived in such regions in 2010. And their numbers are growing.
It’s easy to see why. These are lovely places to live, attractive to longtime residents as well as retirees and people moving out of cities. But they are also dangerous, especially in California.
The state is subject to several conditions that make fires particularly threatening. One is drought. California summers have always been dry, but records show that they’ve been getting hotter and drier. Fire season is getting longer. Climate models show that that trend is likely to get worse.
Another is wind. Each fall, hot, dry air flows westward from the state’s higher elevations toward the coast. These Santa Ana or “diablo” winds can blow at high speeds for days on end. (On the morning of the Camp Fire, wind speeds as high as 72 miles per hour were recorded.)
Like a giant hair dryer, the wind desiccates everything in its path. The night before the fire, local meteorologist Rob Elvington warned: “Worse than no rain is negative rain.” The winds were literally sucking moisture out of the ground.
Those hot, dry conditions make fires terrifyingly easy to start—a hot car muffler, a cigarette ash, a downed power line, almost anything can do it. And the wind makes them almost impossible to stop. As it barreled toward Paradise, the Camp Fire grew at the rate of roughly 80 football fields per minute.
“California is a special case,” fire historian Stephen J. Pyne recently wrote in Slate. “It’s a place that nature built to burn, often explosively.” Even if no one lived in them, California’s hills would burn regularly, Pyne notes. But humans and their infrastructure make the problem worse.
One of the biggest risk factors is electric power. Utilities like PG&E don’t have the option of not serving rural or semirural residents. And every power line that crosses dry, flammable terrain could spark a wildfire.
The culprit in these cases is, once again, the interplay between human-built infrastructure and the natural environment. Vegetation is constantly growing in the corridors, and if a tree falls on a line, or merely touches it, that can cause a short circuit that might spark a fire.
Cal Fire, the California fire management agency, estimates that problems with power lines caused at least 17 major wildfires in Northern California last year. Under an unusual feature of California law known as “inverse condemnation,” a utility can be forced to pay damages for fires that involve its equipment, even if the company hasn’t been proven negligent in its operations.
Even before the massive Camp Fire, PG&E announced that it expects its liabilities from 2017’s large wine-country fires to exceed $2.5 billion. (California Gov. Jerry Brown recently signed a bill offering some financial relief to utilities grappling with wildfire costs, but it did not do away with inverse condemnation.)
As more and more people move into wildland-urban zones, these new arrivals will need to be served with electric power, which means that not only will there be more people living in the zones threatened by wildfires, but more power lines will need to be built, increasing the risk of fires. Disaster researchers call this the expanding bull’s-eye effect.
Also, as more people move into vulnerable regions—and then build expensive infrastructure in those areas—the costs of natural disasters increase. This effect has been shown dramatically in coastal areas such as Houston that have seen the damage estimates associated with hurricanes skyrocket. The expanding bull’s-eye means the costs of rebuilding will keep climbing even if the frequency and severity of natural disasters doesn’t change.
So, California’s fire country faces a double-barreled threat: More lives and infrastructure lie in the path of potential fires than ever before. And the fires are getting bigger. That combination explains why 6 out of the 10 most destructive fires in California history have occurred in the past three years.
So far, California is not doing much to discourage people from moving into its danger zones. Moritz, Naomi Tague, and Sarah Anderson, researchers at the University of California, Santa Barbara, maintain that “people must begin to pay the costs for living in fire-prone landscapes.”
They argue that currently, “the relative lack of disincentives to develop in risky areas—for example, expecting state and federal payments for [fire] suppression and losses—ensures that local decisions will continue to promote disasters for which we all pay.”
(Disaster experts make a similar argument about how federal flood insurance and other programs encourage people to live in hurricane-prone areas.)
One financial analyst who works closely with California utilities believes the inverse condemnation rule is part of this problem: “These communities are very dangerous to supply power to,” he says. “But the utility is forced to carry all the risk. They can’t charge their customers a premium for fire risk.”
Of course, when fires do occur, the residents of these areas suffer the most. The question is how to provide the right incentives for people so that we limit the chances of this happening again. Looking ahead, “We need to ensure that prospective homeowners can make informed decisions about the risks they face in the WUI,” Moritz, Tague, and Anderson say.
What else can be done? Building and zoning codes can be changed to make towns less fire prone. Homes that are built or retrofitted with fireproof materials—and landscaped to keep shrubbery away from structures—can usually survive typical wildfires. In new developments, homes can be clustered and surrounded by fire-resistant buffer zones, such as orchards.
And, no matter how well designed, communities in fire zones need realistic evacuation plans and better emergency communications. (Poor communications and evacuation plans that did not account for how fast a fire can move were among the many failures in Paradise.)
There’s even a grain of truth to Trump’s comments that better forest management can reduce the ferocity of wildfires, though it’s not clear it would have helped in the case of the Camp Fire. The Santa Barbara researchers recommend increasing “fuel management such as controlled burns, vegetation clearing, forest thinning, and fire breaks.”
But no amount of fire-proofing or woodland management is going to eliminate fires.
If global warming models hold true, fire seasons are going to be hotter and last longer. Just as people in coastal areas need to adapt to hurricanes, residents of fire country need to learn to live with fire.
In both cases, the states and the federal government need to reconsider policies that encourage people to move into these vulnerable areas. It’s easy to see why people love living in mountain foothills and forests—just as it’s easy to see why they love living on beaches.
But overdevelopment of fire-prone landscapes means multiplying the inherent hazards of these regions. People need to accept that the problem isn’t just fire—it’s us.
This is an archived copy of an article for our students from thelogicofscience.com
The cornerstone argument of climate change deniers is that our current warming is just a natural cycle, and this claim is usually accompanied by the statement, “the planet has warmed naturally before.” This line of reasoning is, however, seriously flawed both logically and factually. Therefore, I want to examine both the logic and the evidence to explain why this argument is faulty and why we are actually quite certain that we are the cause of our planet’s current warming.
The fact that natural climate change occurred in the past does not mean that the current warming is natural.
I cannot overstate the importance of this point. Many people say, “but the planet has warmed naturally before” as if that automatically means that our current warming is natural, but nothing could be further from the truth. In technical terms, this argument commits a logical fallacy known as non sequitur (this is the fallacy that occurs whenever the conclusion of a deductive argument does not follow necessarily from the premises). The fact that natural warming has occurred before only tells us that it is possible for natural warming to occur. It does not indicate that the current warming is natural, especially given the evidence that it is anthropogenic (man-made).
To put this another way, when you claim that virtually all of the world’s climatologists are wrong and the earth is actually warming naturally, you have placed the burden of proof on yourself to provide evidence for that claim. In other words, simply citing previous warming events does not prove that the current warming is natural. You have to actually provide evidence for a natural cause of the current warming, but (as I’ll explain shortly) no such mechanism exists.
Natural causes of climate change
Now, let’s actually take a look at the natural causes of climate change to see if any of them can account for our current warming trend (spoiler alert, they can’t).
Sun
The sun is an obvious suspect for the cause of climate change. The sun is clearly an important player in our planet’s climate, and it has been responsible for some warming episodes in the past. So if, for some reason, it was burning hotter now than in the past, that would certainly cause our climate to warm. There is, however, one big problem: it’s not substantially hotter now than it was in the recent past. Multiple studies have looked at whether or not the output from the sun has increased and whether or not the sun is responsible for our current warming, and the answer is a resounding “no” (Meehl, et al. 2004; Wild et al. 2007; Lockwood and Frohlich 2007, 2008; Lean and Rind 2008; Imbers et al. 2014).
It likely caused some warming in the first half of the 20th century, but since then, the output from the sun does not match the rise in temperatures (in fact, it has decreased slightly; Lockwood and Frohlich 2007, 2008). Indeed, Foster and Rahmstorf (2011) found that after correcting for solar output, volcanoes, and El Niño events, the warming trend was even clearer, which is the exact opposite of what we would expect if the sun were driving climate change (i.e., if the sun were the cause, then removing its effect should have produced a flat line, not a strong increase).
Finally, the most compelling evidence against the sun hypothesis and for anthropogenic warming is (in my opinion) the satellite data. Since the 70s, we have been using satellites to measure the energy leaving the earth (specifically, the wavelengths of energy that are trapped by CO2).
Thus, if global warming is actually caused by greenhouse gasses trapping additional heat, we should see a fairly constant amount of energy entering the earth, but less energy leaving it. In contrast, if the sun is driving climate change, we should see that both the energy entering and leaving the earth have increased.
Do you want to guess which prediction came true? That’s right, there has been very little change in the energy from the sun, but there has been a significant decrease in the amount of energy leaving the earth (Harries et al. 2001; Griggs and Harries. 2007). That is about as close to “proof” as you can get in science, and if you are going to continue to insist that climate change is natural, then I have one simple question for you: where is the energy going? We know that the earth is trapping more heat now than it did in the past. So if it isn’t greenhouse gasses that are trapping the heat, then what is it?
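To make the logic of that question explicit, think of the earth’s heat content Q as a simple energy budget (a minimal sketch of the argument above, not a full radiative model):

$$\frac{dQ}{dt} = E_{\text{in}} - E_{\text{out}}$$

The satellite record shows $E_{\text{in}}$ (energy arriving from the sun) staying roughly constant while $E_{\text{out}}$ (energy escaping to space at the wavelengths CO2 absorbs) has fallen, so $dQ/dt > 0$ and heat accumulates in the system. A sun-driven warming would instead require $E_{\text{in}}$ to rise, which is not what the measurements show.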
Milankovitch cycles
Other important drivers of the earth’s climate are long-term cycles called Milankovitch cycles, which involve changes in the shape of the earth’s orbit, the tilt of its axis, and the direction the axis points (eccentricity, obliquity, and precession, respectively). In fact, these appear to be one of the biggest initial causes of prominent natural climate changes (like the ice ages). So it is understandable that people would suspect that they are driving the current climate change, but there are several reasons why we know that isn’t the case.
First, Milankovitch cycles are very slow, long-term cycles. Depending on which of the three cycles we are talking about, they take tens of thousands of years, or even about a hundred thousand years, to complete, so changes from them occur very slowly. In contrast, our current change is very rapid (happening over a few decades as opposed to a few millennia). The rate of our current change is therefore a clear indication that it is not being caused by Milankovitch cycles.
Second, you need to understand how Milankovitch cycles affect the temperature. The eccentricity cycle could, in principle, directly cause global warming by changing the earth’s distance from the sun; however, it would warm or cool the climate by affecting how much energy from the sun reaches the earth. In other words, we are back to the argument that climate change is caused by increased energy from the sun, which we know isn’t happening (see the section above).
The other cycles (precession and obliquity) affect which part of the earth is warmed and the season during which the warming takes place, rather than the total amount of energy entering the earth. Thus, they initially just cause regional warming. However, that regional warming leads to global warming by altering the oceans’ currents and warming the oceans, which results in the oceans releasing stored CO2 (Martin et al. 2005; Toggweiler et al. 2006; Schmittner and Galbraith 2008; Skinner et al. 2010).
That CO2 is actually the major driver of past climate changes (Shakun et al. 2012). In other words, when we study past climate changes, what we find is that CO2 levels are a critically important factor, and, as I’ll explain later, we know that the current increase in CO2 is from us. Thus, when you understand the natural cycles, they actually support anthropogenic global warming rather than refuting it.
Volcanoes
At this point, people generally resort to claiming that volcanoes are actually the thing that is emitting the greenhouse gasses. That argument sounds appealing, but in reality, volcanoes usually emit less than 1% of the CO2 that we emit each year (Gerlach 2011). Also, several studies have directly examined volcanic emissions to see if they can explain our current warming, and they can’t (Meehl, et al. 2004; Imbers et al. 2014).
Carbon dioxide (CO2)
A final major driver of climate change is, in fact, CO2. Let’s get a couple of things straight right at the start. First, we know that CO2 traps heat, and we know that increasing the amount of CO2 in an environment will result in the temperature increasing (you can find a nice list of papers on the heat-trapping abilities of CO2 here).
Additionally, everyone (even climate “skeptics”) agrees that CO2 plays a vital role in maintaining the earth’s temperature. From those facts, it is intuitively obvious that increasing the CO2 in the atmosphere will result in the temperature increasing. Further, CO2 appears to be responsible for a very large portion of the warming during past climate changes (Lorius et al. 1990; Shakun et al. 2012). Note: For past climate changes, the CO2 does lag behind the temperature initially, but as I explained above, the initial warming triggers an increase in CO2, and the CO2 drives the majority of the climate change.
At this point, you may be thinking, “fine, it’s CO2, but the CO2 isn’t from us, nature produces way more than we do.” It is true that nature emits more CO2 than us, but prior to the industrial revolution, nature was in balance, with the same amount of CO2 being removed as was emitted. Thus, there was no net gain. We altered that equation by emitting additional CO2.
Further, the increase that we have caused is no little thing. We have increased atmospheric CO2 by nearly half compared to pre-industrial levels (from roughly 280 ppm to more than 400 ppm), and the current concentration of CO2 in the atmosphere is higher than it has been at any point in the past 800,000 years. So, yes, we only emit a small fraction of the total CO2 each year, but we are emitting more CO2 than nature can remove, and a little bit each year adds up to a lot over several decades.
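To see how “a little bit each year adds up,” here is a back-of-the-envelope sketch in Python. The numbers are round, assumed values chosen purely for illustration (a pre-industrial baseline near 280 ppm and a constant net annual gain); real emissions have not been constant, but the point is how a small yearly imbalance compounds over a century.

```python
# Illustrative only: round, assumed numbers, not measured data.
# Shows how a small net annual addition of CO2 (what we emit minus
# what nature removes) compounds into a large change over decades.

baseline_ppm = 280.0      # assumed pre-industrial concentration
net_gain_per_year = 1.5   # assumed average net increase (ppm/year)
years = 100               # roughly a century of industrial emissions

concentration = baseline_ppm
for _ in range(years):
    concentration += net_gain_per_year

increase_pct = 100 * (concentration - baseline_ppm) / baseline_ppm
print(f"After {years} years: {concentration:.0f} ppm "
      f"({increase_pct:.0f}% above the baseline)")
# -> After 100 years: 430 ppm (54% above the baseline)
```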
Additionally, we know that the current massive increase in CO2 is from us because of the C13 levels. Carbon has two stable isotopes (C12 and C13), but C13 is heavier than C12. Thus, when plants take carbon from the air and use it to make carbohydrates, they take a disproportionate amount of C12.
As a result, plants, animals (which get their carbon from eating plants), and fossil fuels (which formed from ancient plants and animals) have lower C13/C12 ratios (proportionally more C12) than the atmosphere does.
Therefore, if burning fossil fuels is responsible for the current increase in CO2, we should see the C13/C12 ratio of the atmosphere shift to be closer to that of fossil fuels (i.e., contain proportionally more C12), and, guess what, that is exactly what we see (Bohm et al. 2002; Ghosh and Brand 2003; Wei et al. 2009). This is unequivocal evidence that we are the cause of the current increase in CO2.
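The isotope argument can also be sketched as a simple mass-balance calculation. The numbers below are approximate, commonly cited values used purely for illustration (fossil-fuel carbon is strongly depleted in C13 relative to the atmosphere), and this simple mixing ignores exchange with the oceans and biosphere, which damps the real-world shift; the point is the direction of the change, not its exact size.

```python
# Illustrative two-source mixing: adding fossil-fuel-derived CO2
# (depleted in C13) lowers the atmosphere's C13/C12 signature
# (its delta-13C value). Values are approximate, for illustration only.

atm_ppm = 280.0        # assumed pre-industrial CO2 (ppm)
atm_d13c = -6.5        # approximate pre-industrial atmospheric delta-13C (per mil)
fossil_d13c = -28.0    # approximate delta-13C of fossil-fuel carbon (per mil)
added_ppm = 120.0      # assumed fossil-derived CO2 now in the air (ppm)

# Concentration-weighted mixing (a standard approximation for small deltas);
# real exchange with the oceans and plants makes the observed shift smaller.
mixed_d13c = (atm_ppm * atm_d13c + added_ppm * fossil_d13c) / (atm_ppm + added_ppm)

print(f"delta-13C shifts from {atm_d13c} toward {mixed_d13c:.1f} per mil")
# The signature moves toward the fossil-fuel value (more C12), which is
# the direction of the shift actually observed in the atmosphere.
```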
Finally, we can assemble all of this information into a deductive logical argument. If CO2 traps heat, and we have increased the CO2 in the atmosphere, then more heat will be trapped. To illustrate how truly inescapable that conclusion is, here is an analogous argument:
1). Insulation traps heat
2). You doubled the insulation of your house
3). Therefore, your house will trap more heat
Note: Yes, I know that the situation is much more complex than simply CO2 trapping heat, and there are various feedback mechanisms at play, but that does not negate the core argument.
Putting the pieces together
So far, I have been talking about all of the drivers of climate change independently, which is clearly an oversimplification, because, in all likelihood, several mechanisms are all acting together. Therefore, the best way to test whether or not the current warming is natural is actually to construct statistical models that include both natural and man-made factors. We can then use those models to see which factors are causing climate change.
[Figure omitted in this copy: modeled temperatures with and without human greenhouse gas emissions, compared against observations; see Hansen et al. 2005. Earth’s energy imbalance: confirmation and implications. Science 308:1431–1435.]
In other words, including human greenhouse gas emissions in the models is the only way to get the models to match the observed warming. This is extremely clear evidence that the current warming is not entirely natural. To be clear, natural factors do play a role and are contributing, but human factors are extremely important, and most of the models show that they account for the majority of the warming.
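To give a concrete, deliberately simplified sense of what “constructing statistical models that include both natural and man-made factors” can look like, here is a toy sketch using ordinary least squares. Every series in it is a synthetic placeholder generated inside the script, not real data; actual attribution studies (such as the papers cited here) use observed forcings, observed temperatures, and far more careful statistics.

```python
# Toy attribution sketch: regress a temperature series on candidate drivers
# and see which combination is needed to explain the long-term trend.
# All series here are synthetic placeholders, NOT real observations.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2021)
n = len(years)

# Synthetic stand-ins for solar, volcanic, and anthropogenic forcing:
solar = 0.1 * np.sin(2 * np.pi * (years - 1880) / 11)   # ~11-year cycle, no trend
volcanic = -0.3 * (rng.random(n) < 0.03)                 # occasional cooling spikes
anthro = 0.008 * (years - 1880)                          # steadily rising forcing

# Synthetic "observed" temperature built from all three drivers plus noise:
temperature = anthro + solar + volcanic + 0.1 * rng.standard_normal(n)

def fit(predictors):
    """Ordinary least squares; returns the fitted temperature series."""
    X = np.column_stack(predictors + [np.ones(n)])        # include an intercept
    coef, *_ = np.linalg.lstsq(X, temperature, rcond=None)
    return X @ coef

def rmse(fitted):
    return float(np.sqrt(np.mean((temperature - fitted) ** 2)))

natural_only = fit([solar, volcanic])
with_anthro = fit([solar, volcanic, anthro])

print(f"residual error, natural drivers only   : {rmse(natural_only):.3f}")
print(f"residual error, natural + anthropogenic: {rmse(with_anthro):.3f}")
# In this toy example, the natural-only model cannot reproduce the
# long-term rise; adding the anthropogenic term is what closes the gap.
```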
Correlation vs. causation
It is usually about now that deniers of anthropogenic climate change start to argue that scientists are actually committing a correlation fallacy, and that simply showing a correlation between temperature and the CO2 that we produce does not mean that the CO2 is causing the temperature increase. There are, however, several problems with that argument.
First, correlation can indicate causation under certain circumstances, namely, situations where you have controlled all confounding factors. In other words, if you can show that Y is the only thing that is changing significantly with X, then you can reach a causal conclusion (even placebo-controlled drug trials are really just showing correlations between taking the drug and recovery, but because they used the control, they can use that correlation to reach a causal conclusion).
In the case of climate change, of course, we have examined the confounding factors. As I explained in the previous section, we have constructed statistical models with the various drivers of climate change, and anthropogenic greenhouse gasses are necessary to account for the current warming. In other words, we have controlled for the other causes of climate change, therefore we can reach a causal conclusion.
Second, and perhaps more importantly, there is nothing wrong with using correlation to show a particular instance of causation if a causal relationship between X and Y has already been established. Let me give an example: smoking rates and lung/bronchial cancer rates in the US. The two are strongly correlated over time (P < 0.0001), and I don’t think that anyone is going to disagree with the notion that the decrease in smoking is largely responsible for the decrease in lung cancers.
Indeed, there is nothing wrong with reaching that conclusion, and it does not commit a correlation fallacy. This is the case because a causal relationship between smoking and cancer has already been established. In other words, we know that smoking causes cancer because of other studies.
Therefore, when you see that the two are correlated over time, there is nothing wrong with inferring that smoking is driving the cancer rates. Likewise, we know from laboratory tests and past climate data that CO2 traps heat and that increasing it results in more heat being trapped. In other words, a causal relationship between CO2 and temperature has already been established. Therefore, there is nothing fallacious about looking at a correlation between CO2 and temperature over time and concluding that the CO2 is causing the temperature change.
Ad hoc fallacies and the burden of proof
At this point, I often find that people are prone to proposing that some unknown mechanism exists that scientists haven’t found yet. This is, however, a logical fallacy known as ad hoc. You can’t just make up an unknown mechanism whenever it suits you. If that was valid, then you could always reject any scientific result that you wanted, because it is always possible to propose some unknown mechanism.
Similarly, you can’t use the fact that scientists have been wrong before as evidence, nor can you argue that, “there are still things that we don’t understand about the climate, so I don’t have to accept anthropogenic climate change” (that’s an argument from ignorance fallacy). Yes, there are things that we don’t understand, but we understand enough to be very confident that we are causing climate change, and, once again, you can’t just assume that all of our current research is wrong.
The key problem here is the burden of proof. By claiming that there is some other natural mechanism out there, you have just placed the burden of proof squarely on your shoulders. In other words, you must provide actual evidence of such a mechanism. If you cannot do that, then your argument is logically invalid and must be rejected.
Summary/Conclusion
Let’s review, shall we?
We know that it’s not the sun
We know that it’s not Milankovitch cycles
We know that it’s not volcanoes
We know that even when combined, natural causes cannot explain the current warming
We know that CO2 traps heat
We know that increasing CO2 causes more heat to be trapped
We know that CO2 was largely responsible for past climate changes
We know that we have increased the CO2 in the atmosphere by nearly half over pre-industrial levels
We know that the earth is trapping more heat now than it used to
We know that including anthropogenic greenhouse gasses in the models is the only way to explain the current warming trend
When you look at that list of things that we have tested, the conclusion that we are causing the planet to warm is utterly inescapable. For some baffling reason, people often act as if scientists have never bothered to look for natural causes of climate change, but the exact opposite is true. We have carefully studied past climate changes and looked at the natural causes of climate changes, but none of them can explain the current warming.
The only way to account for our current warming is to include our greenhouse gasses in the models. This is extremely clear evidence that we are causing the climate to warm, and if you want to continue to insist that the current warming is natural, then you must provide actual evidence for the existence of a mechanism that scientists have missed, and you must provide evidence that it is a better explanation for the current warming than CO2.
Additionally, you are still going to have to refute the deductive argument that I presented earlier (i.e., show that a premise is false or that I committed a logical fallacy), because finding a previously unknown mechanism of climate change would not discredit the importance of CO2 or the fact that we have substantially increased it. Finally, you also need to explain why the earth is trapping more heat than it used to. If you can do all of that, then we’ll talk, but if you can’t, then you must accept the conclusion that we are causing the planet to warm.
Allen et al. 2006. Quantifying anthropogenic influence on recent near-surface temperature change. Surveys in Geophysics 27:491–544.
Bohm et al. 2002. Evidence for preindustrial variations in the marine surface water carbonate system from coralline sponges. Geochemistry, Geophysics, Geosystems 3:1–13.
Foster and Rahmstorf. 2011. Global temperature evolution 1979–2010. Environmental Research Letters 7:011002.
Gerlach 2011. Volcanic versus anthropogenic carbon dioxide. EOS 92:201–202.
Ghosh and Brand. 2003. Stable isotope ratio mass spectrometry in global climate change research. International Journal of Mass Spectrometry 228:1–33.
Griggs and Harries. 2007. Comparison of spectrally resolved outgoing longwave radiation over the tropical Pacific between 1970 and 2003 using IRIS, IMG, and AIRS. Journal of Climate 20:3982–4001.
Hansen et al. 2005. Earth’s energy imbalance: confirmation and implications. Science 308:1431–1435.
Harries et al. 2001. Increases in greenhouse forcing inferred from the outgoing longwave radiation spectra of the Earth in 1970 and 1997. Nature 410:355–357.
Imbers et al. 2014. Sensitivity of climate change detection and attribution to the characterization of internal climate variability. Journal of Climate 27:3477–3491.
Lean and Rind. 2008. How natural and anthropogenic influences alter global and regional surface temperatures: 1889 to 2006. Geophysical Research Letters 35:L18701.
Lockwood and Frohlich. 2007. Recent oppositely directed trends in solar climate forcings and the global mean surface air temperature. Proceedings of the Royal Society A 463:2447–2460.
Lockwood and Frohlich. 2008. Recent oppositely directed trends in solar climate forcings and the global mean surface air temperature. II. Different reconstructions of the total solar irradiance variation and dependence on response time scale. Proceedings of the Royal Society A 464:1367–1385.
Lorius et al. 1990. The ice-core record: climate sensitivity and future greenhouse warming. Nature 347:139–145.
Martin et al. 2005. Role of deep sea temperature in the carbon cycle during the last glacial. Paleoceanography 20:PA2015.
Meehl et al. 2004. Combinations of natural and anthropogenic forcings in the twentieth-century climate. Journal of Climate 17:3721–3727.
Schmittner and Galbraith 2008. Glacial greenhouse-gas fluctuations controlled by ocean circulation changes. Nature 456:373–376.
Shakun et al. 2012. Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation. Nature 484:49–54.
Skinner et al. 2010. Ventilation of the deep Southern Ocean and deglacial CO2 rise. Science 328:1147–1151.
Stott et al. 2001. Attribution of twentieth century temperature change to natural and anthropogenic causes. Climate Dynamics 17:1–21.
Toggweiler et al. 2006. Mid-latitude westerlies, atmospheric CO2, and climate change during the ice ages. Paleoceanography 21:PA2005.
Wei et al. 2009. Evidence for ocean acidification in the Great Barrier Reef of Australia. Geochimica et Cosmochimica Acta 73:2332–2346.
Wild et al. 2007. Impact of global dimming and brightening on global warming. Geophysical Research Letters
______________________
This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (Added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546)
Before the 1760s, textile production was a cottage industry using mainly flax and wool. A typical weaving family would own one hand loom, which would be operated by the man with the help of a boy; the wife, girls, and other women could make sufficient yarn for that loom.
The knowledge of textile production had existed for centuries. India had a long-established industry that manufactured cotton textiles, and when raw cotton was exported to Europe it could be used to make fustian (a cloth woven with a linen warp and a cotton weft).
Two systems had developed for spinning: the simple wheel, which used an intermittent process, and the more refined Saxony wheel, which drove a differential spindle and flyer, with a heck that guided the thread onto the bobbin, as a continuous process. These were satisfactory for use with hand looms, but neither wheel could produce enough thread for the looms after John Kay’s invention of the flying shuttle in 1734, which made the loom twice as productive.
Cloth production moved away from the cottage into manufactories. The first moves towards manufactories, called mills, were made in the spinning sector; the move in the weaving sector came later. By the 1820s, all cotton, wool, and worsted yarn was spun in mills, but this yarn went to outworking weavers who continued to work in their own homes. A mill that specialised in weaving fabric was called a weaving shed.
This section has been adapted from “Textile manufacture during the British Industrial Revolution,” Wikipedia.
Francis Cabot Lowell
Samuel Slater had established factories in the 1790s after building textile machinery. Francis Cabot Lowell took it a step further. In 1810, Lowell visited the textile mills in England, took note of machinery that was not available in the United States, and sketched and memorized the details.
One machine in particular, the power loom, could weave thread into cloth. Lowell brought his ideas back to the United States and formed the Boston Manufacturing Company in 1812. With the money he made from this company, he built a water-powered mill. Francis Cabot Lowell is credited with building the first factory where raw cotton could be made into cloth under one roof.
This process, also known as the “Waltham-Lowell System,” reduced the cost of making cotton cloth. By producing cheaper cotton cloth, Lowell’s company quickly became successful. After Lowell brought the power loom to the United States, the new textile industry boomed. The majority of businesses in the United States by 1832 were in the textile industry.
Lowell also found a specific workforce for his textile mills. He employed single girls, daughters of New England farm families, known as the Lowell Girls. Many women were eager to work to show their independence. Lowell found this convenient because he could pay women lower wages than he would have to pay men. Women also worked more efficiently than men did and were more skilled when it came to cotton production. This way, he got his work done efficiently, with the best results, and it cost him less. The success of the Lowell mills symbolizes the success and technological advancement of the Industrial Revolution.
Note that the analysis above, while correct, is incomplete: this system is an example of how powerful factory owners, combined with inequitable social and legal norms, allowed one group (in this case, wealthy landowners and factory owners) to profit at the expense of the people performing the actual labor that produced items of value (in this case, native-born and immigrant women).
Ethical issues
This imbalance of power kept people who worked 40 to 60 hours a week poor by depriving them of a fair share of the profits from their own labor. It also caused many injuries, and sometimes deaths, from unsafe factory conditions. Factory conditions in America and Europe did not improve until the development of labor unions. If you or people you know are able to work 40 hours or less a week, without living in poverty, in a safe environment, and without fear of death, that is largely due to labor unions.
Labor is prior to and independent of capital. Capital is only the fruit of labor, and could never have existed if labor had not first existed. Labor is the superior of capital, and deserves much the higher consideration.
– Abraham Lincoln, First Annual Message, 12/3/1861
“If capitalism is fair then unionism must be. If men have a right to capitalize their ideas and the resources of their country, then that implies the right of men to capitalize their labor.”
— Frank Lloyd Wright
HS-ETS4-5(MA). Explain how a machine converts energy, through mechanical means, to do work. Collect and analyze data to determine the efficiency of simple and complex machines.
Massachusetts History and Social Science Curriculum Framework
Grade 6: HISTORY AND GEOGRAPHY Interpret geographic information from a graph or chart and construct a graph or chart that conveys geographic information (e.g., about rainfall, temperature, or population size data)
INDUSTRIAL REVOLUTION AND SOCIAL AND POLITICAL CHANGE IN EUROPE, 1800–1914 WHII.6 Summarize the social and economic impact of the Industrial Revolution… population and urban growth
In the 1700s, most manufacturing was still done in homes or small shops, using small, handmade machines that were powered by muscle, wind, or moving water. 10J/E1** (BSL)
In the 1800s, new machinery and steam engines to drive them made it possible to manufacture goods in factories, using fuels as a source of energy. In the factory system, workers, materials, and energy could be brought together efficiently. 10J/M1*
The invention of the steam engine was at the center of the Industrial Revolution. It converted the chemical energy stored in wood and coal into motion energy. The steam engine was widely used to solve the urgent problem of pumping water out of coal mines. As improved by James Watt, Scottish inventor and mechanical engineer, it was soon used to move coal; drive manufacturing machinery; and power locomotives, ships, and even the first automobiles. 10J/M2*
The Industrial Revolution developed in Great Britain because that country made practical use of science, had access by sea to world resources and markets, and had people who were willing to work in factories. 10J/H1*
The Industrial Revolution increased the productivity of each worker, but it also increased child labor and unhealthy working conditions, and it gradually destroyed the craft tradition. The economic imbalances of the Industrial Revolution led to a growing conflict between factory owners and workers and contributed to the main political ideologies of the 20th century. 10J/H2
Today, changes in technology continue to affect patterns of work and bring with them economic and social consequences. 10J/H3*