KaiserScience


Particle Detectors

A particle detector is a device used to detect, track, and/or identify ionizing particles.

These particles may have been produced by nuclear decay, cosmic radiation, or reactions in a particle accelerator.

Particle detectors can measure a particle’s energy, momentum, spin, charge, and type, in addition to simply registering its presence.

Cloud Chamber

(Adapted from Wikipedia)

A cloud chamber, also known as a Wilson cloud chamber, is a particle detector used for visualizing the passage of ionizing radiation.

A cloud chamber consists of a sealed environment containing a supersaturated vapor of water or alcohol.

An energetic charged particle (for example, an alpha or beta particle) interacts with the gaseous mixture:

it knocks electrons off gas molecules via electrostatic forces during collisions.

This leaves a trail of ionized gas particles. These ions act as condensation centers: a mist-like trail of small droplets forms if the gas mixture is at the point of condensation.

These droplets are visible as a “cloud” track that persists for several seconds while the droplets fall through the vapor.

These tracks have characteristic shapes. For example, an alpha particle track is thick and straight, while an electron track is wispy and shows more evidence of deflections by collisions.

Cloud chambers played a prominent role in experimental particle physics from the 1920s to the 1950s, until the advent of the bubble chamber.

This is a Diffusion Cloud Chamber used for public demonstrations at the Museum of Technology in Berlin. The first part shows the alpha and beta radiation occurring around us all the time, thanks to normal activity in the atmosphere. Then a sample of Radon 220 (half-life 55 sec) is inserted into the chamber and all hell breaks loose as an alpha-decay party ensues!

Source: Derek McKenzie, Physics Footnotes, http://physicsfootnotes.com/radon-cloud-chamber/
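The 55-second half-life quoted above makes for a quick back-of-the-envelope calculation. A minimal sketch, assuming simple exponential decay (the numbers are illustrative, not taken from the demonstration itself):

```python
HALF_LIFE_S = 55.0  # approximate half-life of radon-220, as quoted above

def fraction_remaining(t_seconds: float) -> float:
    """Fraction of the original Rn-220 nuclei left after t seconds,
    using N(t) = N0 * (1/2)^(t / half-life)."""
    return 0.5 ** (t_seconds / HALF_LIFE_S)

# After one half-life, half the sample remains; after ~5 minutes,
# nearly all of it has decayed -- which is why the burst of alpha
# tracks in the demonstration fades within a few minutes.
print(fraction_remaining(55))    # 0.5
print(fraction_remaining(300))   # ~0.023
```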

Diffusion cloud chamber with radon gas

Here is an example of two particles colliding within an accelerator, and decaying into a variety of other products.

 


Particles colliding in the LHC

Let’s look at some detailed examples. We’ll see photographs of the particle detector, then we’ll see cutaway diagrams showing us what is inside the detector.

While each detector is different – designed for a different task – they all have some basic elements in common. Each has a set of wires that make a signal if a particle flies through them. These wires are arrayed around the target area – the place where the particles are forced to collide.

When a collision occurs, some particles are broken free and fly outwards.

More remarkably, when a collision occurs, some particles are actually created – we generate particles that weren’t even there before. How is that possible? Short version: Einstein’s mass-energy equivalence means that matter can be converted into energy, and vice versa. The enormous energy in these collisions creates many new subatomic particles. Some of these are stable; others exist only briefly before decaying.
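The energy cost of creating a particle follows directly from E = mc². A minimal sketch using the electron mass (the constants are rounded CODATA-style values, not tied to any particular experiment):

```python
ELECTRON_MASS_KG = 9.109e-31  # electron rest mass
C = 2.998e8                   # speed of light, m/s
JOULES_PER_MEV = 1.602e-13

def rest_energy_mev(mass_kg: float) -> float:
    """Rest energy E = m c^2, converted from joules to MeV."""
    return mass_kg * C**2 / JOULES_PER_MEV

# Creating an electron-positron pair costs at least twice the
# electron's rest energy -- collision energy literally becomes matter.
print(rest_energy_mev(ELECTRON_MASS_KG))      # ~0.511 MeV
print(2 * rest_energy_mev(ELECTRON_MASS_KG))  # ~1.022 MeV
```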

ALICE, A Large Ion Collider Experiment in the LHC at CERN

This animation shows what happens when electrons and positrons collide in the ILD detector, one of the planned detectors for the future ILC. Many collisions will happen at the same time around the clock, producing a vast array of possible events. This shows one possible collision event involving the Higgs boson.

 

Conundrums

“With the uncertainty principle and the observer effects in mind, how do these devices measure both the position and momentum of sub-atomic particles with the kind of accuracy that they seem to get, with the beautiful color pictures?”

How do these devices measure both the position and momentum of particles without violating the Heisenberg Uncertainty principle?
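Part of the resolution is scale: the uncertainty principle sets a floor, Δx·Δp ≥ ħ/2, that is astronomically small compared with detector resolutions and beam momenta. A rough numerical sketch (the track width and momentum are illustrative values, not from any specific detector):

```python
HBAR = 1.055e-34  # reduced Planck constant, J*s

# Suppose a detector localizes a track to ~0.1 mm:
delta_x = 1e-4  # meters
delta_p_min = HBAR / (2 * delta_x)  # minimum momentum uncertainty, kg*m/s

# A particle with momentum 1 GeV/c carries roughly 5.3e-19 kg*m/s:
p_1gev = 5.34e-19

# The quantum-mechanical floor on momentum uncertainty is about a
# trillionth of the particle's momentum -- far below what the
# detector could resolve anyway, so no violation occurs.
print(delta_p_min)            # ~5.3e-31 kg*m/s
print(delta_p_min / p_1gev)   # ~1e-12
```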

 

Infographics


How Particle Accelerators Work

 

Apps

The Particle Adventure app lets us discover: The Standard Model, Accelerators and Particle Detectors, Higgs Boson Discovered, Unsolved Mysteries, Particle Decays and Annihilations.

Android: The Particle Adventure.   iOS (Apple): The Particle Adventure

Interactive website sims

The Particle Adventure

CPEP Contemporary Physics Education Project

 

Further reading

Symmetry Magazine (for high school students)

 

Learning Standards

SAT Subject Test: Physics

Quantum phenomena, such as photons and the photoelectric effect. Atomic physics, such as the Rutherford and Bohr models, atomic energy levels, and atomic spectra. Nuclear and particle physics, such as radioactivity, nuclear reactions, and fundamental particles. Relativity, such as time dilation, length contraction, and mass-energy equivalence.

A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas (2012)

Electromagnetic radiation can be modeled as a wave of changing electric and magnetic fields or as particles called photons. The wave model is useful for explaining many features of electromagnetic radiation, and the particle model explains other features. Quantum theory relates the two models…. Knowledge of quantum physics enabled the development of semiconductors, computer chips, and lasers, all of which are now essential components of modern imaging, communications, and information technologies.

 

Placebo effect

At the moment this is a placeholder article.

Placebo effect

image from shutterstock.com

What if the Placebo Effect Isn’t a Trick? New research is zeroing in on a biochemical basis for the placebo effect — possibly opening a Pandora’s box for Western medicine.

The New York Times Magazine, Gary Greenberg, Nov 7, 2018

Give people a sugar pill, they have shown, and those patients — especially if they have one of the chronic, stress-related conditions that register the strongest placebo effects and if the treatment is delivered by someone in whom they have confidence — will improve. Tell someone a normal milkshake is a diet beverage, and his gut will respond as if the drink were low fat. Take athletes to the top of the Alps, put them on exercise machines and hook them to an oxygen tank, and they will perform better than when they are breathing room air — even if room air is all that’s in the tank. Wake a patient from surgery and tell him you’ve done an arthroscopic repair, and his knee gets better even if all you did was knock him out and put a couple of incisions in his skin. Give a drug a fancy name, and it works better than if you don’t.

You don’t even have to deceive the patients. You can hand a patient with irritable bowel syndrome a sugar pill, identify it as such and tell her that sugar pills are known to be effective when used as placebos, and she will get better, especially if you take the time to deliver that message with warmth and close attention. Depression, back pain, chemotherapy-related malaise, migraine, post-traumatic stress disorder: The list of conditions that respond to placebos — as well as they do to drugs, with some patients — is long and growing.

But as ubiquitous as the phenomenon is, and as plentiful the studies that demonstrate it, the placebo effect has yet to become part of the doctor’s standard armamentarium — and not only because it has a reputation as “fake medicine” doled out by the unscrupulous to the credulous. It also has, so far, resisted a full understanding, its mechanisms shrouded in mystery. Without a clear knowledge of how it works, doctors can’t know when to deploy it, or how.

Not that the researchers are without explanations. But most of these have traditionally been psychological in nature, focusing on mechanisms like expectancy — the set of beliefs that a person brings into treatment — and the kind of conditioning that Ivan Pavlov first described more than a century ago. These theories, which posit that the mind acts upon the body to bring about physical responses, tend to strike doctors and researchers steeped in the scientific tradition as insufficiently scientific to lend credibility to the placebo effect.

“What makes our research believable to doctors?” asks Ted Kaptchuk, head of Harvard Medical School’s Program in Placebo Studies and the Therapeutic Encounter. “It’s the molecules. They love that stuff.” As of now, there are no molecules for conditioning or expectancy — or, indeed, for Kaptchuk’s own pet theory, which holds that the placebo effect is a result of the complex conscious and nonconscious processes embedded in the practitioner-patient relationship — and without them, placebo researchers are hard-pressed to gain purchase in mainstream medicine.

But as many of the talks at the conference indicated, this might be about to change. Aided by functional magnetic resonance imaging (f.M.R.I.) and other precise surveillance techniques, Kaptchuk and his colleagues have begun to elucidate an ensemble of biochemical processes that may finally account for how placebos work and why they are more effective for some people, and some disorders, than others. The molecules, in other words, appear to be emerging. And their emergence may reveal fundamental flaws in the way we understand the body’s healing mechanisms, and the way we evaluate whether more standard medical interventions in those processes work, or don’t. Long a useful foil for medical science, the placebo effect might soon represent a more fundamental challenge to it.

In a way, the placebo effect owes its poor reputation to the same man who cast aspersions on going to bed late and sleeping in. Benjamin Franklin was, in 1784, the ambassador of the fledgling United States to King Louis XVI’s court. Also in Paris at the time was a Viennese physician named Franz Anton Mesmer. Mesmer fled Vienna a few years earlier when the local medical establishment determined that his claim to have cured a young woman’s blindness by putting her into a trance was false, and that, even worse, there was something unseemly about his relationship with her.

By the time he arrived in Paris and hung out his shingle, Mesmer had acquired what he lacked in Vienna: a theory to account for his ability to use trance states to heal people. There was, he claimed, a force pervading the universe called animal magnetism that could cause illness when perturbed. Conveniently enough for Mesmer, the magnetism could be perceived and de-perturbed only by him and people he had trained.

Mesmer’s method was strange, even in a day when doctors routinely prescribed bloodletting and poison to cure the common cold. A group of people complaining of maladies like fatigue, numbness, paralysis and chronic pain would gather in his office, take seats around an oak cask filled with water and grab on to metal rods immersed in the water. Mesmer would alternately chant, play a glass harmonium and wave his hands at the afflicted patients, who would twitch and cry out and sometimes even lose consciousness, whereupon they would be carried to a recovery room. Enough people reported good results that patients were continually lined up at Mesmer’s door waiting for the next session.

It was the kind of success likely to arouse envy among doctors, but more was at stake than professional turf. Mesmer’s claim that a force existed that could only be perceived and manipulated by the elect few was a direct challenge to an idea central to the Enlightenment: that the truth could be determined by anyone with senses informed by skepticism, that Scripture could be supplanted by facts and priests by a democracy of people who possessed them. So, when the complaints about Mesmer came to Louis, it was to the scientists that the king — at pains to show himself an enlightened man — turned. He appointed, among others, Lavoisier the chemist, Bailly the astronomer and Guillotin the physician to investigate Mesmer’s claims, and he installed Franklin at the head of their commission.

To the Franklin commission, the question wasn’t whether Mesmer was a fraud and his patients were dupes. Everyone could be acting in good faith, but belief alone did not prove that the magnetism was at work. To settle this question, they designed a series of trials that ruled out possible causes of the observed effects other than animal magnetism. The most likely confounding variable, they thought, was some faculty of mind that made people behave as they did under Mesmer’s ministrations. To rule this out, the panel settled upon a simple method: a blindfold. Over a period of a few months, they ran a series of experiments that tested whether people experienced the effects of animal magnetism even when they couldn’t see.

One of Mesmer’s disciples, Charles d’Eslon, conducted the tests. The panel instructed him to wave his hands at a part of a patient’s body, and then asked the patient where the effect was felt. They took him to a copse to magnetize a tree — Mesmer claimed that a patient could be treated by touching one — and then asked the patient to find it. They told patients d’Eslon was in the room when he was not, and vice versa, or that he was doing something that he was not. In trial after trial, the patients responded as if the doctor were doing what they thought he was doing, not what he was actually doing.

It was possibly the first-ever blinded experiment, and it soundly proved what scientists today call the null hypothesis: There was no causal connection between the behavior of the doctor and the response of the patients, which meant, as Franklin’s panel put it in their report, that “this agent, this fluid, has no existence.” That didn’t imply that people were pretending to twitch or cry out, or lying when they said they felt better; only that their behavior wasn’t a result of this nonexistent force. Rather, the panel wrote, “the imagination singly produces all the effects attributed to the magnetism.”

When the panel gave d’Eslon a preview of its findings, he took it with equanimity. Given the results of the treatment (as opposed to the experiment), he opined, the imagination, “directed to the relief of suffering humanity, would be a most valuable means in the hands of the medical profession” — a subject to which these august scientists might wish to apply their methods. But events intervened. Franklin was called back to America in 1785; Louis XVI had bigger trouble on his hands and, along with Lavoisier and Bailly, eventually met with the short, sharp shock of the device named for Guillotin.

The panel’s report was soon translated into English by William Godwin, the father of Mary Shelley. The story spread fast — not because of the healing potential that d’Eslon had suggested, but because of the implications for science as a whole. The panel had demonstrated that by putting imagination out of play, science could find the truth about our suffering bodies, in the same way it had found the truth about heavenly bodies.

Hiving off subjectivity from the rest of medical practice, the Franklin commission had laid the conceptual foundation for the brilliant discoveries of modern medicine, the antibiotics and vaccines and other drugs that can be dispensed by whoever happens to possess the prescription pad, and to whoever happens to have the disease. Without meaning to, they had created an epistemology for the healing arts — and, in the process, inadvertently conjured the placebo effect, and established it as that to which doctors must remain blind.

It wouldn’t be the last time science would turn its focus to the placebo effect only to quarantine it. At a 1955 meeting of the American Medical Association, the Harvard surgeon Henry Beecher pointed out to his colleagues that while they might have thought that placebos were fake medicine — even the name, which means “I shall please” in Latin, carries more than a hint of contempt — they couldn’t deny that the results were real. Beecher had been looking at the subject systematically, and he determined that placebos could relieve anxiety and postoperative pain, change the blood chemistry of patients in a way similar to drugs and even cause side effects. In general, he told them, more than one-third of patients would get better when given a treatment that was, pharmacologically speaking, inert.

If the placebo was as powerful as Beecher said, and if doctors wanted to know whether their drugs actually worked, it was not sufficient simply to give patients the drugs and see whether they did better than patients who didn’t interact with the doctor at all. Instead, researchers needed to assume that the placebo effect was part of every drug effect, and that drugs could be said to work only to the extent that they worked better than placebos. An accurate measure of drug efficacy would require comparing the response of patients taking it with that of patients taking placebos; the drug effect could then be calculated by subtracting the placebo response from the overall response, much as a deli-counter worker subtracts the weight of the container to determine how much lobster salad you’re getting.
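The deli-counter subtraction described above is simple arithmetic; a minimal sketch with made-up response rates (the numbers are hypothetical, not from any trial cited here):

```python
def drug_effect(drug_arm_response: float, placebo_arm_response: float) -> float:
    """Net drug effect under the standard clinical-trial assumption
    that the overall response is placebo response plus drug response."""
    return drug_arm_response - placebo_arm_response

# e.g. 60% of patients improve on the drug, 35% improve on placebo:
print(round(drug_effect(0.60, 0.35), 2))  # 0.25 -> a 25-point net drug effect
```

Hall’s later findings, discussed below, call exactly this additivity assumption into question.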

In the last half of the 1950s, this calculus gave rise to a new way to evaluate drugs: the double-blind, placebo-controlled clinical trial, in which neither patient nor clinician knew who was getting the active drug and who the placebo. In 1962, when the Food and Drug Administration began to require pharmaceutical companies to prove their new drugs were effective before they came to market, they increasingly turned to the new method; today, virtually every prospective new drug has to outperform placebos on two independent studies in order to gain F.D.A. approval.

Like Franklin’s commission, the F.D.A. had determined that the only way to sort out the real from the fake in medicine was to isolate the imagination. It also echoed the royal panel by taking note of the placebo effect only long enough to dismiss it, giving it a strange dual nature: It’s included in clinical trials because it is recognized as an important part of every treatment, but it is treated as if it were not important in itself. As a result, although virtually every clinical trial is a study of the placebo effect, it remains underexplored — an outcome that reflects the fact that there is no money in sugar pills and thus no industry interest in the topic as anything other than a hurdle it needs to overcome.

When Ted Kaptchuk was asked to give the opening keynote address at the conference in Leiden, he contemplated committing the gravest heresy imaginable: kicking off the inaugural gathering of the Society for Interdisciplinary Placebo Studies by declaring that there was no such thing as the placebo effect.

When he broached this provocation in conversation with me not long before the conference, it became clear that his point harked directly back to Franklin: that the topic he and his colleagues studied was created by the scientific establishment, and only in order to exclude it — which means that they are always playing on hostile terrain. Science is “designed to get rid of the husks and find the kernels,” he told me.

Much can be lost in the threshing — in particular, Kaptchuk sometimes worries, the rituals embedded in the doctor-patient encounter that he thinks are fundamental to the placebo effect, and that he believes embody an aspect of medicine that has disappeared as scientists and doctors pursue the course laid by Franklin’s commission. “Medical care is a moral act,” he says, in which a suffering person puts his or her fate in the hands of a trusted healer.

“I don’t love science,” Kaptchuk told me. “I want to know what heals people.” Science may not be the only way to understand illness and healing, but it is the established way. “That’s where the power is,” Kaptchuk says. That instinct is why he left his position as director of a pain clinic in 1990 to join Harvard — and it’s why he was delighted when, in 2010, he was contacted by Kathryn Hall, a molecular biologist. Here was someone with an interest in his topic who was also an expert in molecules, and who might serve as an emissary to help usher the placebo into the medical establishment.

Hall’s own journey into placebo studies began 15 years before her meeting with Kaptchuk, when she developed a bad case of carpal tunnel syndrome. Wearing a wrist brace didn’t help, and neither did over-the-counter drugs or the codeine her doctor prescribed. When a friend suggested she visit an acupuncturist, Hall balked at the idea of such an unscientific approach. But faced with the alternative, surgery, she decided to make an appointment. “I was there for maybe 10 minutes,” she recalls, “when she stuck a needle here” — Hall points to a spot on her forearm — “and this awful pain just shot through my arm.” But then the pain receded and her symptoms disappeared, as if they had been carried away on the tide. She received a few more treatments, during which the acupuncturist taught her how to manipulate a spot near her elbow if the pain recurred. Hall needed the fix from time to time, but the problem mostly just went away.

“I couldn’t believe it,” she told me. “Two years of gross drugs, and then just one treatment.” All these years later, she’s still wonder-struck. “What was that?” she asks. “Rub the spot, and the pain just goes away?”

Hall was working for a drug company at the time, but she soon left to get a master’s degree in visual arts, after which she started a documentary-production company. She was telling her carpal-tunnel story to a friend one day and recounted how the acupuncturist had climbed up on the table with her. (“I was like, ‘Oh, my God, what is this woman doing?’ ” she told me. “It was very dramatic.”) She’d never been able to understand how the treatment worked, and this memory led her to wonder out loud if maybe the drama itself had something to do with the outcome.

Her friend suggested she might find some answers in Ted Kaptchuk’s work. She picked up his book about Chinese medicine, “The Web That Has No Weaver,” in which he mentioned the possibility that placebo effects figure strongly in acupuncture, and then she read a study he had conducted that put that question to the test.

Kaptchuk had divided people with irritable bowel syndrome into three groups. In one, acupuncturists went through all the motions of treatment, but used a device that only appeared to insert a needle. Subjects in a second group also got sham acupuncture, but delivered with more elaborate doctor-patient interaction than the first group received. A third group was given no treatment at all. At the end of the trial, both treatment groups improved more than the no-treatment group, and the “high interaction” group did best of all.

Kaptchuk, who before joining Harvard had been an acupuncturist in private practice, wasn’t particularly disturbed by the finding that his own profession worked even when needles were not actually inserted; he’d never thought that placebo treatments were fake medicine. He was more interested in how the strength of the treatment varied with the quality and quantity of interaction between the healer and the patient — the drama, in other words. Hall reached out to him shortly after she read the paper.

The findings of the I.B.S. study were in keeping with a hypothesis Kaptchuk had formed over the years: that the placebo effect is a biological response to an act of caring; that somehow the encounter itself calls forth healing and that the more intense and focused it is, the more healing it evokes. He elaborated on this idea in a comparative study of conventional medicine, acupuncture and Navajo “chantway rituals,” in which healers lead storytelling ceremonies for the sick. He argued that all three approaches unfold in a space set aside for the purpose and proceed as if according to a script, with prescribed roles for every participant. Each modality, in other words, is its own kind of ritual, and Kaptchuk suggested that the ritual itself is part of what makes the procedure effective, as if the combined experiences of the healer and the patient, reinforced by the special-but-familiar surroundings, evoke a healing response that operates independently of the treatment’s specifics. “Rituals trigger specific neurobiological pathways that specifically modulate bodily sensations, symptoms and emotions,” he wrote. “It seems that if the mind can be persuaded, the body can sometimes act accordingly.” He ended that paper with a call for further scientific study of the nexus between ritual and healing.

When Hall contacted him, she seemed like a perfect addition to the team he was assembling to do just that. He even had an idea of exactly how she could help. In the course of conducting the study, Kaptchuk had taken DNA samples from subjects in hopes of finding some molecular pattern among the responses. This was an investigation tailor-made to Hall’s expertise, and she agreed to take it on. Of course, the genome is vast, and it was hard to know where to begin — until, she says, she and Kaptchuk attended a talk in which a colleague presented evidence that an enzyme called COMT affected people’s response to pain and painkillers. Levels of that enzyme, Hall already knew, were also correlated with Parkinson’s disease, depression and schizophrenia, and in clinical trials people with those conditions had shown a strong placebo response. When they heard that COMT was also correlated with pain response — another area with significant placebo effects — Hall recalls, “Ted and I looked at each other and were like: ‘That’s it! That’s it!’ ”

It is not possible to assay levels of COMT directly in a living brain, but there is a snippet of the genome called rs4680 that governs the production of the enzyme, and that varies from one person to another: One variant predicts low levels of COMT, while another predicts high levels. When Hall analyzed the I.B.S. patients’ DNA, she found a distinct trend. Those with the high-COMT variant had the weakest placebo responses, and those with the opposite variant had the strongest. These effects were compounded by the amount of interaction each patient got: For instance, low-COMT, high-interaction patients fared best of all, but the low-COMT subjects who were placed in the no-treatment group did worse than the other genotypes in that group. They were, in other words, more sensitive to the impact of the relationship with the healer.

The discovery of this genetic correlation to placebo response set Hall off on a continuing effort to identify the biochemical ensemble she calls the placebome — the term reflecting her belief that it will one day take its place among the other important “-omes” of medical science, from the genome to the microbiome. The rs4680 gene snippet is one of a group that governs the production of COMT, and COMT is one of a number of enzymes that determine levels of catecholamines, a group of brain chemicals that includes dopamine and epinephrine. (Low COMT tends to mean higher levels of dopamine, and vice versa.) Hall points out that the catecholamines are associated with stress, as well as with reward and good feeling, which bolsters the possibility that the placebome plays an important role in illness and health, especially in the chronic, stress-related conditions that are most susceptible to placebo effects.

Her findings take their place among other results from neuroscientists that strengthen the placebo’s claim to a place at the medical table, in particular studies using f.M.R.I. machines that have found consistent patterns of brain activation in placebo responders. “For years, we thought of the placebo effect as the work of imagination,” Hall says. “Now through imaging you can literally see the brain lighting up when you give someone a sugar pill.”

One group with a particularly keen interest in those brain images, as Hall well knows, is her former employers in the pharmaceutical industry. The placebo effect has been plaguing their business for more than a half-century — since the placebo-controlled study became the clinical-trial gold standard, requiring a new drug to demonstrate a significant therapeutic benefit over placebo to gain F.D.A. approval.

That’s a bar that is becoming ever more difficult to surmount, because the placebo effect seems to be becoming stronger as time goes on. A 2015 study published in the journal Pain analyzed 84 clinical trials of pain medication conducted between 1990 and 2013 and found that in some cases the efficacy of placebo had grown sharply, narrowing the gap with the drugs’ effect from 27 percent on average to just 9 percent. The only studies in which this increase was detected were conducted in the United States, which has spawned a variety of theories to explain the phenomenon: that patients in the United States, one of only two countries where medications are allowed to be marketed directly to consumers, have been conditioned to expect greater benefit from drugs; or that the larger and longer-duration trials more common in America have led to their often being farmed out to contract organizations whose nurses’ only job is to conduct the trial, perhaps fostering a more placebo-triggering therapeutic interaction.

Whatever the reason, a result is that drugs that pass the first couple of stages of the F.D.A. approval process founder more and more frequently in the larger late-stage trials; more than 90 percent of pain medications now fail at this stage. The industry would be delighted if it were able to identify placebo responders — say, by their genome — and exclude them from clinical trials.

That may seem like putting a thumb on the scale for drugs, but under the logic of the drug-approval regime, to eliminate placebo effects is not to cheat; it merely reduces the noise in order for the drug’s signal to be heard more clearly. That simple logic, however, may not hold up as Hall continues her research into the genetic basis of the placebo. Indeed, that research may have deeper implications for clinical drug trials, and for the drugs themselves, than pharma companies might expect.

Since 2013, Hall has been involved with the Women’s Health Study, which has tracked the cardiovascular health of nearly 40,000 women over more than 20 years. The subjects were randomly divided into four groups, following standard clinical-trial protocol, and received a daily dose of either vitamin E, aspirin, vitamin E with aspirin or a placebo. A subset also had their DNA sampled — which, Hall realized, offered her a vastly larger genetic database to plumb for markers correlated to placebo response. Analyzing the data amassed during the first 10 years of the study, Hall found that the women with the low-COMT gene variant had significantly higher rates of heart disease than women with the high-COMT variant, and that the risk was reduced for those low-COMT women who received the active treatments but not in those given placebos. Among high-COMT people, the results were the inverse: Women taking placebos had the lowest rates of disease; people in the treatment arms had an increased risk.

These findings in some ways seem to confound the results of the I.B.S. study, in which it was the low-COMT patients who benefited most from the placebo. But, Hall argues, what’s important isn’t the direction of the effect, but rather that there is an effect, one that varies depending on genotype — and that the same gene variant also seems to determine the relative effectiveness of the drug. This outcome contradicts the logic underlying clinical trials. It suggests that placebo and drug do not involve separate processes, one psychological and the other physical, that add up to the overall effectiveness of the treatment; rather, they may both operate on the same biochemical pathway — the one governed in part by the COMT gene.

Hall has begun to think that the placebome will wind up essentially being a chemical pathway along which healing signals travel — and not only to the mind, as an experience of feeling better, but also to the body. This pathway may be where the brain translates the act of caring into physical healing, turning on the biological processes that relieve pain, reduce inflammation and promote health, especially in chronic and stress-related illnesses — like irritable bowel syndrome and some heart diseases. If the brain employs this same pathway in response to drugs and placebos, then of course it is possible that they might work together, like convoys of drafting trucks, to traverse the territory. But it is also possible that they will encroach on one another, that there will be traffic jams in the pathway.

What if, Hall wonders, a treatment fails to work not because the drug and the individual are biochemically incompatible, but rather because in some people the drug interferes with the placebo response, which if properly used might reduce disease? Or conversely, what if the placebo response is, in people with a different variant, working against drug treatments, which would mean that a change in the psychosocial context could make the drug more effective? Everyone may respond to the clinical setting, but there is no reason to think that the response is always positive. According to Hall’s new way of thinking, the placebo effect is not just some constant to be subtracted from the drug effect but an intrinsic part of a complex interaction among genes, drugs and mind. And if she’s right, then one of the cornerstones of modern medicine — the placebo-controlled clinical trial — is deeply flawed.

When Kathryn Hall told Ted Kaptchuk what she was finding as she explored the relationship of COMT to the placebo response, he was galvanized. “Get this molecule on the map!” he urged her. It’s not hard to understand his excitement. More than two centuries after d’Eslon suggested that scientists turn their attention directly to the placebo effect, she did exactly that and came up with a finding that might have persuaded even Ben Franklin.

But Kaptchuk also has a deeper unease about Hall’s discovery. The placebo effect can’t be totally reduced to its molecules, he feels certain — and while research like Hall’s will surely enhance its credibility, he also sees a risk in playing his game on scientific turf. “Once you start measuring the placebo effect in a quantitative way,” he says, “you’re transforming it to be something other than what it is. You suck out what was previously there and turn it into science.” Reduced to its molecules, he fears, the placebo effect may become “yet another thing on the conveyor belt of routinized care.”

“We’re dancing with the devil here,” Kaptchuk once told me, by way of demonstrating that he was aware of the risks he’s taking in using science to investigate a phenomenon it defined only to exclude. Kaptchuk, an observant Jew who is a student of both the Torah and the Talmud, later modified his comment. It’s more like Jacob wrestling with the angel, he said — a battle that Jacob won, but only at the expense of a hip injury that left him lame for the rest of his life.

Indeed, Kaptchuk seems wounded when he complains about the pervasiveness of research that uses healthy volunteers in academic settings, as if the response to mild pain inflicted on an undergraduate participating in an on-campus experiment is somehow comparable to the despair often suffered by people with chronic, intractable pain. He becomes annoyed when he talks about how quickly some of his colleagues want to move from these studies to clinical recommendations. And he can even be disparaging of his own work, wondering, for instance, whether the study in which placebos were openly given to irritable bowel syndrome patients succeeded only because it convinced the subjects that the sugar was really a drug. But it’s the prospect of what will become of his findings, and of the placebo, as they make their way into clinical practice, that really seems to torment him.

Kaptchuk may wish “to help reconfigure biomedicine by rejecting the idea that healing is only the application of mechanical tools.” He may believe that healing is a moral act in which “caring in the context of hope qualitatively changes clinical outcomes.” He may be convinced that the relationship kindled by the encounter between a suffering person and a healer is a central, and almost entirely overlooked, component of medical treatment. And he may have dedicated the last 20 years of his life to persuading the medical establishment to listen to him. But he may also come to regret the outcome.

After all, if Hall is right that clinician warmth is especially effective with a certain genotype, then, as she wrote in the paper presenting her findings from the I.B.S./sham-acupuncture study, it is also true that a different group will “derive minimum benefit” from “empathic attentions.” Should medical rituals be doled out according to genotype, with warmth and caring withheld in order to clear the way for the drugs? And if she is correct that a certain ensemble of neurochemical events underlies the placebo effect, then what is to stop a drug company from manufacturing a drug — a real drug, that is — that activates the same process pharmacologically? Welcomed back into the medical fold, the placebo effect may raise enough mischief to make Kaptchuk rue its return, and bewilder patients when they discover that their doctor’s bedside manner is tailored to their genes.

For the most part, most days, Kaptchuk manages to keep his qualms to himself, to carry on as if he were fully confident that scientific inquiry can restore the moral dimension to medicine. But the precariousness of his endeavors is never far from his mind. “Will this work destroy the stuff that actually has to do with wisdom, preciousness, imagination, the things that are actually critical to who we are as human beings?” he asks. His answer: “I don’t know, but I have to believe there is an infinite reserve of wisdom and imagination that will resist being reduced to simple materialistic explanations.”

The ability to hold two contradictory thoughts in mind at the same time seems to come naturally to Kaptchuk, but he may overestimate its prevalence in the rest of us. Even if his optimism is well placed, however, there’s nothing like being sick to make a person toss that kind of intelligence aside in favor of the certainties offered by modern medicine. Indeed, it’s exactly that yearning that sickness seems to awaken and that our healers, imbued with the power of science, purport to provide, no imagination required. Armed with our confidence in them, we’re pleased to give ourselves over to their ministrations, and pleased to believe that it’s the molecules, and the molecules alone, that are healing us. People do like to be cheated, after all.

Gary Greenberg is the author, most recently, of “The Book of Woe: The DSM and the Unmaking of Psychiatry.” He is a contributing editor for Harper’s Magazine. This is his first article for the magazine.

A version of this article appears in print on Nov. 11, 2018, on Page 50 of the Sunday Magazine with the headline: Why Nothing Works.

original link: http://www.nytimes.com/2018/11/07/magazine/placebo-effect-medicine.html

________________________________

 

This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (Added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546)

 

The Enlightenment

Notes for teachers who are covering the age of the Enlightenment


“A Reading in the Salon of Mme Geoffrin,” 1755, By Anicet Charles Gabriel Lemonnier. Marie Geoffrin was one of the leading female figures in the French Enlightenment. She hosted some of the most important Philosophes and Encyclopédistes of her time.

Introduction

For now, this introduction has been loosely adapted from the Wikipedia article.

French historians traditionally place the Enlightenment between 1715 (the year that Louis XIV died) and 1789 (the beginning of the French Revolution).

International historians often say that the Enlightenment began in the 1620s, with the start of the scientific revolution.

Earlier philosophers whose work influenced the Enlightenment included Bacon, Descartes, Locke, and Spinoza.

Many of the Enlightenment thinkers are known as les philosophes (French writers and thinkers), who circulated their ideas through meetings at scientific academies, Masonic lodges, literary salons, and coffee houses, and in printed books and pamphlets.

The ideas of the Enlightenment undermined the authority of the monarchy and the Church. These ideas paved the way for the political revolutions of the 18th and 19th centuries.

Major figures of the Enlightenment included Beccaria, Diderot, Hume, Kant, Montesquieu, Rousseau, Adam Smith, and Voltaire.

Some European rulers, including Catherine II of Russia, Joseph II of Austria, and Frederick II of Prussia, tried to apply Enlightenment ideas about religious and political tolerance, a practice known as “enlightened absolutism.”

Benjamin Franklin visited Europe, contributed to the scientific and political debates there, and brought these ideas back to Philadelphia. Thomas Jefferson incorporated Enlightenment philosophy into the Declaration of Independence (1776). James Madison incorporated these ideas into the United States Constitution during its framing in 1787.

Secondary section (to be re-titled)

In his famous 1784 essay “What Is Enlightenment?”, Immanuel Kant defined it as follows:

“Enlightenment is man’s leaving his self-caused immaturity. Immaturity is the incapacity to use one’s own understanding without the guidance of another. Such immaturity is self-caused if its cause is not lack of intelligence, but lack of determination and courage to use one’s intelligence without being guided by another. The motto of enlightenment is therefore: Have courage to use your own intelligence!”

By mid-century, Enlightenment thinking reached its pinnacle with Voltaire.

Born François-Marie Arouet in 1694, Voltaire was exiled to England between 1726 and 1729, and there he studied Locke, Newton, and the English monarchy.

Voltaire’s ethos was: “Those who can make you believe absurdities can make you commit atrocities” – that is, if people believe what is unreasonable, they will do what is unreasonable.

Reforms sought

The Enlightenment sought reform of the monarchy through laws that were in the best interest of its subjects, and an “enlightened” ordering of society. In the 1750s there were attempts in England, Austria, Prussia, and France to “rationalize” the monarchical system and its laws. When this failed to end wars, there was an increasing drive for revolution or dramatic alteration. Enlightenment thought found its way to the heart of the American Declaration of Independence, the Jacobin program of the French Revolution, and the American Constitution of 1787.

Common values

Many values were common to enlightenment thinkers, including:

✔ Nations exist to protect the rights of the individual, instead of the other way around.

✔ Each individual should be afforded dignity, and should be allowed to live one’s life with the maximum amount of personal freedom.

✔ Some form of Democracy is the best form of government.

✔ All of humanity, all races, nationalities and religions, are of equal worth and value.

✔ People have a right to free speech and expression, the right to free association, the right to hold to any – or no – religion; the right to elect their own leaders.

✔ The scientific method is our only ally in helping us discern fact from fiction.

✔Science, properly used, is a positive force for the good of all humanity.

✔ Classical religious dogma and mystical experiences are inferior to logic and philosophy.

✔ Theism – the belief in a God that wants morality – was held by most Enlightenment thinkers to be essential for a person to have good moral character. 

✔ Deism – to be added

✔ Some classical religious dogma has been harmful, causing crusades, Jihads, holy wars, or denial of human rights to various classes of people.

Learning Standards

Massachusetts History and Social Science Curriculum Framework

High School World History Content Standards

Topic 6: Philosophies of government and society Supporting question: How did philosophies of government shape the everyday lives of people? 34. Identify the origins and the ideals of the European Enlightenment, such as happiness, reason, progress, liberty, and natural rights, and how intellectuals of the movement (e.g., Denis Diderot, Emmanuel Kant, John Locke, Charles de Montesquieu, Jean-Jacques Rousseau, Mary Wollstonecraft, Cesare Beccaria, Voltaire, or social satirists such as Molière and William Hogarth) exemplified these ideals in their work and challenged existing political, economic, social, and religious structures.

New York State Grades 9-12 Social Studies Framework

9.9 TRANSFORMATION OF WESTERN EUROPE AND RUSSIA:

9.9d The development of the Scientific Revolution challenged traditional authorities and beliefs.  Students will examine the Scientific Revolution, including the influence of Galileo and Newton.
9.9e The Enlightenment challenged views of political authority and how power and authority were conceptualized.

10.2: ENLIGHTENMENT, REVOLUTION, AND NATIONALISM: The Enlightenment called into question traditional beliefs and inspired widespread political, economic, and social change. This intellectual movement was used to challenge political authorities in Europe and colonial rule in the Americas. These ideals inspired political and social movements.

10.2a Enlightenment thinkers developed political philosophies based on natural laws, which included the concepts of social contract, consent of the governed, and the rights of citizens.

10.2b Individuals used Enlightenment ideals to challenge traditional beliefs and secure people’s rights in reform movements, such as women’s rights and abolition; some leaders may be considered enlightened despots.

10.2c Individuals and groups drew upon principles of the Enlightenment to spread rebellions and call for revolutions in France and the Americas.

History–Social Science Content Standards for California Public Schools

7.11 Students analyze political and economic change in the sixteenth, seventeenth, and eighteenth centuries (the Age of Exploration, the Enlightenment, and the Age of Reason).
1. Know the great voyages of discovery, the locations of the routes, and the influence of cartography in the development of a new European worldview.
2. Discuss the exchanges of plants, animals, technology, culture, and ideas among Europe, Africa, Asia, and the Americas in the fifteenth and sixteenth centuries and the major economic and social effects on each continent.
3. Examine the origins of modern capitalism; the influence of mercantilism and cottage industry; the elements and importance of a market economy in seventeenth-century Europe; the changing international trading and marketing patterns, including their locations on a world map; and the influence of explorers and map makers.
4. Explain how the main ideas of the Enlightenment can be traced back to such movements as the Renaissance, the Reformation, and the Scientific Revolution and to the Greeks, Romans, and Christianity.
5. Describe how democratic thought and institutions were influenced by Enlightenment thinkers (e.g., John Locke, Charles-Louis Montesquieu, American founders).
6. Discuss how the principles in the Magna Carta were embodied in such documents as the English Bill of Rights and the American Declaration of Independence.

AP World History

The 18th century marked the beginning of an intense period of revolution and rebellion against existing governments, and the establishment of new nation-states around the world.

I. The rise and diffusion of Enlightenment thought that questioned established traditions in all areas of life often preceded the revolutions and rebellions against existing governments.

Also see AP Worldipedia. Key Concept 5.3 Nationalism, Revolution, and Reform

Australopithecus skeleton


A team of Northeast Ohio researchers announced a rare and important find – the partial skeleton of a 3.6 million-year-old early human ancestor belonging to the same species as, but much older than, the iconic 3.2 million-year-old Lucy fossil discovered in 1974.

Fewer than 10 such largely intact skeletons 1.5 million years old or older have been found. Greater Cleveland researchers have played leading roles in three of those discoveries, reinforcing the region’s prominence in the search for humanity’s origins.

The new specimen is called Kadanuumuu (pronounced Kah-dah-NEW-moo). The nickname means “big man” in the language of the Afar tribesmen who helped unearth his weathered bones from a hardscrabble Ethiopian plain beginning in 2005.

“Big” is an apt description of both Kadanuumuu’s stature and his significance. The scientists who analyzed the long-legged fossil say it erases any doubts about stubby Lucy and her kind’s ability to walk well on two legs, and reveals new information about when and how bipedality developed.

“It’s all about human-like bipedality evolving earlier than some people think,” said Cleveland Museum of Natural History anthropologist Yohannes Haile-Selassie.

– http://www.cleveland.com/science/index.ssf/2010/06/partial_skeleton_from_lucys_sp.html

“Many dozens of A. afarensis fossils have been uncovered since Lucy was discovered in 1974, but none as complete as this one. Kadanuumuu’s forearm was first extracted from a hunk of mudstone in February 2005, and subsequent expeditions uncovered an entire knee, part of a pelvis, and well preserved sections of the thorax.
“We have the clavicle, a first rib, a scapula, and the humerus,” says physical anthropologist Bruce Latimer of Case Western Reserve University in Cleveland, Ohio, one of the co-leaders on the dig. “That enables us to say something about how [Kadanuumuu] was using its arm, and it was clearly not using it the way an ape uses it. It finally takes knuckle-walking off the table.” At five and a half feet tall, Kadanuumuu would also have towered two feet over Lucy, lending support to the view that there was a high degree of sexual dimorphism in the species.”

– Archaeology, “Kadanuumuu” – Woranso-Mille, Ethiopia Volume 64 Number 1, January/February 2011 by Brendan Borrell


Modeling DNA with Legos

Students learn best when they develop mental models. For many students this is almost automatic – they see diagrams and can internally translate them into mental models. But for many other students, mental models take a lot of practice to produce. We as teachers need to give our students multiple opportunities to take ideas about objects (such as atoms, molecules, and polymers) and show them how to create 3D models.

Some of this can be done on apps on a smartphone or computer; this could be more than adequate for some students. But many people need the hands-on experience, putting objects together one part at a time.

In this next example a student showed the difference between purines and pyrimidines by making skeletal structures that capture the essential geometry. In this case we didn’t want the Legos to connect as they usually do, because laying them out freely makes it easier to create the correct kind of geometry. But that’s okay: building blocks can be used any way we wish.

Modeling purines and pyrimidines

We start by reviewing the monomers:

Blausen CC BY-SA 4.0, via Wikimedia Commons

One way of doing so is by having students use Legos or other materials, to work out their ring structures.


Another way is to use Legos to show the units by which a sequence of DNA is made – one color for phosphate groups, another for sugars, and other colors for the purines or pyrimidines.


Modeling DNA base pairing

Here every nucleotide is color coded.

Here the student uses the given base pairing code to construct the other half of this gene.

In class we go through many steps. Here we skip ahead to the result: the original DNA gene has unzipped, new nucleotides have come in, and they have paired with each template strand.
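The base-pairing rule the students are modeling with Legos can be sketched in a few lines of Python. This is only an illustrative sketch; the dictionary and function names here are our own invention, not part of any standard library:

```python
# Base-pairing rule for DNA: A pairs with T, and C pairs with G.
PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand: str) -> str:
    """Build the other half of a DNA sequence, one nucleotide at a time."""
    return "".join(PAIRING[base] for base in strand)

print(complementary_strand("ATGCCGTA"))  # TACGGCAT
```

Applying the function twice returns the original sequence, which is exactly why one strand can serve as the template for rebuilding the other during replication.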

DNA Replication

Here we see the original gene in the process of replication.

Here we model DNA replication, one nucleotide at a time.


We can use Legos to model genetic phenomena at a wide range of levels.

In this next image we see a student use blue Legos to represent phosphate groups, gray to represent sugars, and blue and orange to represent nucleotides. (The white pieces are the intermolecular bonds.)

DNA transcription

DNA transcription is the process by which the cell makes an RNA copy of the DNA original.

Here we show how the cell makes an RNA copy of a DNA original.

Here we see that the mRNA is no longer next to the DNA original. It has floated out of the nucleus and has connected with a larger structure, a ribosome, where transfer RNA (tRNA) brings in amino acids.
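The copying rule for transcription can be sketched in Python as well. The mRNA pairs against the DNA template strand, except that RNA uses uracil (U) wherever DNA would use thymine (T). Again, the names below are illustrative, not a standard API:

```python
# Transcription: pair RNA bases against the DNA template strand.
# RNA uses uracil (U) in place of thymine (T).
DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(template_strand: str) -> str:
    """Return the mRNA copy built against a DNA template strand."""
    return "".join(DNA_TO_RNA[base] for base in template_strand)

print(transcribe("TACGGCAT"))  # AUGCCGUA
```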

DNA translation – from mRNA to proteins

DNA translation and the Genetic Code

Here we are just focusing on which DNA nucleotides match with which RNA nucleotides.

(We aren’t concerned with codons yet, just the concept of base pairing/color matching.  We will do codons the next day.)

Here we see multiple (Lego!) amino acids bonded together into a short protein.
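When we do get to codons, the idea can be sketched the same way: mRNA is read three bases (one codon) at a time, and each codon selects an amino acid. The sketch below includes only a handful of the 64 real codons, purely for illustration:

```python
# A deliberately tiny subset of the 64-codon genetic code.
CODON_TABLE = {
    "AUG": "Met",   # methionine; also the start codon
    "CCG": "Pro",   # proline
    "GCU": "Ala",   # alanine
    "UAA": "STOP",  # one of the three stop codons
}

def translate(mrna: str) -> list:
    """Walk the mRNA codon by codon, collecting amino acids until a stop."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "?")
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

print(translate("AUGCCGGCUUAA"))  # ['Met', 'Pro', 'Ala']
```

This mirrors what the Lego models show: a chain of amino acids assembled in the order dictated by the mRNA sequence.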

Learning Expectations

2016 Massachusetts Science and Technology/Engineering Curriculum Framework

HS-LS1-1. Construct a model of transcription and translation to explain the roles of DNA and RNA that code for proteins that regulate and carry out essential functions of life.

HS-LS1-6. Construct an explanation based on evidence that organic molecules are primarily composed of six elements, where carbon, hydrogen, and oxygen atoms may combine with nitrogen, sulfur, and phosphorus to form monomers that can further combine to form large carbon-based macromolecules.

College Board Standards for College Success: Science

LSM-PE.5.2.2 Construct a representation of DNA replication, showing how the helical DNA molecule unzips and how nucleotide bases pair with the DNA template to form a duplicate of the DNA molecule.

Benchmarks for Science Literacy, AAAS

The information passed from parents to offspring is coded in DNA molecules, long chains linking just four kinds of smaller molecules, whose precise sequence encodes genetic information. 5B/H3*

Genes are segments of DNA molecules. Inserting, deleting, or substituting segments of DNA molecules can alter genes. An altered gene may be passed on to every cell that develops from it. The resulting features may help, harm, or have little or no effect on the offspring’s success in its environment. 5B/H4*

NSTA Position Statement: The Teaching of Climate Science

NSTA National Science Teachers Association

The National Science Teachers Association (NSTA) acknowledges that decades of research and overwhelming scientific consensus indicate with increasing certainty that Earth’s climate is changing, largely due to human-induced increases in the concentrations of heat-absorbing gases (IPCC 2014; Melillo, Richmond, and Yohe 2014).

The scientific consensus on the occurrence, causes, and consequences of climate change is both broad and deep (Melillo, Richmond, and Yohe 2014). The nation’s leading scientific organizations support the core findings related to climate change, as do a broad range of government agencies, university and government research centers, educational organizations, and numerous international groups (NCSE 2017; U.S. Global Change Research Program 2017).

According to the National Academy of Sciences, “it is now more certain than ever, based on many lines of evidence, that humans are changing Earth’s climate” (NAS 2014). Scientific evidence advances our understanding of the challenges that climate change presents and of the need for people to prepare for and respond to its far-reaching implications (Melillo, Richmond, and Yohe 2014; Watts 2017).

The science of climate change is firmly rooted in decades of peer-reviewed scientific literature and is as sound and advanced as other established geosciences that have provided deep understandings in fields such as plate tectonics and planetary astronomy. As such, A Framework for K–12 Science Education (Framework) recommends that foundational climate change science concepts be included as part of a high-quality K–12 science education (NRC 2012).

Given the solid scientific foundation on which climate change science rests, any controversies regarding climate change and human-caused contributions to climate change that are based on social, economic, or political arguments—rather than scientific arguments—should not be part of a science curriculum.

NSTA recognizes that because of confusion and misinformation, many Americans do not think that the scientific basis for climate change is established and well-grounded (Leiserowitz 2005; van der Linden et al. 2015).

This belief, coupled with political efforts to actively promote the inclusion of non-scientific ideas in science classrooms (Plutzer et al. 2016), is negatively affecting science instruction in some schools. Active opposition to and the anticipation of opposition to climate change science from students, parents, other subject-area teachers, and/or school leadership is having a documented negative impact on science teachers in some states and local school districts (Plutzer et al. 2016).

Teachers are facing pressure to not only eliminate or de-emphasize climate change science, but also to introduce non-scientific ideas in science classrooms (NESTA 2011; Branch 2013; Branch, Rosenau, and Berbeco 2016).

This pressure sometimes takes the form of rhetorical tactics, such as “teach the controversy,” that are not based on science. Scientific explanations must be consistent with existing empirical evidence or stand up to empirical testing. Ideas based on political ideologies or pseudoscience that fail these empirical tests do not constitute science and should not be allowed to compromise the teaching of climate science. These tactics promote the teaching of non-scientific ideas that deliberately misinform students and increase confusion about climate science.

In conclusion, our knowledge of all the sciences, including climate science, grows and changes through the continual process of scientific exploration, investigation, and dialogue. While the details of scientific understandings about the Earth’s climate will undoubtedly evolve in the future, a large body of foundational knowledge exists regarding climate science that is agreed upon by the scientific community and should be included in science education at all levels. These understandings include the increase in global temperatures and the significant impact of human activities on these increases (U.S. Global Change Research Program 2009), as well as mitigation and resilience strategies that human societies may choose to adopt. Students in today’s classrooms will be the ones accelerating these decisions well underway in communities across the world.

NSTA confirms the solid scientific foundation on which climate change science rests and advocates for quality, evidence-based science to be taught in science classrooms in grades K–12 and higher education.

Declarations

To ensure a high-quality K–12 science education constructed upon evidence-based science, including the science of climate change, NSTA recommends that teachers of science

  • recognize the cumulative weight of scientific evidence that indicates Earth’s climate is changing, largely due to human-induced increases in the concentration of heat-absorbing gases (IPCC 2014; Melillo, Richmond, and Yohe 2014);
  • emphasize to students that no scientific controversy exists regarding the basic facts of climate change and that any controversies are based on social, economic, or political arguments and are not science;
  • deliver instruction using evidence-based science, including climate change, human impacts on natural systems, human sustainability, and engineering design, as recommended by the Framework for K–12 Science Education (Framework);
  • expand the instruction of climate change science across the K–12 span, consistent with learning progressions offered in the Framework;
  • advocate for integrating climate and climate change science across the K–12 curriculum beyond STEM (science, technology, engineering, and mathematics) classes;
  • teach climate change as any other established field of science and reject pressures to eliminate or de-emphasize climate-based science concepts in science instruction;
  • recognize that scientific argumentation is not the same as arguing beliefs and opinions. It requires the use of evidence-based scientific explanations to defend arguments and critically evaluate the claims of others;
  • plan instruction on the premise that debates and false-equivalence arguments are not demonstrably effective science teaching strategies;
  • help students learn how to use scientific evidence to evaluate claims made by others, including those from media sources that may be politically or socially biased;
  • provide students with the historical basis in science that recognizes the relationship between heat-absorbing greenhouse gases—especially those that are human-induced—and the amount of energy in the atmosphere;
  • highlight for students the datasets from which scientific consensus models are built and describe how they have been tested and refined;
  • recognize that attempts to use large-scale climate intervention to halt or reverse rapid climate change are well beyond simple solutions and will likely result in both intended and unintended consequences in the Earth system (NRC 2015; USGCRP 2017);
  • analyze different climate change mitigation strategies with students, including those that reduce carbon emissions as well as those aimed at building resilience to the effects of global climate change;
  • seek out resources and professional learning opportunities to better understand climate science and explore effective strategies for teaching climate science accurately while acknowledging social or political controversy; and
  • analyze future climate change scenarios and their relationships to societal decisions regarding energy-source and land-use choices.

Necessary Support Structures

To support the work of teachers of science, NSTA recommends that school administrators, school boards, and school and district leaders

  • ensure the use of evidence-based scientific information when addressing climate change and climate science in all parts of the school curriculum, such as social studies, mathematics, and reading;
  • provide teachers of science with ongoing professional learning opportunities to strengthen their content knowledge, enhance their teaching of scientific practices, and help them develop confidence to address socially controversial topics in the classroom;
  • support teachers as they review, adopt, and implement evidence-based science curricula and curricular materials that accurately represent the occurrence of, evidence for, and responses to climate change;
  • ensure teachers have adequate time, guidance, and resources to learn about climate science and have continued access to these resources;
  • resist pressures to promote non-scientific views that seek to deemphasize or eliminate the scientific study of climate change, or to misrepresent the scientific evidence for climate change; and
  • provide full support to teachers in the event of community-based conflict.

To support the teaching of climate change in K–12 school science, NSTA recommends that state and district policy makers

  • ensure that licensure and preparation standards for all teachers of science include science practices and climate change science content;
  • ensure that instructional materials considered for adoption are based on both recognized practices and contemporary, scientifically accurate data;
  • preserve the quality of science education by rejecting censorship, pseudoscience, logical fallacies, faulty scholarship, narrow political agendas, or unconstitutional mandates; and
  • understand that demand is increasing for a workforce that is knowledgeable about and capable of addressing climate change mitigation and building resilience to the effects of global climate change.

To support the teaching of climate change in K–12 school science, NSTA recommends that parents and other members of the community and media

  • seek the expertise of science educators on science topics, including climate change science;
  • augment the work of science teachers by supporting student learning of science at home, including the science of climate change;
  • help students understand the contributions that STEM professionals, policy makers, and educators can make to mitigate the effects of climate change and how they can make decisions that contribute to desired outcomes; and
  • clarify that societal controversies surrounding climate change are not scientific in nature, but are social, political, and economic.

To support the teaching of climate change in K–12 school science, NSTA recommends that higher education professors and administrators

  • design curricula that incorporate climate change science into science and general education coursework, and that these materials meet social, economic, mathematical, and literary general education goals;
  • provide teacher-education students with science content and pedagogy that meets the Framework's expectations for the grade band(s) they will teach; and
  • recognize that a solid foundation in Earth system science should be a consideration in student admissions decisions.

Adopted by the NSTA Board of Directors, September 2018

There Was No Big Bang Singularity

Backup articles for students

There Was No Big Bang Singularity, Ethan Siegel, Forbes, 7/27/2018

https://www.forbes.com/sites/startswithabang/2018/07/27/there-was-no-big-bang-singularity/amp/

Almost everyone has heard the story of the Big Bang. But if you ask anyone, from a layperson to a cosmologist, to finish the following sentence, “In the beginning, there was…” you’ll get a slew of different answers. One of the most common ones is “a singularity,” which refers to an instant where all the matter and energy in the Universe was concentrated into a single point. The temperatures, densities, and energies of the Universe would be arbitrarily, infinitely large, and could even coincide with the birth of time and space itself.

But this picture isn’t just wrong, it’s nearly 40 years out of date! We are absolutely certain there was no singularity associated with the hot Big Bang, and there may not have even been a birth to space and time at all. Here’s what we know and how we know it.

When we look out at the Universe today, we see that it’s full of galaxies in all directions at a wide variety of distances. On average, we also find that the more distant a galaxy is, the faster it appears to be receding from us. This isn’t due to the actual motions of the individual galaxies through space, though; it’s due to the fact that the fabric of space itself is expanding.

This was a prediction that was first teased out of General Relativity in 1922 by Alexander Friedmann, and was observationally confirmed by the work of Edwin Hubble and others in the 1920s. It means that, as time goes on, the matter within it spreads out and becomes less dense, since the volume of the Universe increases. It also means that, if we look to the past, the Universe was denser, hotter, and more uniform.

If you were to extrapolate back farther and farther in time, you’d begin to notice a few major changes to the Universe. In particular:

  • you’d come to an era where gravitation hasn’t had enough time to pull matter into large enough clumps to have stars and galaxies,
  • you’d come to a place where the Universe was so hot you couldn’t form neutral atoms,
  • and then where even atomic nuclei were blasted apart,
  • where matter-antimatter pairs would spontaneously form,
  • and where individual protons and neutrons would be dissociated into quarks and gluons.

Each step represents the Universe when it was younger, smaller, denser, and hotter. Eventually, if you kept on extrapolating, you’d see those densities and temperatures rise to infinite values, as all the matter and energy in the Universe was contained within a single point: a singularity.
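This "hotter when smaller" extrapolation follows the standard cosmological scaling of the radiation temperature with the size of the Universe, T ∝ 1/a. A minimal Python sketch of that scaling (illustrative only; a = 1/1100 is the standard textbook figure for when neutral atoms first formed):

```python
# The radiation temperature scales inversely with the scale factor: T = T0 / a,
# where a = 1 today. Extrapolating a -> 0 drives T (and the density) toward
# infinity -- the naive "singularity" of the classic picture.

T_TODAY_K = 2.725  # measured temperature of the CMB today, in kelvin

def temperature_k(scale_factor: float) -> float:
    """Radiation temperature when the Universe was `scale_factor`
    times its present size (scale_factor = 1 today)."""
    return T_TODAY_K / scale_factor

print(temperature_k(1.0))       # 2.725 K: the CMB today
print(temperature_k(1 / 1100))  # ~3000 K: roughly when neutral atoms formed
print(temperature_k(1e-28))     # absurdly hot: the extrapolation diverges as a -> 0
```

The point of the article is that this extrapolation cannot actually be carried all the way to a = 0.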

The hot Big Bang, as it was first conceived, wasn’t just a hot, dense, expanding state, but represented an instant where the laws of physics break down. It was the birth of space and time: a way to get the entire Universe to spontaneously pop into existence. It was the ultimate act of creation: the singularity associated with the Big Bang.

Yet, if this were correct, and the Universe had achieved arbitrarily high temperatures in the past, there would be a number of clear signatures of this we could observe today. There would be temperature fluctuations in the Big Bang’s leftover glow that would have tremendously large amplitudes. The fluctuations that we see would be limited by the speed of light; they would only appear on scales of the cosmic horizon and smaller. There would be leftover, high-energy cosmic relics from earlier times, like magnetic monopoles.

And yet, the temperature fluctuations are only 1-part-in-30,000, thousands of times smaller than a singular Big Bang predicts. Super-horizon fluctuations are real, robustly confirmed by both WMAP and Planck. And the constraints on magnetic monopoles and other ultra-high-energy relics are incredibly tight. These missing signatures have a huge implication: the Universe never reached these arbitrarily large temperatures.


Instead, there must have been a cutoff. We cannot extrapolate back arbitrarily far, to a hot-and-dense state that reaches whatever energies we can dream of. There’s a limit to how far we can go and still validly describe our Universe.

In the early 1980s, it was theorized that, before our Universe was hot, dense, expanding, cooling, and full of matter and radiation, it was inflating. A phase of cosmic inflation would mean the Universe was:

  • filled with energy inherent to space itself,
  • which causes a rapid, exponential expansion,
  • that stretches the Universe flat,
  • gives it the same properties everywhere,
  • with small-amplitude quantum fluctuations,
  • that get stretched to all scales (even super-horizon ones),

and then inflation comes to an end.

When it does, it converts that energy, which was previously inherent to space itself, into matter and radiation, which leads to the hot Big Bang. This doesn't produce an arbitrarily hot Big Bang, however, but rather one that achieved a maximum temperature that's at most hundreds of times smaller than the scale at which a singularity could emerge. In other words, it leads to a hot Big Bang that arises from an inflationary state, not a singularity.

The information that exists in our observable Universe, that we can access and measure, only corresponds to the final ~10^-33 seconds of inflation, and everything that came after. If you want to ask the question of how long inflation lasted, we simply have no idea. It lasted at least a little bit longer than 10^-33 seconds, but whether it lasted a little longer, a lot longer, or for an infinite amount of time is not only unknown, but unknowable.

So what happened to start inflation off? There’s a tremendous amount of research and speculation about it, but nobody knows. There is no evidence we can point to; no observations we can make; no experiments we can perform. Some people (wrongly) say something akin to:

Well, we had a Big Bang singularity give rise to the hot, dense, expanding Universe before we knew about inflation, and inflation just represents an intermediate step. Therefore, it goes: singularity, inflation, and then the hot Big Bang.

There are even some very famous graphics put out by top cosmologists that illustrate this picture. But that doesn’t mean this is right.

Big Bang Singularity Inflation Gravitational Waves

NATIONAL SCIENCE FOUNDATION (NASA, JPL, KECK FOUNDATION, MOORE FOUNDATION, RELATED)

In fact, there are very good reasons to believe that this isn’t right! One thing that we can mathematically demonstrate, in fact, is that it’s impossible for an inflating state to arise from a singularity.

Here’s why: space expands at an exponential rate during inflation. Think about how an exponential works: after a certain amount of time goes by, the Universe doubles in size. Wait twice as long, and it doubles twice, making it four times as large. Wait three times as long, it doubles thrice, making it 8 times as large. And if you wait 10 or 100 times as long, those doublings make the Universe 210 or 2100 times as large.

Which means if we go backwards in time by that same amount, or twice, or thrice, or 10 or 100 times, the Universe would be smaller, but would never reach a size of 0. Respectively, it would be half, a quarter, an eighth, 2^-10, or 2^-100 times its original size. But no matter how far back you go, you never achieve a singularity.
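The doubling arithmetic above can be checked in a few lines of Python (an illustrative sketch; the doubling time is set to 1 in arbitrary units). Running the exponential backward only halves the size, never reaching zero:

```python
# During inflation the size doubles once per doubling time; a negative n
# runs the clock backward. The size shrinks toward zero but never reaches it.

def inflated_size(n_doublings: float, initial_size: float = 1.0) -> float:
    """Size of an exponentially inflating region after n doubling times."""
    return initial_size * 2.0 ** n_doublings

print(inflated_size(10))    # 1024.0 -- "2^10 times as large"
print(inflated_size(100))   # about 1.27e30
print(inflated_size(-10))   # about 0.00098 -- tiny, but not zero

# No finite number of backward steps ever yields a size of exactly 0.
assert all(inflated_size(-n) > 0 for n in (1, 2, 3, 10, 100))
```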

How Universe Grows Time Before Singularity

Image by E. Siegel

There is a theorem, famous among cosmologists, showing that an inflationary state is past-timelike-incomplete. What this means, explicitly, is that if you have any particles that exist in an inflating Universe, they will eventually meet if you extrapolate back in time.

This doesn’t, however, mean that there must have been a singularity, but rather that inflation doesn’t describe everything that occurred in the history of the Universe, like its birth. We also know, for example, that inflation cannot arise from a singular state, because an inflating region must always begin from a finite size.

Every time you see a diagram, an article, or a story talking about the “big bang singularity” or any sort of big bang/singularity existing before inflation, know that you’re dealing with an outdated method of thinking.

The idea of a Big Bang singularity went out the window as soon as we realized we had a different state — that of cosmic inflation — preceding and setting up the early, hot-and-dense state of the Big Bang.

There may have been a singularity at the very beginning of space and time, with inflation arising after that, but there’s no guarantee. In science, there are the things we can test, measure, predict, and confirm or refute, like an inflationary state giving rise to a hot Big Bang. Everything else? It’s nothing more than speculation.

Related articles by Ethan Siegel

The Big Bang Wasn’t The Beginning, After All 9/2017

What Was It Like When The Universe Was Inflating? 6/2018

How Well Has Cosmic Inflation Been Verified? 5/2019

Did Time Have A Beginning? 7/2019

What Came First: Inflation Or The Big Bang? 10/2019

New information requires prior basic information

Building Pyramids: A model of knowledge representation

Efrat Furst, Post-doc Fellow at the Learning Incubator, SEAS, Harvard University. Her background is in cognitive-neuroscientific research and professional development for educators.

Archived from https://sites.google.com/view/efratfurst/pyramids

Every new piece of knowledge is learnt on the basis of already existing knowledge.

The principle that organizes the knowledge is ‘Making Meaning’, or the ability to integrate and use a new concept in the context of what we already know.

In this pyramid model, every brick is a 'piece of knowledge' and its correct placement, on top of the previous layer, represents 'meaning'; the final structure requires both.

Every pyramid is also a brick in a higher-level pyramid. To learn a new piece of information (orange triangles) effectively, it should be learned on the basis of existing prior knowledge (gray triangles). Without prior knowledge (top panel), the new information cannot be integrated meaningfully (no structure is created), and would most likely not survive over time.

Knowledge Building Pyramids 1

Shing, Y. L., & Brod, G. (2016). Effects of prior knowledge on memory: Implications for education. Mind, Brain, and Education.

see also BLOOM’S TAXONOMY—THAT PYRAMID IS A PROBLEM by Doug Lemov

Higher-order learning abilities, like critical thinking and creativity, depend on the existence of broad and well-established domain-specific knowledge in one or more areas.

Without this base, new high-level information cannot be structured appropriately, and hence will not be useful and will not be retained (top panel).

The wider and more varied the basis of prior knowledge is, the higher, more complex and more creative structures it can support (bottom panel).

Knowledge Building Pyramids 2

Willingham, D. T. (2007). Critical thinking. American Educator, 31(3), 8–19.

When the same routine of information is rehearsed during a session, a fast and impressive improvement may be evident. The gain, however, may not last long when it is largely dependent on the specific context (of time, place, content, method, specific sequence, etc.). When the context fades as time goes by, the same level of performance cannot be maintained (top panel).

Knowledge Building Pyramids 3

However, when the study or practice is done in effective ways that emphasize creating meaningful connections to prior knowledge (elaboration), and between the newly learned items, we are building a stable structure of knowledge that may survive the passage of time and the absence of the learning context (bottom panel).

Prof. Robert Bjork on the distinction between Learning and Performance.

Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way

Often we want learning or practice to be fun for ourselves or for our students, in order to build a positive experience. But if we wish to build knowledge through this experience, we must make sure that something is actually being built.

Effective learning should include explicit elements of connecting the new knowledge to prior knowledge in meaningful ways (bottom panel), rather than just playing around with the new concept (top panel). Effective learning may be more effortful (in a good way) than fun, but the long-term results are usually rewarding.

Knowledge Building Pyramids 4

Prof Robert Bjork on Desirable Difficulties

Some things can be learned independently: when the relevant prior knowledge is available and when the learner is able to make the required connections between the new information and the existing knowledge (top panel).

But for learning some other things, guidance is essential: to supply information, or to select the relevant information. Often guidance is needed to establish the nature of the relationships between the new and the existing information: a concrete example or a clear explanation that would make the pieces "fall" into the right place. With the appropriate guidance (bottom panel), more can be learned.

Knowledge Building Pyramids 5

Clark, R., Kirschner, P. A., & Sweller, J. (2012). Putting students on the path to learning: The case for fully guided instruction.

From neuroscience to the classroom

26th September 2018, by Efrat Furst

Can neuroscience add anything to our understanding of the classroom? And what should teachers make of it? Efrat Furst looks into how this lens might prove useful in the future.

https://researched.org.uk/from-neuroscience-to-the-classroom/

This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.

§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (Added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546)

 

Alzheimer’s disease

Possible causes of Alzheimer's disease

We currently don't know the cause of all forms of Alzheimer's disease. There may be more than one cause. But today we have increasingly strong evidence that many cases are caused by a combination of a genetic variant and the herpes virus.

Prions

Two proteins central to the pathology of Alzheimer’s disease act as prions—misshapen proteins that spread through tissue like an infection by forcing normal proteins to adopt the same misfolded shape—according to new UC San Francisco research.

Using novel laboratory tests, the researchers were able to detect and measure specific, self-propagating prion forms of the proteins amyloid beta (A-β) and tau in postmortem brain tissue of 75 Alzheimer’s patients. In a striking finding, higher levels of these prions in human brain samples were strongly associated with early-onset forms of the disease and younger age at death.

Alzheimer's disease is a 'double-prion disorder,' study shows

– University of California, San Francisco

Herpes virus

Alzheimer’s: The heretical and hopeful role of infection

David Robson writes

…. The “amyloid beta hypothesis” has inspired countless trials of drugs that aimed to break up these toxic plaques. Yet this research has ended in many disappointments, without producing the desired improvements in patients’ prognosis. This has led some to wonder whether the amyloid beta hypothesis may be missing an important part of the story. “The plaques that Alzheimer observed are the manifestation of the disease, not the cause,” says geriatrics scientist Tamas Fulop at the University of Sherbrooke in Canada.

Scientists studying Alzheimer’s have also struggled to explain why some people develop the disease while others don’t. Genetic studies show that the presence of a gene variant – APOE4 – can vastly increase someone’s chances of building the amyloid plaques and developing the disease.

But the gene variant does not seal someone’s fate as many people carry APOE4 but don’t suffer from serious neurodegeneration. Some environmental factors must be necessary to set off the genetic time bomb, prompting the build-up of the toxic plaques and protein tangles.

Early evidence

Could certain microbes act as a trigger? That’s the central premise of the infection hypothesis.

Itzhaki has led the way with her examinations into the role of the herpes simplex virus (HSV1), which is most famous for causing cold sores on the skin around the mouth. Importantly, the virus is known to lie dormant for years, until times of stress or ill health, when it can become reactivated – leading to a new outbreak of the characteristic blisters.

While it had long been known that the virus could infect the brain – leading to a dangerous swelling called encephalitis that required immediate treatment – this was thought to be a very rare event. In the early 1990s, however, Itzhaki’s examinations of post-mortem tissue revealed that a surprising number of people showed signs of HSV1 in their neural tissue, without having suffered from encephalitis.

Importantly, the virus didn’t seem to be a risk for the people without the APOE4 gene variant, most of whom did not develop dementia. Nor did the presence of APOE4 make much difference to the risk of people without the infection.

Instead, it was the combination of the two that proved to be important. Overall, Itzhaki estimates that the two risk factors make it 12 times more likely that someone will develop Alzheimer’s, compared to people without the gene variant or the latent infection in their brain.

Itzhaki hypothesised that this was due to repeated reactivation of the latent virus – which, during each bout, invades the brain and somehow triggers the production of amyloid beta, until eventually, people start to show the cognitive decline that marks the onset of dementia.

Itzhaki says that her findings were met with a high degree of scepticism by other scientists. “We had the most awful trouble getting it published.” Many assumed that the experiments were somehow contaminated, she says, leading to an illusory result. Yet she had been careful to avoid this possibility, and the apparent link between HSV1 infection and Alzheimer’s disease has now been replicated in many different populations.

One paper, published earlier this year, examined cohorts from Bordeaux, Dijon, Montpellier and rural France. By tracking certain antibodies, they were able to detect who had been infected with the herpes simplex virus.  The researchers found that the infection roughly tripled the risk of developing Alzheimer’s in APOE4 carriers over a seven-year follow-up period – but had no effect in people who were not carrying the gene.

“The herpes virus was only able to have a deleterious effect if there was APOE4,” says Catherine Helmer at the University of Bordeaux in France, who conducted the research.

To date, the most compelling evidence for the infection hypothesis comes from a large study in Taiwan, published in 2018, which looked at the progress of 8,362 people carrying the herpes simplex virus. Crucially, some of the participants were given antiviral drugs to treat the infection. As the infection hypothesis predicted, this reduced the risk of dementia.

Overall, those taking a long course of medication were around 90% less likely to develop dementia over the 10-year study period than the participants who had not received any treatment for their infection.
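The "around 90% less likely" figure is a relative risk reduction. As a sketch of that arithmetic (the incidence percentages below are invented for illustration, not the Taiwan study's actual data):

```python
# Relative risk reduction: how much lower the treated group's risk is,
# expressed as a fraction of the untreated group's risk.

def relative_risk_reduction(risk_treated: float, risk_untreated: float) -> float:
    """1 - (risk in treated group / risk in untreated group)."""
    return 1.0 - (risk_treated / risk_untreated)

# Hypothetical example: if 2.5% of treated vs. 25% of untreated participants
# developed dementia over the study period, that would be a 90% reduction.
rrr = relative_risk_reduction(0.025, 0.25)
print(f"{rrr:.0%}")  # 90%
```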

“It’s a result that is so striking, it’s hard to believe,” says Anthony Komaroff, a professor at Harvard Medical School and a senior physician at Brigham and Women’s Hospital in Boston, who recently reviewed the current state of the research into the infection hypothesis for the Journal of the American Medical Association. Although he remains cautious about lending too much confidence to any single study, he is now convinced that the idea demands more attention. “It’s such a dramatic result that it must be taken seriously,” he says.

Komaroff knows of no theoretical objections to the theory. “I haven’t heard anyone, even world-class Alzheimer’s experts who are dubious about the infection hypothesis, give a good reason why it has to be bunkum,” he adds. We simply need more studies providing direct evidence for the link, he says, to be able to convince the sceptics.

As interest in the infection hypothesis has grown, scientists have started to investigate whether any other pathogens may trigger a similar response – with some intriguing conclusions. A 2017 study suggested that the virus behind shingles and chickenpox can moderately increase the risk of Alzheimer’s disease.

There is also evidence that Porphyromonas gingivalis, the bacterium behind gum disease, can trigger the accumulation of amyloid beta, which may explain why poor dental health predicts people’s cognitive decline in old age.

Certain fungi may even penetrate the brain and trigger neurodegeneration. If the causal role of these microbes is confirmed, then each finding could inspire new treatments for the disease.

Scientists studying the infection hypothesis have also started making some headway in explaining the physiological mechanisms.

Their explanation centres on the surprising discovery that amyloid beta can act as a kind of microbicide that fights pathogens in the brain.

Studies by Fulop and others, for instance, show that the protein can bind to the surface of the herpes simplex virus. This seems to entrap the pathogen with a web of tiny fibres and prevents it from attaching to cells.

In the short term, this could be highly advantageous, preventing the infection from spiralling out of control so that it poses an immediate danger to someone’s life. But if the pathogen is repeatedly reactivated during times of stress, the amyloid beta could accumulate in the toxic plaques, harming the cells it is meant to be protecting.

Connection to coronavirus, covid-19

During the current pandemic, some scientists have started to worry that the coronavirus could increase the risk of dementia. As scientists from Mount Sinai School of Medicine, New York warned in the Journal of Alzheimer’s Disease last year: “It is possible that there may be an existing population who have become unknowingly predisposed to neurodegeneration through silent viral entry into the brain.”

So far there are some signs that Covid infections can bring about neural damage. Researchers at a recent meeting of the Alzheimer’s Association, for example, presented an analysis of blood samples taken from otherwise healthy patients recovering from Covid. They found elevated levels of signature chemicals that often accompany the onset of Alzheimer’s disease.

This could just be another consequence of the overall assault on the body, including the increased inflammation that comes with the disease. But some animal studies and analyses of human autopsies suggest that the coronavirus can invade the brain. And laboratory experiments suggest that this infection may, in turn, trigger neural damage.

In one striking study, Jay Gopalakrishnan at Heinrich-Heine-University in Dusseldorf and colleagues created a series of “cerebral organoids” – miniature, lab-grown brain tissue – and then exposed them to the virus. They saw some marked changes in the tau proteins that are associated with Alzheimer’s, and increased neural death, after infection from the virus.

Such findings ring alarm bells for Fulop. “SARS-CoV-2 may act exactly as HSV-1,” he proposes. Others – including Gopalakrishnan – are more cautious, however. “We have demonstrated that the virus can infect human neurons, and it can cause some sort of neuronal stress,” he says. “And this may have some unexpected effects.” Much more research will be necessary to assess any long-term risks for neurological disease.

– from Alzheimer’s: The heretical and hopeful role of infection, BBC Future, David Robson, 6th October 2021

==============

Alzheimer’s disease: mounting evidence that herpes virus is a cause, The Conversation US, Oct 19, 2018

Ruth Itzhaki, Professor Emeritus of Molecular Neurobiology, University of Manchester

More than 30m people worldwide suffer from Alzheimer’s disease – the most common form of dementia. Unfortunately, there is no cure, only drugs to ease the symptoms. However, my latest review suggests a way to treat the disease. I found the strongest evidence yet that the herpes virus is a cause of Alzheimer’s, suggesting that effective and safe antiviral drugs might be able to treat the disease. We might even be able to vaccinate our children against it.

The virus implicated in Alzheimer’s disease, herpes simplex virus type 1 (HSV1), is better known for causing cold sores. It infects most people in infancy and then remains dormant in the peripheral nervous system (the part of the nervous system that isn’t the brain and the spinal cord). Occasionally, if a person is stressed, the virus becomes activated and, in some people, it causes cold sores.

We discovered in 1991 that in many elderly people HSV1 is also present in the brain. And in 1997 we showed that it confers a strong risk of Alzheimer’s disease when present in the brain of people who have a specific gene known as APOE4.

The virus can become active in the brain, perhaps repeatedly, and this probably causes cumulative damage. The likelihood of developing Alzheimer’s disease is 12 times greater for APOE4 carriers who have HSV1 in the brain than for those with neither factor.

Later, we and others found that HSV1 infection of cell cultures causes beta-amyloid and abnormal tau proteins to accumulate. An accumulation of these proteins in the brain is characteristic of Alzheimer’s disease.

We believe that HSV1 is a major contributory factor for Alzheimer’s disease and that it enters the brains of elderly people as their immune system declines with age. It then establishes a latent (dormant) infection, from which it is reactivated by events such as stress, a reduced immune system and brain inflammation induced by infection by other microbes.

Reactivation leads to direct viral damage in infected cells and to viral-induced inflammation. We suggest that repeated activation causes cumulative damage, leading eventually to Alzheimer’s disease in people with the APOE4 gene.

Presumably, in APOE4 carriers, Alzheimer’s disease develops in the brain because of greater HSV1-induced formation of toxic products, or less repair of damage.

New treatments? The data suggest that antiviral agents might be used for treating Alzheimer’s disease. The main antiviral agents, which are safe, prevent new viruses from forming, thereby limiting viral damage.

In an earlier study, we found that the anti-herpes antiviral drug, acyclovir, blocks HSV1 DNA replication, and reduces levels of beta-amyloid and tau caused by HSV1 infection of cell cultures.

It’s important to note that all studies, including our own, only show an association between the herpes virus and Alzheimer’s – they don’t prove that the virus is an actual cause. Probably the only way to prove that a microbe is a cause of a disease is to show that an occurrence of the disease is greatly reduced either by targeting the microbe with a specific anti-microbial agent or by specific vaccination against the microbe.

Excitingly, successful prevention of Alzheimer’s disease by use of specific anti-herpes agents has now been demonstrated in a large-scale population study in Taiwan. Hopefully, information in other countries, if available, will yield similar results.

=================================

Corroboration of a Major Role for Herpes Simplex Virus Type 1 in Alzheimer’s Disease

Ruth F. Itzhaki, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom

Front. Aging Neurosci., 19 October 2018, https://doi.org/10.3389/fnagi.2018.00324

Strong evidence has emerged recently for the concept that herpes simplex virus type 1 (HSV1) is a major risk for Alzheimer’s disease (AD). This concept proposes that latent HSV1 in brain of carriers of the type 4 allele of the apolipoprotein E gene (APOE-ε4) is reactivated intermittently by events such as immunosuppression, peripheral infection, and inflammation, the consequent damage accumulating, and culminating eventually in the development of AD….

===================

How an outsider in Alzheimer’s research bucked the prevailing theory — and clawed for validation

Sharon Begley, Stat News, 10/29/2018

Robert Moir was damned if he did and damned if he didn’t. The Massachusetts General Hospital neurobiologist had applied for government funding for his Alzheimer’s disease research and received wildly disparate comments from the scientists tapped to assess his proposal’s merits.

It was an “unorthodox hypothesis” that might “fill flagrant knowledge gaps,” wrote one reviewer, but another said the planned work might add little “to what is currently known.” A third complained that although Moir wanted to study whether microbes might be involved in causing Alzheimer’s, no one had proved that was the case.

As if scientists are supposed to study only what’s already known, an exasperated Moir thought when he read the reviews two years ago.

He’d just had a paper published in a leading journal, providing strong data for his idea that beta-amyloid, a hallmark of Alzheimer’s disease, might be a response to microbes in the brain. If true, the finding would open up vastly different possibilities for therapy than the types of compounds virtually everyone else was pursuing.

But the inconsistent evaluations doomed Moir’s chances of winning the $250,000 a year for five years that he was requesting from the National Institutes of Health. While two reviewers rated his application highly, the third gave him scores in the cellar. Funding rejected.

Complaints about being denied NIH funding are as common among biomedical researchers as spilled test tubes after a Saturday night lab kegger. The budgets of NIH institutes that fund Alzheimer’s research at universities and medical centers cover only the top 18 percent or so of applications. There are more worthy studies than money.

Moir’s experience is notable, however, because it shows that, even as one potential Alzheimer’s drug after another has failed for the last 15 years (the last such drug, Namenda, was approved in 2003), researchers with fresh approaches — and sound data to back them up — have struggled to get funded and to get studies published in top journals. Many scientists in the NIH “study sections” that evaluate grant applications, and those who vet submitted papers for journals, have so bought into the prevailing view of what causes Alzheimer’s that they resist alternative explanations, critics say.

“They were the most prominent people in the field, and really good at selling their ideas,” said George Perry of the University of Texas at San Antonio and editor-in-chief of the Journal of Alzheimer’s Disease. “Salesmanship carried the day.”

Dating to the 1980s, the amyloid hypothesis holds that the disease is caused by sticky agglomerations, or plaques, of the peptide beta-amyloid, which destroy synapses and trigger the formation of neuron-killing “tau tangles.” Eliminating plaques was supposed to reverse the disease, or at least keep it from getting inexorably worse. It hasn’t. The reason, more and more scientists suspect, is that “a lot of the old paradigms, from the most cited papers in the field going back decades, are wrong,” said MGH’s Rudolph Tanzi, a leading expert on the genetics of Alzheimer’s.

Even with the failure of amyloid orthodoxy to produce effective drugs, scientists who had other ideas saw their funding requests repeatedly denied and their papers frequently rejected. Moir is one of them.

For years in the 1990s, Moir, too, researched beta-amyloid, especially its penchant for gunking up into plaques and “a whole bunch of things all viewed as abnormal and causing disease,” he said. “The traditional view is that amyloid-beta is a freak, that it has a propensity to form fibrils that are toxic to the brain — that it’s irredeemably bad. In the 1980s, that was a reasonable assumption.”

But something had long bothered him about the “evil amyloid” dogma. The peptide is made by all vertebrates, including frogs and lizards and snakes and fish. In most species, it’s identical to humans’, suggesting that beta-amyloid evolved at least 400 million years ago. “Anything so extensively conserved over that immense span of time must play an important physiological role,” Moir said.

What, he wondered, could that be?

In 1994, Moir changed hemispheres to work as a postdoctoral fellow with Tanzi. They’d hit it off over beers at a science meeting in Amsterdam. Moir liked that Tanzi’s lab was filled with energetic young scientists — and that in cosmopolitan Boston, he could play the hyper-kinetic (and bone-crunching) sport of Australian rules football. Tanzi liked that Moir was the only person in the world who could purify large quantities of the molecule from which the brain makes amyloid.

Moir initially focused on genes that affect the risk of Alzheimer’s — Tanzi’s specialty. But Moir’s intellectual proclivities were clear even then. His mind is constantly noodling scientific puzzles, colleagues say, even during down time. Moir took a vacation in the White Mountains a decade ago with his then-6-year-old son and a family friend, an antimicrobial expert; in between hikes, Moir explained a scientific roadblock he’d hit, and the friend explained a workaround.

Moir’s inclination toward unconventional thinking took flight in 2007. He was (and still is) in the habit of spending a couple of hours Friday afternoons on what he calls “PubMed walkabouts,” casually perusing that database of biomedical papers. One summer day, a Corona in hand, he came across a paper on something called LL37. It was described as an “antimicrobial peptide” that kills viruses, fungi, and bacteria, including — maybe especially — in the brain.

What caught his eye was that LL37’s size and structure and other characteristics were so similar to beta-amyloid, the two might be twins.

Moir hightailed it to Tanzi’s office next door. Serendipitously, Tanzi (also Corona-fueled) had just received new data from his study of genes that increase the risk of Alzheimer’s disease. Many of the genes, he saw, are involved in innate immunity, the body’s first line of defense against germs. If immune genetics affect Alzheimer’s, and if the chief suspect in Alzheimer’s (beta-amyloid) is a virtual twin of an antimicrobial peptide, maybe beta-amyloid is also an antimicrobial, Moir told Tanzi.

If so, then the plaques it forms might be the brain’s last-ditch effort to protect itself from microbes, a sort of Spider-Man silk that binds up pathogens to keep them from damaging the brain. Maybe they save the brain from pathogens in the short term only to themselves prove toxic over the long term.

Tanzi encouraged Moir to pursue that idea. “Rob was trained [by Marshall] to think out of the box,” Tanzi said. “He thinks so far out of the box he hasn’t found the box yet.”

Moir spent the next three years testing whether beta-amyloid can kill pathogens. He started simple, in test tubes and glass dishes. Those are relatively cheap, and Tanzi had enough funding to cover what Moir was doing: growing little microbial gardens in lab dishes and then trying to kill them.

Day after day, Moir and his junior colleagues played horticulturalists. They added staph and strep, the yeast candida, and the bacteria pseudomonas, enterococcus, and listeria to lab dishes filled with the nutrient medium agar. Once the microbes formed a thin layer on top, they squirted beta-amyloid onto it and hoped for an Alexander Fleming discovery-of-penicillin moment.


_________________________________

This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.

§107. Limitations on Exclusive Rights: Fair Use.  Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (Added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546)

Autoimmune disease

Autoimmune diseases occur when the body’s immune system targets and damages the body’s own cells.


Our bodies have an immune system: a network of special cells and organs that defends the body from germs and other foreign invaders.

At the core of the immune system is the ability to tell the difference between self and nonself: between what’s you and what’s foreign.

If the system becomes unable to tell the difference between self and nonself, the body makes autoantibodies (AW-toh-AN-teye-bah-deez) that attack normal cells by mistake.

Our immune systems also include regulatory T cells, which keep the rest of the immune system in line. If these regulatory cells fail to work correctly, other white blood cells can mistakenly attack parts of our own body. This damage is what we know as autoimmune disease.

The body parts that are affected depend on the type of autoimmune disease. There are more than 100 known types.

Overall, autoimmune diseases are common, affecting more than 23.5 million Americans. They are a leading cause of death and disability. Some autoimmune diseases are rare, while others, such as Hashimoto’s disease, affect many people.

(Intro adapted from U.S. Department of Health & Human Services, Office on Women’s Health)

Causes

There are many different autoimmune diseases, and each has its own cause. In fact, a single autoimmune disorder may have several different causes.

Medical researchers are still learning how autoimmune diseases develop. They appear to arise from a combination of genetic mutations and some trigger in the environment.

TBA: The hygiene hypothesis

Examples

Crohn’s disease

Diabetes (Type 1 diabetes mellitus)

Guillain-Barre syndrome

Inflammatory bowel disease (IBD)

Lupus (Systemic lupus erythematosus)

Multiple sclerosis (MS)

Rheumatoid arthritis

Treatment

Many autoimmune disorders can now be partially treated with biologics (engineered biological molecules). These biologics modulate the immune system; they can treat, but not cure, some autoimmune diseases.

Examples include infliximab, etanercept, and adalimumab.

Learning Standards

Massachusetts Comprehensive Health Curriculum Framework

Students will gain the knowledge and skills to select a diet that supports health and reduces the risk of illness and future chronic diseases. PreK–12 Standard 4

Through the study of Prevention students will

8.1 Describe how the body fights germs and disease naturally and with medicines and immunization.

Through the study of Signs, Causes, and Treatment students will

8.2 Identify the common symptoms of illness and recognize that being responsible for individual health means alerting caretakers to any symptoms of illness

8.5 Identify ways individuals can reduce risk factors related to communicable and chronic diseases

8.13 Explain how the immune system functions to prevent and combat disease

Benchmarks for Science Literacy, AAAS

The immune system functions to protect against microscopic organisms and foreign substances that enter from outside the body and against some cancer cells that arise within. 6C/H1*

Some allergic reactions are caused by the body’s immune responses to usually harmless environmental substances. Sometimes the immune system may attack some of the body’s own cells. 6E/H1