KaiserScience



Particle Detectors

A particle detector is a device used to detect, track, and/or identify ionizing particles.

These particles may have been produced by nuclear decay, cosmic radiation, or reactions in a particle accelerator.

In addition to registering a particle’s presence, particle detectors can measure its energy, momentum, spin, charge, and type.

Cloud Chamber

(Adapted from Wikipedia)

A cloud chamber, also known as a Wilson cloud chamber, is a particle detector used for visualizing the passage of ionizing radiation.

A cloud chamber consists of a sealed environment containing a supersaturated vapor of water or alcohol.

An energetic charged particle (for example, an alpha or beta particle) interacts with the gaseous mixture:

it knocks electrons off gas molecules via electrostatic forces during collisions.

This leaves a trail of ionized gas particles, which act as condensation centers: if the gas mixture is at the point of condensation, a mist-like trail of small droplets forms along the particle’s path.

These droplets are visible as a “cloud” track that persists for several seconds while the droplets fall through the vapor.

These tracks have characteristic shapes. For example, an alpha particle track is thick and straight, while an electron track is wispy and shows more evidence of deflections by collisions.
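The difference in track shape comes down to mass, and a toy simulation can make it concrete. The sketch below is my own illustration, not from the original page, and the step counts and deflection angles are made-up parameters: each track is a sequence of straight steps with a small random deflection per collision, so the heavy alpha particle barely deviates while the light electron scatters through large angles.

```python
import math
import random

def simulate_track(steps, max_deflection_rad):
    """Toy 2-D cloud-chamber track: straight steps with a random
    deflection at each collision. Larger deflections give the wispy,
    kinked tracks characteristic of light particles like electrons."""
    x, y, angle = 0.0, 0.0, 0.0
    points = [(x, y)]
    for _ in range(steps):
        angle += random.uniform(-max_deflection_rad, max_deflection_rad)
        x += math.cos(angle)
        y += math.sin(angle)
        points.append((x, y))
    return points

# Hypothetical parameters chosen only to show the qualitative contrast:
alpha_track = simulate_track(steps=50,  max_deflection_rad=0.01)  # thick, nearly straight
beta_track  = simulate_track(steps=300, max_deflection_rad=0.4)   # wispy, heavily deflected
```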

Cloud chambers played a prominent role in experimental particle physics from the 1920s to the 1950s, until the advent of the bubble chamber.

This is a Diffusion Cloud Chamber used for public demonstrations at the Museum of Technology in Berlin. The first part shows the alpha and beta radiation occurring around us all the time, thanks to normal activity in the atmosphere. Then a sample of Radon 220 (half-life 55 sec) is inserted into the chamber and all hell breaks loose as an alpha-decay party ensues!

Source: Derek McKenzie, Physics Footnotes, http://physicsfootnotes.com/radon-cloud-chamber/


CERN

How Particle Accelerators Work

Apps

The Particle Adventure app

There are five basic adventure paths to take: The Standard Model, Accelerators and Particle Detectors, Higgs Boson Discovered, Unsolved Mysteries, and Particle Decays and Annihilations.

Android – The Particle Adventure

iOS (Apple) The Particle Adventure

Interactive website sims

The Particle Adventure

CPEP Contemporary Physics Education Project

Learning Standards

SAT Subject Test: Physics

Quantum phenomena, such as photons and photoelectric effect
Atomic physics, such as the Rutherford and Bohr models, atomic energy levels, and atomic spectra
Nuclear and particle physics, such as radioactivity, nuclear reactions, and fundamental particles
Relativity, such as time dilation, length contraction, and mass-energy equivalence

A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas (2012)

Electromagnetic radiation can be modeled as a wave of changing electric and magnetic fields or as particles called photons. The wave model is useful for explaining many features of electromagnetic radiation, and the particle model explains other features. Quantum theory relates the two models…. Knowledge of quantum physics enabled the development of semiconductors, computer chips, and lasers, all of which are now essential components of modern imaging, communications, and information technologies.

 

Advertisements

Placebo effect

At the moment this is a placeholder article.


What if the Placebo Effect Isn’t a Trick? New research is zeroing in on a biochemical basis for the placebo effect — possibly opening a Pandora’s box for Western medicine.

The New York Times Magazine, Gary Greenberg, Nov 7, 2018

Give people a sugar pill, they have shown, and those patients — especially if they have one of the chronic, stress-related conditions that register the strongest placebo effects and if the treatment is delivered by someone in whom they have confidence — will improve. Tell someone a normal milkshake is a diet beverage, and his gut will respond as if the drink were low fat. Take athletes to the top of the Alps, put them on exercise machines and hook them to an oxygen tank, and they will perform better than when they are breathing room air — even if room air is all that’s in the tank. Wake a patient from surgery and tell him you’ve done an arthroscopic repair, and his knee gets better even if all you did was knock him out and put a couple of incisions in his skin. Give a drug a fancy name, and it works better than if you don’t.

You don’t even have to deceive the patients. You can hand a patient with irritable bowel syndrome a sugar pill, identify it as such and tell her that sugar pills are known to be effective when used as placebos, and she will get better, especially if you take the time to deliver that message with warmth and close attention. Depression, back pain, chemotherapy-related malaise, migraine, post-traumatic stress disorder: The list of conditions that respond to placebos — as well as they do to drugs, with some patients — is long and growing.

But as ubiquitous as the phenomenon is, and as plentiful the studies that demonstrate it, the placebo effect has yet to become part of the doctor’s standard armamentarium — and not only because it has a reputation as “fake medicine” doled out by the unscrupulous to the credulous. It also has, so far, resisted a full understanding, its mechanisms shrouded in mystery. Without a clear knowledge of how it works, doctors can’t know when to deploy it, or how.

Not that the researchers are without explanations. But most of these have traditionally been psychological in nature, focusing on mechanisms like expectancy — the set of beliefs that a person brings into treatment — and the kind of conditioning that Ivan Pavlov first described more than a century ago. These theories, which posit that the mind acts upon the body to bring about physical responses, tend to strike doctors and researchers steeped in the scientific tradition as insufficiently scientific to lend credibility to the placebo effect.

“What makes our research believable to doctors?” asks Ted Kaptchuk, head of Harvard Medical School’s Program in Placebo Studies and the Therapeutic Encounter. “It’s the molecules. They love that stuff.” As of now, there are no molecules for conditioning or expectancy — or, indeed, for Kaptchuk’s own pet theory, which holds that the placebo effect is a result of the complex conscious and nonconscious processes embedded in the practitioner-patient relationship — and without them, placebo researchers are hard-pressed to gain purchase in mainstream medicine.

But as many of the talks at the conference indicated, this might be about to change. Aided by functional magnetic resonance imaging (f.M.R.I.) and other precise surveillance techniques, Kaptchuk and his colleagues have begun to elucidate an ensemble of biochemical processes that may finally account for how placebos work and why they are more effective for some people, and some disorders, than others. The molecules, in other words, appear to be emerging. And their emergence may reveal fundamental flaws in the way we understand the body’s healing mechanisms, and the way we evaluate whether more standard medical interventions in those processes work, or don’t. Long a useful foil for medical science, the placebo effect might soon represent a more fundamental challenge to it.

In a way, the placebo effect owes its poor reputation to the same man who cast aspersions on going to bed late and sleeping in. Benjamin Franklin was, in 1784, the ambassador of the fledgling United States to King Louis XVI’s court. Also in Paris at the time was a Viennese physician named Franz Anton Mesmer. Mesmer had fled Vienna a few years earlier, when the local medical establishment determined that his claim to have cured a young woman’s blindness by putting her into a trance was false, and that, even worse, there was something unseemly about his relationship with her.

By the time he arrived in Paris and hung out his shingle, Mesmer had acquired what he lacked in Vienna: a theory to account for his ability to use trance states to heal people. There was, he claimed, a force pervading the universe called animal magnetism that could cause illness when perturbed. Conveniently enough for Mesmer, the magnetism could be perceived and de-perturbed only by him and people he had trained.

Mesmer’s method was strange, even in a day when doctors routinely prescribed bloodletting and poison to cure the common cold. A group of people complaining of maladies like fatigue, numbness, paralysis and chronic pain would gather in his office, take seats around an oak cask filled with water and grab on to metal rods immersed in the water. Mesmer would alternately chant, play a glass harmonium and wave his hands at the afflicted patients, who would twitch and cry out and sometimes even lose consciousness, whereupon they would be carried to a recovery room. Enough people reported good results that patients were continually lined up at Mesmer’s door waiting for the next session.

It was the kind of success likely to arouse envy among doctors, but more was at stake than professional turf. Mesmer’s claim that a force existed that could only be perceived and manipulated by the elect few was a direct challenge to an idea central to the Enlightenment: that the truth could be determined by anyone with senses informed by skepticism, that Scripture could be supplanted by facts and priests by a democracy of people who possessed them. So, when the complaints about Mesmer came to Louis, it was to the scientists that the king — at pains to show himself an enlightened man — turned. He appointed, among others, Lavoisier the chemist, Bailly the astronomer and Guillotin the physician to investigate Mesmer’s claims, and he installed Franklin at the head of their commission.

To the Franklin commission, the question wasn’t whether Mesmer was a fraud and his patients were dupes. Everyone could be acting in good faith, but belief alone did not prove that the magnetism was at work. To settle this question, they designed a series of trials that ruled out possible causes of the observed effects other than animal magnetism. The most likely confounding variable, they thought, was some faculty of mind that made people behave as they did under Mesmer’s ministrations. To rule this out, the panel settled upon a simple method: a blindfold. Over a period of a few months, they ran a series of experiments that tested whether people experienced the effects of animal magnetism even when they couldn’t see.

One of Mesmer’s disciples, Charles d’Eslon, conducted the tests. The panel instructed him to wave his hands at a part of a patient’s body, and then asked the patient where the effect was felt. They took him to a copse to magnetize a tree — Mesmer claimed that a patient could be treated by touching one — and then asked the patient to find it. They told patients d’Eslon was in the room when he was not, and vice versa, or that he was doing something that he was not. In trial after trial, the patients responded as if the doctor were doing what they thought he was doing, not what he was actually doing.

It was possibly the first-ever blinded experiment, and it soundly proved what scientists today call the null hypothesis: There was no causal connection between the behavior of the doctor and the response of the patients, which meant, as Franklin’s panel put it in their report, that “this agent, this fluid, has no existence.” That didn’t imply that people were pretending to twitch or cry out, or lying when they said they felt better; only that their behavior wasn’t a result of this nonexistent force. Rather, the panel wrote, “the imagination singly produces all the effects attributed to the magnetism.”

When the panel gave d’Eslon a preview of its findings, he took it with equanimity. Given the results of the treatment (as opposed to the experiment), he opined, the imagination, “directed to the relief of suffering humanity, would be a most valuable means in the hands of the medical profession” — a subject to which these august scientists might wish to apply their methods. But events intervened. Franklin was called back to America in 1785; Louis XVI had bigger trouble on his hands and, along with Lavoisier and Bailly, eventually met with the short, sharp shock of the device named for Guillotin.

The panel’s report was soon translated into English by William Godwin, the father of Mary Shelley. The story spread fast — not because of the healing potential that d’Eslon had suggested, but because of the implications for science as a whole. The panel had demonstrated that by putting imagination out of play, science could find the truth about our suffering bodies, in the same way it had found the truth about heavenly bodies.

Hiving off subjectivity from the rest of medical practice, the Franklin commission had laid the conceptual foundation for the brilliant discoveries of modern medicine, the antibiotics and vaccines and other drugs that can be dispensed by whoever happens to possess the prescription pad, and to whoever happens to have the disease. Without meaning to, they had created an epistemology for the healing arts — and, in the process, inadvertently conjured the placebo effect, and established it as that to which doctors must remain blind.

It wouldn’t be the last time science would turn its focus to the placebo effect only to quarantine it. At a 1955 meeting of the American Medical Association, the Harvard surgeon Henry Beecher pointed out to his colleagues that while they might have thought that placebos were fake medicine — even the name, which means “I shall please” in Latin, carries more than a hint of contempt — they couldn’t deny that the results were real. Beecher had been looking at the subject systematically, and he determined that placebos could relieve anxiety and postoperative pain, change the blood chemistry of patients in a way similar to drugs and even cause side effects. In general, he told them, more than one-third of patients would get better when given a treatment that was, pharmacologically speaking, inert.

If the placebo was as powerful as Beecher said, and if doctors wanted to know whether their drugs actually worked, it was not sufficient simply to give patients the drugs and see whether they did better than patients who didn’t interact with the doctor at all. Instead, researchers needed to assume that the placebo effect was part of every drug effect, and that drugs could be said to work only to the extent that they worked better than placebos. An accurate measure of drug efficacy would require comparing the response of patients taking it with that of patients taking placebos; the drug effect could then be calculated by subtracting the placebo response from the overall response, much as a deli-counter worker subtracts the weight of the container to determine how much lobster salad you’re getting.

In the last half of the 1950s, this calculus gave rise to a new way to evaluate drugs: the double-blind, placebo-controlled clinical trial, in which neither patient nor clinician knew who was getting the active drug and who the placebo. In 1962, when the Food and Drug Administration began to require pharmaceutical companies to prove their new drugs were effective before they came to market, they increasingly turned to the new method; today, virtually every prospective new drug has to outperform placebos on two independent studies in order to gain F.D.A. approval.

Like Franklin’s commission, the F.D.A. had determined that the only way to sort out the real from the fake in medicine was to isolate the imagination. It also echoed the royal panel by taking note of the placebo effect only long enough to dismiss it, giving it a strange dual nature: It’s included in clinical trials because it is recognized as an important part of every treatment, but it is treated as if it were not important in itself. As a result, although virtually every clinical trial is a study of the placebo effect, it remains underexplored — an outcome that reflects the fact that there is no money in sugar pills and thus no industry interest in the topic as anything other than a hurdle it needs to overcome.

When Ted Kaptchuk was asked to give the opening keynote address at the conference in Leiden, he contemplated committing the gravest heresy imaginable: kicking off the inaugural gathering of the Society for Interdisciplinary Placebo Studies by declaring that there was no such thing as the placebo effect.

When he broached this provocation in conversation with me not long before the conference, it became clear that his point harked directly back to Franklin: that the topic he and his colleagues studied was created by the scientific establishment, and only in order to exclude it — which means that they are always playing on hostile terrain. Science is “designed to get rid of the husks and find the kernels,” he told me.

Much can be lost in the threshing — in particular, Kaptchuk sometimes worries, the rituals embedded in the doctor-patient encounter that he thinks are fundamental to the placebo effect, and that he believes embody an aspect of medicine that has disappeared as scientists and doctors pursue the course laid by Franklin’s commission. “Medical care is a moral act,” he says, in which a suffering person puts his or her fate in the hands of a trusted healer.

“I don’t love science,” Kaptchuk told me. “I want to know what heals people.” Science may not be the only way to understand illness and healing, but it is the established way. “That’s where the power is,” Kaptchuk says. That instinct is why he left his position as director of a pain clinic in 1990 to join Harvard — and it’s why he was delighted when, in 2010, he was contacted by Kathryn Hall, a molecular biologist. Here was someone with an interest in his topic who was also an expert in molecules, and who might serve as an emissary to help usher the placebo into the medical establishment.

Hall’s own journey into placebo studies began 15 years before her meeting with Kaptchuk, when she developed a bad case of carpal tunnel syndrome. Wearing a wrist brace didn’t help, and neither did over-the-counter drugs or the codeine her doctor prescribed. When a friend suggested she visit an acupuncturist, Hall balked at the idea of such an unscientific approach. But faced with the alternative, surgery, she decided to make an appointment. “I was there for maybe 10 minutes,” she recalls, “when she stuck a needle here” — Hall points to a spot on her forearm — “and this awful pain just shot through my arm.” But then the pain receded and her symptoms disappeared, as if they had been carried away on the tide. She received a few more treatments, during which the acupuncturist taught her how to manipulate a spot near her elbow if the pain recurred. Hall needed the fix from time to time, but the problem mostly just went away.

“I couldn’t believe it,” she told me. “Two years of gross drugs, and then just one treatment.” All these years later, she’s still wonder-struck. “What was that?” she asks. “Rub the spot, and the pain just goes away?”

Hall was working for a drug company at the time, but she soon left to get a master’s degree in visual arts, after which she started a documentary-production company. She was telling her carpal-tunnel story to a friend one day and recounted how the acupuncturist had climbed up on the table with her. (“I was like, ‘Oh, my God, what is this woman doing?’ ” she told me. “It was very dramatic.”) She’d never been able to understand how the treatment worked, and this memory led her to wonder out loud if maybe the drama itself had something to do with the outcome.

Her friend suggested she might find some answers in Ted Kaptchuk’s work. She picked up his book about Chinese medicine, “The Web that Has No Weaver,” in which he mentioned the possibility that placebo effects figure strongly in acupuncture, and then she read a study he had conducted that put that question to the test.

Kaptchuk had divided people with irritable bowel syndrome into three groups. In one, acupuncturists went through all the motions of treatment, but used a device that only appeared to insert a needle. Subjects in a second group also got sham acupuncture, but delivered with more elaborate doctor-patient interaction than the first group received. A third group was given no treatment at all. At the end of the trial, both treatment groups improved more than the no-treatment group, and the “high interaction” group did best of all.

Kaptchuk, who before joining Harvard had been an acupuncturist in private practice, wasn’t particularly disturbed by the finding that his own profession worked even when needles were not actually inserted; he’d never thought that placebo treatments were fake medicine. He was more interested in how the strength of the treatment varied with the quality and quantity of interaction between the healer and the patient — the drama, in other words. Hall reached out to him shortly after she read the paper.

The findings of the I.B.S. study were in keeping with a hypothesis Kaptchuk had formed over the years: that the placebo effect is a biological response to an act of caring; that somehow the encounter itself calls forth healing and that the more intense and focused it is, the more healing it evokes. He elaborated on this idea in a comparative study of conventional medicine, acupuncture and Navajo “chantway rituals,” in which healers lead storytelling ceremonies for the sick. He argued that all three approaches unfold in a space set aside for the purpose and proceed as if according to a script, with prescribed roles for every participant. Each modality, in other words, is its own kind of ritual, and Kaptchuk suggested that the ritual itself is part of what makes the procedure effective, as if the combined experiences of the healer and the patient, reinforced by the special-but-familiar surroundings, evoke a healing response that operates independently of the treatment’s specifics. “Rituals trigger specific neurobiological pathways that specifically modulate bodily sensations, symptoms and emotions,” he wrote. “It seems that if the mind can be persuaded, the body can sometimes act accordingly.” He ended that paper with a call for further scientific study of the nexus between ritual and healing.

When Hall contacted him, she seemed like a perfect addition to the team he was assembling to do just that. He even had an idea of exactly how she could help. In the course of conducting the study, Kaptchuk had taken DNA samples from subjects in hopes of finding some molecular pattern among the responses. This was an investigation tailor-made to Hall’s expertise, and she agreed to take it on. Of course, the genome is vast, and it was hard to know where to begin — until, she says, she and Kaptchuk attended a talk in which a colleague presented evidence that an enzyme called COMT affected people’s response to pain and painkillers. Levels of that enzyme, Hall already knew, were also correlated with Parkinson’s disease, depression and schizophrenia, and in clinical trials people with those conditions had shown a strong placebo response. When they heard that COMT was also correlated with pain response — another area with significant placebo effects — Hall recalls, “Ted and I looked at each other and were like: ‘That’s it! That’s it!’ ”

It is not possible to assay levels of COMT directly in a living brain, but there is a snippet of the genome called rs4680 that governs the production of the enzyme, and that varies from one person to another: One variant predicts low levels of COMT, while another predicts high levels. When Hall analyzed the I.B.S. patients’ DNA, she found a distinct trend. Those with the high-COMT variant had the weakest placebo responses, and those with the opposite variant had the strongest. These effects were compounded by the amount of interaction each patient got: For instance, low-COMT, high-interaction patients fared best of all, but the low-COMT subjects who were placed in the no-treatment group did worse than the other genotypes in that group. They were, in other words, more sensitive to the impact of the relationship with the healer.

The discovery of this genetic correlation to placebo response set Hall off on a continuing effort to identify the biochemical ensemble she calls the placebome — the term reflecting her belief that it will one day take its place among the other important “-omes” of medical science, from the genome to the microbiome. The rs4680 gene snippet is one of a group that governs the production of COMT, and COMT is one of a number of enzymes that determine levels of catecholamines, a group of brain chemicals that includes dopamine and epinephrine. (Low COMT tends to mean higher levels of dopamine, and vice versa.) Hall points out that the catecholamines are associated with stress, as well as with reward and good feeling, which bolsters the possibility that the placebome plays an important role in illness and health, especially in the chronic, stress-related conditions that are most susceptible to placebo effects.

Her findings take their place among other results from neuroscientists that strengthen the placebo’s claim to a place at the medical table, in particular studies using f.M.R.I. machines that have found consistent patterns of brain activation in placebo responders. “For years, we thought of the placebo effect as the work of imagination,” Hall says. “Now through imaging you can literally see the brain lighting up when you give someone a sugar pill.”

One group with a particularly keen interest in those brain images, as Hall well knows, is her former employers in the pharmaceutical industry. The placebo effect has been plaguing their business for more than a half-century — since the placebo-controlled study became the clinical-trial gold standard, requiring a new drug to demonstrate a significant therapeutic benefit over placebo to gain F.D.A. approval.

That’s a bar that is becoming ever more difficult to surmount, because the placebo effect seems to be becoming stronger as time goes on. A 2015 study published in the journal Pain analyzed 84 clinical trials of pain medication conducted between 1990 and 2013 and found that in some cases the efficacy of placebo had grown sharply, narrowing the gap with the drugs’ effect from 27 percent on average to just 9 percent. The only studies in which this increase was detected were conducted in the United States, which has spawned a variety of theories to explain the phenomenon: that patients in the United States, one of only two countries where medications are allowed to be marketed directly to consumers, have been conditioned to expect greater benefit from drugs; or that the larger and longer-duration trials more common in America have led to their often being farmed out to contract organizations whose nurses’ only job is to conduct the trial, perhaps fostering a more placebo-triggering therapeutic interaction.

Whatever the reason, a result is that drugs that pass the first couple of stages of the F.D.A. approval process founder more and more frequently in the larger late-stage trials; more than 90 percent of pain medications now fail at this stage. The industry would be delighted if it were able to identify placebo responders — say, by their genome — and exclude them from clinical trials.

That may seem like putting a thumb on the scale for drugs, but under the logic of the drug-approval regime, to eliminate placebo effects is not to cheat; it merely reduces the noise in order for the drug’s signal to be heard more clearly. That simple logic, however, may not hold up as Hall continues her research into the genetic basis of the placebo. Indeed, that research may have deeper implications for clinical drug trials, and for the drugs themselves, than pharma companies might expect.

Since 2013, Hall has been involved with the Women’s Health Study, which has tracked the cardiovascular health of nearly 40,000 women over more than 20 years. The subjects were randomly divided into four groups, following standard clinical-trial protocol, and received a daily dose of either vitamin E, aspirin, vitamin E with aspirin or a placebo. A subset also had their DNA sampled — which, Hall realized, offered her a vastly larger genetic database to plumb for markers correlated to placebo response. Analyzing the data amassed during the first 10 years of the study, Hall found that the women with the low-COMT gene variant had significantly higher rates of heart disease than women with the high-COMT variant, and that the risk was reduced for those low-COMT women who received the active treatments but not in those given placebos. Among high-COMT people, the results were the inverse: Women taking placebos had the lowest rates of disease; people in the treatment arms had an increased risk.

These findings in some ways seem to confound the results of the I.B.S. study, in which it was the low-COMT patients who benefited most from the placebo. But, Hall argues, what’s important isn’t the direction of the effect, but rather that there is an effect, one that varies depending on genotype — and that the same gene variant also seems to determine the relative effectiveness of the drug. This outcome contradicts the logic underlying clinical trials. It suggests that placebo and drug do not involve separate processes, one psychological and the other physical, that add up to the overall effectiveness of the treatment; rather, they may both operate on the same biochemical pathway — the one governed in part by the COMT gene.

Hall has begun to think that the placebome will wind up essentially being a chemical pathway along which healing signals travel — and not only to the mind, as an experience of feeling better, but also to the body. This pathway may be where the brain translates the act of caring into physical healing, turning on the biological processes that relieve pain, reduce inflammation and promote health, especially in chronic and stress-related illnesses — like irritable bowel syndrome and some heart diseases. If the brain employs this same pathway in response to drugs and placebos, then of course it is possible that they might work together, like convoys of drafting trucks, to traverse the territory. But it is also possible that they will encroach on one another, that there will be traffic jams in the pathway.

What if, Hall wonders, a treatment fails to work not because the drug and the individual are biochemically incompatible, but rather because in some people the drug interferes with the placebo response, which if properly used might reduce disease? Or conversely, what if the placebo response is, in people with a different variant, working against drug treatments, which would mean that a change in the psychosocial context could make the drug more effective? Everyone may respond to the clinical setting, but there is no reason to think that the response is always positive. According to Hall’s new way of thinking, the placebo effect is not just some constant to be subtracted from the drug effect but an intrinsic part of a complex interaction among genes, drugs and mind. And if she’s right, then one of the cornerstones of modern medicine — the placebo-controlled clinical trial — is deeply flawed.

When Kathryn Hall told Ted Kaptchuk what she was finding as she explored the relationship of COMT to the placebo response, he was galvanized. “Get this molecule on the map!” he urged her. It’s not hard to understand his excitement. More than two centuries after d’Eslon suggested that scientists turn their attention directly to the placebo effect, she did exactly that and came up with a finding that might have persuaded even Ben Franklin.

But Kaptchuk also has a deeper unease about Hall’s discovery. The placebo effect can’t be totally reduced to its molecules, he feels certain — and while research like Hall’s will surely enhance its credibility, he also sees a risk in playing his game on scientific turf. “Once you start measuring the placebo effect in a quantitative way,” he says, “you’re transforming it to be something other than what it is. You suck out what was previously there and turn it into science.” Reduced to its molecules, he fears, the placebo effect may become “yet another thing on the conveyor belt of routinized care.”

“We’re dancing with the devil here,” Kaptchuk once told me, by way of demonstrating that he was aware of the risks he’s taking in using science to investigate a phenomenon it defined only to exclude. Kaptchuk, an observant Jew who is a student of both the Torah and the Talmud, later modified his comment. It’s more like Jacob wrestling with the angel, he said — a battle that Jacob won, but only at the expense of a hip injury that left him lame for the rest of his life.

Indeed, Kaptchuk seems wounded when he complains about the pervasiveness of research that uses healthy volunteers in academic settings, as if the response to mild pain inflicted on an undergraduate participating in an on-campus experiment is somehow comparable to the despair often suffered by people with chronic, intractable pain. He becomes annoyed when he talks about how quickly some of his colleagues want to move from these studies to clinical recommendations. And he can even be disparaging of his own work, wondering, for instance, whether the study in which placebos were openly given to irritable bowel syndrome patients succeeded only because it convinced the subjects that the sugar was really a drug. But it’s the prospect of what will become of his findings, and of the placebo, as they make their way into clinical practice, that really seems to torment him.

Kaptchuk may wish “to help reconfigure biomedicine by rejecting the idea that healing is only the application of mechanical tools.” He may believe that healing is a moral act in which “caring in the context of hope qualitatively changes clinical outcomes.” He may be convinced that the relationship kindled by the encounter between a suffering person and a healer is a central, and almost entirely overlooked, component of medical treatment. And he may have dedicated the last 20 years of his life to persuading the medical establishment to listen to him. But he may also come to regret the outcome.

After all, if Hall is right that clinician warmth is especially effective with a certain genotype, then, as she wrote in the paper presenting her findings from the I.B.S./sham-acupuncture study, it is also true that a different group will “derive minimum benefit” from “empathic attentions.” Should medical rituals be doled out according to genotype, with warmth and caring withheld in order to clear the way for the drugs? And if she is correct that a certain ensemble of neurochemical events underlies the placebo effect, then what is to stop a drug company from manufacturing a drug — a real drug, that is — that activates the same process pharmacologically? Welcomed back into the medical fold, the placebo effect may raise enough mischief to make Kaptchuk rue its return, and bewilder patients when they discover that their doctor’s bedside manner is tailored to their genes.

For the most part, most days, Kaptchuk manages to keep his qualms to himself, to carry on as if he were fully confident that scientific inquiry can restore the moral dimension to medicine. But the precariousness of his endeavors is never far from his mind. “Will this work destroy the stuff that actually has to do with wisdom, preciousness, imagination, the things that are actually critical to who we are as human beings?” he asks. His answer: “I don’t know, but I have to believe there is an infinite reserve of wisdom and imagination that will resist being reduced to simple materialistic explanations.”

The ability to hold two contradictory thoughts in mind at the same time seems to come naturally to Kaptchuk, but he may overestimate its prevalence in the rest of us. Even if his optimism is well placed, however, there’s nothing like being sick to make a person toss that kind of intelligence aside in favor of the certainties offered by modern medicine. Indeed, it’s exactly that yearning that sickness seems to awaken and that our healers, imbued with the power of science, purport to provide, no imagination required. Armed with our confidence in them, we’re pleased to give ourselves over to their ministrations, and pleased to believe that it’s the molecules, and the molecules alone, that are healing us. People do like to be cheated, after all.

Gary Greenberg is the author, most recently, of “The Book of Woe: The DSM and the Unmaking of Psychiatry.” He is a contributing editor for Harper’s Magazine. This is his first article for the magazine.

A version of this article appears in print on Nov. 11, 2018, on Page 50 of the Sunday Magazine with the headline: Why Nothing Works.

original link: http://www.nytimes.com/2018/11/07/magazine/placebo-effect-medicine.html

________________________________

 

This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (Added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546)

 

The Enlightenment

Notes for teachers who are covering the age of the Enlightenment


“A Reading in the Salon of Mme Geoffrin,” 1755, By Anicet Charles Gabriel Lemonnier. Marie Geoffrin was one of the leading female figures in the French Enlightenment. She hosted some of the most important Philosophes and Encyclopédistes of her time.

Introduction

For now, this introduction has been loosely adapted from the Wikipedia article.

French historians traditionally place the Enlightenment between 1715 (the year that Louis XIV died) and 1789 (the beginning of the French Revolution).

International historians often say that the Enlightenment began in the 1620s, with the start of the scientific revolution.

Earlier philosophers whose work influenced the Enlightenment included Bacon, Descartes, Locke, and Spinoza.

Many of the Enlightenment thinkers are known as les philosophes: French writers and thinkers who circulated their ideas through meetings at scientific academies, Masonic lodges, literary salons, and coffee houses, and in printed books and pamphlets.

The ideas of the Enlightenment undermined the authority of the monarchy and the Church. These ideas paved the way for the political revolutions of the 18th and 19th centuries.

Major figures of the Enlightenment included Beccaria, Diderot, Hume, Kant, Montesquieu, Rousseau, Adam Smith, and Voltaire.

Some European rulers, including Catherine II of Russia, Joseph II of Austria, and Frederick II of Prussia, tried to apply Enlightenment ideas about religious and political tolerance, an approach known as “enlightened absolutism.”

Benjamin Franklin visited Europe and contributed to the scientific and political debates there; he brought these ideas back to Philadelphia. Thomas Jefferson incorporated Enlightenment philosophy into the Declaration of Independence (1776). James Madison incorporated these ideas into the United States Constitution during its framing in 1787.

Secondary section (to be re-titled)

In his famous 1784 essay “What Is Enlightenment?”, Immanuel Kant defined it as follows:

“Enlightenment is man’s leaving his self-caused immaturity. Immaturity is the incapacity to use one’s own understanding without the guidance of another. Such immaturity is self-caused if its cause is not lack of intelligence, but by lack of determination and courage to use one’s intelligence without being guided by another. The motto of enlightenment is therefore: Have courage to use your own intelligence!”

By mid-century, Enlightenment thinking reached its pinnacle with Voltaire.

Born François Marie Arouet in 1694, Voltaire was exiled to England between 1726 and 1729, and there he studied Locke, Newton, and the English monarchy.

Voltaire’s ethos was: “Those who can make you believe absurdities can make you commit atrocities” – that is, if people believe what is unreasonable, they will do what is unreasonable.

Reforms sought

The Enlightenment sought to reform monarchy through laws that served the best interests of subjects, and through the “enlightened” ordering of society. In the 1750s there were attempts in England, Austria, Prussia, and France to “rationalize” the monarchical system and its laws. When these reforms failed to end wars, there was an increasing drive for revolution or dramatic alteration. Enlightenment ideas found their way to the heart of the American Declaration of Independence, the Jacobin program of the French Revolution, and the American Constitution of 1787.

Common values

Many values were common to Enlightenment thinkers, including:

✔ Nations exist to protect the rights of the individual, instead of the other way around.

✔ Each individual should be afforded dignity, and should be allowed to live his or her life with the maximum amount of personal freedom.

✔ Some form of Democracy is the best form of government.

✔ All of humanity, all races, nationalities and religions, are of equal worth and value.

✔ People have a right to free speech and expression, the right to free association, the right to hold to any – or no – religion; the right to elect their own leaders.

✔ The scientific method is our only ally in helping us discern fact from fiction.

✔Science, properly used, is a positive force for the good of all humanity.

✔ Classical religious dogma and mystical experiences are inferior to logic and philosophy.

✔ Theism – the belief in a God that wants morality – was held by most Enlightenment thinkers to be essential for a person to have good moral character. 

✔ Deism – to be added

✔ Some classical religious dogma has been harmful, causing crusades, Jihads, holy wars, or denial of human rights to various classes of people.

Learning Standards

Massachusetts History and Social Science Curriculum Framework

High School World History Content Standards

Topic 6: Philosophies of government and society. Supporting question: How did philosophies of government shape the everyday lives of people? 34. Identify the origins and the ideals of the European Enlightenment, such as happiness, reason, progress, liberty, and natural rights, and how intellectuals of the movement (e.g., Denis Diderot, Immanuel Kant, John Locke, Charles de Montesquieu, Jean-Jacques Rousseau, Mary Wollstonecraft, Cesare Beccaria, Voltaire, or social satirists such as Molière and William Hogarth) exemplified these ideals in their work and challenged existing political, economic, social, and religious structures.

New York State Grades 9-12 Social Studies Framework

9.9 TRANSFORMATION OF WESTERN EUROPE AND RUSSIA:

9.9d The development of the Scientific Revolution challenged traditional authorities and beliefs.  Students will examine the Scientific Revolution, including the influence of Galileo and Newton.
9.9e The Enlightenment challenged views of political authority and how power and authority were conceptualized.

10.2: ENLIGHTENMENT, REVOLUTION, AND NATIONALISM: The Enlightenment called into question traditional beliefs and inspired widespread political, economic, and social change. This intellectual movement was used to challenge political authorities in Europe and colonial rule in the Americas. These ideals inspired political and social movements.

10.2a Enlightenment thinkers developed political philosophies based on natural laws, which included the concepts of social contract, consent of the governed, and the rights of citizens.

10.2b Individuals used Enlightenment ideals to challenge traditional beliefs and secure people’s rights in reform movements, such as women’s rights and abolition; some leaders may be considered enlightened despots.

10.2c Individuals and groups drew upon principles of the Enlightenment to spread rebellions and call for revolutions in France and the Americas.

History–Social Science Content Standards for California Public Schools

7.11 Students analyze political and economic change in the sixteenth, seventeenth, and eighteenth centuries (the Age of Exploration, the Enlightenment, and the Age of Reason).
1. Know the great voyages of discovery, the locations of the routes, and the influence of cartography in the development of a new European worldview.
2. Discuss the exchanges of plants, animals, technology, culture, and ideas among Europe, Africa, Asia, and the Americas in the fifteenth and sixteenth centuries and the major economic and social effects on each continent.
3. Examine the origins of modern capitalism; the influence of mercantilism and cottage industry; the elements and importance of a market economy in seventeenth-century Europe; the changing international trading and marketing patterns, including their locations on a world map; and the influence of explorers and map makers.
4. Explain how the main ideas of the Enlightenment can be traced back to such movements as the Renaissance, the Reformation, and the Scientific Revolution and to the Greeks, Romans, and Christianity.
5. Describe how democratic thought and institutions were influenced by Enlightenment thinkers (e.g., John Locke, Charles-Louis Montesquieu, American founders).
6. Discuss how the principles in the Magna Carta were embodied in such documents as the English Bill of Rights and the American Declaration of Independence.

AP World History

The 18th century marked the beginning of an intense period of revolution and rebellion against existing governments, and the establishment of new nation-states around the world.

I. The rise and diffusion of Enlightenment thought that questioned established traditions in all areas of life often preceded the revolutions and rebellions against existing governments.

Also see AP Worldipedia. Key Concept 5.3 Nationalism, Revolution, and Reform

Australopithecus skeleton


A team of Northeast Ohio researchers announced a rare and important find – the partial skeleton of a 3.6 million-year-old early human ancestor belonging to the same species as, but much older than, the iconic 3.2 million-year-old Lucy fossil discovered in 1974.

Fewer than 10 such largely intact skeletons 1.5 million years old or older have been found. Greater Cleveland researchers have played leading roles in three of those discoveries, reinforcing the region’s prominence in the search for humanity’s origins.

The new specimen is called Kadanuumuu (pronounced Kah-dah-NEW-moo). The nickname means “big man” in the language of the Afar tribesmen who helped unearth his weathered bones from a hardscrabble Ethiopian plain beginning in 2005.

“Big” is an apt description of both Kadanuumuu’s stature and his significance. The scientists who analyzed the long-legged fossil say it erases any doubts about stubby Lucy and her kind’s ability to walk well on two legs, and reveals new information about when and how bipedality developed.

“It’s all about human-like bipedality evolving earlier than some people think,” said Cleveland Museum of Natural History anthropologist Yohannes Haile-Selassie.

– http://www.cleveland.com/science/index.ssf/2010/06/partial_skeleton_from_lucys_sp.html

“Many dozens of A. afarensis fossils have been uncovered since Lucy was discovered in 1974, but none as complete as this one. Kadanuumuu’s forearm was first extracted from a hunk of mudstone in February 2005, and subsequent expeditions uncovered an entire knee, part of a pelvis, and well preserved sections of the thorax.
“We have the clavicle, a first rib, a scapula, and the humerus,” says physical anthropologist Bruce Latimer of Case Western Reserve University in Cleveland, Ohio, one of the co-leaders on the dig. “That enables us to say something about how [Kadanuumuu] was using its arm, and it was clearly not using it the way an ape uses it. It finally takes knuckle-walking off the table.” At five and a half feet tall, Kadanuumuu would also have towered two feet over Lucy, lending support to the view that there was a high degree of sexual dimorphism in the species.”

– Archaeology, “Kadanuumuu” – Woranso-Mille, Ethiopia Volume 64 Number 1, January/February 2011 by Brendan Borrell


The future of photography on phones depends on coding

Note to students: When we talk about coding, we mean computer programming (“writing code”). More specifically, we mean code that uses sophisticated mathematics.

________________

From “The Future of Photography is Code”

Devin Coldewey, 10/22/2018

What’s in a camera? A lens, a shutter, a light-sensitive surface and, increasingly, a set of highly sophisticated algorithms. While the physical components are still improving bit by bit, Google, Samsung and Apple are increasingly investing in (and showcasing) improvements wrought entirely from code. Computational photography is the only real battleground now.

The reason for this shift is pretty simple: Cameras can’t get too much better than they are right now, or at least not without some rather extreme shifts in how they work. Here’s how smartphone makers hit the wall on photography, and how they were forced to jump over it.

Oppo N3 cellphone camera

The sensors in our smartphone cameras are truly amazing things. The work that’s been done by the likes of Sony, OmniVision, Samsung and others to design and fabricate tiny yet sensitive and versatile chips is really pretty mind-blowing. For a photographer who’s watched the evolution of digital photography from the early days, the level of quality these microscopic sensors deliver is nothing short of astonishing.

But there’s no Moore’s Law for those sensors. Or rather, just as Moore’s Law is now running into quantum limits at sub-10-nanometer levels, camera sensors hit physical limits much earlier. Think about light hitting the sensor as rain falling on a bunch of buckets; you can place bigger buckets, but there are fewer of them; you can put smaller ones, but they can’t catch as much each; you can make them square or stagger them or do all kinds of other tricks, but ultimately there are only so many raindrops and no amount of bucket-rearranging can change that.
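To put rough numbers on the bucket analogy, here is a minimal sketch; all values are hypothetical, chosen only for illustration. For a fixed sensor area and fixed incoming light, splitting the area into more pixels means fewer photons per pixel, and since photon arrival is Poisson-distributed, the per-pixel signal-to-noise ratio falls no matter how the buckets are arranged.

```python
import numpy as np

rng = np.random.default_rng(0)

flux = 1_000_000.0      # photons per mm^2 over the exposure (hypothetical)
sensor_area_mm2 = 40.0  # roughly phone-class sensor area

for n_pixels in (1_000_000, 4_000_000, 16_000_000):
    pixel_area = sensor_area_mm2 / n_pixels     # smaller buckets...
    mean_photons = flux * pixel_area            # ...catch fewer raindrops
    sample = rng.poisson(mean_photons, 10_000)  # photon shot noise is Poisson
    print(f"{n_pixels:>10,} pixels: {mean_photons:7.1f} photons/pixel, "
          f"SNR ~ {sample.mean() / sample.std():.1f}")
```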

Samsung Galaxy S camera sensor

Photo by Petar Milošević, from Wikimedia, commons.wikimedia.org/wiki/File:Samsung_Galaxy_S_camera_sensor.jpg

Sensors are getting better, yes, but not only is this pace too slow to keep consumers buying new phones year after year (imagine trying to sell a camera that’s 3 percent better), but phone manufacturers often use the same or similar camera stacks, so the improvements (like the recent switch to backside illumination) are shared amongst them. So no one is getting ahead on sensors alone.

Photos from grayscale cellphone cameras

Image from FLIR Machine Vision, https://www.ptgrey.com/white-paper/id/10912

Perhaps they could improve the lens? Not really. Lenses have arrived at a level of sophistication and perfection that is hard to improve on, especially at small scale. To say space is limited inside a smartphone’s camera stack is a major understatement — there’s hardly a square micron to spare. You might be able to improve them slightly as far as how much light passes through and how little distortion there is, but these are old problems that have been mostly optimized.

The only way to gather more light would be to increase the size of the lens, either by having it A: project outwards from the body; B: displace critical components within the body; or C: increase the thickness of the phone. Which of those options does Apple seem likely to find acceptable?

In retrospect it was inevitable that Apple (and Samsung, and Huawei, and others) would have to choose D: none of the above. If you can’t get more light, you just have to do more with the light you’ve got.

Isn’t all photography computational?

The broadest definition of computational photography includes just about any digital imaging at all. Unlike film, even the most basic digital camera requires computation to turn the light hitting the sensor into a usable image. And camera makers differ widely in the way they do this, producing different JPEG processing methods, RAW formats and color science.

For a long time there wasn’t much of interest on top of this basic layer, partly from a lack of processing power. Sure, there have been filters, and quick in-camera tweaks to improve contrast and color. But ultimately these just amount to automated dial-twiddling.

The first real computational photography features were arguably object identification and tracking for the purposes of autofocus. Face and eye tracking made it easier to capture people in complex lighting or poses, and object tracking made sports and action photography easier as the system adjusted its AF point to a target moving across the frame.

These were early examples of deriving metadata from an image and using it proactively, to improve that image or feed forward to the next.

In DSLRs, autofocus accuracy and flexibility are marquee features, so this early use case made sense; but outside a few gimmicks, these “serious” cameras generally deployed computation in a fairly vanilla way. Faster image sensors meant faster sensor offloading and burst speeds, some extra cycles dedicated to color and detail preservation and so on. DSLRs weren’t being used for live video or augmented reality. And until fairly recently, the same was true of smartphone cameras, which were more like point and shoots than the all-purpose media tools we know them as today.

The limits of traditional imaging

Despite experimentation here and there and the occasional outlier, smartphone cameras are pretty much the same. They have to fit within a few millimeters of depth, which limits their optics to a few configurations. The size of the sensor is likewise limited — a DSLR might use an APS-C sensor 23 by 15 millimeters across, making an area of 345 mm2; the sensor in the iPhone XS, probably the largest and most advanced on the market right now, is 7 by 5.8 mm or so, for a total of 40.6 mm2.

Roughly speaking, it’s collecting an order of magnitude less light than a “normal” camera, but is expected to reconstruct a scene with roughly the same fidelity, colors and such — around the same number of megapixels, too. On its face this is sort of an impossible problem.
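The area arithmetic behind that “order of magnitude” claim, spelled out with the same numbers as in the text:

```python
aps_c_mm2  = 23 * 15    # DSLR APS-C sensor: 345 mm^2
iphone_mm2 = 7 * 5.8    # iPhone XS-class sensor: ~40.6 mm^2
print(aps_c_mm2 / iphone_mm2)  # ~8.5x less area, i.e. roughly an order of magnitude less light
```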

Improvements in the traditional sense help out — optical and electronic stabilization, for instance, make it possible to expose for longer without blurring, collecting more light. But these devices are still being asked to spin straw into gold.

Luckily, as I mentioned, everyone is pretty much in the same boat. Because of the fundamental limitations in play, there’s no way Apple or Samsung can reinvent the camera or come up with some crazy lens structure that puts them leagues ahead of the competition. They’ve all been given the same basic foundation.

All competition therefore comprises what these companies build on top of that foundation.


Image as stream

The key insight in computational photography is that an image coming from a digital camera’s sensor isn’t a snapshot, the way it is generally thought of. In traditional cameras the shutter opens and closes, exposing the light-sensitive medium for a fraction of a second. That’s not what digital cameras do, or at least not what they can do.

A camera’s sensor is constantly bombarded with light; rain is constantly falling on the field of buckets, to return to our metaphor, but when you’re not taking a picture, these buckets are bottomless and no one is checking their contents. But the rain is falling nevertheless.

To capture an image the camera system picks a point at which to start counting the raindrops, measuring the light that hits the sensor. Then it picks a point to stop. For the purposes of traditional photography, this enables nearly arbitrarily short shutter speeds, though very short exposures aren’t much use to tiny sensors, which need all the light they can get.

Why not just always be recording? Theoretically you could, but it would drain the battery and produce a lot of heat. Fortunately, in the last few years image processing chips have gotten efficient enough that they can, when the camera app is open, keep a certain duration of that stream — limited resolution captures of the last 60 frames, for instance. Sure, it costs a little battery, but it’s worth it.
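As a rough illustration, the “keep the last N frames” behavior is just a fixed-size ring buffer. Here is a minimal sketch in Python; the frame objects and the camera callback are hypothetical stand-ins for a real imaging pipeline:

```python
from collections import deque

MAX_FRAMES = 60  # e.g. the last 60 limited-resolution captures

# A deque with maxlen acts as a ring buffer: appending when full
# silently discards the oldest frame.
frame_buffer = deque(maxlen=MAX_FRAMES)

def on_new_frame(frame):
    """Hypothetical callback, invoked for every frame while the camera app is open."""
    frame_buffer.append(frame)

def capture():
    """When the shutter fires, the recent past is already in memory."""
    return list(frame_buffer)
```

The point of the sketch is the memory bound: the cost of “always recording” is fixed at N frames, no matter how long the app stays open.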

Access to the stream allows the camera to do all kinds of things. It adds context.

Context can mean a lot of things. It can be photographic elements like the lighting and distance to subject. But it can also be motion, objects, intention.

A simple example of context is what is commonly referred to as HDR, or high dynamic range imagery. This technique uses multiple images taken in a row with different exposures to more accurately capture areas of the image that might have been underexposed or overexposed in a single exposure. The context in this case is understanding which areas those are and how to intelligently combine the images together.

This can be accomplished with exposure bracketing, a very old photographic technique, but it can be accomplished instantly and without warning if the image stream is being manipulated to produce multiple exposure ranges all the time. That’s exactly what Google and Apple now do.
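The merge step itself can be sketched very simply. This toy example assumes three already-aligned bracketed exposures stored as float arrays in [0, 1], and weights each pixel by how far it sits from clipping; real pipelines also align the frames and apply calibrated camera response curves:

```python
import numpy as np

def merge_hdr(exposures):
    """exposures: list of HxWx3 float32 arrays in [0, 1], same (aligned) scene."""
    stack = np.stack(exposures)              # shape (N, H, W, 3)
    # Mid-tone pixels get high weight; crushed shadows and blown
    # highlights (values near 0 or 1) get almost none.
    weights = 1.0 - np.abs(stack - 0.5) * 2.0
    weights = np.clip(weights, 1e-3, None)   # avoid division by zero
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)

# Synthetic demo: one underexposed, one normal, one overexposed frame.
base = np.linspace(0.0, 1.0, 48, dtype=np.float32).reshape(4, 4, 3)
under, normal, over = base * 0.25, base, np.clip(base * 1.8, 0.0, 1.0)
hdr = merge_hdr([under, normal, over])
```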

Something more complex is of course the “portrait mode” and artificial background blur or bokeh that is becoming more and more common. Context here is not simply the distance of a face, but an understanding of what parts of the image constitute a particular physical object, and the exact contours of that object. This can be derived from motion in the stream, from stereo separation in multiple cameras, and from machine learning models that have been trained to identify and delineate human shapes.
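Once such a mask exists, the compositing step is straightforward. Here is a sketch of just that final step, assuming the person mask has already been produced upstream (by stereo separation, motion, or a segmentation model); in a real portrait mode the hard work is producing the mask and the quality of the blur:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_bokeh(image, person_mask, blur_sigma=8.0):
    """image: HxWx3 float array; person_mask: HxW float array in [0, 1],
    1.0 where the subject is, 0.0 in the background."""
    # Blur each color channel of the whole frame.
    blurred = np.stack(
        [gaussian_filter(image[..., c], blur_sigma) for c in range(3)], axis=-1
    )
    mask = person_mask[..., None]  # broadcast the mask over color channels
    # Keep the subject sharp; take the blurred frame elsewhere.
    return image * mask + blurred * (1.0 - mask)

img = np.random.rand(32, 32, 3)
mask = np.zeros((32, 32)); mask[8:24, 8:24] = 1.0  # pretend this is the person
out = fake_bokeh(img, mask)
```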

These techniques are only possible, first, because the requisite imagery has been captured from the stream in the first place (an advance in image sensor and RAM speed), and second, because companies developed highly efficient algorithms to perform these calculations, trained on enormous data sets and immense amounts of computation time.

What’s important about these techniques, however, is not simply that they can be done, but that one company may do them better than the other. And this quality is entirely a function of the software engineering work and artistic oversight that goes into them.

DxOMark did a comparison of some early artificial bokeh systems; the results, however, were somewhat unsatisfying. It was less a question of which looked better, and more of whether they failed or succeeded in applying the effect. Computational photography is in such early days that it is enough for the feature to simply work to impress people. As with a dog walking on its hind legs, the wonder is not that it is done well, but that it is done at all.

But Apple has pulled ahead with what some would say is an almost absurdly over-engineered solution to the bokeh problem. It didn’t just learn how to replicate the effect — it used the computing power it has at its disposal to create virtual physical models of the optical phenomenon that produces it. It’s like the difference between animating a bouncing ball and simulating realistic gravity and elastic material physics.

Why go to such lengths? Because Apple knows what is becoming clear to others: that it is absurd to worry about the limits of computational capability at all. There are limits to how well an optical phenomenon can be replicated if you are taking shortcuts like Gaussian blurring. There are no limits to how well it can be replicated if you simulate it at the level of the photon.
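A small example makes the Gaussian-shortcut point visible. A real out-of-focus point of light spreads into a crisp-edged disc shaped like the aperture, not a soft Gaussian falloff; the sketch below builds a disc kernel, the simplest step toward an optically motivated blur. (Apple’s actual simulation is far more elaborate; this only illustrates the distinction.)

```python
import numpy as np
from scipy.signal import convolve2d

def disc_kernel(radius):
    """A normalized circular ('aperture-shaped') convolution kernel."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    kernel = (x**2 + y**2 <= radius**2).astype(np.float64)
    return kernel / kernel.sum()

# A single bright point: convolved with the disc kernel it becomes a
# hard-edged circle, the way a defocused highlight actually renders.
channel = np.zeros((64, 64))
channel[32, 32] = 1.0
bokeh_disc = convolve2d(channel, disc_kernel(8), mode="same")
```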

Similarly the idea of combining five, 10, or 100 images into a single HDR image seems absurd, but the truth is that in photography, more information is almost always better. If the cost of these computational acrobatics is negligible and the results measurable, why shouldn’t our devices be performing these calculations? In a few years they too will seem ordinary.

If the result is a better product, the computational power and engineering ability have been deployed with success; just as Leica or Canon might spend millions to eke fractional performance improvements out of a stable optical system like a $2,000 zoom lens, Apple and others are spending money where they can create value: not in glass, but in silicon.

Double vision

One trend that may appear to conflict with the computational photography narrative I’ve described is the advent of systems comprising multiple cameras.

This technique doesn’t add more light to the sensor — that would be prohibitively complex and expensive optically, and probably wouldn’t work anyway. But if you can free up a little space lengthwise (rather than depthwise, which we found impractical) you can put a whole separate camera right by the first that captures photos extremely similar to those taken by the first.

Now, if all you want to do is re-enact Wayne’s World at an imperceptible scale (camera one, camera two… camera one, camera two…) that’s all you need. But no one actually wants to take two images simultaneously, a fraction of an inch apart.

These two cameras operate either independently (as wide-angle and zoom) or one is used to augment the other, forming a single system with multiple inputs.

The thing is that taking the data from one camera and using it to enhance the data from another is — you guessed it — extremely computationally intensive. It’s like the HDR problem of multiple exposures, except far more complex as the images aren’t taken with the same lens and sensor. It can be optimized, but that doesn’t make it easy.
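To see why, consider just the alignment step. Before any fusion, pixels from one camera must be mapped onto the other’s frame; a common baseline approach is feature matching plus a homography, sketched here with OpenCV (a generic technique, not any particular phone maker’s pipeline):

```python
import cv2
import numpy as np

def align(img_a, img_b):
    """Warp img_a onto img_b's frame. Both are 8-bit grayscale arrays."""
    orb = cv2.ORB_create(1000)                      # detect keypoints
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:100]

    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robustly estimate the perspective transform between the two views.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img_b.shape[:2]
    return cv2.warpPerspective(img_a, H, (w, h))
```

And that is only alignment; the different lenses, sensors and noise characteristics of the two cameras still have to be reconciled on top of it, per frame, in real time.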

So although adding a second camera is indeed a way to improve the imaging system by physical means, the possibility only exists because of the state of computational photography. And it is the quality of that computational imagery that results in a better photograph — or doesn’t. The Light camera with its 16 sensors and lenses is an example of an ambitious effort that simply didn’t produce better images, though it was using established computational photography techniques to harvest and winnow an even larger collection of images.

Light and code

The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices and everywhere that light is captured and turned into images.

Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: Glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.

What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, pseudoscientific babble on stage, and drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved on to a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.

_______________

Related articles

Your Smartphone Should Suck. Here’s Why It Doesn’t. (Wired magazine article)

How can we use slow shutter speed technique during day time?

The Exposure Triangle – A Beginner’s Guide

_______________

This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (Added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546)

 

Modeling DNA with Legos

Students learn best when they develop mental models. For many students this is almost automatic – they see diagrams and can internally translate them into… TBA

But for many other students … TBA

One great solution is to physically create models of molecules… TBA

Strengths and limitations of using Legos TBA

(article to be written)

[Image: Legos DNA purines pyrimidines 1]

In this next image we see (TBA)

[Image: Legos DNA purines pyrimidines 4]

TBA

Learning Expectations

2016 Massachusetts Science and Technology/Engineering Curriculum Framework

8.MS-PS1-1. Develop a model to describe that (a) atoms combine in a multitude of ways to produce pure substances which make up all of the living and nonliving things that we encounter.

HS-LS1-1. Construct a model of transcription and translation to explain the roles of DNA and RNA that code for proteins that regulate and carry out essential functions of life.

HS-LS1-6. Construct an explanation based on evidence that organic molecules are primarily composed of six elements, where carbon, hydrogen, and oxygen atoms may combine with nitrogen, sulfur, and phosphorus to form monomers that can further combine to form large carbon-based macromolecules.

College Board Standards for College Success: Science

LSM-PE.5.2.2 Construct a representation of DNA replication, showing how the helical DNA molecule unzips and how nucleotide bases pair with the DNA template to form a duplicate of the DNA molecule.

Benchmarks for Science Literacy, AAAS

The information passed from parents to offspring is coded in DNA molecules, long chains linking just four kinds of smaller molecules, whose precise sequence encodes genetic information. 5B/H3*

Genes are segments of DNA molecules. Inserting, deleting, or substituting segments of DNA molecules can alter genes. An altered gene may be passed on to every cell that develops from it. The resulting features may help, harm, or have little or no effect on the offspring’s success in its environment. 5B/H4*

NSTA Position Statement: The Teaching of Climate Science

NSTA National Science Teachers Association

The National Science Teachers Association (NSTA) acknowledges that decades of research and overwhelming scientific consensus indicate with increasing certainty that Earth’s climate is changing, largely due to human-induced increases in the concentrations of heat-absorbing gases (IPCC 2014; Melillo, Richmond, and Yohe 2014).

The scientific consensus on the occurrence, causes, and consequences of climate change is both broad and deep (Melillo, Richmond, and Yohe 2014). The nation’s leading scientific organizations support the core findings related to climate change, as do a broad range of government agencies, university and government research centers, educational organizations, and numerous international groups (NCSE 2017; U.S. Global Change Research Program 2017).

According to the National Academy of Sciences, “it is now more certain than ever, based on many lines of evidence, that humans are changing Earth’s climate” (NAS 2014). Scientific evidence advances our understanding of the challenges that climate change presents and of the need for people to prepare for and respond to its far-reaching implications (Melillo, Richmond, and Yohe 2014; Watts 2017).

The science of climate change is firmly rooted in decades of peer-reviewed scientific literature and is as sound and advanced as other established geosciences that have provided deep understandings in fields such as plate tectonics and planetary astronomy. As such, A Framework for K–12 Science Education (Framework) recommends that foundational climate change science concepts be included as part of a high-quality K–12 science education (NRC 2012).

Given the solid scientific foundation on which climate change science rests, any controversies regarding climate change and human-caused contributions to climate change that are based on social, economic, or political arguments—rather than scientific arguments—should not be part of a science curriculum.

NSTA recognizes that because of confusion and misinformation, many Americans do not think that the scientific basis for climate change is established and well-grounded (Leiserowitz 2005; van der Linden et al. 2015).

This belief, coupled with political efforts to actively promote the inclusion of non-scientific ideas in science classrooms (Plutzer et al. 2016), is negatively affecting science instruction in some schools. Active opposition to and the anticipation of opposition to climate change science from students, parents, other subject-area teachers, and/or school leadership is having a documented negative impact on science teachers in some states and local school districts (Plutzer et al. 2016).

Teachers are facing pressure to not only eliminate or de-emphasize climate change science, but also to introduce non-scientific ideas in science classrooms (NESTA 2011; Branch 2013; Branch, Rosenau, and Berbeco 2016).

This pressure sometimes takes the form of rhetorical tactics, such as “teach the controversy,” that are not based on science. Scientific explanations must be consistent with existing empirical evidence or stand up to empirical testing. Ideas based on political ideologies or pseudoscience that fail these empirical tests do not constitute science and should not be allowed to compromise the teaching of climate science. These tactics promote the teaching of non-scientific ideas that deliberately misinform students and increase confusion about climate science.

In conclusion, our knowledge of all the sciences, including climate science, grows and changes through the continual process of scientific exploration, investigation, and dialogue. While the details of scientific understandings about the Earth’s climate will undoubtedly evolve in the future, a large body of foundational knowledge exists regarding climate science that is agreed upon by the scientific community and should be included in science education at all levels. These understandings include the increase in global temperatures and the significant impact of human activities on these increases (U.S. Global Change Research Program 2009), as well as mitigation and resilience strategies that human societies may choose to adopt. Students in today’s classrooms will be the ones carrying forward these decisions, which are already well underway in communities across the world.

NSTA confirms the solid scientific foundation on which climate change science rests and advocates for quality, evidence-based science to be taught in science classrooms in grades K–12 and higher education.

Declarations

To ensure a high-quality K–12 science education constructed upon evidence-based science, including the science of climate change, NSTA recommends that teachers of science

  • recognize the cumulative weight of scientific evidence that indicates Earth’s climate is changing, largely due to human-induced increases in the concentration of heat-absorbing gases (IPCC 2014; Melillo, Richmond, and Yohe 2014);
  • emphasize to students that no scientific controversy exists regarding the basic facts of climate change and that any controversies are based on social, economic, or political arguments and are not science;
  • deliver instruction using evidence-based science, including climate change, human impacts on natural systems, human sustainability, and engineering design, as recommended by the Framework for K–12 Science Education (Framework);
  • expand the instruction of climate change science across the K–12 span, consistent with learning progressions offered in the Framework;
  • advocate for integrating climate and climate change science across the K–12 curriculum beyond STEM (science, technology, engineering, and mathematics) classes;
  • teach climate change as any other established field of science and reject pressures to eliminate or de-emphasize climate-based science concepts in science instruction;
  • recognize that scientific argumentation is not the same as arguing beliefs and opinions. It requires the use of evidence-based scientific explanations to defend arguments and critically evaluate the claims of others;
  • plan instruction on the premise that debates and false-equivalence arguments are not demonstrably effective science teaching strategies;
  • help students learn how to use scientific evidence to evaluate claims made by others, including those from media sources that may be politically or socially biased;
  • provide students with the historical basis in science that recognizes the relationship between heat-absorbing greenhouse gases—especially those that are human-induced—and the amount of energy in the atmosphere;
  • highlight for students the datasets from which scientific consensus models are built and describe how they have been tested and refined;
  • recognize that attempts to use large-scale climate intervention to halt or reverse rapid climate change are well beyond simple solutions and will likely result in both intended and unintended consequences in the Earth system (NRC 2015; USGCRP 2017);
  • analyze different climate change mitigation strategies with students, including those that reduce carbon emissions as well as those aimed at building resilience to the effects of global climate change;
  • seek out resources and professional learning opportunities to better understand climate science and explore effective strategies for teaching climate science accurately while acknowledging social or political controversy; and
  • analyze future climate change scenarios and their relationships to societal decisions regarding energy-source and land-use choices.

Necessary Support Structures

To support the work of teachers of science, NSTA recommends that school administrators, school boards, and school and district leaders

  • ensure the use of evidence-based scientific information when addressing climate change and climate science in all parts of the school curriculum, such as social studies, mathematics, and reading;
  • provide teachers of science with ongoing professional learning opportunities to strengthen their content knowledge, enhance their teaching of scientific practices, and help them develop confidence to address socially controversial topics in the classroom;
  • support teachers as they review, adopt, and implement evidence-based science curricula and curricular materials that accurately represent the occurrence of, evidence for, and responses to climate change;
  • ensure teachers have adequate time, guidance, and resources to learn about climate science and have continued access to these resources;
  • resist pressures to promote non-scientific views that seek to deemphasize or eliminate the scientific study of climate change, or to misrepresent the scientific evidence for climate change; and
  • provide full support to teachers in the event of community-based conflict.

To support the teaching of climate change in K–12 school science, NSTA recommends that state and district policy makers

  • ensure that licensure and preparation standards for all teachers of science include science practices and climate change science content;
  • ensure that instructional materials considered for adoption are based on both recognized practices and contemporary, scientifically accurate data;
  • preserve the quality of science education by rejecting censorship, pseudoscience, logical fallacies, faulty scholarship, narrow political agendas, or unconstitutional mandates; and
  • understand that demand is increasing for a workforce that is knowledgeable about and capable of addressing climate change mitigation and building resilience to the effects of global climate change.

To support the teaching of climate change in K–12 school science, NSTA recommends that parents and other members of the community and media

  • seek the expertise of science educators on science topics, including climate change science;
  • augment the work of science teachers by supporting student learning of science at home, including the science of climate change;
  • help students understand the contributions that STEM professionals, policy makers, and educators can make to mitigate the effects of climate change and how they can make decisions that contribute to desired outcomes; and
  • clarify that societal controversies surrounding climate change are not scientific in nature, but are social, political, and economic.

To support the teaching of climate change in K–12 school science, NSTA recommends that higher education professors and administrators

  • design curricula that incorporate climate change science into science and general education coursework, and that these materials meet social, economic, mathematical, and literary general education goals;
  • provide teacher-education students with science content and pedagogy that meets the Framework‘s expectations for the grade band(s) they will teach; and
  • recognize that a solid foundation in Earth system science should be a consideration in student admissions decisions.

Adopted by the NSTA Board of Directors, September 2018