KaiserScience


Is a ‘Spectrum’ the Best Way to Talk About Autism?

In “The Atlantic,” Rose Eveleth writes:

The terms “high-functioning” and “low-functioning” have no medical meaning. Nearly every expert I talked to referenced a common mantra in autism: When you’ve met one person with autism, you’ve met one person with autism. Which sounds nice, but is not particularly helpful when looking for meaning.

“With the spectrum, there’s a wide range, we’re still trying to figure out what that wide range means,” said Stephen Edelson, the director of the Autism Research Institute. “I don’t have a great answer. Scientific understanding of autism certainly continues to evolve,” said Paul Wang, the head of medical research at Autism Speaks. “I think there’s no one continuum necessarily,” says Lisa Gilotty, the autism-spectrum-disorders program chief at the National Institute of Mental Health. “It’s hard because … different people will break that up in very different ways, I’m not sure any of those ways are accurate.”

“It’s almost like if you look in the stars in the sky and say, ‘Oh, there’s Orion’s belt. And oh, there’s the Big Dipper.’ You could also look at the stars and say they cluster a different way. And I think that’s still where we are with autism,” said Jeffrey Broscoe, the director of the population health ethics department at the University of Miami.

And perhaps because the spectrum has no agreed upon poles, there is very little data about how autistic people might be distributed along the spectrum. Different studies measure things like intellectual disability, and verbal ability, and self-injurious behavior in certain populations, but researchers know very little about what the autism population looks like as a whole.

…Like so much of psychiatry, autism is a construct, a conceptual framework that will sooner or later outlive its usefulness. And the spectral characterization of autism might work for now, but it might not work forever.

“Right now the best way to approach autism is to think about it as a spectrum condition, but it’s quite possible that in the next 10 to 15 years, we’ll start understanding these better—not just genetics but the real pathophysiology,” says Broscoe. One day it might be lots of different diagnoses, each pinned to a specific cause or mutation or biological breakdown. Just as people once thought of all cancers as singular, and now think about and treat breast cancer and lung cancer and colon cancer differently. Autism, Broscoe says, “may look more like cancer one day.”

Roy Grinker, an anthropologist whose book, Unstrange Minds: Remapping the World of Autism, combines his personal experience with an autistic daughter and academic research into autism, laughed about the idea that autism was a single, “real” thing. “There’s not a real thing out there called autism! There are complex neural pathways that lead to different behaviors and traits that we have decided right now is best understood by a framework called autism. But I have no confidence that in 30 years we’ll still use the word autism.”

* * *

This isn’t to say there aren’t robust research efforts focused on autism. This year, the National Institutes of Health alone spent $189 million on autism research. In 2014, President Obama signed the Autism CARES Act, which promises $1.3 billion in federal funding for autism research over the next five years. In 2014, the organization Autism Speaks spent $21.2 million on autism research.

But most of the funding is for figuring out the causes of the disorder, trying to identify biomarkers and genetic clues, and attempts to understand potential environmental contributors. Very little of it goes to sorting out what the spectrum looks like and how the population is distributed along it.

But even looking at the data that does exist reveals that it’s tough to get a comprehensive look at gradients along the spectrum. For a while, experts might have said that the spectrum went from “high functioning” to “low functioning.” But those terms were never clearly defined. “We just don’t have good ways of measuring functioning-levels overall,” Anne Roux, a researcher at Drexel’s Autism Institute, told me in an email. “For example, we know that 60 [percent to] 70 percent of people with autism have co-occurring health and mental-health diagnoses. Yet, there are really no measures that account for the role of co-occurring disorders in how people function.”

And even if you try to pick a more concrete measure, attempts to plot autistic people fall apart pretty quickly. Take the CDC data on intellectual impairment. In their most recent report, released in 2014 but using data from 2010, researchers found that 31 percent of 8-year-old children with autism qualified as intellectually disabled, with IQ scores below 70, and 23 percent qualified as “borderline” with scores between 70 and 85. But in their 2000 report, between 40 percent and 62 percent of children studied were considered intellectually disabled. So, are the majority of autistic people intellectually disabled? Or only one-third?

Part of why this information can be hard to track is due to changes in how autism is diagnosed and classified. The latest edition of the DSM, published in May of 2013, did away with Asperger’s syndrome, a condition often seen as existing just beyond one end of the autistic spectrum. People once diagnosed with Asperger’s have some of the same behaviors as autistic people do—repetitive behaviors, difficulties with social interaction—but often have far fewer problems with verbal language. Now that Asperger’s syndrome is no longer a diagnosis, some of those people fell into an autism diagnosis, and some were simply no longer considered disabled. Wang says that the shifting CDC numbers on intellectual disability reflect diagnosis, not an underlying truth about autism.

See our article on issues relating to Asperger syndrome and Autism

Egyptians, Genetics, Sociology and Race

From Nivenus, at Observation Deck:

When the cast of Exodus: Gods and Kings—Ridley Scott’s upcoming Biblical epic—was announced a lot of people made the complaint that it was overwhelmingly white, a move they decried as both inaccurate and racist. They were right. Unfortunately, in response a lot of people have peddled another historical (and racist) error: that the ancient Egyptians were black and that modern Egyptians are imposters…

…Cultures as different from one another (and Western Europe) as the Mongol Empire, northern India, Arabia, and the Comanche have all been portrayed by white actors … because, again, the presumption is that a white actor is a blank slate with whom everyone can identify, including non-white people.

…However, while the tendency usually is to whitewash historical peoples, the opposite also sometimes occurs. I’ve noticed an increasing tendency for some people, for example, to re-envision all of the ancient societies of the Old World as not simply non-white, but specifically “black.” Putting aside for a moment the fact that within Africa itself “black” is a largely meaningless term (there’s more genetic variety within Africa’s “black” population than in the rest of the world combined), this is simply false. The samurai were no more black than they were white. And neither were the ancient Egyptians.

That’s right, the ancient Egyptians weren’t black. They weren’t white either, mind you, but to presume that a culture has to be one or the other is to accept a racial dichotomy that white colonialists themselves invented for the purpose of sorting the world into “civilized” (white) and “savage” (colored) peoples. Most cultures in the world don’t really fit neatly into either category: are Latinos white or colored? The answer depends partially on who’s asking the question: most Latinos identify as white (both in the U.S. and Latin America) but most non-Latino Americans usually sort them as non-white.

The truth is that “white” is essentially a byword for “European” (sometimes northern European specifically) while “colored” basically just means everyone else. And these categories aren’t static or unchanging either. In 19th century Europe, various ethnic groups were sometimes sorted into “more” or “less” white groups. According to many British anthropologists, the Irish were “less white” than the English. According to the Nazis, Slavic-speaking peoples like Poles or Russians were “subhuman” non-Aryans. Today, virtually all of these groups are considered “equally” white (and Jews, who weren’t considered white at all, now often are).

This outdated way of talking about race was so prevalent and so dominant in academic circles that it’s been accepted as largely accurate, even by lots of non-white people. Instead of challenging the arbitrary lines in the sand 19th century racists drew up to sort people into those who were worthy of self-rule and those who weren’t, a lot of people have just flipped the idea on its head, arguing that the roots of all civilization are inherently “black” rather than “white,” as Eurocentric scholars claimed.

Which brings us to Egypt. For some reason or another—possibly because of the highly publicized discovery of Tutankhamun’s tomb in the 1920s, possibly because the Great Pyramid of Giza is one of the last remaining wonders of the ancient world—everyone wants to claim ancient Egypt for themselves….

What were the ancient Egyptians? Were they black or were they white?… Oddly, it’s occurred to relatively few people to look at how modern Egyptians think of themselves, because we have divorced ancient and modern Egypt in our minds as if they’re two completely unrelated cultures. …

…What about how Egypt got invaded and conquered by a whole bunch of people, including the Arabs? Couldn’t that have impacted the Egyptians’ race? Well sure, that happened. Libyans, Nubians, Canaanites, Mesopotamians, Persians, Greeks, and Romans have all ruled Egypt at one point or another, and the Arabs are the most recent bunch (not counting the Turks or the British). But the truth is that conquest only very rarely leads to a massive shift in the native population… genetic studies in Egypt back this up:

the genetic profile of modern Egyptians has been affected less than 15% by foreign admixture.

[Tomb painting from the tomb of Pharaoh Seti I, depicting four groups of peoples]

There’s also the fact that ancient Egyptians didn’t really perceive themselves as either “black” or “white.” Just look at the above painting from Pharaoh Seti I’s tomb. The top-right group, with the palest skin, are Libyans (Berbers); the next one over to the left are Nubians, followed by “Asiatics” (Mesopotamians).

The bottom central group are Egyptians. By their own perception Egyptians were neither particularly dark nor particularly pale, and given their xenophobic attitude towards outside cultures (which was fairly common for most ancient peoples) they would probably resent being sorted into either “race.”

So why does this matter? Why is it important that we acknowledge the Egyptians don’t fit into our constructed dichotomy of black vs. white, of European or African? Well, for one thing many modern Egyptians find it kind of offensive. Despite their modern self-identification as Arabs, most Egyptians still feel a strong claim to the historical legacy of their ancient forebears and find it pretty annoying when American scholars (and, black or white, it is mostly Americans) try to pigeonhole the pharaohs into one racial category or another for political purposes.

Secondly, it’s pretty clearly false as I’ve shown above. The ancient Egyptians were African, but that’s a pretty broad label, just like the word “Asian” includes within its meaning Turks, Indians, Samoyeds, Han Chinese, and Malays. There’s a lot of similarity between Egyptians and Nubians, that’s true. There’s also a lot of resemblance between Egyptians and Palestinians. They don’t fit neatly into one super-category or the other, not when you peel away the labels and look at the actual facts.

Egyptians Aren’t White… But They Aren’t Black Either

Also see Genetic variation, classification and race

Man Fails Paternity Test… Because Man’s Unborn Twin Is The Biological Father Of His Son

October 26, 2015 | by Justine Alford

Prepare to have your mind blown. This is the fascinating case study of a man who failed a paternity test because part of his genome actually belongs to his unborn twin. This means that the genetic father of the child is actually the man in question’s brother, who never made it past a few cells in the womb.

Yes, this sounds completely crazy and like a headline you might read in a trashy magazine. But before you write it off as that, let’s go into some more details.

It all starts off with a couple in the U.S. who were having trouble conceiving their second child. They decided to seek help and went to a fertility clinic, where eventually intrauterine insemination was performed. This involves washing and concentrating sperm before inserting it directly into the uterus of a woman around the time of ovulation to boost the chances of fertilization.

The assisted conception worked, and nine months later the happy couple welcomed a baby boy into the world. But then things started to take a turn for the weird. Testing revealed that the child’s blood type didn’t match up with his parents’.

“Both parents are A, but the child is AB,” Barry Starr from the Department of Genetics at Stanford University told IFLScience. “There are rare cases where that can happen, but their first thought was that the clinic had mixed up sperm samples.”

The couple therefore decided to take a standard paternity test, which to their dismay revealed that the man was not the child’s father. So they took another test, but the results were the same. At this point, mixing up samples didn’t seem too far-fetched, but the clinic had only dealt with one other intrauterine insemination at the same time as this couple, which involved an African-American man, and given the child’s appearance this didn’t match up.

This was when Starr was contacted by the couple’s lawyer, who suggested that they take a more powerful test: the over-the-counter 23andMe genetic service, which is well suited to mapping family relationships. The results that came back were pretty surprising, suggesting that the child’s father was actually his uncle, the man’s brother.

At this point, Starr’s team decided to delve a little deeper, with the idea that the man could possibly be a “human chimera,” i.e., an individual carrying two different genomes. It’s actually not uncommon for multiple fertilizations to happen in the womb even when only one child is born. What can sometimes happen is that two independent early embryos, at this stage just clumps of cells, fuse together and go on to develop normally as a single individual.

To test this theory, DNA samples were taken both from the father’s cheek (the source used for the original paternity tests) and from his sperm. Once again, the cheek cells didn’t match up with the child, but the sperm sample told a different story.

Supporting the human chimera idea, what they found was a “major” genome, accounting for roughly 90% of the sperm cells, and a “minor” genome that only represented about 10%, Starr explained. The major genome matched up with the cheek cells, but the minor genome was consistent with the child’s DNA.

“So the father is the fusion of two people, both the child’s father and uncle. That’s wicked cool,” said Starr.

Original article: Man Fails Paternity Test Because Man’s Unborn Twin Is The Biological Father Of His Son
_______________________________________________________

What is a chimera?

A genetic chimerism, or chimera (named for the Chimera of Greek mythology), is a single organism composed of genetically distinct cells. This can result in male and female organs, two blood types, or subtle variations in form.

Animal chimeras are produced by the merger of multiple fertilized eggs.

In plant chimeras, however, the distinct types of tissue may originate from the same zygote, and the difference is often due to mutation during ordinary cell division.

Normally, chimerism is not visible on casual inspection; however, it has been detected in the course of proving parentage.

Another way that chimerism can occur in animals is by organ transplantation, giving one individual tissues that developed from two genomes. For example, a bone marrow transplant can change someone’s blood type.

{Adapted from Wikipedia, Chimera, October 2015}

This diagram shows two ways that a human can be born a chimera.

The article is describing the second/lower case.

Candidates misunderstand the laws of thermodynamics

Thermodynamics is an essential component of physics and chemistry: Science standards for thermodynamics
_______________

FactCheck.Org ran this analysis:

Ben Carson claimed that prevailing theories of how the universe began and how planets and stars formed violate the second law of thermodynamics. His comments represent a misunderstanding of scientific concepts. Carson, a retired pediatric neurosurgeon and Republican presidential candidate, spoke at a rally on Sept. 22 at Cedarville University — an Ohio school that describes itself as a “Christ-centered, Baptist institution.” Carson began his discussion of science by explaining — correctly — that many studies have debunked the notion that vaccines cause autism. “That’s why we have science and scientific studies to look at these kinds of things,” he said.

He then went on to say “science is not always correct,” and claimed that the Big Bang theory is one such example (at the 1:03:13 mark):

Carson, Sept. 22: Now you’re saying, there’s a Big Bang, a big explosion, and our solar system and our universe come into perfect alignment. Now I said you also believe in the second law of thermodynamics, entropy, right? “Yeah.” And I said, that states that things move toward a state of disorganization, right? “Yeah.” I said, so, how is there a Big Bang and instead of things moving toward disorganization they become perfectly organized to the point where we can predict 70 years hence when a comet is coming. How does that work? “Well. We don’t understand everything.”

According to the Big Bang theory, that initial explosion represents the birth of the universe, about 13.8 billion years ago. The solar system that houses the Earth was born about 5 billion years ago.

Carson talked about entropy, which is commonly thought of as a measure of order or disorder; increasing entropy essentially means an increasingly disordered state.

The second law of thermodynamics says that in any isolated system, the entropy of that system will increase or remain the same — not decrease.

Carson claims that the Big Bang theory violates the second law of thermodynamics, since the solar system has moved to what he calls a “perfectly organized” point, instead of becoming more disorganized.

But the two concepts aren’t in contradiction. A small part of a system can become more ordered, while the rest of the system sees a decrease in order in the process.

One good example of this is an ice tray in a freezer. The molecules in liquid water move into a more ordered state when they freeze into a solid. On its own then, water turning to ice appears to be a violation of the second law. But the ice in the freezer is not a closed system: The freezer also generates heat as it runs, which is radiated out into your kitchen. That heat increases entropy more than the water turning to ice decreases it.
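To make that bookkeeping concrete, here is a rough entropy budget for freezing one kilogram of water. This is a sketch using illustrative textbook numbers (latent heat of fusion L_f ≈ 334 kJ/kg, freezer at 273 K, kitchen at about 293 K), not figures from the FactCheck.org article:

```latex
% Entropy lost by the water as it freezes at T_f = 273 K,
% giving up latent heat Q = m * L_f:
\[
\Delta S_{\text{water}} = -\frac{m L_f}{T_f}
  = -\frac{(1\,\mathrm{kg})(334\,\mathrm{kJ/kg})}{273\,\mathrm{K}}
  \approx -1.22\ \mathrm{kJ/K}
\]
% Entropy gained by the kitchen, which receives that heat plus the
% compressor's work W, at T_room (about 293 K):
\[
\Delta S_{\text{kitchen}} = \frac{Q + W}{T_{\text{room}}}
  \;\geq\; \frac{m L_f}{T_f} \approx +1.22\ \mathrm{kJ/K}
\]
```

Even an ideal (Carnot) freezer dumps exactly enough heat into the room to cancel the water’s entropy drop; any real freezer dumps more, so the total entropy of water plus kitchen always rises, just as the second law requires.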

Brian Greene, a physicist at Columbia University and author of several popular science books, gave us another easy-to-visualize example during a phone interview: the act of cleaning up a messy room.

Greene, Sept. 23: How do you take a messy room and make it ordered? That would seem to be decreasing the disorder – it was a mess, now it’s not a mess. It was disordered, now it’s ordered. How could anybody do that? It seems to violate the second law of thermodynamics!

But the answer is: you have to take into account all of the sources of order and disorder, including the body of the human who is cleaning up the room, the heat that they are generating, the fat that’s being burned as they undertake this exercise. And when you take into account everything – the molecules of air that get excited by the sweat forming on the brow of the individual doing the cleaning – when you take into account all of these features, the amount of disorder generated overly compensates – always – for the amount of order that’s being created in the room.

Moving outward to the solar-system scale, the situation is the same. The rule of increasing entropy is not violated by the formation of planets, stars, and comets due to arrive in 70 years. All the factors that go into the formation of these celestial bodies work to increase disorder rather than decrease it.

As Greene told us: “The formation of a star is an entropically increasing phenomenon. It is not decreasing the amount of disorder, it is increasing the amount of disorder, even though it looks so darn ordered relative to, say, the swirling gas cloud from which it emerged.”

Planets and stars form when gases and dust in space slow down and begin to clump together, at which point gravity helps pull these clumps together and draw in more dust and gas, until those big objects are formed. “That process as we understand it is completely consistent with the second law of thermodynamics,” Greene said.

From a universe-wide perspective, the overall increasing entropy is measurable based on the leftover heat from the Big Bang, known as the cosmic microwave background radiation. According to the Big Bang theory, at the point of the initial explosion all the energy in the universe was concentrated in a state of very low entropy — an almost completely ordered state.

Ever since that explosion, that energy has been spreading out, a continually rising degree of disorder. The observed level of the background radiation is consistent with the predictions of modern cosmology. In short, Big Bang theory predicts the existence of and the specific amounts of background radiation as a result of the rising entropy of the entire system, and observations actually bear that out. “The calculations agree with the observations to fantastic precision,” Greene said.

Carson went on to claim that the presence of stars and planets is related to the existence of multiple Big Bangs that eventually might produce an ordered universe:

Carson: And then they go to the probability theory, and they say “but if there’s enough big bangs over a long enough period of time, one of them will be the perfect big bang and everything will be perfectly organized.” And I said, so you’re telling me if I blow a hurricane through a junkyard enough times over a long enough period of time after one of them there will be a 747 fully formed and ready to fly?

That is not an accurate reflection of the Big Bang theory. Though some theories of the origin of the universe suggest that the Big Bang was only one of many such explosions, these theories do not state that the currently ordered existence is a spontaneous result of one of these repeated Big Bangs.

Greene called this a “red herring,” and said the concept of multiple Big Bangs has nothing to do with how stars and planets form in this current universe. Instead, those theories involve the idea that the universe goes through cycles over many billions of years: Big Bang, expansion, contraction, “Big Crunch,” followed by another Big Bang. How the stars and planets form between each of those bangs and crunches is a separate issue.

Although there is still much to be learned about the origins of the universe, the fact is science has extremely thorough explanations for how planets and stars form, and they mesh perfectly with the laws of thermodynamics.

Editor’s Note: SciCheck is made possible by a grant from the Stanton Foundation. – Dave Levitan. Original article: Ben Carson rewrites the laws of thermodynamics

Related discussions

Big Bang Theory and conservation of energy

Big Bang theory and 2nd law of thermodynamics

Why doesn’t evolution violate the second law of thermodynamics? Ask-A-Mathematician

TalkOrigins discussion of thermodynamics and evolution

This Face Changes the Human Story. But How? Homo naledi

For the full article see:  This Face Changes the Human Story. But How?

By Jamie Shreeve, National Geographic, Photographs by Robert Clark

September 10, 2015

A trove of bones hidden deep within a South African cave represents a new species of human ancestor, scientists announced Thursday in the journal eLife. Homo naledi, as they call it, appears very primitive in some respects—it had a tiny brain, for instance, and apelike shoulders for climbing. But in other ways it looks remarkably like modern humans. When did it live? Where does it fit in the human family tree? And how did its bones get into the deepest hidden chamber of the cave—could such a primitive creature have been disposing of its dead intentionally?

[Maps from the article: a locator map of the Rising Star cave site, and a cross-section of the cave passages leading to the Dinaledi Chamber]

This is the story of one of the greatest fossil discoveries of the past half century, and of what it might mean for our understanding of human evolution.

Two years ago, a pair of recreational cavers entered a cave called Rising Star, some 30 miles northwest of Johannesburg. Rising Star has been a popular draw for cavers since the 1960s, and its filigree of channels and caverns is well mapped. Steven Tucker and Rick Hunter were hoping to find some less trodden passage.

In the back of their minds was another mission. In the first half of the 20th century, this region of South Africa produced so many fossils of our early ancestors that it later became known as the Cradle of Humankind. Though the heyday of fossil hunting there was long past, the cavers knew that a scientist in Johannesburg was looking for bones. The odds of happening upon something were remote. But you never know.

Deep in the cave, Tucker and Hunter worked their way through a constriction called Superman’s Crawl—because most people can fit through only by holding one arm tightly against the body and extending the other above the head, like the Man of Steel in flight. Crossing a large chamber, they climbed a jagged wall of rock called the Dragon’s Back. At the top they found themselves in a pretty little cavity decorated with stalactites. Hunter got out his video camera, and to remove himself from the frame, Tucker eased himself into a fissure in the cave floor. His foot found a finger of rock, then another below it, then—empty space. Dropping down, he found himself in a narrow, vertical chute, in some places less than eight inches wide. He called to Hunter to follow him. Both men have hyper-slender frames, all bone and wiry muscle. Had their torsos been just a little bigger, they would not have fit in the chute, and what is arguably the most astonishing human fossil discovery in half a century—and undoubtedly the most perplexing—would not have occurred….

…After contorting themselves 40 feet down the narrow chute in the Rising Star cave, Tucker and Rick Hunter had dropped into another pretty chamber, with a cascade of white flowstones in one corner. A passageway led into a larger cavity, about 30 feet long and only a few feet wide, its walls and ceiling a bewilderment of calcite gnarls and jutting flowstone fingers. But it was what was on the floor that drew the two men’s attention. There were bones everywhere. The cavers first thought they must be modern. They weren’t stone heavy, like most fossils, nor were they encased in stone—they were just lying about on the surface, as if someone had tossed them in. They noticed a piece of a lower jaw, with teeth intact; it looked human.


The bones were superbly preserved, and from the duplication of body parts, it soon became clear that there was not one skeleton in the cave, but two, then three, then five … then so many it was hard to keep a clear count. Lee Berger, the paleoanthropologist who led the expedition, had allotted three weeks for the excavation. By the end of that time, the excavators had removed some 1,200 bones, more than from any other human ancestor site in Africa—and they still hadn’t exhausted the material in just the one square yard around the skull. It took another several days of digging in March 2014 before its sediments ran dry, about six inches down.

There were some 1,550 specimens in all, representing at least 15 individuals. Skulls. Jaws. Ribs. Dozens of teeth. A nearly complete foot. A hand, virtually every bone intact, arranged as in life. Minuscule bones of the inner ear. Elderly adults. Juveniles. Infants, identified by their thimble-size vertebrae. Parts of the skeletons looked astonishingly modern. But others were just as astonishingly primitive—in some cases, even more apelike than the australopithecines. “We’ve found a most remarkable creature,” Berger said. His grin went nearly to his ears.

______________________

The above is an excerpt from the excellent National Geographic article on this discovery. To read the actual scientific paper, see:
Homo naledi, a new species of the genus Homo from the Dinaledi Chamber, South Africa

Abstract: Homo naledi is a previously-unknown species of extinct hominin discovered within the Dinaledi Chamber of the Rising Star cave system, Cradle of Humankind, South Africa. This species is characterized by body mass and stature similar to small-bodied human populations but a small endocranial volume similar to australopiths. Cranial morphology of H. naledi is unique, but most similar to early Homo species including Homo erectus, Homo habilis or Homo rudolfensis. While primitive, the dentition is generally small and simple in occlusal morphology. H. naledi has humanlike manipulatory adaptations of the hand and wrist. It also exhibits a humanlike foot and lower limb. These humanlike aspects are contrasted in the postcrania with a more primitive or australopith-like trunk, shoulder, pelvis and proximal femur. Representing at least 15 individuals with most skeletal elements repeated multiple times, this is the largest assemblage of a single species of hominins yet discovered in Africa.

DOI: http://dx.doi.org/10.7554/eLife.09560.001

How to Solve a Physics Problem Undergrads Usually Get Wrong

By Rhett Allain, 07.09.15

This is a classic introductory physics problem. Basically, you have a cart on a frictionless track (call this m1) with a string that runs over a pulley to another mass hanging below (call this m2). Here’s a diagram.
[Diagram: a cart of mass m1 on a horizontal frictionless track, connected by a string over a pulley to a hanging mass m2]
Now suppose I want to find the acceleration of the cart, after it is let go.

The string that attaches the cart to the hanging mass does two things.

First, the string ensures that the magnitude of the acceleration is the same for both masses.

Second, the tension on mass 1 and mass 2 has the same magnitude (since it’s the same string).

This means I can draw the following two force diagrams for the two masses.

[Force diagrams: mass 1 with the tension T as the only horizontal force; mass 2 with the tension T pointing up and the weight m2g pointing down]

So, how do you find the acceleration of cart 1? It seems clear, right?

You just need to find the tension in the string since that’s the only force in the horizontal direction. You could write:

\[
T = m_1 a_1 \quad\Rightarrow\quad a_1 = \frac{T}{m_1}
\]

If I know the tension, then I can calculate the acceleration. Simple, right?

Even simpler, the tension would just be equal to the gravitational force on the hanging mass (m2).

WRONG! This is not the correct way to solve this problem — I actually remember making this exact mistake when I was an undergraduate student. But why is it wrong?

Here’s the link to the full article:

How to Solve a Physics Problem Undergrads Usually Get Wrong

Why is the tension not the same as the weight of mass 2? The answer is simple — mass 2 is not in equilibrium but instead it is accelerating downward.

Since it’s accelerating, the net force on it is not equal to the zero vector. This means that the tension should be smaller than the weight of mass 2 — which it is.


Solution to the Half-Atwood Machine

The tension in the string depends on the weight of mass 2 as well as the acceleration of mass 2. However, the acceleration of mass 2 is the same as mass 1 — but the acceleration of mass 1 depends on the tension. Does this mean you can’t solve the problem? Of course not, it just means that it’s slightly more complicated.

Let’s say mass 2 is accelerating in the negative y-direction. This means that I can write the following force equation (in the y-direction):

\[
T - m_2 g = -m_2 a
\]

Now I can do a similar thing for mass 1 with its acceleration in the x-direction. Since the magnitudes of these two accelerations are the same, I will use the same variable.

\[
T = m_1 a
\]

With two equations and two variables (a and T), I can solve for both variables. If I substitute the expression for T for mass 1 into the equation for mass 2, I get:

\[
m_1 a - m_2 g = -m_2 a \quad\Rightarrow\quad m_2 g = (m_1 + m_2)\, a
\]

Instead of completely solving for the acceleration, let me leave it in the form above. Think of the problem like this: suppose you consider the system that consists of both mass 1 and mass 2 and it’s accelerating.

What force causes this whole system to accelerate? It’s just the weight of mass 2. So, that is exactly what this equation shows — there is only one force (m2g) and it accelerates the total mass (m1 + m2).

From this I can solve for the acceleration.

\[
a = \frac{m_2 g}{m_1 + m_2}
\]

Using the values of mass 1 = 1.207 kg and mass 2 = 0.145 kg, I get an acceleration of 1.05 m/s². This is pretty close to the experimental value from the original article, 1.109 m/s². I’m happy.

With the value of the acceleration, I can plug it back into the equation for mass 1 to solve for the tension. With this, I get a tension of 1.267 N. This is fairly close to the experimental value of 1.285 N. Again, I’m happy. It seems physics still works.
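Since the algebra is easy to get turned around, here is a short Python check of those numbers. This is just a sketch of the arithmetic above, assuming g = 9.8 m/s²:

```python
# Half-Atwood machine: cart m1 on a frictionless track, hanging mass m2.
# Derived above: a = m2*g / (m1 + m2), and the string tension is T = m1*a.

g = 9.8      # gravitational field strength, m/s^2 (assumed)
m1 = 1.207   # mass of the cart, kg
m2 = 0.145   # hanging mass, kg

a = m2 * g / (m1 + m2)   # acceleration shared by both masses, m/s^2
T = m1 * a               # tension in the string, N

print(f"acceleration = {a:.3f} m/s^2")  # ~1.051 m/s^2
print(f"tension      = {T:.3f} N")      # ~1.269 N, matching the ~1.267 N above

# Sanity check: T must be less than the hanging weight m2*g (~1.421 N),
# because mass 2 is accelerating downward rather than sitting in equilibrium.
assert T < m2 * g
```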

Trying to replicate climate contrarian papers

Here’s what happens when you try to replicate climate contrarian papers:
A new paper finds common errors among the 3% of climate papers that reject the global warming consensus

Dana Nuccitelli, Aug 25, 2015, The Guardian


Those who reject the 97% expert consensus on human-caused global warming often invoke Galileo as an example of when the scientific minority overturned the majority view. In reality, climate contrarians have almost nothing in common with Galileo, whose conclusions were based on empirical scientific evidence and supported by many of his scientific contemporaries, and who was persecuted by the religious-political establishment. Nevertheless, there’s a slim chance that the 2–3% minority is correct and the 97% climate consensus is wrong.

To evaluate that possibility, a new paper published in the journal Theoretical and Applied Climatology examines a selection of contrarian climate research papers and attempts to replicate their results. The idea is that accurate scientific research should be replicable, and through replication we can also identify any methodological flaws in that research. The study also seeks to answer the question, why do these contrarian papers come to a different conclusion than 97% of the climate science literature?

This new study was authored by Rasmus Benestad, myself (Dana Nuccitelli), Stephan Lewandowsky, Katharine Hayhoe, Hans Olav Hygen, Rob van Dorland, and John Cook. Benestad (who did the lion’s share of the work for this paper) created a tool using the R programming language to replicate the results and methods used in a number of frequently-referenced research papers that reject the expert consensus on human-caused global warming. In using this tool, we discovered some common themes among the contrarian research papers.

Cherry picking was the most common characteristic they shared. We found that many contrarian research papers omitted important contextual information or ignored key data that did not fit the research conclusions. For example, in the discussion of a 2011 paper by Humlum et al. in our supplementary material, we note,

The core of the analysis carried out by [Humlum et al.] involved wavelet-based curve-fitting, with a vague idea that the moon and solar cycles somehow can affect the Earth’s climate. The most severe problem with the paper, however, was that it had discarded a large fraction of data for the Holocene which did not fit their claims.

When we tried to reproduce their model of the lunar and solar influence on the climate, we found that the model only simulated their temperature data reasonably accurately for the 4,000-year period they considered. However, for the 6,000 years’ worth of earlier data they threw out, their model couldn’t reproduce the temperature changes. The authors argued that their model could be used to forecast future climate changes, but there’s no reason to trust a model forecast if it can’t accurately reproduce the past.

We found that the ‘curve fitting’ approach also used in the Humlum paper is another common theme in contrarian climate research. ‘Curve fitting’ describes taking several different variables, usually with regular cycles, and stretching them out until the combination fits a given curve (in this case, temperature data). It’s a practice I discuss in my book, about which mathematician John von Neumann once said,

With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.
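Von Neumann’s quip is easy to demonstrate. Below is a minimal sketch, using hypothetical data rather than the Humlum series: fit a few sinusoids with freely adjustable amplitudes and periods to the most recent half of a random walk, then test the fit on the earlier half that was held back, mirroring the replication test described above.

```python
# Overfitting demo: free-period sinusoids fit noise in-sample, fail out-of-sample.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.arange(200.0)
temps = np.cumsum(rng.normal(size=t.size))  # random walk standing in for "temperature"

def cycles(t, a1, p1, a2, p2, a3, p3, c):
    """Sum of three sinusoids with freely adjustable amplitudes and periods."""
    return (a1 * np.sin(2 * np.pi * t / p1)
            + a2 * np.sin(2 * np.pi * t / p2)
            + a3 * np.sin(2 * np.pi * t / p3) + c)

# Fit only the most recent half of the record (analogous to discarding
# the inconvenient earlier data), then test on the half held back.
fit_window = t >= 100
p0 = [1, 30, 1, 60, 1, 90, 0]  # starting guesses for the optimizer
params, _ = curve_fit(cycles, t[fit_window], temps[fit_window], p0=p0, maxfev=20000)

def rmse(mask):
    return np.sqrt(np.mean((temps[mask] - cycles(t[mask], *params)) ** 2))

print(f"RMSE inside fitted window:   {rmse(fit_window):.2f}")
print(f"RMSE on held-out early data: {rmse(~fit_window):.2f}")  # typically much worse
```

With enough free parameters the in-sample fit can look impressive; the held-out data is what exposes it, which is exactly the failure mode the replication found in the Humlum et al. model.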

Good modeling will constrain the possible values of the parameters being used so that they reflect known physics, but bad ‘curve fitting’ doesn’t limit itself to physical realities. For example, we discuss research by Nicola Scafetta and Craig Loehle, who often publish papers trying to blame global warming on the orbital cycles of Jupiter and Saturn.

This particular argument also displays a clear lack of plausible physics, which was another common theme we identified among contrarian climate research. In another example, Ferenc Miskolczi argued in 2007 and 2010 papers that the greenhouse effect has become saturated, but as I also discuss in my book, the ‘saturated greenhouse effect’ myth was debunked in the early 20th century. As we note in the supplementary material to our paper, Miskolczi left out some important known physics in order to revive this century-old myth.

This represents just a small sampling of the contrarian studies and flawed methodologies that we identified in our paper; we examined 38 papers in all. As we note, the same replication approach could be applied to papers that are consistent with the expert consensus on human-caused global warming, and undoubtedly some methodological errors would be uncovered. However, these types of flaws were the norm, not the exception, among the contrarian papers that we examined. As lead author Rasmus Benestad wrote,

we specifically chose a targeted selection to find out why they got different answers, and the easiest way to do so was to select the most visible contrarian papers … Our hypothesis was that the chosen contrarian paper was valid, and our approach was to try to falsify this hypothesis by repeating the work with a critical eye.

If we could find flaws or weaknesses, then we would be able to explain why the results were different from the mainstream. Otherwise, the differences would be a result of genuine uncertainty.

After all this, the conclusions were surprisingly unsurprising in my mind. The replication revealed a wide range of types of errors, shortcomings, and flaws involving both statistics and physics.

You may have noticed another characteristic of contrarian climate research – there is no cohesive, consistent alternative theory to human-caused global warming. Some blame global warming on the sun, others on orbital cycles of other planets, others on ocean cycles, and so on. There is a 97% expert consensus on a cohesive theory that’s overwhelmingly supported by the scientific evidence, but the 2–3% of papers that reject that consensus are all over the map, even contradicting each other. The one thing they seem to have in common is methodological flaws like cherry picking, curve fitting, ignoring inconvenient data, and disregarding known physics.

If any of the contrarians were a modern-day Galileo, he would present a theory that’s supported by the scientific evidence and that’s not based on methodological errors. Such a sound theory would convince scientific experts, and a consensus would begin to form. Instead, as our paper shows, the contrarians have presented a variety of contradictory alternatives based on methodological flaws, which therefore have failed to convince scientific experts.

Human-caused global warming is the only exception. It’s based on overwhelming, consistent scientific evidence and has therefore convinced over 97% of scientific experts that it’s correct.
____________________

The contradictory nature of global warming skepticism. By John Cook, Climate Communication Fellow for the Global Change Institute at the University of Queensland

A major challenge in conversing with anthropogenic global warming (AGW) skeptics is that they constantly seem to move the goalposts and change their arguments. As a consequence, they also frequently contradict themselves. One day they’ll argue the current global warming is caused by the Sun, the next that it’s “natural cycles”, the next that the planet is actually cooling, and the next day they’ll say the surface temperature record is unreliable, so we don’t even know what the global temperature is. This is why Skeptical Science has such an extensive skeptic argument list.

It should be obvious that the arguments listed above all contradict each other, yet they’re often made by the same skeptics. As one prominent example, in 2003 physicist and skeptic Fred Singer was arguing that the planet wasn’t warming, yet in 2007 he published a book arguing that the planet is warming due to a 1,500-year natural cycle. You can’t have it both ways!

It’s a testament to the robustness of the AGW theory that skeptics can’t seem to decide what their objection to it is. If there were a flaw in the theory, then every skeptic would pounce on it and make a consistent argument, rather than the current philosophy which seems to be “throw everything at the wall and see what sticks.” It would behoove AGW skeptics to decide exactly what their objection to the scientific theory is, because then it would be easier to engage in a serious discussion. . .

The contradictory nature of global warming skepticism

Table of global warming skeptic contradictions (click the link above for the full article)

_______

Some climate change skeptics compare themselves to Galileo, who in the early 17th century challenged the Church’s view that the sun revolves around the earth, and was later vindicated. However, this comparison is flawed; if anything, the opposite is true. Climate skeptics are not like Galileo.


Teaching physics with sci-fi films

It’s amazing how motivated students can be to do real, quantitative physics, with the right set-up 🙂

Teaching physics with films

Exxon knew of climate change in 1981, email says – but it funded deniers for 27 more years

Exxon knew of climate change in 1981, email says – but it funded deniers for 27 more years: A newly unearthed missive from Lenny Bernstein, a climate expert with the oil firm for 30 years, shows that concerns over the high concentration of carbon dioxide in an enormous gas field in south-east Asia factored into the decision not to tap it.

ExxonMobil, the world’s biggest oil company, knew as early as 1981 of climate change – seven years before it became a public issue, according to a newly discovered email from one of the firm’s own scientists. Despite this the firm spent millions over the next 27 years to promote climate denial.

The email from Exxon’s in-house climate expert provides evidence the company was aware of the connection between fossil fuels and climate change, and the potential for carbon-cutting regulations that could hurt its bottom line, over a generation ago – factoring that knowledge into its decision about an enormous gas field in south-east Asia. The field, off the coast of Indonesia, would have been the single largest source of global warming pollution at the time.

“Exxon first got interested in climate change in 1981 because it was seeking to develop the Natuna gas field off Indonesia,” Lenny Bernstein, a 30-year industry veteran and Exxon’s former in-house climate expert, wrote in the email. “This is an immense reserve of natural gas, but it is 70% CO2,” or carbon dioxide, the main driver of climate change.

However, Exxon’s public position was marked by continued refusal to acknowledge the dangers of climate change, even in response to appeals from the Rockefellers, its founding family, and its continued financial support for climate denial. Over the years, Exxon spent more than $30m on thinktanks and researchers that promoted climate denial, according to Greenpeace.

Exxon said on Wednesday that it now acknowledges the risk of climate change and does not fund climate change denial groups.

Some climate campaigners have likened Exxon’s conduct to that of the tobacco industry, which for decades resisted the evidence that smoking causes cancer….

Exxon knew of climate change in 1981, email says – but it funded deniers for 27 more years

Not Scared About the Pacific Northwest’s Impending Quake? You Should Be.

The Pacific Northwest is due for a continent-rending earthquake. Experts believe the odds of a Big One happening in the next half century are about one in three, the odds of a Very Big One roughly one in ten, and that, in either case, we are disastrously unprepared.

In the latest issue of The New Yorker, Kathryn Schulz writes extensively and captivatingly on the Pacific Northwest’s 700-mile-long Cascadia subduction zone, and the cataclysm that is projected to occur should it give way:

Take your hands and hold them palms down, middle fingertips touching. Your right hand represents the North American tectonic plate, which bears on its back, among other things, our entire continent, from One World Trade Center to the Space Needle, in Seattle. Your left hand represents an oceanic plate called Juan de Fuca, ninety thousand square miles in size. The place where they meet is the Cascadia subduction zone. Now slide your left hand under your right one. That is what the Juan de Fuca plate is doing: slipping steadily beneath North America. When you try it, your right hand will slide up your left arm, as if you were pushing up your sleeve. That is what North America is not doing. It is stuck, wedged tight against the surface of the other plate.

Without moving your hands, curl your right knuckles up, so that they point toward the ceiling. Under pressure from Juan de Fuca, the stuck edge of North America is bulging upward and compressing eastward, at the rate of, respectively, three to four millimetres and thirty to forty millimetres a year. It can do so for quite some time, because, as continent stuff goes, it is young, made of rock that is still relatively elastic. (Rocks, like us, get stiffer as they age.) But it cannot do so indefinitely. There is a backstop—the craton, that ancient unbudgeable mass at the center of the continent—and, sooner or later, North America will rebound like a spring. If, on that occasion, only the southern part of the Cascadia subduction zone gives way—your first two fingers, say—the magnitude of the resulting quake will be somewhere between 8.0 and 8.6. That’s the big one. If the entire zone gives way at once, an event that seismologists call a full-margin rupture, the magnitude will be somewhere between 8.7 and 9.2. That’s the very big one.

…By the time the shaking has ceased and the tsunami has receded, the region will be unrecognizable. Kenneth Murphy, who directs FEMA’s Region X, the division responsible for Oregon, Washington, Idaho, and Alaska, says, “Our operating assumption is that everything west of Interstate 5 will be toast.”

Not Scared About the Pacific Northwest’s Impending Quake? You Should Be.

_______________________________

After The Big One: An immersive, reported science fiction saga about surviving the coming mega-quake.


The Most Devastating Quake In US History Is Headed for Portland

WRITTEN BY ADAM ROTHSTEIN

March 3, 2016 // 06:01 AM EST


There is a 22 percent chance that by the time you finish reading this sentence, there will have been an earthquake somewhere on earth. This is a probability that is hard to grasp—it seems both obvious and diffuse. The world is a big place, and most earthquakes are relatively small.

But consider this: Geologists put the chance of a full rupture of the Cascadia Subduction Zone—that’s the fault line off the coast of California, Oregon, Washington, and British Columbia—at 7 to 15 percent over the next fifty years [1]. This would result in an 8.7 to 9.3 Mw earthquake. The biggest quake in recorded history, the 1960 Valdivia quake in Chile, weighed in at 9.5 Mw; and the recent 2011 Tōhoku earthquake off the coast of Japan measured 9.0 Mw. Relatively speaking, there is a significant chance the Pacific Northwest region will see an earthquake of historic magnitude in the not-so-distant future.

The chance of a slightly smaller (8.3 to 8.6 magnitude) earthquake is judged to be about 37 percent over the same time frame [2]. This is still a massive quake: the 1989 Loma Prieta quake that struck the Santa Cruz area during the World Series was “only” a 6.9 Mw, and the 1906 Great San Francisco quake is estimated to have been around 7.8 Mw. (See here for more on how we measure major earthquakes.) As a resident of Portland, Oregon, I had to pause after reading figures like that.
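To put those decimal differences in perspective, here is a standard seismology rule of thumb (not a figure from the article): radiated seismic energy grows roughly as 10^(1.5 Mw), so the energy ratio between two quakes is

```latex
% Energy ratio implied by the moment magnitude scale
\[
\frac{E_2}{E_1} \approx 10^{\,1.5\,(M_2 - M_1)},
\qquad\text{e.g.}\qquad
\frac{E_{9.0}}{E_{6.9}} \approx 10^{\,1.5 \times 2.1} \approx 1400
\]
```

In other words, a Tōhoku-scale 9.0 releases on the order of a thousand Loma Prietas’ worth of energy.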

We shouldn’t merely be concerned about the earthquake, but about the uncertainty of probabilities. How can we bet for or against such a large-scale catastrophe? If there was a one-third chance I would be hit by a car if I stepped into the street without looking, would I do it? Being hit by a car would be a terrible way to settle the matter one way or the other.
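One way to make such window probabilities easier to reason about is to convert them to an implied annual rate and recurrence interval, assuming quakes arrive as a Poisson process. That memoryless assumption is a simplification (real recurrence is thought to be quasi-periodic), and the `annual_rate` helper is mine, so treat this as a back-of-the-envelope sketch using the percentages quoted above:

```python
# Convert a stated 50-year rupture probability into an implied annual rate
# and recurrence interval, assuming a Poisson (memoryless) process.
import math

def annual_rate(p_window, years=50):
    """Rate r such that P(at least one event in `years` years) = 1 - exp(-r*years)."""
    return -math.log(1.0 - p_window) / years

for label, p in [("full rupture, low estimate",  0.07),
                 ("full rupture, high estimate", 0.15),
                 ("M8.3-8.6 quake",              0.37)]:
    r = annual_rate(p)
    print(f"{label}: {p:.0%} over 50 yr -> "
          f"{r:.4f}/yr (~1 in {1/r:,.0f} years)")
# full rupture: roughly 1 in 300-700 years; the 37% figure: roughly 1 in 110 years
```

For comparison, the 41 giant quakes in 10,000 years mentioned below imply roughly one every 240 years, broadly consistent with these implied intervals.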

There have been 41 of these giant quakes in the region in the last 10,000 years [3]. The last one hit in 1700 AD, coinciding with records of a massive tsunami that hit Japan and with Pacific Northwest natives’ oral traditions depicting a massive battle between a thunderbird and a whale. This history is written in the local geology: cutaway river banks still show the line of debris and soil that was washed into new locations, and the continental shelf is banded by the flow of undersea landslides. Along the coast of Washington, dead forests still stand, where cedar groves were killed as the land they grew on was dropped more than six feet into salt water.

The Northwest has changed quite a bit in the last three hundred years. A battle between two mythical creatures across the contemporary I-5 corridor would probably involve not just massive floods and shaking, but a massive collapse of local infrastructure. It could destroy the means for sustaining everything we consider to be the bedrock of a normal modern life.

Because Portland has been my hometown for nearly nine years, I went looking for answers about this chance event, if and when it were to happen here. I found thousands of pages worth of studies and reports, written by hundreds of public employees who’ve long been working on this very question. The Federal Emergency Management Agency, the Oregon State Office of Geology and Mineral Industries, the Oregon Department of Transportation, the Oregon Office of Emergency Management, the Portland Bureau of Emergency Management, the city Bureau of Transportation and even the Parks Department—all of these agencies and more have taken a crack at telling parts of the story about what might happen during a Cascadia Subduction Zone event. The accounts, informed by geologists, seismologists, geographers, engineers, transit experts, and city officials, are detailed, compelling, and often exhaustive.

Some of it is quite alarming. One study estimated that of 2,671 bridges in the “strong” shaking zone, 399 would be at least partially destroyed and 621 heavily damaged [4]. That means 38 percent of the region’s bridges out of service, all at once.

There are systemic vulnerabilities affecting Oregon as well. Nearly all the petroleum products for the entire state are imported through one particular area of Northwest Portland [5]. Despite being a modern state, Oregon is still cut off from the rest of the country by its terrain, connected by only a limited number of roads, railroads, and sea lanes. I read hundreds, if not thousands, of other facts, possibilities, probabilities, and potentialities like this, which remind me how amazing it is that our society holds together even in the best of times.

But these reports, too, are strictly in the language of estimates, in scenarios, in potential plans. And naturally so; there are no guarantees in engineering, let alone in emergencies. It is impossible for anyone to say exactly which bridges will collapse, which roads will be blocked, and which buildings will have electricity and sewer service. Similarly, there is no way to predict exactly how many people will die: either immediately, or in the long and difficult rebuilding process when water and electricity may be scarce. But there are estimates. There are scenarios.

The numbers began to slip through my fingers. To avoid the stress of gambling over the lifecycles of bridges and tunnels, I started to resign myself to fate. I took to telling myself: if it’s going to happen, it’s going to happen. But fate is a solipsistic wall erected between oneself and the world—a world that is always composed of confounding, frustrating, and mysterious facts.

So, instead of trusting in luck or throwing up our hands to fate, let’s tell a story. This story routes around probability by imagining a scenario in which the Cascadia Subduction Zone finally shifts, and the earthquake memorably described as “The Really Big One” by the New Yorker’s Kathryn Schulz comes to pass. This is the story of what happens next.

http://motherboard.vice.com/after-the-big-one