New information requires prior basic information
Building Pyramids: A model of knowledge representation
Efrat Furst, Post-doc Fellow at the Learning Incubator, SEAS, Harvard University. Her background is in cognitive-neuroscientific research and professional development for educators.
Archived from https://sites.google.com/view/efratfurst/pyramids
Every new piece of knowledge is learnt on the basis of already existing knowledge.
The principle that organizes the knowledge is ‘Making Meaning’, or the ability to integrate and use a new concept in the context of what we already know.
In this pyramid model, every brick is a ‘piece of knowledge’, and its correct placement on top of the previous layer represents ‘meaning’; the final structure requires both.
Every pyramid is also a brick in a higher-level pyramid. To learn a new piece of information (orange triangles) effectively, it should be learned on the basis of existing prior knowledge (gray triangles). Without prior knowledge (top panel), the new information cannot be integrated meaningfully (cannot create a structure) and would most likely not survive over time.

See also “Bloom’s Taxonomy—That Pyramid Is a Problem” by Doug Lemov.
Higher-order learning abilities, like critical thinking and creativity, depend on the existence of broad and well-established domain-specific knowledge in one or more areas.
Without this base, new high-level information cannot be structured appropriately, and hence will not be useful and will not be retained (top panel).
The wider and more varied the basis of prior knowledge is, the higher, more complex and more creative structures it can support (bottom panel).

Willingham, D. T. (2007). Critical thinking. American Educator, 31(3), 8-19
When the same routine of information is rehearsed during a session, a fast and impressive improvement may be evident. The gain, however, may not last long, because it is largely dependent on the specific context (of time, place, content, method, specific sequence, etc.). As the context fades over time, the same level of performance cannot be maintained (top panel).

However, when the study or practice is done in effective ways that emphasize creating meaningful connections to prior knowledge (elaboration), and between the newly learned items, we are building a stable structure of knowledge that may survive the passage of time and the absence of the learning context (bottom panel).
Prof. Robert Bjork on the distinction between Learning and Performance.
Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way
Often we want learning or practice to be fun for ourselves or for our students, in order to build a positive experience. But if we wish to build knowledge through this experience, we must make sure that something is actually being built.
Effective learning should include explicit elements of connecting the new knowledge to prior knowledge in meaningful ways (bottom panel), rather than just playing around with the new concept (top panel). Effective learning may be more effortful (in a good way) than fun, but the long-term results are usually rewarding.

Prof Robert Bjork on Desirable Difficulties
Some things can be learned independently: when the relevant prior knowledge is available and when the learner is able to make the required connections between the new information and the existing knowledge (top panel).
But for learning some other things, guidance is essential: to supply information, or to select the relevant information. Often guidance is needed to establish the nature of the relationships between the new and the existing information: a concrete example or a clear explanation that makes the pieces “fall” into the right place. With the appropriate guidance (bottom panel), more can be learned.

From neuroscience to the classroom
26th September 2018, by Efrat Furst
Can neuroscience add anything to our understanding of the classroom? And what should teachers make of it? Efrat Furst looks into how this lens might prove useful in the future.
https://researched.org.uk/from-neuroscience-to-the-classroom/
This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (Added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546)
Alzheimer’s disease

Possible causes of Alzheimer’s disease
We currently don’t know the cause of all forms of Alzheimer’s disease, and there may be more than one cause. But today we have increasingly strong evidence that many cases are caused by a combination of a genetic variant and a herpes virus.
Prions
Two proteins central to the pathology of Alzheimer’s disease act as prions—misshapen proteins that spread through tissue like an infection by forcing normal proteins to adopt the same misfolded shape—according to new UC San Francisco research.
Using novel laboratory tests, the researchers were able to detect and measure specific, self-propagating prion forms of the proteins amyloid beta (A-β) and tau in postmortem brain tissue of 75 Alzheimer’s patients. In a striking finding, higher levels of these prions in human brain samples were strongly associated with early-onset forms of the disease and younger age at death.
by University of California, San Francisco
Alzheimer’s disease is a ‘double-prion disorder,’ study shows
Herpes virus
Alzheimer’s: The heretical and hopeful role of infection
David Robson writes
…. The “amyloid beta hypothesis” has inspired countless trials of drugs that aimed to break up these toxic plaques. Yet this research has ended in many disappointments, without producing the desired improvements in patients’ prognosis. This has led some to wonder whether the amyloid beta hypothesis may be missing an important part of the story. “The plaques that Alzheimer observed are the manifestation of the disease, not the cause,” says geriatrics scientist Tamas Fulop at the University of Sherbrooke in Canada.
Scientists studying Alzheimer’s have also struggled to explain why some people develop the disease while others don’t. Genetic studies show that the presence of a gene variant – APOE4 – can vastly increase someone’s chances of building the amyloid plaques and developing the disease.
But the gene variant does not seal someone’s fate as many people carry APOE4 but don’t suffer from serious neurodegeneration. Some environmental factors must be necessary to set off the genetic time bomb, prompting the build-up of the toxic plaques and protein tangles.
Early evidence
Could certain microbes act as a trigger? That’s the central premise of the infection hypothesis.
Itzhaki has led the way with her examinations into the role of the herpes simplex virus (HSV1), which is most famous for causing cold sores on the skin around the mouth. Importantly, the virus is known to lie dormant for years, until times of stress or ill health, when it can become reactivated – leading to a new outbreak of the characteristic blisters.
While it had long been known that the virus could infect the brain – leading to a dangerous swelling called encephalitis that required immediate treatment – this was thought to be a very rare event. In the early 1990s, however, Itzhaki’s examinations of post-mortem tissue revealed that a surprising number of people showed signs of HSV1 in their neural tissue, without having suffered from encephalitis.
Importantly, the virus didn’t seem to be a risk for the people without the APOE4 gene variant, most of whom did not develop dementia. Nor did the presence of APOE4 make much difference to the risk of people without the infection.
Instead, it was the combination of the two that proved to be important. Overall, Itzhaki estimates that the two risk factors make it 12 times more likely that someone will develop Alzheimer’s, compared to people without the gene variant or the latent infection in their brain.
Itzhaki hypothesised that this was due to repeated reactivation of the latent virus – which, during each bout, invades the brain and somehow triggers the production of amyloid beta, until eventually, people start to show the cognitive decline that marks the onset of dementia.
Itzhaki says that her findings were met with a high degree of scepticism by other scientists. “We had the most awful trouble getting it published.” Many assumed that the experiments were somehow contaminated, she says, leading to an illusory result. Yet she had been careful to avoid this possibility, and the apparent link between HSV1 infection and Alzheimer’s disease has now been replicated in many different populations.
One paper, published earlier this year, examined cohorts from Bordeaux, Dijon, Montpellier and rural France. By tracking certain antibodies, they were able to detect who had been infected with the herpes simplex virus. The researchers found that the infection roughly tripled the risk of developing Alzheimer’s in APOE4 carriers over a seven-year follow-up period – but had no effect in people who were not carrying the gene.
“The herpes virus was only able to have a deleterious effect if there was APOE4,” says Catherine Helmer at the University of Bordeaux in France, who conducted the research.
To date, the most compelling evidence for the infection hypothesis comes from a large study in Taiwan, published in 2018, which looked at the progress of 8,362 people carrying a herpes simplex virus. Crucially, some of the participants were given antiviral drugs to treat the infection. As the infection hypothesis predicted, this reduced the risk of dementia.
Overall, those taking a long course of medication were around 90% less likely to develop dementia over the 10-year study period than the participants who had not received any treatment for their infection.
“It’s a result that is so striking, it’s hard to believe,” says Anthony Komaroff, a professor at Harvard Medical School and a senior physician at Brigham and Women’s Hospital in Boston, who recently reviewed the current state of the research into the infection hypothesis for the Journal of the American Medical Association. Although he remains cautious about lending too much confidence to any single study, he is now convinced that the idea demands more attention. “It’s such a dramatic result that it must be taken seriously,” he says.
Komaroff knows of no theoretical objections to the theory. “I haven’t heard anyone, even world-class Alzheimer’s experts who are dubious about the infection hypothesis, give a good reason why it has to be bunkum,” he adds. We simply need more studies providing direct evidence for the link, he says, to be able to convince the sceptics.
– from Alzheimer’s: The heretical and hopeful role of infection, BBC Future, David Robson, 6th October 2021
==============
Alzheimer’s disease: mounting evidence that herpes virus is a cause, The Conversation US, Oct 19, 2018
Ruth Itzhaki, Professor Emeritus of Molecular Neurobiology, University of Manchester
More than 30m people worldwide suffer from Alzheimer’s disease – the most common form of dementia. Unfortunately, there is no cure, only drugs to ease the symptoms. However, my latest review suggests a way to treat the disease. I found the strongest evidence yet that the herpes virus is a cause of Alzheimer’s, suggesting that effective and safe antiviral drugs might be able to treat the disease. We might even be able to vaccinate our children against it.
The virus implicated in Alzheimer’s disease, herpes simplex virus type 1 (HSV1), is better known for causing cold sores. It infects most people in infancy and then remains dormant in the peripheral nervous system (the part of the nervous system that isn’t the brain and the spinal cord). Occasionally, if a person is stressed, the virus becomes activated and, in some people, it causes cold sores.
We discovered in 1991 that in many elderly people HSV1 is also present in the brain. And in 1997 we showed that it confers a strong risk of Alzheimer’s disease when present in the brain of people who have a specific gene known as APOE4.
The virus can become active in the brain, perhaps repeatedly, and this probably causes cumulative damage. The likelihood of developing Alzheimer’s disease is 12 times greater for APOE4 carriers who have HSV1 in the brain than for those with neither factor.
Later, we and others found that HSV1 infection of cell cultures causes beta-amyloid and abnormal tau proteins to accumulate. An accumulation of these proteins in the brain is characteristic of Alzheimer’s disease.
We believe that HSV1 is a major contributory factor for Alzheimer’s disease and that it enters the brains of elderly people as their immune system declines with age. It then establishes a latent (dormant) infection, from which it is reactivated by events such as stress, a reduced immune system and brain inflammation induced by infection by other microbes.
Reactivation leads to direct viral damage in infected cells and to viral-induced inflammation. We suggest that repeated activation causes cumulative damage, leading eventually to Alzheimer’s disease in people with the APOE4 gene.
Presumably, in APOE4 carriers, Alzheimer’s disease develops in the brain because of greater HSV1-induced formation of toxic products, or less repair of damage.
New treatments? The data suggest that antiviral agents might be used for treating Alzheimer’s disease. The main antiviral agents, which are safe, prevent new viruses from forming, thereby limiting viral damage.
In an earlier study, we found that the anti-herpes antiviral drug, acyclovir, blocks HSV1 DNA replication, and reduces levels of beta-amyloid and tau caused by HSV1 infection of cell cultures.
It’s important to note that all studies, including our own, only show an association between the herpes virus and Alzheimer’s – they don’t prove that the virus is an actual cause. Probably the only way to prove that a microbe is a cause of a disease is to show that an occurrence of the disease is greatly reduced either by targeting the microbe with a specific anti-microbial agent or by specific vaccination against the microbe.
Excitingly, successful prevention of Alzheimer’s disease by use of specific anti-herpes agents has now been demonstrated in a large-scale population study in Taiwan. Hopefully, information in other countries, if available, will yield similar results.
=================================
Corroboration of a Major Role for Herpes Simplex Virus Type 1 in Alzheimer’s Disease
Ruth F. Itzhaki, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
Front. Aging Neurosci., 19 October 2018, https://doi.org/10.3389/fnagi.2018.00324
Strong evidence has emerged recently for the concept that herpes simplex virus type 1 (HSV1) is a major risk for Alzheimer’s disease (AD). This concept proposes that latent HSV1 in brain of carriers of the type 4 allele of the apolipoprotein E gene (APOE-ε4) is reactivated intermittently by events such as immunosuppression, peripheral infection, and inflammation, the consequent damage accumulating, and culminating eventually in the development of AD….
===================
How an outsider in Alzheimer’s research bucked the prevailing theory — and clawed for validation
Sharon Begley, Stat News, 10/29/2018
Robert Moir was damned if he did and damned if he didn’t. The Massachusetts General Hospital neurobiologist had applied for government funding for his Alzheimer’s disease research and received wildly disparate comments from the scientists tapped to assess his proposal’s merits.
It was an “unorthodox hypothesis” that might “fill flagrant knowledge gaps,” wrote one reviewer, but another said the planned work might add little “to what is currently known.” A third complained that although Moir wanted to study whether microbes might be involved in causing Alzheimer’s, no one had proved that was the case.
As if scientists are supposed to study only what’s already known, an exasperated Moir thought when he read the reviews two years ago.
He’d just had a paper published in a leading journal, providing strong data for his idea that beta-amyloid, a hallmark of Alzheimer’s disease, might be a response to microbes in the brain. If true, the finding would open up vastly different possibilities for therapy than the types of compounds virtually everyone else was pursuing.
But the inconsistent evaluations doomed Moir’s chances of winning the $250,000 a year for five years that he was requesting from the National Institutes of Health. While two reviewers rated his application highly, the third gave him scores in the cellar. Funding rejected.
Complaints about being denied NIH funding are as common among biomedical researchers as spilled test tubes after a Saturday night lab kegger. The budgets of NIH institutes that fund Alzheimer’s research at universities and medical centers cover only the top 18 percent or so of applications. There are more worthy studies than money.
Moir’s experience is notable, however, because it shows that, even as one potential Alzheimer’s drug after another has failed for the last 15 years (the last such drug, Namenda, was approved in 2003), researchers with fresh approaches — and sound data to back them up — have struggled to get funded and to get studies published in top journals. Many scientists in the NIH “study sections” that evaluate grant applications, and those who vet submitted papers for journals, have so bought into the prevailing view of what causes Alzheimer’s that they resist alternative explanations, critics say.
“They were the most prominent people in the field, and really good at selling their ideas,” said George Perry of the University of Texas at San Antonio and editor-in-chief of the Journal of Alzheimer’s Disease. “Salesmanship carried the day.”
Dating to the 1980s, the amyloid hypothesis holds that the disease is caused by sticky agglomerations, or plaques, of the peptide beta-amyloid, which destroy synapses and trigger the formation of neuron-killing “tau tangles.” Eliminating plaques was supposed to reverse the disease, or at least keep it from getting inexorably worse. It hasn’t. The reason, more and more scientists suspect, is that “a lot of the old paradigms, from the most cited papers in the field going back decades, are wrong,” said MGH’s Rudolph Tanzi, a leading expert on the genetics of Alzheimer’s.
Even with the failure of amyloid orthodoxy to produce effective drugs, scientists who had other ideas saw their funding requests repeatedly denied and their papers frequently rejected. Moir is one of them.
For years in the 1990s, Moir, too, researched beta-amyloid, especially its penchant for gunking up into plaques and “a whole bunch of things all viewed as abnormal and causing disease,” he said. “The traditional view is that amyloid-beta is a freak, that it has a propensity to form fibrils that are toxic to the brain — that it’s irredeemably bad. In the 1980s, that was a reasonable assumption.”
But something had long bothered him about the “evil amyloid” dogma. The peptide is made by all vertebrates, including frogs and lizards and snakes and fish. In most species, it’s identical to humans’, suggesting that beta-amyloid evolved at least 400 million years ago. “Anything so extensively conserved over that immense span of time must play an important physiological role,” Moir said.
What, he wondered, could that be?
In 1994, Moir changed hemispheres to work as a postdoctoral fellow with Tanzi. They’d hit it off over beers at a science meeting in Amsterdam. Moir liked that Tanzi’s lab was filled with energetic young scientists — and that in cosmopolitan Boston, he could play the hyper-kinetic (and bone-crunching) sport of Australian rules football. Tanzi liked that Moir was the only person in the world who could purify large quantities of the molecule from which the brain makes amyloid.
Moir initially focused on genes that affect the risk of Alzheimer’s — Tanzi’s specialty. But Moir’s intellectual proclivities were clear even then. His mind is constantly noodling scientific puzzles, colleagues say, even during down time. Moir took a vacation in the White Mountains a decade ago with his then-6-year-old son and a family friend, an antimicrobial expert; in between hikes, Moir explained a scientific roadblock he’d hit, and the friend explained a workaround.
Moir’s inclination toward unconventional thinking took flight in 2007. He was (and still is) in the habit of spending a couple of hours Friday afternoons on what he calls “PubMed walkabouts,” casually perusing that database of biomedical papers. One summer day, a Corona in hand, he came across a paper on something called LL37. It was described as an “antimicrobial peptide” that kills viruses, fungi, and bacteria, including — maybe especially — in the brain.
What caught his eye was that LL37’s size and structure and other characteristics were so similar to beta-amyloid, the two might be twins.
Moir hightailed it to Tanzi’s office next door. Serendipitously, Tanzi (also Corona-fueled) had just received new data from his study of genes that increase the risk of Alzheimer’s disease. Many of the genes, he saw, are involved in innate immunity, the body’s first line of defense against germs. If immune genetics affect Alzheimer’s, and if the chief suspect in Alzheimer’s (beta-amyloid) is a virtual twin of an antimicrobial peptide, maybe beta-amyloid is also an antimicrobial, Moir told Tanzi.
If so, then the plaques it forms might be the brain’s last-ditch effort to protect itself from microbes, a sort of Spider-Man silk that binds up pathogens to keep them from damaging the brain. Maybe they save the brain from pathogens in the short term only to themselves prove toxic over the long term.
Tanzi encouraged Moir to pursue that idea. “Rob was trained [by Marshall] to think out of the box,” Tanzi said. “He thinks so far out of the box he hasn’t found the box yet.”
Moir spent the next three years testing whether beta-amyloid can kill pathogens. He started simple, in test tubes and glass dishes. Those are relatively cheap, and Tanzi had enough funding to cover what Moir was doing: growing little microbial gardens in lab dishes and then trying to kill them.
Day after day, Moir and his junior colleagues played horticulturalists. They added staph and strep, the yeast candida, and the bacteria pseudomonas, enterococcus, and listeria to lab dishes filled with the nutrient medium agar. Once the microbes formed a thin layer on top, they squirted beta-amyloid onto it and hoped for an Alexander Fleming discovery-of-penicillin moment.
– from How an outsider in Alzheimer’s research bucked the prevailing theory — and clawed for validation, Stat News
_________________________________
Autoimmune disease
Autoimmune diseases occur when the body’s immune system targets and damages the body’s own cells.

Our bodies have an immune system: a network of special cells and organs that defends the body from germs and other foreign invaders.
At the core of the immune system is the ability to tell the difference between self and nonself: between what’s you and what’s foreign.
If the system becomes unable to tell the difference between self and nonself then the body makes autoantibodies (AW-toh-AN-teye-bah-deez) that attack normal cells by mistake.
At the same time, we always have regulatory T cells. They keep the rest of our immune system in line. If they fail to work correctly then other white blood cells can mistakenly attack parts of our body. This causes the damage we know as autoimmune disease.
The body parts that are affected depend on the type of autoimmune disease. There are more than 100 known types.
Overall, autoimmune diseases are common, affecting more than 23.5 million Americans. They are a leading cause of death and disability. Some autoimmune diseases are rare, while others, such as Hashimoto’s disease, affect many people.
(Intro adapted from U.S. Department of Health & Human Services, Office on Women’s Health)
Causes
There are many different autoimmune diseases, and they have different causes; each particular autoimmune disorder may itself have several different causes.
Medical researchers are still learning how autoimmune diseases develop. They seem to arise from a combination of genetic predisposition and some trigger in the environment.
TBA: The hygiene hypothesis
Examples
Crohn’s disease
Diabetes (Type 1 diabetes mellitus)
Guillain-Barre syndrome
Inflammatory bowel disease (IBD)
Lupus (Systemic lupus erythematosus)
Multiple sclerosis (MS)
Rheumatoid arthritis
Treatment
Many autoimmune disorders can now be partially treated with biologics (engineered biological molecules) that modulate the immune system. These drugs can treat – but not cure – some autoimmune diseases.
Examples include infliximab, etanercept, and adalimumab.
Learning Standards
Massachusetts Comprehensive Health Curriculum Framework
Students will gain the knowledge and skills to select a diet that supports health and reduces the risk of illness and future chronic diseases. PreK–12 Standard 4
Through the study of Prevention students will
8.1 Describe how the body fights germs and disease naturally and with medicines and immunization.
Through the study of Signs, Causes, and Treatment students will
8.2 Identify the common symptoms of illness and recognize that being responsible for individual health means alerting caretakers to any symptoms of illness
8.5 Identify ways individuals can reduce risk factors related to communicable and chronic diseases
8.13 Explain how the immune system functions to prevent and combat disease
Benchmarks for Science Literacy, AAAS
The immune system functions to protect against microscopic organisms and foreign substances that enter from outside the body and against some cancer cells that arise within. 6C/H1*
Some allergic reactions are caused by the body’s immune responses to usually harmless environmental substances. Sometimes the immune system may attack some of the body’s own cells. 6E/H1
Developing writing skills: Verb wheel
Verb wheel
This is a verb wheel inspired by Bloom’s taxonomy. Each level of the cognitive domain has actions and verbs that are specific to it. The chart illustrates the six levels, the verbs associated with each, and the student activities that correspond to each level.
By using these verbs and activities, educators can frame questions so that students “climb the staircase” of Bloom’s taxonomy and eventually master the material.

Footnotes
TBA
Learning Standards
TBA
Thinking well requires knowing facts
On his blog, Rough Type, author Nicholas Carr writes:

With lots of kids heading to school this week, an old question comes back to the fore: Can thinking be separated from knowing?
Many people, and not a few educators, believe that the answer is yes. Schools, they suggest, should focus on developing students’ “critical thinking skills” rather than on helping them beef up their memories with facts and other knowledge about the world. With the Internet, they point out, facts are always within easy reach. Why bother to make the effort to cram stuff into your own long-term memory when there’s such a capacious store of external, or “transactive,” memory to draw on? A kid can google the facts she needs, plug them into those well-honed “critical thinking skills,” and – voila! – brilliance ensues.
That sounds good, but it’s wrong. The idea that thinking and knowing can be separated is a fallacy, as the University of Virginia psychologist Daniel Willingham explains in his book Why Don’t Students Like School.
This excerpt from Willingham’s book seems timely:
I defined thinking as combining information in new ways. The information can come from long-term memory — facts you’ve memorized — or from the environment. In today’s world, is there a reason to memorize anything? You can find any factual information you need in seconds via the Internet. Then too, things change so quickly that half of the information you commit to memory will be out of date in five years — or so the argument goes. Perhaps instead of learning facts, it’s better to practice critical thinking, to have students work at evaluating all that information available on the Internet, rather than trying to commit some small part of it to memory.
This argument is false. Data from the last thirty years lead to a conclusion that is not scientifically challengeable: thinking well requires knowing facts, and that’s true not simply because you need something to think about. The very processes that teachers care about most — critical thinking processes such as reasoning and problem solving — are intimately intertwined with factual knowledge that is in long-term memory (not just found in the environment).
It’s hard for many people to conceive of thinking processes as intertwined with knowledge. Most people believe that thinking processes are akin to those of a calculator. A calculator has available a set of procedures (addition, multiplication, and so on) that can manipulate numbers, and those procedures can be applied to any set of numbers. The data (the numbers) and the operations that manipulate the data are separate. Thus, if you learn a new thinking operation (for example, how to critically analyze historical documents), it seems like that operation should be applicable to all historical documents, just as a fancier calculator that computes sines can do so for all numbers.
But the human mind does not work that way. When we learn to think critically about, say, the start of the Second World War, it does not mean that we can think critically about a chess game or about the current situation in the Middle East or even about the start of the American Revolutionary War. Critical thinking processes are tied to background knowledge. The conclusion from this work in cognitive science is straightforward: we must ensure that students acquire background knowledge along with practicing critical thinking skills.
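Willingham’s calculator analogy can be made concrete with a minimal, illustrative sketch (the function names here are ours, not Willingham’s): a calculator-style procedure is fully separable from its data, whereas no comparably domain-general “analyze” procedure exists in human cognition.

```python
# A calculator-style operation: the procedure is independent of the data.
# It works on any numbers you feed it, with no background knowledge needed.
def add(a, b):
    return a + b

print(add(2, 3))       # 5
print(add(400, -17))   # 383

# Human critical thinking is not like this. A hypothetical general-purpose
#   analyze(document)
# that works equally well on WWII histories, chess games, and Middle East
# politics does not exist: the "procedure" is entangled with domain-specific
# background knowledge stored in long-term memory.
```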
Willingham goes on to explain that once a student has mastered a subject — once she’s become an expert — her mind will become fine-tuned to her field of expertise and she’ll be able to fluently combine transactive memory with biological memory.
But that takes years of study and practice. During the K – 12 years, developing a solid store of knowledge is essential to learning how to think. There’s still no substitute for a well-furnished mind.
________________________________
This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (Added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546)
Bloom’s Taxonomy
Bloom’s taxonomy is a widely accepted model of how students learn, created in the 1950s by Benjamin Samuel Bloom, an American educational psychologist.
It is a set of hierarchical models used to classify educational learning objectives into levels of complexity and specificity. They cover learning objectives in cognitive, affective and sensory domains.
Bloom edited the first volume of the standard text, Taxonomy of Educational Objectives in 1956. A second edition arrived in 1964, and a revised version in 2001.
In the original version of the taxonomy, the cognitive domain is broken into six levels of objectives: Knowledge, Comprehension, Application, Analysis, Synthesis, Evaluation. In the 2001 revised edition of Bloom’s taxonomy, the levels are changed to: Remember, Understand, Apply, Analyze, Evaluate, and Create.
The introduction above was excerpted and adapted from Wikipedia by RK.
“Bloom’s taxonomy.” Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 27 Sep. 2018.

Despite Bloom’s intention that it be used in colleges and graduate schools, it is now frequently used in American kindergarten-through-high-school curricula, learning objectives, assessments, and activities. Bloom himself was skeptical of this.
Despite popular belief, the taxonomy had no scientific basis. Richard Morshead (1965) pointed out upon the publication of the second volume that the classification was not a properly constructed taxonomy: it lacked a systemic rationale of construction.
Morshead, Richard W. (1965). “On Taxonomy of educational objectives Handbook II: Affective domain”. Studies in Philosophy and Education. 4 (1)
This criticism was acknowledged in 2001 when a revision was made to create a taxonomy on more systematic lines. Nonetheless, there is skepticism that the hierarchy indicated is adequate. Some teachers do see the three lowest levels as hierarchically ordered, but view the higher levels as parallel.
Bloom himself was aware that the distinction between categories in some ways is arbitrary. Any task involving thinking entails multiple mental processes.
The most common criticism, perhaps most important to hear today, is that curriculum designers implicitly – and often explicitly – mistakenly dismiss the lowest levels of the pyramid as unworthy of teaching. Common Core skills-based curricula and professional development drill into teachers the idea that we shouldn’t be teaching students “facts”; rather, we should encourage students to ask questions, investigate, and learn the material organically, for themselves.
What this doctrine misses is the fact that today’s knowledge in math, science, history, etc., is literally the product of thousands of thinkers and writers, and millions of man-hours of thinking, research, and peer review. Constructing a substantial knowledge of algebra from scratch could take a student 20 or 30 years – or they could simply be taught the supposedly “lower level facts” of the rules of algebra.
As you read the modern-day evaluations of Bloom’s taxonomy, below, note the consensus: the learning of lower level skills is necessary to enable the building of higher level skills. In other words, new information requires prior basic information.
Thinking well requires knowing facts
Psychologist Daniel Willingham explains in his book Why Don’t Students Like School:
[Modern teachers have been told that] perhaps instead of learning facts, it’s better to practice critical thinking, to have students work at evaluating all that information available on the Internet, rather than trying to commit some small part of it to memory.
This argument is false. Data from the last thirty years lead to a conclusion that is not scientifically challengeable: thinking well requires knowing facts, and that’s true not simply because you need something to think about. The very processes that teachers care about most — critical thinking processes such as reasoning and problem solving — are intimately intertwined with factual knowledge that is in long-term memory (not just found in the environment)….
Critical thinking processes are tied to background knowledge. The conclusion from this work in cognitive science is straightforward: we must ensure that students acquire background knowledge while practicing critical thinking skills.
From “Why Don’t Students Like School.”
Bloom’s Taxonomy: A Deeper Learning Perspective
In Education Week, Ron Berger writes
The problem is that both versions present a false vision of learning. Learning is not a hierarchy or a linear process. This graphic gives the mistaken impression that these cognitive processes are discrete, that it’s possible to perform one of these skills separately from others. It also gives the mistaken impression that some of these skills are more difficult and more important than others. It can blind us to the integrated process that actually takes place in students’ minds as they learn.
My critique of this framework is not intended to blame anyone. I don’t assume that Benjamin Bloom and his team, or the group who revised his pyramid, necessarily intended for us to see these skills as discrete or ranked in importance. I also know that thoughtful educators use this framework to excellent ends–to emphasize that curriculum and instruction must focus in a balanced way on the full range of skills, for all students from all backgrounds.
But my experience suggests that what most of us take away from this pyramid is the idea that these skills are discrete and hierarchical. That misconception undermines our understanding of teaching and learning, and our work with students.
from – Here’s What’s Wrong With Bloom’s Taxonomy: A Deeper Learning Perspective, By Ron Berger, Chief Academic Officer at EL Education.
Bloom’s Taxonomy – That Pyramid is a Problem
Doug Lemov writes
Bloom’s is a ‘framework.’ This is to say it is an idea – one that’s compelling in many ways, perhaps, but not based on data or cognitive science, say. In fact it was developed pretty much before there was such a thing as cognitive science. So it’s almost assuredly got some value to it and it’s almost assuredly gotten some things wrong.
…Generally when teachers talk about “Bloom’s taxonomy,” they talk with disdain about “lower level” questions. They believe, perhaps because of the pyramid image which puts knowledge at the bottom, that knowledge-based questions, especially via recall and retrieval practice, are the least productive thing they could be doing in class. No one wants to be the rube at the bottom of the pyramid.
But this, interestingly is not what Bloom’s argued—at least according to Vanderbilt’s description. Saying knowledge questions are low value and that knowledge is the necessary precondition for deep thinking are very different things.
More importantly believing that knowledge questions—even mere recall of facts—are low value doesn’t jibe with the overwhelming consensus of cognitive science, summarized here by Daniel Willingham, who writes,
Data from the last thirty years lead to a conclusion that is not scientifically challengeable: thinking well requires knowing facts, and that’s true not simply because you need something to think about.
The very processes that teachers care about most — critical thinking processes such as reasoning and problem solving — are intimately intertwined with factual knowledge that is in long-term memory (not just found in the environment)
In other words, there are two parts to the equation. You not only have to teach a lot of facts to allow students to think deeply, but you have to reinforce knowledge enough to install it in long-term memory, or you can’t do any of the activities at the top of the pyramid.
Or, more precisely, you can do them, but they are going to be all but worthless. Knowledge, reinforced by recall and retrieval practice, is the precondition.

… I’m going to propose a revision to the Bloom ‘pyramid’ so the graphic is far more representative. I’m calling it Bloom’s Delivery Service. In it, knowledge is not at the bottom of a pyramid but is the fuel that allows the engine of thinking to run.
Teachlikeachampion.com/blog/blooms-taxonomy-pyramid-problem/
A Critical Appraisal of Bloom’s Taxonomy
Seyyed Mohammad Ali Soozandehfar and Mohammad Reza Adeli
American Research Journal of English and Literature (ARJEL), Volume 2, 2016
… In 1999, Dr. Lorin Anderson, a former student of Bloom’s, and his colleagues published an updated version of Bloom’s Taxonomy that takes into account a broader range of factors that have an impact on teaching and learning. This revised taxonomy attempts to correct some of the problems with the original taxonomy. Unlike the 1956 version, the revised taxonomy differentiates between “knowing what,” the content of thinking, and “knowing how,” the procedures used in solving problems.
… Today’s world is a different place, however, than the one Bloom’s Taxonomy reflected in 1956. Educators have learned a great deal more about how students learn and teachers teach and now recognize that teaching and learning encompasses more than just thinking. It also involves the feelings and beliefs of students and teachers as well as the social and cultural environment of the classroom.
… as anyone who has worked with a group of educators to classify a group of questions and learning activities according to the Taxonomy can attest, there is little consensus about what seemingly self-evident terms like “analysis,” or “evaluation” mean.
In addition, so many worthwhile activities, such as authentic problems and projects, cannot be mapped to the Taxonomy, and trying to do that would diminish their potential as learning opportunities.
…. it has been maintained that Bloom’s Taxonomy is more often than not interpreted incorrectly. Booker (2007) believes that Bloom’s Taxonomy “has been used to devalue basic skills education and has promoted ‘higher order thinking’ at its expense” (2007, p. 248).
In other words, lower order skills such as knowledge and comprehension are being treated as less critical, or of little value.
Being referred to as lower order skills does not make knowledge or comprehension any less important; rather, they are arguably the most important cognitive skills, because knowledge and comprehension of a subject are vital to advancing up the levels of the taxonomy.
Therefore, in line with Booker’s conclusion, the Taxonomy is being improperly used. Bloom never stated that any of his cognitive levels were less important, just that they followed a hierarchical structure.
More on Bloom’s Taxonomy
A Roof without Walls: Benjamin Bloom’s Taxonomy and the Misdirection of American Education, By Michael Booker
Abstract: Plato wrote that higher order thinking could not start until the student had mastered conventional wisdom. The American educational establishment has turned Plato on his head with the help of a dubious approach to teaching developed by one Benjamin Bloom.
Bloom’s taxonomy was intended for higher education, but its misappropriation has resulted in a serious distortion of the purpose of the K–12 years. Michael Booker attributes the inability of American children to compete internationally to a great extent to our reliance on Bloom in expecting critical and advanced thinking from kids who have been trained to regard facts and substantive knowledge as unimportant.
Bloom’s Taxonomy has become influential to the point of dogma in American Colleges of Education.
Bloom’s Taxonomy has been used to devalue basic skills education and has promoted “higher order thinking” at its expense.
Shortchanging basic skills education has resulted in producing students who misunderstand true higher-order thinking and who are not equipped for advanced education.
…. Soon after it was published, a body of research began to build around the taxonomy. In 1970, Cox and Wildemann collected an index of the existing research into Bloom’s Taxonomy. According to their study, 118 research projects of various sorts had been conducted in the previous decade and a half.
A review of their data, however, shows that most of the research lacked experimental results that might either confirm or invalidate it. The results noted are not reassuring. Initial studies showed that individuals skilled in the Taxonomy frequently could not agree on the classification of test items or objectives.
… This adds up to an extraordinary misreading of the Taxonomy. Standards intended for college students get pushed down to the K–12 system. Instead of teaching those K–12 students hierarchically, the foundation of the structure is ignored. The push is made to the highest levels of the Taxonomy, especially level six, Evaluation. … I will quote its caveats about Evaluation.
For the most part, the evaluations customarily made by an individual are quick decisions not preceded by very careful consideration of the various aspects of the object, idea or activity being judged. These might be termed opinions rather than judgments. … For purposes of classification, only those evaluations which are or can be made with distinct criteria in mind are considered.
Despite these warnings, typical Evaluation questions take the form of “What do you think about x?” and “Do you agree with x?” These questions are often accompanied by praise for what education literature misidentifies as the “Socratic method.” The result of this strategy is to occupy class time with vacuous opining.
When I speak with my fellow community college instructors, we rarely complain about students’ lack of advanced intellectual skills. Our chief source of frustration is that they haven’t mastered the basics needed to succeed in college-level work. Since I teach philosophy, I don’t expect my students to come to class knowing any content about my subject area.
Still, it would be lovely if they exited high school with some knowledge of world history, science, English, and geography. A large cohort (much to my frustration) doesn’t know how many grams are in a kilogram or when to use an apostrophe. I have a friend, Dr. Lawrence Barker, who once taught statistics at a state university. Each quarter he quizzed his incoming statistics students about basic math. The majority, he learned, couldn’t determine the square root of one without access to a calculator. He left teaching and is now happily employed by the Centers for Disease Control.
A Roof without Walls: Benjamin Bloom’s Taxonomy and the Misdirection of American Education, Michael Booker, Academic Questions 20(4):347-355 · December 2007
Alternative models of learning
Rex Heer, at the Iowa State University Center for Excellence in Learning and Teaching created this model. He writes:
Among other modifications, Anderson and Krathwohl’s (2001) revision of the original Bloom’s taxonomy (Bloom & Krathwohl, 1956) redefines the cognitive domain as the intersection of the Cognitive Process Dimension and the Knowledge Dimension.
This document offers a three-dimensional representation of the revised taxonomy of the cognitive domain.
Although the Cognitive Process and Knowledge dimensions are represented as hierarchical steps, the distinctions between categories are not always clear-cut.
For example, all procedural knowledge is not necessarily more abstract than all conceptual knowledge; and an objective that involves analyzing or evaluating may require thinking skills that are no less complex than one that involves creating. It is generally understood, nonetheless, that lower order thinking skills are subsumed by, and provide the foundation for higher order thinking skills.
A Model of Learning Objectives by Rex Heer
The Knowledge Dimension classifies four types of knowledge that learners may be expected to acquire or construct— ranging from concrete to abstract.

The Cognitive Process Dimension represents a continuum of increasing cognitive complexity – from lower order thinking skills to higher order thinking skills.

Based on this, Rex Heer develops this three dimensional model.
Again, please note that, as Bloom himself always intended, remembering facts (misunderstood as the “lowest” part of the model) is actually the most important part: remembering facts is the base on which everything else depends.
One can’t engage in higher level critical thinking skills on a subject without first knowing the content of the subject.

Model by Rex Heer, Iowa State University, Center for Excellence in Learning and Teaching, Jan 2012. Creative Commons Attribution Non Commercial-ShareAlike 3.0 Unported License.
Speculative history: Possibility of prehuman civilization
Could an intelligent species have lived on Earth before humanity? Could it even have developed an industrial civilization? If one had existed on Earth – many millions of years prior to our own era – what traces would it have left and would they be detectable today?
It only took five minutes for Gavin Schmidt to out-speculate me. Schmidt is the director of NASA’s Goddard Institute for Space Studies (a.k.a. GISS), a world-class climate-science facility. One day last year, I came to GISS with a far-out proposal. In my work as an astrophysicist, I’d begun researching global warming from an “astrobiological perspective.” That meant asking whether any industrial civilization that rises on any planet will, through its own activity, trigger its own version of a climate shift. I was visiting GISS that day hoping to gain some climate-science insights and, perhaps, collaborators. That’s how I ended up in Gavin’s office.
Just as I was revving up my pitch, Gavin stopped me in my tracks. “Wait a second,” he said. “How do you know we’re the only time there’s been a civilization on our own planet?”
It took me a few seconds to pick up my jaw off the floor. I had certainly come into Gavin’s office prepared for eye rolls at the mention of “exo-civilizations.” But the civilizations he was asking about would have existed many millions of years ago. Sitting there, seeing Earth’s vast evolutionary past telescope before my mind’s eye, I felt a kind of temporal vertigo. “Yeah,” I stammered. “Could we tell if there’d been an industrial civilization that deep in time?” We never got back to aliens. Instead, that first conversation launched a new study we’ve recently published in the International Journal of Astrobiology. …
We’re used to imagining extinct civilizations in terms of sunken statues and subterranean ruins. These kinds of artifacts of previous societies are fine if you’re only interested in timescales of a few thousands of years. But once you roll the clock back to tens of millions or hundreds of millions of years, things get more complicated. When it comes to direct evidence of an industrial civilization—things like cities, factories, and roads—the geologic record doesn’t go back past what’s called the Quaternary period 2.6 million years ago…. Go back much further than the Quaternary, and everything has been turned over and crushed to dust….
So could researchers find clear evidence that an ancient species built a relatively short-lived industrial civilization long before our own? Perhaps, for example, some early mammal rose briefly to civilization building during the Paleocene epoch, about 60 million years ago. There are fossils, of course. But the fraction of life that gets fossilized is always minuscule and varies a lot depending on time and habitat. It would be easy, therefore, to miss an industrial civilization that lasted only 100,000 years—which would be 500 times longer than our industrial civilization has made it so far.
From Was There a Civilization on Earth Before Humans? A look at the available evidence, Adam Frank, The Atlantic, 4/13/2018
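The arithmetic behind Frank’s “500 times longer” figure is easy to check. The sketch below assumes our industrial civilization is roughly 200 years old (an implied figure, not stated explicitly in the excerpt):

```python
# A hypothetical ancient civilization lasting 100,000 years, versus our own
# industrial civilization, assumed to be roughly 200 years old (dating from ~1800).
ancient_span_years = 100_000
our_industrial_span_years = 200

ratio = ancient_span_years / our_industrial_span_years
print(ratio)  # 500.0 -- yet even a span that long could vanish from the record
```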
Michele Diodati writes
Strange as it may seem, if a nuclear war or a natural catastrophe put an end to humanity more or less suddenly, the archaeologists of a very distant future, millions of years away, would not find any of our artifacts and probably not even the fossil remains of our skeletons.
Today, the works built by industrial civilization — highways, railways, bridges, skyscrapers, megacities, dams, etc. — tower mightily before our eyes…. But the unraveling of all these works on the geological time scale corresponds to a blink of an eye.
The oldest land surface of any significant extent currently existing is the Negev desert in Israel, which can be dated to around 1.8 million years ago. But complex organisms have inhabited the Earth since an era more than 280 times more ancient. Therefore, the traces of their existence are buried at various depths in sites that are mostly inaccessible and unknown.
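Diodati’s “280 times more ancient” multiplier works out as follows (a quick check of the figures quoted above):

```python
negev_age_years = 1.8e6   # oldest surviving extensive land surface, per the article
multiplier = 280          # complex life is said to be ~280x older than that surface

complex_life_age = negev_age_years * multiplier
print(complex_life_age)  # 504000000.0, i.e. roughly 500 million years
```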
If the human species disappeared tomorrow, in less than two million years erosion and sedimentation, which act relentlessly on the territories currently inhabited, would erase any trace of the cities and artifacts produced by industrial civilization.
Considering that urbanized humanity occupies less than 1% of the total Earth’s surface, the chances of recovering the buried remains of our civilization in the distant future are reduced to a minimum. Moreover, fossilization is an extremely rare process that depends on many factors, including the climatic conditions and the ratio between soft and hard tissues of the dead specimen.
For example, of all the countless billions of dinosaurs that lived on Earth over about 200 million years, we have nearly complete fossils of only a few thousand specimens. That corresponds to only a handful of dinosaur fossils for every 100,000 years, across all the thousands of taxa (taxonomic units) that existed in those 100,000 years.
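The “handful per 100,000 years” estimate follows directly from the numbers quoted, taking “a few thousand” complete fossils as roughly 3,000 (an assumed value for the calculation):

```python
dinosaur_era_years = 200_000_000  # dinosaurs lived for about 200 million years
window_years = 100_000            # size of each time window in the comparison
total_complete_fossils = 3_000    # "a few thousand" near-complete specimens (assumed)

windows = dinosaur_era_years // window_years           # number of 100,000-year windows
fossils_per_window = total_complete_fossils / windows  # average fossils per window
print(windows, fossils_per_window)  # 2000 1.5
```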
As a result, the chances of encountering the remains of an industrial civilization that existed millions of years before, but only lasted a few thousand years, are minimal. So, is it indeed impossible to find traces of any possible industrial civilizations of the past? And should we resign ourselves to leaving no mark of our own existence? Not exactly. Even if artifacts deteriorate very quickly and fossils are scarce, the global impact of industrial civilization on the climate and mineralogy of the planet remains.
…. In summary, if the current industrial civilization were to disappear from Earth’s face in a few centuries, all that would remain of its activities and its very existence in the distant future would be the traces, incorporated into the Anthropocene’s sediments, of the planetary pollution it caused in a very short geological time. Traces originating from:
- the release of vast quantities of carbon of biological origin into the environment to produce energy;
- the global rise in temperatures due to the greenhouse effect;
- the alteration of the nitrogen cycle;
- the acidification of the oceans and the creation of dead zones due to lack of oxygen;
- the extinction of numerous animal species and the uncontrolled proliferation of a few others;
- peaks in the diffusion of rare metals for industrial use and possibly isotopes of radioactive elements useful for the construction of atomic weapons;
- synthetic pollutants such as PCB, CFC, and plastic materials.
from The Silurian Hypothesis, Medium.com article
As an example of how fast nature can reclaim human towns, consider Houtouwan, “an abandoned Chinese fishing village just 40 miles off the east coast of Shanghai. The village was once home to 3,000 people. Now it’s home to just five, and greenery has started to reclaim the settlement from civilization. The village was emptied out in the ’90s – an insight into the scale of China’s population shift away from rural life.”
China’s Green Ghost Village
Consider Hadrian’s Wall in England. From 2,000 years ago to today, the difference is astonishing. It was built “as a way to secure the empire from would-be attackers.” The Romans built “milecastles,” or forts, within the wall at intervals of approximately one Roman mile. Stretching 73 miles across some of the most dramatic countryside in England, Hadrian’s Wall dates back to the 1st century AD.
A critical point: in this unit we are not talking about ancient human civilizations. Humans in our modern form have been around for about 100,000 years, and recorded human history, when towns and cities developed, mostly only goes back some 6,000 years. As long ago as that seems to most people, it is just a blink of the eye on geological time scales. The questions being asked here are about possible civilizations millions, or even hundreds of millions, of years ago.
Articles
Did Antarctica remain entirely unvisited by humans until the early 19th century? History.Stackexchange.Com
Could an Industrial Prehuman Civilization Have Existed on Earth before Ours? Scientific American
Was There a Civilization On Earth Before Humans? A look at the available evidence, Adam Frank, The Atlantic, 4/13/2018
The Silurian Hypothesis, Medium.com article
Technosignatures of ET life elsewhere in our solar system
THE BIG QUESTIONS Did Intelligent Space Aliens Once Live in Our Solar System? NBC News
The Silurian Hypothesis: Would it be possible to detect an industrial civilization in the geological record? Gavin A. Schmidt, Adam Frank
If an industrial civilization had existed on Earth many millions of years prior to our own era, what traces would it have left and would they be detectable today? We summarize the likely geological fingerprint of the Anthropocene, and demonstrate that while clear, it will not differ greatly in many respects from other known events in the geological record. We then propose tests that could plausibly distinguish an industrial cause from an otherwise naturally occurring climate event.
Prior Indigenous Technological Species, Jason T. Wright
One of the primary open questions of astrobiology is whether there is extant or extinct life elsewhere in the Solar System. Implicit in much of this work is that we are looking for microbial or, at best, unintelligent life, even though technological artifacts might be much easier to find. SETI work on searches for alien artifacts in the Solar System typically presumes that such artifacts would be of extrasolar origin, even though life is known to have existed in the Solar System, on Earth, for eons.
But if a prior technological, perhaps spacefaring, species ever arose in the Solar System, it might have produced artifacts or other technosignatures that have survived to present day, meaning Solar System artifact SETI provides a potential path to resolving astrobiology’s question.
Here, I discuss the origins and possible locations for technosignatures of such a prior indigenous technological species, which might have arisen on ancient Earth or another body, such as a pre-greenhouse Venus or a wet Mars. In the case of Venus, the arrival of its global greenhouse and potential resurfacing might have erased all evidence of its existence on the Venusian surface. In the case of Earth, erosion and, ultimately, plate tectonics may have erased most such evidence if the species lived Gyr ago. Remaining indigenous technosignatures might be expected to be extremely old, limiting the places they might still be found to beneath the surfaces of Mars and the Moon, or in the outer Solar System.
Neuroplasticity
Introduction
Neuroplasticity is the ability of the brain to change throughout an individual’s life.
Research has shown that many aspects of the brain can be altered even through adulthood. However, the developing brain (in the womb, and early childhood) exhibits a much higher degree of plasticity than the adult brain.
Neuroplasticity can be observed at multiple scales, from microscopic changes in individual neurons to larger-scale changes such as cortical remapping in response to injury.
Behavior, environmental stimuli, thought, and emotions may also cause neuroplastic change. This has significant implications for learning, memory, and recovery from brain damage.
At the single cell level, synaptic plasticity refers to changes in the connections between neurons, whereas non-synaptic plasticity refers to changes in their intrinsic excitability.
– Adapted from, “Neuroplasticity.” Wikipedia, The Free Encyclopedia. 21 Sep. 2018
Step-by-step
The brain is made of several types of nerve cells (neurons) connected to each other in an intricate web, always creating new connections as we grow and learn.

I found the following sequence of GIFs on Mr. Gruszka’s Earth Science GIFtionary.
Whenever you learn something new, you grow some dendrites that make a new circuit in your brain.

Neurons send and receive electrical signals between different parts of the brain.

How are neurons connected?
Signals enter a neuron cell body through a dendrite, and then this may send an electrical signal out along the axon, towards another neuron.
Dendrites and axon terminals grow relatively easily.
Whenever we take the time to learn something new, our brain grows new dendrite and axon terminal connections.

Image from commons.wikimedia.org/wiki/File:Neuron.svg
Whenever you continuously practice learning a new skill, your brain rewires itself.

When your brain rewires itself, new patterns are possible.
These new patterns not only store information, they help your brain learn similar information more efficiently.
For instance, the more time you spend learning how to read music and play a musical instrument, the easier it will be over time to develop your skills in this area.
The same is true for developing skills in mathematics or problem solving.

This process works best if you continue to challenge yourself, practicing the skills you have and attempting to learn new ones.
As you do this, new connections are made in your brain, and to some extent one can literally become smarter, and better at learning.
However, unless your brain is challenged to do something difficult, or review what it already knows how to do, it will not produce new patterns.
If you stop practicing and learning, then eventually the brain will begin to lose some of those neural connections.

When the brain prunes dendrites, we forget how to do something that we learned, or forget a fact that we used to know. In this sense, we say “Use it or lose it.”

Usually, all the dendrites required for a certain skill are not pruned, so we are in luck.
With review and practice, we can make new connections that replace the lost ones.
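The “use it or lose it” dynamic described above can be caricatured as a toy Hebbian-style model. This is purely illustrative: the update rule and all numbers are invented for the sketch, not drawn from the source. Connection strength grows with each practice session and decays during each idle period, and a partly pruned connection recovers faster than one built from scratch.

```python
# Toy model: a single "connection strength" between 0 and 1.
# Practice strengthens it (with diminishing returns); disuse erodes it.
def practice(strength, sessions, gain=0.2):
    for _ in range(sessions):
        strength += gain * (1.0 - strength)  # strengthen, saturating toward 1.0
    return strength

def neglect(strength, idle_periods, decay=0.1):
    for _ in range(idle_periods):
        strength *= (1.0 - decay)            # gradual weakening / pruning
    return strength

s = 0.0
s = practice(s, sessions=10)     # repeated practice builds the connection
print(round(s, 3))
s = neglect(s, idle_periods=10)  # disuse erodes it ("lose it")
print(round(s, 3))
s = practice(s, sessions=3)      # review rebuilds faster than starting from zero
print(round(s, 3))
```

The last step mirrors the point above: because not all the underlying structure is lost, review and practice restore the skill more quickly than first-time learning.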
What is meant by brain plasticity?
The flexibility of the brain to make new connections and patterns.
The name fits because “plastic” does not just mean the material we make things out of; it has a second meaning: flexible, or changeable.

Some brain cells can even migrate within the brain and become neurons where they are needed.
For instance, when neurons die, new cells can move in to replace them. This can happen throughout life.

https://uag-earthsci.blogspot.com/2017/09/day-004-giftionary-introduction-to-brain.html
How antibiotics work
Antibiotics are chemicals that disrupt and kill bacteria.
Note that they don’t kill viruses, fungi, or parasites.
For example, influenza ("the flu") is caused by a virus, not a bacterium. Therefore antibiotics can't help fight the influenza virus.
Introduction
Antibiotics work by blocking vital processes in bacteria, killing the bacteria or stopping them from multiplying.
This helps the body’s natural immune system to fight the infection.
Different antibiotics work against different types of bacteria.
- Antibiotics that affect a wide range of bacteria are called broad-spectrum antibiotics (e.g., amoxicillin and gentamicin).
- Antibiotics that affect only a few types of bacteria are called narrow-spectrum antibiotics (e.g., penicillin).
Different types of antibiotics work in different ways.
For example, penicillin destroys bacterial cell walls, while other antibiotics can affect the way the bacterial cell works.
Doctors choose an antibiotic according to the bacteria that usually cause a particular infection.
Sometimes your doctor will do a test to identify the exact type of bacteria causing your infection and its sensitivity to particular antibiotics.
Antibiotic medicines may contain one or more active ingredients and be available under different brand names. The medicine label should tell you the active ingredient and the brand name.
from NPS MedicineWise, Australian Govt. Dept. of Health
Simple animation showing how an antibiotic disrupts the building of a cell wall.
Once the cell wall is disrupted, water can enter, making the cell swell, and eventually burst.

Image from Waterborne Diseases: Typhoid, By Olivia W.
Ways that antibiotics can disrupt bacteria
These images show several different types of antibiotics. Each has a different way of disrupting a bacterium.
This image is from "Mechanisms of Bacterial Resistance to Aminoglycoside Antibiotics", 2019 RCSB PDB Video Challenge for High School Students, from the PDB-101 website, an educational portal of the RCSB PDB (Protein Data Bank).


Related content
What is an antibiotic? From Learn.Genetics, Univ. of Utah
Learning Standards
Massachusetts Comprehensive Health
8.1 Describe how the body fights germs and disease naturally and with medicines and immunization
8.5 Identify ways individuals can reduce risk factors related to communicable and chronic diseases
8.6 Describe the importance of early detection in preventing the progression of disease
8.7 Explain the need to follow prescribed health care procedures given by parents and health care providers
8.13 Explain how the immune system functions to prevent and combat disease
8.19 Explain the prevention and control of common communicable infestations, diseases, and infections
Benchmarks for Science Education, AAAS
Inoculations use weakened germs (or parts of them) to stimulate the body’s immune system to react. This reaction prepares the body to fight subsequent invasions by actual germs of that type. Some inoculations last for life. 8F/H4
If the body’s immune system cannot suppress a bacterial infection, an antibacterial drug may be effective—at least against the types of bacteria it was designed to combat. Less is known about the treatment of viral infections, especially the common cold. However, more recently, useful antiviral drugs have been developed for several major kinds of viral infections, including drugs to fight HIV, the virus that causes AIDS. 8F/M6** (SFAA)
Pasteur found that infection by disease organisms (germs) caused the body to build up an immunity against subsequent infection by the same organisms. He then produced vaccines that would induce the body to build immunity to a disease without actually causing the disease itself. 10I/M3*
Investigations of the germ theory by Pasteur, Koch, and others in the 19th century firmly established the modern idea that many diseases are caused by microorganisms. Acceptance of the germ theory has led to changes in health practices. 10I/M4*
Current health practices emphasize sanitation, the safe handling of food and water, the pasteurization of milk, isolation, and aseptic surgical techniques to keep germs out of the body; vaccinations to strengthen the body’s immune system against subsequent infection by the same kind of microorganisms; and antibiotics and other chemicals and processes to destroy microorganisms. 10I/M7** (BSL)
The science wars: postmodernism as a threat against truth and reason
The science wars were an intellectual conflict between scientific realists and postmodernist critics.
The debate was about whether anything that humans could learn or talk about actually has meaning – or whether all words (even for science and math) ultimately only conveyed internal biases and feelings. Thus, in this view, nothing could ever be objectively said about the world.
Misunderstanding the debate
The science wars were often misunderstood by observers. Outsiders imagined that the debate was over whether the intellectual paradigms of a culture affect the way data is interpreted. After all, the same data can lead investigators to different conclusions, depending on their internal biases.
However, this had nothing to do with the science wars. Scientists acknowledge that all people operate within intellectual paradigms, and that this of course affects how people might interpret data.
Rather, in the science wars, deconstructionists and postmodernists went much further: Many held that science tells us nothing about the real world. Some said things such as “DNA molecules are a myth of Western culture;” “the idea that 2 + 2 = 4 is white colonialist thinking,” etc. Some in this group denied that math and science had any more existence or legitimacy than “other ways of thinking” about subjects.
Ironically, this kind of thinking was foreseen by George Orwell.
In the end, the Party would announce that two and two made five, and you would have to believe it. It was inevitable that they should make that claim sooner or later: the logic of their position demanded it. Not merely the validity of experience, but the very existence of external reality, was tacitly denied by their philosophy.
– George Orwell, Nineteen Eighty-Four
Scientific realists (such as Norman Levitt, Paul R. Gross, Jean Bricmont and Alan Sokal) understand and explain that scientific knowledge is real.
In contrast, many postmodernists and deconstructionists openly reject the reality and usefulness of science itself. Many openly reject scientific objectivity, the scientific method, empiricism, and scientific knowledge.
Postmodernists and deconstructionists interpret Thomas Kuhn‘s ideas about scientific paradigms to mean that scientific theories are only social constructs, and not actual descriptions of reality.
Some philosophers like Paul Feyerabend argued that other, non-realist forms of knowledge production were just as valid. Therefore, for example:
a Native American thinking about nature would come up with his or her own ideas that are different from ideas in supposed “colonialist” science textbooks, and that those ideas – even when never backed by experiment – would literally be just as “true” as the ideas found by science (ideas which actually have been tested, and found to be true no matter the ethnicity of the person involved.)
a woman thinking about nature would come up with her own ideas that are different from ideas in supposed "male" science textbooks, and that those ideas – even when never backed by experiment – would literally be just as "true" as the ideas found by science (ideas which actually have been tested, and found to be true no matter the sex of the person involved.)
There were attempts to bring postmodernism/deconstructionism into science back in the 1990s. There is a new attempt to do so today in the 2020s under the misleading motto “decolonize the curriculum.”
Some of these postmodernist attempts to do so at first look like a parody, but it turns out that the authors are serious.
For example, an increasing number of postmodernists claim that math itself is “colonialist.” The example shown below is becoming increasingly common.

Can you imagine what would happen if we allowed people to “decolonize” math, science, and engineering practices? Every piece of technology created by people indoctrinated with this view would be dangerous.

In the 1990s, scientific realists were quick to realize the danger. Large swaths of deconstructionist and postmodernist writings rejected any possibility of objectivity and realism. This undercut not only the entire idea of mathematics and all of science, but also philosophy and human rights.
The works of Jacques Derrida, Gilles Deleuze, Jean-François Lyotard and others claimed to say something about reality, but realists (scientists and anyone who believed in rational thought) recognized that such postmodern writings were deliberately incomprehensible or meaningless.
Example of how postmodernists understand basic logic
Some people misunderstand (or deliberately misrepresent) images like this to promote the idea that “truth is relative.” They say things like “The object is a triangle when viewed by one person, but a square when viewed by someone else, and a circle when seen by yet another person. So reality is relative, not absolute.”
The problem of course is that their claims are not only false, they are irrational.
In this example there is an actual three-dimensional object (a fact in the real world). The geometric projection of this object contains only a small part of the information about the object as a whole.
Thus, a viewer who only looks at the object from one direction only receives some of the information, and does not yet know about the rest. Yet that lack of knowledge doesn’t change the reality of what the three dimensional object actually is.
If a postmodernist concluded, "I see a circle, therefore it is a circle," and then made a mathematical model of the object as a circle or sphere, their model would make predictions that immediately turn out to be wrong. Not "wrong" from one culture's point of view, or one religion's point of view, or one gender's point of view, but objectively wrong in reality.
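The projection argument can be made concrete with a small numerical sketch. This is an illustration using a cylinder as a stand-in solid (the original figure shows a different ambiguous object): a single 2-D projection discards information, so a model built from one viewpoint alone makes objectively wrong predictions about other viewpoints.

```python
import numpy as np

# Sample points on the lateral surface of a cylinder: radius 1, height 2.
theta = np.linspace(0.0, 2.0 * np.pi, 400)
z = np.linspace(-1.0, 1.0, 50)
T, Z = np.meshgrid(theta, z)
points = np.column_stack([np.cos(T).ravel(), np.sin(T).ravel(), Z.ravel()])

def shadow_max_radius(pts, drop_axis):
    """Project by dropping one coordinate, then return the projected
    shadow's farthest distance from the origin."""
    proj = np.delete(pts, drop_axis, axis=1)
    return np.linalg.norm(proj, axis=1).max()

top = shadow_max_radius(points, drop_axis=2)   # looking down the z axis
side = shadow_max_radius(points, drop_axis=0)  # looking along the x axis

print(f"top view max radius:  {top:.3f}")   # ~1.000: circular outline
print(f"side view max radius: {side:.3f}")  # ~1.414: rectangular outline

# Modeling the object as "a circle" from the top view alone predicts a
# maximum radius of 1 in every view -- a prediction the side view refutes.
```

The two views disagree not because reality is relative, but because each projection carries only partial information about the same fixed object.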
Related articles on this website
Why does science matter?
Relativism Truth and Reality
Science denialism
Suggested reading (articles)
Campus Craziness: A New War on Science, Skeptic Magazine, Volume 22 Number 4
Suggested reading (books)
Science Wars: The Next Generation (Science for the People)
Higher Superstition: The Academic Left and Its Quarrels with Science, Paul R. Gross and Norman Levitt, 1994
Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science, Alan Sokal and Jean Bricmont, 1999
In 1996, Alan Sokal published an essay in the hip intellectual magazine Social Text parodying the scientific but impenetrable lingo of contemporary theorists. Here, Sokal teams up with Jean Bricmont to expose the abuse of scientific concepts in the writings of today’s most fashionable postmodern thinkers.
From Jacques Lacan and Julia Kristeva to Luce Irigaray and Jean Baudrillard, the authors document the errors made by some postmodernists using science to bolster their arguments and theories. Witty and closely reasoned, Fashionable Nonsense dispels the notion that scientific theories are mere "narratives" or social constructions, and explores the abilities and the limits of science to describe the conditions of existence.
Book reviews
Richard Dawkins’ review of Intellectual Impostures by Alan Sokal and Jean Bricmont.





