KaiserScience


Developing writing skills: Verb wheel

This is a verb wheel inspired by Bloom’s taxonomy. Each level of the cognitive domain has actions and verbs specific to it. This chart illustrates the six levels, followed by the verbs associated with them, and then shows the different activities students engage in that are associated with each level.

Using these verbs and activities lets educators frame questions so that students “climb the staircase” of Bloom’s Taxonomy and eventually master the material.

Verb Wheel Based on Bloom's Taxonomy

Footnotes

TBA

Learning Standards

TBA

Thinking well requires knowing facts

On his blog, Rough Type, author Nicholas Carr writes:

Mind Thinking Thoughts

With lots of kids heading to school this week, an old question comes back to the fore: Can thinking be separated from knowing?

Many people, and not a few educators, believe that the answer is yes. Schools, they suggest, should focus on developing students’ “critical thinking skills” rather than on helping them beef up their memories with facts and other knowledge about the world. With the Internet, they point out, facts are always within easy reach. Why bother to make the effort to cram stuff into your own long-term memory when there’s such a capacious store of external, or “transactive,” memory to draw on? A kid can google the facts she needs, plug them into those well-honed “critical thinking skills,” and – voila! – brilliance ensues.

That sounds good, but it’s wrong. The idea that thinking and knowing can be separated is a fallacy, as the University of Virginia psychologist Daniel Willingham explains in his book Why Don’t Students Like School?

This excerpt from Willingham’s book seems timely:

I defined thinking as combining information in new ways. The information can come from long-term memory — facts you’ve memorized — or from the environment. In today’s world, is there a reason to memorize anything? You can find any factual information you need in seconds via the Internet. Then too, things change so quickly that half of the information you commit to memory will be out of date in five years — or so the argument goes. Perhaps instead of learning facts, it’s better to practice critical thinking, to have students work at evaluating all that information available on the Internet, rather than trying to commit some small part of it to memory.

This argument is false. Data from the last thirty years lead to a conclusion that is not scientifically challengeable: thinking well requires knowing facts, and that’s true not simply because you need something to think about. The very processes that teachers care about most — critical thinking processes such as reasoning and problem solving — are intimately intertwined with factual knowledge that is in long-term memory (not just found in the environment).

It’s hard for many people to conceive of thinking processes as intertwined with knowledge. Most people believe that thinking processes are akin to those of a calculator. A calculator has available a set of procedures  (addition, multiplication, and so on) that can manipulate numbers, and those procedures can be applied to any set of numbers. The data (the numbers) and the operations that manipulate the data are separate. Thus, if you learn a new thinking operation (for example, how to critically analyze historical documents), it seems like that operation should be applicable to all historical documents, just as a fancier calculator that computes sines can do so for all numbers.

But the human mind does not work that way. When we learn to think critically about, say, the start of the Second World War, it does not mean that we can think critically about a chess game or about the current situation in the Middle East or even about the start of the American Revolutionary War. Critical thinking processes are tied to the background knowledge. The conclusion from this work in cognitive science is straightforward: we must ensure that students acquire background knowledge while practicing critical thinking skills.

Willingham goes on to explain that once a student has mastered a subject — once she’s become an expert — her mind will become fine-tuned to her field of expertise and she’ll be able to fluently combine transactive memory with biological memory.

But that takes years of study and practice. During the K – 12 years, developing a solid store of knowledge is essential to learning how to think. There’s still no substitute for a well-furnished mind.

________________________________

This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.

§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (Added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546)

Bloom’s Taxonomy

Bloom’s taxonomy is a widely accepted model about how students learn, created in the 1950s by Benjamin Samuel Bloom, an American educational psychologist.

It is a set of hierarchical models used to classify educational learning objectives into levels of complexity and specificity. They cover learning objectives in cognitive, affective and sensory domains.

Bloom edited the first volume of the standard text, Taxonomy of Educational Objectives, in 1956. A second volume followed in 1964, and a revised version of the taxonomy appeared in 2001.

In the original version of the taxonomy, the cognitive domain is broken into six levels of objectives: Knowledge, Comprehension, Application, Analysis, Synthesis, Evaluation. In the 2001 revised edition of Bloom’s taxonomy, the levels are changed to: Remember, Understand, Apply, Analyze, Evaluate, and Create.
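As a side-by-side sketch of the two orderings, here is a minimal Python illustration; the level names come from the text above:

```python
# The six cognitive-domain levels, lowest to highest, in both editions.
# Note that the revision is not a one-to-one renaming: Synthesis became
# Create and moved to the top, while Evaluation (now Evaluate) moved down.
original_1956 = ["Knowledge", "Comprehension", "Application",
                 "Analysis", "Synthesis", "Evaluation"]
revised_2001 = ["Remember", "Understand", "Apply",
                "Analyze", "Evaluate", "Create"]

for level, (old, new) in enumerate(zip(original_1956, revised_2001), start=1):
    print(f"Level {level}: {old:13s} | {new}")
```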

The introduction above was excerpted and adapted from Wikipedia by RK.
“Bloom’s taxonomy.” Wikipedia, The Free Encyclopedia, 27 Sep. 2018.

Bloom's Taxonomy

Although Bloom intended the taxonomy for use in colleges and graduate schools, it is now frequently used in learning objectives, assessments, and activities for American kindergarten-through-high-school curricula. Bloom himself was skeptical of this.

Despite popular belief, the taxonomy had no scientific basis.  Richard Morshead (1965) pointed out on the publication of the second volume that the classification was not a properly constructed taxonomy: it lacked a systemic rationale of construction.

Morshead, Richard W. (1965). “On Taxonomy of educational objectives Handbook II: Affective domain”. Studies in Philosophy and Education. 4 (1)

This criticism was acknowledged in 2001 when a revision was made to create a taxonomy on more systematic lines. Nonetheless, there is skepticism that the hierarchy indicated is adequate.  Some teachers do see the three lowest levels as hierarchically ordered, but view the higher levels as parallel.

Bloom himself was aware that the distinction between categories in some ways is arbitrary.  Any task involving thinking entails multiple mental processes.

The most common criticism, perhaps most important to hear today, is that curriculum designers implicitly – and often explicitly – mistakenly dismiss the lowest levels of the pyramid as unworthy of teaching. Common Core skills-based curricula and professional development drill into teachers the idea that we shouldn’t be teaching students “facts”; rather, we should encourage students to ask questions, investigate, and learn the material organically, for themselves.

What this doctrine misses is that today’s knowledge in math, science, history, and other fields is literally the product of thousands of thinkers and writers, and millions of man-hours of thinking, research, and peer review. Constructing a substantial knowledge of algebra on their own could take a student 20 or 30 years – or they could simply be taught the supposedly “lower level facts” of the rules of algebra.

As you read the modern-day evaluations of Bloom’s taxonomy below, note the consensus: learning lower level skills is necessary to enable the building of higher level skills, and new information requires prior basic information.

Thinking well requires knowing facts

Psychologist Daniel Willingham explains in his book Why Don’t Students Like School:

[Modern teachers have been told that] perhaps instead of learning facts, it’s better to practice critical thinking, to have students work at evaluating all that information available on the Internet, rather than trying to commit some small part of it to memory.

This argument is false. Data from the last thirty years lead to a conclusion that is not scientifically challengeable: thinking well requires knowing facts, and that’s true not simply because you need something to think about. The very processes that teachers care about most — critical thinking processes such as reasoning and problem solving — are intimately intertwined with factual knowledge that is in long-term memory (not just found in the environment)….

Critical thinking processes are tied to the background knowledge. The conclusion from this work in cognitive science is straightforward: we must ensure that students acquire background knowledge while practicing critical thinking skills.

From “Why Don’t Students Like School.”

Bloom’s Taxonomy: A Deeper Learning Perspective

In Education Week, Ron Berger writes

The problem is that both versions present a false vision of learning. Learning is not a hierarchy or a linear process. This graphic gives the mistaken impression that these cognitive processes are discrete, that it’s possible to perform one of these skills separately from others. It also gives the mistaken impression that some of these skills are more difficult and more important than others. It can blind us to the integrated process that actually takes place in students’ minds as they learn.

My critique of this framework is not intended to blame anyone. I don’t assume that Benjamin Bloom and his team, or the group who revised his pyramid, necessarily intended for us to see these skills as discrete or ranked in importance. I also know that thoughtful educators use this framework to excellent ends – to emphasize that curriculum and instruction must focus in a balanced way on the full range of skills, for all students from all backgrounds.

But my experience suggests that what most of us take away from this pyramid is the idea that these skills are discrete and hierarchical. That misconception undermines our understanding of teaching and learning, and our work with students.

from – Here’s What’s Wrong With Bloom’s Taxonomy: A Deeper Learning Perspective, By Ron Berger, Chief Academic Officer at EL Education.

Bloom’s Taxonomy – That Pyramid is a Problem

Doug Lemov writes

Bloom’s is a ‘framework.’ This is to say it is an idea – one that’s compelling in many ways, perhaps, but not based on data or cognitive science, say. In fact it was developed pretty much before there was such a thing as cognitive science. So it’s almost assuredly got some value to it and it’s almost assuredly gotten some things wrong.

…Generally when teachers talk about “Bloom’s taxonomy,” they talk with disdain about “lower level” questions.  They believe, perhaps because of the pyramid image which puts knowledge at the bottom, that knowledge-based questions, especially via recall and retrieval practice, are the least productive thing they could be doing in class.  No one wants to be the rube at the bottom of the pyramid.

But this, interestingly, is not what Bloom argued – at least according to Vanderbilt’s description. Saying that knowledge questions are low value and saying that knowledge is the necessary precondition for deep thinking are very different things.

More importantly believing that knowledge questions—even mere recall of facts—are low value doesn’t jibe with the overwhelming consensus of cognitive science, summarized here by Daniel Willingham, who writes,

Data from the last thirty years lead to a conclusion that is not scientifically challengeable: thinking well requires knowing facts, and that’s true not simply because you need something to think about.

The very processes that teachers care about most — critical thinking processes such as reasoning and problem solving — are intimately intertwined with factual knowledge that is in long-term memory (not just found in the environment)

In other words there are two parts to the equation. You not only have to teach a lot of facts to allow students to think deeply, but you have to reinforce knowledge enough to install it in long-term memory, or you can’t do any of the activities at the top of the pyramid.

Or, more precisely, you can do them, but they are going to be all but worthless. Knowledge, reinforced by recall and retrieval practice, is the precondition.

Bloom's Taxonomy revised delivery

… I’m going to propose a revision to the Bloom ‘pyramid’ so the graphic is far more representative. I’m calling it Bloom’s Delivery Service. In it, knowledge is not at the bottom of a pyramid but is the fuel that allows the engine of thinking to run.

Teachlikeachampion.com/blog/blooms-taxonomy-pyramid-problem/

A Critical Appraisal of Bloom’s Taxonomy

Seyyed Mohammad Ali Soozandehfar and Mohammad Reza Adeli

American Research Journal of English and Literature (ARJEL), Volume 2, 2016

… In 1999, Dr. Lorin Anderson, a former student of Bloom’s, and his colleagues published an updated version of Bloom’s Taxonomy that takes into account a broader range of factors that have an impact on teaching and learning. This revised taxonomy attempts to correct some of the problems with the original taxonomy. Unlike the 1956 version, the revised taxonomy differentiates between “knowing what,” the content of thinking, and “knowing how,” the procedures used in solving problems.

… Today’s world is a different place, however, than the one Bloom’s Taxonomy reflected in 1956. Educators have learned a great deal more about how students learn and teachers teach and now recognize that teaching and learning encompasses more than just thinking. It also involves the feelings and beliefs of students and teachers as well as the social and cultural environment of the classroom.

… as anyone who has worked with a group of educators to classify a group of questions and learning activities according to the Taxonomy can attest, there is little consensus about what seemingly self-evident terms like “analysis,” or “evaluation” mean.

In addition, so many worthwhile activities, such as authentic problems and projects, cannot be mapped to the Taxonomy, and trying to do that would diminish their potential as learning opportunities.

…. it has been maintained that Bloom’s Taxonomy is more often than not interpreted incorrectly. Booker (2007) believes that “Bloom’s Taxonomy has been used to devalue basic skills education and has promoted ‘higher order thinking’ at its expense” (p. 248).

In other words, lower order skills such as knowledge and comprehension are being treated as less critical or less valuable skills.

Being referred to as lower order skills does not make knowledge or comprehension any less important; rather, they are arguably the most important cognitive skills, because knowledge and comprehension of a subject are vital to advancing up the levels of the taxonomy.

Therefore, in line with Booker’s conclusion, the Taxonomy is being improperly used. Bloom never stated that any of his cognitive levels were less important, just that they followed a hierarchical structure.

More on Bloom’s Taxonomy

A Roof without Walls: Benjamin Bloom’s Taxonomy and the Misdirection of American Education, By Michael Booker

Abstract: Plato wrote that higher order thinking could not start until the student had mastered conventional wisdom. The American educational establishment has turned Plato on his head with the help of a dubious approach to teaching developed by one Benjamin Bloom.

Bloom’s taxonomy was intended for higher education, but its misappropriation has resulted in a serious distortion of the purpose of the K–12 years. Michael Booker attributes the inability of American children to compete internationally to a great extent to our reliance on Bloom in expecting critical and advanced thinking from kids who have been trained to regard facts and substantive knowledge as unimportant.

Bloom’s Taxonomy has become influential to the point of dogma in American Colleges of Education.

Bloom’s Taxonomy has been used to devalue basic skills education and has promoted “higher order thinking” at its expense.

Shortchanging basic skills education has resulted in producing students who misunderstand true higher-order thinking and who are not equipped for advanced education.

…. Soon after it was published, a body of research began to build around the taxonomy. In 1970, Cox and Wildemann collected an index of the existing research into Bloom’s Taxonomy. According to their study, 118 research projects of various sorts had been conducted in the previous decade and a half.

A review of their data, however, shows that most of the research lacked experimental results that might either confirm or invalidate it. The results noted are not reassuring. Initial studies showed that individuals skilled in the Taxonomy frequently could not agree on the classification of test items or objectives.

… This adds up to an extraordinary misreading of the Taxonomy. Standards intended for college students get pushed down to the K–12 system. Instead of teaching those K–12 students hierarchically, the foundation of the structure is ignored. The push is made to the highest levels of the Taxonomy, especially level six, Evaluation. … I will quote its caveats about Evaluation.

For the most part, the evaluations customarily made by an individual are quick decisions not preceded by very careful consideration of the various aspects of the object, idea or activity being judged. These might be termed opinions rather than judgments. … For purposes of classification, only those evaluations which are or can be made with distinct criteria in mind are considered.

Despite these warnings, typical Evaluation questions take the form of “What do you think about x?” and “Do you agree with x?” These questions are often accompanied by praise for what education literature misidentifies as the “Socratic method.” The result of this strategy is to occupy class time with vacuous opining.

When I speak with my fellow community college instructors, we rarely complain about students’ lack of advanced intellectual skills. Our chief source of frustration is that they haven’t mastered the basics needed to succeed in college-level work. Since I teach philosophy, I don’t expect my students to come to class knowing any content about my subject area.

Still, it would be lovely if they exited high school with some knowledge of world history, science, English, and geography. A large cohort (much to my frustration) doesn’t know how many grams are in a kilogram or when to use an apostrophe. I have a friend, Dr. Lawrence Barker, who once taught statistics at a state university. Each quarter he quizzed his incoming statistics students about basic math. The majority, he learned, couldn’t determine the square root of one without access to a calculator. He left teaching and is now happily employed by the Centers for Disease Control.

A Roof without Walls: Benjamin Bloom’s Taxonomy and the Misdirection of American Education, Michael Booker, Academic Questions 20(4):347-355 · December 2007

Alternative models of learning

Rex Heer, at the Iowa State University Center for Excellence in Learning and Teaching created this model. He writes:

Among other modifications, Anderson and Krathwohl’s (2001) revision of the original Bloom’s taxonomy (Bloom & Krathwohl, 1956) redefines the cognitive domain as the intersection of the Cognitive Process Dimension and the Knowledge Dimension.

This document offers a three-dimensional representation of the revised taxonomy of the cognitive domain.

Although the Cognitive Process and Knowledge dimensions are represented as hierarchical steps, the distinctions between categories are not always clear-cut.

For example, all procedural knowledge is not necessarily more abstract than all conceptual knowledge; and an objective that involves analyzing or evaluating may require thinking skills that are no less complex than one that involves creating. It is generally understood, nonetheless, that lower order thinking skills are subsumed by, and provide the foundation for higher order thinking skills.

A Model of Learning Objectives by Rex Heer

The Knowledge Dimension classifies four types of knowledge that learners may be expected to acquire or construct— ranging from concrete to abstract.

Knowledge Dimension based on Bloom's

The Cognitive Process Dimension represents a continuum of increasing cognitive complexity – from lower order thinking skills to higher order thinking skills.

Cognitive Processes dimension based on Bloom's

Based on this, Rex Heer develops a three-dimensional model.
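The two dimensions combine into a grid of cells, each of which can hold a learning objective. A minimal sketch of that structure, assuming Anderson and Krathwohl’s dimension labels (the example objectives are hypothetical, not taken from Heer’s model):

```python
# Anderson & Krathwohl's revised taxonomy as a two-dimensional grid:
# each learning objective pairs a knowledge type with a cognitive process.
knowledge_dimension = ["Factual", "Conceptual", "Procedural", "Metacognitive"]
cognitive_processes = ["Remember", "Understand", "Apply",
                       "Analyze", "Evaluate", "Create"]

# Hypothetical example objectives, each placed in one cell of the grid.
objectives = {
    ("Factual", "Remember"): "List the six levels of the taxonomy",
    ("Procedural", "Apply"): "Carry out long division on a new problem",
    ("Conceptual", "Evaluate"): "Judge which model best explains the data",
}

for (knowledge, process), example in objectives.items():
    # Every objective must land in a valid cell of the 4 x 6 grid.
    assert knowledge in knowledge_dimension
    assert process in cognitive_processes
    print(f"{process} / {knowledge}: {example}")
```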

Again, please note that – as Bloom himself always intended – remembering facts (often misunderstood as the “lowest” part of the method) is actually the most important part: remembering facts is the base on which everything else depends.

One can’t engage in higher level critical thinking skills on a subject without first knowing the content of the subject.

Rex Heer Revised Bloom's taxonomy

Model by Rex Heer, Iowa State University, Center for Excellence in Learning and Teaching, Jan 2012. Creative Commons Attribution Non Commercial-ShareAlike 3.0 Unported License.

Yafi16-catherine-glaiser-course-design (PDF)

Speculative history: Possibility of prehuman civilization

Could an intelligent species have lived on Earth before humanity? Could it even have developed an industrial civilization? If one had existed on Earth – many millions of years prior to our own era – what traces would it have left and would they be detectable today?

It only took five minutes for Gavin Schmidt to out-speculate me. Schmidt is the director of NASA’s Goddard Institute for Space Studies (a.k.a. GISS), a world-class climate-science facility. One day last year, I came to GISS with a far-out proposal. In my work as an astrophysicist, I’d begun researching global warming from an “astrobiological perspective.” That meant asking whether any industrial civilization that rises on any planet will, through its own activity, trigger its own version of a climate shift. I was visiting GISS that day hoping to gain some climate-science insights and, perhaps, collaborators. That’s how I ended up in Gavin’s office.

Just as I was revving up my pitch, Gavin stopped me in my tracks. “Wait a second,” he said. “How do you know we’re the only time there’s been a civilization on our own planet?”

It took me a few seconds to pick up my jaw off the floor. I had certainly come into Gavin’s office prepared for eye rolls at the mention of “exo-civilizations.” But the civilizations he was asking about would have existed many millions of years ago. Sitting there, seeing Earth’s vast evolutionary past telescope before my mind’s eye, I felt a kind of temporal vertigo. “Yeah,” I stammered. “Could we tell if there’d been an industrial civilization that deep in time?” We never got back to aliens. Instead, that first conversation launched a new study we’ve recently published in the International Journal of Astrobiology. …

We’re used to imagining extinct civilizations in terms of sunken statues and subterranean ruins. These kinds of artifacts of previous societies are fine if you’re only interested in timescales of a few thousands of years. But once you roll the clock back to tens of millions or hundreds of millions of years, things get more complicated. When it comes to direct evidence of an industrial civilization—things like cities, factories, and roads—the geologic record doesn’t go back past what’s called the Quaternary period 2.6 million years ago…. Go back much further than the Quaternary, and everything has been turned over and crushed to dust….

So could researchers find clear evidence that an ancient species built a relatively short-lived industrial civilization long before our own? Perhaps, for example, some early mammal rose briefly to civilization building during the Paleocene epoch, about 60 million years ago. There are fossils, of course. But the fraction of life that gets fossilized is always minuscule and varies a lot depending on time and habitat. It would be easy, therefore, to miss an industrial civilization that lasted only 100,000 years—which would be 500 times longer than our industrial civilization has made it so far.

From Was There a Civilization on Earth Before Humans? A look at the available evidence, Adam Frank, The Atlantic, 4/13/2018

By Michael Osadciw

Michele Diodati writes

Strange as it may seem, if a nuclear war or a natural catastrophe put an end to humanity more or less suddenly, the archaeologists of a very distant future, millions of years away, would not find any of our artifacts and probably not even the fossil remains of our skeletons.

Today, the works built by industrial civilization — highways, railways, bridges, skyscrapers, megacities, dams, etc. — tower mighty in front of our eyes…. But the unraveling of all these works on the geological time scale corresponds to a blink of an eye.

The oldest land surface of any significant extent currently in existence is the Negev desert in Israel, which can be dated to around 1.8 million years ago. But complex organisms have inhabited the Earth since an era more than 280 times more ancient. Therefore, the traces of their existence are buried at various depths in sites that are mostly inaccessible and unknown.

If the human species disappeared tomorrow, in less than two million years erosion and sedimentation, which act relentlessly on the territories currently inhabited, would erase any trace of the cities and artifacts produced by industrial civilization.

Considering that urbanized humanity occupies less than 1% of the total Earth’s surface, the chances of recovering the buried remains of our civilization in the distant future are reduced to a minimum. Moreover, fossilization is an extremely rare process that depends on many factors, including the climatic conditions and the ratio between soft and hard tissues of the dead specimen.

For example, of all the countless billions of dinosaurs that lived on Earth over a span of about 200 million years, we possess nearly complete fossils of only a few thousand specimens. That corresponds to only a handful of dinosaur fossils for every 100,000 years, across all the thousands of taxa (taxonomic units) that existed in each such interval.
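The arithmetic behind that claim can be checked directly; a rough sketch, assuming “a few thousand” means about 3,000 specimens and taking the 200-million-year span from the text:

```python
# Rough check of the fossil-rate claim: a few thousand nearly complete
# dinosaur fossils spread over ~200 million years of dinosaur history.
dinosaur_era_years = 200_000_000   # span taken from the text
nearly_complete_fossils = 3_000    # assumed value for "a few thousand"

window_years = 100_000
n_windows = dinosaur_era_years // window_years   # number of 100,000-year windows
fossils_per_window = nearly_complete_fossils / n_windows

print(n_windows)           # 2000
print(fossils_per_window)  # 1.5 -- "a handful" per 100,000 years
```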

As a result, the chances of encountering the remains of an industrial civilization that existed millions of years ago, but lasted only a few thousand years, are minimal. So, is it indeed impossible to find traces of any possible industrial civilizations of the past? And should we resign ourselves to not leaving any mark of our own existence? Not exactly. Even if artifacts deteriorate very quickly and fossils are scarce, the global impact of an industrial civilization on the climate and mineralogy of the planet remains.

…. In summary, if the current industrial civilization were to disappear from the face of the Earth in a few centuries, all that would remain in the distant future of its activities and its very existence would be the traces, incorporated into the Anthropocene’s sediments, of the planetary pollution it caused in a very short geological time. Traces produced by:

    • the release of vast quantities of carbon of biological origin into the environment to produce energy;

    • the global rise in temperatures due to the greenhouse effect;

    • the alteration of the nitrogen cycle;

    • the acidification of the oceans and the creation of dead zones due to lack of oxygen;

    • the extinction of numerous animal species and the uncontrolled proliferation of a few others;

    • peaks in the diffusion of rare metals for industrial use and possibly isotopes of radioactive elements useful for the construction of atomic weapons;

    • synthetic pollutants such as PCB, CFC, and plastic materials.

from The Silurian Hypothesis, Medium.com article

As an example of how fast nature can reclaim human towns. “This is Houtouwan, an abandoned Chinese fishing village just 40 miles off the east coast of Shanghai. The village was once home to 3,000 people. Now it’s home to just five, and greenery has started to reclaim the settlement from civilization. The village was emptied out in the ’90s – an insight into the scale of China’s population shift away from rural life.”

China’s Green Ghost Village

Consider Hadrian’s Wall in England. From 2,000 years ago to today, the difference is astonishing. The Romans built it “as a way to secure the empire from would-be attackers,” with “milecastles,” or forts, placed within the wall at intervals of approximately one Roman mile. Stretching 73 miles across some of the most dramatic countryside in England, Hadrian’s Wall dates back to the 2nd century AD.

A critical point: in this unit we are not talking about ancient human civilizations. Humans in our modern form have been around for about 100,000 years, and recorded human history, when towns and cities developed, mostly goes back only some 6,000 years. As long ago as that seems to most people, it is just a blink of the eye on geological time scales. The questions being asked here are about possible civilizations millions, or even hundreds of millions, of years ago.

Articles

Did Antarctica remain entirely unvisited by humans until the early 19th century? History.Stackexchange.Com

Could an Industrial Prehuman Civilization Have Existed on Earth before Ours? Scientific American

Was There a Civilization On Earth Before Humans? A look at the available evidence, Adam Frank, The Atlantic, 4/13/2018

The Silurian Hypothesis, Medium.com article

Technosignatures of ET life elsewhere in our solar system

THE BIG QUESTIONS Did Intelligent Space Aliens Once Live in Our Solar System? NBC News

The Silurian Hypothesis: Would it be possible to detect an industrial civilization in the geological record? Gavin A. Schmidt, Adam Frank

If an industrial civilization had existed on Earth many millions of years prior to our own era, what traces would it have left and would they be detectable today? We summarize the likely geological fingerprint of the Anthropocene, and demonstrate that while clear, it will not differ greatly in many respects from other known events in the geological record. We then propose tests that could plausibly distinguish an industrial cause from an otherwise naturally occurring climate event.

Prior Indigenous Technological Species, Jason T. Wright

One of the primary open questions of astrobiology is whether there is extant or extinct life elsewhere in the Solar System. Implicit in much of this work is that we are looking for microbial or, at best, unintelligent life, even though technological artifacts might be much easier to find. SETI work on searches for alien artifacts in the Solar System typically presumes that such artifacts would be of extrasolar origin, even though life is known to have existed in the Solar System, on Earth, for eons.

But if a prior technological, perhaps spacefaring, species ever arose in the Solar System, it might have produced artifacts or other technosignatures that have survived to present day, meaning Solar System artifact SETI provides a potential path to resolving astrobiology’s question.

Here, I discuss the origins and possible locations for technosignatures of such a prior indigenous technological species, which might have arisen on ancient Earth or another body, such as a pre-greenhouse Venus or a wet Mars. In the case of Venus, the arrival of its global greenhouse and potential resurfacing might have erased all evidence of its existence on the Venusian surface. In the case of Earth, erosion and, ultimately, plate tectonics may have erased most such evidence if the species lived Gyr ago. Remaining indigenous technosignatures might be expected to be extremely old, limiting the places they might still be found to beneath the surfaces of Mars and the Moon, or in the outer Solar System.

Neuroplasticity

Introduction

Neuroplasticity is the ability of the brain to change throughout an individual’s life.

Research has shown that many aspects of the brain can be altered even through adulthood. However, the developing brain (in the womb, and early childhood) exhibits a much higher degree of plasticity than the adult brain.

Neuroplasticity can be observed at multiple scales, from microscopic changes in individual neurons to larger-scale changes such as cortical remapping in response to injury.

Behavior, environmental stimuli, thought, and emotions may also cause neuroplastic change.  This has significant implications for learning, memory, and recovery from brain damage.

At the single cell level, synaptic plasticity refers to changes in the connections between neurons, whereas non-synaptic plasticity refers to changes in their intrinsic excitability.

– Adapted from, “Neuroplasticity.” Wikipedia, The Free Encyclopedia. 21 Sep. 2018

Step-by-step

The brain is made of billions of neurons of several types, connected to each other in an intricate web, always creating new connections as we grow and learn.

eloquent areas of brain

I found the following sequence of GIFs on Mr. Gruszka’s Earth Science GIFtionary.

Whenever you learn something new, you grow new dendrites that make a new circuit in your brain.

Neurons in the brain Synapses

Neurons send and receive electrical signals between different parts of the brain.

neurons axons dendrites nerves

How are neurons connected?

Signals enter a neuron’s cell body through its dendrites; the neuron may then send an electrical signal out along its axon, towards another neuron.

Dendrites and axon terminals grow relatively easily.

Whenever we take the time to learn something new, our brain grows new dendrite and axon terminal connections.

neuron axon dendrite

Image from commons.wikimedia.org/wiki/File:Neuron.svg

Whenever you continuously practice learning a new skill, your brain rewires itself.

Brain plasticity nerves new connections

When your brain rewires itself, new patterns are possible.

These new patterns not only store information, they help your brain learn similar information more efficiently.

For instance, the more time you spend learning how to read music and play a musical instrument, the easier it will be over time to develop your skills in this area.

The same is true for developing skills in mathematics or problem solving.

New dendrite connection GIF nerves brain

This process works best if you continue to challenge yourself, practicing the skills you have and attempting to learn new ones.

As you do this, new connections are made in your brain, and to some extent one can literally become smarter, and better at learning.

However, unless your brain is challenged to do something difficult, or review what it already knows how to do, it will not produce new patterns.

If you stop practicing and learning, then eventually the brain will begin to lose some of those neural connections.

brain unused connections dendrite pruning

When the brain prunes dendrites, we forget how to do something that we learned, or forget a fact that we used to know.  In this sense, we say “Use it or lose it.”

Synaptic pruning synapse elimination.gif

Usually, not all of the dendrites required for a certain skill are pruned, so we are in luck.

With review and practice, we can make new connections that replace the lost ones.
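This “use it or lose it” pattern has a well-known computational analogue: in Hebbian-style learning models, a connection strengthens with practice and decays when unused. Here is a minimal toy sketch of that idea in Python. The growth and decay rates are made-up illustrative numbers, not biological measurements, and a single number standing in for a “connection” is a huge simplification of real synapses:

```python
# Toy model of "use it or lose it": one connection strength (0 to 1)
# that grows with practice and decays without it.
# The gain/decay rates are illustrative placeholders, not biological data.

def practice(strength, sessions, gain=0.2):
    """Each practice session strengthens the connection,
    with diminishing returns as it approaches a maximum of 1.0."""
    for _ in range(sessions):
        strength += gain * (1.0 - strength)
    return strength

def neglect(strength, days, decay=0.05):
    """Without practice, the connection gradually weakens (pruning)."""
    for _ in range(days):
        strength *= (1.0 - decay)
    return strength

s = 0.1                       # a brand-new, weak connection
s = practice(s, sessions=10)  # repeated practice strengthens it
print(round(s, 2))            # prints 0.9
s = neglect(s, days=30)       # a month of disuse weakens it
print(round(s, 2))            # prints 0.19 -- weakened, but not gone
s = practice(s, sessions=3)   # a little review rebuilds much of it
print(round(s, 2))            # prints 0.59 -- faster than starting over
```

Note the last step: because some strength survived the pruning, three review sessions recover more ground than ten sessions did from scratch, which mirrors the point above that review and practice can rebuild lost connections.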

What is meant by brain plasticity?

The flexibility of the brain to make new connections and patterns.

This is because “plastic” does not just mean the material we make things out of. It has a second meaning of flexible, or changeable.

Brain plasticity

Also, brain cells can even travel within the brain to become neurons where they are needed.

For instance, when neurons die, new cells can migrate to where they are needed and become neurons. This can happen throughout life.

Brain plasticity moving neurons

https://uag-earthsci.blogspot.com/2017/09/day-004-giftionary-introduction-to-brain.html

Learn how nerves work here.

 

How antibiotics work

Antibiotics are chemicals that disrupt and kill bacteria.

Note that they don’t kill viruses, fungi, or parasites.

For example, influenza (“the flu”) is caused by a virus, not a bacterium. Therefore antibiotics can’t help fight the influenza virus.

Introduction

Antibiotics work by blocking vital processes in bacteria, killing the bacteria or stopping them from multiplying.

This helps the body’s natural immune system to fight the infection.

Different antibiotics work against different types of bacteria.

  • Antibiotics that affect a wide range of bacteria are called broad spectrum antibiotics (eg, amoxicillin and gentamicin).

  • Antibiotics that affect only a few types of bacteria are called narrow spectrum antibiotics (eg, penicillin).

Different types of antibiotics work in different ways.

For example, penicillin destroys bacterial cell walls, while other antibiotics can affect the way the bacterial cell works.

Doctors choose an antibiotic according to the bacteria that usually cause a particular infection.

Sometimes your doctor will do a test to identify the exact type of bacteria causing your infection and its sensitivity to particular antibiotics.

Antibiotic medicines may contain one or more active ingredients and be available under different brand names. The medicine label should tell you the active ingredient and the brand name.

_ from NPS MedicineWise, Australian Govt. Dept. of Health

Simple animation showing how an antibiotic disrupts the building of a cell wall.

Once the cell wall is disrupted, water can enter, making the cell swell, and eventually burst.

antibiotic cell wall

Image from Waterborne Diseases: Typhoid, By Olivia W.

 

Ways that antibiotics can disrupt bacteria

You can right-click on each image to expand it, or click here for the original page. It shows several different types of antibiotics. Each has a different way of disrupting a bacterium.

This image is from “Mechanisms of Bacterial Resistance to Aminoglycoside Antibiotics”, 2019 RCSB PDB Video Challenge for High School Students, from the PDB-101 website, an educational portal of the RCSB PDB (Protein Data Bank).

Mechanisms of antibiotics

and

Mechanisms of antibiotics 2

 

Related content

What is an antibiotic? From Learn.Genetics, Univ. of Utah

Learning Standards

Massachusetts Comprehensive Health

8.1 Describe how the body fights germs and disease naturally and with medicines and immunization

8.5 Identify ways individuals can reduce risk factors related to communicable and chronic diseases

8.6 Describe the importance of early detection in preventing the progression of disease

8.7 Explain the need to follow prescribed health care procedures given by parents and health care providers

8.13 Explain how the immune system functions to prevent and combat disease

8.19 Explain the prevention and control of common communicable infestations, diseases, and infections

Benchmarks for Science Education, AAAS

Inoculations use weakened germs (or parts of them) to stimulate the body’s immune system to react. This reaction prepares the body to fight subsequent invasions by actual germs of that type. Some inoculations last for life. 8F/H4

If the body’s immune system cannot suppress a bacterial infection, an antibacterial drug may be effective—at least against the types of bacteria it was designed to combat. Less is known about the treatment of viral infections, especially the common cold. However, more recently, useful antiviral drugs have been developed for several major kinds of viral infections, including drugs to fight HIV, the virus that causes AIDS. 8F/M6** (SFAA)

Pasteur found that infection by disease organisms (germs) caused the body to build up an immunity against subsequent infection by the same organisms. He then produced vaccines that would induce the body to build immunity to a disease without actually causing the disease itself. 10I/M3*

Investigations of the germ theory by Pasteur, Koch, and others in the 19th century firmly established the modern idea that many diseases are caused by microorganisms. Acceptance of the germ theory has led to changes in health practices. 10I/M4*

Current health practices emphasize sanitation, the safe handling of food and water, the pasteurization of milk, isolation, and aseptic surgical techniques to keep germs out of the body; vaccinations to strengthen the body’s immune system against subsequent infection by the same kind of microorganisms; and antibiotics and other chemicals and processes to destroy microorganisms. 10I/M7** (BSL)

The science wars: postmodernism as a threat against truth and reason

The science wars was an intellectual war between scientific realists and postmodernist critics.

The debate was about whether anything that humans could learn or talk about actually has meaning – or whether all words (even for science and math) ultimately only conveyed internal biases and feelings. Thus, in this view, nothing could ever be objectively said about the world.

Misunderstanding the debate

The science wars were often misunderstood by observers. Outsiders imagined that the debate was about whether the intellectual paradigms of a culture affected the way data was interpreted. After all, it is noted, different investigators can reach different conclusions from the same data, based on their internal biases.

However, this had nothing to do with the science wars. Scientists acknowledge that all people operate within intellectual paradigms, and that this of course affects how people might interpret data.

Rather, in the science wars, deconstructionists and postmodernists went much further: Many held that science tells us nothing about the real world. Some said things such as “DNA molecules are a myth of Western culture;” “the idea that 2 + 2 = 4 is white colonialist thinking,” etc.  Some in this group denied that math and science had any more existence or legitimacy than “other ways of thinking” about subjects.

Ironically, this kind of thinking was foreseen by George Orwell.

In the end, the Party would announce that two and two made five, and you would have to believe it. It was inevitable that they should make that claim sooner or later: the logic of their position demanded it. Not merely the validity of experience, but the very existence of external reality, was tacitly denied by their philosophy.
– George Orwell, Nineteen Eighty-Four

Scientific realists (such as Norman Levitt, Paul R. Gross, Jean Bricmont and Alan Sokal) understand and explain that scientific knowledge is real.

In contrast, many postmodernists and deconstructionists openly reject the reality and usefulness of science itself. Many openly reject scientific objectivity, the scientific method, empiricism, and scientific knowledge.

Postmodernists and deconstructionists interpret Thomas Kuhn’s ideas about scientific paradigms to mean that scientific theories are only social constructs, and not actual descriptions of reality.

Some philosophers like Paul Feyerabend argued that other, non-realist forms of knowledge production were just as valid. Therefore, for example:

a Native American thinking about nature would come up with his or her own ideas that are different from ideas in supposed “colonialist” science textbooks, and that those ideas – even when never backed by experiment – would literally be just as “true” as the ideas found by science (ideas which actually have been tested, and found to be true no matter the ethnicity of the person involved.)

a woman thinking about nature would come up with her own ideas that are different from ideas in supposed “male” science textbooks, and that those ideas – even when never backed by experiment – would literally be just as “true” as the ideas found by science (ideas which actually have been tested, and found to be true no matter the gender of the person involved.)

There were attempts to bring postmodernism/deconstructionism into science back in the 1990s. There is a new attempt to do so today in the 2020s under the misleading motto “decolonize the curriculum.”

Some of these postmodernist attempts to do so at first look like a parody, but it turns out that the authors are serious.

For example, an increasing number of postmodernists claim that math itself is “colonialist.” The example shown below is becoming increasingly common.

Decolonize Math 1

Can you imagine what would happen if we allowed people to “decolonize” math, science, and engineering practices? Every piece of technology created by people indoctrinated with this view would be dangerous.

Decolonize math 2

In the 1990s, scientific realists were quick to realize the danger. Large swaths of deconstructionist and postmodernist writings rejected any possibility of objectivity and realism. This not only undercut the entire idea of mathematics, and all of science, but also of philosophy and human rights.

The works of Jacques Derrida, Gilles Deleuze, Jean-François Lyotard and others claimed to say something about reality, but realists (scientists and anyone who believed in rational thought) recognized that such postmodern writings were deliberately incomprehensible or meaningless.

Example of how postmodernists understand basic logic

Some people misunderstand (or deliberately misrepresent) images like this to promote the idea that “truth is relative.” They say things like “The object is a triangle when viewed by one person, but a square when viewed by someone else, and a circle when seen by yet another person. So reality is relative, not absolute.”

The problem of course is that their claims are not only false, they are irrational.

Geometrical shape projections seen from many different points of view POV

In this example there is an actual three-dimensional object (a fact in the real world). The geometric projection of this object contains only a small part of the information about the object as a whole.

Thus, a viewer who only looks at the object from one direction only receives some of the information, and does not yet know about the rest.  Yet that lack of knowledge doesn’t change the reality of what the three dimensional object actually is.

If a postmodernist concluded, “I see a circle, therefore it is a circle,” and then made a mathematical model of the object as a circle or sphere, their model would make predictions which immediately turn out to be wrong. Not “wrong” from one culture’s point of view, or from one religion’s point of view, or one gender’s point of view, but objectively wrong in reality.
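The point can be made concrete with a little geometry. The following sketch (a hypothetical example in Python, using a cylinder as the stand-in 3D object) shows that the same set of 3D points looks like a circle from above and like a rectangle from the side. Each view is real but partial; neither view changes, or fully determines, the underlying object:

```python
import math

# A cylinder of radius 1 and height 2, sampled as points on its surface.
# Each 2D "view" is an orthographic projection: it simply discards one axis.
points = [(math.cos(t), math.sin(t), z)
          for t in [i * 2 * math.pi / 36 for i in range(36)]
          for z in [0.0, 1.0, 2.0]]

top_view  = {(round(x, 3), round(y, 3)) for x, y, z in points}  # drop z
side_view = {(round(x, 3), round(z, 3)) for x, y, z in points}  # drop y

# Seen from above, every point lies on a circle of radius 1:
assert all(abs(math.hypot(x, y) - 1.0) < 1e-2 for x, y in top_view)

# Seen from the side, the same points fit inside a 2-by-2 rectangle:
assert all(-1.0 <= x <= 1.0 and 0.0 <= z <= 2.0 for x, z in side_view)

# The circle "forgets" the height; the rectangle "forgets" the roundness.
# Each projection is partial information -- the cylinder itself never changed.
```

A viewer holding only `top_view` and a viewer holding only `side_view` each have true but incomplete information; the disagreement between their descriptions says nothing against the objective existence of the cylinder.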

News

Fake News Comes to Academia: How three scholars gulled academic journals to publish hoax papers on ‘grievance studies.’

Related articles on this website

Why does science matter?

Relativism Truth and Reality

Science denialism

Suggested reading (articles)

Campus Craziness: A New War on Science, Skeptic Magazine, Volume 22 Number 4

Anti-intellectualism

Suggested reading (books)

Science Wars: The Next Generation (Science for the People)

Higher Superstition: The Academic Left and Its Quarrels with Science, Paul R. Gross and Norman Levitt, 1994

Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science, Alan Sokal and Jean Bricmont, 1999

In 1996, Alan Sokal published an essay in the hip intellectual magazine Social Text parodying the scientific but impenetrable lingo of contemporary theorists. Here, Sokal teams up with Jean Bricmont to expose the abuse of scientific concepts in the writings of today’s most fashionable postmodern thinkers.

From Jacques Lacan and Julia Kristeva to Luce Irigaray and Jean Baudrillard, the authors document the errors made by some postmodernists using science to bolster their arguments and theories. Witty and closely reasoned, Fashionable Nonsense dispels the notion that scientific theories are mere “narratives” or social constructions, and explores the abilities and the limits of science to describe the conditions of existence.

Book reviews

Richard Dawkins’ review of Intellectual Impostures by Alan Sokal and Jean Bricmont.

Zombie based geography

I want to share these ideas with other educators and with students.

_______________________________________________

Zombie-Based Learning (ZBL) is the brainchild of David Hunter, former teacher from the Bellevue Big Picture school, in a suburb of Seattle, Washington.  It uses Project-Based Learning to encourage active engagement, problem solving and critical thinking skills.

Student Zombie Map

Student photo made available from Zombie-Based Learning (ZBL),

When the zombies attack, where should we run, where regroup, and where rebuild our lives? Those questions, key to survival, can focus student attention on a highly motivating and dangerously overlooked fact: Geography skills can save you from the zombie apocalypse!

Use students’ natural desire to survive zombie assaults to motivate study of a complete curriculum based on the 2012 National Geography Standards, and then to apply those skills in a series of scenarios based on surviving when the attacks come to your own neighborhood.

http://zombiebased.com/

=====================

Making History is Project-Based Learning curriculum created by award-winning teacher David Hunter, designed for standards-based classrooms. Launched on Kickstarter, it’s nine units with projects for middle school students. Teach cross-content or by individual subject, with a time travel backstory to drive students’ interest and engagement. The narrative follows a group of entrepreneurial and altruistic students who go back in time, and work together to invent or discover critical breakthroughs BEFORE they occur in our true historical timeline.

http://makinghistorypbl.com/

Handouts

Zombies worksheet

Mysterious link between immune system and mental illness

He Got Schizophrenia. He Got Cancer. And Then He Got Cured.

A bone-marrow transplant treated a patient’s leukemia — and his delusions, too. Some doctors think they know why.

By Moises Velasquez-Manoff
Mr. Velasquez-Manoff is a science writer.

The man was 23 when the delusions came on. He became convinced that his thoughts were leaking out of his head and that other people could hear them. When he watched television, he thought the actors were signaling him, trying to communicate. He became irritable and anxious and couldn’t sleep.

Dr. Tsuyoshi Miyaoka, a psychiatrist treating him at the Shimane University School of Medicine in Japan, eventually diagnosed paranoid schizophrenia. He then prescribed a series of antipsychotic drugs. None helped. The man’s symptoms were, in medical parlance, “treatment resistant.”

A year later, the man’s condition worsened. He developed fatigue, fever and shortness of breath, and it turned out he had a cancer of the blood called acute myeloid leukemia. He’d need a bone-marrow transplant to survive. After the procedure came the miracle. The man’s delusions and paranoia almost completely disappeared. His schizophrenia seemingly vanished.

Years later, “he is completely off all medication and shows no psychiatric symptoms,” Dr. Miyaoka told me in an email. Somehow the transplant cured the man’s schizophrenia.

A bone-marrow transplant essentially reboots the immune system. Chemotherapy kills off your old white blood cells, and new ones sprout from the donor’s transplanted blood stem cells. It’s unwise to extrapolate too much from a single case study, and it’s possible it was the drugs the man took as part of the transplant procedure that helped him. But his recovery suggests that his immune system was somehow driving his psychiatric symptoms.

At first glance, the idea seems bizarre — what does the immune system have to do with the brain? — but it jibes with a growing body of literature suggesting that the immune system is involved in psychiatric disorders from depression to bipolar disorder.

The theory has a long, if somewhat overlooked, history. In the late 19th century, physicians noticed that when infections tore through psychiatric wards, the resulting fevers seemed to cause an improvement in some mentally ill and even catatonic patients.

Inspired by these observations, the Austrian physician Julius Wagner-Jauregg developed a method of deliberate infection of psychiatric patients with malaria to induce fever. Some of his patients died from the treatment, but many others recovered. He won a Nobel Prize in 1927.

One much more recent case study relates how a woman’s psychotic symptoms — she had schizoaffective disorder, which combines symptoms of schizophrenia and a mood disorder such as depression — were gone after a severe infection with high fever.

Modern doctors have also observed that people who suffer from certain autoimmune diseases, like lupus, can develop what looks like psychiatric illness. These symptoms probably result from the immune system attacking the central nervous system or from a more generalized inflammation that affects how the brain works.

Indeed, in the past 15 years or so, a new field has emerged called autoimmune neurology. Some two dozen autoimmune diseases of the brain and nervous system have been described. The best known is probably anti-NMDA-receptor encephalitis, made famous by Susannah Cahalan’s memoir “Brain on Fire.” These disorders can resemble bipolar disorder, epilepsy, even dementia — and that’s often how they’re diagnosed initially. But when promptly treated with powerful immune-suppressing therapies, what looks like dementia often reverses. Psychosis evaporates. Epilepsy stops. Patients who just a decade ago might have been institutionalized, or even died, get better and go home.

Admittedly, these diseases are exceedingly rare, but their existence suggests there could be other immune disorders of the brain and nervous system we don’t know about yet.

Dr. Robert Yolken, a professor of developmental neurovirology at Johns Hopkins, estimates that about a third of schizophrenia patients show some evidence of immune disturbance. “The role of immune activation in serious psychiatric disorders is probably the most interesting new thing to know about these disorders,” he told me.

Studies on the role of genes in schizophrenia also suggest immune involvement, a finding that, for Dr. Yolken, helps to resolve an old puzzle. People with schizophrenia tend not to have many children. So how have the genes that increase the risk of schizophrenia, assuming they exist, persisted in populations over time? One possibility is that we retain genes that might increase the risk of schizophrenia because those genes helped humans fight off pathogens in the past. Some psychiatric illness may be an inadvertent consequence, in part, of having an aggressive immune system.

Which brings us back to Dr. Miyaoka’s patient. There are other possible explanations for his recovery. Dr. Andrew McKeon, a neurologist at the Mayo Clinic in Rochester, Minn., a center of autoimmune neurology, points out that he could have suffered from a condition called paraneoplastic syndrome. That’s when a cancer patient’s immune system attacks a tumor — in this case, the leukemia — but because some molecule in the central nervous system happens to resemble one on the tumor, the immune system also attacks the brain, causing psychiatric or neurological problems. This condition was important historically because it pushed researchers to consider the immune system as a cause of neurological and psychiatric symptoms. Eventually they discovered that the immune system alone, unprompted by malignancy, could cause psychiatric symptoms.

Another case study from the Netherlands highlights this still-mysterious relationship. In this study, on which Dr. Yolken is a co-author, a man with leukemia received a bone-marrow transplant from a schizophrenic brother. He beat the cancer but developed schizophrenia. Once he had the same immune system, he developed similar psychiatric symptoms.

The bigger question is this: If so many syndromes can produce schizophrenia-like symptoms, should we examine more closely the entity we call schizophrenia?

Some psychiatrists long ago posited that many “schizophrenias” existed — different paths that led to what looked like one disorder. Perhaps one of those paths is autoinflammatory or autoimmune.

If this idea pans out, what can we do about it? Bone marrow transplant is an extreme and risky intervention, and even if the theoretical basis were completely sound — which it’s not yet — it’s unlikely to become a widespread treatment for psychiatric disorders. Dr. Yolken says that for now, doctors treating leukemia patients who also have psychiatric illnesses should monitor their psychiatric progress after transplantation, so that we can learn more.

And there may be other, softer interventions. A decade ago, Dr. Miyaoka accidentally discovered one. He treated two schizophrenia patients who were both institutionalized, and practically catatonic, with minocycline, an old antibiotic usually used for acne. Both completely normalized on the antibiotic. When Dr. Miyaoka stopped it, their psychosis returned. So he prescribed the patients a low dose on a continuing basis and discharged them.

Minocycline has since been studied by others. Larger trials suggest that it’s an effective add-on treatment for schizophrenia. Some have argued that it works because it tamps down inflammation in the brain. But it’s also possible that it affects the microbiome — the community of microbes in the human body — and thus changes how the immune system works.

Dr. Yolken and colleagues recently explored this idea with a different tool: probiotics, microbes thought to improve immune function. He focused on patients with mania, which has a relatively clear immunological signal. During manic episodes, many patients have elevated levels of cytokines, molecules secreted by immune cells. He had 33 mania patients who’d previously been hospitalized take a probiotic prophylactically. Over 24 weeks, patients who took the probiotic (along with their usual medications) were 75 percent less likely to be admitted to the hospital for manic attacks compared with patients who didn’t.

The study is preliminary, but it suggests that targeting immune function may improve mental health outcomes and that tinkering with the microbiome might be a practical, cost-effective way to do this.

Watershed moments occasionally come along in medical history when previously intractable or even deadly conditions suddenly become treatable or preventable. They are sometimes accompanied by a shift in how scientists understand the disorders in question.

We now seem to have reached such a threshold with certain rare autoimmune diseases of the brain. Not long ago, they could be a death sentence or warrant institutionalization. Now, with aggressive treatment directed at the immune system, patients can recover. Does this group encompass a larger chunk of psychiatric disorders? No one knows the answer yet, but it’s an exciting time to watch the question play out.

Moises Velasquez-Manoff, the author of “An Epidemic of Absence: A New Way of Understanding Allergies and Autoimmune Diseases” and an editor at Bay Nature magazine, is a contributing opinion writer.


Related readings

https://en.wikipedia.org/wiki/Neuroimmunology

Emerging Subspecialties in Neurology: Autoimmune neurology

https://education.questdiagnostics.com/insights/104

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5499978/

6 page PDF article. http://www.med.or.jp/english/pdf/2004_09/425_430.pdf

https://www.quora.com/What-are-some-autoimmune-neurological-disorders-How-are-they-treated

 

Metabolism

What are we learning about?

* Hundreds of chemical reactions occur simultaneously in every living cell. 

* The entire set of them is collectively known as metabolism.

* in some reactions, complex molecules are broken down to produce energy

* in other reactions, energy is used to build up complex molecules.

Anabolism

From Greek ἁνά, “upward” and βάλλειν, “to throw”

All the chemical pathways in which cells bond smaller molecules together to make macromolecules (larger ones).

The energy source is another set of processes, catabolism (see below)

Anabolism is used to

create new cells

build muscles and tissues.

grow and mineralize bone

 

Catabolism

From the Greek κάτω kato, “downward” and βάλλειν ballein, “to throw”

All the chemical pathways in which cells break down large molecules into smaller ones.

Cells gain energy from the breakdown, or create smaller pieces, which become building materials in anabolism.

Catabolism is used to:

break proteins down into amino acids

break DNA molecules down into individual nucleotides

convert sugar into ATP and other small organic molecules
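The two halves of metabolism are coupled through energy carriers such as ATP: catabolic reactions charge the cell’s ATP pool, and anabolic reactions spend it. Here is a toy bookkeeping sketch of that coupling. The yields and costs are illustrative placeholders, not real stoichiometry (real glucose catabolism yields roughly 30 ATP, but the other numbers here are invented for the example):

```python
class Cell:
    """Toy bookkeeping: catabolism charges an ATP pool, anabolism spends it.
    The yields and costs used below are illustrative, not real stoichiometry."""

    def __init__(self):
        self.atp = 0  # the cell's current pool of ATP "energy units"

    def catabolize(self, atp_yield):
        # Breaking a large molecule down releases energy, captured as ATP.
        self.atp += atp_yield

    def anabolize(self, atp_cost):
        # Building a large molecule consumes ATP from the pool.
        if self.atp < atp_cost:
            raise RuntimeError("not enough ATP: catabolism must run first")
        self.atp -= atp_cost

cell = Cell()
cell.catabolize(atp_yield=30)  # e.g. breaking down one glucose molecule
cell.anabolize(atp_cost=4)     # e.g. one step of building a protein
print(cell.atp)                # prints 26 -- energy left for other work
```

The `RuntimeError` branch captures the key dependency: anabolism cannot run on an empty pool, which is why the two sets of pathways must operate together rather than as independent processes.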

Another way to show these metabolic pathways:

anabolic and catabolic metabolism

BBC Bitesize revision

Endocrine hormones regulate our metabolism

Hormones can be classified as anabolic or catabolic.

Anabolic hormones include insulin and the anabolic steroids, which stimulate protein synthesis and muscle growth.

Catabolic hormones include

cortisol (breaks down large molecules into simple sugars, for quick energy)

glucagon (breaks down large molecules into glucose and fatty acids)

adrenaline – increases blood flow to muscles, output of the heart, blood sugar level.

Our article on the endocrine hormone system.

 

What does metabolism look like inside a cell? Here’s a simplified view:

Metabolism anabolism catabolism

Image from An Introduction to Nutrition, v. 1.0. 2012books.lardbucket.org/books/an-introduction-to-nutrition

 

Metabolic map

This is a metro-style map of the metabolism of most life on Earth.

Metabolism pathways Wikimedia

Image by Bert Chan, Hong Kong, via Wikimedia. https://www.behance.net/bertchan

Interactive Metabolic Pathways Map – New Edition | Sigma-Aldrich

 

Related articles

Scaling-and-biophysics: As animals get larger and larger, how would their metabolism need to change?

Cellular respiration: An introduction

Interactive metabolism maps or apps

Metabolic pathways from Learn.Genetics

Clickable metabolic map from metabolicpathways.teithe.gr

Wiley college textbook step-by-step animations

Virtual Metabolic Human

Roche biochemical pathway online map

Learning Standards

MS-LS1-3.  Use argument supported by evidence for how the body is a system of interacting subsystems composed of groups of cells. Emphasis is on the conceptual understanding that cells form tissues and tissues form organs specialized for particular body functions. Examples could include the interaction of subsystems within a system and the normal functioning of those systems.

MS-LS1-7. Develop a model to describe how food is rearranged through chemical reactions forming new molecules that support growth and/or release energy as this matter moves through an organism. Emphasis is on describing that molecules are broken apart and put back together and that in this process, energy is released.

TBA