Category Archives: coding
These labs were designed by Prof. Chris Orban for Physics 1250 at The Ohio State University at Marion. They are useful at the high school and college level. No calculus knowledge or prior programming experience is required.
The nice part about these programming labs is that there is no software to install: the compiling, executing, and visualization all happen within your web browser! This is accomplished using a programming framework called p5.js (p5js.org), whose syntax is very similar to C/C++.
Note to students: When we talk about coding, we mean computer programming (“writing code”). But more specifically, we mean code that uses sophisticated mathematics.
From “The Future of Photography is Code”
Devin Coldewey, 10/22/2018
What’s in a camera? A lens, a shutter, a light-sensitive surface and, increasingly, a set of highly sophisticated algorithms. While the physical components are still improving bit by bit, Google, Samsung and Apple are increasingly investing in (and showcasing) improvements wrought entirely from code. Computational photography is the only real battleground now.
The reason for this shift is pretty simple: Cameras can’t get too much better than they are right now, or at least not without some rather extreme shifts in how they work. Here’s how smartphone makers hit the wall on photography, and how they were forced to jump over it.
The sensors in our smartphone cameras are truly amazing things. The work that’s been done by the likes of Sony, OmniVision, Samsung and others to design and fabricate tiny yet sensitive and versatile chips is really pretty mind-blowing. For a photographer who’s watched the evolution of digital photography from the early days, the level of quality these microscopic sensors deliver is nothing short of astonishing.
But there’s no Moore’s Law for those sensors. Or rather, just as Moore’s Law is now running into quantum limits at sub-10-nanometer levels, camera sensors hit physical limits much earlier. Think about light hitting the sensor as rain falling on a bunch of buckets; you can place bigger buckets, but there are fewer of them; you can put smaller ones, but they can’t catch as much each; you can make them square or stagger them or do all kinds of other tricks, but ultimately there are only so many raindrops and no amount of bucket-rearranging can change that.
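The bucket metaphor can be made concrete with a small simulation. This is purely an illustrative sketch (nothing a real sensor does in code): a fixed number of "raindrops" of light land on the sensor, and rearranging the buckets only changes how that fixed total is divided up.

```javascript
// Illustrative sketch: model light hitting a sensor as raindrops
// landing in buckets (pixels). The total number of drops is fixed;
// rearranging buckets only changes how the same total is divided up.

// Distribute a fixed number of "photons" uniformly at random
// across nPixels buckets and return the per-bucket counts.
function exposeSensor(totalPhotons, nPixels, rng = Math.random) {
  const buckets = new Array(nPixels).fill(0);
  for (let i = 0; i < totalPhotons; i++) {
    buckets[Math.floor(rng() * nPixels)] += 1;
  }
  return buckets;
}

const photons = 100000;
const fewBigBuckets = exposeSensor(photons, 100);      // fewer, bigger pixels
const manySmallBuckets = exposeSensor(photons, 10000); // more, smaller pixels

const sum = arr => arr.reduce((a, b) => a + b, 0);
// Either way, the sensor caught exactly the same amount of light...
console.log(sum(fewBigBuckets), sum(manySmallBuckets)); // both 100000
// ...but each small pixel caught roughly 1% of what each big pixel did.
console.log(fewBigBuckets[0], manySmallBuckets[0]);
```

However the pixels are arranged, the totals match: there are only so many raindrops.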
Sensors are getting better, yes, but not only is this pace too slow to keep consumers buying new phones year after year (imagine trying to sell a camera that’s 3 percent better), but phone manufacturers often use the same or similar camera stacks, so the improvements (like the recent switch to backside illumination) are shared amongst them. So no one is getting ahead on sensors alone.
Perhaps they could improve the lens? Not really. Lenses have arrived at a level of sophistication and perfection that is hard to improve on, especially at small scale. To say space is limited inside a smartphone’s camera stack is a major understatement — there’s hardly a square micron to spare. You might be able to improve them slightly as far as how much light passes through and how little distortion there is, but these are old problems that have been mostly optimized.
The only way to gather more light would be to increase the size of the lens, either by having it A: project outwards from the body; B: displace critical components within the body; or C: increase the thickness of the phone. Which of those options does Apple seem likely to find acceptable?
In retrospect it was inevitable that Apple (and Samsung, and Huawei, and others) would have to choose D: none of the above. If you can’t get more light, you just have to do more with the light you’ve got.
Isn’t all photography computational?
The broadest definition of computational photography includes just about any digital imaging at all. Unlike film, even the most basic digital camera requires computation to turn the light hitting the sensor into a usable image. And camera makers differ widely in the way they do this, producing different JPEG processing methods, RAW formats and color science.
For a long time there wasn’t much of interest on top of this basic layer, partly from a lack of processing power. Sure, there have been filters, and quick in-camera tweaks to improve contrast and color. But ultimately these just amount to automated dial-twiddling.
The first real computational photography features were arguably object identification and tracking for the purposes of autofocus. Face and eye tracking made it easier to capture people in complex lighting or poses, and object tracking made sports and action photography easier as the system adjusted its AF point to a target moving across the frame.
These were early examples of deriving metadata from the image and using it proactively, to improve that image or feed it forward to the next.
In DSLRs, autofocus accuracy and flexibility are marquee features, so this early use case made sense; but outside a few gimmicks, these “serious” cameras generally deployed computation in a fairly vanilla way. Faster image sensors meant faster sensor offloading and burst speeds, some extra cycles dedicated to color and detail preservation and so on. DSLRs weren’t being used for live video or augmented reality. And until fairly recently, the same was true of smartphone cameras, which were more like point and shoots than the all-purpose media tools we know them as today.
The limits of traditional imaging
Despite experimentation here and there and the occasional outlier, smartphone cameras are pretty much the same. They have to fit within a few millimeters of depth, which limits their optics to a few configurations. The size of the sensor is likewise limited — a DSLR might use an APS-C sensor 23 by 15 millimeters across, making an area of 345 mm2; the sensor in the iPhone XS, probably the largest and most advanced on the market right now, is 7 by 5.8 mm or so, for a total of 40.6 mm2.
Roughly speaking, it’s collecting an order of magnitude less light than a “normal” camera, but is expected to reconstruct a scene with roughly the same fidelity, colors and such — around the same number of megapixels, too. On its face this is sort of an impossible problem.
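The "order of magnitude" claim is easy to check with the numbers given above:

```javascript
// Checking the article's arithmetic: sensor areas in mm^2 and the ratio.
const apscArea = 23 * 15;     // APS-C DSLR sensor: 345 mm^2
const iphoneArea = 7 * 5.8;   // iPhone XS sensor: about 40.6 mm^2
const ratio = apscArea / iphoneArea;
console.log(apscArea, iphoneArea.toFixed(1), ratio.toFixed(1));
// 345 40.6 8.5 — roughly an order of magnitude less area, and less light
```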
Improvements in the traditional sense help out — optical and electronic stabilization, for instance, make it possible to expose for longer without blurring, collecting more light. But these devices are still being asked to spin straw into gold.
Luckily, as I mentioned, everyone is pretty much in the same boat. Because of the fundamental limitations in play, there’s no way Apple or Samsung can reinvent the camera or come up with some crazy lens structure that puts them leagues ahead of the competition. They’ve all been given the same basic foundation.
All competition therefore comprises what these companies build on top of that foundation.
Image as stream
The key insight in computational photography is that an image coming from a digital camera’s sensor isn’t a snapshot, the way it is generally thought of. In traditional cameras the shutter opens and closes, exposing the light-sensitive medium for a fraction of a second. That’s not what digital cameras do, or at least not what they can do.
A camera’s sensor is constantly bombarded with light; rain is constantly falling on the field of buckets, to return to our metaphor, but when you’re not taking a picture, these buckets are bottomless and no one is checking their contents. But the rain is falling nevertheless.
To capture an image the camera system picks a point at which to start counting the raindrops, measuring the light that hits the sensor. Then it picks a point to stop. For the purposes of traditional photography, this enables nearly arbitrarily short shutter speeds, which isn’t much use to tiny sensors.
Why not just always be recording? Theoretically you could, but it would drain the battery and produce a lot of heat. Fortunately, in the last few years image processing chips have gotten efficient enough that they can, when the camera app is open, keep a certain duration of that stream — limited resolution captures of the last 60 frames, for instance. Sure, it costs a little battery, but it’s worth it.
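A rolling store of recent frames like the one described is naturally a ring buffer. The class below is a sketch only; the names and structure are invented for this example, not taken from any vendor's camera software.

```javascript
// Sketch of a fixed-size ring buffer such as a camera app might use to
// retain the last N frames of the sensor stream. Illustrative only.
class FrameRingBuffer {
  constructor(capacity) {
    this.capacity = capacity;
    this.frames = new Array(capacity);
    this.next = 0;    // index where the next frame will be written
    this.count = 0;   // how many frames are currently held
  }
  push(frame) {
    this.frames[this.next] = frame;            // overwrite the oldest slot
    this.next = (this.next + 1) % this.capacity;
    this.count = Math.min(this.count + 1, this.capacity);
  }
  // Return the held frames, oldest first.
  snapshot() {
    const start = (this.next - this.count + this.capacity) % this.capacity;
    const out = [];
    for (let i = 0; i < this.count; i++) {
      out.push(this.frames[(start + i) % this.capacity]);
    }
    return out;
  }
}

const buf = new FrameRingBuffer(60);
for (let t = 0; t < 200; t++) buf.push({ t }); // 200 frames arrive...
console.log(buf.snapshot().length);            // ...only the last 60 are kept
console.log(buf.snapshot()[0].t);              // oldest retained frame: t = 140
```

Memory use stays constant no matter how long the camera app is open, which is why this costs only "a little battery."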
Access to the stream allows the camera to do all kinds of things. It adds context.
Context can mean a lot of things. It can be photographic elements like the lighting and distance to subject. But it can also be motion, objects, intention.
A simple example of context is what is commonly referred to as HDR, or high dynamic range imagery. This technique uses multiple images taken in a row with different exposures to more accurately capture areas of the image that might have been underexposed or overexposed in a single exposure. The context in this case is understanding which areas those are and how to intelligently combine the images together.
This can be accomplished with exposure bracketing, a very old photographic technique, but it can be accomplished instantly and without warning if the image stream is being manipulated to produce multiple exposure ranges all the time. That’s exactly what Google and Apple now do.
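A toy version of that merging step might look like the sketch below. It fuses bracketed exposures of a single grayscale row by weighting well-exposed (near mid-gray) values most heavily. Real HDR pipelines are far more sophisticated; nothing here reflects Google's or Apple's actual code.

```javascript
// Minimal exposure-fusion sketch (one grayscale row, values 0..255).
// Each output pixel is a weighted average of the bracketed exposures,
// with values near mid-gray (well exposed) weighted most heavily.
function fuseExposures(exposures) {
  const width = exposures[0].length;
  const fused = new Array(width).fill(0);
  for (let x = 0; x < width; x++) {
    let weightedSum = 0;
    let weightTotal = 0;
    for (const img of exposures) {
      const v = img[x];
      // Simple "well-exposedness" weight: peaks at 128, near zero at 0 or 255.
      const w = 1 - Math.abs(v - 128) / 128 + 1e-6;
      weightedSum += w * v;
      weightTotal += w;
    }
    fused[x] = Math.round(weightedSum / weightTotal);
  }
  return fused;
}

const dark   = [  5,  10, 120,  40]; // underexposed shot: shadows crushed
const mid    = [ 60, 128, 250, 130]; // normal shot: one highlight clipped
const bright = [200, 250, 255, 240]; // overexposed shot: highlights blown
console.log(fuseExposures([dark, mid, bright]));
```

The "context" in the article's sense is the weighting function: it encodes which areas of which exposure are trustworthy.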
Something more complex is of course the “portrait mode” and artificial background blur or bokeh that is becoming more and more common. Context here is not simply the distance of a face, but an understanding of what parts of the image constitute a particular physical object, and the exact contours of that object. This can be derived from motion in the stream, from stereo separation in multiple cameras, and from machine learning models that have been trained to identify and delineate human shapes.
These techniques are only possible, first, because the requisite imagery has been captured from the stream in the first place (an advance in image sensor and RAM speed), and second, because companies developed highly efficient algorithms to perform these calculations, trained on enormous data sets and immense amounts of computation time.
What’s important about these techniques, however, is not simply that they can be done, but that one company may do them better than the other. And this quality is entirely a function of the software engineering work and artistic oversight that goes into them.
DxOMark did a comparison of some early artificial bokeh systems; the results, however, were somewhat unsatisfying. It was less a question of which looked better, and more of whether they failed or succeeded in applying the effect. Computational photography is in such early days that it is enough for the feature to simply work to impress people. As with a dog walking on its hind legs, we are amazed that it occurs at all.
But Apple has pulled ahead with what some would say is an almost absurdly over-engineered solution to the bokeh problem. It didn’t just learn how to replicate the effect — it used the computing power it has at its disposal to create virtual physical models of the optical phenomenon that produces it. It’s like the difference between animating a bouncing ball and simulating realistic gravity and elastic material physics.
Why go to such lengths? Because Apple knows what is becoming clear to others: that it is absurd to worry about the limits of computational capability at all. There are limits to how well an optical phenomenon can be replicated if you are taking shortcuts like Gaussian blurring. There are no limits to how well it can be replicated if you simulate it at the level of the photon.
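For reference, the Gaussian-blur shortcut mentioned above looks like this in one dimension. The point of the sketch is that it merely smears neighboring pixels together with bell-curve weights, with no model of the lens at all, which is why it can only approximate real bokeh.

```javascript
// The "shortcut": a 1-D Gaussian blur. Real bokeh arises from lens
// optics; a Gaussian kernel just averages neighbors with bell-curve
// weights, so it approximates the look without modeling the physics.
function gaussianKernel(radius, sigma) {
  const kernel = [];
  let sum = 0;
  for (let i = -radius; i <= radius; i++) {
    const w = Math.exp(-(i * i) / (2 * sigma * sigma));
    kernel.push(w);
    sum += w;
  }
  return kernel.map(w => w / sum); // normalize so the weights add to 1
}

function blur1d(pixels, radius, sigma) {
  const kernel = gaussianKernel(radius, sigma);
  return pixels.map((_, x) =>
    kernel.reduce((acc, w, k) => {
      // Clamp at the edges so border pixels are still defined.
      const idx = Math.min(pixels.length - 1, Math.max(0, x + k - radius));
      return acc + w * pixels[idx];
    }, 0)
  );
}

// A hard edge (say, a sharp background highlight) gets softened:
console.log(blur1d([0, 0, 0, 255, 255, 255], 2, 1.0).map(Math.round));
```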
Similarly the idea of combining five, 10, or 100 images into a single HDR image seems absurd, but the truth is that in photography, more information is almost always better. If the cost of these computational acrobatics is negligible and the results measurable, why shouldn’t our devices be performing these calculations? In a few years they too will seem ordinary.
If the result is a better product, the computational power and engineering ability has been deployed with success; just as Leica or Canon might spend millions to eke fractional performance improvements out of a stable optical system like a $2,000 zoom lens, Apple and others are spending money where they can create value: not in glass, but in silicon.
One trend that may appear to conflict with the computational photography narrative I’ve described is the advent of systems comprising multiple cameras.
This technique doesn’t add more light to the sensor — that would be prohibitively complex and expensive optically, and probably wouldn’t work anyway. But if you can free up a little space lengthwise (rather than depthwise, which we found impractical) you can put a whole separate camera right by the first that captures photos extremely similar to those taken by the first.
Now, if all you want to do is re-enact Wayne’s World at an imperceptible scale (camera one, camera two… camera one, camera two…) that’s all you need. But no one actually wants to take two images simultaneously, a fraction of an inch apart.
These two cameras operate either independently (as wide-angle and zoom) or one is used to augment the other, forming a single system with multiple inputs.
The thing is that taking the data from one camera and using it to enhance the data from another is — you guessed it — extremely computationally intensive. It’s like the HDR problem of multiple exposures, except far more complex as the images aren’t taken with the same lens and sensor. It can be optimized, but that doesn’t make it easy.
So although adding a second camera is indeed a way to improve the imaging system by physical means, the possibility only exists because of the state of computational photography. And it is the quality of that computational imagery that results in a better photograph — or doesn’t. The Light camera with its 16 sensors and lenses is an example of an ambitious effort that simply didn’t produce better images, though it was using established computational photography techniques to harvest and winnow an even larger collection of images.
Light and code
The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices and everywhere that light is captured and turned into images.
Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: Glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.
What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, or pseudoscience babble on stage, or drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved onto a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.
This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (Added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546)
Thomas T. Thomas writes:
From our perspective at the human scale, a tabletop is a flat plane, but at the atomic level, the flat surface disappears into a lumpy swarm of molecules.
Aficionados of fractal imagery will understand this perfectly: any natural feature like the slope of a hill or shore of a coast can be broken down into smaller and smaller curves and angles, endlessly subject to refinement. In fractal geometry, which is driven by simple equations, the large curves mirror the small curves ad infinitum.
The emergent property is not an illusion… The flatness of the tabletop is just as real—and more useful for setting out silverware and plates—than the churning atoms that actually compose it. The hill and its slope are just as real—and more useful for climbing—than the myriad tiny angles and curves, the surfaces of the grains of sand and bits of rock, that underlie the slope.
Emergent property works on greater scales, too. From space the Earth presents as a nearly perfect sphere, a blue-white marble decorated with flashes of green and brown, but still quite smooth. That spherical shape only becomes apparent from a great distance. Viewed from the surface, it’s easy enough for the eye to see a flat plane bounded by the horizon and to focus on hills and valleys as objects of great stature which, from a distance of millions of miles, do not even register as wrinkles.
Emergent properties come into play only when the actions of thousands, millions, or billions of separate and distinct elements are perceived and treated as a single entity. “Forest” is an emergent property of thousands of individual trees. The concept of emergent properties can be extremely useful to describe some of the situations and events that we wrestle with daily.
Conway’s game of life
BOIDS: Birds flocking
Classical physics is an emergent property of quantum mechanics
2016 Massachusetts Science and Technology/Engineering Curriculum Framework
Appendix VIII Value of Crosscutting Concepts and Nature of Science in Curricula
In grades 9–12, students can observe patterns in systems at different scales and cite patterns as empirical evidence for causality in supporting their explanations of phenomena. They recognize that classifications or explanations used at one scale may not be useful or need revision using a different scale, thus requiring improved investigations and experiments. They use mathematical representations to identify certain patterns and analyze patterns of performance in order to re-engineer and improve a designed system.
Next Gen Science Standards HS-PS2 Motion and Stability
Crosscutting Concepts: Different patterns may be observed at each of the scales at which a system is studied and can provide evidence for causality in explanations of phenomena. (HS-PS2-4)
A Framework for K-12 Science Education
Scale, proportion, and quantity. In considering phenomena, it is critical to recognize what is relevant at different measures of size, time, and energy and to recognize how changes in scale, proportion, or quantity affect a system’s structure or performance…. The understanding of relative magnitude is only a starting point. As noted in Benchmarks for Science Literacy, “The large idea is that the way in which things work may change with scale. Different aspects of nature change at different rates with changes in scale, and so the relationships among them change, too.” Appropriate understanding of scale relationships is critical as well to engineering—no structure could be conceived, much less constructed, without the engineer’s precise sense of scale.
Dimension 2, Crosscutting Concepts, A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas (2012)
How to program in Scratch, using Boolean logic
- Boolean operators include:
- AND, OR, NOT, < , = , >
- Is one sprite touching some other thing? The answer by definition must be true or false.
- Is one sprite touching something of a certain color? Again, the answer must be true or false.
- Is a certain key being pressed?
- Is the mouse being used?
- Why use Boolean operators?
- To focus a search, particularly when your topic contains multiple search terms.
- To connect various pieces of information to find exactly what you’re looking for.
A Boolean block is a hexagonal block (shaped after the Boolean elements in flowcharts). The block contains a condition, and the answer to the condition will be either true or false.
It’s important to determine if a statement (expression) is “true” or “false”.
Ways to determine TRUE and FALSE are prevalent in all kinds of decision making.
A mathematically precise way of asking if something is TRUE or FALSE is called a Boolean operation.
It is named after George Boole, who first defined an algebraic system of logic in the mid 19th century.
Boolean data is associated with conditional statements. For example, the following statement is really a set of questions that can be answered as TRUE or FALSE.
IF (I want to go to a movie) AND (I have more than $10) THEN (I can go to the movie)
We can combine several Boolean statements that have true/false meaning into a single statement using words like AND, OR, and NOT.
“If I want to go to the movie AND I have enough money, then I will go to the movie.”
BOTH conditions have to evaluate to true (have to be true) before the entire expression is true.
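The movie example translates directly into code. Here it is as a JavaScript function; the function name is invented for this example, and the $10 threshold comes from the statement above.

```javascript
// The movie example as a Boolean expression: both conditions must be
// true for the whole AND expression to be true.
function canGoToMovie(wantToGo, dollars) {
  return wantToGo && dollars > 10;
}

console.log(canGoToMovie(true, 12));  // true AND true  -> true
console.log(canGoToMovie(true, 5));   // true AND false -> false
console.log(canGoToMovie(false, 50)); // false AND true -> false
```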
Some terms you already learned in math are really Boolean operators:
Less than: < [ ] < [ ] >   Equal to: < [ ] = [ ] >   Greater than: < [ ] > [ ] >
For example: (The height of a building) < 20 meters
For any building we look at, this statement will either be true or false.
Go through what each Boolean block does (page 68)
Book “Adventures in Coding”, Eva Holland and Chris Minnick, Wiley, 2016. Pages 50-59
Computational Thinking 6-8.CT.c.2 Describe how computers store, manipulate, and transfer data types and files (e.g., integers, real numbers, Boolean Operators) in a binary system.
CSTA K-12 Computer Science Standards
CT.L2-14 Examine connections between elements of mathematics and computer science
including binary numbers, logic, sets and functions.
CPP.L2-05 Implement problem solutions using a programming language, including: looping behavior, conditional statements, logic, expressions, variables, and functions.
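The binary-representation idea in the standards above can be demonstrated with JavaScript's built-in base conversions:

```javascript
// How a computer might represent values in binary, sketched with
// JavaScript's built-in base conversions.
const n = 42;
console.log(n.toString(2));         // "101010" — the integer 42 in binary
console.log(parseInt("101010", 2)); // 42 — and back again

// A Boolean needs only a single bit: 1 for true, 0 for false.
console.log(Number(true), Number(false)); // 1 0
```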
On a separate sheet of paper please answer these questions (or fill in the blanks)
1. AI is intelligence exhibited by machines. It doesn’t mean that ____________ .
2. What does it mean for a computer to be intelligent?
3. People can do more than solve problems: we are aware, sentient, sapient, and conscious. What does it mean for someone to be sapient?
4. What does it mean for someone to be sentient?
5. Some AI problems have successfully been solved. People are so used to them that we often don’t even call them AI. You may already be using some AIs in your own life. List & clearly describe two examples of these AIs.
6. There is a field of computer science called “philosophy of artificial intelligence.” List 2 of the questions that field is working to answer.
7. How are strong and weak AIs different?
8. In our resource, look at the 2 articles we link to: “How to Help Self-Driving Cars Make Ethical Decisions” and “What Will It Take to Build a Virtuous AI?” Summarize the first article, in your own words, in 3 well-written paragraphs. Summarize the second article for homework.
Coding weather forecasting
Unleash Your Inner Geek With These Excellent Weather Radar Programs
Gibson Ridge Software, LLC (GRS) was created in March 2005 and produces viewers for weather radar data. GRS applications include GRLevel2 for viewing Level II radar data and GRLevel3 for viewing Level III data. Both viewers feature high speed, high quality radar displays with an intuitive user interface. All GRS applications are written in multithreaded C++ using the base Windows APIs for speed and efficiency.
Massachusetts Earth Science
8.MS-ESS2-5. Interpret basic weather data to identify patterns in air mass interactions and the relationship of those patterns to local weather.
8.MS-ESS2-6. Describe how interactions involving the ocean affect weather and climate on a regional scale, including the influence of the ocean temperature as mediated by energy input from the Sun and energy loss due to evaporation or redistribution via ocean currents.
Common Core Math Skills
Standard (CCSS.MATH.PRACTICE) and how it is addressed in Introduction to Programming the EV3:
MP1 (Make sense of problems and persevere in solving them): Chapters are all based around solving real-world robot problems; students must make sense of the problems to inform their solutions.
MP2 (Reason abstractly and quantitatively): Programming requires students to reason about physical quantities in the world to plan a solution, then calculate or estimate them for the robot.
MP4 (Model with mathematics): Many processes, including the process of programming itself, must be systematically modeled on both explicit and implicit levels.
HS-ETS1-2. Design a solution to a complex real-world problem by breaking it down into smaller, more manageable problems that can be solved through engineering.
Hit the ground running: Coding lessons from Code.Org
We’re using Blockly, a visual coding language.
What are “conditionals”? “On One Condition” If-then-else conditional flowchart lesson
Course 4. Stage 12. Artist Functions
Homework: Write a paragraph explaining how loops work, how WHILE loops work, and how DO…WHILE loops work.
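For reference while writing that paragraph, here is the difference between WHILE and DO…WHILE in JavaScript (Blockly itself is visual, so this is the text-language equivalent):

```javascript
// WHILE checks its condition before each pass; DO...WHILE checks it
// after, so its body always runs at least once.
function countdownWhile(n) {
  const steps = [];
  while (n > 0) {   // may run zero times if n starts at 0
    steps.push(n);
    n--;
  }
  return steps;
}

function countdownDoWhile(n) {
  const steps = [];
  do {              // always runs at least once, even when n is 0
    steps.push(n);
    n--;
  } while (n > 0);
  return steps;
}

console.log(countdownWhile(3));   // [3, 2, 1]
console.log(countdownWhile(0));   // []  — the condition was false up front
console.log(countdownDoWhile(0)); // [0] — the body ran once anyway
```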
Course 4. Stage 16. Bees – Functions with parameters
Homework: What is a “Hello, World!” program?
Excelwithbusiness.com: Say “Hello, world!”.
How would we tell a computer to write “Hello, World!” in Blockly?
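One possible answer, shown as the JavaScript that a Blockly "print" block generates rather than as the visual blocks themselves:

```javascript
// The classic first program. In Blockly you would drag a "print" block
// into the workspace; the JavaScript generated from it amounts to a
// single line of code.
const greeting = "Hello, World!";
console.log(greeting); // prints: Hello, World!
```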
Course 4, Stage 19: variables super challenge
Homework: Go to TutorialsPoint (link below) and choose “programming environment”.
(1) What’s the purpose of a text editor?
(2) What’s the purpose of a compiler?
(3) What’s the purpose of an interpreter?
Hour of code programs
Why Johnny can’t code
By David Brin, Salon, Sept. 14, 2006
BASIC used to be on every computer a child touched — but today there’s no easy way for kids to get hooked on programming.
For three years — ever since my son Ben was in fifth grade — he and I have engaged in a quixotic but determined quest: We’ve searched for a simple and straightforward way to get the introductory programming language BASIC to run on either my Mac or my PC.
Why on Earth would we want to do that, in an era of glossy animation-rendering engines, game-design ogres and sophisticated avatar worlds? Because if you want to give young students a grounding in how computers actually work, there’s still nothing better than a little experience at line-by-line programming.
Only, quietly and without fanfare, or even any comment or notice by software pundits, we have drifted into a situation where almost none of the millions of personal computers in America offers a line-programming language simple enough for kids to pick up fast. Not even the one that was a software lingua franca on nearly all machines, only a decade or so ago. And that is not only a problem for Ben and me; it is a problem for our nation and civilization.
Oh, today’s desktops and laptops offer plenty of other fancy things — a dizzying array of sophisticated services that grow more dazzling by the week. Heck, I am part of that creative spasm.
Only there’s a rub. Most of these later innovations were brought to us by programmers who first honed their abilities with line-programming languages like BASIC. Yes, they mostly use higher level languages now, stacking and organizing object-oriented services, or using other hifalutin processes that come prepackaged and ready to use, the way an artist uses pre-packaged paints. (Very few painters still grind their own pigments. Should they?)
And yet the thought processes that today’s best programmers learned at the line-coding level still serve these designers well. Renowned tech artist and digital-rendering wizard Sheldon Brown, leader of the Center for Computing in the Arts, says:
“In my Electronics for the Arts course, each student built their own single board computer, whose CPU contained a BASIC ROM [a chip permanently encoded with BASIC software]. We first did this with 8052’s and then with a chip called the BASIC Stamp. The PC was just the terminal interface to these computers, whose programs would be burned into flash memory. These lucky art students were grinding their own computer architectures along with their code pigments — along their way to controlling robotic sculptures and installation environments.”
But today, very few young people are learning those deeper patterns. Indeed, they seem to be forbidden any access to that world at all.
And yet, they are tantalized! Ben has long complained that his math textbooks all featured little type-it-in-yourself programs at the end of each chapter — alongside the problem sets — offering the student a chance to try out some simple algorithm on a computer. Usually, it’s an equation or iterative process illustrating the principle that the chapter discussed. These “TRY IT IN BASIC” exercises often take just a dozen or so lines of text. The aim is both to illustrate the chapter’s topic (e.g. statistics) and to offer a little taste of programming.
Only no student tries these exercises.
Not my son or any of his classmates. Nor anybody they know. Indeed, I would be shocked if more than a few dozen students in the whole nation actually type in those lines that are still published in countless textbooks across the land. Those who want to (like Ben) simply cannot.
Now, I have been complaining about this for three years. But whenever I mention the problem to some computer industry maven at a conference or social gathering, the answer is always the same: “There are still BASIC programs in textbooks?”
At least a dozen senior Microsoft officials have given me the exact same response. After taking this to be a symptom of cluelessness in the textbook industry, they then talk about how obsolete BASIC is, and how many more things you can do with higher-level languages. “Don’t worry,” they invariably add, “the newer textbooks won’t have any of those little BASIC passages in them.”
All of which is absolutely true. BASIC is actually quite tedious and absurd for getting done the vast array of vivid and ambitious goals that are typical of a modern programmer. Clearly, any kid who wants to accomplish much in the modern world would not use it for very long. And, of course, it is obvious that newer texts will abandon “TRY IT IN BASIC” as a teaching technique, if they haven’t already.
But all of this misses the point. Those textbook exercises were easy, effective, universal, pedagogically interesting — and nothing even remotely like them can be done with any language other than BASIC. Typing in a simple algorithm yourself, seeing exactly how the computer calculates and iterates in a manner you could duplicate with pencil and paper — say, running an experiment in coin flipping, or making a dot change its position on a screen, propelled by math and logic, and only by math and logic:
All of this is priceless. As it was priceless 20 years ago. Only 20 years ago, it was physically possible for millions of kids to do it. Today it is not.
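In that spirit, here is a coin-flipping experiment of the kind those "TRY IT IN BASIC" exercises offered: a dozen or so lines you could duplicate with pencil and paper, written here in JavaScript since, as the essay laments, BASIC is no longer ready to hand.

```javascript
// A line-by-line coin-flipping experiment, in the spirit of the old
// "TRY IT IN BASIC" textbook exercises.
function flipCoins(nFlips, rng = Math.random) {
  let heads = 0;
  for (let i = 0; i < nFlips; i++) {
    if (rng() < 0.5) heads++;  // each flip: heads with probability 1/2
  }
  return heads;
}

const trials = 1000;
const heads = flipCoins(trials);
console.log(`${heads} heads out of ${trials} flips`);
console.log(`observed frequency: ${(heads / trials).toFixed(3)}`);
```

Run it a few times and watch the observed frequency hover near 0.5: exactly the kind of see-the-math-work moment the essay is mourning.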
In effect, we have allowed a situation to develop that is like a civilization devouring its seed corn. If an enemy had set out to do this to us — quietly arranging so that almost no school child in America can tinker with line coding on his or her own — any reasonably patriotic person would have called it an act of war.
Am I being overly dramatic? Then consider a shift in perspective.
First ponder the notion of programming as a series of layers. At the bottom-most level is machine code. I showed my son the essentials on scratch paper, explaining the roots of Alan Turing’s “general computer” and how it was ingeniously implemented in the first four-bit integrated processor, Intel’s miraculous 1971 4004 chip, unleashing a generation of nerdy guys to move bits around in little clusters, adding and subtracting clumps of ones and zeroes, creating the first calculators and early desktop computers like the legendary Altair.
This level of coding is still vital, but only in the realm of specialists at the big CPU houses. It is important for guys like Ben to know about machine code — that it’s down there, like DNA in your cell — but a bright kid doesn’t need to actually do it, in order to be computer-literate. (Ben wants to, though. Anyone know a good kit?)
The layer above that is often called assembler, though there are many ways that user intent can be interpreted down to the bit level without actually flicking a series of on-off switches. Sets of machine instructions are grouped, assembled and correlated with (for example) ASCII-coded commands. Some call this the most boring level. Think of the hormones swirling through your body. Even a glimpse puts me to sleep. But at least I know that it is there.
The third layer of this cake is the operating system of your computer. Call it BIOS and DOS, along with a lot of other names. This was where guys like Gates and Wozniak truly propelled a whole industry and way of life, by letting the new desktops communicate with their users, exchange information with storage disks and actually show stuff on a screen. Cool.
Meanwhile, the same guys were offering — at the fourth layer — a programming language that folks could use to create new software of their very own. BASIC was derived from academic research tools like beloved old FORTRAN (in which my doctoral research was coded onto punched paper cards, yeesh). It was crude. It was dry. It was unsuitable for the world of the graphic user interface. BASIC had a lot of nasty habits. But it liberated several million bright minds to poke and explore and aspire as never before.
The “scripting” languages that serve as entry-level tools for today’s aspiring programmers — like Perl and Python — don’t make this experience accessible to students in the same way. BASIC was close enough to the algorithm that you could actually follow the reasoning of the machine as it made choices and followed logical pathways.
Repeating this point for emphasis: You could even do it all yourself, following along on paper, for a few iterations, verifying that the dot on the screen was moving by the sheer power of mathematics, alone. Wow!
(Indeed, I would love to sit with my son and write “Pong” from scratch. The rule set — the math — is so simple. And he would never see the world the same, no matter how many higher-level languages he then moves on to.)
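Brin is right that the rule set really is simple arithmetic. A hedged sketch of the core Pong ball update in Python (the function and parameter names are mine, and real Pong adds paddles and scoring):

```python
def step_ball(x, y, vx, vy, width=80, height=24):
    """Advance a Pong ball one tick: move it, then bounce off the walls.

    This is the 'math, not magic' at the heart of the game: add the
    velocity to the position, and flip the velocity's sign at an edge.
    """
    x, y = x + vx, y + vy
    if x <= 0 or x >= width:   # hit a side wall: reverse horizontal motion
        vx = -vx
    if y <= 0 or y >= height:  # hit top or bottom: reverse vertical motion
        vy = -vy
    return x, y, vx, vy

# Two ticks, easy to verify by hand: (5,5) -> (6,7) -> (7,9),
# velocity unchanged because no wall was hit.
state = (5, 5, 1, 2)
for _ in range(2):
    state = step_ball(*state)
print(state)  # (7, 9, 1, 2)
```

A student who traces a few iterations on paper sees exactly why the dot moves where it does.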
The closest parallel I can think of is the WWII generation of my father — guys for whom the ultra in high tech was automobiles. What fraction of them tore apart jalopies at home? Or at least became adept at diagnosing and repairing the always fragile machines of that era? One result of that free and happy spasm of techie fascination was utterly strategic. When the “Arsenal of Democracy” began churning out swarms of tanks and trucks and jeeps, these were sent to the front and almost overnight an infantry division might be mechanized, in the sure and confident expectation that there would be thousands of young men ready (or trainable) to maintain these tools of war. (Can your kid even change the oil nowadays? Or a tire?)
The parallel technology of the ’70s generation was IT.
Not every boomer soldered an Altair from a kit, or mastered the arcana of DBASE. But enough of them did so that we got the Internet and Web. We got Moore’s Law and other marvels. We got a chance to ride another great technological wave.
So, what’s the parallel hobby skill today?
What tech-marvel has boys and girls enthralled, tinkering away, becoming expert in something dazzling and practical and new?
Shooting ersatz aliens in “Halo”?
Dressing up avatars in “The Sims”?
Oh sure, there’s creativity in creating cool movies and Web pages. But except for the very few who will make new media films, do you see a great wave of technological empowerment coming out of all this?
OK, I can hear the sneers. Are these the rants of a grouchy old boomer? Feh, kids today! (And get the #$#*! off my lawn!)
Fact is, I just wanted to give my son a chance to sample some of the wizardry standing behind the curtain, before he became lost in the avatar-filled and glossy-rendered streets of Oz. Like the hero in “TRON,” or “The Matrix,” I want him to be a user who can see the lines that weave through the fabric of cyberspace — or at least know some history about where it all came from. At the very minimum, he ought to be able to type those examples in his math books and use the computer the way it was originally designed to be used: to compute.
Hence, imagine my frustration when I discovered that it simply could not be done.
Yes, yes: For three years I have heard all the rationalized answers. No kid should even want BASIC, they say. There are higher-level languages like C++ (Ben is already — at age 14 — on page 200 of his self-teaching C++ book!) and yes, there are better education programs like Logo. Hey, what about Visual Basic! Others suggested downloadable versions like q-basic, y-basic, alphabetabasic…
Indeed, I found one that was actually easy to download, easy to turn on, and that simply let us type in some of those little example programs, without demanding that we already be manual-chomping fanatics in order to even get started using the damn thing. Chipmunk Basic for the Macintosh actually started right up and let us have a little clean, algorithmic fun. Extremely limited, but helpful. All of the others, every last one of them, was either too high-level (missing the whole point!) or else far, far too onerous to figure out or use. Certainly not meant to be turn-key usable by any junior high school student. Appeals for help online proved utterly futile.
Until, at last, Ben himself came up with a solution. An elegant solution of startling simplicity. Essentially: If you can’t beat ’em, join ’em.
While trawling through eBay, one day, he came across listings for archaic 1980s-era computers like the Apple II. “Say, Dad, didn’t you write your first novel on one of those?” he asked.
“Actually, my second. ‘Startide Rising.’ On an Apple II with Integer Basic and a serial number in five digits. It got stolen, pity. But my first novel, ‘Sundiver,’ was written on this clever device called a typewrit –”
“Well, look, Dad. Have you seen what it costs to buy one of those old Apples online, in its original box? Hey, what could we do with it?”
“Huh?” I stared in amazement.
Then, gradually, I realized the practical possibilities.
Let’s cut to the chase. We did not wind up buying an Apple II. Instead (for various reasons) we bought a Commodore 64 (in original box) for $25. It arrived in good shape. It took us maybe three minutes to attach an old TV. We flicked the power switch … and up came a command line. In BASIC.
Uh. Problem solved?
I guess. At least far better than any other thing we’ve tried!
We are now typing in programs from books, having fun making dots move (and thus knowing why the dots move, at the command of math, and not magic). There are still problems, like getting an operating system to make the 1541 disk drive work right. Most of the old floppies are unreadable. But who cares? (Ben thinks that loading programs to and from tape is so cool. I gurgle and choke remembering my old Sinclair … but whatever.)
What matters is that we got over a wretched educational barrier. And now Ben can study C++ with a better idea where it all came from. In the nick of time.
Problem solved? Again, at one level.
And yet, can you see the irony? Are any of the masters of the information age even able to see the irony?
This is not just a matter of cheating a generation, telling them to simply be consumers of software, instead of the innovators that their uncles were. No, this goes way beyond that. In medical school, professors insist that students have some knowledge of chemistry and DNA before they are allowed to cut open folks. In architecture, you are at least exposed to some physics.
But in the high-tech, razzle-dazzle world of software? According to the masters of IT, line coding is not a deep-fabric topic worth studying. Not a layer that lies beneath, holding up the world of object-oriented programming. Rather, it is obsolete!
Or, at best, something to be done in Bangalore. Or by old guys in their 50s, guaranteeing them job security, the same way that COBOL programmers were all dragged out of retirement and given new cars full of Jolt Cola during the Y2K crisis.
All right, here’s a challenge. Get past all the rationalizations. (Because that is what they are.) It would be trivial for Microsoft to provide a version of BASIC that kids could use, whenever they wanted, to type in all those textbook examples. Maybe with some cool tutorial suites to guide them along, plus samples of higher-order tools. It would take up a scintilla of disk space and maybe even encourage many of them to move on up. To (for example) Visual Basic!
Or else, hold a big meeting and choose another lingua franca, so long as it can be universal enough to use in texts, the way that BASIC was.
Instead, we are told that “those textbooks are archaic” and that students should be doing “something else.” Only then watch the endless bickering over what that “something else” should be — with the net result that there is no lingua franca at all, no “basic” language so common that textbook publishers can reliably use it as a pedagogical aide.
The textbook writers and publishers aren’t the ones who are obsolete, out-of-touch and wrong. It is people who have yanked the rug out from under teachers and students all across the land.
Let me reiterate. Kids are not doing “something else” other than BASIC. Not millions of them. Not hundreds or tens of thousands of them. Hardly any of them, in fact. It is not their fault. Because some of them, like my son, really want to. But they can’t. Not without turning into time travelers, the way we did, by giving up (briefly) on the present and diving into the past. (I also plan to teach him how to change the oil and fix a tire!) By using the tools of a bygone era to learn more about tomorrow.
If this is a test, then Ben and I passed it, ingeniously. In contrast, Microsoft and Apple and all the big-time education-computerizing reformers of the MIT Media Lab are failing, miserably. For all of their high-flown education initiatives (like the “$100 laptop”), they seem bent on providing information consumption devices, not tools that teach creative thinking and technological mastery.
Web access for the poor would be great. But machines that kids out there can understand and program themselves? To those who shape our technical world, the notion remains not just inaccessible, but strangely inconceivable.
David Brin is an astrophysicist whose international best-selling novels include “Earth” and, recently, “Existence.” “The Postman” was filmed in 1997. His nonfiction book about the information age, “The Transparent Society,” won the Freedom of Speech Award of the American Library Association. (http://www.davidbrin.com)
How does a computer understand (interpret and execute) a high-level programming language?
What’s the difference between a high-level computer language and a low-level language? How does a computer interpret these languages, so the program can run? What is computer programming?
How does a computer understand a computer program http://guyhaas.com/bfoit/itp/Programming.html
BBC Bitesize Revision: Running a program, the CPU, etc. (a 5 page step-by-step resource) http://www.bbc.co.uk/education/guides/z2342hv/revision/1
How do you communicate with computers? Through a programming language. Source code and language differences: Learntocodewith.me
Breaks down how code gets translated from the code programmers write to the code computers read, the difference between compiled and interpreted code, and what makes “just-in-time” compilers so fast and efficient. The Basics of Compiled Languages, Interpreted Languages, and Just-in-Time Compilers
How do computers understand programming languages? How do you “teach” a computer a language? Explanation by Christian Benesch, software engineer and architect (among other explanations) here
How does a computer understand a computer program? Codeconquest.com How does coding work?
What is a program? What is a programming language? Depending on the language used, and the particular implementation of the language used, the process to translate high-level language statements to actions may involve compilation and interpretation. Introduction to Programming (Wikiversity)
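One concrete way to peek under the hood that these resources describe: CPython, for example, first compiles a function into bytecode, which its virtual machine then interprets. The standard-library dis module shows that bytecode directly (a small illustration of the compile-then-interpret idea, not taken from any one source above):

```python
import dis

def add(a, b):
    return a + b

# dis prints the bytecode instructions CPython compiled `add` into:
# roughly, load the two arguments, perform a binary add, and return.
# The exact opcode names vary between CPython versions.
dis.dis(add)
```

Running this makes the abstract claim tangible: the high-level statement `a + b` really does become a short list of machine-like instructions before anything executes.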
I. Write a sophisticated Scratch computer program, on your own, not using someone else’s code. You must first come see me with your idea, and then present quick updates, showing your progress.
Checkpoint 1: See me with your specific idea by 5/30/17. 10 points.
Checkpoint 2: Show me the code you have each day in class. You must be able to clearly explain how your code works, and your code should include many comments. Your program must be complete by the time finals come around. If done well, you can earn up to an additional 90 points.
II. Write a 4-page paper on one of the following topics.
No cover page. The upper left of the 1st page will have your name, my name/class, the date, and a title. Use 12-point Arial or Times New Roman font, double spaced, with 1″ margins. You may add small diagrams and pictures, but they don’t count toward the length of your paper. The MLA Works Cited list goes on an additional page. You must use at least four sources of information, which must be cited in MLA format.
For these topics, most Wikipedia articles are acceptable sources; however, you may not use Wikipedia for more than 2 of your sources, and you must first show me the specific articles so I can make sure they’re OK.
A) Computers don’t actually think. So how do they know what to do with the code we write? What goes on under the hood, so to speak? I’ve prepared many sources that you can use: How-a-computer-interprets-instructions
B) The development of computers and software. Choose 1 of these systems: the classic IBM PC, Apple II, Apple Macintosh, Commodore VIC-20, or Commodore 64.
C) The development and programming of second generation classic video games. Choose 1 or 2 of these systems: Magnavox Odyssey, Atari 2600 (aka Atari VCS), Magnavox Odyssey 2, Mattel Intellivision, Vectrex, and ColecoVision. What kind of hardware was in these consoles? How did they work? How were they programmed? In what language were they programmed? What was the software capable of?
D) The development and programming of third generation (neo-classic) video games. Choose 1 or 2 of these systems: Sega Master System (aka the SMS), Nintendo Entertainment System (aka the NES or Famicom), Atari 7800. What kind of hardware was in these consoles? How did they work? How were they programmed? In what language were they programmed? What was the software capable of?
E) The development and programming of fifth generation classic video games. Choose 1 or 2 of these systems: Sega Saturn, Sony PlayStation (aka the PSX), Nintendo 64. What kind of hardware was in these consoles? How did they work? How were they programmed? In what language were they programmed? What was the software capable of?