KaiserScience


Category Archives: math


PEMDAS: The Math Equation That Tried to Stump the Internet

from The Math Equation That Tried to Stump the Internet

[Image: the ambiguous PEMDAS math meme]

Excerpted from the NY Times article, The Math Equation That Tried to Stump the Internet, by Steven Strogatz, 8/2/2019

… The question above has a clear and definite answer, provided we all agree to play by the same rules governing “the order of operations.” When, as in this case, we are faced with several mathematical operations to perform — to evaluate expressions in parentheses, carry out multiplications or divisions, or do additions or subtractions — the order in which we do them can make a huge difference.

When confronted with 8 ÷ 2(2+2), everyone on Twitter agreed that the 2+2 in parentheses should be evaluated first. That’s what our teachers told us: Deal with whatever is in parentheses first. Of course, 2+2 = 4. So the question boils down to 8÷2×4.

And there’s the rub. Now that we’re faced with a division and a multiplication, which one takes priority? If we carry out the division first, we get 4×4 = 16; if we carry out the multiplication first, we get 8÷8 = 1.

Which way is correct? The standard convention holds that multiplication and division have equal priority. To break the tie, we work from left to right. So the division goes first, followed by the multiplication. Thus, the right answer is 16.

More generally, the conventional order of operations is to evaluate expressions in parentheses first. Then you deal with any exponents. Next come multiplication and division, which, as I said, are considered to have equal priority, with ambiguities dispelled by working from left to right. Finally come addition and subtraction, which are also of equal priority, with ambiguities broken again by working from left to right.
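
[Editor's note: most programming languages bake this exact convention in, so you can check it mechanically. A quick illustration in Python (our addition, not part of the Times article):]

    print(8 / 2 * 4)       # 16.0: division and multiplication share priority,
                           # and ties resolve left to right
    print(2 + 3 * 2 ** 2)  # 14: exponent first, then multiplication, then addition
    print(10 - 4 - 3)      # 3: subtraction also resolves left to right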

Now realize… PEMDAS is arbitrary. Furthermore, in my experience as a mathematician, expressions like 8÷2×4 look absurdly contrived.

No professional mathematician would ever write something so obviously ambiguous. We would insert parentheses to indicate our meaning and to signal whether the division should be carried out first, or the multiplication.

The last time this came up on Twitter, I reacted with indignation: It seemed ridiculous that we spend so much time in our high-school curriculum on such sophistry. But now, having been enlightened by some of my computer-oriented friends on Twitter, I’ve come to appreciate that conventions are important, and lives can depend on them.

We know this whenever we take to the highway. If everyone else is driving on the right side of the road (as in the U.S.), you would be wise to follow suit. The same goes if everyone else is driving on the left, as in the United Kingdom. It doesn’t matter which convention is adopted, as long as everyone follows it.

Likewise, it’s essential that everyone writing software for computers, spreadsheets and calculators knows the rules for the order of operations and follows them. For the rest of us, the intricacies of PEMDAS are less important than the larger lesson that conventions have their place. They are the double-yellow line down the center of the road — an unending equals sign — and a joint agreement to understand one another, work together, and avoid colliding head-on.

Ultimately, 8 ÷ 2(2+2) is less a statement than a brickbat; it’s like writing the phrase “Eats shoots and leaves” and concluding that language is capricious. Well, yes, in the absence of punctuation, it is; that’s why we invented the stuff.

– Steven Strogatz is a professor of mathematics at Cornell and the author of “Infinite Powers: How Calculus Reveals the Secrets of the Universe.”

Ambiguous PEMDAS

Professor Oliver Knill addresses the same phenomenon here:

Even in mathematics, ambiguities can be hard to spot. The phenomenon seen here in arithmetic goes beyond the usual PEMDAS rule and illustrates an ambiguity which can lead to heated arguments and discussions.

What is 2x/3y-1 if x=9 and y=2?

Did you get 11 or 2? If you got 11, then you are in the BEMDAS camp; if you got 2, you are in the BEDMAS camp. In either case you can relax, because you have passed the test. If you got something else, though, you are in trouble! There are arguments for both sides. But first, a story… [and there is a very cool story here, click the link below. But here is the important conclusion]

The PEMDAS problem is not a “problem to be solved.” It is a matter of fact that there are different interpretations: a human, for example, reads x/yz with x=3, y=4 and z=5 as 3/20, while a machine (practically all programming languages) gives a different result.

There are authorities which have assigned rules (most pupils are taught PEMDAS), which is one reason why many humans, asked about 3/4*5, give 3/20, while most machines give 15/4:

I type this into Mathematica: x=3; y=4; z=5; x/y z and I get 15/4.

It is a linguistic problem, not a mathematical problem. A linguistic problem cannot be solved by imposing a new rule. The only way to solve the problem is to avoid it, and one avoids it by inserting brackets.

Ambiguous PEMDAS, from Oliver Knill at Harvard University
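
[Editor's note: for a cross-check of Knill's Mathematica result, here is the same expression in Python (our addition), which also reads x/y z the way machines generally do:]

    x, y, z = 3, 4, 5
    print(x / y * z)  # 3.75, i.e. 15/4: the machine reads x/y z as (x/y)*z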

That Vexing Math Equation? Here’s an Addition

Steven Strogatz, professor of applied mathematics at Cornell University, looks at a similar problem and agrees that “questions” like these are deliberately badly written:

Recently I wrote about a math equation that had managed to stir up a debate online. The equation was this one:  8 ÷ 2(2+2) = ?

The issue was that it generated two different answers, 16 or 1, depending on the order in which the mathematical operations were carried out….

… The question was not meant to ask anything clearly. Quite the contrary, its obscurity seems almost intentional. It is certainly artfully perverse, as if constructed to cause mischief.

The expression 8 ÷ 2(2+2) uses parentheses – typically a tool for reducing confusion – in a jujitsu manner to exacerbate the murkiness. It does this by juxtaposing the numeral 2 and the expression (2+2), signifying implicitly that they are meant to be multiplied, but without placing an explicit multiplication sign between them. The viewer is left wondering whether to use the sophisticated convention for implicit multiplication from algebra class or to fall back on the elementary PEMDAS convention from middle school.

Picks: “So the problem, as posed, mixes elementary school notation with high school notation in a way that doesn’t make sense. People who remember their elementary school math well say the answer is 16. People who remember their algebra are more likely to answer 1.”

Much as we might prefer a clear-cut answer to this question, there isn’t one. You say tomato, I say tomahto. Some spreadsheets and software systems flatly refuse to answer the question – they balk at its garbled structure. That’s my instinct, too, and that of most mathematicians I’ve spoken with. If you want a clearer answer, ask a clearer question.

That Vexing Math Equation? Here’s an Addition, The New York Times, Aug 5, 2019
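
[Editor's note: Python behaves exactly as Strogatz describes, refusing the ambiguous form outright and answering only when the grouping is explicit. A small illustration of ours:]

    print(8 / 2 * (2 + 2))    # explicit operator: left to right, gives 16.0
    print(8 / (2 * (2 + 2)))  # explicit grouping: gives 1.0
    # print(8 / 2(2 + 2))     # no implicit multiplication in Python: "2(2+2)"
    #                         # parses as calling the number 2, a TypeError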


Programming Labs for Physics


These labs were designed by Prof. Chris Orban for Physics 1250 at The Ohio State University at Marion. They are useful at the high school and college level. No calculus knowledge or prior programming experience is required.

The nice part about these programming labs is that there is no software to install: the compiling, executing, and visualization all happen within your web browser! This is accomplished using a programming framework called p5.js (found at p5js.org), whose syntax is very similar to C/C++.

Introduction to the p5.js programming framework

Related video:

The Physics of Video Games! STEM coding.


What does it mean to divide a fraction by a fraction?


In this lesson from Virtual Nerd we’ll learn what it means.

What does it mean to divide a fraction by a fraction? Virtual Nerd
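
The punchline of the lesson: dividing by a fraction is the same as multiplying by its reciprocal. Python's fractions module is a handy way to check your work (our illustration, not Virtual Nerd's):

    from fractions import Fraction

    print(Fraction(3, 4) / Fraction(1, 2))  # 3/2: dividing by 1/2...
    print(Fraction(3, 4) * Fraction(2, 1))  # 3/2: ...is multiplying by 2/1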


 

The future of photography on phones depends on coding

Note to students: when we talk about coding, we mean computer programming (“writing code”). But more specifically, we mean code that uses sophisticated mathematics.

________________

From “The Future of Photography is Code”

Devin Coldewey, 10/22/2018

What’s in a camera? A lens, a shutter, a light-sensitive surface and, increasingly, a set of highly sophisticated algorithms. While the physical components are still improving bit by bit, Google, Samsung and Apple are increasingly investing in (and showcasing) improvements wrought entirely from code. Computational photography is the only real battleground now.

The reason for this shift is pretty simple: Cameras can’t get too much better than they are right now, or at least not without some rather extreme shifts in how they work. Here’s how smartphone makers hit the wall on photography, and how they were forced to jump over it.

Oppo N3 cellphone camera

The sensors in our smartphone cameras are truly amazing things. The work that’s been done by the likes of Sony, OmniVision, Samsung and others to design and fabricate tiny yet sensitive and versatile chips is really pretty mind-blowing. For a photographer who’s watched the evolution of digital photography from the early days, the level of quality these microscopic sensors deliver is nothing short of astonishing.

But there’s no Moore’s Law for those sensors. Or rather, just as Moore’s Law is now running into quantum limits at sub-10-nanometer levels, camera sensors hit physical limits much earlier. Think about light hitting the sensor as rain falling on a bunch of buckets; you can place bigger buckets, but there are fewer of them; you can put smaller ones, but they can’t catch as much each; you can make them square or stagger them or do all kinds of other tricks, but ultimately there are only so many raindrops and no amount of bucket-rearranging can change that.

Samsung Galaxy S camera sensor

Photo by Petar Milošević, from Wikimedia, commons.wikimedia.org/wiki/File:Samsung_Galaxy_S_camera_sensor.jpg

Sensors are getting better, yes, but not only is this pace too slow to keep consumers buying new phones year after year (imagine trying to sell a camera that’s 3 percent better), but phone manufacturers often use the same or similar camera stacks, so the improvements (like the recent switch to backside illumination) are shared amongst them. So no one is getting ahead on sensors alone.

[Image: grayscale cellphone camera sensors]

Image from FLIR Machine Vision, https://www.ptgrey.com/white-paper/id/10912

Perhaps they could improve the lens? Not really. Lenses have arrived at a level of sophistication and perfection that is hard to improve on, especially at small scale. To say space is limited inside a smartphone’s camera stack is a major understatement — there’s hardly a square micron to spare. You might be able to improve them slightly as far as how much light passes through and how little distortion there is, but these are old problems that have been mostly optimized.

The only way to gather more light would be to increase the size of the lens, either by having it A: project outwards from the body; B: displace critical components within the body; or C: increase the thickness of the phone. Which of those options does Apple seem likely to find acceptable?

In retrospect it was inevitable that Apple (and Samsung, and Huawei, and others) would have to choose D: none of the above. If you can’t get more light, you just have to do more with the light you’ve got.

Isn’t all photography computational?

The broadest definition of computational photography includes just about any digital imaging at all. Unlike film, even the most basic digital camera requires computation to turn the light hitting the sensor into a usable image. And camera makers differ widely in the way they do this, producing different JPEG processing methods, RAW formats and color science.

For a long time there wasn’t much of interest on top of this basic layer, partly from a lack of processing power. Sure, there have been filters, and quick in-camera tweaks to improve contrast and color. But ultimately these just amount to automated dial-twiddling.

The first real computational photography features were arguably object identification and tracking for the purposes of autofocus. Face and eye tracking made it easier to capture people in complex lighting or poses, and object tracking made sports and action photography easier as the system adjusted its AF point to a target moving across the frame.

These were early examples of deriving metadata from the image and using it proactively, either to improve that image or to feed forward to the next.

In DSLRs, autofocus accuracy and flexibility are marquee features, so this early use case made sense; but outside a few gimmicks, these “serious” cameras generally deployed computation in a fairly vanilla way. Faster image sensors meant faster sensor offloading and burst speeds, some extra cycles dedicated to color and detail preservation and so on. DSLRs weren’t being used for live video or augmented reality. And until fairly recently, the same was true of smartphone cameras, which were more like point and shoots than the all-purpose media tools we know them as today.

The limits of traditional imaging

Despite experimentation here and there and the occasional outlier, smartphone cameras are pretty much the same. They have to fit within a few millimeters of depth, which limits their optics to a few configurations. The size of the sensor is likewise limited — a DSLR might use an APS-C sensor 23 by 15 millimeters across, making an area of 345 mm2; the sensor in the iPhone XS, probably the largest and most advanced on the market right now, is 7 by 5.8 mm or so, for a total of 40.6 mm2.

Roughly speaking, it’s collecting an order of magnitude less light than a “normal” camera, but is expected to reconstruct a scene with roughly the same fidelity, colors and such — around the same number of megapixels, too. On its face this is sort of an impossible problem.

Improvements in the traditional sense help out — optical and electronic stabilization, for instance, make it possible to expose for longer without blurring, collecting more light. But these devices are still being asked to spin straw into gold.

Luckily, as I mentioned, everyone is pretty much in the same boat. Because of the fundamental limitations in play, there’s no way Apple or Samsung can reinvent the camera or come up with some crazy lens structure that puts them leagues ahead of the competition. They’ve all been given the same basic foundation.

All competition therefore comprises what these companies build on top of that foundation.

[Image: image signal processing in an Apple cellphone camera]

Image as stream

The key insight in computational photography is that an image coming from a digital camera’s sensor isn’t a snapshot, the way it is generally thought of. In traditional cameras the shutter opens and closes, exposing the light-sensitive medium for a fraction of a second. That’s not what digital cameras do, or at least not what they can do.

A camera’s sensor is constantly bombarded with light; rain is constantly falling on the field of buckets, to return to our metaphor, but when you’re not taking a picture, these buckets are bottomless and no one is checking their contents. But the rain is falling nevertheless.

To capture an image the camera system picks a point at which to start counting the raindrops, measuring the light that hits the sensor. Then it picks a point to stop. For the purposes of traditional photography, this enables nearly arbitrarily short shutter speeds, which isn’t much use to tiny sensors.

Why not just always be recording? Theoretically you could, but it would drain the battery and produce a lot of heat. Fortunately, in the last few years image processing chips have gotten efficient enough that they can, when the camera app is open, keep a certain duration of that stream — limited resolution captures of the last 60 frames, for instance. Sure, it costs a little battery, but it’s worth it.
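
[Editor's note: in code, that rolling stream can be as simple as a fixed-size buffer that silently discards old frames. A minimal sketch; the 60-frame figure comes from the article, and everything else here (names, structure) is hypothetical:]

    from collections import deque

    # Keep only the most recent 60 reduced-resolution frames.
    frame_buffer = deque(maxlen=60)

    def on_sensor_frame(frame):
        frame_buffer.append(frame)  # the oldest frame falls off automatically

    def capture():
        # A "shot" can now draw on frames that arrived *before* the shutter
        # press: zero shutter lag, HDR stacks, and so on.
        return list(frame_buffer)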

Access to the stream allows the camera to do all kinds of things. It adds context.

Context can mean a lot of things. It can be photographic elements like the lighting and distance to subject. But it can also be motion, objects, intention.

A simple example of context is what is commonly referred to as HDR, or high dynamic range imagery. This technique uses multiple images taken in a row with different exposures to more accurately capture areas of the image that might have been underexposed or overexposed in a single exposure. The context in this case is understanding which areas those are and how to intelligently combine the images together.

This can be accomplished with exposure bracketing, a very old photographic technique, but it can be accomplished instantly and without warning if the image stream is being manipulated to produce multiple exposure ranges all the time. That’s exactly what Google and Apple now do.
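
[Editor's note: a toy version of that combining step might look like the sketch below, a naive exposure fusion in Python/NumPy, emphatically not Google's or Apple's actual pipeline:]

    import numpy as np

    def naive_hdr_fuse(frames):
        """Blend differently exposed grayscale frames (floats in [0, 1]).

        Pixels near the middle of the exposure range get the most weight,
        so regions blown out in one frame are filled in from the others.
        """
        stack = np.stack(frames)                        # shape (n, H, W)
        weights = np.exp(-((stack - 0.5) ** 2) / 0.08)  # "well-exposedness"
        weights /= weights.sum(axis=0, keepdims=True)
        return (weights * stack).sum(axis=0)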

Something more complex is of course the “portrait mode” and artificial background blur or bokeh that is becoming more and more common. Context here is not simply the distance of a face, but an understanding of what parts of the image constitute a particular physical object, and the exact contours of that object. This can be derived from motion in the stream, from stereo separation in multiple cameras, and from machine learning models that have been trained to identify and delineate human shapes.

These techniques are only possible, first, because the requisite imagery has been captured from the stream in the first place (an advance in image sensor and RAM speed), and second, because companies developed highly efficient algorithms to perform these calculations, trained on enormous data sets and immense amounts of computation time.

What’s important about these techniques, however, is not simply that they can be done, but that one company may do them better than the other. And this quality is entirely a function of the software engineering work and artistic oversight that goes into them.

DxOMark did a comparison of some early artificial bokeh systems; the results, however, were somewhat unsatisfying. It was less a question of which looked better, and more of whether they failed or succeeded in applying the effect. Computational photography is in such early days that it is enough for the feature simply to work to impress people. As with a dog walking on its hind legs, we are amazed that it occurs at all.

But Apple has pulled ahead with what some would say is an almost absurdly over-engineered solution to the bokeh problem. It didn’t just learn how to replicate the effect — it used the computing power it has at its disposal to create virtual physical models of the optical phenomenon that produces it. It’s like the difference between animating a bouncing ball and simulating realistic gravity and elastic material physics.

Why go to such lengths? Because Apple knows what is becoming clear to others: that it is absurd to worry about the limits of computational capability at all. There are limits to how well an optical phenomenon can be replicated if you are taking shortcuts like Gaussian blurring. There are no limits to how well it can be replicated if you simulate it at the level of the photon.
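
[Editor's note: for contrast, the Gaussian-blur shortcut mentioned above is only a few lines, which is exactly why it hits a quality ceiling. A toy sketch, not any vendor's implementation:]

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fake_portrait_mode(image, subject_mask, sigma=8.0):
        """Blur everything, then paste the sharp subject back in.

        image: (H, W, 3) float array; subject_mask: (H, W) values in
        [0, 1], e.g. from a segmentation model.
        """
        background = gaussian_filter(image, sigma=(sigma, sigma, 0))
        m = subject_mask[..., None]
        return m * image + (1 - m) * background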

Similarly the idea of combining five, 10, or 100 images into a single HDR image seems absurd, but the truth is that in photography, more information is almost always better. If the cost of these computational acrobatics is negligible and the results measurable, why shouldn’t our devices be performing these calculations? In a few years they too will seem ordinary.

If the result is a better product, the computational power and engineering ability has been deployed with success; just as Leica or Canon might spend millions to eke fractional performance improvements out of a stable optical system like a $2,000 zoom lens, Apple and others are spending money where they can create value: not in glass, but in silicon.

Double vision

One trend that may appear to conflict with the computational photography narrative I’ve described is the advent of systems comprising multiple cameras.

This technique doesn’t add more light to the sensor — that would be prohibitively complex and expensive optically, and probably wouldn’t work anyway. But if you can free up a little space lengthwise (rather than depthwise, which we found impractical), you can put a whole separate camera right next to the first, one that captures photos extremely similar to those taken by the first.

Now, if all you want to do is re-enact Wayne’s World at an imperceptible scale (camera one, camera two… camera one, camera two…) that’s all you need. But no one actually wants to take two images simultaneously, a fraction of an inch apart.

These two cameras operate either independently (as wide-angle and zoom) or one is used to augment the other, forming a single system with multiple inputs.

The thing is that taking the data from one camera and using it to enhance the data from another is — you guessed it — extremely computationally intensive. It’s like the HDR problem of multiple exposures, except far more complex as the images aren’t taken with the same lens and sensor. It can be optimized, but that doesn’t make it easy.

So although adding a second camera is indeed a way to improve the imaging system by physical means, the possibility only exists because of the state of computational photography. And it is the quality of that computational imagery that results in a better photograph — or doesn’t. The Light camera with its 16 sensors and lenses is an example of an ambitious effort that simply didn’t produce better images, though it was using established computational photography techniques to harvest and winnow an even larger collection of images.

Light and code

The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices and everywhere that light is captured and turned into images.

Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: Glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.

What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, or pseudoscience babble on stage, or drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved onto a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.

_______________

Related articles

Your Smartphone Should Suck. Here’s Why It Doesn’t. (Wired magazine article)

Great images! How can we use slow shutter speed technique during day time?

Great images!! The Exposure Triangle – A Beginner’s Guide.

_______________

This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (added Pub. L. 94-553, Title I, §101, Oct. 19, 1976, 90 Stat. 2546)

 

Detecting genetic disorders with 3d face scans


Johan at the Phineas Gage Fan Club writes:

Following on from last week’s post on smile measuring software, The Scotsman (via Gizmodo) reports on the work by Hammond and colleagues at UCL, who are developing 3d face scans as a quick, inexpensive alternative to genetic testing. This is not as crazy as it sounds at first since it is known that in a number of congenital conditions, the hallmark behavioural, physiological or cognitive deficits are also (conveniently) accompanied by characteristic appearances. The classic example of this is Down syndrome, which you need no software to recognise. More examples appear in the figure above, where you can compare the characteristic appearances of various conditions to the unaffected face in the middle.

Hammond’s software can be used to identify 30 congenital conditions, ranging from Williams syndrome (a sure topic of a future post) to autism.

https://phineasgage.wordpress.com/2007/09/16/detecting-genetic-disorders-3d-face-scans/

Face scan Williams syndrome

Face scan Fragile X and Jacobson

========================================

Diagnostically relevant facial gestalt information from ordinary photos

Rare genetic disorders affect around 8% of people, many of whom live with symptoms that greatly reduce their quality of life. Genetic diagnoses can provide doctors with information that cannot be obtained by assessing clinical symptoms, and this allows them to select more suitable treatments for patients. However, only a minority of patients currently receive a genetic diagnosis.

Alterations in the face and skull are present in 30–40% of genetic disorders, and these alterations can help doctors to identify certain disorders, such as Down’s syndrome or Fragile X.

Extending this approach, Ferry et al. trained a computer-based model to identify the patterns of facial abnormalities associated with different genetic disorders. The model compares data extracted from a photograph of the patient’s face with data on the facial characteristics of 91 disorders, and then provides a list of the most likely diagnoses for that individual. The model used 36 points to describe the space, including 7 for the jaw, 6 for the mouth, 7 for the nose, 8 for the eyes and 8 for the brow.

This approach of Ferry et al. has three advantages. First, it provides clinicians with information that can aid their diagnosis of a rare genetic disorder. Second, it can narrow down the range of possible disorders for patients who have the same ultra-rare disorder, even if that disorder is currently unknown. Third, it can identify groups of patients who can have their genomes sequenced in order to identify the genetic variants that are associated with specific disorders.
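
As a rough cartoon of how a model can turn landmark points into a ranked list of diagnoses, consider the sketch below. This is our simplification for teaching purposes; Ferry et al.'s actual method is far more sophisticated:

    import numpy as np

    def rank_diagnoses(landmarks, disorder_means):
        """landmarks: (36, 2) array of facial points; disorder_means: dict
        mapping each disorder name to its mean landmark configuration.
        Returns disorder names ordered from most to least similar."""
        v = landmarks.ravel()
        distances = {name: np.linalg.norm(v - mean.ravel())
                     for name, mean in disorder_means.items()}
        return sorted(distances, key=distances.get)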

 

Quentin Ferry et al, eLife 2014;3:e02020

========================================

This App Uses Facial Recognition Software to Help Identify Genetic Conditions

A geneticist uploads a photo of a patient’s face, and Face2Gene gathers data and generates a list of possible syndromes

… Face2Gene, the tool Abdul-Rahman used, was created by the Boston startup, FDNA. The company uses facial recognition software to aid clinical diagnoses of thousands of genetic conditions, such as Sotos syndrome (cerebral gigantism), Kabuki syndrome (a complicated disorder that features developmental delay, intellectual disability and more) and Down syndrome.

This App Uses Facial Recognition Software to Help Identify Genetic Conditions, Smithsonian Magazine

 

Related resources

How phenotypes lead to genotypes (infographic?)

Scientific journal articles

Detecting Genetic Association of Common Human Facial Morphological Variation Using High Density 3D Image Registration
Shouneng Peng et al, PLoS Comput Biol. 2013 Dec; 9(12)

tba

Uses of imaginary numbers

I. What are imaginary numbers?

[Comic: Calvin and Hobbes on imaginary numbers]

(A) Ask your math teacher 😉 That’s a major part of high school math.

(B) See Ask Dr. Math: What is an imaginary number? What is i?

Better Explained: A Visual, Intuitive Guide to Imaginary Numbers

[Image: the number system: complex, imaginary, and rational numbers]

II. Are they “imaginary” or are they real in some sense?

How can one show that imaginary numbers really do exist? In the same way that one would show that fractions exist. So first, let’s show that fractions exist.

Of course, that’s something you know already, but the point is that exactly the same argument shows that imaginary numbers exist. (How can one show that imaginary numbers really do exist?, Univ. of Toronto, Philip Spencer)

Here’s a great video showing how imaginary numbers can be thought of as just as real as other numbers: imaginary numbers are not some wild invention; they are the deep and natural result of extending our number system. (Welch Labs)

III. How are imaginary numbers used?

(A) Alternating current circuits

[Image: AC generator: a wire moving through a magnetic field]

“The handling of the impedance of an AC circuit with multiple components quickly becomes unmanageable if sines and cosines are used to represent the voltages and currents.”

“A mathematical construct which eases the difficulty is the use of complex exponential functions.”
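
Here is the idea in miniature, with made-up component values, using Python's built-in complex numbers: each element's impedance becomes a single complex number, so a series circuit is just addition instead of trigonometric bookkeeping.

    import cmath
    import math

    R, L, C = 100.0, 0.25, 1e-6            # ohms, henries, farads (made up)
    w = 2 * math.pi * 60                   # 60 Hz angular frequency
    Z = R + 1j * w * L + 1 / (1j * w * C)  # series RLC: Z = R + jwL + 1/(jwC)
    I = 120.0 / Z                          # phasor current for a 120 V source
    print(abs(I), math.degrees(cmath.phase(I)))  # magnitude and phase shift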


(B) In economics

Economics calculator

Image from St. Lawrence University, Mathematics-Economics Combined Major

“Complex numbers and complex analysis do show up in Economic research. For example, many models imply some difference-equation in state variables such as capital, and solving these for stationary states can require complex analysis.”

and

“The application of complex numbers had been attempted in the past by various economists, especially for explaining economic dynamics and business fluctuations in the economic system. In fact, the cue was taken from electrical systems. Oscillations in the level of economic activity get represented by sinusoidal curves. The concept of the Keynesian multiplier and the concept of the accelerator were combined in models to trace the path of economic variables like income and employment over time. This is where complex numbers come in.”
(By sensekonomikx, Yahoo Answers, Complex numbers in Economics?)

 

IV. Why use imaginary math for real numbers?

Electrical engineers and economists study real world objects and get real world answers, yet they use complex functions with imaginary numbers. Couldn’t we just use “regular” math?

[Image: plotting imaginary numbers]

Image from Imaginary Numbers Are Real, Welch Labs

Answer:
Imaginary numbers transform complicated equations on the real x-y plane into simpler functions in the complex plane. This lets us turn hard problems into easier ones.
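
The geometric heart of the trick is that multiplying by i rotates a point a quarter turn, which you can check in one line of Python:

    z = 3 + 4j
    print(z * 1j)  # (-4+3j): the point (3, 4) rotates 90 degrees to (-4, 3)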

Here is an explanation from “Ask Dr. Math” (The Math Forum, hosted by the National Council of Teachers of Mathematics):

[Images: Dr. Math’s explanation, parts 1 and 2]

Examples of real world uses:

http://hyperphysics.phy-astr.gsu.edu/hbase/electric/impcom.html

Careers That Use Complex Numbers, by Stephanie Dube Dwilson

Imaginary numbers in real life: Ask Dr. Math

Imaginary numbers, Myron Berg, Dickinson State Univ.

 

V. The entire universe runs on complex numbers

If we look only at things in our everyday life – objects with masses larger than atoms, moving at speeds far lower than the speed of light – then we can pretend that the entire world is made of solid objects (particles) following more or less “common sense” rules – the classical laws of physics.

But there’s so much more to our universe – and when we look carefully, we find that literally all of our classical laws of physics are only approximations of more general, and often bizarre, laws – the laws of quantum mechanics. And QM follows math that uses complex numbers! When you have time, you might want to look at our intro to the development of QM, and at a deeper, high-school-level look at what QM really is.

Scott Aaronson writes about a central, hard-to-believe feature of quantum mechanics: “Nature is described not by probabilities (which are always nonnegative), but by numbers called amplitudes that can be positive, negative, or even complex.”

He points out that this weird reality seems to be a basic feature of the universe itself: “This transformation is just a mirror reversal of the plane. That is, it takes a two-dimensional Flatland creature and flips it over like a pancake, sending its heart to the other side of its two-dimensional body. But how do you apply half of a mirror reversal without leaving the plane? You can’t! If you want to flip a pancake by a continuous motion, then you need to go into … dum dum dum … THE THIRD DIMENSION. More generally, if you want to flip over an N-dimensional object by a continuous motion, then you need to go into the (N+1)st dimension. But what if you want every linear transformation to have a square root in the same number of dimensions? Well, in that case, you have to allow complex numbers. So that’s one reason God might have made the choice She did.”

 – PHYS771 Quantum Computing Since Democritus, Lecture 9: Quantum. Aaronson is Professor of Computer Science at The University of Texas at Austin.
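
The practical difference from ordinary probability is interference: amplitudes can cancel. A two-line illustration (our example, not Aaronson's):

    import math

    a1 = 1 / math.sqrt(2)     # amplitude for path 1
    a2 = -1 / math.sqrt(2)    # amplitude for path 2, with opposite sign
    print(abs(a1 + a2) ** 2)  # 0.0: the paths cancel, even though each alone
                              # would arrive with probability 0.5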

VI. Negative Probabilities

In 1942, Paul Dirac wrote a paper “The Physical Interpretation of Quantum Mechanics” where he introduced the concept of negative energies and negative probabilities: “Negative energies and probabilities should not be considered as nonsense. They are well-defined concepts mathematically, like a negative of money.”

The idea of negative probabilities later received increased attention in physics, and particularly in quantum mechanics. Richard Feynman argued that no one objects to using negative numbers in calculations: although “minus three apples” is not a valid concept in real life, negative money is valid. Similarly, he argued that negative probabilities, as well as probabilities above unity, could possibly be useful in probability calculations.

  • Wikipedia, Negative Probabilities, 3/18

John Baez (a mathematical physicist at U.C. Riverside in California) writes about a related, very weird topic: negative probabilities.

The physicists Dirac and Feynman, both bold when it came to new mathematical ideas, both said we should think about negative probabilities. What would it mean to say something had a negative chance of happening?

I haven’t seen many attempts to make sense of this idea… or even work with this idea. Sometimes in math it’s good to temporarily put aside making sense of ideas and just see if you can develop rules to consistently work with them. For example: the square root of -1. People had to get good at using it before they understood what it really was: a rotation by a quarter turn in the plane. Here’s an interesting attempt to work with negative probabilities:

Gábor J. Székely, Half of a coin: negative probabilities, Wilmott Magazine (July 2005), p.66–68

He uses rigorous mathematics to study something that sounds absurd: half a coin. Suppose you make a bet with an ordinary fair coin, where you get 1 dollar if it comes up heads and 0 dollars if it comes up tails. Next, suppose you want this bet to be the same as making two bets involving two separate ‘half coins’. Then you can do it if a half coin has infinitely many sides numbered 0,1,2,3, etc., and you win n dollars when side number n comes up….

… and if the probability of side n coming up obeys a special formula…

and if this probability can be negative whenever n is even!

This seems very bizarre, but the math is solid, even if the problem of interpreting it may drive you insane.
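
You can verify the “solid math” yourself. The half-coin's side-probabilities are, up to a factor of the square root of 2 (which cancels when we square), the coefficients of the binomial series for sqrt(1+x), and convolving them with themselves recovers the honest fair coin. A quick check of ours in exact arithmetic:

    from fractions import Fraction

    def sqrt1px_coeffs(n_terms):
        """Coefficients of the binomial series for sqrt(1 + x)."""
        coeffs, c = [], Fraction(1)
        for n in range(n_terms):
            coeffs.append(c)
            c = c * (Fraction(1, 2) - n) / (n + 1)
        return coeffs

    a = sqrt1px_coeffs(8)
    print(a)  # 1, 1/2, -1/8, 1/16, -5/128, ...: negative for even n >= 2
    # Squaring the half-coin (self-convolution, then a factor of 1/2) must
    # give back the fair coin: probability 1/2 on sides 0 and 1, 0 elsewhere.
    print([sum(a[k] * a[n - k] for k in range(n + 1)) / 2 for n in range(8)])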

By the way, it’s worth remembering that for a long time mathematicians believed that negative numbers made no sense. As late as 1758 the British mathematician Francis Maseres claimed that negative numbers “… darken the very whole doctrines of the equations and make dark of the things which are in their nature excessively obvious and simple.”

So opinions on these things can change. By the way: experts on probability theory will like Székely’s use of ‘probability generating functions’. Experts on generating functions and combinatorics will like how the probabilities for the different sides of the half-coin coming up involve the Catalan numbers.

Learning standards

Massachusetts Mathematics Curriculum Framework 2017

Number and Quantity Content Standards: The Complex Number System

A. Perform arithmetic operations with complex numbers.

B. Represent complex numbers and their operations on the complex plane.

C. Use complex numbers in polynomial identities and equations.

Common Core Mathematics

High School: Number and Quantity » The Complex Number System

 

 

The Black Swan, Nassim Taleb

In his book The Black Swan, Nassim Taleb develops two ideas, Mediocristan and Extremistan, to help explain his Black Swan Theory.

Mediocristan is where normal things happen, things that are expected, whose probabilities of occurring are easy to compute, and whose impact is not terribly huge. The bell curve and the normal distribution are emblems of Mediocristan. Low-impact changes have the highest probabilities of occurring, and huge, wide-impact changes have a very small probability of occurring.

Bell curve describing Mediocristan

Examples: Nature is full of things that follow a normal distribution. Height of humans is a simple example. If you take a few hundred people, and take their average height, there is no human whose height would significantly disrupt the average if added to the sample. Height/weight of people, or life expectancy, are from Mediocristan.

Properties: In Mediocristan, nothing is scalable, everything is constrained by boundary conditions, time, the limits of biological variation, the limits of hourly compensation, etc. Because of such constraints and the limits of our knowledge, random variation of attributes exists in Mediocristan, and can be usefully described by Gaussian probability models. In such “orderly” randomness models, probability distributions are such that no single instantiation of the value of an attribute can greatly affect the sum of all values in the distribution. Even the most extreme attribute values do not materially affect the mean value of a distribution, because the more extreme any value is, the more improbable it is that the extreme value will actually occur in nature.

Extremistan is a different beast. In Extremistan, nothing can be predicted accurately, and events that seemed unlikely or impossible occur frequently and have a huge impact.

Examples: In Extremistan, a single new observation can completely disrupt the aggregate. Imagine a room full of 30 random people. If you asked everyone their salary and calculated the average, the odds are the average would seem pretty reasonable. However, if you added Bill Gates to the room and then calculated the average salary, your average would jump up by a huge margin. One observation had a disproportionate effect on the average. This is Extremistan. Things like book sales, whether a movie becomes a hit, or a viral video on the internet all have similar characteristics, and therefore reside in Extremistan.

Properties: Winner-takes-all competitions, in which a small number of individuals or companies win everything. More inequality and less social justice are inevitable. Actions by individuals and small groups generate increasingly extreme results; as in: “eventually, one man might be able to declare war on the world and win.” Systemic events, both negative and positive, will occur at a high frequency, faster and with more extreme outcomes than ever before.

[Taleb’s central critique of bell curves is that they are often applied to areas subject to the dynamics of Extremistan, even though the bell curve accurately describes only Mediocristan.]

Source

https://assaadmouawad.wordpress.com/2011/11/11/mediocristan-vs-extremistan/

Nassim Nicholas Taleb, author of the bestselling book, The Black Swan, divides the world into 2 countries: Mediocristan and Extremistan. Looks like these two countries have completely different laws governing them. What are these laws? And how are they different? Let’s look at these questions in this article.

Mediocristan: Let’s start with Nassim’s favorite thought experiment. Assume that you round up a thousand people randomly selected from the general population and have them stand next to each other in one stadium. Imagine the heaviest person you can think of and add him to the sample. Assuming he weighs three times the average, between 400 and 500 pounds, he will represent a very small fraction of the total weight of the entire population (in this case, about half a percent). In Mediocristan, when your sample is large, no single instance will significantly change the aggregate or the total. So what belongs to Mediocristan? Things like height, weight, the income of a baker or a prostitute, car accidents, mortality rates, IQ, etc.

Strange country of Extremistan: Now let’s turn to the same people whom we lined up in a stadium, and add up their net worth. Add to them the net worth of Bill Gates, which according to Wikipedia is $58 billion. Now ask the same question: how much of the total wealth would he represent? 99.9 percent? Indeed, all the others would represent no more than a rounding error next to his net worth. For someone’s weight to represent such a share, he would need to weigh fifty million pounds! The same thing can be observed with the book sales of randomly selected authors after adding J. K. Rowling to the list. In Extremistan, inequalities are such that one single observation can disproportionately impact the aggregate, or the total. Nassim calls such events/things black swans. Matters that belong to Extremistan are: wealth, book sales per author, name recognition as a “celebrity,” speakers of a language, damage caused by earthquakes, deaths in war, sizes of companies, financial markets, etc.

How does this help? Nassim observes that the law of averages, or bell-curve statistics, works well in Mediocristan. If friends from Mars were to visit Earth, they could check a small sample of people and learn a lot about the people of Mediocristan. However, if you try to apply the bell curve to Extremistan, it can get you in trouble. Let’s say you want to cross a river during your wildlife trek and you ask the local villager, “How deep is the river?” The villager says, “On average, 4 feet.” Now, in Extremistan, you don’t know whether that means 4 feet +/- 1 foot, or 4 feet with one or two places 50 feet deep. Thanks to the Satyam scam and the money I lost in a single day, it didn’t take me long to understand what a black swan means. Next time you apply bell-curve statistics to a decision (such as a stock purchase), ask whether you are applying the right law in the right land.
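
[Editor's note: the asymmetry is easy to reproduce. A small simulation of the two stadiums, with made-up distributions; the $58 billion figure is the one quoted above:]

    import random

    random.seed(0)
    weights = [random.gauss(170, 30) for _ in range(1000)]        # pounds
    wealth = [random.lognormvariate(11, 2) for _ in range(1000)]  # dollars, heavy-tailed

    heaviest = 500  # a 500-pound outlier joins the first stadium
    gates = 58e9    # Bill Gates joins the second

    print(heaviest / (sum(weights) + heaviest))  # ~0.003: the total barely moves
    print(gates / (sum(wealth) + gates))         # ~0.99: one person IS the total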

Source

http://www.catalign.in/2009/01/black-swan-and-laws-of-mediocristan-vs.html

In his remarkable book, “The Black Swan”, Taleb describes at length the characteristics of environments that can be subject to black swans (unforeseeable, high-impact events).

When we make a forecast, we usually explicitly or implicitly base it on an assumption of continuity in a statistical series. For example, a company building its sales forecast for next year considers past sales, estimates a trend based on these sales, makes some adjustments based on current circumstances and then generates a sales forecast. The hypothesis (or rather assumption, as it is rarely explicit) in this process is that each additional year is not fundamentally different from the previous years. In other words, the distribution of possible values for next year’s sales is Gaussian (or “normal”): the probability that sales are the same is very high; the probability of an extreme variation (doubling or dropping to zero) is very low. In fact, the higher the envisaged variation, the lower the probability that such variation will occur. As a result, it is reasonable to discard extreme values in the forecasts: no marketing director is working on an assumption of sales dropping to zero.

Now, the assumption that a Gaussian-shaped curve will be the best fit to the potential distribution of outcomes is just that: an assumption. It is based simply on observation of the past. Never before have our sales dropped by 20% or 50%, let alone 100%. Ten, 20 or 30 years of data can confirm this (an observation of the past over a large amount of data). But this is only an observation of the past, not a law of physics.

Now, if we reason theoretically, not historically, on sales trends, we must recognize that there are many situations in which sales can vary widely. A sudden boycott of our products, for example (Danish dairy products in the Middle East after the Muhammad cartoons), a tidal wave in Japan, which deprives us of an essential supplier, a technological breakthrough that makes our products obsolete (NCR in 1971), the collapse of the Euro, etc. Suddenly deprived of oxygen, our sales are collapsing.

This is the black swan. The reason is simple: sales, like many statistical series, do not follow a Gaussian distribution. The probability of a large variation may be relatively low, but in fact it cannot be calculated, because the distribution is unknown and cannot be estimated (this is what the economist Frank Knight calls true uncertainty). We can thus find ourselves in a year in which the extreme value radically changes the historical distribution. We are in the domain of “fat tails”; i.e., unlike normally distributed series, high values can have a high probability of occurring. …

Source

https://silberzahnjones.com/2011/11/10/welcome-to-extremistan/

A black swan is an unpredictable, rare, but nevertheless high-impact event. The concept is easily demonstrated and well known, but naming these events “black swans” was popularised by Nassim Nicholas Taleb in his book of the same name, which was described in The Sunday Times as one of the 12 most influential books since the Second World War.

http://rationalwiki.org/wiki/Black_swan