KaiserScience


Category Archives: Optics

The future of photography on phones depends on coding

Note to students: When we talk about coding, we mean computer programming (“writing code”). More specifically, we mean code that uses sophisticated mathematics.

________________

From “The Future of Photography is Code”

Devin Coldewey, 10/22/2018

What’s in a camera? A lens, a shutter, a light-sensitive surface and, increasingly, a set of highly sophisticated algorithms. While the physical components are still improving bit by bit, Google, Samsung and Apple are increasingly investing in (and showcasing) improvements wrought entirely from code. Computational photography is the only real battleground now.

The reason for this shift is pretty simple: Cameras can’t get too much better than they are right now, or at least not without some rather extreme shifts in how they work. Here’s how smartphone makers hit the wall on photography, and how they were forced to jump over it.

Oppo N3 cellphone camera

The sensors in our smartphone cameras are truly amazing things. The work that’s been done by the likes of Sony, OmniVision, Samsung and others to design and fabricate tiny yet sensitive and versatile chips is really pretty mind-blowing. For a photographer who’s watched the evolution of digital photography from the early days, the level of quality these microscopic sensors deliver is nothing short of astonishing.

But there’s no Moore’s Law for those sensors. Or rather, just as Moore’s Law is now running into quantum limits at sub-10-nanometer levels, camera sensors hit physical limits much earlier. Think about light hitting the sensor as rain falling on a bunch of buckets; you can place bigger buckets, but there are fewer of them; you can put smaller ones, but they can’t catch as much each; you can make them square or stagger them or do all kinds of other tricks, but ultimately there are only so many raindrops and no amount of bucket-rearranging can change that.

Samsung Galaxy S camera sensor

Photo by Petar Milošević, from Wikimedia, commons.wikimedia.org/wiki/File:Samsung_Galaxy_S_camera_sensor.jpg

Sensors are getting better, yes, but not only is this pace too slow to keep consumers buying new phones year after year (imagine trying to sell a camera that’s 3 percent better), but phone manufacturers often use the same or similar camera stacks, so the improvements (like the recent switch to backside illumination) are shared amongst them. So no one is getting ahead on sensors alone.

From “Photos on grayscale cellphone cameras”

Image from FLIR Machine Vision, https://www.ptgrey.com/white-paper/id/10912

Perhaps they could improve the lens? Not really. Lenses have arrived at a level of sophistication and perfection that is hard to improve on, especially at small scale. To say space is limited inside a smartphone’s camera stack is a major understatement — there’s hardly a square micron to spare. You might be able to improve them slightly as far as how much light passes through and how little distortion there is, but these are old problems that have been mostly optimized.

The only way to gather more light would be to increase the size of the lens, either by having it A: project outwards from the body; B: displace critical components within the body; or C: increase the thickness of the phone. Which of those options does Apple seem likely to find acceptable?

In retrospect it was inevitable that Apple (and Samsung, and Huawei, and others) would have to choose D: none of the above. If you can’t get more light, you just have to do more with the light you’ve got.

Isn’t all photography computational?

The broadest definition of computational photography includes just about any digital imaging at all. Unlike film, even the most basic digital camera requires computation to turn the light hitting the sensor into a usable image. And camera makers differ widely in the way they do this, producing different JPEG processing methods, RAW formats and color science.

For a long time there wasn’t much of interest on top of this basic layer, partly from a lack of processing power. Sure, there have been filters, and quick in-camera tweaks to improve contrast and color. But ultimately these just amount to automated dial-twiddling.

The first real computational photography features were arguably object identification and tracking for the purposes of autofocus. Face and eye tracking made it easier to capture people in complex lighting or poses, and object tracking made sports and action photography easier as the system adjusted its AF point to a target moving across the frame.

These were early examples of deriving metadata from the image and using it proactively, to improve that image or feed it forward to the next.

In DSLRs, autofocus accuracy and flexibility are marquee features, so this early use case made sense; but outside a few gimmicks, these “serious” cameras generally deployed computation in a fairly vanilla way. Faster image sensors meant faster sensor offloading and burst speeds, some extra cycles dedicated to color and detail preservation and so on. DSLRs weren’t being used for live video or augmented reality. And until fairly recently, the same was true of smartphone cameras, which were more like point and shoots than the all-purpose media tools we know them as today.

The limits of traditional imaging

Despite experimentation here and there and the occasional outlier, smartphone cameras are pretty much the same. They have to fit within a few millimeters of depth, which limits their optics to a few configurations. The size of the sensor is likewise limited — a DSLR might use an APS-C sensor 23 by 15 millimeters across, making an area of 345 mm2; the sensor in the iPhone XS, probably the largest and most advanced on the market right now, is 7 by 5.8 mm or so, for a total of 40.6 mm2.

Roughly speaking, it’s collecting an order of magnitude less light than a “normal” camera, but is expected to reconstruct a scene with roughly the same fidelity, colors and such — around the same number of megapixels, too. On its face this is sort of an impossible problem.
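A quick back-of-the-envelope check of those figures (a sketch only; the dimensions are the approximate values quoted above, not official specifications):

```python
# Rough light-gathering comparison between an APS-C sensor and a phone sensor.
aps_c_area = 23 * 15       # mm^2, typical APS-C DSLR sensor
phone_area = 7 * 5.8       # mm^2, approximate iPhone XS sensor

print(f"APS-C sensor area: {aps_c_area} mm^2")       # 345 mm^2
print(f"Phone sensor area: {phone_area:.1f} mm^2")   # 40.6 mm^2
print(f"Ratio: {aps_c_area / phone_area:.1f}x less collecting area")  # ~8.5x
```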

Improvements in the traditional sense help out — optical and electronic stabilization, for instance, make it possible to expose for longer without blurring, collecting more light. But these devices are still being asked to spin straw into gold.

Luckily, as I mentioned, everyone is pretty much in the same boat. Because of the fundamental limitations in play, there’s no way Apple or Samsung can reinvent the camera or come up with some crazy lens structure that puts them leagues ahead of the competition. They’ve all been given the same basic foundation.

All competition therefore comprises what these companies build on top of that foundation.


Image as stream

The key insight in computational photography is that an image coming from a digital camera’s sensor isn’t a snapshot, the way it is generally thought of. In traditional cameras the shutter opens and closes, exposing the light-sensitive medium for a fraction of a second. That’s not what digital cameras do, or at least not what they can do.

A camera’s sensor is constantly bombarded with light; rain is constantly falling on the field of buckets, to return to our metaphor, but when you’re not taking a picture, these buckets are bottomless and no one is checking their contents. But the rain is falling nevertheless.

To capture an image the camera system picks a point at which to start counting the raindrops, measuring the light that hits the sensor. Then it picks a point to stop. For the purposes of traditional photography, this enables nearly arbitrarily short shutter speeds, which isn’t much use to tiny sensors.

Why not just always be recording? Theoretically you could, but it would drain the battery and produce a lot of heat. Fortunately, in the last few years image processing chips have gotten efficient enough that they can, when the camera app is open, keep a certain duration of that stream — limited resolution captures of the last 60 frames, for instance. Sure, it costs a little battery, but it’s worth it.
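Conceptually, “keep a certain duration of that stream” is just a ring buffer that is overwritten continuously and only read out when the shutter fires. The sketch below is a generic illustration of that idea; the class and method names are made up, not any vendor’s actual camera API:

```python
from collections import deque

class FrameStream:
    """Ring buffer of the most recent (low-resolution) frames.

    Old frames silently fall off the far end, so memory use stays constant
    no matter how long the camera app stays open.
    """
    def __init__(self, max_frames=60):
        self.buffer = deque(maxlen=max_frames)

    def on_new_frame(self, frame):
        # Called continuously while the viewfinder is live.
        self.buffer.append(frame)

    def capture(self):
        # When the shutter fires, hand the recent history to the
        # processing pipeline instead of a single exposure.
        return list(self.buffer)
```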

Access to the stream allows the camera to do all kinds of things. It adds context.

Context can mean a lot of things. It can be photographic elements like the lighting and distance to subject. But it can also be motion, objects, intention.

A simple example of context is what is commonly referred to as HDR, or high dynamic range imagery. This technique uses multiple images taken in a row with different exposures to more accurately capture areas of the image that might have been underexposed or overexposed in a single exposure. The context in this case is understanding which areas those are and how to intelligently combine the images together.

This can be accomplished with exposure bracketing, a very old photographic technique, but it can be accomplished instantly and without warning if the image stream is being manipulated to produce multiple exposure ranges all the time. That’s exactly what Google and Apple now do.
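As a toy illustration of the “intelligently combine” step, here is a naive exposure merge that simply gives well-exposed pixels more weight. It is a sketch only; real pipelines align the frames, work on RAW sensor data, and use far more sophisticated weighting and tone mapping:

```python
import numpy as np

def naive_hdr_merge(frames):
    """Blend aligned exposures, weighting each pixel by how far it is from
    being crushed to black (0.0) or blown out to white (1.0)."""
    stack = np.stack([f.astype(float) for f in frames])
    weights = 1.0 - 2.0 * np.abs(stack - 0.5)   # mid-tones get weight ~1
    weights = np.clip(weights, 1e-3, 1.0)       # keep the division safe
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

# Example: a dark, a normal, and a bright exposure of the same (toy) scene.
scene = np.linspace(0.0, 1.0, 12).reshape(3, 4)
merged = naive_hdr_merge([scene * 0.3, scene * 0.7, np.clip(scene * 1.5, 0, 1)])
```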

Something more complex is of course the “portrait mode” and artificial background blur or bokeh that is becoming more and more common. Context here is not simply the distance of a face, but an understanding of what parts of the image constitute a particular physical object, and the exact contours of that object. This can be derived from motion in the stream, from stereo separation in multiple cameras, and from machine learning models that have been trained to identify and delineate human shapes.

These techniques are only possible, first, because the requisite imagery has been captured from the stream in the first place (an advance in image sensor and RAM speed), and second, because companies developed highly efficient algorithms to perform these calculations, trained on enormous data sets and immense amounts of computation time.
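The hard part is producing the subject mask; the final compositing step is comparatively simple. The sketch below assumes the mask already exists and uses a plain Gaussian blur as a stand-in for the background rendering (which, as discussed below, is exactly the kind of shortcut the more ambitious systems try to move beyond):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_portrait_mode(image, subject_mask, blur_sigma=8.0):
    """Blur everything outside the subject mask (grayscale image in [0, 1]).

    subject_mask is 1.0 on the subject and 0.0 on the background; on a real
    phone it would come from depth data plus a learned segmentation model.
    """
    blurred = gaussian_filter(image, sigma=blur_sigma)
    return subject_mask * image + (1.0 - subject_mask) * blurred

# Toy usage: a random "photo" with a rectangular subject in the middle.
photo = np.random.rand(120, 160)
mask = np.zeros_like(photo)
mask[30:90, 50:110] = 1.0
result = fake_portrait_mode(photo, mask)
```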

What’s important about these techniques, however, is not simply that they can be done, but that one company may do them better than the other. And this quality is entirely a function of the software engineering work and artistic oversight that goes into them.

DxOMark did a comparison of some early artificial bokeh systems; the results, however, were somewhat unsatisfying. It was less a question of which looked better, and more of whether they failed or succeeded in applying the effect. Computational photography is in such early days that it is enough for the feature to simply work to impress people. Like a dog walking on its hind legs, we are amazed that it occurs at all.

But Apple has pulled ahead with what some would say is an almost absurdly over-engineered solution to the bokeh problem. It didn’t just learn how to replicate the effect — it used the computing power it has at its disposal to create virtual physical models of the optical phenomenon that produces it. It’s like the difference between animating a bouncing ball and simulating realistic gravity and elastic material physics.

Why go to such lengths? Because Apple knows what is becoming clear to others: that it is absurd to worry about the limits of computational capability at all. There are limits to how well an optical phenomenon can be replicated if you are taking shortcuts like Gaussian blurring. There are no limits to how well it can be replicated if you simulate it at the level of the photon.

Similarly the idea of combining five, 10, or 100 images into a single HDR image seems absurd, but the truth is that in photography, more information is almost always better. If the cost of these computational acrobatics is negligible and the results measurable, why shouldn’t our devices be performing these calculations? In a few years they too will seem ordinary.

If the result is a better product, the computational power and engineering ability has been deployed with success; just as Leica or Canon might spend millions to eke fractional performance improvements out of a stable optical system like a $2,000 zoom lens, Apple and others are spending money where they can create value: not in glass, but in silicon.

Double vision

One trend that may appear to conflict with the computational photography narrative I’ve described is the advent of systems comprising multiple cameras.

This technique doesn’t add more light to the sensor — that would be prohibitively complex and expensive optically, and probably wouldn’t work anyway. But if you can free up a little space lengthwise (rather than depthwise, which we found impractical) you can put a whole separate camera right by the first that captures photos extremely similar to those taken by the first.

Now, if all you want to do is re-enact Wayne’s World at an imperceptible scale (camera one, camera two… camera one, camera two…) that’s all you need. But no one actually wants to take two images simultaneously, a fraction of an inch apart.

These two cameras operate either independently (as wide-angle and zoom) or one is used to augment the other, forming a single system with multiple inputs.

The thing is that taking the data from one camera and using it to enhance the data from another is — you guessed it — extremely computationally intensive. It’s like the HDR problem of multiple exposures, except far more complex as the images aren’t taken with the same lens and sensor. It can be optimized, but that doesn’t make it easy.

So although adding a second camera is indeed a way to improve the imaging system by physical means, the possibility only exists because of the state of computational photography. And it is the quality of that computational imagery that results in a better photograph — or doesn’t. The Light camera with its 16 sensors and lenses is an example of an ambitious effort that simply didn’t produce better images, though it was using established computational photography techniques to harvest and winnow an even larger collection of images.

Light and code

The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices and everywhere that light is captured and turned into images.

Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: Glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.

What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, or pseudoscience babble on stage, or drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved onto a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.

_______________

Related articles

Your Smartphone Should Suck. Here’s Why It Doesn’t. (Wired magazine article)

Great images! How can we use slow shutter speed technique during day time?

Great images!! The Exposure Triangle – A Beginner’s Guide.

_______________

This website is educational. Materials within it are being used in accord with the Fair Use doctrine, as defined by United States law.
§107. Limitations on Exclusive Rights: Fair Use. Notwithstanding the provisions of section 106, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use, the factors to be considered shall include: the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. (Added Pub. L. 94-553, Title I, § 101, Oct. 19, 1976, 90 Stat. 2546)

 

Seaport Academy explores with microscopes

At Seaport Academy, science education isn’t about drills and worksheets. We motivate students with hands-on manipulatives, interactive apps, three-dimensional animations, connections to the world around them, and labs. Here we’re learning how to explore the microscopic world with a microscope.

Seaport class microscope

Microscope insect bee leg
We examine animal fur, scales and skin, plant pollen, seeds and leaves, and insect parts.

Here we see a student’s point-of-view when discovering the anatomy of a honeybee leg.

Compound microscope

Used when a specimen is translucent (some light passes through it)

Usually higher power 10x to 300x

The observer sees all the way through the specimen being studied.

Has more than two sets of lenses.

Has an eyepiece lens (or ocular) and two or more sets of objective lenses; the total magnification is the eyepiece power times the objective power (see the short example below).

They sit on a nosepiece that can revolve.

The specimen is placed on the stage of this microscope.
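A quick illustration of how those powers combine (a sketch; the lens powers below are typical classroom values, not the specs of any particular microscope):

```python
# Total magnification = eyepiece (ocular) power x objective power.
eyepiece_power = 10                  # a common 10x ocular
objective_powers = [4, 10, 40]       # lenses on the revolving nosepiece

for obj in objective_powers:
    total = eyepiece_power * obj
    print(f"{eyepiece_power}x eyepiece with {obj}x objective -> {total}x total")
# 40x, 100x and 400x total magnification
```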

Parts of the microscope

Label Microscope parts

  1. eyepiece (ocular) – where you look through to see the image

  2. body tube – Holds the eyepiece and connects it down to the objectives

  3. fine adjustment knob – Moves the body of the microscope up/down more slowly; fine control. Gets the specimen exactly focused. We only use this after we first use the coarse adjustment knob.

  4. nosepiece – rotating piece at the bottom of the body tube. Lets us choose between several lenses (objectives.)

  5. high power objective — used for high power magnification (the longer objective lens)

  6. low power objective — used for low power magnification

  7. diaphragm – controls amount of light going through the specimen

  8. light/mirror – source of light, usually found near the base of the microscope.

  9. base – supports the microscope

  10. coarse adjustment knob — Moves body of the microscope up/down more quickly; Gets specimen approximately focused.

  11. arm – Holds main part of the microscope to the base.

  12. stage clips – hold the slide in place.

  13. inclination joint – used to tilt the microscope

Learning Standards

College Board Standards for College Success: Science

LSM-PE.2.1.2 Gather data, based on observations of cell functions made using a microscope or on cell descriptions obtained from print material, that can be used as evidence to support the claim that there are a variety of cell types.

LSM-PE.2.2.1 Describe, based on observations of cells made using a microscope and on information gathered from print and electronic resources, the internal structures (and the functions of these structures) of different cell types (e.g., amoeba, fungi, plant root, plant leaf, animal muscle, animal skin).

2006 Massachusetts Science and Technology/Engineering Curriculum Framework

Inquiry, Experimentation, and Design in the Classroom: SIS2. Design and conduct scientific investigations. Properly use instruments, equipment, and materials (e.g., scales, probeware, meter sticks, microscopes, computers) including set-up, calibration (if required), technique, maintenance, and storage.

China’s Floating City Mirage

China’s Floating City – Was this a real mirage, a misinterpretation of a reflection, or a hoax?

from “Floating Cities are Generally not Fata Morgana Mirage.” Discussion by Mick West, Oct 20, 2015, on Metabunk.org.

A video is being widely shared on social media (and the “weird news” sections of more traditional media) claiming to show the image of an impossibly large city rising above the fog in the city of Foshan (佛山), Guangdong province, China. Here is a composite image from the video.

Mirage hoax China city

Some have said this is an example of a fata morgana, a type of mirage where light is bent through the atmosphere in such a way as to create the illusion of buildings on the horizon.

This is utterly impossible in this case, as fata morgana only creates a very thin strip of such an illusion very close to the horizon, and appears small and far away. It does not create images high in the sky.

Fata Morgana Mirage in Greenland by Jack Stephens

Besides, a fata morgana might create the illusion of buildings by stretching landscape features, or it might distort existing buildings. But what it cannot do is create a perfect image of existing nearby buildings, complete with windows.

China floating city illusion

It is important to note that no expert has actually looked at this video and said it was a fata morgana.

The second and more common type of “floating city” illusion involves buildings that are simply rising up out of clouds or low fog, and hence appear to be floating above them. This has led to “floating city” stories in the past, as in this recent example, also from China.

China city in clouds

This is simply a photo of buildings across the river, but when cropped it appears as if they are floating, which led to all kinds of wild stories of “ghost cities”.

This actually came from mistranslations of the original news reports, where local people (who knew exactly what they were looking at) were simply marveling at how pretty the scene looked, with the buildings appearing to float above clouds.

Could the Foshan video be of real buildings obscured by clouds? It does not appear so. Look at some real buildings in Foshan (and keep in mind it’s not entirely clear if Foshan is the actual setting of either the top or the bottom of the video).

Consider what it would take for these buildings to appear like they do in the video, with the road beneath them. The scale is simply impossible. The image has to be composited somehow, and the possibilities are:

  • Computer generated buildings spliced into the video of the road.

  • Two different videos spliced together

  • The video is shot through glass, and the buildings are behind the camera, or to the side (with the glass at around 45°, like a half-open window/door)

It’s unfortunate that many people leap for the “fata morgana” or other mirage explanation when it’s quite clear that this is far too high in the sky to be anything like that.

Types of floating-city illusions and hoaxes

Resources

https://www.metabunk.org/floating-cities-are-generally-not-fata-morgana-mirages.t6922/

http://www.cnn.com/2015/10/20/world/china-floating-city-video-feat/index.html

https://www.snopes.com/floating-city-china/

An Introduction to Mirages, Andrew T. Young

Fata Morgana between the Continental Divide and the Missouri River

Learning Standards

2016 Massachusetts Science and Technology/Engineering Curriculum Framework

HS-PS4-3. Evaluate the claims, evidence, and reasoning behind the idea that electromagnetic radiation can be described by either a wave model or a particle model, and that for some situations involving resonance, interference, diffraction, refraction, or the photoelectric effect, one model is more useful than the other.

A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas (2012)

Core Idea PS4: Waves and Their Applications in Technologies for Information Transfer
When a wave passes an object that is small compared with its wavelength, the wave is not much affected; for this reason, some things are too small to see with visible light, which is a wave phenomenon with a limited range of wavelengths corresponding to each color. When a wave meets the surface between two different materials or conditions (e.g., air to water), part of the wave is reflected at that surface and another part continues on, but at a different speed. The change of speed of the wave when passing from one medium to another can cause the wave to change direction or refract. These wave properties are used in many applications (e.g., lenses, seismic probing of Earth).

The wavelength and frequency of a wave are related to one another by the speed of travel of the wave, which depends on the type of wave and the medium through which it is passing. The reflection, refraction, and transmission of waves at an interface between two media can be modeled on the basis of these properties.

All electromagnetic radiation travels through a vacuum at the same speed, called the speed of light. Its speed in any given medium depends on its wavelength and the properties of that medium. At the surface between two media, like any wave, light can be reflected, refracted (its path bent), or absorbed. What occurs depends on properties of the surface and the wavelength of the light.

SAT Subject Area Test in Physics

Waves and optics:

  • Reflection and refraction, such as Snell’s law and changes in wavelength and speed
  • Ray optics, such as image formation using pinholes, mirrors, and lenses

 


Soundly Proving the Curvature of the Earth at Lake Pontchartrain

Excerpted from an article by Mick West

A classic experiment to demonstrate the curvature of a body of water is to place markers (like flags) a fixed distance above the water in a straight line, and then view them along that line in a telescope. If the water surface is flat then the markers will appear also in a straight line. If the surface of the water is curved (as it is here on Earth) then the markers in the middle will appear higher than the markers at the ends.

Here’s a highly exaggerated diagram of the effect by Alfred Russel Wallace in 1870, superimposed over an actual photograph.

Lake Pontchartrain power lines demonstrating the curvature Metabunk

This is a difficult experiment to do as you need a few miles for the curvature to be apparent. You also need the markers to be quite high above the surface of the water, as temperature differences between the water and the air tend to create significant refraction effects close to the water.

However Youtuber Soundly has found a spot where there’s a very long line of markers permanently fixed at constant heights above the water line, clearly demonstrating the curve. It’s a line of power transmission towers at Lake Pontchartrain, near New Orleans, Louisiana.

The line of power lines is straight, and the towers are all the same size and the same height above the water. They are also very tall, and form a straight line nearly 16 miles long, far better than any experiment one could set up on a canal or a lake. You just need to get into a position where you can see along the line of towers, and then use a powerful zoom lens to look along the line to make any curve apparent.
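As a rough check on why a 16-mile line of towers is long enough to show the curve, one can compute how high the midpoint of a circular arc sits above a straight chord drawn between the two end towers. This is a sketch, assuming a mean Earth radius of about 6371 km and ignoring atmospheric refraction:

```python
import math

R = 6_371_000.0              # mean Earth radius, meters
line_length = 16 * 1609.34   # ~16 miles of towers, in meters

half_chord = line_length / 2.0
# Height of the arc's midpoint above a straight chord between the end towers
# (the "sagitta" of the circular arc).
bulge = R - math.sqrt(R**2 - half_chord**2)

print(f"Expected hump at the midpoint: about {bulge:.0f} m")   # roughly 13 m
```

A hump of roughly 13 meters spread over 16 miles is subtle, which is why the zoomed-in, compressed perspective described below is needed to make it obvious.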

One can see quite clearly in the video and photos that there’s a curve. Soundly has gone to great lengths to provide multiple videos and photos of the curve from multiple perspectives. They all show the same thing: a curve.

Lake Pontchartrain curve around Earth

One objection you might make is that the towers could be curving to the right. However the same curve is apparent from both sides, so it can only be curving over the horizon.


People have asked why the curve is so apparent in one direction, but not in the other. The answer is compressed perspective. Here’s a physical example:


Compressed perspective on a car

That’s my car, the roof of which is slightly curved both front to back and left to right. I’ve put some equal sized chess pawns on it in two straight lines. If we step back a bit and zoom in we get:

Compressed perspective on a car II

Notice a very distinct curve from the white pieces, but the “horizon” seems to barely curve at all.

Similarly in the front-back direction, where there’s an even greater curve:

Compressed perspective on a car III

There’s a lot more discussion with photos here: Soundly Proving the Curvature of the Earth at Lake Pontchartrain.

Lord Of The Rings Optics challenge

A great physics problem for senior year students:

In J. R. R. Tolkien’s The Lord of the Rings (volume 2, p. 32), Legolas the Elf claims to be able to accurately count horsemen and discern their hair color (yellow) 5 leagues away on a bright, sunny day.

“Riders!” cried Aragorn, springing to his feet. “Many riders on swift steeds are coming towards us!”
“Yes,” said Legolas, “there are one hundred and five. Yellow is their hair, and bright are their spears. Their leader is very tall.”
Aragorn smiled. “Keen are the eyes of the Elves,” he said.
“Nay! The riders are little more than five leagues distant,” said Legolas.

Make appropriate estimates and argue that Legolas must have very strange-looking eyes, have some means of non-visual perception, or have made a lucky guess. (1 league ~ 3.0 mi.)

On land, the league is most commonly defined as three miles, though the length of a mile could vary from place to place and depending on the era.
At sea, a league is three nautical miles (3.452 miles; 5.556 kilometres).

Several solutions are possible, depending on the estimating assumptions

Figure: the eye focusing rays of light (labeled diagram)

When parallel light waves strike a concave lens, the wave that hits the lens surface at a right angle goes straight through, but light waves striking the surface at other angles diverge. In contrast, light waves striking a convex lens converge at a single point called the focal point. The distance from the center of the lens to the focal point is the focal length. Both the cornea and the lens of the eye have convex surfaces and help to focus light rays onto the retina. The cornea provides most of the refraction, but the curvature of the lens can be adjusted for near and far vision.

I.

By Chad Orzel, Associate Professor in the Department of Physics and Astronomy at Union College in Schenectady, NY

The limiting factor here is the wave nature of light– light passing through any aperture will interfere with itself, and produce a pattern of bright and dark spots.
So even an infinitesimally small point source of light will appear slightly spread out, and two closely spaced point sources will begin to run into one another.
The usual standard for determining whether two nearby sources can be distinguished from one another is the Rayleigh criterion:

Rayleigh Criterion for a circular aperture

sin θ = 1.22 λ / D

where θ is the angular separation between the two objects, λ is the wavelength of the light, and D is the diameter of the (circular) aperture through which the light passes.
To get better resolution, you need either a smaller wavelength or a larger aperture.

Legolas says that the riders are “little more than five leagues distant.”
A league is something like three miles, which would be around 5000 meters, so let’s call it 25,000 meters from Legolas to the Riders.
Visible light has an average wavelength of around 500 nm, which is a little more green than the blond hair of the Riders, but close enough for our purposes.

The sine of a small angle can be approximated by the angle itself.

The angle = the size of the separation between objects divided by the distance from the objects to the viewer.

Putting it all together, Legolas’s pupils would need to be 0.015 m in diameter.
That’s a centimeter and a half, which is reasonable, provided he’s an anime character. I don’t think Tolkien’s Elves are described as having eyes the size of teacups, though.

We made some simplifying assumptions to get that answer, but relaxing them only makes things worse. Putting the Riders farther away, and using yellower light would require Legolas’s eyes to be even bigger. And the details he claims to see are almost certainly on scales smaller than one meter, which would bump things up even more.
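Here is that estimate as a few lines of Python (a sketch using the round numbers above: 500 nm light, 25,000 m distance, and roughly 1 m as the scale of the detail to be resolved):

```python
wavelength = 500e-9     # m, greenish visible light
distance   = 25_000.0   # m, "little more than five leagues"
detail     = 1.0        # m, rough scale of the features to be resolved

theta = detail / distance                  # small-angle approximation
pupil_diameter = 1.22 * wavelength / theta

print(f"Required pupil diameter: {pupil_diameter * 100:.1f} cm")   # ~1.5 cm
```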

Any mathematical objections to these assumptions? Sean Barrett writes:

“The sine of a small angle can be approximated by the angle itself, which in turn is given, for this case, by the size of the separation between objects divided by the distance from the objects to the viewer.”

Technically this is not quite right; the separation divided by the distance is not the angle itself, but rather the tangent of the angle. (SOHCAHTOA: sin = opposite/hypotenuse; tangent = opposite/adjacent.)

Because the cos of a very small angle is very nearly 1, however, the tangent is just as nearly equal to the angle as is the sine. But that doesn’t mean you can just skip that step. And there’s really not much need to even mention the angle; with such a very tiny angle, clearly the hypotenuse and the adjacent side have essentially the same length (the distance to either separated point is also essentially 25K meters), and so you can correctly say that the sine itself is in this case approximated by the separation divided by the distance, and never mention the angle at all.

(You could break out a calculator to be on the safe side, but if you’re going to do that you need to know the actual formulation to compute the angle, not compute it as opposite/adjacent! But, yes, both angle (in radians) and the sine are also 1/25000 to about 10 sig figs.)
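If you do break out an interpreter, the agreement is easy to confirm numerically (a quick sketch):

```python
import math

ratio = 1 / 25000            # separation divided by distance
angle = math.atan(ratio)     # the exact angle, in radians

print(f"ratio      = {ratio:.12e}")
print(f"angle      = {angle:.12e}")
print(f"sin(angle) = {math.sin(angle):.12e}")
print(f"tan(angle) = {math.tan(angle):.12e}")
# All four agree to roughly 10 significant figures at an angle this small.
```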

II. Another solution

Using the Rayleigh Criterion. In order for two things, x distance apart, to be discernible as separate, at an angular distance θ, to an instrument with a circular aperture with diameter a:

θ > arcsin(1.22 λ/a)

5 leagues is approximately 24000 m.
Assume that each horse is ~2 m apart from the others.
So arctan (1/12000) ≅ θ.
We can use the small-angle approximation (sin(θ) ≅ tan(θ) ≅ θ when θ is small)
So we get 1/12000 ≅ 1.22 λ/a

Yellow light has wavelengths between 570 and 590 nm, so we’ll use 580.

a ≅ 1.22 × (580×10⁻⁹ m) × 12000 ≅ 0.0085 m.

8 mm is about as far as a human pupil will dilate, so for Legolas to have pupils this big in broad daylight, he must be pretty odd-looking.
Edit: The book is Six Ideas that Shaped Physics: Unit Q, by Thomas Moore

III. Great discussion on the Physics StackExchange

Could Legolas actually see that far? Physics StackExchange discussion

Here, Kyle Oman writes:

For a human-like eye, which has a maximum pupil diameter of about 9 mm, and choosing the shortest wavelength in the visible spectrum of about 390 nm, the angular resolution works out to about 5.3×10⁻⁵ (radians, of course).

At a distance of 24 km, this corresponds to a linear resolution (θd, where d is the distance) of about 1.2 m. So counting mounted riders seems plausible since they are probably separated by one to a few times this resolution.

Comparing their heights which are on the order of the resolution would be more difficult, but might still be possible with dithering.

Does Legolas perhaps wiggle his head around a lot while he’s counting? Dithering only helps when the image sampling (in this case, by elven photoreceptors) is worse than the resolution of the optics. Human eyes apparently have an equivalent pixel spacing of something like a few tenths of an arcminute, while the diffraction limited resolution is about a tenth of an arcminute, so dithering or some other technique would be necessary to take full advantage of the optics.

An interferometer has an angular resolution equal to a telescope with a diameter equal to the separation between the two most widely separated detectors. Legolas has two detectors (eyeballs) separated by about 10 times the diameter of his pupils, 75 mm or so at most. This would give him a linear resolution of about 15 cm at a distance of 24 km, probably sufficient to compare the heights of mounted riders.

However, interferometry is a bit more complicated than that. With only two detectors and a single fixed separation, only features with angular separations equal to the resolution are resolved, and direction is important as well.

If Legolas’ eyes are oriented horizontally, he won’t be able to resolve structure in the vertical direction using interferometric techniques. So he’d at the very least need to tilt his head sideways, and probably also jiggle it around a lot (including some rotation) again to get decent sampling of different baseline orientations. Still, it seems like with a sufficiently sophisticated processor (elf brain?) he could achieve the reported observation.
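Plugging in the numbers from this answer (a sketch that treats the two eyes as a two-element interferometer with the resolution of a single 75 mm aperture, as the answer assumes, at 390 nm and 24 km):

```python
wavelength = 390e-9    # m, shortest visible wavelength
baseline   = 0.075     # m, rough pupil-to-pupil separation
distance   = 24_000.0  # m, to the riders

theta = 1.22 * wavelength / baseline        # treat the eye pair as one aperture
print(f"Angular resolution: {theta:.1e} rad")
print(f"Linear resolution at 24 km: {theta * distance * 100:.0f} cm")   # ~15 cm
```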

Luboš Motl points out some other possible difficulties with interferometry in his answer, primarily that the combination of a polychromatic source and a detector spacing many times larger than the observed wavelength lead to no correlation in the phase of the light entering the two detectors. While true, Legolas may be able to get around this if his eyes (specifically the photoreceptors) are sufficiently sophisticated so as to act as a simultaneous high-resolution imaging spectrometer or integral field spectrograph and interferometer. This way he could pick out signals of a given wavelength and use them in his interferometric processing.

A couple of the other answers and comments mention the potential difficulty drawing a sight line to a point 24 km away due to the curvature of the Earth. As has been pointed out, Legolas just needs to have an advantage in elevation of about 90 meters (the radial distance from a circle 6400 km in radius to a tangent 24 km along the circumference; Middle-Earth is apparently about Earth-sized, or may be Earth in the past, though I can’t really nail this down with a canonical source after a quick search). He doesn’t need to be on a mountaintop or anything, so it seems reasonable to just assume that the geography allows a line of sight.

Finally a bit about “clean air”. In astronomy (if you haven’t guessed my field yet, now you know…) we refer to distortions caused by the atmosphere as “seeing”.

Seeing is often measured in arcseconds (3600 arcsec = 60 arcmin = 1°), referring to the limit imposed on angular resolution by atmospheric distortions.

The best seeing, achieved from mountaintops in perfect conditions, is about 1 arcsec, or in radians 4.8×10⁻⁶. This is about the same angular resolution as Legolas’ amazing interferometric eyes.

I’m not sure what seeing would be like horizontally across a distance of 24 km. On the one hand there is a lot more air than looking up vertically; the atmosphere is thicker than 24 km but its density drops rapidly with altitude. On the other hand the relatively uniform density and temperature at fixed altitude would cause less variation in refractive index than in the vertical direction, which might improve seeing.

If I had to guess, I’d say that for very still air at uniform temperature he might get seeing as good as 1 arcsec, but with more realistic conditions with the Sun shining, mirage-like effects probably take over limiting the resolution that Legolas can achieve.

 

IV. Also on StackExchange, the famous Luboš Motl writes:

Let’s first substitute the numbers to see what is the required diameter of the pupil according to the simple formula:

θ = 1.22 × 0.4 μm / D = 2 m / 24 km
I’ve substituted the minimal (violet…) wavelength because that color allowed me a better resolution, i.e. a smaller θ. The height of the knights is two meters.
Unless I made a mistake, the diameter D is required to be 0.58 centimeters. That’s completely sensible because the maximally opened human pupil is 4-9 millimeters in diameter.
Just like the video says, the diffraction formula therefore marginally allows to observe not only the presence of the knights – to count them – but marginally their first “internal detailed” properties, perhaps that the pants are darker than the shirt. However, to see whether the leader is 160 cm or 180 cm is clearly impossible because it would require the resolution to be better by another order of magnitude. Just like the video says, it isn’t possible with the visible light and human eyes. One would either need a 10 times greater eye and pupil; or some ultraviolet light with 10 times higher frequency.
It doesn’t help one to make the pupils narrower because the resolution allowed by the diffraction formula would get worse. The significantly blurrier images are not helpful as additions to the sharpest image. We know that in the real world of humans, too. If someone’s vision is much sharper than the vision of someone else, the second person is pretty much useless in refining the information about some hard-to-see objects.

The atmospheric effects are likely to worsen the resolution relatively to the simple expectation above. Even if we have the cleanest air – it’s not just about the clean air; we need the uniform air with a constant temperature, and so on, and it is never so uniform and static – it still distorts the propagation of light and implies some additional deterioration. All these considerations are of course completely academic for me who could reasonably ponder whether I see people sharply enough from 24 meters to count them. 😉

Even if the atmosphere worsens the resolution by a factor of 5 or so, the knights may still induce the minimal “blurry dots” at the retina, and as long as the distance between knights is greater than the distance from the (worsened) resolution, like 10 meters, one will be able to count them.

In general, the photoreceptor cells are indeed dense enough so that they don’t really worsen the estimated resolution. They’re dense enough so that the eye fully exploits the limits imposed by the diffraction formula, I think. Evolution has probably worked up to the limit because it’s not so hard for Nature to make the retinas dense and Nature would be wasting an opportunity not to give the mammals the sharpest vision they can get.

Concerning the tricks to improve the resolution or to circumvent the diffraction limit, there aren’t almost any. The long-term observations don’t help unless one could observe the location of the dots with the precision better than the distance of the photoreceptor cells. Mammals’ organs just can’t be this static. Image processing using many unavoidably blurry images at fluctuating locations just cannot produce a sharp image.

The trick from the Very Large Array doesn’t work, either. It’s because the Very Large Array only helps for radio (i.e. long) waves, so that the individual elements in the array measure the phase of the wave, and the information about the relative phase is used to sharpen the information about the source. The phase of the visible light – unless it’s coming from lasers, and even in that case, it is questionable – is completely uncorrelated in the two eyes because the light is not monochromatic and the distance between the two eyes is vastly greater than the average wavelength. So the two eyes only have the virtue of doubling the overall intensity, and of giving us 3D stereo vision. The latter is clearly irrelevant at the distance of 24 kilometers, too. The angles at which the two eyes are looking to see the 24 km distant object are measurably different from the parallel directions. But once the muscles adapt to these slightly non-parallel angles, what the two eyes see from the 24 km distance is indistinguishable.

 

V. Analyzed in “How Far Can Legolas See?” by minutephysics (Henry Reich)

Ray Tracing

This lesson is from Rick Matthews, Professor of Physics, Wake Forest University.

Lesson 1, convex lens
The object is far from the lens.

Convex Lens ray tracing GIF

 

Lesson 2, convex lens
The object is near the lens.

Convex Lens ray tracing GIF Object near lens

The rules for concave lenses are similar:

A horizontal ray is refracted outward, as if emanating from the near focal point.

A ray that strikes the middle of the lens continues in a straight line.

A ray coming from the object, aimed at the far focal point, will leave the lens horizontal (parallel to the axis).

Lesson 3, concave lens.
Note that object placement has little effect on the nature of the image.
The rays diverge.

Concave Lens ray tracing GIF

__________________________

In every case:

if the rays leaving the lens actually intersect then the image is real.

If the rays leaving the lens diverge then someone looking back through the lens
would see a virtual image:
Your mind would extrapolate where you think the image should be,
even though one isn’t really there, as shown below with the dotted lines.
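These diagrams can be summarized with the thin-lens equation, 1/f = 1/do + 1/di, where f is positive for a convex (converging) lens and negative for a concave (diverging) lens; a positive image distance means a real image, and a negative one means a virtual image. A short sketch with illustrative numbers:

```python
def image_distance(f_cm, object_distance_cm):
    """Thin-lens equation 1/f = 1/do + 1/di, solved for the image distance di.
    Convention: f > 0 for a convex (converging) lens, f < 0 for a concave
    (diverging) lens; di > 0 means a real image, di < 0 a virtual image."""
    return 1.0 / (1.0 / f_cm - 1.0 / object_distance_cm)

for f, d_o in [(10, 30), (10, 5), (-10, 30)]:
    d_i = image_distance(f, d_o)
    kind = "real" if d_i > 0 else "virtual"
    print(f"f = {f:+d} cm, object at {d_o} cm -> image at {d_i:+.1f} cm ({kind})")
# +10 cm lens, object well beyond f -> +15.0 cm (real image, as in Lesson 1)
# +10 cm lens, object inside f      -> -10.0 cm (virtual image)
# -10 cm concave lens               ->  -7.5 cm (virtual image, as in Lesson 3)
```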

concave-lens

image from Giancoli Physics, 6th edition

http://users.wfu.edu/matthews/courses/tutorials/RayTrace/RayTracing.html

 

Light pollution

Dark sky over Los Angeles during a power outage

Source: http://www.pbs.org/seeinginthedark/astronomy-topics/light-pollution.html

This is what we would see on a night without clouds, if there were no light pollution:

Some camera filters can filter out some of the glare

Dark sky photographed with an Astronomik filter, by Frank Hollis


from http://photography-on-the.net/forum/showthread.php?t=1063821

Here are the various levels of polluted vs dark skies:

The various levels of light-polluted vs. dark skies

This video from Sunchaser Pictures shows what LA night skies could look like without light pollution.

“An experimental timelapse created for SKYGLOWPROJECT.COM, a crowdfunded quest to explore the effects and dangers of urban light pollution in contrast with some of the most incredible Dark Sky Preserves in North America. Visit the site for more!
Inspired by the “Darkened Cities” stills project by Thierry Cohen, this short film imagines the galaxy over the glowing metropolis of Los Angeles through composited timelapse and star trail astrophotography. Shot by Gavin Heffernan (SunchaserPictures.com) and Harun Mehmedinovic (Bloodhoney.com). SKYGLOW is endorsed by the International Dark Sky Association”


____________________________________________

This lesson is from http://darksky.org/light-pollution/

Less than 100 years ago, everyone could look up and see a spectacular starry night sky. Now, millions of children across the globe will never experience the Milky Way where they live. The increased and widespread use of artificial light at night is not only impairing our view of the universe, it is adversely affecting our environment, our safety, our energy consumption and our health.

What is Light Pollution?

Most of us are familiar with air, water, and land pollution, but did you know that light can also be a pollutant?

The inappropriate or excessive use of artificial light – known as light pollution – can have serious environmental consequences for humans, wildlife, and our climate. Components of light pollution include:

  • Glare – excessive brightness that causes visual discomfort
  • Skyglow – brightening of the night sky over inhabited areas
  • Light trespass – light falling where it is not intended or needed
  • Clutter – bright, confusing and excessive groupings of light sources

Light pollution is a side effect of industrial civilization. Its sources include building exterior and interior lighting, advertising, commercial properties, offices, factories, streetlights, and illuminated sporting venues.

The fact is that much outdoor lighting used at night is inefficient, overly bright, poorly targeted, improperly shielded, and, in many cases, completely unnecessary. This light, and the electricity used to create it, is being wasted by spilling it into the sky, rather than focusing it on to the actual objects and areas that people want illuminated.

Glossary of Lighting Terms

How Bad is Light Pollution?

With much of the Earth’s population living under light-polluted skies, over lighting is an international concern. If you live in an urban or suburban area all you have to do to see this type of pollution is go outside at night and look up at the sky.

According to the 2016 groundbreaking “World Atlas of Artificial Night Sky Brightness,” 80 percent of the world’s population lives under skyglow.

In the United States and Europe 99 percent of the public can’t experience a natural night!

The 2003 blackout in Ontario, Canada, photographed by Todd Carlson

If you want to find out how bad light pollution is where you live, use this interactive map created from the “World Atlas” data, or the NASA Blue Marble Navigator for a bird’s-eye view of the lights in your town. Google Earth users can download an overlay also created from the “World Atlas” data. And don’t forget to check out the Globe at Night interactive light pollution map, created with eight years of data collected by citizen scientists.

Effects of Light Pollution

For three billion years, life on Earth existed in a rhythm of light and dark that was created solely by the illumination of the Sun, Moon and stars. Now, artificial lights overpower the darkness and our cities glow at night, disrupting the natural day-night pattern and shifting the delicate balance of our environment. The negative effects of the loss of this inspirational natural resource might seem intangible. But a growing body of evidence links the brightening night sky directly to measurable negative impacts on our environment, our safety, our energy consumption and our health.

Light pollution affects every citizen. Fortunately, concern about light pollution is rising dramatically. A growing number of scientists, homeowners, environmental groups and civic leaders are taking action to restore the natural night. Each of us can implement practical solutions to combat light pollution locally, nationally and internationally.

You Can Help!

The good news is that light pollution, unlike many other forms of pollution, is reversible and each one of us can make a difference! Just being aware that light pollution is a problem is not enough; the need is for action. You can start by minimizing the light from your own home at night. You can do this by following these simple steps.

  • Learn more. Check out our Light Pollution blog posts
  • Only use lighting when and where it’s needed
  • If safety is a concern, install motion-detector lights and timers
  • Properly shield all outdoor lights
  • Keep your blinds drawn to keep light inside
  • Become a citizen scientist and help measure light pollution

Learn more about Outdoor Lighting Basics

Then spread the word to your family and friends and tell them to pass it on. Many people either don’t know or don’t understand a lot about light pollution and the negative impacts of artificial light at night. By being an ambassador and explaining the issues to others you will help bring awareness to this growing problem and inspire more people to take the necessary steps to protect our natural night sky. IDA has many valuable resources to help you including Public Outreach Materials, How to Talk to Your Neighbor, Lighting Ordinances and Residential and Business Lighting.

Want to do more? Get Involved Now

The galaxy, viewed without light pollution