Episode 134: Dynamic Range T(h)errory

Download the Video! (37.8 MB, 19:53)

The German word “Terrorie” was coined by a kid in a Physics lesson of my late colleague Helmut Mohr in Hamburg. It is what it sounds like – and today’s video is full of it. No GIMP, no images, only the blackboard and me talking. Please consider this as a WARNING. 😉

We had a lively discussion in the forum about the theory behind making images, circling around the term “dynamic range”. There is a big difference between the light and dark parts of our world, often more than a camera can catch. And nearly always more than fits onto paper or a computer screen.
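To put rough numbers on that gap: dynamic range is usually expressed as the base-2 logarithm of the contrast ratio, i.e. in photographic stops. A minimal sketch in Python — the contrast ratios here are ballpark illustrations for a sunny scene and a glossy print, not measurements from the video:

```python
import math

def dynamic_range_stops(brightest, darkest):
    """Dynamic range as the base-2 log of the contrast ratio,
    i.e. the number of photographic stops between the brightest
    and darkest usable luminance values."""
    return math.log2(brightest / darkest)

# Illustrative contrast ratios (rough ballpark figures):
# a sunlit scene with deep shadows might span ~100,000:1,
# a glossy print perhaps ~100:1.
scene = dynamic_range_stops(100_000, 1)   # ~16.6 stops
paper = dynamic_range_stops(100, 1)       # ~6.6 stops
print(f"scene: {scene:.1f} stops, print: {paper:.1f} stops")
```

Ten stops of scene brightness simply have nowhere to go on paper — which is the whole problem the episode talks about.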

The process of squeezing this big range into the small output range is called post processing. Either you do it via RAW and GIMP – or the smart chip in your camera does it while saving your image as JPEG. What I forgot to say – if you do it, you can redo it. The RAW file still exists. If the chip does it, the RAW file is discarded and you are stuck with the version of the image made by the chip.
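The squeezing itself can be pictured as a mapping curve from scene luminance down to the 0–255 values of a JPEG. Here is a toy global operator in Python, assuming a simple logarithmic curve that gives every stop an equal share of the output range; real camera chips and raw converters use far more elaborate (and usually secret) curves:

```python
import math

def tone_map(luminance, darkest, brightest):
    """Map a scene luminance logarithmically into 0..255 so that
    each stop of scene brightness gets an equal share of the
    8-bit output range. A toy global operator, not what any
    camera chip actually does."""
    stops = math.log2(luminance / darkest)
    total = math.log2(brightest / darkest)
    return round(255 * stops / total)

# Seven orders of magnitude of scene luminance (about 23 stops)
# all land inside the 0..255 range of a JPEG:
for lum in (0.01, 1.0, 100.0, 10_000.0, 100_000.0):
    print(lum, "->", tone_map(lum, 0.01, 100_000.0))
# prints 0, 73, 146, 219, 255
```

Once the chip has run a curve like this and thrown the RAW data away, the original luminance values cannot be recovered — which is exactly why keeping the RAW file matters.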

I got a lot of information about this subject from a wonderful paper by Karl Lang at Adobe(R). Worth downloading and reading, even if you decide to skip the video this week.


02:04 Orders of Magnitude
04:00 How much light is in a scene? (Dynamic range ramp up)
06:00 There is no black and white
06:30 Dynamic range of a scene
06:50 Dynamic range of LCD and prints
08:50 Dynamic range of the camera
09:50 Exposure = slide the dynamic range
11:05 Post processing by the camera
12:15 RAW -> GIMP -> print
13:00 Slides and negatives in analog photography
15:05 A source at Adobe(R)
15:15 8 Bits – a problem (sometimes)
17:10 Why is it possible to make images? Because our eyes are no camera and our brain no computer.
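The chapter at 15:15 touches on why 8 bits can bite: brightening shadows after quantization stretches the few levels left there into visible bands. A tiny numeric illustration (pure arithmetic, not a figure from the video):

```python
# Suppose the darkest stops of a scene ended up in 8-bit values 0..7.
# Brightening them 4x in post spreads 8 distinct levels over 0..28 --
# the image can only show 8 tones there, where a higher-bit RAW file
# would have kept hundreds of intermediate values.
shadow_levels = list(range(8))            # every value the shadows can take in 8 bit
brightened = [v * 4 for v in shadow_levels]
print(brightened)                         # [0, 4, 8, 12, 16, 20, 24, 28] -- visible steps
```

The gaps between those output values are what shows up as banding or posterization in a pushed shadow area.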

Creative Commons License
“Meet the GIMP”  by Rolf Steinort is licensed under a Creative Commons Attribution-No Derivative Works 3.0 Germany License.
Permissions beyond the scope of this license may be available at http://meetthegimp.org.

15 thoughts on “Episode 134: Dynamic Range T(h)errory”

  1. Very nice look into the theory, definitely worth viewing.
    A small addition: There _are_ image sensors which can capture the same dynamic range as our eyes, even including light adaptation. However, it is not possible for each _sensor pit_ to have this dynamic range. Not even close. The trick is to combine different pit types of different sensitivity and different light intensity/output voltage relations on one sensor plane. This is called a single-shot sensor, compared to a multi-shot sensor — which is a normal one that you just expose several times (bracketing), as shown on Rolf’s blackboard, combining the results. The latter are far cheaper than the former, though.
    Our eyes are even a combination of both: cones and rods have different sensitivities and can “move” their active dynamic range along a very wide scale via a change in the receptor chemicals in the cells, but this takes some time, especially down the scale.

  2. If you want to have a look at such a sensor, the Cyphera NN1 front camera has one. This bot is from ESA and was designed for computer science classes in universities and also as an experimental device of all sorts, especially for testing machine vision algorithms. I’ve played with it quite a lot in university 😉 As far as I know it’s no longer manufactured and was never sold to individuals, but it is still around at several places and can be leased or borrowed, especially if you’re a student (I had it at home for almost half a year).
    The images from the camera are delivered as 32-bit float PGM streams of 256×256 pixels, once per second. It’s pitiful by today’s standards and it was not much back then (2005), but a dynamic range of 10⁸ is huge. The sensor has 12 different pit types of different sensitivity, but for the middle range two pits are always grouped together, with different polarization microlenses in front of them lowering the total range. This might seem strange, but it makes it possible to discern reflective surfaces like mirrors or transparent obstructions like glass doors and windows, so the bot will not crash into them.

  3. Rolf,
    I just want to say thank you. I learned a lot. It was extremely interesting and perhaps I should ask your forgiveness for saying so but it was also kind of poetic in a way. I look forward to the next one.

  4. Nachbarnebenan, get your facts straight before commenting.
    First, the robot is named Cyphera N11. Second, it is not from ESA. It was invented by ESA interim students and was allowed to carry the ESA logo. Third, only a few pairs were made, for selected universities. It was already outdated when it was invented, and it was never popular or widely used. Fourth, the camera has 248×248 resolution. Nobody ever used it because it has an obscure image format that no program knows. Today students work with Lego or other robots. Those have a real interface program for control, unlike that old modem-emulation stuff, they have a normal webcam everyone can use, and they are cheap.

  5. Pingback: Links 23/2/2010: OpenNode Beta, Drupal Adoption | Boycott Novell

  6. I just want to say thank you! For the first time I got the feeling of beginning to understand the matter… I’ve searched the internet for a long time, and read a lot, but with these 20 minutes of explanation I learned more than in 8 hours of using Google!

    Please, if you would do me this favour, continue in another episode! I would love to hear more about the technical background underlying the ‘arts’ part :-)! Finally somebody explaining the topic in a way I can understand. I don’t care if it’s correct in every detail; if I’m interested, I can read up on that myself. But breaking the important facts down to this level helps a great deal in having the necessary background to understand the details.

    In addition, it’s pretty nice to see the possible future of our schools ;-)… Chalk on a blackboard is always the right thing… I hate overhead projections!

    Regards, Axel

    PS: And by the way, not wanting to imply anything, but I know a lot of people using PS who would probably not be able to explain this matter in such an easy way…

  7. Just had to say this is the best way I have seen this theory explained, without twisting your brain into knots with the jargon that so many photographers like to use without even understanding it themselves. And I think you did a great job showing people what RAW actually is. So many people shoot RAW thinking it’s a magic file that will make everything better; this episode adds understanding to why people choose a format.

    As always love all of your camera theory thank you Rolf.

  8. Rolf, in this episode you have outdone yourself, congratulations. I wish I had had a teacher like you when I was a kid!

    I hope that you dare to venture further down this alley. After all, these are very fundamental facts of digital photography, and in the end even the best program (aka GIMP) cannot help me when I choose to ignore them.

    Regards, Stefan

    PS: A very easy and clever alternative to HDR à la qtpfsgui (where you either need a doctorate in mathematics or a teacher like you) is enfuse from http://enblend.sourceforge.net/ – check it out!

  9. Hi Rolf

    Thank you for a really nice show, I really enjoyed it.
    Thanks for the link to the ‘Rendering the print: the art of photography’ doc; it looks great too (I haven’t read it yet, though).


  10. Hi Rolf

    Great episode, I have really learned a lot today. Keep up the great work. 🙂
    And please, do some photo editing in future shows (maybe new tricks?). I really loved it when you showed us the basic stuff in the first episodes. Trying to imitate your work nearly two years ago was my first step in post processing; I kind of miss my German photography teacher. I miss Philippe too, he hasn’t been around for quite some time.

    Have a great weekend ahead of you.

  11. Pingback: Dynamic range in use | Ramon Sadornil

  12. Pingback: Entenent el rang dinàmic | Ramon Sadornil

  13. Pingback: MTG: Эпизоды 1 квартала 2010 « Цифровая фотография

Anything to add from your side of the computer?
