Not really. See, as you stated, conventional imaging uses a brightness value of 0-255 for each channel. We could use more if we wanted a REALLY pretty image, but regular display devices don’t have a large dynamic range; they can only show “LDR” images. When we use multiple images to create an HDR image, what we are doing is computing the radiance, the actual rate at which light arrives at the camera sensor. This information is usually stored in more than 8 bits (0-255), and in order to actually see the difference on a monitor it is usually [tone mapped](http://en.wikipedia.org/wiki/Tone_mapping), producing the images we are discussing. However, with a monitor capable of showing a high dynamic range, one would not need tone mapping, and the generated radiance image could be used to display the scene as it was when it was shot.
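To make this concrete, here is a minimal sketch of a global Reinhard-style tone mapping step, assuming you already have a floating-point radiance map `radiance` as an HxWx3 NumPy array (the function name and parameters are just illustrative, not from any particular library):

```python
import numpy as np

def tone_map_reinhard(radiance, gamma=2.2):
    """Map a floating-point radiance image to an 8-bit LDR image.

    Applies the simple global operator L / (1 + L), then gamma-encodes
    for display and quantizes to 0-255. `radiance` is an HxWx3 float
    array of scene radiance values (arbitrary scale).
    """
    # Compress the high range: bright values saturate toward 1.0
    compressed = radiance / (1.0 + radiance)
    # Gamma-encode for display and quantize to 8 bits per channel
    ldr = np.clip(compressed ** (1.0 / gamma), 0.0, 1.0)
    return (ldr * 255.0 + 0.5).astype(np.uint8)
```

This is only one of many possible operators; its job is just to squeeze the wide radiance range into the 0-255 range a regular monitor can show.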
Well, if HDR is a regular image, then why do we need it? Because our light-capture devices work like “photon buckets”: we open the camera hole and let some light in, it fills our buckets, and we calculate how much light arrived at each point. If we have bright elements in our scene, such as the sun in the sky or a lamp, they fill their buckets pretty quickly and start spilling into surrounding buckets, while we don’t have enough time to fill the buckets that collect light from the other parts of the scene. This is why we take several pictures with different exposure times, to estimate how much light comes from each point: a short exposure time captures the light of the brighter sources, and a long exposure time captures the really dark parts.
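Here is a rough sketch of how those exposures can be merged into a radiance estimate, assuming a linear sensor response (a real pipeline would first recover the camera response curve, e.g. Debevec & Malik 1997); the names `merge_exposures`, `images`, and `exposure_times` are my own placeholders:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Estimate a radiance map from multiple exposures of the same scene.

    `images` is a list of HxWx3 uint8 arrays, `exposure_times` the matching
    shutter times in seconds. Pixels near 0 or 255 get low weight because
    they correspond to empty or overflowing "photon buckets".
    """
    h, w, c = images[0].shape
    num = np.zeros((h, w, c), dtype=np.float64)
    den = np.zeros((h, w, c), dtype=np.float64)

    for img, t in zip(images, exposure_times):
        z = img.astype(np.float64) / 255.0
        # Hat-shaped weight: trust mid-range values, distrust extremes
        weight = 1.0 - np.abs(2.0 * z - 1.0)
        num += weight * (z / t)   # each exposure's own radiance estimate
        den += weight

    # Weighted average across exposures; guard against zero weight
    return num / np.maximum(den, 1e-8)
```

The short exposures contribute reliable values for the bright parts, the long ones for the dark parts, and the weighted average stitches them into a single radiance map, which can then be tone mapped as above.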