In recent years, the image sensors in digital cameras have improved in many ways. The increases in spatial resolution are well known. Equally important, but less obvious, are improvements in noise level and dynamic range. Digital cameras have now become so good that it is challenging to display the full richness of their image data. A low-noise imager can capture subtly varying detail that can only be seen by turning up the display contrast unnaturally high. A high dynamic range (HDR) imager presents the opposite problem: its data cannot be displayed without making the contrast unnaturally low. To convey visual information to a human observer, it is often necessary to present an image that is not physically correct, but which reveals all the visually important variations in color and intensity. A discipline known as computational photography has emerged at the intersection of photography, computer vision, and computer graphics, and the twin problems of detail enhancement and HDR compression (also called tone mapping) have become recognized as important topics.
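To make the display problem concrete, here is a minimal sketch (not from the original text) of a naive global tone-mapping operator in Python, assuming a floating-point HDR luminance image as input. Because it compresses the dynamic range uniformly, subtle local variations that occupied a narrow luminance band end up at unnaturally low display contrast, which is precisely the failure mode described above.

```python
import numpy as np

def naive_global_tonemap(hdr, gamma=2.2):
    """Naive global tone mapping: log compression plus display gamma.

    `hdr` is assumed to be a float array of scene-referred luminance
    spanning several orders of magnitude (a hypothetical input).
    """
    # Logarithmic compression squeezes the full dynamic range into [0, 1]...
    log_l = np.log1p(hdr)
    compressed = log_l / log_l.max()
    # ...but it does so uniformly across the image, so low-amplitude
    # local detail is mapped to a very small range of display values.
    return np.clip(compressed, 0.0, 1.0) ** (1.0 / gamma)
```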
Given an individual image patch, it is not difficult to find display parameters that effectively convey the local visual information. The problem is that this patch must coexist with all the other image patches around it, and these must join into a single, globally coherent image. Many techniques have been proposed to find an image that simultaneously displays everything clearly while still looking like a natural image. In struggling to reach a global compromise among all the local constraints, these techniques tend to introduce visually disturbing artifacts, such as halos around strong edges, or distortions of the apparent contrast, sharpness, and position of local features.
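To illustrate where halos come from, the following toy sketch (assuming NumPy and SciPy are available) amplifies deviations from a Gaussian-smoothed base layer; it is a stand-in for the general class of local operators discussed here, not any specific published method. Near a strong edge, the smooth base layer over- and undershoots, so boosting the residual produces the bright and dark bands on either side of the edge that are characteristic of halo artifacts.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast_boost(image, sigma=16.0, detail_gain=2.0):
    """Toy local operator: amplify deviations from a smoothed base layer.

    `image` is assumed to be a 2D float array. The parameter names here
    are illustrative, not drawn from any particular implementation.
    """
    base = gaussian_filter(image, sigma=sigma)   # smooth base layer
    detail = image - base                        # local detail residual
    # Near strong edges the Gaussian base over/undershoots, so the
    # amplified residual creates halo artifacts around those edges.
    return base + detail_gain * detail
```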