02.08.2021

The perfect photo. What is HDR+ and how to activate it on your smartphone


The cameras of Pixel and Nexus smartphones were never anything special, but over the past four years they have taken a powerful leap forward and now occupy top spots in camera rankings. Why did that happen? Because Google implemented a software post-processing engine called HDR+. In this article, we explain how it works and how to enable HDR+ on your smartphone, regardless of brand.

What is HDR

To fully understand how HDR+ works, you will first need to understand regular HDR.

The main problem with all smartphone cameras is the small size of the sensor (more precisely, of its photosites) and, as a result, insufficient dynamic range coverage. To compensate for this drawback, the HDR (High Dynamic Range) algorithm was developed. It works as follows: the camera takes a frame at the standard exposure level for the scene, then an underexposed frame in which only the areas blown out in the original image remain clearly visible, and then an overexposed frame in which only the dark details of the original are visible while everything else is blown out. The images are then aligned and merged by special algorithms, whose quality depends on the maker of the camera software. The result is a picture with good detail in both the shadows and the brighter areas.
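The merging step above can be sketched as a per-pixel weighted average, where each bracketed frame contributes most in the regions where it is well exposed. This is a minimal illustration, not Google's actual implementation; the Gaussian "well-exposedness" weight and its sigma are assumptions chosen for simplicity.

```python
import numpy as np

def exposure_weight(img, sigma=0.2):
    """Weight each pixel by how close it is to mid-gray (0.5):
    well-exposed pixels get high weight, blown-out or crushed ones get low weight."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(frames):
    """Merge a bracketed stack (normal / under- / over-exposed frames,
    pixel values in [0, 1]) into one image via weighted averaging."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    weights = [exposure_weight(f) for f in frames]
    total = np.sum(weights, axis=0) + 1e-12  # guard against division by zero
    return sum(w * f for w, f in zip(weights, frames)) / total
```

Real exposure-fusion pipelines also weight by local contrast and saturation and blend across multiple scales, but the core idea of letting each exposure "vote" where it is well exposed is the same.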

The disadvantages of HDR are obvious: the long capture time means that moving objects in the frame will ghost, and even slight camera shake will blur the picture.

What is HDR+

Clever engineers came up with an algorithm free of HDR's shortcomings; in fact, it shares little more than the name with HDR.

HDR+ stands for High Dynamic Range + Low Noise. It gained fame for a number of outstanding features: the algorithm eliminates noise with virtually no loss of detail, improves color reproduction (which is especially important in low light and at the edges of the frame), and at the same time greatly expands the dynamic range of the photo. Unlike standard HDR, HDR+ is almost immune to camera shake and motion in the frame.

The first HDR+-enabled smartphone was the Nexus 5. Owing to its imperfect white balance and small aperture (f/2.4), its camera was considered no more than a solid mid-ranger. Everything changed with the Android 4.4.2 update, which brought support for the HDR+ mode and remarkable night-shot quality. Although the night shots were not very bright across the entire frame, thanks to HDR+ they contained almost no noise, preserved fine detail, and had excellent (for smartphones in 2013) color reproduction.

History of HDR+

How did a company that had never built cameras come up with an algorithm that works wonders with the ordinary, by flagship standards, cameras of Nexus and Pixel phones?

It all started in 2011, when Sebastian Thrun, head of Google X (now simply X), was looking for a camera for the Google Glass augmented reality glasses. The weight and size requirements were very strict: the camera sensor had to be even smaller than those in smartphones, which would severely hurt dynamic range and produce a lot of noise in photos.

There was only one way out: improve the photos programmatically, with algorithms. The task fell to Marc Levoy, a lecturer in the computer science department at Stanford University and an expert in computational photography, who focused on software-based image capture and processing.

Marc formed a team known as Gcam, which began studying the Image Fusion method, based on combining a burst of images into a single frame. Photos processed with this method came out brighter and sharper, with little noise. In 2013 the technology debuted in Google Glass, and later that same year, renamed HDR+, it appeared in the Nexus 5.


How HDR+ works

What about expanding the dynamic range? As we already know, using a short shutter speed saves us from blown-out areas. All that remains is to remove the noise in the dark areas using the previously described algorithm.
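The noise-removal idea rests on a simple statistical fact: averaging an aligned burst of N identically exposed frames reduces random sensor noise by roughly a factor of sqrt(N). Here is a minimal sketch of that principle; the simulated scene, noise level, and burst size are illustrative assumptions, and real HDR+ additionally aligns the frames and rejects mismatched regions before merging.

```python
import numpy as np

def merge_burst(frames):
    """Average an already-aligned burst of identically exposed frames.
    Averaging N frames cuts random sensor noise by roughly sqrt(N)."""
    return np.mean(np.stack(frames), axis=0)

# Simulate a burst of 8 noisy short-exposure captures of a flat gray scene.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 0.3)
burst = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(8)]
merged = merge_burst(burst)
```

With 8 frames the residual noise should be roughly 0.05 / sqrt(8), about a third of a single frame's, which is why short, dark, noisy exposures can still yield a clean final image.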

At the final stage, the resulting image is post-processed: the algorithm reduces vignetting caused by light hitting the sensor at an oblique angle, corrects chromatic aberration by replacing pixels on high-contrast edges with neighboring ones, boosts green saturation, shifts blue and magenta hues toward blue, sharpens the image, and performs a number of other steps to improve the quality of the photo.
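Two of those post-processing steps, vignette correction and sharpening, can be sketched in a few lines. This is a simplified illustration, not Google's pipeline: the radial gain model and the box-blur unsharp mask are assumptions chosen for clarity.

```python
import numpy as np

def correct_vignette(img, strength=0.3):
    """Brighten frame edges to counter radial light falloff (vignetting).
    Hypothetical gain model: gain grows with squared distance from center."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r2 = ((y - cy) / h) ** 2 + ((x - cx) / w) ** 2
    return np.clip(img * (1 + strength * r2), 0, 1)

def unsharp_mask(img, amount=0.5):
    """Sharpen by adding back the difference between the image
    and a 3x3 box-blurred copy of it."""
    padded = np.pad(img, 1, mode='edge')
    blur = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0, 1)
```

A production pipeline would use a calibrated lens-shading map instead of an analytic gain curve, and a Gaussian rather than box blur, but the structure of both corrections is the same.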



On the left is a photo from the Samsung stock camera in HDR mode; on the right, a photo created with Gcam in HDR+. You can see that the algorithm sacrificed detail in the sky to render the objects on the ground.




