31.01.2021

How to choose a smartphone with a good camera: what does smartphone camera interpolation mean?


Sensors are devices that detect only grayscale information (gradations of light intensity, from completely white to completely black). To enable the camera to distinguish colors, an array of color filters is applied to the silicon using a photolithography process. In sensors that use microlenses, the filters are placed between the lens and the photodetector. In scanners that use trilinear CCDs (three adjacent CCDs that respond to red, green, and blue respectively), or in high-end digital cameras that also use three sensors, each sensor receives light filtered to its own color. (Note that some multi-sensor cameras use combinations of other colors in their filters rather than the standard three.) But in single-sensor devices, which include most consumer digital cameras, color filter arrays (CFAs) are used to handle the different colors.

In order for each pixel to record its own primary color, a filter of the corresponding color is placed above it. Before hitting the pixel, photons first pass through a filter that only transmits light of its own color; light of other wavelengths is simply absorbed by the filter. It has long been established that any color in the spectrum can be obtained by mixing just a few primary colors. There are three such colors in the RGB model.

Different color filter arrays have been developed for different applications, but in most digital camera sensors the most popular are Bayer-pattern filter arrays. This technology was invented at Kodak in the 1970s during research on the spatial separation of colors. In this pattern the filters are staggered in a checkerboard arrangement, and there are twice as many green filters as red or blue ones. The order is such that the red and blue filters sit between the green ones.

This ratio is explained by the structure of the human eye, which is more sensitive to green light. The checkerboard arrangement ensures consistent color no matter how the camera is held (vertically or horizontally). When information is read from such a sensor, the colors are recorded line by line: the first line would be BGBGBG, the next GRGRGR, and so on. This technique is called sequential RGB.

In CCD cameras, the three signals are combined not on the sensor but in the image processor, after the signal has been converted from analog to digital. In CMOS sensors, this combination can take place directly on the chip. In either case, the primary colors of each filter are mathematically interpolated from the colors of the neighboring filters. Note that in any image most of the points are a mixture of the primary colors, and only a few actually represent pure red, green, or blue.

For example, to determine the influence of the neighboring pixels on the color of a central one, a 3x3 block of pixels is processed during linear interpolation. Take the simplest case: three pixels in a row with blue, red, and blue filters (BRB), and suppose you are trying to obtain the resulting color value of the red pixel. If all colors are weighted equally, the color of the central pixel is calculated mathematically as two parts blue to one part red. In practice, even simple linear interpolation algorithms are considerably more complex and take into account the values of all surrounding pixels. If the interpolation works poorly, you get jagged edges at color transitions (or color artifacts appear).
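As a minimal sketch (in Python, with made-up values) of the averaging idea described above: a red-filtered pixel has no blue measurement of its own, so its blue value can be estimated from the blue neighbors. This is only an illustration, not a real demosaicing pipeline, which works on the full 2D Bayer mosaic with edge-aware weighting.

```python
# Hypothetical one-dimensional B-R-B example from the text: the red pixel
# has no blue measurement, so its blue value is estimated as the average
# of the two blue neighbours.
def interpolate_missing_channel(left_neighbor, right_neighbor):
    return (left_neighbor + right_neighbor) / 2.0

blue_left, blue_right = 100, 140            # measured blue values (made up)
red_center = 90                             # measured red value at the middle pixel
estimated_blue = interpolate_missing_channel(blue_left, blue_right)
print(red_center, estimated_blue)           # 90 120.0 -> red stays, blue is interpolated
```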

Note that the word "resolution" is used rather loosely in digital imaging. Purists (or pedants, whichever you prefer) familiar with photography and optics know that resolution is a measure of the ability of the human eye or an instrument to distinguish individual lines on a resolution target, such as an ISO test chart. But in the computer industry it is customary to call the number of pixels "resolution," and since this is the convention, we will follow it too. Indeed, even sensor developers call the pixel count the resolution.


Let's do the math

The image file size depends on the number of pixels (the resolution): the more pixels, the larger the file. For example, an image from a VGA-class sensor (640x480, or 307,200 active pixels) occupies about 900 kilobytes uncompressed (307,200 pixels x 3 bytes (R-G-B) = 921,600 bytes, which is approximately 900 kilobytes). An image from a 16 MP sensor will take up about 48 megabytes.
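The arithmetic above can be written out directly. A small sketch, assuming 3 bytes per pixel and a hypothetical 4608x3456 geometry for the 16 MP sensor:

```python
# Uncompressed size estimate: pixels x 3 bytes (one byte each for R, G, B).
def uncompressed_size_bytes(width, height, bytes_per_pixel=3):
    return width * height * bytes_per_pixel

vga = uncompressed_size_bytes(640, 480)
print(vga, "bytes =", vga / 1024, "KB")            # 921600 bytes = 900.0 KB

sixteen_mp = uncompressed_size_bytes(4608, 3456)   # ~15.9 million pixels
print(round(sixteen_mp / 1024 / 1024, 1), "MB")    # ~45.6 MiB, roughly the 48 MB (decimal) quoted above
```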

It would seem, then, that you could simply count the pixels in the sensor to determine the size of the resulting image. However, camera manufacturers quote a whole set of different numbers, and each time they claim that this one is the true resolution of the camera.

The total pixel count includes all the pixels physically present on the sensor, but only those that participate in capturing the image are counted as active. About five percent of all pixels do not take part in image capture: they are either defective pixels or pixels used by the camera for other purposes, for example masked pixels used to measure the dark-current level or to set the aspect ratio.

Aspect ratio is the ratio between the width and height of the sensor. In some sensors, for example those with a resolution of 640x480, this ratio is 1.33:1, which matches the aspect ratio of most computer monitors. This means that images created by such sensors fit the monitor screen exactly, without cropping. In many cameras the aspect ratio instead matches the format of traditional 35mm film, where the ratio is 1.5:1, which lets you produce prints of the standard size and shape.


Resolution Interpolation

In addition to optical resolution (the genuine ability of the pixels to respond to photons), there is also a resolution boosted in software and hardware by interpolation algorithms. As with color interpolation, resolution interpolation analyzes the data of neighboring pixels mathematically and creates intermediate values. Such "embedding" of new data can be done quite smoothly, and the interpolated values then lie somewhere between the real optical data. But sometimes the operation introduces noise, artifacts, and distortion, and the image quality only deteriorates as a result. That is why many pessimists believe that resolution interpolation is not a way of improving image quality at all, but merely a way of enlarging files. When choosing a device, pay attention to which resolution is quoted, and do not be overly impressed by a high interpolated resolution (it is marked as interpolated or enhanced).

Another image-processing step performed in software is sub-sampling, which is essentially the reverse of interpolation. It is carried out at the image-processing stage, after the data has been converted from analog to digital, and discards the data from some of the pixels. In CMOS sensors this operation can be performed on the chip itself, by temporarily disabling the readout of certain rows of pixels or by reading data only from selected pixels.

Downsampling serves two purposes. The first is to compact the data so that more images fit in a given amount of memory: the fewer the pixels, the smaller the file, the more pictures fit on a memory card or in internal memory, and the less often you have to transfer photos to your computer or change memory cards.
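A minimal sketch of sub-sampling by simply skipping rows and columns, assuming the image is a NumPy array. Real cameras may instead skip rows at sensor readout, or average neighboring pixels first to reduce aliasing.

```python
import numpy as np

def subsample(image, factor=2):
    # Keep only every `factor`-th row and column; no averaging is done here.
    return image[::factor, ::factor]

image = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a captured frame
small = subsample(image)
print(small.shape)                                # (240, 320, 3) - a quarter of the pixels
```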

The second purpose is to create images of a specific size for specific uses. A camera with a 2 MP sensor is quite capable of producing a standard 8x10-inch print, but if you try to e-mail such a photo it will noticeably inflate the size of the message. Downsampling lets you process the image so that it looks good on your friends' monitors (if fine detail is not the goal) while transferring quickly enough even over slow connections.

Now that we are familiar with how sensors work and how the image is obtained, let's look a little deeper and touch on some of the more complex situations that arise in digital photography.

Camera interpolation: what is it and why?

  1. The sensor is, say, 8 MP, but the picture itself comes out at 13 MP.
  2. Rather than running extra wiring to the sensor, the megapixels are simply inflated during processing.
  3. It is when one pixel is split into several, so that the image does not break into squares when enlarged. It adds no real resolution and smears the detail.
  4. Interpolation is finding unknown values from known ones; how close the interpolated photo comes to the original depends on how well the software is designed.
  5. The camera sensor is 8 MP, and the image is stretched to 13 MP. Turn it off, no question: the photos will be 13 MP, but the quality will be that of 8 MP (with more digital noise).
  6. The real resolution, measured in lines per millimeter without blurring, is at best around 2 MP anyway.
  7. Just bloated pixels. Many webcams, for example, claim 720p, but when you look at the settings the actual mode is 240x320.
  8. Interpolation, in the general sense, is the use of a simpler function to obtain a result as close as possible to the exact one, which would otherwise only be attainable with far more precise calculations. In this case, to put it simply, the programmers flatter themselves that pictures taken with a phone differ only slightly from those taken by more complex devices - cameras.

Image interpolation takes place at some stage in every digital photograph, whether it is demosaicing or scaling. It happens whenever you resize an image or remap it from one pixel grid to another. Resizing is needed when you increase or decrease the number of pixels, while remapping can occur in a wide variety of situations: correcting lens distortion, changing perspective, or rotating an image.


Even when the same resize or remap is applied to an image, the results can vary significantly depending on the interpolation algorithm. Since any interpolation is only an approximation, the image loses some quality every time it is interpolated. This chapter is intended to give a better understanding of what affects the result, and thereby help you minimize any loss in image quality caused by interpolation.

Concept

The essence of interpolation is to use the available data to estimate values at unknown points. For example, if you wanted to know the temperature at noon but only measured it at 11:00 and 13:00, you could estimate its value using linear interpolation:

If you had an additional measurement at half past eleven, you might notice that the temperature rose faster before noon and use that extra measurement for quadratic interpolation:

The more temperature measurements you have around noon, the more sophisticated (and presumably more accurate) your interpolation algorithm can be.
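A short sketch of the temperature example with made-up numbers: a linear estimate from the 11:00 and 13:00 readings, then a quadratic fit once the extra 11:30 reading is added.

```python
import numpy as np

# Linear estimate of the noon temperature from two measurements.
noon_linear = np.interp(12.0, [11.0, 13.0], [20.0, 24.0])            # 22.0

# Quadratic estimate using an additional 11:30 measurement.
coeffs = np.polyfit([11.0, 11.5, 13.0], [20.0, 21.5, 24.0], deg=2)
noon_quadratic = np.polyval(coeffs, 12.0)                            # ~22.67

print(noon_linear, round(noon_quadratic, 2))
```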

Image resizing example

Image interpolation works in two dimensions and tries to achieve the best approximation of a pixel's color and brightness based on the values of the surrounding pixels. The following example illustrates how scaling works:

[Figure: planar interpolation - original image, before and after scaling, and the result with no interpolation]

Unlike air-temperature fluctuations and the ideal gradient above, pixel values can change far more abruptly from point to point. As with the temperature example, the more you know about the surrounding pixels, the better the interpolation works. This is why results deteriorate rapidly as an image is stretched, and also why interpolation can never add detail that was not there in the first place.

Image rotation example

Interpolation also happens every time you rotate an image or change its perspective. The previous example was deceptive because it is a special case in which interpolators usually work quite well. The next example shows how quickly image detail can be lost:

[Figure: image degradation - original; rotated 45°; rotated 90° (no loss); rotated 45° twice; rotated 15° six times]

A 90° rotation is lossless because no pixel ends up straddling the boundary between two pixels (and therefore has to be split). Notice how much detail is lost on the first rotation and how quality continues to decline with each subsequent one. The conclusion is to avoid rotations as far as possible; if a crooked frame does need straightening, you should not rotate it more than once.
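As a rough illustration (not from the original article), the comparison can be reproduced with the Pillow library; input.jpg is a placeholder file name.

```python
from PIL import Image

img = Image.open("input.jpg")                    # placeholder file name

# One 90-degree rotation: pixels map onto the grid exactly, no resampling loss.
rot90 = img.rotate(90, expand=True)

# Two 45-degree rotations: each step resamples the image and loses detail.
rot45_twice = (img.rotate(45, resample=Image.BICUBIC, expand=True)
                  .rotate(45, resample=Image.BICUBIC, expand=True))

rot90.save("rot90.jpg")
rot45_twice.save("rot45_twice.jpg")              # visibly softer than rot90.jpg
```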

The results above use the so-called "bicubic" algorithm and show significant quality degradation. Notice how the overall contrast drops as color intensity is lost, and how dark halos appear around the light blue areas. The results can be considerably better depending on the interpolation algorithm and the subject being imaged.

Interpolation Algorithm Types

The generally accepted interpolation algorithms can be divided into two categories: adaptive and non-adaptive. Adaptive methods vary depending on the subject of interpolation (sharp edges, smooth texture), while non-adaptive methods treat all pixels the same.

Non-adaptive algorithms include: nearest neighbor, bilinear, bicubic, splines, cardinal sine (sinc), the Lanczos method, and others. Depending on their complexity, they use from 0 to 256 (or more) adjacent pixels when interpolating. The more adjacent pixels they include, the more accurate they can be, but this comes at the cost of a significant increase in processing time. These algorithms can be used both to remap and to scale an image.

Adaptive algorithms include many of the proprietary algorithms in licensed software such as Qimage, PhotoZoom Pro, Genuine Fractals, and others. Many of them apply different versions of their algorithms (based on pixel-by-pixel analysis) when they detect an edge, in order to minimize unsightly interpolation defects where they would be most visible. These algorithms are primarily designed to maximize defect-free detail in enlarged images, so some of them are unsuitable for rotating an image or changing its perspective.

Nearest Neighbor Method

This is the most basic of all interpolation algorithms and requires the least processing time, since it considers only one pixel: the one closest to the interpolation point. As a result, each pixel simply gets bigger.
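A minimal sketch of nearest-neighbor upscaling, assuming the image is a NumPy array of shape (height, width, 3); every output pixel just copies the closest source pixel.

```python
import numpy as np

def nearest_neighbor_resize(image, new_h, new_w):
    h, w = image.shape[:2]
    # Index of the nearest source pixel for every output row and column.
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return image[rows][:, cols]

img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
print(nearest_neighbor_resize(img, 960, 1280).shape)   # (960, 1280, 3)
```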

Bilinear interpolation

Bilinear interpolation considers a 2x2 square of known pixels surrounding the unknown one. The weighted average of these four pixels gives the interpolated value. The result looks considerably smoother than that of the nearest neighbor method.

In the simplest case, where all four known pixels are equal, the interpolated value is simply their sum divided by 4.
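A sketch of the 2x2 weighted average, with fx and fy as the fractional distances (0 to 1) of the unknown point from the top-left known pixel; the values are made up.

```python
def bilinear(top_left, top_right, bottom_left, bottom_right, fx, fy):
    # Interpolate horizontally along the top and bottom edges, then vertically.
    top = top_left * (1 - fx) + top_right * fx
    bottom = bottom_left * (1 - fx) + bottom_right * fx
    return top * (1 - fy) + bottom * fy

# Point exactly in the middle of four equal pixels: result is their sum / 4.
print(bilinear(100, 100, 100, 100, 0.5, 0.5))  # 100.0
print(bilinear(80, 120, 100, 140, 0.5, 0.5))   # 110.0 - weighted average of all four
```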

Bicubic interpolation

Bicubic interpolation goes one step further than bilinear by considering a 4x4 array of surrounding pixels, 16 in total. Since they lie at different distances from the unknown pixel, the nearest pixels are given more weight in the calculation. Bicubic interpolation produces noticeably sharper images than the previous two methods and is arguably the best compromise between processing time and output quality. For this reason it has become standard in many image-editing programs (including Adobe Photoshop), printer drivers, and in-camera interpolation.
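For comparison in practice, these filters are available in the Pillow library's resize(); this is only a sketch with a placeholder file name (photo.jpg), not the in-camera implementation.

```python
from PIL import Image

img = Image.open("photo.jpg")                 # placeholder file name
target = (img.width * 2, img.height * 2)

nearest  = img.resize(target, resample=Image.NEAREST)   # blocky
bilinear = img.resize(target, resample=Image.BILINEAR)  # smoother but soft
bicubic  = img.resize(target, resample=Image.BICUBIC)   # sharper, uses a 4x4 neighbourhood

bicubic.save("photo_2x_bicubic.jpg")
```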

Higher order interpolation: splines and sinc

There are many other interpolators that take into account more surrounding pixels and thus are more computationally intensive. These algorithms include splines and cardinal sine (sinc), and they retain most of the image information after interpolation. As a consequence, they are extremely useful when an image requires several rotations or perspective changes in separate steps. However, for single magnifications or rotations, such higher-order algorithms give little visual improvement with a significant increase in processing time. Moreover, in some cases, the cardinal sine algorithm performs worse on a smooth section than bicubic interpolation.

Observed interpolation defects

All non-adaptive interpolators try to find the optimal balance between three unwanted defects: boundary halos, blur, and aliasing.

Even the most advanced non-adaptive interpolators are always forced to increase or decrease one of the above defects at the expense of the other two, so at least one of them will be noticeable. Notice how the edge halo resembles the artifact produced by over-sharpening with an unsharp mask, and how it increases the apparent sharpness of the edge.

Adaptive interpolators may or may not create the above-described defects, but they can also generate textures unusual for the original image or single pixels at large scales:

On the other hand, some "defects" of adaptive interpolators can also be regarded as advantages. Since the eye expects to see detail down to the finest scale in finely textured areas such as foliage, such patterns can fool the eye from a distance (for certain kinds of subject matter).

Smoothing

Smoothing, or anti-aliasing, is a process that attempts to minimize the appearance of jagged diagonal edges, which give text and images a rough, digital look:



Anti-aliasing removes these jaggies and gives the impression of softer edges and higher resolution. It works by taking into account how much an ideal edge overlaps adjacent pixels. A jagged edge is simply rounded up or down with no intermediate values, whereas a smoothed edge produces a value proportional to how much of the edge falls within each pixel:

An important consideration when enlarging images is preventing excessive aliasing due to interpolation. Many adaptive interpolators detect edges and adjust so as to minimize aliasing while preserving edge sharpness. Since a smoothed edge carries information about its position at higher resolution, it is quite possible that a powerful adaptive (edge-detecting) interpolator can at least partially reconstruct the edge when enlarging.
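A tiny sketch of the coverage idea: a pixel crossed by an ideal edge is given a value proportional to the fraction of its area that lies on the bright side. The geometry and values are hypothetical.

```python
def antialiased_pixel(bright_fraction, dark=0, bright=255):
    # Blend weighted by how much of the pixel the bright side of the edge
    # covers (0.0 = fully dark, 1.0 = fully bright).
    return dark * (1.0 - bright_fraction) + bright * bright_fraction

print(antialiased_pixel(1.0))    # 255    - edge does not touch this pixel
print(antialiased_pixel(0.75))   # 191.25 - edge clips a quarter of it
print(antialiased_pixel(0.0))    # 0      - fully on the dark side
```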

Optical and digital zoom

Many compact digital cameras offer both optical and digital zoom. Optical zoom is achieved by moving the elements of the zoom lens, magnifying the image before the light reaches the digital sensor. In contrast, digital zoom degrades quality because it simply interpolates the image after the sensor has captured it.


[Figure: optical zoom (10x) vs digital zoom (10x)]

Even though a photo taken with digital zoom contains the same number of pixels, its detail is clearly inferior to one taken with optical zoom. Digital zoom should be avoided almost entirely, except when it helps you frame a distant subject on the camera's LCD screen. On the other hand, if you normally shoot in JPEG and intend to crop and enlarge the image later, digital zoom has the advantage of being interpolated before any compression artifacts are introduced. If you find you need digital zoom too often, buy a teleconverter or, better yet, a lens with a longer focal length.
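As a sketch of what digital zoom effectively does - crop the centre of the frame and interpolate it back up to the original size, creating no new detail - here is an illustration using Pillow; photo.jpg and the 2x factor are placeholders.

```python
from PIL import Image

img = Image.open("photo.jpg")                    # placeholder file name
w, h = img.size
zoom = 2                                         # 2x "digital zoom"

# Central crop of 1/zoom of the frame in each dimension, then upscale back.
box = (w // 2 - w // (2 * zoom), h // 2 - h // (2 * zoom),
       w // 2 + w // (2 * zoom), h // 2 + h // (2 * zoom))
digital_zoom = img.crop(box).resize((w, h), resample=Image.BICUBIC)
digital_zoom.save("digital_zoom_2x.jpg")
```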

The mobile phone market is full of models with very high-resolution cameras; even relatively inexpensive smartphones come with 16-20 MP sensors. An uninformed buyer chases the "cool" camera and prefers the phone with the higher resolution, without realizing that he is taking the bait laid out by marketers and salespeople.

What is resolution?

Camera resolution is a parameter that indicates the final size of the image. It only determines how large the resulting picture will be, that is, its width and height in pixels. Important: the quality of the picture does not change with it. A photo can be of poor quality yet still large simply because of its resolution.

Resolution does not determine quality. This had to be said before discussing smartphone camera interpolation. Now we can get to the point.

What is phone camera interpolation?

Camera interpolation is an artificial increase in image resolution - the resolution of the image, not the size of the sensor. In other words, it is special software that takes a picture with a resolution of 8 megapixels and interpolates it up to 13 megapixels or more (or less).

By analogy, camera interpolation is like binoculars: the device enlarges the image but does not make it better or more detailed. So if interpolation is listed in a phone's specifications, the actual resolution of the camera may be lower than stated. This is neither bad nor good; it simply is.

What is it for?

Interpolation was invented to increase the size of an image, nothing more. Today it is largely a ploy by marketers and manufacturers trying to sell a product: they print the camera's resolution in large numbers on the advertising poster and present it as an advantage. Not only does resolution by itself not determine photo quality, it may also be interpolated.

Just 3-4 years ago, many manufacturers were chasing megapixel counts and tried by various means to cram sensors with as many of them as possible into their smartphones. This is how smartphones appeared with 5, 8, 12, 15, and 21 megapixel cameras that nevertheless took pictures no better than the cheapest point-and-shoot cameras, yet buyers who saw the "18-megapixel camera" sticker immediately wanted to buy such a phone. With the advent of interpolation, selling these smartphones became even easier, thanks to the ability to add megapixels to the camera artificially. Of course, photo quality did improve over time, but certainly not because of resolution or interpolation; it improved through natural progress in sensor and software development.

The technical side

So what is camera interpolation in a phone, technically? The text above has described only the general idea.

With the help of special software, new pixels are "drawn" into the image. For example, to double the size of an image, a new row is inserted after each existing row of pixels, and each pixel in the new row is filled with a color computed by an algorithm. The very simplest approach is to fill the new row with the colors of the nearest pixels; the result of such processing looks terrible, but it requires a minimum of computation.

A different method is used more often: new rows of pixels are added to the original image, and each new pixel is filled with a color computed as the average of its neighbors. This method gives better results but requires more computation.

Fortunately, modern mobile processors are fast, and in practice the user does not notice the program editing the image in an attempt to artificially increase its size.
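A sketch of the two row-insertion strategies just described, assuming a grayscale image stored as a 2D NumPy array (a real implementation would also insert columns and process all three color channels).

```python
import numpy as np

def upscale_rows_nearest(img):
    # Simplest approach: duplicate every row (nearest-neighbour fill).
    return np.repeat(img, 2, axis=0)

def upscale_rows_average(img):
    out = np.repeat(img, 2, axis=0).astype(float)
    # Overwrite each inserted row with the average of the rows around it.
    out[1:-1:2] = (img[:-1] + img[1:]) / 2.0
    return out

img = np.array([[10, 10], [30, 30], [50, 50]], dtype=float)
print(upscale_rows_nearest(img))   # row values: 10 10 30 30 50 50
print(upscale_rows_average(img))   # row values: 10 20 30 40 50 50
```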

There are many advanced interpolation methods and algorithms, and they are constantly being improved: color transitions become smoother, lines become more accurate and crisp. But it does not matter how these algorithms are built; the basic idea of camera interpolation is trivial, and that is unlikely to change in the near future. Interpolation cannot make an image more detailed, add new details, or otherwise genuinely improve it. Only in the movies does a small blurry picture become sharp after applying a couple of filters. In practice this does not happen.

Do you need interpolation?

Many users, not knowing better, ask on forums how to interpolate their camera, believing it will improve image quality. In fact, interpolation not only fails to improve the picture, it can even make it worse: new pixels are added to the photo, and because the colors used to fill them are not always calculated accurately, the image may acquire indistinct areas and graininess. As a result, quality drops.

So phone camera interpolation is a marketing gimmick that is entirely unnecessary. It can increase not only the photo resolution but also the price of the smartphone itself. Do not fall for the tricks of sellers and manufacturers.
