Know how a smartphone camera really works.
Smartphone cameras have come a long way from where it all began. In November 2000, the first camera phone, the Sharp J-SH04, was released. Its camera had a resolution of just 0.11 megapixels.
Today we have smartphones equipped with 108-megapixel cameras, and that number is absurd: it is almost a 1000-fold improvement in two decades.
Now let's talk about how a photograph is produced on a smartphone.
Three elements complete this equation: the lens, the sensor, and computational imaging.
Understanding Smartphone Camera Photography
To understand the science behind smartphone camera photography, you need to know the key elements that work together to produce an image.
Important Elements in Smartphone Camera Photography
- Lens and Sensor
A traditional camera such as a DSLR uses an interchangeable-lens system: you can swap one lens for another with a different focal length while keeping the same sensor.
A smartphone, on the other hand, has no interchangeable parts, so phones with multiple camera systems simply carry multiple fixed lenses. For instance, the Samsung Galaxy S20 Ultra has four rear cameras, each with a different focal length and its own sensor.
So a smartphone faces a challenge: its tiny lens restricts your depth of field, and its small sensor does not give you optimum low-light performance. How, then, do phones produce such good images? The answer lies in the third component, explained below.
- Computational Imaging:
This might sound like a really academic term, but I assure you it is not as complicated as it seems. It simply means using your phone's processor, along with a series of algorithms, to overcome the shortcomings of the smartphone's camera sensor and lens.
To see these algorithms in action, take low-light photography as an example. A picture taken in low light uses a high ISO, which produces more noise in the photo, especially in shadow areas. To overcome this, the smartphone runs a noise-reduction algorithm that removes the noise and smooths the image.
Your smartphone may then run another algorithm that sharpens the image. That, in essence, is how computational imaging works.
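As a toy illustration of that denoise-then-sharpen idea, here is a minimal 1-D sketch. The moving-average blur and unsharp-mask sharpener below are generic textbook techniques, not the far more sophisticated (often multi-frame and machine-learned) algorithms any particular phone actually ships:

```python
def denoise(pixels, radius=1):
    """Moving-average blur: each pixel becomes the mean of its neighborhood.
    A crude stand-in for a phone's real noise-reduction algorithm."""
    out = []
    for i in range(len(pixels)):
        lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
        out.append(sum(pixels[lo:hi]) / (hi - lo))
    return out

def sharpen(pixels, amount=0.5):
    """Unsharp masking: add back a fraction of the detail blurring removes."""
    blurred = denoise(pixels)
    return [p + amount * (p - b) for p, b in zip(pixels, blurred)]

# A flat grey row of pixels with one noisy spike (130 in a field of 100s).
row = [100, 100, 130, 100, 100]
smooth = denoise(row)    # the spike is spread out and reduced
crisp = sharpen(smooth)  # edges regain some contrast after smoothing
```

Running this, the noisy spike of 130 is pulled down to 110 by the blur, and the sharpening pass then restores a little of the edge contrast that smoothing took away, which is exactly the trade-off the paragraph above describes.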
Google has an entire team working on unique ideas for implementing this kind of technology in its Pixel phones.
Take one Google smartphone, the Google Pixel 3, as an example.
This phone revolutionized smartphone photography with its Night Sight mode, which is a perfect example of what you can do with computational imaging.
In Night Sight mode, the phone first measures how steadily you are holding it.
From that, it calculates how many exposures it can make, and over a couple of seconds it captures them, aided by the optical image stabilization in the lens.
It then stitches the best parts of each exposure into a single image. The result is a bright picture with low noise and accurate colors.
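The core statistical trick behind merging exposures can be sketched as simple frame averaging. This is only a toy, not Google's actual Night Sight pipeline (which also aligns frames, rejects motion, and tone-maps); the noise values below are hand-made assumptions chosen so the cancellation is easy to see:

```python
def stack(frames):
    """Average the aligned exposures pixel-by-pixel: random sensor noise
    tends to cancel out, while the real scene reinforces itself."""
    n = len(frames)
    return [sum(col) / n for col in zip(*frames)]

# A simple 4-pixel "scene" with a bright edge.
scene = [50, 50, 200, 200]

# Hypothetical noise offsets for four short exposures. They are contrived
# to cancel exactly; real sensor noise only cancels roughly, shrinking by
# about the square root of the number of frames averaged.
offsets = [
    [+8, -4, +6, -8],
    [-6, +8, -8, +4],
    [-4, -2, +4, +6],
    [+2, -2, -2, -2],
]
frames = [[p + o for p, o in zip(scene, off)] for off in offsets]

stacked = stack(frames)  # noise averages away; the bright edge survives
```

Each individual frame is visibly wrong (for example, the first reads [58, 46, 206, 192]), yet their average recovers the scene, which is why capturing many short exposures beats one long, shaky one.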
A smartphone's camera may not replace a professional camera, but these days the pictures and videos produced by phones are so good that most people no longer need a DSLR.