Why Every Photo You Take Is “Fake”

Someone is taking a photo with a Samsung Galaxy S23 Ultra smartphone.
Justin Duino / Review Geek

Smartphones are under fire for “faking” or “spoofing” high-quality photos. But every photo in existence involves some kind of fakery, and that’s not a bad thing.

Artificial intelligence has invaded your smartphone camera with a singular purpose: to spoil your photos and fill your head with lies. At least, that’s the impression some headlines might give you. Smartphone camera technology is evolving rapidly, leading to some confusion about what’s “real” and what’s “fake.”

Well, I have good news: every photo in existence is “fake.” It doesn’t matter if it was shot with a smartphone from 2023 or a movie camera from 1923. There’s always some shenanigans going on behind the scenes.

Physical limitations of phone cameras

If you stuck a full-sized camera lens on a phone, it would be monstrous. Smartphones need to be small, compact and somewhat durable, so they tend to use incredibly small camera sensors and lenses.

This teensy-weensy rig creates several physical limitations. While the smartphone may have a 50MP sensor, the sensor size is actually quite small, meaning less light can reach each pixel. This results in reduced low-light performance and can introduce noise into the image.

Lens size is also important. Small camera lenses can’t bring in a ton of light, so you’ll end up with reduced dynamic range and, once again, reduced low-light performance. A small lens also means a small aperture that cannot produce a shallow depth of field for background blur or bokeh effects.
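
To put rough numbers on the sensor problem, here’s a quick back-of-the-envelope calculation in Python. The sensor dimensions are approximate, illustrative values (not the specs of any particular phone), but the ratio is what matters:

```python
# Rough comparison of per-pixel light-gathering area: a 50MP
# smartphone sensor vs. a 50MP full-frame sensor. Dimensions
# below are approximate, illustrative values.
MEGAPIXELS = 50_000_000

phone_area_mm2 = 9.6 * 7.2         # ~1/1.3" smartphone sensor
full_frame_area_mm2 = 36.0 * 24.0  # standard full-frame sensor

phone_pixel_um2 = phone_area_mm2 / MEGAPIXELS * 1_000_000  # mm^2 -> um^2
ff_pixel_um2 = full_frame_area_mm2 / MEGAPIXELS * 1_000_000

print(f"Smartphone pixel area: {phone_pixel_um2:.2f} um^2")  # ~1.4 um^2
print(f"Full-frame pixel area: {ff_pixel_um2:.2f} um^2")     # ~17.3 um^2
print(f"Full-frame advantage:  ~{ff_pixel_um2 / phone_pixel_um2:.0f}x per pixel")
```

At the same resolution, each full-frame pixel collects roughly an order of magnitude more light, and that is exactly the gap software has to paper over.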

At the physical level, smartphones cannot take high-quality photos. Advances in sensor and lens technology have greatly improved the quality of smartphone cameras, but the best smartphone cameras come from brands that use “computational photography.”

Phone cameras use software to “cheat”

Justin Duino / Review Geek

The best smartphone cameras come from Apple, Google and Samsung, the three leaders in software development. This is not accidental. To overcome the hardware limitations of smartphone cameras, these brands use “computational photography” to process and enhance photos.

Smartphones use many computational photography techniques to produce high-quality images. Some of these techniques are predictable: the phone will automatically adjust a photo’s color and white balance, or “beautify” a subject by sharpening and brightening their face.
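
To make the “predictable” part concrete, here’s a minimal sketch of one classic auto-white-balance technique, the gray-world algorithm. This is only an illustration; real phone pipelines are far more sophisticated:

```python
import numpy as np

def gray_world_white_balance(image: np.ndarray) -> np.ndarray:
    """Simple auto white balance: scale each channel so the average
    color of the scene comes out neutral gray.
    `image` is an (H, W, 3) RGB array of floats in [0, 1]."""
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means  # per-channel correction
    return np.clip(image * gains, 0.0, 1.0)
```

A photo shot under warm indoor lighting has an orange cast, so its red mean runs high; the algorithm pulls red down and blue up until the scene averages out to gray.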

But the most advanced techniques in computational photography go beyond simple image editing.

Take stacking, for example. When you press the shutter button on your phone, it captures several pictures within a few milliseconds, each with slightly different settings: some come out darker, some brighter, and some blurrier. All of these photos are combined to create an image with high dynamic range, strong colors, and minimal motion blur.

An example of night photography on the iPhone 11.
Apple

Stacking is a fundamental concept in HDR photography, and it is the starting point for many algorithms in computational photography. Night mode, for example, uses stacking to create a bright nighttime image without long exposure times (which would cause motion blur and other issues).
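
Here’s a toy sketch of the core idea, assuming the frames are already aligned. Real pipelines also align frames, reject outliers, and merge different exposures, but even naive averaging shows why stacking works: random sensor noise cancels out while the scene doesn’t.

```python
import numpy as np

def stack_frames(frames: list[np.ndarray]) -> np.ndarray:
    """Average a burst of aligned frames. Noise differs from frame
    to frame, so the mean keeps the scene and suppresses the noise."""
    return np.stack(frames).astype(np.float64).mean(axis=0)

# Simulate a dim, flat scene captured ten times with heavy sensor noise.
rng = np.random.default_rng(0)
scene = np.full((480, 640), 40.0)
frames = [scene + rng.normal(0, 15, scene.shape) for _ in range(10)]

print(f"noise in a single frame:   {frames[0].std():.1f}")
print(f"noise in a 10-frame stack: {stack_frames(frames).std():.1f}")  # ~1/sqrt(10)
```

Averaging ten frames cuts the noise by a factor of about √10, which is how night modes pull a clean image out of a dark scene without one long, shaky exposure.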

And, as I mentioned earlier, smartphone cameras can’t produce a shallow depth of field. To overcome this problem, most smartphones offer a portrait mode that uses software to estimate depth and artificially blur the background. The results can be a bit wonky, especially if you have long or curly hair, but it’s better than nothing.
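
Conceptually, portrait mode boils down to “blur the whole frame, then blend the sharp original back in wherever the depth map says the subject is.” Here’s a toy sketch of that idea using OpenCV, with the depth map handed in as an assumption (real phones estimate it with dual-pixel sensors and machine learning):

```python
import cv2
import numpy as np

def fake_portrait_blur(image: np.ndarray, depth: np.ndarray,
                       focus_depth: float, strength: int = 21) -> np.ndarray:
    """Toy portrait mode. `image` is an (H, W, 3) uint8 photo and
    `depth` is an (H, W) map in [0, 1], where `focus_depth` marks
    the subject's distance. Pixels near the focus plane stay sharp;
    everything else fades into the blurred copy."""
    blurred = cv2.GaussianBlur(image, (strength, strength), 0)
    # Weight is 1 at the focus plane and falls toward 0 away from it.
    weight = np.clip(1.0 - np.abs(depth - focus_depth) * 4.0, 0.0, 1.0)
    weight = weight[..., None]  # broadcast over the color channels
    return (image * weight + blurred * (1.0 - weight)).astype(image.dtype)
```

A bad depth estimate along wispy hair is exactly where this blend falls apart, which is why stray strands sometimes get smeared into the fake background blur.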

Some people think computational photography is “cheating” because it misrepresents the capabilities of your smartphone’s camera and creates an “unrealistic” image. I’m not sure why this should be a serious concern. Computational photography is imperfect, but it allows you to take high-quality photos with low-end equipment. In most cases, it actually gets you closer to a “realistic” and “natural” image with a sense of depth and dynamic range.

The best example of this “deception” is Samsung’s “moon controversy.” To promote the zoom capabilities of the Galaxy S22 Ultra, Samsung created a lunar photography algorithm. Basically, it’s artificial intelligence that makes crappy pictures of the moon look a little less ridiculous by adding details that don’t exist in the original image. It’s a gimmicky feature, but if you insist on photographing the moon with a camera smaller than a penny, I’d consider some “cheating” necessary.

That said, I do worry about the misleading ways some companies market their computational photography tools. And my biggest gripe is the “shot on an iPhone” or “shot on a Pixel” nonsense that phone manufacturers churn out every year. These commercials are made with multi-million-dollar budgets, big fat lenses, and professional editing. The idea that you could recreate one of these ads with nothing but a smartphone is nothing short of an outright lie.

This is not news

A very broken camera.

Some people are unhappy with computational photography. They argue that it distorts reality and therefore must be bad. In their view, cameras should give you the exact image that enters the lens; anything else is a lie.

Here’s the thing: every photo contains some level of “fakery.” It doesn’t matter if the photo was taken with a phone, a DSLR camera, or a film camera.

Let’s take a look at the film photography process. Camera film is coated with a light-sensitive emulsion. When the camera shutter opens, this emulsion is exposed to light, leaving an invisible chemical imprint of the image. The film is then bathed in a series of chemicals to produce a permanent negative, which is projected onto emulsion-coated paper to create a printed image (okay, photo paper needs a chemical bath too, but that’s the gist of it).

Each step in this process affects the appearance of the image. One brand of film may oversaturate reds and greens while another brand may look dull. Darkroom chemicals can change the color or white balance of the image. And printing an image on photographic paper introduces more variables, which is why many film labs use a reference sheet (or computer) to dial in color and exposure.

Most people who owned a film camera were not professional photographers. They didn’t control the printing process, and they certainly didn’t choose the chemical composition of their film. Doesn’t that sound familiar? Film makers and photo labs were the “computational photography” of their day.

And what about modern DSLR and mirrorless cameras? Well, I’m sorry, but all digital cameras do some photo processing. They may correct for lens distortion or reduce image noise. But the most common form of processing is actually file compression, which can shift an image’s colors and discard fine detail (a standard JPEG can only store about 16.7 million colors). Some cameras let you save RAW image files, which are minimally processed, but those tend to look “flat” or “dull” without professional editing.
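
You can watch compression “edit” a photo with a few lines of Python and Pillow: round-trip a smooth gradient through a heavily compressed JPEG, and the decoded pixels no longer match the originals.

```python
import io

import numpy as np
from PIL import Image

# Build a smooth color gradient and round-trip it through a low-quality JPEG.
x = np.linspace(0, 255, 256, dtype=np.uint8)
r, g = np.meshgrid(x, x)
original = np.stack([r, g, np.full_like(r, 128)], axis=-1)

buffer = io.BytesIO()
Image.fromarray(original).save(buffer, format="JPEG", quality=25)
buffer.seek(0)
decoded = np.asarray(Image.open(buffer))

diff = np.abs(original.astype(int) - decoded.astype(int))
print(f"mean per-pixel error: {diff.mean():.1f} / 255")
print(f"max per-pixel error:  {diff.max()} / 255")
```

None of the decoded values are guaranteed to survive intact; the camera quietly decided which details were worth keeping the moment it wrote the file.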

All photos are “fake,” and it’s not a big deal

A person using the 100x zoom on the Samsung Galaxy S23 Ultra
Justin Duino / Review Geek

Reality is an important part of photography. Sometimes we want a photo that accurately represents a moment in time, flaws and all. But more often than not, we ask our cameras to capture a good-looking image, even in adverse conditions. In other words, we ask for a fake.

This trick requires technological advances beyond the camera lens. And computational photography, despite its flaws and marketing spin, is the technology we need right now.

That said, companies like Google, Apple, and Samsung need to be more transparent with their customers. We are constantly bombarded with ads that stretch the truth, leading many to believe that smartphones are comparable to full-sized or professional-grade cameras. This simply isn’t true, and until customers understand what’s going on, they will continue to resent computational photography.


