The ability to control depth of field is a key element of image quality, and one that helps make DSLR and mirrorless cameras superior to compact point-and-shoot models. A shallow depth of field, the effect that lets you place a tack-sharp subject in front of a blurred background, is such a sought-after look that phones try to emulate it with multiple lenses and computational photography. But what exactly is depth of field, and how do you control it?
Deep or shallow depth of field?
Simply put, depth of field refers to how much of your image is in focus. If objects both near and far from the camera are sharp, you have a deep depth of field. If the foreground or background is blurred, you have a shallow depth of field. A common analogy is a pool: at its deepest point there is more water. Similarly, with a deep depth of field, more of the scene is in focus.
An example of when to use a deep depth of field is a landscape, where the whole scene is the protagonist of the photo. This applies especially when you have both foreground and background elements that you want to keep sharp, as in this photo:
A shallow depth of field is the opposite: you have a single subject that you want to isolate from whatever is in front of or behind them. It is often used for portraits and can be useful when the background of your image is full of distractions, as in the following photo:
The dictionary definition of depth of field adds a little more to the description above: the distance between the nearest and farthest points that are in acceptable focus. It sounds simple, but now you may be wondering: what counts as acceptable focus?
Acceptable focus has to do with the circle of confusion and other advanced topics, but in short, it is about what looks sharp to you. Technically, a camera lens can only focus on a single plane in space, like one slice of bread in a loaf. Everything in front of and behind that plane is, strictly speaking, out of focus. However, our eyes can only resolve a certain amount of detail, blur included. If a point of blur is too small for our eyes to detect, that area appears to be in focus.
This short video from Adorama is a great introduction to the circle of confusion and acceptable focus.
How to control depth of field
Depth of field is determined by the relationship between your lens's aperture (f-stop) and focal length, the distance to your subject, and the size (format) of the sensor.
The most common way to change depth of field is by adjusting the lens aperture, which determines how much light passes through to the camera sensor. The smaller the aperture, the deeper the depth of field. A very wide aperture creates soft backgrounds with a shallow depth of field. In that case the depth of field can be so shallow that your subject's eye is sharp while the tips of their eyelashes are not. These wide apertures (f/1.4 or f/2, for example) draw attention to the subject by blurring the background, although large or very close objects may not be entirely in focus.
On the other hand, a small aperture (such as f/11 or f/16) keeps more of the scene in focus. A small aperture is what is usually recommended for landscape photography, where you may need sharpness all the way from foreground elements that are very close to elements in the distance, such as the horizon line or a sunset.
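The effect of aperture can be sketched numerically with the standard thin-lens depth-of-field approximations. This is a rough illustration, not how any camera computes it; the function names are mine, and the 0.03 mm circle-of-confusion value is a common assumption for full-frame sensors.

```python
# Approximate depth-of-field calculator using the standard thin-lens
# formulas. All distances are in millimetres.

def hyperfocal(focal_length, f_number, coc=0.03):
    """Distance beyond which everything to infinity looks acceptably sharp."""
    return focal_length ** 2 / (f_number * coc) + focal_length

def depth_of_field(focal_length, f_number, subject_distance, coc=0.03):
    """Return the (near, far) limits of acceptable focus, in mm.

    The far limit is infinite when the subject is at or beyond
    the hyperfocal distance.
    """
    h = hyperfocal(focal_length, f_number, coc)
    s = subject_distance
    near = s * (h - focal_length) / (h + s - 2 * focal_length)
    far = float("inf") if s >= h else s * (h - focal_length) / (h - s)
    return near, far

# A 50 mm lens focused at 3 m (3000 mm):
near_wide, far_wide = depth_of_field(50, 1.8, 3000)    # wide aperture
near_small, far_small = depth_of_field(50, 16, 3000)   # small aperture

print(f"f/1.8: {(far_wide - near_wide) / 1000:.2f} m in focus")    # roughly 0.4 m
print(f"f/16 : {(far_small - near_small) / 1000:.2f} m in focus")  # about 5 m
```

Stopping down from f/1.8 to f/16 stretches the zone of sharpness from well under half a metre to several metres, which matches the portrait-versus-landscape advice above.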
You may have noticed that phones usually have lenses with seemingly very wide apertures (such as the iPhone 11 Pro and its f/1.8 aperture) and still produce a deep depth of field. Why? Because sensor size also affects depth of field. The technical explanation is, well, technical, but here is what you should know: the larger the sensor, the easier it is to get those beautiful blurred backgrounds. However, a large sensor needs a large lens, which simply does not fit inside a phone.
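One rough way to see the sensor-size effect is the idea of "equivalent aperture": multiply the f-number by the sensor's crop factor to estimate how the background blur compares with a full-frame camera. A minimal sketch, with common approximate crop factors; the phone value is an illustrative assumption, not tied to any particular model:

```python
# Approximate crop factors relative to a full-frame (35 mm) sensor.
CROP_FACTORS = {
    "full frame": 1.0,
    "APS-C": 1.5,
    "Micro Four Thirds": 2.0,
    "typical phone sensor": 6.0,  # illustrative assumption, varies by model
}

def equivalent_aperture(f_number, sensor):
    """Rough full-frame equivalent f-number for depth-of-field comparison."""
    return f_number * CROP_FACTORS[sensor]

# An f/1.8 phone lens blurs roughly like an f/11-ish lens on full frame:
print(f"f/{equivalent_aperture(1.8, 'typical phone sensor'):.1f}")
```

This is why a phone's f/1.8 lens still renders most of the scene sharp: in depth-of-field terms it behaves like a much smaller aperture on a big sensor.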
That said, phones can mimic a blurry background in software, producing results that can be impressively realistic in the right situation, and even letting you alter the depth of field after taking the picture. However, computational portrait modes can still fail in many cases and do not work for every type of portrait, such as when the subject is too close or too far away.
Do not forget that depth of field is the area that appears acceptably sharp. That means that if you are shooting wide open with a large-sensor camera and the background still doesn't look as blurred as you want, there is more you can do: move your subject farther from the background, and it will look blurrier without changing anything on the camera.
Similarly, the closer the camera is to the subject, the more blurred the background becomes. Macro photographs are usually taken with small apertures, yet the background is still out of focus, because the camera is so close to the subject that depth of field is shallow regardless of the aperture. Some macro photographers even use a technique called focus stacking to obtain greater depth: taking several shots at different focus distances and combining them in a post-production program to get a sharper overall image.
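The core idea of focus stacking can be sketched in a few lines, assuming NumPy is available: for each pixel, keep the value from whichever shot is locally sharpest there. Real stacking software also aligns the shots and blends the seams, which this toy version skips.

```python
import numpy as np

def local_sharpness(img):
    """Absolute Laplacian response: large where fine detail is in focus."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1]
           + padded[1:-1, :-2] + padded[1:-1, 2:]
           - 4 * padded[1:-1, 1:-1])
    return np.abs(lap)

def focus_stack(images):
    """For each pixel, take the value from the locally sharpest shot."""
    sharpness = np.stack([local_sharpness(im) for im in images])
    best = np.argmax(sharpness, axis=0)   # index of sharpest shot per pixel
    rows, cols = np.indices(best.shape)
    return np.stack(images)[best, rows, cols]

# Tiny synthetic demo: two "shots" of a checkerboard, each sharp on one
# half and flat (standing in for blur) on the other.
board = ((np.indices((8, 8)).sum(axis=0) % 2) * 255).astype(np.uint8)
shot_a = board.copy(); shot_a[:, 4:] = 128   # right half "blurred"
shot_b = board.copy(); shot_b[:, :4] = 128   # left half "blurred"
merged = focus_stack([shot_a, shot_b])
print(np.array_equal(merged, board))  # the full checkerboard is recovered
```

The Laplacian simply measures local contrast, so the sharp half of each shot wins its pixels and the merged frame is in focus everywhere.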
Telephoto lenses also produce a shallower depth of field than wide-angle lenses. This is one of the reasons telephoto lenses are often used for portraits, while wide angles are used for landscapes. Of course, like everything in photography, this is not an unbreakable rule.
While the mathematics behind depth of field is complex, the techniques to control it are not. To create a softer background, use a wider aperture (a lower f-number), use a camera with a larger sensor, get closer to your subject, or move them farther from the background; you can maximize the effect by combining these factors. To get sharper images with more of the scene in focus, use a smaller aperture, step back from your subject, or move them closer to the background.