Announced at the presentation of the new iPhone, but not available until the first iOS 13.2 beta that Apple released yesterday, Deep Fusion is one of the main new camera features of the new iPhones, both the base model (iPhone 11) and the more advanced models (11 Pro and 11 Pro Max).
It is an image-processing system that is supposed to deliver better results when capturing snapshots with the new iPhone's camera: by combining several images into one, the details of the photograph are better preserved. We explain how it works.
Deep Fusion is designed for intermediate lighting situations, such as the photos we often take indoors. In optimal lighting conditions it will not be used, and when the light is too low, Night Mode takes over. Depending on the lens you are using and the lighting conditions, the iPhone 11 and 11 Pro camera will work like this:
- The wide-angle lens will use Smart HDR when the scenes we photograph have good lighting, Night Mode when lighting is poor, and Deep Fusion when lighting conditions are intermediate.
- The telephoto lens will use Deep Fusion most frequently, since it is the least luminous lens. It will use Smart HDR in bright scenes. There is no Night Mode with the telephoto lens; instead, the wide-angle lens with 2x digital zoom is used.
- The ultra-wide-angle lens will always use Smart HDR, since it supports neither Night Mode nor Deep Fusion.
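The lens-and-lighting rules above amount to a simple decision table. The sketch below illustrates that logic in Python; the lens names, thresholds, and the abstract "light level" scale are our own assumptions for illustration, not Apple's actual (undocumented) implementation.

```python
# Illustrative sketch of the mode-selection rules described above.
# The thresholds and the 0-1 light scale are hypothetical.

def select_mode(lens: str, light_level: float) -> str:
    """Pick a processing mode for a lens and an abstract light level
    in [0, 1], where 0 is darkness and 1 is bright daylight."""
    if lens == "wide":
        if light_level >= 0.7:       # good lighting
            return "Smart HDR"
        if light_level <= 0.2:       # poor lighting
            return "Night Mode"
        return "Deep Fusion"         # intermediate lighting
    if lens == "telephoto":
        # Least luminous lens: Deep Fusion most of the time, Smart HDR
        # only in bright scenes. No Night Mode here; the camera falls
        # back to the wide lens with 2x digital zoom instead.
        return "Smart HDR" if light_level >= 0.8 else "Deep Fusion"
    if lens == "ultrawide":
        return "Smart HDR"           # no Night Mode or Deep Fusion
    raise ValueError(f"unknown lens: {lens}")
```

Because the real selection is invisible to the user (see below), a table like this is the only way to reason about which mode a given shot will get.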
Contrary to what happens with Night Mode, where a moon icon appears on the screen, Deep Fusion is completely transparent to the user: you cannot activate or deactivate it, and you will not even know whether it has been used. Apple wants it to be fully automatic processing without user intervention. What does it do? We can sum it up like this:
- Before you even touch the shutter button, the camera will have already taken four pictures at a fast shutter speed, to "freeze" the image, and three more pictures at a normal speed. When you press the shutter button, it takes one more photo with a longer exposure time to capture the details.
- The three normal photographs and the long-exposure photograph are combined into one. This photograph is then combined with the best short-exposure photograph (fast shutter speed) and processed to eliminate noise.
- Now much deeper processing occurs, in which the different elements appearing in the photo are analyzed (hair, skin, fabrics, sky, walls…) and data is taken from one photograph or another to get as much detail as possible in the final image.
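The steps above can be sketched as a toy multi-frame pipeline. In this sketch, frames are simple lists of pixel values; the averaging, "best frame" scoring, and blending weight are illustrative stand-ins for Apple's proprietary processing, not a reconstruction of it.

```python
# Toy sketch of the multi-frame pipeline described above.
# All numbers and weights are hypothetical.

def capture_buffers():
    """Simulate the buffer: four short exposures and three normal
    exposures (captured before the shutter press), plus one long
    exposure taken when the button is pressed."""
    short = [[0.2, 0.21, 0.19], [0.2, 0.2, 0.2],
             [0.22, 0.2, 0.2], [0.2, 0.19, 0.2]]
    normal = [[0.5, 0.52, 0.49], [0.5, 0.5, 0.5], [0.51, 0.5, 0.5]]
    long_exp = [0.8, 0.82, 0.79]
    return short, normal, long_exp

def average(frames):
    """Merge frames by per-pixel averaging (a noise-reduction stand-in)."""
    return [sum(px) / len(px) for px in zip(*frames)]

def sharpest(frames):
    """Pick the 'best' short frame; here scored by total local contrast."""
    def contrast(f):
        return sum(abs(a - b) for a, b in zip(f, f[1:]))
    return max(frames, key=contrast)

def deep_fusion_sketch(short, normal, long_exp, detail_weight=0.5):
    # Step 1: combine the three normal frames with the long exposure.
    reference = average(normal + [long_exp])
    # Step 2: blend in the best short frame to recover frozen detail.
    detail = sharpest(short)
    return [d * detail_weight + r * (1 - detail_weight)
            for d, r in zip(detail, reference)]

short, normal, long_exp = capture_buffers()
result = deep_fusion_sketch(short, normal, long_exp)
```

The real per-region analysis (hair, skin, fabrics, sky…) would replace the single global `detail_weight` with per-pixel weights chosen by the Neural Engine, which is what this sketch cannot reproduce.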
Apple has demonstrated Deep Fusion with photographs of people wearing sweaters, because it is in the detail of the fabric where you can see how Deep Fusion extracts maximum detail even in intermediate lighting conditions. We will have to try this new camera feature to see if it is really as exceptional as Apple claims.