Photography has become fundamental in today's smartphones, well beyond the high end, but it carries ever more weight in flagships. OnePlus took a notable step with its OnePlus 7 Pro, whose triple camera meets the minimum bar that first-tier phones now seem to require, and to talk about it and about the future of photography at the brand and in the industry we were able to speak with Zake Zhang, product manager at OnePlus.
We had the opportunity to speak with him in a setting as fitting as the company's offices in Taipei, which also house one of its photography laboratories. So, among many phones, motivational phrases and corporate colors, we sat down with him to ask about this topic we like so much.
The weight of hardware and software in a good photograph and transparency about artificial intelligence
We conducted the interview after taking part in a few photographic feedback sessions with other media outlets and OnePlus staff. In addition to Zhang, other engineers working in the Taipei laboratory were present, and we discussed both the points to improve and what we generally look for in mobile photography; some of those points come up in the interview.
Speaking of what each part contributes to a camera in terms of hardware and software: for you, what percentage of a good camera is the sensor and how much is the processing?
ZZ: Regarding image quality, both software and hardware are important, but I think hardware more so. Probably more than 80% rests on hardware, something we see even in professional DSLR cameras: the larger the sensor, the more light it gathers, especially in low-light scenes (hence full frame).
But in smartphones we cannot go as far as in cameras because of the size limitation: there is less space, and the sensor size is limited by it. So, given that limitation, we try to maximize the available space to fit the largest possible sensor.
That is the hardware side. As for software, the question is how to get the most out of the hardware. I think you hit the nail on the head in what we discussed earlier: mobile photography is completely different from traditional DSLR photography. Smartphone photography relies heavily on computational photography and algorithms, which is something you cannot do with a DSLR.
So in the near future we will see more change in mobile photography than in traditional photography. That is why I think the software's roughly 20% share of the weight in mobile photography will grow; it will play a more important role in image quality, because with those algorithms and computational photography the rules of the game will change.
We are already seeing scene detection by artificial intelligence, bokeh for portrait mode… And in the near future there will be more algorithms for video. In fact, video depends more on the hardware because it handles more frames per second (30, 60, etc.) than a photograph (one), and that is why it is more demanding.
Speaking of AI, something brands have been keen to highlight in mobile cameras for years, how is OnePlus implementing it? What is your approach to it in photography?
ZZ: In fact, we haven't publicly talked about AI as such; we use the expression "intelligent scene detection", because we don't want to emphasize the use of AI just because everyone else is doing it. I think calling it that right now is fair because it is a limited detection. In the near future we could call it AI or "AI detection", when more scenes are included in the algorithm and certain settings are offered automatically once they are detected.
Do you agree that scene detection as such is not AI, and that the term is used as a kind of marketing claim (without being exactly that)?
ZZ: Yes. Many brands present it as something new or flashy and, well, we have all been working with AI for years. We prefer to be transparent in this regard.
In fact, in some cases brands also talk about AI in the creation of portrait modes. Regarding this very fashionable feature, what effect are you looking for at OnePlus: a more gradual and realistic blur (Apple style) or a flatter, more striking one (Pixel style)?
ZZ: For bokeh, several cameras are used; in the case of the OnePlus 7 Pro, the telephoto and the standard camera are combined to obtain the depth information (in the previous generation we used only the standard one): two different photos are taken and the processor merges them.
And yes, I think the iPhone achieves a more gradual blur. We are working along that line, toward a portrait with a more natural bokeh, like a DSLR's.
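The two-photo approach Zhang describes rests on a general stereo principle: a subject close to the camera shifts more between the two viewpoints than the background does, and that shift (disparity) encodes depth. A minimal block-matching sketch of that idea follows; it is a toy illustration of the general technique, not OnePlus's actual pipeline.

```python
# Toy sketch of dual-camera depth estimation: block matching between
# two horizontally offset 1-D scanlines. Larger disparity = closer subject.
# Illustrative only; real pipelines work on full 2-D images with
# calibration, rectification and regularization.

def best_disparity(left, right, x, window=1, max_disp=4):
    """Find the horizontal shift that best matches a patch of `left`
    around position x inside `right` (sum of absolute differences)."""
    best, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        cost = 0
        for o in range(-window, window + 1):
            xl, xr = x + o, x + o - d
            if 0 <= xl < len(left) and 0 <= xr < len(right):
                cost += abs(left[xl] - right[xr])
            else:
                cost += 255  # penalize out-of-bounds comparisons
        if cost < best_cost:
            best, best_cost = d, cost
    return best

# A bright "subject" at index 6 in the left view appears at index 4
# in the right view: disparity 2, i.e. near the camera.
left = [0, 0, 0, 0, 0, 0, 200, 0, 0, 0]
right = [0, 0, 0, 0, 200, 0, 0, 0, 0, 0]
print(best_disparity(left, right, 6))  # 2
```

A portrait mode would then blur each pixel in proportion to how far its estimated disparity is from the subject's.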
The size of the sensor, the pixels and the number of rear cameras in the future according to OnePlus
How much can a mobile sensor grow?
ZZ: We believe this will be different with foldable phones. As the shape of smartphones changes, we will have to redesign their entire structure, and I think this will affect the cameras, because we will be able to fit a larger sensor.
"With the foldable phones we will have to redesign the entire phone structure and this will allow us to integrate larger sensors"
We are seeing three and four rear cameras today. What do you think comes after the triple/quadruple camera? How will zoom evolve?
ZZ: Actually, we don't want to follow the multiple-camera route, because among other things it requires more space in the body. What we really want is to give users more possibilities with their smartphones, in every scenario. Right now we have wide angles, telephoto lenses, etc., and you don't have to move.
With regard to zoom, we currently have fixed lenses and it is not possible to have a "real" optical zoom (because the lenses would protrude), so what we want is to keep the phone's shape and structure while trying to add the desired features, as far as we can.
What we want is for a single sensor to cover both zoom and wide angle, so we don't have to fit three sensors, as with DSLR lenses. There are also rotating cameras, which are interesting as a concept, but putting them into practice means a thicker body.
As for pixels and their size, which is OnePlus's path: more megapixels or larger pixels?
ZZ: We have opted for pixel binning to have more options. When you shoot in automatic mode, you get a 12-megapixel photo (by combining the 48 megapixels four to one), whose pixels capture more light and thus perform better when light is scarce. In daylight, when light is abundant, you can enable the 48-megapixel shot and get more resolution, with more detailed images in case you want to print them. That is why we look for sensors with higher resolution.
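The 48-to-12-megapixel combination Zhang mentions works by merging each 2x2 block of sensor pixels into one output pixel, trading resolution for light gathered per pixel. A minimal sketch of that arithmetic (an illustration of the principle, not the sensor's actual readout logic):

```python
# Minimal sketch of 2x2 pixel binning: each output pixel sums a 2x2
# block of sensor pixels, halving width and height (48 MP -> 12 MP)
# while quadrupling the signal collected per output pixel.

def bin_2x2(image):
    """Combine each 2x2 block of `image` (a list of equal-length rows)
    into one pixel by summing."""
    h, w = len(image), len(image[0])
    return [
        [
            image[y][x] + image[y][x + 1] +
            image[y + 1][x] + image[y + 1][x + 1]
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

# A 4x4 "sensor" becomes a 2x2 image; each output pixel gathers
# the signal of four input pixels.
sensor = [
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [3, 3, 4, 4],
    [3, 3, 4, 4],
]
print(bin_2x2(sensor))  # [[4, 8], [12, 16]]
```

In a real sensor the binning happens on-chip across a quad-Bayer color filter, but the resolution-for-light trade is the same.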
How important is what happens on social networks (or the feedback that comes from them) in defining a processing style?
ZZ: Our goal is to give the user the best picture: the most realistic one. But we also consider the emotional component: we don't want the photos to be flat. I believe the main mission of our camera division is to obtain the best hardware and create software that lets the user get the best photo.
As you mentioned, on social networks we share photos with filters and so on. I think people should have all the possible options: some people prefer the original photos, others prefer more saturated photos, with more contrast… We even perceive different preferences depending on the user's gender. For all this, I believe what we can do is deliver the best picture we can and then let users apply the post-processing they prefer.
In fact, we are adding options in the gallery so that there are more post-processing options for those who want to edit; you may see it at the end of this year. We always think about what we can do in automatic processing to help at this level, but within limits; in the end, once you take the picture there is no going back, you can't change it.
Regarding front cameras we have seen many options; recently, in fact, we have even seen cameras under the screen, without the hole. What can we hope to see in this regard in upcoming OnePlus phones?
ZZ: We are constantly testing and analyzing solutions. When we see that a technology is mature and can become a standard, we add it to our smartphones. And next year we will probably see under-screen cameras in OnePlus phones.
You are among the manufacturers that develop their own camera app. Have you considered working with GCam?
ZZ: I don't think our team has plans in that direction at the moment, but I think what we can do is study what that camera does well and try to achieve the same.
To what extent can you demand or request features from your component suppliers?
ZZ: An example is the screen of the OnePlus 7 Pro. We proposed it to Samsung and they told us it was something they had not done before. It was something of a tug of war between them and us, and in the end we got what we wanted (and people liked it).
The same happens with cameras: once we know what we want, we propose it and try to get suppliers to deliver it. We spend a lot of time talking with them (with Qualcomm in San Diego, with Samsung in Korea, etc.) until we finally reach the point we want.
What are we going to see in the photography of the next OnePlus? What can you tell us in advance?
ZZ: I think the main area of research and progress is going to be computational photography: algorithms, software. We are going to improve HDR+, identify more scenes… Night mode also falls under HDR.
So, in a way, HDR is probably one of the areas with the most room for improvement in computational photography. There is also video: people record more and more video, and we will also focus on leveraging algorithms and software to get higher-quality video.
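HDR merging of the kind Zhang alludes to generally combines several bracketed exposures, favoring whichever frame exposes each pixel best. A toy sketch of that per-pixel weighting idea (a generic illustration of exposure fusion, not OnePlus's HDR+ implementation):

```python
# Toy sketch of exposure fusion: merge bracketed frames by weighting
# each pixel by how well exposed it is (values near mid-gray count
# most, clipped values near 0 or 255 count least). Generic
# illustration, not OnePlus's HDR+ implementation.

def exposure_weight(v, mid=128.0):
    """1.0 at mid-gray, falling to 0.0 at fully dark or clipped."""
    return max(0.0, 1.0 - abs(v - mid) / mid)

def fuse(exposures):
    """Per-pixel weighted average across equally sized frames."""
    fused = []
    for pixels in zip(*exposures):
        weights = [exposure_weight(v) for v in pixels]
        total = sum(weights)
        if total == 0:  # every frame clipped here: fall back to the mean
            fused.append(sum(pixels) / len(pixels))
        else:
            fused.append(sum(v * w for v, w in zip(pixels, weights)) / total)
    return fused

# Three bracketed "frames" of a 3-pixel scene: underexposed, normal,
# overexposed. The fused result leans on whichever frame exposes
# each pixel best.
under, normal, over = [10, 40, 90], [60, 128, 230], [140, 250, 255]
print([round(v) for v in fuse([under, normal, over])])  # [107, 112, 123]
```

Real HDR pipelines add alignment, denoising and tone mapping on top, but this is the core reason merged shots keep detail in both shadows and highlights.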