What’s going on with Sony’s new camera hardware?
Now, Sony’s announcement is fairly low-level, but parsing through the technical documentation reveals some interesting results, all stemming from the underlying way Sony is stacking pixels. Effectively, the CMOS image sensor’s photodiodes and pixel transistors are layered onto separate, stacked substrates, as shown in the image below, rather than sitting side by side on a single substrate. The end result, according to Sony, is that the new hardware, which could potentially make its way into future smartphones, nearly doubles the saturation signal level compared to conventional sensors. In effect, each pixel can capture nearly twice as much light before clipping. That, in turn, “widens dynamic range and reduces noise, thereby substantially improving imaging properties.” In short, the new architecture should deliver wider dynamic range and fewer artifacts in images, and, according to Sony, better prevention of both underexposure and overexposure. All of which should lead to better imaging overall, wherever this two-layer sensor is in use.
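To put that “nearly doubles the saturation signal level” claim in perspective, here’s a quick back-of-the-envelope sketch. The dynamic range formula is standard sensor arithmetic, but the full-well capacity and read-noise figures are purely illustrative assumptions, not numbers from Sony’s documentation.

```python
# Illustrative only: how a doubled saturation signal level translates
# into dynamic range. The electron counts below are assumed values,
# not published Sony specs.
import math

def dynamic_range_db(full_well_electrons: float, read_noise_electrons: float) -> float:
    """Dynamic range in dB: 20 * log10(max signal / noise floor)."""
    return 20 * math.log10(full_well_electrons / read_noise_electrons)

read_noise = 2.0                     # electrons RMS (assumed)
conventional_fwc = 6000              # full-well capacity, electrons (assumed)
stacked_fwc = 2 * conventional_fwc   # "nearly doubled" saturation level

before = dynamic_range_db(conventional_fwc, read_noise)
after = dynamic_range_db(stacked_fwc, read_noise)
print(f"Conventional sensor: {before:.1f} dB")
print(f"Two-layer sensor:    {after:.1f} dB")
print(f"Gain: {after - before:.2f} dB (~one extra stop of highlight headroom)")
```

Whatever the actual electron counts turn out to be, the ratio is what matters: doubling the saturation level adds about 6 dB, or roughly one stop, of dynamic range, which lines up with Sony’s framing of reduced overexposure.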
This could mark another step forward for smartphone camera technology, once again led by Sony
The most obvious beneficiary of this Sony innovation is the smartphone camera. And the timing of the advancement couldn’t be better, with competition in the space heating up significantly over the past few generations. But while some OEMs are focused primarily on megapixel count, Sony is taking a different route. Better still, it’s one that could benefit more than just top-end Sony smartphones, since the company is still the leading manufacturer of the camera hardware used in most handsets.