Creating super-reality video experiences
We have developed a wide range of signal processing technologies for video products, including super-resolution processing that converts various video formats to 4K/8K quality, noise reduction, tone and color conversion, and motion-blur elimination. These technologies enable high resolution, high dynamic range, and wide color gamut for many kinds of video at many levels of quality. They compensate well for degradation in spatiotemporal resolution, tone, contrast, and color caused by noise, data compression, and other factors. Through high-quality imaging, our fundamental imaging technology, we will deliver our own brand of texture and realism as we pursue ever-better video experiences.
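As a toy illustration of the kind of processing described above (not Sony's actual pipeline), the sketch below upscales a frame by nearest-neighbour duplication and then sharpens it with a simple unsharp mask; the function names and parameters are illustrative assumptions.

```python
import numpy as np

def upscale_2x(frame: np.ndarray) -> np.ndarray:
    """Naive 2x upscale: each pixel becomes a 2x2 block (a baseline that
    real super-resolution methods improve on)."""
    return np.repeat(np.repeat(frame, 2, axis=0), 2, axis=1)

def unsharp_mask(frame: np.ndarray, amount: float = 0.5) -> np.ndarray:
    """Sharpen by adding back the difference from a 3x3 box blur."""
    h, w = frame.shape
    pad = np.pad(frame, 1, mode="edge")
    blur = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return np.clip(frame + amount * (frame - blur), 0.0, 1.0)

low = np.random.rand(8, 8)            # stand-in for a low-resolution luma frame
high = unsharp_mask(upscale_2x(low))  # 16x16 sharpened result
```

Production super-resolution replaces both steps with learned filters, but the structure (resample, then restore detail) is the same.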
Visual codecs, used for data compression of 2D/3D video, are indispensable for distributing large amounts of video data over the Internet and for recording and storage. Sony has contributed to international standardization in MPEG and has developed customized codec technology for each of its products. As new video formats such as 8K, VR, and free-viewpoint video spread, the amount of visual media data will continue to increase. We are developing implementations of VVC, the latest video codec standard, which achieves the highest compression ratio to date. We are also developing codecs for volumetric video, such as point clouds and CG meshes, which enable new video experiences.
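One building block shared by point-cloud codecs is quantizing point coordinates onto a voxel grid and merging duplicates, which bounds precision and shrinks the data. The sketch below is a minimal, hedged illustration of that single step, not a full codec; the voxel size and function name are assumptions for the example.

```python
import numpy as np

def quantize_points(points: np.ndarray, voxel: float) -> np.ndarray:
    """Snap (N, 3) float coordinates to integer voxel indices and drop
    duplicates -- the lossy geometry-quantization step of a point-cloud codec."""
    idx = np.floor(points / voxel).astype(np.int64)
    return np.unique(idx, axis=0)

rng = np.random.default_rng(0)
cloud = rng.random((10_000, 3))              # synthetic cloud in a unit cube
voxels = quantize_points(cloud, voxel=0.05)  # at most 20**3 = 8000 cells
```

A real codec would follow this with entropy coding of the occupied voxels (e.g., octree traversal in G-PCC-style approaches), but quantization is where the rate/fidelity trade-off is set.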
We are researching holographic displays with the aim of providing holographic representations in the user’s space, as if the objects shown were actually there. Achieving this depends on ongoing progress in device technology, such as light field reconstruction displays, and also requires technology that naturally reproduces video content. Additionally, we are developing signal processing technology to maximize device performance, along with technology to create appropriate viewpoints for the user and to compose varied content with consideration for human visual characteristics. We will contribute to the progress of display technology toward intuitive and natural holographic imaging.
We are developing a multicamera system that achieves high performance by placing multiple sensors with different characteristics in parallel. In recent years, the digitization of 3D information in the real world, the so-called digital twin, has been tackled in various fields, and depth sensors that measure the distance from sensor to subject have begun to spread. By combining a depth sensor with a conventional camera, we can more easily acquire accurate three-dimensional information from images and depth captured at multiple viewpoints. In addition to technology for detecting corresponding points between sensors, we are developing fusion technology that merges data from multiple sensors, each with different characteristics.
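The depth-plus-camera combination above rests on back-projecting each depth pixel to a 3D point, after which RGB samples and points from other viewpoints can be fused. The sketch below shows that back-projection for an ideal pinhole camera; the intrinsics (fx, fy, cx, cy) and the flat synthetic depth map are assumptions for illustration.

```python
import numpy as np

def backproject(depth: np.ndarray, fx: float, fy: float,
                cx: float, cy: float) -> np.ndarray:
    """Back-project an (H, W) depth map to camera-space 3D points using the
    pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)   # shape (H, W, 3)

depth = np.full((4, 4), 2.0)                  # synthetic flat scene, 2 m away
pts = backproject(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
```

In a multi-sensor rig, each sensor's points would then be transformed by its extrinsic pose into a common frame before fusion.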