AI eyes inspired by the human eye.

Event-based vision sensor (EVS)

By combining AI with a sensor that works like the retina in the human eye, we are expanding our field of view. Having this AI eye installed in various devices such as robots, drones, and automobiles increases their autonomy and brings unprecedented convenience and safety.

Christian Brandli Sony Advanced Visual Sensing AG CEO


Vision for AI that senses the surrounding situation.

Event-Based Vision Sensors (EVS) are image sensors that use smart pixels. These smart pixels are inspired by the way the human eye works, allowing them to detect both stationary and moving objects immediately. They can be used in robotics, mobile devices, mixed reality, automotive, and a range of industrial applications. The purpose of our sensors is not to capture beautiful pictures and videos but to efficiently sense information about the world, which is then in many cases analyzed by AI algorithms. In one instance, we used our sensor to "sleep" while nothing was happening in the scene; once a person entered, it would capture visual data that an AI algorithm then analyzed to determine the person's position, age, gender, and other attributes.
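The "sleep until something happens" behavior described above can be sketched as a wake-on-activity loop over an event stream. The event tuple format `(x, y, timestamp_us, polarity)`, the window size, and the activity threshold here are illustrative assumptions, not Sony's actual API:

```python
# Hypothetical sketch: wake a downstream AI analysis stage only when
# enough events arrive in a time window (i.e., something moved in the scene).
# Event format (x, y, timestamp_us, polarity) and thresholds are assumptions.

def wake_on_activity(event_stream, window_us=10_000, min_events=50):
    """Yield event batches only when activity in a time window exceeds a threshold."""
    batch = []
    window_start = None
    for x, y, t, p in event_stream:
        if window_start is None:
            window_start = t
        batch.append((x, y, t, p))
        if t - window_start >= window_us:
            if len(batch) >= min_events:
                yield batch  # enough activity: hand off for AI analysis
            batch = []       # otherwise stay "asleep" and discard the window
            window_start = t

# Example: a quiet stream (sparse sensor noise) followed by a dense burst
# of events, as when a person enters the scene.
quiet = [(0, 0, t, 1) for t in range(0, 100_000, 5_000)]              # ~2 events/window
burst = [(i % 64, i % 48, 100_000 + i * 100, 1) for i in range(200)]  # dense activity
batches = list(wake_on_activity(quiet + burst))  # only the burst wakes the pipeline
```

The quiet portion never crosses the activity threshold, so nothing is handed downstream; only the burst produces a batch, which is where the power savings of event-driven sensing come from.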

Image sensor with smart pixel
"Event-Based Vision Sensors"

High performance through processing similar to that of the nerve cells in the retina.

The way EVSs process visual information is similar to the way nerve cells in the retina process it before sending it to the brain. By compressing data in the pixels before processing it, the sensor can rapidly capture and evaluate high-speed, high-dynamic-range visual information. As a result, power consumption is reduced while both a fast reaction time and high-dynamic-range visual information are achieved.
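The in-pixel compression described above can be illustrated with the standard event-camera model: each pixel emits an event only when the change in log intensity since its last event exceeds a contrast threshold. This is a minimal sketch of that principle; the threshold value and frame-based simulation are assumptions for illustration, not the actual sensor circuit:

```python
import math

# Minimal sketch of per-pixel event generation in an event-based sensor:
# a pixel fires an event (x, y, t, polarity) only when its log-intensity
# change since the last event exceeds a contrast threshold (value assumed).

def events_from_frames(frames, threshold=0.2):
    """Compare successive intensity frames and emit events for changed pixels only."""
    events = []
    ref = [[math.log(v + 1e-6) for v in row] for row in frames[0]]  # per-pixel reference
    for t, frame in enumerate(frames[1:], start=1):
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                delta = math.log(v + 1e-6) - ref[y][x]
                if abs(delta) >= threshold:
                    events.append((x, y, t, 1 if delta > 0 else -1))
                    ref[y][x] = math.log(v + 1e-6)  # update reference after firing
    return events

# A static scene produces no events at all; only a changing pixel fires.
static = [[[0.5, 0.5]], [[0.5, 0.5]]]
moving = [[[0.5, 0.5]], [[0.5, 1.0]]]
print(events_from_frames(static))  # → []
print(events_from_frames(moving))  # → [(1, 0, 1, 1)]
```

Because unchanged pixels stay silent, the data volume scales with scene activity rather than with resolution times frame rate, which is the compression that enables the low power consumption and fast reaction times mentioned above.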

Practicality made possible by Sony Advanced Visual Sensing.

Apart from substantial improvements in the pixel architecture, which allow us to build sensors with ever-increasing resolution, we have developed a portfolio of event-based algorithms and solutions. These range from collision avoidance systems for drones and localization systems for mixed reality applications to head pose tracking for TVs, high-speed fault detection in industrial applications, and sensor fusion and event-based 3D cameras for a whole range of applications.

Expected uses in autonomous devices such as drones.

Through advances in EVS and edge AI, it will be possible to create ever more powerful "smart cameras" that allow a substantial part of the scene understanding to be performed in the sensor or camera itself. The Intelligent Vision Sensor "IMX500," released this May, is a great example of what we expect to see more of in the future: the borders between sensing and processing will vanish, just as they have already vanished in our eyes and brains.
Looking, for instance, at the first application we tackled with smart cameras: the safety of drones will increase, allowing them to become increasingly autonomous. They can then be expected to contribute to a wide range of fields, such as agriculture and construction site activities, and drones equipped with AI can take over dangerous jobs, such as investigating disasters where humans would be at high risk.

Drone equipped with an EVS-powered collision avoidance system

Unleashing AI from the digital world into the real world.

Sony is uniquely positioned to leverage the AI opportunity. No other company spans more aspects of the content value chain: Sony Semiconductor Solutions captures the photons, Sony Imaging Products & Solutions develops broadcast and production equipment, Sony Pictures Entertainment creates movies, PlayStation™ Network offers various entertainment content, and you can watch it all on your Bravia®. If Sony translates this unique breadth across the content value chain to the interaction value chain, it will become one of the AI giants. If we use the various data we have to train AI models, our deep understanding of semiconductors to build AI accelerators, our content creation and distribution network to create interactive content, and our electronics capabilities to build robots and interactive devices, it will be hard for pure software companies to compete with this ecosystem.
This is very exciting for me, both because we will contribute to a key part of these AI systems and as an end user. I am very much looking forward to the point when Sony brings AI from the digital world of today's search engines, social networks, and e-mail filters into the real world of tomorrow's personal assistants, autonomous driving, and exciting new games.
