Introducing the Sony Group's activities designed to increase the accessibility of its products and services
CSUN is the world's largest international conference on accessibility.
The 36th CSUN Assistive Technology Conference was held online from March 8th to 12th, 2021. Sony has participated since 2018; this was its third exhibition and the first to be held virtually.
Sony introduced accessible products via live streams so that customers could easily understand and feel familiar with them, even in a virtual setting. For BRAVIA® and Walkman®, we demonstrated the screen reading function, which reads aloud the text displayed on the screen. For PlayStation®5, we showed settings such as color correction before and after switching, so that customers could experience the extensive settings menus available. Beyond these products, we also introduced a retail kiosk with braille and integrated audio description, which is being developed in collaboration with Braille Institute, as an accessibility initiative.

All of the live streams were provided with subtitles, and by accepting questions from viewers at any time via voice or text chat, we made it as easy to communicate with customers online as face-to-face. In addition, Shiro Kambe, Senior Executive Vice President and Corporate Executive Officer of Sony Corporation,*1 and Michael Fasulo, then President and COO of Sony North America,*2 delivered messages about Sony's accessibility efforts. For the first time, a virtual session was held during the conference to introduce Sony's ongoing accessibility initiatives for products and services, such as in-house awareness events and collaboration with organizations for people with disabilities, aimed at achieving ease of use for everyone.
Sony Interactive Entertainment's PlayStation®5 provides functions for customizing the system settings and other screens to make them easier to view and use. Users can invert colors, reduce or enlarge text size, set auto-scroll speeds, and reduce controller motion settings. The PlayStation®5 also features a screen text-to-speech function, which reads on-screen text aloud and can announce the current screen operation.
In addition, the DualSense™ Wireless Controller supports a more comfortable and more immersive game playing experience than ever before. For example, you can customize it for greater ease of use by reassigning buttons to suit your preferences, or use the built-in microphone's voice input function to enter text faster and more easily than with the virtual keyboard. Because the vibration changes according to the situation in the game, you can also feel the action unfold on screen, such as pulling a bowstring to its limit or hitting the brakes in a speeding car.
Voice input and the text-to-speech function are supported for the following languages.
Japanese, English (USA), English (UK), German, Italian, French, French (Canada), Spanish, Spanish (Latin America)
The Last of Us Part II for PlayStation®4 builds on the functionality established in Uncharted 4: A Thief's End, with more than 60 accessibility settings covering visual, auditory, and motor support.
All commands can be mapped to any controller input, including touchpad swipes and the motion-sensing function activated by shaking the controller. Repeatedly pressing a button can be replaced with holding it down instead (long press). In addition, you can customize the size, color, and contrast of the information that constantly appears on the screen (the heads-up display). Using the touchpad on the DUALSHOCK®4, the screen enlargement function lets you magnify any part of the screen.
Sony Pictures Entertainment (Japan) Inc. and Aniplex, which produces anime and other content under the umbrella of Sony Music Entertainment, provide barrier-free audio descriptions and barrier-free Japanese captions for some films.
The barrier-free audio description system uses audio to convey information such as people's movements, movie scenes, captions, and on-screen messages, primarily for people with visual impairments. In addition to the dialog, music, and sound effects on the main audio channel, narration describes scenes as they appear on the screen, enabling viewers to picture each scene more clearly and vividly.
Barrier-free captions are primarily for people with hearing disabilities. They show, in textual form, who is speaking and their lines, along with meaningful audio information such as music, sound effects, and ambient noises. Together, barrier-free audio descriptions and barrier-free captions enable more people to enjoy movies and videos.
Some of the titles that offer barrier-free audio descriptions and Japanese captions are introduced below.*