The 30th International Joint Conference on Artificial Intelligence (IJCAI-21) will be held in a Montreal-themed virtual reality from August 19th to August 26th, 2021, due to the Covid-19 pandemic.
August 19–26, 2021
(IJCAI 2021 is a Virtual-only Conference)
It is Sony's pleasure to be a Platinum sponsor of IJCAI 2021.

Recruit information for IJCAI-2021

Our event information

Technologies & Business use case

Recruit information for IJCAI-2021
We look forward to highly motivated individuals applying to Sony so that we can work together to fill the world with emotion and pioneer the future with dreams and curiosity. Join us and be part of a diverse, innovative, creative, and original team to inspire the world.
For Sony AI positions, please see https://ai.sony/joinus/jobroles/.
*The special job offer for IJCAI-2021 has closed. Thank you for the many applications.

Our event information
Invited Talk
AI x Robotics for activating and augmenting human abilities and capabilities

Sony believes that AI and Robotics technology can activate and augment human capabilities and abilities. In 1999, Sony introduced AIBO to the consumer market: a four-legged autonomous robot with pseudo-instincts and emotions that could interact with people like a pet. The responses of this non-verbal robot were interpreted by humans with some degree of freedom, and it was this freedom of interpretation that contributed to the smoothness of the interactions.
Then in 2010, Sony developed a robot prototype called Rudy, which focused more specifically on physical interaction between humans and robots. This was achieved with a special force-controlled actuator (VA) and a method called generalized inverse dynamics (GID) calculation, which enabled Rudy to physically interact with objects and humans in the real world safely. For example, a physically handicapped patient could control Rudy through a simple GUI to fetch a towel from a shelf. This was an example of augmenting a human's physical capability.
The VA and GID were then also applied to a bilaterally controlled manipulator to augment the skills and capabilities of the human hand. Using a micro end effector equipped with dedicated force sensors, we augmented the force sensitivity and positional accuracy of the human hand by a factor of ten.
Now, as we consider AI and Robotics in the field of gastronomy, we are exploring the possibilities of augmenting the chef's imagination and creativity in generating new recipes, and the chef's skills in the actual cooking process.
In this talk, I will explain the activation and augmentation of these various human abilities, and the behavior control architecture behind them.
Speaker:
Masahiro Fujita - VP, Senior Chief Researcher, Sony Group Corporation / Director, Sony AI
Industry Day Talk 01
Temporal Graph-based Hypothesis Generation

Scientific research is led by the hypothesis: the supposition or proposal that forms the basis for further investigation. Traditionally, such hypotheses are formed by researchers and their teams, taking into account vast amounts of previous scientific research. However, this task is becoming more difficult as the volume of data and research grows exponentially. We formulate this hypothesis generation problem as future connectivity prediction in a dynamic attributed graph. The key is to capture the temporal evolution of node-pair (term-pair) relations. We propose an inductive edge (node-pair) embedding method that utilizes both the graph structure and node attributes to encode the temporal node-pair relationship.
Speaker:
Dr. Uchenna Akujuobi - Research Scientist, Sony AI
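The talk frames hypothesis generation as future connectivity prediction on a dynamic attributed graph, with node-pair embeddings built from both graph structure and node attributes. A minimal, illustrative sketch of that formulation follows; the toy graph, the temporal co-occurrence features, and the logistic classifier are invented for exposition and are not the authors' actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dynamic attributed graph: 6 term nodes with attribute vectors,
# and edge snapshots at times t = 0..2 (term-pair co-occurrences).
attrs = rng.normal(size=(6, 4))
snapshots = [
    {(0, 1), (1, 2), (3, 4)},          # t = 0
    {(0, 1), (1, 2), (2, 3), (3, 4)},  # t = 1
    {(0, 1), (2, 3), (3, 4), (4, 5)},  # t = 2
]

def pair_embedding(u, v):
    """Node-pair embedding: both nodes' attributes plus a temporal
    profile (one co-occurrence indicator per past snapshot)."""
    history = [1.0 if (min(u, v), max(u, v)) in s else 0.0 for s in snapshots]
    return np.concatenate([attrs[u], attrs[v], history])

# Labels: 1 if the pair is connected in the "future" snapshot, else 0.
future = {(0, 1), (1, 2), (2, 3), (4, 5), (1, 5)}
pairs = [(u, v) for u in range(6) for v in range(u + 1, 6)]
X = np.stack([pair_embedding(u, v) for u, v in pairs])
y = np.array([1.0 if p in future else 0.0 for p in pairs])

# Tiny logistic regression over the embeddings, trained by gradient descent.
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

# Rank term pairs by predicted future-connectivity score.
scores = 1.0 / (1.0 + np.exp(-X @ w))
for (u, v), s in sorted(zip(pairs, scores), key=lambda t: -t[1])[:3]:
    print(f"pair ({u},{v}): future-link score {s:.2f}")
```

In the actual setting the "terms" are scientific concepts mined from the literature, and the embedding is learned inductively so it can score term pairs never seen during training.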
Industry Day Talk 02
Operationalizing AI Ethics

This talk will cover some of the steps Sony AI is taking to operationalize AI ethics and some of the key research questions Sony AI will be exploring around fairness, transparency, and accountability. Given that AI ethics is an area with many competing values and the need for highly contextualized solutions, research will play a key role in bridging the gap between the goals of AI ethics and their operationalization. This talk will in particular discuss some of the challenges of addressing both privacy law and technical fairness issues in the computer vision context.
Speaker:
Alice Xiang - Senior Research Scientist, Sony AI
Speaker info:
Alice Xiang is a Senior Research Scientist at Sony AI and the Head of the AI Ethics Office for Sony Group. In these roles, Alice leads teams of AI ethics researchers and practitioners who work closely with business units to develop more ethical AI solutions. Alice previously worked as the Head of Fairness, Transparency, and Accountability Research at the Partnership on AI. She also served as a Visiting Scholar at Tsinghua University's Yau Mathematical Sciences Center. Core areas of Alice's research include algorithmic bias mitigation, explainability, causal inference, and algorithmic governance. She has been recognized as one of the 100 Brilliant Women in AI Ethics. Alice is both a statistician and lawyer, with experience developing machine learning models and serving as legal counsel for technology companies. Alice holds a Juris Doctor from Yale Law School, a Master's in Development Economics from Oxford, a Master's in Statistics from Harvard, and a Bachelor's in Economics from Harvard.

Technologies &
Business use case
Technology 01
Prediction One: A Predictive Analytics Tool for Non-Machine-Learning Professionals

Predictive analytics is a machine learning technology that predicts future outcomes from data. It can be beneficial in a wide range of businesses. However, the technical hurdles to running predictive analytics are high, and the shortage of ML professionals is a challenge.
Prediction One is GUI software that allows non-ML professionals to run predictive analytics. It has a UI and functions tailored to non-ML experts, such as fast AutoML technologies and reasoning technologies that explain prediction results. The software is used widely within the Sony Group. A new business with Prediction One at its core was launched in Japan, and there are now over 20,000 user companies in Japan.
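Prediction One itself is GUI software, but the workflow it automates can be illustrated in code: try several candidate models, select the best on held-out data, then explain the winner's predictions. The data, candidate models, and "explanation" below are illustrative stand-ins, not Prediction One's actual internals:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy tabular data: predict a numeric outcome from 3 business features.
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)
X_tr, X_va, y_tr, y_va = X[:150], X[150:], y[:150], y[150:]

def mse(pred, actual):
    return float(np.mean((pred - actual) ** 2))

# "AutoML" step: fit candidate models, keep the best on validation data.
candidates = {}
candidates["baseline_mean"] = (lambda Z: np.full(len(Z), y_tr.mean()), None)
coef, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
candidates["linear"] = (lambda Z: Z @ coef, coef)

best_name, (best_model, best_coef) = min(
    candidates.items(), key=lambda kv: mse(kv[1][0](X_va), y_va)
)
print("selected model:", best_name)

# "Explanation" step: report which features drive the prediction.
if best_coef is not None:
    for i, c in enumerate(best_coef):
        print(f"feature {i}: influence {c:+.2f}")
```

The point of a tool like Prediction One is that a non-ML user never writes this loop: model search and result explanation happen behind the GUI.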
Technology 02
Autonomous moving camera robot for filming live concerts
We are developing a robotics platform technology that enables easy development of robot applications for a variety of uses, built on core functions of movement and manipulation. As one such application, we will introduce the development of a moving camera robot that films powerful images of artists from the stage of a live music concert.
A moving camera robot is a robot system that combines "safe, smooth movement as intended" with "filming as aimed" to achieve high-quality filming while moving near people. To achieve this, we are developing robot application technologies for integrated recognition of the environment and humans, spatio-temporal high-precision movement planning, and robot programming tools.
Young engineers from IIT also participate in this activity as core development members. While taking part in the next-generation live streaming solution project promoted by the entertainment team, the activity is carried out company-wide, drawing on experiments and feedback from actual live music events.

Publications
Publication 01
#J31 Jointly Improving Parsing and Perception for Natural Language Commands through Human-Robot Dialog
Video streams:
Tuesday, 24 August, 2021 (22:30 EDT - Room Green 2)
Thursday, 26 August, 2021 (14:40 EDT - Room Red 3)
Posters:
Tuesday, 24 August, 2021 (23:00 EDT - Room Green)
Thursday, 26 August, 2021 (15:10 EDT - Room Red)
Authors:
Jesse Thomason - University of Southern California
Aishwarya Padmakumar - Amazon
Jivko Sinapov - Tufts University
Nick Walker - University of Washington
Yuqian Jiang - The University of Texas at Austin
Harel Yedidsion - The University of Texas at Austin
Justin Hart - The University of Texas at Austin
Peter Stone - The University of Texas at Austin and Sony AI
Raymond Mooney - The University of Texas at Austin
Link: