ICCV 2025

October 19 (Sun) to October 23 (Thu), 2025, at the Hawai'i Convention Center, Hawai'i, US


ICCV 2025 is the premier international computer vision event comprising the main conference and several co-located workshops and tutorials. It is considered one of the top conferences in computer vision, alongside CVPR and ECCV.

Sony is proud to be a Silver sponsor of ICCV 2025. We look forward to this year's exciting exhibition opportunities, featuring a variety of ways to connect with participants in person.

Sony's Technical Programs at ICCV 2025

Poster Sessions

DuET: Dual Incremental Object Detection via Exemplar-Free Task Arithmetic

Real-world object detection systems, such as those in autonomous driving and surveillance, must continuously learn new object categories while simultaneously adapting to changing environmental conditions. Existing approaches, Class Incremental Object Detection (CIOD) and Domain Incremental Object Detection (DIOD), only address one aspect of this challenge: CIOD struggles in unseen domains, while DIOD suffers from catastrophic forgetting when learning new classes, limiting their real-world applicability. To overcome these limitations, we introduce Dual Incremental Object Detection (DuIOD), a more practical setting that simultaneously handles class and domain shifts in an exemplar-free manner. We propose DuET, a Task Arithmetic-based model merging framework that enables stable incremental learning while mitigating sign conflicts through a novel Directional Consistency Loss. Unlike prior methods, DuET is detector-agnostic, allowing models like YOLO11 and RT-DETR to function as real-time incremental object detectors. To comprehensively evaluate both retention and adaptation, we introduce the Retention-Adaptability Index (RAI), which combines the Average Retention Index (Avg RI) for catastrophic forgetting and the Average Generalization Index for domain adaptability on a common scale. Extensive experiments on the Pascal Series and Diverse Weather Series demonstrate DuET's effectiveness, achieving a +13.12% RAI improvement while preserving 89.3% Avg RI on the Pascal Series (4 tasks), as well as a +11.39% RAI improvement with 88.57% Avg RI on the Diverse Weather Series (3 tasks), outperforming existing methods.
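As a rough illustration of the Task Arithmetic merging idea above, the sketch below merges per-task weight deltas while suppressing sign conflicts. It is a conceptual NumPy toy, not the authors' DuET implementation: the function name, the majority-sign rule, and the scaling are our assumptions (DuET instead mitigates conflicts during training via its Directional Consistency Loss).

```python
import numpy as np

def merge_task_vectors(base, finetuned, scale=1.0):
    """Merge per-task weight deltas ("task vectors") into the base weights.

    Entries whose deltas disagree in sign across tasks are a known source
    of interference; here we keep, per entry, only the deltas that share
    the majority sign. This simple rule is an illustrative stand-in for
    DuET's learned conflict mitigation.
    """
    stacked = np.stack([w - base for w in finetuned])   # (num_tasks, ...)
    majority = np.sign(np.sign(stacked).sum(axis=0))    # per-entry majority sign
    agrees = np.sign(stacked) == majority               # mask conflicting deltas
    merged_delta = np.where(agrees, stacked, 0.0).sum(axis=0)
    return base + scale * merged_delta

base = np.zeros(4)
task_a = np.array([0.2, -0.1, 0.3, 0.0])
task_b = np.array([0.1,  0.2, 0.3, 0.0])
merged = merge_task_vectors(base, [task_a, task_b])
```

Entries where the two task vectors agree in sign are summed; the conflicting second entry is zeroed out.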

- Authors: Munish Monga (Sony Research India), Vishal Chudasama (Sony Research India), Pankaj Wasnik (Sony Research India), Biplab Banerjee
- Date/Time: [Poster Session 1] October 21st (Tue) 11:30 - 13:30 (HST)
- Poster #: 288

< Link >
- https://iccv.thecvf.com/virtual/2025/poster/2403
- https://arxiv.org/abs/2506.21260

Image Intrinsic Scale Assessment: Bridging the Gap Between Quality and Resolution

Image Quality Assessment (IQA) measures and predicts perceived image quality by human observers. Although recent studies have highlighted the critical influence that variations in the scale of an image have on its perceived quality, this relationship has not been systematically quantified. To bridge this gap, we introduce the Image Intrinsic Scale (IIS), defined as the largest scale where an image exhibits its highest perceived quality. We also present the Image Intrinsic Scale Assessment (IISA) task, which involves subjectively measuring and predicting the IIS based on human judgments. We develop a subjective annotation methodology and create the IISA-DB dataset, comprising 785 image-IIS pairs annotated by experts in a rigorously controlled crowdsourcing study with verified reliability. Furthermore, we propose WIISA (Weak-labeling for Image Intrinsic Scale Assessment), a strategy that leverages how the IIS of an image varies with downscaling to generate weak labels. Experiments show that applying WIISA during the training of several IQA methods adapted for IISA consistently improves the performance compared to using only ground-truth labels. We will release the code, dataset, and pre-trained models upon acceptance.
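The weak-labeling idea can be sketched under one plausible reading of the IIS definition: if an image exhibits its best quality at intrinsic scale s, a quality-neutral downscale by factor f should shift that scale to s / f, capped at 1.0. Both the helper `weak_iis_labels` and this exact rule are our assumptions, not the paper's formula.

```python
def weak_iis_labels(iis, downscale_factors):
    """Weak IIS labels for downscaled copies of an image.

    Assumes that downscaling by factor f rescales the intrinsic scale s
    to s / f, capped at 1.0 (an image cannot peak in quality beyond its
    own size). This rule is our reading of the weak-labeling idea, not
    the paper's exact formula.
    """
    return [min(1.0, iis / f) for f in downscale_factors]

# An image with IIS 0.4: halving its size pushes the weak label toward 1.0.
labels = weak_iis_labels(0.4, [1.0, 0.8, 0.5, 0.25])
```

Each downscaled copy thus inherits a weak label for free, which is what lets WIISA augment the ground-truth annotations.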

- Authors: Vlad Hosu (Sony AI), Lorenzo Agnolucci, Daisuke Iso (Sony AI), Dietmar Saupe
- Date/Time: [Poster Session 3] October 22nd (Wed) 10:45 - 12:45 (HST)
- Poster #: 269

< Link >
- https://iccv.thecvf.com/virtual/2025/poster/1850
- https://arxiv.org/abs/2502.06476

Learning Hierarchical Line Buffer for Image Processing

In recent years, neural networks have achieved significant progress in offline image processing. However, in online scenarios, particularly in on-chip implementations, memory usage emerges as a critical bottleneck due to the limited memory resources of integrated image processors. In this study, we focus on reducing the memory footprint of neural networks for on-chip image processing by optimizing network design for efficient memory utilization. Specifically, we consider a typical scenario in which images output from an image sensor are processed sequentially using line buffers in a line-by-line manner. This setting necessitates modeling both intra-line and inter-line correlations, capturing dependencies among pixels within a single line group and across different line groups, respectively. To model intra-line correlations, we propose a progressive feature enhancement strategy, where line pixels are processed with expanding strip convolutions in multiple stages. For inter-line correlation modeling, we introduce a hierarchical line buffer formulation, where features extracted from previous lines are incrementally reused and compressed across multiple hierarchical levels. Comprehensive experiments on various image processing tasks, including RAW denoising, Gaussian denoising, and super-resolution, demonstrate that the proposed method achieves a better trade-off between performance and memory efficiency than previous solutions, e.g., up to a 1 dB PSNR gain in RAW denoising at one-fifth of the peak memory usage.
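The line-by-line setting can be illustrated with a toy NumPy filter that only ever holds a bounded FIFO of recent rows, which is the memory model a hardware line buffer imposes. This is a sketch of the constraint, not the paper's network; the hierarchical reuse and compression of older line features is omitted.

```python
import numpy as np

def process_line_by_line(image, buffer_lines=2):
    """Toy vertical smoothing filter that sees the image one row at a time.

    A bounded FIFO holds only the most recent rows, mimicking a hardware
    line buffer: full-frame memory is never allocated. The hierarchical
    compression of older line features from the paper is omitted.
    """
    buffer = []                                   # FIFO of recent rows
    out = np.empty(image.shape, dtype=float)
    for y, line in enumerate(image):
        buffer.append(np.asarray(line, dtype=float))
        if len(buffer) > buffer_lines:
            buffer.pop(0)                         # bounded memory: drop oldest
        out[y] = np.mean(buffer, axis=0)          # uses only buffered rows
    return out

img = np.arange(12, dtype=float).reshape(4, 3)
res = process_line_by_line(img)
```

Peak memory is `buffer_lines` rows regardless of image height, which is the property the paper's hierarchical formulation optimizes for.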

- Authors: Jiacheng Li (Sony AI), Feiran Li (Sony AI), Daisuke Iso (Sony AI)
- Date/Time: [Poster Session 3] October 22nd (Wed) 10:45 - 12:45 (HST)
- Poster #: 106

< Link >
- https://iccv.thecvf.com/virtual/2025/poster/1359
- https://openaccess.thecvf.com/content/ICCV2025/papers/Li_Learning_Hierarchical_Line_Buffer_for_Image_Processing

Beyond RGB: Adaptive Parallel Processing for RAW Object Detection

Object detection models are typically applied to standard RGB images processed through Image Signal Processing (ISP) pipelines, which are designed to enhance sensor-captured RAW images for human vision. However, these ISP functions can lead to a loss of critical information that may be essential in optimizing for computer vision tasks, such as object detection. In this work, we introduce Raw Adaptation Module (RAM), a module designed to replace the traditional ISP, with parameters optimized specifically for RAW object detection. Inspired by the parallel processing mechanisms of the human visual system, RAM departs from existing learned ISP methods by applying multiple ISP functions in parallel rather than sequentially, allowing for a more comprehensive capture of image features. These processed representations are then fused in a specialized module, which dynamically integrates and optimizes the information for the target task. This novel approach not only leverages the full potential of RAW sensor data but also enables task-specific pre-processing, resulting in superior object detection performance. Our approach outperforms RGB-based methods and achieves state-of-the-art results across diverse RAW image datasets under varying lighting conditions and dynamic ranges.
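A minimal sketch of the parallel-ISP idea: several simple ISP-style functions are applied to the same RAW signal side by side and then fused. The branch choices (gamma, log tone mapping, identity) and fixed fusion weights below are illustrative assumptions; in RAM, both the branch parameters and the fusion module are learned end-to-end for the detection task.

```python
import numpy as np

def parallel_isp(raw, weights=(0.4, 0.3, 0.3)):
    """Apply simple ISP-style functions in parallel and fuse the results.

    The branches and fixed convex fusion weights are illustrative
    assumptions, not RAM's learned components.
    """
    raw = np.clip(raw, 0.0, 1.0)                 # normalized RAW intensities
    branches = [
        raw ** (1 / 2.2),                        # gamma correction
        np.log1p(raw) / np.log(2.0),             # log tone mapping, [0,1]->[0,1]
        raw,                                     # pass-through
    ]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize to convex weights
    return sum(wi * b for wi, b in zip(w, branches))

out = parallel_isp(np.array([0.0, 0.25, 1.0]))
```

Because the branches run in parallel rather than as a sequential pipeline, each fused feature still sees the unmodified RAW input, which is the motivation the abstract describes.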

- Authors: Shani Gamrian (Sony Research), Hila Barel (Sony Research), Feiran Li (Sony Research), Masakazu Yoshimura (Sony Group Corporation), Daisuke Iso (Sony Research)
- Date/Time: [Poster Session 2] October 21st (Tue) 15:00 - 17:00 (HST)
- Poster #: 49

< Link >
- https://iccv.thecvf.com/virtual/2025/poster/1450
- https://arxiv.org/abs/2503.13163

Transformed Low-rank Adaptation via Tensor Decomposition and Its Applications to Text-to-image Models

Parameter-Efficient Fine-Tuning (PEFT) of text-to-image models has become an increasingly popular technique with many applications. Among the various PEFT methods, Low-Rank Adaptation (LoRA) and its variants have gained significant attention due to their effectiveness, enabling users to fine-tune models with limited computational resources. However, the approximation gap between the low-rank assumption and the desired fine-tuning weights prevents the simultaneous acquisition of ultra-parameter-efficiency and better performance. To reduce this gap and further improve the power of LoRA, we propose a new PEFT method that combines two classes of adaptations, namely, transform and residual adaptations. Specifically, we first apply a full-rank, dense transform to the pre-trained weight. This learnable transform is expected to align the pre-trained weight as closely as possible with the desired weight, thereby reducing the rank of the residual weight. The residual part can then be effectively approximated by more compact and parameter-efficient structures, with a smaller approximation error. To achieve ultra-parameter-efficiency in practice, we design highly flexible and effective tensor decompositions for both the transform and residual adaptations. Additionally, popular PEFT methods such as DoRA can be summarized under this transform-plus-residual adaptation scheme. Experiments are conducted on fine-tuning Stable Diffusion models in subject-driven and controllable generation. The results show that our method achieves better performance and parameter efficiency than LoRA and several baselines.
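In matrix form, the transform-plus-residual scheme can be sketched as W' = T W0 + A B: a full-rank transform T of the frozen weight plus a low-rank residual. The NumPy toy below uses random stand-ins for the trainable pieces and omits the paper's tensor decompositions of T, A, and B.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2
W0 = rng.standard_normal((d, d))       # frozen pre-trained weight

# Trainable pieces (random stand-ins here): a full-rank dense transform T
# applied to W0, plus a rank-r residual A @ B. Only T, A, B are updated
# during fine-tuning; W0 stays frozen.
T = np.eye(d) + 0.01 * rng.standard_normal((d, d))
A = 0.1 * rng.standard_normal((d, r))
B = 0.1 * rng.standard_normal((r, d))

W_adapted = T @ W0 + A @ B             # transform + residual adaptation

# Setting T = I recovers the vanilla LoRA update W0 + A @ B.
W_lora = W0 + A @ B
```

The point of the transform is that a well-aligned T W0 leaves a residual of lower effective rank than W' - W0 itself, so a smaller r (and hence fewer parameters) suffices.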

- Authors: Zerui Tao, Yuhta Takida (Sony AI), Naoki Murata (Sony AI), Qibin Zhao, Yuki Mitsufuji (Sony Group Corporation/Sony AI)
- Date/Time: [Poster Session 4] October 22nd (Wed) 14:30 - 16:30 (HST)
- Poster #: 137

< Link >
- https://iccv.thecvf.com/virtual/2025/poster/1237
- https://arxiv.org/abs/2501.08727

TITAN-Guide: Taming Inference-Time Alignment for Guided Text-to-Video Diffusion Models

Recent conditional diffusion models still require heavy supervised fine-tuning to control a given category of tasks. Training-free conditioning via guidance with off-the-shelf models is a favorable alternative that avoids further fine-tuning of the base model. However, existing training-free guidance frameworks either impose heavy memory requirements or deliver sub-optimal control due to rough estimation. These shortcomings limit their applicability to diffusion models that require intense computation, such as Text-to-Video (T2V) diffusion models. In this work, we propose Taming Inference Time Alignment for Guided Text-to-Video Diffusion Models, dubbed TITAN-Guide, which overcomes these memory issues and provides more optimal control in the guidance process than its counterparts. In particular, we develop an efficient method for optimizing diffusion latents without backpropagation through a discriminative guiding model; specifically, we study forward gradient descent for guided diffusion tasks with various options for directional directives. In our experiments, we demonstrate the effectiveness of our approach in efficiently managing memory during latent optimization, where previous methods fall short. Our proposed approach not only minimizes memory requirements but also significantly enhances T2V performance across a range of diffusion guidance benchmarks.
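Forward gradient descent, the core trick above, can be sketched as follows: sample random directions v, estimate the directional derivative of the guidance loss along each v (here with a forward finite difference standing in for an exact forward-mode JVP), and step along the average of v weighted by those derivatives, so no backward pass or activation storage is needed. Everything below is a toy on a quadratic loss, not the TITAN-Guide procedure.

```python
import numpy as np

def forward_gradient_step(x, f, lr=0.1, n_dirs=16, eps=1e-4):
    """One forward-gradient descent step on f, with no backward pass.

    For each random direction v, the directional derivative of f at x is
    estimated with a forward finite difference (a stand-in for an exact
    forward-mode JVP); averaging dd * v over directions gives a gradient
    surrogate whose expectation is the true gradient.
    """
    rng = np.random.default_rng(0)          # fixed directions for reproducibility
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        v = rng.standard_normal(x.shape)
        dd = (f(x + eps * v) - f(x)) / eps  # directional derivative estimate
        g += dd * v
    return x - lr * g / n_dirs

f = lambda x: float(np.sum(x ** 2))         # toy stand-in for a guidance loss
x = np.array([2.0, -1.5])
for _ in range(150):
    x = forward_gradient_step(x, f)         # loss shrinks without backprop
```

Because only forward evaluations of f are needed, memory stays flat in the depth of the guiding model, which is the property that makes the approach attractive for T2V-scale latents.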

- Authors: Christian Simon (Sony Group Corporation), Masato Ishii (Sony AI), Akio Hayakawa (Sony AI), Zhi Zhong (Sony Group Corporation), Shusuke Takahashi (Sony Group Corporation), Takashi Shibuya (Sony AI), Yuki Mitsufuji (Sony Group Corporation/Sony AI)
- Date/Time: [Poster Session 4] October 22nd (Wed) 14:30 - 16:30 (HST)
- Poster #: 170

< Link >
- https://iccv.thecvf.com/virtual/2025/poster/420
- https://arxiv.org/abs/2508.00289

ToF-Splatting: Dense SLAM using Sparse Time-of-Flight Depth and Multi-Frame Integration

Time-of-Flight (ToF) sensors provide efficient active depth sensing at relatively low power budgets; among such designs, only very sparse measurements from low-resolution sensors are considered to meet the increasingly tight power constraints of mobile and AR/VR devices. However, such extreme sparsity limits the seamless use of ToF depth in SLAM. In this work, we propose ToF-Splatting, the first 3D Gaussian Splatting-based SLAM pipeline tailored to make effective use of very sparse ToF input data. Our approach improves upon the state of the art by introducing a multi-frame integration module, which produces dense depth maps by merging cues from extremely sparse ToF depth, monocular color, and multi-view geometry. Extensive experiments on both synthetic and real sparse ToF datasets demonstrate the viability of our approach, as it achieves state-of-the-art tracking and mapping performance on reference datasets.

- Authors: Andrea Conti (Sony DepthSensing Solutions, Belgium), Matteo Poggi, Valerio Cambareri (Sony DepthSensing Solutions, Belgium), Martin R. Oswald, Stefano Mattoccia
- Date/Time: [Poster Session 6] October 23rd (Thu) 14:30 - 16:30 (HST)
- Poster #: 349

< Link >
- https://iccv.thecvf.com/virtual/2025/poster/1105
- https://arxiv.org/abs/2504.16545

Workshops

Organizers / Keynote
2nd AI for Content Generation, Quality Enhancement and Streaming

- Date/Time: October 20th (Mon) 9:00 - 17:30 (HST)
- Place: 317 B
- Organizers: Marcos V. Conde, Radu Timofte, Eduard Zamfir, Julian Tanke (Sony AI), Takashi Shibuya (Sony AI), Yuki Mitsufuji (Sony Group Corporation/Sony AI), Varun Jain, Fan Zhang, Heather Yu
- Keynote Title: Advances in Audiovisual Generative Models
- Keynote Speaker: Yuki Mitsufuji (Sony Group Corporation/Sony AI)

Organizers
Generative AI for Audio-Visual Content Creation

- Date/Time: October 19th (Sun) 8:55 - 12:30 (HST)
- Place: 323 B
- Organizers: Masato Ishii (Sony AI), Takashi Shibuya (Sony AI), Yuki Mitsufuji (Sony Group Corporation/Sony AI), Ho Kei Cheng, Alexander Schwing, Prem Seetharaman, Oriol Nieto, Justin Salamon, David Bourgin, Bryan Russell, Ziyang Chen, Sanjoy Chowdhury

Poster
First Workshop on Skilled Activity Understanding, Assessment and Feedback Generation
EgoOops: A Dataset for Mistake Action Detection from Egocentric Videos referring to Procedural Texts

- Date/Time: October 19th (Sun) 14:00 - 18:00 (HST)
- Place: 318 A
- Authors: Yuto Haneji, Taichi Nishimura (Sony Interactive Entertainment), Hirotaka Kameko, Keisuke Shirai, Tomoya Yoshida, Keiya Kajimura, Koki Yamamoto, Taiyu Cui, Tomohiro Nishimoto, Shinsuke Mori

Spotlight / Poster
2nd Workshop on Neuromorphic Vision (NeVi): Advantages and Applications of Event Cameras
Lattice-allocated Real-time Line Segment Feature Detection and Tracking Using Only an Event-based Camera

- Date/Time: [Spotlight] October 20th (Mon) 09:25 - 09:35 (HST) / [Poster] October 20th (Mon) 10:35 - 11:55 (HST)
- Place: 303 B
- Authors: Mikihiro Ikura, Arren Glover, Masayoshi Mizuno (Sony Interactive Entertainment), Chiara Bartolozzi

Poster
2nd Workshop on Neuromorphic Vision (NeVi): Advantages and Applications of Event Cameras
GraphEnet: Event-driven Human Pose Estimation with a Graph Neural Network

- Date/Time: [Poster] October 20th (Mon) 10:35 - 11:55 (HST)
- Place: 303 B
- Authors: Gaurvi Goyal, Pham Cong Thuong, Arren Glover, Masayoshi Mizuno (Sony Interactive Entertainment), Chiara Bartolozzi

Poster
The 3rd workshop on Binary and Extreme Quantization for Computer Vision
Extreme Compression of Adaptive Neural Images

- Date/Time: [Poster] October 20th (Mon) 8:15 - 12:00 (HST)
- Place: 308 A
- Authors: Leo Hoshikawa (Sony Interactive Entertainment), Marcos V. Conde, Takeshi Ohashi (Sony Group Corporation), Atsushi Irie (Sony Group Corporation)

Poster
The Second Workshop on Multimodal Representation and Retrieval
Towards reporting bias in visual-language datasets: bimodal augmentation by decoupling object-attribute association

- Date/Time: October 20th (Mon) 8:30 - 12:30 (HST)
- Place: 308 B
- Authors: Qiyu Wu (Sony Group Corporation/The University of Tokyo), Mengjie Zhao (Sony Group Corporation), Yutong He, Lang Huang, Junya Ono (Sony Group Corporation), Hiromi Wakaki (Sony Group Corporation), Yuki Mitsufuji (Sony Group Corporation/Sony AI)

Poster
9th Workshop and Competition on Affective & Behavior Analysis in-the-wild (ABAW)
Multimodal Viewer Responses to Japanese Manzai Comedy Dataset for Affective Computing

- Date/Time: October 19th (Sun) 8:00 - 12:30 (HST)
- Place: 313 C
- Authors: Kazuki Kawamura (Sony CSL-Kyoto/Sony Group Corporation/The University of Tokyo), Kengo Nakai, Jun Rekimoto (Sony CSL-Kyoto/The University of Tokyo)

Poster
5th Workshop on Open-World 3D Scene Understanding
Sparse Multiview Open-Vocabulary 3D Detection

- Date/Time: October 19th (Sun) 13:30 - 17:30 (HST)
- Place: 306 B
- Authors: Olivier Moliner (Centre for Mathematical Sciences, Lund University/Sony Corporation, Lund Laboratory, Sweden), Viktor Larsson, Kalle Astrom

Poster
Generative AI for Audio-Visual Content Creation
SpecMaskFoley: Efficient Yet Effective Synchronized Video-to-audio Synthesis via Pretraining and ControlNet

- Date/Time: October 19th (Sun) 8:55 - 12:30 (HST)
- Place: 323 B
- Authors: Zhi Zhong (Sony Group Corporation), Akira Takahashi (Sony Group Corporation), Shuyang Cui (Sony Group Corporation), Keisuke Toyama (Sony Group Corporation), Shusuke Takahashi (Sony Group Corporation), Yuki Mitsufuji (Sony Group Corporation/Sony AI)

Poster
Advances in Image Manipulation Workshop and Challenges
DMS: Diffusion-Based Multi-Baseline Stereo Generation for Improving Self-Supervised Depth Estimation

- Date/Time: October 20th (Mon) 8:00 - 18:00 (HST)
- Place: 311
- Authors: Zihua Liu, Yizhou Li (Sony Semiconductor Solutions Group), Songyan Zhang, Masatoshi Okutomi

Poster
Advances in Image Manipulation Workshop and Challenges
AIM 2025 Challenge on Real-World RAW Image Denoising

- Date/Time: October 20th (Mon) 8:00 - 18:00 (HST)
- Place: 311
- Authors: Marcos V. Conde, Feiran Li (Sony AI), Jiacheng Li (Sony AI), Beril Besbinar (Sony AI), Vlad Hosu (Sony AI), Daisuke Iso (Sony AI), Radu Timofte

Tutorial

Lecture
A Tour Through AI-powered Photography and Imaging
The Full Stack of RAW Denoising: From Pixels to Silicon

- Date/Time: [Workshop] October 19th (Sun) 09:30 - 12:30 (HST) / [Talk] October 19th (Sun) 09:45 - 10:15 (HST)
- Place: 326 B
- Speakers: Jiacheng Li (Sony AI), Feiran Li (Sony AI)

Challenge

Advances in Image Manipulation (AIM) Workshop and Challenges
AIM 2025 Real-World RAW Denoising Challenge

- Date/Time: October 20th (Mon) 8:00 - 17:00 (HST)
- Place: 311
- Organizers: Marcos V. Conde, Radu Timofte, Feiran Li (Sony AI), Jiacheng Li (Sony AI), Beril Besbinar (Sony AI), Vlad Hosu (Sony AI), Daisuke Iso (Sony AI)

Exhibition Booth
Open Hours

Visit our booth at the exhibition to explore our latest technology firsthand and engage with our team.

Location: #505 (KAMEHAMEHA1 Hall)
Sony Booth Open Hours:
- October 21st (Tue) 11:30 - 17:00 (HST)
- October 22nd (Wed) 10:45 - 16:30 (HST)
- October 23rd (Thu) 10:45 - 16:30 (HST)

Exhibition Booth
Technology Presentations

ICCV 2025 Technology Presentation Timetable

DuET: Dual Incremental Object Detection via Exemplar-Free Task Arithmetic

Presenter: Munish Monga
Date/Time:
- Oct 21st (Tue) 15:00-15:15
- Oct 22nd (Wed) 12:00-12:15
Link:
- https://iccv.thecvf.com/virtual/2025/poster/2403
- https://arxiv.org/abs/2506.21260

Image Intrinsic Scale Assessment: Bridging the Gap Between Quality and Resolution

Presenter: Vlad Hosu
Date/Time:
- Oct 21st (Tue) 12:15-12:30
- Oct 22nd (Wed) 15:00-15:15
Link:
- https://iccv.thecvf.com/virtual/2025/poster/1850
- https://arxiv.org/abs/2502.06476
- https://github.com/SonyResearch/IISA

Beyond RGB: Adaptive Parallel Processing for RAW Object Detection

Presenter: Shani Gamrian
Date/Time:
- Oct 21st (Tue) 12:30-12:45
- Oct 22nd (Wed) 12:30-12:45
Link:
- https://iccv.thecvf.com/virtual/2025/poster/1450
- https://arxiv.org/abs/2503.13163
- https://github.com/SonyResearch/RawAdaptationModule

Generative and Protective AI for Content Creation

Presenter: Takashi Shibuya
Date/Time:
- Oct 21st (Tue) 11:45-12:00
- Oct 22nd (Wed) 15:15-15:30
Link:
- https://github.com/sony/creativeai
- https://genprocc.github.io/

Learning Hierarchical Line Buffer for Image Processing

Presenter: Jiacheng Li
Date/Time:
- Oct 21st (Tue) 12:00-12:15
Link:
- https://iccv.thecvf.com/virtual/2025/poster/1359

Collaborative Research with Universities and Hosting Conversational Agents Competition

Presenter: Hiromi Wakaki
Date/Time:
- Oct 21st (Tue) 15:30-15:45
- Oct 22nd (Wed) 14:45-15:00
References:
- VinaBench (CVPR 2025)
- CARE (EMNLP 2025)
- Wordplay Workshop (EMNLP 2025)
- CPDC

Multimodal NLP for Content Creation

Presenter: Qiyu Wu
Date/Time:
- Oct 21st (Tue) 15:15-15:30
- Oct 22nd (Wed) 14:30-14:45
References:
- BiAug (MRR at ICCV2025)
- DeepResonance (EMNLP 2025)
- MVD (RepL4NLP at NAACL 2025)
- GLOV (TMLR)

Sparse Multiview Open-Vocabulary 3D Detection

Presenter: Olivier Moliner
Date/Time:
- Oct 21st (Tue) 15:45-16:00
- Oct 22nd (Wed) 12:15-12:30
Link:
- https://www.arxiv.org/abs/2509.15924

Vision as a Coach: Grounded AI Coaching from Video & Images

Presenter: Kazuki Kawamura
Date/Time:
- Oct 22nd (Wed) 11:45-12:00
References:
- Kawamura, K., & Rekimoto, J. FastPerson: Enhancing Video-Based Learning through Video Summarization that Preserves Linguistic and Visual Contexts. AH’24.
- Kawamura, K., & Rekimoto, J. SakugaFlow: A Stagewise Illustration Framework Emulating the Human Drawing Process and Providing Interactive Tutoring for Novice Drawing Skills. GenAICHI’25.
- Kawamura, K., & Rekimoto, J. DDSupport: Language Learning Support System that Displays Differences and Distances from Model Speech. UIST ’21 / ICMLA’22.
- Kawamura, K., & Rekimoto, J. AIx Speed: Playback Speed Optimization Using Listening Comprehension of Speech Recognition Models. UIST ’22 / AH ’23.
- Kawamura, K., Nakai, K., & Rekimoto, J. A Multimodal Dataset of Viewer Responses to Japanese Manzai Comedy. ICCVW’25.

Lattice-allocated Real-time Line Segment Feature Detection and Tracking Using Only an Event-based Camera

Presenter: Masayoshi Mizuno
Date/Time:
- Oct 21st (Tue) 12:45-13:15, 16:00-16:30
- Oct 22nd (Wed) 11:15-11:45, 15:45-16:15
Link:
- https://iccv.thecvf.com/virtual/2025/workshop/2731
- https://sites.google.com/view/nevi-2025/home-page?authuser=0
- https://arxiv.org/abs/2510.06829

Event Report

Sony had a strong presence at ICCV 2025 in Hawai'i, US, where we introduced our latest research and technologies.
We were delighted to welcome many visitors to our booth and engage in meaningful conversations with researchers and professionals from around the world.
Our Technology Blog captures these moments along with details of our accepted papers.

Visit the blog to learn more about our contributions:
https://www.sony.com/en/SonyInfo/technology/stories/entries/ICCV2025_report/

Follow us on LinkedIn!


Sony Women in Technology Award with Nature


This annual award honors outstanding early to mid-career women researchers pioneering breakthroughs in science, technology, engineering, and mathematics.
Each year, Sony and Nature recognize three researchers with a prize of $250,000 USD each and a chance to showcase work on nature.com.

The application period for the 2026 award is now closed. Eligible ICCV attendees are encouraged to apply next year. Join our newsletter to be the first to know when the next cycle opens.

Career Information

We look forward to working with highly motivated individuals who want to fill the world with emotion and to pioneer future innovation through dreams and curiosity. If interested, please visit our career site and/or the Sony booth (#505) at the ICCV exhibition hall (KAMEHAMEHA1 Hall) to learn more about Sony Group.

Career Site Link: https://www.sony.com/en/SonyInfo/Careers/

Sony Group Technology Portal

You can explore our technology by clicking HERE.