Xingyu Chen (陈星宇)

I received my M.S. from the Institute of Artificial Intelligence and Robotics at Xi'an Jiaotong University and my B.S. from Chongqing University.

In 2021 and 2022, I was a research intern at Tencent AI Lab, working with Xuan Wang and Qi Zhang and hosted by Jue Wang.

My research interests span computer vision, machine learning, and graphics, with a focus on spatial intelligence.

Email  |  Google Scholar  |  GitHub  |  Twitter

Research

The world we see is constantly changing: how do intelligent systems generalize to new observations? This question drives my quest to understand the mechanisms underlying spatial intelligence and to develop methods that equip artificial intelligence with this remarkable capability.

Specifically, I am investigating how generalizability can emerge from reusable 3D & 4D representations, how these representations of the dynamic 3D world could be learned from images & videos, and how inductive biases could serve as expert knowledge to reduce unknown parameters and make learning more efficient.

Equal Contribution *, Corresponding Author †, Project Lead ⚑

L2G-NeRF: Local-to-Global Registration for Bundle-Adjusting Neural Radiance Fields
Yue Chen*, Xingyu Chen*⚑, Xuan Wang†, Qi Zhang, Yu Guo†, Ying Shan, Fei Wang
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
project page / arXiv / paper / code / supplementary / video / poster

We combine local and global alignment via differentiable parameter estimation solvers to achieve robust bundle-adjusting Neural Radiance Fields.
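
At the heart of the global step is a differentiable parameter-estimation solver. As a hedged illustration of that kind of solver (not the full L2G-NeRF pipeline), the sketch below fits a single rigid transform to point correspondences with a Kabsch/Procrustes solve in PyTorch; all values are toy placeholders.

```python
import torch

def kabsch(src, dst):
    """Best-fit rotation R and translation t such that R @ src_i + t ~= dst_i."""
    src_c = src - src.mean(dim=0, keepdim=True)
    dst_c = dst - dst.mean(dim=0, keepdim=True)
    U, _, Vh = torch.linalg.svd(src_c.T @ dst_c)      # 3x3 cross-covariance
    d = torch.sign(torch.linalg.det(Vh.T @ U.T))      # guard against reflections
    D = torch.diag(torch.stack([torch.ones_like(d), torch.ones_like(d), d]))
    R = Vh.T @ D @ U.T
    t = dst.mean(dim=0) - R @ src.mean(dim=0)
    return R, t

# Toy usage: recover a known rigid transform from synthetic correspondences.
src = torch.randn(100, 3)
R_true = torch.linalg.qr(torch.randn(3, 3)).Q
if torch.linalg.det(R_true) < 0:
    R_true = -R_true
dst = src @ R_true.T + torch.tensor([0.3, -0.1, 0.5])
R_est, t_est = kabsch(src, dst)
```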

UV Volumes for Real-time Rendering of Editable Free-view Human Performance
Yue Chen*, Xuan Wang*, Xingyu Chen, Qi Zhang, Xiaoyu Li, Yu Guo†, Jue Wang, Fei Wang
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
project page / arXiv / paper / code / supplementary / video / poster

We separate high-frequency human appearance from the 3D volume and encode it into a 2D texture, which enables real-time rendering and retexturing.
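
As a hedged illustration of the texture-lookup idea only: appearance lives in an editable 2D texture and is fetched by per-sample UV coordinates, so editing the texture retextures the render. The UV-predicting volume itself is omitted, and the shapes and names below are illustrative rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

texture = torch.rand(1, 3, 256, 256)        # editable RGB texture atlas
uv = torch.rand(1, 4096, 1, 2) * 2 - 1      # per-sample UVs in [-1, 1] (would come from the UV volume)

# grid_sample performs the differentiable texture fetch: (1, 3, 4096, 1) RGB samples.
rgb = F.grid_sample(texture, uv, align_corners=True)
print(rgb.shape)  # torch.Size([1, 3, 4096, 1])
```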

PROCA: Place Recognition under Occlusion and Changing Appearance via Disentangled Representations
Yue Chen, Xingyu Chen†⚑, Yicen Li
IEEE International Conference on Robotics and Automation (ICRA), 2023
arXiv / paper / code / video / poster

We decompose the image representation into place, appearance, and occlusion codes, and use the place code as a descriptor to retrieve images.
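
The sketch below is a minimal, illustrative version of the retrieval idea: an encoder splits an image into three codes and only the place code is used for nearest-neighbour ranking. Module names, dimensions, and the (untrained) backbone are placeholders, not the paper's actual architecture or training losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentanglingEncoder(nn.Module):
    """Splits an image embedding into place / appearance / occlusion codes."""
    def __init__(self, feat_dim=512, code_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.place_head = nn.Linear(feat_dim, code_dim)       # identity of the place
        self.appearance_head = nn.Linear(feat_dim, code_dim)  # lighting, weather, season
        self.occlusion_head = nn.Linear(feat_dim, code_dim)   # transient occluders

    def forward(self, images):
        feat = self.backbone(images)
        return self.place_head(feat), self.appearance_head(feat), self.occlusion_head(feat)

@torch.no_grad()
def retrieve(query, database, encoder):
    """Rank database images by cosine similarity of place codes only."""
    q_place, _, _ = encoder(query)        # appearance / occlusion codes are discarded
    d_place, _, _ = encoder(database)
    sim = F.cosine_similarity(q_place, d_place, dim=-1)
    return sim.argsort(descending=True)

# Toy usage with random images and an untrained encoder (ranking is meaningless here).
encoder = DisentanglingEncoder()
ranking = retrieve(torch.rand(1, 3, 64, 64), torch.rand(8, 3, 64, 64), encoder)
```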

Sparse Semantic Map-Based Monocular Localization in Traffic Scenes Using Learned 2D-3D Point-Line Correspondences
Xingyu Chen, Jianru Xue†, Shanmin Pang
IEEE Robotics and Automation Letters (RA-L), 2022
arXiv / paper

Given a sparse semantic map (e.g., pole lines, traffic sign midpoints), we estimate camera poses by learning 2D-3D point-line correspondences.
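
For intuition, the snippet below shows only a simplified final pose-recovery step under strong assumptions: given already-matched 3D map points and their 2D detections, OpenCV's PnP solver returns the camera pose. The learned matching network and the line correspondences from the paper are omitted, and all numeric values are dummies.

```python
import numpy as np
import cv2

# Dummy 3D map landmarks (metres, world frame) and their detected 2D positions (pixels).
object_pts = np.array([[10.0,  2.0, 0.5], [12.0, -1.0, 0.8], [15.0,  3.0, 2.0],
                       [18.0,  0.0, 2.5], [20.0, -2.0, 1.0], [22.0,  1.5, 3.0]])
image_pts = np.array([[320.0, 260.0], [400.0, 255.0], [350.0, 180.0],
                      [420.0, 175.0], [470.0, 230.0], [380.0, 140.0]])
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])   # pinhole intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, distCoeffs=None)
R, _ = cv2.Rodrigues(rvec)              # world-to-camera rotation; tvec is the translation
```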

Ha-NeRF😆: Hallucinated Neural Radiance Fields in the Wild
Xingyu Chen, Qi Zhang†, Xiaoyu Li, Yue Chen, Ying Feng, Xuan Wang, Jue Wang
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022
project page / arXiv / paper / supplementary / code / video / poster

We recover a NeRF from tourism images with varying appearance and occlusions, and consistently render occlusion-free views with hallucinated appearances.
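
A minimal, assumption-laden sketch of the general conditioning idea: density depends only on position so geometry stays consistent across views, while color is additionally conditioned on an appearance code. Ha-NeRF's actual appearance hallucination module is simplified here to a plain learned per-image embedding, and all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class AppearanceConditionedNeRF(nn.Module):
    def __init__(self, num_images, pos_dim=63, dir_dim=27, app_dim=48, hidden=256):
        super().__init__()
        self.appearance = nn.Embedding(num_images, app_dim)   # one code per training image
        self.trunk = nn.Sequential(nn.Linear(pos_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.sigma_head = nn.Linear(hidden, 1)                 # density: appearance-free
        self.color_head = nn.Sequential(
            nn.Linear(hidden + dir_dim + app_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid())           # RGB in [0, 1]

    def forward(self, x_enc, d_enc, image_ids):
        h = self.trunk(x_enc)
        sigma = torch.relu(self.sigma_head(h))                 # geometry shared across images
        app = self.appearance(image_ids)
        rgb = self.color_head(torch.cat([h, d_enc, app], dim=-1))
        return rgb, sigma
```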

Using Detection, Tracking and Prediction in Visual SLAM to Achieve Real-time Semantic Mapping of Dynamic Scenarios
Xingyu Chen, Jianru Xue†, Jianwu Fang, Yuxin Pan, Nanning Zheng
IEEE Intelligent Vehicles Symposium (IV), 2020
arXiv / paper

Instead of detecting objects in every frame, we run detection only on keyframes and embed an efficient prediction mechanism in the remaining frames to find dynamic objects.
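
The scheduling idea can be summarized in a few lines. In the hedged sketch below, detect, predict_boxes, and the keyframe rule are placeholders rather than the paper's actual detector, prediction mechanism, or keyframe criterion.

```python
def semantic_mapping(frames, detect, predict_boxes, keyframe_every=10):
    """Run the expensive detector on keyframes and a cheap prediction step elsewhere."""
    boxes = []          # current set of dynamic-object boxes
    per_frame = []      # results for every frame
    for i, frame in enumerate(frames):
        if i % keyframe_every == 0:
            boxes = detect(frame)                # full detection on keyframes only
        else:
            boxes = predict_boxes(boxes, frame)  # lightweight propagation in between
        per_frame.append(boxes)
    return per_frame
```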

Navigation Command Matching for Vision-based Autonomous Driving
Yuxin Pan, Jianru Xue†, Pengfei Zhang, Wanli Ouyang, Jianwu Fang, Xingyu Chen
IEEE International Conference on Robotics and Automation (ICRA), 2020
ResearchGate / paper

We propose a navigation command matching model to discriminate actions generated from sub-optimal policies via smooth rewards.

Projects

I am passionate about bridging the physical and digital worlds by building next-generation AR and robotics systems.

Kuafu (Autonomous Driving)

GPS-Denied Navigation, Intelligent Vehicle Future Challenge (IVFC)
Odometry, Mapping, Localization.

Robotic Hand

Hand gesture recognition
Sensor fusion of IMU and BLE
Robotic hand controller

Robotic Arm

Teleoperation
Hand gesture recognition
Four-bar linkage structure


Invited Talks
Inferring the physical world and camera poses from images
ETH Zurich, 2023

Shared the intuition behind handling dynamic objects in our previous work and gave an outlook on tackling the tracking problem via neural fields.

光影幻象：神经辐射场中的时空流转 (Illusions of Light and Shadow: The Flow of Space and Time in Neural Radiance Fields)
Neural Radiance Fields for Unconstrained Photo Collections

深蓝学院 (Shenlan College online education), 2022

An introduction to Neural Radiance Fields (NeRF) for unconstrained photo collections, covering NeRF, NeRF in the Wild, and Ha-NeRF.

Academic Services
  • Reviewer of Computer Vision Conferences: CVPR, ICCV, ECCV
  • Reviewer of Machine Learning Conferences: NeurIPS, ICLR, ICML

template adapted from this awesome website
Last updated: July 2024