Li Jin
金立
Ph.D. student
School of Software
Shandong University
Email: jinli@mail.sdu.edu.cn
I am a second-year Ph.D. student at Shandong University (SDU), supervised by Prof. Xueying Qin. I was also a visiting Ph.D. student in the Visual Computing and Learning (VCL) Lab at Peking University (PKU), led by Prof. Baoquan Chen.
|
|
My previous research centered on object pose estimation within 3D object perception, encompassing both instance-level and category-level tasks. Recently, I shifted my focus to 3D object perception in open-world scenarios, exploring shape, pose, semantics, and material aspects.
Currently, I am developing large-scale datasets with canonical, semantic, and other attributes to train 3D perception models for open-world applications.
|
Publications
|
*: equal contribution; †: corresponding author(s)
|
|
One-shot 3D Object Canonicalization based on Geometric and Semantic Consistency
Li Jin,
Yujie Wang,
Wenzheng Chen,
Qiyu Dai,
Qingzhe Gao,
Xueying Qin,
Baoquan Chen†
CVPR 2025 (Highlight Presentation, Top 13.5%)
project page /
code
We introduce a one-shot category-level 3D object canonicalization method that synergistically combines geometric priors and semantic cues extracted from large language models (LLMs) and vision-language models (VLMs). Additionally, we construct the Canonical Objaverse Dataset (COD), which contains 32,000 samples.
|
|
Prior-free 3D Object Tracking
Xiuqiang Song,
Li Jin,
Zhengxian Zhang,
Jiachen Li,
Fan Zhong,
Guofeng Zhang,
Xueying Qin†
CVPR 2025 (Highlight Presentation, Top 13.5%)
We propose a novel prior-free 3D object tracking method that operates without relying on any models or training data. It consists of a geometry generation module and a pose optimization module, which automatically and iteratively enhance each other to gradually construct the information needed for tracking.
|
|
A Visual Servo System for Robotic On-Orbit Servicing Based on 3D Perception of Non-Cooperative Satellite
Panpan Zhao,
Li Jin,
Yeheng Chen,
Jiachen Li,
Xiuqiang Song,
Wenxuan Chen,
Nan Li,
Wenjuan Du,
Ke Ma,
Xiaokun Wang,
Yuehua Li,
Xiangxu Meng,
Xueying Qin†
ICRA 2025
We present a 3D perception-based visual servo system for non-cooperative satellites. This system integrates reconstruction and tracking, using an alternating iterative strategy to enhance shape perception and pose estimation accuracy in complex orbital conditions.
|
|
CSS-Net: Domain Generalization in Category-level Pose Estimation via Corresponding Structural Superpoints
Li Jin,
Xibin Song,
Jia Li,
Changhe Tu,
Xueying Qin†
ICME 2024 (Oral Presentation, Top 15%)
We propose a domain generalization method for category-level pose estimation based on structural superpoints. Trained solely on simulated data, this method effectively generalizes to unseen domain distributions in real datasets.
|
|
Implicit Coarse-to-Fine 3D Perception for Category-level Object Pose Estimation from Monocular RGB Image
Jia Li,
Li Jin,
Xibin Song,
Yeheng Chen,
Nan Li,
Xueying Qin†
ICRA 2024
We propose the Feature Auxiliary Perception Network (FAP-Net), a category-level pose estimation framework that operates on a single RGB image. It addresses the ambiguity in object translation and size that arises from relying solely on RGB input.
|
|
Online Hand-Eye Calibration with Decoupling by 3D Textureless Object Tracking
Li Jin,
Kang Xie,
Wenxuan Chen,
Xin Cao,
Yuehua Li,
Jiachen Li,
Jiankai Qian,
Xueying Qin†
ICRA 2023
We propose a novel hand-eye calibration method based on tracking a natural 3D object, which works online and automatically even when the object is textureless or weakly textured.