
Yao Guo (郭遥)

Associate Professor, Ph.D. Supervisor

Address: Room N220, Translational Medicine Building
Email: yao.guo at sjtu.edu.cn

  • Biography

    Our lab welcomes, on a rolling basis, undergraduate students and postdoctoral researchers interested in computer vision, social robotics, human behavior and cognition analysis, surgical robotics, and rehabilitation and assistive robotics. Interested candidates are welcome to get in touch by email.
    [Important notice] All graduate-student openings for 2025 have been filled!

    NEWS IN THE PAST YEAR
    2024.09 We showcased NeuroDoc AI, a pioneering robot for stroke diagnosis, at CIFTIS 2024@Beijing, and it was widely covered by People's Daily and other media!
    2024.09 Our paper entitled "Microassembly: A Review on Fundamentals, Applications and Recent Developments" was accepted by Engineering!
    2024.08 Our collaboration paper was accepted by Biosensors and Bioelectronics!
    2024.06 Yunfei Luan was awarded the Outstanding Bachelor's Thesis@SJTU (Top 1%)!
    2024.06 Our paper "Magnetically Actuated Floating Millirobot Platform for Streamlined Flexible Neural Electrode Manipulation" was selected as a finalist for the ICMA Best Paper Award!
    2024.06 We showcased NeuroDoc AI, a pioneering robot for stroke diagnosis, at CSA&TISC 2024@Beijing!
    2024.05 Jianxin Yang, Yujian An, Yating Luo & Yuxuan Liu presented their work at ICRA 2024@Yokohama!
    2024.04 Our paper entitled "UPAMNet: A Unified Network with Deep Knowledge Priors for Photoacoustic Microscopy" was accepted by Photoacoustics!
    2024.01 Three papers were accepted by ICRA 2024!
    2024.01 Our paper entitled "A Microscopic Vision-based Robotic System for Floating Electrode Assembly" was accepted by IEEE/ASME Transactions on Mechatronics!
    2023.11 Our collaboration paper was accepted by IEEE Transactions on Visualization and Computer Graphics!
    2023.10 Our paper "EasyGaze3D: Towards Effective and Flexible 3D Gaze Estimation from a Single RGB Camera" was selected as a finalist for the IROS Best Paper Award on Cognitive Robotics!

  • Education


    • City University of Hong Kong, Department of Mechanical and Biomedical Engineering, Ph.D.

    • Sun Yat-sen University, School of Information Science and Technology, M.S.

    • Sun Yat-sen University, School of Information Science and Technology, B.S.

  • Work Experience


    • 2024-present Shanghai Jiao Tong University, School of Biomedical Engineering / Institute of Medical Robotics, Tenure-Track Associate Professor

    • 2020-2023 Shanghai Jiao Tong University, School of Biomedical Engineering / Institute of Medical Robotics, Tenure-Track Assistant Professor

    • 2018-2020 Imperial College London, The Hamlyn Centre for Robotic Surgery, Research Associate

    • 2017-2018 City University of Hong Kong, Department of Mechanical and Biomedical Engineering, Research Assistant

  • Research Interests


    • Human motion analysis, human pose estimation, human shape reconstruction, gait analysis

    • Machine vision, visual SLAM, pattern recognition, machine learning, deep learning

    • Brain-computer interfaces, mental workload detection, cognitive rehabilitation and assistance

    • Rehabilitation robotics, social robotics, human-robot interaction

  • Research Projects


    • NSFC Young Scientists Fund, ongoing, PI

    • Shanghai Pujiang Program, ongoing, PI

    • SJTU Medicine-Engineering Interdisciplinary Research Fund project, ongoing, PI of the collaborating side

    • Shanghai Municipal Science and Technology Major Project, ongoing, key participant

    • Shanghai Basic Research Special Zone Program, ongoing, key participant

  • Selected Publications

    2024
    [37] Yuxuan Liu, Jiasheng Zhou, Yating Luo, Jinkai Li, Sung-Liang Chen, Yao Guo* and Guang-Zhong Yang*, "UPAMNet: A unified network with deep knowledge priors for photoacoustic microscopy," Photoacoustics, vol. 38, pp. 100608, 2024. [Paper] (*Co-corresponding author)


    Illustration of the network architecture of the proposed method.

    Visualization results of super-resolution (4x) via transfer learning

    [36] Yujian An, Bingze He, Zhuochen Ma, Yao Guo* and Guang-Zhong Yang*, "Microassembly: A Review on Fundamentals, Applications and Recent Developments," in Engineering, Accepted, 2024. (*Co-corresponding author)
    [35] Bingze He, Zhiang Zhang, Jianxin Yang, Yujian An, Yao Guo* and Guang-Zhong Yang*, "Magnetically Actuated Floating Millirobot Platform for Streamlined Flexible Neural Electrode Manipulation," in IEEE International Conference on Mechatronics and Automation (ICMA), August, 2024, pp. 311-316. [Paper] (*Co-corresponding author)
    [34] Yujian An, Jianxin Yang, Bingze He, Yuxuan Liu, Yao Guo* and Guang-Zhong Yang*, "A Microscopic Vision-Based Robotic System For Floating Electrode Assembly," in IEEE/ASME Transactions on Mechatronics (TMECH), Accepted, 2024. [Paper] (*Co-corresponding author)
    [33] Yujian An, Jianxin Yang, Jinkai Li, Bingze He, Yao Guo*, Guang-Zhong Yang*, "Skill Learning in Robot-Assisted Micro-Manipulation Through Human Demonstrations with Attention Guidance", IEEE International Conference on Robotics and Automation (ICRA), May, 2024, pp. 15601-15607. [Paper] (*Co-corresponding author)

    Experimental scenario for the micro-manipulation task teleoperated by a human operator.

    A heatmap representing the concentration of the operator’s gaze.

    [32] Yating Luo, Yuxuan Liu, Jiasheng Zhou, Sung-Liang Chen, Yao Guo*, Guang-Zhong Yang*, "Fast Photoacoustic Microscopy with Robot Controlled Microtrajectory Optimization", IEEE International Conference on Robotics and Automation (ICRA), May, 2024, pp. 17750-17756. [Paper] (*Co-corresponding author)

    Overview of our robot-assisted PAM system.

    Schematic illustration of the proposed optimization framework.

    [31] Yunfei Luan†, Muhang He†, Yudong Tian, Chengjie Lin, Yunhan Fang, Zihao Zhao, Jianxin Yang, Yao Guo*, "Intelligent Disinfection Robot with High-Touch Surface Detection and Dynamic Pedestrian Avoidance", IEEE International Conference on Robotics and Automation (ICRA), May, 2024, pp. 3595-3601. [Paper] (*Corresponding author)

    Disinfection robot system overview.


    2023
    [30] Jinkai Li, Jianxin Yang, Yuxuan Liu, Zhen Li, Guang-Zhong Yang, Yao Guo*, "EasyGaze3D: Towards Effective and Flexible 3D Gaze Estimation from a Single RGB Camera", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct. 2023, pp. 6537-6543. [Paper] [Code] (*Corresponding author)

    Overview of the proposed 3D gaze estimation framework from an RGB image.

    Qualitative results. (a) and (b) show the gaze estimation effects of EasyGaze3D with the Easy-Cali module on IMR-CaliGaze dataset. (c) shows the results of the basic EasyGaze3D on MPIIFaceGaze dataset.

    [29] Yuxuan Liu, Jianxin Yang, Xiao Gu, Yijun Chen, Yao Guo*, and Guang-Zhong Yang*, "EgoFish3D: Egocentric 3D Pose Estimation from a Fisheye Camera via Self-Supervised Learning", IEEE Transactions on Multimedia (TMM), vol. 25, pp. 8880-8891, 2023. [Paper][Code] (*Co-corresponding author)

    Illustration of the training and inference phases of our proposed EgoFish3D.

    Visualization of the egocentric 3D pose estimation by our EgoFish3D.

    [28] Frank Lo, Yao Guo*, Yingnan Sun, Jianing Qiu, Benny Lo, "An Intelligent Vision-Based Nutritional Assessment Method for Handheld Food Items", IEEE Transactions on Multimedia (TMM), vol. 25, pp. 5840-5851, 2023. [Paper] (*Corresponding author)

    Proposed weakly-supervised point cloud completion architecture.

    [27] Yuxuan Liu, Jianxin Yang, Xiao Gu, Yao Guo*, and Guang-Zhong Yang*, "EgoHMR: Egocentric Human Mesh Recovery via Hierarchical Latent Diffusion Model", IEEE International Conference on Robotics and Automation (ICRA), May, 2023, pp. 9807-9813. [Paper] [Code] (*Co-corresponding author)

    Illustration of our proposed method for human mesh recovery based on a single egocentric image via latent diffusion.

    Overview of the proposed EgoHMR network.

    2022
    [26] Xiao Gu, Yao Guo, Zeju Li, Jianing Qiu, Qi Dou, Yuxuan Liu, Benny Lo*, and Guang-Zhong Yang*, "Tackling Long-Tailed Category Distribution Under Domain Shifts", European Conference on Computer Vision (ECCV), July, 2022, pp. 727-743. [Paper] [Code].

    Visual illustrations of the issues complicated with long-tailed category distribution and conditional distribution shifts across domains.

    [25] Yuxuan Liu, Jianxin Yang, Xiao Gu, Yao Guo*, and Guang-Zhong Yang*, "Ego+X: An Egocentric Vision System for Global 3D Human Pose Estimation and Social Interaction Characterization", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct., 2022, pp. 5271-5277. [Paper] [Code] (*Co-corresponding author)

    Illustration of the proposed egocentric vision system for social interaction characterization based on 3D canonical human pose estimation.

    Overview of our proposed Ego+X system.

    [24] Jianxin Yang, Yuxuan Liu, Xiao Gu, Guang-Zhong Yang, and Yao Guo*, "PoseSDF: Simultaneous 3D Human Shape Reconstruction and Gait Pose Estimation Using Signed Distance Functions", IEEE International Conference on Robotics and Automation (ICRA), Philadelphia, USA, pp. 1297-1303, May, 2022. (*Corresponding author) [Paper]

    Illustration of the training and inference phases of DeepSDF and our PoseSDF.

    Visualization of both shape reconstruction (DeepSDF vs. PoseSDF) and pose estimation (V2V vs. PoseSDF) results by directly taking real partial point clouds as input.

    [23] Yao Guo, Weidong Chen, Jie Zhao, and Guang-Zhong Yang*, "Medical Robotics: Opportunities in China", Annual Review of Control, Robotics, and Autonomous Systems (ARCRAS), vol. 5, no. 1, pp. 361-383, May, 2022. [Paper]

    Taxonomy of medical robotics.

    (a) Market sizes of medical robotics in the world and in China from 2017 to 2021. (b) Market distribution of different medical robots in China in 2019.

    [22] Yao Guo, Jianxin Yang, Yuxuan Liu, Xun Chen*, and Guang-Zhong Yang*, "Detection and Assessment of Parkinson's Disease Based on Gait Analysis: A Survey", Frontiers in Aging Neuroscience, 14:916971, 2022. [Paper]

    Illustration of the pipeline for automatic PD recognition based on gait data.

    [21] Yao Guo, Daniel Freer, Fani Deligianni, and Guang-Zhong Yang*, "Eye-tracking for Performance Evaluation and Workload Estimation in Space Telerobotic Training", IEEE Transactions on Human-Machine Systems (THMS), vol. 52, no. 1, pp. 1-11, February, 2022. [Paper]

    The left image demonstrates an overview of our O^3 simulator from a global view. The middle part shows the user interface observed by operators. The right part shows the controller and the control strategy.

    Relationship between the simulator-defined Final score and three eye-tracking features extracted from trial data.

    [20] Xiao Gu, Yao Guo, Guang-Zhong Yang*, and Benny Lo*, "Cross-Domain Self-Supervised Complete Geometric Representation Learning for Real-Scanned Point Cloud Based Pathological Gait Analysis", IEEE Journal of Biomedical and Health Informatics (J-BHI), vol. 26, no. 3, pp. 1034-1044, March, 2022. [Paper]

    Illustration of our proposed self-supervised learning framework.

    Visualizations of completed point sets (bottom) from real-scanned data (top).

    [19] Chengxi Zhong, Yuyu Jia, David C. Jeong, Yao Guo*, Song Liu*, "AcousNet: A Deep Learning based Approach to Dynamic 3D Holographic Acoustic Field Generation from Phased Transducer Array", IEEE Robotics and Automation Letters (RA-L), vol. 7, no. 2, pp. 666-673, April, 2022. [Paper] [Dataset]. (*Co-corresponding author)

    Schematic of the holographic acoustic field generated from phased transducer array with squared transducer element.

    [18] Zhihao Yuan†, Xu Yan†, Yinghong Liao, Yao Guo, Guanbin Li, Zhen Li*, and Shuguang Cui, "X-Trans2Cap: Cross-Modal Knowledge Transfer using Transformer for 3D Dense Captioning", IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June, 2022, pp. 8563-8573. [Paper]

    2021

    [17] Yao Guo, Xiao Gu, and Guang-Zhong Yang*, "MCDCD: Multi-Source Unsupervised Domain Adaptation for Abnormal Human Gait Detection", IEEE Journal of Biomedical and Health Informatics (J-BHI), vol. 25, no. 10, pp. 4017-4028, October, 2021. [Paper]

    An overview of the proposed MCDCD framework for abnormal gait recognition.

    t-SNE visualization of the (a) Mocap & (b) EMG feature distributions before and after unsupervised domain adaptation.

    [16] Frank Po Wen Lo, Yao Guo*, Yingnan Sun, Jianing Qiu, and Benny Ping Lai Lo, "Deep3DRanker: A Novel Framework for Learning to Rank 3D Models with Self-Attention in Robotic Vision", IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China, 30 May-05 June, pp. 4341-4347, 2021. (*Corresponding author) [Paper]

    Flowchart of the proposed Deep3DRanker.

    The model architecture of the proposed Deep3DRanker.

    [15] Xiao Gu, Yao Guo, Fani Deligianni, Benny Lo, Guang-Zhong Yang*, “Cross-Subject and Cross-Modal Transfer for Generalized Abnormal Gait Pattern Recognition,” IEEE Transactions on Neural Networks and Learning Systems (TNNLS), vol. 32, no. 2, pp. 546-560, February, 2021. [Paper]

    Illustration of the cross-subject and cross-modal transfer framework in abnormal gait recognition.

    Illustration of cross-modal cross-subject transfer model.

    [14] Yao Guo, Xiao Gu, and Guang-Zhong Yang*, “Human–Robot Interaction for Rehabilitation Robotics.” Digitalization in Healthcare: Implementing Innovation and Artificial Intelligence, Springer, pp. 269-295, 2021. [Chapter]

    Bidirectional HRI in upper-limb prosthetic control, which includes forward prosthetic control and sensory feedback.

    Requirements of HRI for active, natural, and intelligent robot-assisted stroke rehabilitation.

    2020

    [13] Xiao-Yun Zhou*, Yao Guo, Mali Shen, Guang-Zhong Yang, “Application of Artificial Intelligence in surgery,” Frontiers of Medicine, vol. 14, pp. 417-430, July, 2020. [Paper]

    [12] Xiao Gu, Yao Guo, Fani Deligianni, Guang-Zhong Yang*, “Coupled real-synthetic domain adaptation for real-world deep depth enhancement,” IEEE Transactions on Image Processing (TIP), vol. 29, pp. 6343-6356, April, 2020. [Paper]


    Overview of the proposed depth enhancement framework.

    Qualitative results of cross-dataset performance on real-world data.

    Before 2020

    [11] Yao Guo†, Fani Deligianni†, Guang-Zhong Yang*, “From Emotions to Mood Disorders: A Survey on Gait Analysis Methodology,” IEEE Journal of Biomedical and Health Informatics (J-BHI), vol. 23, no. 6, pp. 2302-2316, November, 2019. (†: equal contribution), [Paper]

    Gait capture systems are mapped in a two-dimensional space from lab-based to home-based environment and from wearable to vision-based.

    Demonstration of four discriminative gait parameters commonly used in emotion detection (neutral, happy, sad, anger and fear).

    [10] Yao Guo, Fani Deligianni, Xiao Gu, Guang-Zhong Yang*, “3D Canonical Pose Estimation and Abnormal Gait Recognition with a Single RGB-D Camera,” IEEE Robotics and Automation Letters (RA-L) & IROS 2019, vol. 4, no. 4, pp. 3617-3624, October, 2019. [Paper]
    Overview of the proposed mobile 3D canonical gait analysis system.
    Information fusion for human lower limb pose representation and 3D gait analysis in the canonical coordinate system.

    [9] Ya-Yen Tsai, Yao Guo, and Guang-Zhong Yang*, “Unsupervised Task Segmentation Approach for Bimanual Surgical Tasks using Spatiotemporal and Variance Properties,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, pp. 7940-7946, 2019.

    [8] Dandan Zhang, Yao Guo, Junhong Chen, Jindong Liu, and Guang-Zhong Yang*, "A Handheld Master Controller for Robot-Assisted Microsurgery," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, pp. 394-400, 2019.

    [7] Yao Guo, You-Fu Li*, Zhanpeng Shao, “RRV: A spatiotemporal descriptor for rigid body motion recognition,” IEEE Transactions on Cybernetics (TCyber), vol. 48, no. 5, pp. 1513-1525, May, 2018. [Paper]

    6-D representation of a rigid body motion trajectory.

    [6] Yao Guo, You-Fu Li*, Zhanpeng Shao, “DSRF: A Flexible Trajectory Descriptor for Articulated Human Action Recognition,” Pattern Recognition, vol. 76, April, pp. 137-148, April, 2018. [Paper]

    Pipeline of the proposed skeletal representation.

    [5] Yao Guo, You-Fu Li*, Zhanpeng Shao, “On multi-scale self-similarities description for effective 3D/6D motion trajectory recognition,” IEEE Transactions on Industrial Informatics (TII), vol. 13, no. 6, pp. 3017-3026, December, 2017. [Paper]

    [4] Yao Guo, You-Fu Li*, and Zhanpeng Shao, “MSM-HOG: A flexible trajectory descriptor for rigid body motion recognition,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, pp. 4298-4303, 2017.

    [3] Yao Guo, You-Fu Li*, and Zhanpeng Shao, “DSRF: A Flexible Descriptor for Effective Rigid Body Motion Trajectory Recognition,” IEEE International Conference on Mechatronics and Automation (ICMA), Harbin, China, pp. 1673-1678, 2016. (Best Conference Paper Award)

    [2] Yao Guo, Kaide Huang, Nanyong Jiang, Xuemei Guo, Guoli Wang*, “An Exponential-Rayleigh model for RSS-based device-free localization and tracking,” IEEE Transactions on Mobile Computing (TMC), vol. 14, no. 3, pp. 484-494, March, 2015. [Paper]

    [1] Yao Guo, Kaide Huang, Nanyong Jiang, Xuemei Guo, and Guoli Wang*, “An Exponential-Rayleigh signal strength model for device-free localization and tracking with wireless networks,” International Conference on Intelligent Control and Information Processing (ICICIP), Beijing, China, 2013, pp. 108-113.

  • Teaching

    GE6001 Academic Writing, Integrity and Ethics (co-instructor)
    GE6012 Academic Presentations and Seminars (co-instructor)
    BME3307-2 Principles of Automatic Control (Type B)

  • Academic Service

    Guest Editor of the Special Issue "Sensor-Based Pattern Recognition and Signal Processing" of MDPI Sensors.
    Associate Editor of the Biomedical Sensors and Wearable Systems track at EMBC 2023 and 2024.

  • Honors and Awards


    • Finalist of "Best Conference Paper" @ICMA2024, 2024.

    • Selected "Next Generation Star Project" @ICRA2024, 2024.

    • Finalist of "Best Cognitive Robotics Paper" @IROS2023, 2023.


