Ye-Ji Mun

About

Graduate Student Researcher

I am a fifth-year Ph.D. candidate at the University of Illinois Urbana-Champaign (UIUC), advised by Prof. Katherine Driggs-Campbell in the Human-Centered Autonomy Laboratory (HCAL). Prior to UIUC, I received my B.S. in Electronics Engineering from Ewha Womans University, Seoul, South Korea, in 2018. My research interests are in human-robot interaction, safe autonomy, and artificial intelligence.

  • Education: University of Illinois at Urbana-Champaign
  • Title: fifth-year Ph.D. student
  • Email: yejimun2@illinois.edu
  • Office: 268 Coordinated Science Laboratory, UIUC
  • Resume: [Resume]

Education & Experience

Education

Ph.D. in Electrical & Computer Engineering

2019 - Present

University of Illinois at Urbana-Champaign, Illinois, United States

Academic Advisor: Prof. Katherine Rose Driggs-Campbell

Bachelor of Science in Electronics Engineering

2014 - 2018

Ewha Womans University, Seoul, South Korea

Exchange student in Electrical Engineering

2016 Spring

Temple University, Philadelphia, USA

Exchange student in Electrical Engineering

2015 Fall

San Diego State University, San Diego, USA

Research Experience

Human-Centered Autonomy Lab, Research Assistant

Aug.2019 - Present
  • Develop a deep reinforcement learning algorithm that performs occlusion-aware planning in partially observable, crowded environments.
  • Establish intention-aware planning for human-robot collaboration in manufacturing assembly tasks.
  • Design robot systems that influence humans over long-term interaction towards desired behaviors.

Information Coding and Processing Lab, Research Assistant

Oct.2017 - June 2019

Ewha Womans University, Seoul, South Korea

  • Built an adversarially robust Convolutional Neural Network (CNN) classifier that assigns multiple independent random binary codes per input class and trains them with an ensemble of homogeneous CNNs.
  • Guided lab interns in installing Python and TensorFlow and introduced the TensorFlow coding framework.
  • Improved the image classification accuracy of CNN by up to 2.1% by incorporating convolutional coding.

Research

My research interests lie in human-robot interaction and the robustness of learning-based algorithms.

Occlusion-Aware Autonomous Navigation Using People as Sensors

Thanks to recent developments in deep neural networks, many tasks such as object detection and pattern recognition can now be performed at levels surpassing those of humans. However, all sensors still have limited sensing capabilities, which can result in serious accidents due to unobservable risks. Humans, on the other hand, have great insight for handling their limited perception, and many researchers have attempted to adopt these human-like behaviors. This work incorporates interactions between road users into decision-making under occlusion during navigation. The figure shows a mobile robot navigating in crowds and its environmental observations as occupancy probabilities; agents 0, 1, and 2 are outside the robot's field of view. [Paper] [Video] [Code] [Article]

Towards Robots that Influence Humans over Long-Term Interaction

When humans and robots interact, their influence on each other can have a critical impact on their task performance. Consider an autonomous car trying to merge into a lane where human-driven cars are passing by. If the autonomous car cannot make the human-driven cars slow down, it will not be able to change lanes. Prior works have developed frameworks that enable robots to influence humans towards desired behaviors. However, these approaches are only effective in the short term, as they assume that the robot's actions can influence the human in a consistent way. Our central insight is that humans are dynamic: people adapt to robots, and behaviors that are influential now may fall short over repeated interactions. [Paper] [Video]

Learning Task Skills and Goals Simultaneously from Physical Interaction

In real-world human-robot systems, it is essential for a robot to comprehend human objectives and respond accordingly while performing an extended series of motor actions. Although human objective alignment has recently emerged as a promising paradigm in the realm of physical human-robot interaction, its application is typically confined to generating simple motions due to inherent theoretical limitations. In this work, our goal is to develop a general formulation to learn manipulation functional modules and long-term task goals simultaneously from physical human-robot interaction. We show the feasibility of our framework in enabling robots to align their behaviors with the long-term task objectives inferred from human interactions. [Paper]

Human-Robot Collaboration in Manufacturing Assembly

The purpose of this project is to enhance the efficiency and safety of human-robot collaboration while assembling a product. This work focuses on enabling the robot to recognize human intentions and adapt its behavior to best assist the human. Human intentions are usually not fully observable and may change during the process. Accurately tracking human intentions and planning tasks accordingly will greatly enhance collaboration performance. [Paper]

CNN Defending against Adversarial Attacks

Despite their excellent classification performance, recent research has revealed that Convolutional Neural Networks (CNNs) can be readily deceived by small adversarial perturbations. Their imperceptibility to human eyes and transferability from one model to another threaten the security of CNN-based systems. This project aims to build a robust CNN structure that maintains stable performance against external disturbances during image classification tasks.

Publication

Conference Papers

[1] Learning Task Skills and Goals Simultaneously from Physical Interaction

International Conference on Automation Science and Engineering (CASE), IEEE 2023

H. Chen*, Y.-J. Mun*, Z. Huang, Y. Niu, Y. Xie, D. L. McPherson, K. Driggs-Campbell (*equal contribution)

[Paper]

[2] Towards Safe Multi-Level Human-Robot Interaction in Industrial Tasks

International Conference on Automation Science and Engineering (CASE), IEEE 2023

Z. Huang, Y.-J. Mun, H. Chen, X. Li, Y. Xie, N. Zhong, Y. Niu, H. You, D. L. McPherson, K. Driggs-Campbell

[Paper] [Video]

[3] Occlusion-Aware Crowd Navigation Using People as Sensors

International Conference on Robotics and Automation (ICRA), IEEE 2023

Y.-J. Mun, M. Itkina, S. Liu, K. Driggs-Campbell

[Paper] [Video] [Code]

[4] Hierarchical Intention Tracking for Robust Human-Robot Collaboration in Industrial Assembly Task

International Conference on Robotics and Automation (ICRA), IEEE 2023

Z. Huang*, Y.-J. Mun*, X. Li, Y. Xie, N. Zhong, W. Liang, J. Geng, T. Chen, and K. Driggs-Campbell (*equal contribution)

[Paper] [Video]

[5] Towards Robots that Influence Humans over Long-Term Interaction

International Conference on Robotics and Automation (ICRA), IEEE 2023

S. Sagheb, Y.-J. Mun, N. Ahmadian, B. A. Christie, A. Bajcsy, K. Driggs-Campbell, and D. P. Losey

[Paper] [Video]

[6] Seamless Interaction Design with Coexistence and Cooperation Modes for Robust Human-Robot Collaboration

International Conference on Automation Science and Engineering (CASE), IEEE, 2022

Z. Huang*, Y.-J. Mun*, X. Li, Y. Xie, N. Zhong, W. Liang, J. Geng, T. Chen, and K. Driggs-Campbell (*equal contribution)

[Paper]

[7] Multi-agent variational occlusion inference using people as sensors

International Conference on Robotics and Automation (ICRA), IEEE 2022

M. Itkina, Y.-J. Mun, K. Driggs-Campbell, and M. J. Kochenderfer

[Paper] [Video] [Code]

[8] [Oral] Correcting Misclassified Image Features with Convolutional Coding

Proceedings of the Korean Society of Broadcast Engineers Conference, pp. 11-14

Y.-J. Mun, N. Kim, J. Lee, and J. W. Kang

[Paper]

Journal Papers

[1] Ensemble of Random Binary Output Encoding for Adversarial Robustness

IEEE Access, 7, 124632-124640.

Y.-J. Mun and J. W. Kang

[Paper]

Workshops

[1] Insights from an Industrial Collaborative Assembly Project: Lessons in Research and Collaboration

Cobots and WotF, International Conference on Robotics and Automation (ICRA), IEEE, 2022

T. Chen, Z. Huang, J. Motes, J. Geng, Q. M. Ta, H. Dinkel, H. Abdul-Rashid, J. Myers, Y.-J. Mun, W. Lin, Y. Huang, S. Liu, M. Morales, N. M. Amato, K. Driggs-Campbell, and T. Bretl

[Paper]

[2] Occlusion-aware crowd navigation using people as sensors

16th Women in Machine Learning Workshop (WiML), NeurIPS 2021

Y.-J. Mun, M. Itkina, and K. Driggs-Campbell

[3] Multi-agent variational occlusion inference using people as sensors

Bay Area Robotics Symposium (BARS), 2021

M. Itkina, Y.-J. Mun, and K. Driggs-Campbell

[4] Safe crowd navigation in the presence of occlusions

15th Women in Machine Learning Workshop (WiML), NeurIPS 2020

Y.-J. Mun, M. Itkina, and K. Driggs-Campbell

[5] Variational occlusion inference using people as sensors

15th Women in Machine Learning Workshop (WiML), NeurIPS 2020

M. Itkina, Y.-J. Mun, and K. Driggs-Campbell

Preprints

[1] User-Friendly Safety Monitoring System for Manufacturing Cobots

arXiv, 2023

Y.-J. Mun, Z. Huang, H. Chen, Y. Niu, H. You, D. L. McPherson, K. Driggs-Campbell

[Paper]