Juyong Lee


I am an integrated M.S./Ph.D. student at KAIST, advised by Kimin Lee. I received a B.S. from POSTECH with a double major in Mathematics and Computer Science/Engineering, and I was an exchange student at Stanford.

I am currently interested in building practical AI agents.

CV  /  Google Scholar  /  GitHub


profile photo


Research Highlights (*: equal contribution)
MobileSafetyBench: Evaluating Safety of Autonomous Agents in Mobile Device Control
Juyong Lee, Dongyoon Hahm, June Suk Choi, W. Bradley Knox, Kimin Lee
Preprint
project / paper / code

We propose a new benchmark for evaluating the safety and helpfulness of agents in mobile device control, with extensive analysis of the shortcomings of frontier LLM agents.

B-MoCA: Benchmarking Mobile Device Control Agents across Diverse Configurations
Juyong Lee, Taywon Min, Minyong An, Dongyoon Hahm, Haeone Lee, Changyeon Kim, Kimin Lee
ICLR 2024 workshop GenAI4DM (spotlight presentation)
project / paper / code

A benchmark that serves as a unified testbed for mobile device control agents performing practical daily tasks across diverse device configurations.

LiFT: Unsupervised Reinforcement Learning with Foundation Models as Teachers
Taewook Nam*, Juyong Lee*, Jesse Zhang, Sung Ju Hwang, Joseph J Lim, Karl Pertsch
NeurIPS 2023 workshop ALOE
project / paper / code

Reinforcement learning agents discover semantically meaningful skills with tasks proposed by a large language model and rewards from a vision-language model.

Hyperbolic VAE via Latent Gaussian Distributions
Seunghyuk Cho, Juyong Lee, Dongwoo Kim
NeurIPS 2023, ICML 2023 workshop TAGML, KAIA 2022 (3rd best paper)
paper

A newly proposed distribution over the Riemannian manifold of diagonal Gaussians, equipped with the Fisher information metric, enables learning a hyperbolic world model.

Style-Agnostic Reinforcement Learning
Juyong Lee*, Seokjun Ahn*, Jaesik Park
ECCV 2022
paper / code

Reinforcement learning agents become robust to changes in image style (e.g., background color) by adapting to adversarially generated styles.


The source code of this website is from here.