Juyong Lee


I am an integrated MS/PhD student at KAIST, advised by Kimin Lee. I received a B.S. degree from POSTECH with a double major in mathematics and computer science/engineering, and I studied at Stanford as an exchange student.

I am currently interested in building practical AI agents.

CV  /  Google Scholar  /  Github




Research Highlights (*: equal contribution)
Learning to Contextualize Web Pages for Enhanced Decision Making by LLM Agents
Dongjun Lee*, Juyong Lee*, Kyuyoung Kim, Jihoon Tack, Jinwoo Shin, Yee Whye Teh, Kimin Lee
ICLR 2025
paper

A novel framework that trains a contextualization module to support the decision-making of LLM agents, achieving super-human performance on the WebShop benchmark.

MobileSafetyBench: Evaluating Safety of Autonomous Agents in Mobile Device Control
Juyong Lee*, Dongyoon Hahm*, June Suk Choi*, W. Bradley Knox, Kimin Lee
Preprint
project / paper / code

A new benchmark for evaluating the safety and helpfulness of agents in mobile device control, with extensive analysis of the shortcomings of frontier LLM agents.

B-MoCA: Benchmarking Mobile Device Control Agents across Diverse Configurations
Juyong Lee, Taywon Min, Minyong An, Dongyoon Hahm, Haeone Lee, Changyeon Kim, Kimin Lee
ICLR 2024 Workshop: GenAI4DM (spotlight presentation)
project / paper / code

A novel benchmark that serves as a unified testbed for mobile device control agents performing practical daily tasks across diverse device configurations.

LiFT: Unsupervised Reinforcement Learning with Foundation Models as Teachers
Taewook Nam*, Juyong Lee*, Jesse Zhang, Sung Ju Hwang, Joseph J Lim, Karl Pertsch
NeurIPS 2023 Workshop: ALOE
project / paper / code

Reinforcement learning agents discover semantically meaningful skills using tasks proposed by a large language model and rewards provided by a vision-language model.

Hyperbolic VAE via Latent Gaussian Distributions
Seunghyuk Cho, Juyong Lee, Dongwoo Kim
NeurIPS 2023
paper

A newly proposed distribution over the Riemannian manifold of diagonal Gaussians, equipped with the Fisher information metric, enables learning a hyperbolic world model.


The source code of this website is from here.