|
Juyong Lee
I am a Ph.D. student (integrated M.S./Ph.D. program) at KAIST, advised by Kimin Lee.
I received a B.S. degree from POSTECH with a double major in mathematics and computer science and engineering,
and studied as an exchange student at Stanford.
I am currently working as a research engineer (contractor via YunoJuno) at Google DeepMind.
My main research interest is autonomous replication and adaptation,
especially through efficient representations and reinforcement learning agents (e.g., LLM agents).
CV  / 
Google Scholar  / 
Github
|
|
|
Research Highlights
|
(*: equal contribution)
|
|
|
Automated Skill Discovery for Language Agents through Exploration and Iterative Feedback
Yongjin Yang*,
Sinjae Kang*,
Juyong Lee,
Dongjun Lee,
Se-Young Yun,
Kimin Lee
Preprint
paper
We introduce a framework for automated skill discovery for language model-based agents in
open-ended environments, and show its potential for building self-evolving systems.
|
|
|
Learning to Contextualize Web Pages for Enhanced Decision Making by LLM Agents
Dongjun Lee*,
Juyong Lee*,
Kyuyoung Kim,
Jihoon Tack,
Jinwoo Shin,
Yee Whye Teh,
Kimin Lee
ICLR 2025
project
/
paper
A novel framework that trains a contextualization module to support the decision-making of LLM agents,
achieving super-human performance on the WebShop benchmark.
|
|
|
B-MoCA: Benchmarking Mobile Device Control Agents across Diverse Configurations
Juyong Lee,
Taywon Min,
Minyong An,
Dongyoon Hahm,
Haeone Lee,
Changyeon Kim,
Kimin Lee
CoLLAs 2025;
ICLR 2024 Workshop: GenAI4DM (spotlight presentation)
project
/
paper
/
code
A novel benchmark that serves as a unified testbed for mobile device control agents
performing practical daily tasks across diverse device configurations.
|
|
|
Hyperbolic VAE via Latent Gaussian Distributions
Seunghyuk Cho,
Juyong Lee,
Dongwoo Kim
NeurIPS 2023; ICML 2023 Workshop: TAGML; KAIA 2022 (3rd best paper)
paper
A newly proposed distribution, defined over the Riemannian manifold of diagonal Gaussians equipped with
the Fisher information metric, enables learning a hyperbolic world model.
|
|
|
Style-Agnostic Reinforcement Learning
Juyong Lee*,
Seokjun Ahn*,
Jaesik Park
ECCV 2022
paper
/
code
Reinforcement learning agents become robust to changes in image style (e.g.,
background color) by adapting to adversarially generated styles.
|
|
|