Hi! 👋 I'm Ji Ha Jang. I'm currently pursuing an integrated Ph.D. in Electrical and Computer Engineering (ECE) at Seoul National University (SNU), advised by Prof. Se Young Chun. I earned my B.S. degree in Electrical and Computer Engineering (ECE) at Seoul National University.
Research Keywords: multimodal AI, generative AI, commonsense AI
I'm interested in multimodal AI, generative AI, commonsense AI, and low-level computer vision.
My work is driven by a deep curiosity about how AI can better understand and interact with the complexities of the world by combining various modalities.
Representative papers are highlighted.
RoMaP: Robust 3D-Masked Part-level Editing in 3D Gaussian Splatting with Regularized Score Distillation Sampling
Hayeon Kim*, Ji Ha Jang*, Se Young Chun
ICCV (International Conference on Computer Vision), 2025
project page | code | paper
We propose RoMaP, a novel framework for local 3D Gaussian editing that enables precise and flexible part-level modifications. RoMaP introduces a geometry-aware 3D mask prediction module (3D-GALP) using spherical harmonics for consistent multi-view segmentation, and a regularized SDS loss with Scheduled Latent Mixing and Part editing (SLaMP) to constrain edits to target regions while preserving context. RoMaP outperforms prior methods on both reconstructed and generated GS scenes.
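For readers curious what a mask-restricted score-distillation update can look like in practice, here is a minimal PyTorch-style sketch. It is my own simplification of the generic SDS gradient, not the RoMaP implementation: `unet`, `text_emb`, and `alphas_cumprod` are placeholders for an existing latent-diffusion pipeline, and the part mask is assumed to be already rendered and aligned with the latents.

```python
# Minimal sketch (not the paper's implementation) of a masked SDS-style update:
# the standard score-distillation residual w(t) * (eps_pred - eps) is kept only
# inside a rendered part mask, so the update only touches the targeted region.
import torch
import torch.nn.functional as F

def masked_sds_loss(latents, mask, unet, text_emb, alphas_cumprod,
                    t_min=20, t_max=980):
    """latents: (B, C, H, W) rendered-and-encoded latents (requires_grad=True)
    mask:    (B, 1, H, W) soft part mask in [0, 1], aligned with `latents`
    unet:    eps-prediction network, called as unet(noisy_latents, t, text_emb)
    """
    b = latents.shape[0]
    t = torch.randint(t_min, t_max, (b,), device=latents.device)
    noise = torch.randn_like(latents)

    a_t = alphas_cumprod[t].view(b, 1, 1, 1)
    noisy = a_t.sqrt() * latents + (1.0 - a_t).sqrt() * noise

    with torch.no_grad():                     # no gradient through the diffusion model
        eps_pred = unet(noisy, t, text_emb)

    w = 1.0 - a_t                             # a common SDS weighting choice
    grad = w * (eps_pred - noise) * mask      # restrict the update to the masked part

    # Re-parameterize the gradient as an MSE so autograd pushes `grad` into `latents`.
    target = (latents - grad).detach()
    return 0.5 * F.mse_loss(latents, target, reduction="sum") / b
```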
INTRA: Interaction Relationship-aware Weakly Supervised Affordance Grounding
Ji Ha Jang*, Hoigi Seo*, Se Young Chun
ECCV (European Conference on Computer Vision), 2024
project page | paper
We present INTRA (Interaction Relationship-aware Weakly Supervised Affordance Grounding), a novel framework for affordance grounding that enables training without egocentric images, grounds different parts of the same object for different interactions, and supports free-form text input.
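As a rough illustration of text-conditioned grounding in general (not INTRA's interaction-relationship method), the sketch below scores patch-level image features against a free-form text embedding to produce a coarse affordance heatmap; `image_encoder` and `text_encoder` are placeholders for any vision-language backbone.

```python
# Generic sketch of text-to-region grounding via patch/text cosine similarity.
# Not the INTRA pipeline; encoders are hypothetical callables.
import torch
import torch.nn.functional as F

@torch.no_grad()
def affordance_heatmap(image, prompt, image_encoder, text_encoder, grid=(14, 14)):
    """image: (1, 3, H, W) tensor; prompt: free-form interaction text, e.g. 'hold'.
    image_encoder(image) -> (1, N, D) patch features laid out on a grid
    text_encoder(prompt) -> (1, D) sentence embedding
    """
    patches = F.normalize(image_encoder(image), dim=-1)        # (1, N, D)
    text = F.normalize(text_encoder(prompt), dim=-1)           # (1, D)
    sim = torch.einsum("bnd,bd->bn", patches, text)            # (1, N) similarity
    heat = sim.view(1, 1, *grid)                               # back to the patch grid
    heat = F.interpolate(heat, size=image.shape[-2:],
                         mode="bilinear", align_corners=False)
    return (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)  # scale to [0, 1]
```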
PODIA-3D: Domain Adaptation of 3D Generative Model Across Large Domain Gap Using Pose-Preserved Text-to-Image Diffusion
Gwanghyun Kim, Ji Ha Jang, Se Young Chun
ICCV (International Conference on Computer Vision), 2023
project page | paper
We propose a novel pipeline called PODIA-3D, which uses pose-preserved text-to-image diffusion-based domain adaptation for 3D generative models.