Ming-Yu Liu is a principal research scientist at NVIDIA Research. Before joining NVIDIA in 2016, he was a principal research scientist at Mitsubishi Electric Research Labs (MERL). He earned his Ph.D. from the Department of Electrical and Computer Engineering at the University of Maryland, College Park in 2012. He received the R&D 100 Award from R&D Magazine for his robotic bin picking system in 2014. His street scene understanding paper and his semantic image synthesis paper were best paper finalists at the 2015 Robotics: Science and Systems (RSS) conference and the 2019 Computer Vision and Pattern Recognition (CVPR) conference, respectively. At SIGGRAPH 2019, he won the Best in Show Award and the Audience Choice Award in the Real-Time Live! show. His research focuses on generative image modeling, with the goal of enabling machines to have human-like imagination capabilities.



  • Best in Show Award and Audience Choice Award, Real-Time Live!, SIGGRAPH 2019

  • Best paper finalist, Computer Vision and Pattern Recognition (CVPR) Conference 2019

  • 1st place, Domain Adaptation for Semantic Segmentation Competition, WAD Challenge, CVPR 2018

  • 1st place, Optical Flow Competition, Robust Vision Challenge, CVPR 2018

  • Outstanding Reviewer, CVPR 2018

  • Pioneer Research Award, NVIDIA 2017, 2018

  • NTECH Best Presenter Award, NVIDIA 2017

  • CR&D Award, Mitsubishi Electric Research Labs (MERL) 2016

  • Best paper finalist, Robotics: Science and Systems (RSS) Conference 2015

  • R&D 100 Award, R&D Magazine 2014

Research Focus

FUNIT: Few-shot Unsupervised Image-to-Image Translation (arXiv 2019)
Live Demo!

SPADE: Semantic Image Synthesis with Spatially-Adaptive Normalization (CVPR 2019, SIGGRAPH Real-Time Live! 2019)
Live Demo!

vid2vid: Video-to-Video Synthesis (NeurIPS 2018)

MUNIT: Multimodal Unsupervised Image-to-Image Translation (ECCV 2018)

A Closed-form Solution to Photorealistic Image Stylization (ECCV 2018)

High-Resolution Image Synthesis and Semantic Manipulation (CVPR 2018)

Decomposing Motion and Content for Video Generation (CVPR 2018)

UNIT: Unsupervised Image-to-Image Translation Networks (NeurIPS 2017)

CoGAN: Coupled Generative Adversarial Networks (NeurIPS 2016)


  • Conference reviewer: CVPR, ICCV, ECCV, NeurIPS, ICML, ICLR

  • Journal reviewer: TPAMI, IJCV, TIP, TMM, CVIU

  • Journal guest editor: IJCV, CVIU

  • Area chair: ICCV, CVPR, BMVC, WACV

  • Program chair: WACV

Co-hosted Tutorials

  • [Site] CVPR 2019 Tutorial: Deep Learning for Content Creation

  • [Site] ICIP 2019 Tutorial: Image-to-Image Translation

  • [Site] CVPR 2017 Tutorial: Theory and Applications of Generative Adversarial Networks

  • [Site] ACCV 2016 Tutorial: Deep Learning for Vision Guided Language Generation and Image Generation

Co-hosted Workshops

  • [Site] CVPR 2019 Workshop: 4th New Trends in Image Restoration and Enhancement workshop and challenges

  • [Site] CVPR 2019 Workshop: AI City Challenge

  • [Site] CVPR 2018 Workshop: AI City Challenge