Ming-Yu Liu is a distinguished research scientist at NVIDIA Research. Before joining NVIDIA in 2016, he was a principal research scientist at Mitsubishi Electric Research Labs (MERL). He earned his Ph.D. from the Department of Electrical and Computer Engineering at the University of Maryland, College Park in 2012. He received the R&D 100 Award from R&D Magazine for his robotic bin-picking system in 2014. His semantic image synthesis and scene understanding papers were best paper finalists at CVPR 2019 and RSS 2015, respectively. At SIGGRAPH 2019, he won the Best in Show Award and the Audience Choice Award in the Real-Time Live show for his image synthesis work. His research focuses on generative image modeling, with the goal of giving machines a human-like capacity for imagination.




Awards

  • Best in Show Award and Audience Choice Award, Real-Time Live, SIGGRAPH 2019

  • Best paper finalist, Computer Vision and Pattern Recognition (CVPR) Conference 2019

  • 1st place, Domain Adaptation for Semantic Segmentation Competition, WAD Challenge, CVPR 2018

  • 1st place, Optical Flow Competition, Robust Vision Challenge, CVPR 2018

  • Outstanding Reviewer, CVPR 2018

  • Pioneer Research Award, NVIDIA 2017, 2018

  • NTECH Best Presenter Award, NVIDIA 2017

  • CR&D Award, Mitsubishi Electric Research Labs (MERL) 2016

  • Best paper finalist, Robotics: Science and Systems (RSS) Conference 2015

  • R&D 100 Award, R&D Magazine 2014

Research Focus

FUNIT: Few-shot Unsupervised Image-to-Image Translation (arXiv 2019)
Live Demo!

SPADE: Semantic Image Synthesis with Spatially-Adaptive Normalization (CVPR 2019, SIGGRAPH Real-Time Live 2019)
Live Demo!

vid2vid: Video-to-Video Synthesis (NeurIPS 2018)

MUNIT: Multimodal Unsupervised Image-to-Image Translation (ECCV 2018)

FastPhotoStyle
A Closed-form Solution to Photorealistic Image Stylization (ECCV 2018)

pix2pixHD
High-Resolution Image Synthesis and Semantic Manipulation (CVPR 2018)

MoCoGAN
Decomposing Motion and Content for Video Generation (CVPR 2018)

UNIT
Unsupervised Image-to-Image Translation Networks (NeurIPS 2017)

Coupled GAN
Coupled Generative Adversarial Networks (NeurIPS 2016)


Research community service

  • Conference program chair: WACV

  • Conference area chair: ICCV, CVPR, BMVC, WACV

  • Conference reviewer: CVPR, ICCV, ECCV, NeurIPS, ICML, ICLR

  • Journal reviewer: TPAMI, IJCV, TIP, TMM, CVIU

  • Journal guest editor: IJCV


Co-hosted Tutorials

  • [Site] ICCV 2019 Tutorial on Accelerating Computer Vision with Mixed Precision

  • [Site] ICIP 2019 Tutorial: Image-to-Image Translation

  • [Site] CVPR 2019 Tutorial: Deep Learning for Content Creation

  • [Site] CVPR 2017 Tutorial: Theory and Applications of Generative Adversarial Networks

  • [Site] ACCV 2016 Tutorial: Deep Learning for Vision Guided Language Generation and Image Generation


Co-hosted Workshops

  • [Site] ICCV 2019 Workshop: Image and Video Synthesis: How, Why and "What if"?

  • [Site] ICCV 2019 Workshop: Advances in Image Manipulation Workshop and Challenges on Image and Video Manipulation

  • [Site] CVPR 2019 Workshop: 4th New Trends in Image Restoration and Enhancement Workshop and Challenges

  • [Site] CVPR 2019 Workshop: AI City Challenge

  • [Site] CVPR 2018 Workshop: AI City Challenge