I obtained my Ph.D. in Computer Science at UCLA, advised by Cho-Jui Hsieh. Prior to that, I received my B.S. degree from the School of Physics, Peking University in 2016 (thesis advisor: Qite Li). Before graduation, I was a research intern studying natural language processing, advised by Yansong Feng. My research interests are optimization problems in machine learning, robust neural networks, and generative modeling, specifically:

  • Convex/non-convex optimization algorithms for efficient machine learning
  • Robust neural networks
  • Efficient learning of generative models

Curriculum vitae

News

  • (Dec. 2021) I now work in industry after graduation, and this website is no longer actively maintained. If you have questions regarding my previous work, send me an email (<fullname>@outlook.com, replacing <fullname> with xuanqingliu).

Education

  • Fall 2018 — Fall 2021, Ph.D., Department of Computer Science, UCLA
  • Fall 2016 — Spring 2018, Ph.D. student, Department of Computer Science, UC Davis
  • Fall 2011 — Spring 2016, School of Physics, Peking University

Experience

  • Fall/Winter 2020, Research Scientist Intern, Amazon A9 (Palo Alto, CA)
  • Summer/Fall 2019, Research Scientist Intern, Amazon A9 (Palo Alto, CA)
  • Fall/Winter 2018, Student Research Collaborator, Google Research (Mountain View, CA)
  • Summer 2018, Research Scientist Intern, Criteo Lab (Palo Alto, CA)

Teaching

  • UC Davis ECS 171. Machine Learning
  • UCLA CS 180. Algorithms
  • UCLA CS 260. Machine Learning Algorithms

Preprints

  • Improving the Speed and Quality of GAN by Adversarial Training, Jiachen Zhong, Xuanqing Liu, Cho-Jui Hsieh. arXiv preprint (2020). [PDF] [Code]
  • How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers, Yuanhao Xiong, Xuanqing Liu, Li-Cheng Lan, Yang You, Si Si, Cho-Jui Hsieh. arXiv preprint (2020). [PDF]
  • Gradient Boosting Neural Networks: GrowNet, Sarkhan Badirli, Xuanqing Liu, Zhengming Xing, Avradeep Bhowmik, Sathiya S. Keerthi. arXiv preprint (2020). [PDF] [Code]
  • Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective, Lu Wang, Xuanqing Liu, Jinfeng Yi, Zhi-Hua Zhou, Cho-Jui Hsieh. arXiv preprint (2019). [PDF] [Code]
  • GraphDefense: Towards Robust Graph Convolutional Networks, Xiaoyun Wang, Xuanqing Liu, Cho-Jui Hsieh. arXiv preprint (2019). [PDF]
  • Stochastic Second-order Methods for Non-convex Optimization with Inexact Hessian and Gradient, Liu Liu, Xuanqing Liu, Cho-Jui Hsieh, Dacheng Tao. arXiv preprint (2018). [PDF]
  • An inexact subsampled proximal Newton-type method for large-scale machine learning, Xuanqing Liu, Cho-Jui Hsieh*, Jason D. Lee*, Yuekai Sun* (*alphabetical order). arXiv preprint (2017). [PDF]

Publications

  • Label Disentanglement in Partition-based Extreme Multilabel Classification, Xuanqing Liu, Wei-Cheng Chang, Hsiang-Fu Yu, Cho-Jui Hsieh, Inderjit S. Dhillon. NeurIPS 2021. [PDF] [Code]
  • Investigating heterogeneities of live mesenchymal stromal cells using AI-based label-free imaging, Sara Imboden*, Xuanqing Liu*, Brandon S. Lee, Marie C. Payne, Cho-Jui Hsieh, Neil Y. C. Lin (*equal contribution). Scientific Reports. [PDF]
  • Evaluations and Methods for Explanation through Robustness Analysis, Cheng-Yu Hsieh, Chih-Kuan Yeh, Xuanqing Liu, Pradeep Kumar Ravikumar, Seungyeon Kim, Sanjiv Kumar, Cho-Jui Hsieh. ICLR 2021. [PDF]
  • Provably Robust Metric Learning, Lu Wang, Xuanqing Liu, Jinfeng Yi, Yuan Jiang, Cho-Jui Hsieh. NeurIPS 2020. [PDF] [Code]
  • Quantifying Marker Expression of Live Mesenchymal Stromal Cells Using Transmitted Light Microscopy, Brandon Lee, Sara Imboden, Xuanqing Liu, Cho-Jui Hsieh, Neil Lin. BMES 2020.
  • Learning to Encode Position for Transformer with Continuous Dynamical Model, Xuanqing Liu, Hsiang-Fu Yu, Inderjit Dhillon, Cho-Jui Hsieh. ICML 2020. [PDF] [Code] [Media]
  • How Does Noise Help Robustness? Stabilizing Neural ODE Networks with Stochastic Noise, Xuanqing Liu, Tesi Xiao, Si Si, Qin Cao, Sanjiv Kumar, Cho-Jui Hsieh. CVPR 2020 (oral presentation). [PDF] [Code]
  • A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning, Xuanqing Liu, Si Si, Xiaojin (Jerry) Zhu, Yang Li, Cho-Jui Hsieh. NeurIPS 2019. [PDF] [Poster] [Code]
  • Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks, Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, Cho-Jui Hsieh. KDD 2019. [PDF] [Code V1] [Code V2]
  • Rob-GAN: Generator, Discriminator and Adversarial Attacker, Xuanqing Liu, Cho-Jui Hsieh. CVPR 2019. [PDF] [Code]
  • Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network, Xuanqing Liu, Yao Li*, Chongruo Wu*, Cho-Jui Hsieh (*equal contribution). ICLR 2019. [PDF] [Code]
  • Towards Robust Neural Networks via Random Self-ensemble, Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh. ECCV 2018. [PDF] [Appendix] [Code]
  • Fast Variance Reduction Method with Stochastic Batch Size, Xuanqing Liu, Cho-Jui Hsieh. ICML 2018. [PDF]

Drafts

  • Better Generalization by Efficient Trust Region Method. Xuanqing Liu, Jason D. Lee, Cho-Jui Hsieh. OpenReview.net. [PDF]

Services

I have reviewed for ICML*, NeurIPS, ICLR, CVPR, ICCV, ECCV, WACV, IJCAI, AAAI, TPAMI, IJCNLP, and JAIR, among others.

*Top 33% Reviewer in ICML 2020.