About me

My name is Xutong Liu. I am currently a postdoctoral fellow working with Prof. John C.S. Lui at the ANSR Lab, the Chinese University of Hong Kong (CUHK). I received my Ph.D. degree from the Computer Science and Engineering Department at CUHK in 2022, proudly supervised by Prof. John C.S. Lui. Prior to that, I received my bachelor's degree with honors (top 5%) from the University of Science and Technology of China (USTC) in 2017. In my research, I am fortunate to collaborate with many outstanding researchers, including Dr. Wei Chen and Dr. Siwei Wang from Microsoft Research, Prof. Carlee Joe-Wong and Dr. Jinhang Zuo from Carnegie Mellon University, and Prof. Shuai Li from Shanghai Jiao Tong University.

Research

My research focuses on online/data-driven combinatorial optimization, at the intersection of combinatorial optimization, stochastic modeling, online learning, and reinforcement learning. Through the lens of algorithm design and mathematical analysis, I am interested in solving decision-making problems for recommender systems, network systems, quantum systems, and data-center optimization. For these applications, my goal is to develop efficient solutions with provable guarantees on learning efficiency, scalability, and generalizability.

My recent work mainly studies online learning and reinforcement learning problems, e.g., combinatorial multi-armed bandits, distributed/federated multi-armed bandits, and reinforcement learning with large action spaces.

News

  • Dec. 2022: Our work on a variance-adaptive algorithm for the probabilistic maximum coverage problem is accepted to INFOCOM 2023.
  • Nov. 2022: Our work on explorative key-term selection strategies for conversational contextual bandits is accepted to AAAI 2023.
  • Sept. 2022: Our work on batch-size independent regret bounds for combinatorial bandits appears in NeurIPS 2022.
  • July 2022: I successfully passed my Ph.D. thesis defence! I will join CUHK as a postdoc this fall.
  • May 2022: Our work on federated online clustering of bandits appears in UAI 2022.
  • April 2022: Our work on constrained multi-armed bandits for network applications is accepted to IEEE Transactions on Mobile Computing.
  • March 2022: Our work on competitive influence maximization appears in AISTATS 2022.
  • July 2021: Our work on multi-layered network exploration via random walks appears in ICML 2021 as a long talk (3%).