My name is Xutong Liu. I am currently a postdoctoral fellow working with Prof. John C.S. Lui (IEEE/ACM Fellow) at the ANSR Lab, The Chinese University of Hong Kong (CUHK). I received my Ph.D. degree from the Department of Computer Science and Engineering at CUHK in 2022, proudly supervised by Prof. John C.S. Lui. Prior to that, I received my bachelor’s degree with an honors rank (top 5%) from the University of Science and Technology of China (USTC) in 2017. In my research, I am fortunate to collaborate with many outstanding researchers, including Dr. Wei Chen (IEEE Fellow, Chair of the MSR Asia Theory Center) and Dr. Siwei Wang from Microsoft Research, Prof. Shuai Li from Shanghai Jiao Tong University, Dr. Jinhang Zuo, Prof. Mohammad Hajiesmaili, and Prof. Don Towsley (IEEE/ACM Fellow) from the University of Massachusetts Amherst, Prof. Carlee Joe-Wong from Carnegie Mellon University, and Prof. Adam Wierman from the California Institute of Technology.
My research focuses on data-driven combinatorial optimization and combinatorial optimization under uncertainty, which lie at the intersection of combinatorial optimization, stochastic modeling, online learning, and reinforcement learning. Through algorithm design and mathematical analysis, I am interested in solving decision-making problems for recommender systems, network systems, and data-center optimization. For these applications, my goal is to develop efficient solutions with provable guarantees on learning efficiency, scalability, and generalizability.
My recent work mainly studies online learning and reinforcement learning problems, e.g., combinatorial multi-armed bandits, distributed/federated multi-armed bandits, and reinforcement learning with large action spaces.
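To give a concrete flavor of the combinatorial semi-bandit setting mentioned above, here is a minimal, self-contained Python sketch of a generic CUCB-style learner on a hypothetical top-k Bernoulli instance. The function name, the instance, and the confidence-radius constant are all illustrative choices for exposition, not taken from any of the papers listed below.

```python
import math
import random

# Illustrative combinatorial semi-bandit sketch (not a specific published
# algorithm): each round, pick the k base arms with the highest upper
# confidence bounds and observe the reward of every selected arm.
def cucb_semi_bandit(true_means, k, horizon, seed=0):
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n      # times each base arm was played
    sums = [0.0] * n      # cumulative observed reward per arm
    total_reward = 0.0
    for t in range(1, horizon + 1):
        # Optimistic index: empirical mean plus a confidence radius.
        def ucb(i):
            if counts[i] == 0:
                return float("inf")  # force initial exploration
            return sums[i] / counts[i] + math.sqrt(1.5 * math.log(t) / counts[i])
        # Oracle step: for a top-k constraint, sorting by index suffices.
        action = sorted(range(n), key=ucb, reverse=True)[:k]
        # Semi-bandit feedback: observe each selected arm's Bernoulli reward.
        for i in action:
            reward = 1.0 if rng.random() < true_means[i] else 0.0
            counts[i] += 1
            sums[i] += reward
            total_reward += reward
    return total_reward

# Usage: 5 Bernoulli arms, select 2 per round, for 2000 rounds.
reward = cucb_semi_bandit([0.9, 0.8, 0.5, 0.3, 0.1], k=2, horizon=2000)
```

In richer settings (e.g., probabilistically triggered arms), the sorting step is replaced by an application-specific offline oracle, which is what makes the combinatorial structure interesting.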
[ICML’23] Contextual Combinatorial Bandits with Probabilistically Triggered Arms
Xutong Liu, Jinhang Zuo, Siwei Wang, John C.S. Lui, Mohammad Hajiesmaili, Adam Wierman, Wei Chen.
The 40th International Conference on Machine Learning (ICML), 2023. (1827/6538=27.9%).
[NeurIPS’22] Batch-Size Independent Regret Bounds for Combinatorial Semi-Bandits with Probabilistically Triggered Arms or Independent Arms
Xutong Liu, Jinhang Zuo, Siwei Wang, Carlee Joe-Wong, John C.S. Lui, Wei Chen.
Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS), 2022. (2665/10411=25.6%).
[arXiv] [paper] [slides] [poster]
[UAI’22] Federated Online Clustering of Bandits
Xutong Liu, Haoru Zhao, Tong Yu, Shuai Li, John C.S. Lui.
The 38th Conference on Uncertainty in Artificial Intelligence (UAI), 2022. (230/712=32%).
[paper] [arXiv] [slides] [poster] [code]
[ICML’21, Long Oral] Multi-layered Network Exploration via Random Walks: From Offline Optimization to Online Learning
Xutong Liu, Jinhang Zuo, Xiaowei Chen, Wei Chen, John C.S. Lui.
The 38th International Conference on Machine Learning (ICML), Long Oral, 2021. (166/5513=3%).
[paper] [arXiv] [slides] [poster] [video]
[INFOCOM’18] An Online Learning Approach to Network Application Optimization with Guarantee
Kechao Cai, Xutong Liu, Yuzhen Janice Chen, and John C.S. Lui.
IEEE International Conference on Computer Communications (INFOCOM), 2018.
- Dec. 2023: Our work on learning context-aware probabilistic maximum coverage bandits is accepted to INFOCOM 2024.
- Oct. 2023: I am visiting University of Massachusetts Amherst as a visiting scholar advised by Prof. Mohammad Hajiesmaili.
- Sept. 2023: Our work on online clustering of bandits with misspecified user model is accepted to NeurIPS 2023.
- June 2023: Our work on free exploration in cooperative multi-agent bandits is accepted to UAI 2023.
- Apr. 2023: Our work on contextual combinatorial bandits with probabilistically triggered arms is accepted to ICML 2023.
- Apr. 2023: I was awarded the RGC Postdoctoral Fellowship (one of 50 awardees globally)!
- Jan. 2023: Our work on on-demand communication for asynchronous multi-agent bandits is accepted to AISTATS 2023.
- Jan. 2023: Our work on near-optimal individual regret & low communications in multi-agent bandits is accepted to ICLR 2023.
- Dec. 2022: Our work on a variance-adaptive algorithm for the probabilistic maximum coverage problem is accepted to INFOCOM 2023.
- Nov. 2022: Our work on explorative key-term selection strategies for conversational contextual bandits is accepted to AAAI 2023.
- Sept. 2022: Our work on batch-size independent regret bounds for combinatorial bandits is accepted to NeurIPS 2022.
- July 2022: I successfully passed my Ph.D. thesis defence! I will join CUHK as a postdoc this fall.
- May 2022: Our work on federated online clustering of bandits appears in UAI 2022.
- Apr. 2022: Our work on constrained multi-armed bandits for network applications is accepted to IEEE Transactions on Mobile Computing.
- March 2022: Our work on competitive influence maximization appears in AISTATS 2022.
- July 2021: Our work on multi-layered network exploration via random walks appears in ICML 2021 as a long talk (3%).