My name is Xutong Liu. I am now a postdoctoral fellow working with Prof. John C.S. Lui (IEEE/ACM Fellow) at the ANSR Lab, the Chinese University of Hong Kong (CUHK). I received my Ph.D. degree from the Computer Science and Engineering Department at CUHK in 2022, proudly supervised by Prof. John C.S. Lui. Prior to that, I received my bachelor’s degree with honors (ranked top 5%) from the University of Science and Technology of China (USTC) in 2017. In my research, I am fortunate to collaborate with many outstanding researchers, including Dr. Wei Chen (IEEE Fellow, Chair of the MSR Asia Theory Center) and Dr. Siwei Wang from Microsoft Research, Prof. Shuai Li from Shanghai Jiao Tong University, Dr. Jinhang Zuo, Prof. Mohammad Hajiesmaili, and Prof. Don Towsley (IEEE/ACM Fellow) from the University of Massachusetts Amherst, Prof. Carlee Joe-Wong from Carnegie Mellon University, and Prof. Adam Wierman from the California Institute of Technology.
My research focuses on data-driven combinatorial optimization and combinatorial optimization under uncertainty, which lie at the intersection of combinatorial optimization, stochastic modeling, online learning, and reinforcement learning. Through the lens of algorithm design and mathematical analysis, I am interested in solving decision-making problems in recommender systems, network systems, quantum systems, and data-center optimization. For these applications, my goal is to develop efficient solutions with provable guarantees on learning efficiency, scalability, and generalizability.
My recent work mainly studies online learning and reinforcement learning problems, e.g., combinatorial multi-armed bandits, distributed/federated multi-armed bandits, and reinforcement learning with large action spaces.
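To give a flavor of the combinatorial bandit setting mentioned above, here is a minimal illustrative sketch of a CUCB-style top-k algorithm with semi-bandit feedback. This is a toy example, not any specific published algorithm: the `cucb` function, the confidence-bonus constant, and the Bernoulli base arms are all assumptions made for illustration.

```python
import math
import random

def cucb(n_arms, k, means, horizon, seed=0):
    """Minimal CUCB-style sketch: each round, play the k base arms with
    the highest UCB indices (a top-k super arm) and observe semi-bandit
    feedback, i.e., the Bernoulli outcome of every played base arm."""
    rng = random.Random(seed)
    counts = [0] * n_arms     # number of plays per base arm
    sums = [0.0] * n_arms     # cumulative observed reward per base arm
    total = 0.0
    for t in range(1, horizon + 1):
        # UCB index = empirical mean + exploration bonus; unplayed arms first
        ucb = [
            sums[i] / counts[i] + math.sqrt(1.5 * math.log(t) / counts[i])
            if counts[i] else float("inf")
            for i in range(n_arms)
        ]
        super_arm = sorted(range(n_arms), key=lambda i: -ucb[i])[:k]
        for i in super_arm:
            x = 1.0 if rng.random() < means[i] else 0.0
            counts[i] += 1
            sums[i] += x
            total += x
    return total, counts

# Toy run: 6 Bernoulli arms, choose 2 per round for 2000 rounds; the two
# best arms (means 0.9 and 0.8) end up being played most often.
reward, counts = cucb(6, 2, [0.9, 0.8, 0.5, 0.4, 0.3, 0.2], 2000)
```

The key design point is that the learner optimizes over subsets (super arms) while maintaining per-base-arm statistics, which is what makes regret bounds scale with the number of base arms rather than the exponentially many subsets.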
- April 2023: Our work on contextual combinatorial bandits with probabilistically triggered arms is accepted to ICML 2023.
- April 2023: I was awarded the RGC Postdoctoral Fellowship (one of 50 awardees globally)!
- Jan. 2023: Our work on on-demand communication for asynchronous multi-agent bandits is accepted to AISTATS 2023.
- Jan. 2023: Our work on near-optimal individual regret & low communications in multi-agent bandits is accepted to ICLR 2023.
- Dec. 2022: Our work on a variance-adaptive algorithm for the probabilistic maximum coverage problem is accepted to INFOCOM 2023.
- Nov. 2022: Our work on explorative key-term selection strategies for conversational contextual bandits is accepted to AAAI 2023.
- Sept. 2022: Our work on batch-size independent regret bounds for combinatorial bandits appears in NeurIPS 2022.
- July 2022: I successfully passed my Ph.D. thesis defence! I will join CUHK as a postdoc this fall.
- May 2022: Our work on federated online clustering of bandits appears in UAI 2022.
- April 2022: Our work on constrained multi-armed bandits for network applications is accepted to IEEE Transactions on Mobile Computing.
- March 2022: Our work on competitive influence maximization appears in AISTATS 2022.
- July 2021: Our work on multi-layered network exploration via random walks appears in ICML 2021 as a long talk (3%).