Recommender Systems in the Era of Large Language Models (LLMs)

Keywords: Recommender Systems, Large Language Models (LLMs), Pre-training and Fine-tuning, In-context Learning, Prompting

Event Date: August 4th (Day 2), Afternoon 1 & 2

Address: IJCAI 2024, Jeju, Korea

Our Survey Paper: Recommender Systems in the Era of Large Language Models (LLMs)

INTRO

Given the explosive growth of online information, recommender systems have become an essential service for online activities, effectively helping users find the content they are interested in or the products they intend to purchase, thereby enhancing user engagement and satisfaction. While recommendation methods have developed rapidly thanks to advances in deep learning over the past few years, they still face several limitations: existing methods may struggle to effectively understand and capture textual side information, to generalize to diverse recommendation scenarios, and to reason about their predictions. Recently, the emergence of Large Language Models (LLMs) has revolutionized the field of Natural Language Processing (NLP) owing to their remarkable language understanding and generation abilities. More impressively, LLMs are capable of reasoning over complex problems and can be readily generalized to new domains or tasks. Such capabilities provide opportunities to address the aforementioned limitations of existing recommendation methods, making LLM-powered recommender systems a promising research area. To this end, this tutorial aims to provide a comprehensive overview of recent progress on LLM-powered recommender systems from various aspects, including pre-training, fine-tuning, and prompting. We expect it to help researchers and industrial practitioners in related fields of the AI community gain an overall understanding of LLM-powered recommender systems and to inspire new ideas for more exciting studies in the future.

Our Slides: Part I, Part II

The topics of this tutorial include (but are not limited to) the following:

  1. Introduction to LLM-based Recommender Systems
  2. Preliminaries of Recommender Systems and LLMs
  3. Pre-training LLM-based Recommender Systems
  4. Fine-tuning LLM-based Recommender Systems
  5. Prompting LLM-based Recommender Systems
  6. Dimension Interactions & Future Directions

The applications of LLMs for various recommendation tasks in the scenario of movie recommendation

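As a concrete illustration of the prompting paradigm covered in the tutorial, the minimal sketch below composes a zero-shot movie-recommendation prompt from a user's watch history and a candidate list. The function name, prompt wording, and example titles are our own illustration, not an interface from the tutorial or survey; the resulting string could be sent to any instruction-tuned LLM.

```python
def build_recommendation_prompt(history, candidates, k=3):
    """Compose a zero-shot prompt asking an LLM to rank candidate movies.

    `history` and `candidates` are plain lists of movie titles.
    """
    watched = "\n".join(f"- {title}" for title in history)
    options = "\n".join(f"{i + 1}. {title}" for i, title in enumerate(candidates))
    return (
        "You are a movie recommender.\n"
        f"The user has recently watched:\n{watched}\n\n"
        f"Candidate movies:\n{options}\n\n"
        f"Recommend the top {k} candidates for this user, "
        "one per line, and briefly explain each choice."
    )

prompt = build_recommendation_prompt(
    history=["Inception", "Interstellar"],
    candidates=["Tenet", "The Notebook", "Dunkirk", "Oppenheimer"],
)
print(prompt)
```

In-context learning variants of this idea simply prepend a few worked examples (history, candidates, and the correct pick) before the target user's query, while fine-tuning approaches instead train the LLM on such (prompt, recommendation) pairs.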

Presenters and Organizers

Dr. Wenqi Fan is an assistant professor in the Department of Computing at The Hong Kong Polytechnic University (PolyU). He received his Ph.D. degree from the City University of Hong Kong (CityU) in 2020. From 2018 to 2020, he was a visiting research scholar at Michigan State University. His research interests are in the broad areas of machine learning and data mining, with a particular focus on Recommender Systems, Graph Neural Networks, and Large Language Models (LLMs). He has published innovative papers in top-tier journals and conferences such as IEEE TKDE, KDD, WWW, NeurIPS, ICDE, IJCAI, AAAI, RecSys, WSDM, and SDM. He serves as a (senior) program committee member for top-tier conferences (e.g., ICML, ICLR, NeurIPS, KDD, WWW, AAAI, IJCAI, CIKM, WSDM) and as a reviewer for journals (e.g., TKDE, TIST, TKDD, TOIS, TAI). He has also served as the lead tutor of tutorials at top-tier conferences (e.g., WSDM 2023, WWW 2021/2022, IJCAI 2021, and ICAPS 2021). More information about him can be found at https://wenqifan03.github.io/.

Dr. Yujuan Ding is a research fellow in the Department of Computing at The Hong Kong Polytechnic University (PolyU). She received her Ph.D. degree from the same university in 2021. From 2019 to 2020, she was a visiting research student at the School of Computing, National University of Singapore. She has extensive research experience in multimedia analysis, large language models (LLMs), and information retrieval across various domains, with a particular focus on recommender systems, time-series modeling, vision-language modeling, and other AI applications. She has published her research outcomes in top-tier journals and conferences including ACM CSUR, IEEE TMM, IEEE TCSVT, ACM MM, ICCV, etc. She received the Best Student Paper Award at ICMR 2021. She serves as a program committee member for top-tier conferences (e.g., AAAI, ACL, ACM MM, EMNLP) and as a reviewer for journals. More information about her can be found at https://joanding.github.io/.

Shijie Wang is currently a PhD student at The Hong Kong Polytechnic University, supervised by Dr. Wenqi Fan and Prof. Qing Li. Before joining PolyU, he received his Bachelor's degree (Hons) in Information and Computing Science from Xi'an Jiaotong-Liverpool University in China and the University of Liverpool in the UK in 2022. His research interests include recommender systems, large language models (LLMs), and graph neural networks. He has published innovative works in top-tier conferences such as KDD and WWW. He serves as a program committee member for top-tier conferences (e.g., AAAI, ICDM) and as a reviewer for journals (e.g., TKDD). More information about him can be found at https://sjay-wang.github.io/.

Liangbo Ning is currently a PhD student at The Hong Kong Polytechnic University, supervised by Prof. Qing Li and Dr. Wenqi Fan. He received his B.Eng. and M.Sc. degrees from Northwestern Polytechnical University (NPU), Xi'an, China, in 2020 and 2023, respectively. His research interests include adversarial attacks, large language models (LLMs), and pattern recognition. He has published papers in top-tier journals and conferences such as TNNLS, WWW, and KDD. Additionally, he serves as a reviewer for leading journals and conferences, including TNNLS, TKDE, TKDD, and KDD. More information about him can be found at https://biglemon-ning.github.io/.

Dr. Qiaoyu Tan is an Assistant Professor of Computer Science at New York University Shanghai (NYU Shanghai), where he leads the Data-Centric AI and Network Lab. He received his Ph.D. from Texas A&M University and his B.Eng. from Southwest University, China. His research interests are generally in machine learning and data mining, with a particular focus on graph machine learning, foundation models, effective deep learning, and trustworthy AI. He has published papers in top-tier conferences such as NeurIPS, WWW, SIGIR, AAAI, ICDE, WSDM, CIKM, etc. He was a Best Student Paper finalist at AMIA 2023. He serves as a program committee member for top-tier conferences (e.g., IJCAI, KDD, WWW, AAAI, NeurIPS, CIKM, WSDM) and as a reviewer for journals (e.g., TKDE, TKDD). More information about him can be found at https://qiaoyu-tan.github.io/.

Prof. Qing Li is currently a Chair Professor (Data Science) and Head of the Department of Computing at The Hong Kong Polytechnic University. He received his B.Eng. degree from Hunan University, Changsha, China, and his M.Sc. and Ph.D. degrees from the University of Southern California, Los Angeles, all in computer science. His research interests include object modeling, multimedia databases, social media, and recommender systems. He is a Fellow of IEEE and IET, a member of ACM SIGMOD, and a member of the IEEE Technical Committee on Data Engineering. He is the chairperson of the Hong Kong Web Society and a steering committee member of DASFAA, ICWL, and the WISE Society. More information about him can be found at https://www4.comp.polyu.edu.hk/~csqli/.