Recommender Systems in the Era of Large Language Models (LLMs)

Keywords: Recommender Systems, Large Language Models (LLMs), Pre-training and Fine-tuning, In-context Learning, Prompting

Event Date: December 3rd (Day 3), 10:00 AM - 12:00 PM (UTC+8, Beijing Time)

Address: Room 6, ICDM 2023, Shanghai, China

Zoom ID: 91649466943, Password: 202312

Our Survey Paper: A Comprehensive Survey on LLM-based Recommender Systems

INTRO

With the prosperity of e-commerce and web applications, Recommender Systems (RecSys) have become an important component of our daily lives, providing personalized suggestions that cater to user preferences. While Deep Neural Networks (DNNs) have significantly advanced recommender systems by modeling user-item interactions and incorporating textual side information, these DNN-based methods still have notable limitations: they struggle to effectively understand users' interests and capture textual side information, they generalize poorly to various seen/unseen recommendation scenarios, and they cannot easily reason about their predictions.
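To make the DNN-based paradigm concrete, below is a minimal sketch of a neural collaborative filtering model in PyTorch that learns to score user-item interactions; the architecture, names, and sizes are illustrative choices of ours, not a specific method from the survey.

```python
import torch
import torch.nn as nn

class NeuralCF(nn.Module):
    """Toy DNN-based recommender: embed user/item IDs, score with an MLP."""
    def __init__(self, num_users, num_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, user_ids, item_ids):
        u = self.user_emb(user_ids)  # (batch, dim)
        v = self.item_emb(item_ids)  # (batch, dim)
        return self.mlp(torch.cat([u, v], dim=-1)).squeeze(-1)  # interaction scores

model = NeuralCF(num_users=1000, num_items=5000)
scores = model(torch.tensor([0, 1]), torch.tensor([42, 7]))  # higher = more relevant
```

Note that such a model only sees opaque IDs (plus whatever side features are explicitly wired in), which is precisely where the limitations above originate.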

Meanwhile, the emergence of Large Language Models (LLMs), such as ChatGPT and GPT-4, has revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI), owing to their remarkable abilities on fundamental language understanding and generation tasks, as well as their impressive generalization and reasoning capabilities. As a result, recent studies have attempted to harness the power of LLMs to enhance recommender systems.

Given the rapid evolution of this research direction in recommender systems, there is a pressing need for a systematic overview that summarizes existing LLM-empowered recommender systems and gives researchers and practitioners in relevant fields an in-depth understanding. Therefore, in this survey, we conduct a comprehensive review of LLM-empowered recommender systems from various aspects, including Pre-training, Fine-tuning, and Prompting. More specifically, we first introduce representative methods that harness the power of LLMs (as feature encoders) for learning representations of users and items. Then, we review recent advanced techniques for enhancing recommender systems with LLMs across three paradigms, namely pre-training, fine-tuning, and prompting. Finally, we comprehensively discuss promising future directions in this emerging field.
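As one concrete illustration of the LLM-as-feature-encoder idea, the sketch below uses Hugging Face Transformers to turn items' textual side information into dense embeddings; the choice of bert-base-uncased and mean pooling is our assumption for illustration, as the surveyed methods differ in their encoders and pooling strategies.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed encoder choice for illustration; surveyed works use various LLMs.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

item_texts = [
    "A space-opera adventure about a farm boy who joins a rebellion.",
    "A romantic comedy set in a small coastal bookshop.",
]
batch = tokenizer(item_texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean-pool over non-padding tokens to get one vector per item.
mask = batch["attention_mask"].unsqueeze(-1).float()      # (batch, seq_len, 1)
item_embs = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (batch, 768)
# item_embs can now serve as item representations in a downstream recommender.
```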

The topics of this tutorial include (but are not limited to) the following:

  1. Introduction to LLM-based Recommender Systems
  2. Preliminaries of Recommender Systems and LLMs
  3. Pre-training LLM-based Recommender Systems
  4. Fine-tuning LLM-based Recommender Systems
  5. Prompting LLM-based Recommender Systems
  6. Dimension Interactions & Future Directions

Figure: The applications of LLMs for various recommendation tasks in the scenario of movie recommendation.
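For the prompting paradigm in particular, a recommendation task can be cast as a natural-language instruction, as the figure's movie scenario suggests. The snippet below assembles such a prompt; the template and movie titles are invented for illustration, and the resulting string can be sent to any chat-style LLM.

```python
# Hypothetical viewing history and candidate pool, for illustration only.
history = ["The Matrix", "Inception", "Blade Runner 2049"]
candidates = ["Interstellar", "Notting Hill", "Ex Machina", "Mamma Mia!"]

prompt = (
    "You are a movie recommender system.\n"
    f"The user recently watched: {', '.join(history)}.\n"
    f"Rank the following candidates by how likely the user is to enjoy them: "
    f"{', '.join(candidates)}.\n"
    "Return an ordered list with a one-sentence reason for each movie."
)
print(prompt)  # pass this string to an LLM of your choice
```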

Presenter Biography

Dr. Wenqi Fan is a research assistant professor in the Department of Computing at The Hong Kong Polytechnic University (PolyU). He received his Ph.D. degree from the City University of Hong Kong (CityU) in 2020. From 2018 to 2020, he was a visiting research scholar at Michigan State University. His research interests lie in the broad areas of machine learning and data mining, with a particular focus on Recommender Systems, Graph Neural Networks, and Adversarial Attacks. He has published innovative papers in top-tier journals and conferences such as IEEE TKDE, KDD, WWW, NeurIPS, ICDE, IJCAI, AAAI, RecSys, WSDM, and SDM. He serves as a (senior) program committee member for top-tier conferences (e.g., ICML, ICLR, NeurIPS, KDD, WWW, AAAI, IJCAI, CIKM, and WSDM) and as a reviewer for journals (e.g., TKDE, TIST, TKDD, TOIS, and TAI). He has also served as the lead tutor of tutorials at top-tier conferences (e.g., WWW 2021, IJCAI 2021, and ICAPS 2021). More information about him can be found at https://wenqifan03.github.io/.

Jiatong Li is a Ph.D. student in the Department of Computing, The Hong Kong Polytechnic University (funded by HKPFS). He received his MSc degree in Information Technology (with Distinction, Dean's List) from the University of Melbourne, under the supervision of Dr. Lea Frermann. In 2021, he received his B.Eng. degree in Information Security from Shanghai Jiao Tong University. His research interests lie in Natural Language Processing (Large Language Models), Graph Mining, Drug Discovery, and Recommender Systems. He has published innovative papers in top-tier conferences such as IJCAI and ACL. He also serves as a program committee member for AAAI 2024. More information about him can be found at https://phenixace.github.io/.

Zihuai Zhao is currently a Ph.D. student in the Department of Computing, The Hong Kong Polytechnic University, under the supervision of Prof. Qing Li and Dr. Wenqi Fan. He received his Master's degree and Bachelor's degree in Electrical Engineering from the University of Sydney in 2023 and 2020, respectively. His research interests cover Recommender Systems, Natural Language Processing, and Deep Reinforcement Learning. More information about him can be found at https://scofizz.github.io/.

Yunqing Liu is currently a Ph.D. student in the Department of Computing, The Hong Kong Polytechnic University, under the supervision of Dr. Wenqi Fan. He received his Master's degree in Computer Science from the University of Edinburgh, under the supervision of Dr. Elizabeth Polgreen. In 2020, he received his bachelor's degree from Wuhan University. His research interests lie in Drug Discovery, Graph Neural Networks, and Natural Language Processing. More information about him can be found at https://liuyunqing.github.io/.

Dr. Yiqi Wang is an assistant professor at the College of Computer, National University of Defense Technology (NUDT). She is currently working on graph neural networks, including fundamental algorithms, robustness, and their applications. She has published innovative works in top-tier conferences such as ICML, KDD, WWW, EMNLP, WSDM, and AAAI. She serves as a program committee member for top-tier conferences (e.g., WWW, AAAI, IJCAI, CIKM, and WSDM) and as a reviewer for journals (e.g., TIST, TKDD, TKDE, and TOIS). She has also served as the lead tutor of tutorials at top-tier conferences (e.g., KDD 2020, AAAI 2021, SDM 2021, KDD 2021, and ICAPS 2021). More information about her can be found at http://cse.msu.edu/~wangy206/.