Large Language Models for Graph Learning
LLMs4Graph workshop at WWW'24

About

Graphs are widely used to encode entities and their various relations in web applications such as social media and recommender systems. Meanwhile, graph learning techniques such as graph neural networks are in high demand to support the analysis, understanding, and use of graph-structured data. Recently, the boom of language foundation models, especially Large Language Models (LLMs), has advanced several major research areas in artificial intelligence, including natural language processing, graph mining, and recommender systems. The synergy between LLMs and graph learning holds great potential to advance research in both areas. For example, LLMs can facilitate existing graph learning models by providing high-quality textual features for entities and edges, or by enriching graph data with their encoded knowledge; they may also inspire novel formulations of graph-related tasks. Owing to this research significance and potential, the convergence of LLMs and graph learning has attracted considerable research attention.

By inviting experts to deliver keynote speeches, the LLMs4Graph workshop at WWW'24 aims to share the latest innovations and breakthroughs on this topic, serving as a beacon for current and future research. Through its oral and poster sessions, it provides a platform for researchers in natural language processing and graph learning to exchange ideas, summarize existing work, and discuss future directions. It focuses on the under-explored ability of LLMs to support graph learning tasks, including modeling, prediction, and reasoning. By bringing together participants from both academia and industry, the workshop seeks to narrow the gap between application efforts and methodological studies and, more importantly, to push the boundaries of next-generation graph methods and AI-driven decision making.


Topics of Interest

Researchers in the graph learning and related communities have begun to test the ability of current LLMs to handle graph-structured data, as well as different ways to empower existing LLMs for complex graph tasks given their current weaknesses. Despite the promise and the achievements so far, the research area of LLMs for learning on graphs is still emerging and under-explored. To encourage researchers with relevant backgrounds to engage and contribute, we organize this first-of-its-kind LLMs for Graph Learning workshop, aiming to gather insights from both academia and industry.

The workshop welcomes theory and methodology papers within the scope of the following themes, including but not limited to:

  • Generative Models on Graph-structured Data
  • LLMs-enhanced Representation Learning for Graphs
  • LLMs for Graph Data Augmentation
  • LLMs for Interpretability and Explanation on Graphs
  • Graph Structure Modeling with LLMs
  • Graph Foundation Models Pre-training
  • Fine-tuning LLMs for Graphs
  • Graph Prompting and Instructing
  • Graph Task Re-formulation and Evaluation in the Era of LLMs
  • Graph Reasoning with LLMs
  • Synergistic Integration of Graph Modeling and LLMs
  • LLMs for Dynamic Graphs
  • LLMs-based Methods for Graph Tasks in Vertical Domains
  • Graph-enhanced LLMs
  • Trustworthy LLMs-enhanced Graph Learning
as well as application papers in domains including but not limited to:
  • Recommender Systems
  • Computer Vision
  • Bioinformatics
  • Finance
  • Transportation/mobility networks (e.g., traffic prediction)


Important Dates

  • Submission Deadline: February 25, 2024
  • Notification of Acceptance: March 4, 2024
  • Camera-ready Paper Due: March 11, 2024
  • LLMs4Graph at WWW'24 Workshop Day: Tuesday, May 14, 2024, Half-Day (AM)

Submission Details

Workshop papers should not exceed 12 pages in length (maximum 8 pages for the main paper content + maximum 2 pages for appendices + maximum 2 pages for references). Papers must be submitted in PDF format according to the ACM template published in the ACM guidelines, selecting the generic “sigconf” sample. The PDF files must have all non-standard fonts embedded. Workshop papers must be self-contained and written in English. Submissions will be reviewed double-blind, so author names and affiliations should NOT be listed. Submitted works will be assessed based on their novelty, technical quality, potential impact, and clarity of writing. Please refer to the WWW'24 website for further details.

Note that at least one author of each accepted workshop paper must register for the workshop (details to come on the main WWW'24 website). For questions about submission, please contact us at: wenqifan03@gmail.com

To submit your work, please use the following Submission Link

Workshop Program

09:00 ~ 09:10 Opening Remarks
09:10 ~ 09:55 Invited Talk I: From Evaluation To Understanding: Auto-benchmarking (Multi-modal) LLMs and Beyond by Dr. Yixin Cao
09:55 ~ 10:40 Invited Talk II: Making Large Language Models Aligned with Recommendation by Dr. Xiang Wang

10:40 ~ 11:00 Coffee Break

11:00 ~ 11:30 Invited Talk III: Collaborative Large Language Model for Recommender Systems by Dr. Jundong Li
11:30 ~ 12:00 Oral Presentation: A Comprehensive Evaluation of Large Language Model on Temporal Event Forecasting
12:00 ~ 12:20 Closing Remarks

Keynote Speakers

Dr. Yixin Cao

Assistant Professor

Singapore Management University

Dr. Xiang Wang

Professor

University of Science and Technology of China

Dr. Jundong Li

Assistant Professor

University of Virginia


Organization


Workshop Co-Chairs

Wenqi Fan

Assistant Professor

The Hong Kong Polytechnic University

Yujuan Ding

Research Fellow

The Hong Kong Polytechnic University

Xiao Huang

Assistant Professor

The Hong Kong Polytechnic University

Qing Li

Professor

The Hong Kong Polytechnic University