Navigating Large Language Models for Recommendation: From Architecture to Learning Paradigms and Deployment
At SIGIR 2025, July 13th, 2025
Padova Congress Center
Workshop Survey

Summary

Large Language Models (LLMs) are reshaping the landscape of recommender systems, giving rise to the emerging field of LLM4Rec, which attracts interest from both academia and industry. Unlike earlier approaches that simply borrowed model architectures or learning paradigms from language models, recent advances have produced a dedicated and evolving technical stack for LLM4Rec, spanning architecture design, pre-training and post-training strategies, inference techniques, and real-world deployment. This tutorial offers a systematic and in-depth overview of LLM4Rec through the lens of this technical stack. We will examine how LLMs are being adapted to recommendation tasks across different stages, empowering them with capabilities such as reasoning, planning, and in-context learning. Moreover, we will highlight practical challenges, including complex user modeling, trustworthiness, and evaluation. Distilling insights from recent research and identifying open problems, this tutorial aims to equip participants with a comprehensive understanding of LLM4Rec and inspire continued innovation in this rapidly evolving field.

Tutorials

  • May 14, 2024, 9:00 AM-12:30 PM: Tutorial at WWW'24. [PDF] [Slides]
  • July 13, 2025: Tutorial at SIGIR'25. [PDF] (Slides TBD)

Tutorial Organizers

Ms. Xinyu Lin

Ph.D. Candidate

National University of Singapore

 

Mr. Keqin Bao

Ph.D. Candidate

University of Science and Technology of China

 

Mr. Jizhi Zhang

Ph.D. Candidate

University of Science and Technology of China

 

Dr. Yang Zhang

Postdoctoral Research Fellow

National University of Singapore

 

Dr. Wenjie Wang

Postdoctoral Research Fellow

University of Science and Technology of China

 

Dr. Fuli Feng

Professor

University of Science and Technology of China

 

Dr. Xiangnan He

Professor

University of Science and Technology of China