The 2nd Workshop on Recommendation with Generative Models
on the Web Conference 2024 (WWW'24)
Leo 3 @ Resorts World Sentosa Convention Centre, Singapore
Monday 13 May 2024, 9:00 AM - 12:35 PM

Summary

The rise of generative models has driven significant advancements in recommender systems, creating unique opportunities to enhance personalized recommendations for users. This workshop serves as a platform for researchers to explore and exchange innovative concepts related to the integration of generative models into recommender systems. It primarily focuses on five key perspectives: (i) improving recommender algorithms, (ii) generating personalized content, (iii) evolving the user-system interaction paradigm, (iv) enhancing trustworthiness checks, and (v) refining evaluation methodologies for generative recommendations. With generative models advancing rapidly, an increasing body of research is emerging in these domains, underscoring the timeliness and critical importance of this workshop. Research in this direction will introduce innovative technologies to recommender systems and pose fresh challenges to both academia and industry. In the long term, it has the potential to revolutionize traditional recommender paradigms and foster the development of next-generation recommender systems. [PDF]

Workshop Final Programme

Activity type Time (Singapore Time) Title
Opening 9.00am
Keynote 9.00am-9.35am Keynote 1: LLMs for Recommendations: A Hybrid Approach by Dr. Minmin Chen from Google
Paper oral 9.35am-9.45am Multimodal Conditioned Diffusion Model for Recommendation
Paper oral 9.45am-9.55am Diffusion Recommendation with Implicit Sequence Influence
Keynote 9.55am-10.30am Keynote 2: Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations by Dr. Jiaqi Zhai and Dr. Rui Li from Meta
Coffee break 10.30am-11.00am
Keynote 11.00am-11.35am Keynote 3: Data, methods, and evaluation for knowledge-grounded conversational recommendation systems by Prof. Julian McAuley from UCSD
Paper oral 11.35am-11.45am OutfitGPT: LLMs as Fashion Outfit Generator and Recommender
Paper oral 11.45am-11.55am Bridging Items and Language: A Transition Paradigm for Large Language Model-Based Recommendation
Paper oral 11.55am-12.05pm A Study of Implicit User Unfairness in Large Language Models for Recommendation
Paper oral 12.05pm-12.15pm How Reliable is Your Simulator? Analysis on the Limitations of Current LLM-based User Simulators for Conversational Recommendation
Paper oral 12.15pm-12.25pm Controllable and Transparent Textual Latents for Recommender Systems
Paper oral 12.25pm-12.35pm Aligning GPTRec with Beyond-Accuracy Goals with Reinforcement Learning

Our workshop will be held in Leo 3 @ Resorts World Sentosa Convention Centre. [Layout]

Invited Speakers

Julian McAuley

University of California San Diego (UCSD)

Data, methods, and evaluation for knowledge-grounded conversational recommendation systems [Slides]

Abstract
In this talk we'll explore the current landscape of conversational recommendation in light of new developments in Large Language Models. We'll look at ways that current models can potentially be improved by exploring new datasets, methods, and evaluation protocols for conversational recommendation.

Bio
Dr. Julian McAuley has been a professor at the University of California San Diego (UCSD) since 2014, where his lab works on problems in the area of Personalized Machine Learning. Broadly speaking, his lab's research seeks to develop machine learning techniques for settings where differences among individuals explain significant variability in outcomes. A core instance of this problem is recommender systems, a central area of his lab's research, where he develops technologies that underlie algorithms like those used for recommendations on Netflix, Amazon, or Facebook.

Minmin Chen

Google

LLMs for Recommendations: A Hybrid Approach [Slides]

Abstract
While LLMs' reasoning and generalization capabilities can aid higher-level user understanding and longer-term planning for recommendations, directly applying them to industrial recommendation systems has proven challenging. The talk will cover our recent proposal of a hybrid approach that combines LLMs with classic recommendation models, and study its effectiveness on a challenging recommendation task of user exploration.

Bio
Minmin Chen is a principal research scientist at Google DeepMind, leading efforts on building conversational AI systems through RL and personalization. She received her PhD from Washington University in St. Louis. Her main research interests are in reinforcement learning and bandit algorithms and their applications to recommendation and assistive systems. She recently received the best paper award at WSDM 2024 for her work on exploration. She serves as a guest editor for the Journal of Machine Learning and as an area chair for NeurIPS, ICML, ICLR, and RecSys.

Jiaqi Zhai

Meta

Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations [Slides]

Abstract
Recommendation systems enable billions of people to make decisions on a daily basis in online content and e-commerce platforms. The scale of such systems has increased by close to 10,000x in the last few years. Despite these being the largest software systems on the planet (as Jensen Huang remarked in NVIDIA's recent ER, https://youtu.be/watch?v=txOv_pi-_R4&t=2020s), most DLRM models don't scale with compute. Our work, Generative Recommenders, reformulates ranking and retrieval in recommendation systems as sequential transduction tasks, significantly outperforming traditional DLRMs for the first time. Our new architecture, HSTU, outperforms SotA Transformers by up to 15.2x on 8k-length sequences, while our inference algorithm, M-FALCON, boosts inference efficiency by 900x vs traditional DLRMs thanks to a novel design that fully amortizes computational costs via micro-batching. Generative Recommenders and HSTU not only deliver double-digit improvements in online A/B tests at Meta, but also demonstrate scaling laws in industrial-scale RecSys, up to GPT-3/LLaMa-2 compute scale, opening up new research frontiers through the application of scaling laws.

Bio
Jiaqi Zhai is a Distinguished Engineer at Meta. He leads efforts to improve recommendation systems across Facebook and Instagram, with a mission to connect billions of people to informative, entertaining, and insightful content. His team developed multiple state-of-the-art foundational technologies, including the first trillion-parameter scale generative recommenders used in production. Prior to Meta, he spent 6 years at Google and developed the cross-platform user understanding system used in Search, Chrome, and YouTube, Google's first billion-user scale online learning system with minute-level latency, and the first generative model deployed on Google Search. His work has been published in top conferences including KDD, WWW, and SIGMOD.

Rui Li

Meta

Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations [Slides]

Abstract
Rui Li will give this talk jointly with Jiaqi Zhai; please refer to Jiaqi Zhai's talk abstract above.

Bio
Rui Li is a senior staff engineer at Meta working on large-scale recommendation models, systems, and products. Before joining Meta, he worked at Yahoo! Research and on YouTube Recommendation. Rui earned his PhD from UIUC in 2013, working on data mining and machine learning. He is interested in improving users' experiences and business value through practical machine learning in the search and recommendation area, and has published 20+ papers in top conferences including KDD, WWW, VLDB, and SIGIR.

Contributions

  • Diffusion Recommendation with Implicit Sequence Influence
    Yong Niu, Xing Xing, Zhichun Jia, Ruidi Liu, Mindong Xin and Jianfu Cui
  • A Study of Implicit User Unfairness in Large Language Models for Recommendation
    Chen Xu, Wenjie Wang, Yuxin Li, Liang Pang, Jun Xu and Tat-Seng Chua
  • Aligning GPTRec with Beyond-Accuracy Goals with Reinforcement Learning
    Aleksandr Vladimirovich Petrov and Craig Macdonald
  • Controllable and Transparent Textual Latents for Recommender Systems
    Emiliano Penaloza, Haolun Wu, Olivier Gouvert and Laurent Charlin
  • How Reliable is Your Simulator? Analysis on the Limitations of Current LLM-based User Simulators for Conversational Recommendation
    Lixi Zhu, Xiaowen Huang and Jitao Sang
  • Multimodal Conditioned Diffusion Model for Recommendation [Slides]
    Haokai Ma, Yimeng Yang, Lei Meng, Ruobing Xie and Xiangxu Meng
  • Bridging Items and Language: A Transition Paradigm for Large Language Model-Based Recommendation [Slides]
    Xinyu Lin, Wenjie Wang, Yongqi Li, Fuli Feng, See-Kiong Ng and Tat-Seng Chua
  • OutfitGPT: LLMs as Fashion Outfit Generator and Recommender
    Yujuan Ding, Junrong Liao, Wenqi Fan, Yi Bin and Qing Li

Call for Papers

The main objective of this workshop is to encourage pioneering research in the integration of generative models with recommender systems, with a specific focus on five key aspects. First, this workshop will motivate active researchers to utilize generative models for enhancing recommender algorithms and refining user modeling. Second, it promotes utilizing generative models to generate diverse content, i.e., AI-generated content (AIGC), in certain situations, complementing human-generated content to satisfy a broader range of user preferences and information needs. Third, it embraces substantial innovations in user interactions with recommender systems, possibly driven by the boom of large language models (LLMs). Fourth, the workshop will highlight the significance of trust in employing generative models for recommendations, encompassing aspects like content trustworthiness, algorithmic biases, and adherence to evolving ethical and legal standards. Lastly, the workshop will prompt researchers to develop diverse methods for evaluation, including novel metrics and human evaluation approaches.

The workshop provides an invaluable forum for researchers to present the latest advancements in the rapidly evolving field of recommender systems. We welcome original submissions focusing on generative models in recommender systems, covering a range of relevant topics:

    • Leveraging LLMs and other generative models such as diffusion models to improve user modeling and various recommendation tasks, including sequential, cold-start, social, conversational, multimodal, and causal recommendation tasks.
    • Improving generative recommender models (e.g., LLM-based recommenders) from different aspects, such as model architecture and training and inference efficiency.
    • Combining external knowledge from LLMs or other generative models to enhance user and item representation learning.
    • Generative recommendation by harnessing generative AI to drive personalized item creation or editing, particularly in contexts such as advertisement, image, and micro-video.
    • Innovating the user-system interaction paradigm to elicit effective user feedback by leveraging the strong conversational capabilities of LLMs.
    • Real-world applications of generative recommender systems, ranging from finance to streaming platforms and social networks.
    • Trustworthy recommendation with generative models, for example, developing the standards and technologies to improve or inspect the recommendations from the aspects of bias, fairness, privacy, safety, authenticity, legal compliance, and identifiability.
    • Developing generative agents empowered by LLMs, advancing recommendation agents from user simulation and data collection to algorithm enhancement and evaluation.
    • Evaluation of generative recommender systems, including new evaluation metrics, standards, and human evaluation approaches.

Submitted papers must be a single PDF file in the template of ACM WWW 2024. Submissions can be of varying length from 4 to 8 pages, plus unlimited pages for references. The authors may decide on the appropriate length of the paper as no distinction is made between long and short papers. All submitted papers will follow the "double-blind" review policy and undergo the same review process and duration. Expert peer reviewers in the field will assess all papers based on their relevance to the workshop, scientific novelty, and technical quality.

Submission site: https://easychair.org/conferences/?conf=thewebconf2024_workshops (track of "The 2nd Workshop on Recommendation with Generative Models"). Accepted papers have the option to be included in the WWW Companions proceedings.

Important Dates

  • Paper Submission Deadline: February 5, 2024 (11:59 PM, AoE) (extended, February 26, 2024)
  • Acceptance Notification: March 4, 2024
  • Workshop Date: May 13, 2024

Workshop Organizers

Dr. Wenjie Wang

National University of Singapore

 

Mr. Yang Zhang

University of Science and Technology of China

 

Ms. Xinyu Lin

National University of Singapore

 

Dr. Fuli Feng

University of Science and Technology of China

 

Dr. Weiwen Liu

Huawei Noah's Ark Lab, China

 

Dr. Yong Liu

Huawei Noah’s Ark Lab, Singapore

 

Dr. Xiangyu Zhao

City University of Hong Kong

 

Dr. Wayne Xin Zhao

Renmin University of China

 

Dr. Yang Song

Kuaishou Technology, Beijing, China

 

Dr. Xiangnan He

University of Science and Technology of China

 

Contact