The rise of generative models has driven significant advances in recommender systems, creating unique opportunities for enhancing personalized recommendation. This workshop serves as a platform for researchers to explore and exchange innovative ideas on integrating generative models into recommender systems. It focuses on five key perspectives: (i) improving recommender algorithms, (ii) generating personalized content, (iii) evolving the user-system interaction paradigm, (iv) enhancing trustworthiness checks, and (v) refining evaluation methodologies for generative recommendations. With generative models advancing rapidly, a growing body of research is emerging in these areas, underscoring the timeliness and importance of this workshop. The related research will introduce innovative technologies to recommender systems and raise fresh challenges for both academia and industry. In the long term, this research direction has the potential to revolutionize traditional recommender paradigms and foster the development of next-generation recommender systems.
Activity type | Time (Singapore Time) | Title |
---|---|---|
Opening | 9.00am | |
Keynote | 9.00am-9.35am | Keynote 1: LLMs for Recommendations: A Hybrid Approach by Dr. Minmin Chen from Google |
Paper oral | 9.35am-9.45am | Multimodal Conditioned Diffusion Model for Recommendation |
Paper oral | 9.45am-9.55am | Diffusion Recommendation with Implicit Sequence Influence |
Keynote | 9.55am-10.30am | Keynote 2: Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations by Dr. Jiaqi Zhai and Dr. Rui Li from Meta |
Coffee break | 10.30am-11.00am | |
Keynote | 11.00am-11.35am | Keynote 3: Data, methods, and evaluation for knowledge-grounded conversational recommendation systems by Prof. Julian McAuley from UCSD |
Paper oral | 11.35am-11.45am | OutfitGPT: LLMs as Fashion Outfit Generator and Recommender |
Paper oral | 11.45am-11.55am | Bridging Items and Language: A Transition Paradigm for Large Language Model-Based Recommendation |
Paper oral | 11.55am-12.05pm | A Study of Implicit User Unfairness in Large Language Models for Recommendation |
Paper oral | 12.05pm-12.15pm | How Reliable is Your Simulator? Analysis on the Limitations of Current LLM-based User Simulators for Conversational Recommendation |
Paper oral | 12.15pm-12.25pm | Controllable and Transparent Textual Latents for Recommender Systems |
Paper oral | 12.25pm-12.35pm | Aligning GPTRec with Beyond-Accuracy Goals with Reinforcement Learning |
Our workshop will be held in Leo 3 @ Resorts World Sentosa Convention Centre.
Abstract
In this talk we'll explore the current landscape of conversational recommendation in light of new developments on Large Language Models. We'll look at ways that current models can potentially be improved by exploring new datasets, methods, and evaluation protocols for conversational recommendation.
Bio
Dr. Julian McAuley has been a professor at the University of California, San Diego (UCSD) since 2014, where his lab works on problems in the area of personalized machine learning. Broadly speaking, his lab's research seeks to develop machine learning techniques for settings where differences among individuals explain significant variability in outcomes. A core instance of this problem is recommender systems, one of the central areas of his lab's research, where he develops technologies underlying algorithms like those used for recommendations on Netflix, Amazon, and Facebook.
Abstract
While the reasoning and generalization capabilities of LLMs can aid higher-level user understanding and longer-term planning for recommendations, directly applying them to industrial recommendation systems has proven challenging. The talk will cover our recent proposal of a hybrid approach that combines LLMs with classic recommendation models, and will study its effectiveness on a challenging recommendation task involving user exploration.
Bio
Minmin Chen is a principal research scientist at Google DeepMind, leading efforts on building conversational AI systems through reinforcement learning and personalization. She received her PhD from Washington University in St. Louis. Her main research interests are reinforcement learning and bandit algorithms and their applications to recommendation and assistive systems. She recently received the Best Paper Award at WSDM 2024 for her work on exploration. She serves as a guest editor for the Journal of Machine Learning and as an area chair for NeurIPS, ICML, ICLR, and RecSys.
Abstract
Recommendation systems enable billions of people to make decisions on a daily basis on online content and e-commerce platforms. The scale of such systems has increased by close to 10,000x in the last few years. Despite these being the largest software systems on the planet (as Jensen Huang remarked in NVIDIA's recent ER: https://youtu.be/watch?v=txOv_pi-_R4&t=2020s), most DLRM models don't scale with compute.
Our work, Generative Recommenders, reformulates ranking and retrieval in recommendation systems as sequential transduction tasks and, for the first time, significantly outperforms traditional DLRMs. Our new architecture, HSTU, outperforms SotA Transformers by up to 15.2x on 8k-length sequences, while our inference algorithm, M-FALCON, boosts inference efficiency by 900x over traditional DLRMs thanks to a novel design that fully amortizes computational costs via micro-batching.
Generative Recommenders and HSTU not only deliver double-digit improvements in online A/B tests at Meta, but also demonstrate scaling laws in industrial-scale RecSys, up to GPT-3/LLaMa-2 compute scale, opening up new research frontiers through the application of scaling laws.
Bio
Jiaqi Zhai is a Distinguished Engineer at Meta. He leads efforts to improve recommendation systems across Facebook and Instagram, with a mission to connect billions of people to informative, entertaining, and insightful content. His team developed multiple state-of-the-art foundational technologies, including the first trillion-parameter-scale generative recommenders used in production. Prior to Meta, he spent six years at Google, where he developed the cross-platform user understanding system used in Search, Chrome, and YouTube; Google's first billion-user-scale online learning system with minute-level latency; and the first generative model deployed on Google Search. His work has been published in top conferences including KDD, WWW, and SIGMOD.
Abstract
Rui Li will give the talk together with Jiaqi Zhai (refer to the talk abstract of Jiaqi Zhai).
Bio
Rui Li is a senior staff engineer at Meta working on large-scale recommendation models, systems, and products. Before joining Meta, he worked at Yahoo! Research and on YouTube Recommendations. Rui earned his PhD from UIUC in 2013, working on data mining and machine learning. He has a long-standing interest in improving user experience and business value through practical machine learning in the search and recommendation area, and has published 20+ papers in top conferences including KDD, WWW, VLDB, and SIGIR.
The main objective of this workshop is to encourage pioneering research on the integration of generative models with recommender systems, with a specific focus on five key aspects. First, the workshop will motivate active researchers to utilize generative models for enhancing recommender algorithms and refining user modeling. Second, it promotes utilizing generative models to generate diverse content, i.e., AI-generated content (AIGC), in certain situations, complementing human-generated content to satisfy a broader range of user preferences and information needs. Third, it embraces substantial innovations in user interactions with recommender systems, possibly driven by the boom of large language models (LLMs). Fourth, the workshop will highlight the significance of trust in employing generative models for recommendations, encompassing aspects such as content trustworthiness, algorithmic biases, and adherence to evolving ethical and legal standards. Lastly, the workshop will prompt researchers to develop diverse evaluation methods, including novel metrics and human evaluation approaches.
The workshop provides an invaluable forum for researchers to present the latest advancements in the rapidly evolving field of recommender systems. We welcome original submissions focusing on generative models in recommender systems, including a range of relevant topics:
Submitted papers must be a single PDF file in the ACM WWW 2024 template. Submissions may range from 4 to 8 pages, plus unlimited pages for references. Authors may decide the appropriate length of the paper, as no distinction is made between long and short papers. All submitted papers will follow the "double-blind" review policy and undergo the same review process and duration. Expert peer reviewers in the field will assess all papers based on their relevance to the workshop, scientific novelty, and technical quality.
Submission site: https://easychair.org/conferences/?conf=thewebconf2024_workshops (track of "The 2nd Workshop on Recommendation with Generative Models"). Accepted papers have the option to be included in the WWW Companions proceedings.
National University of Singapore
University of Science and Technology of China
National University of Singapore
University of Science and Technology of China
Huawei Noah's Ark Lab, China
Huawei Noah’s Ark Lab, Singapore
City University of Hong Kong
Renmin University of China
Kuaishou Technology, Beijing, China
University of Science and Technology of China