The emergence of large language models (LLMs) has ushered in a new era of generative AI, exhibiting broad human-level capabilities and achieving meaningful progress toward Artificial General Intelligence (AGI). To realize AGI that truly benefits each individual, a key next direction for generative AI is to develop personal intelligence: understanding individuals and enhancing personal thinking, planning, and life experiences through personalization. Although the evolution of generative AI toward personal intelligence is still emerging, it has already attracted considerable interest from both research institutions and industry, even becoming a strategic direction for many companies. This workshop aims to provide a platform for discussing innovative ideas that facilitate the transition from generative AI to personal intelligence, including (i) envisioning the future of personal intelligence powered by generative AI, (ii) establishing benchmarks for personal intelligence tasks, (iii) advancing user modeling and personalization techniques for generative models and agents, (iv) enhancing real-world applications, and (v) ensuring privacy and trustworthiness. In doing so, the workshop seeks to deepen understanding, accelerate progress, and support the transformative development of the next generation of generative AI: personal intelligence.
Abstract
We've moved well beyond the old days of building discovery, recommendation, decision support, and other AI tools with traditional ML and pattern-recognition techniques. The future of universal personal assistance for discovery and learning is upon us. How will the multimodal image, video, and audio understanding and reasoning abilities of large foundation models change how we build these systems? I will shed some initial light on this topic by discussing three trends: first, the move to a single multimodal large model with reasoning abilities; second, fundamental research on personalization and user alignment; third, the combination of System 1 and System 2 cognitive abilities into a single universal assistant.
Bio
Dr. Ed H. Chi is VP of Research at Google DeepMind, leading machine learning research teams working on large language models (from LaMDA to the launch of Bard/Gemini) and universal assistant agents. With 39 patents and ~200 research articles, he is also known for his research on user behavior on the web and social media. As the Research Platform Lead, he helped launch Bard/Gemini, a conversational chatbot experiment. His research has also delivered significant improvements to YouTube, News, Ads, and the Google Play Store at Google, with >950 product landings and ~$10.4B in annual revenue since 2013.
Abstract
This talk presents an exploration of the intelligence feedback loop—a two-way street where biological inspiration drives AI development, and advanced AI, in turn, amplifies human cognitive capabilities. We begin with a concise history of AI’s evolution, emphasizing the influence of neuroscience on models ranging from classic symbolic agents to modern multimodal large language models (LLMs) that unite multisensory inputs with symbolic reasoning. For augmented cognition, we will discuss computer-use agents that operate in the complex digital world and automate information search and intent execution for human users. We then focus on new biologically-inspired developments, detailing our work on HippoRAG, a long-term memory framework for LLMs inspired by the hippocampal indexing theory.
Bio
Dr. Yu Su is a Distinguished Assistant Professor at the Ohio State University, where he co-directs the NLP group. He has broad interests in artificial intelligence, with a primary interest in the role of language as a vehicle for reasoning and communication. His group is a driving force on the emerging topic of LLM-based language agents, with seminal contributions such as Mind2Web, SeeAct, HippoRAG, LLM-Planner, and MMMU. He is a 2025 Sloan Fellow and has received multiple paper awards from CVPR and ACL.
Abstract
Recently, generative retrieval-based recommendation systems (GRs) have emerged as a promising paradigm that directly generates candidate items in an autoregressive manner. However, most modern recommender systems still adopt a retrieve-and-rank strategy, in which the generative model functions only as a selector during the retrieval stage. In this talk, we will discuss the key limitations of current recommender systems, OneRec's industrial deployment insights, and the future challenges and opportunities in this evolving field.
Bio
Dr. Shiyao Wang is a researcher in the field of multimodal and recommendation systems. She is currently affiliated with Kuaishou Technology, where she leads innovative research projects that bridge retrieval, ranking, and generative recommendation. Her work includes OneRec, which unifies retrieval and ranking in a generative recommender with iterative preference alignment, and she continues to drive forward emerging topics in multimodal and recommendation systems.
| Activity type | Time (Australia Time) | Title |
|---|---|---|
| Opening | 9:00–9:05 | |
| Keynote | 9:05–9:40 | Keynote-1: The Future of Personalized Universal Assistant by Dr. Ed H. Chi |
| Paper oral | 9:40–9:50 | Paper-1: One Size doesn't Fit All: A Personalized Conversational Tutoring Agent for Mathematics Instruction by Ben Liu |
| Paper oral | 9:50–10:00 | Paper-2: Identifying User Goals From UI Trajectories by Sapir Caduri |
| Paper oral | 10:00–10:10 | Paper-3: Agent-Initiated Interaction in Phone UI Automation by Filippo Galgani |
| Paper oral | 10:10–10:20 | Paper-4: Beyond Retrieval: Generating Narratives in Conversational Recommender Systems by Krishna Sayana |
| Paper oral | 10:20–10:30 | Paper-5: A Comprehensive Security Evaluation Framework for Chinese Large Language Models by Zhenhua Huang |
| Break | 10:30–11:00 | |
| Keynote | 11:00–11:35 | Keynote-2: The Intelligence Feedback Loop: From Biological Inspiration to Augmented Cognition by Dr. Yu Su |
| Paper oral | 11:35–11:45 | Paper-6: Generative Recommendation: Towards Personalized Multimodal Content Generation by Xinyu Lin |
| Keynote | 11:45–12:20 | Keynote-3: Practice on OneRec - Unifying Retrieve and Rank with Generative Recommender and Preference Alignment by Dr. Shiyao Wang |
| Closing | 12:20–12:25 | |
Generative AI, propelled by advancements in large language models, has achieved remarkable milestones. As we look ahead, equipping generative AI with personalization capabilities and progressing toward personal intelligence are essential steps in its evolution toward deeply and meaningfully serving each individual—and ultimately toward AGI. The Generative AI Towards Personal Intelligence workshop aims to bring together researchers, practitioners, and industry experts to explore cutting-edge developments in personal intelligence within generative AI. This workshop will focus on inspiring future perspectives, advancing technological innovations, establishing standards, and enhancing practical applications for generative AI-driven personal intelligence.
Aiming for personal intelligence, this workshop focuses on advancing generative AI to better understand individuals and enhance personal thinking, planning, and life experiences through personalization. Specific topics include envisioning the future of personal intelligence powered by generative AI, establishing benchmarks for personal intelligence tasks, advancing user modeling and personalization techniques for generative models and agents, enhancing real-world applications, and ensuring privacy and trustworthiness.
Submitted papers must be a single PDF file in the ACM WWW 2025 template. Submissions may vary in length from 4 to 8 pages, plus unlimited pages for references; authors may decide on the appropriate length, as no distinction is made between long and short papers. All submitted papers follow a double-blind review policy and undergo the same review process and duration. Expert peer reviewers in the field will assess all papers based on their relevance to the workshop, scientific novelty, and technical quality.
Submission site: https://easychair.org/my/conference?conf=personalintelligence0 (track "The 3rd Workshop on Personal Intelligence with Generative AI"). Accepted papers have the option to be included in the WWW Companion proceedings, and there will be a Best Paper Award for this workshop.
We are pleased to announce a fast-track submission process in conjunction with the main conference; see the submission guidelines above.
Dr. Yang Zhang: zyang1580@gmail.com
Dr. Wenjie Wang: wenjiewang96@gmail.com