Anhui Province Key Laboratory of Big Data Analysis and Application (USTC)
Preference-Adaptive Meta-Learning for Cold-Start Recommendation
Li Wang1, Binbin Jin1, Zhenya Huang1, Hongke Zhao2, Defu Lian1, Qi Liu1 and Enhong Chen1∗
1Anhui Province Key Laboratory of Big Data Analysis and Application, School of Data Science, University of Science and Technology of China (USTC); 2The College of Management and Economics, Tianjin University
Reporter: Li Wang
IJCAI-PRICAI 2021
30th International Joint Conference on Artificial Intelligence, 19th–26th August,
Montreal-themed Virtual Reality
Introduction
Cold-start problem
When encountering new users, collaborative filtering-based approaches fail due to
scarce interactions, leading to a decline in the new users' experience.
Meta-learning for cold-start recommendation
Most existing works formulate each user as a task and aim to learn globally shared
prior knowledge across all users. The learned prior knowledge can then be quickly
adapted to a personalized model based on the sparse interactions of cold-start users.
[Figure: each user is formulated as one learning task]
Introduction
Limitations
Globally shared prior knowledge may be inadequate to discern users' complicated
behaviors and may cause poor generalization.
Solutions: Preference-Specific Meta-Learning
Users with similar preferences should locally share similar prior knowledge so that
it can be easily generalized to these users.
Social relations can provide guidance for recognizing a bundle of users who have
similar preferences and share similar knowledge.
Problem Definition
Given
▶User set: U; Item set: I; Rating set: R.
▶A user u of user set U can be defined as a learning task: 𝓣𝒖 = (𝓕𝒖, 𝓢𝒖, 𝑸𝒖)
𝓕𝒖: the friend set of u.
𝓢𝒖: the support set containing the interacted items.
𝑸𝒖: the query set containing the items to be predicted.
▶Meta-training tasks: 𝓣𝒕𝒓; Meta-testing tasks: 𝓣𝒕𝒆.
Goal
▶Predicting the unknown rating $\hat{r}_{u,i}$ between the user u and the item i.
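The task structure 𝓣𝒖 = (𝓕𝒖, 𝓢𝒖, 𝑸𝒖) from the problem definition can be sketched as a small data class. The names below are illustrative, not taken from the paper's code.

```python
from dataclasses import dataclass
from typing import Dict, Set

# Hypothetical sketch of one meta-learning task T_u = (F_u, S_u, Q_u).
@dataclass
class ColdStartTask:
    user: int
    friends: Set[int]          # F_u: the friend set of user u
    support: Dict[int, float]  # S_u: interacted items, item -> observed rating
    query: Set[int]            # Q_u: items whose ratings we must predict

task = ColdStartTask(
    user=0,
    friends={1, 2},
    support={10: 4.0, 11: 2.0},
    query={12, 13},
)
```

During meta-training the support set drives the local adaptation and the query set drives the global update, as described in the method section.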
Method
Preference-Adaptive Meta-Learning (PAML)
Identifying Implicit Friends over the HIN
In cold-start scenarios, social relations are also sparse, so we identify reliable implicit friends
by defining palindrome paths over the user-item-attribute graph.
Users appearing in the same path share similar tastes, since they
express the same opinion on an item (blue path)
or on an item attribute (red path).
Identifying Implicit Friends over the HIN
Similarity measurement
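A minimal sketch of the idea: count "palindrome" paths of the form u → item (rating r) ← u', i.e. two users expressing the same opinion on the same item, and normalize the counts. The PathSim-style normalization below is an assumption for illustration; the paper's exact similarity measure may differ, and attribute-level paths are omitted for brevity.

```python
from collections import defaultdict
from itertools import combinations

def implicit_friend_similarity(ratings):
    """ratings: list of (user, item, score) triples."""
    # Group users by the (item, score) pair they agree on.
    by_item_score = defaultdict(set)
    for u, i, r in ratings:
        by_item_score[(i, r)].add(u)

    paths = defaultdict(int)       # (u, v) -> shared palindrome paths
    self_paths = defaultdict(int)  # u -> palindrome paths back to u itself
    for users in by_item_score.values():
        for u in users:
            self_paths[u] += 1
        for u, v in combinations(sorted(users), 2):
            paths[(u, v)] += 1

    # PathSim-style normalization (an assumption, not the paper's formula).
    return {pair: 2 * c / (self_paths[pair[0]] + self_paths[pair[1]])
            for pair, c in paths.items()}

sim = implicit_friend_similarity([
    ("a", 1, 5), ("b", 1, 5), ("a", 2, 3), ("b", 2, 3), ("c", 1, 2),
])
# "a" and "b" agree on both items, while "c" shares no opinion with them.
```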
Methodology
Preference-Adaptive Meta-Learning (PAML)
Integrating a user’s interactions and her friends to capture her overall preference.
Coarse-fine preference modeling
Fine level: Distinguish the strength of social relations and combine them at each rating score.
Coarse level: Learn an overall preference by integrating preferences obtained by the fine level.
Fine level preference modeling
1. Split the items of the support set into several groups by rating score.
2. Learn an item-based user preference for each group.
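The two steps above can be sketched as follows. Mean pooling of item embeddings stands in for the paper's learned aggregation, so this is a simplified illustration, not the actual model.

```python
import numpy as np

def fine_level_preferences(support, item_emb):
    """support: dict item -> rating score; item_emb: dict item -> np.ndarray.

    Returns one item-based preference vector per rating score, obtained by
    grouping the support items by score and mean-pooling their embeddings.
    """
    groups = {}
    for item, score in support.items():
        groups.setdefault(score, []).append(item_emb[item])
    return {score: np.mean(vecs, axis=0) for score, vecs in groups.items()}

# Toy embeddings: items 10 and 11 were both rated 5, item 12 was rated 3.
emb = {10: np.array([1.0, 0.0]), 11: np.array([0.0, 1.0]), 12: np.array([1.0, 1.0])}
prefs = fine_level_preferences({10: 5, 11: 5, 12: 3}, emb)
# prefs[5] averages items 10 and 11; prefs[3] is just item 12's embedding.
```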
Fine level preference modeling
1. Get the item-based preference for implicit friends and explicit friends.
2. Adopt the attention mechanism to obtain the two kinds of social-based preference.
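The attention step can be sketched as below: each friend's preference vector is weighted by its softmax-normalized compatibility with the target user's own preference. A plain dot-product score stands in for the paper's learned attention parameters.

```python
import numpy as np

def attend_friends(user_pref, friend_prefs):
    """Aggregate friends' preference vectors with dot-product attention."""
    scores = np.array([user_pref @ f for f in friend_prefs])
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ np.stack(friend_prefs)   # social-based preference

u = np.array([1.0, 0.0])
# The first friend's preference aligns with u, so it receives more weight.
social_pref = attend_friends(u, [np.array([1.0, 0.0]), np.array([0.0, 1.0])])
```

The same routine would be applied once to the explicit friends and once to the implicit friends to produce the two social-based preferences.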
Coarse level preference modeling
Use the attention mechanism to aggregate the
preferences under different rating scores into
an overall preference.
Prediction
Objective function
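A hedged sketch of the prediction and objective: a simple dot-product rating predictor and a mean-squared-error loss over observed ratings. The paper's actual predictor operates on the learned user and item representations; the bilinear form here is only illustrative.

```python
import numpy as np

def predict(user_pref, item_emb):
    # Illustrative rating predictor: inner product of the two vectors.
    return user_pref @ item_emb

def mse_loss(user_pref, items, ratings):
    # Mean squared error between predicted and observed ratings.
    preds = np.array([predict(user_pref, e) for e in items])
    return float(np.mean((preds - np.array(ratings)) ** 2))

loss = mse_loss(np.array([1.0, 1.0]),
                [np.array([2.0, 0.0]), np.array([0.0, 3.0])],
                [2.0, 3.0])
# Both predictions match the observed ratings exactly, so the loss is 0.0.
```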
Methodology
Meta-learning Framework
Part 1: Preference-specific adaptation
Part 2: Local update
Part 3: Global update
Preference-specific adaptation
Preference-specific gates:
Preference-specific knowledge:
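One way to picture the adaptation step: a sigmoid gate computed from the user's overall preference modulates the globally shared initial parameters element-wise, yielding preference-specific knowledge. The gate weight matrix `W` and the exact gating form are assumptions for illustration.

```python
import numpy as np

def preference_specific_init(global_theta, user_pref, W):
    """Modulate shared initial parameters with a preference-conditioned gate."""
    gate = 1.0 / (1.0 + np.exp(-(W @ user_pref)))  # sigmoid gate, values in (0, 1)
    return gate * global_theta                     # preference-specific knowledge

theta = np.array([1.0, -2.0, 0.5])   # globally shared prior knowledge
W = np.zeros((3, 2))                 # zero gate weights, chosen for a checkable result
theta_u = preference_specific_init(theta, np.array([0.3, 0.7]), W)
# With W = 0 every gate value is sigmoid(0) = 0.5, so theta_u = 0.5 * theta.
```

Users with similar overall preferences produce similar gates and therefore start local adaptation from similar initial parameters.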
Meta optimization
Local update:
1. Use the preference-specific initial parameters to make predictions for the support set.
2. Perform the local update to obtain the personalized initial parameters.
Global update:
1. Use the personalized initial parameters to make predictions for the query set.
2. Perform the global update to obtain the prior knowledge.
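The local/global loop above can be sketched as a first-order meta-learning procedure on a scalar toy model r̂ = θ·x. The preference-specific adaptation is omitted here for brevity, and the first-order outer gradient is a simplification of the full meta-gradient.

```python
def grad(theta, data):
    # Gradient of the mean squared error of r_hat = theta * x over (x, y) pairs.
    return sum(2 * (theta * x - y) * x for x, y in data) / len(data)

def meta_train(tasks, theta=0.0, alpha=0.1, beta=0.05, epochs=50):
    """tasks: list of (support, query) pairs, each a list of (x, y) samples."""
    for _ in range(epochs):
        meta_grad = 0.0
        for support, query in tasks:
            theta_u = theta - alpha * grad(theta, support)  # local update
            meta_grad += grad(theta_u, query)               # first-order outer gradient
        theta -= beta * meta_grad / len(tasks)              # global update
    return theta

# Two toy tasks whose optimal parameter is theta = 2; the shared prior
# knowledge converges toward it.
tasks = [([(1.0, 2.0)], [(2.0, 4.0)]), ([(3.0, 6.0)], [(1.0, 2.0)])]
theta = meta_train(tasks)
```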
Experiments
Experimental Setups
Datasets: Douban, Yelp
Metrics: RMSE, nDCG@k
$\mathrm{RMSE} = \sqrt{\frac{1}{|\mathcal{Q}_u|}\sum_{i\in\mathcal{Q}_u}\left(r_{u,i}-\hat{r}_{u,i}\right)^2}$
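The two metrics can be sketched as below. The nDCG variant uses the standard log2 discount with the rating as the gain; the paper's exact gain function is an assumption.

```python
import math

def rmse(true, pred):
    # Root mean squared error over paired rating lists.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(true, pred)) / len(true))

def ndcg_at_k(true_ratings, pred_scores, k):
    # Rank items by predicted score, then compare the discounted cumulative
    # gain of that ranking against the ideal (rating-sorted) ranking.
    order = sorted(range(len(pred_scores)), key=lambda i: -pred_scores[i])[:k]
    dcg = sum(true_ratings[i] / math.log2(rank + 2) for rank, i in enumerate(order))
    ideal = sorted(true_ratings, reverse=True)[:k]
    idcg = sum(r / math.log2(rank + 2) for rank, r in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

r = rmse([4.0, 2.0], [3.0, 3.0])                        # sqrt((1 + 1) / 2) = 1.0
n = ndcg_at_k([5.0, 3.0, 1.0], [0.9, 0.8, 0.1], k=2)    # perfect ranking -> 1.0
```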
Overall performance
Comparison Methods: 1. FM / NeuMF / Wide & Deep; 2. SoReg / DiffNet; 3. MeLU / MetaEmb / MAMO
Scenarios: 1. UC: User-Cold; 2. IC: Item-Cold; 3. UIC: User & Item Cold; 4. NC: Traditional
Ablation study
Variants of PAML
PAML-I: without implicit friends
PAML-E: without explicit friends
PAML-A: without the preference-specific adapter
The results clearly demonstrate that social relations contribute to modeling
users' preferences and thus improve performance.
The results not only support our claim that users with similar preferences should locally share
prior knowledge, but also demonstrate that the preference-specific adapter is effective.
Parameter sensitivity
At the beginning, as the number increases,
nDCG@5 also increases, and it then
reaches a stable level.
The results reach the optimal performance at
one local update; as the number of
local updates increases further, nDCG@5 gradually
decreases.
Thanks!
For more details, please refer to our paper!
Reporter: Li [email protected]