Deep and Adversarial Knowledge Transfer in Recommendation

The Hong Kong University of Science and Technology
Department of Computer Science and Engineering


PhD Thesis Defence


Title: "Deep and Adversarial Knowledge Transfer in Recommendation"

By

Mr. Guangneng HU


Abstract

Recommendation is a basic service that filters information and guides users 
through a large pool of items across various online systems, improving both 
user satisfaction and corporate revenue. It works by learning user 
preferences on items from their historical interactions. Recent deep 
learning techniques have advanced recommender models with the ability to 
learn representations of users and items from the interaction data. In 
real-world scenarios, however, interactions are often sparse in the target 
domain of interest, which undermines deep models that depend on 
large-scale labeled data. Transfer learning addresses this data sparsity 
by transferring knowledge from auxiliary source domains.

A privacy concern arises when the source domain shares its data with the 
target domain. This issue is aggravated by the ever-increasing abuse of 
personal data, and it cannot be ignored given the enforcement of data 
protection regulations. Existing research focuses on improving 
recommendation performance while ignoring the privacy leakage issue.

In this thesis, we investigate deep knowledge transfer in recommendation, 
of which the core idea is to answer what to transfer between domains. 
Specifically, we propose three models following different transfer 
learning approaches, i.e., deep model-based transfer (DMT), deep 
instance-based transfer (DIT), and deep feature-based transfer (DFT). 
Firstly, in DMT, we transfer parameters in the lower layers and learn the 
source and target networks in a multi-task way. The CoNet model is 
introduced to learn dual knowledge transfer across domains and is capable 
of selecting what knowledge to transfer via a sparsity-inducing 
regularization enforced on the transfer matrices. Secondly, in DIT, we 
transfer selected source-domain instances by adaptively re-weighting them 
for use in the target domain. The TransNet model is introduced to learn an 
adaptive transfer vector that captures relations between the target item 
and source items. Next, in DFT, we transfer a “good” feature 
representation that captures what is invariant across domains while 
reducing the difference between them. The TrNews model is introduced to 
transfer heterogeneous user interests across domains and to transfer item 
representations selectively. The proposed transfer models can be applied 
to relational data (e.g., clicks), content data (e.g., news), and their 
combination (hybrid data).
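
As a rough illustration of the model-based transfer idea, the sketch below 
(assumed code, not the thesis implementation) shows a CoNet-style 
cross-connection unit in PyTorch: each domain has its own hidden layer, 
and transfer matrices carry activations across domains, with an L1 penalty 
standing in for the sparsity-induced regularization. The names CrossUnit, 
H_s2t, and H_t2s are illustrative assumptions.

    import torch
    import torch.nn as nn

    class CrossUnit(nn.Module):
        """One hidden layer per domain plus transfer matrices between them."""
        def __init__(self, dim_in, dim_out):
            super().__init__()
            self.target_fc = nn.Linear(dim_in, dim_out)
            self.source_fc = nn.Linear(dim_in, dim_out)
            # Transfer matrices carrying knowledge across domains.
            self.H_s2t = nn.Linear(dim_in, dim_out, bias=False)
            self.H_t2s = nn.Linear(dim_in, dim_out, bias=False)

        def forward(self, h_target, h_source):
            new_t = torch.relu(self.target_fc(h_target) + self.H_s2t(h_source))
            new_s = torch.relu(self.source_fc(h_source) + self.H_t2s(h_target))
            return new_t, new_s

        def sparsity_penalty(self):
            # L1 penalty on the transfer matrices selects what to transfer.
            return self.H_s2t.weight.abs().sum() + self.H_t2s.weight.abs().sum()

In joint training, this penalty (scaled by a small coefficient) would be 
added to the sum of the source-task and target-task losses, so that only a 
few entries of the transfer matrices remain active.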

Finally, we investigate adversarial knowledge transfer in recommendation 
to protect private attributes in the source domain. Specifically, we 
propose the PrivNet model, which improves the target performance while 
protecting the source privacy; its core is to learn a privacy-aware neural 
representation. Through extensive experiments on real-world datasets, we 
validate the research on adversarial knowledge transfer. The thesis also 
describes the research frontier and points out promising directions for 
future investigation.
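
As a hedged sketch of how a privacy-aware representation can be learned 
adversarially (an illustration under assumptions, not PrivNet itself), the 
PyTorch snippet below pairs a recommendation head with an adversary that 
tries to recover a private source attribute through a gradient reversal 
layer; all module names and layer sizes are hypothetical.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            # Reverse the gradient flowing back into the encoder.
            return -ctx.lam * grad_output, None

    encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # shared representation
    recommender = nn.Linear(32, 1)                         # target-task head
    adversary = nn.Linear(32, 2)                           # private-attribute head

    def losses(x, y_target, y_private, lam=1.0):
        z = encoder(x)
        rec_loss = nn.functional.binary_cross_entropy_with_logits(
            recommender(z).squeeze(-1), y_target)
        adv_loss = nn.functional.cross_entropy(
            adversary(GradReverse.apply(z, lam)), y_private)
        # Minimizing the sum trains the adversary to predict the private
        # attribute while the reversed gradient pushes the encoder to hide it.
        return rec_loss + adv_loss

The reversed gradient lets a single optimizer train both heads: the 
adversary gets better at inferring the attribute, while the encoder is 
pushed to remove that information yet keep what the recommender needs.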


Date:			Wednesday, 2 June 2021

Time:			2:00pm - 4:00pm

Zoom Meeting: 		https://hkust.zoom.us/j/5394566475

Chairperson:		Prof. Yingying LI (ISOM)

Committee Members:	Prof. Qiang YANG (Supervisor)
 			Prof. Lei CHEN (Supervisor)
 			Prof. Kai CHEN
 			Prof. Yangqiu SONG
 			Prof. Yang WANG (IEDA)
 			Prof. Dacheng TAO (University of Sydney)


**** ALL are Welcome ****