The implementation of paper "Language Representations Can be What Recommenders Need: Findings and Potentials"


Language Representations Can be What Recommenders Need: Findings and Potentials

Leheng Sheng¹, An Zhang¹*, Yi Zhang², Yuxin Chen¹, Xiang Wang², Tat-Seng Chua¹
¹National University of Singapore, ²University of Science and Technology of China
*Corresponding author.

Linear Mapping & Homomorphism

By linearly mapping the representations of language models (LMs), we can obtain a homomorphic item representation space for recommendation. We find that:

  • Homomorphic spaces generated by advanced LMs yield excellent recommendation performance.
  • Semantic similarities in language representations may imply user preference similarities.
  • Complex user preference similarities may be implicitly encoded in language spaces and can be activated by a simple linear mapping matrix.
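A minimal sketch of what such a linear mapping looks like (not the paper's exact code; the dimensions and tensor names below are hypothetical): frozen LM embeddings of item text are projected by a single learnable linear layer into a recommendation space, where inner products serve as similarities.

```python
import torch

# Hypothetical shapes: suppose each item's text (title/description) has been
# encoded by a language model into a 1536-dimensional embedding.
n_items, lm_dim, rec_dim = 1000, 1536, 64
item_lm_emb = torch.randn(n_items, lm_dim)   # frozen LM representations

# One learnable linear map projects the LM space into the recommendation space.
W = torch.nn.Linear(lm_dim, rec_dim, bias=False)

item_rec_emb = W(item_lm_emb)                # (n_items, rec_dim)
scores = item_rec_emb @ item_rec_emb.T       # item-item similarity matrix
```

If the claim above holds, training only `W` on interaction data should already surface preference structure that the LM space encodes implicitly.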

AlphaRec

AlphaRec explores the potential of advanced language representations in combination with leading collaborative filtering (CF) components (i.e., nonlinear projection, graph convolution, and a contrastive learning objective).

"AlphaRec introduces a new language-representation-based CF paradigm with several desirable advantages: it is easy to implement, lightweight, fast to converge, aware of user intention, and shows superior zero-shot recommendation abilities in new domains."
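A rough sketch of how these three CF components could fit together (this is an illustrative toy, not the authors' implementation; all sizes, the interaction matrix, and the two-layer MLP are assumptions): frozen LM item embeddings pass through a nonlinear projection, user and item embeddings are propagated over the interaction graph LightGCN-style, and observed user-item pairs are pulled together with an InfoNCE loss at temperature tau.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_users, n_items, lm_dim, d, tau = 8, 12, 32, 16, 0.15

# Frozen LM item embeddings and learnable user embeddings (toy sizes).
item_lm = torch.randn(n_items, lm_dim)
user_emb = torch.nn.Parameter(torch.randn(n_users, d))

# Nonlinear projection from LM space into the recommendation space.
mlp = torch.nn.Sequential(
    torch.nn.Linear(lm_dim, d), torch.nn.LeakyReLU(), torch.nn.Linear(d, d)
)

# Toy user-item interactions and a symmetrically normalized adjacency
# (LightGCN-style propagation: no transforms between layers).
R = (torch.rand(n_users, n_items) < 0.3).float()
A = torch.zeros(n_users + n_items, n_users + n_items)
A[:n_users, n_users:] = R
A[n_users:, :n_users] = R.T
deg = A.sum(1).clamp(min=1)
A_hat = A / deg.sqrt().unsqueeze(1) / deg.sqrt().unsqueeze(0)

def forward():
    e0 = torch.cat([user_emb, mlp(item_lm)], dim=0)
    layers = [e0]
    for _ in range(2):                  # two propagation layers
        layers.append(A_hat @ layers[-1])
    e = torch.stack(layers).mean(0)     # average over layers
    return e[:n_users], e[n_users:]

# InfoNCE: each observed (user, item) pair is a positive; other in-batch
# items act as negatives (duplicates are tolerated in this toy version).
users, items = R.nonzero(as_tuple=True)
u, i = forward()
u, i = F.normalize(u[users], dim=-1), F.normalize(i[items], dim=-1)
logits = u @ i.T / tau                  # cosine similarity / temperature
loss = F.cross_entropy(logits, torch.arange(len(users)))
```

The run commands below appear to control the analogous knobs in the real training script (`--model_version mlp`, `--n_layers 2`, `--tau`, `--infonce 1`).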

Rapid Convergence & User Intention Capture

❓ How Does AlphaRec Interact with User Intentions?

AlphaRec_demo.mp4

📋 TODO

  • Release user intention capture datasets.
  • Upload zero-shot evaluation scripts.
  • ...

👉 Quick Start for AlphaRec

Dependencies

Our experiments were run on Python 3.9.12 with PyTorch 1.13.1+cu117. 🛎️ Python versions above 3.10 may cause bugs in the `reckit` package.

  1. Set up a virtualenv and install PyTorch manually. Then install the dependencies listed in the requirements.txt file:

```shell
pip install -r requirements.txt
```

  2. Before using the general recommendation models, run the following commands to build the evaluator:

```shell
pushd models/General/base
python setup.py build_ext --inplace
popd
```

Dataset downloading

Please download the datasets from the following anonymous link and put the unzipped dataset in the data folder:

https://drive.google.com/drive/folders/1iGKeTx3vqCtbeVdWkHOwgpbY3-s7QDy_?usp=sharing

Example of the file structure:

```
├── assets/
├── models/
├── data/
    ├── General/
        ├── amazon_movie/ # target datasets
            ├── cf_data/
            ├── item_info/
```

Commands for running

Books

```shell
nohup python main.py --rs_type General --clear_checkpoints --saveID tau_0.15_v3_mlp_ --dataset amazon_book_2014 --model_name AlphaRec --n_layers 2 --patience 20 --cuda 0 --no_wandb --train_norm --pred_norm --neg_sample 256 --lm_model v3 --model_version mlp --tau 0.15 --infonce 1 &>logs/amazon_book_2014_tau_0.15_v3_mlp__2.log &
```

Movies & TV

```shell
nohup python main.py --rs_type General --clear_checkpoints --saveID tau_0.15_v3_mlp_ --dataset amazon_movie --model_name AlphaRec --n_layers 2 --patience 20 --cuda 1 --no_wandb --train_norm --pred_norm --neg_sample 256 --lm_model v3 --model_version mlp --tau 0.15 --infonce 1 &>logs/amazon_movie_tau_0.15_v3_mlp__2.log &
```

Games

```shell
nohup python main.py --rs_type General --clear_checkpoints --saveID tau_0.2_v3_mlp_ --dataset amazon_game --model_name AlphaRec --n_layers 2 --patience 20 --cuda 2 --no_wandb --train_norm --pred_norm --neg_sample 256 --lm_model v3 --model_version mlp --tau 0.2 --infonce 1 &>logs/amazon_game_tau_0.2_v3_mlp__2.log &
```

☎️ Contact

Please contact the first author of this paper for queries.

🌟 Citation

If you find our work helpful, please cite our paper as follows:

```
@article{AlphaRec,
  title={Language Models Encode Collaborative Signals in Recommendation},
  author={Sheng, Leheng and Zhang, An and Zhang, Yi and Chen, Yuxin and Wang, Xiang and Chua, Tat-Seng},
  journal={arXiv preprint arXiv:2407.05441},
  year={2024}
}
```
