H&M Recommendation Competition Top-1 Solution

  1. Overview
  2. Background
  3. Solution
    1. Key Winning ideas
    2. Retrieval Strategies
    3. Feature Engineering
    4. Downsampling
    5. Model
    6. Optimization
  4. Reference

Overview

1.H&M Personalized Fashion Recommendations 1st-place solution.
2.Proves once again that a simple solution (feature engineering + GBDT) is powerful in recommendation systems.

Background

H&M makes recommendations based on data from previous transactions, as well as customer and product metadata. The available metadata spans from simple data, such as garment type and customer age, to text data from product descriptions, to image data from garment images.

Solution

This solution can be divided into three parts:
1.Various retrieval strategies.
2.Feature engineering.
3.GBDT.

Key Winning ideas

Mainly generate recently popular items as candidates, because fashion changes fast and is seasonal. Cold-start items were tried, but they never get ranked into the top 12 due to the lack of interaction information.

User-item interaction information is always the most important part of a recommendation problem, so the created features are almost all interaction features. Image and text features did not help here, but they should be useful for the cold-start problem.

Almost 50% of users have no transactions in the most recent 3 months, so many cumulative (full-history) features were created for them, along with last-week, last-month, and last-season features for active users.

Use 6 weeks of data for training and the last week for validation, retrieving 100 candidates for each user; this setup has a stable CV-LB correlation. Focusing on improving a single LightGBM model on the last week gives CV 0.0430 and LB 0.0362.
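As a rough sketch of this validation setup (assuming the competition's transactions_train.csv with its t_dat date column; the exact split code is not from the write-up):

import pandas as pd

# Weekly time-based split: week 0 is the most recent week.
transactions = pd.read_csv('transactions_train.csv', parse_dates=['t_dat'])
week = (transactions['t_dat'].max() - transactions['t_dat']).dt.days // 7
valid = transactions[week == 0]            # last week for validation
train = transactions[week.between(1, 6)]   # the 6 weeks before it for training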

In the last week, a big-memory GCP server and vast.ai GPU servers were rented to run bigger models for higher accuracy.

Retrieval Strategies

Focus on increasing the hit count among the 100 retrieved candidates, trying various strategies to cover more positive samples; a minimal ItemCF sketch follows the list below.

1.Repurchase-TopN.
2.ItemCF-TopN.
3.Item of same product_code-TopN.
4.Popularity-TopN.
5.Graph embedding-TopN.
6.Logistic regression with categorical information-TopN.
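As an illustration of strategy 2, here is a minimal ItemCF sketch. The co-occurrence scoring below is a simplified assumption, not the solution's exact formula; customer_id and article_id follow the competition data, and transactions is the dataframe from the split sketch above.

from collections import defaultdict

# Build item-to-item similarity from co-occurrence inside each user's history.
user_items = transactions.groupby('customer_id')['article_id'].apply(list)
sim = defaultdict(lambda: defaultdict(float))
for items in user_items:
    for i in items:
        for j in items:
            if i != j:
                sim[i][j] += 1.0 / len(items)  # down-weight heavy buyers

def itemcf_topn(history, n=100):
    # Score candidates by summed similarity to the user's past purchases.
    scores = defaultdict(float)
    for i in history:
        for j, s in sim[i].items():
            scores[j] += s
    return sorted(scores, key=scores.get, reverse=True)[:n]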

Feature Engineering

Features are created based on the retrieval strategies:

1.User-item interaction for repurchase.
2.Collaborative filtering score for itemCF.
3.Similarity for embedding retrieval.
4.Item count for popularity.

Count: user-item and user-category counts over the last week / month / season / same week of last year / all history, plus time-weighted counts (sketched below).
Time: first and last days of transactions.
Mean/Max/Min: aggregations of age, price, and sales_channel_id.
Difference/Ratio: difference between a user's age and the mean age of users who purchased the item; ratio of a user's purchase count of an item to the item's total count.
Similarity: collaborative filtering score of item2item, cosine similarity of item2item (word2vec), cosine similarity of user2item (ProNE).
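For example, the time-weighted count can be built with an exponential decay over purchase recency; the 30-day decay scale here is an assumption, since the write-up does not give the formula.

import numpy as np

# Hypothetical time-weighted user-item count: recent purchases weigh more.
days_ago = (transactions['t_dat'].max() - transactions['t_dat']).dt.days
transactions['w'] = np.exp(-days_ago / 30.0)
tw_count = (transactions.groupby(['customer_id', 'article_id'])['w']
            .sum().rename('time_weighted_count').reset_index())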

Downsampling

Retrieving 100-500 candidates for each user makes the number of negative samples very large, so negative downsampling is a must; we found that keeping 1-2 million negative samples per week gives better performance.

import pandas as pd

# Keep all positives plus a random sample of negatives (DataFrame.append was removed in pandas 2.0).
neg_samples, seed = 1_000_000, 42
pos, neg = train[train['label'] > 0], train[train['label'] == 0]
train = pd.concat([pos, neg.sample(n=neg_samples, random_state=seed)], ignore_index=True)

Model

The best single model is a LightGBM classifier (CV 0.0441, LB 0.0367); the final ensemble of 5 LightGBM classifiers and 7 CatBoost classifiers reaches LB 0.0371. CatBoost's LB score is much worse than LightGBM's, while LightGBM has a very stable CV-LB correlation.
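One plausible way to blend the 12 classifiers is rank-averaging their scores; the actual blending scheme is not specified in the write-up, and lgb_models, cat_models, and X_valid are hypothetical names.

import numpy as np
from scipy.stats import rankdata

# Average each model's score ranks so models on different scales blend fairly.
def rank_blend(models, X):
    return np.mean([rankdata(m.predict_proba(X)[:, 1]) for m in models], axis=0)

blended_scores = rank_blend(lgb_models + cat_models, X_valid)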

Optimization

1.Use Treelite to accelerate LightGBM inference (2x faster); CatBoost GPU inference is 30x faster than LightGBM CPU inference.
2.Transform all categorical features (including two-way combinations) to label encodings, and use reduce_mem_usage to shrink memory.
3.Create a feature store: save intermediate feature files to a directory and final features to Feather, so existing features are not created again.
4.Split all users into 28 groups and run inference simultaneously on multiple servers, as sketched below.
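A sketch of item 4, assuming a candidates dataframe and a fitted model; candidates, model, and feature_cols are hypothetical names, and the grouping scheme itself is an assumption.

import numpy as np

# Split users into 28 chunks; each chunk can be scored on a different server.
user_groups = np.array_split(candidates['customer_id'].unique(), 28)
preds = []
for users in user_groups:
    batch = candidates[candidates['customer_id'].isin(users)]
    preds.append(model.predict_proba(batch[feature_cols])[:, 1])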

Reference

[1]. https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations/discussion/324070

