eBay's New Recommendations Model with Three Billion Item Titles

MMS Claudio Masolo

Article originally posted on InfoQ. Visit InfoQ

eBay has developed a new recommendations model based on Natural Language Processing (NLP) techniques, in particular on the BERT model. The new model, called the 'ranker', uses the distance score between title embeddings as a feature, so that the information in product titles is analyzed from a semantic point of view. The ranker allowed eBay to increase Purchases, Clicks, and Ad Revenue by 3.76%, 2.74%, and 4.06% respectively, compared with the previous production model, on the native apps (Android and iOS) and the web platform.

The eBay Promoted Listing Similar Recommendation Model (PLSIM) is composed of three stages: first, retrieve the most relevant Promoted Listing Similar items (the 'recall set'); second, apply the ranker, trained on offline historical data, to rank the recall set according to the likelihood of purchase; third, rerank the listings by incorporating the seller ad rate. The model's features include recommended-item historical data, recommended-item-to-seed-item similarity, product category, country, and user personalization features. The ranker is continuously trained as a Gradient Boosted Tree that orders items by their relative purchase probability, and incorporating deep-learning-based features for similarity detection significantly improves its performance.
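A minimal sketch of what such a ranking stage could look like, assuming placeholder feature names and using LightGBM's LGBMRanker as a stand-in for eBay's gradient boosted tree implementation (the data layout below is illustrative, not eBay's actual schema):

```python
import numpy as np
from lightgbm import LGBMRanker

# Each row is one (seed item, candidate item) pair from the recall set.
# Hypothetical feature columns: item historical popularity, title-embedding
# distance to the seed item, category match, personalization signal.
X_train = np.random.rand(1000, 4)          # placeholder features
y_train = np.random.randint(0, 2, 1000)    # 1 = purchased, 0 = not purchased
groups = [10] * 100                        # 100 seed items, 10 candidates each

ranker = LGBMRanker(objective="lambdarank", n_estimators=200)
ranker.fit(X_train, y_train, group=groups)

# At serving time, score the recall set for one seed item and sort by score.
recall_set_features = np.random.rand(10, 4)
scores = ranker.predict(recall_set_features)
ranked_indices = np.argsort(-scores)
```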

The previous version of the recommendation ranking model evaluated product titles using Term Frequency-Inverse Document Frequency (TF-IDF) as well as Jaccard similarity. This token-based approach has basic limitations: it does not consider the context of a sentence or synonyms. BERT, a deep-learning approach, delivers much stronger language understanding. Since eBay's corpora differ from books and Wikipedia, eBay engineers introduced eBERT, a BERT variant pre-trained on eBay item titles. It was trained with 250 million sentences from Wikipedia and 3 billion eBay item titles in several languages. In offline evaluations, eBERT significantly outperformed out-of-the-box BERT models on a collection of eBay-specific tagging tasks, with an F1 score of 88.9.
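To illustrate the limitation described above, here is a small sketch (not eBay code) contrasting token-level Jaccard similarity, which treats synonyms as unrelated tokens, with cosine similarity between title embeddings; the `encode` function is a hypothetical stand-in for an eBERT-style encoder:

```python
import numpy as np

def jaccard(title_a: str, title_b: str) -> float:
    """Token-level Jaccard similarity: no notion of synonyms or context."""
    a, b = set(title_a.lower().split()), set(title_b.lower().split())
    return len(a & b) / len(a | b)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "sneakers" vs "trainers": Jaccard sees two unrelated tokens, even though
# the titles describe the same kind of item.
print(jaccard("mens running sneakers size 10", "mens running trainers size 10"))

# A semantic encoder (eBERT-like, hypothetical `encode`) would map both titles
# to nearby vectors, so cosine similarity stays high despite the wording change.
# emb_a = encode("mens running sneakers size 10")
# emb_b = encode("mens running trainers size 10")
# print(cosine(emb_a, emb_b))
```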

The eBERT architecture is too heavy for high-throughput inference, in which case recommendations cannot be delivered on time. To address this issue, eBay developed MicroBERT, a smaller version of BERT optimized for CPU inference. MicroBERT uses eBERT as a teacher during training through a knowledge distillation process. In this way, MicroBERT retains 95%-98% of eBERT's quality while cutting inference time roughly threefold.
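A hedged PyTorch sketch of the distillation idea: a small student encoder (standing in for MicroBERT) is trained to reproduce the title embeddings of a frozen teacher (standing in for eBERT). The model class, dimensions, and loss choice are assumptions for illustration, not eBay's actual architecture:

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Placeholder student encoder; the real MicroBERT is a reduced transformer."""
    def __init__(self, vocab_size=30522, dim=256, out_dim=768):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)
        self.proj = nn.Linear(dim, out_dim)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) tensor of title token ids
        return self.proj(self.embed(token_ids))

student = TinyEncoder()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
mse = nn.MSELoss()

def distillation_step(token_ids, teacher_embeddings):
    """One training step: push student embeddings toward the frozen teacher's."""
    student_emb = student(token_ids)
    loss = mse(student_emb, teacher_embeddings)  # teacher outputs are precomputed
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```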

Finally, MicroBERT is fine-tuned using a contrastive loss function called InfoNCE. Item titles are encoded as embedding vectors, and the model is trained to increase the cosine similarity between the embeddings of titles known to be related to each other, while decreasing the cosine similarity of all other pairings of item titles in a mini-batch.
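A minimal sketch of an InfoNCE-style contrastive objective with in-batch negatives, in PyTorch; this follows the standard formulation of the loss rather than eBay's exact implementation:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchor_emb, positive_emb, temperature=0.07):
    """InfoNCE with in-batch negatives.

    anchor_emb, positive_emb: (batch, dim) embeddings of related title pairs.
    Each anchor's positive is the matching row; every other row in the batch
    serves as a negative.
    """
    anchor = F.normalize(anchor_emb, dim=-1)
    positive = F.normalize(positive_emb, dim=-1)
    # Cosine similarity matrix between every anchor and every candidate title.
    logits = anchor @ positive.t() / temperature
    # The correct match for row i is column i.
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)

# Example: 4 related title pairs with 768-dimensional embeddings.
loss = info_nce_loss(torch.randn(4, 768), torch.randn(4, 768))
```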

This new ranking model achieves a 3.5% improvement in purchase rank (the average rank of the sold item), but its complexity makes it hard to compute recommendations in real time. For this reason, the title embeddings are generated by a daily batch job and stored in NuKV (eBay's cloud-native key-value store) with item titles as keys and embeddings as values. With this approach, eBay is able to meet its latency requirements.
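The offline/online split described above, precompute embeddings in batch and look them up at request time, might look roughly like this; `encode_title` and `kv_store` are hypothetical stand-ins for the MicroBERT encoder and NuKV, not actual eBay APIs:

```python
def daily_batch_job(titles, encode_title, kv_store):
    """Offline: embed every active item title and store it keyed by title."""
    for title in titles:
        kv_store.put(key=title, value=encode_title(title))

def score_candidates(seed_title, candidate_titles, kv_store, cosine):
    """Online: fetch precomputed embeddings and compute the distance feature
    used by the ranker, avoiding any transformer inference at request time."""
    seed_emb = kv_store.get(seed_title)
    return {
        title: cosine(seed_emb, kv_store.get(title))
        for title in candidate_titles
    }
```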

