Introducing TensorFlow Recommenders
September 23, 2020
Posted by Maciej Kula and James Chen, Google Brain

From recommending movies or restaurants to coordinating fashion accessories and highlighting blog posts and news articles, recommender systems are an important application of machine learning, surfacing new discoveries and helping users find what they love.

At Google, we have spent the last several years exploring new deep learning techniques to provide better recommendations through multi-task learning, reinforcement learning, better user representations and fairness objectives. These and other advancements have allowed us to greatly improve our recommendations.

Today, we're excited to introduce TensorFlow Recommenders (TFRS), an open-source TensorFlow package that makes building, evaluating, and serving sophisticated recommender models easy.

Built with TensorFlow 2.x, TFRS makes it possible to:

  • Build and evaluate flexible recommendation retrieval models;
  • Freely incorporate item, user, and context information into recommendation models;
  • Train multi-task models that jointly optimize multiple recommendation objectives.

TFRS is based on TensorFlow 2.x and Keras, making it instantly familiar and user-friendly. It is modular by design (so that you can easily customize individual layers and metrics), but still forms a cohesive whole (so that the individual components work well together). Throughout the design of TFRS, we've emphasized flexibility and ease-of-use: default settings should be sensible; common tasks should be intuitive and straightforward to implement; more complex or custom recommendation tasks should be possible.

TensorFlow Recommenders is open-source and available on GitHub. Our goal is to make it an evolving platform, flexible enough for conducting academic research and highly scalable for building web-scale recommender systems. We also plan to expand its capabilities for multi-task learning, feature cross modeling, self-supervised learning, and state-of-the-art efficient approximate nearest neighbours computation.

Example: building a movie recommender

To get a feel for how to use TensorFlow Recommenders, let’s start with a simple example. First, install TFRS using pip:

!pip install tensorflow_recommenders

We can then use the MovieLens dataset to train a simple model for movie recommendations. This dataset contains information on which movies users watched and what ratings they gave to those movies.

We will use this dataset to build a model to predict which movies a user watched, and which they didn't. A common and effective pattern for this sort of task is the so-called two-tower model: a neural network with two sub-models that learn representations for queries and candidates separately. The score of a given query-candidate pair is simply the dot product of the outputs of these two towers.
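
Concretely, if both towers output embeddings of the same dimension, the affinity score is just a dot product. A minimal sketch, using illustrative placeholder values:

import tensorflow as tf

# Illustrative tower outputs: one query embedding and one candidate embedding.
query_embedding = tf.constant([[0.1, 0.3, 0.5]])      # shape: [1, embedding_dim]
candidate_embedding = tf.constant([[0.2, 0.1, 0.4]])  # shape: [1, embedding_dim]

# The query-candidate score is the dot product of the two tower outputs.
score = tf.reduce_sum(query_embedding * candidate_embedding, axis=1)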

This model architecture is quite flexible. The inputs can be anything: user ids, search queries, or timestamps on the query side; movie titles, descriptions, synopses, lists of starring actors on the candidate side.
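
As a hypothetical illustration, a richer query tower might embed a user id together with another categorical feature and concatenate the results; the feature names and vocabulary sizes below are illustrative assumptions, not part of this example:

# Sketch of a multi-feature query tower (hypothetical feature names and sizes).
user_id_input = tf.keras.Input(shape=(), dtype=tf.string, name="user_id")
occupation_input = tf.keras.Input(shape=(), dtype=tf.string, name="user_occupation_text")

user_id_embedding = tf.keras.Sequential([
    tf.keras.layers.experimental.preprocessing.StringLookup(max_tokens=1000),
    tf.keras.layers.Embedding(1000, 32),
])(user_id_input)

occupation_embedding = tf.keras.Sequential([
    tf.keras.layers.experimental.preprocessing.StringLookup(max_tokens=30),
    tf.keras.layers.Embedding(30, 32),
])(occupation_input)

query_tower = tf.keras.Model(
    inputs=[user_id_input, occupation_input],
    outputs=tf.keras.layers.Concatenate()(
        [user_id_embedding, occupation_embedding]))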

In this example, we're going to keep things simple and stick to user ids for the query tower, and movie titles for the candidate tower.

To start with, let's prepare our data. The data is available in TensorFlow Datasets.

import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs

# Ratings data.
ratings = tfds.load("movie_lens/100k-ratings", split="train")
# Features of all the available movies.
movies = tfds.load("movie_lens/100k-movies", split="train")

Out of all the features available in the dataset, the most useful are user ids and movie titles. While TFRS can use arbitrarily rich features, let's only use those to keep things simple.

ratings = ratings.map(lambda x: {
    "movie_title": x["movie_title"],
    "user_id": x["user_id"],
})
movies = movies.map(lambda x: x["movie_title"])
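
To verify the preprocessing, we can inspect a single transformed example (the exact values depend on the split):

for example in ratings.take(1):
    print(example)  # a dict with "movie_title" and "user_id" string tensors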

When using only user ids and movie titles, our simple two-tower model is very similar to a typical matrix factorization model. To build it, we're going to need the following:

  • A user tower that turns user ids into user embeddings (high-dimensional vector representations).
  • A movie tower that turns movie titles into movie embeddings.
  • A loss that maximizes the predicted user-movie affinity for watches we observed, and minimizes it for watches that did not happen.

TFRS and Keras provide a lot of the building blocks to make this happen. We can start by creating a model class. In the __init__ method, we set up some hyper-parameters as well as the primary components of the model.

class TwoTowerMovielensModel(tfrs.Model):
  """MovieLens prediction model."""
 
  def __init__(self):
    # The `__init__` method sets up the model architecture.
    super().__init__()
 
    # How large the representation vectors are for inputs: larger vectors make
    # for a more expressive model but may cause over-fitting.
    embedding_dim = 32
    # Vocabulary sizes: MovieLens 100k contains 943 users and 1,682 movies,
    # so we round up to leave room for unknown tokens.
    num_unique_users = 1000
    num_unique_movies = 1700

    # Batch size for the candidate dataset used by the evaluation metrics.
    eval_batch_size = 128

The first major component is the user model: a set of layers that describe how raw user features should be transformed into numerical user representations. Here, we use the Keras preprocessing layers to turn user ids into integer indices, then map those into learned embedding vectors:

    # Set up user and movie representations.
    self.user_model = tf.keras.Sequential([
      # We first turn the raw user ids into contiguous integers by looking them
      # up in a vocabulary.
      tf.keras.layers.experimental.preprocessing.StringLookup(
          max_tokens=num_unique_users),
      # We then map the result into embedding vectors.
      tf.keras.layers.Embedding(num_unique_users, embedding_dim)
    ])
The movie model looks similar, translating movie titles into embeddings:

    self.movie_model = tf.keras.Sequential([
      tf.keras.layers.experimental.preprocessing.StringLookup(
          max_tokens=num_unique_movies),
      tf.keras.layers.Embedding(num_unique_movies, embedding_dim)
    ])
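
Note that the StringLookup layers above are constructed without a vocabulary. One way to supply it, shown here as an assumption rather than part of the original recipe, is to adapt the layers to the data once the model has been instantiated (StringLookup also accepts an explicit vocabulary argument):

# Hypothetical vocabulary-learning step, run after the model is built.
model.user_model.layers[0].adapt(ratings.map(lambda x: x["user_id"]).batch(1024))
model.movie_model.layers[0].adapt(movies.batch(1024))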

Once we have both user and movie models we need to define our objective and its evaluation metrics. In TFRS, we can do this via the Retrieval task (using the in-batch softmax loss):

    # The `Task` object has two purposes: (1) it computes the loss and (2)
    # keeps track of metrics.
    self.task = tfrs.tasks.Retrieval(
        # In this case, our metrics are top-k metrics: given a user and a known
        # watched movie, how highly would the model rank the true movie out of
        # all possible movies?
        metrics=tfrs.metrics.FactorizedTopK(
            candidates=movies.batch(eval_batch_size).map(self.movie_model)
        )
    )

We use the compute_loss method to describe how the model should be trained.

def compute_loss(self, features, training=False):
    # The `compute_loss` method determines how loss is computed.
 
    # Compute user and item embeddings.
    user_embeddings = self.user_model(features["user_id"])
    movie_embeddings = self.movie_model(features["movie_title"])
 
    # Pass them into the task to get the resulting loss. The lower the loss is, the
    # better the model is at telling apart true watches from watches that did
    # not happen in the training data.
    return self.task(user_embeddings, movie_embeddings)
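
For intuition: the in-batch softmax loss used by the Retrieval task treats the other movies in each training batch as negatives for a given user, so the true user-movie pairs sit on the diagonal of the batch score matrix. A rough sketch of the idea (not the library's exact implementation), using the embeddings computed above:

scores = tf.matmul(user_embeddings, movie_embeddings, transpose_b=True)  # [batch, batch]
labels = tf.eye(tf.shape(scores)[0])  # the i-th user in the batch watched the i-th movie
loss = tf.keras.losses.categorical_crossentropy(labels, scores, from_logits=True)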
We can fit this model using standard Keras fit calls:

model = TwoTowerMovielensModel()
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
 
model.fit(ratings.batch(4096), verbose=False)
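
Once the model is trained, the factorized top-k metrics defined in the task can be computed with a standard Keras evaluate call (a usage sketch; in practice you would evaluate on a held-out split rather than the training data):

model.evaluate(ratings.batch(4096), return_dict=True)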

To sanity-check the model’s recommendations, we can use the TFRS BruteForce layer. The BruteForce layer is indexed with precomputed representations of candidates, and allows us to retrieve top movies in response to a query by computing the query-candidate score for all possible candidates:

index = tfrs.layers.ann.BruteForce(model.user_model)
index.index(movies.batch(100).map(model.movie_model), movies)
 
# Get recommendations.
_, titles = index(tf.constant(["42"]))
print(f"Recommendations for user 42: {titles[0, :3]}")

Of course, the BruteForce layer is only suitable for very small datasets. See our full tutorial for an example of using TFRS with Annoy, an approximate nearest neighbours library.
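
To give a flavour of what that looks like, here is a hypothetical sketch of serving with Annoy's Python API; the movie_embeddings_np helper and the tree count are our assumptions rather than the tutorial's code:

import numpy as np
from annoy import AnnoyIndex

# Precompute all candidate embeddings as a NumPy array (assumed helper).
movie_embeddings_np = np.concatenate(
    [batch.numpy() for batch in movies.batch(100).map(model.movie_model)])

# Build an approximate dot-product index over the candidate embeddings.
annoy_index = AnnoyIndex(movie_embeddings_np.shape[1], "dot")
for i, embedding in enumerate(movie_embeddings_np):
    annoy_index.add_item(i, embedding)
annoy_index.build(10)  # 10 trees; more trees trade build time for accuracy

# Retrieve approximate top-3 candidate indices for user 42.
query_embedding = model.user_model(tf.constant(["42"])).numpy()[0]
top_indices = annoy_index.get_nns_by_vector(query_embedding, 3)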

We hope this gave you a taste of what TensorFlow Recommenders offers. To learn more, check out our tutorials or the API reference. If you'd like to get involved in shaping the future of TensorFlow recommender systems, consider contributing! We will also shortly be announcing a TensorFlow Recommendations Special Interest Group, welcoming collaboration and contributions on topics such as embedding learning and distributed training and serving. Stay tuned!

Acknowledgments

TensorFlow Recommenders is the result of a joint effort of many folks at Google and beyond. We'd like to thank Tiansheng Yao, Xinyang Yi, Ji Yang for their core contributions to the library, and Lichan Hong and Ed Chi for their leadership and guidance. We are also grateful to Zhe Zhao, Derek Cheng, Sagar Jain, Alexandre Passos, Francois Chollet, Sandeep Gupta, Eric Ni, and many, many others for their suggestions and support of this project.