Introducing TF-Coder, a tool that writes tricky TensorFlow expressions for you!
August 26, 2020
Posted by Kensen Shi, Google Research

When manipulating tensors, one must keep track of multiple dimensions, tensor shape and DType compatibility, and of course mathematical correctness. Additionally, there are hundreds of TensorFlow operations, and finding the right ones to use can be a challenge.

Instead of coding your tensor manipulation directly, what if you could just demonstrate it through an illustrative example and get the corresponding code automatically? TensorFlow Coder (TF-Coder) makes this possible!

TF-Coder is a program synthesis tool that helps you write TensorFlow code. First, the tool asks for an input-output example of the desired tensor transformation. Then, it runs a combinatorial search to find TensorFlow expressions that perform that transformation. TF-Coder’s output is real TensorFlow code that you can include in your projects.
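To give a flavor of how such a search works, here is a toy sketch of enumerative synthesis; this illustrates only the general idea and is not TF-Coder’s actual algorithm:
# Toy sketch of enumerative synthesis (an illustration of the idea,
# NOT TF-Coder's actual algorithm): enumerate small expressions over
# the inputs and keep any whose output matches the example.
import itertools
import tensorflow as tf

inputs = {'rows': tf.constant([10, 20, 30]),
          'cols': tf.constant([1, 2, 3, 4])}
target = tf.constant([[11, 12, 13, 14],
                      [21, 22, 23, 24],
                      [31, 32, 33, 34]])

# Candidate values: the raw inputs plus each input with an extra dimension.
candidates = dict(inputs)
for name, tensor in inputs.items():
    candidates['tf.expand_dims(%s, 1)' % name] = tf.expand_dims(tensor, 1)

# Try every ordered pair of candidates under tf.add, pruning shape errors.
for (expr1, t1), (expr2, t2) in itertools.product(candidates.items(), repeat=2):
    try:
        result = tf.add(t1, t2)
    except (tf.errors.InvalidArgumentError, ValueError):
        continue  # incompatible shapes; skip this combination
    if result.shape == target.shape and bool(tf.reduce_all(result == target)):
        print('tf.add(%s, %s)' % (expr1, expr2))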

The following one-minute video introduces TF-Coder, and this Colab notebook allows you to use the TF-Coder tool for your own tensor manipulation problems.



In this blog post, we’ll illustrate various scenarios where TF-Coder can help you write TensorFlow code.

Programming in TensorFlow by example

Suppose you want to "add" an M-element vector with an N-element vector in a broadcasted way to produce an M x N matrix containing all pairwise sums. Instead of digging through TensorFlow documentation to figure out how to do this, you can provide an input-output example (using M = 3 and N = 4):

Input tensors, as a dict mapping input variable names to example tensor values:
inputs = {
    'rows': [10, 20, 30],
    'cols': [1, 2, 3, 4],
}
The desired output tensor, corresponding to the provided input tensors:
output = [[11, 12, 13, 14],
          [21, 22, 23, 24],
          [31, 32, 33, 34]]
Given this information (already entered into the TF-Coder Colab by default), the TF-Coder tool will find the appropriate TensorFlow code automatically in a fraction of a second:
tf.add(cols, tf.expand_dims(rows, 1))
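If you want to sanity-check the synthesized expression yourself, a quick verification (assuming TensorFlow 2.x with eager execution) looks like this:
# Quick check of the synthesized expression (assumes TensorFlow 2.x):
import tensorflow as tf

rows = tf.constant([10, 20, 30])
cols = tf.constant([1, 2, 3, 4])

# tf.expand_dims(rows, 1) reshapes rows from shape (3,) to (3, 1), so
# adding it to cols (shape (4,)) broadcasts into a (3, 4) matrix.
print(tf.add(cols, tf.expand_dims(rows, 1)))
# [[11 12 13 14]
#  [21 22 23 24]
#  [31 32 33 34]]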
The above problem was deliberately simple, just to illustrate the idea of programming by example. TF-Coder can be useful for harder problems as well, as we’ll see below.

TF-Coder helps you find the right function to use

Let’s suppose you are working with a numerical feature such as the price of an item. The prices in your dataset have a wide range, e.g., from under $10 to over $1000. If these prices are used directly as features, your model may overfit to specific prices in the training data, and it may also have difficulty with outlier prices during evaluation.

To deal with these issues, you may want to use bucketing to transform the numerical prices into categorical features. For example, using bucket boundaries of [10, 50, 100, 1000] means that prices under $10 fall into bucket 0, prices from $10 up to (but not including) $50 fall into bucket 1, and so on.

After choosing bucket boundaries, how do you actually map the numerical prices to the bucket indices using TensorFlow? For example, given the following bucket boundaries and item prices:
# Input tensors
boundaries = [10, 50, 100, 1000]
prices = [15, 3, 50, 90, 100, 1001]
you want to compute the bucket number for each item:
# Output tensor
bucketed_prices = [1, 0, 2, 2, 3, 4]
Although TensorFlow comes with various bucketing operations, it may be tricky to figure out which specific operation does this exact kind of bucketing. Since TF-Coder can identify hundreds of TensorFlow operations by behavior, you can look up the correct operation by providing an input-output example:
# Input-output example
inputs = {
    'boundaries': [10, 50, 100, 1000],
    'prices': [15, 3, 50, 90, 100, 1001],
}
output = [1, 0, 2, 2, 3, 4]
Within seconds, TF-Coder outputs the following solution:
tf.searchsorted(boundaries, prices, side='right')
This gives us a useful hint, and the documentation for tf.searchsorted confirms that this code indeed performs the bucketing as desired.
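As a quick sanity check (again assuming TensorFlow 2.x), running the expression on the example input reproduces the expected buckets:
import tensorflow as tf

boundaries = tf.constant([10, 50, 100, 1000])
prices = tf.constant([15, 3, 50, 90, 100, 1001])

# side='right' places a price equal to a boundary into the bucket
# above it, e.g. a price of exactly $50 lands in bucket 2.
print(tf.searchsorted(boundaries, prices, side='right'))
# [1 0 2 2 3 4]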

TF-Coder helps you combine functions in clever ways

Now let’s consider another problem: compute a 0-1 tensor that identifies the maximum element of each row of the input tensor.
# Input tensor
scores = [[0.7, 0.2, 0.1],
          [0.4, 0.5, 0.1],
          [0.4, 0.4, 0.2],
          [0.3, 0.4, 0.3],
          [0.0, 0.0, 1.0]]
 
# Output tensor
top_scores = [[1, 0, 0],
              [0, 1, 0],
              [1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]]
Note that if the same largest element appears multiple times within a row, such as in the third row of scores, then only the first such largest element should be marked, so that every row of top_scores has exactly one entry of 1.

Unlike in the last problem, there is no single TensorFlow function that performs this computation. If you search the documentation for “max”, you may find that tf.reduce_max, tf.argmax, and tf.maximum are relevant, but which one should you use? tf.reduce_max produces [0.7, 0.5, 0.4, 0.4, 1.0], tf.argmax produces [0, 1, 0, 1, 2], and tf.maximum isn’t right because it takes two arguments. None of these look close to our desired output.
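To make the mismatch concrete, here is what the first two candidates produce on the example (a quick check assuming TensorFlow 2.x):
import tensorflow as tf

scores = tf.constant([[0.7, 0.2, 0.1],
                      [0.4, 0.5, 0.1],
                      [0.4, 0.4, 0.2],
                      [0.3, 0.4, 0.3],
                      [0.0, 0.0, 1.0]])

print(tf.reduce_max(scores, axis=1))  # [0.7 0.5 0.4 0.4 1.0], the max values, not a 0-1 mask
print(tf.argmax(scores, axis=1))      # [0 1 0 1 2], the max indices, not a 0-1 mask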

TF-Coder can help solve tricky problems like this. You can write the problem in the form of an input-output example:
# Input-output example
inputs = {
    'scores': [[0.7, 0.2, 0.1],
               [0.4, 0.5, 0.1],
               [0.4, 0.4, 0.2],
               [0.3, 0.4, 0.3],
               [0.0, 0.0, 1.0]],
}
output = [[1, 0, 0],
          [0, 1, 0],
          [1, 0, 0],
          [0, 1, 0],
          [0, 0, 1]]
TF-Coder uses a combination of tf.one_hot and tf.argmax in a short solution to this problem:
tf.cast(tf.one_hot(tf.argmax(scores, axis=1), 3), tf.int32)
Through a detailed search over combinations of TensorFlow operations, TF-Coder often finds elegant solutions like this, which may simplify and speed up your TensorFlow programs.
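One caveat worth noting: the literal 3 in the solution is the number of columns taken from the example, so it would need updating for tensors of a different width. A hedged sketch of a generalized version (the num_cols name is ours, not TF-Coder’s output):
import tensorflow as tf

scores = tf.constant([[0.7, 0.2, 0.1],
                      [0.4, 0.5, 0.1],
                      [0.4, 0.4, 0.2],
                      [0.3, 0.4, 0.3],
                      [0.0, 0.0, 1.0]])

# tf.argmax returns the index of the *first* maximum in each row,
# which is exactly the tie-breaking behavior this problem requires.
num_cols = tf.shape(scores)[1]  # replaces the hardcoded depth of 3
print(tf.cast(tf.one_hot(tf.argmax(scores, axis=1), num_cols), tf.int32))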

TF-Coder helps you write correct code with less debugging

Consider normalizing lists of integer counts into probability distributions by dividing each row by the sum of that row. For instance:
# Input tensor
counts = [[0, 1, 0, 0],
          [0, 1, 1, 0],
          [1, 1, 1, 1]]
 
# Output tensor
normalized = [[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.25, 0.25, 0.25, 0.25]]
Even if you know relevant functions to use (tf.reduce_sum followed by tf.divide), writing the correct code is still nontrivial. A first attempt may look like this:
# First attempt
normalized = tf.divide(counts, tf.reduce_sum(counts, axis=1))
Is this right? There are many potential pitfalls to think about; a quick check after the list makes some of them concrete:
  • Is the summation axis correct, or should it be axis=0?
  • Are the shapes of counts and tf.reduce_sum(counts, axis=1) compatible for division, or do you need to reshape or transpose either of these?
  • counts and tf.reduce_sum(counts, axis=1) are both tf.int32 tensors. Can tf.int32 tensors be divided, or do you need to cast them to a float DType first?
  • Are the two arguments in the correct order, or should they be swapped?
  • Does the output have type tf.int32, tf.float32, or something else?
  • Is there a simpler or better way that was not considered?
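A quick experiment (assuming TensorFlow 2.x eager execution) shows that the first attempt fails outright, because a (3, 4) tensor cannot broadcast against the (3,) row sums:
import tensorflow as tf

counts = tf.constant([[0, 1, 0, 0],
                      [0, 1, 1, 0],
                      [1, 1, 1, 1]])

row_sums = tf.reduce_sum(counts, axis=1)  # shape (3,), dtype int32
try:
    tf.divide(counts, row_sums)  # (3, 4) vs. (3,) is not broadcastable
except tf.errors.InvalidArgumentError as e:
    print('Shape error:', e)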
You can give this task to TF-Coder with the following input-output example:
# Input-output example
inputs = {
    'counts': [[0, 1, 0, 0],
               [0, 1, 1, 0],
               [1, 1, 1, 1]],
}
output = [[0.0, 1.0, 0.0, 0.0],
          [0.0, 0.5, 0.5, 0.0],
          [0.25, 0.25, 0.25, 0.25]]
TF-Coder’s solution is:
tf.cast(tf.divide(counts, tf.expand_dims(tf.reduce_sum(counts, axis=1), axis=1)), tf.float32)
Using TF-Coder to solve this problem reduces the mental burden of the exercise. When TF-Coder produces the solution above, it is guaranteed that the code correctly produces the example output when run on the example input. TF-Coder’s solution will also avoid any unnecessary steps. Thus, you can quickly deduce the answers to most of the questions above: an extra tf.expand_dims step is needed to make the shapes compatible for division, and the result of tf.divide must be cast to tf.float32 (in fact, tf.divide returns a tf.float64 tensor when dividing two tf.int32 tensors). In this way, TF-Coder helps you write simple and correct code without painful debugging cycles.
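As before, running the solution on the example confirms both the values and the DType (a quick check, assuming TensorFlow 2.x):
import tensorflow as tf

counts = tf.constant([[0, 1, 0, 0],
                      [0, 1, 1, 0],
                      [1, 1, 1, 1]])

# expand_dims turns the (3,) row sums into shape (3, 1), so the
# division broadcasts across each row; the final cast fixes the DType.
row_sums = tf.expand_dims(tf.reduce_sum(counts, axis=1), axis=1)
normalized = tf.cast(tf.divide(counts, row_sums), tf.float32)
print(normalized.dtype)  # float32 (tf.divide on two tf.int32 tensors yields float64)
print(normalized)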

Caveats

There are limitations to TF-Coder. It can currently find solutions involving 3-4 operations within a minute of searching, but solutions involving 6 or more operations are too complex to find in a reasonable amount of time. Furthermore, TF-Coder currently does not support complex or string tensors, or RaggedTensors. The full list of supported operations can be found in the Colab notebook.

In addition, TF-Coder only guarantees that its solutions work for the given input-output example. The tool searches for a simple TensorFlow expression that matches the provided input-output example, but sometimes this solution is too simple and doesn’t generalize in the intended way. It can be helpful to make the example as unambiguous as possible, which can often be achieved by adding more numbers to the input and output tensors. Please review TF-Coder’s solutions to ensure that they correctly implement the intended behavior.

Try TF-Coder yourself!

Be sure to give TF-Coder a try! Even experienced TensorFlow users at Google are learning new things with the help of TF-Coder.

You can access the tool using this Colab notebook -- no download or installation is required. Follow this tutorial for a detailed walkthrough. You can also take a look at our code and documentation on GitHub and our research paper.

Note: in the Colab tool, we would like to log the problems given to TF-Coder and the resulting solutions, so that we can improve the tool and build a dataset that will accelerate program synthesis research in general. This data collection is completely optional.