February 13, 2019 —
Posted by the TensorFlow Team
In a recent article, we mentioned that TensorFlow 2.0 has been redesigned with a focus on developer productivity, simplicity, and ease of use.
To take a closer look at what’s changed, and to learn about best practices, check out the new Effective TensorFlow 2.0 guide (published on GitHub). This article provides a quick summary of the content you’ll find there. If any of these topics interest you, head over to the guide to learn more!
Some APIs have been replaced with their 2.0 equivalents: tf.summary, tf.keras.metrics, and tf.keras.optimizers. The easiest way to automatically apply these renames is to use the v2 upgrade script.
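As an illustration of one such rename (this particular pair is our own example, not a list from the guide), an optimizer moves from the tf.train namespace to tf.keras.optimizers:

# TensorFlow 1.X
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)

# TensorFlow 2.0 equivalent
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)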
TensorFlow 1.X requires users to manually stitch together an abstract syntax tree (the graph) by making tf.* API calls, and then to compile it by passing a set of output tensors and input tensors to a session.run() call. By contrast, TensorFlow 2.0 executes eagerly (like Python normally does) and in 2.0, graphs and sessions should feel like implementation details.
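As a minimal sketch of what eager execution looks like (the values here are our own):

import tensorflow as tf

# This runs immediately and returns a concrete value; no graph or session needed.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)
print(y)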
TensorFlow 1.X relied heavily on implicitly global namespaces. When you called tf.Variable(), it would be put into the default graph, and it would remain there even if you lost track of the Python variable pointing to it. You could then recover that tf.Variable, but only if you knew the name that it had been created with. This was difficult to do if you were not in control of the variable’s creation. As a result, all sorts of mechanisms proliferated to attempt to help users find their variables again. TensorFlow 2.0 eliminates all of these mechanisms in favor of the default: keep track of your variables! If you lose track of a tf.Variable, it gets garbage collected. See the guide for more details.
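A minimal sketch of this behavior (the function and values are illustrative):

def create_variable():
  # In 2.0 the variable lives only as long as a Python reference to it does.
  return tf.Variable(1.0)

v = create_variable()  # alive while we hold the reference
del v                  # no references remain, so the variable is garbage collected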
A session.run() call is almost like a function call: you specify the inputs and the function to be called, and you get back a set of outputs. In TensorFlow 2.0, you can decorate a Python function using tf.function() to mark it for JIT compilation so that TensorFlow runs it as a single graph (Functions 2.0 RFC). To help users avoid rewriting their code when adding @tf.function, AutoGraph will convert a subset of Python constructs into their TensorFlow equivalents. See the guide for more details.
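As a combined sketch of both ideas (the function body is our own example, not from the guide), a data-dependent Python if inside a decorated function is converted by AutoGraph into tf.cond:

@tf.function
def clip_to_positive(x):
  # The condition depends on a tensor value, so AutoGraph rewrites
  # this `if` into a tf.cond when tracing the graph.
  if tf.reduce_sum(x) > 0:
    return x
  return tf.zeros_like(x)

result = clip_to_positive(tf.constant([1.0, -2.0]))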
A common usage pattern in TensorFlow 1.X was the “kitchen sink” strategy, where the union of all possible computations was preemptively laid out and then selected tensors were evaluated via session.run(). In TensorFlow 2.0, users should refactor their code into smaller functions which are called as needed. In general, it’s not necessary to decorate each of these smaller functions with tf.function; only use tf.function to decorate high-level computations — for example, one step of training, or the forward pass of your model.
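A minimal sketch of that pattern (the helper and layer structure are our own illustration):

def dense_layer(x, w, b):
  # Small helper; it does not need its own tf.function decoration.
  return tf.nn.relu(tf.matmul(x, w) + b)

@tf.function  # decorate the high-level computation instead
def forward(x, w1, b1, w2, b2):
  h = dense_layer(x, w1, b1)
  return dense_layer(h, w2, b2)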
In addition, Keras layers and models inherit from tf.train.Checkpointable and are integrated with @tf.function, which makes it possible to directly checkpoint or export SavedModels from Keras objects. You do not necessarily have to use Keras’s .fit() API to take advantage of these integrations. See the guide for more details.
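For instance, a model can be checkpointed directly with tf.train.Checkpoint, with no .fit() call involved (the model and path here are illustrative assumptions of ours):

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
ckpt = tf.train.Checkpoint(model=model)
save_path = ckpt.save('/tmp/model_ckpt')  # write a checkpoint
ckpt.restore(save_path)                   # and restore from it later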
Combine tf.data.Datasets and @tf.function
tf.data.Dataset is the best way to stream training data from disk. Datasets are iterables (not iterators), and work just like other Python iterables in Eager mode. You can fully utilize dataset async prefetching/streaming features by wrapping your code in tf.function(), which replaces Python iteration with the equivalent graph operations using AutoGraph. For example:
@tf.function
def train(model, dataset, optimizer):
  for x, y in dataset:
    with tf.GradientTape() as tape:
      prediction = model(x)
      loss = loss_fn(prediction, y)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
If you use the Keras .fit() API, you won’t have to worry about dataset iteration:

model.compile(optimizer=optimizer, loss=loss_fn)
model.fit(dataset)
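As a sketch of how the dataset used above might be built from in-memory arrays (the names, shapes, and pipeline parameters are our own assumptions):

import numpy as np

features = np.random.rand(1000, 16).astype('float32')  # illustrative data
labels = np.random.randint(0, 2, size=(1000,))

dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(buffer_size=1024).batch(32)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)  # overlap input work with training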
AutoGraph provides a way to convert data-dependent control flow into graph-mode equivalents like tf.cond and tf.while_loop. One common place where data-dependent control flow appears is in sequence models. tf.keras.layers.RNN wraps an RNN cell, allowing you to either statically or dynamically unroll the recurrence. For demonstration’s sake, you could reimplement dynamic unroll as follows:
class DynamicRNN(tf.keras.Model):

  def __init__(self, rnn_cell):
    super(DynamicRNN, self).__init__()
    self.cell = rnn_cell

  def call(self, input_data):
    # [batch, time, features] -> [time, batch, features]
    input_data = tf.transpose(input_data, [1, 0, 2])
    # One TensorArray slot per timestep.
    outputs = tf.TensorArray(tf.float32, input_data.shape[0])
    state = self.cell.zero_state(input_data.shape[1], dtype=tf.float32)
    # AutoGraph converts this loop over a tensor range into tf.while_loop.
    for i in tf.range(input_data.shape[0]):
      output, state = self.cell(input_data[i], state)
      outputs = outputs.write(i, output)
    # Stack the per-step outputs back into [batch, time, features] order.
    return tf.transpose(outputs.stack(), [1, 0, 2]), state
See the guide for more details.

Use tf.metrics to aggregate data and tf.summary to log it
A complete set of 2.0-style tf.summary symbols is coming soon. You can access the 2.0 version of tf.summary with:

from tensorflow.python.ops import summary_ops_v2

See the guide for more details.
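As a sketch of how the two fit together once the 2.0 tf.summary symbols land (the log directory, metric name, and step are our own illustrative choices):

writer = tf.summary.create_file_writer('/tmp/summaries')
avg_loss = tf.keras.metrics.Mean(name='loss')

avg_loss.update_state(0.25)  # aggregate values across steps
with writer.as_default():
  tf.summary.scalar('loss', avg_loss.result(), step=1)
avg_loss.reset_states()      # start fresh for the next epoch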