November 15, 2019 —
A guest post by Guillaume Klein, research engineer at SYSTRAN.
OpenNMT-tf is a neural machine translation toolkit for TensorFlow released in 2017. At that time, the project used many features and capabilities offered by TensorFlow: training and evaluation with tf.estimator, variable scopes, graph collections, tf.contrib, etc. We enjoyed using these features together for more than 2 years.
We spent…
The tf.compat.v1 and tf.compat.v2 modules greatly helped during this iterative process. We also decided to move away from tf.estimator, even though this required a large code redesign. Fortunately, we found it relatively easy to write a custom training loop while still meeting performance requirements (with tf.function) and supporting advanced features such as multi-GPU training (with tf.distribute) and mixed precision training (with automatic mixed precision graph).

For example, the snippet below implements gradient accumulation in our custom training loop.

Forward:

@tf.function
def forward(source, target):
    """Forwards a training example into the model, computes the
    loss, and accumulates the gradients.
    """
    logits = model(source, target, training=True)
    loss = model.compute_loss(logits, target)
    gradients = optimizer.get_gradients(loss,
                                        model.trainable_variables)
    if not accum_gradients:
        # Initialize the variables to accumulate the gradients.
        accum_gradients.extend([
            tf.Variable(tf.zeros_like(gradient), trainable=False)
            for gradient in gradients])
    for accum_gradient, step_gradient in zip(accum_gradients,
                                             gradients):
        accum_gradient.assign_add(step_gradient)
    return loss
Step:

@tf.function
def step():
    """Applies the accumulated gradients and advances
    the training step."""
    grads_and_vars = [
        (gradient / accum_steps, variable)
        for gradient, variable in zip(accum_gradients,
                                      model.trainable_variables)]
    optimizer.apply_gradients(grads_and_vars)
    for accum_gradient in accum_gradients:
        accum_gradient.assign(tf.zeros_like(accum_gradient))
accum_gradients = []  # Filled on the first call to forward().

for i, (source, target) in enumerate(dataset):
    forward(source, target)
    # Apply gradients every accum_steps examples.
    if (i + 1) % accum_steps == 0:
        step()
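To see why step() divides each accumulated gradient by accum_steps, here is the same accumulate-then-average arithmetic sketched in plain Python, without TensorFlow. The numbers are stand-ins, not values from any real model.

```python
# Gradient accumulation in miniature: sum per-step gradients, then
# divide by the number of accumulated steps, so the applied update
# matches the average gradient of the virtual large batch.
accum_steps = 4

# Stand-in per-step "gradients" for a model with two parameters.
step_gradients = [
    [1.0, 2.0],
    [3.0, 2.0],
    [1.0, 0.0],
    [3.0, 4.0],
]

accum = [0.0, 0.0]
for grads in step_gradients:
    # Mirrors accum_gradient.assign_add(step_gradient) in forward().
    accum = [a + g for a, g in zip(accum, grads)]

# Mirrors (gradient / accum_steps, variable) in step().
averaged = [a / accum_steps for a in accum]
print(averaged)  # [2.0, 2.0]
```

Applying the averaged gradient once is thus equivalent to training on a batch accum_steps times larger, which is the whole point of the pattern.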
If you are rewriting your model on top of custom modules (e.g. with tf.Module) and using tf.train.Checkpoint to load and save checkpoints, it is likely that you will break compatibility with existing checkpoints. To mitigate this change in OpenNMT-tf, we silently convert old checkpoints on load with this process:

1. Read the old variable values with tf.train.load_checkpoint.
2. Assign these values to the matching variables of the new model.
3. Save the converted checkpoint with tf.train.Checkpoint.
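The heart of such a conversion is mapping old variable names to new ones. Below is a framework-free sketch of that mapping step, using plain dictionaries in place of real checkpoints; every variable name and the mapping itself are hypothetical, not the ones actually used by OpenNMT-tf.

```python
# Sketch of converting an old checkpoint to a new variable layout.
# In real code, values would be read with tf.train.load_checkpoint
# and written back with tf.train.Checkpoint; dicts stand in for
# both here, and all names are made up for illustration.

# Old checkpoint: variable name -> value.
old_checkpoint = {
    "encoder/kernel": [1.0, 2.0],
    "decoder/kernel": [3.0, 4.0],
}

# Old variable names -> names expected by the new model.
name_mapping = {
    "encoder/kernel": "model/encoder/dense/kernel",
    "decoder/kernel": "model/decoder/dense/kernel",
}

def convert(old_ckpt, mapping):
    """Returns the old values keyed by the new variable names."""
    return {mapping[name]: value for name, value in old_ckpt.items()}

new_checkpoint = convert(old_checkpoint, name_mapping)
```

Because the conversion runs transparently on load, users can point a new version of the toolkit at an old checkpoint without any manual migration step.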
We also contributed to the tensorflow_addons.seq2seq module, which is the TensorFlow 2.0 equivalent of tf.contrib.seq2seq.