Introducing TensorBoard.dev: a new way to share your ML experiment results
December 02, 2019
Posted by Gal Oshri, Product Manager

TensorBoard, TensorFlow’s visualization toolkit, is often used by researchers and engineers to visualize and understand their ML experiments. It enables tracking experiment metrics, visualizing models, profiling ML programs, visualizing hyperparameter tuning experiments, and much more.

While TensorBoard makes it easy to visualize your own experiments, machine learning often involves collaboration. You might want to share your research about the effect of a hyperparameter, explain a complicated training procedure, or get help troubleshooting strange model behavior.

We have seen people sharing screenshots of their TensorBoards to achieve this. However, screenshots aren’t interactive and fail to capture all the details. At Google, researchers and engineers often communicate their insights about model behavior by sending their TensorBoard visualizations to teammates. Our goal is to provide this capability to the broader community.

That is why we launched TensorBoard.dev: a managed service (currently in preview) that enables you to easily host, track, and share your ML experiments for free. Simply upload your TensorBoard logs and receive a link that can be viewed by everyone, with no installation or setup.

If a picture is worth a thousand words, we believe an interactive TensorBoard can be even more valuable.
Figure: TensorBoard.dev experiment for "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
We are excited to see how the community engages with TensorBoard.dev.

Getting Started

The first step is to identify the TensorBoard logs you want to share (you can download a sample from here). Note that the TensorBoard you upload will be publicly visible, so do not upload sensitive data.

Make sure you have the latest TensorBoard installed:
pip install -U tensorboard
Then, simply use the upload command:
tensorboard dev upload --logdir {logs}
After following the instructions to authenticate with your Google Account, a TensorBoard.dev link will be provided. You can view the TensorBoard immediately, even during the upload. The uploader will continue running and uploading new logs that appear in the log directory until you stop the process.

The TensorBoard.dev link can be opened by anyone, so feel free to use it to share your research, ask for advice in a GitHub issue or Stack Overflow question, or simply track your experiments without running TensorBoard locally. A Google Account is needed to upload logs, but not to view the TensorBoard.

Several other commands are available for listing, deleting, and exporting your experiments; run the tensorboard dev --help command to learn more. There is currently a limit of 10M data points per user. If you reach this limit (you will see an error during upload), please reach out to us, or delete some of your existing experiments as a quick fix.

You can find an end-to-end tutorial that runs in Colab here. While the tutorial shows how to use TensorBoard logs created with Keras’s .fit(), you can also use logs created with the GradientTape-based training loop (as shown in TensorBoard’s Scalars tutorial) or any other valid TensorBoard logs.
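As a minimal sketch of how such logs are produced with Keras's .fit(), the snippet below trains a toy model with the TensorBoard callback; the model, data, and the "logs/fit" directory name are all arbitrary choices for illustration, not part of TensorBoard.dev itself:

```python
import numpy as np
import tensorflow as tf

# Toy data: learn y = 2x (purely illustrative).
x = np.random.rand(100, 1).astype("float32")
y = 2 * x

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

# The TensorBoard callback writes event files under log_dir;
# "logs/fit" is an arbitrary directory name chosen here.
tb_callback = tf.keras.callbacks.TensorBoard(log_dir="logs/fit")
model.fit(x, y, epochs=5, callbacks=[tb_callback], verbose=0)

# The resulting directory is what you would then pass to:
#   tensorboard dev upload --logdir logs/fit
```

After training, the log directory contains the event files that the uploader reads, so the same directory works for a local TensorBoard and for TensorBoard.dev.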

What’s next for TensorBoard.dev?

TensorBoard.dev is in preview and currently only includes TensorBoard’s Scalars dashboard. We are adding more of TensorBoard’s capabilities and expanding the sharing functionality. We are also exploring some ideas on how to make it easier to discover interesting TensorBoards that have been published.

If you have any feedback or ideas on how to make TensorBoard.dev more useful for you, we’d love to hear about it at tensorboard.dev-support@google.com.


