In this tutorial we’ll walk through a simple convolutional neural network to classify the images in CIFAR10 using PyTorch.

We’ll also set up Weights & Biases to log model metrics, inspect performance, and share findings about the best architecture for the network. In this example we’re using Google Colab as a convenient hosted environment, but you can run your own training scripts from anywhere and visualize metrics with W&B’s experiment tracking tool.

Getting Started

  1. Open this Colab notebook.
  2. Click “Open in playground” to create a copy of this notebook.
  3. Save the copy to your Google Drive.
  4. Step through each section below, pressing play on the code blocks to run the cells.

Results will be logged to a shared W&B project page.


Training Your Model

Let’s review the key wandb commands we used in the Colab notebook above.

Setup

  • pip install wandb – Installs the W&B library
  • import wandb – Imports the wandb library
  • wandb login – Login to your W&B account so you can log all your metrics in one place
  • wandb.init() – Initializes a new W&B run. Each run is a single execution of the training script (a minimal setup sketch follows below).
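Here’s a minimal sketch of that setup in a notebook cell, assuming you’ve already run pip install wandb; the project name is illustrative, not the name used in the shared project.

```python
import wandb

# Authenticate once per environment; in a notebook this prompts for your API key.
wandb.login()

# Start a new run. The project name here is just an example.
run = wandb.init(project="pytorch-cifar10-demo")
```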

Initialize Hyperparameters

  • wandb.config – Saves all your hyperparameters in a config object. This lets you use our app to sort and compare your runs by hyperparameter values.

We encourage you to tweak these values and run the cell again to see if you can improve model performance; a small example follows below.
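A minimal sketch of saving hyperparameters to the config object, assuming a run has already been started with wandb.init(); the specific values are only illustrative.

```python
# Store hyperparameters on the run's config so they appear in the runs table.
config = wandb.config
config.batch_size = 128
config.epochs = 10
config.learning_rate = 0.01
config.momentum = 0.9
```

You can also pass these values directly as wandb.init(config={...}); either way they become sortable columns in the app.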

Track Results

  • wandb.watch() – Fetches layer dimensions, gradients, and model parameters and logs them automatically to your dashboard.
  • wandb.save() – Saves the model checkpoint and syncs it to W&B (see the training-loop sketch below for these calls in context).
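Here’s a self-contained sketch of how these calls fit into a training loop. The network architecture and hyperparameter values are illustrative stand-ins, not the exact ones from the Colab notebook.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import wandb

# A small example CNN for CIFAR-10 (illustrative architecture).
class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32 * 8 * 8, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        return self.fc(x.flatten(1))

run = wandb.init(project="pytorch-cifar10-demo",
                 config={"epochs": 2, "lr": 0.01, "batch_size": 128})

train_set = torchvision.datasets.CIFAR10(root="./data", train=True, download=True,
                                         transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(train_set,
                                           batch_size=wandb.config.batch_size,
                                           shuffle=True)

model = SimpleCNN()
optimizer = optim.SGD(model.parameters(), lr=wandb.config.lr, momentum=0.9)

# Automatically log gradients and parameters for every layer.
wandb.watch(model, log="all")

for epoch in range(wandb.config.epochs):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()
    # Log metrics so they show up on the run page.
    wandb.log({"epoch": epoch, "loss": loss.item()})

# Save a checkpoint file and sync it to the run.
torch.save(model.state_dict(), "model.h5")
wandb.save("model.h5")
```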

Visualizing Results

Once you’ve trained your model, you can visualize its predictions, training loss, gradients, and best hyperparameters, and review the associated code.

To view runs created by people in this public project:

  1. Check out the project page.
  2. Press ‘option+space’ to expand the runs table, comparing all the results from everyone who has tried this script.
  3. Click on the name of a run to dive in deeper to that single run on its own run page.

Visualize Gradients

Click through to a single run to see more details about that run. For example, on this run page you can see the gradients I logged when I ran this script.


Visualize Predictions

You can visualize predictions made at every step by clicking on the Media tab. Here we can see an example of true labels and predictions made by our model on the CIFAR dataset.
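To get predictions into the Media tab, log them as wandb.Image objects. A hedged sketch follows; it assumes model and test_loader come from your own training code, and uses the standard CIFAR-10 class names.

```python
import torch
import wandb

classes = ("plane", "car", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck")

# Grab one batch of test images and run the model on it.
images, labels = next(iter(test_loader))
with torch.no_grad():
    preds = model(images).argmax(dim=1)

# Each image's caption pairs the predicted label with the true label.
wandb.log({
    "examples": [
        wandb.Image(img, caption=f"pred: {classes[p]}, true: {classes[t]}")
        for img, p, t in zip(images, preds, labels)
    ]
})
```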


Review Code

The overview tab picks up a link to the code. In this case, it’s a link to the Google Colab. If you’re running a script from a git repo, we’ll pick up the SHA of the latest git commit and give you a link to that version of the code in your own GitHub repo.


Visualize Relationships

Use a parallel coordinates chart to see the relationship between hyperparameters and output metrics. Here, I’m looking at how the learning rate and the other hyperparameters I saved in “config” affect my loss and accuracy.

Next Steps

We encourage you to fork this Colab notebook, tweak some hyperparameters, and see if you can beat the leading model! Your goal is to maximize Test Accuracy. Good luck!

More about Weights & Biases

We’re always free for academics and open source projects. Email [email protected] with any questions or feature suggestions. Here are some more resources:

  1. Documentation – Python docs
  2. Gallery – example reports in W&B
  3. Articles – blog posts and tutorials
  4. Community – join our Slack community forum