Learn how you might be bottlenecking your training because of the dataset

Bad data practices WILL slow down your training (Photo credit: pixabay)

If you’re using machine learning or deep learning, then you’ve likely obsessed over making sure all your code can run on GPUs or, for the brave souls, even TPUs.

I hate to be the bearer of bad news, but your models are probably already well optimized for GPUs (especially if you’re using frameworks like PyTorch Lightning, which automatically let you switch between GPUs and CPUs with no code changes).

The real culprit is data throughput. Data can be a bottleneck in subtle ways you may not be aware of.
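One subtle but common example is the DataLoader configuration itself. Here’s a minimal sketch (the dataset is a random placeholder, and the worker count is something you’d tune to your machine):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset just for illustration.
dataset = TensorDataset(
    torch.randn(1_000, 3, 64, 64),
    torch.randint(0, 10, (1_000,)),
)

# A single-process DataLoader (num_workers=0) forces the GPU to wait while
# the main process loads and transforms each batch.
slow_loader = DataLoader(dataset, batch_size=64)

# Several worker processes plus pinned memory let batches be prepared in
# parallel and copied to the GPU asynchronously.
fast_loader = DataLoader(
    dataset,
    batch_size=64,
    num_workers=4,    # tune to the number of available CPU cores
    pin_memory=True,  # enables faster host-to-GPU transfers
)
```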

Transforms on CPUs

When dealing with data for deep learning, you are very very…


Run PyTorch workloads on AWS with zero code changes

Source bestanimation.com (with permission)

PyTorch is an amazing framework for building neural networks. It’s easy to get started and get value very quickly. But for realistic research or production use-cases, your laptop or local server won’t do.

In this tutorial, I’ll show you how to run ANY PyTorch code on the cloud without making any code changes.

The model

For this tutorial I’m going to pick the dcgan from the PyTorch examples repository.


Iterate your way from baselines to custom models to ship products or publish your research faster.

With Flash, build PyTorch baselines in minutes

Whether you’re a data scientist, research engineer, AI researcher, or machine learning engineer, baselines are non-negotiable. Don’t build a fancy GAN or try a complex idea before setting up a good foundation.

In this tutorial, we’ll use Flash to build two PyTorch baselines in minutes. After that, we’ll iterate on that baseline using Lightning to get to a custom implementation tailored to your particular circumstances and squeeze more performance from your models.
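As a rough illustration, a Flash image-classification baseline looks something like the sketch below. This is hedged: module paths and data-loading arguments differ across Flash versions, and the folder paths are placeholders.

```python
import flash
from flash.image import ImageClassificationData, ImageClassifier

# Point Flash at folders of images organized by class (placeholder paths).
datamodule = ImageClassificationData.from_folders(
    train_folder="data/train",
    val_folder="data/val",
    batch_size=32,
)

# A pretrained backbone gives a strong baseline out of the box.
model = ImageClassifier(backbone="resnet18", num_classes=datamodule.num_classes)

# Fine-tune for a few epochs; freezing the backbone keeps this fast.
trainer = flash.Trainer(max_epochs=3)
trainer.finetune(model, datamodule=datamodule, strategy="freeze")
```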

Baseline vs. End-Model


Sharded is a new technique that helps you save over 60% memory and train models twice as large.

Giving it scale (Photo by Peter Gonzalez on Unsplash)

Deep learning models have been shown to improve with more data and more parameters. Even with OpenAI’s latest GPT-3 model, which uses 175B parameters, we have yet to see models plateau as the number of parameters grows.

For some domains, like NLP, the workhorse model has been the Transformer, which requires massive amounts of GPU memory; realistic models simply don’t fit in memory. Sharded training was introduced in Microsoft’s ZeRO paper as a technique to bring us closer to 1 trillion parameters.
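To give a sense of how this surfaces in user code, sharded training in PyTorch Lightning is typically switched on with a Trainer flag. The sketch below is illustrative only; the exact plugin name and arguments depend on your Lightning version, and MyLightningModule is a placeholder.

```python
from pytorch_lightning import Trainer

# model = MyLightningModule()  # your existing LightningModule, unchanged

# Sharded data parallelism spreads optimizer state, gradients, and parameters
# across GPUs instead of replicating them on every device.
trainer = Trainer(
    gpus=8,
    accelerator="ddp",
    plugins="ddp_sharded",  # name may differ across Lightning versions
    precision=16,           # mixed precision compounds the memory savings
)
# trainer.fit(model)
```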

In this article, I will give the…


Hands-on Tutorials

This tutorial implements a variational autoencoder for color images (rather than black-and-white MNIST digits) using PyTorch.

Generated images from CIFAR-10 (author’s own)

It’s likely that you’ve searched for VAE tutorials but have come away empty-handed. Either the tutorial uses MNIST instead of color images or the concepts are conflated and not explained clearly.

You’re in luck!

This tutorial covers all aspects of VAEs including the matching math and implementation on a realistic dataset of color images.

The outline is as follows:

  1. Resources (github code, colab).
  2. ELBO definition (optional).
  3. ELBO, KL divergence explanation (optional).
  4. ELBO, reconstruction loss explanation (optional).
  5. PyTorch implementation.
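As a quick preview of the optional ELBO items above, the ELBO pairs a reconstruction term with a KL term. The sketch below is illustrative, assuming a Gaussian posterior and a simple MSE reconstruction loss rather than the tutorial’s exact code:

```python
import torch
import torch.nn.functional as F

def elbo_loss(x, x_hat, mu, log_var):
    """Negative ELBO for a Gaussian VAE.

    x:           original batch of images
    x_hat:       reconstructions from the decoder
    mu, log_var: parameters of the approximate posterior q(z|x)
    """
    # Reconstruction term: how well the decoder reproduces the input.
    recon = F.mse_loss(x_hat, x, reduction="sum")

    # KL term: how far q(z|x) drifts from the standard normal prior p(z).
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())

    # Minimizing (recon + kl) is equivalent to maximizing the ELBO.
    return recon + kl
```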

Resources

Follow along with this colab.

Code is also available on GitHub here (don’t forget to star!).

For a production/research-ready implementation simply install…


Open source is the key to advancing AI and has been the driver of the majority of innovation in the field. This is the story from an insider’s perspective.

Credit: LoveTheWind (with permission)

As an open-source developer, the question I hear the most is “why would you want to give that away for free?”

In the field of AI, there are many reasons why open source is key. First, the code for building models does not give away any competitive advantage, because the value comes from the models combined with your own data. Second, it lets the whole world help you find and correct mistakes. Imagine building a house where every architect in the world can contribute one tiny idea. But more importantly, AI is a really hard problem to solve.

The problems in the field cannot be…


In this tutorial, I’ll show you how to use gradient ascent to figure out how to misclassify an input.

Using gradient ascent to figure out how to change an input to be classified as a 5. (All images are the author’s own with all rights reserved).

Neural networks get a bad reputation for being black boxes. And while it certainly takes creativity to understand their decision making, they are really not as opaque as people would have you believe.

In this tutorial, I’ll show you how to use backpropagation to change the input so that it is classified as whatever you would like.
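The core idea is small: freeze the network’s weights and take gradient steps on the input itself, pushing up the score of the class you want. Here’s a minimal sketch (not the colab’s exact code; `model` is any trained classifier and `target_class` is your choice):

```python
import torch

def nudge_input(model, x, target_class, steps=50, lr=0.1):
    """Gradient ascent on a batched input x so the model predicts target_class."""
    model.eval()  # freeze behavior; only the input will change
    x = x.clone().detach().requires_grad_(True)

    for _ in range(steps):
        logits = model(x)
        # Ascend the logit of the class we want the model to predict.
        score = logits[0, target_class]
        score.backward()
        with torch.no_grad():
            x += lr * x.grad
            x.grad.zero_()

    return x.detach()
```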

Follow along using this colab.

(This work was co-written with Alfredo Canziani ahead of an upcoming video)

Humans as black boxes

Let’s consider the case of humans. If I show you the following input:


In a new paper, we discuss the key ideas driving performance in self-supervised learning and show what matters.

Contrastive learning: Batch of inputs.

This is the partner blog matching our new paper: A Framework For Contrastive Self-Supervised Learning And Designing A New Approach (by William Falcon and Kyunghyun Cho).

In the last year, a stream of “novel” self-supervised learning algorithms has set new state-of-the-art results in AI research: AMDIM, CPC, SimCLR, BYOL, SwAV, etc…

In our recent paper, we formulate a conceptual framework for characterizing contrastive self-supervised learning approaches. …
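For readers new to the area, the common thread across these methods is a contrastive objective that pulls two augmented views of the same image together and pushes other images apart. The sketch below is a generic InfoNCE-style loss for illustration, not the paper’s exact formulation:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: embeddings of two augmented views of the same batch, shape (N, D)."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)

    # Similarity of every view-1 embedding to every view-2 embedding.
    logits = z1 @ z2.t() / temperature

    # The positive pair for row i is column i; every other column is a negative.
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```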


Today we released 0.8.1, a major milestone for PyTorch Lightning. With incredible user adoption and growth, we’re continuing to build tools that make AI research easy.

This major release puts us on track for final API changes for our v1.0.0 coming soon!

PyTorch Lightning

PyTorch Lightning is a very lightweight structure for PyTorch; it’s more of a style guide than a framework. But once you structure your code, we give you free GPU, TPU, and 16-bit precision support, and much more!

Lightning is just structured PyTorch
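To give a flavor of what “structured PyTorch” means, a LightningModule groups the model, the training step, and the optimizer choice into one class. Here’s a minimal sketch (the architecture is a placeholder):

```python
import torch
from torch import nn
import pytorch_lightning as pl

class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # Placeholder architecture; swap in your own model here.
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# The Trainer handles devices and precision for you, e.g.:
# trainer = pl.Trainer(gpus=1, precision=16)  # exact flags vary by version
# trainer.fit(LitClassifier(), train_dataloader)
```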


Over the last 10 months of working on PyTorch Lightning, the team and I have been exposed to many styles of structuring PyTorch code, and we have identified a few key places where people inadvertently introduce bottlenecks.

We’ve taken great care to make sure that PyTorch Lightning does not make any of these mistakes in the code we automate for you, and we even try to correct them for users when we detect them. …

William Falcon

⚡️PyTorch Lightning Creator • PhD Student, AI (NYU, Facebook AI research).
