
William Falcon

3K Followers


Published in Towards Data Science · Mar 17, 2022

PyTorch Lightning vs DeepSpeed vs FSDP vs FFCV vs …

Scale up PyTorch model training by mixing these techniques to compound their benefits using PyTorch Lightning — PyTorch Lightning has become one of the most widely used deep learning frameworks in the world by allowing users to focus on the research and not the engineering. Lightning users benefit from massive speed-ups to the training of their PyTorch models, resulting in huge cost savings. PyTorch Lightning is more…

Deep Learning

5 min read

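The techniques compared in that post are largely composable through `Trainer` flags. A minimal illustrative sketch, assuming the PyTorch Lightning 1.x API (the dict below is my own framing, not code from the article):

```python
# Illustrative only: the scaling strategies compared in the post are usually
# toggled via pytorch_lightning.Trainer arguments. Shown as a plain dict so
# the mapping is explicit; strategy names assume the Lightning 1.x API.
scaling_configs = {
    "ddp": {"strategy": "ddp", "precision": 16},
    "deepspeed": {"strategy": "deepspeed_stage_2", "precision": 16},
    "fsdp": {"strategy": "fsdp", "precision": 16},
}

# e.g. Trainer(accelerator="gpu", devices=8, **scaling_configs["deepspeed"])
```

Because each technique is a flag rather than a code rewrite, they can be combined (e.g. a sharded strategy plus 16-bit precision) so the benefits compound.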


Published in Towards Data Science · May 19, 2021

GPUs Are Fast! Datasets Are Your Bottleneck

Learn how you might be bottlenecking your training because of the dataset — If you’re using machine learning or deep learning, then you’ve likely obsessed over making sure all your code can run on GPUs or, for the brave souls, even TPUs. I hate to be the bearer of bad news, but your models are already likely pretty optimal for GPUs! (especially if…

Deep Learning

3 min read

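A quick way to check whether the dataset, not the GPU, is your bottleneck is to time the two halves of a training step separately. A generic sketch (the `load_batch` and `forward` callables are hypothetical stand-ins, not code from the article):

```python
import time

def profile_step(load_batch, forward):
    """Time data loading vs. compute for one step; if data_s dominates,
    the input pipeline (not the model) is the bottleneck."""
    t0 = time.perf_counter()
    batch = load_batch()
    t1 = time.perf_counter()
    forward(batch)
    t2 = time.perf_counter()
    return {"data_s": t1 - t0, "compute_s": t2 - t1}

# toy stand-ins for a real DataLoader and model
stats = profile_step(lambda: list(range(100_000)), lambda b: sum(b))
```

If `data_s` consistently dwarfs `compute_s`, the fix lives in the input pipeline (workers, caching, faster decoding), not in the model code.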


Published in Towards Data Science · May 7, 2021

Trivially Scale PyTorch on AWS

Run PyTorch workloads on AWS with zero code changes — PyTorch is an amazing framework for building neural networks. It’s easy to get started and get value very quickly. But for realistic research or production use-cases, your laptop or local server won’t do. In this tutorial, I’ll show you how to run ANY PyTorch code on the cloud without making…

Deep Learning

4 min read



Published in Towards Data Science · Feb 13, 2021

Setting A Strong Deep Learning Baseline In Minutes With PyTorch

Iterate your way from baseline to custom models to ship products faster or to publish your research faster. — Whether you’re a data scientist, research engineer, AI researcher, or machine learning engineer, baselines are non-negotiable. Don’t build a fancy GAN or try a complex idea before setting up a good foundation. In this tutorial, we’ll use Flash to build two PyTorch baselines in minutes. After that, we’ll iterate on…

Machine Learning

3 min read

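The "baselines first" discipline can be illustrated without any framework at all: the weakest baseline worth beating is the majority-class predictor. A generic sketch (not code from the post):

```python
from collections import Counter

def majority_baseline(train_labels):
    """Always predict the most frequent training label; any model worth
    shipping should beat this before you iterate on fancier ideas."""
    most_common = Counter(train_labels).most_common(1)[0][0]
    return lambda _features: most_common

predict = majority_baseline(["cat", "dog", "cat", "cat"])
predict("anything")  # → "cat"
```

Once a trivial baseline like this is in place, each iteration (a Flash task, then a custom model) has a concrete number to improve on.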


Published in Towards Data Science · Dec 12, 2020

Sharded: A New Technique To Double The Size Of PyTorch Models

Sharded is a new technique that helps you save over 60% memory and train models twice as large. — Deep learning models have been shown to improve with more data and more parameters. Even with the latest GPT-3 model from OpenAI, which uses 175B parameters, we have yet to see models plateau as the number of parameters grows. For some domains like NLP, the workhorse model has been…

Machine Learning

5 min read

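The "over 60%" figure is easy to sanity-check with back-of-envelope arithmetic for mixed-precision Adam, where sharding splits the fp32 master weights and optimizer moments across GPUs. The byte counts below follow the standard ZeRO-style accounting, not numbers from the post:

```python
def per_gpu_gb(n_params, n_gpus, shard_optimizer=True):
    """Rough per-GPU memory for mixed-precision Adam.
    Replicated on every GPU: fp16 params (2 B) + fp16 grads (2 B).
    Shardable: fp32 master params (4 B) + two Adam moments (8 B) = 12 B."""
    replicated = n_params * 4
    optimizer = n_params * 12
    if shard_optimizer:
        optimizer /= n_gpus
    return (replicated + optimizer) / 1e9

baseline = per_gpu_gb(1_000_000_000, 8, shard_optimizer=False)  # 16.0 GB
sharded = per_gpu_gb(1_000_000_000, 8)                          # 5.5 GB
savings = 1 - sharded / baseline                                # ≈ 0.66
```

For a 1B-parameter model on 8 GPUs, sharding the 12 B/param of optimizer state cuts per-GPU memory from 16 GB to 5.5 GB, consistent with the headline claim.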


Published in Towards Data Science · Dec 5, 2020

Variational Autoencoder Demystified With PyTorch Implementation.

This tutorial implements a variational autoencoder for non-black-and-white images using PyTorch. — It’s likely that you’ve searched for VAE tutorials but have come away empty-handed. Either the tutorial uses MNIST instead of color images, or the concepts are conflated and not explained clearly. You’re in luck! This tutorial covers all aspects of VAEs, including the matching math and implementation on a realistic…

Deep Learning

9 min read

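Two pieces of VAE machinery that any such tutorial leans on can be stated compactly for a single latent dimension with a diagonal Gaussian posterior. A generic sketch (my own illustration, not the tutorial's code):

```python
import math
import random

def reparameterize(mu, log_var):
    """z = mu + sigma * eps with eps ~ N(0, 1): sampling becomes a
    deterministic function of (mu, log_var), so gradients flow through."""
    eps = random.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, 1) ) per latent dimension."""
    return 0.5 * (math.exp(log_var) + mu * mu - 1.0 - log_var)
```

The training loss is then reconstruction error plus the KL term summed over latent dimensions; the KL is zero exactly when the posterior matches the standard-normal prior.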


Published in Towards Data Science · Nov 21, 2020

Open source: The magic power of AI research.

Open source is the key to advancing AI and has been the driver of the majority of innovation in the field. This is the story from an insider’s perspective. — As an open-source developer, the question I hear the most is “why would you want to give that away for free?” In the field of AI, there are many reasons why open source is key. First, the code for building models does not give away any competitive advantage because the value…

Open Source

4 min read



Published in Towards Data Science · Sep 13, 2020

Looking Inside The Blackbox — How To Trick A Neural Network

In this tutorial, I’ll show you how to use gradient ascent to figure out how to misclassify an input. — Neural networks get a bad reputation for being black boxes. And while it certainly takes creativity to understand their decision making, they are really not as opaque as people would have you believe. In this tutorial, I’ll show you how to use backpropagation to change the input so as to classify…

Data Science

6 min read

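The core trick scales down to a toy linear model: freeze the weights and take gradient steps on the input until the score crosses the decision boundary. A hypothetical 2-feature example (not from the article, where a real network and backprop are used):

```python
# Frozen "network": a linear score w·x + b; positive score = target class.
w, b = [2.0, -1.0], 0.0
score = lambda v: w[0] * v[0] + w[1] * v[1] + b

x = [-1.0, 1.0]          # starts firmly on the negative side (score = -3)
lr = 0.1
for _ in range(100):
    # d(score)/dx = w, so gradient ascent on the input moves x along w
    x = [xi + lr * wi for xi, wi in zip(x, w)]
# x now scores positive: the same frozen model classifies it differently
```

With a real network the gradient of the target logit with respect to the input comes from backpropagation instead of a closed form, but the loop is the same: update the input, not the weights.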


Published in Towards Data Science · Sep 2, 2020

A Framework For Contrastive Self-Supervised Learning And Designing A New Approach

In a new paper, we discuss the key ideas driving performance in self-supervised learning and show what matters. — This is the partner blog matching our new paper: A Framework For Contrastive Self-Supervised Learning And Designing A New Approach (by William Falcon and Kyunghyun Cho). In the last year, a stream of “novel” self-supervised learning algorithms have set new state-of-the-art results in AI research: AMDIM, CPC, SimCLR, BYOL, Swav…

Artificial Intelligence

13 min read

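Most of the methods listed (AMDIM, CPC, SimCLR, …) share one loss at their core: pull an anchor embedding toward its positive view and away from negatives. A minimal, numerically stable InfoNCE sketch in pure Python (illustrative only; the paper's framework is more general):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """-log softmax of the positive similarity over positive + negatives."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # log-sum-exp shift for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)
```

The loss is small when the anchor is close to its positive and far from negatives, and large in the reversed situation; what the paper's framework varies is where the views, encoders, and negatives come from.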


Published in PyTorch · Jun 20, 2020

PyTorch Multi-GPU Metrics Library and More in PyTorch Lightning 0.8.1

Today we released 0.8.1, which is a major milestone for PyTorch Lightning. With incredible user adoption and growth, we’re continuing to build tools to easily do AI research. This major release puts us on track for final API changes for our v1.0.0 coming soon! PyTorch Lightning is a very lightweight…

Pytorch

3 min read

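Multi-GPU metrics libraries of this kind generally follow an accumulate-then-compute pattern, so per-batch counts can be synced across processes before the final reduction. A pure-Python stand-in for the pattern (the real classes live in the Lightning package; this is not their implementation):

```python
class Accuracy:
    """Accumulate per-batch counts in update(), derive the metric in
    compute(); in multi-GPU training the counts would be all-reduced
    across processes between the two steps."""
    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, preds, targets):
        self.correct += sum(p == t for p, t in zip(preds, targets))
        self.total += len(targets)

    def compute(self):
        return self.correct / self.total

metric = Accuracy()
metric.update([1, 0, 1], [1, 1, 1])
metric.update([0], [0])
# metric.compute() → 0.75
```

Accumulating integer counts rather than per-batch ratios is what makes the metric exact under distributed training: summing counts across GPUs commutes, averaging ratios does not.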


⚡️PyTorch Lightning Creator • PhD Student, AI (NYU, Facebook AI research).

