
Commit ccec627

Merge pull request #1298 from ELC/pytorchconf-2023
Add PytorchConf 2023
2 parents: 3cee68a + d369a52

File tree: 68 files changed, +1766 −0 lines

pytorchconf-2023/category.json

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
{
  "title": "PyTorch Conference 2023"
}
@@ -0,0 +1,27 @@
{
  "description": "PyTorch Libraries provide building blocks (data processing transforms, modeling components, loss functions, etc.) on top of PyTorch as well as examples and tutorials on how to use these building blocks for training SoTA Models. In this talk, we’ll provide insights into ongoing work to accelerate exploration in multimodal understanding and generative AI using TorchMultimodal. We'll also present TorchVision's new transforms API, with added support for image detection, segmentation, and video tasks.",
  "duration": 1560,
  "language": "eng",
  "recorded": "2023-10-16",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://pytorch.org/event/pytorch-conference-2023/"
    }
  ],
  "speakers": [
    "Nicolas Hug",
    "Philip Bontrager",
    "Evan Smothers",
    "Peng Chen"
  ],
  "tags": [],
  "thumbnail_url": "https://i.ytimg.com/vi/jKdXNDSpauk/maxresdefault.jpg",
  "title": "Accelerating Explorations in Vision and Multimodal AI Using Pytorch Libraries",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=jKdXNDSpauk"
    }
  ]
}
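
Editor's note: the "new transforms API" this abstract refers to is torchvision's transforms v2, which can transform images together with detection targets. A minimal illustrative sketch, assuming torchvision >= 0.16 (not code from the talk):

    import torch
    from torchvision import tv_tensors
    from torchvision.transforms import v2

    # Dummy image plus detection targets; v2 transforms both consistently.
    img = tv_tensors.Image(torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8))
    boxes = tv_tensors.BoundingBoxes([[10, 10, 100, 100]], format="XYXY",
                                     canvas_size=(224, 224))

    transforms = v2.Compose([
        v2.RandomHorizontalFlip(p=1.0),         # flips boxes along with pixels
        v2.ToDtype(torch.float32, scale=True),  # uint8 [0, 255] -> float [0, 1]
    ])
    img, boxes = transforms(img, boxes)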
Lines changed: 25 additions & 0 deletions
@@ -0,0 +1,25 @@
{
  "description": "There is a Cambrian explosion of performant and efficient methods to train and serve generative AI models within the community. The PyTorch team will present optimizations to transformer-based Generative AI models, using pure, native PyTorch. In this talk we aim to cover new techniques in PyTorch for driving efficiency gains, as well as showcase how they can be composed on popular Generative AI models. Highlights will include methods spanning torch.compile, quantization, sparsity, memory-efficient attention, and reduced padding.",
  "duration": 1603,
  "language": "eng",
  "recorded": "2023-10-16",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://pytorch.org/event/pytorch-conference-2023/"
    }
  ],
  "speakers": [
    "Christian Puhrsch",
    "Horace He"
  ],
  "tags": [],
  "thumbnail_url": "https://i.ytimg.com/vi/IWpM_9AsC-U/maxresdefault.jpg",
  "title": "Accelerating Generative AI",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=IWpM_9AsC-U"
    }
  ]
}
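
Editor's note: two of the techniques named in this abstract can be sketched in a few lines of plain PyTorch >= 2.0 (an illustrative sketch assuming a CUDA GPU, not the speakers' code):

    import torch
    import torch.nn.functional as F

    q = torch.randn(1, 8, 1024, 64, device="cuda", dtype=torch.float16)
    k, v = torch.randn_like(q), torch.randn_like(q)

    # Memory-efficient attention: PyTorch dispatches to a fused kernel
    # (FlashAttention or the mem-efficient variant) when one is available.
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)

    # torch.compile traces the function and generates optimized kernels.
    @torch.compile
    def attn_block(q, k, v):
        return F.scaled_dot_product_attention(q, k, v, is_causal=True)

    out = attn_block(q, k, v)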
Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
{
  "description": "In this session, we will explore the technology advancements of PyTorch Distributed, and dive into the details of how composing different PyTorch-native distributed training APIs makes multi-dimensional parallelism possible for training Large Language Models.",
  "duration": 1149,
  "language": "eng",
  "recorded": "2023-10-16",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://pytorch.org/event/pytorch-conference-2023/"
    }
  ],
  "speakers": [
    "Wanchao Liang"
  ],
  "tags": [],
  "thumbnail_url": "https://i.ytimg.com/vi/LcDjYLJblEY/maxresdefault.jpg",
  "title": "Composable Distributed PT2(D)",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=LcDjYLJblEY"
    }
  ]
}
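
Editor's note: as a rough illustration of the composability theme (an assumed minimal setup launched with torchrun, not the speaker's code; the talk composes far more dimensions):

    import torch
    import torch.distributed as dist
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

    model = torch.nn.Transformer(d_model=512).cuda()
    model = FSDP(model)           # shards parameters and gradients across ranks
    model = torch.compile(model)  # PT2 compilation composes with FSDP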
Lines changed: 25 additions & 0 deletions
@@ -0,0 +1,25 @@
{
  "description": "As Generative AI adoption accelerates across industry, organizations want to deliver hyper-personalized experiences to end users. For building such experiences, thousands of models are being developed by fine-tuning pre-trained large models. To meet their stringent latency and throughput goals, organizations use GPU instances to deploy such models. However, inference costs can add up quickly if deploying thousands of models and provisioning dedicated hardware for each. TorchServe offers features like an open platform, deferred distribution initialization, model sharing, and heterogeneous deployment that make it easy for users to deploy fine-tuned large models and save cost. Learn how organizations can use these features in conjunction with fine-tuning techniques like PEFT (Parameter Efficient Fine Tuning) and use Amazon SageMaker Multi-Model Endpoint (MME) to deploy multiple GenAI models on the same GPU, share GPU instances across thousands of GenAI models, and dynamically load/unload models based on incoming traffic, all of which significantly reduces cost. Finally, we showcase example code for deploying multiple Llama-based models fine-tuned using PEFT on MME.",
  "duration": 1260,
  "language": "eng",
  "recorded": "2023-10-16",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://pytorch.org/event/pytorch-conference-2023/"
    }
  ],
  "speakers": [
    "Saurabh Trikande",
    "Li Ning"
  ],
  "tags": [],
  "thumbnail_url": "https://i.ytimg.com/vi/bOHlPg13K3U/maxresdefault.jpg",
  "title": "Cost Effectively Deploy Thousands of Fine Tuned Gen AI Models Like Llama Using TorchServe on AWS",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=bOHlPg13K3U"
    }
  ]
}
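
Editor's note: the PEFT step the abstract mentions can look roughly like the following with the Hugging Face peft library (model name and hyperparameters are illustrative assumptions, not from the talk):

    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM

    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
    lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
    model = get_peft_model(base, lora)
    model.print_trainable_parameters()  # only the small LoRA adapters train
    model.save_pretrained("adapter/")   # a few MB per fine-tuned variant, which
                                        # is what makes sharing one GPU across
                                        # thousands of models feasible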
Lines changed: 25 additions & 0 deletions
@@ -0,0 +1,25 @@
{
  "description": "This talk will present checkpoint features for distributed training. Distributed checkpoint supports saving and loading from multiple ranks in parallel. It handles load-time resharding, which enables saving in one cluster topology and loading into another. It also supports saving in one parallelism and loading into another. It is currently adopted by IBM, Mosaic, and XLA for FSDP checkpointing, and it is also being used for checkpointing support in the Shampoo OSS release. We will talk about distributed checkpoint support today and what is coming up next.",
  "duration": 910,
  "language": "eng",
  "recorded": "2023-10-16",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://pytorch.org/event/pytorch-conference-2023/"
    }
  ],
  "speakers": [
    "Iris Zhang",
    "Chien-Chin Huang"
  ],
  "tags": [],
  "thumbnail_url": "https://i.ytimg.com/vi_webp/ldBmHNva_Fw/maxresdefault.webp",
  "title": "Distributed Checkpoint",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=ldBmHNva_Fw"
    }
  ]
}
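
Editor's note: a minimal sketch of the torch.distributed.checkpoint module the talk covers, using its PyTorch 2.1-era entry points (assumes ranks launched with torchrun; FSDP wrapping omitted for brevity):

    import torch
    import torch.distributed as dist
    import torch.distributed.checkpoint as dcp

    dist.init_process_group("gloo")
    model = torch.nn.Linear(8, 8)  # stand-in for an FSDP-wrapped model

    # Each rank writes its shard of the state dict in parallel.
    state = {"model": model.state_dict()}
    dcp.save_state_dict(state, dcp.FileSystemWriter("/tmp/ckpt"))

    # Load-time resharding: the checkpoint can be read back under a
    # different world size or parallelism.
    dcp.load_state_dict(state, dcp.FileSystemReader("/tmp/ckpt"))
    model.load_state_dict(state["model"])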
Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
{
  "description": "The session will highlight the new features of PyTorch 2.0 and how to get started with PyTorch 2.0 and Hugging Face Transformers today. It will cover how to fine-tune a BERT model for Text Classification using the newest PyTorch 2.0 features.",
  "duration": 1211,
  "language": "eng",
  "recorded": "2023-10-16",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://pytorch.org/event/pytorch-conference-2023/"
    }
  ],
  "speakers": [
    "Philipp Schmid"
  ],
  "tags": [],
  "thumbnail_url": "https://i.ytimg.com/vi/GYQTJnD-yjQ/maxresdefault.jpg",
  "title": "Getting Started with Pytorch 2.0 and Hugging Face Transformers",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=GYQTJnD-yjQ"
    }
  ]
}
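
Editor's note: condensed to its core, the fine-tuning recipe the session describes looks something like this (a hedged sketch; data loading and the optimizer loop are omitted):

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)
    model = torch.compile(model)  # the headline PyTorch 2.0 feature

    batch = tokenizer(["a great talk"], return_tensors="pt")
    loss = model(**batch, labels=torch.tensor([1])).loss
    loss.backward()  # a standard optimizer step completes the training loop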
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
{
  "description": "Four years after its initial release, PyTorch Lightning has become one of the most used deep learning frameworks. It has been adopted by tens of thousands of companies and academic groups, helping to drive the adoption of PyTorch to where it is today. The advent of generative AI has led to new challenges in training large models, and PyTorch Lightning has empowered the industry by making many of the latest innovations accessible and robust. PyTorch Lightning powers state-of-the-art generative AI models like StableDiffusion and SDXL. It has been adopted in exciting new directions for LLMs like Hyena Hierarchy and State Space models, as well as the new RWKV recurrent architecture. On top of all that, the PyTorch Lightning-based NVIDIA NeMo, which includes NeMo Megatron, is enabling companies to train LLMs with up to hundreds of billions of parameters. In this talk we will explore generative AI applications powered by PyTorch Lightning, and cover the latest PyTorch Lightning 2.0 features that make working with large models easy. We will also discuss how Lightning Fabric powers lit-gpt, which has been adopted as the starter kit for the recent LLM Efficiency Challenge at NeurIPS 2023.",
  "duration": 1522,
  "language": "eng",
  "recorded": "2023-10-16",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://pytorch.org/event/pytorch-conference-2023/"
    }
  ],
  "speakers": [
    "Luca Antiga",
    "Carlos Mocholi",
    "Adrian Walchli"
  ],
  "tags": [],
  "thumbnail_url": "https://i.ytimg.com/vi/pfdeWgNup2Y/sddefault.jpg",
  "title": "Into Generative AI with PyTorch Lightning 2.0",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=pfdeWgNup2Y"
    }
  ]
}
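
Editor's note: Lightning Fabric, mentioned at the end of the abstract, scales a raw PyTorch loop with minimal changes; a small illustrative sketch, assuming the lightning 2.x package:

    import torch
    from lightning.fabric import Fabric

    fabric = Fabric(accelerator="auto", devices=1)
    fabric.launch()

    model = torch.nn.Linear(32, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    model, optimizer = fabric.setup(model, optimizer)

    x = fabric.to_device(torch.randn(8, 32))
    y = fabric.to_device(torch.randint(0, 2, (8,)))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    fabric.backward(loss)  # replaces loss.backward(); handles precision/devices
    optimizer.step()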
@@ -0,0 +1,25 @@
{
  "description": "This high-level presentation focuses on the technological advancements in PyTorch Edge, our on-device AI stack. We will provide an overview of the current market landscape and delve into PyTorch Edge's architecture, unique differentiators, and design trade-offs. Discover how PyTorch Edge bridges the gap between research and production, offering performance, portability, and productivity for on-device AI applications.",
  "duration": 1327,
  "language": "eng",
  "recorded": "2023-10-16",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://pytorch.org/event/pytorch-conference-2023/"
    }
  ],
  "speakers": [
    "Mergen Nachin",
    "Orion Reblitz-Richardson"
  ],
  "tags": [],
  "thumbnail_url": "https://i.ytimg.com/vi/9U9MNbNcu-w/maxresdefault.jpg",
  "title": "Introducing ExecuTorch from PyTorch Edge: On-Device AI Stack and Ecosystem, and Our Unique Differentiators",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=9U9MNbNcu-w"
    }
  ]
}
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
{
  "description": "Artificial Intelligence (AI) is a rapidly evolving field with diverse applications, and AMD is at the forefront of this revolution, offering a wide-ranging portfolio of AI solutions. In this keynote talk, learn about AMD’s extensive portfolio of AI solutions from Cloud to Edge to Endpoints and their support for PyTorch framework. We will also showcase the growing AI ecosystem around AMD solutions facilitating a rich experience for AI users. \nBy the end of this talk, you will learn how to leverage the synergy of AMD and PyTorch to create amazing generative AI applications with ease and efficiency.",
  "duration": 177,
  "language": "eng",
  "recorded": "2023-10-16",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://pytorch.org/event/pytorch-conference-2023/"
    }
  ],
  "speakers": [
    "Negin Oliver"
  ],
  "tags": [
    "Keynote"
  ],
  "thumbnail_url": "https://i.ytimg.com/vi/jwdI_95C8wY/maxresdefault.jpg",
  "title": "Keynote: AMD & PyTorch: A Powerful Combination for Generative AI",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=jwdI_95C8wY"
    }
  ]
}
