Commit af7be93

Update video metadata for PyTorch Conference 2024
- Refined descriptions by removing speaker names and organization details.
- Extracted and added speaker names for several talks.
- Cleaned up titles by removing speaker names and unnecessary prefixes.
1 parent 45ab6cd commit af7be93

5 files changed: +18 −15 lines
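The cleanup pattern applied across these files is mechanical: each record's description begins with a "Title - Speakers, Org" header line before the real abstract, the speakers field is a "TODO" placeholder, and the title carries the same speaker suffix. A minimal sketch of how such records could be cleaned in bulk (the `clean_record` name and the splitting heuristics are illustrative, not part of this commit):

```python
def clean_record(record: dict) -> dict:
    """Strip the leading 'Title - Speakers, Org' line from a talk's
    description, recover speaker names from it, and drop the speaker
    suffix from the title. Heuristic sketch only."""
    desc = record.get("description", "")
    header, sep, body = desc.partition("\n\n")
    if sep and " - " in header:
        # e.g. "Linsong Chu & Antoni Viros i Martin, IBM Research; Brian Vaughan, IBM"
        speaker_part = header.rpartition(" - ")[2]
        names = []
        for chunk in speaker_part.split(";"):
            chunk = chunk.split(",")[0]          # drop ", Organization"
            names.extend(n.strip() for n in chunk.split("&"))
        record["description"] = body
        if record.get("speakers") in ([], ["TODO"]):
            record["speakers"] = [n for n in names if n]
    title = record.get("title", "")
    if " - " in title:
        # keep everything before the last " - " separator
        record["title"] = title.rpartition(" - ")[0]
    return record
```

Real-world titles and speaker lists are irregular (abbreviated names, bracketed prefixes, multiple organizations), so results would still need the manual review this commit reflects.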

pytorchconf-2024/videos/maximizing-training-throughput-using-torch-compile-and-fsdp-l-chu-a-viros-i-martin-b-vaughan.json

Lines changed: 5 additions & 3 deletions
@@ -1,5 +1,5 @@
 {
-"description": "Maximizing Training Throughput Using Torch.Compile and FSDP - Linsong Chu & Antoni Viros i Martin, IBM Research; Brian Vaughan, IBM\n\ntorch.compile is a graph compilation technique that improves GPU utilization. A key challenge in getting torch.compile to perform well is to minimize (or eliminate) graph breaks, however, this isn't trivial as even the Llama implementation provided by Meta has many graph breaks resulting in reduced training throughput. In this talk we discuss 1. how we addressed these challenges in order to train a model using torch.compile 2. how we combined torch.compile with FSDP and selective activation checkpointing to achieve the maximum throughput for training 3. model quality comparison between models trained with compile and no-compile, and lastly 4. the best setup we have for different model sizes in the Llama family that achieves the maximum throughput and MFU number (e.g. 68% MFU for the 7B model on A100 GPUs!)",
+"description": "torch.compile is a graph compilation technique that improves GPU utilization. A key challenge in getting torch.compile to perform well is to minimize (or eliminate) graph breaks, however, this isn't trivial as even the Llama implementation provided by Meta has many graph breaks resulting in reduced training throughput. In this talk we discuss 1. how we addressed these challenges in order to train a model using torch.compile 2. how we combined torch.compile with FSDP and selective activation checkpointing to achieve the maximum throughput for training 3. model quality comparison between models trained with compile and no-compile, and lastly 4. the best setup we have for different model sizes in the Llama family that achieves the maximum throughput and MFU number (e.g. 68% MFU for the 7B model on A100 GPUs!)",
 "duration": 220,
 "language": "eng",
 "recorded": "2024-09-18",
@@ -10,11 +10,13 @@
 }
 ],
 "speakers": [
-"TODO"
+"Linsong Chu",
+"Antoni Viros i Martin",
+"Brian Vaughan"
 ],
 "tags": [],
 "thumbnail_url": "https://i.ytimg.com/vi_webp/_CuLeABf_fM/maxresdefault.webp",
-"title": "Maximizing Training Throughput Using Torch.Compile and FSDP - L. Chu, A. Viros i Martin, B. Vaughan",
+"title": "Maximizing Training Throughput Using Torch.Compile and FSDP",
 "videos": [
 {
 "type": "youtube",

pytorchconf-2024/videos/meta-llama-3-and-the-future-of-responsible-ai-development-spencer-whitman-vincent-gonguet-meta.json

Lines changed: 4 additions & 3 deletions
@@ -1,5 +1,5 @@
 {
-"description": "Meta Llama 3 and the Future of Responsible AI Development - Spencer Whitman & Vincent Gonguet, Meta\n\nAs AI models become increasingly powerful and pervasive, trust and safety have become top priorities. Join us for a timely talk on Llama 3, our latest foundation model, and the cutting-edge trust and safety models and tools we've developed to ensure responsible AI development. In this talk, we'll dive into: \u2022The advancements of Llama 3 and its applications \u2022Our innovative trust and safety approaches, including toxicity detection and mitigation \u2022The open-source tools and resources we're sharing to empower the community Discover how Meta is pushing the boundaries of trust and safety and learn how you can integrate these solutions into your own projects. Let's build a safer, more responsible AI future together!",
+"description": "As AI models become increasingly powerful and pervasive, trust and safety have become top priorities. Join us for a timely talk on Llama 3, our latest foundation model, and the cutting-edge trust and safety models and tools we've developed to ensure responsible AI development. In this talk, we'll dive into: \u2022The advancements of Llama 3 and its applications \u2022Our innovative trust and safety approaches, including toxicity detection and mitigation \u2022The open-source tools and resources we're sharing to empower the community Discover how Meta is pushing the boundaries of trust and safety and learn how you can integrate these solutions into your own projects. Let's build a safer, more responsible AI future together!",
 "duration": 1251,
 "language": "eng",
 "recorded": "2024-09-18",
@@ -10,11 +10,12 @@
 }
 ],
 "speakers": [
-"TODO"
+"Spencer Whitman",
+"Vincent Gonguet"
 ],
 "tags": [],
 "thumbnail_url": "https://i.ytimg.com/vi_webp/XOIuFIl2-Ao/maxresdefault.webp",
-"title": "Meta Llama 3 and the Future of Responsible AI Development - Spencer Whitman & Vincent Gonguet, Meta",
+"title": "Meta Llama 3 and the Future of Responsible AI Development",
 "videos": [
 {
 "type": "youtube",

pytorchconf-2024/videos/mlir-enabling-composition-of-kernels-and-compilers-jacques-pienaar-google.json

Lines changed: 3 additions & 3 deletions
@@ -1,5 +1,5 @@
 {
-"description": "[MLIR] Enabling Composition of Kernels and Compilers - Jacques Pienaar, Google\n\nHand written kernels and compilers have been part of the toolbox to provide efficient and broad coverage. These approaches have often been positioned as being at odds with one another - and indeed the software solutions either side have sometimes made it such. MLIR, since inception, aimed to enable general, beneficial composition instead. Rather than treating kernels as a black box escape hatch, treat it as a peer in solving the serving needs. This is not magic and requires consideration of how best to combine. In this talk I'll present the approach and effect of this both in IREE and OpenXLA.",
+"description": "Hand written kernels and compilers have been part of the toolbox to provide efficient and broad coverage. These approaches have often been positioned as being at odds with one another - and indeed the software solutions either side have sometimes made it such. MLIR, since inception, aimed to enable general, beneficial composition instead. Rather than treating kernels as a black box escape hatch, treat it as a peer in solving the serving needs. This is not magic and requires consideration of how best to combine. In this talk I'll present the approach and effect of this both in IREE and OpenXLA.",
 "duration": 672,
 "language": "eng",
 "recorded": "2024-09-18",
@@ -10,11 +10,11 @@
 }
 ],
 "speakers": [
-"TODO"
+"Jacques Pienaar"
 ],
 "tags": [],
 "thumbnail_url": "https://i.ytimg.com/vi_webp/Dx1fAE9fk8s/maxresdefault.webp",
-"title": "[MLIR] Enabling Composition of Kernels and Compilers - Jacques Pienaar, Google",
+"title": "[MLIR] Enabling Composition of Kernels and Compilers",
 "videos": [
 {
 "type": "youtube",

pytorchconf-2024/videos/mojo-lifting-pt-to-new-heights-with-max-and-mojo-mikhail-zolotukhin-modular.json

Lines changed: 3 additions & 3 deletions
@@ -1,5 +1,5 @@
 {
-"description": "[MOJO] Lifting PT to New Heights with MAX and Mojo - Mikhail Zolotukhin, Modular\n\nIn this talk we'll peek into Modular's inference engine: how it builds on and works with PyTorch and what is unique about it. We will look into how Mojo language can be used to define performant kernels and what optimizations the inference engine can perform. We will also talk briefly about our experience of developing a third party backend for torch.compile.",
+"description": "In this talk we'll peek into Modular's inference engine: how it builds on and works with PyTorch and what is unique about it. We will look into how Mojo language can be used to define performant kernels and what optimizations the inference engine can perform. We will also talk briefly about our experience of developing a third party backend for torch.compile.",
 "duration": 572,
 "language": "eng",
 "recorded": "2024-09-18",
@@ -10,11 +10,11 @@
 }
 ],
 "speakers": [
-"TODO"
+"Mikhail Zolotukhin"
 ],
 "tags": [],
 "thumbnail_url": "https://i.ytimg.com/vi_webp/JmHKhc6EGpg/maxresdefault.webp",
-"title": "[MOJO] Lifting PT to New Heights with MAX and Mojo - Mikhail Zolotukhin, Modular",
+"title": "[MOJO] Lifting PT to New Heights with MAX and Mojo",
 "videos": [
 {
 "type": "youtube",

pytorchconf-2024/videos/welcome-to-the-pytorch-ecosystem-for-llm-fine-tuning-mini-summit-kartikay-khandelwal-meta.json

Lines changed: 3 additions & 3 deletions
@@ -1,5 +1,5 @@
 {
-"description": "Welcome to the PyTorch Ecosystem for LLM Fine-tuning Mini Summit - Kartikay Khandelwal, Meta\n\nAs open-source LLMs have become more capable, a substantial ecosystem has developed around the fine-tuning of these models. A thriving community of researchers, developers, practitioners and hobbyists has emerged which focuses on topics ranging from memory efficiency, parameter-efficient fine-tuning and quantization to performance at scale and reproducible evaluations. The goal of this mini-summit is to bring this community together to discuss ideas, share knowledge and build connections.\n\nThe agenda features a keynote from Joe Spisak on the state of the Llama ecosystem followed by invited talks from the founders of Axolotl, Unsloth and torchtune. We conclude the summit with a riveting discussion on what\u2019s next for LLMs, fine-tuning and the PyTorch ecosystem with a fabulous panel of experts - Tim Dettmers (author of bitsandbytes and QLoRA), Hailey Schoelkopf (maintainer of LM Eval Harness at EleutherAI), Aakanksha Chowdhery (Lead author on PaLM and Gemini) and Alexis Conneau (Research Lead at OpenAI)",
+"description": "As open-source LLMs have become more capable, a substantial ecosystem has developed around the fine-tuning of these models. A thriving community of researchers, developers, practitioners and hobbyists has emerged which focuses on topics ranging from memory efficiency, parameter-efficient fine-tuning and quantization to performance at scale and reproducible evaluations. The goal of this mini-summit is to bring this community together to discuss ideas, share knowledge and build connections.\n\nThe agenda features a keynote from Joe Spisak on the state of the Llama ecosystem followed by invited talks from the founders of Axolotl, Unsloth and torchtune. We conclude the summit with a riveting discussion on what\u2019s next for LLMs, fine-tuning and the PyTorch ecosystem with a fabulous panel of experts - Tim Dettmers (author of bitsandbytes and QLoRA), Hailey Schoelkopf (maintainer of LM Eval Harness at EleutherAI), Aakanksha Chowdhery (Lead author on PaLM and Gemini) and Alexis Conneau (Research Lead at OpenAI)",
 "duration": 81,
 "language": "eng",
 "recorded": "2024-09-18",
@@ -10,11 +10,11 @@
 }
 ],
 "speakers": [
-"TODO"
+"Kartikay Khandelwal"
 ],
 "tags": [],
 "thumbnail_url": "https://i.ytimg.com/vi_webp/Pe_VT5ReB3U/maxresdefault.webp",
-"title": "Welcome to the PyTorch Ecosystem for LLM Fine-tuning Mini Summit - Kartikay Khandelwal, Meta",
+"title": "Welcome to the PyTorch Ecosystem for LLM Fine-tuning Mini Summit",
 "videos": [
 {
 "type": "youtube",
