Commit d535ad5

Add blog post "Accelerating Whisper on Arm with PyTorch and Hugging Face Transformers" (#1975)

Signed-off-by: Chris Abraham <[email protected]> -- Squash and Merge by Andrew Bringaze at the request of Bazil Sterling & Jennifer Bly

1 parent dad2ac1 commit d535ad5

File tree

1 file changed: +39 −0 lines changed

---
layout: blog_detail
title: "Accelerating Whisper on Arm with PyTorch and Hugging Face Transformers"
author: Pareena Verma, Arm
---
Automatic speech recognition (ASR) has revolutionized how we interact with technology, paving the way for applications like real-time audio transcription, voice assistants, and accessibility tools. OpenAI Whisper is a powerful ASR model capable of multilingual speech recognition and translation.
8+
9+
A new Arm Learning Path is now available that explains how to accelerate Whisper on Arm-based cloud instances using PyTorch and Hugging Face transformers.
**Why Run Whisper on Arm?**
Arm processors are popular in cloud infrastructure for their efficiency, performance, and cost-effectiveness. With major cloud providers such as AWS, Azure, and Google Cloud offering Arm-based instances, running machine learning workloads on this architecture is becoming increasingly attractive.
**What You’ll Learn**
The [Arm Learning Path](https://learn.arm.com/learning-paths/servers-and-cloud-computing/whisper/) provides a structured approach to setting up and accelerating Whisper on Arm-based cloud instances. Here’s what you’ll cover:
**1. Set Up Your Environment**
Before running Whisper, you must set up your development environment. The learning path walks you through setting up an Arm-based cloud instance and installing all dependencies, such as PyTorch, Transformers, and ffmpeg.
**2. Run Whisper with PyTorch and Hugging Face Transformers**
Once the environment is ready, you will use the Hugging Face Transformers library with PyTorch to load and execute Whisper for speech-to-text conversion. The tutorial provides a step-by-step approach for processing audio files and generating transcripts.
**3. Measure and Evaluate Performance**
To ensure efficient execution, you will learn how to measure transcription speed and compare different optimization techniques. The guide provides insights into interpreting performance metrics and making informed decisions about your deployment.
**Try it Yourself**
Upon completion of this tutorial, you will know how to:
* Deploy Whisper on an Arm-based cloud instance.
* Implement performance optimizations for efficient execution.
* Evaluate transcription speeds and optimize further based on results.
**Try the live demo today** and see audio transcription in action on Arm: [Whisper on Arm Demo](https://learn.arm.com/learning-paths/servers-and-cloud-computing/whisper/_demo/).
