
Commit b9c2f00

Svetlana Karslioglu authored and malfet committed

Remove PyTorch Enterprise

1 parent 8f2a0ee commit b9c2f00

9 files changed (+4, -154 lines)

_get_started/installation/azure.md (-9)

@@ -9,15 +9,6 @@ Azure [provides](https://azure.microsoft.com/en-us/services/machine-learning-ser
 * dedicated, pre-built [machine learning virtual machines](https://azure.microsoft.com/en-us/services/virtual-machines/data-science-virtual-machines/){:target="_blank"}, complete with PyTorch.
 * bare Linux and Windows virtual machines for you to do a custom install of PyTorch.
 
-## PyTorch Enterprise on Azure
-{: #pytorch-enterprise-on-azure}
-
-Microsoft is one of the founding members and also the inaugural participant of the [PyTorch Enterprise Support Program](https://pytorch.org/enterprise-support-program). Microsoft offers PyTorch Enterprise on Azure as a part of Microsoft [Premier](https://www.microsoft.com/en-us/msservices/premier-support) and [Unified](https://www.microsoft.com/en-us/msservices/unified-support-solutions?activetab=pivot1:primaryr4) Support. The PyTorch Enterprise support service includes long-term support to selected versions of PyTorch for up to 2 years, prioritized troubleshooting, and the latest integration with [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning/) and other PyTorch add-ons including ONNX Runtime for faster inference.
-
-To learn more and get started with PyTorch Enterprise on Microsoft Azure, [visit here](https://azure.microsoft.com/en-us/develop/pytorch/).
-
-For documentation, [visit here](https://docs.microsoft.com/en-us/azure/pytorch-enterprise/).
-
 ## Azure Primer
 {: #microsoft-azure-primer}
 

_includes/quick-start-module.js (+1, -2)

@@ -259,14 +259,13 @@ $("[data-toggle='cloud-dropdown']").on("click", function(e) {
 
 function commandMessage(key) {
   var object = {{ installMatrix }};
-  var lts_notice = "<div class='alert-secondary'><b>Note</b>: Additional support for these binaries may be provided by <a href='/enterprise-support-program' style='font-size:100%'>PyTorch Enterprise Support Program Participants</a>.</div>";
 
   if (!object.hasOwnProperty(key)) {
     $("#command").html(
       "<pre> # Follow instructions at this URL: https://github.com/pytorch/pytorch#from-source </pre>"
     );
   } else if (key.indexOf("lts") == 0 && key.indexOf('rocm') < 0) {
-    $("#command").html("<pre>" + object[key] + "</pre>" + lts_notice);
+    $("#command").html("<pre>" + object[key] + "</pre>");
   } else {
     $("#command").html("<pre>" + object[key] + "</pre>");
   }
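After this hunk, the LTS branch renders the same plain `<pre>` block as every other known key, since the `lts_notice` suffix is gone. A minimal sketch of that selection logic, assuming a plain function in place of the jQuery/`{{ installMatrix }}` plumbing and a made-up install matrix for illustration:

```javascript
// Sketch of commandMessage's selection logic after this commit.
// Assumption: jQuery's $("#command").html(...) side effect is replaced by a
// returned string, and `matrix` stands in for the Jekyll-injected installMatrix.
function buildCommandHtml(matrix, key) {
  if (!Object.prototype.hasOwnProperty.call(matrix, key)) {
    // Unknown configuration: fall back to the from-source instructions.
    return "<pre> # Follow instructions at this URL: https://github.com/pytorch/pytorch#from-source </pre>";
  }
  // With lts_notice removed, LTS keys take the same path as everything else.
  return "<pre>" + matrix[key] + "</pre>";
}

// Hypothetical matrix entries, not the real installMatrix contents.
const matrix = {
  "stable,pip,linux,cuda,python": "pip3 install torch torchvision",
  "lts,pip,linux,cuda,python": "pip3 install torch==1.8.2",
};

buildCommandHtml(matrix, "lts,pip,linux,cuda,python");
// → "<pre>pip3 install torch==1.8.2</pre>" (no trailing notice div)
```

The behavioral change is only in the LTS branch: previously its output ended with the `alert-secondary` notice markup, now all known keys produce the bare command block.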

_includes/quick_start_cloud_options.html (+1, -2)

@@ -45,8 +45,7 @@
 <div class="cloud-option-row">
   <div class="cloud-option" data-toggle="cloud-dropdown">
     <div class="cloud-option-body microsoft-azure" id="microsoft-azure">
-      <p>Microsoft Azure -</p>
-      <span>PyTorch Enterprise Program</span>
+      <p>Microsoft Azure</p>
     </div>
 
     <ul>

_includes/quick_start_local.html (-2)

@@ -4,8 +4,6 @@
 package manager since it installs all dependencies. You can also
 <a href="{{ site.baseurl }}/get-started/previous-versions">install previous versions of PyTorch</a>. Note that LibTorch is only available for C++.
 </p>
-<p>Additional support or warranty for some PyTorch Stable and LTS binaries are available through the <a href="/enterprise-support-program">PyTorch Enterprise Support Program</a>.
-</p>
 
 <div class="row">
   <div class="col-md-3 headings">

_posts/2021-5-25-announcing-pytorch-enterprise.md (-27)

This file was deleted.

_posts/2021-8-3-pytorch-profiler-1.9-released.md (+1, -1)

@@ -207,7 +207,7 @@ For how to optimize batch size performance, check out the step-by-step tutorial
 ## What’s Next for the PyTorch Profiler?
 You just saw how PyTorch Profiler can help optimize a model. You can now try the Profiler by ```pip install torch-tb-profiler``` to optimize your PyTorch model.
 
-Look out for an advanced version of this tutorial in the future. If you want tailored enterprise-grade support for this, check out [PyTorch Enterprise on Azure](https://azure.microsoft.com/en-us/develop/pytorch/). We are also thrilled to continue to bring state-of-the-art tool to PyTorch users to improve ML performance. We'd love to hear from you. Feel free to open an issue [here](https://github.com/pytorch/kineto/issues).
+Look out for an advanced version of this tutorial in the future. We are also thrilled to continue to bring state-of-the-art tool to PyTorch users to improve ML performance. We'd love to hear from you. Feel free to open an issue [here](https://github.com/pytorch/kineto/issues).
 
 For new and exciting features coming up with PyTorch Profiler, follow @PyTorch on Twitter and check us out on pytorch.org.

_posts/2022-5-12-ambient-clinical-intelligence-generating-medical-reports-with-pytorch.md (+1, -1)

@@ -261,7 +261,7 @@ Our journey in deploying the report generation models reflects the above discuss
 
 ### A maturing ecosystem
 
-Is it all roses? No, it has been a rockier journey than we expected. We encountered what seems to be a memory leak in the MKL libraries used by PyTorch while serving the PyTorch code directly. We encountered deadlocks in trying to load multiple models from multiple threads. We had difficulties exporting our models to ONNX and TorchScript formats. Models would not work out-of-the-box on hardware with multiple GPUs, they always accessed the particular GPU device on which they were exported. We encountered excessive memory usage in the Triton inference server while serving TorchScript models, which we found out was due to automatic differentiation accidentally being enabled during the forward pass. However, the ecosystem keeps improving, and there is a helpful and vibrant open-source community eager to work with us to mitigate such issues. Finally, for those of us that require enterprise-level support, Microsoft now offers Premier Support for use of PyTorch on Azure.
+Is it all roses? No, it has been a rockier journey than we expected. We encountered what seems to be a memory leak in the MKL libraries used by PyTorch while serving the PyTorch code directly. We encountered deadlocks in trying to load multiple models from multiple threads. We had difficulties exporting our models to ONNX and TorchScript formats. Models would not work out-of-the-box on hardware with multiple GPUs, they always accessed the particular GPU device on which they were exported. We encountered excessive memory usage in the Triton inference server while serving TorchScript models, which we found out was due to automatic differentiation accidentally being enabled during the forward pass. However, the ecosystem keeps improving, and there is a helpful and vibrant open-source community eager to work with us to mitigate such issues.
 
 Where to go from here? For those that require the flexibility of serving PyTorch code directly, without going through the extra step of exporting self-contained models, it is worth pointing out that the TorchServe project now provides a way of bundling the code together with parameter checkpoints into a single servable archive, greatly reducing the risk of code and parameters running apart. To us, however, exporting models to TorchScript has proven beneficial. It provides a clear interface between modeling and deployment teams, and TorchScript further reduces the latency when serving models on GPU via its just-in-time compilation engine.

_resources/enterprise.md (-7)

This file was deleted.

enterprise/enterprise_landing.html (-103)

This file was deleted.

0 commit comments