
Commit 4090d0d
Author: Adrian Tsai

Add DirectML Execution Provider (microsoft#2057)

This change adds a new execution provider powered by [DirectML](https://aka.ms/DirectML). DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning on Windows. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers. The DirectML execution provider is capable of greatly improving evaluation time of models using commodity GPU hardware, without sacrificing broad hardware support or requiring vendor-specific extensions to be installed.

**Note** that the DML EP code was moved verbatim from the existing WindowsAI project, which is why it doesn't yet conform to the onnxruntime coding style. This is something that can be fixed later; we would like to keep formatting/whitespace changes to a minimum for the time being to make it easier to port fixes from WindowsAI to ORT during this transition.

Summary of changes:
* Initial commit of DML EP files under onnxruntime/core/providers/dml
* Add cmake entries for building the DML EP and for pulling down the DirectML redist using nuget
* Add a submodule dependency on the Windows Implementation Library (WIL)
* Add docs under docs/execution_providers/DirectML-ExecutionProvider.md
* Add support for DML EP to provider tests and perf tests
* Add support for DML EP to fns_candy_style_transfer sample
* Add entries to the C ABI for instantiating the DML EP

1 parent b101f1b

File tree: 150 files changed, +29239 −36 lines


.gitmodules (+4)

@@ -43,3 +43,7 @@
 [submodule "cmake/external/onnx-tensorrt"]
 	path = cmake/external/onnx-tensorrt
 	url = https://github.com/onnx/onnx-tensorrt.git
+[submodule "cmake/external/wil"]
+	path = cmake/external/wil
+	url = https://github.com/microsoft/wil
+

BUILD.md (+11)

@@ -87,6 +87,7 @@ The complete list of build options can be found by running `./build.sh (or ./bui
 * [Intel OpenVINO](#openvino)
 * [Android NNAPI](#Android)
 * [Nuphar](#Nuphar)
+* [DirectML](#DirectML)
 
 **Options**
 * [OpenMP](#OpenMP)
@@ -387,6 +388,16 @@ For Linux (e.g. Ubuntu 16.04), install libopenblas-dev package
 
 ---
 
+### DirectML
+
+To build onnxruntime with the [DirectML execution provider](./docs/execution_providers/DirectML-ExecutionProvider.md) included, supply the `--use_dml` parameter to build.bat. e.g.
+
+    build.bat --use_dml
+
+The DirectML execution provider supports building for both x64 and x86 architectures. DirectML is only supported on Windows.
+
+---
+
 ## Architectures
 ### x86
 - For Windows, just add --x86 argument when launching build.bat

NuGet.config (+10)

@@ -0,0 +1,10 @@
+<?xml version="1.0" encoding="utf-8"?>
+<configuration>
+  <solution>
+    <add key="disableSourceControlIntegration" value="true" />
+  </solution>
+  <packageSources>
+    <add key="NuGet Official" value="https://api.nuget.org/v3/index.json" />
+    <add key="onnxruntime_public" value="https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime_public/nuget/v3/index.json" />
+  </packageSources>
+</configuration>

README.md (+1)

@@ -51,6 +51,7 @@ ONNX Runtime supports both CPU and GPU. Using various graph optimizations and ac
 
 Currently ONNX Runtime supports the following accelerators:
 * MLAS (Microsoft Linear Algebra Subprograms)
+* [DirectML](./docs/execution_providers/DirectML-ExecutionProvider.md)
 * [MKL-DNN](./docs/execution_providers/MKL-DNN-ExecutionProvider.md) - [subgraph optimization](./docs/execution_providers/MKL-DNN-Subgraphs.md)
 * MKL-ML
 * [Intel nGraph](./docs/execution_providers/nGraph-ExecutionProvider.md)

ThirdPartyNotices.txt (+26)

@@ -3769,3 +3769,29 @@ LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
 ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+-----
+
+microsoft/wil
+
+MIT License
+
+Copyright (c) Microsoft Corporation. All rights reserved.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE

cgmanifest.json (+10 −1)

@@ -390,8 +390,17 @@
       "type": "git"
     }
   },
-  {
+  {
     "component": {
+      "git": {
+        "commitHash": "e8c599bca6c56c44b6730ad93f6abbc9ecd60fc1",
+        "repositoryUrl": "https://github.com/microsoft/wil"
+      },
+      "type": "git"
+    }
+  },
+  {
+    "component":{
       "type": "other",
       "Other": {
         "Name": "Go",

cmake/CMakeLists.txt (+10)

@@ -83,6 +83,7 @@ option(onnxruntime_USE_EIGEN_THREADPOOL "Use eigen threadpool. Otherwise OpenMP
 option(tensorflow_C_PACKAGE_PATH "Path to tensorflow C package installation dir")
 option(onnxruntime_ENABLE_LANGUAGE_INTEROP_OPS "Enable operator implemented in language other than cpp" OFF)
 option(onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS "Dump node input shapes and output data to standard output when executing the model." OFF)
+option(onnxruntime_USE_DML "Build with DirectML support" OFF)
 
 set(protobuf_BUILD_TESTS OFF CACHE BOOL "Build protobuf tests" FORCE)
 #nsync tests failed on Mac Build
@@ -653,6 +654,15 @@ if (onnxruntime_ENABLE_MICROSOFT_INTERNAL)
   add_definitions(-DMICROSOFT_INTERNAL)
 endif()
 
+if (onnxruntime_USE_DML)
+  if(NOT WIN32)
+    message(FATAL_ERROR "The DirectML execution provider is only supported when building for Windows.")
+  endif()
+
+  add_definitions(-DUSE_DML=1)
+  include(dml)
+endif()
+
 #names in this var must match the directory names under onnxruntime/core/providers
 set(ONNXRUNTIME_PROVIDER_NAMES cpu)
 
cmake/external/dml.cmake (+39, new file)

@@ -0,0 +1,39 @@
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License.
+
+if (NOT onnxruntime_USE_CUSTOM_DIRECTML)
+  if (NOT(MSVC) OR NOT(WIN32))
+    message(FATAL_ERROR "NuGet packages are only supported for MSVC on Windows.")
+  endif()
+
+  # Retrieve the latest version of nuget
+  include(ExternalProject)
+  ExternalProject_Add(nuget
+    PREFIX nuget
+    URL "https://dist.nuget.org/win-x86-commandline/v5.3.0/nuget.exe"
+    DOWNLOAD_NO_EXTRACT 1
+    CONFIGURE_COMMAND ""
+    BUILD_COMMAND ""
+    UPDATE_COMMAND ""
+    INSTALL_COMMAND "")
+
+  set(NUGET_CONFIG ${PROJECT_SOURCE_DIR}/../NuGet.config)
+  set(PACKAGES_CONFIG ${PROJECT_SOURCE_DIR}/../packages.config)
+  set(PACKAGES_DIR ${CMAKE_CURRENT_BINARY_DIR}/packages)
+
+  # Restore nuget packages, which will pull down the DirectML redist package
+  add_custom_command(
+    OUTPUT restore_packages.stamp
+    DEPENDS ${PACKAGES_CONFIG} ${NUGET_CONFIG}
+    COMMAND ${CMAKE_CURRENT_BINARY_DIR}/nuget/src/nuget restore ${PACKAGES_CONFIG} -PackagesDirectory ${PACKAGES_DIR} -ConfigFile ${NUGET_CONFIG}
+    COMMAND ${CMAKE_COMMAND} -E touch restore_packages.stamp
+    VERBATIM)
+
+  add_custom_target(RESTORE_PACKAGES ALL DEPENDS restore_packages.stamp)
+  add_dependencies(RESTORE_PACKAGES nuget)
+
+  list(APPEND onnxruntime_EXTERNAL_DEPENDENCIES RESTORE_PACKAGES)
+else()
+  include_directories(${dml_INCLUDE_DIR})
+  link_directories(${dml_LIB_DIR})
+endif()
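For context, `dml.cmake` above restores NuGet packages listed in a repository-root `packages.config`, which is not shown in this commit view. Based on the `packages/DirectML.0.0.1` path referenced later in `cmake/onnxruntime_providers.cmake`, that file presumably contains an entry along these lines. This is an illustrative sketch only, not part of the commit; the package id and version are inferred, and the real file may differ:

    <?xml version="1.0" encoding="utf-8"?>
    <packages>
      <!-- Hypothetical entry: id/version inferred from the DirectML.0.0.1 package path -->
      <package id="DirectML" version="0.0.1" targetFramework="native" />
    </packages>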

cmake/external/wil

Submodule wil added at e8c599b

cmake/onnxruntime.cmake (+1)

@@ -64,6 +64,7 @@ target_link_libraries(onnxruntime PRIVATE
     ${PROVIDERS_TENSORRT}
     ${PROVIDERS_OPENVINO}
     ${PROVIDERS_NUPHAR}
+    ${PROVIDERS_DML}
     onnxruntime_optimizer
     onnxruntime_providers
     onnxruntime_util

cmake/onnxruntime_graph.cmake (+7)

@@ -21,6 +21,13 @@ if(NOT onnxruntime_USE_AUTOML)
   )
 endif()
 
+if(NOT onnxruntime_USE_DML)
+  list(REMOVE_ITEM onnxruntime_graph_src
+    "${ONNXRUNTIME_ROOT}/core/graph/dml_ops/*.h"
+    "${ONNXRUNTIME_ROOT}/core/graph/dml_ops/*.cc"
+  )
+endif()
+
 file(GLOB_RECURSE onnxruntime_ir_defs_src CONFIGURE_DEPENDS
   "${ONNXRUNTIME_ROOT}/core/defs/*.cc"
 )

cmake/onnxruntime_providers.cmake (+37)

@@ -68,6 +68,10 @@ if(onnxruntime_USE_NNAPI)
   set(PROVIDERS_NNAPI onnxruntime_providers_nnapi)
   list(APPEND ONNXRUNTIME_PROVIDER_NAMES nnapi)
 endif()
+if(onnxruntime_USE_DML)
+  set(PROVIDERS_DML onnxruntime_providers_dml)
+  list(APPEND ONNXRUNTIME_PROVIDER_NAMES dml)
+endif()
 source_group(TREE ${ONNXRUNTIME_ROOT}/core FILES ${onnxruntime_providers_common_srcs} ${onnxruntime_providers_srcs})
 
 set(onnxruntime_providers_src ${onnxruntime_providers_common_srcs} ${onnxruntime_providers_srcs})
@@ -492,6 +496,39 @@ if (onnxruntime_USE_NNAPI)
   set_target_properties(onnxruntime_providers_nnapi PROPERTIES LINKER_LANGUAGE CXX)
 endif()
 
+if (onnxruntime_USE_DML)
+  file(GLOB_RECURSE onnxruntime_providers_dml_cc_srcs CONFIGURE_DEPENDS
+    "${ONNXRUNTIME_ROOT}/core/providers/dml/*.h"
+    "${ONNXRUNTIME_ROOT}/core/providers/dml/*.cpp"
+    "${ONNXRUNTIME_ROOT}/core/providers/dml/*.cc"
+  )
+  source_group(TREE ${ONNXRUNTIME_ROOT}/core FILES ${onnxruntime_providers_dml_cc_srcs})
+  add_library(onnxruntime_providers_dml ${onnxruntime_providers_dml_cc_srcs})
+  onnxruntime_add_include_to_target(onnxruntime_providers_dml onnxruntime_common onnxruntime_framework onnx onnx_proto protobuf::libprotobuf)
+  add_dependencies(onnxruntime_providers_dml ${onnxruntime_EXTERNAL_DEPENDENCIES})
+  target_include_directories(onnxruntime_providers_dml PRIVATE ${ONNXRUNTIME_ROOT} ${ONNXRUNTIME_ROOT}/../cmake/external/wil/include)
+
+  target_link_libraries(onnxruntime_providers_dml ${CMAKE_CURRENT_BINARY_DIR}/packages/DirectML.0.0.1/build/DirectML.targets)
+  target_link_libraries(onnxruntime_providers_dml d3d12.lib dxgi.lib)
+  set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} /DELAYLOAD:DirectML.dll /DELAYLOAD:d3d12.dll /DELAYLOAD:dxgi.dll")
+
+  # The DML EP requires C++17
+  set_target_properties(onnxruntime_providers_dml PROPERTIES CXX_STANDARD 17)
+  set_target_properties(onnxruntime_providers_dml PROPERTIES CXX_STANDARD_REQUIRED ON)
+
+  target_compile_definitions(onnxruntime_providers_dml PRIVATE ONNX_NAMESPACE=onnx ONNX_ML LOTUS_LOG_THRESHOLD=2 LOTUS_ENABLE_STDERR_LOGGING PLATFORM_WINDOWS)
+  target_compile_definitions(onnxruntime_providers_dml PRIVATE UNICODE _UNICODE NOMINMAX)
+  if (MSVC)
+    target_compile_definitions(onnxruntime_providers_dml PRIVATE _SILENCE_CXX17_ITERATOR_BASE_CLASS_DEPRECATION_WARNING)
+    target_compile_options(onnxruntime_providers_dml PRIVATE "/W3")
+  endif()
+
+  install(DIRECTORY ${PROJECT_SOURCE_DIR}/../include/onnxruntime/core/providers/dml DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/onnxruntime/core/providers)
+
+  set_target_properties(onnxruntime_providers_dml PROPERTIES LINKER_LANGUAGE CXX)
+  set_target_properties(onnxruntime_providers_dml PROPERTIES FOLDER "ONNXRuntime")
+endif()
+
 if (onnxruntime_ENABLE_MICROSOFT_INTERNAL)
   include(onnxruntime_providers_internal.cmake)
 endif()

cmake/onnxruntime_python.cmake (+1)

@@ -74,6 +74,7 @@ set(onnxruntime_pybind11_state_libs
     ${PROVIDERS_OPENVINO}
     ${PROVIDERS_NUPHAR}
    ${PROVIDERS_NNAPI}
+    ${PROVIDERS_DML}
     onnxruntime_optimizer
     onnxruntime_providers
     onnxruntime_util

cmake/onnxruntime_unittests.cmake (+5)

@@ -219,6 +219,10 @@ if(onnxruntime_USE_AUTOML)
   list(APPEND onnxruntime_test_providers_dependencies automl_featurizers)
 endif()
 
+if(onnxruntime_USE_DML)
+  list(APPEND onnxruntime_test_providers_dependencies onnxruntime_providers_dml)
+endif()
+
 file(GLOB_RECURSE onnxruntime_test_tvm_src CONFIGURE_DEPENDS
   "${ONNXRUNTIME_ROOT}/test/tvm/*.h"
   "${ONNXRUNTIME_ROOT}/test/tvm/*.cc"
@@ -250,6 +254,7 @@ set(ONNXRUNTIME_TEST_LIBS
     ${PROVIDERS_OPENVINO}
     ${PROVIDERS_NUPHAR}
    ${PROVIDERS_NNAPI}
+    ${PROVIDERS_DML}
     onnxruntime_optimizer
     onnxruntime_providers
     onnxruntime_util
docs/execution_providers/DirectML-ExecutionProvider.md (+110, new file)

@@ -0,0 +1,110 @@
+# DirectML Execution Provider (Preview)
+
+DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning on Windows. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers.
+
+When used standalone, the DirectML API is a low-level DirectX 12 library and is suitable for high-performance, low-latency applications such as frameworks, games, and other real-time applications. The seamless interoperability of DirectML with Direct3D 12, as well as its low overhead and conformance across hardware, makes DirectML ideal for accelerating machine learning when both high performance is desired and the reliability and predictability of results across hardware is critical.
+
+The *DirectML Execution Provider* is an optional component of ONNX Runtime that uses DirectML to accelerate inference of ONNX models. The DirectML execution provider is capable of greatly improving evaluation time of models using commodity GPU hardware, without sacrificing broad hardware support or requiring vendor-specific extensions to be installed.
+
+The DirectML Execution Provider is currently in preview.
+
+## Table of contents
+
+- [DirectML Execution Provider (Preview)](#directml-execution-provider-preview)
+- [Table of contents](#table-of-contents)
+- [Minimum requirements](#minimum-requirements)
+- [Building from source](#building-from-source)
+- [Using the DirectML execution provider](#using-the-directml-execution-provider)
+- [`OrtSessionOptionsAppendExecutionProvider_DML` function](#ortsessionoptionsappendexecutionproviderdml-function)
+- [`OrtSessionOptionsAppendExecutionProviderEx_DML` function](#ortsessionoptionsappendexecutionproviderexdml-function)
+- [ONNX opset support](#onnx-opset-support)
+- [Multi-threading and supported session options](#multi-threading-and-supported-session-options)
+- [Samples](#samples)
+- [See also](#see-also)
+
+## Minimum requirements
+
+The DirectML execution provider requires any DirectX 12 capable device. Almost all commercially-available graphics cards released in the last several years support DirectX 12. Examples of compatible hardware include:
+
+* NVIDIA Kepler (GTX 600 series) and above
+* AMD GCN 1st Gen (Radeon HD 7000 series) and above
+* Intel Haswell (4th-gen core) HD Integrated Graphics and above
+
+DirectML is compatible with Windows 10, version 1709 (10.0.16299; RS3, "Fall Creators Update") and newer.
+
+## Building from source
+
+For general information about building onnxruntime, see [BUILD.md](../../BUILD.md).
+
+Requirements for building the DirectML execution provider:
+1. Visual Studio 2017 toolchain (see [cmake configuration instructions](../../BUILD.md))
+2. [The Windows 10 SDK (10.0.18362.0) for Windows 10, version 1903](https://developer.microsoft.com/en-us/windows/downloads/windows-10-sdk) (or newer)
+
+To build onnxruntime with the DML EP included, supply the `--use_dml` parameter to `build.bat`. e.g.
+
+    build.bat --config RelWithDebInfo --build_shared_lib --parallel --use_dml
+
+The DirectML execution provider supports building for both x64 (default) and x86 architectures.
+
+Note that building onnxruntime with the DirectML execution provider enabled causes the DirectML redistributable package to be automatically downloaded as part of the build. This package contains a pre-release version of DirectML, and its use is governed by a license whose text may be found as part of the NuGet package.
+
+## Using the DirectML execution provider
+
+When using the [C API](../C_API.md) with a DML-enabled build of onnxruntime (see [Building from source](#building-from-source)), the DirectML execution provider can be enabled using one of the two factory functions included in `include/onnxruntime/core/providers/dml/dml_provider_factory.h`.
+
+### `OrtSessionOptionsAppendExecutionProvider_DML` function
+
+Creates a DirectML Execution Provider which executes on the hardware adapter with the given `device_id`, also known as the adapter index. The device ID corresponds to the enumeration order of hardware adapters as given by [IDXGIFactory::EnumAdapters](https://docs.microsoft.com/windows/win32/api/dxgi/nf-dxgi-idxgifactory-enumadapters). A `device_id` of 0 always corresponds to the default adapter, which is typically the primary display GPU installed on the system. A negative `device_id` is invalid.
+
+    OrtStatus* OrtSessionOptionsAppendExecutionProvider_DML(
+        _In_ OrtSessionOptions* options,
+        int device_id
+    );
+
+### `OrtSessionOptionsAppendExecutionProviderEx_DML` function
+
+Creates a DirectML Execution Provider using the given DirectML device, and which executes work on the supplied D3D12 command queue. The DirectML device and D3D12 command queue must have the same parent [ID3D12Device](https://docs.microsoft.com/windows/win32/api/d3d12/nn-d3d12-id3d12device), or an error will be returned. The D3D12 command queue must be of type `DIRECT` or `COMPUTE` (see [D3D12_COMMAND_LIST_TYPE](https://docs.microsoft.com/windows/win32/api/d3d12/ne-d3d12-d3d12_command_list_type)). If this function succeeds, the inference session once created will maintain a strong reference on both the `dml_device` and `command_queue` objects.
+
+    OrtStatus* OrtSessionOptionsAppendExecutionProviderEx_DML(
+        _In_ OrtSessionOptions* options,
+        _In_ IDMLDevice* dml_device,
+        _In_ ID3D12CommandQueue* cmd_queue
+    );
+
+**See Also**
+
+[DMLCreateDevice function](https://docs.microsoft.com/windows/win32/api/directml/nf-directml-dmlcreatedevice)
+[ID3D12Device::CreateCommandQueue method](https://docs.microsoft.com/windows/win32/api/d3d12/nf-d3d12-id3d12device-createcommandqueue)
+[Direct3D 12 programming guide](https://docs.microsoft.com/windows/win32/direct3d12/directx-12-programming-guide)
+
+### ONNX opset support
+
+The DirectML execution provider currently supports ONNX opset 9 ([ONNX v1.4](https://github.com/onnx/onnx/releases/tag/v1.4.0)). Evaluating models which require a higher opset version is not supported, and may produce unexpected results.
+
+### Multi-threading and supported session options
+
+The DirectML execution provider does not support the use of memory pattern optimizations or parallel execution in onnxruntime. When supplying session options during InferenceSession creation, these options must be disabled or an error will be returned.
+
+If using the onnxruntime C API, you must call the `DisableMemPattern` and `SetSessionExecutionMode` functions to set the options required by the DirectML execution provider.
+
+See [onnxruntime/include/onnxruntime/core/session/onnxruntime_c_api.h](../../include/onnxruntime/core/session/onnxruntime_c_api.h).
+
+    OrtStatus*(ORT_API_CALL* DisableMemPattern)(_Inout_ OrtSessionOptions* options) NO_EXCEPTION;
+
+    OrtStatus*(ORT_API_CALL* SetSessionExecutionMode)(_Inout_ OrtSessionOptions* options, ExecutionMode execution_mode) NO_EXCEPTION;
+
+If creating the onnxruntime InferenceSession object directly, you must set the appropriate fields on the `onnxruntime::SessionOptions` struct. Specifically, `execution_mode` must be set to `ExecutionMode::ORT_SEQUENTIAL`, and `enable_mem_pattern` must be `false`.
+
+Additionally, as the DirectML execution provider does not support parallel execution, it does not support multi-threaded calls to `Run` on the same inference session. That is, if an inference session is using the DirectML execution provider, only one thread may call `Run` at a time. Multiple threads are permitted to call `Run` simultaneously if they operate on different inference session objects.
+
+## Samples
+
+A complete sample of onnxruntime using the DirectML execution provider can be found under [samples/c_cxx/fns_candy_style_transfer](../../samples/c_cxx/fns_candy_style_transfer).
+
+## See also
+
+[DirectML documentation \(docs.microsoft.com\)](https://docs.microsoft.com/en-us/windows/win32/direct3d12/dml)
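The session-option requirements described in DirectML-ExecutionProvider.md above (disable memory patterns, force sequential execution, then append the DML EP) can be combined into a short C sketch. This is illustrative only and not part of the commit: it assumes a DML-enabled build exposing the struct-based `OrtApi`, the helper name `create_dml_session` is hypothetical, and all error handling is elided (each call shown returns an `OrtStatus*` that real code must check):

    #include "onnxruntime_c_api.h"
    #include "core/providers/dml/dml_provider_factory.h"

    /* Sketch: create an inference session configured for the DML EP (no error handling). */
    void create_dml_session(OrtEnv* env, const ORTCHAR_T* model_path, OrtSession** session) {
      const OrtApi* ort = OrtGetApiBase()->GetApi(ORT_API_VERSION);

      OrtSessionOptions* session_options;
      ort->CreateSessionOptions(&session_options);

      /* Required by the DML EP: no memory pattern optimization, sequential execution */
      ort->DisableMemPattern(session_options);
      ort->SetSessionExecutionMode(session_options, ORT_SEQUENTIAL);

      /* Attach the DML EP on the default adapter (device_id 0) */
      OrtSessionOptionsAppendExecutionProvider_DML(session_options, 0);

      ort->CreateSession(env, model_path, session_options, session);
      ort->ReleaseSessionOptions(session_options);
    }

Recall from the doc above that only one thread may call `Run` on the resulting session at a time.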

include/onnxruntime/core/graph/constants.h (+2 −1)

@@ -20,6 +20,7 @@ constexpr const char* kMLDomain = "ai.onnx.ml";
 constexpr const char* kMSDomain = "com.microsoft";
 constexpr const char* kMSNchwcDomain = "com.microsoft.nchwc";
 constexpr const char* kMSAutoMLDomain = "com.microsoft.automl";
+constexpr const char* kMSDmlDomain = "com.microsoft.dml";
 constexpr const char* kNGraphDomain = "com.intel.ai";
 constexpr const char* kCpuExecutionProvider = "CPUExecutionProvider";
 constexpr const char* kCudaExecutionProvider = "CUDAExecutionProvider";
@@ -30,5 +31,5 @@ constexpr const char* kNupharExecutionProvider = "NupharExecutionProvider";
 constexpr const char* kBrainSliceExecutionProvider = "BrainSliceExecutionProvider";
 constexpr const char* kTensorrtExecutionProvider = "TensorrtExecutionProvider";
 constexpr const char* kNnapiExecutionProvider = "NnapiExecutionProvider";
+constexpr const char* kDmlExecutionProvider = "DmlExecutionProvider";
 }  // namespace onnxruntime
-
