Commit b9fb5b0
[web] add doc for importing and webgpu config/flags (#311)

* [web] add doc for importing
* [web] add webgpu config/flags

1 parent 05a8137 commit b9fb5b0

9 files changed: +206 additions, -0 deletions

js/README.md

Lines changed: 8 additions & 0 deletions

@@ -22,6 +22,14 @@ Click links for README of each examples.

* [Quick Start - Web (using bundler)](quick-start_onnxruntime-web-bundler) - a demonstration of basic usage of ONNX Runtime Web using a bundler.

### Importing

* [Importing - Node.js Binding](importing_onnxruntime-node) - a demonstration of how to import the ONNX Runtime Node.js binding.
* [Importing - Web](importing_onnxruntime-web) - a demonstration of how to import ONNX Runtime Web.
* [Importing - React Native](importing_onnxruntime-react-native) - a demonstration of how to import ONNX Runtime React Native.

### API usage

* [API usage - Tensor](api-usage_tensor) - a demonstration of basic usage of `Tensor`.

js/api-usage_ort-env-flags/README.md

Lines changed: 15 additions & 0 deletions

@@ -62,6 +62,21 @@ ort.env.webgl.pack = true;

See also [WebGL flags](https://onnxruntime.ai/docs/api/js/interfaces/Env.WebGLFlags.html) in the API reference document.

### WebGPU flags (ONNX Runtime Web)

WebGPU flags are used to customize the behavior of the WebGPU execution provider.

The following are some example code snippets:

```js
// enable WebGPU profiling.
ort.env.webgpu.profilingMode = 'default';

// get the GPU device object.
const device = ort.env.webgpu.device;
```

See also [WebGPU flags](https://onnxruntime.ai/docs/api/js/interfaces/Env.WebGpuFlags.html) in the API reference document.

### SessionOptions vs. ort.env

Both `SessionOptions` and `ort.env` allow you to specify configurations for inference behavior. The biggest difference between them: `SessionOptions` is set per inference session instance, while `ort.env` is set globally.
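The scoping difference can be illustrated with a small plain-JavaScript sketch. Note that `env`, `createSession`, and `logLevel` here are hypothetical names for illustration only, not the real ONNX Runtime implementation: the module-level object plays the role of `ort.env`, and per-call options play the role of `SessionOptions`.

```js
// hypothetical sketch of global vs. per-session configuration scope
const env = { logLevel: 'warning' }; // global, like ort.env

function createSession(options = {}) {
  // per-session options, like SessionOptions; they win over the global value
  return { logLevel: options.logLevel ?? env.logLevel };
}

const s1 = createSession();                        // inherits the global setting
const s2 = createSession({ logLevel: 'verbose' }); // overrides for this session only

console.log(s1.logLevel); // 'warning'
console.log(s2.logLevel); // 'verbose'
```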

js/api-usage_session-options/README.md

Lines changed: 12 additions & 0 deletions

@@ -21,6 +21,7 @@ An [execution provider](https://onnxruntime.ai/docs/reference/execution-provider

| `dml` | GPU (DirectML) | onnxruntime-node (Windows) |
| `wasm` | CPU (WebAssembly) | onnxruntime-web, onnxruntime-node |
| `webgl` | GPU (WebGL) | onnxruntime-web |
| `webgpu` | GPU (WebGPU) | onnxruntime-web |

The execution provider is specified by `sessionOptions.executionProviders`. Multiple EPs can be specified, and the first available one will be used.

@@ -64,6 +65,17 @@ const sessionOption = { executionProviders: ['wasm'] };

```js
// [ONNX Runtime Web example] Use WebGL EP.
const sessionOption = { executionProviders: ['webgl'] };
```

```js
// [ONNX Runtime Web example] Use WebGPU EP.
const sessionOption = { executionProviders: ['webgpu'] };

// [ONNX Runtime Web example] Use WebGPU EP with extra config.
const sessionOption2 = {
  executionProviders: [{
    name: 'webgpu',
    preferredLayout: 'NCHW'
  }]
};
```

### Other common options

There are also some other options available for all EPs.
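The fallback rule above ("the first available one will be used") can be sketched with a small helper. `resolveExecutionProvider` is a hypothetical name for illustration only and is not part of the ONNX Runtime API:

```js
// hypothetical helper mimicking the documented EP fallback rule:
// the first *available* EP in the requested list wins.
function resolveExecutionProvider(requested, available) {
  for (const ep of requested) {
    // entries may be plain names ('webgpu') or objects ({ name: 'webgpu', ... })
    const name = typeof ep === 'string' ? ep : ep.name;
    if (available.has(name)) return name;
  }
  throw new Error('no requested execution provider is available');
}

// e.g. a browser without WebGPU support falls back to WebAssembly:
const ep = resolveExecutionProvider(
  [{ name: 'webgpu', preferredLayout: 'NCHW' }, 'wasm'],
  new Set(['wasm', 'webgl'])
);
console.log(ep); // 'wasm'
```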
Lines changed: 14 additions & 0 deletions

@@ -0,0 +1,14 @@
# Importing ONNX Runtime Node.js binding

## Summary

This example demonstrates how to import the ONNX Runtime Node.js binding in your project.

## Usage

Use the following code snippet to import the ONNX Runtime Node.js binding:

```js
// CommonJS import syntax
const ort = require('onnxruntime-node');
```
Lines changed: 19 additions & 0 deletions

@@ -0,0 +1,19 @@
# Importing ONNX Runtime React Native

## Summary

This example demonstrates how to import ONNX Runtime React Native in your project.

## Usage

Use the following code snippet to import ONNX Runtime React Native:

```js
// CommonJS import syntax
const ort = require('onnxruntime-react-native');
```

```js
// ES Module import syntax
import * as ort from 'onnxruntime-react-native';
```
Lines changed: 83 additions & 0 deletions

@@ -0,0 +1,83 @@
# Importing ONNX Runtime Web

## Summary

This example demonstrates how to import ONNX Runtime Web in your project.

ONNX Runtime Web can be consumed either by using a script tag in HTML or by using a modern web app framework with a bundler.

## Usage - Using a script tag in HTML

Use the following HTML snippet to import ONNX Runtime Web:

```html
<!-- import ONNX Runtime Web from CDN (IIFE) -->
<script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.min.js"></script>
```

```html
<!-- import ONNX Runtime Web from CDN (ESM) -->
<script type="module">
  import * as ort from "https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/esm/ort.min.js";

  // use "ort"
  // ...
</script>
```

See also [Quick Start - Web (using script tag)](../quick-start_onnxruntime-web-script-tag) for an example of using a script tag.

## Usage - Using a bundler

Use the following code snippet to import ONNX Runtime Web:

```js
// CommonJS import syntax
const ort = require('onnxruntime-web');
```

```js
// ES Module import syntax
import * as ort from 'onnxruntime-web';
```

See also [Quick Start - Web (using bundler)](../quick-start_onnxruntime-web-bundler) for an example of using a bundler.

### Conditional Importing

ONNX Runtime Web supports conditional importing. Refer to the following table:

| Description | IIFE Filename | CommonJS / ES Module import path |
|-------------|---------------|----------------------------------|
| Default import. Includes all officially released features | ort.min.js | `onnxruntime-web` |
| Experimental. Includes all features | ort.all.min.js | `onnxruntime-web/experimental` |
| Wasm. Includes the WebAssembly backend only | ort.wasm.min.js | `onnxruntime-web/wasm` |
| Wasm-core. Includes the WebAssembly backend with core features only; proxy support and multi-thread support are excluded | ort.wasm-core.min.js | `onnxruntime-web/wasm-core` |
| WebGL. Includes the WebGL backend only | ort.webgl.min.js | `onnxruntime-web/webgl` |
| WebGPU. Includes the WebGPU backend only | ort.webgpu.min.js | `onnxruntime-web/webgpu` |
| Training. Includes WebAssembly single-threaded only, with training support | ort.training.wasm.min.js | `onnxruntime-web/training` |

Use the following syntax to import a different target:

* For script tag usage, replace the URL's file name with one from the "IIFE Filename" column above:

  ```html
  <script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/<file-name>"></script>
  ```

  ```html
  <script type="module">
    import * as ort from "https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/esm/<file-name>";

    // use "ort"
    // ...
  </script>
  ```

* For CommonJS module usage, use

  ```js
  const ort = require('<path>');
  ```

  with the path from the "CommonJS / ES Module import path" column above.

* For ES Module usage, use

  ```js
  import * as ort from '<path>';
  ```

  with the path from the "CommonJS / ES Module import path" column above.
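For illustration, the conditional-importing table above can be encoded as a small lookup. `importTargets` and `cdnUrl` are hypothetical helper names; the table remains the source of truth:

```js
// hypothetical lookup built from the conditional-importing table
const importTargets = {
  default:      { iife: 'ort.min.js',               module: 'onnxruntime-web' },
  experimental: { iife: 'ort.all.min.js',           module: 'onnxruntime-web/experimental' },
  wasm:         { iife: 'ort.wasm.min.js',          module: 'onnxruntime-web/wasm' },
  'wasm-core':  { iife: 'ort.wasm-core.min.js',     module: 'onnxruntime-web/wasm-core' },
  webgl:        { iife: 'ort.webgl.min.js',         module: 'onnxruntime-web/webgl' },
  webgpu:       { iife: 'ort.webgpu.min.js',        module: 'onnxruntime-web/webgpu' },
  training:     { iife: 'ort.training.wasm.min.js', module: 'onnxruntime-web/training' },
};

// build a CDN script URL for a given target (IIFE column):
function cdnUrl(target) {
  return `https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/${importTargets[target].iife}`;
}

console.log(cdnUrl('webgpu'));
// https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.webgpu.min.js
```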

js/quick-start_onnxruntime-web-bundler/main.js

Lines changed: 2 additions & 0 deletions

@@ -1,6 +1,8 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

// see also advanced usage of importing ONNX Runtime Web:
// https://github.com/microsoft/onnxruntime-inference-examples/tree/main/js/importing_onnxruntime-web
const ort = require('onnxruntime-web');

// use an async context to call onnxruntime functions.

js/quick-start_onnxruntime-web-script-tag/index.html

Lines changed: 3 additions & 0 deletions

@@ -4,6 +4,9 @@
  <title>ONNX Runtime JavaScript examples: Quick Start - Web (using script tag)</title>
</header>
<body>
  <!-- see also advanced usage of importing ONNX Runtime Web: -->
  <!-- https://github.com/microsoft/onnxruntime-inference-examples/tree/main/js/importing_onnxruntime-web -->

  <!-- import ONNXRuntime Web from CDN -->
  <script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.min.js"></script>
  <script>
Lines changed: 50 additions & 0 deletions

@@ -0,0 +1,50 @@
<!DOCTYPE html>
<html>
<head>
  <title>ONNX Runtime JavaScript examples: Quick Start - Web (using script tag)</title>
</head>
<body>
  <script type="module">
    // see also advanced usage of importing ONNX Runtime Web:
    // https://github.com/microsoft/onnxruntime-inference-examples/tree/main/js/importing_onnxruntime-web

    // import ONNX Runtime Web from CDN
    import * as ort from "https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/esm/ort.min.js";
    // set wasm path override
    ort.env.wasm.wasmPaths = "https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/";

    // use an async context to call onnxruntime functions.
    async function main() {
      try {
        // create a new session and load the specific model.
        //
        // the model in this example contains a single MatMul node
        // it has 2 inputs: 'a' (float32, 3x4) and 'b' (float32, 4x3)
        // it has 1 output: 'c' (float32, 3x3)
        const session = await ort.InferenceSession.create('./model.onnx');

        // prepare inputs. a tensor needs its corresponding TypedArray as data
        const dataA = Float32Array.from([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]);
        const dataB = Float32Array.from([10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120]);
        const tensorA = new ort.Tensor('float32', dataA, [3, 4]);
        const tensorB = new ort.Tensor('float32', dataB, [4, 3]);

        // prepare feeds. use model input names as keys.
        const feeds = { a: tensorA, b: tensorB };

        // feed inputs and run
        const results = await session.run(feeds);

        // read from results
        const dataC = results.c.data;
        document.write(`data of result tensor 'c': ${dataC}`);
      } catch (e) {
        document.write(`failed to run inference on the ONNX model: ${e}.`);
      }
    }

    main();
  </script>
</body>
</html>
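The result the page prints can be cross-checked without ONNX Runtime by multiplying the same inputs in plain JavaScript. This is a sketch; `matmul` is a naive row-major helper written for this check, not part of the example:

```js
// plain-JS check of the MatMul the example model computes: C (3x3) = A (3x4) x B (4x3)
function matmul(a, b, m, k, n) {
  const c = new Float32Array(m * n);
  for (let i = 0; i < m; i++) {
    for (let j = 0; j < n; j++) {
      let sum = 0;
      for (let p = 0; p < k; p++) sum += a[i * k + p] * b[p * n + j];
      c[i * n + j] = sum;
    }
  }
  return c;
}

// same inputs as the page above
const dataA = Float32Array.from([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]);
const dataB = Float32Array.from([10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120]);
const expectedC = matmul(dataA, dataB, 3, 4, 3);

console.log(Array.from(expectedC));
// [ 700, 800, 900, 1580, 1840, 2100, 2460, 2880, 3300 ]
```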
