
Commit 29e29a5

vdice and itowlson committed
Update content/v1/ai-sentiment-analysis-api-tutorial.md
Co-authored-by: itowlson <[email protected]>
Signed-off-by: Vaughn Dice <[email protected]>
1 parent fa94aae commit 29e29a5

3 files changed: +3 −3 lines changed


content/v1/ai-sentiment-analysis-api-tutorial.md

Lines changed: 1 addition & 1 deletion
@@ -887,7 +887,7 @@ route = "/internal/kv-explorer/..."
 
 ### Building and Deploying Your Spin Application
 
-**Note:** Running inferencing on localhost (your CPU) is not as optimal as deploying to Fermyon Cloud's Serverless AI (where inferencing is performed by high-powered GPUs). You can skip this `spin build --up` step and move straight to `spin cloud deploy` if you:
+**Note:** Running inferencing on localhost (your CPU) is not as optimal as deploying to a dedicated Serverless AI platform (where inferencing is performed by high-powered GPUs). You can skip this `spin build --up` step and move straight to deployment if you:
 
 - a) are using one of the 3 supported models above,
 - b) have configured your `spin.toml` file to explicitly configure the model (as shown above)
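For context, the explicit model configuration the note refers to is the `ai_models` grant in the component's `spin.toml`. A minimal sketch, assuming the tutorial's `llama2-chat` model and the Spin 1.x manifest format — the app name, component id, source path, and route here are illustrative, not taken from this diff:

```toml
spin_manifest_version = "1"
name = "sentiment-analysis"
version = "0.1.0"
trigger = { type = "http", base = "/" }

[[component]]
id = "sentiment-analysis"
source = "target/sentiment-analysis.wasm"
# Grant the component access to the model; without this line,
# the component's LLM calls are denied at runtime.
ai_models = ["llama2-chat"]
# The tutorial also uses the default key/value store
# (visible in the v2 hunk header below).
key_value_stores = ["default"]
[component.trigger]
route = "/api/..."
```

With a grant like this in place, `spin build --up` builds and serves the app locally (CPU inferencing), while the deployment step the note describes hands inferencing off to the platform's GPUs.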

content/v2/ai-sentiment-analysis-api-tutorial.md

Lines changed: 1 addition & 1 deletion
@@ -1048,7 +1048,7 @@ key_value_stores = ["default"]
 
 ### Building and Deploying Your Spin Application
 
-**Note:** Running inferencing on localhost (your CPU) is not as optimal as deploying to Fermyon Cloud's Serverless AI (where inferencing is performed by high-powered GPUs). You can skip this `spin build --up` step and move straight to `spin cloud deploy` if you:
+**Note:** Running inferencing on localhost (your CPU) is not as optimal as deploying to a dedicated Serverless AI platform (where inferencing is performed by high-powered GPUs). You can skip this `spin build --up` step and move straight to deployment if you:
 
 - a) are using one of the 3 supported models above,
 - b) have configured your `spin.toml` file to explicitly configure the model (as shown above)

content/v3/ai-sentiment-analysis-api-tutorial.md

Lines changed: 1 addition & 1 deletion
@@ -1069,7 +1069,7 @@ files = [{ source = "assets", destination = "/" }]
 
 ### Building and Deploying Your Spin Application
 
-**Note:** Running inferencing on localhost (your CPU) is not as optimal as deploying to Fermyon Cloud's Serverless AI (where inferencing is performed by high-powered GPUs). You can skip this `spin build --up` step and move straight to `spin cloud deploy` if you:
+**Note:** Running inferencing on localhost (your CPU) is not as optimal as deploying to a dedicated Serverless AI platform (where inferencing is performed by high-powered GPUs). You can skip this `spin build --up` step and move straight to deployment if you:
 
 - a) are using one of the 3 supported models above,
 - b) have configured your `spin.toml` file to explicitly configure the model (as shown above)
