**NBSETUP.md** (+3 −14)
# Set up your notebook environment for Azure Machine Learning

To run the notebooks in this repository, use one of the following options.
Azure Notebooks is a hosted Jupyter-based notebook service in the Azure cloud. A…

1. Follow the instructions in the [Configuration](configuration.ipynb) notebook to create and connect to a workspace
1. Open one of the sample notebooks

**Make sure the Azure Notebook kernel is set to `Python 3.6`** when you open a notebook by choosing Kernel > Change Kernel > Python 3.6 from the menus.
```sh
# install the base SDK and a Jupyter notebook server
pip install azureml-sdk[notebooks]

# install model explainability component
pip install azureml-sdk[explain]
```
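Once the `pip install` steps above finish, a quick sanity check can confirm the SDK modules are importable from the environment you installed into. This is an illustrative sketch, not part of the Azure ML docs; `sdk_installed` is a hypothetical helper name:

```python
import importlib.util

def sdk_installed(module_name: str) -> bool:
    """Return True if the named module can be found in this environment."""
    try:
        return importlib.util.find_spec(module_name) is not None
    except ModuleNotFoundError:
        # find_spec raises this when a parent package of a dotted name is missing
        return False

# After `pip install azureml-sdk[notebooks]`, sdk_installed("azureml.core")
# should return True in that environment.
```

If the check fails, make sure the Jupyter kernel is using the same Python environment where you ran `pip install`.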
Please make sure you start with the [Configuration](configuration.ipynb) noteboo…

### Video walkthrough:

[!VIDEO https://youtu.be/VIsXeTuW3FU]

## **Option 3: Use Docker**
Now you can point your browser to http://localhost:8887. We recommend that you s…

If you need additional Azure ML SDK components, you can either modify the Docker files before you build the Docker images to add additional steps, or install them through command line in the live container after you build the Docker image. For example:

```sh
# install the core SDK and automated ml components
…
```
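To confirm that the notebook server inside the container is actually reachable on the mapped port, a small TCP probe can help. A minimal sketch; the helper name is made up, and port 8887 is the mapping assumed in the text above:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; True means something is listening there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # connection refused, timed out, or host unreachable
        return False

# e.g. port_open("localhost", 8887) after starting the Docker container
```

If this returns False, check that the container is running and that the port mapping in your `docker run` command matches the port Jupyter listens on.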
**how-to-use-azureml/automated-machine-learning/README.md** (+33 −31)
# Table of Contents
1. [Automated ML Introduction](#introduction)
1. [Setup using Azure Notebooks](#jupyter)
1. [Setup using Azure Databricks](#databricks)
1. [Setup using a Local Conda environment](#localconda)
1. [Automated ML SDK Sample Notebooks](#samples)
1. [Documentation](#documentation)
1. [Running using python command](#pythoncommand)
@@ -13,23 +13,23 @@
13
13
Automated machine learning (automated ML) builds high quality machine learning models for you by automating model and hyperparameter selection. Bring a labelled dataset that you want to build a model for, automated ML will give you a high quality machine learning model that you can use for predictions.
14
14
15
15
16
-
If you are new to Data Science, AutoML will help you get jumpstarted by simplifying machine learning model building. It abstracts you from needing to perform model selection, hyperparameter selection and in one step creates a high quality trained model for you to use.
16
+
If you are new to Data Science, automated ML will help you get jumpstarted by simplifying machine learning model building. It abstracts you from needing to perform model selection, hyperparameter selection and in one step creates a high quality trained model for you to use.
17
17
18
-
If you are an experienced data scientist, AutoML will help increase your productivity by intelligently performing the model and hyperparameter selection for your training and generates high quality models much quicker than manually specifying several combinations of the parameters and running training jobs. AutoML provides visibility and access to all the training jobs and the performance characteristics of the models to help you further tune the pipeline if you desire.
18
+
If you are an experienced data scientist, automated ML will help increase your productivity by intelligently performing the model and hyperparameter selection for your training and generates high quality models much quicker than manually specifying several combinations of the parameters and running training jobs. Automated ML provides visibility and access to all the training jobs and the performance characteristics of the models to help you further tune the pipeline if you desire.
19
19
20
-
Below are the three execution environments supported by AutoML.
20
+
Below are the three execution environments supported by automated ML.
21
21
22
22
23
23
<aname="jupyter"></a>
24
-
## Running samples in Azure Notebooks - Jupyter based notebooks in the Azure cloud
24
+
## Setup using Azure Notebooks - Jupyter based notebooks in the Azure cloud
[Import sample notebooks ](https://aka.ms/aml-clone-azure-notebooks) into Azure Notebooks.
28
28
1. Follow the instructions in the [configuration](../../configuration.ipynb) notebook to create and connect to a workspace.
29
29
1. Open one of the sample notebooks.
30
30
31
31
<aname="databricks"></a>
32
-
## Running samples in Azure Databricks
32
+
## Setup using Azure Databricks
33
33
34
34
**NOTE**: Please create your Azure Databricks cluster as v4.x (high concurrency preferred) with **Python 3** (dropdown).
35
35
**NOTE**: You should at least have contributor access to your Azure subcription to run the notebook.
@@ -39,7 +39,7 @@ Below are the three execution environments supported by AutoML.
39
39
- Attach the notebook to the cluster.
40
40
41
41
<aname="localconda"></a>
42
-
## Running samples in a Local Conda environment
42
+
## Setup using a Local Conda environment
43
43
44
44
To run these notebook on your own notebook server, use these installation instructions.
45
45
The instructions below will install everything you need and then start a Jupyter notebook.
There's no need to install mini-conda specifically.

### 2. Downloading the sample notebooks
- Download the sample notebooks from [GitHub](https://github.com/Azure/MachineLearningNotebooks) as zip and extract the contents to a local directory. The automated ML sample notebooks are in the "automated-machine-learning" folder.

### 3. Setup a new conda environment
The **automl_setup** script creates a new conda environment, installs the necessary packages, configures the widget and starts a jupyter notebook. It takes the conda environment name as an optional parameter. The default conda environment name is azure_automl. The exact command depends on the operating system. See the specific sections below for Windows, Mac and Linux. It can take about 10 minutes to execute.

Packages installed by the **automl_setup** script:
…
For more details refer to the [automl_env.yml](./automl_env.yml).

## Windows
Start an **Anaconda Prompt** window, cd to the **how-to-use-azureml/automated-machine-learning** folder where the sample notebooks were extracted and then run:
…
```
bash automl_setup_linux.sh
```
…

### 5. Running Samples
- Please make sure you use the Python [conda env:azure_automl] kernel when trying the sample Notebooks.
- Follow the instructions in the individual notebooks to explore various features in automated ML.
…
- Simple example of using automated ML for classification with ONNX models
- Uses local compute for training

<a name="documentation"></a>
There are several reasons why the DsvmCompute.create can fail. The reason is us…

2) `The requested VM size xxxxx is not available in the current region.` You can select a different region or vm_size.

## Remote run: Unable to establish SSH connection
Automated ML uses the SSH protocol to communicate with remote DSVMs. This defaults to port 22. Possible causes for this error are:
1) The DSVM is not ready for SSH connections. When DSVM creation completes, the DSVM might still not be ready to accept SSH connections. The sample notebooks have a one minute delay to allow for this.
2) Your Azure Subscription may restrict the IP address ranges that can access the DSVM on port 22. You can check this in the Azure Portal by selecting the Virtual Machine and then clicking Networking. The Virtual Machine name is the name that you provided in the notebook plus 10 alphanumeric characters to make the name unique. The Inbound Port Rules define what can access the VM on specific ports. Note that there is a priority order, so a Deny entry with a low priority number will override an Allow entry with a higher priority number.
This is often an issue with the `get_data` method.

3) You can get to the error log for the setup iteration by clicking the `Click here to see the run in Azure portal` link, click `Back to Experiment`, click on the highest run number and then click on Logs.

## Remote run: disk full
Automated ML creates files under /tmp/azureml_runs for each iteration that it runs. It creates a folder with the iteration id. For example: AutoML_9a038a18-77cc-48f1-80fb-65abdbc33abe_93. Under this, there is an azureml-logs folder, which contains logs. If you run too many iterations on the same DSVM, these files can fill the disk.
You can delete the files under /tmp/azureml_runs or just delete the VM and create a new one.
If your get_data downloads files, make sure to delete them or they can use disk space as well.
When using DataStore, it is good to specify an absolute path for the files so that they are downloaded just once. If you specify a relative path, it will download a file for each iteration.
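The cleanup described above can be scripted. A sketch, assuming the default /tmp/azureml_runs location from the text; `clean_automl_runs` is a hypothetical helper, not part of the SDK:

```python
import shutil
from pathlib import Path

def clean_automl_runs(runs_dir: str = "/tmp/azureml_runs") -> int:
    """Delete per-iteration folders under runs_dir; return how many were removed."""
    root = Path(runs_dir)
    removed = 0
    if root.is_dir():
        for child in root.iterdir():
            if child.is_dir():
                shutil.rmtree(child)  # removes the iteration folder and its azureml-logs
                removed += 1
    return removed
```

Run it between experiments, or simply delete the VM and create a new one as noted above.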
## Remote run: Iterations fail and the log contains "MemoryError"
This can be caused by insufficient memory on the DSVM. Automated ML loads all training data into memory, so the available memory should be more than the training data size.
If you are using a remote DSVM, memory is needed for each concurrent iteration. The max_concurrent_iterations setting specifies the maximum concurrent iterations. For example, if the training data size is 8 GB and max_concurrent_iterations is set to 10, the minimum memory required is at least 80 GB.
To resolve this issue, allocate a DSVM with more memory or reduce the value specified for max_concurrent_iterations.
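The sizing rule above is simple multiplication. A tiny sketch makes it explicit; the helper name is made up, not an SDK API:

```python
def min_dsvm_memory_gb(training_data_gb: float, max_concurrent_iterations: int) -> float:
    """Each concurrent iteration loads the full training data into memory."""
    return training_data_gb * max_concurrent_iterations

# The example from the text: 8 GB of training data with 10 concurrent iterations
print(min_dsvm_memory_gb(8, 10))  # 80
```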
## Remote run: Iterations show as "Not Responding" in the RunDetails widget.
This can be caused by too many concurrent iterations for a remote DSVM. Each concurrent iteration usually takes 100% of a core when it is running. Some iterations can use multiple cores. So, the max_concurrent_iterations setting should always be less than the number of cores of the DSVM.
To resolve this issue, try reducing the value specified for the max_concurrent_iterations setting.
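One way to respect the keep-it-below-the-core-count rule is to derive the setting from the machine itself. A sketch, assuming `os.cpu_count()` reflects the DSVM's cores; the helper name is made up:

```python
import os

def safe_max_concurrent_iterations(requested, cores=None):
    """Cap the requested concurrency below the available core count."""
    cores = cores or os.cpu_count() or 1
    # keep at least 1, but always below the number of cores
    return max(1, min(requested, cores - 1))

# On a 4-core DSVM, a request for 10 concurrent iterations is capped at 3.
```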
"|**n_cross_validations**|Number of cross validation splits.|\n",
165
-
"|<i>Exit Criteria [optional]</i><br><br>iterations<br>experiment_timeout_minutes|An optional duration parameter that says how long AutoML should be run.<br>This could be either number of iterations or number of minutes AutoML is allowed to run. <br><br><i>iterations</i> number of algorithm iterations to run<br><i>experiment_timeout_minutes</i> is the number of minutes that AutoML should run<br><br>By default, this is set to stop whenever AutoML determines that progress in scores is not being made|"
165
+
"|\n",
166
+
"\n",
167
+
"Automated machine learning trains multiple machine learning pipelines. Each pipelines training is known as an iteration.\n",
168
+
"* You can specify a maximum number of iterations using the `iterations` parameter.\n",
169
+
"* You can specify a maximum time for the run using the `experiment_timeout_minutes` parameter.\n",
170
+
"* If you specify neither the `iterations` nor the `experiment_timeout_minutes`, automated ML keeps running iterations while it continues to see improvements in the scores.\n",
171
+
"\n",
172
+
"The following example doesn't specify `iterations` or `experiment_timeout_minutes` and so runs until the scores stop improving.\n"