
Commit 8b8fb5d

docs: adjust notebooks to new API (#770)
### Summary of Changes

The notebooks are now being executed again and work with the new API.
1 parent e993c17 commit 8b8fb5d

17 files changed: +264 −359 lines

.github/README.md (+3 −2)

```diff
@@ -12,8 +12,9 @@ common tasks on small to moderately sized datasets. As such, a major focus is to
 
 Instead of implementing DS methods from scratch, we use established DS libraries under the hood such as:
 
-* [pandas](https://pandas.pydata.org) for manipulation of tabular data,
-* [scikit-learn](https://scikit-learn.org) for machine learning, and
+* [polars](https://docs.pola.rs/) for manipulation of tabular data,
+* [scikit-learn](https://scikit-learn.org) for classical machine learning,
+* [torch](https://pytorch.org) for deep learning, and
 * [seaborn](https://seaborn.pydata.org) for visualization.
 
 For more specialized tasks, we recommend using these or other DS libraries directly.
```

docs/README.md (+3 −2)

```diff
@@ -12,8 +12,9 @@ common tasks on small to moderately sized datasets. As such, a major focus is to
 
 Instead of implementing DS methods from scratch, we use established DS libraries under the hood such as:
 
-* [pandas](https://pandas.pydata.org) for manipulation of tabular data,
-* [scikit-learn](https://scikit-learn.org) for machine learning, and
+* [polars](https://docs.pola.rs/) for manipulation of tabular data,
+* [scikit-learn](https://scikit-learn.org) for classical machine learning,
+* [torch](https://pytorch.org) for deep learning, and
 * [seaborn](https://seaborn.pydata.org) for visualization.
 
 For more specialized tasks, we recommend using these or other DS libraries directly.
```

docs/tutorials/classification.ipynb (+17 −17)

```diff
@@ -22,7 +22,6 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
    "source": [
     "from safeds.data.tabular.containers import Table\n",
     "\n",
@@ -33,7 +32,8 @@
    "metadata": {
     "collapsed": false
    },
-   "outputs": []
+   "outputs": [],
+   "execution_count": null
   },
   {
    "cell_type": "markdown",
```
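The only change in these hunks is that `"execution_count": null` moves from before `"source"` to after `"outputs"`. Since a notebook cell is a JSON object, the reordering is purely a serialization detail; a small stdlib sketch to make that concrete:

```python
import json

# Two serializations of the same notebook code cell: the old layout puts
# "execution_count" before "source", the new one puts it after "outputs".
old_cell = '{"cell_type": "code", "execution_count": null, "source": [], "outputs": []}'
new_cell = '{"cell_type": "code", "source": [], "outputs": [], "execution_count": null}'

# Key order is irrelevant to the parsed content, so this commit changes
# the on-disk diff but not the notebook's meaning.
assert json.loads(old_cell) == json.loads(new_cell)
print("cells are semantically identical")
```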
```diff
@@ -47,7 +47,6 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
    "source": [
     "train_table, testing_table = titanic.split_rows(0.6)\n",
     "\n",
@@ -56,7 +55,8 @@
    "metadata": {
     "collapsed": false
    },
-   "outputs": []
+   "outputs": [],
+   "execution_count": null
   },
   {
    "cell_type": "markdown",
```
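`titanic.split_rows(0.6)` divides the table into a 60% training portion and a 40% testing portion. A minimal stdlib sketch of that semantics (not Safe-DS internals; the real method may also shuffle rows first):

```python
def split_rows(rows, percentage):
    # Split a list of rows at the given fraction, mirroring the
    # train/test split used in the notebook.
    cut = round(len(rows) * percentage)
    return rows[:cut], rows[cut:]

rows = list(range(10))
train, test = split_rows(rows, 0.6)
print(len(train), len(test))  # 6 4
```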
```diff
@@ -72,7 +72,6 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
    "source": [
     "from safeds.data.tabular.transformation import OneHotEncoder\n",
     "\n",
@@ -81,7 +80,8 @@
    "metadata": {
     "collapsed": false
    },
-   "outputs": []
+   "outputs": [],
+   "execution_count": null
   },
   {
    "cell_type": "markdown",
@@ -94,12 +94,12 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
    "source": "transformed_table = encoder.transform(train_table)",
    "metadata": {
     "collapsed": false
    },
-   "outputs": []
+   "outputs": [],
+   "execution_count": null
   },
   {
    "cell_type": "markdown",
```
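In the notebook, `OneHotEncoder().fit(train_table, ["sex"])` learns the categories of the `sex` column, and `transform` replaces the column with 0/1 indicator columns. A hedged pure-Python sketch of what one-hot encoding amounts to (not the Safe-DS implementation):

```python
def fit_one_hot(values):
    # Learn the sorted set of categories seen during fitting.
    return sorted(set(values))

def transform_one_hot(categories, values):
    # Replace each value with a 0/1 indicator vector over the learned
    # categories; unseen handling and column naming are omitted here.
    return [[1 if v == c else 0 for c in categories] for v in values]

categories = fit_one_hot(["male", "female", "male"])
encoded = transform_one_hot(categories, ["female", "male"])
print(categories)  # ['female', 'male']
print(encoded)     # [[1, 0], [0, 1]]
```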
```diff
@@ -110,7 +110,6 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
    "source": [
     "extra_names = [\"id\", \"name\", \"ticket\", \"cabin\", \"port_embarked\", \"age\", \"fare\"]\n",
     "\n",
@@ -119,7 +118,8 @@
    "metadata": {
     "collapsed": false
    },
-   "outputs": []
+   "outputs": [],
+   "execution_count": null
   },
   {
    "cell_type": "markdown",
```
```diff
@@ -130,7 +130,6 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
    "source": [
     "from safeds.ml.classical.classification import RandomForestClassifier\n",
     "\n",
@@ -140,7 +139,8 @@
    "metadata": {
     "collapsed": false
    },
-   "outputs": []
+   "outputs": [],
+   "execution_count": null
   },
   {
    "cell_type": "markdown",
```
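The notebook then fits a `RandomForestClassifier` on the tagged training table. As a sketch of just the fit/predict shape involved (a deliberately trivial majority-class stand-in, not a random forest and not Safe-DS code):

```python
from collections import Counter

class MajorityClassifier:
    # A stand-in with the same fit/predict shape as the notebook's
    # RandomForestClassifier; it simply predicts the most common label.
    def fit(self, features, labels):
        self.majority = Counter(labels).most_common(1)[0][0]
        return self

    def predict(self, features):
        return [self.majority for _ in features]

model = MajorityClassifier().fit([[0], [1], [2]], ["survived", "survived", "died"])
print(model.predict([[3], [4]]))  # ['survived', 'survived']
```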
```diff
@@ -154,7 +154,6 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
    "source": [
     "encoder = OneHotEncoder().fit(test_table, [\"sex\"])\n",
     "transformed_test_table = encoder.transform(test_table)\n",
@@ -163,12 +162,13 @@
     " transformed_test_table\n",
     ")\n",
     "#For visualisation purposes we only print out the first 15 rows.\n",
-    "prediction.to_table().slice_rows(start=0, end=15)"
+    "prediction.to_table().slice_rows(start=0, length=15)"
    ],
    "metadata": {
     "collapsed": false
    },
-   "outputs": []
+   "outputs": [],
+   "execution_count": null
   },
   {
    "cell_type": "markdown",
```
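This hunk contains the actual API change: `slice_rows` now takes a `length` instead of an `end` index. With `start=0` the two are interchangeable, which is why the `15` carries over unchanged; for a nonzero start they select different rows. A stdlib sketch of both semantics:

```python
def slice_rows_old(rows, start, end):
    # Old API: half-open [start, end) index range.
    return rows[start:end]

def slice_rows_new(rows, start, length):
    # New API: take `length` rows beginning at `start`.
    return rows[start:start + length]

rows = list(range(20))
# With start=0 the two calls agree, so end=15 becomes length=15 verbatim.
assert slice_rows_old(rows, 0, 15) == slice_rows_new(rows, 0, 15)
# With a nonzero start they differ: end is an index, length is a count.
print(slice_rows_old(rows, 5, 15))  # rows 5..14
print(slice_rows_new(rows, 5, 15))  # rows 5..19
```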
```diff
@@ -181,7 +181,6 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
    "source": [
     "encoder = OneHotEncoder().fit(test_table, [\"sex\"])\n",
     "testing_table = encoder.transform(testing_table)\n",
@@ -192,7 +191,8 @@
    "metadata": {
     "collapsed": false
    },
-   "outputs": []
+   "outputs": [],
+   "execution_count": null
   }
  ],
  "metadata": {
```

0 commit comments
