Commit b224c44

committed: fix up missing text
1 parent ea78d7e commit b224c44

File tree: 8 files changed (+1293 −1342 lines)


notebooks/04_tuning/gamma_sampler.png

Binary file changed (−204 bytes)

notebooks/04_tuning/learning_curve2.png

Binary file changed (−2.24 KB; mode changed 100755 → 100644)

notebooks/04_tuning/notebook.ipynb

Lines changed: 1279 additions & 1318 deletions
Large diffs are not rendered by default.

notebooks/04_tuning/notebook.jl

Lines changed: 5 additions & 5 deletions

@@ -78,6 +78,7 @@ iterator(r, 5)
 #-

 using Plots
+gr(size=(490,300))
 _, _, lambdas, losses = learning_curve(mach,
                                        range=r,
                                        resampling=CV(nfolds=6),
@@ -142,9 +143,10 @@ r = range(model,
           unit=5,
           scale=:log10)

-# The `scale` in a range makes no in a `RandomSearch` (unless it is a
-# function) but this will effect later plots but it does effect the
-# later plots.
+# The `scale` in a range is ignored in a `RandomSearch`, unless it is a
+# function. (It *is* relevant in a `Grid` search, not demonstrated
+# here.) Note however, the choice of scale *does* affect how later plots
+# will look.

 # Let's see what sampling using a Gamma distribution is going to mean
 # for this range:
@@ -189,8 +191,6 @@ predict(tuned_mach, rows=1:3)
 rep = report(tuned_mach);
 rep.best_model

-# By default, sampling of a bounded range is uniform. Lets
-
 # In the special case of two-parameters, you can also plot the results:

 plt = plot(tuned_mach)
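For context, the behavior the corrected comment describes can be sketched as follows. This is a hypothetical snippet, not taken from the notebook: it assumes MLJ and Distributions are installed, and the bounds and Gamma parameters are made up for illustration.

```julia
using MLJ, Distributions

# A model-free numeric range on a log scale (illustrative bounds):
r = range(Float64, :lambda, lower=1e-6, upper=10, scale=:log10)

# A Grid search respects the scale, e.g. via `iterator`:
iterator(r, 5)            # five log-spaced values between the bounds

# RandomSearch ignores `scale` and samples a bounded range uniformly
# by default; to sample from a Gamma distribution instead, wrap the
# range in a sampler:
s = sampler(r, Gamma(9, 0.5))   # made-up shape/scale parameters
rand(s, 3)
```

Either way, the `scale` still determines how `plot`/`learning_curve` axes are drawn, which is the point the corrected comment makes.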

notebooks/04_tuning/notebook.pluto.jl

Lines changed: 3 additions & 8 deletions

@@ -175,7 +175,7 @@ md"Now for a pipeline model:"
 LogisticClassifier = @load LogisticClassifier pkg=MLJLinearModels

 # ╔═╡ df32389d-809d-4927-9380-d0b65e00252c
-base_model = Pipeline(Standardizer, ContinuousEncoder, LogisticClassifier)
+base_model = Standardizer |> ContinuousEncoder |> LogisticClassifier

 # ╔═╡ a4225c7f-bb3f-479b-a0cd-3f0d573aca2f
 base_mach = machine(base_model, XHorse, yHorse)
@@ -282,9 +282,8 @@ r_lambda_unbound = range(base_model,

 # ╔═╡ 8729b9a8-ac7a-485a-b950-fddef2a1fb10
 md"""
-The `scale` in a range makes no **???** in a `RandomSearch` (unless it is a
-function) $(HTML("<del>but this will effect later plots</del>")) but it does effect the
-later plots.
+The `scale` in a range is ignored in a `RandomSearch`, unless it is a
+function. (It *is* relevant in a `Grid` search, not demonstrated here.) Note however, the choice of scale *does* affect how later plots will look.
 """

 # ╔═╡ 28aaf005-792c-46a4-951b-64cb2a36c8e3
@@ -351,9 +350,6 @@ rep = report(tuned_mach)
 # ╔═╡ e938da11-be0e-4c4a-ab04-579e5608ae53
 rep.best_model

-# ╔═╡ f3738cb7-872f-4618-86cf-35420d9e6995
-md"By default, sampling of a bounded range is uniform. Lets"
-
 # ╔═╡ 5b72bfac-40e2-4f84-a29a-8d0445351914
 md"In the special case of two-parameters, you can also plot the results:"

@@ -560,7 +556,6 @@ md"""
 # ╟─22392335-e988-49b9-8f3d-b82f1e7b6f7a
 # ╠═4d54c007-95eb-4897-90a0-060e4b5a56de
 # ╠═e938da11-be0e-4c4a-ab04-579e5608ae53
-# ╟─f3738cb7-872f-4618-86cf-35420d9e6995
 # ╟─5b72bfac-40e2-4f84-a29a-8d0445351914
 # ╠═89dfc4b1-37dc-4f77-be65-28d0fcca50a8
 # ╟─efa202a5-5256-473c-9a2e-c468345d87d1
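The first hunk in this file swaps the `Pipeline` constructor for MLJ's arrow syntax. The two forms build the same linear pipeline; a minimal sketch, assuming MLJ and MLJLinearModels are installed:

```julia
using MLJ
LogisticClassifier = @load LogisticClassifier pkg=MLJLinearModels

# Equivalent ways to compose the same three-stage pipeline;
# the commit switches the notebook to the arrow form:
pipe1 = Pipeline(Standardizer, ContinuousEncoder, LogisticClassifier)
pipe2 = Standardizer |> ContinuousEncoder |> LogisticClassifier
```

The arrow form reads left to right in the order data flows through the stages, which is presumably why the notebook adopts it.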

notebooks/04_tuning/notebook.unexecuted.ipynb

Lines changed: 6 additions & 11 deletions

@@ -3,7 +3,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "# Machine Learning in Julia"
+  "# Machine Learning in Julia (continued)"
  ],
  "metadata": {}
 },
@@ -189,6 +189,7 @@
  "cell_type": "code",
  "source": [
   "using Plots\n",
+  "gr(size=(490,300))\n",
   "_, _, lambdas, losses = learning_curve(mach,\n",
   "    range=r,\n",
   "    resampling=CV(nfolds=6),\n",
@@ -310,9 +311,10 @@
 {
  "cell_type": "markdown",
  "source": [
-  "The `scale` in a range makes no in a `RandomSearch` (unless it is a\n",
-  "function) but this will effect later plots but it does effect the\n",
-  "later plots."
+  "The `scale` in a range is ignored in a `RandomSearch`, unless it is a\n",
+  "function. (It *is* relevant in a `Grid` search, not demonstrated\n",
+  "here.) Note however, the choice of scale *does* affect how later plots\n",
+  "will look."
  ],
  "metadata": {}
 },
@@ -421,13 +423,6 @@
  "metadata": {},
  "execution_count": null
 },
-{
- "cell_type": "markdown",
- "source": [
-  "By default, sampling of a bounded range is uniform. Lets"
- ],
- "metadata": {}
-},
 {
  "cell_type": "markdown",
  "source": [
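The `learning_curve` call that the new `gr(size=(490,300))` line precedes is truncated in the hunk above. For orientation, a hedged sketch of how such a call typically looks in MLJ; `mach`, `r` and the measure here are stand-ins, not taken from the notebook:

```julia
using MLJ, Plots
gr(size=(490, 300))   # the plot-size default added by this commit

# `mach` is a machine wrapping the model; `r` is a range over the
# hyperparameter being varied. `learning_curve` returns a named tuple
# whose four fields can be destructured as in the diff:
_, _, lambdas, losses = learning_curve(mach,
                                       range=r,
                                       resampling=CV(nfolds=6),
                                       measure=log_loss)  # stand-in measure
plot(lambdas, losses, xscale=:log10)
```

Setting the GR canvas size once up front keeps all subsequent notebook plots at a consistent size, which appears to be the motivation for the added line.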

notebooks/04_tuning/tuning.png

Binary file changed (10.6 KB, −141 bytes)

0 commit comments
