
Commit 985c50a

faq: Is Combining Classifiers with Stacking Better than Selecting the Best One?
1 parent d3672e9

File tree

- README.md
- faq/README.md
- faq/logistic-boosting.md

3 files changed: +18 -0

README.md (+4)
@@ -184,6 +184,10 @@ I have set up a separate library, [`mlxtend`](http://rasbt.github.io/mlxtend/),
 
 - [What are the advantages of semi-supervised learning over supervised and unsupervised learning?](./faq/semi-vs-supervised.md)
 
+##### Ensemble Methods
+
+- [Is Combining Classifiers with Stacking Better than Selecting the Best One?](./faq/logistic-boosting.md)
+
 ##### Preprocessing
 
 - [Why do we need to re-use training parameters to transform test data?](./faq/scale-training-test.md)

faq/README.md (+4)
@@ -107,6 +107,10 @@ Sebastian
 
 - [What are the advantages of semi-supervised learning over supervised and unsupervised learning?](./semi-vs-supervised.md)
 
+##### Ensemble Methods
+
+- [Is Combining Classifiers with Stacking Better than Selecting the Best One?](./logistic-boosting.md)
+
 ##### Preprocessing
 
 - [Why do we need to re-use training parameters to transform test data?](./scale-training-test.md)

faq/logistic-boosting.md (+10)
@@ -0,0 +1,10 @@
+# Can bagging and boosting be used with logistic regression?
+
+I am not sure bagging would make much sense for logistic regression -- bagging reduces the variance of high-variance models such as deep decision trees that overfit the training data, and logistic regression is a low-variance model, so there is little variance left to reduce.
+
+Boosting could work, though. However, I think that "stacking" would be a better approach here. Stacking is more "powerful" since we don't use a pre-specified equation to adjust the weights; rather, we train a meta-classifier to learn the optimal weights for combining the models (a code sketch follows below this diff).
+
+
+Here's one of the many interesting, related papers that I recommend checking out :)
+
+- "Is Combining Classifiers with Stacking Better than Selecting the Best One?"
