group#5: lin reg script added #9
base: main
Conversation
snippets/linear_regression.py
Outdated
    def compCostFunction(estim_y, true_y):
        E = estim_y - true_y
        C = (1 / 2 * m) * np.sum(E ** 2)
where does the variable m come from here?
Yes, global variables are terrible. Please move all the code defining variables into a function.
... and m should be a parameter that is passed into the function.
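A minimal sketch of the suggested change, assuming the intended cost is the usual squared-error cost scaled by 1/(2m); the third parameter and the docstring are illustrative, not part of the PR:

    import numpy as np

    def compCostFunction(estim_y, true_y, m):
        """Squared-error cost; the sample count m is passed in explicitly."""
        E = estim_y - true_y
        # parenthesised as 1 / (2 * m) so we divide by 2m rather than multiply by m / 2
        return (1 / (2 * m)) * np.sum(E ** 2)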
    assert isinstance(y, np.ndarray), "Only works for arrays"
    return x.shape[0] == y.shape[0]

    # To be deleted later
are these comments obsolete? if yes, please remove
    # To be deleted later
    # feature_1 = np.linspace(0, 2, num=100)

    X = np.random.randn(100,3) # feature matrix
could the variables be named with more informative names?
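For example (the names below are only a suggestion, not part of the PR):

    import numpy as np

    feature_matrix = np.random.randn(100, 3)     # 100 samples, 3 features
    target = 1 + feature_matrix @ [3.5, 4., -4]  # instead of the generic X and y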
snippets/linear_regression.py
Outdated
    return W, cost_history

    params, history = iterativeLinearRegression(X, y)
The code should be restructured so that the module can be imported and does nothing.
The test code should be under if __name__ == '__main__':.
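A minimal sketch of that layout, reusing the names from the PR; the placeholder body of iterativeLinearRegression and the main() wrapper are illustrative only:

    import numpy as np
    import matplotlib.pyplot as plt  # imports sit at the top of the module

    def iterativeLinearRegression(X, y):
        # placeholder: the fitting code from the PR goes here
        W = np.zeros(X.shape[1])
        cost_history = []
        return W, cost_history

    def main():
        # demo / test code: runs only when the file is executed directly
        X = np.random.randn(100, 3)   # feature matrix
        y = 1 + X @ [3.5, 4., -4]     # target vector
        params, history = iterativeLinearRegression(X, y)
        print(params)
        print(history)
        plt.plot(history)
        plt.show()

    if __name__ == '__main__':
        main()

With this layout, import linear_regression has no side effects, while python snippets/linear_regression.py still runs the demo.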
Add some small comments and questions
snippets/linear_regression.py
Outdated
    # feature_1 = np.linspace(0, 2, num=100)

    X = np.random.randn(100,3) # feature matrix
    y = 1 + np.dot(X, [3.5, 4., -4]) # target vector
I'd write this as

    y = 1 + X @ [3.5, 4., -4]  # target vector
    # z = 2 + y @ feature_matrix @ feature_matrix.T
    params, history = iterativeLinearRegression(X, y)
    print(params)

    import matplotlib.pyplot as plt
This duplicates the line above…
    print(params)
    print(history)

    import matplotlib.pyplot as plt
This should be moved to the header
snippets/linear_regression.py
Outdated
    # iterate until the maximum number of steps
    for i in np.arange(steps): # begin the process

        y_estimated = X.dot(W)
X @ W
!!!
snippets/linear_regression.py
Outdated
    cost = compCostFunction(y_estimated, y)
    # Update gradient descent
    E = y_estimated - y
    gradient = (1 / m) * X.T.dot(E)
1 / m * X.T @ E
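Combining this with the X @ W comment above, the descent loop might read as follows; this is only a sketch, and the steps and learning_rate parameters as well as the three-argument compCostFunction are assumptions, not the PR's actual signatures:

    import numpy as np

    def compCostFunction(estim_y, true_y, m):
        # squared-error cost with the sample count m passed in (see the sketch near the top)
        return (1 / (2 * m)) * np.sum((estim_y - true_y) ** 2)

    def iterativeLinearRegression(X, y, steps=1000, learning_rate=0.01):
        m, n_features = X.shape        # m comes from the data, not from a global
        W = np.zeros(n_features)
        cost_history = []
        for i in range(steps):
            y_estimated = X @ W                 # instead of X.dot(W)
            cost = compCostFunction(y_estimated, y, m)
            E = y_estimated - y
            gradient = 1 / m * X.T @ E          # instead of (1 / m) * X.T.dot(E)
            W = W - learning_rate * gradient
            cost_history.append(cost)
        return W, cost_history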