
## 2.5 Linear regression

Slides

Notes

Linear regression is a model for solving regression tasks, in which the objective is to fit a line to the data and make predictions on new values. The input of this model is the feature matrix $X$, and the output is a vector of predictions that tries to be as close as possible to the actual $y$ values. The linear regression formula is the sum of the bias term $w_0$, which is the prediction when no feature information is available, and each of the feature values multiplied by its corresponding weight: $x_{i1} w_1 + x_{i2} w_2 + \dots + x_{in} w_n$.

So the simple linear regression formula looks like:

$$g(x_i) = w_0 + x_{i1} w_1 + x_{i2} w_2 + \dots + x_{in} w_n$$

And that can be further simplified as:

$$g(x_i) = w_0 + \sum_{j=1}^{n} w_j x_{ij}$$

Here is a simple implementation of linear regression in Python:

```python
w0 = 7.1  # bias term: the prediction when no feature information is used
w = [0.01, 0.04, 0.002]  # one weight per feature

def linear_regression(xi):
    n = len(xi)
    pred = w0
    for j in range(n):
        pred = pred + w[j] * xi[j]
    return pred
```
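
For example, calling it with hypothetical feature values for one observation (three features, matching the three hardcoded weights above):

```python
xi = [148, 24, 1385]  # hypothetical feature values for one observation
print(linear_regression(xi))  # 7.1 + 0.01*148 + 0.04*24 + 0.002*1385 ≈ 12.31
```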

If we look at the $\sum_{j=1}^{n} w_j x_{ij}$ part of the above equation, we can see that it is nothing else but a vector-vector multiplication (a dot product). Hence, we can rewrite the equation as $g(x_i) = w_0 + x_i^T w$.
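
A minimal sketch of this vectorized form, assuming NumPy arrays for the weights and features:

```python
import numpy as np

w0 = 7.1
w = np.array([0.01, 0.04, 0.002])

def linear_regression(xi):
    # xi.dot(w) computes sum_{j=1..n} w_j * x_ij in one step
    return w0 + xi.dot(w)

xi = np.array([148, 24, 1385])
print(linear_regression(xi))  # same result as the loop version, ~12.31
```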

If the target variable was log-transformed before training (a common trick for skewed targets), we need to ensure that the result is shown on the untransformed scale by applying the inverse function, exp().
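
For example, a minimal sketch assuming the target was transformed with a logarithm before training (hypothetical; the exact transform depends on the notebook):

```python
import numpy as np

pred_log = linear_regression(xi)  # prediction on the log scale
pred = np.exp(pred_log)           # inverse of log: back to the original scale
# if the target was transformed with np.log1p, use np.expm1 instead
```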

The entire code of this project is available in this Jupyter notebook.

⚠️ The notes are written by the community.
If you see an error here, please create a PR with a fix.
