machine-learning, math, gradient-descent

Denormalizing thetas after a linear regression with gradient descent


I have the following set of data:

km,price
240000,3650
139800,3800
150500,4400
185530,4450
176000,5250
114800,5350
166800,5800
89000,5990
144500,5999
84000,6200
82029,6390
63060,6390
74000,6600
97500,6800
67000,6800
76025,6900
48235,6900
93000,6990
60949,7490
65674,7555
54000,7990
68500,7990
22899,7990
61789,8290

After normalizing the data, I run gradient descent, which gives me the following thetas:

θ0 = 0.9362124793084768
θ1 = -0.9953762249792935
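
For reference, the setup roughly looks like this. This is only a minimal sketch, assuming min-max scaling of both columns and plain batch gradient descent; the data.csv filename, learning rate and iteration count are placeholders rather than my real values:

import csv

# Load the dataset above (assumed to be saved as data.csv with the km,price header)
with open("data.csv") as f:
    rows = list(csv.DictReader(f))
kms = [float(r["km"]) for r in rows]
prices = [float(r["price"]) for r in rows]

# Min-max normalization of a column into [0, 1]
def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values], lo, hi

kms_n, min_km, max_km = normalize(kms)
prices_n, min_price, max_price = normalize(prices)

# Batch gradient descent on the normalized data, hypothesis: price_n = t0 + t1 * km_n
t0, t1 = 0.0, 0.0
lr, iterations, m = 0.1, 10_000, len(kms_n)
for _ in range(iterations):
    errors = [(t0 + t1 * x) - y for x, y in zip(kms_n, prices_n)]
    t0 -= lr * sum(errors) / m
    t1 -= lr * sum(e * x for e, x in zip(errors, kms_n)) / m

print(t0, t1)  # converges to approximately the thetas quoted above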

I can correctly predict the price if I feed in a normalized mileage and then denormalize the predicted price, for example:

Predicted price for a mileage of 50000 km:
normalized mileage: 0.12483129971764294
normalized price (θ1·x + θ0): 0.8119583714362707
real price: 7417.486843464296
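
In other words, the prediction pipeline is: normalize the mileage, apply the hypothesis, denormalize the result. A minimal sketch, assuming min-max normalization with the column extrema of the table above:

# Column extrema from the dataset above (assumed min-max normalization)
min_km, max_km = 22899.0, 240000.0
min_price, max_price = 3650.0, 8290.0

# Thetas learned on the normalized data
t0, t1 = 0.9362124793084768, -0.9953762249792935

def predict(km):
    km_n = (km - min_km) / (max_km - min_km)               # normalize the mileage
    price_n = t1 * km_n + t0                               # normalized hypothesis
    return price_n * (max_price - min_price) + min_price   # denormalize the price

print(predict(50000))  # 7417.486843464296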

What I'm looking for is a way to revert my thetas to their non-normalized values, but I haven't been able to, no matter which equation I tried. Is there a way to do so?


Solution

  • It was simply a system of two equations in two unknowns to solve, as you can see here (excuse the handwriting): https://ibb.co/178qWcQ.

    Here is the Python code that does the computation (see also the closed-form sketch after it):

# Mileages of the first two training examples, raw and normalized
x0, x1 = self.training_set[0][0], self.training_set[1][0]
x0n, x1n = self.normalized_training_set[0][0], self.normalized_training_set[1][0]
# Normalized predictions for those two mileages
y0n, y1n = self.hypothesis(x0n), self.hypothesis(x1n)
p_diff = self.max_price - self.min_price
# Intercept of the line through the two denormalized predictions
theta0 = (x1 / (x1 - x0)) * (y0n * p_diff + self.min_price - (x0 / x1 * (y1n * p_diff + self.min_price)))
# Slope computed from the first training example's price and the intercept
y0 = self.training_set[0][1]
theta1 = (y0 - theta0) / x0
print(theta0, theta1)  # RESULT: 8481.172796984529 -0.020129886654102203
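
For completeness, if min-max scaling was used on both columns (as the numbers in the question suggest), the non-normalized coefficients can also be obtained directly from the scaling constants instead of being solved from two training points. A sketch of that alternative, with the column extrema standing in for the stored min/max attributes:

# Column extrema from the dataset (stand-ins for self.min_price, self.max_price, etc.)
min_km, max_km = 22899.0, 240000.0
min_price, max_price = 3650.0, 8290.0

# Thetas learned on the normalized data
t0_n, t1_n = 0.9362124793084768, -0.9953762249792935

# price = price_n * p_diff + min_price, and price_n = t0_n + t1_n * (km - min_km) / km_diff,
# so expanding gives the coefficients of the line expressed in the original units:
km_diff = max_km - min_km
p_diff = max_price - min_price
theta1 = t1_n * p_diff / km_diff
theta0 = min_price + p_diff * t0_n - theta1 * min_km

print(theta0 + theta1 * 50000)  # ~7417.49, the same prediction as before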