Previously, I posted Part 1 of linear regression, where we approached it from a machine learning perspective. I had promised to present it from other mathematical angles too, and here I am!
In this article, we will look at it through matrix calculus.
The link to the previous article is given below.
http://avidsuraj.blogspot.com/2018/05/linear-algebra-part1-machine-learning.html
I have used the same notation as in the previous article, so readers are recommended to read it first, or at least its notation section.
The cost function or error function is written as:
$$ J = \sum_{i=1}^{m} \left( e^{(i)} \right)^{2} $$
where $e^{(i)}$ is the error for the $i^{\text{th}}$ training example.
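For concreteness, here is a minimal NumPy sketch of this sum-of-squared-errors cost, assuming a hypothetical vector `e` of per-example errors (the values are made up for illustration and are not from the article):

```python
import numpy as np

# Hypothetical per-example errors e^(i) for m = 4 training examples
e = np.array([0.5, -1.2, 0.3, 0.8])

# Cost J = sum of squared errors over all m examples
J = np.sum(e ** 2)
print(J)  # 2.42 for these illustrative values
```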
Let $E$ represent the matrix of all the errors. It is given by:
$$ \begin{aligned} E &= Y - Y_{\text{pred}} \\ &= Y - (XW + C) \\ &= Y - XW - C \end{aligned} $$
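The same quantities can be computed in matrix form. Below is a minimal NumPy sketch, assuming $X$ is the $m \times n$ feature matrix, $W$ the $n \times 1$ weight vector, $C$ the intercept (broadcast over all examples), and $Y$ the $m \times 1$ target vector; all numbers are illustrative, not taken from the article:

```python
import numpy as np

# Illustrative shapes: m = 4 examples, n = 2 features
X = np.array([[1.0, 2.0],
              [2.0, 0.5],
              [3.0, 1.0],
              [4.0, 3.0]])                  # m x n feature matrix
W = np.array([[0.5], [1.0]])                # n x 1 weight vector
C = 0.25                                    # intercept, broadcast over examples
Y = np.array([[2.8], [1.4], [2.9], [5.3]])  # m x 1 targets

# E = Y - XW - C, the m x 1 vector of errors
E = Y - X @ W - C

# The cost J is the sum of squared errors, which in matrix form is E^T E
J = float(E.T @ E)
print(E.ravel(), J)
```

Note that writing $J = E^{T}E$ is exactly the matrix-form counterpart of the summation $\sum_{i=1}^{m} (e^{(i)})^{2}$ above, which is what lets us apply matrix calculus to it in the rest of the article.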
Differentiation of a vector $Y$ with respect to a vector $X$