Implementing Logistic Regression from Scratch
1. Model
The logistic regression model is: $ h_\theta(x)=\dfrac 1 {1+e^{-(\theta^T x+b)}} $
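A minimal NumPy sketch of this hypothesis; the names `sigmoid`, `hypothesis`, `theta`, `b`, and `X` are illustrative choices (not from the original text), assuming `X` holds one sample per row:

```python
import numpy as np

def sigmoid(z):
    # Element-wise logistic function 1 / (1 + e^{-z})
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, b, X):
    # h_theta(x) = sigmoid(theta^T x + b) for every row of X (shape m x n)
    return sigmoid(X @ theta + b)
```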
2. Cost Function
The cost for a single sample is:
if y = 1 : $ cost(x)= -log(h_\theta(x))$
if y = 0 : $ cost(x)= -log(1-h_\theta(x)) $
The two cases above can be combined into a single expression: $ cost(x)=-ylog(h_\theta(x))-(1-y)log(1-h_\theta(x)) $
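A short sketch of this combined per-sample cost, assuming `h` is the predicted probability from the hypothesis and `y` is the 0/1 label (both names are illustrative):

```python
import numpy as np

def sample_cost(h, y):
    # -y*log(h) - (1-y)*log(1-h): reduces to -log(h) when y = 1 and -log(1-h) when y = 0
    return -y * np.log(h) - (1 - y) * np.log(1 - h)
```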
3. Loss Function
The loss function is the average of the per-sample costs over all samples:
$ J_\theta(x)=\dfrac 1 m \displaystyle\sum_{i=1}^m cost(x^{(i)}) = -\dfrac 1 m \displaystyle\sum_{i=1}^m\left(y^{(i)}log(h_\theta(x^{(i)}))+(1-y^{(i)})log(1-h_\theta(x^{(i)}))\right)$
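A sketch of this loss in vectorized form, assuming `X` has shape `(m, n)` and `y` has shape `(m,)`; the names are illustrative:

```python
import numpy as np

def loss(theta, b, X, y):
    m = X.shape[0]
    h = 1.0 / (1.0 + np.exp(-(X @ theta + b)))   # h_theta(x^{(i)}) for all m samples
    # Average cross-entropy cost over the m samples
    return -np.sum(y * np.log(h) + (1 - y) * np.log(1 - h)) / m
```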
4. Partial Derivatives of the Loss Function
Derivative with respect to the parameter \(\theta\):
$ \dfrac \partial {\partial\theta_j} J_\theta(x)= \dfrac 1 m \displaystyle\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})x_j^{(i)} $
Derivative with respect to the parameter \(b\):
$ \dfrac \partial {\partial b} J_\theta(x)= \dfrac 1 m \displaystyle\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)}) $
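Both gradients in vectorized form, a sketch under the same assumptions as above:

```python
import numpy as np

def gradients(theta, b, X, y):
    m = X.shape[0]
    h = 1.0 / (1.0 + np.exp(-(X @ theta + b)))   # predictions h_theta(x^{(i)})
    error = h - y                                # h_theta(x^{(i)}) - y^{(i)}
    grad_theta = X.T @ error / m                 # dJ/dtheta_j, one entry per feature j
    grad_b = np.sum(error) / m                   # dJ/db
    return grad_theta, grad_b
```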
5. Updating the Parameters \(\theta\) and \(b\) with Gradient Descent
$\theta_j := \theta_j - \alpha \dfrac \partial {\partial\theta_j} J_\theta(x) = \theta_j - \alpha \dfrac 1 m \displaystyle\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})x_j^{(i)} $
$b := b - \alpha \dfrac \partial {\partial b} J_\theta(x) = b - \alpha \dfrac 1 m \displaystyle\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)}) $
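Putting the two updates into a plain gradient-descent loop; the learning rate `alpha` and the iteration count are illustrative defaults, not values given in the text:

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, num_iters=1000):
    m, n = X.shape
    theta = np.zeros(n)
    b = 0.0
    for _ in range(num_iters):
        h = 1.0 / (1.0 + np.exp(-(X @ theta + b)))
        error = h - y
        theta = theta - alpha * (X.T @ error) / m   # theta_j := theta_j - alpha * dJ/dtheta_j
        b = b - alpha * np.sum(error) / m           # b := b - alpha * dJ/db
    return theta, b
```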
6. Regularized Loss Function for Logistic Regression
Adding an L2 penalty on \(\theta\) gives the regularized loss:
$ J_\theta(x)=\dfrac 1 m \displaystyle\sum_{i=1}^m cost(x^{(i)}) = -\left[\dfrac 1 m \displaystyle\sum_{i=1}^m\left(y^{(i)}log(h_\theta(x^{(i)}))+(1-y^{(i)})log(1-h_\theta(x^{(i)}))\right)\right] + \dfrac \lambda {2m} \displaystyle\sum_{j=1}^n\theta_j^2$
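A sketch of the regularized loss; following the formula, only the weights \(\theta_j\) are penalized, not the bias. The parameter `lam` stands in for \(\lambda\) and, like the other names, is an illustrative choice:

```python
import numpy as np

def regularized_loss(theta, b, X, y, lam):
    m = X.shape[0]
    h = 1.0 / (1.0 + np.exp(-(X @ theta + b)))
    cross_entropy = -np.sum(y * np.log(h) + (1 - y) * np.log(1 - h)) / m
    penalty = lam / (2 * m) * np.sum(theta ** 2)   # (lambda / 2m) * sum_j theta_j^2
    return cross_entropy + penalty
```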
7. Gradient Descent with Regularization
$\theta_0 := \theta_0 - \alpha \dfrac \partial {\partial\theta_0} J_\theta(x) = \theta_0 - \alpha \dfrac 1 m \displaystyle\sum_{i=1}^m\left(h_\theta(x^{(i)})-y^{(i)}\right)x_0^{(i)} $
$\theta_j := \theta_j - \alpha \dfrac \partial {\partial\theta_j} J_\theta(x) = \theta_j - \alpha \left[\dfrac 1 m \displaystyle\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})x_j^{(i)} + \dfrac \lambda m \theta_j \right] $
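A sketch of the regularized updates. The formulas above use the \(x_0 = 1\) convention, where \(\theta_0\) is the unpenalized intercept; in the sketch below the separate bias `b` plays that role, so it is updated without the penalty while every \(\theta_j\) gets the extra \(\frac{\lambda}{m}\theta_j\) term. `alpha`, `lam`, and the iteration count are illustrative:

```python
import numpy as np

def regularized_gradient_descent(X, y, alpha=0.1, lam=1.0, num_iters=1000):
    m, n = X.shape
    theta = np.zeros(n)
    b = 0.0                                        # plays the role of the unpenalized theta_0
    for _ in range(num_iters):
        h = 1.0 / (1.0 + np.exp(-(X @ theta + b)))
        error = h - y
        theta = theta - alpha * ((X.T @ error) / m + (lam / m) * theta)  # penalized update
        b = b - alpha * np.sum(error) / m                                # intercept, no penalty
    return theta, b
```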
