『TensorFlow』Ways to Add Regularization
1. Basic regularizer functions

tf.contrib.layers.l1_regularizer(scale, scope=None)
Returns a function that performs L1 regularization; the returned function has the signature func(weights).
Parameters:
scale: coefficient of the regularization term.
scope: an optional scope name.
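As a quick check (a minimal sketch, assuming a TF1 session), the returned function can be called on a tensor directly:

import tensorflow as tf

sess = tf.Session()
w = tf.constant([0., 1., 2., 3.])
l1_reg = tf.contrib.layers.l1_regularizer(scale=0.1)
# L1 penalty: scale * sum(|w|) = 0.1 * (0 + 1 + 2 + 3) = 0.6
print(sess.run(l1_reg(w)))  # -> 0.6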
First, let's see what tf.contrib.layers.l2_regularizer(weight_decay) actually does:
import tensorflow as tf

sess = tf.Session()
weight_decay = 0.1
tmp = tf.constant([0, 1, 2, 3], dtype=tf.float32)
"""
l2_reg = tf.contrib.layers.l2_regularizer(weight_decay)
a = tf.get_variable("I_am_a", regularizer=l2_reg, initializer=tmp)
"""
# ** The code below is equivalent to the commented-out code above:
a = tf.get_variable("I_am_a", initializer=tmp)
a2 = tf.reduce_sum(a * a) * weight_decay / 2
a3 = tf.get_variable(a.name.split(":")[0] + "/Regularizer/l2_regularizer", initializer=a2)
tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, a2)
# **
sess.run(tf.global_variables_initializer())
keys = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
for key in keys:
    print("%s : %s" % (key.name, sess.run(key)))
So it is easy to emulate by hand what tf.contrib.layers.l2_regularizer does, though it makes the code uglier.
The following is a more complete L2 regularization example.
import tensorflow as tf

sess = tf.Session()
weight_decay = 0.1                                                  # (1) define weight_decay
l2_reg = tf.contrib.layers.l2_regularizer(weight_decay)             # (2) define the l2_regularizer()
tmp = tf.constant([0, 1, 2, 3], dtype=tf.float32)
a = tf.get_variable("I_am_a", regularizer=l2_reg, initializer=tmp)  # (3) create the variable, passing l2_reg as its regularizer argument
# Inspect the REGULARIZATION_LOSSES collection:
# specifying a regularizer adds a's penalty term to REGULARIZATION_LOSSES
print("Global Set:")
keys = tf.get_collection("variables")
for key in keys:
    print(key.name)
print("Regular Set:")
keys = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
for key in keys:
    print(key.name)
print("--------------------")
sess.run(tf.global_variables_initializer())
print(sess.run(a))
reg_set = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)     # (4) REGULARIZATION_LOSSES holds every weight-decayed term
l2_loss = tf.add_n(reg_set)                                         #     sum them to get the total penalty
print("loss=%s" % (sess.run(l2_loss)))
"""
This prints 0.7, i.e.:
weight_decay * sum(w^2) / 2 = 0.1 * (0*0 + 1*1 + 2*2 + 3*3) / 2 = 0.7
Writing this by hand is also easy; using the API just looks more standard.
In a network model, simply add l2_loss to the loss. (The loss grows, so running train naturally decays the weights.)
"""
2. Ways to add regularization
a. The original approach

Regularization usually works through collections. The most basic way is to add the penalty term to the 'losses' collection right after declaring the variable (tf.GraphKeys.LOSSES works too; see the check after the code):

import tensorflow as tf
import numpy as np

def get_weights(shape, lambd):
    var = tf.Variable(tf.random_normal(shape), dtype=tf.float32)
    tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(lambd)(var))
    return var

x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))
batch_size = 8
layer_dimension = [2, 10, 10, 10, 1]
n_layers = len(layer_dimension)
cur_lay = x
in_dimension = layer_dimension[0]
for i in range(1, n_layers):
    out_dimension = layer_dimension[i]
    weights = get_weights([in_dimension, out_dimension], 0.001)
    bias = tf.Variable(tf.constant(0.1, shape=[out_dimension]))
    cur_lay = tf.nn.relu(tf.matmul(cur_lay, weights) + bias)
    in_dimension = layer_dimension[i]
mse_loss = tf.reduce_mean(tf.square(y_ - cur_lay))
tf.add_to_collection('losses', mse_loss)
loss = tf.add_n(tf.get_collection('losses'))
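As mentioned above, tf.GraphKeys.LOSSES is literally the string "losses", so both spellings address the same collection; a quick check (assuming TF1):

import tensorflow as tf
print(tf.GraphKeys.LOSSES)  # prints: losses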
b. tf.contrib.layers.apply_regularization(regularizer, weights_list=None)
First, the parameters:
regularizer: the regularization function we created in the previous step.
weights_list: the list of parameters to regularize; if None, the weights in GraphKeys.WEIGHTS are used.
The function returns a scalar Tensor, and this scalar Tensor is also added to GraphKeys.REGULARIZATION_LOSSES. The Tensor holds the recipe for computing the regularization loss: a TensorFlow Tensor stores the computation path (method) for its value, and when we run it, the TensorFlow backend follows that path to produce the actual value.
Now we only need to add this regularization loss to our loss function.
If you define your weights by hand, you have to add them to GraphKeys.WEIGHTS yourself; if you use layers, this has already been taken care of for you. (Still, it is best to verify that tf.GraphKeys.WEIGHTS really contains all of your weights, to avoid getting burned.) A sketch of the manual case follows.
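A minimal sketch of the manual-weight case (the variable w and the scale value are assumptions): register the weight in GraphKeys.WEIGHTS by hand, then let apply_regularization pick it up:

import tensorflow as tf

sess = tf.Session()
w = tf.get_variable("w", initializer=tf.constant([0., 1., 2., 3.]))
tf.add_to_collection(tf.GraphKeys.WEIGHTS, w)  # register the hand-made weight ourselves

l2_reg = tf.contrib.layers.l2_regularizer(scale=0.1)
# weights_list=None, so apply_regularization reads GraphKeys.WEIGHTS
reg_loss = tf.contrib.layers.apply_regularization(l2_reg)

sess.run(tf.global_variables_initializer())
print(sess.run(reg_loss))  # 0.1 * (0 + 1 + 4 + 9) / 2 = 0.7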
c. Using slim

Using slim makes this much simpler:

with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_regularizer=slim.l2_regularizer(weight_decay)):
    pass

In this case the penalty terms are added to the tf.GraphKeys.REGULARIZATION_LOSSES collection.
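A sketch of pulling the total penalty back out afterwards (the layer sizes and weight_decay value here are assumptions):

import tensorflow as tf
import tensorflow.contrib.slim as slim

weight_decay = 0.001
x = tf.placeholder(tf.float32, shape=(None, 32))
with slim.arg_scope([slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_regularizer=slim.l2_regularizer(weight_decay)):
    net = slim.fully_connected(x, 64)
    net = slim.fully_connected(net, 1, activation_fn=None)

# every layer's penalty landed in REGULARIZATION_LOSSES; sum them up
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
total_reg = tf.add_n(reg_losses)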
