Attention Mechanism + Soft Thresholding = Deep Residual Shrinkage Network (with Code)
The deep residual shrinkage network is a deep neural network designed for data with heavy noise. It combines a "deep residual network" with "shrinkage". On the one hand, the deep residual network has become a fundamental backbone in deep learning. On the other hand, "shrinkage" refers to soft thresholding, a key step in many signal denoising algorithms.
More importantly, in the deep residual shrinkage network, the thresholds required for soft thresholding are set automatically by an attention mechanism, which avoids the trouble of setting them by hand.
In this article, we first briefly review the basics of residual networks, soft thresholding, and attention mechanisms, and then explain the motivation, algorithm, and applications of the deep residual shrinkage network.
1. Background
1.1 Residual Networks
A residual network (also called a deep residual network or deep residual learning, ResNet in English) is a kind of convolutional neural network. Compared with a plain convolutional neural network, a residual network uses identity shortcut connections across layers to ease training. A common basic building block of a residual network is shown in Figure 1.
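To make the structure concrete, here is a minimal sketch of an identity-shortcut residual block in Keras, written in the pre-activation style used later in this article. The channel count, kernel size, and function name are illustrative assumptions; the sketch also assumes the input already has `channels` channels so the shortcut can be added directly.

from keras.layers import Conv2D, BatchNormalization, Activation, add

def basic_residual_block(x, channels=16):
    """A minimal pre-activation residual block: output = F(x) + x."""
    shortcut = x                                  # identity (cross-layer) connection
    y = BatchNormalization()(x)
    y = Activation('relu')(y)
    y = Conv2D(channels, 3, padding='same')(y)    # first 3x3 convolution
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv2D(channels, 3, padding='same')(y)    # second 3x3 convolution
    return add([y, shortcut])                     # add the identity shortcut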
1.2 Soft Thresholding
Soft thresholding is a core step in many signal denoising methods. It sets features whose absolute values are below a certain threshold to zero and shrinks the remaining features toward zero as well; this is what "shrinkage" means. Here, the threshold is a parameter that must be set in advance, and its value directly affects the denoising result. The input-output relationship of soft thresholding is shown in the figure.
We can see that soft thresholding is a nonlinear transformation with a property similar to the ReLU activation function: its gradient is either 0 or 1. Therefore, soft thresholding can also serve as an activation function in neural networks. In fact, some neural networks have already used the soft threshold function as their activation function.
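As a minimal sketch, soft thresholding with threshold tau can be written as y = sign(x) * max(|x| - tau, 0). A short NumPy version follows; the input values and the threshold of 0.5 are made up purely for illustration.

import numpy as np

def soft_threshold(x, tau):
    """Soft thresholding: zero out features with |x| <= tau and shrink the rest toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.array([-2.0, -0.3, 0.1, 0.8, 1.5])
print(soft_threshold(x, tau=0.5))   # [-1.5  0.   0.   0.3  1. ]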
1.3 Attention Mechanism
An attention mechanism focuses attention on the locally important information. It can be divided into two steps: first, scan the input globally to locate the useful local information; second, enhance the useful information and suppress the redundant information.
The Squeeze-and-Excitation Network is a classic deep learning method based on the attention mechanism. It uses a small subnetwork to automatically learn a set of weights that rescale the channels of a feature map. The underlying idea is that some feature channels are important while others carry redundant information; with these weights, the useful channels can be strengthened and the redundant channels weakened. A basic module of the Squeeze-and-Excitation Network is shown in the figure below.
It is worth noting that, in this way, each sample obtains its own set of weights, so the channel-wise weighting can be adapted to the characteristics of that particular sample. For example, suppose the first feature channel of sample A is important and its second channel is not, while for sample B the first channel is unimportant and the second is important. Then sample A gets a set of weights that strengthens its first channel and weakens its second, and sample B gets its own set of weights that weakens its first channel and strengthens its second.
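The following is a minimal sketch of a Squeeze-and-Excitation block in Keras, not the exact original implementation; the reduction ratio of 4 and the name se_block are illustrative assumptions.

from keras.layers import GlobalAveragePooling2D, Dense, Reshape, multiply

def se_block(x, channels, reduction=4):
    """Squeeze-and-Excitation: learn per-sample channel weights and rescale the feature map."""
    w = GlobalAveragePooling2D()(x)                          # squeeze: one global value per channel
    w = Dense(channels // reduction, activation='relu')(w)   # excitation: small bottleneck subnetwork
    w = Dense(channels, activation='sigmoid')(w)             # weights in (0, 1), one per channel
    w = Reshape((1, 1, channels))(w)
    return multiply([x, w])                                  # reweight each channel of the feature map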
2. Theory of the Deep Residual Shrinkage Network
2.1 Motivation
First, real-world data more or less contain redundant information. We can therefore try to embed soft thresholding into a residual network to eliminate this redundant information.
Second, the amount of redundant information often differs from sample to sample. We can therefore use an attention mechanism to adaptively assign a different threshold to each sample according to its own characteristics.
2.2 Algorithm
Similar to residual networks and the Squeeze-and-Excitation Network, the deep residual shrinkage network is built by stacking many basic modules. Each basic module contains a subnetwork that automatically learns a set of thresholds, which are then used to soft-threshold the feature map. It is worth noting that, in this way, each sample gets its own set of thresholds. A basic module of the deep residual shrinkage network is shown in the figure below.
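In NumPy terms, the per-sample threshold computed in the Keras code later in this article can be sketched roughly as follows; the array shapes and the name per_sample_threshold are illustrative assumptions, not the exact implementation.

import numpy as np

def per_sample_threshold(feature_map, scales):
    """Sketch: threshold = (mean of |x| over the feature map) * sigmoid-scaled coefficient.

    feature_map: (batch, H, W, channels); scales: (batch, channels), the output of the
    small subnetwork after a sigmoid, so each value lies in (0, 1).
    """
    abs_mean = np.abs(feature_map).mean(axis=(1, 2))   # (batch, channels)
    return abs_mean * scales                           # per-sample, per-channel thresholds

Because the sigmoid output lies in (0, 1), each threshold is positive and smaller than the mean absolute value of the corresponding channel, which keeps soft thresholding from zeroing out the entire feature map.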
The overall architecture of the deep residual shrinkage network is shown in the figure below. It consists of an input layer, a number of basic modules, and a final fully connected output layer.
2.3 Applications
In the original paper, the deep residual shrinkage network was applied to fault diagnosis of mechanical equipment based on vibration signals. In principle, however, it targets any dataset that contains redundant information, and redundant information is everywhere. For example, in image recognition an image always contains regions that are irrelevant to its label, and in speech recognition the audio often contains various forms of noise. Therefore, the deep residual shrinkage network, or more generally the idea of integrating "attention mechanism" + "soft thresholding" inside a deep learning algorithm, has broad research value and application prospects.
3. Keras Code Example
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat Dec 28 23:24:05 2019
Implemented using TensorFlow 1.0.1 and Keras 2.2.1
M. Zhao, S. Zhong, X. Fu, et al., Deep Residual Shrinkage Networks for Fault Diagnosis,
IEEE Transactions on Industrial Informatics, 2019, DOI: 10.1109/TII.2019.2943898
@author: super_9527
"""
from __future__ import print_function
import keras
import numpy as np
from keras.datasets import mnist
from keras.layers import Dense, Conv2D, BatchNormalization, Activation
from keras.layers import AveragePooling2D, Input, GlobalAveragePooling2D
from keras.optimizers import Adam
from keras.regularizers import l2
from keras import backend as K
from keras.models import Model
from keras.layers import Lambda
K.set_learning_phase(1)
# Input image dimensions
img_rows, img_cols = 28, 28
# The data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
# Noised data
x_train = x_train.astype('float32') / 255. + 0.5 * np.random.random([x_train.shape[0], img_rows, img_cols, 1])
x_test = x_test.astype('float32') / 255. + 0.5 * np.random.random([x_test.shape[0], img_rows, img_cols, 1])
print('x_train shape:', x_train.shape)
print(x_train.shape[0],'train samples')
print(x_test.shape[0],'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
def abs_backend(inputs):
    return K.abs(inputs)

def expand_dim_backend(inputs):
    return K.expand_dims(K.expand_dims(inputs, 1), 1)

def sign_backend(inputs):
    return K.sign(inputs)

def pad_backend(inputs, in_channels, out_channels):
    pad_dim = (out_channels - in_channels) // 2
    inputs = K.expand_dims(inputs, -1)
    inputs = K.spatial_3d_padding(inputs, ((0, 0), (0, 0), (pad_dim, pad_dim)), 'channels_last')
    return K.squeeze(inputs, -1)
# Residual Shrinkage Block
def residual_shrinkage_block(incoming, nb_blocks, out_channels, downsample=False,
                             downsample_strides=2):

    residual = incoming
    in_channels = incoming.get_shape().as_list()[-1]

    for i in range(nb_blocks):

        identity = residual

        if not downsample:
            downsample_strides = 1

        residual = BatchNormalization()(residual)
        residual = Activation('relu')(residual)
        residual = Conv2D(out_channels, 3, strides=(downsample_strides, downsample_strides),
                          padding='same', kernel_initializer='he_normal',
                          kernel_regularizer=l2(1e-4))(residual)

        residual = BatchNormalization()(residual)
        residual = Activation('relu')(residual)
        residual = Conv2D(out_channels, 3, padding='same', kernel_initializer='he_normal',
                          kernel_regularizer=l2(1e-4))(residual)

        # Calculate global means
        residual_abs = Lambda(abs_backend)(residual)
        abs_mean = GlobalAveragePooling2D()(residual_abs)

        # Calculate scaling coefficients
        scales = Dense(out_channels, activation=None, kernel_initializer='he_normal',
                       kernel_regularizer=l2(1e-4))(abs_mean)
        scales = BatchNormalization()(scales)
        scales = Activation('relu')(scales)
        scales = Dense(out_channels, activation='sigmoid', kernel_regularizer=l2(1e-4))(scales)
        scales = Lambda(expand_dim_backend)(scales)

        # Calculate thresholds
        thres = keras.layers.multiply([abs_mean, scales])

        # Soft thresholding
        sub = keras.layers.subtract([residual_abs, thres])
        zeros = keras.layers.subtract([sub, sub])
        n_sub = keras.layers.maximum([sub, zeros])
        residual = keras.layers.multiply([Lambda(sign_backend)(residual), n_sub])

        # Downsampling (it is important to use the pool size of (1, 1))
        if downsample_strides > 1:
            identity = AveragePooling2D(pool_size=(1, 1), strides=(2, 2))(identity)

        # Zero-padding to match channels (it is important to use zero padding rather than 1x1 convolution)
        if in_channels != out_channels:
            identity = Lambda(pad_backend, arguments={'in_channels': in_channels,
                              'out_channels': out_channels})(identity)

        residual = keras.layers.add([residual, identity])

    return residual
# define and train a model
inputs = Input(shape=input_shape)
net = Conv2D(8, 3, padding='same', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(inputs)
net = residual_shrinkage_block(net, 1, 8, downsample=True)
net = BatchNormalization()(net)
net = Activation('relu')(net)
net = GlobalAveragePooling2D()(net)
outputs = Dense(10, activation='softmax', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(net)
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=100, epochs=5, verbose=1, validation_data=(x_test, y_test))
# get results
K.set_learning_phase(0)
DRSN_train_score = model.evaluate(x_train, y_train, batch_size=100, verbose=0)
print('Train loss:', DRSN_train_score[0])
print('Train accuracy:', DRSN_train_score[1])
DRSN_test_score = model.evaluate(x_test, y_test, batch_size=100, verbose=0)
print('Test loss:', DRSN_test_score[0])
print('Test accuracy:', DRSN_test_score[1])
4. TFLearn Code Example
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Dec 23 21:23:09 2019
Implemented using TensorFlow 1.0 and TFLearn 0.3.2
M. Zhao, S. Zhong, X. Fu, B. Tang, M. Pecht, Deep Residual Shrinkage Networks for Fault Diagnosis,
IEEE Transactions on Industrial Informatics, 2019, DOI: 10.1109/TII.2019.2943898
@author: super_9527
"""
from __future__ import division, print_function, absolute_import
import tflearn
import numpy as np
import tensorflow as tf
from tflearn.layers.conv import conv_2d
# Data loading
from tflearn.datasets import cifar10
(X, Y), (testX, testY) = cifar10.load_data()
# Add noise
X = X + np.random.random((50000,32,32,3))*0.1
testX = testX + np.random.random((10000,32,32,3))*0.1
