Training YOLO-v3 on Your Own Dataset with Win7 + Keras + TensorFlow
Part 1. Download and Test the Model
1. Download YOLO-v3
2. Download the weights
wget https://pjreddie.com/media/files/yolov3.weights
3. Generate the h5 file
python convert.py yolov3.cfg yolov3.weights model_data/yolo.h5
Run convert.py to convert the darknet YOLO weights into an h5 file usable by Keras; the generated h5 is saved under model_data. The convert.py and yolov3.cfg used in the command are already in the keras-yolo3-master folder, so there is no need to download them separately.
4. Run an image-recognition test with the pre-trained yolo.h5
python yolo_video.py --image
After running, it prompts you for the path of an image. Since my test image sits in the same directory as yolo_video.py, I just typed the file name without any path.
If detections come back, the test succeeded.
Part 2. Build Your Own VOC Dataset
See my earlier blog post:
I annotated in Ubuntu and then moved the files over to Windows; if you have LabelImg installed on Windows, you can annotate there directly.
The final file layout is:
Part 3. Modify the Configuration Files and Train
1. Copy voc_annotation.py into the voc folder and edit its class list, as shown below:
Run voc_annotation.py to generate the following four files:
import xml.etree.ElementTree as ET
from os import getcwd

sets = [('2018', 'train'), ('2018', 'val'), ('2018', 'test'), ('2018', 'trainval')]
classes = []  # fill in your own class names here

def convert_annotation(year, image_id, list_file):
    in_file = open('VOCdevkit\\VOC%s\\Annotations\\%s.xml' % (year, image_id), encoding='utf-8')
    tree = ET.parse(in_file)
    root = tree.getroot()
    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (int(xmlbox.find('xmin').text), int(xmlbox.find('ymin').text),
             int(xmlbox.find('xmax').text), int(xmlbox.find('ymax').text))
        # leading space separates the box from the image path written before it
        list_file.write(" " + ",".join([str(a) for a in b]) + ',' + str(cls_id))

wd = getcwd()

for year, image_set in sets:
    image_ids = open('VOCdevkit\\VOC%s\\ImageSets\\Main\\%s.txt' % (year, image_set)).read().strip().split()
    list_file = open('%s_%s.txt' % (year, image_set), 'w')
    for image_id in image_ids:
        list_file.write('%s\\VOCdevkit\\VOC%s\\JPEGImages\\%s.jpg' % (wd, year, image_id))
        convert_annotation(year, image_id, list_file)
        list_file.write('\n')
    list_file.close()
Tutorials online only produce the three files train, val, and test, but I think a trainval file should be added as well. Also change every / to \ (Windows paths differ from Linux). The highlighted part is there to prevent a Windows read error (which I happened to run into myself).
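As a side note, the slash rewriting above can be avoided entirely. The sketch below (the helper name xml_path is my own, not from the original script) builds the same annotation path with os.path.join, which picks the correct separator for whichever OS the script runs on:

```python
import os

# Hypothetical helper (not in the original script): build the annotation
# path portably instead of hard-coding Windows backslashes.
def xml_path(year, image_id, root='VOCdevkit'):
    # os.path.join inserts the separator for the current OS, so the same
    # script runs under both Windows and Linux without editing slashes.
    return os.path.join(root, 'VOC%s' % year, 'Annotations', '%s.xml' % image_id)

print(xml_path('2018', '000001'))
```

On Windows this prints the path with backslashes, on Linux with forward slashes, so no manual find-and-replace is needed.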
2. Create a new classes file under the model_data folder (pick a name that fits your data; for example, if you are detecting flower species, name it accordingly. A meaningful name is best), and write your classes into it, one per line.
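For instance (the file name my_classes.txt below is only an illustrative choice, not from the post), the classes file can be written and then read back exactly the way train.py's get_classes() does:

```python
# Write your class names, one per line; the file name is up to you.
classes = ['cat', 'dog', 'bird']  # replace with your own classes
with open('my_classes.txt', 'w') as f:
    f.write('\n'.join(classes))

# train.py's get_classes() reads it back the same way:
with open('my_classes.txt') as f:
    class_names = [c.strip() for c in f.readlines()]

print(class_names)  # ['cat', 'dog', 'bird']
```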
3. Modify the yolov3.cfg file
Following the transfer-learning idea, we continue training from the pre-trained weights. This requires the modifications below:
Open the cfg file in your IDE and search (Ctrl+F) for yolo; it appears in 3 places.
Each place needs 3 changes:
          filters: 3*(5+len(classes))
          classes: len(classes)  (17 in my case)
          random: originally 1; change to 0 if your GPU memory is small
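The filters arithmetic is easy to sanity-check: each [yolo] head predicts 3 anchor boxes, and each box needs 4 coordinates plus 1 objectness score plus one score per class, hence 3*(5+classes):

```python
# filters for the conv layer directly before each [yolo] section:
# anchors_per_scale * (4 box coords + 1 objectness + class scores)
def yolo_filters(num_classes, anchors_per_scale=3):
    return anchors_per_scale * (5 + num_classes)

print(yolo_filters(17))  # 66 for the 17 classes in this post
print(yolo_filters(20))  # 75 for the 20 VOC classes
```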
Regenerate the h5 file:
python convert.py -w yolov3.cfg yolov3.weights model_data/yolo_weights.h5
4. Train
Run the train.py shown below:
python train.py
"""python安装不成功
Retrain the YOLO model for your own dataset.
"""
import numpy as np
import keras.backend as K
from keras.layers import Input, Lambda
dels import Model
from keras.callbacks import TensorBoard, ModelCheckpoint, EarlyStopping
del import preprocess_true_boxes, yolo_body, tiny_yolo_body, yolo_loss
from yolo3.utils import get_random_data
def _main():
annotation_path = 'voc/'
log_dir = 'model_data/logs/'
classes_path = 'model_data/'
anchors_path = 'model_data/'
class_names = get_classes(classes_path)
anchors = get_anchors(anchors_path)
input_shape = (416,416) # multiple of 32, hw
model = create_model(input_shape, anchors, len(class_names) )
train(model, annotation_path, input_shape, anchors, len(class_names), log_dir=log_dir)
def train(model, annotation_path, input_shape, anchors, num_classes, log_dir='logs/'):
modelpile(optimizer='adam', loss={
'yolo_loss': lambda y_true, y_pred: y_pred})
logging = TensorBoard(log_dir=log_dir)
checkpoint = ModelCheckpoint(log_dir + "ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5",
monitor='val_loss', save_weights_only=True, save_best_only=True, period=1)
batch_size = 10
val_split = 0.2
with open(annotation_path) as f:
lines = f.readlines()
np.random.shuffle(lines)
num_val = int(len(lines)*val_split)
num_train = len(lines) - num_val
print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size))
model.fit_generator(data_generator_wrap(lines[:num_train], batch_size, input_shape, anchors, num_classes),            steps_per_epoch=max(1, num_train//batch_size),
validation_data=data_generator_wrap(lines[num_train:], batch_size, input_shape, anchors, num_classes),            validation_steps=max(1, num_val//batch_size),
epochs=20,
initial_epoch=0)
model.save_weights(log_dir + 'trained_weights.h5')
def get_classes(classes_path):
with open(classes_path) as f:
class_names = f.readlines()
class_names = [c.strip() for c in class_names]
return class_names
def get_anchors(anchors_path):
with open(anchors_path) as f:
anchors = f.readline()
anchors = [float(x) for x in anchors.split(',')]
return np.array(anchors).reshape(-1, 2)
def create_model(input_shape, anchors, num_classes, load_pretrained=False, freeze_body=False,
weights_path='model_data/yolo_weights.h5'):
K.clear_session() # get a new session
image_input = Input(shape=(None, None, 3))
h, w = input_shape
num_anchors = len(anchors)魔法特效软件
y_true = [Input(shape=(h//{0:32, 1:16, 2:8}[l], w//{0:32, 1:16, 2:8}[l], \
num_anchors//3, num_classes+5)) for l in range(3)]
model_body = yolo_body(image_input, num_anchors//3, num_classes)
print('Create YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes))
if load_pretrained:
model_body.load_weights(weights_path, by_name=True, skip_mismatch=True)
print('Load weights {}.'.format(weights_path))
if freeze_body:
# Do not freeze 3 output layers.
num = len(model_body.layers)-3
for i in range(num): model_body.layers[i].trainable = False
print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers)))
model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.5})(
[*model_body.output, *y_true])
model = Model([model_body.input, *y_true], model_loss)
return model
def data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes):
n = len(annotation_lines)
np.random.shuffle(annotation_lines)
i = 0
while True:
image_data = []
box_data = []
for b in range(batch_size):
if函数怎么用文字
i %= n
image, box = get_random_data(annotation_lines[i], input_shape, random=True)
image_data.append(image)
box_data.append(box)
i += 1
image_data = np.array(image_data)
box_data = np.array(box_data)
y_true = preprocess_true_boxes(box_data, input_shape, anchors, num_classes)
yield [image_data, *y_true], np.zeros(batch_size)
def data_generator_wrap(annotation_lines, batch_size, input_shape, anchors, num_classes):
n = len(annotation_lines)
if n==0 or batch_size<=0: return None
return data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes)
if__name__ == '__main__':
_main()
The parts of the code marked in red need to be adjusted to your own setup.
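Stripped of the image handling, the endless-batch pattern in data_generator can be sketched on its own (a framework-free illustration; the real function additionally shuffles and preprocesses each sample):

```python
# Minimal sketch of data_generator's looping logic: yield fixed-size batches
# forever, wrapping around to the start of the list when it runs out.
def batches(items, batch_size):
    i = 0
    while True:
        batch = []
        for _ in range(batch_size):
            i %= len(items)        # wrap around, as `i %= n` does above
            batch.append(items[i])
            i += 1
        yield batch

gen = batches([1, 2, 3, 4, 5], 2)
print(next(gen))  # [1, 2]
print(next(gen))  # [3, 4]
print(next(gen))  # [5, 1]  (wrapped around)
```

Keras's fit_generator pulls steps_per_epoch batches from such a generator per epoch, which is why the generator never terminates on its own.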
Other parameters you can adjust:
batch_size = 32: the default is fairly large and demands a capable machine; it can be lowered. I set it to 10.
val_split = 0.1: the fraction of the data held out for validation. I suggest making it larger; otherwise the validation set contains very few images, which hurts the validation-loss estimate.
epochs = 100: can be reduced; I set it to 20.
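The split arithmetic from train() works out like this (a toy illustration with 100 made-up annotation lines):

```python
# With val_split = 0.2 and 100 annotation lines:
lines = list(range(100))                # stand-in for the annotation lines
val_split = 0.2
num_val = int(len(lines) * val_split)   # 20 images for validation
num_train = len(lines) - num_val        # 80 images for training
print(num_train, num_val)  # 80 20
```

So with the default val_split = 0.1, only 10 of every 100 images would validate the model, which is why raising it is worthwhile.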
Reference:
