Notes on Problems Encountered While Running the EnlightenGAN Code
I. Preparation
2. Create a conda virtual environment:
conda create -n enlighten python=3.5
3. Enter the project folder and open a terminal:
conda activate enlighten
pip install the dependencies required by the project (listed in the repository's requirements file)
4. Create the model folder:
mkdir model
5. Download the pretrained weights and put them into the model folder.
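Before moving on to training, it is worth a quick sanity check that the environment can actually see the GPU (a generic PyTorch check):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

If this prints False, later cuDNN/CUDA errors are more likely an environment problem than a code problem.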
II. Training Process
1. Create the folders ../final_dataset/trainA and ../final_dataset/trainB (i.e., the final_dataset folder sits at the same level as the project folder; see the command below) and put the downloaded training images into them: low-light images in trainA, normal-light images in trainB.
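For example, from inside the project folder both directories can be created in one go (assuming the layout described above):

mkdir -p ../final_dataset/trainA ../final_dataset/trainB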
2. Start the visdom visualization server (this step is optional); training progress can then be viewed in a browser at http://localhost:8097:
nohup python -m visdom.server -port=8097
To stop visdom.server when it is no longer needed:
(1) ps -aux | grep visdom.server
(2) sudo kill <PID> (the process ID found in step (1))
Possible problems:
(1) visdom hangs or is very slow to start because it tries to download its front-end scripts (our lab server has no internet access).
Solution:
Modify the server.py file: open Anaconda3\envs\pytorch\Lib\site-packages\visdom\server.py, find the script-download call around line 1917 (not necessarily that exact line for everyone; just search backwards from the end of the file) and comment it out, as sketched below.
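In the visdom versions I have looked at, the end of server.py is roughly the sketch below; the call to comment out is the one that downloads the front-end assets (names and layout may differ in your copy):

def download_scripts_and_run():
    # download_scripts()  # commented out: an offline server cannot fetch the front-end assets anyway
    main()

if __name__ == "__main__":
    download_scripts_and_run()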
(2) Could not open static file '/home/xl/.conda/envs/enlighten/lib/python3.5/site-packages/visdom/static/js/react-grid-layout.min.js'
Solution: the lab server has no internet access, so download react-grid-layout.min.js on a machine that does, then copy it into /home/xl/.conda/envs/enlighten/lib/python3.5/site-packages/visdom/static/js/ (see the example below).
3. Open another terminal, activate the enlighten virtual environment, and run:
python scripts/script.py --train
Possible problem:
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
Solutions:
(1) Add the following to train.py (disabling cuDNN makes PyTorch fall back to its slower built-in kernels, which avoids the failing cuDNN call):
import torch
torch.backends.cudnn.enabled = False
(2) Reduce the batch size or shrink the training images.
In my case I set batchSize to 1, resized the training images to 256×256,
and changed transform_list.append(transforms.RandomCrop(opt.fineSize)) in base_dataset.py to
transform_list.append(transforms.RandomCrop(256)), as sketched below.
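For reference, a minimal sketch of what the edited crop transform amounts to (transform_list and the surrounding get_transform() follow the CycleGAN-style data/base_dataset.py this project uses; your file may differ slightly):

import torchvision.transforms as transforms

transform_list = []
# original line inside get_transform(): transform_list.append(transforms.RandomCrop(opt.fineSize))
# hard-coding 256 keeps every training crop at 256x256 regardless of the fineSize option
transform_list.append(transforms.RandomCrop(256))
transform = transforms.Compose(transform_list)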
Note: training needs quite a lot of GPU memory; the paper's authors say they trained on three 1080 Ti GPUs for about three hours. If you do not have that much memory, the only real option is to lower the batch size.
4. Successful run; the console output is as follows:
------------ Options -------------
D_P_times2: False
IN_vgg: False
batchSize: 1
checkpoints_dir: ./checkpoints
config: configs/unit_gta2city_folder.yaml
continue_train: False
dataroot: ../final_dataset
dataset_mode: unaligned
display_freq: 30
display_id: 1
display_port: 8097
display_single_pane_ncols: 0
display_winsize: 256
fcn: 0
fineSize: 320
gpu_ids: [0]
high_times: 400
hybrid_loss: True
identity: 0.0
input_linear: False
input_nc: 3
instance_norm: 0.0
isTrain: True
l1: 10.0
lambda_A: 10.0
lambda_B: 10.0
latent_norm: False
latent_threshold: False
lighten: False
linear: False
linear_add: False
loadSize: 286
low_times: 200
lr: 0.0001
max_dataset_size: inf
model: single
multiply: False
nThreads: 4
n_layers_D: 5
n_layers_patchD: 4
name: enlightening
ndf: 64
new_lr: False
ngf: 64
niter: 100
niter_decay: 100
no_dropout: True
no_flip: False
no_html: False
no_lsgan: False
no_vgg_instance: False
noise: 0
norm: instance
norm_attention: False
output_nc: 3
patchD: True
patchD_3: 5
patchSize: 32
patch_vgg: True
phase: train
pool_size: 50
print_freq: 100
resize_or_crop: crop
save_epoch_freq: 5
save_latest_freq: 5000
self_attention: True
serial_batches: False
skip: 1.0
syn_norm: False
tanh: False
times_residual: True
use_avgpool: 0
use_mse: False
use_norm: 1.0
use_ragan: True
use_wgan: 0.0
vary: 1
vgg: 1.0
vgg_choose: relu5_1
vgg_maxpooling: False
vgg_mean: False
which_direction: AtoB
which_epoch: latest
which_model_netD: no_norm_4
which_model_netG: sid_unet_resize
-------------- End ----------------
train.py:11: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
return yaml.load(stream)
CustomDatasetDataLoader
dataset [UnalignedDataset] was created
#training images = 1016
./model
---------- Networks initialized -------------
DataParallel(
(module): Unet_resize_conv(
(conv1_1): Conv2d(4, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(downsample_1): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
(downsample_2): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
(downsample_3): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
(downsample_4): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
(LReLU1_1): LeakyReLU(0.2, inplace)
(bn1_1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)
(conv1_2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU1_2): LeakyReLU(0.2, inplace)
(bn1_2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)
(max_pool1): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
(conv2_1): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU2_1): LeakyReLU(0.2, inplace)
(bn2_1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
(conv2_2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU2_2): LeakyReLU(0.2, inplace)
(bn2_2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
(max_pool2): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
(conv3_1): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU3_1): LeakyReLU(0.2, inplace)
(bn3_1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
(conv3_2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU3_2): LeakyReLU(0.2, inplace)
(bn3_2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
(max_pool3): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
(conv4_1): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU4_1): LeakyReLU(0.2, inplace)
(bn4_1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
(conv4_2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU4_2): LeakyReLU(0.2, inplace)
(bn4_2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
(max_pool4): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)
(conv5_1): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU5_1): LeakyReLU(0.2, inplace)
(bn5_1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
(conv5_2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU5_2): LeakyReLU(0.2, inplace)
(bn5_2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
(deconv5): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv6_1): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU6_1): LeakyReLU(0.2, inplace)
(bn6_1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
(conv6_2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU6_2): LeakyReLU(0.2, inplace)
(bn6_2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
(deconv6): Conv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv7_1): Conv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU7_1): LeakyReLU(0.2, inplace)
(bn7_1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
(conv7_2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU7_2): LeakyReLU(0.2, inplace)
(bn7_2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
(deconv7): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv8_1): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU8_1): LeakyReLU(0.2, inplace)
(bn8_1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
(conv8_2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU8_2): LeakyReLU(0.2, inplace)
(bn8_2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
(deconv8): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv9_1): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU9_1): LeakyReLU(0.2, inplace)
(bn9_1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)
(conv9_2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(LReLU9_2): LeakyReLU(0.2, inplace)
(conv10): Conv2d(32, 3, kernel_size=(1, 1), stride=(1, 1))
)
)
Total number of parameters: 8636675
DataParallel(
(module): NoNormDiscriminator(
(model): Sequential(
(0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
(1): LeakyReLU(0.2, inplace)
(2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
(3): LeakyReLU(0.2, inplace)
(4): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
(5): LeakyReLU(0.2, inplace)
(6): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
(7): LeakyReLU(0.2, inplace)
(8): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
(9): LeakyReLU(0.2, inplace)
(10): Conv2d(512, 512, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))
(11): LeakyReLU(0.2, inplace)
(12): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))
)
)
)
Total number of parameters: 11154369
DataParallel(
(module): NoNormDiscriminator(
(model): Sequential(
(0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
(1): LeakyReLU(0.2, inplace)
(2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
(3): LeakyReLU(0.2, inplace)
(4): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
(5): LeakyReLU(0.2, inplace)
(6): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
(7): LeakyReLU(0.2, inplace)
(8): Conv2d(512, 512, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))
(9): LeakyReLU(0.2, inplace)
