My understanding of shuffle=True in PyTorch's DataLoader:
I was not sure what shuffle actually does. Suppose the data is a, b, c, d with batch_size=2; after enabling shuffling, which of the following happens:
1. Batches are taken in order and then shuffled within each batch, i.e. a, b is taken first and shuffled internally;
2. The whole dataset is shuffled first, and then batches are taken.
It turns out to be the second. From the DataLoader docstring and source:
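This can be checked directly on a tiny stand-in dataset (a minimal sketch; the tensor values 0 to 3 play the roles of a, b, c, d):

```python
import torch
from torch.utils.data import DataLoader

# Four samples standing in for a, b, c, d.
data = torch.arange(4)
torch.manual_seed(0)

for batch in DataLoader(data, batch_size=2, shuffle=True):
    print(batch)
# Pairings such as (a, c) that cross the original batch boundary can
# appear, which is only possible if the data is shuffled before batching.
```

If shuffling happened inside each in-order batch, every batch would be some ordering of {a, b} or {c, d}; running this a few times shows mixed pairs instead.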
shuffle (bool, optional): set to ``True`` to have the data reshuffled
at every epoch (default: ``False``).
if shuffle:
    sampler = RandomSampler(dataset)  # the sampler produces shuffled indices
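The shuffled indices are then chunked into batches by a BatchSampler. A small sketch (the 6-element list is an arbitrary stand-in dataset):

```python
import torch
from torch.utils.data import RandomSampler, BatchSampler

dataset = list(range(6))  # a stand-in dataset; the sampler only needs its length

torch.manual_seed(0)
sampler = RandomSampler(dataset)  # yields a random permutation of indices 0..5
batches = list(BatchSampler(sampler, batch_size=2, drop_last=False))
print(batches)  # three index pairs, drawn from the shuffled order
```

Every index appears exactly once per epoch; only the order changes, and batches are cut from that order.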
Supplement: a quick test of how shuffle=True works in the PyTorch DataLoader. Let's look at the code:
import sys
import torch
import random
import argparse
import numpy as np
import pandas as pd
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import lr_scheduler
from torchvision import datasets, transforms
from torch.utils.data import TensorDataset, DataLoader, Dataset
class DealDataset(Dataset):
    def __init__(self):
        xy = np.loadtxt(open('./iris.csv', 'rb'), delimiter=',', dtype=np.float32)
        # data = pd.read_csv("iris.csv", header=None)
        # xy = data.values
        self.x_data = torch.from_numpy(xy[:, 0:-1])
        self.y_data = torch.from_numpy(xy[:, [-1]])
        self.len = xy.shape[0]

    def __getitem__(self, index):
        return self.x_data[index], self.y_data[index]

    def __len__(self):
        return self.len
dealDataset = DealDataset()
train_loader2 = DataLoader(dataset=dealDataset,
                           batch_size=2,
                           shuffle=True)
# print(dealDataset.x_data)
for i, data in enumerate(train_loader2):
    inputs, labels = data
    # inputs, labels = Variable(inputs), Variable(labels)  # Variable is unnecessary in modern PyTorch
    print(inputs)
    # print("epoch:", epoch, "batch", i, "inputs", inputs.data.size(), "labels", labels.data.size())
A simple dataset (iris.csv) is used. The result after shuffling: the data is randomly reshuffled at every epoch and then split into mini-batches of size n.
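This per-epoch reshuffling can be verified without iris.csv, using a synthetic TensorDataset (a sketch; the 10×4 feature matrix is an assumed stand-in for the real file):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Synthetic stand-in for iris.csv: 10 rows of 4 features plus a label column.
x = torch.arange(40, dtype=torch.float32).reshape(10, 4)
y = torch.zeros(10, 1)
loader = DataLoader(TensorDataset(x, y), batch_size=2, shuffle=True)

torch.manual_seed(0)
epoch1 = torch.cat([inputs for inputs, _ in loader])
epoch2 = torch.cat([inputs for inputs, _ in loader])

# Each iteration over the loader draws a fresh permutation, so the row
# order generally differs between epochs while the set of rows is identical.
print(epoch1[:, 0])
print(epoch2[:, 0])
```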
The above is my personal experience; I hope it can serve as a reference for everyone.