Computing Euclidean Distance with PyTorch Tensors
======================================================================
As is well known, a GPU runs far faster than a CPU.
`
import numpy as np
import time
import torch
import torch.nn.functional as F
a = np.random.rand(1, 1000000)        # numpy arrays
b = np.random.rand(1, 1000000)
c = torch.rand(1, 1000000)            # tensors on the CPU
d = torch.rand(1, 1000000)
e = torch.rand(1, 1000000).cuda()     # tensors on the GPU
f = torch.rand(1, 1000000).cuda()
# Time the numpy computation
time_start=time.time()
dist1 = np.linalg.norm(a - b)  # Euclidean distance with numpy
time_end=time.time()
print(time_end-time_start)
# Time the tensor computation on the CPU
time_start=time.time()
dist2 = F.pairwise_distance(c, d, p=2)  # Euclidean distance with PyTorch
time_end=time.time()
print(time_end-time_start)
# Time the tensor computation on CUDA
time_start=time.time()
dist3 = F.pairwise_distance(e, f, p=2)  # same computation on the GPU
time_end=time.time()
print(time_end-time_start)
`
Results: 0.0031995773315429688 s for numpy, 0.025923967361450195 s for the tensor on the CPU, and 0.0006928443908691406 s for the tensor on CUDA.
Even this small example makes the point: the GPU holds a huge advantage over the CPU!
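One caveat not covered above: CUDA kernels launch asynchronously, so timing them with time.time() alone can misstate the real cost. A minimal sketch of a fairer measurement, assuming a CUDA device is available, calls torch.cuda.synchronize() before reading the clock:

`
import time
import torch
import torch.nn.functional as F

e = torch.rand(1, 1000000, device='cuda')
f = torch.rand(1, 1000000, device='cuda')

torch.cuda.synchronize()               # wait for any pending GPU work
time_start = time.time()
dist = F.pairwise_distance(e, f, p=2)
torch.cuda.synchronize()               # wait for the kernel to finish before stopping the clock
time_end = time.time()
print(time_end - time_start)
`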
However, the common distance routines below only work on numpy arrays, not on tensors, so data sitting on the GPU has to be pulled back to the CPU. For example, the Euclidean distance can be computed like this:
`
# -*- coding: utf-8 -*-
from numpy import *
vector1 = mat([1, 2, 3])
vector2 = mat([4, 5, 6])
print(sqrt((vector1 - vector2) * ((vector1 - vector2).T)))  # Euclidean distance via matrix product
import numpy as np
x = np.random.random(10)
y = np.random.random(10)
# solution 1: norm of the difference
dist1 = np.linalg.norm(x - y)
# solution 2: explicit square / sum / sqrt
dist2 = np.sqrt(np.sum(np.square(x - y)))
print('x', x)
print('y', y)
print('dist1:', dist1)
print('dist2:', dist2)
# solution 3: scipy's pdist
from scipy.spatial.distance import pdist
X = np.vstack([x, y])
d2 = pdist(X)[0]
print('d2:', d2)
`
The functions used here force the data to come back from the GPU to the CPU.
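A minimal sketch of what that round trip looks like (the tensor names here are illustrative, not from the original code): a CUDA tensor has to be moved to host memory with .cpu() and converted with .numpy() before any numpy or scipy routine can touch it:

`
import numpy as np
import torch

x_gpu = torch.rand(1000000, device='cuda')
y_gpu = torch.rand(1000000, device='cuda')

# Pull the data back to the CPU and hand it to numpy
x_cpu = x_gpu.cpu().numpy()
y_cpu = y_gpu.cpu().numpy()
dist = np.sqrt(np.sum(np.square(x_cpu - y_cpu)))
print(dist)
`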
But when using:
`
import torch.nn.functional as F
distance = F.pairwise_distance(rep_a, rep_b, p=2)
`
you are very likely to run into this error:

RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)
Printing the shapes of a and b shows that a has shape [20, 5] while b has shape [5]. The fix is to use PyTorch's view function to reshape b to [5, 1]:
`
# b originally has shape [5]:      tensor([1, 2, 3, 4, 5], device='cuda:0')
# after view, b has shape [5, 1]:  tensor([[1], [2], [3], [4], [5]], device='cuda:0')
b = b.view(len(b), 1)
print(b)
`
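For what it is worth, the same reshape can also be written with unsqueeze; a small self-contained sketch (the tensor here is illustrative):

`
import torch

b = torch.tensor([1, 2, 3, 4, 5])
print(b.view(len(b), 1).shape)   # torch.Size([5, 1])
print(b.unsqueeze(1).shape)      # torch.Size([5, 1]), equivalent to the view above
`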
This problem commonly appears when taking one row of a numpy array and converting it into a PyTorch variable. Similarly, you can reshape that row up front with numpy's reshape(), then convert it into a PyTorch variable before doing the matrix multiplication. Code below:
`
# b originally looks like:   [1, 2, 3, 4, 5]
# after reshape it becomes:  [[1.] [2.] [3.] [4.] [5.]]
import numpy as np
import torch
from torch.autograd import Variable

b = np.reshape(b, (len(b), 1))
b = Variable(torch.from_numpy(b).type(torch.FloatTensor))
print(b)
`
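As a side note not in the original post: since PyTorch 0.4, Variable is deprecated and tensors carry autograd state themselves, so the conversion can be written more simply. A minimal sketch:

`
import numpy as np
import torch

b = np.array([1, 2, 3, 4, 5])
b = torch.from_numpy(b.reshape(len(b), 1)).float()  # shape [5, 1], dtype float32
print(b)
`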