[{"createTime":1735734952000,"id":1,"img":"hwy_ms_500_252.jpeg","link":"https://activity.huaweicloud.com/cps.html?fromacct=261f35b6-af54-4511-a2ca-910fa15905d1&utm_source=V1g3MDY4NTY=&utm_medium=cps&utm_campaign=201905","name":"华为云秒杀","status":9,"txt":"华为云38元秒杀","type":1,"updateTime":1735747411000,"userId":3},{"createTime":1736173885000,"id":2,"img":"txy_480_300.png","link":"https://cloud.tencent.com/act/cps/redirect?redirect=1077&cps_key=edb15096bfff75effaaa8c8bb66138bd&from=console","name":"腾讯云秒杀","status":9,"txt":"腾讯云限量秒杀","type":1,"updateTime":1736173885000,"userId":3},{"createTime":1736177492000,"id":3,"img":"aly_251_140.png","link":"https://www.aliyun.com/minisite/goods?userCode=pwp8kmv3","memo":"","name":"阿里云","status":9,"txt":"阿里云2折起","type":1,"updateTime":1736177492000,"userId":3},{"createTime":1735660800000,"id":4,"img":"vultr_560_300.png","link":"https://www.vultr.com/?ref=9603742-8H","name":"Vultr","status":9,"txt":"Vultr送$100","type":1,"updateTime":1735660800000,"userId":3},{"createTime":1735660800000,"id":5,"img":"jdy_663_320.jpg","link":"https://3.cn/2ay1-e5t","name":"京东云","status":9,"txt":"京东云特惠专区","type":1,"updateTime":1735660800000,"userId":3},{"createTime":1735660800000,"id":6,"img":"new_ads.png","link":"https://www.iodraw.com/ads","name":"发布广告","status":9,"txt":"发布广告","type":1,"updateTime":1735660800000,"userId":3},{"createTime":1735660800000,"id":7,"img":"yun_910_50.png","link":"https://activity.huaweicloud.com/discount_area_v5/index.html?fromacct=261f35b6-af54-4511-a2ca-910fa15905d1&utm_source=aXhpYW95YW5nOA===&utm_medium=cps&utm_campaign=201905","name":"底部","status":9,"txt":"高性能云服务器2折起","type":2,"updateTime":1735660800000,"userId":3}]
GPU acceleration works through massive parallel computation. On a GPU you have a huge number of cores; each one is not very powerful on its own, but the sheer number of cores is what matters.
Frameworks like PyTorch try to parallelize as much of the computation as possible. In general, matrix operations lend themselves very well to parallelization, but it is not always possible to parallelize a computation!
In your example you have a loop:

b = torch.ones(4, 4).cuda()
for _ in range(1000000):
    b += b
These operations cannot be parallelized across iterations: if you think about it, to compute the next b you need to know the value of the previous (or current) b.
So you have 1000000 operations, but each of them has to be computed one after another. The possible parallelization is limited to the size of your tensor, and in your example that size is not very large:

torch.ones(4, 4)

So you can only parallelize 16 operations (the additions) per iteration.
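As an aside, this particular loop could in principle be collapsed: since b += b just doubles b, n iterations are mathematically equivalent to a single multiplication by 2**n, which is one fully parallel operation. A small sketch of that rewrite (mine, not something PyTorch does for you automatically; assumes a CUDA device is available):

import torch

n = 20  # kept small so 2**n still fits comfortably in float32
b = torch.ones(4, 4).cuda()

# Sequential version: each step needs the previous b,
# so the n additions must run one after another.
b_seq = b.clone()
for _ in range(n):
    b_seq += b_seq

# Equivalent single operation: one kernel, fully parallel
# across all 16 elements.
b_par = b * (2 ** n)

print(torch.equal(b_seq, b_par))  # True

Real workloads usually cannot be rewritten like this, which is exactly why the sequential structure of the loop hurts the GPU here.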
Since the CPU has few, but much more powerful cores, it is simply much faster for the given example!
But things change if you increase the size of the tensor; then PyTorch is able to parallelize much more of the overall computation. I changed the iteration count to 1000 (because I did not want to wait that long :), but you can put in any value you like; the relation between CPU and GPU should stay the same.
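A minimal sketch of such a timing comparison (assumptions: a CUDA device is available, and wall-clock timing via time.time() with torch.cuda.synchronize() to account for asynchronous kernel launches):

import time
import torch

def benchmark(size, iterations=1000):
    # CPU: plain wall-clock timing is fine, the ops run synchronously.
    b = torch.ones(size, size)
    start = time.time()
    for _ in range(iterations):
        b += b
    cpu_time = time.time() - start

    # GPU: CUDA kernels launch asynchronously, so synchronize
    # before reading the clock on both ends of the measurement.
    b = torch.ones(size, size).cuda()
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iterations):
        b += b
    torch.cuda.synchronize()
    gpu_time = time.time() - start

    print(f"{size}x{size}: CPU {cpu_time:.6f} s, GPU {gpu_time:.6f} s")

for size in (4, 40, 400, 4000):
    benchmark(size)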
Here are the results for different tensor sizes:

#torch.ones(4,4) - the size you used
CPU time = 0.00926661491394043
GPU time = 0.0431208610534668
#torch.ones(40,40) - CPU gets slower, but still faster than GPU
CPU time = 0.014729976654052734
GPU time = 0.04474186897277832
#torch.ones(400,400) - CPU now much slower than GPU
CPU time = 0.9702610969543457
GPU time = 0.04415607452392578
#torch.ones(4000,4000) - GPU much faster than CPU
CPU time = 38.088677167892456
GPU time = 0.044649362564086914
As you can see, where parallelization is possible (here, the addition of the tensor elements), the GPU becomes very powerful.
The GPU time did not change at all for the given computations; the GPU can handle much more!
(as long as it doesn't run out of memory :)
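If you want to see how close you are to that limit, PyTorch exposes simple memory counters; a quick sketch (a 4000x4000 float32 tensor holds 16 million values, about 64 MB):

import torch

b = torch.ones(4000, 4000).cuda()  # 16M float32 values, ~64 MB
print(f"allocated: {torch.cuda.memory_allocated() / 1024**2:.1f} MiB")
print(f"total:     {torch.cuda.get_device_properties(0).total_memory / 1024**2:.1f} MiB")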