Test code:
```python
import time

import torch
from loguru import logger

device = 'cuda'
batch_size = 1000
image_channel = 3
image_size = 224
count = int(100000 / batch_size)

logger.debug('preparing input data')
input_data = torch.randn(batch_size, image_channel, image_size, image_size)
total_bytes = input_data.numel() * input_data.element_size()
print('total_MB', total_bytes / 1024 / 1024)

logger.debug('starting timer')
started_at = time.time()
for i in range(count):
    input_data_with_cuda = input_data.to(device)
ended_at = time.time()
print('pay time', ended_at - started_at)
```
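The `total_MB` figure the script prints can be checked by hand: one batch is 1000 × 3 × 224 × 224 float32 elements at 4 bytes each, and each run copies that buffer 100 times. A quick sketch in plain Python:

```python
# Size of one batch: 1000 images, 3 channels, 224x224, float32 (4 bytes/element)
batch_size, channels, size = 1000, 3, 224
elements = batch_size * channels * size * size
total_mb = elements * 4 / 1024 / 1024
print('total_MB', total_mb)  # 574.21875, matching the script's output

# 100000 samples / 1000 per batch = 100 copies of the same buffer per run
iterations = 100000 // batch_size
total_gb = total_mb * iterations / 1024
print('total_GB per run', round(total_gb, 2))  # 56.08
```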
Measure how fast this runs on different platforms, since the result necessarily depends on host memory speed, GPU memory bandwidth, GPU memory speed, and so on.
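One caveat in the script above: it uses pageable host memory, and pinned (page-locked) buffers usually transfer noticeably faster; an explicit synchronize also guarantees no copy is still in flight when the clock stops. A minimal sketch of a fairer measurement, assuming a CUDA-capable PyTorch build (the measurement is skipped when no GPU is present; `measure_h2d_seconds` is a hypothetical helper name):

```python
import time

import torch


def measure_h2d_seconds(tensor: torch.Tensor, iterations: int = 100) -> float:
    """Time host-to-device copies of `tensor`, repeated `iterations` times."""
    pinned = tensor.pin_memory()          # page-locked memory enables faster DMA
    torch.cuda.synchronize()
    started = time.time()
    for _ in range(iterations):
        pinned.to('cuda', non_blocking=True)
    torch.cuda.synchronize()              # wait for all pending copies to finish
    return time.time() - started


if torch.cuda.is_available():
    data = torch.randn(1000, 3, 224, 224)
    print('pinned h2d time:', measure_h2d_seconds(data))
```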
Test platform 1: Intel Xeon E5-2690 CPU + Tesla M60 GPU
CPU: Intel Xeon E5-2690
RAM: DDR4 2400 MHz
GPU: NVIDIA Tesla M60
Run results:
2023-03-15 07:18:28.542 | DEBUG | __main__:<module>:15 - preparing input data
total_MB 574.21875
2023-03-15 07:18:29.688 | DEBUG | __main__:<module>:23 - starting timer
pay time 12.158783435821533
Test platform 2: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz + Tesla T4 GPU
CPU: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
RAM: DDR4 3200 MHz
GPU: NVIDIA Tesla T4
Run results:
2023-03-15 15:38:21.722 | DEBUG | __main__:<module>:15 - preparing input data
total_MB 574.21875
2023-03-15 15:38:22.766 | DEBUG | __main__:<module>:23 - starting timer
pay time 13.845425367355347
Test platform 3: MacBook Pro 13″ with Apple Silicon M1 (8-core CPU, 8-core GPU)
Run results:
2023-03-15 15:39:53.084 | DEBUG | __main__:<module>:15 - preparing input data
total_MB 574.21875
2023-03-15 15:39:54.708 | DEBUG | __main__:<module>:23 - starting timer
pay time 4.494465112686157
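Each run moves the same 574.22 MB buffer 100 times, so an effective host-to-device rate can be derived from the measured wall times. Note these figures include Python loop overhead, so they are lower bounds on the raw transfer bandwidth:

```python
total_gb = 574.21875 * 100 / 1024  # data moved per run, in GB

# Measured wall times from the three runs above, in seconds
times = {
    'Xeon E5-2690 + Tesla M60': 12.159,
    'Xeon Gold 5218 + Tesla T4': 13.845,
    'Apple M1': 4.494,
}
for name, seconds in times.items():
    print(f'{name}: {total_gb / seconds:.2f} GB/s')
```

This works out to roughly 4.6, 4.1, and 12.5 GB/s respectively.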