method 1: The GPU utilization shown in the Windows 10 Task Manager may be wrong (by default it charts the 3D engine, so CUDA compute work can show up as near-zero load), and many bloggers have been misled by that figure. So first check the true GPU utilization with nvidia-smi (for example, nvidia-smi -l 1 refreshes the reading every second) or other software such as GPU-Z.
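If you prefer to read the true utilization from Python, NVIDIA's NVML bindings expose the same numbers. A minimal sketch, assuming the nvidia-ml-py package (imported as pynvml) is installed:

import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)        # GPU 0
util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # same figures nvidia-smi reports
print(f"GPU: {util.gpu}%  memory controller: {util.memory}%")
pynvml.nvmlShutdown()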

method 2: After the first case has been ruled out, consider that the model may be too small or the batch size too small to keep the GPU busy. Either change the model structure or turn the batch size up; the sketch below shows one way to probe how far it can go.
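One rough way to find headroom is to double the batch size until the GPU runs out of memory and keep the last size that worked. A minimal sketch (the function name, shapes, and limits are assumptions; training also needs memory for gradients and optimizer state, so stay well below the returned value):

import torch

def largest_batch_size(model, sample_shape, device, start=16, limit=4096):
    # Double the batch size until CUDA raises out-of-memory,
    # then return the last size that ran successfully.
    best, size = None, start
    while size <= limit:
        try:
            with torch.no_grad():
                model(torch.randn(size, *sample_shape, device=device))
            best, size = size, size * 2
        except RuntimeError as err:
            if "out of memory" not in str(err):
                raise
            torch.cuda.empty_cache()
            break
    return best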

method 3: If, after ruling out the second case, GPU utilization is still very low while CPU utilization is very high, the bottleneck is most likely data loading: the GPU finishes each batch faster than the CPU can deliver the next one. So load the data with multiple worker processes by setting num_workers to 2, 4, 8, or 16, and if you have enough memory, also set pin_memory=True. That page-locks the data buffers so they cannot be swapped out to virtual memory, which saves a bit of transfer time to the GPU. For example:
loader = torch.utils.data.DataLoader(image_datasets[x], batch_size=batch_size,
                                     shuffle=True, num_workers=4, pin_memory=True)
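With pinned batches, the copy to the GPU can also be made asynchronous so it overlaps with computation. A minimal sketch continuing the loader above (the device variable and training loop are assumptions):

device = torch.device("cuda")
for inputs, labels in loader:
    # non_blocking=True only helps when the source tensors are in pinned memory
    inputs = inputs.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)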
method 4: If setting num_workers to a nonzero value then causes a broken pipe error, wrap the main program in an if __name__ == '__main__': guard. (On Windows, the worker subprocesses re-import the script, so code outside the guard would run again in every worker.) For example:
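A minimal sketch of the guard, with a synthetic dataset standing in for a real one (an assumption for illustration):

import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    # Everything that builds or consumes the DataLoader lives under main(),
    # so the worker subprocesses can re-import this file safely.
    dataset = TensorDataset(torch.randn(1000, 3, 32, 32), torch.randint(0, 10, (1000,)))
    loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)
    for inputs, labels in loader:
        pass  # training step goes here

if __name__ == '__main__':
    main()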

If that still does not work, just set num_workers=0.

method 5: If the data sits on a solid-state drive, loading is already fast, so a num_workers of 0, 1, 2, or 4 is fine; don't set it too large. Try each value and see which is faster. With too many workers, the CPU spends too much time gathering the data from all the worker processes, and loading speed drops instead of rising. The sketch below times one pass over the loader for each candidate value.
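A minimal sketch of such a sweep (the synthetic dataset and candidate values are assumptions):

import time
import torch
from torch.utils.data import DataLoader, TensorDataset

if __name__ == '__main__':  # the guard from method 4, needed once num_workers > 0 on Windows
    dataset = TensorDataset(torch.randn(5000, 3, 64, 64), torch.randint(0, 10, (5000,)))
    for workers in (0, 1, 2, 4):
        loader = DataLoader(dataset, batch_size=64, num_workers=workers)
        start = time.time()
        for batch in loader:
            pass  # just iterate; we are timing data loading, not training
        print(f"num_workers={workers}: {time.time() - start:.2f}s")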

method 6: If there is still a problem, try restarting the computer and see whether that solves it.
