This is going to be a short one, but one I'm sure I will come back to often.
I have an Nvidia RTX 3060 with 12GB of VRAM. I bought it specifically to play with AI stuff, since I'm not much of a gamer, or at least I don't like playing on the PC.
I have been very happy with its performance when it comes to AI-generated images using Stable Diffusion, and I have also noticed a difference when editing video, which is a bonus.
However, when I try to train my own models I have run into the "out of memory" error more than once.
I have found two solutions to my problem and want to share them here for anyone who runs into similar issues. By the way, this applies to Automatic1111, but if you have the same problem with another UI these solutions will likely work there too.
Solution 1:
I realized that I get the error whenever the training resolution is higher than 384. You can set the resolution during training to 384 (the default is 512), so just change that. However, make sure to also crop your training images to that size, or the training results might not be as good.
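If you have a folder of training images, a small script like this can resize and center-crop everything to 384x384 in one go. This is just a quick sketch using Pillow; the input_dir and output_dir folder names are placeholders I made up, so point them at your own folders.

from pathlib import Path
from PIL import Image, ImageOps

# Placeholder folder names -- change these to match your own setup
input_dir = Path("training_images")
output_dir = Path("training_images_384")
output_dir.mkdir(exist_ok=True)

for path in input_dir.iterdir():
    if path.suffix.lower() not in {".jpg", ".jpeg", ".png", ".webp"}:
        continue
    img = Image.open(path).convert("RGB")
    # Resize the shorter side to 384, then center-crop to a 384x384 square
    img = ImageOps.fit(img, (384, 384), Image.LANCZOS)
    img.save(output_dir / path.name)

Pillow's ImageOps.fit does the resize and the center crop in one step, which saves having to work out the crop box by hand.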
Solution 2:
Once again, this is for the Automatic1111 UI.
NOTE: For this solution you don’t have to reduce the resolution from the default of 512.
On Windows, edit webui-user.bat and add this line:
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128
Save the file, start the UI, and you should be good to go.
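From what I understand, that setting tells PyTorch's CUDA allocator to start reclaiming cached memory once usage passes 60% and to avoid splitting blocks larger than 128 MB, which helps with fragmentation. If you want to confirm the variable is actually being picked up, here is a small sanity-check sketch (not part of Automatic1111, just plain PyTorch) that you can run from the same environment the UI uses:

import os
import torch

# PyTorch reads its CUDA allocator settings from this environment variable
print("PYTORCH_CUDA_ALLOC_CONF =", os.environ.get("PYTORCH_CUDA_ALLOC_CONF"))

if torch.cuda.is_available():
    # Report current VRAM usage on the first GPU
    print("GPU:", torch.cuda.get_device_name(0))
    print("Allocated MiB:", torch.cuda.memory_allocated(0) / 1024**2)
    print("Reserved MiB: ", torch.cuda.memory_reserved(0) / 1024**2)
else:
    print("CUDA is not available in this Python environment")

If the first line prints None, the variable is not set in that shell, so double-check that you launched things through the edited webui-user.bat.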
So let's go: train your own models and make your own creations.
If you run into any problems, drop me a line and I will see if I can help. After all, we are all learning as we go, so I'm pretty sure there are more solutions out there.