In meta_train phase how to change batch_size #49

Open
lishuaijun1997 opened this issue Apr 7, 2021 · 6 comments

@lishuaijun1997

Thank you for your code! I have an issue with the PyTorch meta_train phase: I find that when batch_sampler is used, I cannot set batch_size. My GPU is an Nvidia 3070, and it runs out of CUDA memory. I hope yaoyao can help me.

@yaoyao-liu
Owner

Thanks for your interest in our work.
May I know what you mean by "when batch_sampler is used, I cannot set batch_size"?
Do you mean that if you change the batch size, the code does not work?

Best,
Yaoyao

@lishuaijun1997
Author

self.train_sampler = CategoriesSampler(self.trainset.label, self.args.num_batch, self.args.way, self.args.shot + self.args.train_query)
self.train_loader = DataLoader(dataset=self.trainset, batch_sampler=self.train_sampler, num_workers=8, pin_memory=True)
The code only has pre_batch_size; it has no meta batch_size. So in the meta_train phase, I cannot change the meta batch size.
Thank you for your reply.
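
For context: in PyTorch, a batch_sampler yields whole lists of indices, so the sampler alone decides how many samples go into each batch, and DataLoader treats batch_sampler as mutually exclusive with batch_size (and shuffle, sampler, drop_last). A minimal standalone sketch with a toy dataset (not the repository's trainset):

import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for self.trainset, for illustration only.
dataset = TensorDataset(torch.randn(100, 3, 80, 80), torch.randint(0, 5, (100,)))

# A batch sampler yields complete lists of indices, so it alone sets the batch size;
# specifying batch_size together with batch_sampler raises a ValueError.
episode_indices = [list(range(i, i + 10)) for i in range(0, 100, 10)]
loader = DataLoader(dataset, batch_sampler=episode_indices, num_workers=0)

images, labels = next(iter(loader))
print(images.shape)  # torch.Size([10, 3, 80, 80]) -- batch size comes from the sampler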

@yaoyao-liu
Owner

May I know what you mean by "meta-batch size"?
If you mean "the number of tasks used for a single update of the meta-model", it is one and it cannot be changed in the current PyTorch code.

Best,
Yaoyao
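
As a generic illustration of that definition (not code from this repository): with a meta-batch size greater than one, the meta-gradient would be accumulated over several tasks before a single optimizer step. The model, optimizer, and toy task loss below are hypothetical stand-ins.

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(4, 2)                  # stand-in for the meta-model
meta_optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
meta_batch_size = 4                      # hypothetical; the current PyTorch code uses 1

def toy_task_loss(model):
    # Stand-in for the per-task inner loop plus query-set loss.
    x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
    return F.cross_entropy(model(x), y)

meta_optimizer.zero_grad()
for _ in range(meta_batch_size):
    loss = toy_task_loss(model)
    (loss / meta_batch_size).backward()  # accumulate the averaged gradient across tasks
meta_optimizer.step()                    # one meta-update for the whole meta-batch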

@yaoyao-liu
Owner

The meta-batch size is already 1, so we are not able to reduce it. If you hope to decrease the GPU memory required, you may change the following parameters (i.e., train_query, val_query, and update_step). I don't know how much GPU memory your 3070 has; I recommend running this project on a GPU with more than 16GB of memory.
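
As a rough illustration of why train_query matters: the sampler in the snippet above draws shot + train_query samples per class for way classes, so each episode contains way * (shot + train_query) images, and lowering train_query directly shrinks every forward pass. The way/shot values below are hypothetical.

# Back-of-the-envelope episode size; actual memory also depends on image size,
# the backbone, and update_step.
way, shot = 5, 1  # e.g., a 5-way 1-shot setting
for train_query in (15, 10, 5):
    images_per_episode = way * (shot + train_query)
    print(f"train_query={train_query}: {images_per_episode} images per episode")
# train_query=15: 80 images per episode
# train_query=10: 55 images per episode
# train_query=5: 30 images per episode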

@lishuaijun1997
Author

My GPU memory is 8GB, so I will try my lab's TITAN V. I am very interested in your work, so I will study your paper carefully!
I truly appreciate your timely help.

@yaoyao-liu
Owner

No problem. If you have any further questions, feel free to send me an email or add comments on this issue.
