Thanks for your amazing work!
I want to convert SMPL vertices to STAR, but it fails.
I think I may have mistaken the format of the input data. The input I am using is the SMPL vertices from the People-Snapshot dataset, with shape (6890, 3).
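For reference, here is a minimal sketch of what I believe my input looks like (the path is a placeholder, and the batching at the end is my own assumption, not code from the STAR repo):

```python
import numpy as np

# Placeholder path: per-frame SMPL vertices from People-Snapshot.
smpl_verts = np.load('smpl_verts.npy')
print(smpl_verts.shape, smpl_verts.dtype)  # e.g. (6890, 3) float32

# If the converter expects a batch, stacking several frames would give
# an array of shape (num_frames, 6890, 3). This is an assumption on my
# part, not something I found documented in the repo.
batch = np.stack([smpl_verts, smpl_verts])
print(batch.shape)  # (2, 6890, 3)
```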
When I run `convert_smpl_to_star.py`, the output is:
```
/project/macaoyuan/STAR/convertors/losses.py:78: UserWarning: The Default optimization parameters (MAX_ITER_EDGES,MAX_ITER_VERTS) were tested on batch size 32 or smaller batches
  'The Default optimization parameters (MAX_ITER_EDGES,MAX_ITER_VERTS) were tested on batch size 32 or smaller batches')
Loading the SMPL Meshes and
STAGE 1/2 - Fitting the Model on Edges Objective
Traceback (most recent call last):
  File "./convertors/convert_smpl_to_star.py", line 56, in <module>
    np_poses , np_betas , np_trans , star_verts = convert_smpl_2_star(smpl,**opt_parms)
  File "/project/macaoyuan/STAR/convertors/losses.py", line 98, in convert_smpl_2_star
    d = star(poses, betas, trans)
  File "/home/macaoyuan/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/macaoyuan/.conda/envs/neuralbody/lib/python3.7/site-packages/star/pytorch/star.py", line 139, in forward
    v = v + trans[:,None,:]
RuntimeError: CUDA out of memory. Tried to allocate 544.00 MiB (GPU 0; 15.78 GiB total capacity; 13.47 GiB already allocated; 194.00 MiB free; 14.25 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
I think there is something wrong with my input data. Could you explain the expected input format and usage of the conversion script?
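In case it is relevant: the UserWarning above says the default optimization parameters were only tested on batches of 32 or smaller, so I wonder whether I should be converting the data in chunks rather than all at once. A minimal sketch of what I mean (the chunking helper is my own and the path is a placeholder; each chunk would then be passed through the converter separately):

```python
import numpy as np

def chunks(verts, size=32):
    """Yield batches of at most `size` frames from a (N, 6890, 3) array."""
    for start in range(0, len(verts), size):
        yield verts[start:start + size]

# Placeholder path; the resulting STAR parameters from each chunk would
# be concatenated at the end.
smpl_verts = np.load('smpl_batch.npy')
batches = list(chunks(smpl_verts))  # each batch has shape (<=32, 6890, 3)
```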