Description
Hi,
Thanks for the awesome work! I'm trying to generate meshes from my own point clouds (generated with DiT3D, also trained on ShapeNet). I have already pre-processed the data so its format matches what this repo uses: (i) orienting the pose to right-side heading, (ii) sampling 3K points per cloud, and (iii) normalizing the coordinates into [-0.5, 0.5]. See Figure 1 below for some mesh results (right side) from the point cloud inputs (left side). As Figure 1 shows, the results are quite unexpected. I have already tried the provided pre-trained models, (a) small noise, (b) large noise, and (c) the outliers models, but the results are still quite unexpected. Here are some questions:
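For reference, this is roughly how I pre-process each cloud (a minimal sketch of my own code, not taken from this repo; the function name and the sampling/normalization details are my choices):

```python
import numpy as np

def preprocess(points: np.ndarray, n_points: int = 3000, seed: int = 0) -> np.ndarray:
    """Sample 3K points and normalize an (N, 3) cloud into [-0.5, 0.5]."""
    rng = np.random.default_rng(seed)
    # Randomly (sub)sample to exactly n_points.
    idx = rng.choice(len(points), size=n_points, replace=len(points) < n_points)
    pts = points[idx]
    # Center at the bounding-box midpoint and scale the longest side to 1,
    # so all coordinates fall inside [-0.5, 0.5].
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    pts = (pts - (mins + maxs) / 2.0) / (maxs - mins).max()
    # (The rotation to right-side heading is applied beforehand and depends
    # on how my clouds are oriented, so it is omitted here.)
    return pts.astype(np.float32)
```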
- What are the possible reasons why the mesh results below are not good?
- Any direction on how to get good mesh shapes using the point clouds shown below as input?
- If new training or fine-tuning is needed to get good meshes for my own point clouds, how can I do it if I only have point cloud data (x, y, z) without normals or mesh data? (Would estimating normals myself, as in the sketch after this list, be a reasonable starting point?)
- Is the method in this repo designed to use one pre-trained model for all classes, or one pre-trained model per class? Does this also apply to the provided pre-trained models?
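To make the normals question concrete, this is the kind of thing I could produce on my side (a sketch using Open3D's normal estimation; whether normals obtained this way are good enough for training or fine-tuning is exactly what I'm unsure about):

```python
import numpy as np
import open3d as o3d

def estimate_normals(points: np.ndarray, knn: int = 30) -> np.ndarray:
    """Estimate per-point normals for an (N, 3) cloud with Open3D."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points.astype(np.float64))
    # Fit a local plane over the k nearest neighbours of each point.
    pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=knn))
    # Try to orient the normals consistently (the sign is otherwise ambiguous).
    pcd.orient_normals_consistent_tangent_plane(knn)
    return np.asarray(pcd.normals)
```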
Many thanks! :)
Figure 1 (these samples are right-side heading; I rotated them here only for visualization purposes)