Hi all,
Last night I merged the 'vertex-and-example-refactor' branch into develop. Here's a reminder of the main changes:
- 'SurfaceMesh' is now called just 'Mesh' and a lot of old/dead code has been removed.
- 'PositionMaterial' and 'PositionMaterialNormal' are removed and replaced with 'CubicVertex' and 'MarchingCubesVertex'. These encode the vertex data in an efficient way.
- We've added 'decode()' functions to convert the encoded vertex types into a regular 'Vertex' type. Alternatively, you can do this decoding on the GPU if you prefer; the examples demonstrate both approaches.
- The 'MarchingCubesSurfaceExtractor' and 'CubicSurfaceExtractor' classes are replaced with free functions called 'extractMarchingCubesMesh()' and 'extractCubicMesh()'. The old classes are still present for now but will be removed in the future.
- Works with VS 2012 again
The Mesh class has also been templatized on IndexType, but this turned out to be more complex than I imagined. To make use of it, the surface extractors need to know which IndexType to use, which means also templatizing them on the IndexType (or, more likely, on the MeshType). I haven't done this yet and it needs some more thought, so you cannot use 16-bit indices yet.
mordentral wrote:
I'm looking forward to the code base changes, but I was also curious about the status of the Dual contour branch.
I talked to Matt about this recently (he's the one who actually implemented the Dual Contouring). I think our main concern is that it is not clear how a user would interact with the Hermite data which the Dual Contouring algorithm uses. For each 'voxel', the Dual Contouring approach stores (as I recall) a normal and a distance along each edge for the intersection points, and this is basically just an encoding of the mesh... do we really want the user to be reading and writing these directly? Is that actually useful functionality?
Other solutions (VoxelFarm, Upvoid, etc) seem to operate on distance fields, octrees and CSG, with procedural generation being used to create most of the content on the fly. This is quite different from what PolyVox does, which is to store an explicit representation of the volume data. It seems that the tools also become a lot more important in Dual Contouring engines, again as a result of the underlying representation being more complex. This is probably higher level than we want to deal with in PolyVox.
That said, we do have some ideas for using Dual Contouring within PolyVox, but I don't think it's high priority and it needs a lot of thought/research to see how it would be used. It basically depends on whether we find a use for it in Cubiquity.