Freakazo wrote:
I had a look at making the surfaceMesh functions virtual, but the getVertices function may pose problems: assuming that addVertex, for example, simply calls ManualObject::position, there will be no way to read the vertex data directly until after ManualObject::end is called. A solution would be to add each vertex not just to Ogre but to a member vector as well. This would, however, add more overhead and more code for the user to implement when overriding the functions. I'm also not sure whether the overhead of virtual functions would be significant. This might be something considered to be core. Opinions? [Edit] Whoops, it seems template classes can't have virtual functions; I'll look into this a bit more.
There are basically two approaches which can be used here. The first is to have a base 'Mesh' class with subclasses for OpenGLMesh, OgreMesh, etc., and virtual functions for things like addVertex. The alternative approach is based on templates. For example, PolyVox volumes don't need to inherit from a common base class (though one is provided); they simply need to provide the required set of functions. The template-based approach is often known as 'compile-time polymorphism' or 'duck typing', if you want more info. Generally the second approach is better from a performance point of view.
This todo item was actually inspired by the TVA thread (viewtopic.php?f=2&t=338), in which realazthat talks about the use of OutputIterators. He does seem to know a little more about templates and STL/Boost than me, so it's worth looking into these. However, it may be that they are more or less than we need, and I would like to make sure I can maintain any code that gets added to PolyVox.
In terms of requirements, it would be nice if the output could go to std::vector, OpenGL, Direct3D, ManualObject, OpenMesh, and possibly others. We wouldn't actually include all of these in PolyVox, though; it would be up to users to write the required implementations.
Freakazo wrote:
With regard to the pointListExtractor, I wasn't able to find much information on this. It mostly seems that people go the opposite route of point list -> voxels, or directly to a mesh, so I'm unsure how to implement this. My (not very well thought through) implementation would be to create a rigid 3D grid, traverse it looking for points where it intersects the surface, then move each point to the interpolated position of the surface. Would this qualify as a point list surface extractor?
I was thinking even simpler, in that it would be for the cubic (Minecraft) style surfaces. All it would have to output would be a list of solid voxels which are next to an empty voxel. User code would then render a cube for each voxel using instancing and/or the geometry shader (relevant Ogre code here). This is interesting for a couple of reasons:
- It should use less memory than an index/vertex buffer mesh (unless decimation is used?). It may also execute faster than the CubicSurfaceExtractor because it is simpler, so perhaps it is well suited to smaller but more dynamic volumes.
- Rather than just rendering cubes for each voxel, you could perhaps render a more interesting mesh. Not sure exactly what is possible here though.
A basic implementation of the PolyVox part should be very straightforward, and you could later make it more interesting with materials, support for transparency, and support for density fields (using interpolation as you suggest, though I'm not sure how useful that is).