Midnight wrote:
So one thing that I've learned is that the old polyvox "Master" Branch is really just the old 2013 branch of polyvox.
Yes, this is true. Master hasn't been updated in a while, basically because we were busy with Voxeliens and Cubiquity. The development branch is much more recent. I'm not adding any more features at the moment, but for the last couple of months I've been tidying up the development branch ready to make it the new master (i.e. do a new PolyVox release).
However, more on the future of PolyVox further down.
Midnight wrote:
The "Development" Branch is however the newer version which does GPU decoded texture arrays, for "multitextures" like actual layered images or something.
Yes, but PolyVox doesn't really 'do' texturing - that is entirely up to the user. It doesn't even generate texture coordinates. Personally I have never done textured cubic meshes with PolyVox, though I have done textured Marching Cubes meshes. Therefore any information I give about texturing cubic meshes is generally not well tested!
Midnight wrote:
It does the greedy meshing as well which is screwing me up at the moment but was the optimization one needs for any minecraft-like.
You can turn off the greedy meshing, but you probably don't need to. The syntax is ugly but you can change:
Code:
// Default behaviour - quads are merged (greedy meshing enabled).
auto mesh = extractCubicMesh(&volData, volData.getEnclosingRegion());
to:
Code:
DefaultIsQuadNeeded< uint8_t > isQuadNeeded; // The default quad policy, passed explicitly here.
auto mesh = extractCubicMesh(&volData, volData.getEnclosingRegion(), isQuadNeeded, false); // The final 'false' disables the merging of quads.
I didn't properly test this as I don't have wireframe rendering set up at the moment.
Midnight wrote:
It also doesn't require any build macros, it just compiles as headers now. So moving along, this is the current state, since it's not documented anywhere else ATM.

Correct!
Midnight wrote:
Edit: One thing I just thought of is that your greedy mesher will force a single texture. The greedy meshing should be performed after the texturing phase, or you're committed to texturing those segments with singular textures and not per voxel as I understand it. You're forcing the use of texture atlases and texture arrays, but leaving no room for custom implementations that don't rely solely on marching cubes and triplanar approaches.
The greedy meshing does take account of per-voxel data and does not merge faces if the data is different (usually this 'data' will be a material id). Remember, PolyVox has no concept of texturing so it can't do that first and then greedy meshing afterwards. But I believe the system is fairly flexible (though as mentioned, not well tested).
In a nutshell, your vertex data will have a 'data' member which is copied from the voxel (and which you can use as its material id). You should pass this value to the GPU with your vertex data and use it in the shader to select which texture to apply.
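For example, a rough (untested) sketch of copying the decoded vertices into your own vertex structure, so that the material id reaches the shader as a vertex attribute, might look something like the below. 'MyVertex' and 'vertexBuffer' are just placeholders here - with Irrlicht you would use its own vertex/buffer types instead:
Code:
#include <cstdint>
#include <vector>

// Placeholder engine-side vertex - not part of PolyVox.
struct MyVertex
{
    float x, y, z;    // position
    float materialId; // the voxel's 'data' value, used in the shader to pick a texture
};

// 'decodedMesh' is assumed to be the result of decodeMesh(extractCubicMesh(...)).
std::vector<MyVertex> vertexBuffer;
for (uint32_t ct = 0; ct < decodedMesh.getNoOfVertices(); ct++)
{
    const auto& vertex = decodedMesh.getVertex(ct);
    MyVertex myVertex;
    myVertex.x = vertex.position.getX();
    myVertex.y = vertex.position.getY();
    myVertex.z = vertex.position.getZ();
    myVertex.materialId = static_cast<float>(vertex.data); // copied straight from the voxel
    vertexBuffer.push_back(myVertex);
}
In the fragment shader you then use that value to pick the appropriate texture from your array or atlas.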
Midnight wrote:
What I don't understand at this point is how using irrlicht I can send the mesh to the shader and "decode it" whatever that means. between the mesh and decodedMesh I don't see any differences in the vertex counts for example. I think it has something to do with tris and quads, but I really have no clue.
I wouldn't worry about it too much at this point - it's an advanced technique and for now you can just decode on the CPU. But to give some idea, an 'encoded' vertex looks like this:
Code:
template<typename _DataType>
struct CubicVertex
{
    typedef _DataType DataType;

    /// Each component of the position is stored as a single unsigned byte.
    /// The true position is found by offsetting each component by 0.5f.
    Vector3DUint8 encodedPosition;

    /// A copy of the data which was stored in the voxel which generated this vertex.
    DataType data;
};
And a 'decoded' vertex looks like this:
Code:
template<typename _DataType>
struct Vertex
{
    typedef _DataType DataType;

    Vector3DFloat position;
    Vector3DFloat normal;
    DataType data;
};
Note how it requires more space because floats are used for the position rather than uint8_ts. Also there is a 'normal' member which does not get filled in (I think) and so wastes space. It might not seem worthwhile, but be aware that the Marching Cubes version is a bit more advanced.
The decoding is as follows:
Code:
inline Vector3DFloat decodePosition(const Vector3DUint8& encodedPosition)
{
    Vector3DFloat result(encodedPosition.getX(), encodedPosition.getY(), encodedPosition.getZ());
    result -= 0.5f; // Apply the required offset
    return result;
}

template<typename DataType>
Vertex<DataType> decodeVertex(const CubicVertex<DataType>& cubicVertex)
{
    Vertex<DataType> result;
    result.position = decodePosition(cubicVertex.encodedPosition);
    result.normal.setElements(0.0f, 0.0f, 0.0f); // Currently not calculated
    result.data = cubicVertex.data; // Data is not encoded
    return result;
}
The 'DecodeOnGPUExample' shows how this can be done in a shader, but it is for shader experts only.

Midnight wrote:
Is it advantageous at this point to remain idle, wait for (or merge myself) the shader branch of irrlicht, and wait until polyvox has a little more documentation? Some wiki updates would really keep the ball rolling for polyvox IMHO.
Right, so an important point here is that I am intending to wind down PolyVox development over the coming months and replace it with an open-source version of Cubiquity 2. I am preparing one final release for completeness, but after that it may not get much more attention.
When I started PolyVox 10 years ago it was very new/novel, but then Minecraft came along and now there are voxel engines everywhere. For Cubiquity 2 I want to get back to the cutting edge and do something new and different again. The exact scope is not yet defined and it is very much in the research phase while I finish off final releases of PolyVox, Cubiquity 1, and Voxeliens, but there will be more Cubiquity 2 news later in the year.
Do read our latest blog post for more information.