David Williams wrote:
I don't have much knowledge of sparse textures but I believe they are mostly in 2D. Does TVA allow worlds which are big enough to benefit from them? There is also a GPU gems paper called
Octree Textures on the GPU which extends the concept into 3D. Might be worth including that in your list as well.
Yes, they are 2D; the idea here would be to have three of them and use triplanar projection, so that 2D texturing techniques can be reused for 3D terrain.
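For clarity, here is a minimal sketch of the triplanar idea in GLSL; the sampler name, the scale, and the blend sharpness are just placeholders, not my actual shader:
Code:
// Minimal triplanar projection sketch (GLSL ES 2 style).
// worldPos and worldNormal come from the vertex shader; rockTex is a
// tiling 2D texture; texScale and blendSharpness are tuning constants.
uniform sampler2D rockTex;
uniform float texScale;        // e.g. 0.25
uniform float blendSharpness;  // e.g. 4.0

vec4 triplanar(vec3 worldPos, vec3 worldNormal)
{
    // Sample the same 2D texture projected along the three world axes.
    vec4 xProj = texture2D(rockTex, worldPos.yz * texScale);
    vec4 yProj = texture2D(rockTex, worldPos.xz * texScale);
    vec4 zProj = texture2D(rockTex, worldPos.xy * texScale);

    // Weight each projection by how much the surface faces that axis.
    vec3 w = pow(abs(worldNormal), vec3(blendSharpness));
    w /= (w.x + w.y + w.z);

    return xProj * w.x + yProj * w.y + zProj * w.z;
}
A larger blendSharpness narrows the blurry transition zones where two projections mix, at the cost of making the seams between projections more abrupt.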
As for TVA being capable of larger worlds: the top-down LOD generation approach I described in the previous post allows for Very Large Worlds, but texturing them becomes very problematic. Thus, either some sort of runtime-procedural texturing or some sort of texture paging is needed, hence my discussion of the different sorts of texture synthesis.
I've seen the GPU Gems article before; it is nice and might have some applicable ideas, but I don't think it solves the problem of scalable (large) textures. Rather, it is a way of breaking a non-flat (hard-to-texture) surface down into many tiny flat pieces so it can be textured, which is why I didn't think it would be helpful for this particular problem.
Quote:
I wasn't able to follow your GPU noise link, ...
Which link? Is there a broken link?
Quote:
... but I'm also aware of
Drop-in replacement noise() for GLSL. As I recall they avoid any dependencies on resources such as textures for lookup tables.
Thanks, I added this link to the wiki page.
Quote:
As for fractal reblending... well that one really impressed me. Such a simple idea but I've never heard of it before. Do you have any links or images for it?
The idea is actually my own. It is similar to the macro/micro texture idea, which I also thought of myself but have since seen around on the net. Instead of having separate macro and micro textures, you can just use the same texture at both scales. It usually removes the repetitive features if you scale them right, because you will always be looking at a scale of 1.
I named it "Fractal Reblending" because it is a similar idea to "octaves of noise" (as in libnoise), where you blend in the same noise at a different scale to add detail. Macro/micro textures are probably superior if you can find two textures that blend well; basically this "Fractal Reblending" thing is more of a hack out of laziness, to use one texture. Since I thought of it myself, I don't really have any links, nor could I find any (and if you do, I'd be very interested in reading them).
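To make the reblending concrete, here is roughly what it looks like in a fragment shader; the two scales and the 50/50 blend weight are illustrative numbers, not the exact values I use:
Code:
// Fractal reblending sketch: sample the *same* texture at two scales
// (like octaves of noise) and blend them, which breaks up visible tiling.
uniform sampler2D rockTex;

vec4 fractalReblend(vec2 uv)
{
    vec4 coarse = texture2D(rockTex, uv * 0.13);  // "macro" octave
    vec4 fine   = texture2D(rockTex, uv * 1.0);   // "micro" octave
    return mix(coarse, fine, 0.5);                // equal-weight blend
}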
As for images, I am trying to record a video of my upgraded DemoB (traditional TVA as per the paper demo), which now has the capability to be modified. It looks like chocolate ice cream (yum). Anyway, I'll be sure to zoom in and out to show the reblending. For me, it's actually hard to notice unless you look carefully, but there are probably people with an eye for these things, so it might be jarring. I'd love to get feedback on this technique after I post the video.
Though, I am in need of a good 512^2 seamless dark rock texture, since the one I am using is ... unlicensed. This is why my shader code is languishing, and why the demos are basically unshaded now. I realized a while ago that I have been working hard on geometry, but the real trick to making things look good is the shading (and it does look good).

I also ported DemoB to Google Native Client, which allows one to use it directly in the browser (Chrome). There are a few "bumps", like getting the shaders working (not yet), and I can't stand the wireframe mode that Ogre emulates (or at least I think it does) for GLES2. It is very different from wireframe mode in GL, and makes the mesh very hard to understand (and thus harder to appreciate, along with TVA). I am therefore hesitant to release DemoB over NaCl except as a playground/sandbox, so I'll probably wait for that aspect to mature.
All these nice upgrades are not yet committed, as I am still working out a subtle bug or two.
Quote:
One final note on the calculation of normals - you say that they are computed from the surrounding triangles, but have you considered computing them from the volume gradient instead? This is what the marching cubes extractor does and it works very well. There's no need to worry about the edge cases, for example.
Yes, you are right about that; I merely implemented them as the paper says to avoid trouble (and there was lots of trouble), since I had never done voxels before and had no idea how to generate normals. I did see you do something with gradients in your code. I also tried to abstract normal generation into the traits (very simply), so it might even be possible to change the one-liner there to something that calculates normals differently. Originally I wanted to generate normals via policies (the STL/Boost idea), so perhaps one day this can be done (sure to have caveats, though).
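For completeness, the central-difference gradient approach is simple enough to sketch. Here it is written as desktop GLSL over a 3D density texture, purely for illustration; the sampler and voxelSize names and the sign convention are assumptions, and the marching cubes extractor you mention does the equivalent on the CPU during extraction:
Code:
// Central-difference gradient of a density volume, normalized to a normal.
// 'volume' is a 3D density texture and 'voxelSize' is one voxel expressed
// in normalized texture coordinates (assumed names, for illustration only).
uniform sampler3D volume;
uniform vec3 voxelSize;

vec3 gradientNormal(vec3 p)
{
    float dx = texture3D(volume, p + vec3(voxelSize.x, 0.0, 0.0)).r
             - texture3D(volume, p - vec3(voxelSize.x, 0.0, 0.0)).r;
    float dy = texture3D(volume, p + vec3(0.0, voxelSize.y, 0.0)).r
             - texture3D(volume, p - vec3(0.0, voxelSize.y, 0.0)).r;
    float dz = texture3D(volume, p + vec3(0.0, 0.0, voxelSize.z)).r
             - texture3D(volume, p - vec3(0.0, 0.0, voxelSize.z)).r;

    // With a density field where "solid" has the higher values, the
    // outward surface normal points opposite the gradient.
    return normalize(-vec3(dx, dy, dz));
}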