 Post subject: Larger volumes = more smooth meshes?
PostPosted: Mon Jan 19, 2015 4:57 pm 

TheSHEEEP
Joined: Thu Oct 06, 2011 2:26 pm
Posts: 46
Location: Berlin
Hello again,

I know, I'm on a roll recently, sorry for spamming questions.
But so far, quick and helpful answers justified that :D

My next problem is that it seems like smaller volumes result in far less smooth extracted meshes (using MarchingCubesSurfaceExtractor).

In theory, I would suspect that one 3x3km volume & extracted mesh should look exactly like thirty-six 500x500m volumes using exactly the same volume contents and heightmap. Basically as if the large one had been split into 36 parts.

In practice, however, I noticed that those smaller volume terrain cells look far worse than their larger equivalent. They are far less smooth and look a bit like a Mayan terrace landscape ;)

Here is a gallery showing various real-world cell sizes as well as their voxel resolutions:
http://picsurge.com/g/ruUUhP

One thing I should add is that all cells have a voxel height of 300, no matter what their other dimensions are. That is because I want to avoid having to store cells in several y-layers. So I really hope that this is not the cause of the problem.
The mesh extraction of course only happens in the region where there is anything to extract (so if some terrain cell only has terrain up to a height of 105, the extraction region also only has a height of 105, not 300).

The heightmap used is always the same, as it is procedurally generated, so the meter-to-pixel ratio on the heightmap is always the same, no matter how large or small a terrain cell is.

I have tried for the last two days to figure out the reason behind that behaviour, but I am out of ideas.
For our purposes, it would be great if we could create smaller cells (like 500x500m), but they would need to look as good as the larger cells.

 Post subject: Re: Larger volumes = more smooth meshes?
PostPosted: Mon Jan 19, 2015 6:10 pm 

petersvp
Joined: Tue Apr 08, 2014 5:10 pm
Posts: 124
Impossible. Learn how Marching Cubes works. You are discretizing real values. Terracing is fixed by selective smoothing or low-pass filtering. Try generating a sine or cosine surface and extracting it in different ways to see the effect.
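For example, something like this (an untested sketch against the PolyVox 0.2-style API; the volume size, sine frequencies and ramp steepness are arbitrary):

Code:
    // Build a 64^3 volume from a smooth sine/cosine heightmap and extract it.
    // The density ramps through the surface instead of jumping 0 -> 255;
    // a hard jump at the surface is exactly what produces terraces.
    #include "PolyVoxCore/MarchingCubesSurfaceExtractor.h"
    #include "PolyVoxCore/RawVolume.h"
    #include "PolyVoxCore/SurfaceMesh.h"
    #include <cmath>
    #include <cstdint>

    using namespace PolyVox;

    int main()
    {
        RawVolume<uint8_t> volume(Region(Vector3DInt32(0, 0, 0), Vector3DInt32(63, 63, 63)));

        for (int x = 0; x < 64; ++x)
            for (int z = 0; z < 64; ++z)
            {
                float height = 32.0f + 16.0f * std::sin(x * 0.2f) * std::cos(z * 0.2f);
                for (int y = 0; y < 64; ++y)
                {
                    // Scaled signed distance to the surface, 127 on the
                    // isosurface itself (the default uint8_t threshold).
                    float d = (height - y) * 64.0f + 127.0f;
                    if (d < 0.0f) d = 0.0f;
                    if (d > 255.0f) d = 255.0f;
                    volume.setVoxelAt(x, y, z, static_cast<uint8_t>(d));
                }
            }

        SurfaceMesh<PositionMaterialNormal> mesh;
        MarchingCubesSurfaceExtractor< RawVolume<uint8_t> > extractor(&volume, volume.getEnclosingRegion(), &mesh);
        extractor.execute();
        return 0;
    }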


 Post subject: Re: Larger volumes = more smooth meshes?
PostPosted: Mon Jan 19, 2015 10:34 pm 

TheSHEEEP
Joined: Thu Oct 06, 2011 2:26 pm
Posts: 46
Location: Berlin
petersvp wrote:
Impossible. Learn how Marching Cubes works.

Impossible as in "What you want cannot be done" or "Something must be wrong with your code"?

The problem is that I cannot sit down and learn how Marching Cubes works in detail, as there are many other tasks waiting and I just do not have the time to dig into it. This is not a hobbyist project, unfortunately. There are deadlines. And when those are reached, we use the best solution available at that point.
What I would prefer is for PolyVox to just work the way I want it to, without studying certain theorems beforehand ;)
I need to find some solution this week, or we will just stick with large cells & noise volumes. Which would not be horrible, it would just require some other things to be implemented (like cutting the Ogre mesh into smaller pieces to get smaller terrain cells).

Quote:
Terracing is fixed by selective smoothing or a low-pass filtering.

I already do smooth the noise array (though not selectively), which improved the quality a bit.
And as you can see, it does work for large volumes & extracted meshes.
Every specific x/z column of voxels has exactly the same values on both the large and the corresponding small cell. So, for example, the 202/5 column of the large cell and the 2/5 column of the fifth small cell are identical.

About the low-pass filtering: Does the MarchingCubesSurfaceExtractor not already implement one? I remember reading something about that in the forums at some point...
Anyway, I could implement one (there are a few versions available), but any solution that makes me touch the noise array seems fishy to me, as it just isn't required if the volume/extracted mesh is large enough.

 Post subject: Re: Larger volumes = more smooth meshes?
PostPosted: Tue Jan 20, 2015 8:44 pm 

petersvp
Joined: Tue Apr 08, 2014 5:10 pm
Posts: 124
I crashed. That is, I am failing to understand what you mean.

What is your data source? Marching Cubes expects an array of floats, at least in its original implementation; it then generates an isosurface based on a threshold value.

Marching Cubes is NOT a point cloud. I am pretty sure that your code is buggy in some cases, because some of the meshes I see are not filtered at all (e.g. I see a mesh that looks generated from a boolean cloud, that is, true/false values, which is not what Marching Cubes expects).

PolyVox has its LowPassFilter, are you running that? Are you downsampling your source data, and if yes, how? Many problems can lead to terracing, including trying to render a 100x100x100 volume downsampled to a 16x16x16 volume (which is data loss). If you downsample, you have to look at image downsampling algorithms and implement one of them, for example bilinear or bicubic. It's the same difference as downsampling an image with MSPaint on XP vs. Photoshop. Worse: it's 3D. You can try VolumeResampler, but I never used it.
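If you want to try the resampler, I believe the usage is roughly this (an untested sketch, since as I said I never used it; the downsample function and volume sizes are just examples):

Code:
    // Downsample a source volume into a smaller destination volume
    // using PolyVox's VolumeResampler.
    #include "PolyVoxCore/RawVolume.h"
    #include "PolyVoxCore/VolumeResampler.h"
    #include <cstdint>

    using namespace PolyVox;

    void downsample(RawVolume<uint8_t>& src, RawVolume<uint8_t>& dst)
    {
        // Rescales the contents of the source region so that it fills
        // the (differently sized) destination region.
        VolumeResampler< RawVolume<uint8_t>, RawVolume<uint8_t> >
            resampler(&src, src.getEnclosingRegion(),
                      &dst, dst.getEnclosingRegion());
        resampler.execute();
    }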

And about deadlines: everybody hates them...


 Post subject: Re: Larger volumes = more smooth meshes?
PostPosted: Tue Jan 20, 2015 9:28 pm 

TheSHEEEP
Joined: Thu Oct 06, 2011 2:26 pm
Posts: 46
Location: Berlin
petersvp wrote:
What is your data source? Marching Cubes expects an array of floats, at least in its original implementation; it then generates an isosurface based on a threshold value.

What I create is a PolyVox::RawVolume<BYTE>, BYTE = unsigned char.
The problem I have with anything larger is, again, the sheer amount of data. 300x300x300 bytes is already ~26MB.
If we used floats, that would be ~100MB. And double that, because during creation we need our own C array as well as the PolyVox volume. So float would mean ~200MB. And all of that creates a mesh with hundreds of thousands of vertices, each consisting of 3 floats. So for a very short amount of time, one cell creation would cost, let's estimate... 300MB.
Now triple that, as we have three background threads active at once, creating all the terrain cells at game start. That would be ~1GB of memory at runtime. Likely more, because I certainly forgot something.
Not really application-breaking, and it is only during that phase, but still.
And the good thing with char or any other integer is that we can use it to encode other information as well.

But I would be willing to try out floats (or even doubles), if that has a chance of allowing us to create smaller cells. With smaller cells, the temporarily larger memory consumption of floats would be acceptable.

Indeed, having only 256 different values could be enough to explain those artifacts. But still, why would that inaccuracy appear in 50x50 cells, but not in the large 300x300 cells? The data is exactly the same for one 300x300 cell as for thirty-six 50x50 cells.

petersvp wrote:
PolyVox has its LowPassFilter, are you running that?

I am not explicitly using it, no. I thought it might be done automatically at some point during the mesh extraction.
Is there a sample where the filter is used?

petersvp wrote:
Are you downsampling your source data, and if yes, how?

No, I am not downsampling. We also use ANL for noise creation, and every voxel gets its own value.
At least every x/z column does, as we use ANL (for now) for heightmap creation. The "height" at the x/z coordinate of the voxel (converted to the noise range mapping) is looked up, and then the whole x/z column is filled downwards with 255 (max BYTE), while smoothing the top 10 or so voxels so the values do not go straight from 0 to 255. Roughly like the sketch below.
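In simplified pseudo-C++ it looks about like this (lookupHeightFromNoise, index and the cell dimensions are placeholders for our actual code):

Code:
    // Fill one terrain cell column by column from the ANL heightmap.
    const int rampSize = 10; // the top ~10 voxels get a gradient

    for (int x = 0; x < cellWidth; ++x)
        for (int z = 0; z < cellDepth; ++z)
        {
            const int height = lookupHeightFromNoise(x, z); // ANL-based lookup

            for (int y = 0; y < cellHeight; ++y)
            {
                BYTE value;
                if (y >= height)
                    value = 0;   // air above the surface
                else if (y < height - rampSize)
                    value = 255; // solid, well below the surface
                else
                    // linear ramp so the density does not jump straight
                    // from 0 to 255 at the surface
                    value = static_cast<BYTE>(255 * (height - y) / rampSize);

                volumeData[index(x, y, z)] = value;
            }
        }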

Though there is a case in which ANL is not used (instead, a pre-existing image is used). But that is only used in a few places and would not explain all cells being so... terraced... when they consist of hundreds of 50x50 cells instead of just a few 300x300 cells.

Again, the number of voxels per kilometer/meter never changes (I just verified that in some pictures in the gallery). The only thing actually changing is the size of the cells.
If the cells are 3x3km large, they contain 300x300x300 voxels. If the cells are 500x500m large, they contain 50x300x50 voxels.

petersvp wrote:
And about deadlines, however, everybody hates them...

Yeah, the ugly reality :(

 Post subject: Re: Larger volumes = more smooth meshes?
PostPosted: Tue Jan 20, 2015 10:51 pm 
David Williams
Developer

Joined: Sun May 04, 2008 6:35 pm
Posts: 1827
TheSHEEEP wrote:
In theory, I would suspect that one 3x3km volume & extracted mesh should look exactly like thirty-six 500x500m volumes using exactly the same volume contents and heightmap. Basically as if the large one had been split into 36 parts.


If you have a single large volume (300x300x300) then generating one big mesh vs. several small meshes should give the same overall shape, that is, the smoothness should not change. However, you are actually talking about generating one large volume with one large mesh vs. generating several small volumes, each with its own small mesh? In that case there is a chance that the contents of the small volumes are not exactly the same as the single large volume - i.e. a bug in the code which fills the volume. This seems like the most likely explanation to me.

TheSHEEEP wrote:
Every specific x/z column of voxels has exactly the same values on both the large and the corresponding small cell. So, for example, the 202/5 column of the large cell and the 2/5 column of the fifth small cell are identical.


Can you truly verify this? Can you iterate over the voxels in those two columns and print them out? What if you create the small volume by first generating the large volume and then just copying the relevant part of it? It can be quite tricky to correctly set the values based on an input noise function or heightmap, so do check as much of this as you can.
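Something along these lines should do (a sketch only; printColumn is a hypothetical helper, and you will need to adjust the coordinates and volume types to your setup):

Code:
    // Dump one x/z column of voxel values so two columns can be diffed.
    #include "PolyVoxCore/RawVolume.h"
    #include <cstdint>
    #include <ostream>

    void printColumn(const PolyVox::RawVolume<uint8_t>& volume, int x, int z, std::ostream& out)
    {
        const PolyVox::Region region = volume.getEnclosingRegion();
        for (int y = region.getLowerCorner().getY(); y <= region.getUpperCorner().getY(); ++y)
        {
            out << static_cast<int>(volume.getVoxelAt(x, y, z)) << "\n";
        }
    }

    // Column 202/5 of the large cell vs. column 2/5 of the fifth small cell:
    //   printColumn(largeVolume, 202, 5, largeFile);
    //   printColumn(smallVolume, 2, 5, smallFile);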

TheSHEEEP wrote:
The "height" at the x/z coordinate of the voxel (converted to noise range mapping) is looked up and then the whole x/z column is filled downwards with 255 (max BYTE) - while smoothing the top 10 or so voxels, so it does not go straight from 0 to 255 in values.


I can suggest a potentially better approach (though yours is not bad), but let's first focus on why you are seeing different results with large vs. small regions. The same goes for the filtering - there might be things you can do, but it's good to get the basics working consistently first.


 Post subject: Re: Larger volumes = more smooth meshes?
PostPosted: Tue Jan 20, 2015 11:06 pm 

petersvp
Joined: Tue Apr 08, 2014 5:10 pm
Posts: 124
@TheSHEEEP, the precision of "unsigned char" is 256 different values. If your threshold value is 127, you are fine. You do NOT need to use floats as voxel types directly; PolyVox abstracts this away from you. uchar_t is a perfectly good voxel type even if it is just 8-bit; 256 different values is not perfect for densities, but it is enough. I am using uint8_t as my density type, even though my voxel type is far, far more complicated.

You must realize that you are working with densities. If you are using ANL directly, your density cloud... okay, your 3D array of densities should be correct, and in that case you should NOT need any sort of post-filtering. However, I cannot help you any further without looking at and debugging through the source code. Jagged mountains look bad, but why - only your debugger may tell you.
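And since you asked for a sample of the LowPassFilter: the SmoothLOD sample that ships with PolyVox uses it. From memory it goes roughly like this (untested; kernel size 3 is what I remember the sample using):

Code:
    // Smooth volData into volDataSmoothed with a 3x3x3 kernel.
    #include "PolyVoxCore/LowPassFilter.h"
    #include "PolyVoxCore/RawVolume.h"
    #include <cstdint>

    using namespace PolyVox;

    void smoothVolume(RawVolume<uint8_t>& volData, RawVolume<uint8_t>& volDataSmoothed)
    {
        LowPassFilter< RawVolume<uint8_t>, RawVolume<uint8_t>, int32_t >
            lowPass(&volData, volData.getEnclosingRegion(),
                    &volDataSmoothed, volDataSmoothed.getEnclosingRegion(), 3);
        lowPass.executeSAT(); // execute() also works; the SAT variant trades memory for speed
    }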


 Post subject: Re: Larger volumes = more smooth meshes?
PostPosted: Tue Jan 20, 2015 11:28 pm 

TheSHEEEP
Joined: Thu Oct 06, 2011 2:26 pm
Posts: 46
Location: Berlin
David Williams wrote:
i.e. a bug in the code which fills the volume. This seems like the most likely explanation to me.

Yes, but unfortunately that is not it.
When the noise is created to determine the height of an x/z voxel column, I also write this value into a 2048x2048 heightmap. Of course, that leads to quite a lot of x/z columns writing to the same pixel, as there are far more voxel columns than pixels, but that does not really matter here.

That heightmap looks exactly the same for both the large cell and the small cell version.

David Williams wrote:
Can you truly verify this? Can you iterate over the voxels in those two columns and print them out?

Ooof, well, we actually have 100 large volumes vs. 3600 small ones. It would be a bit... fiddly... to do that for each one of them ;)
We also don't need to get the whole column, just its height value, as the "downwards filling" is always the same.
Still, a single height value would probably not be very representative.
I will try to get the same 50x50 voxel height values at the exact same positions with both cell sizes and print them to an extra file (and not pick a location that is 0 everywhere :D). That is definitely doable.

David Williams wrote:
What if you create the small volume by first generating the large volume and then just copying the relevant part of it?

Doable. That might also be an optimization in its own right, as it requires fewer worker threads to be created. Similarly, I could create the large volume and just extract 36 meshes from it. We considered that, but postponed it in case we need more optimization later.

But if the upper test already shows that everything is equal, I will not test this for now.

David Williams wrote:
I can suggest a potentially better approach (though yours is not bad), but let's first focus on why you are seeing different results with large vs. small regions. The same goes for the filtering - there might be things you can do, but it's good to get the basics working consistently first.

Sounds good, let's hear your ideas after I try to print the values.

petersvp wrote:
You must realize that you are working with densities. If you are using ANL directly, your density cloud...

Well, I convert the values I get from ANL to bytes (module->get() * 255.0 + 0.5, basically) to store them in the C array.
So, are you saying I should keep char for my array, but convert to a floating-point number when creating the PolyVox volume and transferring the array to the volume?
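To be concrete, the current conversion is essentially this (assuming the ANL output is already mapped into [0, 1]; nx/nz stand for our mapped noise coordinates):

Code:
    // ANL double in [0, 1] -> BYTE in [0, 255], rounded to nearest.
    const double noiseValue = module->get(nx, nz);
    const BYTE density = static_cast<BYTE>(noiseValue * 255.0 + 0.5);
    // Only 256 discrete levels remain, so two neighbouring columns can
    // only differ in steps of 1/255 of the full range.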

 Post subject: Re: Larger volumes = more smooth meshes?
PostPosted: Tue Jan 20, 2015 11:49 pm 

petersvp
Joined: Tue Apr 08, 2014 5:10 pm
Posts: 124
What are you generating from ANL? A heightmap or full 3D?
Check my topic for Crafterria, especially the last post with the world map screenshots.

If you make heightmaps from ANL values directly, such jagged effects are expected to arise when the surface's stepping becomes that big, e.g. px[i] = 1; px[i+1] = 5; ==> terraces.

You may notice that I have A LOT of terracing in my raw Marching Cubes mesh, but my volume is a boolean volume. Even when a volume is not boolean, these artefacts exist for heightmap-based terrains... and the only way to deal with them is filtering.

Straight from Crafterria's source:

Code:
            //3x3x3 convolution kernel blur with option for selective filtering
            // The destination region is one voxel larger on each side so the
            // border voxels of the source can be written as well.
            SimpleVolume<PolyVox::MaterialDensityPair44> vdst( Region(-1,-1,-1, output->xlen+1, output->ylen+1, output->worldHeight+1) );
            for(int x=0; x<vdata->getWidth(); ++x)
                for(int y=0; y<vdata->getHeight(); ++y)
                    for(int z=0; z<vdata->getDepth(); ++z)
                    {
                        // Sum the densities of the 3x3x3 neighbourhood around
                        // (x,y,z); getVoxelAt() returns the border value for
                        // out-of-bounds reads, so edges need no special casing.
                        uint32_t dens = 0;
                        for(int xx=x-1;xx<=x+1;++xx)
                            for(int yy=y-1;yy<=y+1;++yy)
                                for(int zz=z-1;zz<=z+1;++zz)
                                {
                                    MaterialDensityPair44 svx = vdata->getVoxelAt(xx,yy,zz);
                                    dens += svx.getDensity();
                                }
                        dens /= 27; // box filter: the average of all 27 samples
                        // Note: this writes material 0; carry the source voxel's
                        // material over instead if materials must survive the blur.
                        vdst.setVoxelAt(x,y,z,MaterialDensityPair44(0,dens));
                    }


This is a good example of a low-pass filter, and you can decide to blend the result with the original, especially for rock materials where you WANT jags and imperfections; you can even filter twice if you want even more smoothness. However, in the preview tool this code is used AS IS, and there the voxel type is MaterialDensityPair44.
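Blending with the original can be as simple as a per-voxel weighted average (a sketch; blendDensity and blendFactor are just illustrative names):

Code:
    // blendFactor = 0 keeps the original density, 1 keeps the filtered one.
    uint8_t blendDensity(uint8_t original, uint8_t filtered, float blendFactor)
    {
        return static_cast<uint8_t>(original + (filtered - original) * blendFactor);
    }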


 Post subject: Re: Larger volumes = more smooth meshes?
PostPosted: Wed Jan 21, 2015 12:16 am 

TheSHEEEP
Joined: Thu Oct 06, 2011 2:26 pm
Posts: 46
Location: Berlin
petersvp wrote:
What are you generating from ANL? A heightmap or full 3D?

Each x/z voxel column gets its height value from ANL.

Look at what I wrote above. If this were really the problem, it would affect all volume sizes, not just the small ones. The number of voxels is always the same; just the terrain cell sizes vary.
Basically, when I cut large pieces out of my voxel cake, all looks well. But when I cut small pieces out, they suddenly have terraces, etc.
But the cake is always the same.
