File: hmap.png (1.86 MB, 1920x939)
I've been messing around with Unity's terrain feature a little bit lately and I had a question about the height maps it exports.

When you export a height map from a terrain, it produces a texture file with two channels, a typical color channel as well as an alpha channel. What confuses me is that the color channel contains the image on the right, while the alpha contains the left; the left is the kind of height map I'm familiar with, but I have no idea what the right is. I can't make out its effect on the terrain in Unity either.

What exactly is the image on the right called, and how are they used? Or is this some kind of bug/I'm doing something wrong? These aren't the full textures, by the way, they're a little cropped.
You also get that from World Machine. When it exports the height, it creates a gradient for every value: imagine slicing up a regular 32-bit heightmap based on its brightness values, then creating another full-range gradient heightmap for every slice. That gives you that much extra definition in the heightmap, so you won't get any stepping artifacts.

If you put the images on top of each other, you'll see that the more visible gradient parts correspond to the flat colors on the left; they compensate for the insufficient depth data there.
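The slicing idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not World Machine's actual algorithm: the 256-level quantization and the function names are assumptions chosen to make the split easy to see.

```python
import numpy as np

# Hypothetical sketch of the "sliced" encoding described above: split a
# high-precision height value into a coarse quantized channel plus a
# fractional residual, so two low-precision channels together preserve
# more depth information than either one alone.

LEVELS = 256  # assumed number of brightness steps per channel

def encode(height):
    """height: float array in [0, 1). Returns (coarse, residual) in [0, 1)."""
    scaled = height * LEVELS
    coarse = np.floor(scaled) / LEVELS    # the familiar stepped heightmap
    residual = scaled - np.floor(scaled)  # gradient within each slice
    return coarse, residual

def decode(coarse, residual):
    """Recombine the two channels into the original height."""
    return coarse + residual / LEVELS

h = np.linspace(0.0, 1.0, 10, endpoint=False)
c, r = encode(h)
assert np.allclose(decode(c, r), h)  # round-trips without stepping loss
```

The residual channel is exactly the repeating-gradient pattern described above: it wraps from 1 back to 0 every time the coarse channel steps up one level.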
That makes some sense. So, if I'm interpreting you correctly, it's like a height map using modulo? Say, take the original height values as a range from 0-100, then take those values modulo 1 and record the remainders.

But how do you use it from a programming/shader standpoint? Since all of the sliced 32bit heightmaps you're talking about would all go between 0 and 1, do you just reference the regular height map to figure out where to start and where to end for each slice?

I.e., 1 on the left heightmap = starting point for some sampling path on the right, 0 = the end.

If you were to apply it as a displacement map, then, you'd just apply one on top of the other, with the right being used with a strength appropriate for small details and the left being used for the main shape?

More importantly, is there a term for the type/"style" on the right? I'm not sure where to go to look up more info, which was why I came here. Thanks for the answer, by the way.
Yeah, that's how they work. I don't know the correct terminology for it, but it's not for shaders. It's purely for other programs with built-in terrain generators, like Unity and UE4, to read from. Usually it's r16, r32, or RAW format with two grayscale channels holding the two images you posted above. I'd go to the World Machine documentation to find out more about the RAW format.
Ok, thanks, I think I understand how to go about using it then. Looking at it a little more closely I can see the details I'm missing out on by not using it.

And while I recognize it's not supposed to be for shaders, I've also found lately that it's possible to use things like this as displacement maps in HLSL, and it seems to work mostly fine as long as everything in your shader is synced properly. I'm not arguing with your statement; that matches what I've learned exploring 3D techniques myself. But it seems like it'd sometimes be convenient to have a real displacement map in the shader, at least for more artistic purposes where you're not really resource-limited.
The problem with doing terrain in shaders is that the displacement happens on the GPU just before the image is presented to you.
So you can't do things like raycasts against that geometry to let objects interact with the surface, like, say, having a player walk on top of it.
What if you had your raycasts sample the displacement texture assigned to the material it's hitting? It could read the brightness value and adjust the rig position accordingly.
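For the vertical case, that idea really is just a texture lookup. Here's a minimal sketch of it on the CPU; the bilinear sampling helper and the function names are illustrative, not any particular engine's API.

```python
import numpy as np

def sample_height(heightmap, u, v):
    """Bilinearly sample a 2D heightmap array at normalized coords (u, v) in [0, 1]."""
    h, w = heightmap.shape
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = heightmap[y0, x0] * (1 - fx) + heightmap[y0, x1] * fx
    bot = heightmap[y1, x0] * (1 - fx) + heightmap[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def ground_height_under(heightmap, u, v, max_height):
    # A vertical "raycast" against a heightfield is just a lookup
    # scaled to world units; no geometry intersection needed.
    return sample_height(heightmap, u, v) * max_height
```

So walking a character on shader-displaced terrain is cheap as long as you only ever need "how high is the ground directly below me."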
You could set it up, but it would add a lot of complexity and extra calculations to derive the contact information,
compared to displacing the geometry outside the shader and then just treating it as a regular mesh.

As long as you only raycast straight down to sample height, it would at least be easy to set up (if wasteful in terms of computation).
But now think about detecting impacts horizontally, in the shear plane. Say I'm driving or flying an aircraft at high speed across the map.
How do I know if there's a hill in front of me? I can't just raycast forward and see if I intersect something, because there's no geometry there.
I'd need to raycast downward against the undisplaced plane and sample every pixel of the texture along the length of my forward vector.

What would be one or two raycast operations becomes a whole bunch of raycasts and texture-sampling operations, basically a complete clusterfuck.
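That "march along the forward vector and sample the texture at every step" approach can be sketched like this. It's a minimal illustration of the cost being described, with assumed names and a fixed step count; real heightfield raymarchers use smarter step sizes.

```python
import numpy as np

def raymarch_heightfield(height_at, origin, direction, max_dist, steps=256):
    """height_at(x, z) -> terrain height at that point.
    origin/direction are 3D tuples (x, y, z). Marches the ray in `steps`
    increments and returns the first point below the surface, or None."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    for i in range(1, steps + 1):
        p = o + d * (max_dist * i / steps)   # one "sample" per step
        if p[1] <= height_at(p[0], p[2]):
            return p  # ray has dipped below the terrain surface
    return None
```

One forward collision query costs hundreds of height samples here, versus a single raycast against a real displaced mesh, which is exactly the trade-off described above.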
