/3/ - 3DCG

Thread archived.
You cannot reply anymore.



File: CIExy1931_sRGB_gamut_D65.png (251 KB, 1140x1260)
What's the deal with gamma correction? It makes sense that doing linear math with non-linear assets produces wrong results, but what is the correct way to prevent this? For instance, should textures be stored non-linear and converted to linear during rendering, or should they be converted to linear beforehand and thus look dark in an editor? Does the former approach preserve more resolution?
>>516204
http://www.slideshare.net/ozlael/hable-john-uncharted2-hdr-lighting
>>516204
Resolution has nothing to do with color space...
The answer is: let your software gamma-correct the textures into linear space at render time if it has the option to, like Maya does. Otherwise do it yourself.
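For reference, "doing it yourself" means applying the standard piecewise sRGB transfer functions (per IEC 61966-2-1). A minimal sketch in Python, operating on one channel value in [0, 1]:

```python
def srgb_to_linear(c):
    """Decode one sRGB channel value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode one linear-light channel value in [0, 1] back to sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

mid_grey = srgb_to_linear(0.5)  # sRGB 0.5 is only ~21% linear light
```

Note it's a piecewise curve with a small linear toe near black, not a pure 2.2 power function, although a plain `c ** 2.2` is a close approximation.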
Textures should be stored non-linear, at least if they're 8 bits per channel. Storing color data linearly at 8 bits per channel will result in terrible banding. Storing textures in linear (even at 16 bpc or more) is also somewhat impractical unless you know everyone on your team has an image viewer/editor that reads and respects the texture's color profile and displays it correctly.
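You can see why 8-bit linear bands so badly by counting how many of the 256 codes each encoding spends on dark tones (the 0.05 linear-light cutoff here is an arbitrary choice for illustration):

```python
def srgb_to_linear(c):
    # Standard sRGB decode for one channel value in [0, 1]
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

DARK = 0.05  # arbitrary "dark tones" cutoff, in linear light

# How many 8-bit codes decode to a value in the dark range?
linear_codes = sum(1 for n in range(256) if n / 255 <= DARK)
srgb_codes = sum(1 for n in range(256) if srgb_to_linear(n / 255) <= DARK)
```

Linear encoding spends only 13 codes on that whole dark range, while sRGB spends 64, which is why a dark gradient stored as 8-bit linear falls apart into visible bands.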

If you need to do any kind of math on your colors (i.e. do anything with them other than just copying them from A to B) you should linearize them first. That includes:

- blending (of basically any sort -- even just normal alpha blending)
- lighting
- any kind of adjustment

Before you write the colors to the screen, you need to turn them back into sRGB, i.e. de-linearize them. Depending on what API/framework/engine/etc. you use, the GPU can do some of that automatically for you.
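As a concrete example of why blending has to happen in linear, here is a 50/50 blend of black and white done both ways (a sketch, reusing the standard sRGB transfer functions):

```python
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0  # sRGB channel values

# Wrong: averaging the sRGB codes directly
naive = 0.5 * black + 0.5 * white  # 0.5

# Right: decode, average in linear light, re-encode for display
correct = linear_to_srgb(0.5 * srgb_to_linear(black)
                         + 0.5 * srgb_to_linear(white))  # ~0.735
```

A true 50/50 mix of black and white light should display as sRGB ~0.735 (code ~188); the naive sRGB average of 0.5 (code 128) comes out noticeably too dark.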

Converting between colorspaces in general always implies a loss of information. How big/relevant that loss is depends mostly on the precision of the receiving colorspace relative to the source colorspace. When linearizing on the GPU, you usually turn 8 bpc sRGB into linear 32 bpc floating-point numbers to perform computations in. This conversion is lossless, since 32-bit floats can represent all 256 values of 8 bpc precisely.

Converting back afterwards from 32 bpc float to 8-bit sRGB is obviously lossy (you're reducing on the order of a billion representable levels of brightness to 256), but it's not really lossy relative to performing the operation directly in 8 bit (or compared to the theoretical "8 bit" -> "infinitely precise linear colorspace" -> "do conversion" -> "8 bit" path).
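That round trip is easy to check exhaustively. Python floats are 64-bit rather than the 32-bit floats a GPU uses, but the conclusion is the same in both cases, since the 1/255 quantization step dwarfs float rounding error:

```python
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Every 8-bit sRGB code survives decode -> float linear -> re-encode unchanged
roundtrip_ok = all(
    round(linear_to_srgb(srgb_to_linear(n / 255)) * 255) == n
    for n in range(256)
)
```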

8-bit linear always looks like ass, never use it. Doing math directly in 8-bit sRGB (non-linear) also looks like ass; prefer not to do it (unfortunately most people do, because they don't know better).


