AllocationVars and Allocation tags problem


Dmitry Kazakov <dimu...@...>
 

Good morning!

I have finally implemented the OCIO-enabled color selectors feature in Krita (some time ago I asked on this list about reversed Display transformations to make this feature possible). You can read the details of that work at [1].

But now I have another problem, related to the 'allocation' and 'allocationvars' features of OCIO. The point is, in Krita we use OpenGL shaders to do the color processing, and the default range for the 3D LUT, [-15.0, 6.0], seems to be too wide in some circumstances. I tested our color selectors functionality on a special config by Troy Sobotka [2] and I get heavy rounding/clipping when using the shaders.
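For reference, the GPU path on our side looks roughly like the sketch below (simplified; the display/view names are placeholders for whatever the user has selected):

#include <OpenColorIO/OpenColorIO.h>
#include <string>
#include <vector>
namespace OCIO = OCIO_NAMESPACE;

// Simplified sketch of how we bake the display transform into a 3D LUT plus
// a GLSL lookup snippet; "sRGB"/"Film" are placeholders for the user-selected
// display and view.
void bakeDisplayLut(OCIO::ConstConfigRcPtr config,
                    std::vector<float> &lut3d, std::string &shaderSrc)
{
    OCIO::DisplayTransformRcPtr transform = OCIO::DisplayTransform::Create();
    transform->setInputColorSpaceName(OCIO::ROLE_SCENE_LINEAR);
    transform->setDisplay("sRGB");
    transform->setView("Film");

    OCIO::ConstProcessorRcPtr processor = config->getProcessor(transform);

    const int edgeLen = 32; // only 32^3 samples to cover the whole allocation range
    OCIO::GpuShaderDesc shaderDesc;
    shaderDesc.setLanguage(OCIO::GPU_LANGUAGE_GLSL_1_3);
    shaderDesc.setFunctionName("OCIODisplay");
    shaderDesc.setLut3DEdgeLen(edgeLen);

    lut3d.resize(3 * edgeLen * edgeLen * edgeLen);
    processor->getGpuLut3D(&lut3d[0], shaderDesc);        // bake the LUT texture
    shaderSrc = processor->getGpuShaderText(shaderDesc);  // GLSL lookup function
}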

Here is how the image looks when calculated on the CPU:
http://dimula73.narod.ru/krita_allocation_var_CPU.png

The cause of this problem is too wide a range for the 'allocationvars' used in the config, which is actually the standard spi-vfx value. If I change the value to, say, [-10.0, 5.0], then the quality of the transformation becomes ok and the shaders produce an image that looks exactly like the CPU one.
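If I understand the 'lg2' allocation correctly, the allocationvars are just the log2 endpoints that get normalized into the [0, 1] coordinate used to address the LUT, something like the sketch below (my reading of the docs, not the actual OCIO code). With [-15.0, 6.0] that is 21 stops spread over only 32 samples per LUT axis, so the few stops that actually matter for display get very few samples:

#include <algorithm>
#include <cmath>

// My understanding of the forward 'lg2' allocation used to shape the LUT
// input: allocationvars = [minLog2, maxLog2] (the optional third var, a
// linear offset, is ignored here).
float lg2AllocationForward(float x, float minLog2, float maxLog2)
{
    const float logValue = std::log2(std::max(x, 1e-10f)); // guard against log2(0)
    const float t = (logValue - minLog2) / (maxLog2 - minLog2);
    return std::min(std::max(t, 0.0f), 1.0f); // [0,1] LUT coordinate
}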

Can we (OCIO and/or Krita) do something about it? What I'm thinking about is: it is quite a rare use case when the application (e.g. Krita) needs to display the whole range of the image colors. That is, most of the time we display only a small subset of the image colors, say, [0.0, 1.0] or [0.0, exposure]. Can we adjust the allocationvars dynamically according to the currently displayed range?
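Here is a rough sketch of what I mean on the application side (the helper and the colorspace handling are hypothetical; displayedMinLog2/displayedMaxLog2 would come from the exposure range we are about to show):

#include <OpenColorIO/OpenColorIO.h>
namespace OCIO = OCIO_NAMESPACE;

// Hypothetical helper: make a copy of the config whose input colorspace is
// allocated only over the range that is about to be displayed.
// 'inputColorSpace' would be e.g. the working space of the image.
OCIO::ConstConfigRcPtr configForDisplayedRange(OCIO::ConstConfigRcPtr config,
                                               const char *inputColorSpace,
                                               float displayedMinLog2,
                                               float displayedMaxLog2)
{
    OCIO::ConfigRcPtr editableConfig = config->createEditableCopy();

    OCIO::ColorSpaceRcPtr cs =
        editableConfig->getColorSpace(inputColorSpace)->createEditableCopy();

    const float vars[2] = { displayedMinLog2, displayedMaxLog2 };
    cs->setAllocation(OCIO::ALLOCATION_LG2);
    cs->setAllocationVars(2, vars);

    // Re-register the edited colorspace under the same name, then build the
    // processor / GPU shader from this config instead of the original one.
    editableConfig->addColorSpace(cs);
    return editableConfig;
}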

A crazy idea:

The application might notify the DisplayTransform about which output colors are actually needed. For shader-based rendering it'll obviously be [0.0, 1.0]. Then OCIO might walk through the chain of transformations and adjust their allocation values according to the values really needed. Obviously one would need the reverse transformations for that, which is impossible in general... But given that the 3D LUT in the shader is an approximation anyway, the range could be estimated by randomly sampling the space defined by the allocationvars, and then used for the generation of the 3D LUT (a rough sketch is below).
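Something like this, purely to illustrate the idea (the helper name and the uniform grey sampling are hypothetical; a real version would have to sample colors, not just grey):

#include <OpenColorIO/OpenColorIO.h>
#include <algorithm>
#include <cmath>
#include <cstdlib>
namespace OCIO = OCIO_NAMESPACE;

// Hypothetical sketch: estimate which part of the current allocation range
// actually produces displayable output, and use it as tightened
// allocationvars when generating the 3D LUT.
void estimateAllocationVars(OCIO::ConstProcessorRcPtr processor,
                            float curMinLog2, float curMaxLog2,
                            float &newMinLog2, float &newMaxLog2)
{
    newMinLog2 = curMaxLog2;
    newMaxLog2 = curMinLog2;

    const int numSamples = 10000;
    for (int i = 0; i < numSamples; ++i) {
        // Pick a random point in the current (log2) allocation range.
        const float logValue = curMinLog2 +
            (curMaxLog2 - curMinLog2) * (std::rand() / (float)RAND_MAX);
        const float linear = std::exp2(logValue);

        float rgb[3] = { linear, linear, linear };
        processor->applyRGB(rgb); // forward CPU transform for this sample

        // Keep the sample if its output lands in the displayable range.
        const float eps = 1e-3f;
        if (rgb[0] >= -eps && rgb[0] <= 1.0f + eps &&
            rgb[1] >= -eps && rgb[1] <= 1.0f + eps &&
            rgb[2] >= -eps && rgb[2] <= 1.0f + eps) {
            newMinLog2 = std::min(newMinLog2, logValue);
            newMaxLog2 = std::max(newMaxLog2, logValue);
        }
    }
}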

I understand that this is an optimization, but it would not only fix problems with corner cases like [2], it would also give much better quality for the GPU-based rendering of the image: the 3D LUTs generated by OCIO would be denser and would not waste range on values which will never be displayed anyway.

What do you think about this idea?