AllocationVars and Allocation tags problem
Dmitry Kazakov <dimu...@...>
Good morning!

Here is how the image looks when calculating on CPU: http://dimula73.narod.ru/krita_allocation_var_CPU.png

The cause of this problem is that the range of the 'allocationvars' used in the config (which is actually the standard spi-vfx value) is too wide. If I change the value to, say, [-10.0, 5.0] (see the snippet at the end of this message), the quality of the transformation becomes fine and the shaders generate an image that looks exactly like the CPU one.

Can we (OCIO and/or Krita) do something about it? What I'm thinking is this: it is quite a rare use case for an application (e.g. Krita) to need to display the whole range of the image's colors. That is, most of the time we display only a small subset of them, say [0.0, 1.0] or [0.0, exposure]. Can we adjust the allocationvars dynamically according to the currently displayed gamut?

A crazy idea: the application could notify the DisplayTransform about which output colors are actually needed. For shader-based rendering that is obviously [0.0, 1.0]. OCIO could then walk through the chain of transformations and adjust their allocation values according to the values that are really needed. Obviously one would need the reverse transformations for that, which is impossible... But given that the 3D LUT in the shader is an approximation anyway, the range could be estimated by random sampling of the space defined by the allocationvars and then used when generating the 3D LUT (a rough sketch is at the end of this message).

I understand that this is an optimization, but it would not only fix corner cases like [2], it would also give much better quality for GPU-based rendering of the image: the 3D LUTs generated by OCIO would be denser and would not waste their range on values that will never be displayed anyway.

What do you think about this idea?
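For reference, this is roughly where the change lives in the config. The snippet is trimmed and illustrative (only the allocation fields matter here, the rest are placeholders); it just shows the narrower range I tested in place of the wide spi-vfx default:

    colorspaces:
      - !<ColorSpace>
        name: lnf
        family: ln
        bitdepth: 32f
        isdata: false
        allocation: lg2
        allocationvars: [-10.0, 5.0]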
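And to make the random-sampling idea a bit more concrete, here is a rough sketch of what I mean, written against the OCIO 1.x C++ API (it uses Processor::applyRGB for the CPU evaluation). The function name and structure are mine, nothing like this exists in OCIO today; for simplicity it only probes the neutral axis and ignores the optional lg2 offset term, a real implementation would sample full RGB:

    #include <OpenColorIO/OpenColorIO.h>
    #include <algorithm>
    #include <cmath>
    #include <cstdlib>

    namespace OCIO = OCIO_NAMESPACE;

    // Hypothetical helper: shrink the lg2 allocation range [curMin, curMax]
    // down to the inputs whose *output* (after the display chain) actually
    // lands inside the visible range [0, 1].
    void estimateAllocationVars(OCIO::ConstProcessorRcPtr processor,
                                float curMin, float curMax,
                                float &newMin, float &newMax,
                                int numSamples = 100000)
    {
        newMin = curMax;   // start inverted and shrink-wrap below
        newMax = curMin;

        for (int i = 0; i < numSamples; ++i) {
            // Random point inside the current lg2 allocation range...
            float lg = curMin + (curMax - curMin) * (std::rand() / (float)RAND_MAX);
            // ...converted back to a linear (neutral-axis) input value.
            float v = std::pow(2.0f, lg);

            float rgb[3] = { v, v, v };
            processor->applyRGB(rgb);   // evaluate the chain on the CPU

            // Keep only the samples that would actually end up on screen.
            if (rgb[0] >= 0.0f && rgb[0] <= 1.0f &&
                rgb[1] >= 0.0f && rgb[1] <= 1.0f &&
                rgb[2] >= 0.0f && rgb[2] <= 1.0f) {
                newMin = std::min(newMin, lg);
                newMax = std::max(newMax, lg);
            }
        }

        // Nothing visible at all: keep the old range rather than collapsing.
        if (newMin > newMax) {
            newMin = curMin;
            newMax = curMax;
        }
    }

The resulting [newMin, newMax] could then be used as the allocationvars when building the GPU shader and its 3D LUT, so the LUT spends its resolution only on the part of the range that is actually displayed.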