Re: AllocationVars with GPU path and 1d LUT

Patrick Hodoul <patric...@...>

Hi Simon,


Thanks for taking the time to use/test the master branch, and sorry for the delay in answering.


I did some investigating, and you found a 'glitch' :-) in the master branch (i.e. OCIO v2).

Following your use case with the latest commit from the master branch, I confirmed that the inverse Lut1D is clamping to [0, 1]. Even though we are currently revisiting all the ops to comply with the CLF requirements, I will still log an issue on GitHub to keep track of the use case (the corresponding unit test is clearly missing).
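
To reproduce it outside the GPU path, something along these lines should show the same clamp (a minimal sketch assuming the v2 CPU processor API; exact names may differ on the current master, and the config path is hypothetical):

#include <OpenColorIO/OpenColorIO.h>
namespace OCIO = OCIO_NAMESPACE;

int main()
{
    // Load the Nuke-Default config (path is hypothetical).
    OCIO::ConstConfigRcPtr config =
        OCIO::Config::CreateFromFile("nuke-default/config.ocio");

    // linear -> sRGB goes through the inverse of the srgb.spi1d Lut1D.
    OCIO::ConstProcessorRcPtr processor = config->getProcessor("linear", "sRGB");
    OCIO::ConstCPUProcessorRcPtr cpu = processor->getDefaultCPUProcessor();

    // A scene-linear value well above 1.0; Nuke maps this above 1.0,
    // but the current master clamps the result to [0, 1].
    float rgb[3] = { 2.0f, 2.0f, 2.0f };
    cpu->applyRGB(rgb);

    return 0;
}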




On Monday, October 29, 2018 at 8:19:53 PM UTC-4, Simon Therriault wrote:

I've been playing around with an integration of OCIO using the GPU path (latest code, build 50), and things are going relatively smoothly. There's one problem left, though, which shows up when I compare my conversion result to a conversion made in Nuke.

I'm using the Nuke-Default configuration and playing with the linear-to-sRGB conversion.

  - !<ColorSpace>
    name: linear
    family: ""
    equalitygroup: ""
    bitdepth: 32f
    description: |
      Scene-linear, high dynamic range. Used for rendering and compositing.
    isdata: false
    allocation: lg2
    allocationvars: [-15, 6]

  - !<ColorSpace>
    name: sRGB
    family: ""
    equalitygroup: ""
    bitdepth: 32f
    description: |
      Standard RGB Display Space
    isdata: false
    allocation: uniform
    allocationvars: [-0.125, 1.125]
    to_reference: !<FileTransform> {src: srgb.spi1d, interpolation: linear}
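
For context, my understanding is that allocation/allocationvars tell the GPU path how to map values into the [0, 1] texture-lookup domain: uniform is a plain scale/offset over [min, max], and lg2 does the same after a log2. A small sketch of that mapping as I understand it (my own C++ illustration, not OCIO code):

#include <algorithm>
#include <cmath>

// uniform allocation: linearly map [vmin, vmax] -> [0, 1].
float allocUniform(float v, float vmin, float vmax)
{
    return (v - vmin) / (vmax - vmin);
}

// lg2 allocation: map log2(v) over [vmin, vmax] -> [0, 1].
float allocLg2(float v, float vmin, float vmax)
{
    v = std::max(v, std::exp2(vmin)); // avoid log2 of values <= 0
    return (std::log2(v) - vmin) / (vmax - vmin);
}

With the sRGB allocationvars [-0.125, 1.125], allocUniform(1.125, -0.125, 1.125) is exactly 1.0, so values up to 1.125 should still land inside the LUT domain instead of being clipped at 1.0.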

My source image is Marci_512_linear.exr, which comes from the reference images. I applied a linear-to-sRGB transform in Nuke and saved a 32-bit uncompressed EXR of the result.

On my side, I applied the same transform, but using the GPU path.

In the hair region, where pixels are well over 1.0, I don't get the same result: my output clips to 1.0, while Nuke's clips to 1.25. The same thing happens for pixels below 0.0: my result clips to 0.0, but in Nuke's output I can see values below 0.0. It looks like the allocation vars aren't being taken into account.

My shader output looks like this (the language used is HLSL_DX11):

Texture2D ociolut1d_0;
SamplerState ociolut1d_0Sampler;

float2 ociolut1d_0_computePos(float f)
{
  // Note: the input is clamped to [0, 1] here, before the lookup.
  float dep = min(f, 1.0) * 65535.;
  float2 retVal;
  retVal.y = float(int(dep / 4095.));
  retVal.x = dep - retVal.y * 4095.;
  retVal.x = (retVal.x + 0.5) / 4096.;
  retVal.y = (retVal.y + 0.5) / 17.;
  return retVal;
}

float4 OCIOConvert(in float4 inPixel)
{
  float4 outColor = inPixel;
  outColor.r = ociolut1d_0.Sample(ociolut1d_0Sampler, ociolut1d_0_computePos(outColor.r)).r;
  outColor.g = ociolut1d_0.Sample(ociolut1d_0Sampler, ociolut1d_0_computePos(outColor.g)).g;
  outColor.b = ociolut1d_0.Sample(ociolut1d_0Sampler, ociolut1d_0_computePos(outColor.b)).b;

  return outColor;
}

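Based on that, what I would have expected is for the sample position to remap the LUT's actual input range to [0, 1] instead of clamping at 1.0. A sketch of what I mean (again my own C++ illustration with made-up range values, not actual OCIO output):

#include <algorithm>

// Hypothetical domain of the inverse LUT: the min/max linear values
// covered by the forward srgb.spi1d (the numbers are illustrative).
const float kLutMin = -0.10f;
const float kLutMax =  1.29f;

float computePosExpected(float f)
{
    // Map [kLutMin, kLutMax] -> [0, 1] rather than min(f, 1.0).
    float norm = (f - kLutMin) / (kLutMax - kLutMin);
    norm = std::min(std::max(norm, 0.0f), 1.0f);
    return norm * 65535.0f; // then split into 2D texture coords as in
                            // the generated ociolut1d_0_computePos above
}
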
If anyone has run into this issue before, I'd be happy to hear about it :)

