Re: AllocationVars with GPU path and 1d LUT


Simon Therriault <mos...@...>
 

Thanks for the confirmation, Patrick!

Also, in the latest release, 1.1, I can see the same kind of behaviour. For example, with the Nuke-default configuration, if I go from linear to sRGB, the resulting 3D LUT doesn't have any negative values. So if I input Marci's image, all the blacks, which are negative, get "clamped" to a pixel value higher than 0. I don't expect bug fixes for it; I'm just looking for confirmation that this is expected in this version.
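
For reference, this is roughly how I looked at the baked 3D LUT (a quick sketch from memory of the 1.x Python bindings; the config path, function name and edge length are just placeholders):

import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile("nuke-default/config.ocio")
proc = config.getProcessor("linear", "sRGB")

desc = OCIO.GpuShaderDesc()
desc.setLanguage(OCIO.Constants.GPU_LANGUAGE_GLSL_1_3)
desc.setFunctionName("OCIOConvert")
desc.setLut3DEdgeLen(32)

# flat list of RGB entries for the 3D LUT the GPU path would upload
lut3d = proc.getGpuLut3D(desc)
print(min(lut3d), max(lut3d))  # the minimum never goes below 0.0 for me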

Also, do you have any timeline for the official 2.0 release?

Thanks again for your help!

On Friday, November 2, 2018 at 12:28:46 PM UTC-4, Patrick Hodoul wrote:

Here is the github issue: #622


On Friday, November 2, 2018 at 12:21:51 PM UTC-4, Patrick Hodoul wrote:

Hi Simon,

 

Thanks for taking the time to use/test the master branch, and sorry for the delay in answering.

 

I did some investigating, and you found a 'glitch' :-) in the master branch (i.e., OCIO v2).

Following your use case with the latest commit from the master branch, the inverse LUT 1D is clamping to [0, 1]. Even though we are currently revisiting all the Ops to comply with the CLF requirements, I will still log an issue in GitHub to keep track of the use case (the corresponding unit test is clearly missing).

 

Regards,

Patrick.


On Monday, October 29, 2018 at 8:19:53 PM UTC-4, Simon Therriault wrote:
Hi,

I've been playing around with an integration of OCIO using the GPU path (latest code version, build 50) and things are going relatively smoothly. There's one problem I still have, though, and it's when I compare my conversion result to a conversion made in Nuke.

I'm using the Nuke-default configuration and playing with the linear to sRGB conversion.

colorspaces:
  - !<ColorSpace>
    name: linear
    family: ""
    equalitygroup: ""
    bitdepth: 32f
    description: |
      Scene-linear, high dynamic range. Used for rendering and compositing.
    isdata: false
    allocation: lg2
    allocationvars: [-15, 6]

  - !<ColorSpace>
    name: sRGB
    family: ""
    equalitygroup: ""
    bitdepth: 32f
    description: |
      Standard RGB Display Space
    isdata: false
    allocation: uniform
    allocationvars: [-0.125, 1.125]
    to_reference: !<FileTransform> {src: srgb.spi1d, interpolation: linear}
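
For context, this is my understanding of what the allocation / allocationvars above are supposed to do on the GPU path: they describe how scene values get remapped into [0, 1] before the LUT lookup, so that out-of-range values can still be addressed. A rough sketch of the math (my own reading of the docs, not code from OCIO):

import math

def lg2_alloc(x, lo=-15.0, hi=6.0):
    # 'lg2' allocation with two vars: take log2 of the value,
    # then map [lo, hi] linearly to [0, 1]
    x = max(x, 2.0 ** lo)          # log2 is undefined at or below zero
    return (math.log2(x) - lo) / (hi - lo)

def uniform_alloc(x, lo=-0.125, hi=1.125):
    # 'uniform' allocation: straight linear remap of [lo, hi] to [0, 1]
    return (x - lo) / (hi - lo)

print(lg2_alloc(1.25))      # ~0.73, so a 1.25 linear value still lands inside the LUT
print(uniform_alloc(-0.1))  # ~0.02, so slightly negative values are still representable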

My source image is Marci_512_linear.exr, which comes from the reference images. I applied a linear to sRGB transform in Nuke and saved a 32-bit uncompressed EXR out of it.

On my side, I have applied the same transform but using the GPU path.

In the hair region, where pixels are well over 1.0, I don't get the same result: my output clips to 1.0 while the one from Nuke clips to 1.25. The same thing happens for pixels below 0.0: my result clips to 0.0, but in Nuke's output I can see sub-0.0 values. It looks like the allocation vars aren't taken into account or something.
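
As a sanity check I also ran representative pixels through the CPU path to compare against my GPU output (a rough sketch using the 1.x-style Python bindings; the pixel values are just stand-ins for the hair and black regions I mentioned):

import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile("nuke-default/config.ocio")
proc = config.getProcessor("linear", "sRGB")

# one bright "hair" pixel and one slightly negative "black" pixel (RGB triplets)
pixels = [2.0, 2.0, 2.0,
          -0.02, -0.02, -0.02]
print(proc.applyRGB(pixels))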

My shader output looks like this (the language used is HLSL_DX11):

Texture2D ociolut1d_0;
SamplerState ociolut1d_0Sampler;

// maps a channel value to 2D texture coordinates for the packed 1D LUT lookup
float2 ociolut1d_0_computePos(float f)
{
  float dep = min(f, 1.0) * 65535.;   // note: the input is clamped to 1.0 here, before the lookup
  float2 retVal;
  retVal.y = float(int(dep / 4095.));
  retVal.x = dep - retVal.y * 4095.;
  retVal.x = (retVal.x + 0.5) / 4096.;
  retVal.y = (retVal.y + 0.5) / 17.;
  return retVal;
}



float4 OCIOConvert(in float4 inPixel)
{
  float4 outColor = inPixel;
  outColor.r = ociolut1d_0.Sample(ociolut1d_0Sampler, ociolut1d_0_computePos(outColor.r)).r;
  outColor.g = ociolut1d_0.Sample(ociolut1d_0Sampler, ociolut1d_0_computePos(outColor.g)).g;
  outColor.b = ociolut1d_0.Sample(ociolut1d_0Sampler, ociolut1d_0_computePos(outColor.b)).b;

  return outColor;
}

If anyone has ever had this issue, I'd be happy to hear about it :)


Thanks!
