Hi,
I've been working on an integration of OCIO using the GPU path (latest code, build 50), and things are going relatively smoothly. One problem remains, though: my conversion result doesn't match a conversion made in Nuke.
I'm using the Nuke-Default configuration and testing the linear to sRGB conversion.
colorspaces:
  - !<ColorSpace>
    name: linear
    family: ""
    equalitygroup: ""
    bitdepth: 32f
    description: |
      Scene-linear, high dynamic range. Used for rendering and compositing.
    isdata: false
    allocation: lg2
    allocationvars: [-15, 6]

  - !<ColorSpace>
    name: sRGB
    family: ""
    equalitygroup: ""
    bitdepth: 32f
    description: |
      Standard RGB Display Space
    isdata: false
    allocation: uniform
    allocationvars: [-0.125, 1.125]
    to_reference: !<FileTransform> {src: srgb.spi1d, interpolation: linear}
My source image is Marci_512_linear.exr from the reference images. I applied a linear to sRGB transform in Nuke and saved the result as a 32-bit uncompressed EXR.
On my side, I applied the same transform but using the GPU path.
In the hair region, where pixels are well over 1.0, I don't get the same result: my output clips to 1.0, while the one from Nuke clips to 1.25. The same thing happens for pixels below 0.0: my result clips to 0.0, but in Nuke's output I can see sub-zero values. It looks like the allocation vars aren't taken into account, or something along those lines.
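For reference, here is how I understand the uniform allocation on the sRGB space (allocationvars [-0.125, 1.125]) is supposed to behave. This is only a sketch of the mapping in Python, not OCIO's actual code, and the sample values are my own:

def uniform_alloc(value, lo=-0.125, hi=1.125):
    # Uniform allocation: a linear remap of [lo, hi] into the [0, 1]
    # domain that the GPU texture can represent, so values slightly
    # below 0.0 and slightly above 1.0 should survive the trip.
    return (value - lo) / (hi - lo)

print(uniform_alloc(-0.125))  # 0.0
print(uniform_alloc(1.0))     # 0.9
print(uniform_alloc(1.125))   # 1.0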
My shader output looks like this; the language used is HLSL_DX11:
Texture2D ociolut1d_0;
SamplerState ociolut1d_0Sampler;

// Converts a scalar channel value into a 2D texel position in the
// 4096x17 LUT texture. Note the min(f, 1.0): the input is clamped to
// 1.0 before the lookup.
float2 ociolut1d_0_computePos(float f)
{
    float dep = min(f, 1.0) * 65535.;
    float2 retVal;
    retVal.y = float(int(dep / 4095.));
    retVal.x = dep - retVal.y * 4095.;
    retVal.x = (retVal.x + 0.5) / 4096.;
    retVal.y = (retVal.y + 0.5) / 17.;
    return retVal;
}

float4 OCIOConvert(in float4 inPixel)
{
    float4 outColor = inPixel;
    outColor.r = ociolut1d_0.Sample(ociolut1d_0Sampler, ociolut1d_0_computePos(outColor.r)).r;
    outColor.g = ociolut1d_0.Sample(ociolut1d_0Sampler, ociolut1d_0_computePos(outColor.g)).g;
    outColor.b = ociolut1d_0.Sample(ociolut1d_0Sampler, ociolut1d_0_computePos(outColor.b)).b;
    return outColor;
}
If anyone has run into this issue before, I'd be happy to hear about it :)
Thanks!
Patrick Hodoul <patric...@...>
Hi Simon,
Thanks for taking the time to use/test the master branch, and sorry for the delay in answering.
I did some investigation, and you found a 'glitch' :-) in the master branch (i.e. OCIO v2). Following your use case and using the latest commit from the master branch, the inverse Lut1D is clamping to [0, 1]. Even though we are currently revisiting all the ops to comply with the CLF requirements, I will still log an issue on GitHub to keep track of this use case (the corresponding unit test is clearly missing).
Regards,
Patrick.
Patrick Hodoul <patric...@...>
Here is the GitHub issue: #622
Simon Therriault <mos...@...>
Thanks for the confirmation, Patrick!
Also, in the latest release, 1.1, I see the same kind of behaviour. For the Nuke-default configuration, if I go from linear to sRGB, the resulting 3D LUT doesn't have any negative values. So if I input Marci's image, all the blacks, which are negative, get "clamped" to a pixel value higher than 0. I don't expect bug fixes for this; I'm just looking for confirmation that it is expected in this version.
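In case it helps for comparison, this is the kind of CPU-path check I've been doing, assuming the OCIO 1.x Python bindings and a local copy of the Nuke-default config (the path below is a placeholder). It shows what the transform produces for over-range and negative inputs before any allocation or LUT baking is involved:

import PyOpenColorIO as OCIO

# Load the Nuke-default config and build a CPU processor for linear -> sRGB.
config = OCIO.Config.CreateFromFile("nuke-default/config.ocio")
processor = config.getProcessor("linear", "sRGB")

# Over-range, middle grey, and slightly negative sample values.
pixels = [4.0, 0.18, -0.01]
print(processor.applyRGB(pixels))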
Also, do you have any timeline for the official 2.0 release?
Thanks again for your help!
Patrick Hodoul <patric...@...>
Giving a realistic timeline for OCIO v2 is quite challenging right now.
Patrick.
Aaron Carlisle <carlisle...@...>
I didn't investigate whether this is exactly the same issue, but it might be related to the one you are experiencing.
Simon Therriault <mos...@...>
I traced it in getGpuLut3D, and what I see is that when it applies the LogToLin, it clamps all negative values of the Lut1D. The resulting operation is 2^-15 -> 0.0000305.
When the Lut1D is inverted (linear to sRGB uses the inverse Lut1D), it finds the index in the LUT where that value is located; for me, this resulted in 0.1. So all earlier entries (the negative ones) aren't taken into account. Since the log-to-lin conversion won't give negative values, I always end up with positive data.
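To make the clamp concrete, here is roughly how I picture the lg2 allocation on the linear space (allocationvars [-15, 6]). This is only a sketch of the behaviour, not the actual OCIO implementation, and the sample values are mine:

import math

def lg2_alloc(value, lo=-15.0, hi=6.0):
    # log2-encode the value and remap [lo, hi] stops into [0, 1].
    # Anything at or below 2**lo (~0.0000305 for lo = -15) collapses to
    # the same texture coordinate, so negative values cannot survive.
    v = max(value, 2.0 ** lo)
    return (math.log2(v) - lo) / (hi - lo)

print(lg2_alloc(-0.1))        # 0.0, same as...
print(lg2_alloc(2.0 ** -15))  # 0.0
print(lg2_alloc(0.18))        # ~0.596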
I'll play with the configuration file. Maybe I need to tweak something, like they did in the Blender fix you sent me.
Thanks
Troy Sobotka <troy.s...@...>
There are two issues at work, both of which are important.
The first is the deeper GPU issue that the ever-wise Mr. Hodoul has fixed with the completely reworked GPU code path. That is a massive issue, and one that goes deeper than the simple allocation variables; it can end up as posterization in OCIO v1, given that the GPU transforms are collapsed into a single transform.
The allocation variables matter if you have a GPU path as well, given that the GPU path is a constricted set of values. As far as I have come to expect and understand things, the allocation variables are required if _either side of the transform has a scene referred range_. If your from_reference or to_reference requires a transform into the scene-referred domain, your allocation vars need to be set according to the minimum and maximum values required, per type. If these aren't set correctly, the range you require may be clipped off when going to the GPU.
With respect, TJS
Simon Therriault <mos...@...>
Thanks for your answer, Troy.
I started playing with the latest version with the revamped GPU path. I'll have to wait for the official release to integrate it, though.
For now, I'll keep looking at what I can get out of the AllocationVars. From what I understand, the Nuke-default config is not necessarily the best one for exercising the GPU path: all of the 1D LUTs involved describe the to_reference transform, and when I use them inverted, the config as-is clips the negatives.
Troy Sobotka <troy.s...@...>
It took me a bit of time mucking about to get the allocation variables set properly.
If you set your allocation type to lg2, the math for the lower and upper variables is the same as it is in an AllocationTransform, which is log2(VALUE); if you have stops as a reference, that is log2(2^STOP_ADJUSTMENT * MIDDLE_GREY). For example, for ten stops down with a middle-grey peg at 0.18 it should be log2(2^-10 * 0.18).
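Worked out numerically (a small sketch; the six-and-a-half-stop upper end is just an illustrative choice, not taken from any particular config):

import math

# Lower bound: ten stops below a 0.18 middle grey.
low = math.log2(2 ** -10 * 0.18)   # ~ -12.47

# Upper bound: e.g. six and a half stops above the same middle grey.
high = math.log2(2 ** 6.5 * 0.18)  # ~ 4.03

# An lg2 allocation covering that range would then be roughly:
# allocation: lg2
# allocationvars: [-12.47, 4.03]
print(low, high)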
For uniform allocations, the variables are the values as-is, and they get normalized to and from [0, 1], including negatives I believe, as per the example given in the documentation.
With respect, TJS
Bernard Lefebvre <bernard....@...>
The issue is now fixed. See https://github.com/imageworks/OpenColorIO/issues/622