
Re: S-Log2/S-gamut 10-bit 422 XAVC to Linear ACES RGB EXR

Vincent Olivier <vin...@...>
 

Hi Jeremy,

Thanks so much for your reply.

On 2013-05-29, at 12:09 AM, Jeremy Selan <jeremy...@...> wrote:

First off, can you share a bit more about what you're trying to accomplish in transcoding the F55 stream to OpenEXR? What are you hoping to do with the EXR frames? Are you aiming for real-time playback?  Is the encoding performance critical? Hearing more about the desired usage would be very helpful.




For now, I "only" want to find a way to get the most accurate, richest, "objectively" (standalone) scene-referred images from my camera in ACES-linear. This is the first step: to which degree can the images from the F55/F65 be considered a physically accurate photographic reference measurement of the the scene for processing and also for archival (I don't think only keeping the original camera-referred data is sufficient in the long term). I'm also looking for a way to keep other exposure-related data such as t-stop and sensitivity and other in-camera processing logs into the EXR headers to be able to physically qualify a scene based with regards to the image.

I'm a programmer and a photographer. So the applications I'm looking forward to tackling are, first and foremost, of course, related to the mathematical and computational aspects of color and lighting æsthetics. I mean, I am very bored with the state of cinematography right now; there are really only 2 looks applied to every single movie: the Transformers anamorphic-crushed lensflare blue and the HDSLR peaches-and-cream indie porridge. Since The Archers (especially Red Shoes and Black Narcissus, and notice that these 2-word titles start with the name of a color), I haven't seen cinematography that can get a braingasm out of my visual cortex. Just look at what is created by the contrast between the herbal arrangement and the dresses in the fashion show at the end of Queen Cotton (starting at 10:00). That's what I call art. I think the job of a cinematographer (a good one anyway) has to extend all the way to the computer imagery, the compositing and the image finishing of the movie in the digital realm. That's my personal goal.

Image-based lighting for unbiased rendering is another area of innovation I will be interested to look into in the near future once I get comfortable with ACES. I cannot afford access to Arnold nodes, but some Arnold core developers have contributed to Blender's Cycles in a significant way, and I really like this renderer, and it's open source. And the color pipeline is a really hot topic right now in the Blender community.

I'm not sure this answers your question. But I hope it clarifies my intentions a bit.


Next off, how are you going to view these linearized EXR frames? Note that when you use the referenced color math to go to scene-linear, you'll probably prefer some sort of 's-shaped' tone mapping operator, rather than a simple gamma curve or such.




I only have access to Rec 709 monitors, some OLED ones. So I look at the images through an ACES to Rec 709 LUT, but the computational space is ACES and as far as measurements are concerned, I rely on the various analytical tools, mostly histograms and vectorscopes (I'm building a custom CIE [x,y] diagram vectorscope to track what happens to image data in and out of color transforms as a way to visually represent those transforms: VERY useful when you are trying to communicate where you want to go with your color pipeline).



The reason I ask is that in your code example, you appear to have a single chunk of code that is responsible for both decoding the frames and applying a particular set of hard-coded color transformations.  In my personal experience, I tend to gravitate towards separable chunks of re-usable processing rather than 'all in 1' binaries.




Well, to me, before it's ACES, it's garbage™. ;-) My code reflects this philosophy. I don't think I will find a need for independent Sgamut and Slog2 transforms. Maybe the YCbCr to RGB transform will become separable from the Sgamut/Slog2 handling, depending on the original footage input, yes. But I am also looking for the most computationally efficient code for a specific application (and I will probably write one big CUDA kernel for that too), and code-gathering gets me there (at the price of elegance, perhaps, but, well, performance is important in a production context). And this code is merely a proof of concept for now.




For example, it's common to have multiple productions at Imageworks concurrently which, while sharing input cameras, may choose to use slightly different input color transforms.  For this reason, in OpenColorIO the color configurations are loaded at runtime, rather than being built in.




And I would love to contribute the code I'm writing back to OCIO, in the elegant dynamic linear 3D color space transform combination you created. It's just that I need to get my YCbCr to ACES transform right first. Why doesn't Imageworks have the color-computing equivalent of Google's Summer of Code? I'll bring my sleeping bag and my toothbrush at your signal! ;-)



> My only question here is: what does Poynton mean when, comparing 8-bit to 10-bit, he says "The extra two bits are appended as least-significant bits to provide increased precision."?

I believe what Poynton is saying is that when converting between bit-depths at different precisions, you typically get them as extra LSB info.  Put another way, say you have a floating-point image representation where pixel values are between [0.0-1.0]. (This is NOT scene-linear imagery, of course.)  If we wanted to encode that in 8 bits, we would use integer values 0-255.  And for 10 bits, we would use 0-1023.


Agreed on the [0, 255] to [0, 1023], but then saying that the extra 2 bits are the least significant doesn't really make sense… But anyway, if we both think that's what he meant and the coding makes sense visually, then that's that.


Note that scaling from 8 to 10 bits (or back) is NOT a simple bit shift. Recall that a bit shift by 2 places is a simple mult or divide by 4, so if we took 255 to 10 bits using shifting, 255 would map to 1020! Ugh! (Remember that we want to use the full 1023-sized coding range.)  So I tend to think about bit changes as a mult/divide by the max.  I.e., to go from 8 bits -> 10 in a manner that uses the full range, you must mult by 1023/255.0.
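[A minimal Python sketch of the two scalings being contrasted here; illustration only, not code from this thread.]

def eight_to_ten_shift(v8):
    # Bit shift by 2 (multiply by 4): 255 maps to 1020, wasting the top codes
    return v8 << 2

def eight_to_ten_full_range(v8):
    # Scale by the ratio of the maxima: 255 maps to 1023, the full coding range
    return int(round(v8 * 1023 / 255.0))

assert eight_to_ten_shift(255) == 1020
assert eight_to_ten_full_range(255) == 1023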


Agreed. That's how I do it in lines 187 to 191. But as far as headroom/footroom are concerned, I don't go from 8 to 10; I simply subtract the assumed 10-bit footroom value from the sample and then divide by the 10-bit max.
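[For reference, the two candidate normalizations side by side, as a sketch; which one is right depends on whether the S-Log2 signal really occupies the full 10-bit range or the legal video range.]

def normalize_legal_range(code10, footroom=64, headroom=940):
    # Legal-range luma: map [64, 940] onto [0.0, 1.0]
    return (code10 - footroom) / float(headroom - footroom)

def normalize_footroom_only(code10, footroom=64):
    # The approach described above: subtract footroom, divide by the 10-bit max
    return (code10 - footroom) / 1023.0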


There are more bit-efficient ways to do this - an old OpenImageIO thread discusses the subtleties - I can search for it if you're interested. For non-performance-critical versions, going to float as an intermediate representation is usually the best option. Getting integer math right is hard, in some crappy non-obvious ways.

Your code has a few lines similar to,
>  pow(256, i)
This 'double' arithmetic is probably not what you're looking for. Perhaps a simple bitshift instead?


I chose a power function because I do not know the endianness at this point. It is not implemented yet, but I want to be able to handle either byte order.
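[For illustration, a hypothetical helper that assembles a 10-bit sample from two bytes for either byte order; in the little-endian case a shift of the second byte is all that's needed, and pow(256, i) merely computes the same factor in double precision.]

def sample_from_bytes(b0, b1, little_endian=True):
    if little_endian:
        word = b0 | (b1 << 8)    # low byte first
    else:
        word = (b0 << 8) | b1    # high byte first
    return word & 0x3FF          # keep the 10 significant bits

assert sample_from_bytes(0xFF, 0x03) == 1023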


> S-log2/S-gamut YCbCr to S-log2/S-gamut R'G'B'
> 1) The camera uses the Rec 709 R'G'B' to YCbCr transform. And I use the reverse Rec 709 to get R'G'B' from YCbCr. Or is there something in SMPTE-ST-2048-1:2011 I should know about and take into consideration here?

Your matrixing back to rgb, with appropriate range consideration, is probably appropriate.  What I'd recommend for validating your code is to see if you can break this processing into separate steps and compare against known reference solutions.  For example, does this Sony camera have any way to write out an RGB image directly? Or does Sony provide any reference software to transcode the stream to an uncompressed full-range RGB image?  Step one for testing your code is to disable the linearization, and to only compare the YCbCr transform bits.



Yes, I'm partnering with a local FX company to take reference shots of a Macbeth + f-stop chart and we'll see. But I would REALLY appreciate it if someone from Sony Electronics validated the code at some point in time. I know they probably don't start with something linear in-camera to get to Slog2/Sgamut, but they have surely done something similar to what I'm doing… I just find it odd that they didn't publish it (if they have it, which might not be the case), like they did for the original Slog/Sgamut transform in the 2009 whitepaper.


If memory serves, I also believe that OpenEXR has native support for subsampled chroma images; you may want to investigate that.



I can and did output separate grayscale files for each of the Y, Cb and Cr channels. But I didn't find a way to put all three in one file.



As you note, YCbCr most often utilizes the range of 16-235 (for 8 bits) and 64-940 (for 10 bits) when storing *rec709* imagery. However, the Slog2 imagery takes advantage of this extra range so you have to be careful not to build in the wrong scale factors. Once again, off the top of my head I'm not sure if your code is correct or not.  But if I were in your shoes I would carefully compare the reconstructed RGB full range image versus a known correct result. (this may require capturing with a different setting in the camera).
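[As a concrete reference for the range handling under discussion, a sketch of BT.709 YCbCr -> R'G'B' with both normalizations made explicit. The coefficients are the standard Rec. 709 ones; whether full range or legal range applies to S-Log2 material is exactly the open question above.]

import numpy as np

def ycbcr10_to_rgb_709(y10, cb10, cr10, full_range=True):
    if full_range:
        y = y10 / 1023.0
        cb = cb10 / 1023.0 - 0.5
        cr = cr10 / 1023.0 - 0.5
    else:
        y = (y10 - 64.0) / 876.0      # luma: 64..940 -> 0..1
        cb = (cb10 - 512.0) / 896.0   # chroma: 64..960, centered on 512
        cr = (cr10 - 512.0) / 896.0
    m = np.array([[1.0,  0.0,      1.5748],
                  [1.0, -0.18732, -0.46812],
                  [1.0,  1.8556,   0.0]])
    return m @ np.array([y, cb, cr])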



Yes, I will be looking for a raw recorder to get their 16-bit Slog2/Sgamut RAW to OpenEXR ACES transform. However, my understanding is that they are at the same point as I am: their Slog2/Sgamut transforms are still a work in progress, even for the F65 (based on what I can see in Vegas and in their RAW Viewer).




> There are two distinct CTL transforms from S-gamut to ACES for two white points: 3200K and 5500K. Why? Would one get the 5500K transform matrix by applying a white-balance transform on the 3200K version (and vice-versa)?

The color transform you're looking at is tailored to the F65, FYI, so I'm not sure how closely these matrices would match for your camera.  The reason there are two transforms is that I believe Sony has optimized the conversion to SLog2, in the F65, to be specific to the color balance on the camera.  This is pretty non-standard, and so should be taken with a grain of salt until we get an official IDT from the ACES community. But in my understanding the different IDTs are required for strict accuracy.  Perhaps if you have a camera at hand, you can do an example test with both approaches, and see how large the residual differences are?

In practice, people may prefer to standardize on one of the IDTs for sanity's sake, even if it's not perfect in all situations.  (An example of a similar common practice would be the Arri Alexa's Log-C conversion, where a different LUT is required depending on the exposure index used. But in practice, people often drop this and just assume EI800 linearization.)



The ACES community, that's us, right? ;-)

One reason I sent this message is to see if there is interest in obtaining a consensus around community-official LUTs (and there is, as you must know: VFX supervisors are in a permanent state of panic regarding the increasing influx of Slog2/Sgamut material coming their way, and right now they settle on the "least visually horrible" Frankenstein transform they can find).

And since there is interest (we have to take into account that Sony is pushing wildly, with all its political weight, to have one and only one camera-referred space and gamma standardized - SMPTE-ST-2048, xvYCC - and preferably their own), I am wondering if we can all join our efforts to at least corroborate the findings.




> I feel like I'm reverse engineering the whole thing and I'm not confident enough (yet) of the rigorous "scene-referredness" of the output. It looks good, but there has been too much guesswork involved to fully trust it. I would really appreciate some pointers.

Agreed! You definitely need to validate this stuff when so much of the code is untested / bleeding edge.  If you have the time, interest, and access to the camera, nothing beats a ground-truth linearization test.  The rough outline for the test is to set up a test chart with a stable light source, and then to shoot an exposure sweep across the full range of exposures. Then, post-linearization, you should be able to align the different exposures and see how close they match! If they all match (other than clipping at the ends of the dynamic range), then your conversion is dead on (at least for the grayscale axis).
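[A sketch of the comparison step of that test, with a hypothetical helper that assumes the shots are already spatially aligned: after linearization, a shot exposed n stops over or under should differ from the reference by a pure gain of 2**n, so dividing that gain out and looking at the residual measures how linear the conversion really is.]

import numpy as np

def sweep_residual(ref_rgb, test_rgb, stops):
    aligned = test_rgb / (2.0 ** stops)          # undo the exposure offset
    mask = (ref_rgb > 0.01) & (ref_rgb < 10.0)   # skip clipped/noisy extremes
    return np.abs(aligned[mask] - ref_rgb[mask]) / ref_rgb[mask]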


Yes! We are definitely on the same page. I would just like some help from Sony, ideally.

That being said, I am in touch with them (Sony Electronics) to see if they have information in-house that they could let me take a look at. We are at the NDA stage right now. My feeling is that they "don't": that Sgamut/Slog2 is mostly a marketing initiative for the moment, and that they have yet to produce rigorous statements about the technical nature of the ideal transforms.


> Finally, on a side note, I would eventually accelerate the linear parts of the color transform through CUBLAS. I think I can achieve realtime speed both for offline conversion and field monitoring. Has anyone tried to port some of the OCIO code to CUDA?

Yes, there have been some attempts to do CUDA integration for GPU-accelerated 'final quality' transforms, but these were never taken past the prototype stage.  (Once again, my fault!)  I can point you to the branch if you're interested.



Yes, please!




 There has also been some OpenCL interest.


Hasn't OpenCL gone the way of Cg, already? ;-)


Vincent

PS: excuse my badly written English. It is really not my mother tongue, nor my working language.

PPS: I'm a total sucker for your ideas that got implemented in Katana, BTW. Really, this is sexy stuff to me. Have you ever had thoughts about extending/abstracting the Katana ontology/workflow/project-management system into more than just postproduction? Because, and my comment is nothing compared to the recognition you already had for this, I think that looking at a whole movie in this way (including previz, physical in-camera capture, audio, etc.) could be one heck of a deal-changer for filmmaking. That probably deserves another thread or another list entirely, but I thought I'd just pitch it here while I have your attention!


Re: S-Log2/S-gamut 10-bit 422 XAVC to Linear ACES RGB EXR

Jeremy Selan <jeremy...@...>
 

Vincent,

Interesting code, thanks for sharing!  My apologies for not replying sooner, my bandwidth has been way too limited of late. :(

There are quite a few different questions here.  Let me take a stab at some.

First off, can you share a bit more about what you're trying to accomplish in transcoding the F55 stream to OpenEXR? What are you hoping to do with the EXR frames? Are you aiming for real-time playback?  Is the encoding performance critical? Hearing more about the desired usage would be very helpful.  Next off, how are you going to view these linearized EXR frames? Note that when you use the referenced color math to go to scene-linear, you'll probably prefer some sort of 's-shaped' tone mapping operator, rather than a simple gamma curve or such.

The reason I ask is that in your code example, you appear to have a single chunk of code that is responsible for both decoding the frames and applying a particular set of hard-coded color transformations.  In my personal experience, I tend to gravitate towards separable chunks of re-usable processing rather than 'all in 1' binaries. For example, it's common to have multiple productions at Imageworks concurrently which, while sharing input cameras, may choose to use slightly different input color transforms.  For this reason, in OpenColorIO the color configurations are loaded at runtime, rather than being built in.

Here's an example of a stand-alone binary, which uses OpenImageIO to do the image reading/writing, and OpenColorIO to do the color processing.  Note that the color math is not built-in, but is abstracted away in the library:

> Then, I'm using FFmpeg's Lanczos 422 to 444 upscaling algorithm, which is slow, but produces the best results, IMHO.

Agreed!  Lanczos is a great compromise, maintaining sharpness without introducing too much overshoot/undershoot. :)

> My only question here is: what does Poynton mean when, comparing 8-bit to 10-bit, he says "The extra two bits are appended as least-significant bits to provide increased precision."?

I believe what Poynton is saying is that when converting between bit-depths at different precisions, you typically get them as extra LSB info.  Put another way, say you have a floating-point image representation where pixel values are between [0.0-1.0]. (This is NOT scene-linear imagery, of course.)  If we wanted to encode that in 8 bits, we would use integer values 0-255.  And for 10 bits, we would use 0-1023.  Note that scaling from 8 to 10 bits (or back) is NOT a simple bit shift. Recall that a bit shift by 2 places is a simple mult or divide by 4, so if we took 255 to 10 bits using shifting, 255 would map to 1020! Ugh! (Remember that we want to use the full 1023-sized coding range.)  So I tend to think about bit changes as a mult/divide by the max.  I.e., to go from 8 bits -> 10 in a manner that uses the full range, you must mult by 1023/255.0. There are more bit-efficient ways to do this - an old OpenImageIO thread discusses the subtleties - I can search for it if you're interested. For non-performance-critical versions, going to float as an intermediate representation is usually the best option. Getting integer math right is hard, in some crappy non-obvious ways.

Your code has a few lines similar to,
>  pow(256, i)
This 'double' arithmetic is probably not what you're looking for. Perhaps a simple bitshift instead?

> S-log2/S-gamut YCbCr to S-log2/S-gamut R'G'B'
> 1) The camera uses the Rec 709 R'G'B' to YCbCr transform. And I use the reverse Rec 709 to get R'G'B' from YCbCr. Or is there something in SMPTE-ST-2048-1:2011 I should know about and take into consideration here?

Your matrixing back to rgb, with appropriate range consideration, is probably appropriate.  What I'd recommend for validating your code is to see if you can break this processing into separate steps and compare against known reference solutions.  For example, does this Sony camera have any way to write out an RGB image directly? Or does Sony provide any reference software to transcode the stream to an uncompressed full-range RGB image?  Step one for testing your code is to disable the linearization, and to only compare the YCbCr transform bits.

If memory serves, I also believe that OpenEXR has native support for subsampled chroma images; you may want to investigate that.

As you note, YCbCr most often utilizes the range of 16-235 (for 8 bits) and 64-940 (for 10 bits) when storing *rec709* imagery. However, the Slog2 imagery takes advantage of this extra range so you have to be careful not to build in the wrong scale factors. Once again, off the top of my head I'm not sure if your code is correct or not.  But if I were in your shoes I would carefully compare the reconstructed RGB full range image versus a known correct result. (this may require capturing with a different setting in the camera).

> There are two distinct CTL transforms from S-gamut to ACES for two white points: 3200K and 5500K. Why? Would one get the 5500K transform matrix by applying a white-balance transform on the 3200K version (and vice-versa)?

The color transform you're looking at is tailored to the F65, FYI, so I'm not sure how closely these matrices would match for your camera.  The reason there are two transforms is that I believe Sony has optimized the conversion to SLog2, in the F65, to be specific to the color balance on the camera.  This is pretty non-standard, and so should be taken with a grain of salt until we get an official IDT from the ACES community. But in my understanding the different IDTs are required for strict accuracy.  Perhaps if you have a camera at hand, you can do an example test with both approaches, and see how large the residual differences are?

In practice, people may prefer to standardize on one of the IDTs for sanity's sake, even if it's not perfect in all situations.  (An example of a similar common practice would be the Arri Alexa's Log-C conversion, where a different LUT is required depending on the exposure index used. But in practice, people often drop this and just assume EI800 linearization.)

> I feel like I'm reverse engineering the whole thing and I'm not confident enough (yet) of the rigorous "scene-referredness" of the output. It looks good, but there has been too much guesswork involved to fully trust it. I would really appreciate some pointers.

Agreed! You definitely need to validate this stuff when so much of the code is untested / bleeding edge.  If you have the time, interest, and access to the camera, nothing beats a ground-truth linearization test.  The rough outline for the test is to set up a test chart with a stable light source, and then to shoot an exposure sweep across the full range of exposures. Then, post-linearization, you should be able to align the different exposures and see how close they match! If they all match (other than clipping at the ends of the dynamic range), then your conversion is dead on (at least for the grayscale axis).

> Finally, on a side note, I would eventually accelerate the linear parts of the color transform through CUBLAS. I think I can achieve realtime speed both for offline conversion and field monitoring. Has anyone tried to port some of the OCIO code to CUDA?

Yes, there have been some attempts to do CUDA integration for GPU-accelerated 'final quality' transforms, but these were never taken past the prototype stage.  (Once again, my fault!)  I can point you to the branch if you're interested.  There has also been some OpenCL interest.

For simple monitoring though, you don't necessarily need to go to scene-linear, but can instead go straight from Slog2 to the display transform using a 3D LUT.  OCIO does support this on the GPU already; see the ociodisplay example for the code.
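[A sketch of that monitoring path using the same DisplayTransform API shown later in this thread; "slog2" is a placeholder for whatever colorspace name your config actually defines.]

import PyOpenColorIO as OCIO

config = OCIO.GetCurrentConfig()
t = OCIO.DisplayTransform()
t.setInputColorSpaceName("slog2")   # hypothetical colorspace name
t.setDisplay(config.getDefaultDisplay())
t.setView(config.getDefaultView(config.getDefaultDisplay()))
processor = config.getProcessor(t)
rgb_display = processor.applyRGB([0.5, 0.5, 0.5])  # per-pixel apply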

Cheers,
Jeremy


On Sun, May 26, 2013 at 6:38 PM, Vincent Olivier <vin...@...> wrote:
Hi,

I'm trying to convert the S-Log2/S-gamut 10-bit 422 XAVC footage coming out of my Sony F55 camera to a sequence of Linear ACES RGB OpenEXR files. I would like to validate the assumptions I make throughout the conversion process with you guys, if you'd be so kind.




Little-endian S-log2/S-gamut 10-bit 422 YCbCr to little-endian S-log2/S-gamut 10-bit 444 YCbCr

First, I'm using FFmpeg to open the XAVC file (which it recognizes simply as an MXF-muxed XAVC Intra stream). The H.264 decoding seems to work superbly as far as I can see. Then, I'm using FFmpeg's Lanczos 422 to 444 upscaling algorithm, which is slow, but produces the best results, IMHO.

My only question here is: what does Poynton mean when, comparing 8-bit to 10-bit, he says "The extra two bits are appended as least-significant bits to provide increased precision."?

Because FFmpeg indicates that the stream is 10-bit little-endian, which calls for an 8-bit shift of the second byte (lines 168-184 in my code). Anyways, this seems to work just fine. I'm just checking if there is something I don't understand in Poynton's qualification of the 10-bit YCbCr bitstream endianness, or maybe it's reformatted under the hood by FFmpeg from the raw XAVC output. Mystery…



S-log2/S-gamut YCbCr to S-log2/S-gamut R'G'B'

My assumptions here are that:

1) The camera uses the Rec 709 R'G'B' to YCbCr transform. And I use the reverse Rec 709 to get R'G'B' from YCbCr. Or is there something in SMPTE-ST-2048-1:2011 I should know about and take into consideration here?

2) The footroom provision is 0…64 for luma samples and 0…512 for chroma samples. See lines 187-191 in my code. I have adapted that from the 8-bit footroom values (16/128) because it seems to make sense according to basic signal statistics I've made on the samples from one frame. But I'm REALLY not sure about that…

3) Slog2 code uses "full-range" RGB (0…255 and not 0…219). See matrix at lines 131-136 in my code for the YCbCr to RGB "full-range". The headroom-preserving transform matrix is at 140-145 (I'm not using this one).



S-log2/S-gamut R'G'B' to Linear S-gamut RGB

This is where it gets interesting. I have adapted the "slog2.py" code, part of the OpenColorIO-Configs project on GitHub provided by Jeremy Selan (I sent him an email weeks ago and didn't hear from him).

Assumptions made in my code:

1) The rescale at slog2.py:17-23 is redundant if you provide "full-range" RGB to the S-Log2 linearization algorithm. I left it in my code (see lines 65-66). But commenting it out seems to give more dynamic range to the result. Again, I might be dead wrong on this.

2) The differences between S-log1 and S-log2 are only: A: for the same input, S-log1 doesn't have a rescaling step and S-log2 has one (see my previous point); B: there is a linear portion in the shadows; and C: the highlight-portion power function is scaled by 219/155.
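[For reference, a sketch of that structure in code: an optional rescale, a linear toe in the shadows, and a log segment scaled by 219/155 in the highlights. The constants follow one reading of the slog2.py being discussed and Sony's published numbers; treat them as unverified until checked against an official source.]

def slog2_to_linear(y, rescale=True):
    if rescale:
        y = (y * 1023.0 - 64.0) / 876.0   # the rescale step (slog2.py:17-23)
    if y >= 0.030001222851889303:
        # highlight/log segment, scaled by 219/155 relative to S-Log1
        return 219.0 * (10.0 ** ((y - 0.616596 - 0.03) / 0.432699) - 0.037584) / 155.0
    # linear portion in the shadows
    return (y - 0.030001222851889303) / 3.53881278538813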



Linear S-gamut RGB to Linear ACES RGB

There are two distinct CTL transforms from S-gamut to ACES for two white points: 3200K and 5500K. Why? Would one get the 5500K transform matrix by applying a white-balance transform on the 3200K version (and vice-versa)?



I feel like I'm reverse engineering the whole thing and I'm not confident enough (yet) of the rigorous "scene-referredness" of the output. It looks good, but there has been too much guesswork involved to fully trust it. I would really appreciate some pointers.


Finally, on a side note, I would eventually accelerate the linear parts of the color transform through CUBLAS. I think I can achieve realtime speed both for offline conversion and field monitoring. Has anyone tried to port some of the OCIO code to CUDA?


Thanks for everything!

Vincent



S-Log2/S-gamut 10-bit 422 XAVC to Linear ACES RGB EXR

Vincent Olivier <vin...@...>
 

Hi,

I'm trying to convert the S-Log2/S-gamut 10-bit 422 XAVC footage coming out of my Sony F55 camera to a sequence of Linear ACES RGB OpenEXR files. I would like to validate the assumptions I make throughout the conversion process with you guys, if you'd be so kind.




Little-endian S-log2/S-gamut 10-bit 422 YCbCr to little-endian S-log2/S-gamut 10-bit 444 YCbCr

First, I'm using FFmpeg to open the XAVC file (which it recognizes simply as an MXF-muxed XAVC Intra stream). The H.264 decoding seems to work superbly as far as I can see. Then, I'm using FFmpeg's Lanczos 422 to 444 upscaling algorithm, which is slow, but produces the best results, IMHO.

My only question here is: what does Poynton mean when, comparing 8-bit to 10-bit, he says "The extra two bits are appended as least-significant bits to provide increased precision."?

Because FFmpeg indicates that the stream is 10-bit little-endian, which calls for an 8-bit shift of the second byte (lines 168-184 in my code). Anyways, this seems to work just fine. I'm just checking if there is something I don't understand in Poynton's qualification of the 10-bit YCbCr bitstream endianness, or maybe it's reformatted under the hood by FFmpeg from the raw XAVC output. Mystery…



S-log2/S-gamut YCbCr to S-log2/S-gamut R'G'B'

My assumptions here are that:

1) The camera uses the Rec 709 R'G'B' to YCbCr transform. And I use the reverse Rec 709 to get R'G'B' from YCbCr. Or is there something in SMPTE-ST-2048-1:2011 I should know about and take into consideration here?

2) The footroom provision is 0…64 for luma samples and 0…512 for chroma samples. See lines 187-191 in my code. I have adapted that from the 8-bit footroom values (16/128) because it seems to make sense according to basic signal statistics I've made on the samples from one frame. But I'm REALLY not sure about that…

3) Slog2 code uses "full-range" RGB (0…255 and not 0…219). See matrix at lines 131-136 in my code for the YCbCr to RGB "full-range". The headroom-preserving transform matrix is at 140-145 (I'm not using this one).



S-log2/S-gamut R'G'B' to Linear S-gamut RGB

This is where it gets interesting. I have adapted the "slog2.py" code, part of the OpenColorIO-Configs project on GitHub provided by Jeremy Selan (I sent him an email weeks ago and didn't hear from him).

Assumptions made in my code:

1) The rescale at slog2.py:17-23 is redundant if you provide "full-range" RGB to the S-Log2 linearization algorithm. I left it in my code (see lines 65-66). But commenting it out seems to give more dynamic range to the result. Again, I might be dead wrong on this.

2) The differences between S-log1 and S-log2 are only: A: for the same input, S-log1 doesn't have a rescaling step and S-log2 has one (see my previous point); B: there is a linear portion in the shadows; and C: the highlight-portion power function is scaled by 219/155.



Linear S-gamut RGB to Linear ACES RGB

There are two distinct CTL transforms from S-gamut to ACES for two white points: 3200K and 5500K. Why? Would one get the 5500K transform matrix by applying a white-balance transform on the 3200K version (and vice-versa)?



I feel like I'm reverse engineering the whole thing and I'm not confident enough (yet) of the rigorous "scene-referredness" of the output. It looks good, but there has been too much guesswork involved to fully trust it. I would really appreciate some pointers.


Finally, on a side note, I would eventually accelerate the linear parts of the color transform through CUBLAS. I think I can achieve realtime speed both for offline conversion and field monitoring. Has anyone tried to port some of the OCIO code to CUDA?


Thanks for everything!

Vincent


Re: P3 TO XYZ

dbr/Ben <dbr....@...>
 

On 20/05/2013, at 20:42, 大叔 <my200...@...> wrote:

1. Is the P3 to XYZ LUT a constant LUT (or constant values)?

Not sure exactly what you are asking, but... The P3 to XYZ transform is typically done as a colour matrix and gamma adjustment, which could also be baked into a 3D LUT.

There is a description of the spi-vfx config's xyz16 space in the docs:

...and the configuration itself:

2. If we make a P3 to XYZ LUT, do we need the calibration information of the projector?

No. The P3 to XYZ transform is independent of your projection calibration.

In other words, the P3 to XYZ should be a standard colour transform, which could be used on any calibrated projector.
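[To make "colour matrix and gamma adjustment" concrete, a sketch that derives the matrix from the DCI-P3 primaries and white point rather than hard-coding numbers; exact values should still be checked against your reference config, e.g. the spi-vfx xyz16 space mentioned above.]

import numpy as np

def rgb_to_xyz_matrix(rx, ry, gx, gy, bx, by, wx, wy):
    def xyz(x, y):
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    prim = np.column_stack([xyz(rx, ry), xyz(gx, gy), xyz(bx, by)])
    scale = np.linalg.solve(prim, xyz(wx, wy))   # force white to map to white
    return prim * scale

# DCI-P3 primaries with the DCI calibration white point
M = rgb_to_xyz_matrix(0.680, 0.320, 0.265, 0.690, 0.150, 0.060, 0.314, 0.351)

def p3_to_xyz(rgb_p3):
    return M @ np.power(rgb_p3, 2.6)   # 2.6 gamma decode, then the matrix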


Re: Custom namespace specification and SONAMEs

Piotr <piotr.s...@...>
 

Thanks Jeremy,

We use a 3rd party tool that is built against OCIO. Their recent release includes a custom namespaced OCIO build, but with no header file installation. The issue is that whilst they have namespaced their OCIO build with a prefix, the soname remains unchanged. This is quite possibly the worst scenario since our plugins (which also link against the same version of OCIO) fail to load as the symbols cannot be resolved by the linker.

We are thus wanting to build our own namespaced version of OCIO with a corresponding SONAME; at the moment I am testing a build with a
-D SOVERSION=LFL
option to the configuration.

Curious as to how others do it.

Cheers

Piotr
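[For anyone comparing such builds, a small hypothetical helper to confirm what SONAME a library actually advertises; readelf is part of binutils, and DT_SONAME is what the runtime linker matches against.]

import subprocess

def soname(path):
    out = subprocess.run(["readelf", "-d", path],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "SONAME" in line:
            return line.split()[-1]   # e.g. [libOpenColorIO.so.1]
    return None

print(soname("./libOpenColorIO.so"))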



On Tuesday, May 7, 2013 8:43:42 AM UTC-7, Jeremy Selan wrote:
Hmmm... I hadn't thought about passing an alternate soname to the build process, but it makes sense that you could need it for applications which load both a public and a private version of OCIO.

Can you share any more specifics about what you're trying to do? Either way, I'm happy to add a custom SONAME option (if it doesn't already exist).

For example, at SPI we build custom versions of the Nuke OCIO plugins (so we can stay up to date with the public master repo), which link to our spi-namespaced version of OCIO, and merely by registering our custom nodes first we haven't had an issue.  I'm curious why we haven't needed to worry about an alternate soname.

-- Jeremy



On Mon, May 6, 2013 at 4:27 PM, Piotr Stanczyk <piotr...@...> wrote:
Hi All,

I am looking to build the core lib with the -D OCIO_NAMESPACE=foo option.  Whilst this builds just fine, I see that the SONAME is unchanged with the above.
So, how do you resolve ambiguities here? Should I be passing in the -D SONAME=foobar in here?

Any tips welcome - thanks

Piotr




Re: Custom namespace specification and SONAMEs

Jeremy Selan <jeremy...@...>
 

Hmmm... I hadn't thought about passing an alternate soname to the build process, but it makes sense that you could need it for applications which load both a public and a private version of OCIO.

Can you share any more specifics about what you're trying to do? Either way, I'm happy to add a custom SONAME option (if it doesn't already exist).

For example, at SPI we build custom versions of the Nuke OCIO plugins (so we can stay up to date with the public master repo), which link to our spi-namespaced version of OCIO, and merely by registering our custom nodes first we haven't had an issue.  I'm curious why we haven't needed to worry about an alternate soname.

-- Jeremy



On Mon, May 6, 2013 at 4:27 PM, Piotr Stanczyk <piotr.s...@...> wrote:
Hi All,

I am looking to build the core lib with the -D OCIO_NAMESPACE=foo option.  Whilst this builds just fine, I see that the SONAME is unchanged with the above.
So, how do you resolve ambiguities here? Should I be passing in the -D SONAME=foobar in here?

Any tips welcome - thanks

Piotr




Custom namespace specification and SONAMEs

Piotr Stanczyk <piotr.s...@...>
 

Hi All,

I am looking to build the core lib with the -D OCIO_NAMESPACE=foo option.  Whilst this builds just fine, I see that the SONAME is unchanged with the above.
So, how do you resolve ambiguities here? Should I be passing in the -D SONAME=foobar in here?

Any tips welcome - thanks

Piotr



Re: Apply color transformation on images containing a single channel

Jeremy Selan <jeremy...@...>
 

> Unfortunately, it seems that PlanarImageDesc also requires at least 3 channels of data. I tried passing it NULL pointers for the other, non existing channels, though it complains at runtime that "Valid ptrs must be passed for all 3 image rgb color channels".
> I guess I can construct dummy 0 arrays for the other channels, but that seems like a waste of resources. Do you have any other ideas?

Yah, OCIO assumes that you always have at least the 3 R, G, B channels available for simultaneous processing.  The reason is that OCIO allows for channel 'crosstalk' in colorspace definitions, so it's impossible in the general case to compute any single output channel without access to each input at the same time.  If life were only 1-D LUTs, this would work, but with 3-D or matrices...

If there is a way to structure your code to have all 3 channels available at once, I'd highly recommend it.  Otherwise, you can create dummy '0' channels, but know that your implementation will silently fail (ugh!) for transforms with crosstalk (which is queryable from the processor, FYI).
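[A sketch of the workaround, replicating the channel rather than zero-filling so channel-symmetric transforms stay meaningful; "A" and "B" are hypothetical colorspace names. With real crosstalk there is no substitute for genuine RGB data.]

import PyOpenColorIO as OCIO

config = OCIO.GetCurrentConfig()
processor = config.getProcessor("A", "B")

def apply_single_channel(chan):
    rgb = []
    for v in chan:
        rgb.extend([v, v, v])         # replicate into R, G and B
    out = processor.applyRGB(rgb)     # flat [R,G,B, R,G,B, ...] list
    return out[::3]                   # pull one channel back out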

-- Jeremy



On Wed, Apr 10, 2013 at 11:55 PM, Mark Boorer <mark...@...> wrote:
Hi Jeremy,

Thanks for your quick reply!

Unfortunately, it seems that PlanarImageDesc also requires at least 3 channels of data. I tried passing it NULL pointers for the other, non existing channels, though it complains at runtime that "Valid ptrs must be passed for all 3 image rgb color channels".

I guess I can construct dummy 0 arrays for the other channels, but that seems like a waste of resources. Do you have any other ideas?

Thanks,
Mark






On Thursday, April 11, 2013 1:59:55 PM UTC+10, Jeremy Selan wrote:
If your image is packed separately per channel (ex: RRRRRRR... GGGGG... BBBB...), the PlanarImageDesc is probably what you're looking for. Give that a try?

PlanarImageDesc(float * rData, float * gData, float * bData, float * aData,
                        long width, long height,
                        ptrdiff_t yStrideBytes = AutoStride);

Note that the float * for alpha is optional, so if you are only loading RGB you can pass a NULL ptr for that.

-- Jeremy




On Wed, Apr 10, 2013 at 8:36 PM, Mark Boorer <mar...@...> wrote:
Hi,

In my application, images are split into their separate channels, and operations are performed per channel. I have individual float arrays for each channel (obtained via openimageio), and I would like to perform a colorspace transformation to each, preferably without joining them together.
I noticed that the PackedImageDesc docs say it ignores channels > 4, is there an easy way in the c++ library to provide a float array, and specify which channel it contains to perform the transformation?

Something like:

OCIO::PackedImageDesc img(data, w, h, 1, "R");

Thanks,
Mark



Re: Apply color transformation on images containing a single channel

Mark Boorer <mark...@...>
 

Hi Jeremy,

Thanks for your quick reply!

Unfortunately, it seems that PlanarImageDesc also requires at least 3 channels of data. I tried passing it NULL pointers for the other, non existing channels, though it complains at runtime that "Valid ptrs must be passed for all 3 image rgb color channels".

I guess I can construct dummy 0 arrays for the other channels, but that seems like a waste of resources. Do you have any other ideas?

Thanks,
Mark





On Thursday, April 11, 2013 1:59:55 PM UTC+10, Jeremy Selan wrote:
If your image is packed separately per channel (ex: RRRRRRR... GGGGG... BBBB...), the PlanarImageDesc is probably what you're looking for. Give that a try?

PlanarImageDesc(float * rData, float * gData, float * bData, float * aData,
                        long width, long height,
                        ptrdiff_t yStrideBytes = AutoStride);

Note that the float * for alpha is optional, so if you are only loading RGB you can pass a NULL ptr for that.

-- Jeremy




On Wed, Apr 10, 2013 at 8:36 PM, Mark Boorer <mar...@...> wrote:
Hi,

In my application, images are split into their separate channels, and operations are performed per channel. I have individual float arrays for each channel (obtained via openimageio), and I would like to perform a colorspace transformation to each, preferably without joining them together.
I noticed that the PackedImageDesc docs say it ignores channels > 4, is there an easy way in the c++ library to provide a float array, and specify which channel it contains to perform the transformation?

Something like:

OCIO::PackedImageDesc img(data, w, h, 1, "R");

Thanks,
Mark



Re: Apply color transformation on images containing a single channel

Jeremy Selan <jeremy...@...>
 

If your image is packed separately per channel (ex: RRRRRRR... GGGGG... BBBB...), the PlanarImageDesc is probably what you're looking for. Give that a try?


PlanarImageDesc(float * rData, float * gData, float * bData, float * aData,
                        long width, long height,
                        ptrdiff_t yStrideBytes = AutoStride);

Note that the float * for alpha is optional, so if you are only loading RGB you can pass a NULL ptr for that.

-- Jeremy




On Wed, Apr 10, 2013 at 8:36 PM, Mark Boorer <mark...@...> wrote:
Hi,

In my application, images are split into their separate channels, and operations are performed per channel. I have individual float arrays for each channel (obtained via openimageio), and I would like to perform a colorspace transformation to each, preferably without joining them together.
I noticed that the PackedImageDesc docs say it ignores channels > 4, is there an easy way in the c++ library to provide a float array, and specify which channel it contains to perform the transformation?

Something like:

OCIO::PackedImageDesc img(data, w, h, 1, "R");

Thanks,
Mark



Apply color transformation on images containing a single channel

Mark Boorer <mark...@...>
 

Hi,

In my application, images are split into their separate channels, and operations are performed per channel. I have individual float arrays for each channel (obtained via openimageio), and I would like to perform a colorspace transformation to each, preferably without joining them together.
I noticed that the PackedImageDesc docs say it ignores channels > 4, is there an easy way in the c++ library to provide a float array, and specify which channel it contains to perform the transformation?

Something like:

OCIO::PackedImageDesc img(data, w, h, 1, "R");

Thanks,
Mark


Re: OCIOLookTransform missing in Nuke 7.0v4?

Hugh Macdonald <hugh.ma...@...>
 

Your other option is to select Other -> All Plugins -> Update, and then find it in the Other -> All Plugins -> O menu.

Hugh Macdonald
nvizible – VISUAL EFFECTS

+44(0) 20 3167 3860
+44(0) 7773 764 708

www.nvizible.com



On 8 April 2013 18:47, Jeremy Selan <jeremy...@...> wrote:
It looks like Nuke 7.0v6 ships with the OCIOLookTransform node, though it's not exposed in the menus by default.

Quoting the release notes...

BUG ID 24785 - OCIOLookTransform was missing from the Other > All plugins menu.
You can also access OCIOLookTransform by pressing X on the Node Graph, making sure TCL is selected in the dialog that opens, typing OCIOLookTransform, and clicking OK.



On Sun, Apr 7, 2013 at 6:33 PM, Alex - <ale...@...> wrote:
It seems like the OCIOLookTransform node mentioned on this page:
is missing from the OCIO bundled with Nuke 7.0v4.

Has the LookTransform node been deprecated or am I missing something?

Regards
Alex





Re: OCIOLookTransform missing in Nuke 7.0v4?

Jeremy Selan <jeremy...@...>
 

It looks like Nuke 7.0v6 ships with the OCIOLookTransform node, though it's not exposed in the menus by default.

Quoting the release notes...

BUG ID 24785 - OCIOLookTransform was missing from the Other > All plugins menu.
You can also access OCIOLookTransform by pressing X on the Node Graph, making sure TCL is selected in the dialog that opens, typing OCIOLookTransform, and clicking OK.



On Sun, Apr 7, 2013 at 6:33 PM, Alex - <ale...@...> wrote:
It seems like the OCIOLookTransform node mentioned on this page:
is missing from the OCIO bundled with Nuke 7.0v4.

Has the LookTransform node been deprecated or am I missing something?

Regards
Alex



OCIOLookTransform missing in Nuke 7.0v4?

Alex - <ale...@...>
 

It seems like the OCIOLookTransform node mentioned on this page:
is missing from the OCIO bundled with Nuke 7.0v4.

Has the LookTransform node been deprecated or am I missing something?

Regards
Alex


Re: converting The Right Way™ from linear -> custom log -> monitor

Jeremy Selan <jeremy...@...>
 

Hi!

So you shouldn't have to muck around with any of the specific underlying Transforms (you mention the log transform and the file transform).  The typical use case with OCIO is that these are all abstracted away inside the OCIO configuration, and that clients of the library treat the color transformation stack as a 'black box'.  (The reason we expose these *.Transforms at all in Python is so that it's possible to script the creation of Configs using the Python API.)

I.e., if you wanted to convert pixels from colorspace "A" to colorspace "B"...

config.getProcessor("A","B") would suffice.  (either the names, or roles)

The client would *not* need to know whether the definitions of A and B involved 1-D LUTs, 3-D LUTs, log converts, etc.

So can you confirm your facility already has a show-level OCIO config with the colorspace definitions already setup?  (If you're using Katana or Nuke or Mari with OCIO functionality, it's likely you already have one setup).  Also note that I see you're calling  OCIO.GetCurrentConfig().  That reads from $OCIO, so if you're not seeing a warning in the shell then you're probably good to go.

The one exception where you DO need to use a Transform in a client app is the DisplayTransform, which is the simplest way to create a "canonical viewing transform".

So say you wanted to convert colors, from log space, to something suitable for image display.   As this is for image display, the DisplayTransform is most appropriate:

config = OCIO.GetCurrentConfig()

t = OCIO.DisplayTransform()
t.setInputColorSpaceName( "NAME OF YOUR LOG COLORSPACE OR ROLE_COMPOSITING_LOG ")
t.setDisplay( config.getDefaultDisplay() )
t.setView( config.getDefaultView( config.getDefaultDisplay() ) )

And then you get the processor,
processor = config.getProcessor(t)

And then you use the processor to process pixels.

For extra credit... Note that if you're drawing a gradient and having the user pick colors, a log space gradient - while much better than scene-linear - may not be ideal.  I.e., you may find that the distribution of colors along the gradient is not to your liking.    One of the OCIO pre-defined roles is ROLE_COLOR_PICKER.  The intent is that this would be the alias to the color space most suitable for use in a color-picker.  (The Katana color picker makes use of this role, for example).  And the default OCIO configs we publish have this defined in a manner suitable for color picking.
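[Putting those pieces together for the original question, a sketch using the same API as above, with roles standing in for show-specific colorspace names.]

import PyOpenColorIO as OCIO

config = OCIO.GetCurrentConfig()

# scene-linear -> log, by role, as a plain colorspace conversion
lin_to_log = config.getProcessor(OCIO.Constants.ROLE_SCENE_LINEAR,
                                 OCIO.Constants.ROLE_COMPOSITING_LOG)

# log -> display, via the canonical viewing transform
t = OCIO.DisplayTransform()
t.setInputColorSpaceName(OCIO.Constants.ROLE_COMPOSITING_LOG)
t.setDisplay(config.getDefaultDisplay())
t.setView(config.getDefaultView(config.getDefaultDisplay()))
log_to_display = config.getProcessor(t)

color_in = [0.18, 0.18, 0.18]
color_out = log_to_display.applyRGB(lin_to_log.applyRGB(color_in))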

Let me know if you have any further questions along these lines. This color picker stuff is both complicated, and fun. :)

-- Jeremy


On Mon, Mar 18, 2013 at 8:12 AM, <sor...@...> wrote:
I'm writing a fairly simple Python based color-managed color picker. OCIO seems like a good tool for the job, and it's an opportunity for me to learn a new API. Beyond hacking something together that "just works" I'd like to understand The Correct Way To Do Things According To The OCIO Python API. At the moment I'm not sure how my problem fits into the OCIO abstraction of roles, Displays, ColorSpaces, Transforms, Processors, Looks, Views, and Configs.

Our studio maintains monitor-specific Truelight Cube v2.0 files, which OCIO seems capable of reading. But we use these Truelight files to map colors in log space into the final display color space. And each show's definition of "log space" depends on show-specific variables such as these, for use in a Josh Pines lin-to-log transform:

# log reference
LOGLIN_LOGREF:                   445 445 445
# linear reference
LOGLIN_LINREF:                   0.18 0.18 0.18
# negative gamma
LOGLIN_NGAMMA:                   0.6 0.6 0.6
# black level
LOGLIN_BLACKLEVEL:               0.0 0.0 0.0
# negative density per 10 bit log code value
LOGLIN_DENSPERCV:                0.002 0.002 0.002



The following code runs, but it's not producing correct output because 'color_in' hasn't been converted from scene linear to log. Where do I add that conversion? I suppose that would be another Processor, generated from a Transform between Constants.ROLE_SCENE_LINEAR and the show-specific log colorspace, but is it possible to create a log color space defined in this way? I could kludge together some math that produces the correct result, but it feels like I'm cheating.

import PyOpenColorIO as OCIO
ocio_config = OCIO.GetCurrentConfig()
ft = OCIO.FileTransform(path,
                        interpolation=OCIO.Constants.INTERP_LINEAR)
log_to_display = ocio_config.getProcessor(ft)
# oops - color_in is not log!
color_out = log_to_display.applyRGB(color_in)
displaySwatch(color_out)

 
 


Re: Bugs in Windows build process

Jeremy Selan <jeremy...@...>
 

Hi!

We're definitely interested in posting a pre-compiled windows binary.
(It's a very common request). If you can point me where to download
it (off-list), I will add it to the downloads section.

Thanks!

-- Jeremy

On Fri, Mar 8, 2013 at 6:53 AM, Marie Fétiveau <m...@...> wrote:
Hello !

Does Windows pre-compiled ociobakelut still interest you ?

I've just built static and dynamic libs, ociocheck and ociobakelut for win7
64 (msvc 2010 express).
Here.

I also have win32 binaries but they are pretty old (1.RC). Not very
interesting...

++

Marie

On Wed, Dec 12, 2012 at 1:44 AM, Jeremy Selan <jeremy...@...>
wrote:

One of the items we're particularly interested in is getting a
pre-compiled ociobakelut working on windows, as part of the installer.
ociobakelut is required for maya / photoshop support, so I'd hate to
put it out there without the bakelut functionality. (We have found
csp(s) to work great with Maya).





Re: Bugs in Windows build process

Marie Fétiveau <m...@...>
 

Hello !

Does Windows pre-compiled ociobakelut still interest you ?

I've just built static and dynamic libs, ociocheck and ociobakelut for win7 64 (msvc 2010 express).

I also have win32 binaries but they are pretty old (1.RC). Not very interesting...

++

Marie


On Wed, Dec 12, 2012 at 1:44 AM, Jeremy Selan <jeremy...@...> wrote:
One of the items we're particularly interested in is getting a
pre-compiled ociobakelut working on windows, as part of the installer.
 ociobakelut is required for maya / photoshop support, so I'd hate to
put it out there without the bakelut functionality. (We have found
csp(s) to work great with Maya).




Pull Request: better disk caching behavior for concurrent lookups

Jeremy Selan <jeremy...@...>
 

https://github.com/imageworks/OpenColorIO/pull/309

Previously, if one loaded a file (LUT) that wasn't in cache, this would block concurrent access of the file caches, even for different files. With this update, lookups for different files are non-blocking. However, lookups on the same file are blocking; a LUT should never be loaded twice from disk, even when concurrently used.

This change was added so image viewers which need real-time performance can efficiently use a 'prefetch' thread to seed the file caches ahead of time. This is particularly useful when playing clips with per-shot 3-D LUTs.
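[The idea, illustrated; this is not the PR code, just the pattern: a brief global lock guards only the map of per-file locks, so different files load concurrently while the same file loads exactly once.]

import threading

_cache, _locks = {}, {}
_meta = threading.Lock()

def get_lut(path, load):
    with _meta:                        # brief: protects the lock map only
        lock = _locks.setdefault(path, threading.Lock())
    with lock:                         # per-file: same path blocks, others don't
        if path not in _cache:
            _cache[path] = load(path)  # hits disk at most once per file
        return _cache[path]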


Re: Linear workflow in Silhouette? And example LUT files?

Paul Miller <ste...@...>
 

> I wonder if the 1.0 clamping issue happens with LUTs only, or if it will also happen with a custom OCIO config? Also not sure if the issue is in the OCIO code, or if it's just in Silhouette. The CSP and Houdini LUTs work fine in Nuke's OCIOFileTransform node (Nuke 6.3v8 from May 2012). I may just be using an old version of Silhouette (4.5.3) with an older OCIO - I'll see if I can try a newer Silhouette version, maybe the bug was fixed!

Silhouette isn't doing any clamping of its own, so this must be happening in the FileTransform.

> Also, I bring up DPXs because if I use DPX in Silhouette, I can set its "Interpretation" to "linear", and then simply apply the 3D LUT (a .cube that accepts CineonLog input) and it will look perfect! However there are a few downsides to this:
> - We'd need to convert our EXRs to DPXs.
> - It's not exactly a linear workflow, e.g. the "Gain" slider applies gain in log space instead of scene-linear space.

We put the Gain transform before the FileTransform because that is what Mari does/did. I can see the reasoning of that, if the file LUT is meant to model a specific special display.

Sounds like a custom Config is the way to go.


Re: Linear workflow in Silhouette? And example LUT files?

Paul Miller <pa...@...>
 

On 2/13/2013 12:38 AM, Derek Melmoth wrote:
> I'm attempting to get a linear workflow working in Silhouette (4.5.3), which uses OpenColorIO. I'd like to load in scene-linear EXRs, apply a 1D linear-to-CineonLog transform, and then apply a 3D LUT.

Silhouette lets you use a straight-up OCIO config, or you can select a single LUT file. Sounds like you may need a custom config to do this, or find a LUT that combines both conversions.

> Silhouette makes it very easy to apply a LUT file... so all I should need is a LUT file that contains both a 1D lin-to-log shaper LUT and a 3D LUT. I'm able to create such LUT files for some applications (e.g. I can create .CSP luts that work in RV and Nuke), but I'm having trouble finding a file format that works correctly in Silhouette.

Silhouette reads all the LUT formats supported by OCIO. Can you convert your .CSP file to another format that is supported?

> Second question:
> A single LUT file may be the simplest way for me to get a linear workflow in Silhouette, but perhaps it's not the best way or the recommended way? What's the recommended approach to linear workflow in Silhouette? Or should we switch to DPXs instead of EXRs? Is there a way to apply 2 LUT files instead of 1? Should I look into creating a new OCIO configuration?

If you're working with EXRs you're already using a linear workflow, so that statement is a bit confusing. Switching to DPXs would require an additional conversion (handled by OCIO), so there is no need to do that.

Silhouette v5 was just released and we're working on a point release that adds/fixes a few things. I could look at getting the CSP bug fix in there.


Thanks!
-Derek



Anyone interested in what I've tried so far may read ahead:
==============================

So far here are the issues I've been having with the different LUT formats:

3dl
(Autodesk Apps: Lustre, Flame, etc. Supports shaper LUT + 3D)
Problem:
I don't think it can accept input values above 1.0, can it?

csp
(Cinespace (Rising Sun Research) LUT. Spline-based shaper LUT, with
either 1D or 3D LUT.)
CSP is perfect and allows a 1D shaper LUT with arbitrary inputs and
outputs, and we've been using these in RV for a while.
Problem:
In Silhouette CSP cannot be used right now because of a known CSP-reader
bug in OCIO:
https://github.com/imageworks/OpenColorIO/pull/304
(It expects to read the string "CSPLUTV100", but instead reads
"CSPLUTV100\n" with the newline character at the end, and fails to read
the file.)
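[The fix for that class of bug is a line-ending-tolerant header check, e.g.:]

def is_csp_header(line):
    return line.rstrip("\r\n") == "CSPLUTV100"

assert is_csp_header("CSPLUTV100\n")   # the case that used to fail
assert is_csp_header("CSPLUTV100")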

hdl
(Houdini. 1D Lut, 3D lut, 1D shaper Lut)
The 1D portion doesn't allow arbitrary input coordinates like CSP does,
but luckily the inaccuracy in the darkest colours is only slightly
noticeable.
Problem:
Unfortunately there's a bug that clips input values that are above 1.0,
and this is a deal-breaker. So, scene-linear values from 0 to 1 work as
expected, but anything above 1 fails to go above 0.6696 in log space. Is
there a place to file a bug report for this?
Also the lut file's extension had to be renamed to .lut in order for
Silhouette to read it :)


From the LUT list in the OCIO FAQ, it looks like the only other format
that supports shaper LUTs is .cub (Truelight format). I will try this
format next!

Are there any other LUT formats in OCIO that support 1D shaper LUTs? Or
should I instead just use the 3D LUT that accepts CineonLog input?


Thanks for reading! And I must say I'm pretty thankful for OpenColorIO
and the VES Cinematic Color white paper! :D
Cheers,
-Derek

