
Python apply .cube ?

tre...@...
 

Please help!
I am trying to load in a file, resize it, apply a .cube LUT, then write it out.
I have the following so far, but I'm falling over at the OIIO-to-OCIO hand-off (one side using an oiio.ImageBuf and the other a pixel array).
Have I missed something? Is there an easier way to apply a .cube LUT to an image?
Thanks for any pointers!
Trevor

    # load in file & resize
    origBuffer = oiio.ImageBuf(aFile)
    resizeBuffer = oiio.ImageBuf(oiio.ImageSpec(1920, 972, 3, oiio.FLOAT))
    oiio.ImageBufAlgo.resize(resizeBuffer, origBuffer)

    config = OCIO.Config()

    transform = OCIO.FileTransform(src=self.LUT, interpolation=self.interpolation, direction=OCIO.Constants.TRANSFORM_DIR_FORWARD)
    processor = config.getProcessor(transform)

    pixels = self.inputNode.get_pixels()
    img = processor.applyRGBA(pixels)
    img.write(outputPath)
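[Editor's note: if your OIIO build is recent enough to include it, ImageBufAlgo.ociofiletransform applies a LUT file without ever leaving ImageBuf land (OIIO builds the OCIO FileTransform internally), which sidesteps the ImageBuf-to-pixel-array hand-off entirely. A sketch, assuming an OIIO build with OCIO support; paths and sizes are placeholders:]

```python
try:
    import OpenImageIO as oiio
except ImportError:
    oiio = None  # sketch only; requires an OIIO build with OCIO support

def apply_cube(in_path, lut_path, out_path, width=1920, height=972):
    """Resize in_path and apply a .cube LUT via OIIO's built-in OCIO hook."""
    src = oiio.ImageBuf(in_path)
    dst = oiio.ImageBuf(oiio.ImageSpec(width, height, 3, oiio.FLOAT))
    oiio.ImageBufAlgo.resize(dst, src)
    graded = oiio.ImageBuf()
    # ociofiletransform constructs the OCIO FileTransform internally,
    # so no manual ImageBuf <-> pixel-list round trip is needed.
    if not oiio.ImageBufAlgo.ociofiletransform(graded, dst, lut_path):
        raise RuntimeError(oiio.geterror())
    graded.write(out_path)
```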


Pull request for custom transforms

Lukas Stockner <lukas.s...@...>
 

Hello,

I decided to implement the custom transform idea I posted earlier and send a pull request: https://github.com/imageworks/OpenColorIO/pull/390

The implementation is quite short and simple, it just exposes a class that new Transforms can inherit. By implementing all the functions, it can then be used just like regular transforms.
This is particularly useful for customizing an OCIO-based color pipeline, since custom additions might be needed.
In my case, this functionality is required for implementing a tonemapping operator in Blender, which uses OCIO for its color management. Currently, the only two options are to either maintain a modified version of OCIO, or to implement it alongside OCIO before or after the Processor is called. With this change, it would be possible to implement the tonemapper as a Transform and directly use it in the DisplayTransform.

An example of what a custom transform looks like is given in these two files (header and source):
http://www.pasteall.org/56658
http://www.pasteall.org/56657

I hope that this feature will be accepted, and I will of course fix any issues that might be present.

Best regards,
Lukas Stockner


Re: Precision of float values in config generated from Python

Dithermaster <dither...@...>
 

You need to print and later parse 9 significant digits to uniquely capture a float.

For those craving more information: https://randomascii.wordpress.com/2012/03/08/float-precisionfrom-zero-to-100-digits-2/
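[Editor's note: the claim is easy to check in Python by emulating 32-bit floats with struct (an illustration, not OCIO code):]

```python
import struct

def roundtrip_float32(x, digits):
    """Round x to float32, format it with `digits` significant digits,
    re-parse, and check whether the identical float32 comes back."""
    as_f32 = struct.unpack("f", struct.pack("f", x))[0]
    reparsed = float(f"{as_f32:.{digits}g}")
    return struct.pack("f", as_f32) == struct.pack("f", reparsed)

# 9 significant digits always round-trip a float32...
assert roundtrip_float32(0.6954522414, 9)
# ...while 6 (the C++ stream default) can lose the value.
assert not roundtrip_float32(2.0 / 3.0, 6)
```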

///d@


On Fri, Jan 30, 2015 at 6:32 PM, Haarm-Pieter Duiker <li...@...> wrote:
Thanks for digging into this, and my apologies for missing the obvious limit of float precision. I was running into an issue where the limited precision, combined with high-intensity values, produced different results than the reference implementation. I've found a workaround, though. More precision preserved throughout the process would still be appreciated.

HP




On Fri, Jan 30, 2015 at 3:53 AM, Kevin Wheatley <kevin.j....@...> wrote:
There are cases in the code where the precision of the formatting is
used to generate cache IDs, so changing the precision here would change
the behavior. There are also cases with LUTs where certain precision
limits are assumed. My conclusion would thus be that getting to full float
precision should be doable, but with careful adjustments; in
particular, there might not be enough test coverage to ensure the
behaviour is maintained exactly.

Going to double precision, to go beyond a maximum of 9 digits, is a
different matter: does it make a visible difference to the images?

Kevin

--
You received this message because you are subscribed to the Google Groups "OpenColorIO Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ocio-dev+u...@....
For more options, visit https://groups.google.com/d/optout.



Re: Precision of float values in config generated from Python

Haarm-Pieter Duiker <li...@...>
 

Thanks for digging into this, and my apologies for missing the obvious limit of float precision. I was running into an issue where the limited precision, combined with high-intensity values, produced different results than the reference implementation. I've found a workaround, though. More precision preserved throughout the process would still be appreciated.

HP




On Fri, Jan 30, 2015 at 3:53 AM, Kevin Wheatley <kevin.j....@...> wrote:
There are cases in the code where the precision of the formatting is
used to generate cache IDs, so changing the precision here would change
the behavior. There are also cases with LUTs where certain precision
limits are assumed. My conclusion would thus be that getting to full float
precision should be doable, but with careful adjustments; in
particular, there might not be enough test coverage to ensure the
behaviour is maintained exactly.

Going to double precision, to go beyond a maximum of 9 digits, is a
different matter: does it make a visible difference to the images?

Kevin



Re: Precision of float values in config generated from Python

Kevin Wheatley <kevin.j....@...>
 

There is also a bug in yaml-cpp which prevents this; in
src/emitterstate.cpp, the code needs fixing to read:

bool EmitterState::SetFloatPrecision(int value, FMT_SCOPE scope)
{
    if (value < 0 || value > (2 + std::numeric_limits<float>::digits * 3010/10000))
        return false;
    _Set(m_floatPrecision, value, scope);
    return true;
}

bool EmitterState::SetDoublePrecision(int value, FMT_SCOPE scope)
{
    if (value < 0 || value > (2 + std::numeric_limits<double>::digits * 3010/10000))
        return false;
    _Set(m_doublePrecision, value, scope);
    return true;
}

Then in OCIO's code you need to add something like:

out.SetFloatPrecision(2 + std::numeric_limits<float>::digits * 3010/10000);
out.SetDoublePrecision(2 + std::numeric_limits<double>::digits * 3010/10000);

to the save function in OCIOYaml.cpp (near line 1664).

At least by that point ociocheck will read and write a config file
with more precision :-)

Kevin


Re: Precision of float values in config generated from Python

Kevin Wheatley <kevin.j....@...>
 

There are cases in the code where the precision of the formatting is
used to generate Cache Ids, changing the precision here would change
the behavior there are also cases with luts where certain precision
limits are assumed, my conclusion would thus be getting to full float
precision should be doable, but with careful adjustments, in
particular there might not be enough test coverage to ensure
maintaining the behaviour exactly.

Going to double precision to go beyond a maximum of 9 digits is a
different matter, does it make a visible difference tot he images?

Kevin


Re: Precision of float values in config generated from Python

Kevin Wheatley <kevin.j....@...>
 

On Fri, Jan 30, 2015 at 8:50 AM, Kevin Wheatley
<kevin.j....@...> wrote:
sounds like limits of float precision to me. that would mean being
'double' clean through the code,
Looking through the code, the matrix is stored as single-precision
float and the bindings to Python also assume as much; the image
processing also works as float. Storing the matrix as doubles would be
possible, at the expense of some performance loss in processing
(speculation on my part :-). This assumes your Python uses doubles
internally (likely the case).

Separate but related note:

There is some scattering of precision assumptions through the code
too, though these are mostly limited to output of files...

FLOAT_DECIMALS is set to 7 and there is a DOUBLE_DECIMALS used to
output things; however, it should really be using something like:

2 + std::numeric_limits<Target>::digits * 3010/10000;

(or max_digits10 if we allow it)

which would make the current values in the code incorrect...

see http://www2.open-std.org/JTC1/SC22/WG21/docs/papers/2005/n1822.pdf
(or http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF)
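[Editor's note: the formula is quick to sanity-check; 3010/10000 approximates log10(2), and digits is the mantissa width (24 for float, 53 for double). The results match C++'s max_digits10:]

```python
# Kevin's formula: decimal digits needed to round-trip a binary float.
FLOAT_PRECISION  = 2 + 24 * 3010 // 10000
DOUBLE_PRECISION = 2 + 53 * 3010 // 10000

assert FLOAT_PRECISION == 9    # std::numeric_limits<float>::max_digits10
assert DOUBLE_PRECISION == 17  # std::numeric_limits<double>::max_digits10
```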

However, be aware there are 'limitations' on some platforms (MSVC 7.1
for instance) where you cannot rely on round-tripping of values in
their standard library (Dinkumware).

I might see if I can knock up a patch for the output precision problems.

Kevin


Re: Precision of float values in config generated from Python

Kevin Wheatley <kevin.j....@...>
 

Sounds like limits of float precision to me; that would mean being
'double'-clean through the code.

Kevin


Precision of float values in config generated from Python

Haarm-Pieter Duiker <li...@...>
 

Hello,

Is there a way to configure the precision used for floating-point values stored in a config?

I have a Python script that generates a config based on a bunch of inputs, including a number of matrices that are declared in Python like so:

matrix = [ 0.6954522414, 0.1406786965, 0.1638690622,
           0.0447945634, 0.8596711185, 0.0955343182,
          -0.0055258826, 0.0040252103, 1.0015006723]

Matrices are added to a MatrixTransform through calls something like this:

ocio_transform = ocio.MatrixTransform()
ocio_transform.setMatrix(matrix)


We end up with a statement in the config that looks something like the following:

...
    to_reference: !<MatrixTransform> {matrix: [0.695452, 0.140679, 0.163869, 0, 0.0447946, 0.859671, 0.0955343, 0, -0.00552588, 0.00402521, 1.0015, 0, 0, 0, 0, 1]}
...

Ignore the padding from 3x3 to 4x4. The values in the config print with 6, 7 or 8 decimals. The input values had 10. 
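[Editor's note: the truncation is reproducible in plain Python using 6-significant-digit formatting, which appears to be the default C++ stream precision that yaml-cpp inherits:]

```python
# The emitted config shows ~6 significant digits:
matrix_entry = 0.6954522414
assert f"{matrix_entry:.6g}" == "0.695452"
# 10 significant digits would preserve the original literal:
assert f"{matrix_entry:.10g}" == "0.6954522414"
```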

Is there any way to preserve the precision of the original values? Would it be better to use the spimtx files to store these values?

Thanks in advance for your help,
HP





Re: ACES 1.0 released

Haarm-Pieter Duiker <li...@...>
 

Hi Steve,

These all seem like fair points. I'll take a look at this tomorrow and update the config. 

The questions around the linear range covered by ACEScc may need a bit more investigation. 

HP



On Wednesday, January 28, 2015, Steve Agland <sag...@...> wrote:
Hi HP,

I'm looking into rolling an internal project's OCIO configuration forward to use as much of your ACES 1.0 config as possible. I've run into a couple of issues and wanted to run them by you.

Firstly, the allocation for most of the linear spaces (including ACEScg, and our current working space "Linear - P3-D60") is currently defined as:

    allocation: uniform
    allocationvars: [0, 1]

This causes clipping of values > 1 when converting from the linear space into an Output Transform. It seems to only be a problem in certain OCIO implementations (I believe those using the GPU code path). In our case it was first noticed in RV. Changing these allocation settings to something like this seems to resolve the problem:

    allocation: lg2
    allocationvars: [-8.5, 5, .003]

I'm not sure if those values are optimal. Since it's only for interactive display it's probably fine. Using the lg2 allocation method seems to be the recommended approach for linear spaces.
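[Editor's note: my (possibly imperfect) reading of what the lg2 allocation does with allocationvars [min, max, offset] is a log2 ramp, sketched here in Python; with [-8.5, 5, 0.003], scene-linear values up to 2^5 = 32 fit below 1.0 instead of clipping at 1.0 as the uniform [0, 1] allocation does:]

```python
import math

def lg2_allocation(value, alloc_min=-8.5, alloc_max=5.0, offset=0.003):
    """Map a scene-linear value into [0, 1] as (my reading of) OCIO's
    lg2 allocation does: log2(value + offset), normalized between
    alloc_min and alloc_max stops."""
    v = math.log2(max(value + offset, 2.0 ** alloc_min))
    return (v - alloc_min) / (alloc_max - alloc_min)

# Scene-linear 1.0 lands about two-thirds of the way up the ramp,
# and values up to 2**5 = 32 still encode below 1.0.
```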


The second issue I've run into is with experimenting with the ACEScc space for use in grading/DI.

We're using Nucoda. The current test workflow is I'm delivering ACEScc-encoded proxies of the final comps to DI, and supplying them with a couple of baked LUTs for display (ACEScc -> P3-DCI) and for a hypothetical future archive master (ACEScc -> ACES2065-1). This seems to work well for interactive grading and display with the few production images we've tested but I'm concerned about preserving as much information as possible in the final ACES master.

The ACEScc_to_ACES.spi1d LUT bundled with the config has a range of 0.0 - 1.0. I think this could be wider. (Also, should this file perhaps be called ACEScc_to_ACESccLin.spi1d, since it doesn't include the matrix transformation?)

The S-2014-003 document (Annex A, p. 10) suggests that negative ACEScc values are to be expected - either from very dark linear values (< 7.25 stops below 18% middle grey), or from colors outside the AP1 gamut. At the other end, a value of 1.0 in ACEScc maps to ~223 in ACESccLin, but the spec suggests that the ACEScc -> ACESccLin conversion formula doesn't clip until you reach a linear value of 65504 (p. 9). This seems to correspond to a value of about 1.468 in ACEScc. At the low end the curve seems to hit linear 0 at about -0.358.
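[Editor's note: Steve's figures can be reproduced from the piecewise ACEScc decode curve in S-2014-003; a small sketch (the AP1-to-AP0 matrix step is omitted, so this is ACEScc to "ACESccLin" in the naming above):]

```python
import math

def acescc_to_linear(cc):
    """ACEScc -> linear, per the piecewise decode curve in S-2014-003
    (matrix step omitted)."""
    if cc <= (9.72 - 15.0) / 17.52:
        # toe segment: crosses zero near cc = -0.358, negative below it
        return (2.0 ** (cc * 17.52 - 9.72) - 2.0 ** -16) * 2.0
    if cc < (math.log2(65504.0) + 9.72) / 17.52:   # threshold ~ 1.468
        return 2.0 ** (cc * 17.52 - 9.72)
    return 65504.0  # half-float max

# cc = 1.0 decodes to roughly 223 in linear, as noted above.
```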

It seems like these values ( -0.358, 1.468 ) might be a reasonable range to use in the ACEScc_to_ACESccLin LUT - perhaps with more samples - in order to preserve more information when transforming in and out of ACEScc.

To test this I generated my own alternative ACEScc_to_ACES.spi1d with that range and tried a back-and-forth conversion: ACES -> ACEScc -> ACES. It seems to better preserve very dark (esp. black) and very very bright values in the admittedly contrived syntheticChart.01.exr ACES test image. In practice maybe this isn't going to matter much but it seems like an opportunity for a little more accuracy for no extra cost.

That's all a long way of saying "can we change a few of these numbers?" :)  Let me know what you think.

Cheers

Steve


On 14 January 2015 at 05:53, Haarm-Pieter Duiker <li...@...> wrote:
Hello,

The ACES 1.0 Developer Release is officially available! Full product information can be found on the revamped website, http://www.oscars.org/aces. To give you a preview, the ACES 1.0 release includes the following components:
  • Core ACES color transformation implemented in the Color Transformation Language (CTL)
  • Documentation on the expected use of these transforms
  • Technical specifications for other ACES core components
  • Documentation for implementers and end users
  • Test images in a variety of color encodings and formats, demonstrating the results of applying core ACES transformations
  • An OpenColorIO (OCIO) configuration package for core ACES transforms
The components of the ACES release can be found here
    Thanks for your feedback on the needs of the ACES OCIO config in particular and the system in general.

    Please take a look at the source code, documentation, configs and let us know what you think. Send in feedback on this list, the ACES Google Group or the direct email ac...@... address.

    Thanks again for all the feedback. Please keep it coming.
    HP



    Re: ACES 1.0 released

    Steve Agland <sag...@...>
     

    Hi HP,

    I'm looking into rolling an internal project's OCIO configuration forward to use as much of your ACES 1.0 config as possible. I've run into a couple of issues and wanted to run them by you.

    Firstly, the allocation for most of the linear spaces (including ACEScg, and our current working space "Linear - P3-D60") is currently defined as:

        allocation: uniform
        allocationvars: [0, 1]

    This causes clipping of values > 1 when converting from the linear space into an Output Transform. It seems to only be a problem in certain OCIO implementations (I believe those using the GPU code path). In our case it was first noticed in RV. Changing these allocation settings to something like this seems to resolve the problem:

        allocation: lg2
        allocationvars: [-8.5, 5, .003]

    I'm not sure if those values are optimal. Since it's only for interactive display it's probably fine. Using the lg2 allocation method seems to be the recommended approach for linear spaces.


    The second issue I've run into is with experimenting with the ACEScc space for use in grading/DI.

    We're using Nucoda. The current test workflow is I'm delivering ACEScc-encoded proxies of the final comps to DI, and supplying them with a couple of baked LUTs for display (ACEScc -> P3-DCI) and for a hypothetical future archive master (ACEScc -> ACES2065-1). This seems to work well for interactive grading and display with the few production images we've tested but I'm concerned about preserving as much information as possible in the final ACES master.

    The ACEScc_to_ACES.spi1d LUT bundled with the config has a range of 0.0 - 1.0. I think this could be wider. (Also, should this file perhaps be called ACEScc_to_ACESccLin.spi1d, since it doesn't include the matrix transformation?)

    The S-2014-003 document (Annex A, p. 10) suggests that negative ACEScc values are to be expected - either from very dark linear values (< 7.25 stops below 18% middle grey), or from colors outside the AP1 gamut. At the other end, a value of 1.0 in ACEScc maps to ~223 in ACESccLin, but the spec suggests that the ACEScc -> ACESccLin conversion formula doesn't clip until you reach a linear value of 65504 (p. 9). This seems to correspond to a value of about 1.468 in ACEScc. At the low end the curve seems to hit linear 0 at about -0.358.

    It seems like these values ( -0.358, 1.468 ) might be a reasonable range to use in the ACEScc_to_ACESccLin LUT - perhaps with more samples - in order to preserve more information when transforming in and out of ACEScc.

    To test this I generated my own alternative ACEScc_to_ACES.spi1d with that range and tried a back-and-forth conversion: ACES -> ACEScc -> ACES. It seems to better preserve very dark (esp. black) and very very bright values in the admittedly contrived syntheticChart.01.exr ACES test image. In practice maybe this isn't going to matter much but it seems like an opportunity for a little more accuracy for no extra cost.

    That's all a long way of saying "can we change a few of these numbers?" :)  Let me know what you think.

    Cheers

    Steve


    On 14 January 2015 at 05:53, Haarm-Pieter Duiker <li...@...> wrote:
    Hello,

    The ACES 1.0 Developer Release is officially available! Full product information can be found on the revamped website, http://www.oscars.org/aces. To give you a preview, the ACES 1.0 release includes the following components:
    • Core ACES color transformation implemented in the Color Transformation Language (CTL)
    • Documentation on the expected use of these transforms
    • Technical specifications for other ACES core components
    • Documentation for implementers and end users
    • Test images in a variety of color encodings and formats, demonstrating the results of applying core ACES transformations
    • An OpenColorIO (OCIO) configuration package for core ACES transforms
    The components of the ACES release can be found here
      Thanks for your feedback on the needs of the ACES OCIO config in particular and the system in general.

      Please take a look at the source code, documentation, configs and let us know what you think. Send in feedback on this list, the ACES Google Group or the direct email ac...@... address.

      Thanks again for all the feedback. Please keep it coming.
      HP



      Implementing a CustomTransform

      Lukas Stockner <lukas.s...@...>
       

      Hi,
      I recently found that it would be quite useful to have a CustomTransform that allows adding one's own Transforms without changing the actual OCIO code.
      My idea for this would be to have a CustomTransform class with a pure virtual function which is called by BuildCustomOps and puts the required Ops into the OpRcPtrVec.
      To add your own transform, it would then be enough to derive MyTransform from CustomTransform and implement this virtual function (in addition to the regular Transform member functions), which then pushes self-defined Ops into the OpRcPtrVec.
      Are there problems with this approach? If not, should I implement it and send a pull request?

      Best regards,
      Lukas Stockner


      Re: ACES 1.0 released

      Francois Lord <franco...@...>
       

      Congrats to everyone who worked on this. It is a huge achievement and you have made a great improvement to the industry.
      I can't wait to play with all of it, and test it on the first project that comes without a defined color pipeline, which shouldn't take too long.


      On Tue, Jan 13, 2015, 13:53 Haarm-Pieter Duiker <li...@...> wrote:
      Hello,

      The ACES 1.0 Developer Release is officially available! Full product information can be found on the revamped website, http://www.oscars.org/aces. To give you a preview, the ACES 1.0 release includes the following components:
      • Core ACES color transformation implemented in the Color Transformation Language (CTL)
      • Documentation on the expected use of these transforms
      • Technical specifications for other ACES core components
      • Documentation for implementers and end users
      • Test images in a variety of color encodings and formats, demonstrating the results of applying core ACES transformations
      • An OpenColorIO (OCIO) configuration package for core ACES transforms
      The components of the ACES release can be found here
        Thanks for your feedback on the needs of the ACES OCIO config in particular and the system in general.

        Please take a look at the source code, documentation, configs and let us know what you think. Send in feedback on this list, the ACES Google Group or the direct email ac...@... address.

        Thanks again for all the feedback. Please keep it coming.
        HP



        ACES 1.0 released

        Haarm-Pieter Duiker <li...@...>
         

        Hello,

        The ACES 1.0 Developer Release is officially available! Full product information can be found on the revamped website, http://www.oscars.org/aces. To give you a preview, the ACES 1.0 release includes the following components:
        • Core ACES color transformation implemented in the Color Transformation Language (CTL)
        • Documentation on the expected use of these transforms
        • Technical specifications for other ACES core components
        • Documentation for implementers and end users
        • Test images in a variety of color encodings and formats, demonstrating the results of applying core ACES transformations
        • An OpenColorIO (OCIO) configuration package for core ACES transforms
        The components of the ACES release can be found here
          Thanks for your feedback on the needs of the ACES OCIO config in particular and the system in general.

          Please take a look at the source code, documentation, configs and let us know what you think. Send in feedback on this list, the ACES Google Group or the direct email ac...@... address.

          Thanks again for all the feedback. Please keep it coming.
          HP


          Re: Academy CTF LUT Format Support

          Ben Doherty <benjdo...@...>
           

          Certainly. Here's CBS Digital's fork of OCIO:

          You can see my latest commits. The bulk of the code is in FileFormatCTF.cpp.


          On Tuesday, December 9, 2014 3:08:29 PM UTC-8, Ben Doherty wrote:

          Hello all,

          My name is Ben Doherty, and I'm a developer working for CBS Digital. We're interested in adding support in OCIO for The Academy's Color Transform File (CTF) LUTs. This additional functionality will benefit us here at CBS and hopefully encourage more studios to adopt the LUT format.

          Allow me to outline my ideas and development steps so far.

          As I understand it, each LUT file format is abstracted via an anonymous FileFormat which is registered with the FileTransform class.

          I've done the following:

          1. Created FileFormatCTF.cpp
          2. Registered the new format with the FileTransform class via registerFileFormat(CreateFileFormatCTF())
          3. Began implementing the GetFormatInfo(), Read(), and BuildFileOps() methods for the LocalFileFormat.
          4. Created a LocalCachedFile class
          5. Created a XMLTagHandler class (explained below)
          6. Created a CachedOp class (explained below)

          The CTF format is specified with XML. The bulk of a CTF LUT is sequential "Operator" elements (not to be confused with the OCIO Op class, though they are similar). A few examples: <matrix>, <gamma>, <range>, <lut1d>, etc.

          The Read() function parses the XML. For every Operator tag (<matrix>, <gamma>, etc) it comes across, it will instantiate the appropriate XMLTagHandler subclass. For instance, for every <matrix> tag, the class will instantiate a MatrixTagHandler. The job of a TagHandler is to read the specifics of the XML element and cache the salient data in a CachedOp subclass (e.g., MatrixCachedOp). Each tag, then, is converted to a CachedOp. A vector of these CachedOps is stored in the LocalCachedFile class.

          When the time comes to BuildFileOps(), the function iterates in order through the CachedOps within the LocalCachedFile, asking each one to create whichever OCIO Op is appropriate and adding it to the ops vector.
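[Editor's note: the Read()/BuildFileOps() flow described above can be sketched in Python terms like this; every class and function name here is a hypothetical illustration of the pattern, not the actual C++ in FileFormatCTF.cpp:]

```python
import xml.etree.ElementTree as ET

class MatrixCachedOp:
    """Caches the salient data of one <matrix> Operator element."""
    def __init__(self, values):
        self.values = values
    def build_op(self, ops):
        ops.append(("matrix", self.values))  # stand-in for an OCIO MatrixOp

class MatrixTagHandler:
    """Reads the specifics of a <matrix> element into a CachedOp."""
    tag = "matrix"
    @staticmethod
    def read(element):
        return MatrixCachedOp([float(v) for v in element.text.split()])

HANDLERS = {h.tag: h for h in (MatrixTagHandler,)}

def read_ctf(xml_text):
    """Read(): parse the XML, dispatch each Operator tag to its handler."""
    root = ET.fromstring(xml_text)
    return [HANDLERS[child.tag].read(child)
            for child in root if child.tag in HANDLERS]

def build_file_ops(cached_ops):
    """BuildFileOps(): ask each CachedOp, in order, to append its Op."""
    ops = []
    for cached in cached_ops:
        cached.build_op(ops)
    return ops
```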

          Please let me know if I have overlooked anything. At the moment, I’ve successfully implemented the <matrix> tag, which seems to be working as expected. I’m more than open to design / implementation suggestions; I can post my code if necessary.

          Thanks!
          Ben


          Re: Academy CTF LUT Format Support

          Mark Boorer <mark...@...>
           

          Hi Ben,

          That sounds great! What you've described sounds like you're heading down the right track. If it's possible, could you post the code to a github account? That way we could better collaborate, and would make it easier for when the code is finally merged with OpenColorIO properly :)

          Very keen to see what you've got so far.

          Cheers,
          Mark

          On Tue, Dec 9, 2014 at 11:08 PM, Ben Doherty <benjdo...@...> wrote:

          Hello all,

          My name is Ben Doherty, and I'm a developer working for CBS Digital. We're interested in adding support in OCIO for The Academy's Color Transform File (CTF) LUTs. This additional functionality will benefit us here at CBS and hopefully encourage more studios to adopt the LUT format.

          Allow me to outline my ideas and development steps so far.

          As I understand it, each LUT file format is abstracted via an anonymous FileFormat which is registered with the FileTransform class.

          I've done the following:

          1. Created FileFormatCTF.cpp
          2. Registered the new format with the FileTransform class via registerFileFormat(CreateFileFormatCTF())
          3. Began implementing the GetFormatInfo(), Read(), and BuildFileOps() methods for the LocalFileFormat.
          4. Created a LocalCachedFile class
          5. Created a XMLTagHandler class (explained below)
          6. Created a CachedOp class (explained below)

          The CTF format is specified with XML. The bulk of a CTF LUT is sequential "Operator" elements (not to be confused with the OCIO Op class, though they are similar). A few examples: <matrix>, <gamma>, <range>, <lut1d>, etc.

          The Read() function parses the XML. For every Operator tag (<matrix>, <gamma>, etc) it comes across, it will instantiate the appropriate XMLTagHandler subclass. For instance, for every <matrix> tag, the class will instantiate a MatrixTagHandler. The job of a TagHandler is to read the specifics of the XML element and cache the salient data in a CachedOp subclass (e.g., MatrixCachedOp). Each tag, then, is converted to a CachedOp. A vector of these CachedOps is stored in the LocalCachedFile class.

          When the time comes to BuildFileOps(), the function iterates in order through the CachedOps within the LocalCachedFile, asking each one to create whichever OCIO Op is appropriate and adding it to the ops vector.

          Please let me know if I have overlooked anything. At the moment, I’ve successfully implemented the <matrix> tag, which seems to be working as expected. I’m more than open to design / implementation suggestions; I can post my code if necessary.

          Thanks!
          Ben



          Academy CTF LUT Format Support

          Ben Doherty <benjdo...@...>
           

          Hello all,

          My name is Ben Doherty, and I'm a developer working for CBS Digital. We're interested in adding support in OCIO for The Academy's Color Transform File (CTF) LUTs. This additional functionality will benefit us here at CBS and hopefully encourage more studios to adopt the LUT format.

          Allow me to outline my ideas and development steps so far.

          As I understand it, each LUT file format is abstracted via an anonymous FileFormat which is registered with the FileTransform class.

          I've done the following:

          1. Created FileFormatCTF.cpp
          2. Registered the new format with the FileTransform class via registerFileFormat(CreateFileFormatCTF())
          3. Began implementing the GetFormatInfo(), Read(), and BuildFileOps() methods for the LocalFileFormat.
          4. Created a LocalCachedFile class
          5. Created a XMLTagHandler class (explained below)
          6. Created a CachedOp class (explained below)

          The CTF format is specified with XML. The bulk of a CTF LUT is sequential "Operator" elements (not to be confused with the OCIO Op class, though they are similar). A few examples: <matrix>, <gamma>, <range>, <lut1d>, etc.

          The Read() function parses the XML. For every Operator tag (<matrix>, <gamma>, etc) it comes across, it will instantiate the appropriate XMLTagHandler subclass. For instance, for every <matrix> tag, the class will instantiate a MatrixTagHandler. The job of a TagHandler is to read the specifics of the XML element and cache the salient data in a CachedOp subclass (e.g., MatrixCachedOp). Each tag, then, is converted to a CachedOp. A vector of these CachedOps is stored in the LocalCachedFile class.

          When the time comes to BuildFileOps(), the function iterates in order through the CachedOps within the LocalCachedFile, asking each one to create whichever OCIO Op is appropriate and adding it to the ops vector.

          Please let me know if I have overlooked anything. At the moment, I’ve successfully implemented the <matrix> tag, which seems to be working as expected. I’m more than open to design / implementation suggestions; I can post my code if necessary.

          Thanks!
          Ben


          Re: ACES OCIO configs, testing and feedback

          Andy Jones <andy....@...>
           

          +1 for the common still camera gamuts.  ProPhoto, Adobe Wide Gamut, and Adobe RGB.  There's a smattering of different whitepoints and gammas with those, which can be annoying to deal with.

          On Wed, Dec 3, 2014 at 12:22 PM, Matt Plec <mp...@...> wrote:
          I'm a bit late to the party, but for what it's worth, Red log (with their various red color primaries I assume?), Canon log, and GoPro "protune" were popular requests for inclusion in the default Nuke set, so I expect people will also want to convert between those and ACES.

          Two others worthy of consideration are ProPhoto and Adobe RGB. May sound crazy but hear me out.... Having them available makes it possible to produce stills out of vfx tools in a form print/marketing can work with directly and also to bring in stills from print/photography workflows directly to an ACES reference space comp/render.

          Cheers,
          Matt


          On Fri, Nov 7, 2014 at 7:02 PM, Haarm-Pieter Duiker <li...@...> wrote:
          On the IDT side, the current Academy repo includes IDTs for the following cameras / configurations
          • Arri Alexa with the following exposure indices: EI800, EI640, EI500, EI400, EI3200, EI320, EI2560, EI250, EI2000, EI200, EI1600, EI160, EI1280, EI1000
          • Sony F65 Tungsten and Daylight
          • Sony F35
          Are those very specific IDTs what you're looking for? What other camera / IDT color spaces would be helpful? Red? Canon? GoPro?

          On the point about including conversion between different primaries, what primaries sets would be useful? Rec2020? Rec709? P3? Arri wide gamut?
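
          For what it's worth, a conversion between any two of these primaries sets boils down to a 3x3 matrix built from the published chromaticities. A rough pure-Python sketch follows; note that no chromatic adaptation is applied, so the result is only strictly correct when the two white points match, and this is an illustration rather than OCIO's implementation:

```python
# Sketch: build an RGB-to-RGB conversion matrix between two primaries sets
# from their CIE xy chromaticities, in pure Python. No chromatic adaptation
# is applied. The chromaticities below are the published Rec.709 and
# ACES AP0 values.

def xy_to_XYZ(x, y):
    # Chromaticity -> XYZ with Y normalized to 1.
    return [x / y, 1.0, (1.0 - x - y) / y]

def mat_inv(m):
    # Inverse of a 3x3 matrix via the adjugate.
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)
    return [[(e*i - f*h) / det, (c*h - b*i) / det, (b*f - c*e) / det],
            [(f*g - d*i) / det, (a*i - c*g) / det, (c*d - a*f) / det],
            [(d*h - e*g) / det, (b*g - a*h) / det, (a*e - b*d) / det]]

def mat_mul(m, n):
    return [[sum(m[r][k] * n[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def npm(primaries, white):
    # Normalized primary matrix (RGB -> XYZ): columns are the primaries'
    # XYZ values, scaled so that RGB (1,1,1) maps to the white point.
    cols = [xy_to_XYZ(*p) for p in primaries]
    m = [[cols[c][r] for c in range(3)] for r in range(3)]
    w = xy_to_XYZ(*white)
    mi = mat_inv(m)
    s = [sum(mi[r][c] * w[c] for c in range(3)) for r in range(3)]
    return [[m[r][c] * s[c] for c in range(3)] for r in range(3)]

REC709 = npm([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)], (0.3127, 0.3290))
AP0 = npm([(0.7347, 0.2653), (0.0, 1.0), (0.0001, -0.0770)],
          (0.32168, 0.33767))

# Rec.709 RGB -> ACES AP0 RGB (no adaptation between D65 and the ACES white)
REC709_TO_AP0 = mat_mul(mat_inv(AP0), REC709)
```

          The same npm() call works for Rec.2020 or P3 given their chromaticities; a production implementation would also insert a Bradford (or similar) adaptation between the two white points.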

          We're also looking into the legal vs. full range options for spaces like ACESproxy.
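
          For anyone following along: "legal" (narrow/video) range at 10 bits places reference black at code 64 and reference white at code 940. A generic sketch of that scaling is below; note that ACESproxy defines its own encoding in the Academy specification, so treat this as the general video-range formula rather than the ACESproxy one:

```python
# Sketch: generic 10-bit legal ("video") vs. full range scaling. In legal
# range, reference black sits at code 64 and reference white at code 940,
# so a full-range code value is mapped into [64, 940] and back as follows.

def full_to_legal(code, bits=10):
    black = 64 << (bits - 10)   # 64 at 10-bit, 256 at 12-bit, ...
    white = 940 << (bits - 10)  # 940 at 10-bit, 3760 at 12-bit, ...
    full_max = (1 << bits) - 1
    return black + code * (white - black) / full_max

def legal_to_full(code, bits=10):
    black = 64 << (bits - 10)
    white = 940 << (bits - 10)
    full_max = (1 << bits) - 1
    return (code - black) * full_max / (white - black)
```

          The same formulas cover 12-bit (black 256, white 3760) via the bits argument.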

          Thanks for the suggestions so far,
          HP


          On Thu, Nov 6, 2014 at 7:50 AM, Francois Lord <franco...@...> wrote:
          Yes, and I suspect we will continue to receive camera native material for a while. VFX houses will adopt ACES more quickly than production houses.

          Would it be a good idea to include profiles for Rec.2020 so that we can easily convert CG renderings to and from ACES when working with the ocio config?


          On Wed Nov 05 2014 at 4:53:54 PM Steve Agland <sag...@...> wrote:
          On 6 November 2014 06:56, Haarm-Pieter Duiker <li...@...> wrote:

          Could you provide some example of when and how you would use the IDTs with OCIO? From a number of other sources, we're hearing that most IDTs will be applied by the Camera SDKs / toolsets so most OCIO-supporting applications will interact with ACES, ACESproxy or ACESlog images rather than images in the camera raw color spaces.

          It's not uncommon to receive external material in camera color spaces and to want to ingest them using the same OCIO-supporting tools that are used for color management internally. That could be scripts, command-line utilities or apps like Nuke. Having the IDTs in OCIO just makes that easier, even though the rest of the pipeline won't necessarily use those transforms.

          --
          You received this message because you are subscribed to the Google Groups "OpenColorIO Developers" group.
          To unsubscribe from this group and stop receiving emails from it, send an email to ocio-dev+u...@....
          For more options, visit https://groups.google.com/d/optout.



          Re: ACES OCIO configs, testing and feedback

          Matt Plec <mp...@...>
           

          I'm a bit late to the party, but for what it's worth, Red log (with their various red color primaries I assume?), Canon log, and GoPro "protune" were popular requests for inclusion in the default Nuke set, so I expect people will also want to convert between those and ACES.

          Two others worthy of consideration are ProPhoto and Adobe RGB. May sound crazy but hear me out.... Having them available makes it possible to produce stills out of vfx tools in a form print/marketing can work with directly and also to bring in stills from print/photography workflows directly to an ACES reference space comp/render.

          Cheers,
          Matt


          On Fri, Nov 7, 2014 at 7:02 PM, Haarm-Pieter Duiker <li...@...> wrote:
          On the IDT side, the current Academy repo includes IDTs for the following cameras / configurations
          • Arri Alexa with the following exposure indices: EI800, EI640, EI500, EI400, EI3200, EI320, EI2560, EI250, EI2000, EI200, EI1600, EI160, EI1280, EI1000
          • Sony F65 Tungsten and Daylight
          • Sony F35
          Are those very specific IDTs what you're looking for? What other camera / IDT color spaces would be helpful? Red? Canon? GoPro?

          On the point about including conversion between different primaries, what primaries sets would be useful? Rec2020? Rec709? P3? Arri wide gamut?

          We're also looking into the legal vs. full range options for spaces like ACESproxy.

          Thanks for the suggestions so far,
          HP


          On Thu, Nov 6, 2014 at 7:50 AM, Francois Lord <franco...@...> wrote:
          Yes, and I suspect we will continue to receive camera native material for a while. VFX houses will adopt ACES more quickly than production houses.

          Would it be a good idea to include profiles for Rec.2020 so that we can easily convert CG renderings to and from ACES when working with the ocio config?


          On Wed Nov 05 2014 at 4:53:54 PM Steve Agland <sag...@...> wrote:
          On 6 November 2014 06:56, Haarm-Pieter Duiker <li...@...> wrote:

          Could you provide some example of when and how you would use the IDTs with OCIO? From a number of other sources, we're hearing that most IDTs will be applied by the Camera SDKs / toolsets so most OCIO-supporting applications will interact with ACES, ACESproxy or ACESlog images rather than images in the camera raw color spaces.

          It's not uncommon to receive external material in camera color spaces and to want to ingest them using the same OCIO-supporting tools that are used for color management internally. That could be scripts, command-line utilities or apps like Nuke. Having the IDTs in OCIO just makes that easier, even though the rest of the pipeline won't necessarily use those transforms.



          Re: parseColorSpaceFromString() issue

          mi...@...
           


          I needed to get git-savvy anyway, so I went ahead and made a pull request for this patch: https://github.com/imageworks/OpenColorIO/pull/381

          Cheers,
          -Mike


          On Thursday, October 2, 2014 10:16:05 AM UTC-7, mik...@... wrote:
          On Wednesday, October 1, 2014 2:23:56 PM UTC-7, Mark Boorer wrote:

          If you feel like knocking up a pull request containing your patch, I'll merge it (assuming it's all good). Otherwise I'm happy to do so on your behalf.


          Thanks Mark.  If you wouldn't mind handling the pull request, that'd be helpful.  I'm afraid I'm still not fully up to speed on git.

          -miker