Python apply .cube ?
tre...@...
Please help! I am trying to load in a file, resize it, apply a .cube LUT, then write it out. I have the following so far, but I'm falling over between the OIIO and OCIO parts (one uses an oiio.ImageBuf and the other uses a pixel array). Have I missed something? Is there an easier way to apply a .cube LUT to an image? Thanks for any pointers!

Trevor

    # load in file & resize
    origBuffer = oiio.ImageBuf(aFile)
    resizeBuffer = oiio.ImageBuf(oiio.ImageSpec(1920, 972, 3, oiio.FLOAT))
    oiio.ImageBufAlgo.resize(resizeBuffer, origBuffer)

    config = OCIO.Config()
    transform = OCIO.FileTransform(src=self.LUT,
                                   interpolation=self.interpolation,
                                   direction=OCIO.Constants.TRANSFORM_DIR_FORWARD)
    processor = config.getProcessor(transform)

    pixels = self.inputNode.get_pixels()
    img = processor.applyRGBA(pixels)
    img.write(outputPath)
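One way to bridge the two APIs is to pull the pixels out of the resized ImageBuf, run them through the OCIO processor, and push them back into the buffer before writing. The following is only a sketch: it assumes numpy, an OIIO Python binding whose get_pixels/set_pixels work with numpy arrays, the OCIO 1.x bindings where applyRGB takes and returns a flat list of floats, and placeholder file names.

    import numpy as np
    import OpenImageIO as oiio
    import PyOpenColorIO as OCIO

    # load and resize with OIIO
    origBuffer = oiio.ImageBuf("input.exr")
    resizeBuffer = oiio.ImageBuf(oiio.ImageSpec(1920, 972, 3, oiio.FLOAT))
    oiio.ImageBufAlgo.resize(resizeBuffer, origBuffer)

    # build a bare OCIO processor around the .cube FileTransform
    config = OCIO.Config()
    transform = OCIO.FileTransform(src="look.cube",
                                   interpolation=OCIO.Constants.INTERP_LINEAR,
                                   direction=OCIO.Constants.TRANSFORM_DIR_FORWARD)
    processor = config.getProcessor(transform)

    # pull the pixels out of the ImageBuf, apply the LUT, and put them back
    pixels = resizeBuffer.get_pixels(oiio.FLOAT)        # numpy array, shape (h, w, 3)
    flat = pixels.reshape(-1).tolist()                  # applyRGB wants a flat float list
    graded = processor.applyRGB(flat)
    resizeBuffer.set_pixels(resizeBuffer.roi,
                            np.asarray(graded, dtype=np.float32).reshape(pixels.shape))

    # write the result with OIIO; the processor returns pixel data, not an image object
    resizeBuffer.write("output.exr")

Note also that applyRGBA expects four channels per pixel; with a three-channel buffer like the one above, applyRGB is the matching call.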
Pull request for custom transforms
Lukas Stockner <lukas.s...@...>
Hello,
Hello,

I decided to implement the custom transform idea I posted earlier and send a pull request: https://github.com/imageworks/OpenColorIO/pull/390

The implementation is quite short and simple: it just exposes a class that new Transforms can inherit from. By implementing all of its functions, a subclass can then be used just like the regular transforms. This is particularly useful for customizing an OCIO-based color pipeline, since site-specific additions might be needed. In my case, this functionality is required for implementing a tonemapping operator in Blender, which uses OCIO for its color management. Currently, the only two options are to either maintain a modified version of OCIO, or to implement the operator alongside OCIO before or after the Processor is called. With this change, it would be possible to implement the tonemapper as a Transform and use it directly in the DisplayTransform.

An example of what a custom transform looks like is in these two files (header and source):
http://www.pasteall.org/56658
http://www.pasteall.org/56657

So, I hope that this feature will be accepted, and I will of course fix any issues that might be present.

Best regards,
Lukas Stockner
Re: Precision of float values in config generated from Python
Dithermaster <dither...@...>
You need to print and later parse 9 significant digits to uniquely capture a float.

///d@

On Fri, Jan 30, 2015 at 6:32 PM, Haarm-Pieter Duiker <li...@...> wrote:
Re: Precision of float values in config generated from Python
Haarm-Pieter Duiker <li...@...>
Thanks for digging into this, and my apologies for missing the obvious limit of float precision. I was running into an issue where the limited precision, combined with high-intensity values, produced different results than the reference implementation. I've found a workaround, though. More precision preserved throughout the process would still be appreciated.

HP

On Fri, Jan 30, 2015 at 3:53 AM, Kevin Wheatley <kevin.j....@...> wrote:
Re: Precision of float values in config generated from Python
Kevin Wheatley <kevin.j....@...>
There is also a bug in yaml-cpp which prevents this: in src/emitterstate.cpp, the precision setters need fixing to read:

    bool EmitterState::SetFloatPrecision(int value, FMT_SCOPE scope)
    {
        if (value < 0 || value > (2 + std::numeric_limits<float>::digits * 3010 / 10000))
            return false;
        _Set(m_floatPrecision, value, scope);
        return true;
    }

    bool EmitterState::SetDoublePrecision(int value, FMT_SCOPE scope)
    {
        if (value < 0 || value > (2 + std::numeric_limits<double>::digits * 3010 / 10000))
            return false;
        _Set(m_doublePrecision, value, scope);
        return true;
    }

Then in OCIO's code you need to add something like:

    out.SetFloatPrecision(2 + std::numeric_limits<float>::digits * 3010 / 10000);
    out.SetDoublePrecision(2 + std::numeric_limits<double>::digits * 3010 / 10000);

to the save function in OCIOYaml.cpp (near line 1664). At least by that point ociocheck will read and write a config file with more precision :-)

Kevin
Re: Precision of float values in config generated from Python
Kevin Wheatley <kevin.j....@...>
There are cases in the code where the precision of the formatting is used to generate cache IDs, so changing the precision here would change that behaviour. There are also cases with LUTs where certain precision limits are assumed. My conclusion would thus be that getting to full float precision should be doable, but with careful adjustments; in particular, there might not be enough test coverage to ensure the behaviour is maintained exactly.

Going to double precision, to get beyond a maximum of 9 digits, is a different matter: does it make a visible difference to the images?

Kevin
Re: Precision of float values in config generated from Python
Kevin Wheatley <kevin.j....@...>
On Fri, Jan 30, 2015 at 8:50 AM, Kevin Wheatley <kevin.j....@...> wrote:
> Sounds like limits of float precision to me. That would mean being 'double' clean through the code.

Looking through the code, the matrix is stored as single-precision float and the Python bindings also assume as much; the image processing also works in float. Storing the matrix as doubles would be possible, at the expense of some performance loss in processing (speculation on my part :-). This assumes your Python uses doubles internally (likely the case).

Separate but related note: there is some scattering of precision assumptions through the code too, though these are mostly limited to the output of files... FLOAT_DECIMALS is set to 7 and there is a DOUBLE_DECIMALS used to output things; however, it should really be using something like

    2 + std::numeric_limits<Target>::digits * 3010/10000

(or max_digits10 if we allow it), which would make the values in the code incorrect... see http://www2.open-std.org/JTC1/SC22/WG21/docs/papers/2005/n1822.pdf (or http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF). However, be aware there are 'limitations' on some platforms (MSVC 7.1 for instance) where you cannot rely on round-tripping of values in their standard library (Dinkumware).

I might see if I can knock up a patch for the output precision problems.

Kevin
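That 2 + digits * 3010/10000 expression is max_digits10 spelled out with an integer approximation of log10(2). A quick sanity check of the values it produces, with numpy assumed for the round-trip test:

    # 3010/10000 approximates log10(2); results match std::numeric_limits<T>::max_digits10
    for name, mantissa_bits in (("float", 24), ("double", 53)):
        print(name, 2 + mantissa_bits * 3010 // 10000)   # float -> 9, double -> 17

    # 9 significant digits are enough to round-trip a single-precision value
    import numpy as np
    x = np.float32(0.6954522414)            # first entry of the matrix from the original question
    s = "%.9g" % x
    print(s, np.float32(float(s)) == x)     # True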
Re: Precision of float values in config generated from Python
Kevin Wheatley <kevin.j....@...>
Sounds like limits of float precision to me. That would mean being 'double' clean through the code.

Kevin
Precision of float values in config generated from Python
Haarm-Pieter Duiker <li...@...>
Hello,

Is there a way to configure the precision used for floating-point values stored in a config? I have a Python script that generates a config based on a bunch of inputs, including a number of matrices that are declared in Python like so:

    matrix = [ 0.6954522414, 0.1406786965, 0.1638690622,
               0.0447945634, 0.8596711185, 0.0955343182,
              -0.0055258826, 0.0040252103, 1.0015006723]

Matrices are added to a MatrixTransform through calls something like this:

    ocio_transform = ocio.MatrixTransform()
    ocio_transform.setMatrix(matrix)

We end up with a statement in the config that looks something like the following:

    ...
    to_reference: !<MatrixTransform> {matrix: [0.695452, 0.140679, 0.163869, 0, 0.0447946, 0.859671, 0.0955343, 0, -0.00552588, 0.00402521, 1.0015, 0, 0, 0, 0, 1]}
    ...

Ignore the padding from 3x3 to 4x4. The values in the config print with 6, 7 or 8 decimals. The input values had 10. Is there any way to preserve the precision of the original values? Would it be better to use the spimtx files to store these values?

Thanks in advance for your help,
HP
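No serialization setting can recover what single precision has already discarded: as Kevin notes in his replies above, the matrix is stored internally as 32-bit floats, so the 10-digit literals are quantized before they are ever written. A small illustration, assuming numpy is available:

    import numpy as np

    v = 0.6954522414                 # one of the 10-digit literals from the matrix
    f = np.float32(v)                # roughly what ends up being stored internally
    print("%.12g" % f)               # agrees with the literal to about 8 significant digits
    print(float(f) == v)             # False: the 10-digit value cannot survive single precision

Writing the config with 9 significant digits (max_digits10 for float) would at least preserve everything single precision can hold; going further needs the double-precision changes discussed above.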
Re: ACES 1.0 released
Haarm-Pieter Duiker <li...@...>
Hi Steve,
These all seem like fair points. I'll take a look at this tomorrow and update the config. The questions around the linear range covered by ACEScc may need a bit more investigation.

HP

On Wednesday, January 28, 2015, Steve Agland <sag...@...> wrote:
Re: ACES 1.0 released
Steve Agland <sag...@...>
Hi HP,

I'm looking into rolling an internal project's OCIO configuration forward to use as much of your ACES 1.0 config as possible. I've run into a couple of issues and wanted to run them by you.

Firstly, the allocation for most of the linear spaces (including ACEScg, and our current working space "Linear - P3-D60") is currently defined as:

    allocation: uniform
    allocationvars: [0, 1]

This causes clipping of values > 1 when converting from the linear space into an Output Transform. It seems to only be a problem in certain OCIO implementations (I believe those using the GPU code path); in our case it was first noticed in RV. Changing these allocation settings to something like this seems to resolve the problem:

    allocation: lg2
    allocationvars: [-8.5, 5, .003]

I'm not sure if those values are optimal. Since it's only for interactive display it's probably fine. Using the lg2 allocation method seems to be the recommended approach for linear spaces.

The second issue I've run into is with experimenting with the ACEScc space for use in grading/DI. We're using Nucoda. The current test workflow is that I'm delivering ACEScc-encoded proxies of the final comps to DI, and supplying them with a couple of baked LUTs for display (ACEScc -> P3-DCI) and for a hypothetical future archive master (ACEScc -> ACES2065-1). This seems to work well for interactive grading and display with the few production images we've tested, but I'm concerned about preserving as much information as possible in the final ACES master.

The ACEScc_to_ACES.spi1d LUT bundled with the config has a range of 0.0 - 1.0. I think this could be wider. (Also, should this file perhaps be called ACEScc_to_ACESccLin.spi1d, since it doesn't include the matrix transformation?) The S-2014-003 document (Annex A, pp. 10) suggests that negative ACEScc values are to be expected, either from very dark linear values (< 7.25 stops below 18% middle grey) or from colors outside the AP1 gamut. At the other end, a value of 1.0 in ACEScc maps to ~223 in ACESccLin, but the spec suggests that the ACEScc -> ACESccLin conversion formula doesn't clip until you reach a linear value of 65504 (pp. 9). This seems to correspond to a value of about 1.468 in ACEScc. At the low end the curve seems to hit linear 0 at about -0.358.

It seems like these values (-0.358, 1.468) might be a reasonable range to use in the ACEScc_to_ACESccLin LUT, perhaps with more samples, in order to preserve more information when transforming in and out of ACEScc. To test this I generated my own alternative ACEScc_to_ACES.spi1d with that range and tried a back-and-forth conversion: ACES -> ACEScc -> ACES. It seems to better preserve very dark (esp. black) and very bright values in the admittedly contrived syntheticChart.01.exr ACES test image. In practice maybe this isn't going to matter much, but it seems like an opportunity for a little more accuracy for no extra cost.

That's all a long way of saying "can we change a few of these numbers?" :) Let me know what you think.

Cheers
Steve

On 14 January 2015 at 05:53, Haarm-Pieter Duiker <li...@...> wrote:
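For reference, the two endpoints quoted above fall straight out of the ACEScc encoding in S-2014-003; a quick Python check (only the lin >= 2^-15 branch of the formula is needed here):

    import math

    def lin_to_acescc(lin):
        # S-2014-003 encoding for lin >= 2^-15
        return (math.log2(lin) + 9.72) / 17.52

    print(lin_to_acescc(65504.0))                  # ~1.468, where half-float linear clips
    print((math.log2(2.0 ** -16) + 9.72) / 17.52)  # ~-0.358, the code value assigned to linear 0

So (-0.358, 1.468) is exactly the span needed to cover linear 0 up to the half-float maximum.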
Implementing a CustomTransform
Lukas Stockner <lukas.s...@...>
Hi,
I recently found that it would be quite useful to have a CustomTransform that allows users to add their own Transforms without changing the actual OCIO code. My idea for this would be to have a CustomTransform class with a pure virtual function which is called by BuildCustomOps and puts the required Ops into the OpRcPtrVec. To add a custom transform, it would then be enough to derive MyTransform from CustomTransform and implement this virtual function (in addition to the regular Transform member functions), which then pushes self-defined Ops into the OpRcPtrVec.

Are there problems with this approach? If not, should I implement it and send a pull request?

Best regards,
Lukas Stockner
Re: ACES 1.0 released
Francois Lord <franco...@...>
Congrats to everyone who worked on this. It is a huge achievement and you have made a great improvement to the industry.

On Tue, Jan 13, 2015, 13:53 Haarm-Pieter Duiker <li...@...> wrote:
ACES 1.0 released
Haarm-Pieter Duiker <li...@...>
Hello, The ACES 1.0 Developer Release is officially available! Full product information can be found on the revamped website, http://www.oscars.org/aces. To give you a preview, the ACES 1.0 release includes the following components:
The components of the ACES release can be found here
Thanks for your feedback on the needs of the ACES OCIO config in particular and the system in general. Please take a look at the source code, documentation, and configs, and let us know what you think. Send in feedback on this list, on the ACES Google Group, or directly to the ac...@... email address. Thanks again for all the feedback. Please keep it coming.

HP
Re: Academy CTF LUT Format Support
Ben Doherty <benjdo...@...>
Certainly. Here's CBS Digital's fork of OCIO: You can see my latest commits. The bulk of the code is in FileFormatCTF.cpp.

On Tuesday, December 9, 2014 3:08:29 PM UTC-8, Ben Doherty wrote:
Re: Academy CTF LUT Format Support
Mark Boorer <mark...@...>
Hi Ben,

That sounds great! What you've described sounds like you're heading down the right track. If possible, could you post the code to a GitHub account? That way we could collaborate better, and it would make it easier when the code is finally merged into OpenColorIO properly :) Very keen to see what you've got so far.

Mark

On Tue, Dec 9, 2014 at 11:08 PM, Ben Doherty <benjdo...@...> wrote:
Academy CTF LUT Format Support
Ben Doherty <benjdo...@...>
Hello all,

My name is Ben Doherty, and I'm a developer working for CBS Digital. We're interested in adding support in OCIO for the Academy's Color Transform File (CTF) LUTs. This additional functionality will benefit us here at CBS and hopefully encourage more studios to adopt the LUT format. Allow me to outline my ideas and development steps so far.

As I understand it, each LUT file format is abstracted via an anonymous FileFormat which is registered with the FileTransform class. I've done the following:
The CTF format is specified with XML. The bulk of a CTF LUT is sequential "Operator" elements (not to be confused with the OCIO Op class, though they are similar). A few examples: <matrix>, <gamma>, <range>, <lut1d>, etc.

The Read() function parses the XML. For every Operator tag (<matrix>, <gamma>, etc.) it comes across, it will instantiate the appropriate XMLTagHandler subclass. For instance, for every <matrix> tag, it will instantiate a MatrixTagHandler. The job of a TagHandler is to read the specifics of the XML element and cache the salient data in a CachedOp subclass (e.g., MatrixCachedOp). Each tag, then, is converted to a CachedOp. A vector of these CachedOps is stored in the LocalCachedFile class.

When the time comes to BuildFileOps(), the function iterates in order through the CachedOps within the LocalCachedFile, asking each one to create whichever OCIO Op is appropriate and adding it to the ops vector.

Please let me know if I have overlooked anything. At the moment, I've successfully implemented the <matrix> tag, which seems to be working as expected. I'm more than open to design / implementation suggestions; I can post my code if necessary. Thanks!
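To make the flow concrete, here is a rough Python analogue of the pattern described above. It is illustrative only: the real implementation is C++ in FileFormatCTF.cpp, and the class and tag names below are simplified stand-ins rather than the actual OCIO types.

    import xml.etree.ElementTree as ET

    class MatrixCachedOp:
        """Caches the salient data read from a <matrix> element."""
        def __init__(self, values):
            self.values = values

        def create_op(self, ops):
            # stand-in for building a real MatrixOp and appending it
            ops.append(("matrix", self.values))

    def matrix_tag_handler(element):
        # the tag handler reads the element specifics and returns a CachedOp
        values = [float(v) for v in element.text.split()]
        return MatrixCachedOp(values)

    TAG_HANDLERS = {"matrix": matrix_tag_handler}

    def read(xml_text):
        # parse the XML and convert each Operator element into a CachedOp
        cached_ops = []
        for element in ET.fromstring(xml_text):
            handler = TAG_HANDLERS.get(element.tag)
            if handler is not None:
                cached_ops.append(handler(element))
        return cached_ops          # plays the role of LocalCachedFile

    def build_file_ops(cached_ops):
        # walk the CachedOps in order, letting each append its Op(s)
        ops = []
        for cached in cached_ops:
            cached.create_op(ops)
        return ops

    ctf = "<ProcessList><matrix>1 0 0  0 1 0  0 0 1</matrix></ProcessList>"
    print(build_file_ops(read(ctf)))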
Re: ACES OCIO configs, testing and feedback
Andy Jones <andy....@...>
+1 for the common still camera gamuts: ProPhoto, Adobe Wide Gamut, and Adobe RGB. There's a smattering of different whitepoints and gammas with those, which can be annoying to deal with.

On Wed, Dec 3, 2014 at 12:22 PM, Matt Plec <mp...@...> wrote:
Re: ACES OCIO configs, testing and feedback
Matt Plec <mp...@...>
I'm a bit late to the party, but for what it's worth, Red log (with their various Red color primaries, I assume?), Canon Log, and GoPro "Protune" were popular requests for inclusion in the default Nuke set, so I expect people will also want to convert between those and ACES.

Two others worthy of consideration are ProPhoto and Adobe RGB. May sound crazy, but hear me out: having them available makes it possible to produce stills out of vfx tools in a form print/marketing can work with directly, and also to bring stills from print/photography workflows directly into an ACES reference space comp/render.

Cheers,
Matt

On Fri, Nov 7, 2014 at 7:02 PM, Haarm-Pieter Duiker <li...@...> wrote:
Re: parseColorSpaceFromString() issue
mi...@...
I needed to get git-savvy anyway, so I went ahead and made a pull request for this patch: https://github.com/imageworks/OpenColorIO/pull/381

Cheers,
-Mike

On Thursday, October 2, 2014 10:16:05 AM UTC-7, mik...@... wrote: