Re: Support for OpenGL/GLSL > 2.0?
mark.alex...@...
texture3D() is also deprecated in GLSL 1.30, so getGpuShaderText() should already be returning the texture() form when GLSL 1.3 is requested. An additional enum shouldn't be needed, since GLSL is forward-compatible from 1.30 onwards; the big break in the language is between 1.20 (GL 2.x) and 1.30 (GL 3.0). A sketch of requesting the 1.3 shader text follows below. On Thursday, May 21, 2015 at 6:41:04 AM UTC-4, Boon Hean Low wrote:
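For reference, a minimal sketch of requesting the GLSL 1.3 flavour of the shader text through the 1.x Python bindings; the colorspace names and LUT edge length here are placeholders, not part of the original posts:

    import PyOpenColorIO as OCIO

    config = OCIO.GetCurrentConfig()
    processor = config.getProcessor('linear', 'sRGB')  # placeholder spaces

    # Asking for GPU_LANGUAGE_GLSL_1_3 should yield texture() calls
    # rather than the texture3D() emitted for the GLSL 1.0 target.
    shader_desc = {
        'language': OCIO.Constants.GPU_LANGUAGE_GLSL_1_3,
        'functionName': 'OCIODisplay',
        'lut3DEdgeLen': 32,
    }
    print(processor.getGpuShaderText(shader_desc))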
Support for OpenGL/GLSL > 2.0?
Boon Hean Low <boonhe...@...>
Hi everyone, I saw that the GpuLanguage enum only goes up to GLSL 1.3; are there any plans to support anything higher? I'm using getGpuShaderText() to get the fragment shader snippets, but it's still using texture3D(), which is deprecated in OpenGL 4.0, which is what I'm using now. Thanks, Boon
Re: multiple config file
dbr/Ben <dbr....@...>
Merging/hierarchical configs have been discussed in the past, but I suspect it will not happen any time soon (it is a difficult problem to solve).
The best solution/alternative I am aware of is to generate your configs via a Python script. For example, have a function which produces your base OCIO.Config object, then a function which makes the show-specific modifications and writes out a show-specific config. That way you have a single source for all your common colorspaces etc., while still preserving a self-contained config.ocio for each show. There are examples of creating configs from Python in the OpenColorIO-configs repository; a minimal sketch of the pattern follows below.

- Ben

Sent from my phone

On 29 Apr 2015, at 12:30, lucien...@... wrote:
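A sketch of that pattern, using only 1.x Python API calls; the function names, colorspace names, and file paths are invented for illustration:

    import PyOpenColorIO as OCIO

    def make_base_config():
        # Facility-wide colorspaces shared by every show live here.
        config = OCIO.Config()
        cs = OCIO.ColorSpace(name='linear')
        cs.setBitDepth(OCIO.Constants.BIT_DEPTH_F32)
        config.addColorSpace(cs)
        config.setRole(OCIO.Constants.ROLE_SCENE_LINEAR, 'linear')
        return config

    def write_show_config(show, lut_path, out_path):
        # Start from the facility config, then layer on show-specific spaces.
        config = make_base_config()
        cs = OCIO.ColorSpace(name=show + '_grade')
        t = OCIO.FileTransform(lut_path,
                               interpolation=OCIO.Constants.INTERP_LINEAR)
        cs.setTransform(t, OCIO.Constants.COLORSPACE_DIR_FROM_REFERENCE)
        config.addColorSpace(cs)
        with open(out_path, 'w') as f:
            f.write(config.serialize())

    write_show_config('showA', 'showA_grade.cube', 'showA/config.ocio')

Each show then points its OCIO env var at its own self-contained config.ocio.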
multiple config file
lucien...@...
Hi guys,
How can I set up my pipeline so that I have multiple config files building my color management profile? The idea is to have a config at the facility level, and then have show-specific configs that add to the facility-level config. I hope that makes sense. The bottleneck for me is that the OCIO env var is not a ':'-separated list of paths; it points to a single file only. Does anyone have a workaround? Cheers, Lucien
Re: Review: added ociobuildicc app
Gerardo <gerard...@...>
Hello, Just to say thanks, and to let you know that the latest versions of LittleCMS (currently 2.6/2.7) are able to perform color conversions in unbounded full floating-point mode and to generate ICC v4.3 profiles. v4.3 profiles (including cLUT profiles) are able to work in the unbounded FP domain, so results won't be clipped to 16-bpc. Greetings, Gerardo
Re: Python apply .cube ?
tre...@...
To answer my own question: it seems I had the wrong idea about how the FileTransform worked. From Configs/nuke-default/make.py: create the colorspace, then apply using oiio.ImageBufAlgo.colorconvert:

    cs = OCIO.ColorSpace(name='sRGB')
    cs.setDescription("Standard RGB Display Space")
    cs.setBitDepth(OCIO.Constants.BIT_DEPTH_F32)
    cs.setAllocation(OCIO.Constants.ALLOCATION_UNIFORM)
    cs.setAllocationVars([RANGE[0], RANGE[1]])
    t = OCIO.FileTransform('srgb.spi1d', interpolation=OCIO.Constants.INTERP_LINEAR)
    cs.setTransform(t, OCIO.Constants.COLORSPACE_DIR_TO_REFERENCE)
    config.addColorSpace(cs)
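For the original .cube question, one route that skips the config file entirely is to apply the processor straight to the pixel data. A rough sketch under the 1.x APIs; the file names are placeholders, and the exact get_pixels/set_pixels signatures vary between OpenImageIO versions, so treat this as an outline:

    import OpenImageIO as oiio
    import PyOpenColorIO as OCIO

    # Load and resize, as in the original post.
    orig = oiio.ImageBuf('input.exr')
    resized = oiio.ImageBuf(oiio.ImageSpec(1920, 972, 3, oiio.FLOAT))
    oiio.ImageBufAlgo.resize(resized, orig)

    # Build a processor directly from the FileTransform.
    config = OCIO.Config()
    transform = OCIO.FileTransform('grade.cube',
                                   interpolation=OCIO.Constants.INTERP_LINEAR,
                                   direction=OCIO.Constants.TRANSFORM_DIR_FORWARD)
    processor = config.getProcessor(transform)

    # applyRGB takes and returns a flat sequence of float pixel values.
    pixels = resized.get_pixels(oiio.FLOAT)
    corrected = processor.applyRGB(pixels)
    resized.set_pixels(resized.roi, corrected)
    resized.write('output.exr')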
Python apply .cube ?
tre...@...
Please help! I am trying to load in a file, resize it, apply a .cube LUT, then write it out. I have the following so far, but I'm falling over between the oiio > ocio parts (one using an oiio.ImageBuf and the other using a pixel array). Have I missed something? Is there an easier way to apply a .cube LUT to an image? Thanks for any pointers! Trevor

    # load in file & resize
    origBuffer = oiio.ImageBuf(aFile)
    resizeBuffer = oiio.ImageBuf(oiio.ImageSpec(1920, 972, 3, oiio.FLOAT))
    oiio.ImageBufAlgo.resize(resizeBuffer, origBuffer)

    config = OCIO.Config()
    transform = OCIO.FileTransform(src=self.LUT,
                                   interpolation=self.interpolation,
                                   direction=OCIO.Constants.TRANSFORM_DIR_FORWARD)
    processor = config.getProcessor(transform)

    pixels = self.inputNode.get_pixels()
    img = processor.applyRGBA(pixels)
    img.write(outputPath)
Pull request for custom transforms
Lukas Stockner <lukas.s...@...>
Hello,
I decided to implement the custom transform idea I posted earlier and have sent a pull request: https://github.com/imageworks/OpenColorIO/pull/390 The implementation is quite short and simple; it just exposes a class that new Transforms can inherit from. By implementing all the functions, a subclass can then be used just like the regular transforms. This is particularly useful for customizing an OCIO-based color pipeline, where site-specific additions might be needed. In my case, this functionality is required for implementing a tonemapping operator in Blender, which uses OCIO for its color management. Currently, the only two options are to either maintain a modified version of OCIO, or to implement it alongside OCIO before or after the Processor is called. With this change, it would be possible to implement the tonemapper as a Transform and use it directly in the DisplayTransform. An example showing what a custom transform looks like is in these two files (header and source): http://www.pasteall.org/56658 http://www.pasteall.org/56657 So, I hope that this feature will be accepted, and I will of course fix any issues that might be present. Best regards, Lukas Stockner
Re: Precision of float values in config generated from Python
Dithermaster <dither...@...>
You need to print (and later parse) 9 significant digits to uniquely capture an arbitrary float. ///d@ On Fri, Jan 30, 2015 at 6:32 PM, Haarm-Pieter Duiker <li...@...> wrote:
Re: Precision of float values in config generated from Python
Haarm-Pieter Duiker <li...@...>
Thanks for digging into this, and my apologies for missing the obvious limit of float precision. I was running into an issue where the limited precision combined with high-intensity values to produce different results than the reference implementation. I've found a workaround, though; more precision preserved throughout the process would still be appreciated. HP On Fri, Jan 30, 2015 at 3:53 AM, Kevin Wheatley <kevin.j....@...> wrote:
Re: Precision of float values in config generated from Python
Kevin Wheatley <kevin.j....@...>
There is also a bug in yaml-cpp which prevents this; in src/emitterstate.cpp it needs fixing to read:

    bool EmitterState::SetFloatPrecision(int value, FMT_SCOPE scope)
    {
        if (value < 0 || value > (2 + std::numeric_limits<float>::digits * 3010 / 10000))
            return false;
        _Set(m_floatPrecision, value, scope);
        return true;
    }

    bool EmitterState::SetDoublePrecision(int value, FMT_SCOPE scope)
    {
        if (value < 0 || value > (2 + std::numeric_limits<double>::digits * 3010 / 10000))
            return false;
        _Set(m_doublePrecision, value, scope);
        return true;
    }

Then in OCIO's code you need to add something like:

    out.SetFloatPrecision(2 + std::numeric_limits<float>::digits * 3010 / 10000);
    out.SetDoublePrecision(2 + std::numeric_limits<double>::digits * 3010 / 10000);

to the save function in OCIOYaml.cpp (near line 1664). At least at that point ociocheck will read and write a config file with more precision :-) Kevin
Re: Precision of float values in config generated from Python
Kevin Wheatley <kevin.j....@...>
There are cases in the code where the precision of the formatting is
used to generate cache IDs, so changing the precision here would change that behaviour. There are also cases with LUTs where certain precision limits are assumed. My conclusion would thus be that getting to full float precision should be doable, but only with careful adjustments; in particular, there might not be enough test coverage to ensure the behaviour is maintained exactly. Going to double precision, to get beyond a maximum of 9 digits, is a different matter: does it make a visible difference to the images? Kevin
Re: Precision of float values in config generated from Python
Kevin Wheatley <kevin.j....@...>
On Fri, Jan 30, 2015 at 8:50 AM, Kevin Wheatley <kevin.j....@...> wrote:
> Sounds like the limits of float precision to me; that would mean being
> 'double' clean through the code.

Looking through the code, the matrix is stored as single-precision float, and the Python bindings assume as much; the image processing also works in float. Storing the matrix as doubles would be possible, at the expense of some performance loss in processing (speculation on my part :-). This assumes your Python uses doubles internally (likely the case).

A separate but related note: there is some scattering of precision assumptions through the code too, though these are mostly limited to the output of files... FLOAT_DECIMALS is set to 7 and there is a DOUBLE_DECIMALS used to output things. However, it should really be using something like:

    2 + std::numeric_limits<Target>::digits * 3010 / 10000

(or max_digits10 if we allow it), which would make the values currently in the code incorrect... see http://www2.open-std.org/JTC1/SC22/WG21/docs/papers/2005/n1822.pdf (or http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF).

However, be aware there are 'limitations' on some platforms (MSVC 7.1, for instance) where you cannot rely on round-tripping of values in their standard library (Dinkumware). I might see if I can knock up a patch for the output precision problems. Kevin
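A quick check of that formula; it reproduces max_digits10, the decimal digit count needed to round-trip a binary float. Note the integer arithmetic, matching the C++ expression:

    # float:  2 + 24 * 3010 // 10000 = 2 + 7  = 9 digits
    # double: 2 + 53 * 3010 // 10000 = 2 + 15 = 17 digits
    for name, mantissa_bits in [('float', 24), ('double', 53)]:
        print(name, 2 + mantissa_bits * 3010 // 10000)

By this count FLOAT_DECIMALS should be 9 rather than 7, and the double equivalent 17.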
Re: Precision of float values in config generated from Python
Kevin Wheatley <kevin.j....@...>
Sounds like the limits of float precision to me; that would mean being 'double' clean through the code. Kevin
Precision of float values in config generated from Python
Haarm-Pieter Duiker <li...@...>
Hello, Is there a way to configure the precision used for floating-point values stored in a config? I have a Python script that generates a config based on a bunch of inputs, including a number of matrices that are declared in Python like so:

    matrix = [ 0.6954522414, 0.1406786965, 0.1638690622,
               0.0447945634, 0.8596711185, 0.0955343182,
              -0.0055258826, 0.0040252103, 1.0015006723]

Matrices are added to a MatrixTransform through calls something like this:

    ocio_transform = ocio.MatrixTransform()
    ocio_transform.setMatrix(matrix)

We end up with a statement in the config that looks something like the following:

    to_reference: !<MatrixTransform> {matrix: [0.695452, 0.140679, 0.163869, 0, 0.0447946, 0.859671, 0.0955343, 0, -0.00552588, 0.00402521, 1.0015, 0, 0, 0, 0, 1]}

(Ignore the padding from 3x3 to 4x4.) The values in the config print with 6, 7 or 8 decimals; the input values had 10. Is there any way to preserve the precision of the original values? Would it be better to use spimtx files to store these values? Thanks in advance for your help, HP
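Part of what is happening here is plain single-precision storage: once a value passes through a 32-bit float, only about 7 significant digits survive, and 9 printed digits are enough to recover the stored value exactly. A small standard-library-only illustration, using a value from the matrix above:

    import struct

    x = 0.6954522414  # declared with 10 digits; Python holds it as a double
    f32 = struct.unpack('f', struct.pack('f', x))[0]  # squeeze through float32
    print('%.10g' % f32)  # drifts from the original past ~7 digits
    print('%.9g' % f32)   # 9 digits round-trip the stored float exactly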
Re: ACES 1.0 released
Haarm-Pieter Duiker <li...@...>
Hi Steve,
These all seem like fair points. I'll take a look at this tomorrow and update the config. The questions around the linear range covered by ACEScc may need a bit more investigation. HP On Wednesday, January 28, 2015, Steve Agland <sag...@...> wrote:
Re: ACES 1.0 released
Steve Agland <sag...@...>
Hi HP, I'm looking into rolling an internal project's OCIO configuration forward to use as much of your ACES 1.0 config as possible. I've run into a couple of issues and wanted to run them by you.

Firstly, the allocation for most of the linear spaces (including ACEScg, and our current working space "Linear - P3-D60") is currently defined as:

    allocation: uniform
    allocationvars: [0, 1]

This causes clipping of values > 1 when converting from the linear space into an Output Transform. It seems to only be a problem in certain OCIO implementations (I believe those using the GPU code path); in our case it was first noticed in RV. Changing these allocation settings to something like this seems to resolve the problem:

    allocation: lg2
    allocationvars: [-8.5, 5, 0.003]

I'm not sure if those values are optimal, but since it's only for interactive display it's probably fine. Using the lg2 allocation method seems to be the recommended approach for linear spaces.

The second issue I've run into is with experimenting with the ACEScc space for use in grading/DI. We're using Nucoda. The current test workflow is that I'm delivering ACEScc-encoded proxies of the final comps to DI, and supplying them with a couple of baked LUTs, for display (ACEScc -> P3-DCI) and for a hypothetical future archive master (ACEScc -> ACES2065-1). This seems to work well for interactive grading and display with the few production images we've tested, but I'm concerned about preserving as much information as possible in the final ACES master.

The ACEScc_to_ACES.spi1d LUT bundled with the config has a range of 0.0 - 1.0. I think this could be wider. (Also, should this file perhaps be called ACEScc_to_ACESccLin.spi1d, since it doesn't include the matrix transformation?) The S-2014-003 document (Annex A, p. 10) suggests that negative ACEScc values are to be expected, either from very dark linear values (< 7.25 stops below 18% middle grey) or from colors outside the AP1 gamut. At the other end, a value of 1.0 in ACEScc maps to ~223 in ACESccLin, but the spec suggests that the ACEScc -> ACESccLin conversion formula doesn't clip until you reach a linear value of 65504 (p. 9). This seems to correspond to a value of about 1.468 in ACEScc. At the low end, the curve seems to hit linear 0 at about -0.358.

It seems like these values (-0.358, 1.468) might be a reasonable range to use in the ACEScc_to_ACESccLin LUT, perhaps with more samples, in order to preserve more information when transforming in and out of ACEScc. To test this I generated my own alternative ACEScc_to_ACES.spi1d with that range and tried a back-and-forth conversion: ACES -> ACEScc -> ACES. It seems to better preserve very dark (esp. black) and very bright values in the admittedly contrived syntheticChart.01.exr ACES test image. In practice maybe this isn't going to matter much, but it seems like an opportunity for a little more accuracy at no extra cost.

That's all a long way of saying "can we change a few of these numbers?" :) Let me know what you think. Cheers, Steve

On 14 January 2015 at 05:53, Haarm-Pieter Duiker <li...@...> wrote:
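Those endpoints can be checked directly against the log segment of the S-2014-003 encoding; a sketch that ignores the toe the spec adds below linear 2^-15:

    import math

    def acescc_to_lin(cc):
        # Log segment of the ACEScc decode: lin = 2^(cc * 17.52 - 9.72)
        return 2.0 ** (cc * 17.52 - 9.72)

    def lin_to_acescc(lin):
        return (math.log2(lin) + 9.72) / 17.52

    print(acescc_to_lin(1.0))         # ~222.86, the "~223" above
    print(lin_to_acescc(65504.0))     # ~1.468, the half-float clip point
    print(lin_to_acescc(2.0 ** -16))  # ~ -0.358, where the encoding bottoms out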
Implementing a CustomTransform
Lukas Stockner <lukas.s...@...>
Hi,
I recently found that it would be quite useful to have a CustomTransform that allows adding one's own Transforms without changing the actual OCIO code. My idea for this would be to have a CustomTransform class with a pure virtual function which is called by BuildCustomOps and puts the required Ops into the OpRcPtrVec. To add your own transform, it would then be enough to inherit MyTransform from CustomTransform and implement this virtual function (in addition to the regular Transform member functions), which then pushes self-defined Ops into the OpRcPtrVec. Are there problems with this approach? If not, should I implement it and send a pull request? Best regards, Lukas Stockner
Re: ACES 1.0 released
Francois Lord <franco...@...>
Congrats to everyone who worked on this. It is a huge achievement and you have made a great improvement to the industry. On Tue, Jan 13, 2015, 13:53 Haarm-Pieter Duiker <li...@...> wrote:
ACES 1.0 released
Haarm-Pieter Duiker <li...@...>
Hello, The ACES 1.0 Developer Release is officially available! Full product information, and the full list of components included in the 1.0 release, can be found on the revamped website, http://www.oscars.org/aces.
Thanks for your feedback on the needs of the ACES OCIO config in particular, and of the system in general. Please take a look at the source code, documentation, and configs, and let us know what you think. Send feedback on this list, on the ACES Google Group, or directly by email to ac...@.... Thanks again for all the feedback; please keep it coming. HP