Re: rv integration
Jeremy Selan <jeremy...@...>
Super excited about getting RV support. Can't wait!
Allow me to take a stab at some of your questions. This will probably be a lengthy reply...

On Fri, Jun 22, 2012 at 11:43 AM, Jim Hourihan <jimho...@...> wrote:

* Beyond file data linearization and display correction what would the priority be of using OCIO for e.g. multiple looks, per-shot CDL, changing the context, etc.

Per-shot CDLs / OCIO::Contexts are two sides of the same coin. Per-shot corrections (whether they're 3d luts, CDLs, or something else) are something we use quite liberally internally at Imageworks, and would be wonderful to have. (I know many of our clients' workflows live or die on per-shot 3d lut support...)

From the perspective of an image playback tool, this all comes down to providing a custom OCIO::Context to the getProcessor(...) query. API users can think of the Context as representing all environment variables set at launch time, which can then be overridden. In applications which are re-launched between shots (such as compositing or lighting apps), the Context -- defaulting to the current launch envvars -- will "just work". I.e., per-shot luts will be supported without the user worrying about the Context. But in an image playback utility, where per-shot luts change from moment to moment, using the OCIO::Context is likely a necessity.

I would expect that a facility-neutral implementation of per-shot luts in RV would provide some sort of callback (similar to source_setup), which would be given the new media/clip object (or a list of its properties), would do facility-specific parsing, and would in return provide a list of 'context' overrides. For example, when the shot changes in the image playback tool, one studio may choose to parse the image metadata, look for a 'SHOT' field, and then set (SHOT, SHOT_VALUE) in the context. Another studio may choose to parse the filenames instead and set "SEQUENCE". But the uniform part of the approach is that the user script gets the clip/media properties as input, and as output generates a series of key/value string pairs that are passed to the OCIO::Config::getProcessor(...) call. Does this make sense?

Note that OCIO will abstract the actual lookup of the per-shot luts (resolving the context envvars into filenames). The client application should not worry about which files were actually used; it merely has to set up the Context appropriately. You can introspect and see which luts were actually used via the getMetadata() call, but note that the return value is an unordered set of all the LUTs used, and is not intended to circumvent OCIO's internal processing engine.
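For the playback case, here is a minimal sketch of what that looks like through the C++ API. (The "SHOT" key and the shot value are placeholders for whatever the facility callback produces, and the display/view choices are just the config defaults.)

#include <OpenColorIO/OpenColorIO.h>
namespace OCIO = OCIO_NAMESPACE;

// Called whenever the playback tool switches clips. The key/value
// pair ("SHOT", shotValue) is purely illustrative -- it is whatever
// the facility-specific callback produced.
OCIO::ConstProcessorRcPtr GetShotProcessor(const char * shotValue)
{
    OCIO::ConstConfigRcPtr config = OCIO::GetCurrentConfig();

    // Start from the launch-time environment, then override per-shot.
    OCIO::ContextRcPtr context =
        config->getCurrentContext()->createEditableCopy();
    context->setStringVar("SHOT", shotValue);

    // A canonical display transform: input colorspace -> display/view.
    OCIO::DisplayTransformRcPtr transform = OCIO::DisplayTransform::Create();
    transform->setInputColorSpaceName(OCIO::ROLE_SCENE_LINEAR);
    const char * display = config->getDefaultDisplay();
    transform->setDisplay(display);
    transform->setView(config->getDefaultView(display));

    return config->getProcessor(context, transform,
                                OCIO::TRANSFORM_DIR_FORWARD);
}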
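And to see which files the resulting processor actually pulled in (again just a sketch; getMetadata() is the real call):

#include <iostream>

// Print every file the processor referenced. As noted above, this is
// an unordered set -- useful for diagnostics, not for reimplementing
// OCIO's processing.
void PrintProcessorFiles(const OCIO::ConstProcessorRcPtr & processor)
{
    OCIO::ConstProcessorMetadataRcPtr metadata = processor->getMetadata();
    for (int i = 0; i < metadata->getNumFiles(); ++i)
    {
        std::cout << metadata->getFile(i) << std::endl;
    }
}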
* On the GPU path how is a prelut (aka shaper lut) generated and used? In the ociodisplay example's main it looks like there was an intention to have one but the example doesn't use it.

OCIO doesn't explicitly use shaper luts, but instead relies on a similar 'allocation' mechanism to make sure all 3d lut lattice samplings are well behaved. This provides a bit more info: http://opencolorio.org/configurations/allocation_vars.html

The reason shaper luts are insufficient within OCIO is that a shaper lut is tightly coupled to the image's input color space (such as a single lin -> 3d lut transform). But in OCIO the actual color processing is an NxN problem, with a custom transform for each combination of input and output color space. It gets even more complicated when one considers that a canonical DisplayTransform includes slots for per-shot luts, scene-linear fstop offset controls, post display-transform ccs, etc.

So rather than overwhelm the user with requirements for a million shaper luts, the allocation 'metadata' essentially allows a prelut to be computed analytically on the fly, as needed.

Note: A facility recently found a minor issue in the internal allocation handling, which is exacerbated under specific (up until now) rare conditions. We intend to fix this in an upcoming dot release, and it will not impact the public API.
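For reference, that allocation metadata lives on each colorspace definition. A sketch through the C++ API (the colorspace name and the lg2 ranges are illustrative, matching the scene-linear example in the allocation_vars docs):

// Declare how a scene-linear colorspace maps into the [0,1] domain a
// GPU 3d lut can sample: lg2 allocation covering 2^-15 to 2^6 stops,
// with a small offset so zero and negatives remain representable.
OCIO::ColorSpaceRcPtr cs = OCIO::ColorSpace::Create();
cs->setName("lnf"); // illustrative name
cs->setAllocation(OCIO::ALLOCATION_LG2);
const float vars[3] = { -15.0f, 6.0f, 0.00390625f };
cs->setAllocationVars(3, vars);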
The GPU path is fully functional, in that any configuration you create can be used on the GPU, independent of the number of luts applied, etc. But the GPU codepath is currently an approximation of the CPU path, and as such is not suitable for baking into final deliverables. The reason it's an approximation is that the GPU assumes the existence of a single 3D lut sampling point, and many color processing chains require multiple 3D luts. So in these cases the data will be baked on the fly into a similar, but not exactly identical, representation.

However, in our experience the quality of the existing GPU pathway, when properly configured, is more than sufficient even for live client reviews, and the performance gain from the GPU 3D lut code makes it definitely worth it in viewer/playback applications. Examples of applications that use the GPU codepath in their viewers include Katana, Mari, and Silhouette. An example of an app that uses the CPU codepath in its viewer is Nuke.
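The shape of that GPU pathway, as in the ociodisplay example (a sketch; the edge length and shader language below are just the common choices there):

#include <vector>

// Bake the processor into a single 3D lut plus a GLSL function.
void BuildGpuState(const OCIO::ConstProcessorRcPtr & processor)
{
    const int edgeLen = 32; // lattice resolution; illustrative

    OCIO::GpuShaderDesc shaderDesc;
    shaderDesc.setLanguage(OCIO::GPU_LANGUAGE_GLSL_1_0);
    shaderDesc.setFunctionName("OCIODisplay");
    shaderDesc.setLut3DEdgeLen(edgeLen);

    // The lattice the shader samples (upload as a 3D texture)...
    std::vector<float> lut3d(3 * edgeLen * edgeLen * edgeLen);
    processor->getGpuLut3D(&lut3d[0], shaderDesc);

    // ...and the shader text, which handles everything else analytically.
    const char * shaderText = processor->getGpuShaderText(shaderDesc);
    (void) shaderText; // compile into the viewer's fragment program
}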
-- Jeremy