Re: rv integration

Jim Hourihan <jimho...@...>

On Jun 29, 2012, at 11:25 AM, Jeremy Selan wrote:

Super excited about getting RV support. Can't wait!

Allow me to take a stab at some of your questions. This will probably
be a lengthy reply...
Excellent! I have OCIO in our build system and integrated into our (new) renderer, so things are going well. I have some questions (below).

On Fri, Jun 22, 2012 at 11:43 AM, Jim Hourihan <jimho...@...> wrote:

* Beyond file data linearization and display correction, what would the priority be of using OCIO for e.g. multiple looks, per-shot CDL, changing the context, etc.?
From the perspective of an image playback tool, this all comes down to
providing a custom OCIO::Context to the getProcessor(...) query. API
users can think of the Context as representing all environment
variables set at launch time, which can then be overridden. In
applications which are re-launched between shots (such as compositing
apps or lighting apps), the Context -- defaulting to the current
launch envvars -- will "just work". I.e., per-shot LUTs will be
supported without the user worrying about the Context.

But in an image playback utility, where per-shot luts are changing
from moment to moment, using the OCIO::Context is likely a necessity.

I would expect that a facility-neutral implementation of per-shot LUTs
in RV would provide some sort of callback (similar to source_setup),
which would be given the new media/clip object (or a list of properties
about it), do facility-specific parsing, and in return provide a list
of 'context' overrides.

For example, when the shot changes in the image playback tool, one
studio may choose to parse the image metadata, look for a 'SHOT'
field, and then set (SHOT, SHOT_VALUE) in the context. Another studio
may choose to parse the filenames instead and set "SEQUENCE". But the
uniformity of the approach is that the user script gets access to the
clip/media properties as input, and as output generates a series of
key/value string pairs that are passed to the
OCIO::Config::getProcessor(...) call.

Does this make sense?
Yes it makes sense. The one thing I'm missing here is how you create the proper context per-shot. It seems pretty clear that the facility will modify the source_setup in rv to figure out per-shot what to do (choosing contexts). But how do they communicate that to rv? The usual mechanism in rv is to set a property in rv's graph which is read by a node during evaluation. In this case that implies supplying the name (or list of names) of a context, but it looks like the contexts are not named so we can't fetch one internally like that (or are they filenames of configs?). Is the expectation that the context pointer is passed directly? I'm probably missing something. I read the docs, but am unclear as to whether a context is another config file, created programmatically, or something else.

Note that OCIO will abstract the actual lookup of the per-shot LUTs
(resolving the context envvars into filenames). The client
application should not worry about which files were actually used; it
merely has to set up the Context appropriately. You can introspect
and see which LUTs were actually used via the getMetadata() call.
But note that the return value is an unordered set of all the LUTs
used, and is not intended to circumvent OCIO's internal processing.
I don't think we'll need to do that. So no problem there. The generated shader + LUT is called directly. Once it's working I may need to find out whether or not the 3D LUT was actually used, to prevent running out of samplers, but that can wait.

* On the GPU path, how is a prelut (aka shaper lut) generated and used? In the ociodisplay example's main() it looks like there was an intention to have one, but the example doesn't use it.
OCIO doesn't explicitly use shaper LUTs, but instead relies on a
similar 'allocation' mechanism to make sure all 3D LUT lattice
samplings are well behaved.

The reason shaper LUTs are insufficient within OCIO is that a
shaper LUT is tightly coupled to the image's input color space (such as
a single lin -> 3D LUT transform). But in OCIO the actual color
processing is an NxN problem, with a custom transform for each input
color space and output color space combination. It gets even
more complicated when one considers that a canonical DisplayTransform
includes slots for per-shot LUTs, scene-linear fstop offset controls,
post display-transform CCs, etc. So rather than overwhelm the user
with requirements for a million shaper LUTs, the allocation
'metadata' essentially allows a prelut to be computed analytically on
the fly, as needed.
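To make the allocation idea concrete, here is a minimal sketch of the analytic prelut implied by an lg2 allocation (the function name and the two-variable form are illustrative; OCIO configs also support uniform allocation and an optional third offset variable):

```python
import math

def lg2_prelut(x, alloc_vars=(-15.0, 6.0)):
    """Sketch of the analytic 'prelut' implied by lg2 allocation
    metadata on a colorspace, e.g.:

        allocation: lg2
        allocationvars: [-15, 6]

    meaning scene-linear values are expected to span 2^-15 .. 2^6.
    A log2 remap of that range onto [0,1] keeps the 3D LUT lattice
    sampling well behaved without a hand-authored shaper LUT.
    """
    lo, hi = alloc_vars
    if x <= 0.0:
        return 0.0
    t = (math.log2(x) - lo) / (hi - lo)
    return min(max(t, 0.0), 1.0)   # clamp to the lattice range
```

The key point is that this remap is derived from per-colorspace metadata, so OCIO can synthesize it for any input/output combination instead of requiring one baked shaper LUT per transform.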
I was thrown off by the declaration of another sampler (tex1) in ociodisplay's GLSL main() shader text. I realized later it was not being used.

* Is the GPU path fully functional compared to the CPU path?
The GPU path is fully functional, in that any configuration you create
can be used on the GPU, independent of the number of LUTs applied.

But... the GPU codepath is currently an approximation of the CPU path,
and as such is not suitable for baking into final deliverables. The
reason it's an approximation is that the GPU assumes the existence of
a single 3D LUT sampling point, and many color processing chains
require multiple 3D LUTs. So in these cases the data will be baked on
the fly into a similar, but not exactly identical, representation.

However, in our experience, the quality of the existing GPU pathway,
when properly configured, is more than sufficient, even for live client
reviews, and the increase in performance from the GPU 3D LUT code
makes this definitely worth it in viewer/playback applications.
Examples of applications that use the GPU codepath in their viewers
include Katana, Mari, and Silhouette. An example of an app that uses
the CPU codepath in its viewer is Nuke.
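The on-the-fly baking described above amounts to evaluating the full CPU transform chain at every point of a single lattice. A sketch (the function name and lattice ordering are illustrative, not OCIO's API; in OCIO v1 this role is played by the processor's GPU LUT query):

```python
def bake_lut3d(process, edge_len=32):
    """Bake an arbitrary color transform chain into one 3D LUT by
    evaluating the full chain at every lattice point.

    `process` stands in for the concatenated CPU-side transform
    (a function from an (r, g, b) tuple to an (r, g, b) tuple).
    Values that land between lattice points get interpolated at
    playback time, which is why the single-LUT GPU result is close
    to, but not bit-identical with, the multi-LUT CPU path.
    """
    lattice = []
    n = edge_len
    for b in range(n):          # blue varies slowest
        for g in range(n):
            for r in range(n):  # red varies fastest
                rgb = (r / (n - 1), g / (n - 1), b / (n - 1))
                lattice.append(process(rgb))
    return lattice
```

The edge length (32 here) trades texture memory against interpolation error, which is the practical knob behind the "properly configured" caveat above.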
There are four places in rv's graph that are roughly analogous to OCIO: a CPU-only LUT pipeline before the main memory cache (per-shot), file->working space (per-shot), look (per-shot), and working space->display (only one slot for whole session -- or perhaps per output device in the future). I'm thinking we allow OCIO to take over any or all of those positions if desired. I think that would cover all bases. For better or worse, we'll have to use the GPU path for all but the pre-cache position. So if the user goes directly to the final display pixels before caching that's fine, but all subsequent operations will be in the display color space.

rvio would be the same. So when using rvio + OCIO for baking, people may want to do the entire OCIO pipeline in one go in the pre-cache slot so it's all done on the CPU.

The current integrated code allows the user to specify the in and out color spaces for each of the four slots above. I assume the context should also be specified for each; is that right?
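A sketch of what that per-slot configuration might look like, with a context override attached to each slot independently (slot names, colorspace names, and the SHOT value are all hypothetical placeholders):

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class OCIOSlot:
    """Hypothetical settings for one of the four RV positions
    (pre-cache, file -> working, look, working -> display).  Each
    slot carries its own in/out spaces plus context overrides, since
    e.g. a per-shot look and a per-shot file transform may need
    different context values."""
    in_space: str
    out_space: str
    context_overrides: Dict[str, str] = field(default_factory=dict)
    gpu: bool = True   # pre-cache is the only CPU-side slot

pipeline = {
    "pre_cache": OCIOSlot("sRGB", "linear", gpu=False),
    "file":      OCIOSlot("lnf", "linear"),
    "look":      OCIOSlot("linear", "linear",
                          context_overrides={"SHOT": "ab-100"}),
    "display":   OCIOSlot("linear", "sRGB"),
}
```

Giving every slot its own override dictionary sidesteps the question of naming contexts: the facility's source_setup just fills in the pairs per shot, and each slot resolves its own processor from them.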

One of our guys just rewrote the source_setup in Python, so we'll be able to import the OCIO module directly -- I think it's safe to say that takes care of source_setup integration, at least for querying OCIO from the user level.

(BTW, when we say "color space" we mean *all* of the degrees of freedom associated with a color transform, not just the linear ones -- so that includes the YUV weights, the non-linear transfer function + parameters, and the chromaticities or any other linear transform. I realize not everyone agrees with that use of the noun phrase "colou?r space", so bear with me. I'll try to qualify it when appropriate. It looks like the OCIO "colorspace" is used in a similar way.)

Reiterating my questions for clarity:

1) What *is* a context (an ocio config file? programmatically defined only?)
2) How should the user (or rv developer) specify which context to use?
3) Am I missing a use case if OCIO is swapped in for any/all of the existing pre-cache, file, look, and display LUT pipelines?

Thanks for the help.


-- Jeremy
