Re: Color space transform bidirectionality

Jeremy Selan <jeremy...@...>

Alan,

Sorry for the delay in responding. (Had to attend to non-OCIO
responsibilities this week).

The approach you describe is exactly in line with what I've been
thinking. Excellent.

A few additions:

* You refer to the different LUT types as "storage, grading, and
display". In the current library we don't draw hard lines between the
different transform uses. All three of these would be color spaces, and
they are treated equally in the code. Adapting your terminology to
OCIO, "storage" color spaces would always provide transforms both to
and from scene linear, display color spaces typically would only define
a transform from scene linear, and grading spaces would be defined
dynamically at runtime. But there's no code-level distinction between
the types. Is there any benefit to making one? I've been thinking about
adding metadata (tags) that would let the user mark color spaces as
belonging to different categories (such as 'display', 'IO', etc.). This
would probably help in the UI (you could filter by tag), but I can't
think of any other uses, so it's a bit low on the priority list.
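
A toy sketch of the kind of filtering I mean (purely hypothetical;
nothing like this exists in the library yet, and the names are made
up):

    # Purely hypothetical: if color spaces carried tags, UI filtering
    # could be as simple as this. None of these names exist in OCIO.
    SPACES = {
        'lnf':  {'IO'},
        'slog': {'IO'},
        'srgb': {'display'},
    }

    def filter_by_tag(tag):
        return [name for name, tags in SPACES.items() if tag in tags]

    print(filter_by_tag('display'))  # ['srgb']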

* You refer to these things all as LUTs. I'd prefer to just call
them "transforms" to leave open the possibility of other correction
math. (For example, the grading step could just as easily be a 1D LUT,
a 3D LUT, or an ASC-CDL.)
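
For instance, in the Python bindings all of these are interchangeable
transform objects (the filename here is made up):

    import PyOpenColorIO as OCIO

    cdl = OCIO.CDLTransform()      # ASC-CDL
    lut = OCIO.FileTransform()     # file-based 1D or 3D LUT
    lut.setSrc('grade.spi1d')      # hypothetical filename
    grp = OCIO.GroupTransform()    # a chain of transforms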

* Your processing pipeline is already expressed in the DisplayTransform.
You specify the storage LUT (inputColorSpace), the display LUT
(outputColorSpace), and also optional color correction(s), which can
occur in any color space, including scene linear or a log-like DI
space. The Nuke OCIODisplay node provides an example of using this API.
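
Roughly, via the Python bindings (a sketch only; the colorspace and
view names are placeholders, and method names may differ slightly in
your checkout):

    import PyOpenColorIO as OCIO

    config = OCIO.Config.CreateFromEnv()

    # Storage space in, display space out, optional grading CC between.
    t = OCIO.DisplayTransform()
    t.setInputColorSpaceName('slog')   # placeholder "storage" space
    t.setDisplay('sRGB')               # output device
    t.setView('Film')                  # view defined in the config

    cc = OCIO.CDLTransform()           # optional color correction
    cc.setSlope([1.1, 1.0, 0.9])
    t.setLinearCC(cc)                  # applied in scene linear

    processor = config.getProcessor(t)
    print(processor.applyRGB([0.18, 0.18, 0.18]))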

-- Jeremy

On Mon, Aug 23, 2010 at 2:59 PM, Alan Jones <sky...@...> wrote:
Hi Jeremy,

On Mon, Aug 23, 2010 at 4:30 PM, Jeremy Selan <jeremy...@...> wrote:
OCIO's current approach is that, to make colorspace definitions
"convenient", we assume inverse color transforms are allowed when the
round trip can automatically be made 100% accurate.
For practical purposes those are all probably invertible, but even a 1D
LUT can lose information (such as everything below the black point
getting pushed to 0, though since seeing that information isn't
required, it doesn't really matter). I'm not a fan of putting too much
effort into protecting users from their own stupidity ;) An
uninvertible matrix could also happen, but again, probably not in
practice.
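
A toy illustration of that black point case:

    # A 1D "LUT" that clamps below the black point isn't invertible:
    # every value below 0.05 maps to 0.0, so no inverse can recover it.
    def forward(x, black=0.05):
        return max(0.0, (x - black) / (1.0 - black))

    print(forward(0.02), forward(0.05))  # both print 0.0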

I'd love to figure out
a way to make this more explicit in the API. Suggestions?
That depends; you make a good usage case where that somewhat goes out
the window. The only real solution I can think of (which unfortunately
cuts down your options for processing shortcuts and also limits which
LUTs you can bring in) is something like the following.

A color space is defined, and it is always defined relative to
scene-referred linear. This way, each space knows how to convert to or
from scene-referred linear and nothing else. Then, when it registers,
it can declare which directions it can handle.
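
Something like this toy sketch (names invented; obviously not OCIO's
actual API):

    # Hypothetical registry: each space converts only to/from
    # scene-referred linear and declares which directions it supports.
    class ColorSpace:
        def __init__(self, name, to_linear=None, from_linear=None):
            self.name = name
            self.to_linear = to_linear      # space -> scene linear, or None
            self.from_linear = from_linear  # scene linear -> space, or None

    REGISTRY = {}

    def register(space):
        REGISTRY[space.name] = space

    def convert(value, src, dst):
        # Every conversion routes through scene-referred linear.
        s, d = REGISTRY[src], REGISTRY[dst]
        if s.to_linear is None or d.from_linear is None:
            raise ValueError('direction not supported for this pair')
        return d.from_linear(s.to_linear(value))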

You kind of also need to define types of LUTs at this point, though.
I'm thinking of three: storage LUTs (which pull data to and from
representations used for storage, such as S-Log etc.), grading LUTs
(which are designed to apply a particular look, whether that's a color
grade or a film transfer representation), and display LUTs (which
modify values to ensure that they display correctly on a given device,
i.e. sRGB, Rec709, etc.).
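
Continuing the sketch above, rough stand-ins for the storage and
display types (the S-Log constants are illustrative only, not Sony's
exact published curve):

    import math

    slog = ColorSpace(
        'slog',  # storage: converts both to and from scene linear
        to_linear=lambda y: 10.0 ** ((y - 0.616596) / 0.432699) - 0.037584,
        from_linear=lambda x: 0.432699 * math.log10(x + 0.037584) + 0.616596)

    srgb = ColorSpace(
        'srgb',  # display: from scene linear only
        from_linear=lambda x: 1.055 * x ** (1 / 2.4) - 0.055
                              if x > 0.0031308 else 12.92 * x)

    register(slog)
    register(srgb)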

With this, storage LUTs would usually go from their storage form across
to scene-referred linear (though you'd perhaps use the reverse for
saving). The grading LUTs would then make any changes to the color (if
you had some magical linear display, this data would look correct on
that display, but the grading LUTs wouldn't be applied until the end so
as not to break the radiometric linearity of your data). Then, finally,
with that data you apply the display LUT for your output device.

So it'd be
1 Storage LUT
0+ Grading LUTs
1 Display LUT
as the common workflow.
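
In terms of the sketch above:

    def view(value, storage, grading_luts, display):
        lin = REGISTRY[storage].to_linear(value)       # 1 storage LUT
        for lut in grading_luts:                       # 0+ grading LUTs
            lin = lut(lin)
        return REGISTRY[display].from_linear(lin)      # 1 display LUT

    # e.g. view(0.5, 'slog', [lambda x: x * 1.1], 'srgb')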

I've got no idea how well this aligns with the design goals of OCIO, as
I'm just getting familiar with it, but that's how I'd look at
structuring things to help define the process so it's more accessible
to less technical users. It's pretty much what I'm planning to put in
place in the pipeline I'm building here, but on a pipeline-wide scale:
at ingest everything is shifted to radiometrically linear space and
stored as float (well, probably half-float in most cases) on disk. Then
people working in lighting/comp would have grading and display LUTs
applied on the fly, so they're looking at something as close to final
as possible.

I'd be interested to hear your thoughts.

Cheers,

Alan.
