Color space transform bidirectionality


Jeremy Selan <jeremy...@...>
 

Alan,

OCIO is being a bit clever under the hood, though in my opinion not in
a dangerous way.

OCIO's current approach is that, to make colorspace definitions
"convenient", we assume inverse color transforms are allowed whenever
the round trip can automatically be made 100% accurate. I.e., if your
colorspace definition is built only from cleanly invertible ops
(simple CC math, matrix ops, log, 1D lookup tables) and is round-trip
safe, the inverse transform is allowed. If your colorspace definition
relies on 3D LUTs, any attempt to run the inverse transform will fail
during the getProcessor call. (If you add a 3D LUT colorspace in Nuke
and try to perform the inverse, you'll see this.) I'd love to figure
out a way to make this more explicit in the API. Suggestions?
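
As a sketch of where that surfaces in practice, using the Python
bindings (the config filename and the colorspace names
'film_emulation' and 'lnf' are made up for illustration):

    import PyOpenColorIO as OCIO

    # Hypothetical config; 'film_emulation' is assumed to be built on a 3D LUT.
    config = OCIO.Config.CreateFromFile('config.ocio')

    try:
        # Going back to scene linear would need the inverse of the 3D LUT,
        # so this fails here, at getProcessor time - not at config load.
        processor = config.getProcessor('film_emulation', 'lnf')
    except OCIO.Exception as err:
        print('Inverse transform unavailable:', err)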

The reason we defer the inverse 3D LUT check until this late in the
API is to allow for the following interesting possibility.

Say you have two display emulation colorspaces defined as follows.

srgb: (A normal filmlook)
lin_to_log.lut1d
log_to_srgb.lut3d

srgb_warm: (A warmer filmlook variant).
lin_to_log.lut1d
log_to_srgb.lut3d
warm.mtx
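
In the Python bindings, building such a pair might look like the
following sketch (v1-era PyOpenColorIO; method names have shifted
between OCIO versions, and the LUT/matrix filenames are just the
placeholders from above):

    import PyOpenColorIO as OCIO

    config = OCIO.Config()

    def add_space(name, files):
        # Chain the files in the FROM_REFERENCE (scene linear -> space) direction.
        cs = OCIO.ColorSpace(name=name)
        group = OCIO.GroupTransform()
        for f in files:
            group.push_back(OCIO.FileTransform(src=f))
        cs.setTransform(group, OCIO.Constants.COLORSPACE_DIR_FROM_REFERENCE)
        config.addColorSpace(cs)

    add_space('srgb', ['lin_to_log.lut1d', 'log_to_srgb.lut3d'])
    add_space('srgb_warm', ['lin_to_log.lut1d', 'log_to_srgb.lut3d', 'warm.mtx'])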


Say you have pixels in a baked sRGB representation, and you want to
view them with the warm look. Internally, our processing chain would
look like:
INPUT -> inverse log_to_srgb.lut3d -> inverse lin_to_log.lut1d ->
lin_to_log.lut1d -> log_to_srgb.lut3d -> warm.mtx -> OUTPUT

But OCIO will (soon) optimize this to the simplified conversion:
INPUT -> warm.mtx -> OUTPUT
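
The optimization is essentially a peephole pass over the op chain:
adjacent forward/inverse pairs cancel exactly. A toy illustration of
the idea (not OCIO code - ops are just strings here):

    def inverse_of(op):
        # Name of the op that would exactly undo 'op'
        return op[len('inverse '):] if op.startswith('inverse ') else 'inverse ' + op

    def simplify(ops):
        out = []
        for op in ops:
            if out and out[-1] == inverse_of(op):
                out.pop()          # adjacent forward/inverse pair cancels
            else:
                out.append(op)
        return out

    chain = ['inverse log_to_srgb.lut3d', 'inverse lin_to_log.lut1d',
             'lin_to_log.lut1d', 'log_to_srgb.lut3d', 'warm.mtx']
    assert simplify(chain) == ['warm.mtx']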

This particular color space transformation, although conceptually
requiring an inverse 3D LUT, doesn't actually end up using it! Which
is pretty cool, and really convenient in practice.
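
With the config sketched earlier, that means the following would
succeed once the optimization lands, even though 'srgb' is built on a
3D LUT (applyRGB and the flat float-list convention are from the
v1-era Python API):

    # Conceptually srgb -> srgb_warm inverts the 3D LUT, but the matching
    # forward/inverse pairs cancel, leaving only warm.mtx.
    processor = config.getProcessor('srgb', 'srgb_warm')

    pixels = [0.5, 0.4, 0.3]              # flat RGB float list
    warmed = processor.applyRGB(pixels)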

This is why in Nuke we just blindly added all color spaces to both the
input and output side, to allow for these circumstances. I can see
this is subtle, though, and potentially misleading from a new user's
perspective.

Thoughts?

-- Jeremy

On Mon, Aug 23, 2010 at 1:53 PM, Alan Jones <sky...@gmail.com> wrote:
> Hi All,
>
> I'm just reading through the Nuke plugin to get an idea of the API
> and how things are supposed to work together.
>
> I notice that it adds all color spaces to both the input and output.
> This assumes that all color transforms can be reversed. I was under
> the impression that some transforms would not be. Is there any
> facility within OCIO's current design for this?
>
> Cheers,
>
> Alan.


Alan Jones <sky...@...>
 

Hi Jeremy,

On Mon, Aug 23, 2010 at 4:30 PM, Jeremy Selan <jeremy...@gmail.com> wrote:
> OCIO's current approach is that, to make colorspace definitions
> "convenient", we assume inverse color transforms are allowed whenever
> the round trip can automatically be made 100% accurate.
For practical purposes those are all probably invertible, but even a
1D LUT can lose information (such as everything below the black point
getting pushed to 0 - though since that information isn't required, it
doesn't really matter). I'm not a fan of putting too much effort into
protecting users from their own stupidity ;) An uninvertible matrix
could also happen, but again, probably not in practice.

> I'd love to figure out a way to make this more explicit in the API.
> Suggestions?
That depends - you've made a good case for a usage that such a scheme
would somewhat rule out. The only real solution I can think of (which
unfortunately cuts down your options for processing shortcuts, and
also limits which LUTs you can bring in) is something like the
following.

A color space is always defined relative to scene-referred linear:
each space knows only how to convert to and/or from scene-referred
linear, and nothing else. Then, when it registers, it declares which
directions it can handle.
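
A minimal sketch of that registration model (purely hypothetical - not
the OCIO API):

    REGISTRY = {}

    class ColorSpace:
        """A space defined only relative to scene-referred linear."""
        def __init__(self, name, to_linear=None, from_linear=None):
            self.name = name
            self.to_linear = to_linear        # None means direction unsupported
            self.from_linear = from_linear

    def register(space):
        REGISTRY[space.name] = space

    # A display-style space that only defines scene linear -> device:
    register(ColorSpace('film_srgb', from_linear=lambda rgb: rgb))  # stub math

    # A UI could now query which directions a space actually supports:
    space = REGISTRY['film_srgb']
    print(space.to_linear is not None, space.from_linear is not None)  # False True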

You kind of also need to define types of LUTs at this point, though.
I'm thinking of three: storage LUTs (which pull data to and from
representations used for storage, such as S-Log etc.), grading LUTs
(which are designed to apply a particular look, whether that's a color
grade or a film transfer representation), and display LUTs (which
modify values to ensure they display correctly on a given device -
e.g. sRGB, Rec. 709 etc.).

With this, storage LUTs would usually convert from their storage form
to scene-referred linear (though you'd perhaps use the reverse for
saving). The grading LUTs would then make any changes to the color (if
you had some magical linear display, this data would look correct on
it; the grading LUTs wouldn't be applied until the end, so as not to
destroy the radiometric linearity of your data). Finally, you apply
the display LUT for your output device.

So it'd be
1 Storage LUT
0+ Grading LUTs
1 Display LUT
as the common workflow.
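
As a toy sketch of that workflow, with plain callables standing in for
the LUTs (everything here is hypothetical):

    def view_pipeline(storage_to_linear, grades, display_from_linear):
        """1 storage LUT in, 0+ grading ops in linear, 1 display LUT out."""
        def apply(rgb):
            rgb = storage_to_linear(rgb)     # e.g. S-Log -> scene-referred linear
            for grade in grades:             # look changes, kept in linear
                rgb = grade(rgb)
            return display_from_linear(rgb)  # encode for the device, e.g. sRGB
        return apply

    # Example: one warming grade, then a crude gamma-2.2 "display" encode
    warm = lambda rgb: [rgb[0] * 1.05, rgb[1], rgb[2] * 0.95]
    to_display = lambda rgb: [c ** (1 / 2.2) for c in rgb]
    pipeline = view_pipeline(lambda rgb: rgb, [warm], to_display)
    print(pipeline([0.18, 0.18, 0.18]))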

I've got no idea how well this aligns with the design goals of OCIO,
as I'm just getting familiar with it, but that's how I'd look at
structuring things to help define the process, so it's more accessible
to less technical users. It's pretty much what I'm planning to put in
place in the pipeline I'm building here, but on a pipeline-wide scale:
at ingest everything is shifted to radiometrically linear space and
stored in float (well, probably half-float for most cases) on disk.
Then people working in lighting/comp would have grading + display LUTs
applied on the fly, so they're looking at something as close to final
as possible.

I'd be interested to hear your thoughts.

Cheers,

Alan.


Jeremy Selan <jeremy...@...>
 

Alan,

Sorry for the delay in responding. (Had to attend to non-OCIO
responsibilities this week).

The approach you describe is exactly in line with what I've been
thinking. Excellent.

A few additions:

* You refer to the different LUT types as "storage, grading, and
display". In the current library we don't draw hard lines between the
different transform uses; all three of these would be color spaces,
and are treated equally in the code. Adapting your terminology to
OCIO, "storage" color spaces would always provide transforms both to
and from scene linear, display color spaces would typically only
define a transform from scene linear, and grading spaces would be
defined dynamically at runtime. But there's no code-level distinction
between the types. Is there any benefit to adding one? I've been
thinking about adding metadata (tags) that would let the user tag
color spaces as belonging to different categories (such as 'display',
'IO', etc.). This would probably help in the UI (you could filter by
tag), but I can't think of any other uses, so it's a bit low on the
priority list.
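
(For what it's worth, the existing per-colorspace "family" string
could approximate this today, though it's a single label rather than a
set of tags; the 'display' family name below is just an example.)

    import PyOpenColorIO as OCIO

    config = OCIO.GetCurrentConfig()

    # Filter the colorspaces shown in a UI menu by their family string
    display_spaces = [cs.getName() for cs in config.getColorSpaces()
                      if cs.getFamily() == 'display']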

* You refer to these things all as LUTs. I'd prefer to just call them
"transforms", to leave open the possibility of other correction math.
(For example, the grading LUT could just as easily be a 1D LUT, a 3D
LUT, or an ASC CDL.)

Your processing pipeline is already expressed in the DisplayTransform.
You specify the storage LUT (inputColorSpace), the display LUT
(outputColorSpace), and also optional color correction(s), which can
occur in any color space, including scene linear or a log-like DI
space. The Nuke OCIODisplay node provides an example of using this API.
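
A sketch of that usage with the v1-era Python bindings (the colorspace
name 'lnf' and the 'sRGB'/'Film' display and view are assumed to exist
in the config):

    import PyOpenColorIO as OCIO

    config = OCIO.GetCurrentConfig()

    dt = OCIO.DisplayTransform()
    dt.setInputColorSpaceName('lnf')  # the "storage" side: where pixels are now
    dt.setDisplay('sRGB')             # the "display" side: target device
    dt.setView('Film')                # which view/look on that device

    # Optional grading step, applied in scene linear before the display transform
    cc = OCIO.CDLTransform()
    cc.setSat(1.1)
    dt.setLinearCC(cc)

    processor = config.getProcessor(dt)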

-- Jeremy

On Mon, Aug 23, 2010 at 2:59 PM, Alan Jones <sky...@gmail.com> wrote:
> [quoted text snipped - identical to Alan's message above]


Alan Jones <sky...@...>
 

Hi Jeremy,

On Fri, Aug 27, 2010 at 4:18 PM, Jeremy Selan <jeremy...@gmail.com> wrote:
> Sorry for the delay in responding. (Had to attend to non-OCIO
> responsibilities this week).
No worries :)

> The approach you describe is exactly in line with what I've been
> thinking. Excellent.
This is great news :)

> But there's no code-level distinction between the types. Is there any
> benefit to adding one?
I agree - the only benefit is from a UI perspective, to help the user
avoid doing something stupid. The downside is that sometimes there is
a good reason to do something stupid. So perhaps it would be best to
go for some middle road (listing the logical ones first and including
the type of every transform, for instance).

> I'd prefer to just call them "transforms", to leave open the
> possibility of other correction math. (For example, the grading LUT
> could just as easily be a 1D LUT, a 3D LUT, or an ASC CDL.)
Sounds like a logical choice of terminology to me.

> Your processing pipeline is already expressed in the DisplayTransform.
> You specify the storage LUT (inputColorSpace), the display LUT
> (outputColorSpace), and also optional color correction(s), which can
> occur in any color space, including scene linear or a log-like DI
> space. The Nuke OCIODisplay node provides an example of using this API.
Great stuff :) Thanks for sharing. I'm slowly getting my head around
how OCIO works.

Cheers,

Alan