Re: Color space transform bidirectionality


Alan Jones <sky...@...>
 

Hi Jeremy,

On Mon, Aug 23, 2010 at 4:30 PM, Jeremy Selan <jeremy...@gmail.com> wrote:
OCIO's current approach is that, to make colorspace definitions
"convenient", we assume inverse color transforms are allowed when the
round trip can automatically be made 100% accurate.
For practical purposes those are all probably invertible, but even a 1D LUT
can lose information (such as everything below the black point getting pushed
to 0 - though seeing as that information isn't required, it doesn't really
matter). I'm not a fan of putting too much effort into protecting users from
their own stupidity ;) An uninvertible matrix could also happen, but again,
probably not in practice.
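To make the black-point example concrete, here's a toy sketch (plain Python, not OCIO code, with made-up values) showing why a clamping 1D LUT can't be inverted exactly:

```python
# Toy illustration: a 1D "LUT" that clamps everything below the black
# point to 0.0 destroys information, so no inverse can recover the
# original values in that range. BLACK_POINT is an arbitrary example.

BLACK_POINT = 0.05

def apply_lut(v):
    """Push values below the black point to 0, pass the rest through."""
    return 0.0 if v < BLACK_POINT else v

# Two distinct inputs collapse to the same output...
assert apply_lut(0.01) == apply_lut(0.03) == 0.0
# ...so the transform is not invertible below the black point, even
# though values at or above it round-trip exactly:
assert apply_lut(0.5) == 0.5
```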

I'd love to figure out
a way to make this more explicit in the API. Suggestions?
That depends - you make a good usage case where that somewhat goes out the
window. The only real solution (which unfortunately cuts down your options
for processing shortcuts and also limits what LUTs you can bring in) I can
think of is something like the following.

A color space is always defined relative to scene-referred linear. That way,
each space knows how to convert to and/or from scene-referred linear and
nothing else. Then, when it registers, it can say which directions it can
handle.
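A rough sketch of what that registration could look like (this is a hypothetical API of my own invention, not actual OCIO code - the names ColorSpace, register, convert, and the example spaces are all made up):

```python
# Hypothetical sketch: each color space is defined only relative to
# scene-referred linear, and declares which directions it supports by
# which converters it provides at registration time.

REGISTRY = {}

class ColorSpace:
    def __init__(self, name, to_linear=None, from_linear=None):
        self.name = name
        self.to_linear = to_linear      # storage form -> scene-referred linear
        self.from_linear = from_linear  # scene-referred linear -> storage form

def register(space):
    REGISTRY[space.name] = space

# A gamma-2.2 space that can round-trip in both directions:
register(ColorSpace("gamma22",
                    to_linear=lambda v: v ** 2.2,
                    from_linear=lambda v: v ** (1.0 / 2.2)))

# A baked film-look LUT that only knows how to get *to* linear:
register(ColorSpace("film_look", to_linear=lambda v: v * 0.9))

# Any conversion pivots through scene-referred linear, so we can check
# up front whether both legs are actually available.
def convert(v, src, dst):
    s, d = REGISTRY[src], REGISTRY[dst]
    if s.to_linear is None or d.from_linear is None:
        raise ValueError("conversion direction not registered")
    return d.from_linear(s.to_linear(v))
```

Asking for a conversion into "film_look" would then fail loudly instead of silently inverting something that can't be inverted.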

You kind of also need to define types of LUTs at this point, though. I'm
thinking of three: storage LUTs (which pull data to and from representations
used for storage, such as S-Log etc.), grading LUTs (which are designed to
apply a particular look, whether that's a color grade or a film transfer
representation), and display LUTs (which modify values to ensure that they
display correctly on a given device - e.g. sRGB, Rec.709, etc.).

With this, storage LUTs would usually go from their storage form across to
scene-referred linear (though you'd use the reverse for saving, perhaps).
The grading LUTs would then make any changes to the color (if you had some
magical linear display, this data would look correct on it, but the grading
LUTs wouldn't be applied until the end so as not to screw up the radiometric
linearity of your data). Then finally, with that data, you apply the display
LUT for your output device.

So it'd be
1 Storage LUT
0+ Grading LUTs
1 Display LUT
as the common workflow.
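That ordering could be sketched like this (again just an illustrative sketch - the pipeline helper and the stand-in LUTs are invented toy examples, not real transforms):

```python
# Sketch of the proposed ordering: one storage LUT into scene-referred
# linear, zero or more grading LUTs applied while still linear, then
# one display LUT for the output device. All names are hypothetical.

def make_pipeline(storage_lut, grading_luts, display_lut):
    def process(v):
        v = storage_lut(v)        # storage form -> scene-referred linear
        for grade in grading_luts:  # creative look(s), applied in linear
            v = grade(v)
        return display_lut(v)     # linear -> display-ready values
    return process

# Toy stand-ins for the three LUT kinds (not real encodings):
log_to_linear  = lambda v: v ** 2.0           # pretend storage decode
warm_grade     = lambda v: min(v * 1.1, 1.0)  # pretend grading look
linear_to_srgb = lambda v: v ** (1.0 / 2.2)   # pretend display encode

pipeline = make_pipeline(log_to_linear, [warm_grade], linear_to_srgb)
```

The grading list being "0+" falls out naturally: pass an empty list and you get a straight storage-to-display conversion.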

I've got no idea how well this aligns with the design goals of OCIO, as I'm
just getting familiar with it, but that's how I'd look at structuring things
to help define the process so it's more accessible to less technical users.
It's pretty much what I'm planning on putting in place in the pipeline I'm
building here, but on a pipeline-wide scale, where at ingest everything is
shifted to radiometrically linear space and stored as float (well, probably
half-float for most cases) on disk. Then people working in lighting/comp
would have grading + display LUTs applied on the fly, so they're looking at
something as close to final as possible.

I'd be interested to hear your thoughts.

Cheers,

Alan.
