
Previous comments (for posterity sake)

Jeremy Selan <jeremy...@...>
 

I'll be posting (anonymously) the previous comments on ver 0.1 so that
new members will be able to easily track the comment history.


--
Subscription settings: http://groups.google.com/group/ocs-dev/subscribe?hl=en


Re: Previous comments (for posterity sake)

Jeremy Selan <jeremy...@...>
 

... from an anonymous commenter, some excellent questions. (I'll be
adding these to the FAQ).

One question I think would be good for the faq: how does this system
compare to commercial systems like Truelight and Cinespace? I have a
general idea of what the answer is, but I think it's something that
people might be wondering.
This system is complementary to Truelight / Cinespace. Although
Truelight comes with a bunch of plugins, its core functionality is to
generate the 3D luts for things like print emulation, device mapping,
etc., and I don't see this role changing. (Our library does not attempt
to fill these roles.) It will be very straightforward to implement a
reader for unencrypted Truelight luts. Note that supporting Truelight
*encrypted* luts is probably not simple, due to legal considerations.
I am unsure if Cinespace has the capability to export unencrypted LUTs.

Also, I didn't totally get this line in the faq: "GPU Acceleration is
handled by dynamically generating shader code and associated 3D luts. We
DO NOT, and WILL NOT link to GL / Cg / etc." Maybe I don't fully
understand which chunk of code you're going to be open-sourcing ...
you're saying that this project will have no calls to GL or Cg? Is the
Cg stuff not part of this project?
Sorry for being confusing. ALL of the Cg / GLSL generation stuff
we've developed at SPI (related to color processing) is being included
in this release.

This entry was intended to clarify issues relating to integration:
even though this library generates Cg / GLSL code, we don't link to
Cg / GL directly. Instead, the library returns "simple" data types: a
std::string with the shader text, and a float* with the lookup table.
It is left to the client app to make the appropriate GL calls
accordingly. (Which, in our experience, is not much of a burden, and
actually makes everyone's lives simpler.) A bit of extra information
(such as cacheIDs for the 3D luts) is also included to allow for
additional client-side optimizations. (In the 3D lut cacheID case, to
prevent reloading the 3D texture every time something changes.)



Original Letter Of Intent (cleaned up a bit)

Jeremy Selan <jeremy...@...>
 

Color Folks,

Sony Imageworks is planning to open-source our color pipeline, and
we'd like your feedback.

Our hope is not to re-invent the wheel, but to help facilitate color
interchange between vfx / animation houses, and to standardize image
visualizations / transformations across tools. Many of you personally
deal with this pain on a daily basis, and I'm pretty sure no one
considers it a solved problem.

Note:
Our goal is NOT to push our internal pipeline eccentricities on other
facilities. Rather, our hope is that if 95% of what we've done is
generic enough to support arbitrary pipelines -- as we believe it is
-- the time is ripe to lay down a standard which could make everyone's
lives easier.

So what are we looking for from you?

We would love your honest, unfiltered feedback:
* Is this project conceptually useful to your organization?
* Would you use it? (or recommend using it?)
* Are there design choices we've made that limit its usefulness?
* Are there commercial tools you'd like us to work on support for?
* Do you know of something better (open source) which we're not aware of?
* Are you interested in getting early versions of the library?

I am including 3 attachments:

* A project overview / FAQ
* The C++ header for our current library
* The XML configuration for our default "visual effects" color setup

NOTE: The latter two documents are NOT the final spec for the library,
and have not been "de-imageworks-ified". But both files are a good
starting point to get everyone on the same page. (They are in fact our
'live' production code / configuration.)

Please do not hesitate to contact me personally if you'd prefer that
route over open email feedback.

Regards,
Jeremy Selan




Re: Previous comments (for posterity sake)

Jeremy Selan <jeremy...@...>
 

---- Commenter 1

This all looks really cool. I don't know what your budget for
getting this ready for a wider world is, but you may want to consider
removing the STL from the code. Some people are very particular about
what parts of the C++ standard they use. Also, you may want to
reconsider throwing exceptions. The API is very nice, I'd be happy to
call this code.

---- Jeremy

Thanks!

Does anyone else have concerns with the API being C++ (as opposed to
pure C)? My inclination is to leave it as C++ for now, as most of the
initial apps / plugin APIs we're targeting right off the bat support
C++. And I think the average user probably won't be looking at the code
very often. My hope is that they'll follow some simple steps to
install the library and any associated plugins they care about, and
won't have to think about the language at all. I also think switching
to a pure C API may be a bit more work than I'd prefer at this
stage. OpenEXR solves this in a clever manner, with its
ImfCRgbaFile.h compatibility header. Should it prove useful, we could
always add a C wrapper layer in that vein.
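For reference, the ImfCRgbaFile.h approach boils down to a flat extern "C" layer of opaque handles over the C++ objects. A hypothetical sketch of what such a wrapper could look like here (none of these names are the real API; ColorConfig is a stand-in class):

```cpp
#include <string>

// --- C++ side (stand-in for the real library object) ---
class ColorConfig {
public:
    explicit ColorConfig(const std::string& path) : m_path(path) {}
    const std::string& path() const { return m_path; }
private:
    std::string m_path;
};

// --- Flat C layer, in the ImfCRgbaFile.h vein: opaque handle + free functions ---
extern "C" {
    typedef void* OCSConfigHandle;

    OCSConfigHandle OCSConfigCreate(const char* path) {
        return new ColorConfig(path);
    }
    const char* OCSConfigGetPath(OCSConfigHandle h) {
        return static_cast<ColorConfig*>(h)->path().c_str();
    }
    void OCSConfigDestroy(OCSConfigHandle h) {
        delete static_cast<ColorConfig*>(h);
    }
}
```

The C++ API stays primary; the C layer is a thin veneer that C-only hosts (or FFI bindings) can link against without touching C++ symbols.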

How do people feel about exceptions? I'm comfortable with them, but I
understand that many teams avoid using them. I would be happy to swap
out the exception approach for a success / fail return code if people
prefer. (The disadvantage being that good error reporting is harder
in these cases; you can always use a get/set errorstring approach,
but I've never loved that in a multi-threaded environment.)
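One thread-safe middle ground between exceptions and a global get/set errorstring is a status return plus a caller-supplied error string, so no shared state is involved. A hypothetical sketch (the function name and validation are illustrative only):

```cpp
#include <string>

// Hypothetical: report failure via a caller-owned error string instead of
// a global errorstring (racy across threads) or a thrown exception.
bool ConvertColorspace(const std::string& src, const std::string& dst,
                       std::string* errorOut) {
    if (src.empty() || dst.empty()) {
        if (errorOut) *errorOut = "colorspace name must be non-empty";
        return false;
    }
    // ... real conversion would happen here ...
    return true;
}
```

Each thread owns its error string, so concurrent failures never stomp on each other, which is the weakness of the global-errorstring pattern.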

---- Commenter 2

Adding some more applications:
- Adobe CS (Photoshop, After Effects, etc.) (ICC profiles)
- Pixar RenderMan 'it' viewer (Truelight ASCII .cub 3D lut)
- Assimilate Scratch (Arri / Kodak .3dl)
- Iridas FrameCycler (Iridas .cube)
- Autodesk (MESH3D, .lut, etc.)

And for completeness (not sure if all of these make sense):
- Avid Media Composer
- Final Cut Pro
- OpenFX (plugin API)

---- Jeremy

Good format additions, I'll add those to the FAQ. (Which I will resend
or turn into a Google doc once enough updates accumulate.) None of
these sound difficult to support, particularly if people are open to
sending me format examples / docs. We already have basic support for
exporting ICC profiles and importing .3dl. (Though I thought .3dl was
an Autodesk format, am I mistaken? Probably...)


---- Commenter 2

Assuming tier 1 and 2 support for 3rd party applications are handled
by linking against commercial libraries, my vote is for the core
library to be the default built target. Requiring the developer to
explicitly list the packages for which 3rd party support is required,
eg:

configure --with-nuke=/usr/lib/nuke6 --with-rv=/usr/lib/rv

...would allow a single invocation of 'make' to build the plugins for
which the necessary libraries are present.

Code that may have proprietary library dependencies should live
alongside (not in) the main library:

colorlib
|- common
|  |- include
|  |- lib
|
|- 3rd_party_plugins
   |- foundry
   |  |- nuke
   |
   |- apple
      |- shake
      |- finalcut

Wrapping multiple 'product' directories under 'vendor' directories
addresses the case of some vendors that have inconsistent APIs across
products (autodesk, apple).

---- Jeremy

I'm just starting to think about the build system but this sounds like
a good approach. I shall use it for now.

My hope is also, to the extent possible, to provide pre-built
libraries and plugins. libstdc++ will likely be the biggest
dependency for linux, so we should be able to compile only a few
architectures and hit the major platforms, right? (I may be wrong
here though).


---- Commenter 2

Providing multiple levels of service:

Considering that small or offshore VFX facilities may not have
resources to compile plugins from source, it might be nice for the
library to be able to provide a lower level of service than the one it
is capable of delivering -- eg, an exporter of LUTs for use in an
application's native Lookup plugin.

In fact, even some medium-to-large facilities might elect to use the
library in 'tier-3' mode as a 'universal translator' for generating
show-specific LUTs for a range of 3rd party products (at the expense
of some lost accuracy).


---- Jeremy

Sure. I like this idea of providing lut exporters for as many formats
as possible. With the python API, these are really easy to write.




Re: Previous comments (for posterity sake)

Jeremy Selan <jeremy...@...>
 

---- Commenter

You mention rgb and rgba. How is alpha handled? Is there a policy?

---- Jeremy

Current policy is that alpha is always passed unchanged. For
applications which wish to be more clever (such as special premult /
unpremult handling), the hope is for this library to continue to be
ignorant of the issue.

In the next rev of the API, this will be explicitly documented in the
header. (numChannels will be replaced with a pixelStride, and we will
explicitly mention that the calls only modify the rgb triples for
processing). This will also let us handle >4 channel images.
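The pixelStride idea can be sketched as follows: walk the buffer in steps of pixelStride floats, touch only the first three channels, and leave alpha (and any extra channels) untouched. This is a hypothetical illustration of the policy, not the actual API:

```cpp
#include <cstddef>

// Hypothetical per-pixel op applied only to the RGB triple; any channels
// beyond the first three (alpha, IDs, etc.) pass through unchanged.
void ApplyGainRGB(float* pixels, size_t numPixels, size_t pixelStride,
                  float gain) {
    for (size_t i = 0; i < numPixels; ++i) {
        float* p = pixels + i * pixelStride;
        p[0] *= gain;
        p[1] *= gain;
        p[2] *= gain;
        // channels p[3] .. p[pixelStride-1] are intentionally untouched
    }
}
```

A pixelStride of 4 handles RGBA; a stride of 6 would handle a 6-channel image while still only modifying the leading RGB triple.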


---- Commenter

When you decline support for spatially varying color xforms, do you
mean image space? So, things like histogram-based tone curves, or
whatever. Or did you mean no RGB space deformation operators?

---- Jeremy

Yes, I mean image-space-varying transforms are not supported.
Histogram-based operations, while they don't currently exist, are fine.
There is just a lot of code internally that processes in scanlines, and
I would prefer to keep all pixels independent internally. This is also
required in the GPU pipeline, which makes this assumption as it bakes
the color transform stack into 3D lut(s).

---- Commenter

After reading through the info a couple times, I am wondering about a
few key terms. You use "reference" and "linear" and "log"
occasionally, and I think these need to be dealt with more strictly.

---- Jeremy

Agreed. I will scrub the document of all references to linear or
log, except in sections that are explicitly discussing workflow
conventions. (Where I feel they are appropriate, as long as the terms
are qualified / clarified at their first introduction.)

---- Commenter

It appears that the "reference" color space (the working space, or
connection space) is never specified. It only exists as a by-product
of the to_linear and from_linear methods in the config file.
Since that working space can (conceptually) be arbitrary, it seems
weird that all the code refers to it as linear. Now, I'm not
suggesting anyone use something other than linear, but you do in the
README. So, the extreme request is: you should change all the
"linear"s to "reference"s. Which might be a bad idea, but take a few
moments and read through the material you sent, doing a mental replace
of "reference" for "linear". I feel that you will find that the
reference space needs to be specified.

---- Jeremy

This is an excellent suggestion, and I will definitely take it to its
logical conclusion. I will go ahead and remove all references to
linear from the API and configuration file and substitute the term
'reference' as appropriate. Thinking about this further, the two API
calls which probably caught your attention, ConvertColorspaceToLinear
and ConvertLinearToColorspace, are probably unnecessary anyway. The
more general function ConvertColorspaceToColorspace should be
sufficient, and would remove a lot of ambiguity from the API.

Along the same topic, there are a bunch of calls to get the names of
colorspaces corresponding to certain use cases: GetLinearColorspace,
GetLogColorspace, GetColorTimingColorspace, etc.

This is the only place colorspace names are 'enshrined' in any way, so
I will probably refine this call to be more abstract. Something more
like:

enum ColorspaceRole { SCENE_LINEAR, COMPOSITING_LOG, COLOR_TIMING, DATA };

GetColorspaceForRole(ColorspaceRole role);

... where the mapping from role to colorspace is defined in the show
configuration file.

Having a few predefined roles (rather than accepting arbitrary names
as roles) has proven to be a useful abstraction. For example, if we
were to write a LogConvert node for Nuke, with pre-defined role enums
(which will be documented) our LogConvert node could assume something
like,

ConvertColorspaceToColorspace( GetColorspaceForRole(SCENE_LINEAR),
GetColorspaceForRole(COMPOSITING_LOG) ), etc.

In our experience new roles have not come along that often, and new
ones can be added at any time.
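A sketch of what the role lookup could look like, with the role-to-colorspace mapping coming from the show configuration rather than being hard-coded (the function signature and colorspace names here are illustrative assumptions, not the real API):

```cpp
#include <map>
#include <stdexcept>
#include <string>

enum ColorspaceRole { SCENE_LINEAR, COMPOSITING_LOG, COLOR_TIMING, DATA };

// Hypothetical: in the real library this mapping would be parsed from the
// show's XML configuration; here it is passed in explicitly for clarity.
std::string GetColorspaceForRole(const std::map<ColorspaceRole, std::string>& cfg,
                                 ColorspaceRole role) {
    std::map<ColorspaceRole, std::string>::const_iterator it = cfg.find(role);
    if (it == cfg.end())
        throw std::runtime_error("role not defined in show configuration");
    return it->second;
}
```

A LogConvert node would then ask for SCENE_LINEAR and COMPOSITING_LOG by role, never hard-coding facility-specific colorspace names.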

---- Commenter

On the phone we discussed the idea that two shows might be both
linear, but have different color primaries (e.g. wide versus 709). I
realize you don't work that way now, in particular avoiding reading
assets from one config into another config. But where does the color
primary part go? Is there an example buried in the sample config?

---- Jeremy

We do not have an example of this in a current config, so here's a
mockup of what it could look like:

// Show 'A', native r709
<config>
<colorspace name='lin709'></colorspace>
<colorspace name='linP3'>
<to_reference>
<transform file='p3_to_709.mtx' direction='forward'/>
</to_reference>
<from_reference>
<transform file='p3_to_709.mtx' direction='inverse'/>
</from_reference>
</colorspace>
</config>

// Show 'B', native P3
<config>
<colorspace name='linP3'></colorspace>
<colorspace name='lin709'>
<to_reference>
<transform file='p3_to_709.mtx' direction='inverse'/>
</to_reference>
<from_reference>
<transform file='p3_to_709.mtx' direction='forward'/>
</from_reference>
</colorspace>
</config>


Assuming your linear files are tagged appropriately with "linP3" or
"lin709" (either as tokens in the header or by dynamically inspecting
other attributes), things will work out correctly.

No matter which show you are set to use, images will be transformed
into the appropriate working space as expected. Of course, this
assumes that each show is set up in a complementary manner to allow
this. This assumption would be broken if, for instance, two shows
disagreed on what the p3_to_709 matrix was.
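The complementarity assumption can be checked numerically: one show's forward matrix followed by the other show's inverse must round-trip a pixel exactly, and this holds precisely when both shows share the same matrix file. A toy sketch, with a made-up diagonal matrix standing in for p3_to_709.mtx (real primaries conversions are full 3x3 matrices; the diagonal is only to keep the arithmetic exact):

```cpp
// Toy diagonal "matrix" standing in for p3_to_709.mtx; the point is only
// that forward followed by inverse returns the original pixel, which is
// what the complementary show configs rely on.
struct DiagMatrix {
    float d[3];
    void apply(float rgb[3]) const {          // forward direction
        for (int i = 0; i < 3; ++i) rgb[i] *= d[i];
    }
    void applyInverse(float rgb[3]) const {   // inverse direction
        for (int i = 0; i < 3; ++i) rgb[i] /= d[i];
    }
};
```

If the two shows disagreed on the matrix coefficients, the round trip would no longer be the identity, which is exactly the failure mode described above.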

---- Commenter

The other word is "log", which you use here: "converting log to linear
is not knowable". I didn't follow this logic. There is a lg10.lut1d
file somewhere, why is it more special than any other lut file?

---- Jeremy

Ah, lg10.lut1d is not more special than any other lut file. I was
trying to get across the concept that the contents of the
<to_reference> / <from_reference> blocks are not queryable through the
API, by design. (The transforms are essentially a single black box
externally.)

In terms of generality, there are no magic named colorspaces
whatsoever inside the API. All configuration info comes from the
single file. As discussed above, we do expect to allow for pre-defined
Colorspace "Labels / Roles" which are geared towards the vfx /
animation pipeline, but I think there is a strong argument in favor of
this.




Additional Feedback

Jeremy Selan <jeremy...@...>
 

<Jeremy: this has been edited to remove 'personal' comments>

Commenter:

Answering your questions...

* Is this project conceptually useful to your organization?

Yes, we would love a cross application colour library that gives
consistent results on all applications with minimal user effort and
developer pain.

* Would you use it? (or recommend using it?)

Probably, we'd like to understand it in a bit more detail, and we
would need to sit down and discuss how/when/where we used it.

* Are there design choices we've made that limit its usefulness?

Straight up, for Nuke to use the CPU path, we'd need to pass in
separate pointers for R, G and B data, as they are allocated
separately and do not live at fixed strides from each other.

* Are there commercial tools you'd like us to work on support for?

Truelight

* Do you know of something better (open source), which we're not
aware of?

No

################################################################################

I've looked over the lightly commented header file, the explaining
text and the comments from others so far. While I understand that this
is the API at the moment and will no doubt change, I'm not sure of the
exact relationship between objects/entities within it. While I get the
basic ideas, as in what a colour space is, what colour timing is, I'm
not sure of some of the details, eg: what is the difference between a
Look and a FilmLook, are colour spaces simply to deal with things like
log/lin or do they extend to 3D colour conversion? Do you have a FAQ/
overview document somewhere that goes into slightly more detail on the
API?

################################################################################
On the general engineering front.

I've seen it mentioned in the posts Jon forwarded to me, but a C API
is more robust for code in a dynamic library. I generally prefer C++
myself, but you could have a core C API and wrap it with some very,
very thin C++ classes as an option.

If you have

class ASCColorCorrection { ... };

and a function

void ApplyASCColorCorrection(..., const ASCColorCorrection &c, ...);

why not make that function with global scope a member function 'apply'
in the class?

If you start adding row and component strides as well as separate
buffer pointers to the API, it will make for a complex set of calls to
apply a conversion/look to an image. I'd suggest a very lightweight
wrapper class for images that packages all that up, and you pass an
object of that class into the relevant functions. Given that you are
talking about supporting fixed component strides already, this
shouldn't make the implementation any more difficult, and it would
make the API much tidier.

Something like...

class Image {
public:
    /// single buffer RGB[A] ctor, packed rows, packed components
    Image(float *data, int width, int height, int nComps);

    /// single buffer RGB[A] ctor, unpacked rows, unpacked components
    Image(float *data, int width, int height, int nComps,
          off_t rowOffset, off_t componentOffset);

    /// multi buffer RGB ctor, packed rows
    Image(float *rdata, float *gdata, float *bdata, int width, int height);

    /// multi buffer RGB ctor, unpacked rows
    Image(float *rdata, float *gdata, float *bdata, int width, int height,
          off_t rowOffset);

    /// a bunch of accessors
    ....

    bool hasPackedComponents();
    bool hasPackedRows();
};

void ConvertColorspaceToColorspace(const Image &img,
                                   const std::string &inputColorspace,
                                   const std::string &outputColorspace);

Then in Nuke I could go...

OpenColour::ConvertColorspaceToColorspace(
    OpenColour::Image(redBuf, greenBuf, blueBuf, width, height),
    "BananaColourSpace", "CheeseColourSpace");

And in a packed host you would go...

OpenColour::ConvertColorspaceToColorspace(
    OpenColour::Image(buffer, width, height, nComps),
    "BananaColourSpace", "CheeseColourSpace");


-----------------------------


another half day of trawling the internet and reading through your
color.h header file and I have a better idea of what you are doing in
your library as is. I'm attaching my summary of the headers you sent
us which I have put up on our Wiki. I'm probably wrong on a few bits,
and would appreciate a quick glance over it to tell me where.

Two questions to start with,
- what is a Look?
- what is a Visualization as used by the Filmlook objects?


-----------------------------

Addl comments:
* implicit global state makes me nervous
* an arbitrary ApplyLut function that runs a lut (referred to by file
name!) on an image.

The following need to be talked over/understood,

1. it needs a big tidy, some random functions should go, some
functions should be made members and so on, but they know that,
2. images are currently all packed RGBA/RGB; support for planar
images has been mentioned, no mention of support for separate RGB
buffers,
3. all that global state makes me nervous, I would prefer it
wrapped into some pimpl context object,
4. what some classes/entities are for is not clear, ie: Look and
Visualisation,
5. the relationship between a FilmLook, a named display and a
visualisation is unclear,
6. configuration is via external XML files, which we might need to
wrap.




Re: Additional Feedback

Jeremy Selan <jeremy...@...>
 

These are all excellent suggestions, thanks!

We hope to address all of these major issues in the next version of
the header (0.5.1).




New header, 0.5.0, now posted

Jeremy Selan <jeremy...@...>
 

This addresses almost all major comments (I hope) with the prior
header.

The one (big) missing chunk is that it does not expose functions for
dynamically manipulating color configurations (examples being
OCSConfig->addColorspace, OCSConfig->writeToFile). These will be
coming in a future revision, and will allow for authoring apps /
dynamic color workflows.




Re: New header, 0.5.0, now posted

Rod Bogart <bog...@...>
 

Is the ColorSpace "role" just a database sorting tag, or does the
"role" impact the internal process of conversion?

RGB



Re: New header, 0.5.0, now posted

Larry Gritz <l...@...>
 

Random questions:

What is the role of the 'direction' parameter to
ApplyColorSpaceTransform? Why would the user not just reverse the
InputColorSpace and OutputColorSpace parameter ordering to get the
reverse transformation?

Any reason why 'long' rather than 'int' in so many places? (I'm not
necessarily objecting, just curious.)

Any role for OpenCL in addition to GLSL and Cg?

Do you think "FilmLook" might seem anachronistic in the future or hurt
adoption in non-film pipelines? DisplayLook? OutputLook?

Is it long/redundant to have ApplyASCColorCorrectionTransform,
ApplyFilmlookTransform, etc? If they were all called ApplyTransform,
they could still be distinguished by the argument types
(ASCColorCorrection& vs FilmLook&, etc.). A matter of style, not
critical, just bringing it up in case others prefer to avoid names
that are extra long only because they have redundancy in the naming
and arguments.

stride_t -- you never define it. Either change to ptrdiff_t, or
typedef it and use stride_t everywhere.

ImageView will surely need pixel accessor methods.

What does ColorSpace::isData() do?





Re: New header, 0.5.0, now posted

Jeremy Selan <jeremy...@...>
 

ApplyColorSpaceTransform does not use the roles.

GetHWFilmlook does. For example, if the fStopExposure arg is
specified, the exposure is adjusted internally in the SCENE_LINEAR
space (as specified by the role). If an ASCColorCorrection is
specified, this occurs in ROLE_COLOR_TIMING colorspace.

Additionally, plugins often rely on them. In our current Nuke
plugin, we have a LogConvert node that essentially does
ApplyColorSpaceTransform between ROLE_SCENE_LINEAR and
ROLE_COMPOSITING_LOG. This is convenient, as end-user compositors
often know they want a 'Log Convert', but don't really want to concern
themselves with the specifics of which conversions to use.

I will update the header docs to make the functions which rely on
roles more obvious.

Would you agree with this concept? Assigning 'roles' at a high level
really is useful for a facility, but it's a slippery slope for sure in
having too many of them, and/or making them too facility specific.

The alternative would be to expose the roles as additional arguments.
(For example, in the HW filmlook exposure call an arg would be to
explicitly pass in the fstop exposure colorspace).

-- Jeremy



Re: New header, 0.5.0, now posted

Jeremy Selan <jeremy...@...>
 

On Apr 22, 11:31 am, Larry Gritz <l....@...> wrote:
Random questions:

What is the role of the 'direction' parameter to
ApplyColorSpaceTransform? Why would the user not just reverse the
InputColorSpace and OutputColorSpace parameter ordering to get the
reverse transformation?
Yah, the simple version is what we've had for 6 years. I added the
direction parameter for completeness yesterday. On second thought,
I'll drop it. ;)


Any reason why 'long' rather than 'int' in so many places? (I'm not
necessarily objecting, just curious.)
I'm just super used to working in CPython land, so they often look
natural to me. (Python's native 'int' data type is a long, so using
them all over makes writing Python glue a tiny bit more braindead.)
But this is not a great reason to use longs; I'll roll it back.
http://docs.python.org/release/2.5/api/intObjects.html#l2h-384

Any role for OpenCL in addition to GLSL and Cg?
Sure, if anyone ever needs it, it would make good sense. Does OpenCL
support accelerated 3D texture lookups? If so, it'll map well.
Otherwise, a bit more complex but still doable. Probably not for 1.0
though...

Do you think "FilmLook" might seem anachronistic in the future or hurt
adoption in non-film pipelines? DisplayLook? OutputLook?
Yah, I agree FilmLook is not very forward-looking. I like your ideas;
I'll probably steal one of 'em.


Is it long/redundant to have ApplyASCColorCorrectionTransform,
ApplyFilmlookTransform, etc? If they were all called ApplyTransform,
they could still be distinguished by the argument types
(ASCColorCorrection& vs FilmLook&, etc.). A matter of style, not
critical, just bringing it up in case others prefer to avoid names
that are extra long only because they have redundancy in the naming
and arguments.
Yes, these names are redundant. I'll shorten them up. I think part
of the reason I went for a long name is that I've been looking at the
code for so long (at least a prior version) that I'm a bit dead to
it. Fresh eyes on an API help quite a bit.


stride_t -- you never define. Either change to ptrdiff_t, or typedef
them and use stride_t everywhere.
Fixed.

ImageView will surely need pixel accessor methods.
Not externally, is my hope. Once the user constructs an ImageView,
other than passing it in I don't want them to call anything on it.
Internally, I can decorate the Imp with any private accessors I'd like
to use.

What does ColorSpace::isData() do?
Colorspaces that are 'data' are a bit special. Basically, any
colorspace transforms you try to apply to them are ignored. (Think of
applying a ColorCorrection to an ID pass.) Also, in the Filmlook pass
they obey special 'data min' and 'data max' args. I will document it
as such.
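The described 'data' behavior could be sketched like this (a hypothetical helper, not the real implementation): transforms become no-ops for data colorspaces, so an ID pass comes through bit-identical.

```cpp
#include <cstddef>

// Hypothetical sketch: colorspace transforms are ignored for 'data'
// colorspaces (ID passes, motion vectors, etc.), per the isData() policy.
struct ColorSpace {
    bool isData;
};

void ApplyGain(const ColorSpace& cs, float* pixels, size_t n, float gain) {
    if (cs.isData) return;  // data colorspaces pass through untouched
    for (size_t i = 0; i < n; ++i) pixels[i] *= gain;
}
```

This keeps the caller's code uniform: it can push every AOV through the same transform stack, and the data channels protect themselves.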

Thanks for all the comments! They are hugely appreciated.




New header, 0.5.1, now posted

Jeremy Selan <jeremy...@...>
 

Addresses Larry + Rod's comments (mostly).

Coming soon: a mutable ColorConfig + better GLSL version handling.
(And after that, a library that builds.)




[ocs-dev] Foundry Questions

brunobignose <bruno.j....@...>
 

Hi All,

this is a collection of questions that we came up with at The Foundry
after sitting down with various product managers and techy types....

What is your internal representation?
- we know you want to present this as a black box but we would like
to know some technical details
- what is the scope of the manipulations? Are they all 3D? Mixed 1D
and 3D?
- how do you represent
  - transforms
  - display devices
  - looks

What third party LUTs do you manage?
- any open source ones?
- what happens if a user doesn't have a license for a proprietary lut
library?

How does one go about generating...
- LUTs?
- display profiles?
- looks?
- anything else in there...

Are you looking at making it extensible?

How does this all play with ICC profiles?

What level of thread safety is there?

Invertibility,
- how guaranteed is that for the general colour transforms?
- can you flag invertible transforms in any way?

Distributing colour management XML and LUTS
- have you thought of how to pack XML + LUTS + whatever into a single
file for distribution?
- zip/tgz/voodoo?

OpenGL fragment shaders
- how do we/you manage any LUTs that need to go along with the
shaders you create?
- have you thought about OpenCL code generation?
- we want to be able to supply our own names for objects in generated
code so as to avoid name collision, eg: the function name,

Ongoing support
- who is responsible for support?
- how do we manage folding in fixes?

Are you shipping it with any vanilla XFMs that you have created at
Sony?
- and not just log/lin
- device profiles?

thanks

Bruno Nicoletti


[ocs-dev] Re: Foundry Questions

Jeremy Selan <jeremy...@...>
 

On May 5, 2:30 pm, brunobignose <bruno.j....@...>
wrote:

What is your internal representation?
 - we know you want to present this as a black box but we would like
to know some technical details

My discussion of the internals as a 'black box' is meant to be from
the perspective of a simple client application, not from folks
interested in color management or from 'authoring' apps. (Authoring
apps being applications that can generate / introspect into color
configurations.)

 - what is the scope of the manipulations? Are they all 3D? Mixed 1D and 3D?

The internal transformations currently include 1D luts (normal + log
allocation), matrix operations, gamma, and 3D luts. They can be
arbitrarily chained and used in the forward or inverse directions
(with the exception of 3D luts, where it is left to the user to
provide an inverse). We are also adding a group transform, which will
be useful in the API.

 - how do you represent
       - transforms
       - display devices
       - looks
Please see the file config_v0.ocs; you will see examples of all 3.

ColorSpaces are ordered lists of atomic transforms (1D luts, mtx,
gamma, 3D lut, etc.). They can be used in any order, as many times as
necessary.
DisplayDevices are just labels that define additional blocks of
transforms.
"Looks" are currently a bit in flux. If a look is constant across a
show (even if there are multiple looks), it is typically rolled into
the DisplayDevice block, as this is more convenient. Examples of this
would be a show-wide 'Warm Look' or 'DI Gamut Check'. If a look is
shot-varying, we can use either a single 3D lut or an ASC color
correction. (However, before release I expect this will get more
general and allow for an arbitrary transform type.)

What third party LUTs do you manage?
Currently the list is small: just some internal formats and .3dl
files. But we hope to support all commercial ASCII lut formats
before 1.0.

 - any open source ones?
What open source lut formats are you referring to?

 - what happens if a user doesn't have a license for a proprietary lut
library?
It is currently not obvious how we will add support for proprietary
(encrypted) lut formats, Truelight for example. Some companies allow for
the export of ascii luts, but others don't. One workaround would be to
provide a lut plugin interface, so 3rd-party color management
companies could write OCS plugins that do the appropriate license
checks before running a lut transform. A plugin API is probably
outside the scope of a 1.0 release, but could be added to 1.1 without
affecting binary compatibility.

How does one go about generating...
 - LUTs?
Any way you want. This is not a problem we're looking to solve for
1.0, and it treads deep into philosophical / 'secret sauce' issues.
Internally at SPI, we have a bunch of raw measurement data (such as
film stock profiles, camera exposure sweeps, etc) that guide this
practice. You could make a whole class on this, I'm sure. There are
also a bunch of different philosophies on how this should be done,
many of which are equally valid. My gut feel is that if this project
succeeds, we'll end up with a whole mailing list devoted to this
topic.

Please read the color space comments in the file config_v0.ocs; it
provides a bit more info.

 - display profiles?
Same as above.

 - looks?
These are typically delivered by DI houses, and are director / DP
guided. If a user is not provided a look by their client, they
probably don't have to worry about one.

 - anything else in there...
Remember that we'll provide a few good default configurations, but I
expect all major facilities to define their own. And we shouldn't
dictate how they do this. The default configurations won't suck
either; they're really what we use as the starting point for films.

Are you looking at making it extensible?
Not sure what you mean here. Anyone can add or create a custom
colorspace configuration. Anyone can take one of the posted
configurations and add new color spaces or devices to it. And the
whole library will be open source, so the internal details won't be
secret. So... "yes?"


How does this all play with ICC profiles?
We hope to add the ability to export ICC profiles, for lut preview in
tools such as Photoshop. Other than that, they are not closely
related. We currently have no intent to add support for reading ICC
profiles.


What level of thread safety is there?
The library is fully thread safe (and efficient in a multi-threaded
context).

I am currently adding the color configuration mutable API, and am
grappling with a few corner issues such as 'what if one thread
manipulates the color configuration while another is processing
pixels?' But this is a mental correctness issue, not a crashing issue.


Invertibility,
 - how guaranteed is that for the general colour transforms?
Color configurations specify whether each operation is invertible or
not. An 'auto' mode is provided for types that are easy to perfectly
invert, but we don't handle 'non-obvious' inversion cases (i.e., 3D
luts).

Note that our 1D lut processing is really pretty clever (clever in a
good way), and guaranteed to have perfect invertibility. For
example, you can specify a 1d lut that goes from a low dynamic range
allocation to an arbitrary high-dynamic range allocation, and still
get perfect invertibility. We also have a validation script that
prints out info about the current color configuration; this has a step
that walks through each colorspace and validates perfect
invertibility (flagging otherwise).
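The property described above can be illustrated with a small sketch (hypothetical code, not the actual validation script): a monotonic 1D lut applied by piecewise-linear interpolation can be inverted by looking up through the same table with the input and output axes swapped, and the round trip holds to floating-point precision even when the lut maps a low-dynamic-range domain to a high-dynamic-range one.

```python
import bisect

def lut1d_apply(xs, ys, v):
    """Piecewise-linear lookup of v through the 1D LUT (xs -> ys).

    Both xs and ys are assumed strictly increasing (monotonic)."""
    if v <= xs[0]:
        return ys[0]
    if v >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, v) - 1
    t = (v - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

# A log-like shaper: low dynamic range in, high dynamic range out.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [0.0, 0.1, 0.4, 2.0, 16.0]

# Round-trip validation step: forward, then inverse with axes swapped.
for v in [0.0, 0.1, 0.33, 0.7, 1.0]:
    rt = lut1d_apply(ys, xs, lut1d_apply(xs, ys, v))
    assert abs(rt - v) < 1e-12
```

A validation pass over a configuration would simply run this round-trip check (on a dense sample grid) for each colorspace and flag any that fail.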

 - can you flag invertible transforms in any way?
Yes.

Distributing colour management XML and LUTS
 - have you thought of how to pack XML + LUTS + whatever into a single
file for distribution?
Yes, we're starting to think about it. Internally a directory works
great, but we're open to a single file as well. Zip or tgz both sound
great; we have no preference. I'd probably do a timing check and pick
whichever is faster to decompress live.

 - zip/tgz/voodoo?

OpenGL fragment shaders
 - how do we/you manage any LUTs that need to go along with the
shaders you create?
Only a single 3d lut is needed client side for the shader. The client
is responsible for uploading to the card as necessary, though we
provide a cacheid (cheap to compute) so it can be recomputed /
uploaded as infrequently as possible.
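The cacheid idea above can be sketched like this (hypothetical names; an md5 of the lut contents stands in for whatever cheap fingerprint the library actually computes). The client compares ids and only re-uploads the 3D lut texture when the id changes:

```python
import hashlib
import struct

def cacheid(lut_values):
    """Cheap fingerprint of a flat list of float LUT entries."""
    packed = struct.pack(f"{len(lut_values)}f", *lut_values)
    return hashlib.md5(packed).hexdigest()

uploaded_id = None

def maybe_upload(lut_values):
    """Return True if a (simulated) GPU upload was needed."""
    global uploaded_id
    cid = cacheid(lut_values)
    if cid == uploaded_id:
        return False  # same LUT already on the card; skip the upload
    uploaded_id = cid  # ...real code would upload the 3D texture here
    return True

lut = [0.0, 0.5, 1.0]
assert maybe_upload(lut) is True                 # first time: upload
assert maybe_upload(lut) is False                # unchanged: skipped
assert maybe_upload([0.0, 0.6, 1.0]) is True     # changed: re-upload
```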

 - have you thought about OpenCL code generation?
Not yet. Are you interested in OpenCL?

 - we want to be able to supply our own names for objects in generated
code so as to avoid name collision, eg: the function name,
Yes, this is already part of the base API. Please see the header.

Note that SideFX has already pointed out that we're going to have to
have finer granularity in our specification of glsl profiles (per
glsl version); this has yet to be added, but will be soon.

On going support
 - who is responsible for support?
 - how do we manage folding in fixes?
It's an open source project, so at a minimum I'm responsible for
support. But my hope is that before 1.0 we develop a critical-mass
community of those interested, who can all work on this as 'developer
equals' (multiple people having 'root' checkin privileges). I always
get suspicious of open source projects where one person approves all
checkins. My hope is that a few other reputable color folks will step
up and we can all be responsible for validating new code checkins,
etc. Of course, I'll be reading every line of code, but having more
folks on board helps prevent 'hit by bus' scenarios (or, more likely,
the 'caught in a hard production crunch-time' scenario).

Support on the artist / usage side will be open to a much broader
community, just as nuke-users is.

Are you shipping it with any vanilla XFMs that you have created at
Sony?
 - and not just log/lin
 - device profiles?
What's an XFM? A configuration, I presume? (I'm now calling them .ocs
files.)

Yes, we're gonna ship all input device profiles we've got, not just
log/lin. The only camera we've never characterized is RED, so we'd
have to leave that to someone else. (Or could adapt the nuke one for
use in our environment).

For device profiles, the only ones we ever use are sRGB, Rec. 709, and
P3 DLP. These will be included.


thanks

Bruno Nicoletti


[ocs-dev] Re: Foundry Questions

Jeremy Selan <jeremy...@...>
 

Oh, and one thing to keep in mind; the current version of this library
already works great in Nuke. :)

We've exposed 3 new nodes: ColorSpaceConvert / LogConvert /
DisplayTransform. The first two are used in the comp graph natively,
and the final one is used in the Input Process group to do filmlook
conversions. We also have custom read / write nodes that use our
color processing instead of the nuke-native processing. All of these
plugins are planned to be open-sourced as well, so right off the
bat we'll have a pretty good workflow example (and clean example code
on how to do plugin client integration).

Obviously, native application support is the direction we're looking
to head in the long term, but I wanted to clarify that direct
manufacturer support is by no means required for Open Color Space to
be useful at launch.

-- Jeremy


Re: [ocs-dev] Re: Foundry Questions

Bruno Nicoletti <bruno.j....@...>
 

Hi Jeremy,

thanks for the reply. A few more questions....


 - how do you represent
       - transforms
       - display devices
       - looks
Please see the file config_v0.ocs, you will see examples of all 3.
These reference various .spiXXX files, are you going to open up the
format of those files?


 - any open source ones?
What open source lut formats are you referring to?
Little CMS was the one mumbled about here. Not sure how important this
is; I need to get back to our folks.

Are you looking at making it extensible?
Not sure what you mean here.  Anyone can add or create a custom
colorspace configuration.  Anyone can take one of the posted
configurations and add new color spaces or devices to it.  And the
whole library will be open source so the internal details wont be
secret.  So,... "yes?"
By some sort of plugin method, you mentioned this in your reply with
respect to proprietary CMS systems.

What level of thread safety is there?
The library is fully thread safe. (and efficient in a multi-thread
context).

I am currently adding the color configuration mutable API, and am
grappling with a few corner issues such as ' what if one thread
manipulates the color configuration while another is processing
pixels?' but this is a mental correctness issue, not a crashing issue.
I'd simply say you couldn't process while manipulating the configuration.

 - have you thought about OpenCL code generation?
Not yet. Are you interested in OpenCL?
We are playing with OpenCL and it might be useful to have that at some
point. Not a high priority for us at the moment.


Are you shipping it with any vanilla XFMs that you have created at
Sony?
 - and not just log/lin
 - device profiles?
What's an XFM?  A configuration I presume? (im now calling them .ocs
files)

Yes, we're gonna ship all input device profiles we've got, not just
log/lin.  The only camera we've never characterized is RED, so we'd
have to leave that to someone else. (Or could adapt the nuke one for
use in our environment).

For device profiles the only ones we ever use are srgb, r709, and p3
dlp. These will be included.
Sorry, 'XFM' is my bizarre shorthand for 'transform', but that still
answers it.



I suppose what we are missing are use cases of the library in
practice. I have a pretty good guess as to how it would work at big
film houses, film houses collaborating and film houses sharing work
with smaller shops, but clear descriptions of that would be great.

I guess my major concern is that if The Foundry is using this as the
basis of a cross-product CMS, we still need to make it work for the
small shops in isolation, and they need tools to do at least the basic
stuff that the big boys have in-house specialists do. So display
calibration profiles and so on. Have you had any thoughts as to that?


--
Bruno Nicoletti


[ocs-dev] Re: Foundry Questions

Jeremy Selan <jeremy...@...>
 

Sorry for the delay!

Progress on the library is going well, I hope to have a code drop
ready in the next few weeks.

To answer your latest few questions...


These reference various .spiXXX files, are you going to open up the
format of those files?
Yes and no. We're going to open up the formats in the sense that the
library will have built-in reader support (the format details are
incredibly uninteresting; just another brain-dead ascii text format).
However, we're not going to push for anyone else to adopt them.
Internally, once we get support for commercial lut formats, we'll
probably try to retire the spi formats.


I suppose what we are missing are use cases of the library in
practice. I have a pretty good guess as to how it would work at big
film houses, film houses collaborating and film houses sharing work
with smaller shops, but clear descriptions of that would be great.

I guess my major concern is that if The Foundry is using this as the
basis of a cross-product CMS, we still need to make it work for the
small shops in isolation, and they need tools to do at least the basic
stuff that the big boys have in-house specialists do. So display
calibration profiles and so on. Have you had any thoughts as to that?
I'm not really familiar with how small houses (where someone is tasked
only part-time) deal with the issue of color management. Maybe we
should ask some contacts at smaller houses for more detail about their
'off the shelf' color pipelines? I think this route would be
particularly worth the effort, as the small houses (who don't have
someone tasked full-time to color) would have the most to gain from
having this library succeed.

-- Jeremy


[ocs-dev] Got Luts?

Jeremy Selan <jeremy...@...>
 

Hello!

I'm at the stage where I would like to get a survey of lut file
formats (1d, 3d, matrix, etc) that folks actually use in the wild.

If you commonly use a lut format at your facility, or define one in
your software package, I would hugely appreciate it if you could
upload example files and/or spec documents so I can begin work on the
importers.

Even if it's a proprietary format, I'd love to take a peek so I can
get a sense of the scope of formats out in the wild. (I need to make
sure the internal API is rich enough to encompass current use cases).

Many formats have been mentioned previously, including:
- Truelight ASCII .cub 3D lut
- Assimilate Scratch (Arri /Kodak .3dl)
- Iridas Framecycler (Iridas .cube)
- Autodesk (MESH3D, .lut, etc.)

For the majority of these, I do not have either example files or
specifications. Please help! :)

Also, does anyone know if the majority of lut formats are identifiable
by their extension? Are there common extension conflicts? Ideally, I'd
like to have format readers registered based on file extension,
and only if that fails give each lut loader a chance to read it
(similar to how reader plugins work in Nuke).

(Note that I'm not assuming 1 file == 1 lut. We will support readers
where one file can generate multiple transforms, such as a 1d shaper
lut -> 3d lut -> 1d shaper lut.)
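The extension-first, sniff-second registration scheme described above could be sketched as follows (all names hypothetical; real readers would parse actual lut data instead of returning tags):

```python
import os

READERS = {}  # file extension -> loader function

def register(ext, loader):
    READERS[ext.lower()] = loader

def load_lut(path, contents):
    """Try the reader registered for the extension first, then let
    every reader take a sniff at the contents."""
    ext = os.path.splitext(path)[1].lower()
    loader = READERS.get(ext)
    if loader is not None:
        try:
            return loader(contents)
        except ValueError:
            pass  # registered reader refused; fall through to sniffing
    for loader in READERS.values():
        try:
            return loader(contents)
        except ValueError:
            continue
    raise ValueError(f"no reader could parse {path}")

def read_3dl(text):
    """Toy .3dl reader: accepts only content it recognizes."""
    if not text.startswith("3DMESH"):
        raise ValueError("not a .3dl file")
    return ("3dl", text)

register(".3dl", read_3dl)
assert load_lut("grade.3dl", "3DMESH ...")[0] == "3dl"   # by extension
assert load_lut("grade.lut", "3DMESH ...")[0] == "3dl"   # by sniffing
```

A reader returning a list of transforms (rather than a single lut) would cover the shaper-lut -> 3d-lut -> shaper-lut case noted above.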


Re: [ocs-dev] Got Luts?

"Nathaniel Hoffman" <na...@...>
 

Jeremy,

There are two LUT formats I know of that are used in game development.

One is a 2D image format (any lossless image format will do - we've used
BMP, TGA and PNG) with a 2D representation of the LUT where the planes
along the 3rd axis have been placed next to each other. So a 32x32x32 LUT
would turn into a 1024x32 2D image. A common usage is for an identity LUT
in this format to be placed next to an ungraded screenshot, both
manipulated in Photoshop or some other color manipulation package, and
then the colorized LUT is extracted.
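The strip layout Naty describes has simple index math: the blue planes of a 32x32x32 lut sit side by side in a 1024x32 image, so a 3D index maps to a 2D pixel coordinate as below. (The assignment of red/green/blue to the x/y/slice axes is an assumption here; the actual convention varies per engine.)

```python
SIZE = 32  # lut edge length; strip image is (SIZE*SIZE) x SIZE pixels

def strip_coords(r, g, b):
    """Map a 3D LUT index (r, g, b) to (x, y) in the 2D strip image."""
    x = b * SIZE + r  # each blue slice occupies its own SIZE-wide tile
    y = g
    return x, y

assert strip_coords(0, 0, 0) == (0, 0)
assert strip_coords(31, 31, 0) == (31, 31)     # last texel, first slice
assert strip_coords(0, 0, 1) == (32, 0)        # first texel, second slice
assert strip_coords(31, 31, 31) == (1023, 31)  # far corner of the strip
```

Reading the lut back is just the inverse of this mapping, which is what makes the "paint the identity strip in Photoshop, then extract it" workflow possible.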

The other format is a DDS (Microsoft DirectDraw Surface) file with a 3D
texture in it, typically uncompressed. This is usually loaded directly
into the game engine.

These are both a bit ad-hoc and not really standardized, so I don't know
if they are relevant for OCS. If they are, let me know and I can try to
work up something more like an actual spec for each of these.

Thanks,

Naty
