
Re: New header, 0.5.0, now posted

Larry Gritz <l...@...>
 

Random questions:

What is the role of the 'direction' parameter to
ApplyColorSpaceTransform? Why would the user not just reverse the
InputColorSpace and OutputColorSpace parameter ordering to get the
reverse transformation?

Any reason why 'long' rather than 'int' in so many places? (I'm not
necessarily objecting, just curious.)

Any role for OpenCL in addition to GLSL and Cg?

Do you think "FilmLook" might seem anachronistic in the future or hurt
adoption in non-film pipelines? DisplayLook? OutputLook?

Is it long/redundant to have ApplyASCColorCorrectionTransform,
ApplyFilmlookTransform, etc? If they were all called ApplyTransform,
they could still be distinguished by the argument types
(ASCColorCorrection& vs FilmLook&, etc.). A matter of style, not
critical, just bringing it up in case others prefer to avoid names
that are extra long only because they have redundancy in the naming
and arguments.
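
For illustration only (none of these signatures are real, just the overload idea):

class ASCColorCorrection;   // existing types, forward-declared for the sketch
class FilmLook;

// one name; overload resolution picks the transform by argument type
void ApplyTransform(const ASCColorCorrection &cc, float *pixels, long numPixels);
void ApplyTransform(const FilmLook &look, float *pixels, long numPixels);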

stride_t -- you never define it. Either change it to ptrdiff_t, or typedef
it and use stride_t everywhere.

ImageView will surely need pixel accessor methods.
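
Something along these lines, perhaps (a rough sketch only, with stride_t typedef'd as suggested above and strides measured in floats; none of these signatures are from the actual header):

#include <cstddef>

typedef std::ptrdiff_t stride_t;   // one possible definition for the missing typedef

class ImageView
{
public:
    ImageView(float *data, long width, long height,
              stride_t pixelStride, stride_t rowStride)
        : m_data(data), m_width(width), m_height(height),
          m_pixelStride(pixelStride), m_rowStride(rowStride) {}

    long width() const  { return m_width; }
    long height() const { return m_height; }

    // pointer to the first channel of pixel (x, y)
    float *pixel(long x, long y)
        { return m_data + y * m_rowStride + x * m_pixelStride; }
    const float *pixel(long x, long y) const
        { return m_data + y * m_rowStride + x * m_pixelStride; }

private:
    float *m_data;
    long m_width, m_height;
    stride_t m_pixelStride, m_rowStride;
};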

What does ColorSpace::isData() do?





Re: New header, 0.5.0, now posted

Rod Bogart <bog...@...>
 

Is the ColorSpace "role" just a database sorting tag, or does the
"role" impact the internal process of conversion?

RGB

On Thu, Apr 22, 2010 at 10:13 AM, Jeremy Selan <jeremy...@gmail.com> wrote:
This addresses almost all major comments (I hope) with the prior
header.

The one (big) missing chunk is that it does not expose functions for
dynamically manipulating color configurations (examples being
OCSConfig->addColorspace, OCSConfig->writeToFile). These will be
coming in a future revision, and will allow for authoring apps /
dynamic color workflows.




New header, 0.5.0, now posted

Jeremy Selan <jeremy...@...>
 

This addresses almost all major comments (I hope) with the prior
header.

The one (big) missing chunk is that it does not expose functions for
dynamically manipulating color configurations (examples being
OCSConfig->addColorspace, OCSConfig->writeToFile). These will be
coming in a future revision, and will allow for authoring apps /
dynamic color workflows.




Re: Additional Feedback

Jeremy Selan <jeremy...@...>
 

These are all excellent suggestions, thanks!

We hope to address all of these major issues in the next version of
the header (0.5.1).




Additional Feedback

Jeremy Selan <jeremy...@...>
 

<Jeremy: this has been edited to remove 'personal' comments>

Commenter:

Answering your questions...

* Is this project conceptually useful to your organization?

Yes, we would love a cross-application colour library that gives
consistent results across all applications with minimal user effort and
developer pain.

* Would you use it? (or recommend using it?)

Probably. We'd like to understand it in a bit more detail, and we
would need to sit down and discuss how/when/where we would use it.

* Are there design choices we've made that limit its usefulness?

Straight up, for Nuke to use the CPU path, we'd need to pass in
separate pointers for R, G and B data, as they are allocated
separately and do not live at fixed strides from each other.

* Are there commercial tools you'd like us to work on support for?

Truelight

* Do you know of something better (open source), which we're not
aware of?

No

################################################################################

I've looked over the lightly commented header file, the explanatory
text and the comments from others so far. While I understand that this
is the API at the moment and will no doubt change, I'm not sure of the
exact relationship between objects/entities within it. While I get the
basic ideas, as in what a colour space is and what colour timing is, I'm
not sure of some of the details. E.g.: what is the difference between a
Look and a FilmLook? Are colour spaces simply there to deal with things
like log/lin, or do they extend to 3D colour conversion? Do you have a
FAQ/overview document somewhere that goes into slightly more detail on
the API?

################################################################################
On the general engineering front.

I've seen it mentioned in the posts Jon forwarded to me, but a C API
is more robust for code in a dynamic library. I generally prefer C++
myself, but you could have a core C API and wrap it with some very
thin C++ classes as an option.

If you have

class ASCColorCorrection {...};

and a function

void ApplyASCColorCorrection(...., const ASCColorCorrection &c, ...);

why not make that global-scope function a member function 'apply' of
the class?
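
Roughly what I have in mind (placeholder signatures only, not the actual header):

class ASCColorCorrection
{
public:
    // apply the correction in place to an interleaved rgb buffer
    void apply(float *pixels, long numPixels) const;
    // ... existing members ...
};

// so instead of
//     ApplyASCColorCorrection(pixels, numPixels, c);
// you would write
//     c.apply(pixels, numPixels);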

If you start adding row and component strides as well as separate
buffer pointers to the API, it will make for a complex set of calls to
apply a conversion/look to an image. I'd suggest a very lightweight
wrapper class for images that packages all that up, and you pass an
object of that class into the relevant functions. Given that you are
talking about supporting fixed component strides already, this
shouldn't make the implementation any more difficult, and it would make
the API much tidier.

Something like...

class Image {
public:
    /// single buffer RGB[A] ctor, packed rows, packed components
    Image(float *data, int width, int height, int nComps);

    /// single buffer RGB[A] ctor, unpacked rows, unpacked components
    Image(float *data, int width, int height, int nComps,
          off_t rowOffset, off_t componentOffset);

    /// multi buffer RGB ctor, packed rows
    Image(float *rdata, float *gdata, float *bdata, int width, int height);

    /// multi buffer RGB ctor, unpacked rows
    Image(float *rdata, float *gdata, float *bdata, int width, int height,
          off_t rowOffset);

    /// a bunch of accessors
    ....

    bool hasPackedComponents();
    bool hasPackedRows();
};

void ConvertColorspaceToColorspace(const Image &img,
                                   const std::string &inputColorspace,
                                   const std::string &outputColorspace);

Then in Nuke I could go...

OpenColour::ConvertColorspaceToColorspace(
    OpenColour::Image(redBuf, greenBuf, blueBuf, width, height),
    "BananaColourSpace", "CheeseColourSpace");

And in a packed host you would go...

OpenColour::ConvertColorspaceToColorspace(
    OpenColour::Image(buffer, width, height, 3),
    "BananaColourSpace", "CheeseColourSpace");


-----------------------------


Another half day of trawling the internet and reading through your
color.h header file, and I have a better idea of what you are doing in
your library as it stands. I'm attaching my summary of the headers you
sent us, which I have put up on our Wiki. I'm probably wrong on a few
bits, and would appreciate a quick glance over it to tell me where.

Two questions to start with,
- what is a Look?
- what is a Visualization as used by the Filmlook objects?


-----------------------------

Addl comments:
* implicit global state makes me nervous
* an arbitrary ApplyLut function that runs a lut (referred to by file
name!) on an image.

The following need to be talked over/understood:

1. it needs a big tidy (some random functions should go, some functions
should be made members, and so on), but they know that,
2. images are currently all packed RGBA/RGB; support for planar images
has been mentioned, but there is no mention of support for separate RGB
buffers,
3. all that global state makes me nervous; I would prefer it wrapped
into some pimpl context object,
4. what some classes/entities are for is not clear, i.e. Look and
Visualisation,
5. the relationship between a FilmLook, a named display and a
visualisation is unclear,
6. configuration is via external XML files, which we might need to
wrap.




Re: Previous comments (for posterity sake)

Jeremy Selan <jeremy...@...>
 

---- Commenter

You mention rgb and rgba. How is alpha handled? Is there a policy?

---- Jeremy

Current policy is that alpha is always passed unchanged. For
applications which wish to be more clever (such as special premult /
unpremult handling), the hope is for this library to continue to be
ignorant of the issue.

In the next rev of the API, this will be explicitly documented in the
header. (numChannels will be replaced with a pixelStride, and we will
explicitly mention that the calls only modify the rgb triples for
processing). This will also let us handle >4 channel images.
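
For illustration, the processing contract amounts to something like the following (a sketch only, not our actual code):

// process only the rgb triple of each pixel; alpha and any extra
// channels are never read or written. 'pixelStride' is the number of
// floats from one pixel to the next (4 for rgba, more for >4 channels).
void ProcessRGBTriples(float *pixels, long numPixels, long pixelStride)
{
    for (long i = 0; i < numPixels; ++i)
    {
        float *rgb = pixels + i * pixelStride;
        // ... color transform applied to rgb[0], rgb[1], rgb[2] only ...
    }
}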


---- Commenter

When you decline support for spatially varying color xforms, do you
mean image space? So, things like histogram-based tone curves, or
whatever. Or did you mean no RGB space deformation operators?

---- Jeremy

Yes, I mean image-space varying transforms are not supported.
Histogram-based operations, while they don't currently exist, are fine.
There is just a lot of code internally that processes in scanlines, and
I would prefer to keep all pixels independent internally. This is also
required in the GPU pipeline, which makes this assumption as it bakes
the color transform stack into 3d lut(s).
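
To make the 3d lut baking concrete, it amounts to something like this (a simplified sketch, not our actual code; 'applyStack' stands in for the internal transform evaluation):

#include <cstddef>
#include <vector>

// Bake a pixel-independent transform stack into a 3d lut. Because the
// transform only ever sees a single rgb triple, the bake is well defined.
std::vector<float> BakeTransformInto3DLut(int edgeLen,
                                          void (*applyStack)(float rgb[3]))
{
    std::vector<float> lut(3 * (std::size_t)edgeLen * edgeLen * edgeLen);
    for (int b = 0; b < edgeLen; ++b)
    for (int g = 0; g < edgeLen; ++g)
    for (int r = 0; r < edgeLen; ++r)
    {
        float rgb[3] = { r / (edgeLen - 1.0f),
                         g / (edgeLen - 1.0f),
                         b / (edgeLen - 1.0f) };
        applyStack(rgb);   // must not depend on image position
        std::size_t i = 3 * (std::size_t)(r + edgeLen * (g + edgeLen * b));
        lut[i + 0] = rgb[0];
        lut[i + 1] = rgb[1];
        lut[i + 2] = rgb[2];
    }
    return lut;
}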

---- Commenter

After reading through the info a couple times, I am wondering about a
few key terms. You use "reference" and "linear" and "log"
occasionally, and I think these need to be dealt with more strictly.

---- Jeremy

Agreed. I will scrub the document of all references to linear or
log, except in sections that are explicitly discussing workflow
conventions. (Where I feel they are appropriate, as long as the terms
are qualified / clarified at their first introduction).

---- Commenter

It appears that the "reference" color space (the working space, or
connection space) is not ever specified. It only exists as a by-product
of the to_linear and from_linear methods in the config file. Since that
working space can (conceptually) be arbitrary, it seems weird that all
the code refers to it as linear. Now, I'm not suggesting anyone use
something other than linear, but you do in the README. So, the extreme
request is: you should change all the "linear"s to "reference"s. Which
might be a bad idea, but take a few moments and read through the
material you sent doing a mental replace of "reference" for "linear". I
feel that you will find that the Reference space needs to be specified.

---- Jeremy

This is an excellent suggestion, and I will definitely take it to its
logical conclusion. I will go ahead and remove all references to
linear from the API and configuration file and substitute the term
'reference' as appropriate. Thinking about this further, the two API
calls which probably caught your attention, ConvertColorspaceToLinear
and ConvertLinearToColorspace, are probably unnecessary anyway. The
more general function ConvertColorspaceToColorspace should be
sufficient, and would remove a lot of ambiguity from the API.

Along the same topic, there are a bunch of calls to get the names of
colorspaces corresponding to certain use cases: GetLinearColorspace,
GetLogColorspace, GetColorTimingColorspace, etc.

This is the only place colorspace names are 'enshrined' in any way, so
I will probably refine this call to be more abstract. Something more
like:

enum ColorspaceRole { SCENE_LINEAR, COMPOSITING_LOG, COLOR_TIMING, DATA };

GetColorspaceForRole(ColorspaceRole role);

... where the mapping from role to colorspace is defined in the show
configuration file.

Having a few predefined roles (rather than accepting arbitrary names
as roles) has proven to be a useful abstraction. For example, if we
were to write a LogConvert node for Nuke, with pre-defined role enums
(which will be documented) our LogConvert node could assume something
like:

ConvertColorspaceToColorspace( GetColorspaceForRole(SCENE_LINEAR),
GetColorspaceForRole(COMPOSITING_LOG) ), etc.

In our experience new roles have not come along that often, and new
ones can be added at any time.

---- Commenter

On the phone we discussed the idea that two shows might be both
linear, but have different color primaries (e.g. wide versus 709). I
realize you don't work that way now, in particular avoiding reading
assets from one config into another config. But where does the color
primary part go? Is there an example buried in the sample config?

---- Jeremy

We do not have an example of this in a current config, so here's a
mockup of what it could look like:

// Show 'A', native r709
<config>
    <colorspace name='lin709'></colorspace>
    <colorspace name='linP3'>
        <to_reference>
            <transform file='p3_to_709.mtx' direction='forward'/>
        </to_reference>
        <from_reference>
            <transform file='p3_to_709.mtx' direction='inverse'/>
        </from_reference>
    </colorspace>
</config>

// Show 'B', native P3
<config>
    <colorspace name='linP3'></colorspace>
    <colorspace name='lin709'>
        <to_reference>
            <transform file='p3_to_709.mtx' direction='inverse'/>
        </to_reference>
        <from_reference>
            <transform file='p3_to_709.mtx' direction='forward'/>
        </from_reference>
    </colorspace>
</config>


Assuming your linear files are tagged appropriately with "linP3" or
"lin709" (either as tokens in the header or by dynamically inspecting
other attributes), things will work out correctly.

No matter which show you are set to use, images will be transformed
into the appropriate working space as expected. Of course, this assumes
that each show is set up in a complementary manner to allow this. This
assumption would be broken if, for instance, two shows disagreed on
what the p3_to_709 matrix was.

---- Commenter

The other word is "log", which you use here: "converting log to linear
is not knowable". I didn't follow this logic. There is a lg10.lut1d
file somewhere; why is it more special than any other lut file?

---- Jeremy

Ah, lg10.lut1d is not more special than any other lut file. I was
trying to get across the concept that the contents of the
<to_reference> / <from_reference> blocks are not queryable through the
API by design. (The transforms are essentially a single black box
externally).

In terms of generality, there are no magic named colorspaces
whatsoever inside the API. All configuration info comes from the
single file. As discussed above, we do expect to allow for pre-defined
Colorspace "Labels / Roles" which are geared towards the vfx /
animation pipeline, but I think there is a strong argument in favor of
this.




Re: Previous comments (for posterity sake)

Jeremy Selan <jeremy...@...>
 

---- Commenter 1

This all looks really cool. I don't know what your budget for
getting this ready for a wider world is, but you may want to consider
removing the STL from the code. Some people are very particular about
what parts of the C++ standard they use. Also, you may want to
reconsider throwing exceptions. The API is very nice, I'd be happy to
call this code.

---- Jeremy

Thanks!

Does anyone else have concerns with the API being C++ (as opposed to
pure C)? My inclination is to leave it as C++ for now, as most of the
initial apps / plugin APIs we're targeting right off the bat support
C++. And I think the average user probably won't be looking at the code
very often. My hope is that they'll follow some simple steps to
install the library and any associated plugins they care about, and
won't have to think about the language at all. I also think switching
to a pure C API may be a bit more work than I'd prefer at this
stage. OpenEXR solves this in a clever manner, with its
ImfCRgbaFile.h compatibility header. Should it prove useful, we could
always add a C wrapper layer in that vein.

How do people feel about exceptions? I'm comfortable with them, but I
understand that many teams avoid using them. I would be happy to swap
out the exception approach for a success / fail return code if people
prefer. (The disadvantage being that good error reporting is harder
in that case: you can always use a get/set errorstring approach,
but I've never loved that in a multi-threaded environment).
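
To sketch what an OpenEXR-style C layer could eventually look like (every Ocs* name below is made up, and the C++ signature is simplified for the sketch), the exception-to-return-code translation is straightforward:

#include <cstring>
#include <exception>
#include <string>

// assumed C++ entry point (signature simplified for the sake of the sketch)
void ConvertColorspaceToColorspace(float *pixels, long numPixels,
                                   const std::string &inputColorspace,
                                   const std::string &outputColorspace);

extern "C" {

enum OcsStatus { OCS_OK = 0, OCS_ERROR = 1 };

// translate exceptions into a return code plus a caller-supplied error
// buffer, which avoids global error state and stays thread-friendly
OcsStatus OcsConvertColorspaceToColorspace(float *pixels, long numPixels,
                                           const char *inputColorspace,
                                           const char *outputColorspace,
                                           char *errMsg, int errMsgLen)
{
    try
    {
        ConvertColorspaceToColorspace(pixels, numPixels,
                                      inputColorspace, outputColorspace);
        return OCS_OK;
    }
    catch (const std::exception &e)
    {
        if (errMsg && errMsgLen > 0)
        {
            std::strncpy(errMsg, e.what(), errMsgLen - 1);
            errMsg[errMsgLen - 1] = '\0';
        }
        return OCS_ERROR;
    }
}

} // extern "C"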

---- Commenter 2

Adding some more applications:
- Adobe CS (Photoshop, After Effects, etc.) (ICC profiles)
- Pixar RenderMan 'It' viewer (Truelight ASCII .cub 3D lut)
- Assimilate Scratch (Arri / Kodak .3dl)
- Iridas FrameCycler (Iridas .cube)
- Autodesk (MESH3D, .lut, etc.)

And for completeness (not sure if all of these make sense):
- Avid Media Composer
- Final Cut Pro
- OpenFX (plugin API)

---- Jeremy

Good format additions, I'll add those to the FAQ. (Which I will resend
or turn into a google doc once enough updates accumulate). None of
these sound difficult to support, particularly if people are open to
sending me format examples / docs. We already have basic support for
exporting ICC profiles and importing .3dl. (Though I thought .3dl was
an autodesk format, am I mistaken? probably...)


---- Commenter 2

Assuming tier 1 and 2 support for 3rd party applications is handled
by linking against commercial libraries, my vote is for the core
library to be the default build target. Requiring the developer to
explicitly list the packages for which 3rd party support is required,
e.g.:

configure --with-nuke=/usr/lib/nuke6 --with-rv=/usr/lib/rv

...would allow a single invocation of 'make' to build the plugins for
which the necessary libraries are present.

Code that may have proprietary library dependencies should live
alongside (not in) the main library:

colorlib
|- common
|   |- include
|   |- lib
|
|- 3rd_party_plugins
    |- foundry
    |   |- nuke
    |
    |- apple
        |- shake
        |- finalcut

Wrapping multiple 'product' directories under 'vendor' directories
addresses the case of some vendors that have inconsistent APIs across
products (autodesk, apple).

---- Jeremy

I'm just starting to think about the build system, but this sounds like
a good approach. I shall use it for now.

My hope is also, to the extent possible, to provide pre-built
libraries and plugins. libstdc++ will likely be the biggest
dependency for linux, so we should be able to compile for only a few
architectures and hit the major platforms, right? (I may be wrong
here though).


---- Commenter 2

Providing multiple levels of service:

Considering that small or offshore VFX facilities may not have the
resources to compile plugins from source, it might be nice for the
library to be able to provide a lower level of service than the one it
is capable of delivering -- e.g., an exporter of LUTs for use in an
application's native Lookup plugin.

In fact, even some medium-to-large facilities might elect to use the
library in 'tier-3' mode as a 'universal translator' for generating
show-specific LUTs for a range of 3rd party products (at the expense
of some lost accuracy).


---- Jeremy

Sure. I like this idea of providing lut exporters for as many formats
as possible. With the python API, these are really easy to write.




Original Letter Of Intent (cleaned up a bit)

Jeremy Selan <jeremy...@...>
 

Color Folks,

Sony Imageworks is planning to open-source our color pipeline, and
we'd like your feedback.

Our hope is not to re-invent the wheel, but to help facilitate color
interchange between vfx / animation houses, and to standardize image
visualizations / transformations across tools. Many of you personally
deal with this pain on a daily basis, and I'm pretty sure no one
considers it a solved problem.

Note:
Our goal is NOT to push our internal pipeline eccentricities on other
facilities. Rather, our hope is that if 95% of what we've done is
generic enough to support arbitrary pipelines -- as we believe it is
-- the time is ripe to lay down a standard which could make everyone's
lives easier.

So what are we looking for from you?

We would love your honest, unfiltered feedback:
* Is this project conceptually useful to your organization?
* Would you use it? (or recommend using it?)
* Are there design choices we've made that limit its usefulness?
* Are there commercial tools you'd like us to work on support
for?
* Do you know of something better (open source), which we're not
aware of?
* Are you interested in getting early versions of the library?

I am including 3 attachments:

* A project overview / FAQ
* The C++ header for our current library
* The XML configuration for our default "visual effects" color setup

NOTE: The latter two documents are NOT the final spec for the library,
and have not been "de-imageworks-ified". But both files are a good
starting point to get everyone on the same page. (They are in fact our
'live' production code / configuration).

Please do not hesitate to contact me personally if you'd prefer that
route over open email feedback.

Regards,
Jeremy Selan




Re: Previous comments (for posterity sake)

Jeremy Selan <jeremy...@...>
 

... from an anonymous commenter, some excellent questions. (I'll be
adding these to the FAQ).

---- Commenter

One question I think would be good for the faq: how does this system
compare to commercial systems like Truelight and Cinespace? I have a
general idea of what the answer is, but I think it's something that
people might be wondering.

---- Jeremy

This system is complementary to Truelight / Cinespace. Although
Truelight comes with a bunch of plugins, its core functionality is to
generate the 3D luts for things like print emulation, device mapping,
etc, and I don't see this role changing. (Our library does not attempt
to fill these roles). It will be very straightforward to implement a
reader for unencrypted Truelight luts. Note that supporting Truelight
*encrypted* luts is probably not simple due to legal considerations. I
am unsure if Cinespace has the capability to export un-encrypted LUTs.

---- Commenter

Also, I didn't totally get this line in the faq: "GPU Acceleration is
handled by dynamically generating shader code and associated 3D luts. We
DO NOT, and WILL NOT link to GL / Cg / etc." Maybe I don't fully
understand which chunk of code you're going to be open-sourcing ...
you're saying that this project will have no calls to GL or Cg? Is the
Cg stuff not part of this project?

---- Jeremy

Sorry for being confusing. ALL of the Cg / GLSL generation stuff we've
developed at SPI (related to color processing) is being included in
this release.

This entry was intended to clarify issues relating to integration: even
though this library generates cg / glsl code, we don't link to Cg / GL
directly. Instead, the library returns "simple" data types, a
std::string with the shader text and a float* with the lookup table.
It is left to the client app to make the appropriate GL calls
accordingly. (Which in our experience is not much of a burden, and
actually makes everyone's lives simpler). A bit of extra information
(such as cacheIDs for the 3d luts) is also included to allow for
additional client-side optimizations. (In the 3dlut cacheid case, to
prevent reloading the 3d texture every time something changes).
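
For the curious, the client-side dance is roughly the following (a sketch only: the three Get* calls are placeholders for whatever the final API exposes, the GL side assumes GL 1.2+, and in practice you'd likely use a float internal format for the 3d texture):

#include <string>
#include <GL/gl.h>

// placeholders for whatever the final API actually exposes:
std::string GetGpuShaderText();
std::string GetGpuLut3dCacheID();
const float *GetGpuLut3d(int *edgeLenOut);

void UpdateGpuColorState(GLuint lut3dTexId, std::string &lastLutCacheID)
{
    // 1. the shader text comes back as a plain std::string; compile it
    //    with whatever GLSL / Cg machinery the host already has.
    std::string shaderText = GetGpuShaderText();
    // ... glShaderSource / glCompileShader (or cgCreateProgram) as usual ...

    // 2. re-upload the 3d texture only when the lut's cacheID changes.
    std::string cacheID = GetGpuLut3dCacheID();
    if (cacheID != lastLutCacheID)
    {
        int edgeLen = 0;
        const float *lut = GetGpuLut3d(&edgeLen);
        glBindTexture(GL_TEXTURE_3D, lut3dTexId);
        glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB, edgeLen, edgeLen, edgeLen,
                     0, GL_RGB, GL_FLOAT, lut);
        lastLutCacheID = cacheID;
    }
}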

On Apr 21, 6:07 pm, Jeremy Selan <jeremy...@gmail.com> wrote:
I'll be posting (anonymously) the previous comments on ver 0.1 so that
new members will be able to easily track the comment history.



Previous comments (for posterity sake)

Jeremy Selan <jeremy...@...>
 

I'll be posting (anonymously) the previous comments on ver 0.1 so that
new members will be able to easily track the comment history.


