
Development Priorities?

Jeremy Selan <jeremy...@...>
 

Folks,

If you're considering using OpenColorIO in the near term, we would
greatly appreciate it if you would respond and let us know which of
the topics below you personally consider to be most important.
Perhaps reply with the top 5 items you care about?

(Note: We plan on addressing all of these issues, but would love to
get a sense of what's holding people up in the short term).

If there's an important task that isn't represented on this list, please add it!

Thanks!

-- Jeremy

--------------------------------------------------------------------------------



Fall 2010 OpenColorIO Development Topics:

Documentation
- Quickstart Guide
- End User (Artist) Docs
- Developer API Docs
- Color Config Authoring Docs

Facility Integration
- Support for additional lut formats (import)
- Support for lut export
- 3rd party app plugins
  - RV
  - OFX
  - OpenImageIO
  - Houdini
  - <Your App Here!>

"Real" Color Configurations
- Flesh out the existing ocio configs (spi-anim, spi-vfx) for real use
- Add an example ACES ocio config
- Add a config that emulates the default nuke color configuration
- Add example color config authoring scripts
- Document 'best practice' for each config, and provide workflow
examples with imagery

Core Library:
- Unit testing / correctness validation
- Overall performance optimization

Issues deferred until after 1.0 (tentatively Jan '11):
- Dynamic Color API (OCIO Plugin API)
- Live CDL Support


OCIO 0.5.15 posted

Jeremy Selan <jeremy...@...>
 

Version 0.5.15 (Sept 8 2010):
* Library is well behaved when $OCIO is unset, allowing for use in
an un-colormanaged environment
* Color Transforms can be applied in Python (config->getProcessor;
a C++ sketch of the same call follows below)
* Simplification of API (getColorSpace accepts cs names, role names,
and cs objects)
* Makefile enhancements (courtesy Malcolm Humphreys)
* A bunch of bug fixes
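
For reference, a minimal sketch of the same getProcessor call path
from C++ (this is pre-1.0, so names may still shift; the colorspace
name "lg10" and the packed-RGB buffer layout are assumptions for
illustration, not part of the release):

#include <OpenColorIO/OpenColorIO.h>
namespace OCIO = OCIO_NAMESPACE;

// Convert a packed RGB float buffer in place, using the config that
// $OCIO points at. "lg10" is a placeholder destination colorspace.
void convertToLog(float* rgbPixels, long width, long height)
{
    OCIO::ConstConfigRcPtr config = OCIO::GetCurrentConfig();

    // getProcessor accepts colorspace names or role names.
    OCIO::ConstProcessorRcPtr processor =
        config->getProcessor(OCIO::ROLE_SCENE_LINEAR, "lg10");

    OCIO::PackedImageDesc img(rgbPixels, width, height, 3);
    processor->apply(img);
}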


Re: [ocs-dev] Re: Supporting 1D luts which are different per channel

Jeremy Selan <jeremy...@...>
 

Definitely a new Op. How about Spline1DOp?
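
For the scattered-knot case quoted below, here's a minimal sketch of
what such an op would do (hypothetical code, not from the OCIO tree):
binary-search the bracketing input knots, then interpolate between
them.

#include <algorithm>
#include <vector>

// Hypothetical Spline1DOp-style evaluation of a non-uniformly spaced
// (scattered) 1D prelut: 'in' holds ascending input knots, 'out' the
// matching output values. Linear here; Catmull-Rom could drop in.
float evalNonUniformLut1D(const std::vector<float>& in,
                          const std::vector<float>& out,
                          float x)
{
    // Clamp outside the knot range.
    if (x <= in.front()) return out.front();
    if (x >= in.back())  return out.back();

    // First knot >= x; x is strictly inside the range, so 1 <= hi < size.
    const size_t hi = std::lower_bound(in.begin(), in.end(), x) - in.begin();
    const size_t lo = hi - 1;

    const float t = (x - in[lo]) / (in[hi] - in[lo]);
    return out[lo] + t * (out[hi] - out[lo]);
}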

-- Jeremy

On Fri, Sep 3, 2010 at 6:27 AM, Malcolm Humphreys
<malcolmh...@mac.com> wrote:
Hi,

Just looking at this again, one thing we didn't cover was a prelut which has non-uniformly spaced (scattered) points. Would you see this as support we would need to add to the Lut1DOp, or would this end up being a different op type?

This is a simple example from the csp spec.
--snip--
Map extended input (max. 4.0) into top 10% of LUT

11
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 4.0
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
11
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 4.0
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
11
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 4.0
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
--snip--

A more complex example could be
--snip--
11
0.0 0.1 0.2 0.3 0.4 0.45 0.5 0.8 0.85 1.0 4.0
0.0 0.1 0.2 0.3 0.4 0.5 0.55 0.6 0.7 0.9 1.0
--snip--

.malcolm


On 22/07/2010, at 12:39 PM, Malcolm Humphreys wrote:

Hi Jeremy,

Oops, never mind - it looks like it will support it.

Yep, it all looks OK to me; I just really feel like splitting out the
non-core stuff into a plugin or similar dir.

For now I think nearest interp is fine, as we normally have enough
points for 10 bits in all channels; we have used Catmull-Rom in
the past. As we will only be using this for viewing scene linear, it
should be OK in the short term.

I don't need 4 channel luts, but other people might.

.malcolm

On Jul 22, 1:53 am, Jeremy Selan <jeremy...@gmail.com> wrote:
Malcolm,

I believe the 1D lut op does allow for a different number of entries
per channel. If we look at src/core/Lut1DOp.h, you'll see that each
color channel's lut is stored in an individual vector: fv_t luts[3].
So in your loading code (if we hardcode the sizes used in your
example):

lut1d->luts[0].resize(11);
lut1d->luts[1].resize(6);
lut1d->luts[2].resize(6);

Does the rest of the Format loading code make sense to you?  All your
work should be in a single file, a la FileFormat3DL.
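
As a hedged sketch of the per-channel read (the stream parsing is
simplified, and none of this is from the OCIO tree):

#include <istream>
#include <vector>

// Hypothetical fragment of a csp loader: each channel carries its own
// point count, so luts[0], luts[1], luts[2] can end up sizes 11, 6, 6.
void readCspPrelutChannel(std::istream& is, std::vector<float>& lut)
{
    int numPoints = 0;
    is >> numPoints;

    std::vector<float> in(numPoints), out(numPoints);
    for (int i = 0; i < numPoints; ++i) is >> in[i];   // input positions
    for (int i = 0; i < numPoints; ++i) is >> out[i];  // output values

    // With uniformly spaced inputs, only the outputs need storing; the
    // non-uniform (scattered) case is the Spline1DOp discussion above.
    lut = out;
}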

A few additional questions for you:

* Currently, OCIO only supports linear and nearest interpolation for
1D luts. If the examples you've given are typical (where the 1d lut is
size 6) I couldn't imagine linear interpolation would suffice, and I'd
also imagine that the interpolation type chosen would highly influence
the resulting image. Does CSP dictate the interpolation type? What
type would you prefer? I have no problem adding higher-order types
(cubic, etc); I just hadn't had the need to yet. (Note that the .3dl
shaper 1D lut also has this issue (it's often size 17); I just hadn't
tackled it yet.)

* Do you care about 4 channel luts? (I.e., changing alpha) We've
never needed this at SPI, which is why OCIO currently assumes 3
channels, but if other people think it's important for completeness'
sake I'm open to it.

-- Jeremy

On Wed, Jul 21, 2010 at 6:11 AM, Malcolm Humphreys <malcolmh...@mac.com> wrote:
Hi,
I started looking at adding the csp lut format to ocio.
A csp lut allows a 1D prelut with a different number of points per channel. The
current Lut1DOp only supports applying the same 1D lut to all channels.
I'm wondering if this is something you were thinking of supporting in ocio?
--snip--
Access LUT data via a gamma lookup
Red channel has gamma 2.0
Green channel has gamma 3.0 but also has fewer points
Blue channel has gamma 2.0 but also has fewer points
11
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
0.0 0.01 0.04 0.09 0.16 0.25 0.36 0.49 0.64 0.81 1.0
6
0.0 0.2 0.4 0.6 0.8 1.0
0.0 0.008 0.064 0.216 0.512 1.0
6
0.0 0.2 0.4 0.6 0.8 1.0
0.0 0.04 0.16 0.36 0.64 1.0
--snip--
.malcolm


Re: [ocs-dev] Re: Supporting 1D luts which are different per channel

Malcolm Humphreys <malcolmh...@...>
 

Hi,

Just looking at this again, one thing we didn't cover was a prelut which has non-uniformly spaced (scattered) points. Would you see this as support we would need to add to the Lut1DOp, or would this end up being a different op type?

This is a simple example from the csp spec.
--snip--
Map extended input (max. 4.0) into top 10% of LUT

11
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 4.0
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
11
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 4.0
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
11
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 4.0
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
--snip--

A more complex example could be
--snip--
11
0.0 0.1 0.2 0.3 0.4 0.45 0.5 0.8 0.85 1.0 4.0
0.0 0.1 0.2 0.3 0.4 0.5 0.55 0.6 0.7 0.9 1.0
--snip--

.malcolm

On 22/07/2010, at 12:39 PM, Malcolm Humphreys wrote:

Hi Jeremy,

Oops, never mind - it looks like it will support it.

Yep, it all looks OK to me; I just really feel like splitting out the
non-core stuff into a plugin or similar dir.

For now I think nearest interp is fine, as we normally have enough
points for 10 bits in all channels; we have used Catmull-Rom in
the past. As we will only be using this for viewing scene linear, it
should be OK in the short term.

I don't need 4 channel luts, but other people might.

.malcolm

On Jul 22, 1:53 am, Jeremy Selan <jeremy...@gmail.com> wrote:
Malcolm,

I believe the 1D lut op does allow for a different number of entries
per channel. If we look at src/core/Lut1DOp.h, you'll see that each
color channel's lut is stored in an individual vector: fv_t luts[3].
So in your loading code (if we hardcode the sizes used in your
example):

lut1d->luts[0].resize(11);
lut1d->luts[1].resize(6);
lut1d->luts[2].resize(6);

Does the rest of the Format loading code make sense to you? All your
work should be in a single file, a la FileFormat3DL.

A few additional questions for you:

* Currently, OCIO only supports linear and nearest interpolation for
1D luts. If the examples you've given are typical (where the 1d lut is
size 6) I couldn't imagine linear interpolation would suffice, and I'd
also imagine that the interpolation type chosen would highly influence
the resulting image. Does CSP dictate the interpolation type? What
type would you prefer? I have no problem adding higher-order types
(cubic, etc); I just hadn't had the need to yet. (Note that the .3dl
shaper 1D lut also has this issue (it's often size 17); I just hadn't
tackled it yet.)

* Do you care about 4 channel luts? (I.e., changing alpha) We've
never needed this at SPI, which is why OCIO currently assumes 3
channels, but if other people think it's important for completeness'
sake I'm open to it.

-- Jeremy

On Wed, Jul 21, 2010 at 6:11 AM, Malcolm Humphreys <malcolmh...@mac.com> wrote:
Hi,
I started looking at adding the csp lut format to ocio.
A csp lut allows a 1D prelut with a different number of points per channel. The
current Lut1DOp only supports applying the same 1D lut to all channels.
I'm wondering if this is something you were thinking of supporting in ocio?
--snip--
Access LUT data via a gamma lookup
Red channel has gamma 2.0
Green channel has gamma 3.0 but also has fewer points
Blue channel has gamma 2.0 but also has fewer points
11
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
0.0 0.01 0.04 0.09 0.16 0.25 0.36 0.49 0.64 0.81 1.0
6
0.0 0.2 0.4 0.6 0.8 1.0
0.0 0.008 0.064 0.216 0.512 1.0
6
0.0 0.2 0.4 0.6 0.8 1.0
0.0 0.04 0.16 0.36 0.64 1.0
--snip--
.malcolm


OCIO 0.5.14 posted

Jeremy Selan <jeremy...@...>
 

Version 0.5.14 (Sept 1 2010):
* Python binding enhancements
* Simplified class implementations (reduced internal header count)

Most changes this week were internal; all API changes were
binary-compatible additions.
(Which is a good sign, considering a stable 0.6 is fast approaching!)

-- Jeremy


Re: Color space transform bidirectionality

Alan Jones <sky...@...>
 

Hi Jeremy,

On Fri, Aug 27, 2010 at 4:18 PM, Jeremy Selan <jeremy...@gmail.com> wrote:
Sorry for the delay in responding. (Had to attend to non-OCIO
responsibilities this week).
No worries :)

The approach you describe is exactly in line with what I've been
thinking. Excellent.
This is great news :)

But there's no code-level distinction
between the types.  Is there any benefit to doing so?
I agree - the only benefit is from a UI perspective, to assist the
user in not doing something stupid. The downside to this is that
sometimes there is a good reason to do something stupid. So perhaps it
would be best to go for some middle road (listing the logical ones
first and including the type of all transforms, for instance).

I'd prefer to just call them "transforms" to leave open the possibility of other correction
math.  (For example, the grading lut could just as easily be a 1D lut,
a 3D lut, or an ASC-CDL).
Sounds like a logical choice of terminology to me.

Your processing pipeline is already expressed in the DisplayTransform.
 You specify the storage LUT (inputColorSpace), the displayLut
(outputColorSpace),  and also optional color correction(s).  (Which
can occur in any color space, including scene linear or a log-like DI
space).  The Nuke OCIODisplay node provides an example of using this
API.
Great stuff :) thanks for sharing. Slowly getting my head around how OCIO
works.

Cheers,

Alan


Re: Color space transform bidirectionality

Jeremy Selan <jeremy...@...>
 

Alan,

Sorry for the delay in responding. (Had to attend to non-OCIO
responsibilities this week).

The approach you describe is exactly in line with what I've been
thinking. Excellent.

A few additions:

You refer to the different LUT types as "storage, grading, and
display". In the current library we don't draw hard lines between
the different transform uses. All three of these would be color
spaces, and are treated equally in the code. Adapting your terminology
to OCIO, "storage" color spaces would always provide transforms both
to and from scene linear, display color spaces typically would only
define a transform from scene linear, and grading spaces would be
defined dynamically at runtime. But there's no code-level distinction
between the types. Is there any benefit to doing so? I've been
thinking about adding metadata (tags) that will let the user tag color
spaces as belonging to different categories (such as 'display', 'IO',
etc). This would probably help in the UI (you could filter by tag),
but I can't think of any other uses, so it's a bit low on the
priority list.

* You refer to these things all as LUTs. I'd prefer to just call
them "transforms" to leave open the possibility of other correction
math. (For example, the grading lut could just as easily be a 1D lut,
a 3D lut, or an ASC-CDL).

Your processing pipeline is already expressed in the DisplayTransform.
You specify the storage LUT (inputColorSpace), the displayLut
(outputColorSpace), and also optional color correction(s). (Which
can occur in any color space, including scene linear or a log-like DI
space). The Nuke OCIODisplay node provides an example of using this
API.
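
As a rough sketch of that pipeline (the DisplayTransform setter names
here are paraphrased from this description rather than copied from the
headers, so treat them as guesses; "lnf" and "srgb" are placeholder
colorspace names):

#include <OpenColorIO/OpenColorIO.h>
namespace OCIO = OCIO_NAMESPACE;

// Sketch only: setter names are guesses from the description above;
// DisplayTransform in the current headers is authoritative.
OCIO::ConstProcessorRcPtr buildDisplayProcessor()
{
    OCIO::ConstConfigRcPtr config = OCIO::GetCurrentConfig();

    OCIO::DisplayTransformRcPtr transform = OCIO::DisplayTransform::Create();
    transform->setInputColorSpaceName("lnf");     // the storage colorspace
    transform->setDisplayColorSpaceName("srgb");  // the display colorspace

    // Optional correction, e.g. an ASC-CDL applied in scene linear.
    OCIO::CDLTransformRcPtr cc = OCIO::CDLTransform::Create();
    const float slope[3] = { 1.0f, 1.0f, 1.05f };
    cc->setSlope(slope);
    transform->setLinearCC(cc);                   // hypothetical setter

    return config->getProcessor(transform);
}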

-- Jeremy

On Mon, Aug 23, 2010 at 2:59 PM, Alan Jones <sky...@gmail.com> wrote:
Hi Jeremy,

On Mon, Aug 23, 2010 at 4:30 PM, Jeremy Selan <jeremy...@gmail.com> wrote:
OCIO's current approach is that, to make colorspace definitions
"convenient", we assume inverse color transforms are allowed when the
round trip can automatically be made 100% accurate.
For practical purposes those are all probably invertible, but even a 1D lut
can lose information (such as everything below the black point getting pushed
to 0 - though seeing as that information isn't required, it doesn't really matter).
I'm not a fan of putting too much effort into protecting users from their own
stupidity ;) An uninvertible matrix could also happen, but again, probably
not in practice.

I'd love to figure out
a way to make this more explicit in the API. Suggestions?
That depends - you make a good usage case that somewhat goes out the
window. The only real solution I can think of (which unfortunately
cuts down your options for processing shortcuts and also limits which
LUTs you can bring in) is something like the following.

A color space is always defined relative to scene-referred linear.
This way each space knows how to convert to or from scene-referred
linear and nothing else. Then when it registers, it can say which
directions it can handle.

You kind of also need to define types of LUTs at this point though.
I'm thinking of three: Storage LUTs (which pull data to and from
representations used for storage, such as S-Log etc), Grading LUTs
(which are designed to apply a particular look, whether that's a color
grade or a film transfer representation), and Display LUTs (which
modify values to ensure that an image displays correctly on a given
device - e.g. sRGB, Rec709, etc).

With this storage LUTs would go from whatever their storage form across to
scene referred linear usually (though you'd use the reverse for saving
perhaps) then
the grading LUTs would make any changes to the color (if you had some magical
linear display then this data would look correct on that display, but
the grading
LUTs wouldn't be used until the end as not to screw the radiometric
linearity of
your data). The finally with that data you apply the display LUT for your output
device.

So it'd be
1 Storage LUT
0+ Grading LUTs
1 Display LUT
as the common workflow.

I've got no idea how well this aligns with the design goals of OCIO,
as I'm just getting familiar with it, but that's how I'd look at
structuring things to help define the process so it's more accessible
to less technical users. It's pretty much what I'm planning on putting
in place in the pipeline I'm building here, but on a pipeline-wide
scale, where at ingest everything is shifted to radiometrically linear
space and stored in float (well, probably half-float for most cases)
on disk. Then people working in lighting/comp would have
grading + display LUTs applied on the fly, so they're looking at as
close to final as possible.

I'd be interested to hear your thoughts.

Cheers,

Alan.


OCIO 0.5.13 posted

Jeremy Selan <jeremy...@...>
 

Version 0.5.13 (Aug 18 2010):
* GPU Processing now supports High Dynamic Range color spaces
* Added log processing operator, and updates to many other ops
* Numerous bug fixes + updates to python glue
* Exposed PyOpenColorIO header, for use in apps that require
custom python glue
* Matrix op is optimized for diagonal-only subcases (see the sketch
below)
* Numerous updates to Nuke Plugins (now with an additional node,
OCIODisplay)
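
A sketch of why the diagonal case is worth special-casing (not the
actual OCIO implementation): four multiplies per pixel instead of a
full 4x4 multiply-accumulate.

// Sketch, not OCIO's code: a diagonal-only matrix degenerates to a
// per-channel scale, so the full 4x4 dot products can be skipped.
void applyDiagonal(float* rgba, long numPixels, const float diag[4])
{
    for (long i = 0; i < numPixels; ++i)
    {
        rgba[4*i+0] *= diag[0];
        rgba[4*i+1] *= diag[1];
        rgba[4*i+2] *= diag[2];
        rgba[4*i+3] *= diag[3];
    }
}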

All code available from github and google code (as usual).

We're in the process of porting Katana (our internal lighting and
compositing tool) to OCIO, and work is progressing well. (Our target
completion date is mid-Sept.) Katana happens to be a nice testbed for
OCIO development - utilizing both the GPU + CPU code paths, and also
having a prior (similar) color management implementation as a
reference sanity check.


Re: Color space transform bidirectionality

Alan Jones <sky...@...>
 

Hi Jeremy,

On Mon, Aug 23, 2010 at 4:30 PM, Jeremy Selan <jeremy...@gmail.com> wrote:
OCIO's current approach is that, to make colorspace definitions
"convenient", we assume inverse color transforms are allowed when the
round trip can automatically be made 100% accurate.
For practical purposes those are all probably invertible, but even a 1D lut
can lose information (such as everything below the black point getting pushed
to 0 - though seeing as that information isn't required, it doesn't really matter).
I'm not a fan of putting too much effort into protecting users from their own
stupidity ;) An uninvertible matrix could also happen, but again, probably
not in practice.

I'd love to figure out
a way to make this more explicit in the API. Suggestions?
That depends - you make a good usage case that somewhat goes out the
window. The only real solution I can think of (which unfortunately
cuts down your options for processing shortcuts and also limits which
LUTs you can bring in) is something like the following.

A color space is always defined relative to scene-referred linear.
This way each space knows how to convert to or from scene-referred
linear and nothing else. Then when it registers, it can say which
directions it can handle.

You kind of also need to define types of LUTs at this point though.
I'm thinking of three: Storage LUTs (which pull data to and from
representations used for storage, such as S-Log etc), Grading LUTs
(which are designed to apply a particular look, whether that's a color
grade or a film transfer representation), and Display LUTs (which
modify values to ensure that an image displays correctly on a given
device - e.g. sRGB, Rec709, etc).

With this, storage LUTs would usually go from their storage form to
scene-referred linear (though you'd perhaps use the reverse for
saving), then the grading LUTs would make any changes to the color
(if you had some magical linear display, this data would look correct
on that display, but the grading LUTs wouldn't be applied until the
end, so as not to screw up the radiometric linearity of your data).
Then finally, with that data, you apply the display LUT for your
output device.

So it'd be
1 Storage LUT
0+ Grading LUTs
1 Display LUT
as the common workflow.

I've got no idea how well this aligns with the design goals of OCIO,
as I'm just getting familiar with it, but that's how I'd look at
structuring things to help define the process so it's more accessible
to less technical users. It's pretty much what I'm planning on putting
in place in the pipeline I'm building here, but on a pipeline-wide
scale, where at ingest everything is shifted to radiometrically linear
space and stored in float (well, probably half-float for most cases)
on disk. Then people working in lighting/comp would have
grading + display LUTs applied on the fly, so they're looking at as
close to final as possible.

I'd be interested to hear your thoughts.

Cheers,

Alan.


Re: Color space transform bidirectionality

Jeremy Selan <jeremy...@...>
 

Alan,

OCIO is being a bit clever under the hood, though in my opinion not in
a dangerous way.

OCIO's current approach is that, to make colorspace definitions
"convenient", we assume inverse color transforms are allowed when the
round trip can automatically be made 100% accurate. I.e., if your
colorspace definition is only built from cleanly invertible ops
(simple cc math, matrix ops, log, 1D lookup tables), and is round-trip
safe, the inverse transform is allowed. If your colorspace definition
relies on 3D LUTs, any attempt to run the inverse transform will fail
during the getProcessor call. (If you try to add a 3d lut to nuke and
try to perform the inverse, you'll see this.) I'd love to figure out
a way to make this more explicit in the API. Suggestions?
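
For illustration, here's roughly how an app could probe that today - a
sketch, assuming only that getProcessor throws OCIO::Exception on
failure:

#include <OpenColorIO/OpenColorIO.h>
namespace OCIO = OCIO_NAMESPACE;

// Sketch: per the behavior described above, inversion problems only
// surface at getProcessor time, so an app (e.g. the Nuke plugin) could
// probe each colorspace before listing it on the input side.
bool canConvertToSceneLinear(OCIO::ConstConfigRcPtr config,
                             const char* csname)
{
    try
    {
        // May require inverting ops in csname's definition
        // (or the whole chain may simplify; see below).
        config->getProcessor(csname, OCIO::ROLE_SCENE_LINEAR);
        return true;
    }
    catch (const OCIO::Exception&)
    {
        return false; // e.g. the definition relies on a 3D LUT
    }
}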

The reason we defer the inverse 3d lut check to this late part in the
API is to allow for the following interesting possibility.

Say you have 2 display emulation colorspaces defined as follows.

srgb: (A normal filmlook)
lin_to_log.lut1d
log_to_srgb.lut3d

srgb_warm: (A warmer filmlook variant).
lin_to_log.lut1d
log_to_srgb.lut3d
warm.mtx


Say you have pixels in a baked srgb representation, and you wanted to
view them with the warm look. Internally, our processing chain would
look like:
INPUT -> inverse log_to_srgb.lut3d -> inverse lin_to_log.lut1d ->
lin_to_log.lut1d -> log_to_srgb.lut3d -> warm.mtx -> OUTPUT

But OCIO (will soon) optimize this to a simplified conversion:
INPUT -> warm.mtx -> OUTPUT

This particular color space transformation, although conceptually
requiring an inverse 3d lut, doesn't actually end up using it!
Which is pretty cool, and really convenient in practice.
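
A toy sketch of that simplification, just to make the cancellation
concrete (the Op struct here is hypothetical, not the internal OCIO
type):

#include <string>
#include <vector>

// Adjacent op/inverse-op pairs on the same file cancel, so the
// srgb -> srgb_warm chain above collapses to just warm.mtx.
struct Op
{
    std::string file; // e.g. "log_to_srgb.lut3d"
    bool inverse;
};

void collapseInversePairs(std::vector<Op>& ops)
{
    std::vector<Op> result;
    for (const Op& op : ops)
    {
        if (!result.empty() && result.back().file == op.file &&
            result.back().inverse != op.inverse)
        {
            result.pop_back(); // this op cancels the previous one
        }
        else
        {
            result.push_back(op);
        }
    }
    ops.swap(result);
}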

This is why in Nuke we just blindly added all color spaces to both the
input and output side, to allow for these circumstances. I can see
this is subtle though, and potentially misleading from a new user
perspective.

Thoughts?

-- Jeremy

On Mon, Aug 23, 2010 at 1:53 PM, Alan Jones <sky...@gmail.com> wrote:
Hi All,

I'm just reading through the Nuke plugin to get an idea of the API and
how things
are supposed to work together.

I notice that it adds all color spaces to both the input and output.
This assumes
that all color transforms can be reversed. I was under the impression that some
transforms would not be. Is there any facility within OCIO's current design for
this?

Cheers,

Alan.


Color space transform bidirectionality

Alan Jones <sky...@...>
 

Hi All,

I'm just reading through the Nuke plugin to get an idea of the API and
how things
are supposed to work together.

I notice that it adds all color spaces to both the input and output.
This assumes
that all color transforms can be reversed. I was under the impression that some
transforms would not be. Is there any facility within OCIO's current design for
this?

Cheers,

Alan.


Re: The S-Log formula

Alan Jones <sky...@...>
 

Hi Jeremy,

On Wed, Aug 18, 2010 at 12:06 PM, Jeremy Selan <jeremy...@gmail.com> wrote:
As a heads up, we're in touch with the Sony camera guys and may be
able to offer a 'blessed' f35 linearization as part of OCIO soon...
Awesome - good to know, thanks. I've implemented it using that formula
with the black point at 64 in 10-bit. It looks pretty good, and it'd
be great to know if it's correct - if anything, it feels like it might
be crunching the lower end a little too much.

Cheers,

Alan.


OCIO 0.5.12 posted

Jeremy Selan <jeremy...@...>
 

A new version is out!

Version 0.5.12 (Aug 18 2010):
* Additional DisplayTransform improvements
* Additional GPU Improvements
* Added op hashing (processor->getGPULut3DCacheID)
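
As a sketch of what the new hash enables (the exact
getGPULut3DCacheID signature here is an assumption, and the re-upload
itself is the host app's job):

#include <string>
#include <OpenColorIO/OpenColorIO.h>
namespace OCIO = OCIO_NAMESPACE;

// Sketch: skip re-baking / re-uploading the GPU 3D LUT whenever the
// processor's lut hash is unchanged.
static std::string g_lastLutCacheID;

void refreshGpuLutIfNeeded(OCIO::ConstProcessorRcPtr processor,
                           const OCIO::GpuShaderDesc& shaderDesc)
{
    std::string id = processor->getGPULut3DCacheID(shaderDesc);
    if (id == g_lastLutCacheID) return; // lut unchanged; keep GPU copy

    g_lastLutCacheID = id;
    // ... re-sample the 3D LUT and upload it to the GPU here ...
}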

The big picture overview is that progress is going really well now,
and we're definitely on track to have the API locked down in
September. (Which will be the 'API stable' 0.6 version).

At that point we'll start focusing on documentation, optimization,
unit testing, and a bunch more 3rd party plugins (including RV,
houdini, etc).

... which are all part of a push for a January 1.0 release!

-- Jeremy


Re: The S-Log formula

Jeremy Selan <jeremy...@...>
 

Alan,

As a heads up, we're in touch with the Sony camera guys and may be
able to offer a 'blessed' f35 linearization as part of OCIO soon...

-- Jeremy


Re: LUT Plugin API

Jeremy Selan <jeremy...@...>
 

Sorry for the delay in answering this...

Adding OCIO plugins is a really promising idea, which I'd like to
explore in the medium term. I particularly like that it would allow
for a clean partitioning of dependencies, and thus could be our window
to CTL support!

However, in the near term (before Jan) I'm a bit concerned about
mission creep, and would like to keep the project focused on getting
the simplest, cleanest, and fastest 1.0 implementation out the door.
And having a bit of experience with plugin APIs, if they're worth
doing at all they're usually worth doing right, so I'd like to make
sure we have the time to really focus on doing a thorough job.

-- Jeremy

On Aug 12, 9:11 am, Alan Jones <sky...@gmail.com> wrote:
Hi All,

I was thinking it'd be neat if OCIO provided an API for plugin LUTs
(i.e. libraries that perform a LUT - they could use a formula or
whatever internally, without any restrictions on syntax, outside of
C++ of course). Making the API SIMD-compatible could also be worth
considering.

I thought this may have some benefits over straight formula-syntax
support: not requiring a syntax, and the ability to use any library
out there; it would also make it simple for someone to offload to the
GPU and use the built-in LUT support there.

I'm thinking it'd still be referenced by an xml config the same as all
the others - just the source would be myspace.so instead of
myspace.lut.

Cheers,

Alan.


Re: [ocs-dev] Re: The S-Log formula

Alan Jones <sky...@...>
 

Hi Jeremy,

On Thu, Aug 12, 2010 at 12:01 PM, Jeremy Selan <jeremy...@gmail.com> wrote:
Ah, when you read Sony Camera documents you often have to put on your
"video engineer" goggles. :)
Indeed - I'm still pretty fresh to dealing with this stuff so directly. Time to
re-read Poynton's Digital Video.

Which camera are you using?  We've done a few Sony camera
characterizations, and may have real data for the camera you're
interested in.  F35, perhaps?
Yes - the F35 :)

In my experience, if you have the
luxury of actually running exposure sweeps on a camera you tend to get
much more plausible linearizations than by obeying manufacturer
claims.
Anywhere you could point me to for reading up on doing this and using it
to generate LUTs?

Sometimes it's a communication issue, but more often the
documentation fails to discriminate between the transform to get to a
scene referred linear (input space) vs an output referred linear
(display space).
Yeah - I've taken the formula as being input space and then applying
linear to rec709 to the result in order to generate the slog to rec709
LUT.

Are you referring to this document for the formulas?  (SRW_ITG_S-
Log_001_IO_EN.pdf)  (google search: sony slog)
Yes.

Assuming we trust the document for the moment, I think the rule of
thumb is understanding that whenever these guys talk about numbers
that include percentages (such as 0%, or 109%), these are video folks
talking in IRE land. (Ugh!)
Ahhhh - thanks :)

So when the document says "t has a range of 0 to 1.09", I take this to
mean that you're expected to have input 10-bit codevalues from 64 -
1023.
code 64 = t 0.0
code 1023 = t 1.09
Perfect :)

In the later example "S-Log Formula" this is already taken into
account for you.
Y = 379.044 * log10((x-128)/1752 + 0.037584) + 630
(This assumes 10-bit input, which in practice will only contain values
from 3-1019 due to HD link peculiarities, which you can safely ignore
in this case).
I'm still trying to figure out where some of these values come from.
The 1752 is reference white minus black level, and the 128 is the
black level. The 379.044 and 630 are still mysteries to me, though.
I've tried dividing them by the equivalent part of the formula (I've
been using anti-s-log to work on this rather than s-log, but same
numbers anyway) and the resulting numbers don't have any easily
identifiable correlation with either the input or output spaces.

I'd love to know how those are calculated, as at the moment I can get
my results close when trying to find a generic way to deal with the
formula (so I can make it for an arbitrary bit depth), but not exact.

Cheers,

Alan.


Re: The S-Log formula

Jeremy Selan <jeremy...@...>
 

Ah, when you read Sony Camera documents you often have to put on your
"video engineer" goggles. :)

Which camera are you using? We've done a few Sony camera
characterizations, and may have real data for the camera you're
interested in. F35, perhaps? In my experience, if you have the
luxury of actually running exposure sweeps on a camera you tend to get
much more plausible linearizations than by obeying manufacturer
claims. Sometimes it's a communication issue, but more often the
documentation fails to discriminate between the transform to get to a
scene referred linear (input space) vs an output referred linear
(display space).

Are you referring to this document for the formulas? (SRW_ITG_S-
Log_001_IO_EN.pdf) (google search: sony slog)

Assuming we trust the document for the moment, I think the rule of
thumb is understanding that whenever these guys talk about numbers
that include percentages (such as 0%, or 109%), these are video folks
talking in IRE land. (Ugh!) In the world of broadcast HD television
(rec709 with headroom), a "broadcast safe" black level is at 64/1023,
and safe white is 940/1023. Thus for folks in a broadcast-land
mindset, if you use the full 10-bit code range you're 'over white' by
(1023 / 940) = 1.09.

So when the document says "t has a range of 0 to 1.09", I take this to
mean that you're expected to have input 10-bit codevalues from 64 -
1023.
code 64 = t 0.0
code 1023 = t 1.09

In the later example "S-Log Formula" this is already taken into
account for you.
Y = 379.044 * log10((x-128)/1752 + 0.037584) + 630
(This assumes 10-bit input, which in practice will only contain values
from 3-1019 due to HD link peculiarities, which you can safely ignore
in this case).
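
In code, the two pieces above look like this. The formula is verbatim
from the whitepaper figures; the 876 = 940 - 64 scaling in the t
mapping is my assumption, chosen to hit the code 64 -> 0.0 and
code 1023 -> ~1.09 anchors:

#include <cmath>

// The quoted formula, verbatim (per the thread: 1752 is reference
// white minus black level, 128 is the black level).
double slogQuotedFormula(double x)
{
    return 379.044 * std::log10((x - 128.0) / 1752.0 + 0.037584) + 630.0;
}

// One reading of the t mapping above. The 876 = 940 - 64 scaling is
// an assumption that satisfies both anchors:
//   codeToT(64)   = 0.0
//   codeToT(1023) = 959/876 ~= 1.09
double codeToT(int code10bit)
{
    return (code10bit - 64.0) / 876.0;
}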

-- Jeremy

On Aug 12, 9:07 am, Alan Jones <sky...@gmail.com> wrote:
Hi All,

I'm currently writing a LUT to go from S-Log to Rec709. I've got the
transfer functions for both, and generally the curves I've plotted
look like what I expect, but one part of the formula is bothering me.
The S-Log whitepaper from Sony (camera Sony - not Imageworks) says t
ranges from 0 to 109%.

So I've been trying to ascertain whether this means in 10bit (for
example) that 1023 should be 1.09 or whether it should be 1.

A section of the whitepaper shows examples of converting between
10-bit S-Log and 14-bit linear. It just has some magic numbers in
there, and I've been trying to nail down exactly how they're
calculated in order to answer the 1 vs 1.09 question. Though while I
can get kinda close, I've not hit it exactly. So I'm hoping someone
here can shed some light on this.

Cheers,

Alan.


LUT Plugin API

Alan Jones <sky...@...>
 

Hi All,

I was thinking it'd be neat if OCIO provided an API for plugin LUTs
(i.e. libraries that perform a LUT - they could use a formula or
whatever internally, without any restrictions on syntax, outside of
C++ of course). Making the API SIMD-compatible could also be worth
considering.

I thought this may have some benefits over straight formula-syntax
support: not requiring a syntax, and the ability to use any library
out there; it would also make it simple for someone to offload to the
GPU and use the built-in LUT support there.

I'm thinking it'd still be referenced by an xml config the same as all
the others - just the source would be myspace.so instead of
myspace.lut.
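
To make the idea concrete, a sketch of what such a plugin ABI might
look like - entirely hypothetical, nothing like this exists in OCIO:

// Hypothetical plugin ABI sketch. The host would dlopen() myspace.so
// and resolve one well-known entry point; none of these names are
// real OCIO.
extern "C" {

typedef struct OCIOLutPluginV1
{
    const char* name;

    // Transform 'count' packed RGB triples in place. Keeping the data
    // contiguous (with alignment guaranteed by the host) leaves room
    // for SIMD implementations.
    void (*apply)(float* rgb, long count);

    // Optional inverse; NULL if the transform is not invertible.
    void (*applyInverse)(float* rgb, long count);
} OCIOLutPluginV1;

// Each plugin exports exactly one symbol:
//   const OCIOLutPluginV1* ocioGetLutPlugin(void);

} // extern "C"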

Cheers,

Alan.


The S-Log formula

Alan Jones <sky...@...>
 

Hi All,

I'm currently writing a LUT to go from S-Log to Rec709. I've got the
transfer functions for both, and generally the curves I've plotted
look like what I expect, but one part of the formula is bothering me.
The S-Log whitepaper from Sony (camera Sony - not Imageworks) says t
ranges from 0 to 109%.

So I've been trying to ascertain whether this means in 10bit (for
example) that 1023 should be 1.09 or whether it should be 1.

A section of the whitepaper shows examples of converting between
10-bit S-Log and 14-bit linear. It just has some magic numbers in
there, and I've been trying to nail down exactly how they're
calculated in order to answer the 1 vs 1.09 question. Though while I
can get kinda close, I've not hit it exactly. So I'm hoping someone
here can shed some light on this.

Cheers,

Alan.


OCIO 0.5.11 posted

Jeremy Selan <jeremy...@...>
 

This is a relatively minor update.

Version 0.5.11 (Aug 11 2010):
* DisplayTransform API
* ASC CDL Support

Available on github, and as a .tgz on google code.
http://code.google.com/p/opencolorio/downloads/detail?name=ocio.0.5.11.tgz#makecha

Most important is that I've recently been stuck with writer's block
(coder's block?) on how to generalize the DisplayTransform code, and
this gets us over the hump. Full GPU support should now be just around
the corner. (Feel free to place bets on the check-in date.)

... and I haven't forgotten about the FAQ and documentation either!

-- Jeremy
