Handling of bundled apps/libraries for Linux distro packaging.


Richard <hobbe...@...>
 

Placeholder for continued discussion from:


Malcolm Humphreys <malcolmh...@...>
 

Off list Jeremy, Colin and myself were chatting about this when talking about the windows build.

....
We need to have a consensus on how the project will be split up into packages (i.e. different sets of RPMs / debs, etc.).

Something along the lines of
- src
- core (lib + shell apps)
- dev (headers etc)
- gui apps (qt dependency)
- nuke
- mari
- python (this could be part of core??)

Depending on whether the goal is to get these packages included as part of the base Linux distributions, this creates a bit of work with the bundled dependencies. To me this feels like a second step after getting ocio accepted into the most relevant distributions.
....

Thinking about this more, it doesn't feel like we need to work out how to package everything, just the main bits. Something like (note: these would map to the install options in the NSIS installer on Windows):
- OpenColorIO-src-1.0.1
- OpenColorIO-core-1.0.1
  [ libOpenColorIO, headers, ocio2icc, ociobakelut, ociocheck ]
- OpenColorIO-python{ver}-1.0.1
- OpenColorIO-docs-1.0.1
  [ html, pdf, man (need to add)]

OpenColorIO-core-1.0.1 would depend on lcms2-2.1, tinyxml-2.6.1, yaml-cpp-??, pystring-?? and possibly the md5 code.

Later on, when OpenImageIO has also been successfully packaged, we could have another package for ocioconvert and ociodisplay.

The nuke and mari code can probably be left to be compiled/used on an as-needed basis, as this is pretty site-dependent setup-wise.

Thoughts?

Do we have to worry about having different compiler versions of the core package, depending on which host app it might be used in?

.malcolm


On 17/11/2011, at 4:06 PM, Richard wrote:

Placeholder for continued discussion from:



Richard Shaw <hobbe...@...>
 

On Fri, Nov 18, 2011 at 9:40 AM, Malcolm Humphreys
<malcolmh...@...> wrote:
Off list Jeremy, Colin and myself were chatting about this when talking
about the windows build.
....
We need to have a consensus on how the project will be split up into
packages (ie different sets of rpm's / debs etc).

Something along the lines of
- src
- core (lib + shell apps)
- dev (headers etc)
- gui apps (qt dependency)
- nuke
- mari
- python (this could be part of core??)
Usually it's the packager's job to decide how to break up the
packages, as the guidelines differ somewhat between distros, but
documented suggestions are very welcome.


Depending on whether the goal is to get these packages included as part of the
base Linux distributions, this creates a bit of work with the bundled
dependencies. To me this feels like a second step after getting ocio accepted
into the most relevant distributions.
Does ocio include significant testing yet? I'm sure RPM does, and deb-based
systems probably also have the ability to test the builds during
the build process. In RPMs this is done in the %check section. I'd
have to look it up, but I'm pretty sure exiting with anything other
than "0" will fail the build. This may do a lot to ease the burden of
having to worry about the versions of yaml-cpp and tinyxml the host
system uses.
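As a sketch, a %check section along these lines would abort the build on any test failure (the build directory layout and the `make test` target here are assumptions for illustration, not taken from an actual OCIO spec file):

```spec
# Hypothetical fragment of an OpenColorIO.spec file.
# rpmbuild runs %check after %build; any command exiting
# non-zero here fails the whole package build.
%check
cd build
make test
```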


....
Thinking about this more, it doesn't feel like we need to work out how to
package everything, just the main bits. Something like (note: these would map
to the install options in the NSIS installer on Windows):
- OpenColorIO-src-1.0.1
- OpenColorIO-core-1.0.1
  [ libOpenColorIO, headers, ocio2icc, ociobakelut, ociocheck ]
- OpenColorIO-python{ver}-1.0.1
- OpenColorIO-docs-1.0.1
  [ html, pdf, man (need to add)]
OpenColorIO-core-1.0.1 would depend on lcms2-2.1, tinyxml-2.6.1,
yaml-cpp-??, pystring-?? and possibly the md5 code.
If it helps, here's the current versions of the software in ext on my
Fedora 15 system:

tinyxml-devel-2.6.1-2.fc15.x86_64
yaml-cpp-devel-0.2.5-2.fc15.x86_64
python-docutils-0.8-0.1.20110517svn7036.fc15.noarch
python-jinja2-2.5.5-4.fc15.noarch
lcms2-devel-2.2-1.fc15.x86_64
python-pygments-1.4-1.fc15.noarch
python-setuptools-0.6.24-1.fc15.noarch
python-sphinx-1.0.7-2.fc15.noarch

I can get the same information for Fedora 16 and EL 6 if it would help
(EL means RHEL and derivatives: CentOS, SL, etc.). Are you worried
about EL 5? I would think that's the oldest EL that would matter.


Later on, when OpenImageIO has also been successfully packaged, we could
have another package for ocioconvert and ociodisplay.
Do you mean for other distros? oiio is already in Fedora, as I'm the
maintainer for it. I haven't packaged it for any ELs, but that could
be added fairly easily.


The nuke and mari code can probably be left to be compiled/used on an as-needed
basis, as this is pretty site-dependent setup-wise.
Yes, since nuke and Truelight are non-free they will not be in Fedora.
If someone wants to build a commercial system based on Fedora, they can
use the source RPM and build their own packages very easily. That's
the nice thing about using a source RPM: most of the hard work is done
for you.

Thanks,
Richard


Richard Shaw <hobbe...@...>
 

Also worth mentioning... Although Toshio (from the Fedora devel list)
thought it was strange for the python library to serve a dual purpose,
he agreed that the most appropriate action was to put the library
(with the lib prefix) in /usr/lib{,64} and symlink it into site-packages (I
created the link without the lib prefix).
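In spec-file terms, the layout Richard describes might be sketched like this (the PyOpenColorIO file names, source path, and macro usage are illustrative assumptions, not taken from his actual spec):

```spec
# Hypothetical %install fragment: the dual-purpose Python binding is
# installed once under %{_libdir} with the "lib" prefix, then symlinked
# into site-packages without the prefix so Python can import it.
%install
install -m 0755 src/pyglue/libPyOpenColorIO.so \
        %{buildroot}%{_libdir}/libPyOpenColorIO.so
mkdir -p %{buildroot}%{python_sitearch}
ln -s %{_libdir}/libPyOpenColorIO.so \
      %{buildroot}%{python_sitearch}/PyOpenColorIO.so
```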

Richard


Jeremy Selan <jeremy...@...>
 

So I've learned a lot about packaging these last few days!  Who knew there was so much to it? ;)

For those who want to come up to speed as well, these links are great places to start:


First off, I'm now totally sold on us (eventually) using the system installs for all dependencies. (Which doesn't mean we can't ship with those in ext, we just need to play nicely with the system-installed ones.)

And for those interested, here is Richard's Fedora listserv link:
(thanks, Richard, for asking)

My hesitancy to pull out yaml / tinyxml was only based on wanting to guarantee serialization compatibility, but I now believe all of my concerns about serialization precision, etc., can be addressed with additional unit tests.  That's better than just locking down to a single library anyway.  Perhaps we could make the unit tests part of the build process, so if they fail we can mark the install as failed too?

The Fedora team also mentions that it's okay to include bundled libs for convenience, as long as you defer to the system installs if they exist (or something similar).  That seems like the best of both worlds (with the new unit tests, of course).
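A configure-time fallback of this sort might look like the following CMake sketch; the `USE_EXTERNAL_YAMLCPP` option name, the `ext/` layout, and the variable names are assumptions for illustration, not OCIO's actual build files:

```cmake
# Prefer a system yaml-cpp if present; fall back to the bundled copy.
option(USE_EXTERNAL_YAMLCPP
       "Use the system yaml-cpp instead of the bundled copy" ON)

if(USE_EXTERNAL_YAMLCPP)
    find_package(PkgConfig)
    if(PKG_CONFIG_FOUND)
        pkg_check_modules(YAMLCPP yaml-cpp)
    endif()
endif()

if(YAMLCPP_FOUND)
    include_directories(${YAMLCPP_INCLUDE_DIRS})
    set(OCIO_YAML_LIBS ${YAMLCPP_LIBRARIES})
else()
    # No usable system copy: build the sources shipped in ext/
    add_subdirectory(ext/yaml-cpp)
    set(OCIO_YAML_LIBS yaml-cpp)
endif()
```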

For md5, we don't have to worry about including the source code, as the version we use is an accepted copylib.  The same also applies to pystring. (I will update the pystring website docs to make this explicit.) ;)

In terms of how we split up the libs, I'm willing to defer to experts.  But if I had to wager a guess as to what ocio 'core' would consist of, I'd consider it:

- src
- core (lib + shell apps)
- dev (headers etc)

which would require the dependencies:
 lcms, tinyxml, yaml-cpp

I'd probably leave nuke / mari out of all installs (other than the source), as they'll be distributed with the apps anyway.  So that just leaves the docs targets remaining?

Also, in terms of the shell apps, the two that depend on oiio (ociodisplay, ocioconvert) are not really intended to be used or installed necessarily; they're included more as simple working code examples.   But the other command-line apps (ociocheck, ocio2icc, ociobakelut) are useful parts of the base install.

As far as python is concerned, it'd be great if it were part of the core install, but how does this work with regard to python versions?  With the new installation location, is it possible to have both 2.6 and 2.7 installs on the same system? (not sure if this is critical or not)


ACTION ITEMS:

- Colin - Getting OCIO ready for inclusion in distros is going to take a bit of work, and I see no reason to delay the cpack installs.  So COLIN, why don't you start on the 'core' install?  Let us know if there's anything holding you up.

- Jeremy (me): I'll need to add unit tests for serialization and push our yaml-cpp edit upstream.  I'll also update to the latest versions of all serialization libs and make sure everything still works fine.  I'll also update pystring's website to clarify its expected use as a copylib.

- Richard: Any thoughts on how we can make the unit tests part of the installation / build process?

-- Jeremy


Richard Shaw <hobbe...@...>
 

On Sun, Nov 20, 2011 at 7:21 PM, Jeremy Selan <jeremy...@...> wrote:
- Richard.  Any thoughts on how we can make the unit tests part of the
installation / build process?
On the RPM side (including SUSE and other RPM-based systems), it's easy. The
spec file supports a %check section, and a failure in this area
(non-zero exit) will fail the build. I'm sure Debian-based systems
have an equivalent.

Richard


Jeremy Selan <jeremy...@...>
 

Oh - one other dependency consideration I forgot: our use of an external shared_ptr header.

Currently the default is to look for tr1, and if it's not enabled you can optionally specify boost. What should the standard installation do?  Maybe on Linux / OS X we always assume the existence of tr1? How about on Windows?

-- Jeremy


Paul Miller <pa...@...>
 

On 11/20/2011 7:44 PM, Jeremy Selan wrote:
Oh - one other dependency consideration I forgot is our use of an
external shared_ptr header.

Currently the default is to look for tr1, and if it's not enabled you
can optionally specify boost. What should the standard installation do?
Maybe on linux / osx, we always assume the existence of tr1? How about
on windows?
I'm not sure you can assume tr1 on Windows. It first showed up in VS2008 SP1, but some people (including us) aren't using SP1, for compatibility reasons with certain runtimes. However, if you need tr1 you can get it from boost (which is what we do). I have no problem with requiring boost, though I can see it being quite the pain for people who aren't yet using it on Windows. A catch-22 if I ever saw one. Maybe you should just assume people have tr1 after all. :-)


Colin Doncaster <colin.d...@...>
 

Will do - I'll try to get everything humming along on Windows too, at least with the core tools. I'm hoping I'll have a few days this coming week to tackle this.

cheers

On 2011-11-20, at 8:21 PM, Jeremy Selan wrote:

- Colin - Getting OCIO ready for inclusion in distros is going to take a bit of work, and I see no reason to delay the cpack installs. So COLIN, why don't you start on the 'core' install? Let us know if there's anything holding you up.


Richard Shaw <hobbe...@...>
 

Hey guys, I was just wondering: would it be appropriate at this point
to set up a branch for "unbundled libs"?

That way it could be worked on separately until it's ready to merge.

Richard


Jeremy Selan <jeremy...@...>
 

Sure, sounds great.  How about 'unbundled'?  (Not sure how spaces in branch names are handled.)
-- Jeremy


On Thu, Dec 1, 2011 at 4:45 PM, Richard Shaw <hobbe...@...> wrote:
Hey guys, I was just wondering: would it be appropriate at this point
to set up a branch for "unbundled libs"?

That way it could be worked on separately until it's ready to merge.

Richard


Richard Shaw <hobbe...@...>
 

On Thu, Dec 1, 2011 at 6:49 PM, Jeremy Selan <jeremy...@...> wrote:
Sure, sounds great.  How about 'unbundled'?  (not sure how spaces in branch
names are handled).
Works for me!

I can hack at the unbundling, but I'm probably not going to be much
help on the unit tests.

What's the approach there? I guess we could come up with as many
corner cases as we can think of and make sure they're handled in the
same and/or sane manner with different versions/distro versions of the
libraries.

Richard


Jeremy Selan <jeremy...@...>
 

I'll add the additional unit tests related to serialization.

But if you could hook the existing unit test process up to the build process, that would be appreciated.  The big issue is making sure we don't have floating-point serialization differences across platforms / libraries.

(Note that we already have one unit test failure on Mac related to serialization; I'll look at fixing that at the same time.)

-- Jeremy


On Thu, Dec 1, 2011 at 4:57 PM, Richard Shaw <hobbe...@...> wrote:
On Thu, Dec 1, 2011 at 6:49 PM, Jeremy Selan <jeremy...@...> wrote:
> Sure, sounds great.  How about 'unbundled'?  (not sure how spaces in branch
> names are handled).

Works for me!

I can hack at the unbundling, but I'm probably not going to be much
help on the unit tests.

What's the approach there? I guess we could come up with as many
corner cases as we can think of and make sure they're handled in the
same and/or sane manner with different versions/distro versions of the
libraries.

Richard


Richard Shaw <hobbe...@...>
 

On Thu, Dec 1, 2011 at 7:05 PM, Jeremy Selan <jeremy...@...> wrote:
I'll add the additional unit tests related to serialization.

But if you could hook the existing unit test process up to the build
process, that would be appreciated.  The big issue is making sure we don't
I already have "make test" in the %check section of my rpmbuild spec
file. Is that what you mean? So far I haven't had any test failures in
any of my builds.

On a side note, unbundling yaml-cpp should be pretty easy since it has
a pkg-config file; however, tinyxml doesn't have one, or an official
FindTinyXML either. I found one that handles UNIX systems, but to be
acceptable I think it should handle APPLE and WIN32 as well, unless we
just want to use the bundled version for Windows.
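A minimal FindTinyXML module for the no-pkg-config case might look like this CMake sketch; the variable names follow common Find-module conventions, and this is not the module Richard found or any official one:

```cmake
# FindTinyXML.cmake (hypothetical): locate the header and library on
# UNIX, APPLE, and WIN32 alike, then report via the standard mechanism.
find_path(TINYXML_INCLUDE_DIR tinyxml.h
    PATH_SUFFIXES tinyxml)
find_library(TINYXML_LIBRARY
    NAMES tinyxml tinyxmld)

# Sets TINYXML_FOUND and prints the usual found/not-found message.
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(TinyXML DEFAULT_MSG
    TINYXML_LIBRARY TINYXML_INCLUDE_DIR)

mark_as_advanced(TINYXML_INCLUDE_DIR TINYXML_LIBRARY)
```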

Richard


Malcolm Humphreys <malcolmh...@...>
 

On 03 Dec, 2011, at 01:10 AM, Richard Shaw <hob...@...> wrote:

On a side note, unbundling yaml-cpp should be pretty easy since it has
a pkg-config file; however, tinyxml doesn't have one, or an official
FindTinyXML either. I found one that handles UNIX systems, but to be
acceptable I think it should handle APPLE and WIN32 as well, unless we
just want to use the bundled version for Windows.
 
I think it's important to support both bundled and unbundled libs. It's a necessary evil inside VFX shop software configurations. So you can use the bundled builds for the APPLE and WIN32 cases and have this optional in the UNIX cases.

.malcolm



Richard


Richard Shaw <hobbe...@...>
 

On Fri, Dec 2, 2011 at 8:19 AM, Malcolm Humphreys
<malcolmh...@...> wrote:
I think it's important to support both bundled and unbundled libs. It's a
necessary evil inside VFX shop software configurations. So you can use the
bundled builds for the APPLE and WIN32 cases and have this optional in the
unix cases
Makes sense, but to make sure I fully understand... Are you saying
that even on UNIX systems bundled libraries should still be used
by default, with an option to use system libraries mainly for
package maintainers whose distributions do not allow bundled libs?

Thanks,
Richard


Malcolm Humphreys <malcolmh...@...>
 

On 03 Dec, 2011, at 01:35 AM, Richard Shaw <hob...@...> wrote:

On Fri, Dec 2, 2011 at 8:19 AM, Malcolm Humphreys
<malcolmh...@...> wrote:
> I think it's important to support both bundled and unbundled libs. It's a
> necessary evil inside VFX shop software configurations. So you can use the
> bundled builds for the APPLE and WIN32 cases and have this optional in the
> unix cases

Makes sense, but to make sure I fully understand... Are you saying
that even on UNIX systems bundled libraries should still be used
by default, with an option to use system libraries mainly for
package maintainers whose distributions do not allow bundled libs?
 
Yeah, that sounds great. I will leave it up to Jeremy to decide what the default is on UNIX.

I personally prefer what you are suggesting, as this means that if you check out and build the code,
what ends up in the binary on all platforms is the same, and the distributions'
packaging is treated as an exception to this. But I don't have particularly strong feelings about it.

.malcolm



Richard Shaw <hobbe...@...>
 

On Fri, Dec 2, 2011 at 11:13 AM, Malcolm Humphreys
<malcolmh...@...> wrote:
Yeah that sounds great. I will leave it up to Jeremy to decide what the
default is on UNIX.

I personally prefer what you are suggesting, as this means that if you
check out and build the code, what ends up in the binary on all
platforms is the same, and the distributions' packaging is treated as
an exception to this. But I don't have particularly strong feelings
about it.
Hmm... I never tried "hiding" options, but I think in this case it
would be appropriate. Meaning I'll set up options such as
"USE_EXTERNAL_YAMLCPP" and "USE_EXTERNAL_TINYXML" but put them in an
IF(UNIX) conditional so they only appear on *nix systems.

Now that I think about it, I'll probably have to do it as "IF(UNIX AND
NOT APPLE)", because OS X qualifies as UNIX, right?

Richard


Jeremy Selan <jeremy...@...>
 

My preference is to keep using the bundled libs by default on all architectures (for the moment).

Currently, commercial app developers who want to ship OCIO alongside their app (aka Foundry, etc.) will likely want to build it in this 'self-contained' manner to keep distribution as simple as possible.  I'd like to keep life simple for these folks, and make it clear that what they're doing is ok.  The same also applies to most of OCIO's current users (large VFX / animation studios), who will want to roll it out with binaries that are ABI compatible with commercial releases.

But I'm open to revisiting this in the future, once we get all the new unit tests in place and can vouch that the unpatched libraries actually produce the same serialization results.

-- Jeremy


On Fri, Dec 2, 2011 at 9:13 AM, Malcolm Humphreys <malcolmh...@...> wrote:
On 03 Dec, 2011, at 01:35 AM, Richard Shaw <hobbe...@...> wrote:

On Fri, Dec 2, 2011 at 8:19 AM, Malcolm Humphreys
<malcolmh...@...> wrote:
> I think it's important to support both bundled and unbundled libs. It's a
> necessary evil inside VFX shop software configurations. So you can use the
> bundled builds for the APPLE and WIN32 cases and have this optional in the
> unix cases

Makes sense, but to make sure I fully understand... Are you saying
that even on UNIX systems bundled libraries should still be used
by default, with an option to use system libraries mainly for
package maintainers whose distributions do not allow bundled libs?
 
Yeah, that sounds great. I will leave it up to Jeremy to decide what the default is on UNIX.

I personally prefer what you are suggesting, as this means that if you check out and build the code,
what ends up in the binary on all platforms is the same, and the distributions'
packaging is treated as an exception to this. But I don't have particularly strong feelings about it.

.malcolm




Richard Shaw <hobbe...@...>
 

OK, I've got good news and I've got bad news. I'll start with the good
since it's shorter :)

Good:
- I've gotten a successful build using the yaml-cpp system library.
- The current unit test works :) See below.

Bad:
- In order to get a good build I have to use yaml-cpp version 0.2.7,
which is only available in the upcoming Fedora 17.
- The ocio_core_tests fails with stock 0.2.7.

Richard