OpenVDB Jenkins builds broken due to Boost Python not found
Hi Everyone,

Since the move of OpenVDB to ASWF, the Jenkins build is broken because CMake can no longer find the Boost Python library; you can see the error message here: I've traced it down to this commit, whose changes cause the build to fail: https://github.com/AcademySoftwareFoundation/openvdb/commit/566ffa6b802e489011eea8917f0db81d431d47ed

I'm not sure which Linux distro that patch was targeting, but on Ubuntu 16.04 the Boost Python library is located in /usr/include/boost/python/ (with no trailing version number). I guess we need to improve the CMake script to detect whether there is a trailing version or not. Does anyone else have any recommendations on what we should do here?

Thanks,
Thanh
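One possible shape for such a detection, sketched here as an illustration only (the `VDB_PYTHON_MAJOR`/`VDB_PYTHON_MINOR` variable names are hypothetical, not from the actual OpenVDB build scripts), is to try the version-suffixed Boost component first and fall back to the undecorated name:

```cmake
# Sketch only: try the version-suffixed Boost.Python component first,
# then fall back to the undecorated name used by e.g. Ubuntu 16.04.
# VDB_PYTHON_MAJOR / VDB_PYTHON_MINOR are illustrative variable names.
find_package(Boost QUIET COMPONENTS python${VDB_PYTHON_MAJOR}${VDB_PYTHON_MINOR})
if(NOT Boost_PYTHON${VDB_PYTHON_MAJOR}${VDB_PYTHON_MINOR}_FOUND)
  # Undecorated fallback (libboost_python.so with no version suffix)
  find_package(Boost REQUIRED COMPONENTS python)
endif()
```

The design choice here is to prefer the more specific name so that a system with several Boost.Python variants installed picks the requested one, while still building on distros that ship only the plain `python` component.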
Kimball Thurston
switch to pybind11? ;-P
This is what Larry was talking about at the last meeting, and in a much more concrete manner than I had babbled about in our first meeting.

I think it is a great idea to make a repo that has template CMake macros and a collection of all the Find* things we need for the various projects in ASWF. People can then request that those Find* modules be pushed upstream to CMake, and each project can keep a stash of the current ones it needs, revving versions as necessary to get whatever new macros, etc. they want.
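One way that stash could be consumed, sketched here under the assumption of a hypothetical vendored directory name (`cmake/aswf-modules` is illustrative), is to prepend it to CMake's module search path:

```cmake
# Sketch only: "cmake/aswf-modules" is a hypothetical directory holding a
# pinned copy of shared Find*.cmake modules vendored from a common repo.
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake/aswf-modules")

# find_package() in MODULE mode now consults the shared modules, so every
# project using the stash resolves dependencies the same way.
find_package(NumPy)
```

Pinning a copy per project (rather than fetching the latest) keeps builds reproducible while still letting each project rev to newer modules on its own schedule.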
Back to your actual question: in OpenEXR, we do a very similar thing to compile for a specific Python variant, except we don't put a period between the major and minor versions when asking for Boost:
.... <paraphrasing a bit here>
##### NB: with a period between maj.minor
find_package(PythonLibs ${OPENEXR_PYTHON_MAJOR}.${OPENEXR_PYTHON_MINOR})
##### NB: withOUT a period between
find_package(Boost COMPONENTS python${OPENEXR_PYTHON_MAJOR}${OPENEXR_PYTHON_MINOR})
if (NOT Boost_PYTHON${OPENEXR_PYTHON_MAJOR}${OPENEXR_PYTHON_MINOR}_FOUND)
  message(WARNING "requested boost python version not found...")
endif()
#####
find_package(NumPy)
....
Hope that helps,
Kimball
Larry Gritz
Agreed about pybind11. I can't recommend it strongly enough.
Kimball did a good job of explaining what I was getting at, but I'll make a pitch on the TAC list in a new thread as well. -- lg
Daniel Heckenberg
It's never good to see a broken build, but in this case, what serendipity! It provides the perfect motivation for the CMake tools project which Larry has proposed on the TAC mailing list, and demonstrates the kinds of ambiguities that we might reduce by defining a working set of VFX Platform builds to use as dependencies.

But back to the problem at hand... Can the OpenVDB folks address this, or should an issue be raised?

Thanks,
Daniel
Nick Porcino
Boost in general, and Boost Python in particular, complicates build environments, maintenance, and compiler migrations. I would like to +1 the suggestion of a move to pybind11. This might be an excellent opportunity for community involvement.

- Nick
Dan Bailey
Looking at the version of Boost that's being pulled in (1.58), that's actually older than the version specified by the VFX Reference Platform for 2017, which is 1.61 (https://www.vfxplatform.com/).
I think we should ideally try to stick with the versions coming from the Reference Platform if possible, rather than getting whatever comes with the Linux distro. More than likely this will mean a bit more work building from source and adding Boost headers/libraries for these specific versions into Nexus and pulling them down. Likewise, the compiler is GCC 5.4.0, which isn't the version specified by the VFX Reference Platform.

I appreciate we're just trying to get things up and running right now (and we don't even subscribe to this ideology in Travis for OpenVDB), so for expediency I think it makes sense to change CMake in whatever way is needed. Having said that, I would suggest that the Houdini plugins are probably a higher priority to figure out than the Python bindings, as I believe most people are exposed to OpenVDB through Houdini these days.
Daniel Heckenberg
Hi Dan,

I don't think it's overly ambitious to address a number of these points, as most will be common to ASWF projects and CI. Specifically:

1) The VFX Reference Platform build environment and dependencies are a general goal for the ASWF CI, and we'll have a working group on that this week. Please join us (although I know the time is a little inconvenient for the UK).

2) Larry's proposal for robust dependency discovery modules for CMake projects would address this specific problem nicely. Although the Ubuntu-provided Boost version is old, it seems to be more a case of the ambiguity/variety of Boost library decoration choices that has caused the build regression.

3) We should now have the requisite knowledge and support to incorporate the OpenVDB Houdini builds and tests into the ASWF CI. I think we were waiting for the project adoption and repo transfer to be completed, so we should be good to start moving on that. I'll follow up with an email.

That really just leaves us with the broken build as an open problem. I defer to the OpenVDB project and others, but my 2c: pybind11 is definitely a worthwhile step, but I expect that it is a non-trivial effort and that it would create changes in the OpenVDB Python bindings such that we couldn't really build a VFX Platform 2018 or 2019 artifact.

Cheers,
Daniel
Aloys Baillet
Hi,

I've been setting up CentOS-7-based VFX Platform Docker images here: https://github.com/AnimalLogic/docker-usd/blob/master/linux/centos7/vfx-lite-2018/Dockerfile

It's actually quite a nice way to go. I don't know how Packer works exactly, but these Dockerfiles can be reused to set everything up the right way for a CentOS-7-based build.

Cheers,
Aloys

Aloys Baillet
Lead Software Developer @ Animal Logic
On Tue, Oct 30, 2018 at 6:52 AM Aloys Baillet <aloys.baillet@...> wrote:
Packer is very similar in purpose to a Dockerfile: its purpose is to create a reusable VM image, similar to how a Dockerfile's purpose is to create a reusable container image. The reason we use Packer in CI is that our CI spins up VMs. If folks find Docker to be more handy, we could have a minimal Packer image that spins up a VM with Docker installed and then pulls down the Docker image to run the build, achieving a Docker-based build. We have a few projects that do this today.

Regards,
Thanh
Trevor Thomson
Hi,
We're not using Docker, but we're doing something conceptually similar to Aloys to set up our platforms. We build the compiler, the VFX Platform libraries, and all their dependencies into unique directories per VFX Platform year per DCC. It's a little heavy-handed, but then the entire platform is available for use in one location.

- Trevor
On Tue, Oct 30, 2018 at 9:34 AM Trevor Thomson <tgt@...> wrote:

Hi,

I'm not sure how these dependencies are used within a build, but it's fairly easy to switch builders in our CI cloud. If single builds are going to run on a very specific configuration of the platform, I'd recommend creating separate clean images with exact library stacks for the tests, such as an image per platform year. This ensures that when running a build, only the dependencies we are expecting for that build are being pulled in, and nothing unexpected or undocumented comes along. Then the tests can pick the specific platforms they want to run against at runtime.

Regards,
Thanh
Aloys Baillet
Hi Thanh,
I've been reading a bit more about Packer, and indeed it should be fairly easy to set up a Docker image that matches the VFX Platform. One reason I've been angling towards Docker is that NVIDIA (which most VFX studios have lots of hardware from) provides a very rich set of base images that allow us to build against GL and CUDA libraries, which some VFX packages have optional dependencies on (especially OpenSubdiv). Having pre-built GL-enabled variants of these packages might be desirable, especially Qt, which is quite tricky to build properly and which all the vendors provide in slightly different variations...

That said, to get started on the official VFX Platform 2018, you would just need a vanilla CentOS-7 VM/Docker image and run this:

yum install -y centos-release-scl-rh
yum install -y devtoolset-7-toolchain make
source /opt/rh/devtoolset-7/enable

Then install CMake using this (the cmake in CentOS 7 is way too old...):

cmake-3.12.3-Linux-x86_64.sh --skip-license

Cheers,
Aloys
Larry Gritz
As an aside, VFX Platform is, as they have repeated many times, not a list of endorsed or fundamental packages, but rather an agreement on versions for particular packages that have been historically problematic and plagued by "versionitis" (particularly in how they were used by Foundry, Autodesk, and Side Effects). VFX Platform is great at that mission; I'm not knocking what they're doing.

But there is a separate, currently nonexistent, but badly needed list of core packages and versions that more accurately forms the basis of a modern VFX software stack, and that is the list that our CI is going to need to cater to. It overlaps VFX Platform somewhat, but is mostly a superset. Whoever is scoping the work for these things must remember that VFX Platform is not the complete list of the packages that need to be available as core dependencies and that need to be included in the matrix of interoperability that we are trying to test with CI. And also that we will need a "version next" entry in the test matrix that intentionally defies both lists by building all the TOT checkouts against each other, so that we can catch upcoming incompatibilities early as part of the review/CI process for PRs.

--
Larry Gritz
lg@...