Presuming so, I would check two things. In your gpu version, make sure the gpu texture buffer you're loading into is floating point: you'll want something like GL_RGB16F_ARB (as opposed to GL_RGB). If you were referencing the ociodisplay app as an example, it unfortunately makes exactly this mistake. (It's fixed in my most recent code submission, though that hasn't yet been rolled into the trunk.)
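For concreteness, here's a minimal sketch of the float-texture upload (lut3dTexID, LUT3D_EDGE_LEN, and lut3d are placeholder names of mine, not code from ociodisplay):

    // Minimal sketch: upload the 3d lut as a floating-point texture.
    // lut3dTexID, LUT3D_EDGE_LEN, and lut3d are placeholder names.
    glBindTexture(GL_TEXTURE_3D, lut3dTexID);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB16F_ARB,   // float internal format
                 LUT3D_EDGE_LEN, LUT3D_EDGE_LEN, LUT3D_EDGE_LEN,
                 0, GL_RGB, GL_FLOAT, lut3d);
    // Passing GL_RGB as the internal format instead quantizes the lut to
    // fixed point, clipping anything outside [0,1] -- which produces
    // exactly the kind of cpu/gpu mismatch described here.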
Yep, this fixed it, and yeah, I was just copying large sections of ociodisplay.
The other thing to check is the allocation for the colorspace. When a 3d lut stage is necessary, the allocation determines how the colorspace is sampled into the lut. For 'typical' HDR data, I would recommend something like:
allocation: lg2
allocationvars: [-12, 6]
What this means is that the gpu should sample the colorspace using a logarithmic allocation (code value proportional to stops), from 2**-12 to 2**6. I.e., anything less than about 0.0002 or greater than 64.0 is clamped.
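As a rough illustration of that mapping (my own sketch of the math, not OCIO's actual code):

    #include <algorithm>
    #include <cmath>

    // Rough sketch of an lg2 allocation with allocationvars [-12, 6]:
    // map a scene-linear value to a [0,1] lut-sampling coordinate,
    // proportional to stops. Inputs below 2**-12 or above 2**6 clamp.
    float lg2Allocation(float value, float minExp = -12.0f, float maxExp = 6.0f)
    {
        float stops = std::log2(std::max(value, std::exp2(minExp)));
        float t = (stops - minExp) / (maxExp - minExp);
        return std::min(std::max(t, 0.0f), 1.0f);
    }

For example, scene-linear 1.0 is 12 stops above the floor, so lg2Allocation(1.0f) lands at 12/18 ≈ 0.667 of the way through the lut.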
Ok, that's good to know.
I've been using whatever the values are in the current spi-vfx profile, just to cut out any added malcolm factor.
.malcolm
-- Jeremy
On Wed, Jan 12, 2011 at 9:13 PM, Malcolm Humphreys <malcolmh...@...> wrote:
While working on this ocio photoshop plugin I noticed a difference between the cpu and gpu paths.
Attached is a screenshot.
The image on top has already been imported by the plugin and processed on the cpu, while the dialog below is a GL canvas using the gpu path for preview.
I'm guessing this is the allocation we are using on the gpu?