Re: S-Log2/S-gamut 10-bit 422 XAVC to Linear ACES RGB EXR
Vincent Olivier <vin...@...>
Hi Jeremy,
Thanks so much for your reply. On 2013-05-29, at 12:09 AM, Jeremy Selan <jeremy...@...> wrote:
For now, I "only" want to find a way to get the most accurate, richest, "objectively" (standalone) scene-referred images from my camera in ACES-linear. This is the first step: to which degree can the images from the F55/F65 be considered a physically accurate photographic reference measurement of the the scene for processing and also for archival (I don't think only keeping the original camera-referred data is sufficient in the long term). I'm also looking for a way to keep other exposure-related data such as t-stop and sensitivity and other in-camera processing logs into the EXR headers to be able to physically qualify a scene based with regards to the image. I'm a programmer and a photographer. So, applications I'm looking forward to tackle are, first and foremost, of course, related to the mathematical and computational aspects of color and lighting æsthetics. I mean, I am very bored with the state of cinematography right now, there are really only 2 looks applied to every single movie: the Transformers anamorphic-crushed lensflare blue and the HDSLR peaches and cream indy porridge. Since The Archers (especially Red Shoes and Black Narcissus, and notice that these 2-words titles start with the name of a color), I haven't seen cinematography that can get a braingasm out of my visual cortex. Just look at what is created by the contrast between the herbal arrangement and the dresses in the the fashion show at the end of Queen Cotton (starting at 10:00). That's what I call art. I think the job of a cinematographer (a good one anyways) has to extend all the way to the computer imagery, the compositing and the image finishing of the movie in the digital realm. That's my personal goal. Image-based lighting for unbiased rendering is another area of innovation I will be interested to look into in the near future once I get comfortable with ACES. I cannot afford access to Arnold nodes, but some Arnold core developers have contributed to Blender's Cycles in a significant way, and I really like this renderer, and it's open source. And the color pipeline is a really hot topic right now in the Blender community. I'm not sure this answers you question. But I hope it clarifies my intentions a bit.
I only have access to Rec 709 monitors, some of them OLED. So I look at the images through an ACES-to-Rec 709 LUT, but the computational space is ACES, and as far as measurements are concerned I rely on the various analytical tools, mostly histograms and vectorscopes. (I'm building a custom CIE [x,y] diagram vectorscope to track what happens to image data going in and out of color transforms, as a way to visually represent those transforms: VERY useful when you are trying to communicate where you want to go with your color pipeline.)
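The per-pixel math behind such a vectorscope is small; here is a minimal sketch, assuming linear ACES2065-1 (AP0) input and using the standard ACES-to-XYZ matrix (the function name is mine):

    #include <cmath>

    // Project a linear ACES2065-1 (AP0) pixel to CIE XYZ, then to [x,y]
    // chromaticity coordinates for plotting on the CIE diagram.
    void acesToXy(float r, float g, float b, float &x, float &y)
    {
        const float X = 0.9525523959f * r + 0.0000936786f * b;
        const float Y = 0.3439664498f * r + 0.7281660966f * g - 0.0721325464f * b;
        const float Z = 1.0088251844f * b;

        const float sum = X + Y + Z;
        if (std::fabs(sum) < 1e-10f) { x = y = 0.0f; return; } // black has no chromaticity
        x = X / sum;
        y = Y / sum;
    }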
Well, to me, before it's ACES, it's garbage™. ;-) My code reflects this philosophy. I don't think I will find a need for independent S-Gamut and S-Log2 transforms. Maybe the YCbCr-to-RGB transform will become separable from the S-Gamut/S-Log2 handling, depending on the original footage input, yes. But I am also looking for the most computationally efficient code for a specific application (and I will probably write one big CUDA kernel for that too), and gathering the code into one step gets me there (at the price of elegance, perhaps, but performance is important in a production context). And this code is merely a proof of concept for now.
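For reference, this is the shape of the YCbCr-to-RGB step I mean. The sketch below assumes Rec 709 matrix coefficients (Kr = 0.2126, Kb = 0.0722), which is my assumption about what the XAVC stream signals, not something confirmed by the camera documentation:

    // YCbCr to RGB with Rec 709 matrix coefficients (an assumption; the
    // correct coefficients depend on what the stream actually signals).
    // Inputs are normalized: Y in [0,1], Cb and Cr in [-0.5, 0.5].
    void ycbcrToRgb709(float Y, float Cb, float Cr,
                       float &R, float &G, float &B)
    {
        const float Kr = 0.2126f, Kb = 0.0722f;
        const float Kg = 1.0f - Kr - Kb; // 0.7152

        R = Y + 2.0f * (1.0f - Kr) * Cr; // Y + 1.5748 * Cr
        B = Y + 2.0f * (1.0f - Kb) * Cb; // Y + 1.8556 * Cb
        G = (Y - Kr * R - Kb * B) / Kg;  // recover G from the luma equation
    }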
And I would love to contribute the code I'm writing back to OCIO, in the elegant dynamic linear 3D color space transform combination you created. It's just that I need to get my YCbCr to ACES transform right, first. Why don't Imageworks have the color computing equivalent of Google's Summer of Code? I'll bring my sleeping bag and my toothbrush at your signal! ;-)
Agreed on the [0, 255] to [0, 1023] scaling, but then saying that the extra 2 bits are the least significant doesn't really make sense… Anyway, if we both think that's what he meant and the coding makes sense visually, then that's that.
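To make the two readings concrete (a small sketch; the function names are mine):

    #include <cstdint>

    // Reading 1: a left shift by 2 puts the two extra bits in the least
    // significant positions, so 255 maps to 1020, not 1023.
    uint16_t promoteByShift(uint8_t cv) { return uint16_t(cv) << 2; }

    // Reading 2: a full-range rescale maps 255 to 1023 exactly (rounded).
    uint16_t promoteByScale(uint8_t cv) { return uint16_t((cv * 1023 + 127) / 255); }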
Agreed. That's how I do it in lines 187 to 191. But as far as headroom/footroom are concerned, I don't go from 8 to 10 bits: I simply subtract the assumed 10-bit footroom value from the sample and then divide it by the 10-bit maximum.
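In code, that normalization amounts to this minimal sketch (the footroom constant of 64 is the usual SMPTE value, but it's my assumption here, not pinned down above):

    // Strip the assumed 10-bit footroom and divide by the 10-bit maximum.
    inline float normalize10bit(int cv)
    {
        const int   footroom = 64;      // assumed 10-bit footroom
        const float max10    = 1023.0f; // 10-bit maximum code value
        return float(cv - footroom) / max10;
    }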
I chose a power function because I do not know the endianness at this point. It is not implemented yet, but I want to be able…
Yes, I'm partnering with a local FX company to take reference shots of a Macbeth chart plus an f-stop chart, and we'll see. But I would REALLY appreciate someone from Sony Electronics validating the code at some point. I know they probably don't start from something linear in-camera to get to S-Log2/S-Gamut, but they have surely done something similar to what I'm doing… I just find it odd that they didn't publish it (if they have it, which might not be the case), like they did for the original S-Log/S-Gamut transform in the 2009 whitepaper.
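For comparison, the 2009 whitepaper's original S-Log curve is simple enough to quote in code. The constants below are from that whitepaper as I remember them, so double-check before relying on them, and note this is S-Log, not S-Log2:

    #include <cmath>

    // Original 2009 S-Log encode: linear reflectance t -> code value y.
    float slogEncode(float t)
    {
        return (0.432699f * std::log10(t + 0.037584f) + 0.616596f) + 0.03f;
    }

    // Inverse: code value y -> linear reflectance.
    float slogDecode(float y)
    {
        return std::pow(10.0f, (y - 0.616596f - 0.03f) / 0.432699f) - 0.037584f;
    }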
I can and did output separate grayscale files for each of the Y, Cb and Cr channels. But I didn't find a way to put all three in one file.
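For what it's worth, OpenEXR channels are just named planes, so the three can live in a single file. A minimal sketch with OpenEXR's C++ API (the channel names "Y", "CB", "CR" and the choice of FLOAT pixels are mine, not anything mandated by the ACES container):

    #include <ImfOutputFile.h>
    #include <ImfChannelList.h>
    #include <ImfFrameBuffer.h>
    #include <ImfHeader.h>

    // Write three float planes as the named channels "Y", "CB" and "CR"
    // of a single scanline EXR file.
    void writeYCbCrExr(const char *path, int width, int height,
                       float *y, float *cb, float *cr)
    {
        Imf::Header header(width, height);
        header.channels().insert("Y",  Imf::Channel(Imf::FLOAT));
        header.channels().insert("CB", Imf::Channel(Imf::FLOAT));
        header.channels().insert("CR", Imf::Channel(Imf::FLOAT));

        const size_t xs = sizeof(float);
        const size_t ys = sizeof(float) * width;

        Imf::FrameBuffer fb;
        fb.insert("Y",  Imf::Slice(Imf::FLOAT, (char *) y,  xs, ys));
        fb.insert("CB", Imf::Slice(Imf::FLOAT, (char *) cb, xs, ys));
        fb.insert("CR", Imf::Slice(Imf::FLOAT, (char *) cr, xs, ys));

        Imf::OutputFile file(path, header);
        file.setFrameBuffer(fb);
        file.writePixels(height);
    }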
Yes, I will be looking for a raw recorder to get their 16-bit S-Log2/S-Gamut RAW to OpenEXR ACES transform. However, my understanding is that they are at the same point as I am: their S-Log2/S-Gamut transforms are still a work in progress, even for the F65 (based on what I can see in Vegas and in their RAW Viewer).
The ACES community, that's us, right? ;-) One reason I sent this message is to see if there is interest in building a consensus around community-official LUTs. And there is: as you must know, VFX supervisors are in a permanent state of panic over the increasing influx of S-Log2/S-Gamut material coming their way, and right now they settle on the "least visually horrible" Frankenstein transform they can find. And since there is interest, and since Sony is pushing hard, with all its political weight, to have one and only one camera-referred space and gamma standardized (SMPTE ST 2048, xvYCC), preferably their own, I am wondering if we can all join our efforts to at least corroborate the findings.
That being said, I am in touch with them (Sony Electronics) to see if they have information in-house that they could let me take a look at. We are at the NDA stage right now. My feeling is that they "don't": that S-Gamut/S-Log2 is mostly a marketing initiative for the moment, and that they have yet to produce rigorous statements about the technical nature of the ideal transforms.
Yes, please!
Hasn't OpenCL gone the way of Cg already? ;-)
Vincent
PS: Excuse my badly written English. It is really not my mother tongue, nor my working language.
PPS: I'm a total sucker for your ideas that got implemented in Katana, BTW. Really, this is sexy stuff to me. Have you ever thought about extending/abstracting the Katana ontology/workflow/project-management system into more than just postproduction? Because (and my comment is nothing compared to the recognition you have already had for this) I think that looking at a whole movie in this way (including previz, physical in-camera capture, audio, etc.) could be one heck of a game-changer for filmmaking. That probably deserves another thread, or another list entirely, but I thought I'd pitch it here while I have your attention!