
Re: [Openexr-devel] Interpreting Depth?


From: Jonathan Litt
Subject: Re: [Openexr-devel] Interpreting Depth?
Date: Tue, 3 Jun 2014 12:32:20 -0700 (PDT)

For what it's worth, V-Ray also uses type A) for its standard z-depth buffer. I inquired about this a long time ago and they had a reasonably logical answer: B) only works for standard camera projections; it doesn't work for other projection types such as spherical and cylindrical. So they went with A) for consistency. They could probably be convinced to add an option to use B) for regular cameras, but it was easy enough to write a conversion expression in Nuke, so we didn't pursue it further. It's also easy to generate a "camera space P" AOV and derive B) from that. They also use B) for the native depth channels in deep EXR 2.0 files, which seems like an admission that the old way is just legacy at this point.
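For concreteness, the "camera space P" route is just basic vector math. Here's a rough sketch (my own illustration, not V-Ray's or Nuke's API), assuming the AOV stores positions in a camera space that looks down -Z; flip the sign for a +Z camera:

    // Sketch only: deriving both depth conventions from a camera-space
    // position AOV.  Assumes a right-handed camera space looking down -Z.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // B) distance from the image plane: the position's component along the view axis.
    inline float planarDepth (const Vec3 &camP)
    {
        return -camP.z;
    }

    // A) distance from the camera location: straight-line length of the position vector.
    inline float radialDepth (const Vec3 &camP)
    {
        return std::sqrt (camP.x * camP.x + camP.y * camP.y + camP.z * camP.z);
    }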

My $.02: no harm in asking 3delight to add an option for this.



On Friday, May 30, 2014 12:21 PM, Larry Gritz <address@hidden> wrote:


"depth" (aka "Z") always means your choice B. That's true for every textbook, file format, or renderer, from OpenGL z-buffers to RenderMan shadow map files.

I can't speak for 3delight, but if your interpretation is correct, they are just wrong (and incompatible with other renderers they try hard to be compatible with), or have chosen a very strange naming convention that differs from the rest of the computer graphics field.



On May 29, 2014, at 6:34 PM, Daniel Dresser <address@hidden> wrote:

I'm not exactly sure what the best way of wording this question is, which may be why I haven't turned up many answers in my searching.  Hopefully someone here can suggest the best terminology and/or point me to an answer.

Assuming that we want to store depth in an image using unnormalized world-space distance units, there are two main ways we could do this:
A) Distance from the point location of the camera (i.e., if the camera is facing directly at a flat plane, the depth value is highest at the corners and lowest in the middle)
B) Distance from the image plane (i.e., if the camera is facing directly at a flat plane, the depth value is constant)

The depth channel in an OpenEXR image is by convention named Z, which suggests interpretation B), where depth is orthogonal to the pixel X/Y location.

I tried looking through the document "Interpreting OpenEXR Deep Pixels" for any sort of suggestion one way or another, but all I could find was:
"Each of these samples is associated with a depth, or distance from the viewer".  I'm not sure how to parse this: it's either defining depth as "distance from the viewer", which suggests A), or it is saying you could use either A) or B).

Is there a convention for this in OpenEXR?  The two renderers I currently have convenient access to are Mantra, which does B), and 3delight, which does A).  I'm wondering whether I should try to pressure 3delight to switch to B), or whether our pipeline needs to support and convert between both.  It shouldn't be hard to convert back and forth, but it's one more confusing thing that can go subtly wrong when moving data between renderers.
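For what it's worth, the back-and-forth conversion for a standard perspective camera is just a per-pixel scale factor: the length of the camera-space ray through the pixel. A minimal sketch (my own, assuming a pinhole camera with square pixels, a known horizontal field of view in radians, and a camera looking down -Z; the function names are made up for illustration):

    #include <cmath>

    // Length of the camera-space ray through pixel (px, py), scaled so that
    // its component along the viewing axis is 1.
    inline float pixelRayScale (float px, float py, int width, int height, float fovx)
    {
        float tanHalf = std::tan (fovx * 0.5f);
        float ndcX = ((px + 0.5f) / width)  * 2.0f - 1.0f;           // -1 .. +1
        float ndcY = ((py + 0.5f) / height) * 2.0f - 1.0f;           // -1 .. +1
        float x = ndcX * tanHalf;                                    // horizontal extent
        float y = ndcY * tanHalf * (float(height) / float(width));   // square pixels
        return std::sqrt (x * x + y * y + 1.0f);
    }

    // A) radial distance  ->  B) distance from the image plane
    inline float planarFromRadial (float radial, float px, float py,
                                   int width, int height, float fovx)
    {
        return radial / pixelRayScale (px, py, width, height, fovx);
    }

    // B) distance from the image plane  ->  A) radial distance
    inline float radialFromPlanar (float planar, float px, float py,
                                   int width, int height, float fovx)
    {
        return planar * pixelRayScale (px, py, width, height, fovx);
    }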

-Daniel


--
Larry Gritz
address@hidden






