gnu3dkit-discuss

[Gnu3dkit-discuss] How do RenderKit and Renderer communicate?


From: Brent Gulanowski
Subject: [Gnu3dkit-discuss] How do RenderKit and Renderer communicate?
Date: Wed, 15 Jan 2003 14:26:24 -0500

Phil,

I've cleaned up a bunch of the introductory parts of the white paper, and will be working on expanding your point-form additions into full discussions soon. But I'm concerned that I don't yet have a good grasp of your preferred model for RenderKit-Renderer communication.

I believe we established that RenderKit is the active party, while the Renderer is subservient. That is, rendering happens because of instructions and data sent by RenderKit to the Renderer, as opposed to the Renderer producing instructions and requesting data from RenderKit. Is this correct? It seems logical if I consider that a view is conceptually instigated by a viewer -- in this case, a camera object inside RenderKit.
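Just so you can tell me whether I'm picturing that push model correctly, here is a rough sketch in C++ (every class and method name below is mine, invented for illustration -- not the actual RenderKit or Renderer API):

    // Sketch of the push model: RenderKit, via the camera, drives the
    // Renderer. All names are hypothetical, for illustration only.

    #include <vector>

    struct Geometry  { /* vertex and index data */ };
    struct Transform { float matrix[16]; };

    // The Renderer is passive: it only reacts to what it is handed.
    class Renderer {
    public:
        virtual ~Renderer() = default;
        virtual void beginFrame() = 0;
        virtual void setTransform(const Transform &t) = 0;
        virtual void submit(const Geometry &g) = 0;
        virtual void endFrame() = 0;              // produces the pixel data
    };

    // The camera inside RenderKit instigates the view: it walks the scene
    // and pushes instructions and data into the Renderer, never the reverse.
    class Camera {
    public:
        void renderScene(const std::vector<Geometry> &scene, Renderer &r) const {
            r.beginFrame();
            r.setTransform(viewProjection());
            for (const Geometry &g : scene)
                r.submit(g);                      // RenderKit -> Renderer only
            r.endFrame();
        }
    private:
        Transform viewProjection() const { return Transform{}; }
    };

If that one-directional picture is wrong, the rest of my questions below probably need rethinking.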

I have a conceptualization question, which arises from my definition of 3D rendering (part of my definition of all processing tasks). Rendered images are transformations of one set of data (3D object data) into another (pixel data). Geometric and projection transformations are just some of the transformations applied. Culling, texturing, anti-aliasing, blurring, and all sorts of others could be grouped into a general category.

My feeling is that some of these would sensibly arise in relation to the camera or synthetic viewer, while others would originate from the god-like controls of the user (or programmer) of the particular application -- and not just pixel filters, which are the least related to RenderKit (e.g. cel-animation-style shading). Wireframe view is one that I often think of as being applied "from outside", as is occlusion culling, among others. But what about (non-volumetric) fog? Does it complicate the issue if the renderer were able to post-filter the data (unless such filters were embedded in the renderer)? Because then the application would have to do one set of transformations through Actions and another set through whatever interaction the renderer exposed.

Have you specified or assumed a very strict flow of commands for various data transformations from the application, through RenderKit, and then into the renderer? Can we define different categories of such transformations to help us define current and future Actions, and how they are similar or different (depending upon when the actions' effects come into play)?
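For what it's worth, here is how I have been picturing the Action side of that flow (again, invented names and a deliberately bare interface, not the real classes):

    // Hypothetical sketch: the application hands Actions to RenderKit, and
    // RenderKit decides at what point in the traversal each one takes effect.

    class Renderer;   // as in the earlier sketch

    class Action {
    public:
        virtual ~Action() = default;
        virtual void apply(Renderer &r) const = 0;
    };

    class WireframeAction : public Action {
    public:
        void apply(Renderer &) const override { /* change how geometry is drawn */ }
    };

    class CullAction : public Action {
    public:
        void apply(Renderer &) const override { /* drop geometry before drawing */ }
    };

If some Actions take effect before traversal, some during, and some only on the finished image, that ordering seems like exactly the thing the categories should capture.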

Should we draw a sharp distinction between transformations which affect the world geometry and those which only affect the way the geometry is rendered? I can't see where to draw such a line, but if it exists, it seems essential to recognize it and incorporate it. Viewing the world orthographically doesn't affect the world per se, but it is a consequence of the camera. Perhaps camera qualities like orthographic projection, depth of field, angle of view and such are a special case.
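Purely as a strawman, the line I keep trying to draw looks something like this (the category names are mine, not anything in the kit):

    // Strawman classification -- my own labels, not RenderKit terminology.
    enum class TransformScope {
        WorldGeometry,   // changes the model data itself (e.g. deforming a mesh)
        Viewing,         // camera qualities: orthographic vs. perspective,
                         // angle of view, depth of field
        RenderStyle,     // wireframe, fog, cel-style shading: how geometry is drawn
        PixelFilter      // post-processing on the finished image: blur, anti-aliasing
    };

Whether fog, for instance, belongs in RenderStyle or PixelFilter is exactly the kind of ambiguity I can't resolve on my own.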

Brent Gulanowski
--
Mac game development news and discussion
http://www.idevgames.com




