
Re: [Gnu3dkit-discuss] How do RenderKit and Renderer communicate?


From: Brent Gulanowski
Subject: Re: [Gnu3dkit-discuss] How do RenderKit and Renderer communicate?
Date: Fri, 17 Jan 2003 16:44:21 -0500


On Friday, January 17, 2003, at 02:47 PM, Philippe C.D. Robert wrote:

>> Have you specified or assumed a very strict flow of commands for various data transformations from the application, through RenderKit, and then into the renderer? Can we define different categories of such transformations to help us define current and future Actions, and how they are similar or different (depending upon when the actions' effects come into play)?

>> Should we provide a sharp contrast between transformations which affect the world geometry and those which only affect the way the geometry is rendered? I can't see where to draw such a line, but if it exists, it seems essential to recognize it and incorporate it. Viewing the world orthographically doesn't affect the world per se, but it is a result of a camera. Perhaps camera qualities like ortho, depth of field, angle of view and such are a special case.

> Sorry, but I am not sure I exactly understand what you don't know here - try to ask more precise questions so that we can provide good answers in the future...

> With respect to your geometry problem from above, transformations are applied using a stack, so every vertex is transformed - while rendered - by the current transformation on the stack (accumulated). There is no such thing as 'world geometry', only local and world coordinate systems (the camera setup (projection type, frustum, ...) has nothing to do with that).

Hmmm, well I think you are misunderstanding: my questions are not exactly technical in nature. What I am talking about are conceptual distinctions between various ways of interpreting data, not about matrices and projections. When I say "world geometry" I only mean "objects in the scene". But in a file, there is nothing but raw data. A rendering of this data as a 3D drawing is just one way of representing information. I'm talking about -interpretation-, not transformation, I guess. There are many choices to be made, and they arise at different times in the process of rendering.

> To understand how a scene graph is being rendered means understanding state machines (and common gfx pipelines) - I thus suggest you read some documentation, e.g. about the GL pipeline; I assume this would give you some answers.

Erm, yeah, I've read lots of these things, and while I'm no expert, I understand what is involved well enough, I think. I simply want to find out where the interpretation of the scene data begins, and if there is a distinction between a viewer interpretation (embodied in a camera) and an application interpretation (as set up by the user of the application). To me, they are different, and I would expect this difference to show up in the software. Sure, at the pipeline the differences disappear, but I'm arguing that some of the interpretations originate within RenderKit, and others are never seen by RenderKit -- perhaps being integral to the Renderer, and perhaps being part of the Renderer but controlled by the application.

Scene->RenderKit(Camera)->Renderer->Pipeline
User->Application->Renderer->Pipeline

If, in fact, you believe that these distinctions are NOT meaningful, OK. I'll ask you to forget what I wrote and just think in terms of where certain rendering decisions are made -- in RenderKit, in the Renderer, or in the application.

My question is this: Can an application affect the rendering process directly through the Renderer without the knowledge of RenderKit, and if so, should we distinguish between the kinds of effects which can be achieved this way and those that cannot ... or not?

Brent Gulanowski
--
Mac game development news and discussion
http://www.idevgames.com