Re: [Gnu3dkit-discuss] considering G3DGeometry


From: Philippe C.D. Robert
Subject: Re: [Gnu3dkit-discuss] considering G3DGeometry
Date: Sun, 23 Mar 2003 11:35:51 +0100

On Saturday, March 22, 2003, at 12:07, Brent Gulanowski wrote:
>> BTW the current design (frontend/backend separation) allows us to easily introduce mechanisms to achieve such concepts. To give an example, you could write a GL-based renderer backend which tessellates algebraic surfaces, performs surface subdivision etc. on the fly and then renders them using application-specific detail settings.
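To make that a little more concrete, here is a very rough sketch of what such a backend could look like. The class and method below are invented for the sake of the example and are not existing 3DKit API; only the OpenGL calls are real. It just evaluates a parametric surface on a grid whose resolution is the application-specific detail setting and emits triangle strips:

    #import <Foundation/Foundation.h>
    #import <GL/gl.h>

    /* Invented for illustration, not existing 3DKit API. */
    @interface TessellatingGLBackend : NSObject
    - (void)renderSurface:(void (*)(float u, float v, float point[3]))eval
                   detail:(unsigned)detail;
    @end

    @implementation TessellatingGLBackend

    /* Evaluate the surface on a (detail x detail) grid over [0,1]x[0,1]
       and render it as triangle strips.  'detail' is the application
       specific setting mentioned above. */
    - (void)renderSurface:(void (*)(float u, float v, float point[3]))eval
                   detail:(unsigned)detail
    {
        unsigned n = detail < 2 ? 2 : detail;
        float    step = 1.0f / (float)(n - 1);
        unsigned i, j;

        for (i = 0; i + 1 < n; i++) {
            glBegin(GL_TRIANGLE_STRIP);
            for (j = 0; j < n; j++) {
                float p[3];

                eval(i * step, j * step, p);
                glVertex3fv(p);
                eval((i + 1) * step, j * step, p);
                glVertex3fv(p);
            }
            glEnd();
        }
    }

    @end

A subdivision surface or NURBS backend would follow the same pattern, only with a different evaluation step.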

> OK. This is all good. I guess I'm thinking more about pre-rendered stuff being used later, and making it easy to swap different reps in and out. Consider two applications: one to model individual representations (where you'd do the tessellation and other procedural work), and another to display a large scene made up of these reps. You want to save the procedurally generated rep as a fixed mesh and then just load it up later. Maybe you'd pre-render a bunch of them at different levels of detail and then load them up as appropriate. I can think of cases where it would be a lot easier to swap one rep for another inside a resource file than to have to manipulate the scene graph directly. I'm sure you can do any of this by manipulating the scene graph, but I'm postulating ways that would avoid that if possible, for users' benefit, the same way you can substitute images and strings in the resources that describe user interfaces now.

This is very app-specific, but nothing prevents you from doing so with the current design, I believe.
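For example, the pre-generated meshes could live in the application or document bundle, keyed by representation name and level of detail, so swapping a rep really is just a matter of editing a resource file. A minimal sketch, assuming a "Representations.plist" that maps a rep name to an array of mesh file names ordered from coarse to fine (the plist layout and file names are made up; only the Foundation calls are real):

    #import <Foundation/Foundation.h>

    /* Return the path of the pre-generated mesh for a named representation
       at a given level of detail, clamping to the finest rep available.
       All resource names here are invented for the example. */
    static NSString *meshPathForRep(NSString *repName, unsigned lod)
    {
        NSString     *plistPath;
        NSDictionary *reps;
        NSArray      *lods;

        plistPath = [[NSBundle mainBundle] pathForResource:@"Representations"
                                                    ofType:@"plist"];
        reps = [NSDictionary dictionaryWithContentsOfFile:plistPath];
        lods = [reps objectForKey:repName];
        if ([lods count] == 0)
            return nil;
        if (lod >= [lods count])
            lod = [lods count] - 1;
        return [[NSBundle mainBundle] pathForResource:[lods objectAtIndex:lod]
                                               ofType:@"mesh"];
    }

Whatever loads that mesh simply hands the geometry to the existing shape node, so the structure of the scene graph never changes.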

> I also had an idea for using the renderer interface as a data source for an NSOutlineView. The scene graph would think it was drawing things, but really it would just be passing object references which the data source would convert into hierarchy and string information. You save yourself having to write a second interface for the data source. If, however, the shape nodes tried to submit geometry, that would all just be ignored and would be a waste of time. Instead, the groups call -objectInstance, and the "renderer" just finds a suitable retained object representation: in this case, a dictionary of strings.

Cocoa UI objects are most often driven through delegates, so what you would probably do is implement such a delegate controller operating on the scene graph, i.e. a custom action class. There is no need to 'misuse' the rendering process here.
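A minimal sketch of such a controller, assuming a scene graph node class that exposes its children and a name (the G3DSceneNode interface below is an assumption, not actual 3DKit API; the data source methods are the real NSOutlineView ones):

    #import <AppKit/AppKit.h>

    /* Assumed scene graph node interface; the real class may differ. */
    @interface G3DSceneNode : NSObject
    - (NSArray *)children;
    - (NSString *)name;
    @end

    /* Outline view data source that walks the graph directly; the
       renderer is never involved. */
    @interface SceneGraphDataSource : NSObject
    {
        G3DSceneNode *root;
    }
    - (id)initWithRoot:(G3DSceneNode *)aRoot;
    @end

    @implementation SceneGraphDataSource

    - (id)initWithRoot:(G3DSceneNode *)aRoot
    {
        self = [super init];
        root = [aRoot retain];
        return self;
    }

    - (int)outlineView:(NSOutlineView *)ov numberOfChildrenOfItem:(id)item
    {
        return [[(item ? item : root) children] count];
    }

    - (id)outlineView:(NSOutlineView *)ov child:(int)index ofItem:(id)item
    {
        return [[(item ? item : root) children] objectAtIndex:index];
    }

    - (BOOL)outlineView:(NSOutlineView *)ov isItemExpandable:(id)item
    {
        return [[item children] count] > 0;
    }

    - (id)outlineView:(NSOutlineView *)ov
        objectValueForTableColumn:(NSTableColumn *)column byItem:(id)item
    {
        return [item name];
    }

    @end

The same object could also act as the outline view's delegate if you need selection or editing behaviour, still without the rendering process being involved.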

> That's one example. I'm thinking of a couple more, but they are still too vague. I'll readily admit that I'm sort of pushing an AppKit concept onto a pretty different application, and maybe I'm seeing parallels that aren't really there. What are your feelings about 3D user interfaces? I know they have been tried and usually fail miserably, but I am curious about them. But then there is the hassle of having a conceptual domain model (say, a company organizational chart, a visual representation of a call tree of a program, a language structure, or a shape grammar) and trying to integrate it with the 3D scene model that portrays it. So on the one hand I'm wondering about 2D views of 3D data, and on the other hand, 3D interpreted views of non-graphical data. If there were one interface that could handle all of it, that would be pretty cool. But it's a lot to ask for.

Again, this is IMHO high-level, application-specific stuff. The 3DKit's purpose is to provide the means to write such applications, but its design should not be influenced by very specific needs; instead it should be as generic as possible. By separating the data representation from its visualisation, it should become possible to address questions such as the ones you raise.

HTH,

-Phil
--
Philippe C.D. Robert
http://www.nice.ch/~phip




