Re: [Gnu3dkit-discuss] considering G3DGeometry


From: Brent Gulanowski
Subject: Re: [Gnu3dkit-discuss] considering G3DGeometry
Date: Fri, 21 Mar 2003 18:07:15 -0500


On Friday, March 21, 2003, at 01:24 PM, Philippe C.D. Robert wrote:


> The topics you mention are definitely worth spending more thought on, but I doubt they affect the core design of the 3DKit. On the other hand, transformation techniques and level-of-detail concepts are indeed very interesting and important. But be aware that these are very mathematical areas (you sound as if you are less interested in maths)!

I learn maths as required, and avoid it when possible! It depends on the application.

> To remain on-topic: what are your concrete interests here, what kinds of object representations, and what exactly do you want to address or solve?

> BTW the current design (frontend/backend separation) allows us to introduce mechanisms for such concepts quite easily. To give an example, you could write a GL-based renderer backend which tessellates algebraic surfaces, performs surface subdivision etc. on the fly, and then renders them using application-specific detail settings.
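
If I follow, you mean something along these lines? (A rough sketch only: G3DGLRenderer, G3DAlgebraicSurface, -renderShape:, -tessellateWithDetail: and -submitMesh: are names I've made up for illustration, not the actual 3DKit interface.)

// Sketch only: all class and method names here are hypothetical.
@interface TessellatingRenderer : G3DGLRenderer
{
    float detail;   // application-specific tessellation density
}
@end

@implementation TessellatingRenderer

- (void)renderShape:(id)shape
{
    if ([shape isKindOfClass: [G3DAlgebraicSurface class]])
    {
        // Evaluate the algebraic surface into a temporary triangle mesh
        // at the requested detail level, then hand the triangles to GL.
        id mesh = [shape tessellateWithDetail: detail];
        [self submitMesh: mesh];
    }
    else
    {
        [super renderShape: shape];
    }
}

@end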

OK. This is all good. I'm thinking more about pre-rendered representations being used later, and about making it easy to swap different reps in and out. Consider two applications: one to model individual representations (where you would do the tessellation and other procedural work), and another to display a large scene built from those reps. You want to save the procedurally generated rep as a fixed mesh and just load it later; maybe you would pre-render several at different levels of detail and load whichever is appropriate. In many cases it would be easier to swap one rep for another inside a resource file than to manipulate the scene graph directly. I'm sure all of this can be done through the scene graph, but I'm looking for ways to avoid that where possible, for the users' benefit, much as you can substitute the images and strings in the resources that describe user interfaces today.
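
For instance (the .g3drep bundle layout, the plist keys and the file names below are all invented to illustrate the idea; nothing like this exists in the 3DKit today):

// teapot.g3drep/Contents.plist might map detail levels to mesh files:
//   { high = "teapot-20k.mesh"; medium = "teapot-4k.mesh";
//     low = "teapot-500.mesh"; }

#import <Foundation/Foundation.h>

NSString *meshFileForDetail(NSString *repPath, NSString *detail)
{
    NSString *plistPath =
        [repPath stringByAppendingPathComponent: @"Contents.plist"];
    NSDictionary *levels =
        [NSDictionary dictionaryWithContentsOfFile: plistPath];
    NSString *file = [levels objectForKey: detail];

    if (file == nil)
        file = [levels objectForKey: @"low"];   // fall back to the coarsest rep

    return [repPath stringByAppendingPathComponent: file];
}

The display application would only ever refer to teapot.g3drep; to change the rep you swap the mesh files inside the bundle or edit its Contents.plist, without ever touching the scene graph.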

I also had an idea for using the renderer interface as a data source for an NSOutlineView. The scene graph would think it was drawing things, but really it would just be passing object references, which the data source would convert into hierarchy and string information. That saves you from writing a second interface for the data source. If the shape nodes tried to submit geometry, it would simply be ignored, which would be a waste of time. Instead the groups call -objectInstance, and the "renderer" just finds a suitable retained object representation: in this case, a dictionary of strings.
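
Something like this is what I have in mind. A rough sketch only: the renderer-side calls (-beginGroup:, -endGroup, -submitGeometry:) are invented for illustration, and the -objectInstance idea above would slot in similarly; only the NSOutlineView data source methods are real AppKit. Retain/release is omitted for brevity.

#import <AppKit/AppKit.h>

@interface OutlineSceneSource : NSObject
{
    NSMutableDictionary *root;    // { name, children } for the top group
    NSMutableArray      *stack;   // groups currently being "rendered"
}
@end

@implementation OutlineSceneSource

- (id)init
{
    self = [super init];
    stack = [[NSMutableArray alloc] init];
    return self;
}

// Called by group nodes instead of drawing anything.
- (void)beginGroup:(id)group
{
    NSMutableDictionary *node = [NSMutableDictionary dictionary];
    [node setObject: [group description] forKey: @"name"];
    [node setObject: [NSMutableArray array] forKey: @"children"];

    if ([stack count] > 0)
        [[[stack lastObject] objectForKey: @"children"] addObject: node];
    else
        root = node;

    [stack addObject: node];
}

- (void)endGroup
{
    [stack removeLastObject];
}

// Geometry submitted by shape nodes is simply ignored.
- (void)submitGeometry:(id)geometry
{
}

// NSOutlineView data source, answered from the captured dictionary tree.
- (int)outlineView:(NSOutlineView *)ov numberOfChildrenOfItem:(id)item
{
    NSDictionary *node = (item != nil) ? item : root;
    return [[node objectForKey: @"children"] count];
}

- (id)outlineView:(NSOutlineView *)ov child:(int)index ofItem:(id)item
{
    NSDictionary *node = (item != nil) ? item : root;
    return [[node objectForKey: @"children"] objectAtIndex: index];
}

- (BOOL)outlineView:(NSOutlineView *)ov isItemExpandable:(id)item
{
    return [[item objectForKey: @"children"] count] > 0;
}

- (id)outlineView:(NSOutlineView *)ov
    objectValueForTableColumn:(NSTableColumn *)col byItem:(id)item
{
    NSDictionary *node = (item != nil) ? item : root;
    return [node objectForKey: @"name"];
}

@end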

That's one example. I'm thinking of a couple more, but they are still too vague. I'll readily admit that I'm pushing an AppKit concept onto a rather different kind of application, and maybe I'm seeing parallels that aren't really there. What are your feelings about 3D user interfaces? I know they have been tried and usually fail miserably, but I'm curious about them. And then there is the hassle of having a conceptual domain model (say, a company organizational chart, a visual representation of a program's call tree, a language structure, or a shape grammar) and trying to integrate it with the 3D scene model that portrays it.

So on the one hand I'm wondering about 2D views of 3D data, and on the other, 3D interpreted views of non-graphical data. If there were one interface that could handle all of it, that would be pretty cool. But it's a lot to ask for.

Brent Gulanowski                address@hidden
--
If you paid a million monkeys to type on a million keyboards, eventually they would produce Micro$oft Windows.




