
Re: [Bug-gnupedia] Architecture Questions


From: Bryce Harrington
Subject: Re: [Bug-gnupedia] Architecture Questions
Date: Sat, 20 Jan 2001 16:20:51 -0800 (PST)

On Sat, 20 Jan 2001, Rob Scott wrote:

> Yes, the document was written in a rush, with bits added
> and bits taken away every now and then, and it's all
> higgledy-piggledy, so it probably doesn't get the
> point across very well.

It's a good starting point, brainstorm-wise.  Determine what your
problem statement is and summarize your suggested solution.  I have
seen good results from listing all possible solutions, with pros and
cons enumerated for each, and then using that list to pick the best
answer.

Also consider making a "wishlist" of features or requirements.
Often this can be as handy as the design document itself.

> This is actually what I would like to do (make a
> prototype), but I did this because we weren't going to
> bother if you were all going to fly off on some XML
> thing.

There's nothing wrong or magical about XML...  At this stage, I would
recommend focusing on the architecture and leaving the decision about
data representation to the next layer.  There are not many XML-based
databases, so I suspect that XML is going to be merely another transport
format.  Only the latest web browsers support XML very well, so I
wouldn't be surprised if people decide to use HTML as the first
transport format and add XML later, when support is more widespread.

I suppose if I had to guess at what the architecture would end up
being, it would store the articles themselves as plain-text XML files
(DocBook, perhaps), with a database holding only the indexable material.
When an XML file is added to the repository, converters would be run
to produce the article in other formats (text, word doc, pdf, ps, tex,
etc.).  When users request an article, they would also specify the
desired format.  When producing mirrors of the repository, only the
XML files, commentary, makefiles, and conversion scripts and templates
would need to be transferred.  The files needn't be local; one could
imagine the indexing database being hosted on one machine and the
articles themselves served from other places (perhaps several).  I'm
sure others already have ideas on how to handle distribution, but for
the near term something rsync-ish would probably be sufficient.

Bryce



