Fwd: Re: Fw: [Groff] [groff/patch] transparent gzip


From: Mark Veltzer
Subject: Fwd: Re: Fw: [Groff] [groff/patch] transparent gzip
Date: Sun, 25 Aug 2002 19:46:26 +0300

On Sunday 25 August 2002 18:29, you wrote:
> >>>>> "Mark" == Mark Veltzer <address@hidden> writes:
>     >> Hmmm... Why do you want to start an endless cycle of featurism?
>     >> It is bad enough already that you need libtiff, libjpg, libpng,
>     >> libz, ghostscript and netpbm installed to be able to process
>     >> anything into HTML. These packages are usually installed in a
>     >> Linux or a free BSD distro, but not in commercial unices (which
>     >> include MacOSX, btw).
>
>     Mark> It may sound harsh to you but I really don't care that much
>     Mark> for commercial UNIX vendors.
>
> Your rant completely misses the point. Adding all this bloatware cruft
> makes it needlessly harder to install & use groff. If all I want to do
> is format man pages, why should I be required/expected to install
> stuff like the above mentioned libraries and so on? Somewhere along
> the line somebody has lost sight of the fundamentals of the UNIX
> design: ie write small programs that only do one job and do that very
> well. Even if all these things are already installed, it doesn't help.
> You're still digging a hole because the library versions will change,
> creating all sorts of undesirable dependencies and combinatorial
> problems.

1. I am not forcing the user to use this library. Many programs, when a library is not detected at build time, simply don't link against it and fall back to small, simple replacement code. Anyone who wants the extra feature installs the library and reinstalls groff.
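The detect-or-fall-back pattern in point 1 can be sketched as follows. This is a Python illustration, not groff's actual C++ (where the probe would happen at ./configure time rather than at run time); the `lzma` module merely stands in for "some optional decompression library".

```python
# Probe for an optional library; fall back to a small built-in replacement.
# In an autoconf-based build this decision is made once, at configure time.
try:
    import lzma          # stand-in for the optional library
    HAVE_LZMA = True
except ImportError:
    HAVE_LZMA = False

def open_source(path):
    """Use the optional library when present, plain open() otherwise."""
    if HAVE_LZMA and path.endswith(".xz"):
        return lzma.open(path, "rt")
    return open(path, "r")
```

Users without the library lose only the extra feature; everything else keeps working through the fallback path.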
2. It is not hard to install libraries. And if it is, fix the software delivery systems of the open-source world instead of avoiding shared libraries. Look at the projects that use C and C++ shared libraries heavily (KDE and GNOME): they manage to produce hundreds of applications precisely because they share much more than printf. Yes, they pay a price for it. And YES, they certainly get their money's worth.
3. Dependencies on executables are ALSO dependencies. The fact that they are not listed in your RPM (or .deb) metadata is a sham: your program may still fail to do its job, because executables fail too. In fact, the old UNIX way fails miserably here. When executable x invokes executable y, x never checks which version of y it is running, which leaves y no room to move forward without breaking x. Ever wondered why applications like dd have such weird command lines, or why long-forgotten features are kept around? BECAUSE THERE IS NO WAY OF KNOWING WHO IS USING THEM AND HOW. With shared libraries this is not so, and a programmatic interface is better than a command line: it is more extensible, there are more ways to keep it compatible while reimplementing the underlying technology, and, most importantly, you can declare a call null and void and have every application that uses it STOP COMPILING, reminding its authors that they need to shape up.
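The "fail loudly instead of silently" effect from point 3 can be sketched like this. The names are invented for illustration; in C the same idea shows up as symbol versioning or a compile-time version check in a header, so the failure happens at build time rather than load time.

```python
# A library advertises its interface version, so an incompatible caller
# fails immediately and visibly, rather than misbehaving later the way a
# silently-changed executable would.
LIB_API_VERSION = 2  # what the installed library provides (hypothetical)

def require_api(wanted):
    """Raise at load time if the installed interface is too old."""
    if LIB_API_VERSION < wanted:
        raise ImportError(
            f"need API version >= {wanted}, have {LIB_API_VERSION}")

require_api(2)  # satisfied: the program keeps loading
```

A caller that demanded version 3 would be stopped on the spot, with an error naming exactly what is missing; an executable invoked by name offers no equivalent signal.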
4. Regarding the term "bloatware": as I explained in an earlier post, such a system would actually REDUCE groff's current size, whether you use the shared library or fall back to the built-in open(2)/read(2)/close(2) calls. Which bloatware are you referring to? Today's groff could already be called bloated: it already carries gzip code, already links with an outside library, and has already paid the price. All it got for it is gzip. With my system it could have gotten gzip, bzip2, lha, zip, compress, ftp, http, and various others for the SAME SIZE of code. This means "for the same price".
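The transparent-decompression idea behind the patch can be sketched in a few lines. This is Python with the standard gzip module, not the proposed C interface; the point it illustrates is the dispatch, where the caller gets one entry point and each extra format is just one more branch at no cost to callers.

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"  # first two bytes of every gzip stream

def open_transparent(path):
    """Return a text stream, decompressing on the fly when the file
    starts with the gzip magic bytes; plain files pass straight through."""
    with open(path, "rb") as probe:
        magic = probe.read(2)
    if magic == GZIP_MAGIC:
        return gzip.open(path, "rt")
    return open(path, "r")
```

Supporting bzip2, zip, or a URL scheme would each add one branch here while every caller keeps reading through the same function, which is the "same price" argument above.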
5. Library versions change for a REASON. Versioning exists precisely to let library developers improve their libraries in incompatible ways. Executables can't do that, because no mechanism tracks their version and verifies it is right, so every change is a pitfall. Library versioning is what keeps the free software movement going (otherwise we would still be stuck with the old interfaces). Executables, on the other hand, are going nowhere: they must keep supporting the old interface forever, because they have no way of telling the using application "sorry, I've changed, I'm better now; you can use the better version of me, but you need to change too". Misunderstanding this is a major misunderstanding of how computer technology advances.
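The rule that shared-library version numbers encode can be stated in one function. This is a simplified sketch of the soname convention, assuming (major, minor) pairs; real ELF versioning has more machinery, but the compatibility test is essentially this.

```python
def compatible(provided, required):
    """Soname-style rule: an incompatible change bumps the major number,
    a compatible addition bumps the minor one.  A caller built against
    `required` works iff the majors match and the minor is new enough."""
    return provided[0] == required[0] and provided[1] >= required[1]
```

An executable invoked by name carries no such contract, so its callers can never be checked against it, which is exactly the point above.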
6. "Undesirable dependencies". Repeat after me: dependencies are desirable. No one seems to understand this. RPM is good. DEB is good. They are the reason modern Linux systems are breathing; they keep the whole mess together. The only time people get angry with them is when they try to install a "bleeding edge" application and find that it needs "bleeding edge" libraries. Grow up. In the future these systems will fetch everything you need (deb already does). Your motto should be "I WANT DEPENDENCIES". Dependencies on binaries are even worse, because YOU DON'T KNOW ABOUT THEM. Do you know how many problems I've seen in embedded development, where people building a minimal system simply DON'T KNOW which binaries to install and find out the hard way when various subsystems crash?!? Is that better?!? Give me a break. I would pay dear money to have every UNIX application test at ./configure time for every binary it uses (they don't), and to test for the precise version (they don't). The hidden assumptions here are countless, and people still think it's a perfect deal. The only reason it seems to work is that it's GOING NOWHERE: the same systems have been around for years, doing the same things. Look at where the action is, the new application development frameworks and the free desktop systems: they are very much against shelling out to external applications. They have seen the light.
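The check the paragraph above wishes every ./configure script performed can be sketched directly. A hypothetical tool list stands in for whatever helpers a package shells out to; in autoconf terms this is roughly what AC_CHECK_PROG does, except that here the hidden dependency is made explicit and fatal up front.

```python
import shutil

def missing_tools(names):
    """Report which helper executables are absent from PATH, so a build
    can fail loudly at configure time instead of crashing at run time."""
    return [n for n in names if shutil.which(n) is None]
```

A package would call this with its real helper list and abort if the result is non-empty, turning the "find out the hard way" failure into an immediate, named one.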

>     Mark> You are very wrong. RedHat compressed manual pages because
>     Mark> it's FASTER to show them that way.
>
> So what? Even if what you say is true, it's pointless because most
> people are reading the man pages interactively. Even at an absurdly
> fast eyeball reading speed of 1 page per second, who cares about
> trade-offs between millisecond disk access and nanosecond CPU speed?

That's right. Way to go. And that stupid Linus, always optimizing his disk operations; it's fast enough as it is. It was fast enough in 2.0, right? I think you see where I'm going. Either you are an engineer, in which case the burning desire to make your system better runs through your bones, or you are not. You never say "who cares". That line is reserved for commercial managers who care only about the money and not the system.

> Personally, I prefer pre-formatted and uncompressed man pages. They're
> easier to browse and you always know how much of the man page has still
> to be read.

I prefer them too. So what? As I said, Red Hat's man prepares a preformatted version of every page you view. That is a good solution, since preformatting everything would be overkill: most users will never touch 99.9% of their manual pages. The fact that one part of the system is fine does not mean we should neglect the others.
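The format-on-first-view scheme just described can be sketched in a few lines. This is a generic Python illustration, not Red Hat's actual man implementation; `render` stands in for whatever does the troff formatting.

```python
import os

def cached_render(src, cache, render):
    """Format a page on first view and reuse the result afterwards,
    re-rendering only when the source is newer than the cached copy."""
    if (not os.path.exists(cache)
            or os.path.getmtime(cache) < os.path.getmtime(src)):
        with open(src) as f:
            text = render(f.read())
        with open(cache, "w") as f:
            f.write(text)
    with open(cache) as f:
        return f.read()
```

Only pages that are actually read ever get formatted, so the 99.9% of the manual nobody opens costs nothing.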

>     >>  In this hysteric times, when hard-drives are one US dollar per
>     >> gigabyte, 80 GB hard-drives are the norm and CPUs with hardware
>     >> clocks slower than 1.3 Gigahertz are obsolete, talking about
>     >> the supposed need to compress man pages seems pointless to me.
>
>     Mark> Again, this is not about disk size at all... It's about
>     Mark> speed (and if do get the speed benefit - then why waste the
>     Mark> disk space...? You get both ends of the stick...).
>
> Does anyone read the man pages and say to themselves "wow, that sub
> 10ms average disk access really made a difference" or "troff goes
> amazingly fast on a X Ghz CPU"? Get real.

Ok. This is my reply. You can either:
1. Argue on the technical level, in which case I win on both speed and space.
2. Say that the technical case does not matter, in which case the next time someone brings an idea for optimizing something inside groff, you will have to reject them too. Right? By the same logic I'll soon submit a wait loop to be inserted where groff recognizes tokens (if we really don't care about performance that much, let's go all the way...).

Mark.