Re: Standardize roff (was: *roff `\~` support)


From: Alejandro Colomar
Subject: Re: Standardize roff (was: *roff `\~` support)
Date: Sun, 14 Aug 2022 18:32:01 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.1.0

Hi,

On 8/14/22 16:49, DJ Chase wrote:
> On Sun Aug 14, 2022 at 9:56 AM EDT, Ingo Schwarze wrote:
>> Hi,
>>
>> DJ Chase wrote on Sat, Aug 13, 2022 at 05:27:34PM +0000:
>>
>>> Have we ever considered a de jure *roff standard?
>>
>> No, i think that would be pure madness given the amount of working
>> time available in any of the roff projects.
>>
>> […]
>
> This is very sad to hear.
>
>>> It could also lead to more users & use cases because existing
>>> users could count on systems supporting certain features, so
>>> they could use *roff in more situations, which would lead to
>>> more exposure.
>>
>> You appear to massively overrate the importance end-users
>> typically attribute to standardization.
>
> That’s probably because *I* massively overrate the importance of
> standardization (I mean I literally carry a standards binder with me).
> Still, though, it’s rather annoying that end users — especially
> programmers — don’t value standards as much.

(Official) standardization isn't necessarily a good thing. With C, it was originally beneficial, back in the days of ISO C89. Nowadays, it does more damage to the language and to current implementations than good (it still does some good, but also a lot of harm).

The best that a standardization process can do is limit itself to describing _only_ features that already exist in the language, acting as a kind of arbiter that decides which behavior is best for a given feature, so that all implementations converge on the best existing one. Where different implementations may have good reasons to behave differently, the standard should describe the behavior as implementation-defined. And of course, a standard should only standardize features that are expected to be good for every implementation, with optional features either left unstandardized or explicitly marked optional by the standard (as Annex K was; although that one turned out to be broken, and removing it has since been proposed).

But none of that should be necessary if implementors had some decency and didn't implement features in ways that are completely incompatible with those of other systems. That is, if an existing system has 'foo(int a);', you don't provide 'foo(int *b);'; you go for 'foo2(int *b);' or 'bar(int *b);'. There are plenty of cases where this has happened; in some of them it may have been an accident, but in others it's just incompetence. See an example that bit me a month ago: <https://github.com/nginx/unit/issues/737>.

And there are several bad things that standardization can do:

By reserving the power to centrally decide the future of a language, it takes power away from implementations, which now can't add certain features for fear that they might contradict a future standard. This is very sad, because implementations are guided by usefulness and worthiness and try to come up with the best feature they can (and by natural selection, implementations are then used or not, depending on their quality), whereas standards involve a large amount of bureaucracy, which doesn't produce the best features.

A few examples: a %b printf specifier for binary was rejected by glibc on terms of something like "the feature is good, and the implementation seems correct, but %b is reserved by the standard, so we don't want to possibly conflict with a future standard"; luckily, the standard later defined it, and the feature was added a few years later. An example of something much more necessary is a way to get the number of elements of an array, which is currently impossible in portable C (at least not in a way that safely refuses to compile on non-arrays). I also proposed such an addition to glibc, and the reasons for rejecting it were of the same kind, with the argument that the standard was discussing adding such a feature. Guess what? The standard didn't add such a feature for C23, and we still have no portable way to do it (and the unportable ways are more cumbersome than one would expect; see the sketch after the links below). I hope C3x adds _Lengthof(arr), but who knows.

<https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2529.pdf>
<https://stackoverflow.com/questions/37538/how-do-i-determine-the-size-of-my-array-in-c/57537491#57537491>
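For illustration, here is a minimal sketch of the kind of unportable workaround I mean. It relies on GNU C extensions (__typeof__, __builtin_types_compatible_p, and structs with no named members), and the macro names are mine, not anything standard:

    #include <stdio.h>

    /* Breaks the build (negative bit-field width) if e is false;
       otherwise evaluates to 0, so it can be added to an expression.  */
    #define must_be(e)    ((int) sizeof(struct { unsigned int : (-!(e)); }))

    /* 1 if a is an array, 0 if it is a pointer (or anything else).  */
    #define is_array(a)                                                       \
            (!__builtin_types_compatible_p(__typeof__(a), __typeof__(&(a)[0])))

    /* Number of elements; refuses to compile if a is not an array.  */
    #define ARRAY_SIZE(a)  (sizeof(a) / sizeof((a)[0]) + must_be(is_array(a)))

    int
    main(void)
    {
        int  x[42];
        int  *p = x;

        (void) p;
        printf("%zu\n", ARRAY_SIZE(x));          /* prints 42 */
        /* printf("%zu\n", ARRAY_SIZE(p)); */    /* would not compile */
        return 0;
    }

Something like this is what projects end up copying around; that's exactly the cumbersomeness I mean.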


And then we have another problem with standardization committees: their priorities are so broken that they prefer inventing a completely new feature for C, with nothing even remotely resembling it in the existing language (I'm talking about nullptr and nullptr_t), rather than standardizing an existing good feature such as POSIX's NULL ((void *) 0). So now we have 0, NULL, and nullptr for referring to a null pointer constant in C, and none of them is perfect. 0 needs to be cast when passed to variadic functions, and has readability issues. NULL is perfect within the POSIX world, but outside of POSIX it's as bad as 0. nullptr, apart from being incomprehensible, is unsafe; okay, it's not unsafe by itself, and if it were the only way to refer to a null pointer constant it would be great, but it's not, and even the committee recognizes that it never will be.

<https://discourse.llvm.org/t/iso-c3x-proposal-nonnull-qualifier/59269/48>
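To see why a plain 0 needs the cast when passed to variadic functions, consider the classic execl() terminator from POSIX (nothing committee-specific, just the usual example):

    #include <stddef.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* Wrong on ABIs where int and char * differ in size, since a
           bare 0 is passed as an int while the callee reads a char *:

               execl("/bin/echo", "echo", "hi", 0);
        */

        /* Correct: the terminator must be a null pointer of pointer type.  */
        execl("/bin/echo", "echo", "hi", (char *) NULL);
        return 1;    /* reached only if execl() failed */
    }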

Many existing projects that use NULL (especially POSIX projects) are not going to change their whole codebase to use nullptr. nullptr_t adds some features that provide safety against null pointer constants based on the type of the constant (by means of _Generic); but that means one can easily bypass those features by using NULL or 0, which means it's not really safe, and it may give a sense of safety that it doesn't have. So, without extending my rant about nullptr much further: it's a feature broken from day 0, invented by the ISO C committee.
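As a minimal sketch of that bypass (assuming a C23 compiler that provides nullptr and nullptr_t, and a NULL that expands to ((void *) 0), which is typical but not guaranteed):

    #include <stddef.h>
    #include <stdio.h>

    /* Detects only arguments whose type is nullptr_t.  */
    #define is_nullptr_constant(p)  _Generic((p), nullptr_t: 1, default: 0)

    int
    main(void)
    {
        printf("%d\n", is_nullptr_constant(nullptr));    /* 1: caught   */
        printf("%d\n", is_nullptr_constant(NULL));       /* 0: bypassed */
        printf("%d\n", is_nullptr_constant(0));          /* 0: bypassed */
        return 0;
    }

Any safety property that hinges on the nullptr_t type is lost as soon as NULL or a plain 0 sneaks in.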

Maybe one of the worst problems of the committee (WG14) is that many of its members are also members of WG21, and as such they may have conflicting priorities.

I don't see standardization as being as good as it may seem at first glance.

And of course, following the standard should be taken with a pinch of salt: one should follow the standard only when the standard isn't broken.

But then, the standard isn't better than any other implementation. So, as a programmer, I think programs should target their expected systems, and not more (unless it's easy). If a program is to be run on Linux, then target GNU C. If you can add some partial support for ISO C without it getting significantly in your way, then okay, go for it; but complete ISO C support is unthinkable: a program conforming strictly to ISO C is useless, or unnecessarily complex, or even unsafe. I implement things thinking of my system first; then I can support other FOSS Unix systems, but only if it's easy. Commercial systems are automatically out of support; I'm not spending a single minute of my time being nice to those systems when they're not nice to me.

I think it's better to let natural selection work things out. If a feature is good, other implementations will pick it up, and maybe even improve it. If a feature is not good (or not needed by other systems), it will simply not be portable.

Cheers,

Alex


--
Alejandro Colomar
<http://www.alejandro-colomar.es/>
