
RE: [ft] otf autohint/nohint problem


From: Roger Flores
Subject: RE: [ft] otf autohint/nohint problem
Date: Mon, 21 Nov 2005 14:41:00 -0800

 

> > 2. Are there any tests showing that fonts at a large point size are 
> > identical regardless of the hint module used?
> 
> Please explain in more detail, given that the scaling bug is 
> now fixed.

Sure.  Suppose you ran a program like ftview to render the whole font to
a bitmap with each hint option, at a large point size like 64.  Would
they be the same?  This would have caught the bug/regression you fixed.
Or maybe I should rephrase the question.  This could be a test tool that
calculates the smallest point size at which the bitmap for a font is the
same for all hint options.  Maybe it's just overkill for a problem
you've already fixed, or maybe you'd want to know if suddenly hinting is
making a difference at 100 points for Times New Roman. :)
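To make that concrete, here's roughly the kind of tiny test I mean, for
one glyph instead of the whole font.  It's only a sketch: the font path,
the 64pt size, the single test glyph, and the missing error checks are
all placeholders.

  #include <stdio.h>
  #include <string.h>

  #include <ft2build.h>
  #include FT_FREETYPE_H
  #include FT_BITMAP_H

  /* Return 1 if two 8-bit grey bitmaps are pixel-for-pixel identical.
     Assumes a non-negative pitch, the usual case for FT_LOAD_RENDER
     output. */
  static int
  bitmaps_identical( FT_Bitmap*  a,
                     FT_Bitmap*  b )
  {
    if ( a->rows != b->rows || a->width != b->width || a->pitch != b->pitch )
      return 0;
    return !memcmp( a->buffer, b->buffer, (size_t)a->rows * (size_t)a->pitch );
  }

  int
  main( void )
  {
    FT_Library  lib;
    FT_Face     face;
    FT_Bitmap   autohinted;

    FT_Init_FreeType( &lib );
    FT_New_Face( lib, "font.otf", 0, &face );      /* placeholder path */
    FT_Set_Char_Size( face, 0, 64 * 64, 72, 72 );  /* 64pt at 72dpi    */

    /* render 'o' with the auto-hinter and keep a copy of the bitmap */
    FT_Load_Char( face, 'o', FT_LOAD_RENDER | FT_LOAD_FORCE_AUTOHINT );
    FT_Bitmap_Init( &autohinted );
    FT_Bitmap_Copy( lib, &face->glyph->bitmap, &autohinted );

    /* render the same glyph unhinted and compare */
    FT_Load_Char( face, 'o', FT_LOAD_RENDER | FT_LOAD_NO_HINTING );
    printf( "identical: %s\n",
            bitmaps_identical( &autohinted, &face->glyph->bitmap )
              ? "yes" : "no" );

    FT_Bitmap_Done( lib, &autohinted );
    FT_Done_Face( face );
    FT_Done_FreeType( lib );
    return 0;
  }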


> 
> > 3. I was thinking about the error from the hint modules and how to
> > measure and reduce it.  We obviously can't have an ideal hinter, but I
> > was wondering if we could still calculate an ideal hinter even if we
> > can't make one?  For instance, one view of a perfect hinter could be
> > that it shows all the edge transitions.
> 
> Uh, oh, I don't follow.  I don't understand what you mean 
> with `edge transition'.  Please give an example.

I'll try, but I lack proper verbiage.  I'm thinking about detecting
cases where parts of a character touch other places, making them harder
to see.  Hinting, imo, often moves character edges slightly to improve
such separation.  Take the 'o' and render it three pixels wide.  Look at
the horizontal strip through the center.  It should transition from high
to low to high.  Some fonts, though, might have a hole that is smaller
than the middle pixel, or have it off-center, resulting, in anti-aliased
rendering, in lots of grey.  A good and working hinter would move/resize
the hole so that the middle pixel contrasts highly with the left and
right.  And when I say high and low, it could be fully black and white,
but practical values might be less.  So the transition count for the
middle horizontal row through an 'o' is three.
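In code, the counting I have in mind might look something like this.
The high/low thresholds are numbers I made up; grey pixels in between
simply don't decide anything.

  /* Count high/low runs ("transitions") along one row of an 8-bit grey
     bitmap.  The thresholds are assumptions; anything between them is
     treated as undecided grey and ignored. */
  int
  count_transitions( const unsigned char*  row,
                     int                   width )
  {
    const int  HIGH_MIN = 192;   /* assumed "high" threshold */
    const int  LOW_MAX  = 64;    /* assumed "low" threshold  */

    int  count = 0;
    int  state = -1;             /* -1: no run started yet */
    int  x;

    for ( x = 0; x < width; x++ )
    {
      int  level;

      if ( row[x] >= HIGH_MIN )
        level = 1;
      else if ( row[x] <= LOW_MAX )
        level = 0;
      else
        continue;                /* grey: doesn't decide anything */

      if ( level != state )
      {
        count++;                 /* a new high or low run begins */
        state = level;
      }
    }

    return count;                /* the 'o' center row should give 3 */
  }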

Now look at 'I', with serifs at the top and bottom.  At small sizes with
anti-aliasing, a vertical strip crossing only the serifs might show them
as only a little bit grey.  But the strip really should go from high to
low to high.  A transition count of three again.

You can calculate the ideal transitions by sampling the font at larger
point sizes where the hints don't matter.  So render 'o' at 100 points,
sample the middle row, and measure how many transitions it has.
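A small sketch of that measurement, reusing the count_transitions()
helper above.  The function name, the 72dpi, and the assumption of an
8-bit grey bitmap with a non-negative pitch are my own placeholders;
for the ideal value you'd call it with something like 100 points and
no hinting.

  #include <ft2build.h>
  #include FT_FREETYPE_H

  int  count_transitions( const unsigned char*  row, int  width );

  /* Render a glyph at `points' with the given load flags and count the
     transitions of the bitmap's middle row.  For the "ideal" value,
     call it with a large size (say 100pt) and FT_LOAD_NO_HINTING. */
  int
  row_transitions( FT_Face   face,
                   FT_ULong  ch,
                   int       points,
                   FT_Int32  load_flags )
  {
    FT_Bitmap*  bmp;

    FT_Set_Char_Size( face, 0, points * 64, 72, 72 );
    FT_Load_Char( face, ch, FT_LOAD_RENDER | load_flags );

    bmp = &face->glyph->bitmap;
    return count_transitions( bmp->buffer + ( bmp->rows / 2 ) * bmp->pitch,
                              (int)bmp->width );
  }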

If the ideal transition count for the horizontal strip through an 'o'
is three, but the rendering measures at two, then something might be
considered wrong.  The error for the row is (3-2)/3.  You could test all
characters in a font and see which characters have the most errors.  You
could test the effects of just one hint.  You could compare over time to
ensure improvement versus regression.  You could combine measurements
for all characters in a font to make a rating that could indicate the
performance of a hinter.
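A sketch of how those per-character errors might be combined into one
number, using the row_transitions() sketch from above.  The 100pt
reference size and the absolute value (so over- and under-counts don't
cancel out) are my own assumptions on top of the (ideal - measured) /
ideal idea.

  #include <math.h>

  #include <ft2build.h>
  #include FT_FREETYPE_H

  int  row_transitions( FT_Face  face, FT_ULong  ch,
                        int  points, FT_Int32  load_flags );

  /* Average relative transition error over a set of test characters,
     for one hinting mode at one small target size.  Lower is better;
     0 would mean every measured row matched its ideal. */
  double
  hinter_error( FT_Face      face,
                const char*  test_chars,
                int          points,
                FT_Int32     load_flags )
  {
    double  total = 0.0;
    int     n     = 0;

    const char*  p;

    for ( p = test_chars; *p; p++ )
    {
      double  ideal    = row_transitions( face, (FT_ULong)*p, 100,
                                          FT_LOAD_NO_HINTING );
      double  measured = row_transitions( face, (FT_ULong)*p, points,
                                          load_flags );

      if ( ideal > 0.0 )
      {
        total += fabs( ideal - measured ) / ideal;   /* e.g. (3 - 2) / 3 */
        n++;
      }
    }

    return n ? total / n : 0.0;
  }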

The basic idea is to calculate ideal values from large sizes where
hinting doesn't matter, and then compare at smaller sizes where hinting
does matter.  Transitions are just my example, but hinter people might
know of other quantities.  Perhaps line widths or kerning or something
is more important or easier to check.

> Honestly, I don't think that an automated routine can measure 
> the improvements done by a hinter. 

I expect this routine to report better numbers for a font rendered with
a hinter at small sizes than for a font rendered without one.  I also
expect that a hinter with a bug in transitions will get reported as
having worse performance.


> Only the eye can do that IMHO.

I agree that only a good eye can measure the entire goodness of a
hinter, but some qualities can be automatically and consistently
measured.



> > 4. I was thinking about trying to figure out which hinter to use.
> > The answer seems to vary depending on which fonts are installed,
> > meaning you can't predict at compile time.  If we had a hint error
> > measurer we could automatically select the best hinter for
> 
> The operating system's font manager should provide the 
> possibility to select the hinter on a per-font basis.

Yes.  But this requires manual intervention.  And smarts.  And a good
eye.  If we could measure how well a font was hinted, then perhaps we
could measure the hinters on a few characters and then pick the best.
But this is just talk if the above isn't possible. :)
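For completeness, the selection itself could be as simple as scoring
each mode with the hinter_error() sketch from above and keeping the
winner.  The sample string and the two candidate modes are, again,
just placeholders.

  #include <ft2build.h>
  #include FT_FREETYPE_H

  double  hinter_error( FT_Face  face, const char*  test_chars,
                        int  points, FT_Int32  load_flags );

  /* Score each hinting mode on a handful of characters at the target
     size and return the load flags with the smallest error. */
  FT_Int32
  pick_hinting_mode( FT_Face  face,
                     int      points )
  {
    const char*  sample  = "oIHamburgefonstiv";    /* made-up test string */
    FT_Int32     modes[] = { FT_LOAD_NO_HINTING, FT_LOAD_FORCE_AUTOHINT };

    FT_Int32  best     = modes[0];
    double    best_err = hinter_error( face, sample, points, modes[0] );

    size_t  i;

    for ( i = 1; i < sizeof ( modes ) / sizeof ( modes[0] ); i++ )
    {
      double  err = hinter_error( face, sample, points, modes[i] );

      if ( err < best_err )
      {
        best_err = err;
        best     = modes[i];
      }
    }

    return best;
  }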


-Roger



