From: Tuomas J. Lukka
Subject: [Gzz-commits] manuscripts/AniFont anifont.tex
Date: Wed, 12 Nov 2003 08:16:52 -0500

CVSROOT:        /cvsroot/gzz
Module name:    manuscripts
Branch:         
Changes by:     Tuomas J. Lukka <address@hidden>        03/11/12 08:16:51

Modified files:
        AniFont        : anifont.tex 

Log message:
        sync

CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/manuscripts/AniFont/anifont.tex.diff?tr1=1.27&tr2=1.28&r1=text&r2=text

Patches:
Index: manuscripts/AniFont/anifont.tex
diff -u manuscripts/AniFont/anifont.tex:1.27 manuscripts/AniFont/anifont.tex:1.28
--- manuscripts/AniFont/anifont.tex:1.27        Fri Oct 31 12:54:54 2003
+++ manuscripts/AniFont/anifont.tex     Wed Nov 12 08:16:51 2003
@@ -1,8 +1,14 @@
 \documentclass[twocolumn,10pt]{article}
 \usepackage{graphicx}
 \usepackage{fancybox}
-\usepackage{beton}
+% \usepackage{beton}
+\usepackage{times}
 \usepackage{caption2}
+
+%\makeatletter
address@hidden
+%\makeatother
+
 \begin{document}
 
 \renewcommand{\topfraction}{.1}
@@ -49,6 +55,8 @@
 
 \section{Introduction}
 
+XXX GAMMA
+
 Texture mapping is a ubiquitous computer graphics
 primitive\cite{heckbert86survey,haeberli93texture}
 originally introduced in \cite{catmull74}. 
@@ -59,7 +67,8 @@
 hardware-accelerators, it is important that the number of samples can be
 kept constant regardless of the pixel footprint.
 
-Trilinear (mipmap) filtering\cite{williams83pyramidal}
+Trilinear (mipmap) filtering\cite{williams83pyramidal},
+the current \emph{de facto} standard texture filter,
 was designed to avoid temporal and spatial aliasing while only requiring 8 
 texture samples per pixel. 
 For 3D rendering, the most well-known problem of
@@ -70,16 +79,10 @@
 that usually work through some type of
 \emph{footprint assembly}, i.e. assembling a better approximation
 to the pixel footprint in texture space from normal mipmap samples or by using
-trilinear \emph{probes}. %probes?
+trilinear \emph{probes}. 
+Most graphics accelerators today support trilinear filtering along
+with some type of anisotropic filtering and either super- or multisampling.
 
-While summed-area tables(XXX CROWREF) can often provide
-better rendering quality, their hardware implementation is not easy, as discussed in 
-% isn't it not even applicable to rotated mappings?!
-% so ``today ...'' below doesn't seem to follow from the above
-(XXX ref to fast footprint/... discussing this), so today trilinear rendering, supplemented
-with some form of anisotropic filtering
-is the \emph{de facto} standard in hardware accelerators, supplemented by
-support for full-screen super- or multisampling. 
 
 \def\snapsize{2.4cm}
 \def\snapshot#1{\raisebox{-2cm}{\includegraphics[totalheight=\snapsize]{#1}}}
@@ -126,12 +129,12 @@
 \end{figure*}
 
 
-In this article, we consider texture filtering in the overlooked,
+In this article, we consider texture filtering in the often overlooked
 isotropic or nearly isotropic case. Even in the isotropic case, trilinear
 can blur small features that appear in, e.g., text.
 We discovered accidentally that stretching an image anisotropically
 when placing it into a texture and squishing it back when rendering
-using texture coordinates, all the while enabling anisotropic filtering,
+using texture coordinates, while enabling anisotropic filtering,
 yields a considerably better image quality for text. 
 
 In looking to understand why the stretch-squish method works,
@@ -139,8 +142,7 @@
 extremely useful, contrary to the usual practice in the
 texture filtering literature to 
 visualize the pixel footprint exclusively in the texture space.
-Our PFSS diagrams show a highly magnified pixel (e.g. 100 pixels in side)
-% 100 pixels in side? 100x mag?
+Our PFSS diagrams show a highly magnified (e.g., 100x) pixel 
 and the contributions (assuming box filtering for the mipmap levels)
 from the texels mapped to the surrounding area by a color.
 Figure~\ref{figallpfss} shows 
@@ -159,10 +161,10 @@
 
 
 
-\label{secrelated}
 
 
 \section{Related work}
+\label{secrelated}
 
 In this Section, we discuss the known methods to improve the quality of
 hardware-accelerated texture filtering in isotropic situations.
@@ -215,7 +217,7 @@
 % is orthogonoal in the opengl functionality sense generally understood?
 
 \section{Stretch and squish improves image quality}
-
+\label{secsquish}
 
 \begin{figure}[thb!]
 \centering
@@ -307,7 +309,10 @@
 \begin{table*}
 \begin{minipage}{\textwidth}
 \begin{tabular}{p{3cm}|lllll}
-Method              & HW req & Clarity     & Aliasing    & Code changes   & Relative time per pixel\\
+Method              & HW req & Clarity     & Aliasing    & Code changes   & Relative time per pixel
+                                                                           \footnote{The numbers are approximate,
+                                                                           combined from measurements on several
+                                                                           kinds of hardware.}\\
 \hline\\
 Trilinear           & Any    & Blurry      & ---         & ---            & 1 \\
 Trilinear, LOD bias & Any    & Less blurry & Bad         & trivial        & 1---2 \\
@@ -319,7 +324,7 @@
                                       supporting LOD biasing, however with a significantly larger performance drop due to multiple passes
                                       and blending..} & Good       & ---         & significant    & 4---6 \\
 Fragment-based supersampling & 
-                        NV3X+ & Good       & ---         & trivial        & 10---20 \\
+                        NV3X+ & Good       & ---         & trivial        & 10---12 \\
 \hline
 \end{tabular}
 \end{minipage}
@@ -379,44 +384,73 @@
 In our investigations for this article, we found the pixel footprint
 diagrams in screen space (PFSS) diagrams most useful for understanding
 the properties of a filtering method w.r.t.~anisotropy.
+These diagrams show, as dark lines, the edges of the highly 
+(100x or more)
+magnified pixel. The texture is mapped on top of the magnified pixel
+with the same transformation, but instead of colors, the texels are
+made to represent the \emph{contribution} of the texel to the final
+value of the pixel.
+
+It is unfortunate that manufacturers do not provide details of what their
+hardware is actually doing;
+for careful graphics work, it is useful to be able to understand the algorithms
+used. We are left with the approach of looking at the hardware as a physical
+phenomenon and doing \emph{experiments} to find out how it functions.
+This is not always simple:
+each additional free variable grows the number of experiments needed
+exponentially, which is why we make the strictest assumptions we can
+about invariances beforehand.
+
+The most important invariance assumptions we make are
+that the driver is not detecting which software is being run
+and changing its behavior, and that the driver is not changing
+the filtering algorithms for screenshot images versus normal images.
+If either assumption fails, working around it is nontrivial. Also, we assume that the filters
+used by the hardware are linear --- nonlinear filters would be much more
+difficult to probe experimentally.
+
+Some other assumptions whose violations would be somewhat easier to 
+work around are: that the implementation is not looking at the contents
+of the texture images and deciding filtering algorithms based on
+them (image-sensitive filtering; if this is the case, linear algebra
+might be used to find the filter for particular kinds of
+input images); that there are no negative weights in the filter (if this
+is suspected, gray should be used instead of black in the probe texture,
+and the blending of the final image should be altered);
+that all texture units produce the same results (workaround:
+use more texture units and linear algebra to separate the contribution
+of one);
+that pixel translation invariance in screen space holds
+accurately (workaround: instead of rendering the pixels below
+at different locations, render and CopyTexSubImage a single pixel sequentially); 
+that a quad rendered at a single pixel affects no neighbouring 
+pixels (violated, e.g., in NVIDIA's Quincunx multisampling; workaround:
+render 3x3 quads as probes and use the middle pixel for CopyTexSubImage).
+
+%Digit-life XXX NVIDIA, ATI patterns - ? Method of probing not explained; 
+%is the data real?
+
+
+%- ASSUMPTIONS: 
+%  driver not detecting software and applying different rules, driver
+%  not changing algorithm for screenshot images / moving images, 
+%  driver not looking at texture images and deciding filtering 
+%  algorithms based on that (image-sensitive filters). 
+%  (can use linear algebra to do this then).
+%  Filters are linear (nonlinearities in the filters - to our knowledge none yet; gamma correction?).
+
+%- SEMI-ASSUMPTIONS (trivial to adjust algorithm): all texture units produce
+%  the same results (in some drivers, this is not the case - 3dcenter about nv 51.XX series
+%  DirectX),
+%  Driver isn't using a different set of samples for large and small triangles, 
+%  pixel translation invariance, in screen space.
+%  Only positive weights in the filter (can use 
+%  gray so we see also if there are negative weights in the filter)
 
-In this Section, we 
-- Graphics companies unfortunately do not provide ...
-
-The diagrams assume a box filter for generating the mipmaps, 
-as contributions from different mipmaps are directly blended
-over each other.
-
-This seems to be a well-known technique that has not so far been published anywhere
-A similar technique appears to be used more commonly used for probing hardware
-antialiasing patterns, 
-
-
-- Graphics companies unfortunately do not provide ...
-
-Digit-life XXX NVIDIA, ATI patterns
-
-- for careful work, you'll want to know what your driver is doing
-
-- difficulty in probing hardware: each free variable grows number of probes to make
-  exponentially - have to make as strict assumptions as possible
-
-- ASSUMPTIONS: 
-  driver not detecting software and applying different rules, driver
-  not changing algorithm for screenshot images / moving images, 
-  driver not looking at texture images and deciding filtering 
-  algorithms based on that (image-sensitive filters). 
-  (can use linear algebra to do this then).
-  Filters are linear (nonlinearities in the filters - to our knowledge none yet; gamma correction?).
-
-- SEMI-ASSUMPTIONS (trivial to adjust algorithm): all texture units produce
-  the same results (in some drivers, this is not the case - 3dcenter about nv 51.XX series
-  DirectX),
-  Driver isn't using a different set of samples for large and small triangles, 
-  pixel translation invariance, in screen space.
-
-- gray so we see also if there are negative weights in the filter!
+%- FSAA does not blur samples from neighbouring pixels
+%
+%    - if it does, render larger quads further apart.
+%      and use CopyTexSubImage instead of CopyTexImage
 
 - select the texture matrix to map the single-pixel texture quad ...
 
@@ -432,11 +466,11 @@
 
 - utility in our free software OpenGL libvob system
 
-- FSAA does not blur samples from neighbouring pixels
 
-    - if it does, render larger quads further apart.
-      and use CopyTexSubImage instead of CopyTexImage
 
+The diagrams assume a box filter for generating the mipmaps, 
+as contributions from different mipmaps are directly blended
+over each other.
 
 - assumptions about the contents of the mipmap levels
 
@@ -486,5 +520,12 @@
 Note that only the texture matrix and texture filter parameters
 for texture 1 need to be changed
 for probing different texture-to-screen mappings 
+
+Probing hardware texture filters in this
+way is a simple technique that does not appear to have been published
+anywhere;
+a somewhat analogous technique appears to be used more commonly for probing hardware
+antialiasing patterns, by rendering subpixel-sized quads with different
+subpixel shifts and seeing which ones actually cause something to be rendered (XXXREF).
 
 \end{document}
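[Editor's note on the probing approach added in this patch: the method rests on the linearity assumption stated above. If the hardware filter is linear, the rendered pixel value is a weighted sum of texel values, so rendering one-hot probe textures (a single bright texel, the rest black) and reading the pixel back recovers the filter weights one texel at a time. The sketch below illustrates only that linear-algebra idea; the GPU is replaced by a stand-in function, and all names are hypothetical, not from the paper's code.]

```python
# Sketch: recovering the weights of an unknown *linear* texture filter
# by probing it with one-hot textures. The "hardware" is simulated by a
# hidden weight table; on real hardware the same role would be played by
# rendering a single textured pixel and reading the framebuffer back.

def make_hidden_filter(weights):
    """Stand-in for the GPU: pixel value = sum of weight * texel."""
    def filter_pixel(texture):
        return sum(w * t for w, t in zip(weights, texture))
    return filter_pixel

def probe_filter(filter_pixel, n_texels):
    """Probe with one-hot textures: by linearity, texel i's weight is
    the response to a texture that is 1 at texel i and 0 elsewhere."""
    weights = []
    for i in range(n_texels):
        probe = [1.0 if j == i else 0.0 for j in range(n_texels)]
        weights.append(filter_pixel(probe))
    return weights

hidden = [0.25, 0.5, 0.25, 0.0]   # unknown filter weights
recovered = probe_filter(make_hidden_filter(hidden), len(hidden))
print(recovered)                  # -> [0.25, 0.5, 0.25, 0.0]
```

As the patch notes, probing with gray rather than black backgrounds would additionally expose negative weights, which black-and-white one-hot probes cannot distinguish from clamping to zero.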