There are some unwanted artefacts in the demo that people have commented on:
On extreme minification, the texture becomes gray.
This is due to the 8-bit texture precision; the same limitation is present in Chris Green's original implementation of the method. Distances of more than 8 pixels from the edge are clamped, which yields incorrect antialiasing under extreme minification.

The remedy is either to generate a stack of traditional mipmaps (mipmapping is not used in the current demo) and switch to conventional texturing under minification, or to use 16-bit precision for the distance map so that a correct distance value is stored for every pixel. The 8-bit encoding was chosen for maximum compatibility with old versions of OpenGL. A 16-bit encoding could be done either with a native 16-bit texture format, or by using two 8-bit values per distance and compositing them to a higher precision value in the shader. I have used both with good results.
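As a sketch of the two-value compositing, the decoding in the fragment shader could look something like this. The channel layout (high byte in .r, low byte in .g of an RG texture) and the scaling are my assumptions for illustration, not the exact format of any particular demo:

```
// Hedged sketch: assumes the 16-bit distance was split across two 8-bit
// channels on the CPU side as value = hi*256 + lo, with hi stored in .r
// and lo in .g. The hardware normalizes each byte to [0,1] on fetch.
vec2 hilo = texture2D(disttex, uv).rg;
// Recombine: hilo.x*65280 recovers hi*256, hilo.y*255 recovers lo,
// and dividing by 65535 maps the full 16-bit range back to [0,1].
float D = (hilo.x * 65280.0 + hilo.y * 255.0) / 65535.0;
```

The point of the exact constants is that the recombined value is bit-exact: no precision is lost going through the two normalized 8-bit channels.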
Tilting the polygon out of the screen plane gives nasty artefacts with an Nvidia GPU.
This is a strange bug related to the automatic derivative functions dFdx() and dFdy() and the related function fwidth() in GLSL. I have not yet been able to isolate the cause, but the problem appears to be specific to Nvidia GPUs. I developed and tested the demo only on my ATI card, where it works as intended.
(BTW, I should have written aastep = length(vec2(dFdx(D), dFdy(D))) rather than aastep = 0.5*length(fwidth(uv)) in the demo, but that is unrelated to the bug. I have now changed it in the archives; the antialiasing is now anisotropic and analytically correct on an ATI GPU.)
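In context, the corrected anisotropic estimate looks something like this, where D is the decoded distance value; the threshold of 0.5 and the surrounding shader code are assumptions for the sake of the sketch:

```
// Screen-space rate of change of the distance D, taken per pixel axis.
// Unlike 0.5*length(fwidth(uv)), this follows the actual gradient of D,
// so a polygon tilted out of the screen plane still gets the correct
// smoothing width in each direction.
float aastep = length(vec2(dFdx(D), dFdy(D)));
// Antialiased coverage around the assumed edge value of 0.5.
float alpha = smoothstep(0.5 - aastep, 0.5 + aastep, D);
```

Note that this is exactly the construct that misbehaves on Nvidia GPUs: both dFdx() and dFdy() are applied to the texture-dependent value D.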
In another demo I store the gradients explicitly in the texture and use 16 bits for the distance value. That gives better minification, and no antialiasing problems on Nvidia GPUs, because the distance field is not differentiated with dFdx() and dFdy(). The pattern is also smoother because I use a different interpolation scheme, although that scheme turned out not to be such a great idea after all (it wasn't bad, but more trouble than it was worth).
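A hedged sketch of how a stored gradient can stand in for differentiating the distance field itself (the RGBA texel layout, the gradient packing, and the omitted texel-to-uv scale factor are all my assumptions here, not necessarily the format of that demo):

```
// Hedged sketch: assumes the texel holds a gradient direction packed
// into [0,1] in .rg and a 16-bit distance split across .b (high byte)
// and .a (low byte).
vec4 texel = texture2D(disttex, uv);
vec2 grad = texel.rg * 2.0 - 1.0;                          // unpack to [-1,1]
float D = (texel.b * 65280.0 + texel.a * 255.0) / 65535.0; // 16-bit distance
// Chain rule: the screen-space change of D along x is approximately
// grad . dFdx(uv), so only the smoothly varying uv is differentiated,
// never the texture-dependent distance value. (Any scale factor
// relating texel units to uv units is omitted for clarity.)
float aastep = length(vec2(dot(grad, dFdx(uv)), dot(grad, dFdy(uv))));
float alpha = smoothstep(0.5 - aastep, 0.5 + aastep, D);
```

Because dFdx() and dFdy() are applied only to uv, which is an ordinary interpolated varying, this sidesteps the Nvidia derivative bug entirely.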