I have this code posted in many different places, some of which are beyond my control or even unknown to me, but I will update it where I can. Thanks for the bug report!

If you don't have proper anti-aliasing of edges, this method won't give you a better result than any of the traditional binary transforms. Look for "jump flooding" or "sweep-and-update" in the literature and you will find a selection of suitable and fast algorithms that are far better than the brute-force algorithms I have seen game developers use.
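For readers unfamiliar with the sweep-and-update family: a minimal scalar variant (a 3-4 chamfer transform, which approximates 3x the Euclidean distance, rather than the vector-propagating 8SSED used for the AA version) can be sketched as follows. The grid size and function name are made up for illustration:

```c
#include <limits.h>

#define W 5
#define H 5

static int min2(int a, int b) { return a < b ? a : b; }

/* Two raster sweeps over a binary grid: each pass takes the minimum of a
 * pixel's current distance and its already-visited neighbors' distances
 * plus a step cost (3 for axis steps, 4 for diagonal steps). */
void chamfer34(const unsigned char *in, int *d)
{
    for (int i = 0; i < W * H; i++) d[i] = in[i] ? 0 : INT_MAX / 2;
    /* forward sweep: neighbors to the left and in the row above */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            int k = y * W + x;
            if (x > 0)              d[k] = min2(d[k], d[k - 1] + 3);
            if (y > 0)              d[k] = min2(d[k], d[k - W] + 3);
            if (x > 0 && y > 0)     d[k] = min2(d[k], d[k - W - 1] + 4);
            if (x < W - 1 && y > 0) d[k] = min2(d[k], d[k - W + 1] + 4);
        }
    /* backward sweep: mirrored neighborhood */
    for (int y = H - 1; y >= 0; y--)
        for (int x = W - 1; x >= 0; x--) {
            int k = y * W + x;
            if (x < W - 1)              d[k] = min2(d[k], d[k + 1] + 3);
            if (y < H - 1)              d[k] = min2(d[k], d[k + W] + 3);
            if (x < W - 1 && y < H - 1) d[k] = min2(d[k], d[k + W + 1] + 4);
            if (x > 0 && y < H - 1)     d[k] = min2(d[k], d[k + W - 1] + 4);
        }
}
```

Real implementations propagate an offset vector to the nearest feature pixel instead of a scalar, but the scan pattern is the same.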

You can resize the output distance field to a smaller size, but it does not give you quite the right result. Some detail is lost. Instead, I would suggest resizing the input image before making the distance field. Make sure to use plain bilinear resampling, no fancy "bicubic" or more advanced sharpening stuff, and for best results, resize the input image only in integer steps, like 2x smaller. Avoid small changes in size or weird scaling factors, as that will mess up the anti-aliasing to the point where my distance transform falls flat on its face and gives downright silly results (wobbly, distorted edges, just plain wrong distances).
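For instance, a 2x reduction with plain bilinear resampling reduces to a 2x2 box average, which could be sketched like this (a minimal helper for an 8-bit grayscale image, not part of the distance-transform code itself; the name `downsample2x` is made up):

```c
#include <stdint.h>

/* Downsample an 8-bit grayscale image by an exact factor of 2 using a
 * 2x2 box filter. Bilinear resampling at an integer 2x step is exactly
 * this average, which preserves the anti-aliased edge values the
 * distance transform depends on. w and h must be even. */
static void downsample2x(const uint8_t *src, int w, int h, uint8_t *dst)
{
    int dw = w / 2, dh = h / 2;
    for (int y = 0; y < dh; y++) {
        for (int x = 0; x < dw; x++) {
            int sum = src[(2 * y) * w + 2 * x]     + src[(2 * y) * w + 2 * x + 1]
                    + src[(2 * y + 1) * w + 2 * x] + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * dw + x] = (uint8_t)((sum + 2) / 4); /* rounded average */
        }
    }
}
```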

Let me know if I can be of any further assistance. I would like to see this being used more.

So my working knowledge (from game dev) is that I could take, say, a 2048x2048… and generate a distance field at 512x512 (or lower).

With this C version, I seem to get the same sized df out as the input.

Is it appropriate to resize it myself, or does that screw up the purpose and math?

Or, can someone help point me in the right direction for changing the output size of the df in this code snippet?

1) Yes, it is possible, and I have a GPU accelerated demo here:

http://webstaff.itn.liu.se/~stegu/JFA/

You want the version that says "validation" at the end. The other ones may not be AA at all; I'm not sure, and I don't quite remember, because that is old code that I have not touched for years. Note that the code is a few years old, so it does not use anything above GLSL 1.2: no integer indexing, no integer texture formats and no multiple render targets, and it is written using an old version of OpenGL (2.1) to be compatible with the version of Mac OS X that was current back then. It still works, though, and it's fast. However, I never got around to implementing the more accurate distance measures. This demo is equivalent to "edtaafunc", without any directional dependency for the distance; it's just a linear function of the pixel value, according to equation (1) in the article.
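In other words, as I understand the equation (1) estimate being referenced, the edge distance for an anti-aliased pixel is taken as a linear function of its coverage value; a minimal sketch (assuming a in [0,1], with 0 fully outside and 1 fully inside):

```c
/* Linear edge-distance estimate for an anti-aliased edge pixel: the
 * signed distance from the pixel center to the edge is approximated as
 * a linear function of the coverage value a, with no directional term. */
static float edge_distance_linear(float a)
{
    return 0.5f - a; /* a = 0.5 means the edge passes through the center */
}
```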

2) Perhaps I can make this more clear: the (u,v) is the position of the "hit point" measured from the center of the pixel. The "hit point" (black circle) is the position where a line from the external point, running orthogonal to the edge, would hit the edge if it were at that exact distance, with the edge direction estimated from local derivatives. If that point is not within the pixel in question, the measure is not entirely reasonable, but if it is within the pixel, the "true" distance is likely to be a more accurate estimate, and is used instead.

Note that the "improvement" from the "edtaa3func" to the "edtaa4func" versions is somewhat questionable. The extra calculations improve the average accuracy, but there are corner cases at isolated pixels where they do more damage than good. You need to test it to see if it works for the kind of input images you have.

3) The "other" hint to edge direction is the direction to the closest edge pixel, which is available for pixels that are not on the edge itself. I am sorry for any confusion here. What I meant to say was that the direction to the edge pixel is a better estimate of the edge direction if the edge is far away, less accurate if the edge is close, and not applicable if the pixel is on the edge. For pixels on the edge, the derivatives are all you have. For distant pixels, the vector to the edge is a better estimate. For pixels at a distance of a few pixels or less from the edge, the derivatives may be of use to improve the estimate, but not always. I have no hard decision rule here; I can just say "it depends".

Please let me know if you have any further questions. Note, however, that I am currently on vacation and may take more time than usual to respond to e-mail.

First, I tried jump flooding, which results in no AA.

Second, I have read the AA EDT article in Pattern Recognition Letters.

I ran some tests on the produced DT, and it gives nice AA effects, with limitless resolution.

My questions:

1) Is it possible to implement it with jump flooding?

2) Can you please further explain the (u,v) you are referring to in that paper, i.e. equation (6) and its illustration in Fig. 4:

i.e. "Therefore, the local gradients were used to improve the result only at the edge and very near it, where most of the larger absolute errors were located."

and, in Fig. 4, "the hit point" (black circle).

3) "In our experiments, using the local gradient always improved the accuracy for edge pixels, where no other edge direction information was available."

Can you further explain this sentence, i.e. "other edge direction information"?

Thank you,

Zebratov

Why not public domain? Well, I am somewhat reluctant to release significant amounts of non-trivial code as public domain. I like openness a lot, but I have had some bad experiences in the past where people have taken my code and put it into a product without giving me so much as a word of thanks. I have no illusions of earning any money from this, but as a researcher, I am very much into getting and giving credit where credit is due.

The algorithm is described in detail in the scientific article, and it should not be a huge effort to re-implement it from the description and use it for any purpose if the MIT license doesn't cut it for you. Of course, if you have a specific use in mind, I would be happy to negotiate an individual license for your particular needs. (Don't worry, I'm cheap. I just want some level of control over where my code is used commercially.)

I was wondering about the use of the GNU GPL - it essentially makes it impossible to use the software in any commercial product. Would it be meaningful to change it to either public domain, the MIT License, or the zlib license?

(in descending order of usefulness)

Thanks!

I have made a Unity package for it. It contains the above code, an editor class that makes an editor window for it, and the GNU GPL. You can find it, along with documentation, over at catlikecoding.com/unity/products/distance-map-generator/

…and tweaked, and to see contour rendering catching on!

I'm working on a text tool for the Unity3D game engine and I included a C# version of the EDTAA algorithm to create nice distance maps from font atlases. The Generate method takes the alpha channel of a source texture and generates the distances from that, either outside, inside, or both. Beyond that, it's conceptually the same code as edtaa3 and the post-process part, although I approached a few things differently.

Though it's Unity3D-specific, it shouldn't be too hard to integrate the C# code into a non-Unity project.

```csharp
using UnityEngine;

/// <summary>
/// Utility class for generating distance maps from anti-aliased alpha maps.
/// </summary>
public static class CCDistanceMapGenerator {

	/// <summary>
	/// How to fill the RGB channels of the generated distance map.
	/// </summary>
	public enum RGBMode {
		/// <summary>Set the RGB channels to 1.</summary>
		White,
		/// <summary>Set the RGB channels to 0.</summary>
		Black,
		/// <summary>Set the RGB channels to the computed distance.</summary>
		Distance,
		/// <summary>Copy the source texture's RGB channels.</summary>
		Source
	}

	private class Pixel {
		public float alpha, distance;
		public Vector2 gradient;
		public int dX, dY;
	}

	private static int width, height;
	private static Pixel[,] pixels;

	/// <summary>
	/// Generates a distance texture from the alpha channel of a source texture.
	/// </summary>
	/// <param name="source">
	/// The source texture. Alpha values of 1 are considered inside, values of 0 are
	/// considered outside, and any other values are considered to be on the edge.
	/// Make sure the texture is readable and not compressed.
	/// </param>
	/// <param name="destination">
	/// The destination texture. Must be the same size as the source texture.
	/// </param>
	/// <param name="maxInside">
	/// The maximum pixel distance measured inside the boundary, resulting in an alpha value of 1.
	/// If set to zero, everything inside will have an alpha value of 1.
	/// </param>
	/// <param name="maxOutside">
	/// The maximum pixel distance measured outside the boundary, resulting in an alpha value of 0.
	/// If set to zero, everything outside will have an alpha value of 0.
	/// </param>
	/// <param name="postProcessDistance">
	/// Pixel distance from the boundary which will be post-processed using the boundary gradient.
	/// </param>
	/// <param name="rgbMode">
	/// How to fill the destination texture's RGB channels.
	/// </param>
	public static void Generate (Texture2D source, Texture2D destination,
	                             float maxInside, float maxOutside,
	                             float postProcessDistance, RGBMode rgbMode) {
		if (source.height != destination.height || source.width != destination.width) {
			Debug.LogError("Source and destination textures must be the same size.");
			return;
		}
		try {
			source.GetPixel(0, 0);
		}
		catch {
			Debug.LogError("Source texture is not read/write enabled.");
			return;
		}
		width = source.width;
		height = source.height;
		pixels = new Pixel[width, height];
		int x, y;
		float scale;
		Color c = rgbMode == RGBMode.White ? Color.white : Color.black;
		for (y = 0; y < height; y++) {
			for (x = 0; x < width; x++) {
				pixels[x, y] = new Pixel();
			}
		}
		if (maxInside > 0f) {
			for (y = 0; y < height; y++) {
				for (x = 0; x < width; x++) {
					pixels[x, y].alpha = 1f - source.GetPixel(x, y).a;
				}
			}
			ComputeEdgeGradients();
			GenerateDistanceTransform();
			if (postProcessDistance > 0f) {
				PostProcess(postProcessDistance);
			}
			scale = 1f / maxInside;
			for (y = 0; y < height; y++) {
				for (x = 0; x < width; x++) {
					c.a = Mathf.Clamp01(pixels[x, y].distance * scale);
					destination.SetPixel(x, y, c);
				}
			}
		}
		if (maxOutside > 0f) {
			for (y = 0; y < height; y++) {
				for (x = 0; x < width; x++) {
					pixels[x, y].alpha = source.GetPixel(x, y).a;
				}
			}
			ComputeEdgeGradients();
			GenerateDistanceTransform();
			if (postProcessDistance > 0f) {
				PostProcess(postProcessDistance);
			}
			scale = 1f / maxOutside;
			if (maxInside > 0f) {
				for (y = 0; y < height; y++) {
					for (x = 0; x < width; x++) {
						c.a = 0.5f + (destination.GetPixel(x, y).a -
							Mathf.Clamp01(pixels[x, y].distance * scale)) * 0.5f;
						destination.SetPixel(x, y, c);
					}
				}
			}
			else {
				for (y = 0; y < height; y++) {
					for (x = 0; x < width; x++) {
						c.a = Mathf.Clamp01(1f - pixels[x, y].distance * scale);
						destination.SetPixel(x, y, c);
					}
				}
			}
		}
		if (rgbMode == RGBMode.Distance) {
			for (y = 0; y < height; y++) {
				for (x = 0; x < width; x++) {
					c = destination.GetPixel(x, y);
					c.r = c.a;
					c.g = c.a;
					c.b = c.a;
					destination.SetPixel(x, y, c);
				}
			}
		}
		else if (rgbMode == RGBMode.Source) {
			for (y = 0; y < height; y++) {
				for (x = 0; x < width; x++) {
					c = source.GetPixel(x, y);
					c.a = destination.GetPixel(x, y).a;
					destination.SetPixel(x, y, c);
				}
			}
		}
		pixels = null;
	}

	private static void ComputeEdgeGradients () {
		float sqrt2 = Mathf.Sqrt(2f);
		for (int y = 1; y < height - 1; y++) {
			for (int x = 1; x < width - 1; x++) {
				Pixel p = pixels[x, y];
				if (p.alpha > 0f && p.alpha < 1f) {
					// estimate gradient of edge pixel using surrounding pixels,
					// with separate diagonal terms per component (as in edtaa3func)
					p.gradient.x =
						- pixels[x - 1, y - 1].alpha
						- pixels[x - 1, y + 1].alpha
						+ pixels[x + 1, y - 1].alpha
						+ pixels[x + 1, y + 1].alpha
						+ (pixels[x + 1, y].alpha - pixels[x - 1, y].alpha) * sqrt2;
					p.gradient.y =
						- pixels[x - 1, y - 1].alpha
						- pixels[x + 1, y - 1].alpha
						+ pixels[x - 1, y + 1].alpha
						+ pixels[x + 1, y + 1].alpha
						+ (pixels[x, y + 1].alpha - pixels[x, y - 1].alpha) * sqrt2;
					p.gradient.Normalize();
				}
			}
		}
	}

	private static float ApproximateEdgeDelta (float gx, float gy, float a) {
		// (gx, gy) can be either the local pixel gradient or the direction to the pixel
		if (gx == 0f || gy == 0f) {
			// linear function is correct if both gx and gy are zero
			// and still fair if only one of them is zero
			return 0.5f - a;
		}
		// normalize (gx, gy)
		float length = Mathf.Sqrt(gx * gx + gy * gy);
		gx = gx / length;
		gy = gy / length;
		// reduce symmetrical equation to first octant only
		// gx >= 0, gy >= 0, gx >= gy
		gx = Mathf.Abs(gx);
		gy = Mathf.Abs(gy);
		if (gx < gy) {
			float temp = gx;
			gx = gy;
			gy = temp;
		}
		// compute delta
		float a1 = 0.5f * gy / gx;
		if (a < a1) {
			// 0 <= a < a1
			return 0.5f * (gx + gy) - Mathf.Sqrt(2f * gx * gy * a);
		}
		if (a < (1f - a1)) {
			// a1 <= a <= 1 - a1
			return (0.5f - a) * gx;
		}
		// 1 - a1 < a <= 1
		return -0.5f * (gx + gy) + Mathf.Sqrt(2f * gx * gy * (1f - a));
	}

	private static void UpdateDistance (Pixel p, int x, int y, int oX, int oY) {
		Pixel neighbor = pixels[x + oX, y + oY];
		Pixel closest = pixels[x + oX - neighbor.dX, y + oY - neighbor.dY];
		if (closest.alpha == 0f || closest == p) {
			// neighbor has no closest yet
			// or neighbor's closest is p itself
			return;
		}
		int dX = neighbor.dX - oX;
		int dY = neighbor.dY - oY;
		float distance = Mathf.Sqrt(dX * dX + dY * dY) +
			ApproximateEdgeDelta(dX, dY, closest.alpha);
		if (distance < p.distance) {
			p.distance = distance;
			p.dX = dX;
			p.dY = dY;
		}
	}

	private static void GenerateDistanceTransform () {
		// perform anti-aliased Euclidean distance transform
		int x, y;
		Pixel p;
		// initialize distances
		for (y = 0; y < height; y++) {
			for (x = 0; x < width; x++) {
				p = pixels[x, y];
				p.dX = 0;
				p.dY = 0;
				if (p.alpha <= 0f) {
					// outside
					p.distance = 1000000f;
				}
				else if (p.alpha < 1f) {
					// on the edge
					p.distance = ApproximateEdgeDelta(p.gradient.x, p.gradient.y, p.alpha);
				}
				else {
					// inside
					p.distance = 0f;
				}
			}
		}
		// perform 8SSED (eight-points signed sequential Euclidean distance transform)
		// scan up
		for (y = 1; y < height; y++) {
			// |P.
			// |XX
			p = pixels[0, y];
			if (p.distance > 0f) {
				UpdateDistance(p, 0, y, 0, -1);
				UpdateDistance(p, 0, y, 1, -1);
			}
			// -->
			// XP.
			// XXX
			for (x = 1; x < width - 1; x++) {
				p = pixels[x, y];
				if (p.distance > 0f) {
					UpdateDistance(p, x, y, -1, 0);
					UpdateDistance(p, x, y, -1, -1);
					UpdateDistance(p, x, y, 0, -1);
					UpdateDistance(p, x, y, 1, -1);
				}
			}
			// XP|
			// XX|
			p = pixels[width - 1, y];
			if (p.distance > 0f) {
				UpdateDistance(p, width - 1, y, -1, 0);
				UpdateDistance(p, width - 1, y, -1, -1);
				UpdateDistance(p, width - 1, y, 0, -1);
			}
			// <--
			// .PX
			for (x = width - 2; x >= 0; x--) {
				p = pixels[x, y];
				if (p.distance > 0f) {
					UpdateDistance(p, x, y, 1, 0);
				}
			}
		}
		// scan down
		for (y = height - 2; y >= 0; y--) {
			// XX|
			// .P|
			p = pixels[width - 1, y];
			if (p.distance > 0f) {
				UpdateDistance(p, width - 1, y, 0, 1);
				UpdateDistance(p, width - 1, y, -1, 1);
			}
			// <--
			// XXX
			// .PX
			for (x = width - 2; x > 0; x--) {
				p = pixels[x, y];
				if (p.distance > 0f) {
					UpdateDistance(p, x, y, 1, 0);
					UpdateDistance(p, x, y, 1, 1);
					UpdateDistance(p, x, y, 0, 1);
					UpdateDistance(p, x, y, -1, 1);
				}
			}
			// |XX
			// |PX
			p = pixels[0, y];
			if (p.distance > 0f) {
				UpdateDistance(p, 0, y, 1, 0);
				UpdateDistance(p, 0, y, 1, 1);
				UpdateDistance(p, 0, y, 0, 1);
			}
			// -->
			// XP.
			for (x = 1; x < width; x++) {
				p = pixels[x, y];
				if (p.distance > 0f) {
					UpdateDistance(p, x, y, -1, 0);
				}
			}
		}
	}

	private static void PostProcess (float maxDistance) {
		// adjust distances near edges based on the local edge gradient
		for (int y = 0; y < height; y++) {
			for (int x = 0; x < width; x++) {
				Pixel p = pixels[x, y];
				if ((p.dX == 0 && p.dY == 0) || p.distance >= maxDistance) {
					// ignore edge, inside, and beyond max distance
					continue;
				}
				float dX = p.dX, dY = p.dY;
				Pixel closest = pixels[x - p.dX, y - p.dY];
				Vector2 g = closest.gradient;
				if (g.x == 0f && g.y == 0f) {
					// ignore unknown gradients (inside)
					continue;
				}
				// compute hit point offset on gradient inside pixel
				float df = ApproximateEdgeDelta(g.x, g.y, closest.alpha);
				float t = dY * g.x - dX * g.y;
				float u = -df * g.x + t * g.y;
				float v = -df * g.y - t * g.x;
				// use hit point to compute distance
				if (Mathf.Abs(u) <= 0.5f && Mathf.Abs(v) <= 0.5f) {
					p.distance = Mathf.Sqrt((dX + u) * (dX + u) + (dY + v) * (dY + v));
				}
			}
		}
	}
}
```

First of all, I want to thank you very much for having helped me.

I know I should not complain, but I do not know what to do; I am stuck in this area: I search and search, I read and reread, without result. I am coming back to you all after a long period to tell you that I have not accomplished anything. I tried to run your program (texture_creation), and I tried to convert it into C, but I never succeeded.

The problem is that I do not have a strategy to follow; I would appreciate your guidance.

Thank you once again.

Now I am trying to program your algorithm to generate the input image, and Green's as well, to compare the two. And in the future, I will look for my own method.

…well enough to work my way through shorter texts. An automatic translation also makes a lot more sense if you include your French original, so feel free to write me using either language, or both!

I have some difficulty forming sentences because I write in French and then use a translator to turn it into English; that is why you cannot always follow me. Even so, I have gotten a lot of information from your answers.

Now I have tried programming your method to generate the input image, and doing a small comparison with Green's. After that, I will look for another method that suits me.

…the TGA files I used for testing have been corrupted. Two have noise in some scanlines (shape2 and shape3), and one refuses to parse (shape4). The code appears to work on ATI and Nvidia hardware alike. It's not fast on weak GPUs, but it's probably still faster than doing it in software.

…has not been touched for some time, and that it is probably not my proudest handiwork. It was a very fast hack, but it worked and seemed to give the correct results on my Nvidia card at work. Now the result seems to be wrong for some of the test images when trying it on my ATI card at home, but I don't have the time to find the bug.

The AA version is not online yet. I'll get back to you in a few minutes on this.