Recent Forum Posts

The bug is now fixed in the GitHub repo for "OpenGL Insights".

Re: makedist.c, width / height by stegu, 06 Jan 2015 15:50

Oops. Sorry about that. Another case of confusion from my many years of using Matlab, which uses Fortran-style memory layout for 2D arrays. I probably tested this code only with square images, or tested it without taking enough care to use the parameters according to their names.
I have this code posted to many different places, some of which are beyond my control or even unknown to me, but I will update it where I can. Thanks for the bug report!

Re: makedist.c, width / height by stegu, 06 Jan 2015 15:37

I believe the width and height parameters are switched around in makedist.c in the repository for OpenGL Insights. The calls to computegradient and edtaa3 should have those parameters swapped. Since you recently updated the repository, maybe you would like to fix it, since it took me a couple of hours of head scratching :) Great work, btw!

makedist.c, width / height by jimhad, 05 Jan 2015 00:38
Re: Exact EDT?
stegu 14 Aug 2014 10:21
in discussion Support forum / Suggestions » Exact EDT?

That's an interesting question, but I don't have an answer off the top of my head. I would need to look closer into it, but unfortunately I don't have much time to spend on that at the moment.

Generally speaking, the errors generated by the anti-aliased version are different than the errors from the binary transform, and I would expect it to potentially mess things up a bit for the exact algorithms. The key issue is that you need to evaluate more pixels than the closest one on the discrete grid to be certain of finding the nearest edge in the anti-aliased sense. I have not spent any deep thought on what that would mean for the two specific algorithms you mention.

Thanks for your comment, and your interest. I would be happy to discuss this further, but right now I have very little time to spend on actual coding and testing in this area. My focus when I did this was in getting it to work at all, because I needed the results for my research in computer graphics. The fine details about getting exact results (and even trying to figure out how "exact" would be interpreted in this context) were not my primary concern at the time.

Re: Exact EDT? by stegu, 14 Aug 2014 10:21
Exact EDT?
cbrunschen 14 Aug 2014 09:32
in discussion Support forum / Suggestions » Exact EDT?

(Sorry for the lack of links; as a low-karma user I am not permitted to post those, it seems. I've tried to give clear references instead; apologies for anything I've gotten wrong.)

The Jump Flooding Algorithm (JFA) as used here for calculating the distance transform is fairly quick and produces good, but not exact, results. However, there are also algorithms that produce the exact Euclidean Distance Transform (EDT) in linear time, such as those described in "A General Algorithm for Computing Distance Transforms in Linear Time" (A. Meijster, J.B.T.M. Roerdink and W.H. Hesselink, Computational Imaging and Vision 18 (8) (2000) 331–340) and "A Linear Time Algorithm for Computing Exact Euclidean Distance Transforms of Binary Images in Arbitrary Dimensions" (C.R. Maurer, R. Qi and V. Raghavan, IEEE Transactions on Pattern Analysis and Machine Intelligence 25(2):265–270, February 2003). This approach is straightforward to parallelise on CPUs, and it has also been implemented on GPUs (in CUDA), as the Parallel Banding Algorithm (Thanh-Tung Cao, Ke Tang, Anis Mohamed and Tiow-Seng Tan, Proceedings of the 2010 ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, I3D '10).
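
For concreteness, here is a rough C# sketch (my own, written in the style of these papers, not code taken from any of them) of the 1D lower-envelope pass at the heart of the linear-time exact EDTs; the 2D transform applies this pass first along every column of squared distances and then along every row.

public static class ExactEdt1D {

    // f[i] is the squared-distance "seed" value at sample i
    // (0 for feature pixels, a large value for background pixels).
    // Returns the exact 1D squared Euclidean distance transform of f.
    public static double[] Transform (double[] f) {
        int n = f.Length;
        double[] d = new double[n];
        int[] v = new int[n];           // positions of parabola vertices
        double[] z = new double[n + 1]; // boundaries between parabolas on the lower envelope
        int k = 0;
        v[0] = 0;
        z[0] = double.NegativeInfinity;
        z[1] = double.PositiveInfinity;
        for (int q = 1; q < n; q++) {
            // intersection of the parabola rooted at q with the rightmost one on the envelope
            double s = ((f[q] + (double)q * q) - (f[v[k]] + (double)v[k] * v[k])) / (2.0 * (q - v[k]));
            while (s <= z[k]) {
                k--;
                s = ((f[q] + (double)q * q) - (f[v[k]] + (double)v[k] * v[k])) / (2.0 * (q - v[k]));
            }
            k++;
            v[k] = q;
            z[k] = s;
            z[k + 1] = double.PositiveInfinity;
        }
        k = 0;
        for (int q = 0; q < n; q++) {
            while (z[k + 1] < q) k++;
            d[q] = (q - v[k]) * (double)(q - v[k]) + f[v[k]];
        }
        return d;
    }
}

For a binary image, f is simply 0 at feature pixels and a large constant elsewhere; how to feed anti-aliased edge coverage into such a scheme is exactly what I am unsure about.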

Would some of this be applicable here, and possible to adapt to generate the anti-aliased distance transform? An exact algorithm would avoid the (small) errors still present in what jump flooding calculates, a linear-time algorithm might also compute the result a little more quickly, and a GPU implementation would seem a good fit for generating distance fields from rendered content in real time within a game.

Exact EDT? by cbrunschen, 14 Aug 2014 09:32

The purpose of this code is to make accurate distance fields from images with properly anti-aliased edges instead of using an oversized binary image as input and downsampling the distance field. The potential improvement over a binary transform corresponds to at least a 16-fold increase in resolution, as argued in the article in Pattern Recognition Letters, but it absolutely requires you to have area-sampled anti-aliased edges in the input image.

If you don't have proper anti-aliasing of edges, this method won't give you a better result than any of the traditional binary transforms. Look for "jump flooding" or "sweep-and-update" in the literature and you will find a selection of suitable and fast algorithms that are far better than the brute force algorithms I have seen game developers use.
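
For reference, here is a rough sketch of a single jump-flooding pass (just an illustration of the idea, not my demo code, and on the CPU even though JFA is really a GPU algorithm). The full algorithm repeats this pass with step sizes N/2, N/4, ..., 1 over a grid of nearest-seed coordinates:

public static class JumpFloodSketch {

    // seedX[y * width + x] and seedY[y * width + x] hold the coordinates of the
    // nearest seed found so far for pixel (x, y), or -1 if none has reached it yet.
    public static void Pass (int[] seedX, int[] seedY, int width, int height, int step) {
        int[] newX = (int[])seedX.Clone();
        int[] newY = (int[])seedY.Clone();
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int p = y * width + x;
                long best = long.MaxValue;
                if (newX[p] >= 0) {
                    long cx = x - newX[p], cy = y - newY[p];
                    best = cx * cx + cy * cy;
                }
                // look at the eight neighbours (and self) at offset multiples of step
                for (int oy = -1; oy <= 1; oy++) {
                    for (int ox = -1; ox <= 1; ox++) {
                        int nx = x + ox * step, ny = y + oy * step;
                        if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                        int q = ny * width + nx;
                        if (seedX[q] < 0) continue; // neighbour carries no seed yet
                        long dx = x - seedX[q], dy = y - seedY[q];
                        long d = dx * dx + dy * dy;
                        if (d < best) { best = d; newX[p] = seedX[q]; newY[p] = seedY[q]; }
                    }
                }
            }
        }
        System.Array.Copy(newX, seedX, seedX.Length);
        System.Array.Copy(newY, seedY, seedY.Length);
    }
}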

You can resize the output distance field to a smaller size, but it does not give you quite the right result. Some detail is lost. Instead, I would suggest resizing the input image before making the distance field. Make sure to use plain bilinear resampling, no fancy "bicubic" or more advanced sharpening stuff, and for best results, resize the input image only in integer steps, like 2x smaller. Avoid small changes in size or weird scaling factors, as that will mess up the anti-aliasing to the point where my distance transform falls flat on its face and gives downright silly results (wobbly, distorted edges, just plain wrong distances).
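
As a sketch of what I mean (my own illustration, not code from the C program): halving the input by plain 2x2 averaging keeps the area-sampled interpretation of the edge coverage intact, which is what the transform depends on.

public static class InputResize {

    // img is a row-major coverage image with values in [0, 1].
    // Returns a half-size image where each output pixel is the average of a 2x2 block.
    public static float[] Halve (float[] img, int width, int height, out int outW, out int outH) {
        outW = width / 2;
        outH = height / 2;
        float[] result = new float[outW * outH];
        for (int y = 0; y < outH; y++) {
            for (int x = 0; x < outW; x++) {
                result[y * outW + x] = 0.25f * (
                    img[(2 * y) * width + 2 * x] + img[(2 * y) * width + 2 * x + 1] +
                    img[(2 * y + 1) * width + 2 * x] + img[(2 * y + 1) * width + 2 * x + 1]);
            }
        }
        return result;
    }
}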

Let me know if I can be of any further assistance. I would like to see this being used more.

Re: C version only by stegu, 19 Jul 2014 10:33

I got this working … it's WAY faster than my brute force python dfGen.

So my working knowledge (from game dev) is that I could take, say, a 2048x2048… and generate a distance field at 512x512 (or lower).

With this C version, I seem to get the same sized df out as the input.

Is it appropriate to re-size it myself, or does that screw up the purpose and math?

Or, can someone help point me in the right direction for changing the output size of the df in this code snippet?

Re: C version only by hogjonny, 17 Jul 2014 23:59

Thanks for your interest!

1) Yes, it is possible, and I have a GPU accelerated demo here:
http://webstaff.itn.liu.se/~stegu/JFA/
You want the version that says "validation" at the end. The other ones may not be AA at all; I'm not sure, and I don't quite remember, because that is old code that I have not touched for years. It does not use anything above GLSL 1.2: no integer indexing, no integer texture formats and no multiple render targets, and it is written against an old version of OpenGL (2.1) to be compatible with the version of Mac OS X that was current back then. It still works, though, and it's fast. However, I never got around to implementing the more accurate distance measures. This demo is equivalent to "edtaafunc", without any directional dependency for the distance: it's just a linear function of the pixel value, according to equation (1) in the article.
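If I remember the numbering correctly, that linear function is just the coverage-to-distance mapping used for edge pixels, with no gradient correction at all; roughly:

    // Distance from the pixel centre to the edge, estimated from the coverage
    // value a in [0, 1] alone, with no directional correction. edtaa3func
    // refines this estimate using the local gradient direction.
    static float EdgeDistanceLinear (float a) {
        return 0.5f - a;
    }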

2) Perhaps I can make this more clear: the (u,v) is the position of the "hit point" measured from the center of the pixel. The "hit point" (black circle) is the position where a line from the external point, running orthogonal to the edge, would hit the edge if the edge were at that exact distance, with the edge direction estimated from local derivatives. If that point is not within the pixel in question, the measure is not entirely reasonable, but if it is within the pixel, the "true" distance is likely to be a more accurate estimate and is used instead.
Note that the "improvement" from the "edtaa3func" to the "edtaa4func" versions is somewhat questionable. The extra calculations improve the average accuracy, but there are corner cases at isolated pixels where they do more damage than good. You need to test it to see if it works for the kind of input images you have.

3) The "other" hint to edge direction is the direction to the closest edge pixel, which is available for pixels that are not on the edge itself. I am sorry for any confusion here. What I meant to say was that the direction to the edge pixel is a better estimate of the edge direction if the edge is far away, less accurate if the edge is close, and not applicable if the pixel is on the edge. For pixels on the edge, the derivatives is all you have. For distant pixels, the vector to the edge is a better estimate. For pixels at a distance of a few pixels or less from the edge, the derivatives may be of use to improve the estimate, but not always. I have no hard decision rule here, I can just say "it depends".

Please let me know if you have any further questions. Note, however, that I am currently on vacation and may take more time than usual to respond to e-mail.

Dear Prof. Gustavson,
First, I tried jump flooding, which gives results with no AA.
Second, I have read the AA EDT paper in Pattern Recognition Letters.
I ran some tests on the produced DT and it gives nice AA effects, with limitless resolution.
My questions:
1) Is it possible to implement it with jump flooding?
2) Can you please further explain the u,v you are referring to in that paper, i.e. equation (6) and its illustration in Fig. 4,
i.e. "Therefore, the local gradients were used to improve the result only at the edge and very near it, where most of the larger absolute errors were located."
and, in Fig. 4, "the hit point" (black circle)?
3) "In our experiments, using the local gradient always improved the accuracy for edge pixels, where no other edge direction information was available."
Can you further explain this sentence, i.e. "other edge direction information"?

Thank you,
Zebratov

Re: License
stegu 19 Dec 2012 12:20
in discussion Support forum / Questions » License

As a useful compromise between openness and attribution, I have now changed the license to MIT.
Why not public domain? Well, I am somewhat reluctant to release significant amounts of non-trivial code as public domain. I like openness a lot, but I have had some bad experiences in the past where people have taken my code and put it into a product without giving me as much as a word of thanks. I have no illusions of earning any money from this, but as a researcher, I am very much into getting and giving credit where credit is due.
The algorithm is described in detail in the scientific article, and it should not be a huge effort to re-implement it from the description and use it for any purpose if the MIT license doesn't cut it for you. Of course, if you have a specific use in mind, I would be happy to negotiate an individual license for your particular needs. (Don't worry, I'm cheap. I just want some level of control over where my code is used commercially.)

Re: License by stegu, 19 Dec 2012 12:20
License
hronet 19 Dec 2012 11:04
in discussion Support forum / Questions » License

Hi

I was wondering about the use of the GNU GPL - it essentially makes it impossible to use the software in any commercial product. Would it be meaningful to change it to either public domain, the MIT license, or the zlib license?
(in descending order of usefulness)

Thanks!

License by hronet, 19 Dec 2012 11:04

Glad you appreciate it!

I have made a Unity package for it. It contains the above code, an editor class that provides an editor window for it, and the GNU GPL. You can find it, along with documentation, over at catlikecoding.com/unity/products/distance-map-generator/

Re: Unity C# version by Jasper Flick, 15 Feb 2012 12:56

Thank you very much! It's nice to see my code being used and tweaked, and to see contour rendering catching on!

Re: Unity C# version by stegu, 13 Feb 2012 08:11

Hi!

I'm working on a text tool for the Unity3D game engine and I included a C# version of the EDTAA algorithm to create nice distance maps from font atlases. The Generate method takes the alpha channel of a source texture and generates the distances from that, either outside, inside, or both. Beyond that, it's conceptually the same code as edtaa3 and the post process part, although I approached a few things differently.

Though it's Unity3D specific, it shouldn't be too hard to integrate the C# code into a non-Unity project.

using UnityEngine;

/// <summary>
/// Utility class for generating distance maps from anti-aliased alpha maps.
/// </summary>
public static class CCDistanceMapGenerator {

    /// <summary>
    /// How to fill the RGB channels of the generated distance map
    /// </summary>
    public enum RGBMode {
        /// <summary>
        /// Set the RGB channels to 1.
        /// </summary>
        White,
        /// <summary>
        /// Set the RGB channels to 0.
        /// </summary>
        Black,
        /// <summary>
        /// Set the RGB channels to the computed distance.
        /// </summary>
        Distance,
        /// <summary>
        /// Copy the source texture's RGB channels.
        /// </summary>
        Source
    }

    private class Pixel {
        public float alpha, distance;
        public Vector2 gradient;
        public int dX, dY;
    }

    private static int width, height;
    private static Pixel[,] pixels;

    /// <summary>
    /// Generates a distance texture from the alpha channel of a source texture.
    /// </summary>
    /// <param name="source">
    /// The source texture. Alpha values of 1 are considered inside, values of 0 are considered outside, and any other values are considered
    /// to be on the edge. Make sure the texture is readable and not compressed.
    /// </param>
    /// <param name="destination">
    /// The destination texture. Must be the same size as the source texture.
    /// </param>
    /// <param name="maxInside">
    /// The maximum pixel distance measured inside the boundary, resulting in an alpha value of 1.
    /// If set to zero, everything inside will have an alpha value of 1.
    /// </param>
    /// <param name="maxOutside">
    /// The maximum pixel distance measured outside the boundary, resulting in an alpha value of 0.
    /// If set to zero, everything outside will have an alpha value of 0.
    /// </param>
    /// <param name="postProcessDistance">
    /// Pixel distance from the boundary which will be post-processed using the boundary gradient.
    /// </param>
    /// <param name="rgbMode">
    /// How to fill the destination texture's RGB channels.
    /// </param>
    public static void Generate (Texture2D source, Texture2D destination, float maxInside, float maxOutside, float postProcessDistance,
                                 RGBMode rgbMode) {
        if(source.height != destination.height || source.width != destination.width){
            Debug.LogError("Source and destination textures must be the same size.");
            return;
        }
        try{
            source.GetPixel(0, 0);
        }
        catch{
            Debug.LogError("Source texture is not read/write enabled.");
            return;
        }

        width = source.width;
        height = source.height;
        pixels = new Pixel[width, height];
        int x, y;
        float scale;
        Color c = rgbMode == RGBMode.White ? Color.white : Color.black;
        for(y = 0; y < height; y++){
            for(x = 0; x < width; x++){
                pixels[x, y] = new Pixel();
            }
        }
        if(maxInside > 0f){
            for(y = 0; y < height; y++){
                for(x = 0; x < width; x++){
                    pixels[x, y].alpha = 1f - source.GetPixel(x, y).a;
                }
            }
            ComputeEdgeGradients();
            GenerateDistanceTransform();
            if(postProcessDistance > 0f){
                PostProcess(postProcessDistance);
            }
            scale = 1f / maxInside;
            for(y = 0; y < height; y++){
                for(x = 0; x < width; x++){
                    c.a = Mathf.Clamp01(pixels[x, y].distance * scale);
                    destination.SetPixel(x, y, c);
                }
            }
        }
        if(maxOutside > 0f){
            for(y = 0; y < height; y++){
                for(x = 0; x < width; x++){
                    pixels[x, y].alpha = source.GetPixel(x, y).a;
                }
            }
            ComputeEdgeGradients();
            GenerateDistanceTransform();
            if(postProcessDistance > 0f){
                PostProcess(postProcessDistance);
            }
            scale = 1f / maxOutside;
            if(maxInside > 0f){
                for(y = 0; y < height; y++){
                    for(x = 0; x < width; x++){
                        c.a = 0.5f + (destination.GetPixel(x, y).a - Mathf.Clamp01(pixels[x, y].distance * scale)) * 0.5f;
                        destination.SetPixel(x, y, c);
                    }
                }
            }
            else{
                for(y = 0; y < height; y++){
                    for(x = 0; x < width; x++){
                        c.a = Mathf.Clamp01(1f - pixels[x, y].distance * scale);
                        destination.SetPixel(x, y, c);
                    }
                }
            }
        }

        if(rgbMode == RGBMode.Distance){
            for(y = 0; y < height; y++){
                for(x = 0; x < width; x++){
                    c = destination.GetPixel(x, y);
                    c.r = c.a;
                    c.g = c.a;
                    c.b = c.a;
                    destination.SetPixel(x, y, c);
                }
            }
        }
        else if(rgbMode == RGBMode.Source){
            for(y = 0; y < height; y++){
                for(x = 0; x < width; x++){
                    c = source.GetPixel(x, y);
                    c.a = destination.GetPixel(x, y).a;
                    destination.SetPixel(x, y, c);
                }
            }
        }
        pixels = null;
    }

    private static void ComputeEdgeGradients () {
        float sqrt2 = Mathf.Sqrt(2f);
        for(int y = 1; y < height - 1; y++){ 
            for(int x = 1; x < width - 1; x++){
                Pixel p = pixels[x, y];
                if(p.alpha > 0f && p.alpha < 1f){
                    // estimate gradient of edge pixel using surrounding pixels
                    float g =
                        - pixels[x - 1, y - 1].alpha
                        - pixels[x - 1, y + 1].alpha
                        + pixels[x + 1, y - 1].alpha
                        + pixels[x + 1, y + 1].alpha;
                    p.gradient.x = g + (pixels[x + 1, y].alpha - pixels[x - 1, y].alpha) * sqrt2;
                    p.gradient.y = g + (pixels[x, y + 1].alpha - pixels[x, y - 1].alpha) * sqrt2;
                    p.gradient.Normalize();
                }
            }
        }
    }

    private static float ApproximateEdgeDelta (float gx, float gy, float a) {
        // (gx, gy) can be either the local pixel gradient or the direction to the pixel

        if(gx == 0f || gy == 0f){
            // linear function is correct if both gx and gy are zero
            // and still fair if only one of them is zero
            return 0.5f - a;
        }

        // normalize (gx, gy)
        float length = Mathf.Sqrt(gx * gx + gy * gy);
        gx = gx / length;
        gy = gy / length;

        // reduce symmetrical equation to first octant only
        // gx >= 0, gy >= 0, gx >= gy
        gx = Mathf.Abs(gx);
        gy = Mathf.Abs(gy);
        if(gx < gy){
            float temp = gx;
            gx = gy;
            gy = temp;
        }

        // compute delta
        float a1 = 0.5f * gy / gx;
        if(a < a1){
            // 0 <= a < a1
            return 0.5f * (gx + gy) - Mathf.Sqrt(2f * gx * gy * a);
        }
        if(a < (1f - a1)){
            // a1 <= a <= 1 - a1
            return (0.5f - a) * gx;
        }
        // 1-a1 < a <= 1
        return -0.5f * (gx + gy) + Mathf.Sqrt(2f * gx * gy * (1f - a));
    }

    private static void UpdateDistance (Pixel p, int x, int y, int oX, int oY) {
        Pixel neighbor = pixels[x + oX, y + oY];
        Pixel closest = pixels[x + oX - neighbor.dX, y + oY - neighbor.dY];

        if(closest.alpha == 0f || closest == p){
            // neighbor has no closest yet
            // or neighbor's closest is p itself
            return;
        }

        int dX = neighbor.dX - oX;
        int dY = neighbor.dY - oY;
        float distance = Mathf.Sqrt(dX * dX + dY * dY) + ApproximateEdgeDelta(dX, dY, closest.alpha);
        if(distance < p.distance){
            p.distance = distance;
            p.dX = dX;
            p.dY = dY;
        }
    }

    private static void GenerateDistanceTransform () {
        // perform anti-aliased Euclidean distance transform
        int x, y;
        Pixel p;

        // initialize distances
        for(y = 0; y < height; y++){ 
            for(x = 0; x < width; x++){
                p = pixels[x, y];
                p.dX = 0;
                p.dY = 0;
                if(p.alpha <= 0f){
                    // outside
                    p.distance = 1000000f;
                }
                else if (p.alpha < 1f){
                    // on the edge
                    p.distance = ApproximateEdgeDelta(p.gradient.x, p.gradient.y, p.alpha);
                }
                else{
                    // inside
                    p.distance = 0f;
                }
            }
        }
        // perform 8SSED (eight-points signed sequential Euclidean distance transform)
        // scan up
        for(y = 1; y < height; y++){
            // |P.
            // |XX
            p = pixels[0, y];
            if(p.distance > 0f){
                UpdateDistance(p, 0, y, 0, -1);
                UpdateDistance(p, 0, y, 1, -1);
            }
            // -->
            // XP.
            // XXX
            for(x = 1; x < width - 1; x++){
                p = pixels[x, y];
                if(p.distance > 0f){
                    UpdateDistance(p, x, y, -1, 0);
                    UpdateDistance(p, x, y, -1, -1);
                    UpdateDistance(p, x, y, 0, -1);
                    UpdateDistance(p, x, y, 1, -1);
                }
            }
            // XP|
            // XX|
            p = pixels[width - 1, y];
            if(p.distance > 0f){
                UpdateDistance(p, width - 1, y, -1, 0);
                UpdateDistance(p, width - 1, y, -1, -1);
                UpdateDistance(p, width - 1, y, 0, -1);
            }
            // <--
            // .PX
            for(x = width - 2; x >= 0; x--){
                p = pixels[x, y];
                if(p.distance > 0f){
                    UpdateDistance(p, x, y, 1, 0);
                }
            }            
        }
        // scan down
        for(y = height - 2; y >= 0; y--){
            // XX|
            // .P|
            p = pixels[width - 1, y];
            if(p.distance > 0f){
                UpdateDistance(p, width - 1, y, 0, 1);
                UpdateDistance(p, width - 1, y, -1, 1);
            }
            // <--
            // XXX
            // .PX
            for(x = width - 2; x > 0; x--){
                p = pixels[x, y];
                if(p.distance > 0f){
                    UpdateDistance(p, x, y, 1, 0);
                    UpdateDistance(p, x, y, 1, 1);
                    UpdateDistance(p, x, y, 0, 1);
                    UpdateDistance(p, x, y, -1, 1);
                }
            }
            // |XX
            // |PX
            p = pixels[0, y];
            if(p.distance > 0f){
                UpdateDistance(p, 0, y, 1, 0);
                UpdateDistance(p, 0, y, 1, 1);
                UpdateDistance(p, 0, y, 0, 1);
            }
            // -->
            // XP.
            for(x = 1; x < width; x++){
                p = pixels[x, y];
                if(p.distance > 0f){
                    UpdateDistance(p, x, y, -1, 0);
                }
            }
        }
    }

    private static void PostProcess (float maxDistance) {
        // adjust distances near edges based on the local edge gradient
        for(int y = 0; y < height; y++){
            for(int x = 0; x < width; x++){
                Pixel p = pixels[x, y];
                if((p.dX == 0 && p.dY == 0) || p.distance >= maxDistance){
                    // ignore edge, inside, and beyond max distance
                    continue;
                }
                float
                    dX = p.dX,
                    dY = p.dY;
                Pixel closest = pixels[x - p.dX, y - p.dY];
                Vector2 g = closest.gradient;

                if(g.x == 0f && g.y == 0f){
                    // ignore unknown gradients (inside)
                    continue;
                }

                // compute hit point offset on gradient inside pixel
                float df = ApproximateEdgeDelta(g.x, g.y, closest.alpha);
                float t = dY * g.x - dX * g.y;
                float u = -df * g.x + t * g.y;
                float v = -df * g.y - t * g.x;

                // use hit point to compute distance
                if(Mathf.Abs(u) <= 0.5f && Mathf.Abs(v) <= 0.5f){
                    p.distance = Mathf.Sqrt((dX + u) * (dX + u) + (dY + v) * (dY + v));
                }
            }
        }
    }

}
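
For reference, a minimal usage sketch (the texture variables here are hypothetical placeholders; the source must be readable and uncompressed, and the destination must be the same size):

using UnityEngine;

public static class DistanceMapExample {

    // Hypothetical example: build a distance map with distances up to 4 pixels
    // inside and 4 pixels outside the contour mapped into the alpha channel,
    // post-processing within 3 pixels of the boundary, and copying the source
    // RGB channels across.
    public static void Build (Texture2D source, Texture2D destination) {
        CCDistanceMapGenerator.Generate(source, destination, 4f, 4f, 3f, CCDistanceMapGenerator.RGBMode.Source);
        destination.Apply();
    }
}

Note that Generate writes with SetPixel but never calls Apply, so the caller should apply the destination texture afterwards.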
Unity C# version by Jasper Flick, 10 Feb 2012 17:15

In a different thread in this forum you will find a C port of the texture creation program. It takes a grayscale TGA file as input. Please have a look at that. If that is not what you want, I must ask you to explain your problem in more detail. I still do not understand what you are trying to do.

Re: help by stegu, 26 Nov 2011 22:10

Good evening,
First of all, I want to thank you very much for having helped me.
I know I should not complain, but I do not know what to do; I am stuck in this area: I search and search, I read and reread, without any result. I am coming back to you all after a long time to tell you that I have gotten nowhere. I tried to run your program (texture_creation), and I tried to convert it to C, but I never managed to.
The problem is that I do not have a strategy to follow, and I would be grateful if you could guide me.
Thank you once again.

help by htemp, 26 Nov 2011 21:44

Thank you.
Now I am trying to implement your algorithm to generate the input image, as well as Green's, to make a comparison between the two. And in the future, I will look for my own method.

by htemp, 06 Oct 2011 09:28

I can read French. I am not good at writing it, but I can understand it well enough to work my way through shorter texts. An automatic translation also makes a lot more sense if you include your French original, so feel free to write me using either language, or both!

Thank you very much for your answers, which were right on target.

I have some difficulty forming sentences because I write in French and then use a translator to turn them into English; that is why you cannot follow me … Even so, I have drawn a lot of information from your answers.
Now I am trying your method, programmed to generate the input image, and doing a little comparison with Green's. And then I will look for another method that suits me.

The bugs I saw were not with the code. Rather, it appears that the TGA files I used for testing have been corrupted. Two have noise in some scanlines (shape2 and shape3), and one refuses to parse (shape4). The code appears to work on ATI and Nvidia hardware alike. It's not fast on weak GPUs, but it's probably still faster than doing it in software.

Re: C version only by stegu, 01 Oct 2011 11:19