Friday 11 September 2015

Benchmarking a fractal algorithm

An algorithm is only "promising" until you benchmark it against other similar state-of-the-art algorithms aimed at the same kind of problems.

Benchmarking a general AI algorithm is not easy, since it is supposed to generate "intelligent behaviour", which is not clearly defined. But I also develop some other fractal algorithms aimed at finding the global minimum of a real function, that is, "optimising" functions, and in that case the algorithm is easily comparable with others, so here we go.



In the area of function optimising I have developed three different algorithms so far, none of them being something I would consider finished.

The first one, code-named just "FractalMax1", was just a proof of concept, my first "doing something" fractal. It needed some ugly parameters to be defined by hand for each function being studied before it could do anything, so it was not really useful.
 
The second one, code-named "Fractal 1x1", is a very early design aimed specifically at the global optimisation of 2D functions, though it is also capable of handling functions with an arbitrary number of parameters.

The algorithm was very naive and it didn't even need the function to be continuous, but it had the nice property of being totally parallelizable and, at first sight, also very stable (it was very difficult to fool it with local minima; it always found the global one in a short time). I was really surprised that such a simple approach did anything at all, but it did.

Before going into the numbers, you can watch a couple of videos of the algorithm in this previous post about optimisation.

So my friend Guillem proposed to perform some standard benchmarks for me using his Python skills, testing it against the most widely used algorithms that do not require the function to be continuous: "Basin hopping" and "Simulated annealing".
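
The actual benchmark harness was Guillem's, but just to give an idea of what a single run of one of these reference algorithms looks like, here is a minimal sketch assuming scipy is the implementation used (the test function, bounds and iteration count are illustrative, not the ones from the benchmark):

    import numpy as np
    from scipy.optimize import basinhopping

    def ackley(x):
        # Ackley function: a classic multimodal benchmark, global minimum 0 at the origin.
        x = np.asarray(x)
        n = len(x)
        return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
                + 20.0 + np.e)

    x0 = np.random.uniform(-5.0, 5.0, size=2)     # random initial position in 2D
    result = basinhopping(ackley, x0, niter=200)  # one Basin Hopping baseline run
    print(result.x, result.fun)                   # best point and value found

Simulated annealing follows the same pattern with whichever implementation your scipy version ships.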

As test functions we chose a number of them from different sources, especially from this page.
For each algorithm and each function, 100 tests were performed starting from randomly chosen initial positions, and then the errors (minimum found minus known global minimum) were fitted to a Gaussian (normal) distribution. The mean and standard deviation were then used to score each algorithm.
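
To make the scoring concrete, here is a minimal sketch of that protocol (not the actual benchmark code): fitting a Gaussian to the errors reduces to taking their mean and standard deviation, and the optimise_once callable is a hypothetical placeholder for whichever algorithm is being scored.

    import numpy as np

    def rastrigin(x):
        # Rastrigin test function: global minimum 0 at the origin.
        x = np.asarray(x)
        return 10.0 * len(x) + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

    def score(optimise_once, func, known_minimum, low, high, runs=100, dim=2):
        # Run the optimiser `runs` times from random starting positions and
        # return the mean and standard deviation of the errors.
        errors = []
        for _ in range(runs):
            x0 = np.random.uniform(low, high, size=dim)  # random initial position
            found = optimise_once(func, x0)              # best value found in this run
            errors.append(found - known_minimum)         # error vs. known global minimum
        errors = np.asarray(errors)
        return errors.mean(), errors.std()

    # Example: scoring the Basin Hopping baseline on Rastrigin.
    # from scipy.optimize import basinhopping
    # mean, std = score(lambda f, x0: basinhopping(f, x0, niter=200).fun,
    #                   rastrigin, known_minimum=0.0, low=-5.12, high=5.12)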

You can check the complete benchmark here: Fractal 1x1 Benchmark. For the lazy ones, like me, here are the plots of the numbers found (the lower the better). Please keep in mind the y-scale is logarithmic, so "objects in the mirror are not as close as they appear"!



As you can see, the fractal one is consistently better (more reliable) than the other two; in particular, the standard deviation is kept under control every time, meaning all 100 tests find the global minimum. The more classical ones sometimes find it and sometimes don't, so the mean and especially the standard deviation grow by about 7 orders of magnitude (on average) for Basin Hopping and 4 for Simulated Annealing.

Fractal 1x1 is then an "all terrain" algorithm, while Basin Hopping or Simulated Annealing are very good on some functions only, so if you only perform one test, you can hardly be sure the solution found is even near the real global optimum.

The drawback of this "Fractal 1x1" algorithm is that it doesn't scale nicely when you raise the number of parameters of the function. For 2D functions it works like a charm, 3D is also near perfect, but 12 parameters doesn't feel that nice, and I would not even try it against a 2000-parameter function, for instance.
