Thursday 18 June 2015

Using Feynman Integrals

Some posts ago I commented on the strange need to use a "Pauli exclusion principle" in the Fractal AI in order to make it work as expected.

It may sound strange that quantum physics plays a role in building a fractal AI, but it is actually the real basis for such an algorithm.

Today I want to comment on my very latest idea: using real Feynman integrals in the Fractal AI.

First try to draw the fractal

The Fractal AI I have been working on is finished; I can't make it more intelligent. It is in fact very intelligent: it could escape from a cloud of 50 falling asteroids in the last post (I repeated the test today with 200 futures; I will add it at the end).

But I felt it was not really complete. There were 3 very dark spots in the algorithm and the idea behind it:

1) I was still using one heuristic to accumulate the final decision. It was needed, but it was ugly, and not friendly when converting the algorithm into a quantum one. Basically, when a future reached the desired time, I needed to "remember" how each future started at t=0 to be able to accumulate my decision.

2) The effect on the fractal shape of a clone vs. a collapse was not totally symmetric. Cloning was more powerful than collapsing, so the cloud of futures always tended to expand. The asymmetry was not that ugly; it basically means that creating is more optimal than destroying for any long-term goal you may have.

3) It was not symmetric in time. Futures only walk towards the future; they don't walk back in time. It may sound strange to call this an "asymmetry", but it is one, and a big one.

Then I found a nice solution: if futures could travel back in time, then asymmetry (2) would swap and cancel, as the futures going back compensate the others. The ones coming back in time would tend to cluster instead of expanding, and so on.

This solution also eliminated heuristic (1), as the futures would "come back" and tell me where to go or what to decide. I just needed to account for negative dt in the fractal formulas. And it was easy! The idea fitted the formulas perfectly; I just needed to multiply by dt in them instead of assuming it was one.
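
As a hedged sketch of what "multiplying by dt" buys here (the update rule and names below are my own illustrative assumptions, not the actual code): once every displacement is scaled by dt, the same stepping formula moves a future forward with dt > 0 or back in time with dt < 0.

```python
import random

def step(position, dt, noise=1.0):
    """Advance a virtual future one tick; dt may be negative."""
    velocity = random.uniform(-noise, noise)  # random exploration term
    return position + velocity * dt           # "* dt" instead of assuming dt == 1

x = 0.0
for _ in range(10):
    x = step(x, dt=+1.0)   # imagining forward in time
for _ in range(10):
    x = step(x, dt=-1.0)   # the very same formula, now walking back
```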

Here you have a simplified drawing of the idea:

First try to draw the fractal

On the left-most side you see your actual position, where the system is now. The left side of the flame represents the futures you are imagining toward the future, as in the current incarnation of the Fractal AI. As you see, it expands and creates flames like a real fire. It does it exactly this way.

When time reaches the time horizon "Tau", in the middle of the drawing, a gray image is created. It represents the most probable futures you will have, given your actual position, after a time Tau. It corresponds to the "probability cloud" that is sometimes used instead of the wave equation, which actually corresponds to the whole flame.

On the right side, time is inverted, so futures travel back to the present. Well, not exactly to the present; instead, they travel to present+dt.

On their travel back to the present, the fractal laws of growth used in the first half are inverted, so, for instance, the exclusion principle is inverted and clusters of futures are created, as dense as possible.

This makes the falling futures shrink and compact into smaller and smaller areas as they reach present+dt. They could still end up forming two of those "hurricanes", so the particle could travel one way or the other. You have a particle with two superimposed states.

Also, potential differences, which in the first half made futures clone, now repel. Futures coming back don't want to fall down spending the potential energy they accumulated on the way up to the future; they want to retain as much as possible. So inverting this growth law makes returning futures tend toward the path of "minimum action", the one that costs the least energy to travel, or the shortest path in the metric it defines over space, if you prefer the relativity way.

I am quite sure this new way of using the fractal growth would outperform my current Fractal AI (the one that doesn't come back from the future) by some orders of magnitude, but this is not a real Feynman integral; it is not the way it should be.

There is still one "human simplification", one that should not be there: futures bounce back when they arrive at time Tau, magically, using a hacker's trick: I just invert dt for them in code. I feel dirty!

There was still a missing part: collapsing a future should not mean what I was doing (sending it to a random neighbouring position); instead, I just needed to make "collapsing" mean: invert the dt direction.
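
A minimal sketch of this redefinition (the class and names are my own illustration, assuming each future carries its own dt): collapsing no longer teleports the future to a neighbour, it just flips the sign of its dt, and the unchanged growth formulas then carry it back toward the present.

```python
class Future:
    """A virtual future that carries its own time-step direction."""
    def __init__(self, position, dt=1.0):
        self.position = position
        self.dt = dt

    def collapse(self):
        # The whole "collapse" is now just a time reversal:
        self.dt = -self.dt

f = Future(position=3.0)
f.collapse()
# f.dt is now -1.0: the future starts falling back toward the present,
# while f.position is untouched (no jump to a random neighbour).
```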

With this addition, everything seems perfect to me. There are no more heuristics at all, no more asymmetries, and no more human coefficients: "seconds to think" is now obsolete! Given a problem, and given a number of futures to use (or how powerful your CPU is), the fractal will reach as far into the future as it is safe and profitable to go.

BTW, the resulting algorithm can be converted into a quantum algorithm, and, in this case, fractals disappear from the code... well, ALL the code would disappear!

I have not coded this fractal so far, so I can only show you a more accurate drawing of what the real fractal would look like.

Second version of the fractal

This strange drawing represents the whole idea of using Feynman integrals formed of fractals.

It starts at the lower-left black particle. It is your state at the present time t. From this point, hundreds of virtual particles like it randomly travel into the future, governed by the fractal growth rules, so the fractal they form evolves up into the future, forming flames and whirlpools. Where this fractal gets very dense, futures collapse here and there and start to fall down to the present, traveling back in time.

On this way down, they attract each other, forming narrow cones, and travel following the minimum-action path, the easiest path in other words, again just by applying the inverted fractal growth laws.

Eventually, some futures reach your present. You just need to follow them. After a future falls to the present, you take a small step towards it, pick it up, and send it flying again.

This forms what you see in the drawing: a cloud of futures evolves over your present, following growth laws that mimic the possible paths it could take, and forms an evolving fractal. The portions of this fractal cloud that separate and fall down to the present are the answers to "where should I move next?". The particle will move toward those falling futures, the intelligence will decide to go where those futures fell, and so on.
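
The whole loop can be caricatured in a few lines (a toy 1-D version under my own assumptions; the real growth and fall-back rules are the fractal laws described above, not these placeholders):

```python
import random

def grow(futures):
    """Forward half: futures spread out, standing in for cloning/expansion."""
    return [x + random.uniform(-1.0, 1.0) for x in futures]

def fall_back(futures):
    """Backward half: returning futures contract toward their dense center."""
    center = sum(futures) / len(futures)
    return [x - 0.5 * (x - center) for x in futures]

present = 0.0
futures = [present] * 50
for tick in range(20):
    futures = grow(futures)        # imagine forward, up to the horizon
    futures = fall_back(futures)   # collapse and fall back toward present+dt
    landed = sum(futures) / len(futures)
    present += 0.1 * (landed - present)  # small step toward the fallen futures
```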

This fractal I have described here, when translated to particles with charge (QED), looks like a Feynman integral over fractal paths, with Feynman diagrams at the cloning and collapsing points of the fractal. I call it the "Feynman Fractal", as I think I discovered it first, but Feynman had no fractals to label it for us.

A lot of interesting things go unnoticed under this fractal at first: particles that travel in space but not in time (as dt can be zero), particles with no charge at all formed from charged particles, and particles that produce exclusion forces but don't interact with any other particle. Quite similar to dark matter and dark energy!

I plan to have a working QED fractal simulator using this new approach in one or two weeks, and if it works out as expected, it should be the nicest screen saver ever!

Adapting it to the Fractal AI will need a little more time, as this fractal is not continuously generated; it collapses back at every frame, and I need it to be stable in time.

This is surely one of the most surrealistic posts on the blog, maybe quite a long shot to comment on here without showing any facts, but I feel the whole idea is so perfect and self-consistent that it deserved a comment.

And here is the new video with the "half done" Fractal AI vs. 50 asteroids, now with 200 futures:


Looks terribly easy from this distance!

4 comments:

  1. Hello, I stumbled on your blog yesterday and I was quite fascinated by it. I had heard about the A.I. effects the entropy-increase principle could achieve and I've always been quite eager to program it myself. However, I had no idea how to approach it, and together with a lack of time I had to drop it pretty quickly.

    Your blog gave me the inspiration to start on it again. I'm not familiar with Delphi, so I decided to give it a go in Matlab. I've tried to recreate the principle you showed in your first video, and after a day or so of programming these were the results:

    http://imgur.com/a/JjKsp#18

    Thanks again, also for the great explanations you give. Seeing that this summer I have more time on my hands I might get into it and see where it brings me.

    Kind regards, Alexander

    1. Hi Alexander, it was great to see your images; it is the first time I see this algorithm outside of my own hands, and it was quite refreshing!

      Any questions you may have, don't hesitate to ask here.

      As I understand from the pictures, you are using raced distance as a weight for the futures; it is the best and easiest way to make a good pilot out of it.

      As you know, I moved into using fractal -not linear- paths, but before that I walked the "linear road" all the way down, so there are some tips I can offer you at your current stage.

      If you noticed, this AI has a small problem when it has two almost identically nice ways to go, as it will wait too long to take a decision. You can see it in your images, when the green lines come too near to the black dot, and only then it reacts and takes one side. It is better if it decides sooner.

      The way I did it was like this: you have 2 or 4 options (depending on whether the accelerator is controlled by the AI or fixed), and you evaluate 2 or 4 weights, one for each, by counting the distinct futures and then adding their raced distance. So far so good.

      The tip is: before normalizing those weights so they sum to one, find the lowest one and subtract it from all the weights, so the minimum one is now zero. This will make the AI decide sooner, but also move more "nervously"; you can find a sweet spot by subtracting only one half of the minimum, so you have an averaged behaviour.
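
The tip above could look like this in code (a sketch in Python rather than Matlab; `fraction=1.0` is the full subtraction, `0.5` the averaged behaviour):

```python
def sharpen(weights, fraction=0.5):
    """Subtract `fraction` of the minimum weight, then normalize to sum 1."""
    m = min(weights)
    shifted = [w - fraction * m for w in weights]
    total = sum(shifted)
    return [w / total for w in shifted]

print(sharpen([10.0, 11.0], fraction=1.0))  # near-tie breaks fully: [0.0, 1.0]
print(sharpen([10.0, 11.0], fraction=0.5))  # softer, averaged behaviour
```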

      Update: I first replied with my wife's account (now deleted).

    2. Actually, the very first pictures did not even include scoring based on raced length. It was still working purely on unique solutions, which worked pretty well; I was quite amazed about that. Of course it's still pretty bad as a 'smart' A.I., but still quite astonishing.

      Today I've added weighting of the solutions by raced length. I did the same for the speed, which is now also controlled by the algorithm. After a lot of debugging it finally worked, but there's still something that bothers me. The speed is being changed automatically, but it's always more or less around the average of its max and min limits. I had expected more drastic results. Did you have the same experience?

      To clarify the method I used: there are the 4 options, 1. L-fast, 2. L-slow, 3. R-fast, 4. R-slow. Each gives a number of unique solutions and a sum of the raced distance of all futures. I normalize both, so I get a weighting that adds up to 1 for the options based on (U)niqueness and (D)istance separately. Speed = [max * (W_U1+W_U3+W_D1+W_D3) + min * (W_U2+W_U4+W_D2+W_D4)]/2 and change of angle = delta_angle_max * (W_U1+W_U2+W_D1+W_D2) - delta_angle_max * (W_U3+W_U4+W_D3+W_D4).
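
That mixing can be transcribed directly (a Python sketch of the formulas above; `W_U` and `W_D` are the two normalized four-option weight lists):

```python
def mix_controls(W_U, W_D, v_min, v_max, delta_angle_max):
    """Options are ordered: 0 L-fast, 1 L-slow, 2 R-fast, 3 R-slow."""
    fast = W_U[0] + W_U[2] + W_D[0] + W_D[2]
    slow = W_U[1] + W_U[3] + W_D[1] + W_D[3]
    left = W_U[0] + W_U[1] + W_D[0] + W_D[1]
    right = W_U[2] + W_U[3] + W_D[2] + W_D[3]
    speed = (v_max * fast + v_min * slow) / 2
    d_angle = delta_angle_max * (left - right)
    return speed, d_angle

# With perfectly uniform weights, speed sits at the average of min and max
# and the steering change is zero -- exactly the "hovers around the average"
# behaviour described above:
print(mix_controls([0.25] * 4, [0.25] * 4, v_min=1.0, v_max=5.0,
                   delta_angle_max=10.0))  # (3.0, 0.0)
```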

      Since it was too stable for my taste I manually added a factor which changes the options' speed weight (so 1 and 3 would get a higher value, while the sum still adds up to 1, of course), which worked somewhat and gave me a way to let the AI behave more like a 'racer'. The rest of the day I mostly worked on optimizing the code so it ran quicker. I've also used your tip for normalizing and indeed, it's able to make decisions much quicker now. It's amazing how it's still capable of solving the track even at low values for futures and steps. You can see a few of today's results here:

      http://imgur.com/a/JC3Dg#0

      The color of the line changes according to the speed: redder is slower and greener is faster. What I love the most is how it's capable of braking before making turns and speeding up right after. It's just like a real race driver!

    3. Your problem is twofold.

      First, using a fixed grid size to detect duplicated futures is nice at first, but hard situations demand a smaller grid. It is not the main problem, but I added a heuristic to control it, so the percentage of discarded futures stays in a given interval like 5-10%: below 5% you need to discard more futures, so you enlarge the grid cells, and vice versa. You also need to set a minimum grid size.
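
A sketch of that heuristic (my reading: the "grid size" is the cell size futures are snapped to before duplicate detection, so coarser cells discard more; the names and the 1.5 factor are illustrative assumptions):

```python
def adapt_cell_size(cell, discarded_fraction, min_cell=0.01):
    """Keep the duplicate-discard rate inside the 5-10% interval."""
    if discarded_fraction < 0.05:
        cell *= 1.5   # coarser cells -> more futures collide -> discard more
    elif discarded_fraction > 0.10:
        cell /= 1.5   # finer cells -> fewer duplicates discarded
    return max(cell, min_cell)  # never refine below the minimum grid size
```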

      Second, the amount of change to use for the options is important. If turning uses +/-5 degrees and speed +/-1 m/s, it may be that they are too far out of relative scale. This is why the *3 factor was useful, but it is not the right solution.

      To address it I added a "mass" to each joystick: it starts at 1 (so you move +/-5), but then you add a new degree of freedom to change this 1 by +/-0.1, for instance.

      Now the AI will also move this new stick and make the "scaling" or "mass" of the joystick change. You have auto-sensitivity on the original degree of freedom. Now the relative scaling is fully automatic.

      This last idea is quite an improvement, as the car will ponder small moves on straights and more drastic moves on complicated turns on its own.
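
A sketch of the joystick "mass" (names are my own; the point is that the scale of a move becomes one more coefficient the AI itself drives):

```python
class Joystick:
    def __init__(self, base_step=5.0):
        self.base_step = base_step
        self.mass = 1.0            # starts at 1, so moves are +/- base_step

    def options(self):
        """The AI picks among the original moves AND tweaks to the mass."""
        move = self.base_step * self.mass
        return {"left": -move, "right": +move,   # original degree of freedom
                "finer": -0.1, "coarser": +0.1}  # the new one: scale itself

    def tweak_mass(self, delta):
        self.mass = max(0.1, self.mass + delta)  # keep sensitivity positive
```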

      Basically, the more coefficients the AI controls, the better.

      Best regards
      Sergio.
