Tuesday 1 July 2014

Why do reductive goals exist?

Ten minutes ago I discovered what exactly "reductive" goals represent, or at least I think I did.

As you know (otherwise, read the old posts first), this entropic intelligence needs a simulation of the system to be able to work, but also, if you expect it to do some hard work for you, you need a set of "goals" that represent how much you earn when the system travels from point A to point B.

Those goals, which I have already talked about, can be categorised into "positive" goals, like earning points for the meters you run or the energy you pick up. But we also needed "reductive" goals to make it work properly.

At first, they started as a value from 0 to 1 representing the "health" of the kart, so if I multiply the score (meters raced, for instance) by the health coefficient, I get a value that makes more sense: if you gain 5 of energy but crash and lose all your health, it scores 5 × 0, nothing.
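
Here is a minimal sketch of that scoring rule, assuming hypothetical names like raw_score and health (the real kart code is not shown in this post, so this is only an illustration):

    def weighted_score(raw_score, health):
        # Scale a positive goal (meters raced, energy picked up)
        # by a health coefficient in the 0..1 range.
        assert 0.0 <= health <= 1.0
        return raw_score * health

    # Gaining 5 of energy is worth nothing if the crash leaves no health at all.
    print(weighted_score(5.0, 0.0))  # 0.0
    print(weighted_score(5.0, 0.8))  # 4.0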

This single coefficient, together with an upgrade to the simulation engine so it could properly calculate bounces and the energy of impacts, made it possible for the karts to learn to avoid hard crashes, as those take a lot of health away from you, lowering the scores of all those ugly futures.

So it is clear they work pretty well, but if the positive goals represented "profits" of some kind, then what were those coefficients... the instincts?

Well, it made some sense: somehow your reptilian brain tells you that crashing into a wall is a bad idea even if it gets you 5 of energy. So I adopted this explanation, leaving it excluded from the "entropic" part of the algorithm.

But wait: entropy gain can be anything from zero up to... not infinity, but unbounded, and living beings have a kind of entropy they never want to see grow, at any cost: their internal entropy must be kept as low as possible in order to keep living!

You have a really low internal entropy compared with, for instance, a pile of food of the same weight. Your internal temperature is constant, so this factor doesn't add much entropy, nor can your pH levels change too much, or your cardiac rhythm, or your glucose levels, or any other level you could think of.

Your internal body is a piece of flesh that is really well ordered and controlled. Everything is so well placed that a doctor can open you up and fix it, knowing exactly where everything will be.

So internal entropy is something the AI has to keep really low in the game, and using the inverse of this internal entropy as a filter on all the good things that could happen (the positive goals that represent the entropy you want to maximize) makes a lot of sense to me now.

The idea of using a positive goal multiplied by a factor from 0 to 1 representing health (or energy level) corresponds to the function to be maximized being:

External entropy production / Internal entropy value

The health coefficient then corresponds to h = 1 / internal entropy value, so the score of a future is just its external entropy production multiplied by h.

So both parts of the equation correspond to entropy; the reductive part is just about keeping the internal entropy value low, while the positive part is about making the external entropy production high.

Note that I use the internal entropy "value" and not its "production", as the production can be zero, and I only want to panic when the "value" of my health is getting really low... and dividing by zero is deadly, you know, your whole universe collapses.
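
As a rough sketch of how I picture this ratio, assuming the simulation exposes two hypothetical quantities, external_entropy_production and internal_entropy (again, this is not the actual kart code):

    EPSILON = 1e-9  # assumed numerical floor, so a zero value can never collapse the universe

    def entropic_score(external_entropy_production, internal_entropy):
        # Score a simulated future as external entropy production
        # divided by the internal entropy *value* of the agent.
        # The old health coefficient is h = 1 / internal_entropy, so this
        # is the same "positive goal times coefficient" product as before.
        h = 1.0 / max(internal_entropy, EPSILON)
        return external_entropy_production * h

    # A future that races far while keeping internal entropy low wins...
    print(entropic_score(5.0, internal_entropy=1.0))   # 5.0
    # ...over one that races just as far but lets internal entropy grow.
    print(entropic_score(5.0, internal_entropy=10.0))  # 0.5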

Finally, I am glad I could remove this ugly part of a general intelligence representing "instincts". I didn't want to create "hooligans" of any kind, so knowing it was just the inverse of a perfectly defined entropy makes me feel better.
