Tuesday 22 April 2014

Layers and layers of intelligence.

These days I have been busy ironing out the ideas about how different "levels" of entropic intelligence could be layered, one over the other, to make up our complex and sophisticated mind.

I have come across a very simple way -once you get the idea- to arrange it as a stack of layers of "common sense", each placed on top of the previous one.

There are at least two ways of explaining it: algorithmic (for programmers) or via entropy laws (for physicists), so I will focus first on the algorithmic aspect, so you can build your own "multilayered common sense intelligence machine" if you ever need one (I am already working on it, but it is still far from done).

So let's go at it the easy way, using the good old "kart simulation" example.

To be clear about the problem we are facing, I will get straight to the point: using 100 crazy blind monkeys to randomly drive the kart in the 100 futures I have to imagine, and then taking a "common sense" decision based on what those crazy monkeys did, was maybe, just maybe, not such a clever idea after all.
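Just so the idea is concrete, here is a minimal sketch of that "blind monkeys" decision loop, assuming a toy kart model; the function names, the toy physics and the goal (distance raced squared) are my own illustration, not the original kart code:

```python
import random

ACTIONS = ["left", "straight", "right"]   # candidate first moves
N_FUTURES = 100                           # the 100 "blind monkeys"
HORIZON = 50                              # steps imagined into the future

def step(state, action):
    """Toy kart physics (illustrative only): state is (distance, heading)."""
    distance, heading = state
    heading += {"left": -0.1, "straight": 0.0, "right": 0.1}[action]
    return (distance + 1.0, heading)

def score_future(state):
    """Positive goal from the previous posts: distance raced squared."""
    distance, _ = state
    return distance ** 2

def common_sense_choice(state):
    best_action, best_total = None, float("-inf")
    for first_action in ACTIONS:
        total = 0.0
        for _ in range(N_FUTURES):
            s = step(state, first_action)
            # after the first move, a "crazy blind monkey" drives at random
            for _ in range(HORIZON - 1):
                s = step(s, random.choice(ACTIONS))
            total += score_future(s)
        if total > best_total:
            best_action, best_total = first_action, total
    return best_action

print(common_sense_choice((0.0, 0.0)))
```

Each candidate first move is scored only by what 100 random futures starting with it achieve, and that averaged "common sense" is the whole decision; that is exactly the part that, as we will see, is not always so clever.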

Negative goals

We have seen how "common sense" works and how to bend it to our liking by adding positive and reductive goals. The video clearly showed the benefit of the mix of goals used, but is that enough to avoid danger, or do we need something more... powerful?

Negative goals are quite natural for us: if the kart lowers its health by 10%, you can think of it as a mere "reduction" applied to the positive goals -distance raced squared in this case- or as something purely negative: a -10 in the final score.
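In code the difference between those two readings is tiny, but it changes the shape of the score. This is just a sketch with made-up weights (the -10 for a 10% health loss follows the example above):

```python
def score_with_reduction(distance_raced, health_lost_fraction):
    # "Reduction" goal: the damage scales down the positive score.
    positive = distance_raced ** 2            # distance raced squared
    return positive * (1.0 - health_lost_fraction)

def score_with_negative_goal(distance_raced, health_lost_fraction):
    # Negative goal: the damage is a flat penalty on the final score.
    positive = distance_raced ** 2
    penalty = 100.0 * health_lost_fraction    # -10 points for a 10% loss
    return positive - penalty

print(score_with_reduction(5.0, 0.10))        # 25 * 0.9 = 22.5
print(score_with_negative_goal(5.0, 0.10))    # 25 - 10  = 15.0
```

Note that the reduction can never push a future below zero, while the flat penalty can, and a big enough penalty can dominate every future no matter what else happens in it.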

If we try to get the same results as in the previous video but using some sort of negative goals, we end up with something odd: fear is fine in really dangerous situations, it helps you avoid them effectively, but too much fear, a big negative score arising at some moment, will make the "common sense" freeze. You have added a "phobia" to something.

Monday 21 April 2014

"Reduction" goals

In the last post we described "common sense" and how to use it with positive goals, but I also commented on how badly we need to learn to deal with negativity: as has always been said, a little fear is good.

In the physical layer of the algorithm, we always talk about entropy, and this time is no different, so let's go down to the basics to understand how to think about negative goals the right way.

A living thing is a polar structure: it follows two apparently opposite laws of entropy at the same time.

First of all, a living thing is a physical thing, so all the physical laws of the macroscopic world we live in apply, and we know that means obeying the second law of thermodynamics: the instantaneous entropy always has to grow, and in the optimum possible way.

On top of this physical law sits the second one: keep your internal entropy low, as low as possible.
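One possible way to read both laws together when scoring an imagined future, purely as a sketch of my own (the multiplicative mix and the names are assumptions, not a fixed formula from these posts):

```python
def future_score(external_entropy_produced, internal_health):
    """external_entropy_produced: how much this future 'moves' the outside world (first law).
    internal_health: 1.0 = internal order intact, 0.0 = internally destroyed (second law).
    A future where the kart is destroyed is worth nothing, however much entropy it produced."""
    return external_entropy_produced * internal_health

print(future_score(25.0, 1.0))   # healthy future keeps its full score
print(future_score(25.0, 0.0))   # internally destroyed future scores zero
```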

Robotic psychology?

Entropic intelligence, or the natural tendency of intelligent beings to do whatever it takes in order to maximize entropy generation (measured not at the present moment but at some point in the future), not only generates intelligent behaviour, as the original paper's authors suggested: it is the missing piece we needed to push current AI into real "human like" intelligence.
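As a rough illustration of that definition, here is a sketch where "entropy generation measured at some point in the future" is approximated by the Shannon entropy of the end states reached by random futures; the toy kart model, the discretisation and the numbers are my own assumptions, not the original code:

```python
import math
import random
from collections import Counter

def shannon_entropy(samples):
    """Shannon entropy of a list of discrete outcomes."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def rollout(x, first_action, horizon=30):
    """Toy 1-D kart: the action nudges the speed, then random driving follows."""
    v = first_action
    for _ in range(horizon):
        v += random.choice([-1, 0, 1])
        x += v
    return round(x / 10)  # discretise the final position into cells

def entropic_choice(x, actions=(-1, 0, 1), n_futures=100):
    # Pick the first action whose reachable futures are the most diverse.
    return max(actions,
               key=lambda a: shannon_entropy([rollout(x, a) for _ in range(n_futures)]))

print(entropic_choice(0))
```

The agent prefers the move that keeps the widest spread of futures reachable, which is the behaviour the rest of this post tries to name properly.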

It is now a year since my first contact with this idea, and for the first time I find myself prepared to name it correctly and give a definition of what this algorithm is really doing inside.

During all this time, the "intelligence" algorithm itself and the resulting behaviours have been given -in my thoughts, in the code and in the posts here- many different names, basically because I didn't know exactly what was emerging in the simulations, just that it seemed deeply related to the concept of intelligence somehow.