Wednesday 24 October 2018

Hacking Reinforcement Learning

My good friend and close colleague Guillem had a really busy year giving talks about Reinforcement Learning at several events, like Piter Py 2017 (Saint Petersburg, Russia), EuroPython 2018 (Edinburgh, UK), PyConES 2018 (Málaga, Spain) and PyData Mallorca (among others!), introducing Fractal Monte Carlo to a broad audience.

All the talks were about RL, but the ones held at EuroPython (in English) and PyConES (in Spanish) were both about "hacking RL" by introducing the Fractal Monte Carlo (FMC) algorithm as a cheap and efficient way to generate lots of high-quality rollouts of the game/system being controlled.

Tuesday 16 October 2018

Graph entropy 7: slides

After the series of six posts about Tree-Graph Entropy (starting here), I have prepared a short presentation about Graph Entropy, mainly to clarify the concepts for myself (and for anyone interested) and to present some real-world use cases.



One of the most interesting ideas introduced in this presentation is a method for, once you have defined the entropy of all the nodes in a static directed acyclic graph (a tree), easily updating all those entropy values as the graph evolves over time, whether by altering the conditional probability of some connections or by adding and removing connections: the trick is to treat nodes and connections as cellular automata that can adjust their internal entropies asynchronously.
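
To make the idea concrete, here is a minimal sketch of how such an asynchronous update could work. This is only my illustration of the mechanism, not the code from the slides: it assumes the per-node terms multiply as in the H2/H3 entropies defined in this series, and all the names are made up.

    import math

    class Node:
        # One node of the graph, caching its local entropy term
        # (the per-node decomposition is an assumption for this sketch).
        def __init__(self, name):
            self.name = name
            self.in_probs = {}   # parent name -> conditional probability
            self.h_local = 1.0   # multiplicative term, neutral by default

        def update(self):
            # Recompute the local term using the (2 - p^p) factor from the
            # series, taking 0^0 = 1 for the extreme probabilities.
            term = 1.0
            for p in self.in_probs.values():
                term *= 2.0 - (p ** p if p > 0 else 1.0)
            self.h_local = term

    def set_edge(nodes, child, parent, p):
        # Changing one connection only "dirties" the affected node, so each
        # node can refresh its own value asynchronously, like a cell in a
        # cellular automaton, instead of recomputing the whole graph.
        nodes[child].in_probs[parent] = p
        nodes[child].update()

    def graph_entropy(nodes):
        # Global H3-style value: 1 plus the log of the product of all terms.
        prod = 1.0
        for node in nodes.values():
            prod *= node.h_local
        return 1.0 + math.log(prod)

    # Example: three nodes where B and C depend on A.
    nodes = {n: Node(n) for n in "ABC"}
    set_edge(nodes, "B", "A", 0.9)
    set_edge(nodes, "C", "A", 0.5)
    print(graph_entropy(nodes))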

You can also jump to the original Google Slides version if you want to comment on a particular slide.

If this was not enough for you and you want to read more weird things about those entropies, you can dive into the unknown realm of the entropy of negative probabilities here!

Update (24 Oct 2018): this post was referenced in the article "A Brief Review of Generalized Entropies", where the (c, d) exponents of these generalized entropies are calculated.


Saturday 25 August 2018

Curiosity solving Atari games

Some days ago I read on Twitter about playing Atari games without having access to the reward, that is, without knowing your score at all. This is called "curiosity driven" learning, as your only goal is to explore as much of the state space as possible, trying out new things regardless of the score they add or take away. Finally, a neural network learns from those examples how to move around in the game while simply avoiding its end.



Our FMC algorithm is a planning algorithm: it doesn't learn from past experiences but decides after sampling a number of possible future outcomes of taking different actions. Still, it can scan the future without any reward.
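
As a toy illustration of reward-free planning (this is not the actual FMC implementation, just a naive sketch assuming a simulator function step(state, action) and a distance between states): pick the action whose sampled futures spread out the most, using no reward at all.

    import random

    def rollout_end(step, state, actions, first_action, depth):
        # Take `first_action`, then evolve with random actions for `depth` steps.
        s = step(state, first_action)
        for _ in range(depth):
            s = step(s, random.choice(actions))
        return s

    def curious_action(step, distance, state, actions, n_rollouts=20, depth=10):
        # Pick the action whose sampled futures are most spread out in state
        # space (mean pairwise distance between final states). No reward used.
        best_action, best_spread = None, -1.0
        for action in actions:
            ends = [rollout_end(step, state, actions, action, depth)
                    for _ in range(n_rollouts)]
            spread = sum(distance(a, b) for a in ends for b in ends) / len(ends) ** 2
            if spread > best_spread:
                best_action, best_spread = action, spread
        return best_action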

Friday 3 August 2018

Roadmap to AGI

Artificial General Intelligence (AGI) is the holy grail of artificial intelligence and my personal goal since 2013, when this blog started. I seriously plan to build an AGI from scratch, with the help of my good friend Guillem Duran, and here is how I plan to do it: a plausible and doable roadmap to build an efficient AGI.

Please keep in mind we both use our spare time to work on it, so even if the roadmap is practically finished in its theoretical aspects, coding it is hard and time-consuming -we don't have access to any extra computing power except for our personal laptops- so, at the current pace, don't expect anything spectacular in the near future.

That said, the thing is doable within a few years given some extra resources, so let's start now!

AGI structure

A general intelligence, be it artificial or not, is composed of only three modules, each one with its own purpose, able to do its job both autonomously and in cooperation with the other modules.

It is only when they work together that we can say it is "intelligent" in the same sense we consider ourselves intelligent. Maybe their internal dynamics, algorithms and physical substrates are not the same, nor even close, but the idea of the three subsystems and their roles is always the same in both cases; they are just solved with different implementations.

In this initial post I will just enumerate the modules, the state of their development, and their basic functions. In the next posts I will get deeper into the details of each one. Interactions between modules will be covered later, once the different modules are properly introduced.

Wednesday 18 July 2018

Graph entropy 6: Separability

This is the 6th post in a series, so you should have read the previous posts (in the correct order) first. If you just landed here, please start your reading here and follow the links to the next post until you arrive here again, this time with the appropriate background.

In the standard Gibbs-Shannon entropy, the 4th Shannon-Khinchin axiom about separability says two different things (we will label them here as sub-axioms 4.1 and 4.2). First, given two independent distributions P and Q, the entropy of the combined distribution P×Q is:

Axiom 4.1) H(P×Q) = H(P) + H(Q)

When P and Q are not independent, this formula becomes an inequality:

Axiom 4.2) H(P×Q) ≤ H(P) + H(Q)
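
Sub-axiom 4.1 is easy to verify numerically; a quick check in Python for two independent distributions:

    import math

    def H(dist):
        # Gibbs-Shannon entropy with k = 1 and natural logarithms.
        return -sum(p * math.log(p) for p in dist if p > 0)

    P = [0.7, 0.1, 0.2]
    Q = [0.5, 0.5]
    PxQ = [p * q for p in P for q in Q]   # joint distribution of independent P, Q

    print(H(PxQ))        # ~1.4950
    print(H(P) + H(Q))   # ~1.4950, as axiom 4.1 demands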

Graph entropy, being applied to graphs instead of distributions, allows for some more ways of combining two distributions, giving not one but at least three interesting inequalities:

Wednesday 13 June 2018

Graph entropy 5: Relations

After some introductory posts (which you should have read first, starting here) we face the main task of defining the entropy of a graph, something looking like this:


Relations

We will start by dividing the graph into a collection of "Relations": a Relation is a minimal graph where a pair of nodes A and B are connected by an edge representing the conditional probability between the two events, P(A|B):


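As a data structure, a Relation is tiny; a possible sketch (the class and field names are mine, just for illustration, not from the theory):

    from dataclasses import dataclass

    @dataclass
    class Relation:
        # Minimal sub-graph: node `a` connected to node `b` with P(a|b).
        a: str
        b: str
        p_a_given_b: float

    def relations_of(graph):
        # Decompose a graph given as {node: {parent: P(node|parent)}}
        # into its collection of Relations.
        return [Relation(a, b, p)
                for a, parents in graph.items()
                for b, p in parents.items()]

    # Example: a tiny two-edge graph.
    print(relations_of({"A": {"B": 0.9}, "B": {"C": 0.5}}))
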
Tuesday 12 June 2018

Graph entropy 4: Distribution vs Graph

In previous posts, after complaining about Gibbs cross-entropy and failing to find an easy fix, I presented a new product-based formula for the entropy of a probability distribution; now I plan to generalise it to a graph.

Why is it so great to have an entropy for graphs? Because distributions are special cases of graphs, but many real-world cases are not distributions, so the standard entropy cannot be correctly applied to them.

Graph vs distribution

Let's take a simple but relevant example: there is a parking lot with 500 cars and we want to collect information about the kind of engines they use (gas engines and/or electric engines) and finally present a measurement of how much information we have.

We will assume that 350 of them are gas-only cars, 50 are pure electric and 100 are hybrids (but we don't know this in advance).

Using distributions

If we were limited to probability distributions -as in Gibbs entropy- we would say there are three disjoint subgroups of cars ('Only gas', 'Only electric', 'Hybrid') and that the probabilities of a random car being in each subgroup are P = {p1 = 350/500 = 0.7, p2 = 50/500 = 0.1, p3 = 100/500 = 0.2}, so the experiment of inspecting the engines of those cars has a Gibbs entropy of:

HG(P) = -𝚺(pi × log(pi)) = 0.2497 + 0.2303 + 0.3219 = 0.8018

If we use the new H2 and H3 formulas, we get different results, but the difference is just a matter of scale:

H2(P) = ∏(2 - pi^pi) = 1.2209 × 1.2057 × 1.2752 = 1.8772

H3(P) = 1 + log(1.8772) = 1.6298
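
These numbers are easy to reproduce (natural logarithms, as in the formulas above):

    import math

    P = [350/500, 50/500, 100/500]   # only gas, only electric, hybrid

    HG = -sum(p * math.log(p) for p in P)    # ~0.8018
    H2 = math.prod(2 - p**p for p in P)      # ~1.8772
    H3 = 1 + math.log(H2)                    # ~1.6298

    print(HG, H2, H3)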



Monday 11 June 2018

Graph entropy 3: Changing the rules

After showing that the standard Gibbs cross-entropy was flawed and trying to fix it with an also-flawed initial formulation of a "free-of-logs" entropy, we faced the problem of finding a way to substitute the sum with a product without breaking anything important. Here we go...

When you define an entropy as a sum, each of the terms is supposed to be "a little above zero": a small and positive 𝛆 ≥ 0, so when you add it to the entropy it can only slightly increase it. Also, when you add a new probability term having (p=0) or (p=1), you need this new term to be 0 so it doesn't change the resulting entropy at all.

Conversely, when you want to define an entropy as a product of terms, they need to be "a little above 1", in the form (1+𝛆), and the terms associated with the extreme probabilities (p=0) and (p=1) cannot change the resulting entropy, so they need to be exactly 1.

In the previous entropy this 𝛆 term was defined as (1 - pi^pi), and now we need something like (1+𝛆), so why not just try (2 - pi^pi)?

Let us be naive again and propose the following formulas for entropy and cross-entropy:

H2(P) = ∏(2 - pi^pi)

H2(Q|P) = ∏(2 - qi^pi)

Once again it looked too easy to be worth researching, but once again I did, and it proved (well, my friend José María Amigó actually proved it) to be a perfectly defined generalised entropy of a really weird class, with Hanel-Thurner exponents (0, 0), something never seen in the literature.

As you can see, this new cross-entropy formula is perfectly well defined for any combination of pi and qi (in this context, we are assuming 0^0 = 1) and, if you graphically compare both cross-entropy terms, you find that the Gibbs version is unbounded (when q=0 the term's value goes up to infinity):

𝛟G(p, q) = -(p × log(q))



In the new multiplicative form of entropy, this term is 'smoothed out' and nicely bounded between 1 and 2:

𝛟2(p, q) = (2 - q^p)
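
The difference is easy to see numerically, sampling both terms as q approaches 0 (here with p = 0.5):

    import math

    def phi_G(p, q):
        return -(p * math.log(q))   # Gibbs term: blows up as q -> 0

    def phi_2(p, q):
        return 2 - q ** p           # multiplicative term: stays in [1, 2]

    for q in (0.5, 0.1, 0.01, 1e-6):
        print(q, phi_G(0.5, q), phi_2(0.5, q))
    # phi_G grows without bound while phi_2 just approaches 2.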


Sunday 10 June 2018

Graph Entropy 2: A first replacement

As I commented in a previous post, I found that there were cases where cross-entropy and KL-divergence were not well defined. Unluckily, in my theory those cases were the norm.

I had two options: not even mentioning it, or trying to fix it. I opted for the first, as I had no idea how to fix it, but I felt I was hiding a big issue with the theory under the carpet, so one random day I tried to find a fix.

Ideally, I thought, I would only need to replace the (pi × log(pi)) part with something like (pi^pi), but it was such a naive idea I almost gave up before having a look. Still, I did: how different do those two functions look when plotted on their domain interval (0, 1)?

Wow! They were just mirror images of each other! In fact, you only need a small change to match them: (1 - pi^pi):
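
You can check how closely they track each other with a few samples over (0, 1):

    import math

    for p in (0.1, 0.3, 1/math.e, 0.5, 0.7, 0.9):
        gibbs_term = -p * math.log(p)   # the classic term
        new_term = 1 - p ** p           # the proposed replacement
        print(round(p, 3), round(gibbs_term, 4), round(new_term, 4))
    # Both terms vanish at p = 0 and p = 1 and peak at exactly p = 1/e,
    # with very similar shapes over the whole (0, 1) interval.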


Graph entropy 1: The problem


This post is the first in a series about a new form of entropy I came across some months ago while trying to formalise Fractal AI, its possible uses as an entropy of a graph, and how I plan to apply it to neural network learning and even to generating conscious behaviour in agents.

Failing to use Gibbs

The best formula so far accounting for the entropy of a discrete probability distribution P={pi} is the so-called Gibbs-Boltzmann-Shannon entropy:

H(P) = -k × 𝚺(pi × log(pi))

In this famous formula, the set of all possible next states of the system is divided into a partition P with n disjoint subsets, with pi representing the probability of the next state being in the i-th element of this partition. The constant k can be any positive value, so we will just assume k=1 from now on.

Most of the time we will be interested in the cross-entropy between two distributions P={pi} and Q={qi}, or the entropy of Q given P, H(Q|P): a measure of how different they are or, in terms of information, how much new information there is in knowing Q if I already know P.

In that case, the Gibbs formulation for the cross-entropy goes like this:

H(Q|P) = -𝚺(pi × log(qi))

Cross-entropy is the real formula defining an entropy, as the entropy of P can be defined as its own cross-entropy H(P|P), which has the property of being the minimal value of H(Q|P) over all possible distributions Q (this is Gibbs' inequality):

H(P) = H(P|P) ≤ H(Q|P)

As good and general as it may look, this formula hides a very important limitation I came across when trying to formalise the Fractal AI theory: if, for some index, you have qi=0 while pi is not zero too, the above formula is simply not defined, as log(0) is as undefined as 1/0 is.
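
The failure is immediate in code: with any qi = 0 paired with a non-zero pi, the Gibbs cross-entropy simply has no value.

    import math

    P = [0.5, 0.5]
    Q = [1.0, 0.0]   # q2 = 0 while p2 > 0

    try:
        H_QP = -sum(p * math.log(q) for p, q in zip(P, Q))
    except ValueError:   # math.log(0.0) raises a math domain error
        print("H(Q|P) is undefined")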

Wednesday 14 March 2018

Fractal AI "recipe"

Now that the algorithm is public and people are trying to catch up, it can be of help -mainly to us- to have a simplistic schema of what it does, with no theories nor long formulae, nothing extra but the raw skeleton of the idea, so you can read it, get the idea, and go on with your life.

I will try to simplify it to the sweet point of becoming... almost smoke!

Ingredient list:

  1. A system you can -partially- inspect to know its current state -or part of it- as a vector.
  2. A simulation you can ask "what will happen if the system evolves from initial state X0 for a time dt?".
  3. A distance between two states, so our "state space" becomes a metric space.
  4. A number of "degrees of freedom" the AI can push (see the interface sketch below).
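
In code, the ingredient list is just an interface the AI needs the world to expose; a possible sketch (the names are illustrative, not the actual framework API):

    from typing import Protocol, Sequence

    Vector = Sequence[float]

    class ControllableSystem(Protocol):
        # Hypothetical interface matching the four ingredients above.

        def observe(self) -> Vector:
            ...  # Ingredient 1: the system's current (maybe partial) state

        def simulate(self, x0: Vector, action: int, dt: float) -> Vector:
            ...  # Ingredient 2: where the system goes from x0 after dt

        def distance(self, a: Vector, b: Vector) -> float:
            ...  # Ingredient 3: a metric over state space

        def actions(self) -> Sequence[int]:
            ...  # Ingredient 4: the degrees of freedom the AI can push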

Thursday 15 February 2018

Fractal AI goes public!

Today we are glad to announce that we are finally releasing the "Fractal AI" artificial intelligence framework to the public!

This first release includes:

We have modified Fractal AI a little so now it is more powerful than ever; in fact, we have been able to beat a lot of the current records (from state-of-the-art neural networks like DQN or A3C) with about 1000 samples per step (and remember: no learning, no adaptation to any game, the same code for all of them).

We are especially proud of beating even some absolute human world records but, hey, it was going to happen anyhow!

Fractal AI playing Ms. Pac-Man. It reached a previously unknown score limit of 999,999, present in a total of 15 games. Fractal AI beat the best human records in 49 out of 50 games.