[SciPy-User] Maximum entropy distribution for Ising model - setup?

Robin robince@gmail....
Fri Jan 8 07:17:25 CST 2010


On Thu, Jan 7, 2010 at 9:19 AM, Jordi Molins Coronado
<jordi_molins@hotmail.com> wrote:
>
> Sorry, I see my previous message was a disaster in formatting; I will try again in a different way. Sorry for the inconvenience.
>
>
> Hello, I am new to this forum. I am looking for a numerical solution to the inverse problem of an Ising model (or a model not unlike the Ising model, see below). I have seen an old but very interesting discussion about this subject on this list (http://mail.scipy.org/pipermail/scipy-user/2006-October/009703.html).
> I would like to pose my problem (which is quite similar to the problem discussed in that thread) and kindly ask your opinion on it:
>
> My space is a set of discrete nodes, s_i, where i=1,...,N, each of which can take two values, {0,1}. Empirically I have the following information:
> <s_i>_emp and <s_i*s_j>_emp, where i,j=1,...,N with i!=j.
>
> It is well known in the literature that the Ising model
>
> P(s_1, s_2, ..., s_N) = 1 / Z * exp( sum{for all i}(h_i*s_i) + 0.5*sum{for all i!=j}(J_ij*s_i*s_j) )
> maximizes entropy subject to the constraints given above (in fact, this is not quite the Ising model, because the Ising model assumes only nearest-neighbour interactions, whereas I have interactions between all pairs of nodes, but I believe the above P(s_1,...,s_N) still maximizes entropy given these constraints).
> What I would like is to solve the inverse problem of finding the h_i and J_ij which maximize entropy given my constraints. However, I would like to restrict the number of distinct h_i and J_ij, since having complete freedom could become an unwieldy problem. For example, I could restrict h_i = H and J_ij = J for all i,j=1,...,N, i!=j. Or I could have a partition of my nodes, say nodes 1 to M having h_i = H1 and J_ij = J1 (i,j=1,...,M, i!=j), nodes M+1 to N having h_i = H2 and J_ij = J2 (i,j=M+1,...,N, i!=j), and J_ij = J3 for i=1,...,M and j=M+1,...,N.
> If I understand correctly the discussion in the thread shown above, a numerical solution for the inverse problem would be:
> h_i_{new} = h_i_{old} + K * (<s_i> - <s_i>_emp)
> J_ij_{new} = J_ij_{old} + K' * (<s_i*s_j> - <s_i*s_j>_emp)
>
> where K and K' are positive "step size" constants. (On the RHS, <s_i> and <s_i*s_j> are computed with respect to h_i_{old} and J_ij_{old}.)
> Have I understood all this correctly? In particular, for the case h_i = H and J_ij = J for all i,j=1,...,N, i!=j, could I simplify the previous algorithm by restricting the calculations to, say, i=1 (since i=2,...,N should give the same result)? And for the case h_i = H1, J_ij = J1 (i,j=1,...,M, i!=j) and h_i = H2, J_ij = J2 (i,j=M+1,...,N, i!=j), could I restrict the calculations to, say, i=1 and i=M+1?
> Thank you for your help, and sorry if, being new here, I have committed some etiquette mistake.

Hi,

I'm not so familiar with the statistical mechanics notation, but you
might be interested in the maxent module of the pyentropy package I
have produced as part of my PhD:
http://code.google.com/p/pyentropy/

The main purpose of pyentropy is calculation of bias-corrected entropy
and information values from limited data sets, but it includes the
maxent module, which computes maximum entropy distributions over finite
alphabet spaces with marginal constraints up to any order. (I am
working in computational neuroscience, so much of the notation will
probably be a bit different.)
So I think a second-order solution from this framework over a binary
space is the same as the Ising model. You can get the h_i's and J's
directly from the results (they are called theta in the code) -
although I think they have a slightly different normalisation, because
the Ising model uses {-1,1} and this uses {0,1}...
http://pyentropy.googlecode.com/svn/docs/api.html#module-pyentropy.maxent
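
If it helps, the normalisation difference is just the algebra of
substituting s_i = (sigma_i + 1)/2 into the exponent. A minimal sketch
of the conversion, assuming a symmetric J with zero diagonal and the
same 0.5*sum convention as in your message (this is my own
illustration, not a function in pyentropy):

import numpy as np

def ising_params_from_01(h, J):
    """Convert (h, J) for 0/1 variables to (h', J') for -1/+1 spins.

    Both parameterisations use
        E = sum_i h_i s_i + 0.5 * sum_{i!=j} J_ij s_i s_j,
    with s_i in {0,1} on input and sigma_i in {-1,+1} on output.
    J is assumed symmetric with zero diagonal.
    """
    h = np.asarray(h, dtype=float)
    J = np.asarray(J, dtype=float)
    # Substituting s_i = (sigma_i + 1)/2 and collecting terms gives:
    #   sigma_i*sigma_j coefficient -> J_ij / 4
    #   sigma_i coefficient         -> h_i / 2 + (1/4) * sum_{j!=i} J_ij
    # (the leftover constant is absorbed into the partition function Z).
    J_pm = J / 4.0
    h_pm = h / 2.0 + J.sum(axis=1) / 4.0
    return h_pm, J_pm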

With this, on a normal computer I can solve for about 18 binary
variables in a reasonable amount of time (i.e. less than an hour for
several runs), but the cost grows exponentially with more variables. (I
have a much more efficient but more hackish version of the same
algorithm that I haven't added to pyentropy yet, but will in the next
few weeks.)
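
For small N you could also just implement the kind of moment-matching
update you describe by brute force - something along these lines (a toy
sketch, not code from pyentropy; it enumerates all 2**N states to get
the model averages, so it is only feasible up to N of around 20, and
note the sign convention: the updates push the model averages towards
the empirical ones):

import itertools
import numpy as np

def fit_ising_01(si_emp, sisj_emp, n_iter=2000, K=0.1, Kp=0.1):
    """Fit h_i and J_ij (0/1 variables) by matching <s_i> and <s_i*s_j>.

    si_emp   : length-N vector of empirical <s_i>
    sisj_emp : N x N matrix of empirical <s_i*s_j> (diagonal ignored)
    K, Kp    : step sizes for the h and J updates
    """
    si_emp = np.asarray(si_emp, dtype=float)
    sisj_emp = np.asarray(sisj_emp, dtype=float)
    N = len(si_emp)
    states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
    h = np.zeros(N)
    J = np.zeros((N, N))
    for _ in range(n_iter):
        # unnormalised log-probability of every state under the current (h, J)
        logp = states @ h + 0.5 * ((states @ J) * states).sum(axis=1)
        P = np.exp(logp - logp.max())
        P /= P.sum()
        # model averages <s_i> and <s_i*s_j>
        si_mod = P @ states
        sisj_mod = (states * P[:, None]).T @ states
        # move the parameters so the model averages approach the empirical ones
        h += K * (si_emp - si_mod)
        J += Kp * (sisj_emp - sisj_mod)
        np.fill_diagonal(J, 0.0)
    return h, J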

In the case you describe, where the thetas are constrained to be equal
at each order, the system can be solved much more efficiently. I have
code to do this which is not released (and a bit messy), but if you are
interested I could send it to you. In neuroscience this situation is
called the 'pooled model'. You can see a description of how to solve
this reduced case here:
http://rsta.royalsocietypublishing.org/content/367/1901/3297.short
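
(To see why the pooled case is so much cheaper: when all h_i = H and
all J_ij = J, only the number of active nodes matters, so the model
averages can be computed from N+1 weights instead of 2**N states. A
rough illustration of that counting argument - my own sketch, not the
information-geometry method from the paper above:)

import numpy as np
from scipy.special import gammaln

def pooled_moments(H, J, N):
    """<s_i> and <s_i*s_j> when all h_i = H and all J_ij = J (0/1 variables).

    Only the number of active nodes k matters:
        P(k) is proportional to  C(N, k) * exp(H*k + 0.5*J*k*(k-1)).
    """
    k = np.arange(N + 1)
    # log C(N, k) via log-gamma, to stay finite for larger N
    log_binom = gammaln(N + 1) - gammaln(k + 1) - gammaln(N - k + 1)
    logw = log_binom + H * k + 0.5 * J * k * (k - 1)
    P = np.exp(logw - logw.max())
    P /= P.sum()
    si = (P * k).sum() / N                           # <s_i>, same for every i
    sisj = (P * k * (k - 1)).sum() / (N * (N - 1))   # <s_i*s_j>, same for every pair
    return si, sisj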

The method uses information geometry from Amari: by transforming
between the P space (the probability vector), the eta space (the
marginals) and the theta space (the h's, J's, etc.) it is possible to
find the maximum entropy solution as a projection.

Anyway, I'm not sure if that helps... the documentation on how to get
the thetas and on the ordering of the vector is probably not so clear,
so give me a shout if you start using it and have any questions. If you
don't have the full probability distribution you can pass the eta
vector to the solve function (which would be a vector of the <s_i> and
<s_i*s_j> values) and set the optional argument eta_given=True
(hopefully this is clear from the option handling in the code).
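
For concreteness, building that eta vector from an array of sampled
binary words could look like the sketch below (plain numpy, independent
of pyentropy; the ordering of the pairwise terms here is just one
possible convention, so check it against the maxent docs before passing
the result to solve with eta_given=True):

import numpy as np

def eta_from_samples(samples):
    """Empirical <s_i> and <s_i*s_j> (i < j) from an (n_samples, N) 0/1 array."""
    samples = np.asarray(samples, dtype=float)
    si = samples.mean(axis=0)                          # the <s_i> terms
    N = samples.shape[1]
    sisj = [(samples[:, i] * samples[:, j]).mean()     # the <s_i*s_j> terms
            for i in range(N) for j in range(i + 1, N)]
    return np.concatenate([si, sisj])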

Cheers

Robin

