[SciPy-user] Maximum entropy distribution for Ising model - setup?
Martin Spacek
scipy at mspacek.mm.st
Sat Oct 28 07:23:36 CDT 2006
Hi James,
Bear with me; I don't yet have a complete understanding of this.
James Coughlan wrote:
> Hi,
>
> I've done some max. ent. modeling, although I'm unfamiliar with the
> scipy maxentropy package. A few points:
>
> 1.) If N=10 spins suffices, then there are only 2^10=1024 possible
> configurations of your model, so you can calculate Z and thus the
> average spin <si> (and spin product <si*sj>) values exactly by hand, and
> completely bypass the maxentropy package. On the other hand, maybe scipy
> maxentropy can save you a lot of work!
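James's point 1 can be sketched in a few lines of NumPy. With N=10 spins, all 1024 configurations can be enumerated and Z computed exactly; the h and J values below are random placeholders, not fitted parameters:

```python
import itertools
import numpy as np

N = 10
rng = np.random.default_rng(0)
h = 0.5 * rng.normal(size=N)                 # placeholder local fields hi
J = 0.5 * np.triu(rng.normal(size=(N, N)), k=1)  # placeholder couplings Jij, i < j

# Enumerate all 2^N = 1024 spin configurations s in {-1, +1}^N.
states = np.array(list(itertools.product([-1, 1], repeat=N)))

# Energy of each state: E(s) = -sum_i hi*si - sum_{i<j} Jij*si*sj
E = -(states @ h) - np.einsum('ki,ij,kj->k', states, J, states)

# Partition function and Boltzmann probabilities (beta = 1).
w = np.exp(-E)
Z = w.sum()
p = w / Z

# Exact model averages <si> and <si*sj>.
si = p @ states                                      # shape (N,)
sisj = np.einsum('k,ki,kj->ij', p, states, states)   # shape (N, N)
```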
Hmm. I don't quite follow. I already have the average spin values
<si> and the average spin product values <si*sj> (I get those from
experiment). What I want to find is the combination of the individual
spins and spin products (that is, the weights hi and Jij) that
maximizes the entropy, S = -sum( pi*log(pi) ), of the probability
distribution over all possible combinations of spin states
(up, up, down, ... or up, down, up, ...), where each pi is the model's
predicted probability of one of the 1024 states.
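For concreteness, the entropy of a distribution over the 1024 states is a one-liner; a small sketch (zero-probability states are skipped, since lim p*log(p) = 0):

```python
import numpy as np

def entropy(p):
    """Shannon entropy S = -sum_i p_i * log(p_i), ignoring zero entries."""
    p = np.asarray(p)
    nz = p[p > 0]
    return -np.sum(nz * np.log(nz))

# The uniform distribution over 1024 states maximizes entropy: S = log(1024).
p_uniform = np.full(1024, 1.0 / 1024)
print(entropy(p_uniform))  # 6.931471805599453 (= log(1024))
```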
> 2.) If you are doing max. ent. modeling, then you are given *empirical*
> (measured) values of <si>_{emp} and <si*sj>_{emp} and are trying to find
> model parameters hi and Jij to match these values, i.e. <si>=<si>_{emp}
> and <si*sj>=<si*sj>_{emp} where the LHS's are the averages w.r.t. the
> model. You can solve for the model parameters by iterating these
> gradient descent equations:
>
> hi_{new} = hi_{old} + K * (<si>_{emp} - <si>)
> Jij_{new} = Jij_{old} + K' * (<si*sj>_{emp} - <si*sj>)
>
> where K and K' are pos. "step size" constants. (On the RHS, <si> and
> <si*sj> are w.r.t. hi_{old} and Jij_{old}.)
>
> But this process can be slow. It can be sped up by choosing K, K'
> adaptively; maybe the maxentropy package does this more intelligently.
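A minimal sketch of that fitting loop, using exact enumeration (as in point 1) to evaluate the model averages at each step. The step size, iteration count, and the "true" parameters used to generate the empirical targets are all illustrative; the sign convention moves the model averages toward the empirical ones:

```python
import itertools
import numpy as np

N = 10
states = np.array(list(itertools.product([-1, 1], repeat=N)))

def model_moments(h, J):
    """Exact <si> and <si*sj> under the Ising distribution (beta = 1)."""
    E = -(states @ h) - np.einsum('ki,ij,kj->k', states, J, states)
    w = np.exp(-(E - E.min()))      # shift for numerical stability
    p = w / w.sum()
    si = p @ states
    sisj = np.einsum('k,ki,kj->ij', p, states, states)
    return si, sisj

# Hypothetical empirical targets; in practice these come from the data.
rng = np.random.default_rng(1)
h_true = 0.1 * rng.normal(size=N)
J_true = 0.1 * np.triu(rng.normal(size=(N, N)), k=1)
si_emp, sisj_emp = model_moments(h_true, J_true)

# Fixed-step gradient ascent on the parameters:
#   hi  <- hi  + K  * (<si>_emp    - <si>)
#   Jij <- Jij + K' * (<si*sj>_emp - <si*sj>)
h = np.zeros(N)
J = np.zeros((N, N))
K = 0.3
for _ in range(1000):
    si, sisj = model_moments(h, J)
    h += K * (si_emp - si)
    J += K * np.triu(sisj_emp - sisj, k=1)
```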
I thought the whole magic of maximum entropy is not just that you end
up with a set of hi and Jij giving a probability distribution that
matches the expectation values you asked for (to within some
tolerance). It's that, among all distributions satisfying those
constraints, you choose the hi and Jij whose distribution has maximal
entropy, thereby minimizing any assumptions about it that weren't
specified by the constraints (such as third-, fourth-, or higher-order
statistics).
> 3.) In a typical Ising model, interactions are nearest neighbor, which
> means that Jij is sparse (i.e. it only =1 for neighboring spins i and
> j). But in principle your model can certainly handle arbitrary Jij.
> However, it might be easier (and less prone to overfitting) if you keep
> Jij sparse.
In this case, there's no measure of nearness of neighbours that I can
use: every spin is potentially a neighbour of every other. I'm not
sure this is a typical nearest neighbour Ising model. After some more
reading, I almost suspect that the authors of the Nature paper I
mentioned shouldn't have chosen to call it Ising. I think many of the
Jij will be quite small, but I want that to come out of the model. I
don't have any way to justify setting some of them to zero ahead of
time. I certainly doubt that any of the Jij would end up equal to 1.
Martin