[SciPy-user] Maximum entropy distribution for Ising model - setup?
James Coughlan
coughlan at ski.org
Fri Oct 27 15:11:39 CDT 2006
Hi,
I've done some max. ent. modeling, although I'm unfamiliar with the
scipy maxentropy package. A few points:
1.) If N=10 spins suffices, then there are only 2^10=1024 possible
configurations of your model, so you can calculate Z, and thus the
average spin <si> (and spin product <si*sj>) values, exactly by direct
enumeration, completely bypassing the maxentropy package. On the other
hand, maybe scipy maxentropy can save you a lot of work!
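To make point 1 concrete, here is a sketch of the brute-force enumeration in NumPy. The fields h and couplings J are made up purely for illustration, and I'm assuming the energy convention E(s) = -sum_i hi*si - sum_{i<j} Jij*si*sj with p(s) proportional to exp(-E(s)):

```python
import itertools
import numpy as np

# Hypothetical small model: N spins, arbitrary made-up parameters.
N = 10
rng = np.random.default_rng(0)
h = rng.normal(size=N)                              # fields h_i
J = np.triu(rng.normal(size=(N, N)), k=1) * 0.1     # couplings J_ij, i < j only

# Enumerate all 2^N configurations with s_i in {-1, +1}.
configs = np.array(list(itertools.product([-1, 1], repeat=N)))  # shape (2^N, N)

# Energy E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j, p(s) ~ exp(-E(s)).
energies = -(configs @ h) - np.einsum('ki,ij,kj->k', configs, J, configs)
weights = np.exp(-energies)
Z = weights.sum()                 # partition function, computed exactly
p = weights / Z

# Exact model averages <s_i> and <s_i s_j>.
avg_s = p @ configs                                     # shape (N,)
avg_ss = np.einsum('k,ki,kj->ij', p, configs, configs)  # shape (N, N)
```

For N=10 this is 1024 terms, which is instantaneous; the cost doubles with each added spin, so this only works for small N.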
2.) If you are doing max. ent. modeling, then you are given *empirical*
(measured) values of <si>_{emp} and <si*sj>_{emp} and are trying to find
model parameters hi and Jij to match these values, i.e. <si>=<si>_{emp}
and <si*sj>=<si*sj>_{emp} where the LHS's are the averages w.r.t. the
model. You can solve for the model parameters by iterating these
gradient descent equations:
hi_{new} = hi_{old} + K * (<si> - <si>_{emp})
Jij_{new} = Jij_{old} + K' * (<si*sj> - <si*sj>_{emp})
where K and K' are positive "step size" constants. (On the RHS, <si> and
<si*sj> are averages w.r.t. the model with parameters hi_{old} and Jij_{old}.)
But this process can be slow. It can be sped up by choosing K, K'
adaptively; maybe the maxentropy package does this more intelligently.
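Here is a self-contained sketch of that moment-matching loop for a toy model, with exact moments computed by enumeration. The "empirical" moments are generated from made-up "true" parameters rather than real data. One caveat on signs: I parameterize p(s) ~ exp(h.s + s.J.s) directly, under which the likelihood-ascent step adds K * (empirical - model); the equations above have the opposite sign, which corresponds to the convention where h and J enter the energy with a minus sign:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N = 6  # small enough that exact enumeration is cheap

# Made-up "true" parameters, used only to fake some empirical moments.
h_true = rng.normal(size=N) * 0.5
J_true = np.triu(rng.normal(size=(N, N)), k=1) * 0.3

configs = np.array(list(itertools.product([-1, 1], repeat=N)))

def moments(h, J):
    """Exact <s_i> and <s_i s_j> under p(s) ~ exp(h.s + s.J.s)."""
    logw = configs @ h + np.einsum('ki,ij,kj->k', configs, J, configs)
    p = np.exp(logw - logw.max())   # subtract max for numerical stability
    p /= p.sum()
    return p @ configs, np.einsum('k,ki,kj->ij', p, configs, configs)

emp_s, emp_ss = moments(h_true, J_true)  # stand-in for measured moments

# Gradient-ascent moment matching with fixed step size K.
h = np.zeros(N)
J = np.zeros((N, N))
K = 0.2
for _ in range(3000):
    m_s, m_ss = moments(h, J)
    h += K * (emp_s - m_s)
    J += K * np.triu(emp_ss - m_ss, k=1)  # keep J upper-triangular
```

After convergence the model moments match the empirical ones, which is exactly the max. ent. fitting criterion. In practice you would replace the fixed K with a line search or a quasi-Newton method (which is presumably what the maxentropy package does internally).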
3.) In a typical Ising model, interactions are nearest neighbor, which
means that Jij is sparse (i.e. it is nonzero only for neighboring spins
i and j). But in principle your model can certainly handle arbitrary
Jij. However, it might be easier (and less prone to overfitting) if you
keep Jij sparse.
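As a minimal illustration of what a sparse nearest-neighbor Jij looks like, here is a hypothetical 1-D chain where only adjacent spins interact (the uniform coupling strength is an assumption):

```python
import numpy as np

# Hypothetical 1-D chain of N spins: J_ij nonzero only for neighbors j = i + 1.
N = 10
coupling = 1.0  # uniform nearest-neighbor strength, chosen for illustration
J = np.zeros((N, N))
for i in range(N - 1):
    J[i, i + 1] = coupling

# Only N-1 of the N*(N-1)/2 possible pairs are active, so there are far
# fewer Jij parameters to fit than in the fully connected case.
```

Fewer free parameters means fewer empirical moments <si*sj>_{emp} to match, which is what makes the sparse model less prone to overfitting.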
Hope this helps.
Best,
James