[SciPy-user] maxentropy

Matthew Cooper m.cooper at computer.org
Thu Mar 30 19:55:59 CST 2006


Ed,

I apologize for not looking at this sooner.  I went through the new
conditionalexample_high_level.py and I still think there is a small change
that needs to be made (I think it's small, anyway).  I think we want F to
have the size

F = sparse.lil_matrix((len(f), numcorpus*numsamplespace))

where numcorpus = len(corpus)
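
To make the layout concrete, here's roughly how I'm picturing F being
filled in (f, corpus and samplespace are just placeholder names here, and
the column ordering n*numsamplespace + x is only my assumption about the
convention):

from scipy import sparse

numcorpus = len(corpus)            # N labeled training contexts w_n
numsamplespace = len(samplespace)  # X possible labels x

F = sparse.lil_matrix((len(f), numcorpus * numsamplespace))

for i, f_i in enumerate(f):
    for n, w_n in enumerate(corpus):
        for x_index, x in enumerate(samplespace):
            val = f_i(x, w_n)
            if val != 0:           # only store nonzeros to keep F sparse
                F[i, n * numsamplespace + x_index] = val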

Basically, the space over which we evaluate each feature to compute
expectations under the current model is X*N, where X is the size of the
samplespace (the number of classes) and N is the number of labeled training
observations.  For a feature f_i, the expected value under the model is

\langle f_i \rangle_\theta = \frac{1}{N} \sum_{n=1}^N \sum_{x \in samplespace} P_\theta(x | w_n) \, f_i(x, w_n)

so that for each feature function, we need a lookup table that covers all
pairs of x from the samplespace and w_n from the training set.  The outer
sum with the 1/N weight is simply the empirical context distribution, which
is uniform over the training set.  The model distribution is only defined
conditioned on contexts from the training set.  This equation replaces an
exponentially large space of contexts with only the N contexts from the
training set.
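
In code, if pmf is a (hypothetical) array of shape (numcorpus,
numsamplespace) with pmf[n, x] = P_theta(x | w_n), then flattening it row
by row matches the column layout of F above, and the expectations should
come out of a single matrix-vector product.  Again, just a sketch with
placeholder names:

import numpy as np

p = np.asarray(pmf).ravel()   # p[n*numsamplespace + x] = P_theta(x | w_n)

# The empirical context distribution is uniform over the N training
# contexts, so averaging over them gives the model expectations:
expectations = F.tocsr().dot(p) / numcorpus   # <f_i>_theta for each i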

I don't think this alters your code, as long as the pmf and F matrices are
initialized correctly.

At test time, we do need to evaluate the feature functions on unseen
documents, but this can be handled more easily.

I have another question.  I haven't installed your version of scipy
outright, since it was a bit of a pain to get the current stable
distribution up on my machine.  However, if I need to load a bunch of
modules from your version to test the conditional models, is there an easy
way to do that?  At the moment I couldn't import sparseutils (I can't find
the .py file, probably because I haven't built it?).

Thanks,
Matt


On 3/26/06, Ed Schofield <schofield at ftw.at> wrote:
>
>
> On 23/03/2006, at 6:29 PM, Ed Schofield wrote:
>
> >
> > On 21/03/2006, at 9:53 PM, Matthew Cooper wrote:
> >
> >>
> >> Hi Ed,
> >>
> >> I am playing around with the code on some more small examples and
> >> everything has been fine.  The thing that will hold me back from
> >> testing on larger datasets is the F matrix which thus far requires
> >> the
> >> space of (context,label) pairs to be enumerable.  I know that
> >> internally you are using a sparse representation for this matrix.
> >> Can
> >> I initialize the model with a sparse matrix also?  This also requires
> >> changes with the indices_context parameter in the examples.
> >
> > Hi Matt,
> > Yes, good point.  I'd conveniently forgotten about this little problem
> > ;)  It turns out scipy's sparse matrices need extending to support
> > this.
>
> And done.  I've committed the new sparse matrix features to the ejs
> branch and fixed conditional maxent models to work with them.  The
> examples seem to work fine too.  Please let me know how you go with it!
>
> -- Ed
>
>