[SciPy-User] How to fit data obtained from a Monte Carlo simulation?
Thu Sep 22 17:03:46 CDT 2011
I think the seed is the same in all cases. Each random number is obtained using random.random(), but I never call random.seed(). So you think my being unable to get a satisfactory result with optimize.leastsq does indeed point to my having set up the optimization itself incorrectly? I'll do some more checking tomorrow (the dimensions should be fine, though; in case by that you meant the ranks of the arrays in your first reply, I checked those)!
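For the record, here is a minimal sketch of what I understand the seeding suggestion to be, with a made-up simulate() standing in for my actual Monte Carlo model: reseeding at the start of every simulation call makes the noise realization identical across evaluations, so the objective becomes a deterministic function of the parameters.

```python
import random

def simulate(params, n_points=100, seed=12345):
    """Toy stand-in for the real Monte Carlo simulation (hypothetical).
    Reseeding at the start of every call means the same stream of random
    numbers is drawn each time, so the output depends only on params."""
    random.seed(seed)
    a, b = params
    # Deterministic trend plus a small, now-reproducible, noise term.
    return [a * i + b + 0.01 * (random.random() - 0.5) for i in range(n_points)]

# With a fixed seed, two calls with the same parameters agree exactly,
# while different parameters still give different output.
assert simulate((2.0, 1.0)) == simulate((2.0, 1.0))
```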
Thanks very much for your time!
On 22 Sep 2011, at 18:21, email@example.com wrote:
> On Thu, Sep 22, 2011 at 11:50 AM, D K <firstname.lastname@example.org> wrote:
>> Dear David
>> thank you very much for your reply. I have been playing around with
>> fmin's ftol and xtol arguments, as you suggested. It's looking promising
>> so far, but then my initial guesses have been rather close to the true
>> values of my test set. I will keep testing, and maybe write to the
>> mailing list again at some point. Thanks again!
> If the problem is the Monte Carlo noise, then the question is whether
> you keep a fixed random seed during the calculations.
> If you have a fixed seed, then you have the same Monte Carlo noise in
> all calculations, and it shouldn't affect the derivative calculations
> or the calculations for different parameters.
>> PS: Also thanks very much to Josef, who also replied to my email. I will
>> keep trying a bit with fmin and its parameters at first, and answer your
>> questions in case I still don't get anywhere this way. I hope this
>> approach is ok...
>> On 09/21/2011 09:20 PM, J. David Lee wrote:
>>> On 09/21/2011 09:47 AM, D K wrote:
>>>> Hi everyone
>>>> I would like to fit data obtained from a Monte Carlo simulation to
>>>> experimental data, in order to extract two parameters from the
>>>> experiments. There are several issues with this:
>>>> a) There is a small element of randomness to each simulated data point;
>>>> we don't actually have a function describing the curve (the overall
>>>> curve shape is reproducible though).
>>>> b) I have never performed curve fitting before, and I haven't got a clue
>>>> how to even go about looking for the required information.
>>>> c) I don't have a strong maths background.
>>>> I tried using optimize.leastsq, but I learnt that, apparently, I ought
>>>> to know the function describing my data to be able to use it (I kept
>>>> researching because, although it exited with code 2, claiming that the
>>>> fit had been successful, it mainly returned the initial guess as the
>>>> fitting result). So I switched to optimize.fmin (having read that it
>>>> only uses the function values); this, however, does not converge and
>>>> simply exits after the maximum number of iterations has been performed.
>>> Hi Donata,
>>> Because your model varies from run to run, you may not be able to reach
>>> the default tolerances necessary for successful termination of leastsq.
>>> If you look at the documentation for leastsq, you will see several
>>> tolerance parameters, ftol, xtol, and gtol. Modifying these may help in
>>> your case.
>>> Most (all?) of these optimization routines are doing some kind of
>>> gradient descent. The variability in your model will affect both the
>>> error estimate and the search direction. Because you'll be calculating
>>> the Jacobian matrix (gradients) numerically, you'll almost certainly
>>> want to modify leastsq's epsfcn parameter. With the default value, it
>>> may be that the variability in your model is larger than the
>>> difference due to the delta x used. In that case, your search direction
>>> could be essentially random.
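[Annotating inline for the archive: this is roughly how I am now calling leastsq with loosened tolerances and an enlarged epsfcn, as suggested. residuals() is just a placeholder for my real simulation-vs-experiment residual function, and the tolerance values are illustrative, not recommendations.]

```python
import numpy as np
from scipy import optimize

# Synthetic "experimental" data; in the real problem this comes from
# measurements, and the model below would run the Monte Carlo simulation.
x = np.linspace(0.0, 1.0, 50)
y_exp = 2.0 * x + 1.0 + 0.01 * np.random.RandomState(0).randn(50)

def residuals(params):
    """Placeholder residual function: model prediction minus data."""
    a, b = params
    return a * x + b - y_exp

popt, ier = optimize.leastsq(
    residuals,
    x0=[1.0, 0.0],   # initial guess for (a, b)
    ftol=1e-4,       # loosened: don't chase digits below the noise level
    xtol=1e-4,
    gtol=1e-4,
    epsfcn=1e-2,     # larger finite-difference step than the model noise
)
```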
>>> After writing this, I'm thinking that fmin would be a better fit, as it
>>> doesn't have the numerical gradient calculation and associated problems.
>>> fmin accepts the same xtol and ftol arguments as leastsq, which might be useful.
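[For completeness, here is the fmin variant I am trying, again with a placeholder objective in place of my simulation; fmin minimizes a scalar cost, so the residuals are collapsed into a sum of squared errors, and the loosened tolerances are illustrative.]

```python
import numpy as np
from scipy import optimize

# Synthetic "experimental" data standing in for real measurements.
x = np.linspace(0.0, 1.0, 50)
y_exp = 2.0 * x + 1.0

def sse(params):
    """Sum of squared errors; fmin (Nelder-Mead) only needs function
    values, so no numerical gradient is ever computed."""
    a, b = params
    return np.sum((a * x + b - y_exp) ** 2)

# Loosened tolerances so termination isn't blocked by simulation noise.
popt = optimize.fmin(sse, x0=[1.0, 0.0], xtol=1e-3, ftol=1e-3, disp=0)
```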
>>> SciPy-User mailing list