[SciPy-User] ODR fitting several equations to the same parameters
Thu Nov 12 09:45:34 CST 2009
On 11/12/2009 05:35 AM, ms wrote:
> Hi Bruce,
> Thanks for your reply but there are several things I don't really grasp:
> Bruce Southey ha scritto:
>> On 11/11/2009 10:26 AM, ms wrote:
>>> Let's start with a simple example. Imagine I have several linear data
>>> sets y=ax+b which have different b (all of them are known) but that
>>> should fit to the same (unknown) a. To have my best estimate of a, I
>>> would want to fit them all together. In this case it is trivial, you
>>> just subtract the known b from the data set and fit them all at the same time.
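The pooled fit described above can be sketched in a few lines of NumPy (not from the thread; the data and intercept values are invented for illustration). Each dataset's known b is subtracted, the data are stacked, and a single shared slope is fit through the origin:

```python
import numpy as np

rng = np.random.default_rng(0)
a_true = 2.0
b_known = [1.0, -0.5, 3.0]          # known intercepts, one per dataset

x_all, y_all = [], []
for b in b_known:
    x = np.linspace(0, 10, 20)
    y = a_true * x + b + rng.normal(scale=0.1, size=x.size)
    x_all.append(x)
    y_all.append(y - b)             # remove the known intercept
x_all = np.concatenate(x_all)
y_all = np.concatenate(y_all)

# Least-squares slope through the origin: a = sum(x*y) / sum(x*x)
a_hat = x_all @ y_all / (x_all @ x_all)
print(a_hat)
```

Because all datasets share the same a, pooling them this way gives a smaller variance on the slope estimate than fitting any one dataset alone.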
>> Although b is known without error, you still potentially have effects due
>> to each data set.
>> What I would do is fit:
>> y= mu + dataset + a*x + dataset*a*x
>> Where mu is some overall mean,
> Mean of what? The b's?
Depending on how you define the terms, y = a*x + b can be viewed as a simple
linear regression, in which b is the intercept and a is the slope. Under a
different view (typically general linear modeling), b can be a factor or
class variable, where 'b' can have multiple levels. With the model
above, this is analysis of covariance. You can get your estimate of 'b'
for each data set as mu plus the appropriate solution of dataset. (While
you can parameterize the model as y= dataset + ..., it is not as easy to
interpret as the one using mu.)
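The common-slope part of the ANCOVA model above can be sketched with ordinary least squares and dummy coding (not from the thread; the data are invented, and the first dataset's dummy is dropped to keep the design matrix full rank):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sets, n = 3, 30
x = np.tile(np.linspace(0, 5, n), n_sets)
group = np.repeat(np.arange(n_sets), n)          # dataset labels 0, 1, 2
b_true = np.array([1.0, -0.5, 3.0])
y = 2.0 * x + b_true[group] + rng.normal(scale=0.1, size=x.size)

# Design matrix: overall mean mu, dataset effects (first level dropped
# to avoid a singular matrix), and one common slope a.
X = np.column_stack([
    np.ones_like(x),                             # mu
    (group == 1).astype(float),                  # dataset 1 offset
    (group == 2).astype(float),                  # dataset 2 offset
    x,                                           # shared slope a
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
mu, d1, d2, a_hat = coef
# The intercept for dataset i is mu plus its dataset effect; the
# differences between dataset effects should match the differences
# between the known b's.
print(a_hat, d1, d2)
```

Here d1 and d2 estimate b_1 - b_0 and b_2 - b_0, which is the sense in which the solutions of dataset should equate to the differences between the known b's.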
The reason for using this type of model is that you can quantify the
variation between the data sets.
>> dataset is the effect of the ith dataset - allows different intercepts
>> for each data set
>> dataset*a is the interaction between a and the dataset - allows
>> different slopes for each dataset.
> I don't really understand what quantities you mean by "effect" and
> "interaction", and why I should want to allow different slopes for each
> dataset; the aim is to fit one and only one slope from all datasets.
The reason is that you can test whether the slopes are the same and see if
any data sets appear unusual. If the slopes are the same then you are
back to what you wanted to know. Otherwise, you need to address why one
or more data sets are different from the others.
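One standard way to run that test (my sketch, not from the thread; data are invented) is to compare the full model with a separate slope per dataset against the reduced common-slope model with an F-test on the residual sums of squares:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sets, n = 3, 30
x = np.tile(np.linspace(0, 5, n), n_sets)
group = np.repeat(np.arange(n_sets), n)
y = 2.0 * x + np.array([1.0, -0.5, 3.0])[group] \
    + rng.normal(scale=0.1, size=x.size)

def rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ coef) ** 2)

dummies = np.column_stack([(group == g).astype(float)
                           for g in range(n_sets)])
X_common = np.column_stack([dummies, x])                   # one shared slope
X_full = np.column_stack([dummies, dummies * x[:, None]])  # slope per dataset

rss_c, rss_f = rss(X_common, y), rss(X_full, y)
df1 = n_sets - 1                  # extra slope parameters in the full model
df2 = x.size - X_full.shape[1]    # residual degrees of freedom
F = ((rss_c - rss_f) / df1) / (rss_f / df2)
p = stats.f.sf(F, df1, df2)
print(F, p)
```

A large p-value means the interaction is consistent with zero, i.e. the data do not contradict a common slope, and you are back to the pooled fit you wanted.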
>> Obviously you first test that interaction is zero. In theory, the
>> difference between the solutions of dataset should equate to the
>> differences between the known b's.
> ...same as above...
>> Now you just expand your linear model to a nonlinear one. The formulation
>> depends on your equation. But really you just replace f(a*x) with
>> So I would first try a linear model before a nonlinear one. Also I would
>> see if I could linearize the nonlinear function.
> Well, the function is definitely nonlinear (it has a sigmoidal shape).
> Linearizing it is a good idea, but I am doubtful it is doable.
Again it depends on the function because some of these do have
linearized forms or can be well approximated by a linear model.
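For instance, if the sigmoid happens to be a logistic function y = 1/(1 + exp(-(a*x + b))), the logit transform log(y/(1-y)) = a*x + b turns it into a straight line. A sketch (my assumption about the functional form, not the poster's actual equation; noiseless made-up data, and it only works while y stays strictly between 0 and 1):

```python
import numpy as np

a_true, b_true = 1.5, -2.0
x = np.linspace(-4, 4, 50)
y = 1.0 / (1.0 + np.exp(-(a_true * x + b_true)))   # noiseless logistic

z = np.log(y / (1.0 - y))          # logit: now z = a*x + b exactly
a_hat, b_hat = np.polyfit(x, z, 1) # ordinary straight-line fit
print(a_hat, b_hat)
```

With noisy data the transform distorts the error structure (errors near y = 0 or y = 1 get inflated), so the linearized fit is best used as a starting guess for a proper nonlinear or ODR fit.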