[SciPy-User] scipy.interpolate.rbf sensitive to input noise ?

josef.pktd@gmai... josef.pktd@gmai...
Wed Mar 3 15:31:26 CST 2010

On Wed, Mar 3, 2010 at 4:15 PM, Robert Kern <robert.kern@gmail.com> wrote:
> On Tue, Feb 23, 2010 at 11:59,  <josef.pktd@gmail.com> wrote:
>> On Tue, Feb 23, 2010 at 12:57 PM,  <josef.pktd@gmail.com> wrote:
>>> On Tue, Feb 23, 2010 at 12:46 PM, Robert Kern <robert.kern@gmail.com> wrote:
>>>> On Tue, Feb 23, 2010 at 11:43, denis <denis-bz-gg@t-online.de> wrote:
>>>>> Robert, Josef,
>>>>>  thanks much for taking the time to look at RBF some more.
>>>>> Summary, correct me:
>>>>>    A - smooth*I in rbf.py is a sign error (ticket ?)
>>>> Not necessarily. It seems to work well in at least some cases. Find a
>>>> reference that says otherwise if you want it changed.
>>> chapter 2 page 16, for gaussian process. As I said I don't know about
>>> the other methods
>> http://docs.google.com/viewer?a=v&q=cache:qs8AaAxO6nkJ:www.gaussianprocess.org/gpml/chapters/RW2.pdf+gaussian+process+noise+Ridge&hl=en&gl=ca&pid=bl&srcid=ADGEESj4j8osT6cOIc65r3OaeAtQO_dzgZD4YxSAEkFTeRZajBcROJpJJ9zTlMSrD2OaK1iOJYgy8QqH_Nr0rNxf41faNihCdIzWyVOYxtCFIR7H8mdQZAKFoeaRkFamQlCKhp_s1FOI&sig=AHIEtbQK35MLfnZAySw3lF-dR_mNcSaP3w
>> google links are very short, missed a part
> I've found a couple of things on RBFs specifically that agree.
> However, the original source on which our implementation is based does
> subtract. He may have a source that uses a negative sign.
> http://www.mathworks.co.uk/matlabcentral/fileexchange/10056-scattered-data-interpolation-and-approximation-using-radial-base-functions
> Playing around, it seems to me that some of the radial functions
> create smoother approximations with large positive values (with the
> current implementation) while others create smoother approximations
> with large negative values. It's possible that it's just a convention
> as to which sign you use.

From the examples I checked, it looks like all kernels other than the
gaussian have both large positive and large negative eigenvalues, so
adding or subtracting the penalty doesn't change whether the matrix is
definite -- it is indefinite either way.
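This is easy to check numerically. A minimal sketch for the multiquadric
kernel (the point locations and epsilon below are arbitrary choices for
illustration, not anything taken from scipy's Rbf):

```python
import numpy as np

# Arbitrary 1-D sample points (illustration only).
x = np.linspace(0.0, 1.0, 20)
r = np.abs(x[:, None] - x[None, :])   # pairwise distance matrix
eps = 1.0                             # made-up shape parameter

# Multiquadric kernel matrix: sqrt((r/eps)**2 + 1)
A = np.sqrt((r / eps) ** 2 + 1.0)

# A is symmetric, so use eigvalsh. The spectrum contains both large
# positive and large negative values, so A +/- smooth*I stays
# indefinite for any small smooth.
w = np.linalg.eigvalsh(A)
print(w.min() < 0 < w.max())
```

The same check applied to the other increasing kernels (inverse
multiquadric, thin plate, etc.) shows the same mixed-sign spectrum.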

The gaussian case is different: with smooth=0 the smallest eigenvalues
are zero and all others are positive. Only by *adding* the
penalization does the matrix become positive definite. (I've never
seen a case of Ridge-regression-type penalization where eigenvalues
are made negative in order to make the matrix invertible.)

So the sign might be a matter of convention for kernels that increase
with distance, but not for the gaussian kernel, which decreases.

If it is just a convention, then we could flip the sign so that it is
correct for the gaussian case, and the other cases would not be
strongly affected.

I will look at the reference. I haven't found many references that
treat general RBFs directly.


> --
> Robert Kern
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>  -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
