# [SciPy-User] qr decomposition gives negative q, r ?

Virgil Stokes vs@it.uu...
Tue Nov 20 18:36:08 CST 2012

```On 2012-11-21 00:29, Robert Kern wrote:
> On Tue, Nov 20, 2012 at 11:21 PM, Virgil Stokes <vs@it.uu.se> wrote:
>> On 2012-11-20 23:59, Robert Kern wrote:
>>> On Tue, Nov 20, 2012 at 10:49 PM, Virgil Stokes <vs@it.uu.se> wrote:
>>>
>>>> Ok Skipper,
>>>> Unfortunately, things are worse than I had hoped, numpy sometimes
>>>> returns the negative of the q,r and other times the same as MATLAB!
>>>> Thus, as someone has already mentioned in this discussion, the "sign"
>>>> seems to depend on the matrix being decomposed. This could be a
>>>> nightmare to track down.
>>>>
>>>> I hope that I can return to some older versions of numpy/scipy to work
>>>> around this problem until this problem is fixed. Any suggestions on how
>>>> to recover earlier versions would be appreciated.
>>> That's not going to help you. The only thing that we guarantee (or
>>> have *ever* guaranteed) is that the result is a valid QR
>>> decomposition. If you need to swap signs to normalize things to your
>>> desired convention, you will need to do that as a postprocessing step.
>> But why do I need to normalize with numpy (at least with latest
>> release); but not with MATLAB.
> Because MATLAB decided to do the normalization step for you. That's a
> valid decision. And so is ours.
>
>> A simple question for you.
>>
>> In my application MATLAB generates a sequence of QR factorizations for
>> covariance matrices in which R is always PD --- which is correct!
> That is not part of the definition of a QR decomposition. Failing to
> meet that property does not make the QR decomposition incorrect.
>
> The only thing that is incorrect is passing an arbitrary, but valid,
> QR decomposition to something that is expecting a strict *subset* of
> valid QR decompositions.
Sorry but I do not understand this...
Let me give you an example that I believe illustrates the problem in numpy

I have the following matrix, A:

array([[  7.07106781e+02,   5.49702852e-04,   1.66675481e-19],
       [ -3.53553391e+01,   7.07104659e+01,   1.66675481e-19],
       [  0.00000000e+00,  -3.97555166e+00,   7.07106781e-03],
       [ -7.07106781e+02,  -6.48214647e-04,   1.66675481e-19],
       [  3.53553391e+01,  -7.07104226e+01,   1.66675481e-19],
       [  0.00000000e+00,   3.97560687e+00,  -7.07106781e-03],
       [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00],
       [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00],
       [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00]])

Note that this is clearly not a covariance matrix itself, but it does
contain a (3x3) covariance matrix. I refer you to the paper for how this
matrix was generated.

Using np.linalg.qr(A) I get the following for R (3x3), which is the
"square root" of the covariance matrix:

array([[ -1.00124922e+03,   4.99289918e+00,   0.00000000e+00],
       [  0.00000000e+00,  -1.00033071e+02,   5.62045938e-04],
       [  0.00000000e+00,   0.00000000e+00,  -9.98419272e-03]])

which is clearly not PD, since its three eigenvalues (the diagonal
elements) are all negative.

Now, if I use qr(A,0) in MATLAB:

I get the following for R (3x3)

 1001.24922,     -4.99290,      0.00000
    0.00000,    100.03307,     -0.00056
   -0.00000,      0.00000,      0.00998

This is obviously PD, as it should be, and gives the correct results.
Note that it is the negative of the R obtained with numpy.

I can provide other examples in which both R's are the same and both
lead to correct results. That is, whenever the R's differ, the R
obtained with MATLAB is always PD and always gives the correct end
result, while the R from numpy is not PD and does not give the correct
end result.

I hope this helps you to understand my problem better. If you need more
details, please let me know.
```
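
The post-processing step Robert Kern describes can be sketched as follows (this is an editorial addition, not part of the original thread): after `np.linalg.qr`, flip the sign of each row of R whose diagonal entry is negative, together with the matching column of Q. Since the sign matrix D satisfies D @ D = I, the product (Q @ D) @ (D @ R) = Q @ R = A is unchanged, and diag(R) becomes nonnegative, matching MATLAB's convention on this example.

```python
import numpy as np

# The 9x3 matrix A from the message above.
A = np.array([
    [ 7.07106781e+02,  5.49702852e-04,  1.66675481e-19],
    [-3.53553391e+01,  7.07104659e+01,  1.66675481e-19],
    [ 0.00000000e+00, -3.97555166e+00,  7.07106781e-03],
    [-7.07106781e+02, -6.48214647e-04,  1.66675481e-19],
    [ 3.53553391e+01, -7.07104226e+01,  1.66675481e-19],
    [ 0.00000000e+00,  3.97560687e+00, -7.07106781e-03],
    [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00],
    [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00],
    [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00],
])

Q, R = np.linalg.qr(A)  # reduced QR: Q is 9x3, R is 3x3

# Normalize the sign convention: flip each row of R with a negative
# diagonal entry, and the matching column of Q, so diag(R) >= 0.
signs = np.sign(np.diag(R))
signs[signs == 0] = 1.0   # leave zero diagonal entries alone
Q = Q * signs             # scale the columns of Q
R = signs[:, None] * R    # scale the rows of R

# Q @ R still reconstructs A, and R now has a nonnegative diagonal.
assert np.allclose(Q @ R, A)
assert (np.diag(R) >= 0).all()
```

Either convention yields a valid QR decomposition; this normalization only matters when downstream code (such as a square-root filter expecting a PD factor) assumes a positive diagonal.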