[SciPy-User] understanding machine precision
Tue Dec 14 13:07:39 CST 2010
On Tue, Dec 14, 2010 at 12:57, <email@example.com> wrote:
> On Tue, Dec 14, 2010 at 1:47 PM, Robert Kern <firstname.lastname@example.org> wrote:
>> On Tue, Dec 14, 2010 at 12:42, Keith Goodman <email@example.com> wrote:
>>> On Tue, Dec 14, 2010 at 9:42 AM, <firstname.lastname@example.org> wrote:
>>>> I thought that we get deterministic results, with identical machine
>>>> precision errors, but I get (with some random a0, b0)
>>>> >>> for i in range(5):
>>>> ...     x = scipy.linalg.lstsq(a0, b0)[0]
>>>> ...     x2 = scipy.linalg.lstsq(a0, b0)[0]
>>>> ...     print np.max(np.abs(x - x2))
>>> I've started a couple of threads in the past on repeatability. Most of
>>> the discussion ends up being about ATLAS. I suggest repeating the test
>>> without ATLAS.
> Is there a way to turn ATLAS off without recompiling?
>> On OS X with numpy linked against the builtin Accelerate.framework
>> (which is based on ATLAS), I get the same result every time.
> When I run the script on the command line (with a new python each
> time), I get the same results each time, but within the loop the
> results still differ by up to 1.55431223448e-015. In IDLE, when I
> remain in the same session, the results differ with each run.
I mean that I get "0.0" for each iteration of each loop even if I push
the number of iterations up to 500 or so.
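For anyone wanting to reproduce the test discussed above, the loop can be written out as a self-contained script. The random `a0` and `b0` here are stand-ins (the original poster's arrays are not shown), and the seed is fixed only so the script itself is repeatable across runs:

```python
import numpy as np
from scipy import linalg

# Stand-in data: the original a0, b0 were "some random" arrays.
rng = np.random.RandomState(0)
a0 = rng.randn(100, 10)
b0 = rng.randn(100)

# lstsq returns (solution, residues, rank, singular values);
# compare only the solution vector.
ref = linalg.lstsq(a0, b0)[0]

diffs = []
for _ in range(5):
    x = linalg.lstsq(a0, b0)[0]
    diffs.append(np.max(np.abs(x - ref)))

# With a deterministic BLAS/LAPACK every entry should be exactly 0.0;
# a threaded ATLAS may instead show differences on the order of
# machine epsilon, as reported in this thread.
print(diffs)
```

Whether the printed differences are exactly zero or merely tiny is what distinguishes a deterministic BLAS build from a threaded one.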
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco