[Numpy-discussion] setting decimal accuracy in array operations (scikits.timeseries)
Wed Mar 3 16:29:49 CST 2010
On Wed, Mar 3, 2010 at 16:23, Marco Tuckner wrote:
> Thanks to all who answered.
> This is really helpful!
>>> If you are still seeing actual calculation differences, we will
>>> need to see a complete, self-contained example that demonstrates
>>> the difference.
>> To add a bit more detail -- unless you are explicitly specifying
>> single-precision floats (dtype=float32), both numpy and Excel
>> are using doubles -- so that's not the source of the differences.
>> Even if you are using single precision in numpy, it's pretty rare for
>> that to make a significant difference. Something else is going on.
>> I suspect a different algorithm is at work: you can tell
>> timeseries.convert how you want it to interpolate -- who knows what
>> Excel is doing.
> I checked the values row by row, comparing Excel against the Python results.
> The values of both programs match perfectly at the data points where
> no repeating decimal occurs:
> where the aggregated value is a terminating decimal (e.g. 12.04),
> the results were the same.
> At points where the result was a repeating decimal (e.g.
> 12.222222...) the described difference could be observed.
I think you are just seeing the effect of the different printing that
I described. These are not differences in the actual values.
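That display effect is easy to reproduce: the same stored double can look different depending on how many digits the formatter emits. A small sketch with the repeating decimal 110/9:

```python
import numpy as np

x = 110.0 / 9.0             # the repeating decimal 12.2222...

# Different formatters show different digit counts of the *same* double:
print(np.array([x]))        # numpy array display truncates for readability
print('%.6f' % x)           # a fixed 6-decimal display, Excel-style
s = '%.17g' % x             # 17 significant digits round-trip any double
print(s)

# The shortened displays lose digits, but the stored value is intact:
assert float(s) == x
```

So two printouts of the same double can disagree in their trailing digits without any difference in the underlying value; comparing the values programmatically (e.g. with a tolerance) is the reliable test.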
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco