[SciPy-User] Single precision FFT insufficiently accurate.
Mon Jun 28 09:48:42 CDT 2010
On Mon, Jun 28, 2010 at 4:24 PM, Sturla Molden <email@example.com> wrote:
> Den 28.06.2010 13:21, skrev Sebastian Haase:
>> What size of error are we talking about anyway .. ?
>> Personally I would leave it in,
> Leave in FFT code that produces 5% relative error?
> Single-precision is not the solution to memory issues anyway. Get a 64
> bit system and buy more RAM. Buying RAM is far cheaper than even
> re-coding for single precision, if you value the time spent coding, not
> to mention that the result is far more accurate.
> Single-precision used to be faster than double precision some 30 years
> ago. And on 8 bit and 16 bit computers, memory did matter more. For
> example on a 16 bit CPU with power-of-2 FFT, the largest FFT size would
> be just 2048 in double precision. With single precision you could get
> 4096 ... Oorah! Today we rarely see those issues. Python does not even
> support single precision.
That's why "numerical"(!) Python is so great ;-)
I'm working with image (sequence) data where the raw data (2-byte
unsigned int) often approaches 1 GB.
To open (memmap) those I have long since learned to like 64-bit Linux
--- it's really great.
Just wanted to remind you that data really can get large, such that "just
buy more memory" also reaches its limits.
- Sebastian (long time advocate of single precision -- check the
archives .... ;-) )