[Numpy-discussion] Re: Vote: complex64 vs complex128
jh at oobleck.astro.cornell.edu
Tue Apr 4 11:14:04 CDT 2006
When I first heard of Complex128, my first response was, "Cool! I
didn't even know there was a Double128!"
Folks seem to agree that precision-based naming would be most
intuitive to new users, but that length-based naming would be most
intuitive to low-level programmers. This is a high-level package,
whose purpose is to hide the numerical details and programming
drudgery from the user as much as possible, while still offering high
performance and not limiting capability too much. For this type of
package, a good metric is "when it doesn't restrict capability, do
what makes sense for new/naive users".
So, I favor Complex32 and Complex64. When you say "complex", everyone
knows you mean 2 numbers. When you say 32 or 64 or 128, in the
context of bits for floating values, almost everyone assumes you are
talking about that many bits of precision representing one number.
Consider future conversations about precision and data size. In
precision discussions, you'd always have to clarify that complex128 had
64 bits of precision, just to make sure everyone was on the same page
(particularly when 128-bit machines arrive). In data-size
discussions, everyone would know to double the size for the two
components. No extra clarification would be needed.
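The precision-vs-storage distinction above can be sketched with the
standard library alone (no NumPy needed): each component of a complex
value is one IEEE-754 double, so under length-based naming a
"complex128" carries only 64 bits of precision per component.

```python
import struct

# One IEEE-754 double ("d") occupies 8 bytes = 64 bits.
# Those 64 bits are the precision available for ONE real number.
double_bytes = struct.calcsize("d")
print(double_bytes)        # 8 bytes per component

# A complex value stores two such doubles (real + imaginary),
# so its total storage is 128 bits even though each component
# has only 64 bits of precision.
complex_bytes = 2 * double_bytes
print(complex_bytes)       # 16 bytes total
```

This is exactly the ambiguity argued about here: "128" describes the
total storage of the pair, while "64" describes the precision of each
component.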
IDL's behavior is irrelevant to us: it just says "complex" and
"dcomplex" for 32-bit and 64-bit precision, respectively.