[Numpy-discussion] Assigning complex values to a real array
Wed Dec 9 09:20:50 CST 2009
On Wed, Dec 9, 2009 at 9:46 AM, Ryan May <email@example.com> wrote:
> On Wed, Dec 9, 2009 at 3:51 AM, David Warde-Farley <firstname.lastname@example.org> wrote:
>> On 9-Dec-09, at 1:26 AM, Dr. Phillip M. Feldman wrote:
>>> Unfortunately, NumPy seems to be a sort of step-child of Python,
>>> but not fully accepted. There are a number of people who continue to
>>> use Matlab,
>>> despite all of its deficiencies, because it can at least be counted
>>> on to
>>> produce correct answers most of the time.
>> Except that you could never fully verify that it produces correct
>> results, even if that was your desire.
>> There are legitimate reasons for wanting to use Matlab (e.g.
>> familiarity, because collaborators do, and for certain things it's
>> still faster than the alternatives) but correctness of results isn't
>> one of them. That said, people routinely let price tags influence
>> their perceptions of worth.
> While I'm not going to argue in favor of Matlab, and think its
> benefits are being overstated, let's call a spade a spade. Silent
> downcasting of complex types to float is a *wart*. It's not sensible
> behavior, it's an implementation detail that smacks new users in the
> face. It's completely insensible to consider converting from complex
> to float in the same vein as a simple loss of precision from 64-bit to
> 32-bit. The following doesn't work:
> a = np.array(['bob', 'sarah'])
> b = np.arange(2.)
> b[:] = a
> ValueError Traceback (most recent call last)
> /home/rmay/<ipython console> in <module>()
> ValueError: invalid literal for float(): bob
> Why doesn't that silently downcast the strings to 0.0 or something
> silly? Because that would be *stupid*. So why doesn't trying to
> stuff 3+4j into the array produce the same error? 3+4j is
> definitely not a float value either.
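For illustration, here is a sketch of the two behaviors side by side
(exact behavior varies by NumPy version; newer releases at least emit a
ComplexWarning when the imaginary part is discarded):

```python
import warnings
import numpy as np

b = np.arange(2.)

# Assigning strings into a float array fails loudly.
try:
    b[:] = np.array(['bob', 'sarah'])
except ValueError:
    string_assignment_failed = True

# Assigning complex values into the same array succeeds, silently
# dropping the imaginary parts (newer NumPy versions warn here).
c = np.array([3 + 4j, 1 + 1j])
with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # suppress the ComplexWarning where emitted
    b[:] = c
# b now holds only the real parts: [3.0, 1.0]
```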
Real numbers are a special case of complex, so I think the
integer/float analogy is better.
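In that vein, float-to-integer assignment shows the same kind of silent
downcast -- a minimal sketch:

```python
import numpy as np

i = np.zeros(2, dtype=np.int64)
# Fractional parts are silently truncated toward zero, just as the
# imaginary part is dropped in the complex -> float case.
i[:] = np.array([3.7, -1.2])
# i is now [3, -1]
```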
Numpy requires quite a bit more learning than programs like Matlab and
Gauss, which have a more rigid type structure. And numpy has quite a
few "is this a bug or a feature" issues.
numpy downcasting looks pretty consistent (for the most part), and it
is just one more thing to keep in mind, like integer division and
integer overflow.
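Integer division is a similar keep-in-mind item -- a small sketch
(modern Python 3 / NumPy semantics):

```python
import numpy as np

# Python and NumPy integer division both floor the result.
assert 7 // 2 == 3
assert (-7) // 2 == -4          # floors toward negative infinity

a = np.arange(5)                # integer dtype
floored = (a // 2).tolist()     # [0, 0, 1, 1, 2]
true_div = (a / 2).dtype        # true division upcasts to float64
```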
Instead of requiring numpy to emit hundreds of warnings, I think it's
better to properly unit test the code. For example, inspection and a
test case showed pretty quickly that the way I tried to use
scipy.integrate.quad with complex numbers didn't return the correct
complex answer but only the correct real part.
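That quad pitfall can be worked around by integrating the real and
imaginary parts separately; a sketch, with `complex_quad` being a
hypothetical helper name, not a scipy function:

```python
import numpy as np
from scipy.integrate import quad

def complex_quad(func, a, b):
    """Integrate a complex-valued function over [a, b] by splitting it
    into real and imaginary parts, since quad handles only real
    integrands and would otherwise keep just the real part."""
    real_part, _ = quad(lambda x: np.real(func(x)), a, b)
    imag_part, _ = quad(lambda x: np.imag(func(x)), a, b)
    return real_part + 1j * imag_part

# Integral of exp(i*x) from 0 to pi is 2i.
result = complex_quad(lambda x: np.exp(1j * x), 0.0, np.pi)
```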
Compared to some questionable behavior with views and rearranging the
axes with fancy indexing, I think the casting problem is easy to keep
in mind.
Maybe we should start to collect these warts for a numpy 3000.
> Ryan May
> Graduate Research Assistant
> School of Meteorology
> University of Oklahoma