[Numpy-discussion] Segfault with dodgy type descriptors
oliphant.travis at ieee.org
Mon Jan 30 01:04:03 CST 2006
Ed Schofield wrote:
>Here's a segfault from the crazy input department:
>>>>cs = [('A', 1), ('B', 2)]
>>>>a = numpy.array(cs, dtype=(object,5))
>Using the descriptor dtype=(object,x) for other values of x gives
>varying results, but all of them should probably raise a ValueError.
These data-types are technically o.k. because (object, 5) means a
data-type which is an array of 5 objects.
The problem was that the array constructor was not creating 5 objects, so
an error should have been raised. It is now.
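To illustrate the point above, here is a short sketch of what a (base, shape) tuple means as a data-type in current NumPy (the shapes shown are the modern semantics; nothing here is specific to the 2006 code under discussion):

```python
import numpy as np

# (object, 5) is a sub-array data-type: each array element is itself
# an array of 5 Python objects.
dt = np.dtype((object, 5))
assert dt.shape == (5,)
assert dt.base == np.dtype(object)

# Allocating with such a dtype folds the sub-array shape into the
# array's shape: a length-2 array of 5-object items is really (2, 5).
a = np.empty(2, dtype=dt)
assert a.shape == (2, 5)
assert a.dtype == np.dtype(object)
```

So input like [('A', 1), ('B', 2)] supplies only 2 objects per item where the dtype demands 5, which is why the constructor should reject it.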
>On a related note, it seems that it's now possible again to use an array
>as a dtype. You changed this behaviour in November after I suggested
>that this should be illegal. (See the earlier thread.) My
>argument then was that calling array(a, b.dtype) is clearer and safer
>than using array(a, b), which can be confused with array((a,b)). Is it
>an oversight that arrays can again be used as dtypes?
I think this is an oversight, but I'm not exactly sure how to fix it.
Basically during design of the new data-types, I allowed the
possibility that any object with a dtype attribute could be considered a
data-type. I wasn't thinking about arrays (which of course have that
attribute as well), I was thinking about record objects with nested
records and making it relatively easy to specify record data-types using
objects with a dtype attribute.
But I could be persuaded (again) that it is easy enough to request the
data-type attribute explicitly should you ever actually need it, rather
than have it specially coded in the descriptor converter. In fact, I
don't see any real benefit to it at all. My thinking about record
objects changed during development, and this could just be a holdover.
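For reference, the explicit spelling argued for above can be sketched as follows (ordinary NumPy usage, not tied to any particular version):

```python
import numpy as np

a = np.arange(3)                    # integer array
b = np.zeros(3, dtype=np.float32)   # float32 array

# Explicit and unambiguous: request b's data-type via its attribute.
c = np.array(a, dtype=b.dtype)
assert c.dtype == np.float32

# By contrast, array((a, b)) stacks the two arrays into one 2x3 array,
# which is why passing b itself where a dtype is expected is confusing.
d = np.array((a, b))
assert d.shape == (2, 3)
```

The one extra attribute access costs nothing and removes the ambiguity entirely.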