[Numpy-discussion] Bitwise operations and unsigned types
Fri Apr 6 09:01:57 CDT 2012
Good morning all-- didn't realize this would generate quite such a buzz.
To answer a direct question, I'm using the github master. A few thoughts (from a fairly heavy numpy user for numerical simulations and analysis):
The current behavior is confusing and (as far as I can tell) undocumented.
Scalars act up only if they are big:
In : np.uint32(1) & 1
In : np.uint64(1) & 1
TypeError Traceback (most recent call last)
/Users/claumann/<ipython-input-153-191a0b5fe216> in <module>()
----> 1 np.uint64(1) & 1
TypeError: ufunc 'bitwise_and' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
But arrays don't seem to mind:
In : ones(3, dtype=np.uint32) & 1
Out: array([1, 1, 1], dtype=uint32)
In : ones(3, dtype=np.uint64) & 1
Out: array([1, 1, 1], dtype=uint64)
As you mentioned, explicitly casting 1 to np.uint makes the above scalar case work, but I don't understand why this is unnecessary for the arrays. I could understand a general argument that type casting rules should always be the same independent of the underlying ufunc, but I'm not sure if that is sufficiently smart. Bitwise ops probably really ought to treat nonnegative python integers as unsigned.
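For reference, here is a small sketch of the workaround and the scalar/array asymmetry described above (behavior as observed on a recent NumPy; the exact casting outcome may differ between versions):

```python
import numpy as np

# Explicitly casting the Python int to an unsigned scalar sidesteps
# the 'safe' casting error in the scalar case:
result = np.uint64(1) & np.uint64(1)

# Arrays, by contrast, accept a plain Python int operand and keep
# their unsigned dtype:
arr = np.ones(3, dtype=np.uint64) & 1
```

The explicit-cast version is the portable spelling; it avoids relying on whichever scalar promotion rule the installed NumPy happens to use.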
>> I disagree, promoting to object kind of destroys the whole idea of bitwise operations. I think we *fixed* a bug.
> That is an interesting point of view. I could see that point of view. But, was this discussed as a bug prior to this change occurring?
I'm not sure what 'promoting to object' constitutes in the new numpy, but just a small thought. I can think of two reasons to go to the trouble of using bitfields over more pythonic (higher level) representations: speed/memory overhead and interfacing with external hardware/software. For me, it's mostly the former -- I've already implemented this program once using a much more pythonic approach but it just has too much memory overhead to scale to where I want it. If a coder goes to the trouble of using bitfields, there's probably a good reason they wanted a lower level representation in which bitfield ops happen in parallel as integer operations.
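To make the memory argument concrete, here is a rough sketch (sizes are illustrative and platform-dependent) comparing a packed bitfield array against a more pythonic list-of-booleans representation:

```python
import sys
import numpy as np

n = 64 * 1000  # 64,000 bits of state

# Pythonic representation: one object reference per bit.
as_list = [False] * n
list_bytes = sys.getsizeof(as_list)  # just the pointer array, ~8 bytes per bit

# Packed representation: 64 bits per uint64 word.
as_bits = np.zeros(n // 64, dtype=np.uint64)
packed_bytes = as_bits.nbytes  # exactly n / 8 bytes

# Bitwise ops then process 64 bits per integer operation:
mask = np.uint64(0xFFFF)
low_halves = as_bits & mask
```

That is roughly a 64x difference before even counting the bool objects themselves, which is the kind of overhead that blocks scaling up a pythonic implementation.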
But, what do you mean that bitwise operations are destroyed by promotion to objects?
On Apr 6, 2012, at 5:57 AM, Nathaniel Smith wrote:
> On Fri, Apr 6, 2012 at 7:19 AM, Travis Oliphant <firstname.lastname@example.org> wrote:
>> That is an interesting point of view. I could see that point of view.
>> But, was this discussed as a bug prior to this change occurring?
>> I just heard from a very heavy user of NumPy that they are nervous about
>> upgrading because of little changes like this one. I don't know if this
>> particular issue would affect them or not, but I will re-iterate my view
>> that we should be very careful of these kinds of changes.
> I agree -- these changes make me very nervous as well, especially
> since I haven't seen any short, simple description of what changed or
> what the rules actually are now (comparable to the old "scalars do not
> affect the type of arrays").
> But, I also want to speak up in favor in one respect, since real world
> data points are always good. I had some code that did
> def do_something(a):
>     a = np.asarray(a)
>     a -= np.mean(a)
> If someone happens to pass in an integer array, then this is totally
> broken -- np.mean(a) may be non-integral, and in 1.6, numpy silently
> discards the fractional part and performs the subtraction anyway:
> In : a
> Out: array([0, 1, 2, 3])
> In : a -= 1.5
> In : a
> Out: array([-1, 0, 0, 1])
> The bug was discovered when Skipper tried running my code against
> numpy master, and it errored out on the -=. So Mark's changes did
> catch one real bug that would have silently caused completely wrong
> numerical results!
> - Nathaniel
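
A minimal illustration of the casting check behind the error Nathaniel describes (`np.can_cast` is real NumPy API; the default casting rule for in-place ufuncs may vary by version):

```python
import numpy as np

# A float result cannot be written back into an integer array under
# 'same_kind' casting, which is what in-place ops enforce on master:
assert not np.can_cast(np.float64, np.int64, casting='same_kind')

# The explicit fix is to compute in floating point from the start,
# so no fractional part is silently discarded:
a = np.asarray([0, 1, 2, 3], dtype=float)
a -= np.mean(a)
```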