[Numpy-discussion] Numpy and PEP 343

David M. Cooke cookedm at physics.mcmaster.ca
Fri Mar 3 12:59:03 CST 2006


Tim Hochberg <tim.hochberg at cox.net> writes:

> That makes sense. One thought I had with respect to the various numpy
> functions (sin, cos, pow, etc) was to just have the bytecodes:
>
> call_unary_function, function_id, store_in, source
> call_binary_function, function_id, store_in, source1, source2
> call_trinary_function, function_id, store_in, source1, source2, source3
>
> Then just store pointers to the functions in relevant tables. In its
> most straightforward form, you'd need six-character chunks of bytecode
> instead of four.  However, if that turns out to slow everything else
> down, I think it could be packed down to four again. The function_ids
> could probably be packed into the opcode (as long as we stay below 200
> or so functions, which is probably safe); the other way to pack things
> down is to require that one of the sources for trinary functions is
> always a certain register (say register 0). That requires a bit more
> cleverness at the compiler level, but is probably feasible.

That's along the lines I'm thinking of. It seems to me that if
evaluating the function requires a function call (and not an inlined
machine instruction like the basic ops), then we may as well dispatch
like this (plus, it's easier :). This could also allow for user
extensions. Binary and trinary functions (how many of those do we
have??) could maybe be handled by storing the extra arguments in a
separate array.

I'm going to look at adding more smarts to the compiler, too. Got a
couple of books on the subject :-)

Different data types could be handled by separate input arrays, and a
conversion opcode ('int2float', say).
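
A similarly hand-wavy sketch of the conversion idea: keep integer and
float values in separate register banks, with 'int2float' as just
another opcode that copies across banks (again, the names and layout
are only illustrative):

    import numpy as np

    INT2FLOAT, FLOAT_SIN = 0, 1

    def run(program, int_regs, float_regs):
        # (opcode, store_in, source); int2float reads from the integer bank
        # and writes into the float bank, everything after that stays float.
        for op, dest, src in program:
            if op == INT2FLOAT:
                float_regs[dest] = int_regs[src].astype(np.float64)
            elif op == FLOAT_SIN:
                float_regs[dest] = np.sin(float_regs[src])
        return float_regs

    int_regs = [np.arange(4)]          # i0 = [0, 1, 2, 3]
    float_regs = [None, None]
    program = [(INT2FLOAT, 0, 0),      # f0 = float(i0)
               (FLOAT_SIN, 1, 0)]      # f1 = sin(f0)
    print(run(program, int_regs, float_regs)[1])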

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke                      http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca



