[Numpy-discussion] More complex data types
Fri Oct 5 10:48:39 CDT 2007
Charles R Harris wrote:
> On 10/5/07, Neal Becker <firstname.lastname@example.org> wrote:
>> I'm thinking (again) about using numpy for signal processing
>> applications. One issue is that there are data types commonly used in
>> signal processing that are not available in numpy (or
>> python). Specifically, it is frequently required to convert floating
>> algorithms into integer algorithms. numpy is fine for arrays of integers
>> (of various sizes), but it is also very useful to have arrays of
>> complex<integers>. While numpy has complex<double> and complex<float>, it
>> doesn't have complex<int>, complex<int64>, etc. Has anyone thought about this?
> A bit. Multiplication begins to be a problem, though. Would you also want
> fixed point multiplication with scaling, a la PPC with altivec? What about
> division? So on and so forth. I think something like this would best be
> implemented in a specialized signal processing package but I am not sure
> of the best way to do it.
I'd keep things as simple as possible: no fixed point/scaling. It's simple
enough to explicitly rescale things as you wish.
That is (using C++ syntax):
complex<int> a, b;
complex<int> c = a * b;
complex<int> d = c >> 4;  // explicit rescale, shifting both parts
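The same semantics can be emulated in numpy today by carrying the real and imaginary parts in separate integer arrays. A minimal sketch (the helper name cmul is mine, not a numpy function; there is no integer-complex dtype in numpy as of this writing):

```python
import numpy as np

def cmul(ar, ai, br, bi):
    """Exact integer complex multiply: (ar + ai*j) * (br + bi*j)."""
    return ar * br - ai * bi, ar * bi + ai * br

# Two "complex<int32>" arrays, stored as (real, imag) pairs.
ar = np.array([3, -2], dtype=np.int32)
ai = np.array([4, 1], dtype=np.int32)
br = np.array([1, 5], dtype=np.int32)
bi = np.array([2, -3], dtype=np.int32)

cr, ci = cmul(ar, ai, br, bi)  # exact integer products, no rounding
cr >>= 4                       # explicit rescaling, as in the C++ fragment
ci >>= 4
```

The point of the separate shift step is exactly the simplicity argued for above: the multiply is exact, and the user decides when and how much to rescale.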
What complicates life is interoperability (conversion) between types.
I've used this concept for some years with C++/Python - but not with numpy.
It's pretty trivial to make a complex<int> type as a C extension to Python.
Adding this to numpy would be really useful.
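For the scalar side, the shape such a type would take can be sketched in pure Python; a C extension would expose the same two operations (the class name ComplexInt is hypothetical, not an existing numpy or Python type):

```python
class ComplexInt:
    """Gaussian-integer sketch of a complex<int> scalar."""

    def __init__(self, re, im=0):
        self.re = int(re)
        self.im = int(im)

    def __mul__(self, other):
        # exact integer complex multiply, no rounding
        return ComplexInt(self.re * other.re - self.im * other.im,
                          self.re * other.im + self.im * other.re)

    def __rshift__(self, n):
        # explicit rescaling: shift both parts, as discussed above
        return ComplexInt(self.re >> n, self.im >> n)

    def __repr__(self):
        return f"ComplexInt({self.re}, {self.im})"

a = ComplexInt(3, 4)
b = ComplexInt(1, 2)
c = a * b   # exact product
d = c >> 1  # user-controlled rescale
```

Turning this into a numpy dtype is the harder part, since ufunc dispatch and type promotion rules would need to know about it.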