[Numpy-discussion] Numpy-discussion Digest, Vol 18, Issue 35
Blubaugh, David A.
dblubaugh@belcan....
Mon Mar 17 16:10:37 CDT 2008
Robert,
What I envisioned would be a simple but quick means to develop an FFT. I
have worked this issue before with others, who say that the way to do it
would be to convert enough of NumPy to MyHDL to allow SciPy to be
imported within a Python program. The question is how this would be
accomplished. It should be stated that MyHDL is pure Python, with no
fewer capabilities than standard Python. If I need to elaborate more,
please say so!
Thanks,
David Blubaugh
-----Original Message-----
From: numpy-discussion-bounces@scipy.org
[mailto:numpy-discussion-bounces@scipy.org] On Behalf Of
numpy-discussion-request@scipy.org
Sent: Monday, March 17, 2008 4:45 PM
To: numpy-discussion@scipy.org
Subject: Numpy-discussion Digest, Vol 18, Issue 35
Send Numpy-discussion mailing list submissions to
numpy-discussion@scipy.org
To subscribe or unsubscribe via the World Wide Web, visit
http://projects.scipy.org/mailman/listinfo/numpy-discussion
or, via email, send a message with subject or body 'help' to
numpy-discussion-request@scipy.org
You can reach the person managing the list at
numpy-discussion-owner@scipy.org
When replying, please edit your Subject line so it is more specific than
"Re: Contents of Numpy-discussion digest..."
Today's Topics:
1. Re: Numpy and OpenMP (Gnata Xavier)
2. Scipy to MyHDL! (Blubaugh, David A.)
3. Re: numpy.ma bug: need sanity check in masked_where
(Eric Firing)
4. Re: Numpy and OpenMP (Charles R Harris)
5. Re: how to build a series of arrays as I go? (Alan G Isaac)
6. Re: Scipy to MyHDL! (Robert Kern)
7. View ND Homogeneous Record Array as (N+1)D Array?
(Alexander Michael)
----------------------------------------------------------------------
Message: 1
Date: Mon, 17 Mar 2008 20:59:08 +0100
From: Gnata Xavier <xavier.gnata@gmail.com>
Subject: Re: [Numpy-discussion] Numpy and OpenMP
To: Discussion of Numerical Python <numpy-discussion@scipy.org>
Message-ID: <47DECD8C.3040809@gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Francesc Altet wrote:
> On Monday 17 March 2008, Christopher Barker wrote:
>
>>> Plus a certain amount of numpy code depends on order of evaluation:
>>>
>>>     a[:-1] = 2*a[1:]
>>>
>> I'm confused here. My understanding of how it now works is that the
>> above translates to:
>>
>> 1) create a new array (call it temp1) from a[1:], which shares a's
>>    data block.
>> 2) create a temp2 array by multiplying each element of temp1 by 2,
>>    writing the results into a new array with a new data block.
>> 3) copy that temporary array into a[:-1].
>>
>> Why couldn't step (2) be parallelized? Why isn't it already, with
>> BLAS? Surely BLAS must have such simple routines?
>>
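The three steps above can be checked directly in NumPy; a minimal sketch
(illustrative, not part of the original message):

```python
import numpy as np

a = np.arange(5.0)        # [0., 1., 2., 3., 4.]
view = a[1:]              # step 1: a view sharing a's data block
assert view.base is a
tmp = 2 * view            # step 2: element-wise multiply into a fresh block
assert tmp.base is None
a[:-1] = tmp              # step 3: copy the temporary back into a[:-1]
assert a.tolist() == [2.0, 4.0, 6.0, 8.0, 4.0]
```

It is exactly step 2, the independent element-wise multiply, that would be
the candidate for parallel execution.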
>
> Probably yes, but the problem is that these kinds of operations,
> namely vector-to-vector (usually found in the BLAS1 subset of BLAS),
> are normally memory-bound, so you can take little advantage of BLAS,
> especially on modern processors, where the gap between CPU throughput
> and memory bandwidth is quite high (and increasing). On modern
> machines, the use of BLAS is more interesting for vector-matrix
> (BLAS2) computations, but it is in matrix-matrix (BLAS3) ones (where
> the opportunities for cache reuse are highest) that the speedups can
> really be very good.
>
>
>> Also, maybe numexpr could benefit from this?
>>
>
> Maybe, but unfortunately it wouldn't be able to achieve high speedups.
> Right now, numexpr is focused on accelerating mainly vector-vector
> operations (or matrix-matrix, but element-wise, much like NumPy, so
> that the cache cannot be reused), with some smart optimizations for
> strided and unaligned arrays (in that scenario, it can be 2x or 3x
> faster than NumPy, even for very simple operations like 'a+b').
>
> In a similar way, OpenMP (or whatever parallel paradigm) will only
> generally be useful when you have to deal with lots of data and your
> algorithm can be structured so that small portions of the data are
> reused many times.
>
> Cheers,
>
>
Well, linear algebra is another topic.
What I can see from IDL (for instance) is that it provides the user
with a TOTAL function which takes advantage of several CPUs when the
number of elements is large. It also provides a very simple way to set
a maximum number of threads.
I really, really would like to see something like that in numpy (just
to be able to tell someone "switch to numpy; it is free and you will
get exactly the same"). For now, I have a problem when they ask for
parallel functions like TOTAL.
For now, we can do that using inline threaded C code, but it is
*complex*, and 2000x2000 images are now common. It is not a corner
case any more.
Xavier
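A TOTAL-like parallel reduction can be sketched in pure Python, since
NumPy releases the GIL inside its C loops; this is only an illustration
of the idea, not an existing numpy API:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_total(a, max_threads=4):
    """Sum an array by splitting it into chunks summed in threads.

    NumPy drops the GIL in its inner C loops, so the chunk sums can
    overlap on several CPUs; max_threads caps the thread count, much
    like IDL's control over the number of threads used by TOTAL.
    """
    chunks = np.array_split(np.ravel(a), max_threads)
    with ThreadPoolExecutor(max_workers=max_threads) as ex:
        return sum(ex.map(np.sum, chunks))

img = np.ones((2000, 2000))
assert parallel_total(img) == img.sum()
```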
------------------------------
Message: 2
Date: Mon, 17 Mar 2008 16:17:56 -0400
From: "Blubaugh, David A." <dblubaugh@belcan.com>
Subject: [Numpy-discussion] Scipy to MyHDL!
To: <numpy-discussion@scipy.org>
Message-ID:
<27CC3060AF71DA40A5DC85F7D5B70F3802C51833@AWMAIL04.belcan.com>
Content-Type: text/plain; charset="us-ascii"
To Whom It May Concern,
Please allow me to introduce myself. My name is David Allen Blubaugh.
I am currently in the developmental stages of a
Field-Programmable-Gate-Array (FPGA) device for a high-performance
computing application. I am currently evaluating the MyHDL environment
for translating Python source code to Verilog. I am also wondering
what would be necessary to interface both SciPy and NumPy to the MyHDL
environment. I believe there will definitely be a need for
modifications within the NumPy framework in order to quickly prototype
an algorithm, like the FFT, and have it translated to Verilog. Do you
have any additional suggestions?
Thanks,
David Blubaugh
------------------------------
Message: 3
Date: Mon, 17 Mar 2008 10:34:41 -1000
From: Eric Firing <efiring@hawaii.edu>
Subject: Re: [Numpy-discussion] numpy.ma bug: need sanity check in
masked_where
To: Discussion of Numerical Python <numpy-discussion@scipy.org>
Message-ID: <47DED5E1.7060102@hawaii.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Charles R Harris wrote:
> File a ticket.
#703
Eric
>
> On Mon, Mar 17, 2008 at 12:26 PM, Eric Firing <efiring@hawaii.edu
> <mailto:efiring@hawaii.edu>> wrote:
>
> Pierre,
>
> I just tripped over what boils down to the sequence given below. It
> would be useful if the error in line 53 were trapped right away; as
> it is, it results in a masked array that looks reasonable but fails
> in a non-obvious way.
>
> Eric
>
> In [52]:x = [1,2]
>
> In [53]:y = ma.masked_where(False, x)
>
> In [54]:y
> Out[54]:
> masked_array(data = [1 2],
> mask = False,
> fill_value=999999)
>
>
> In [55]:y[1]
>
> ---------------------------------------------------------------------------
> IndexError                            Traceback (most recent call last)
>
> /home/efiring/<ipython console> in <module>()
>
> /usr/local/lib/python2.5/site-packages/numpy/ma/core.pyc in
> __getitem__(self, indx)
> 1307 if not getattr(dout,'ndim', False):
> 1308 # Just a scalar............
> -> 1309 if m is not nomask and m[indx]:
> 1310 return masked
> 1311 else:
>
> IndexError: 0-d arrays can't be indexed
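Until such a sanity check lands in numpy.ma, one workaround (a sketch,
not from the thread) is to broadcast the condition to the data's shape
so the mask is never 0-d:

```python
import numpy as np
import numpy.ma as ma

x = [1, 2]
# Give masked_where a condition with the same shape as x, instead of a
# bare scalar False, so the resulting mask can be indexed normally.
cond = np.zeros(np.shape(x), dtype=bool)
y = ma.masked_where(cond, x)
assert y[1] == 2          # indexing a scalar element now works
```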
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org <mailto:Numpy-discussion@scipy.org>
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
>
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
------------------------------
Message: 4
Date: Mon, 17 Mar 2008 14:37:50 -0600
From: "Charles R Harris" <charlesr.harris@gmail.com>
Subject: Re: [Numpy-discussion] Numpy and OpenMP
To: "Discussion of Numerical Python" <numpy-discussion@scipy.org>
Message-ID:
<e06186140803171337t747b20ffk170e5cc461f200b7@mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"
On Mon, Mar 17, 2008 at 1:59 PM, Gnata Xavier <xavier.gnata@gmail.com>
wrote:
> > [Francesc Altet's discussion of memory-bound BLAS1 operations and
> > numexpr snipped; see Message 1 above.]
>
> Well, linear algebra is another topic.
>
> What I can see from IDL (for instance) is that it provides the user
> with a TOTAL function which takes advantage of several CPUs when the
> number of elements is large. It also provides a very simple way to
> set a maximum number of threads.
>
> I really, really would like to see something like that in numpy (just
> to be able to tell someone "switch to numpy; it is free and you will
> get exactly the same"). For now, I have a problem when they ask for
> parallel functions like TOTAL.
>
> For now, we can do that using inline threaded C code, but it is
> *complex*, and 2000x2000 images are now common. It is not a corner
> case any more.
>
Image processing may be a special case in that many such tasks are
almost embarrassingly parallel. Perhaps some special libraries for that
sort of application could be put together, with bits of C code run on
different processors. Not that I know much about parallel processing,
but that would be my first take.
Chuck
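The "embarrassingly parallel" observation can be sketched by tiling an
image across a process pool; the operation and sizes here are
hypothetical, just to show the shape of the approach:

```python
import numpy as np
from multiprocessing import Pool

def brighten(tile):
    # Any purely per-pixel operation works independently on each tile.
    return tile * 2.0

if __name__ == "__main__":
    img = np.random.rand(2000, 2000)
    tiles = np.array_split(img, 4)            # split rows into 4 tiles
    with Pool(processes=4) as pool:
        out = np.vstack(pool.map(brighten, tiles))
    assert np.allclose(out, img * 2.0)
```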
------------------------------
Message: 5
Date: Mon, 17 Mar 2008 16:43:52 -0400
From: Alan G Isaac <aisaac@american.edu>
Subject: Re: [Numpy-discussion] how to build a series of arrays as I
go?
To: Discussion of Numerical Python <numpy-discussion@scipy.org>
Message-ID: <Mahogany-0.67.0-1224-20080317-164352.00@american.edu>
Content-Type: TEXT/PLAIN; CHARSET=UTF-8
> Alan suggested:
>> 1. http://www.scipy.org/Numpy_Example_List_With_Doc
On Mon, 17 Mar 2008, Chris Withers apparently wrote:
> Yeah, read that, wood, trees, can't tell the...
Oh, then you might want
http://www.scipy.org/Tentative_NumPy_Tutorial
or the other stuff at
http://www.scipy.org/Documentation
All in all, I've found the resources quite good.
Cheers,
Alan Isaac
------------------------------
Message: 6
Date: Mon, 17 Mar 2008 15:42:36 -0500
From: "Robert Kern" <robert.kern@gmail.com>
Subject: Re: [Numpy-discussion] Scipy to MyHDL!
To: "Discussion of Numerical Python" <numpy-discussion@scipy.org>
Message-ID:
<3d375d730803171342i47b39382ndcdcc37a73c7a433@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
On Mon, Mar 17, 2008 at 3:17 PM, Blubaugh, David A.
<dblubaugh@belcan.com> wrote:
>
> [David Blubaugh's introduction snipped; see Message 2 above.]
Can you sketch out in more detail exactly what you are envisioning? My
gut feeling is that there is very little direct interfacing that can be
fruitfully done. numpy and scipy provide much higher-level abstractions
than MyHDL does. I don't think there is even going to be a good way
to translate those abstractions to MyHDL. One programs for silicon in an
HDL rather differently than one programs for a modern microprocessor in
a VHLL.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
------------------------------
Message: 7
Date: Mon, 17 Mar 2008 16:44:34 -0400
From: "Alexander Michael" <lxander.m@gmail.com>
Subject: [Numpy-discussion] View ND Homogeneous Record Array as (N+1)D
Array?
To: "Discussion of Numerical Python" <numpy-discussion@scipy.org>
Message-ID:
<525f23e80803171344q4da8a604we700172e9826b422@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Is there a way to view an N-dimensional array with a *homogeneous*
record dtype as an array of N+1 dimensions? An example will make it
clear:
import numpy
a = numpy.array([(1.0,2.0), (3.0,4.0)], dtype=[('A',float),('B',float)])
b = a.view(...)  # do something magical
print b
array([[ 1.,  2.],
       [ 3.,  4.]])
b[0,0] = 0.0
print a
[(0.0, 2.0) (3.0, 4.0)]
Thanks,
Alex
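One approach that may do what is asked, assuming the fields really are
homogeneous and the array contiguous (a sketch, not a reply from the
thread): reinterpret the record bytes as the scalar dtype and reshape
with a trailing axis.

```python
import numpy as np

a = np.array([(1.0, 2.0), (3.0, 4.0)], dtype=[('A', float), ('B', float)])
# View the homogeneous records as plain floats, then expose the fields
# as one extra trailing dimension; the result shares a's data block.
b = a.view(np.float64).reshape(a.shape + (-1,))
assert b.shape == (2, 2)
b[0, 0] = 0.0
assert a['A'][0] == 0.0   # writes through to the original record array
```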
------------------------------
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
End of Numpy-discussion Digest, Vol 18, Issue 35
************************************************