[SciPy-user] Automating Matlab
Alex Liberzon
alex.liberzon@gmail....
Sun Feb 1 15:27:10 CST 2009
+1
On Sun, Feb 1, 2009 at 5:29 PM, <scipy-user-request@scipy.org> wrote:
> Send SciPy-user mailing list submissions to
> scipy-user@scipy.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://projects.scipy.org/mailman/listinfo/scipy-user
> or, via email, send a message with subject or body 'help' to
> scipy-user-request@scipy.org
>
> You can reach the person managing the list at
> scipy-user-owner@scipy.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of SciPy-user digest..."
>
>
> Today's Topics:
>
> 1. Automating Matlab (Eric Schug)
> 2. Re: Automating Matlab (Robert Kern)
> 3. Re: Automating Matlab (David Warde-Farley)
> 4. shared memory machines (Gideon Simpson)
> 5. Re: Automating Matlab (Young, Karl)
> 6. Re: Automating Matlab (gsever)
> 7. Re: Automating Matlab (Stef Mientki)
> 8. Re: shared memory machines (Gael Varoquaux)
> 9. Re: shared memory machines (Gideon Simpson)
> 10. Re: shared memory machines (Gael Varoquaux)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sat, 31 Jan 2009 20:06:20 -0500
> From: Eric Schug <schugschug@gmail.com>
> Subject: [SciPy-user] Automating Matlab
> To: scipy-user@scipy.org
> Message-ID: <4984F58C.5070605@gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Is there strong interest in automating matlab to numpy conversion?
>
> I have a working version of a matlab to python translator.
> It allows translation of matlab scripts into numpy constructs,
> supporting most of the matlab language. The parser is nearly complete;
> most of the remaining work involves making the translation robust, such as:
>     * making sure that copies on assign are done when needed
>     * correct indexing: a(:) becomes a.flatten(1) when on the left-hand
>       side (lhs) of an assignment, and a[:] when on the right-hand side
>
>
> I've seen a few projects attempt to do this, but for one reason or
> another they have stalled.
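[Ed.: a quick numpy sketch of the a(:) distinction discussed above; the array values are invented for illustration, and flatten(order='F') is the modern spelling of flatten(1).]

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# Reading: Matlab's b = a(:) produces the elements as one
# column-major vector; flatten in Fortran order reproduces that,
# and returns a *copy* of the data.
b = a.flatten(order='F')

# Writing: Matlab's a(:) = 0 overwrites every element of a in
# place; numpy slice assignment does the same, mutating a itself.
a[:] = 0

print(b.tolist())  # [0, 3, 1, 4, 2, 5] -- the copy is unaffected
print(a.sum())     # 0
```

The copy-vs-view distinction is exactly why the translator has to treat the two sides of an assignment differently.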
>
>
>
> ------------------------------
>
> Message: 2
> Date: Sat, 31 Jan 2009 19:34:57 -0600
> From: Robert Kern <robert.kern@gmail.com>
> Subject: Re: [SciPy-user] Automating Matlab
> To: SciPy Users List <scipy-user@scipy.org>
> Message-ID:
> <3d375d730901311734o388adf56y9f3241032ed409c2@mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> On Sat, Jan 31, 2009 at 19:06, Eric Schug <schugschug@gmail.com> wrote:
> > Is there strong interest in automating matlab to numpy conversion?
>
> Yes! Please post your code somewhere!
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
>
>
> ------------------------------
>
> Message: 3
> Date: Sat, 31 Jan 2009 20:49:32 -0500
> From: David Warde-Farley <dwf@cs.toronto.edu>
> Subject: Re: [SciPy-user] Automating Matlab
> To: SciPy Users List <scipy-user@scipy.org>
> Message-ID: <5BC40EFF-8964-45CB-9DA3-D4FA87EE4B2E@cs.toronto.edu>
> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
>
> On 31-Jan-09, at 8:06 PM, Eric Schug wrote:
>
> > Is there strong interest in automating matlab to numpy conversion?
>
> I think there is a strong interest in this. One of the main obstacles
> to changing environments is inertia and familiarity. My advisor
> repeatedly expresses his wish to give Python another try, and having
> an easy way to show him how his existing scripts translate would be
> awesome.
>
> Of course there are caveats, corner cases where such translations will
> fail, but a fairly foolproof method of converting simple scripts would
> be just fantastic. I imagine if you've gotten further along than
> previous attempts you'll receive a lot of street cred on this list and
> probably a lot of patches to make things work better. :)
>
> David
>
>
> ------------------------------
>
> Message: 4
> Date: Sun, 1 Feb 2009 00:37:48 -0500
> From: Gideon Simpson <simpson@math.toronto.edu>
> Subject: [SciPy-user] shared memory machines
> To: SciPy Users List <scipy-user@scipy.org>
> Message-ID: <2AE6D153-799C-450E-8E69-CA80D12E2FF5@math.toronto.edu>
> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
>
> Has anyone been able to take advantage of shared memory machines with
> scipy? How did you do it?
>
> -gideon
>
>
>
> ------------------------------
>
> Message: 5
> Date: Sat, 31 Jan 2009 21:45:59 -0800
> From: "Young, Karl" <karl.young@ucsf.edu>
> Subject: Re: [SciPy-user] Automating Matlab
> To: "SciPy Users List" <scipy-user@scipy.org>
> Message-ID:
> <9D202D4E86A4BF47BA6943ABDF21BE78058FAB62@EXVS06.net.ucsf.edu>
> Content-Type: text/plain; charset=iso-8859-1
>
>
> >> Is there strong interest in automating matlab to numpy conversion?
>
> > Yes! Please post your code somewhere!
>
> seconded !!!!! I'm currently working on a grant that has turned out to
> involve porting a lot of matlab code to python; you will be gratefully
> acknowledged in whatever comes of the work of the grant.
>
> -- KY
>
>
>
> ------------------------------
>
> Message: 6
> Date: Sat, 31 Jan 2009 23:49:32 -0800 (PST)
> From: gsever <gokhansever@gmail.com>
> Subject: Re: [SciPy-user] Automating Matlab
> To: scipy-user@scipy.org
> Message-ID:
> <27b4bf7a-bb75-457d-8ed0-eec3465b92f1@t13g2000yqc.googlegroups.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> I am interested in this project, too. It would be much better to have
> an automated tool than to do manual conversions.
>
> Just for your information, there is an IDL-to-Python conversion tool
> named i2py @ http://code.google.com/p/i2py/
>
> On Jan 31, 7:06 pm, Eric Schug <schugsc...@gmail.com> wrote:
> > Is there strong interest in automating matlab to numpy conversion?
> >
> > I have a working version of a matlab to python translator.
> > It allows translation of matlab scripts into numpy constructs,
> > supporting most of the matlab language. The parser is nearly complete.
> > Most of the remaining work involves providing a robust translation. Such
> as
> >     * making sure that copies on assign are done when needed.
> >     * correct indexing a(:) becomes a.flatten(1) when on the left hand
> > side (lhs) of equals
> >        and a[:] when on the right hand side
> >
> > I've seen a few projects attempt to do this, but for one reason or
> > another have stopped it.
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-u...@scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
>
>
> ------------------------------
>
> Message: 7
> Date: Sun, 01 Feb 2009 10:27:11 +0100
> From: Stef Mientki <s.mientki@ru.nl>
> Subject: Re: [SciPy-user] Automating Matlab
> To: scipy-user@scipy.org
> Message-ID: <49856AEF.9050605@ru.nl>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
>
>
> Robert Kern wrote:
> > On Sat, Jan 31, 2009 at 19:06, Eric Schug <schugschug@gmail.com> wrote:
> >
> >> Is there strong interest in automating matlab to numpy conversion?
> >>
> >
> > Yes! Please post your code somewhere!
> >
> >
> +1
>
> And this is a very good moment for those who are creating a Matlab-like
> environment, including the Matlab-like workspace, to show their
> creations.
>
> cheers,
> Stef
>
>
> ------------------------------
>
> Message: 8
> Date: Sun, 1 Feb 2009 10:57:46 +0100
> From: Gael Varoquaux <gael.varoquaux@normalesup.org>
> Subject: Re: [SciPy-user] shared memory machines
> To: SciPy Users List <scipy-user@scipy.org>
> Message-ID: <20090201095746.GA1099@phare.normalesup.org>
> Content-Type: text/plain; charset=iso-8859-1
>
> On Sun, Feb 01, 2009 at 12:37:48AM -0500, Gideon Simpson wrote:
> > Has anyone been able to take advantage of shared memory machines with
> > scipy? How did you do it?
>
> I am not sure I understand your question. You want to do parallel
> computing and share the arrays between processes, is that it?
>
> Gaël
>
>
> ------------------------------
>
> Message: 9
> Date: Sun, 1 Feb 2009 10:03:30 -0500
> From: Gideon Simpson <simpson@math.toronto.edu>
> Subject: Re: [SciPy-user] shared memory machines
> To: SciPy Users List <scipy-user@scipy.org>
> Message-ID: <FDC4C5E2-3740-44BB-83C5-5F29620B6A34@math.toronto.edu>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed; delsp=yes
>
> Yes, but I'm talking about when you have a multiprocessor/multicore
> system, not a commodity cluster. In these shared memory
> configurations, were I using compiled code, I'd be able to use OpenMP
> to take advantage of the additional cores/processors. I'm wondering
> if anyone has looked at ways to take advantage of such configurations
> with scipy.
>
> -gideon
>
> On Feb 1, 2009, at 4:57 AM, Gael Varoquaux wrote:
>
> > On Sun, Feb 01, 2009 at 12:37:48AM -0500, Gideon Simpson wrote:
> >> Has anyone been able to take advantage of shared memory machines with
> >> scipy? How did you do it?
> >
> > I am not sure I understand your question. You want to do parallel
> > computing and share the arrays between processes, is that it?
> >
> > Gaël
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user@scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
>
>
>
> ------------------------------
>
> Message: 10
> Date: Sun, 1 Feb 2009 16:29:40 +0100
> From: Gael Varoquaux <gael.varoquaux@normalesup.org>
> Subject: Re: [SciPy-user] shared memory machines
> To: SciPy Users List <scipy-user@scipy.org>
> Message-ID: <20090201152940.GD9757@phare.normalesup.org>
> Content-Type: text/plain; charset="iso-8859-1"
>
> On Sun, Feb 01, 2009 at 10:03:30AM -0500, Gideon Simpson wrote:
> > Yes, but I'm talking about when you have a multiprocessor/multicore
> > system, not a commodity cluster. In these shared memory
> > configurations, were I using compiled code, I'd be able to use OpenMP
> > to take advantage of the additional cores/processors. I'm wondering
> > if anyone has looked at ways to take advantage of such configurations
> > with scipy.
>
> I use the multiprocessing module:
> http://docs.python.org/library/multiprocessing.html
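>
> [Ed.: a minimal sketch of that pattern; the worker function row_norm and
> the array sizes are invented for illustration. Note that arrays sent
> through Pool.map are pickled and copied, not shared.]

```python
import multiprocessing

import numpy as np


def row_norm(row):
    """Toy worker: Euclidean norm of one row."""
    return float(np.sqrt(np.dot(row, row)))


if __name__ == '__main__':
    data = np.random.random((8, 100))
    pool = multiprocessing.Pool(processes=2)
    # map distributes the rows over the worker processes and
    # returns the results in submission order.
    norms = pool.map(row_norm, list(data))
    pool.close()
    pool.join()
    print(len(norms))  # 8
```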
>
> I also have some code to share arrays between processes. I'd love to
> submit it for integration with numpy, but first I'd like it to get more
> exposure so that any flaws in the API are found. I am attaching it.
>
> Actually I wrote this code a few months ago, and now that I am looking at
> it, I realise that the SharedMemArray should probably be a subclass of
> numpy.ndarray, and implement the full array signature. I am not sure if
> this is possible or not (i.e. whether it will still be easy to have
> multiprocessing share the data between processes). I don't really
> have time to polish this right now; does anybody want to have a go?
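>
> [Ed.: one possible shape for the subclassing idea floated here; purely a
> sketch under stated assumptions: ShmArray is an invented name, and
> pickling the shared buffer across processes is left unaddressed.]

```python
import ctypes
import multiprocessing

import numpy as np


class ShmArray(np.ndarray):
    """Hypothetical sketch of an ndarray subclass whose buffer is a
    multiprocessing.RawArray, so the memory itself is shareable."""

    def __new__(cls, shape, dtype=np.float64):
        dtype = np.dtype(dtype)
        size = int(np.prod(shape))
        # Allocate the shared, zero-initialized buffer, then build a
        # full ndarray view on top of it.
        raw = multiprocessing.RawArray(ctypes.c_byte,
                                       size * dtype.itemsize)
        obj = np.frombuffer(raw, dtype=dtype,
                            count=size).reshape(shape).view(cls)
        obj._raw = raw  # keep a reference so the buffer stays alive
        return obj
```

Because the result is a real ndarray, the full array signature comes for free; the open question from the email (teaching multiprocessing to pickle it without copying) would still need a custom __reduce__.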
>
> Gaël
>
> > On Feb 1, 2009, at 4:57 AM, Gael Varoquaux wrote:
>
> > > On Sun, Feb 01, 2009 at 12:37:48AM -0500, Gideon Simpson wrote:
> > >> Has anyone been able to take advantage of shared memory machines with
> > >> scipy? How did you do it?
>
> > > I am not sure I understand your question. You want to do parallel
> > > computing and share the arrays between processes, is that it?
>
> -------------- next part --------------
> """
> Small helper module to share arrays between processes without copying
> data.
>
> Numpy arrays can be converted to shared memory arrays, which implement
> the array protocol, but are allocated in memory that can be
> shared transparently by the multiprocessing module.
> """
>
> # Author: Gael Varoquaux <gael dot varoquaux at normalesup dot org>
> # Copyright: Gael Varoquaux
> # License: BSD
>
> import numpy as np
> import multiprocessing
> import ctypes
>
> _ctypes_to_numpy = {
>     ctypes.c_char   : np.int8,
>     ctypes.c_wchar  : np.int16,
>     ctypes.c_byte   : np.int8,
>     ctypes.c_ubyte  : np.uint8,
>     ctypes.c_short  : np.int16,
>     ctypes.c_ushort : np.uint16,
>     ctypes.c_int    : np.int32,
>     ctypes.c_uint   : np.uint32,
>     ctypes.c_long   : np.int32,
>     ctypes.c_ulong  : np.uint32,
>     ctypes.c_float  : np.float32,
>     ctypes.c_double : np.float64
> }
>
> _numpy_to_ctypes = dict((value, key) for key, value in
>                         _ctypes_to_numpy.iteritems())
>
> def shmem_as_ndarray(data, dtype=float):
>     """ Given a multiprocessing.Array object, as created by
>         ndarray_to_shmem, returns an ndarray view on the data.
>     """
>     dtype = np.dtype(dtype)
>     size = data._wrapper.get_size()/dtype.itemsize
>     arr = np.frombuffer(buffer=data, dtype=dtype, count=size)
>     return arr
>
>
> def ndarray_to_shmem(arr):
>     """ Converts a numpy.ndarray to a multiprocessing.Array object.
>
>         The memory is copied, and the array is flattened.
>     """
>     arr = arr.reshape((-1, ))
>     data = multiprocessing.RawArray(_numpy_to_ctypes[arr.dtype.type],
>                                     arr.size)
>     ctypes.memmove(data, arr.data[:], len(arr.data))
>     return data
>
>
>
> def test_ndarray_conversion():
>     """ Check that the conversion to multiprocessing.Array and back works.
>     """
>     a = np.random.random((100, ))
>     a_sh = ndarray_to_shmem(a)
>     b = shmem_as_ndarray(a_sh)
>     np.testing.assert_almost_equal(a, b)
>
>
> def test_conversion_non_flat():
>     """ Check that the conversion also works with non-flat arrays.
>     """
>     a = np.random.random((100, 2))
>     a_flat = a.flatten()
>     a_sh = ndarray_to_shmem(a)
>     b = shmem_as_ndarray(a_sh)
>     np.testing.assert_almost_equal(a_flat, b)
>
>
> def test_conversion_non_contiguous():
>     """ Check that the conversion also works with non-contiguous arrays.
>     """
>     a = np.indices((3, 3, 3))
>     a = a.T
>     a_flat = a.flatten()
>     a_sh = ndarray_to_shmem(a)
>     b = shmem_as_ndarray(a_sh, dtype=a.dtype)
>     np.testing.assert_almost_equal(a_flat, b)
>
>
>
> def test_no_copy():
>     """ Check that the data is not copied from the multiprocessing.Array.
>     """
>     a = np.random.random((100, ))
>     a_sh = ndarray_to_shmem(a)
>     a = shmem_as_ndarray(a_sh)
>     b = shmem_as_ndarray(a_sh)
>     a[0] = 1
>     np.testing.assert_equal(a[0], b[0])
>     a[0] = 0
>     np.testing.assert_equal(a[0], b[0])
>
>
>
> ################################################################################
> # A class to carry around the relevant information
> ################################################################################
>
> class SharedMemArray(object):
>     """ Wrapper around multiprocessing.Array to share an array across
>         processes.
>     """
>
>     def __init__(self, arr):
>         """ Initialize a shared array from a numpy array.
>
>             The data is copied.
>         """
>         self.data = ndarray_to_shmem(arr)
>         self.dtype = arr.dtype
>         self.shape = arr.shape
>
>     def __array__(self):
>         """ Implement the array protocol.
>         """
>         arr = shmem_as_ndarray(self.data, dtype=self.dtype)
>         arr.shape = self.shape
>         return arr
>
>     def asarray(self):
>         return self.__array__()
>
>
> def test_sharing_array():
>     """ Check that a SharedMemArray shared between processes is indeed
>         modified in place.
>     """
>     # Our worker function
>     def f(arr):
>         a = arr.asarray()
>         a *= -1
>
>     a = np.random.random((10, 3, 1))
>     arr = SharedMemArray(a)
>     # b is a copy of a
>     b = arr.asarray()
>     np.testing.assert_array_equal(a, b)
>     multiprocessing.Process(target=f, args=(arr,)).run()
>     np.testing.assert_equal(-b, a)
>
>
> if __name__ == '__main__':
>     import nose
>     nose.runmodule()
>
>
>
> ------------------------------
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user@scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
>
> End of SciPy-user Digest, Vol 66, Issue 1
> *****************************************
>
--
Alex Liberzon
Turbulence Structure Laboratory (http://www.eng.tau.ac.il/efdl)
School of Mechanical Engineering
Tel Aviv University
Tel: +972-3-640-8928 (office)
Tel: +972-3-640-6860 (lab)
E-mail: alexlib@eng.tau.ac.il