[Numpy-discussion] matlab vs. python question

belinda thom bthom@cs.hmc....
Fri Apr 27 00:23:33 CDT 2007


Thank you so much! When I finally get a moment to take a break, I'll  
look in more detail into using your suggestion.

--b


On Apr 26, 2007, at 12:47 AM, Pauli Virtanen wrote:

> belinda thom kirjoitti:
>> On Apr 25, 2007, at 12:46 PM, Bill Baxter wrote:
>>
>> Agree w/most of what you've said, but will add one other thing that
>> drives me nuts in Python that hasn't been a problem in Matlab:
>>
>> In Python, if interacting w/the interpreter as your primary IDE, and
>> if you've got multiple files that depend on one another that you're
>> modifying, then you need to restart the interpreter frequently b/c
>> otherwise things in the interpreter can be stale; IOW, changes to
>> several interdependent files aren't easy to import so that everything
>> in your interpreted environment reflects the latest code. Yeah,
>> there are reload tricks, but doing them in the right order and the
>> right number of times can be a pain when dependencies are cyclic.
>>
>> I realize in general that this issue of stale code is just difficult,
>> that it's not inherently a Python problem per se, for automatic
>> percolation of code changes backwards in time is difficult in
>> general, but I've never had the problem bite me when I was developing
>> in Matlab. I just save whatever file, and it appears that whatever's
>> latest on disk is what's executed. (Friends who know more about PL
>> than I tell me I've been lucky.)
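The behavior being described comes from module caching: once a module is in `sys.modules`, a second `import` is a no-op, so edits on disk stay invisible until an explicit reload. A minimal sketch in modern Python 3 terms (`importlib.reload`; the module name `mymod` is invented for the demo):

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True          # keep the demo free of .pyc caching
tmpdir = tempfile.mkdtemp()
sys.path.insert(0, tmpdir)
modfile = pathlib.Path(tmpdir) / "mymod.py"

modfile.write_text("VALUE = 1\n")
import mymod                            # first import executes the file
assert mymod.VALUE == 1

modfile.write_text("VALUE = 2\n")
import mymod                            # no-op: mymod is already in sys.modules
assert mymod.VALUE == 1                 # still stale

importlib.reload(mymod)                 # re-executes the source on disk
assert mymod.VALUE == 2                 # now fresh
```

With several interdependent modules, each changed module has to be reloaded in dependency order, which is exactly the bookkeeping the attached autoreload code automates.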
>
> I've been using the attached autoreload code (by Thomas Heller,
> different versions are probably floating around the net) rather
> successfully for a more Matlabish feel to IPython. It reloads modules
> every time their files have been changed.
>
> Of course, this doesn't solve all the staleness problems you describe;
> it only eliminates the manual work involved in solving the most common
> ones. Reloading modules does have pitfalls [1], but at least in my use
> these haven't really mattered much in practice.
>
> I think having autoreload functionality bundled with IPython (maybe
> turned off by default) would be quite useful: although in some rare
> cases autoreloading doesn't work in the ideal way, it's very convenient
> not to have to type reload(foo) after every change, especially when
> tweaking, for example, plotting scripts.
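This is essentially what later IPython releases ended up shipping: an autoreload extension is bundled with IPython and can be enabled with two magics (shown here as an IPython session fragment, not plain Python):

```python
%load_ext autoreload   # load the bundled autoreload extension
%autoreload 2          # reload all changed modules before executing typed code
```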
>
> 	Pauli
>
> [1] For example, existing class instances are not updated on reload,
> but continue using the old code. Also, "from foo import *" imports are
> not updated, so you'll have to manually touch files that do this if
> 'foo' has been changed.
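The first pitfall is easy to demonstrate: an instance created before a reload keeps pointing at the old class object, so it keeps running old code even though the module attribute now names the new class. A sketch in modern Python 3 (the module name `foo` follows the footnote; the rest is invented for the demo):

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True
tmpdir = tempfile.mkdtemp()
sys.path.insert(0, tmpdir)
src = pathlib.Path(tmpdir) / "foo.py"

src.write_text("class Greeter:\n    def hello(self):\n        return 'old'\n")
import foo
g = foo.Greeter()                       # instance created before the reload

src.write_text("class Greeter:\n    def hello(self):\n        return 'new'\n")
importlib.reload(foo)                   # rebinds foo.Greeter to a new class

assert foo.Greeter().hello() == 'new'   # new instances use the new code
assert g.hello() == 'old'               # the old instance still runs old code
assert type(g) is not foo.Greeter       # its class is the stale object
```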
> """
>
> autoreload.py - automatically reload changed source
> code into a running program
>
> You might want to add the following to your ~/.ipython/ipythonrc
>
>     import_mod sys
>     execute sys.path.append('path/where/this/file/resides')
>     import_mod autoreload
>     execute autoreload.run()
>
> or add the following
>
>     import sys
>     sys.path.append('path/where/this/file/resides')
>     import autoreload
>     autoreload.run()
>
> to some startup file.
>
>
> Created: Thomas Heller, 2000-04-17
> Modified: Pauli Virtanen, 2006
> """
>
> # $Id: autoreload.py 3117 2006-09-27 20:28:46Z pauli $
> #
> # $Log: autoreload.py,v $
> # Revision 1.9  2001/11/15 18:41:18  thomas
> # Cleaned up and made working again before posting to c.l.p.
> # Added code to update bound (or unbound) methods as suggested
> # by Just van Rossum. Thanks!
> #
> # ...
> #
> # Revision 1.1  2001/10/04 16:54:04  thomas
> # Discovered this old module on my machine, it didn't work too well,
> # but seems worth to continue with it...
> # Checked in as a first step.
>
>
> __version__ = "$Revision: 1.9 $".split()[1]
>
> # ToDo:
> #
> #  Cannot reload __main__ - explain why this cannot work,
> #  and explain a workaround.
> #
> #  Optimize - the number of watched objects (in old_objects)
> #  grows without limits. Think if this is really necessary...
>
>
> import time, os, threading, sys, types, imp, inspect, traceback
>
> def _get_compiled_ext():
>     for ext, mode, typ in imp.get_suffixes():
>         if typ == imp.PY_COMPILED:
>             return ext
>
> # the official way to get the extension of compiled files (.pyc or .pyo)
> PY_COMPILED_EXT = _get_compiled_ext()
>
> class ModuleWatcher:
>     running = 0
>     def __init__(self):
>         # If we don't do this, there may be tracebacks
>         # when shutting down python.
>         import atexit
>         atexit.register(self.stop)
>
>     def run(self):
>         if self.running:
>             print "# autoreload already running"
>             return
>         print "# starting autoreload"
>         self.running = 1
>         self.thread = threading.Thread(target=self._check_modules)
>         self.thread.setDaemon(1)
>         self.thread.start()
>
>     def stop(self):
>         if not self.running:
>             print "# autoreload not running"
>             return
>         self.running = 0
>         self.thread.join()
>         #print "# autoreload stopped"
>
>     def _check_modules(self):
>         skipped = {}
>         while self.running:
>             time.sleep(0.01)
>             for m in sys.modules.values():
>                 if not hasattr(m, '__file__'):
>                     continue
>                 if m.__name__ == '__main__':
>
>                     # we cannot reload(__main__). First I thought we
>                     # could use mod = imp.load_module() and then
>                     # reload(mod) to simulate reload(main), but this
>                     # would execute the code in __main__ a second
>                     # time.
>
>                     continue
>                 file = m.__file__
>                 dirname = os.path.dirname(file)
>                 path, ext = os.path.splitext(file)
>
>                 if ext.lower() == '.py':
>                     ext = PY_COMPILED_EXT
>                     file = os.path.join(dirname, path + PY_COMPILED_EXT)
>
>                 if ext != PY_COMPILED_EXT:
>                     continue
>
>                 try:
>                     pymtime = os.stat(file[:-1])[8]
>                     if pymtime <= os.stat(file)[8]:
>                         continue
>                     if skipped.get(file[:-1], None) == pymtime:
>                         continue
>                 except OSError:
>                     continue
>
>                 try:
>                     superreload(m)
>                     if file[:-1] in skipped:
>                         del skipped[file[:-1]]
>                 except:
>                     skipped[file[:-1]] = pymtime
>                     import traceback
>                     traceback.print_exc(0)
>
> def update_function(old, new, attrnames):
>     for name in attrnames:
>         setattr(old, name, getattr(new, name))
>
> def superreload(module,
>                 reload=reload,
>                 _old_objects = {}):
>     """superreload (module) -> module
>
>     Enhanced version of the builtin reload function.
>     superreload replaces the class dictionary of every top-level
>     class in the module with the new one automatically,
>     as well as every function's code object.
>     """
> ##    start = time.clock()
>     # retrieve the attributes from the module before the reload,
>     # and remember them in _old_objects.
>     for name, object in module.__dict__.items():
>         key = (module.__name__, name)
>         _old_objects.setdefault(key, []).append(object)
>         # print the refcount of old objects:
> ##        if type(object) in (types.FunctionType, types.ClassType):
> ##            print name, map(sys.getrefcount, _old_objects[key])
>
> ##    print "# reloading module %r" % module
>
>     module = reload(module)
>     # XXX We have a problem here if importing the module fails!
>
>     # iterate over all objects and update them
>     count = 0
>     # XXX Can we optimize here?
>     # It may be that no references to the objects are present
>     # except those from our _old_objects dictionary.
>     # We should remove those. I have to learn about weak-refs!
>     for name, new_obj in module.__dict__.items():
>         key = (module.__name__, name)
>         if _old_objects.has_key(key):
>             for old_obj in _old_objects[key]:
>                 if type(new_obj) == types.ClassType:
>                     old_obj.__dict__.update(new_obj.__dict__)
>                     count += 1
>                 elif type(new_obj) == types.FunctionType:
>                     update_function(old_obj,
>                            new_obj,
>                            "func_code func_defaults func_doc".split())
>                     count += 1
>                 elif type(new_obj) == types.MethodType:
>                     update_function(old_obj.im_func,
>                            new_obj.im_func,
>                            "func_code func_defaults func_doc".split())
>                     count += 1
> ##    stop = time.clock()
> ##    print "# updated %d objects from %s" % (count, module)
> ##    print "# This took %.3f seconds" % (stop - start)
>
>     return module
>
> _watcher = ModuleWatcher()
>
> run = _watcher.run
> stop = _watcher.stop
>
> __all__ = ['run', 'stop', 'superreload']
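The heart of superreload above is updating the old class and function objects in place, so references created before the reload (including live instances) pick up the new code. The same idea still works in modern Python 3, except that a class `__dict__` is now a read-only mappingproxy, so the update has to go through `setattr` instead of `old.__dict__.update(...)`. A minimal sketch (class names invented for the demo):

```python
class Counter:                       # "old" version, already instantiated
    def step(self):
        return 1

c = Counter()                        # live instance holding the old class

class _NewCounter:                   # stands in for the reloaded definition
    def step(self):
        return 2

# superreload-style in-place update: copy the new class's attributes onto
# the old class object, skipping the slots that cannot be reassigned
for name, attr in _NewCounter.__dict__.items():
    if name not in ('__dict__', '__weakref__'):
        setattr(Counter, name, attr)

assert c.step() == 2                 # the pre-existing instance runs new code
```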
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion


