[IPython-user] Need post_runcode_hook
Sun Mar 15 22:28:13 CDT 2009
> 2008/12/19 Ville M. Vainio <firstname.lastname@example.org>
>> On Sat, Sep 27, 2008 at 7:12 PM, Ville M. Vainio <email@example.com>
>> > While I'm at it, I could also add a run_code_hook that would enable
>> > customizing the 'exec code_obj in self.user_global_ns, self.user_ns'
>> > part (which would allow overriding one of the most critical phases of
>> > ipython code execution). It could even be used to implement the
>> > current MTInteractiveShell stuff in extension (w/o subclassing), which
>> > could be enabled "on the fly" by importing an extension.
>> Blast from the past!
>> I have implemented runcode_hook in my trunk_dev:
I just did a review of that branch: there are a few easy-to-fix
issues, but I did raise the point that I'd like more thinking on the
hooks system. Then I saw this old thread, so we might as well finish
the discussion here.
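For concreteness, the run_code_hook idea from the quoted thread could
be sketched roughly like this (hypothetical names and structure, not
IPython's actual API; the quoted 'exec code_obj in user_ns' is Python
2 spelling, written here as Python 3's exec()):

```python
# Hypothetical sketch: a shell whose code-execution step can be
# replaced by a hook, instead of calling exec() directly.
class MiniShell:
    def __init__(self):
        self.user_ns = {}
        # Default hook: the plain exec step the quoted mail refers to.
        self.runcode_hook = self.default_runcode

    def default_runcode(self, code_obj):
        exec(code_obj, self.user_ns)

    def set_runcode_hook(self, hook):
        """Let an extension override the execution step 'on the fly'."""
        self.runcode_hook = hook

    def runcode(self, source):
        code_obj = compile(source, "<input>", "exec")
        self.runcode_hook(code_obj)

shell = MiniShell()
shell.runcode("x = 1 + 1")
```

An extension could then swap in a hook that, say, executes the code
object in a worker thread (the MTInteractiveShell case) without
subclassing the shell.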
It's extremely useful for us to provide user-accessible entry points
into ipython, which is why we added the notion of hooks ages ago, so
that people don't need to go around monkeypatching everything.
But it's also true that each set of hooks adds runtime overhead:
instead of a direct call to the code, we now have a hook mechanism
that often involves multiple nested function calls and try/excepts. I
don't have hard numbers, but it bothers me that over time, ipython
feels more and more sluggish.
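To make that overhead concrete, here is a toy micro-benchmark (plain
Python, not IPython's actual hook machinery) comparing a direct call
against a dict-dispatched call wrapped in a try/except:

```python
import timeit

def work():
    return 1 + 1

# Hook-style dispatch: an indirection table plus a try/except,
# the pattern that accumulates cost on every call.
hooks = {"work": work}

def dispatch():
    try:
        return hooks["work"]()
    except KeyError:
        return None

direct = timeit.timeit(work, number=100_000)
hooked = timeit.timeit(dispatch, number=100_000)
print(f"direct: {direct:.4f}s  hooked: {hooked:.4f}s")
```

The absolute numbers are tiny, but the extra dict lookup, function
call, and exception frame are paid on every executed line once a hook
sits on a hot path.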
I remember a while ago, ipython felt hardly different from python
itself: its startup time was barely slower, and once started it felt
identical. That's not true anymore. Here are some numbers, using
these two trivial files for the test:
maqroll[scratch]> cat pyquit.py
import sys; sys.exit(0)
maqroll[scratch]> cat ipquit.py
On my (fairly modern and fast) laptop, these numbers are typical after
running them multiple times so disk caches are hot:
maqroll[scratch]> time python pyquit.py
maqroll[scratch]> time ipython ipquit.py
ipython is ~3x slower to start. Not the end of the world, but I'd
prefer it if it were less. But on a slower machine, like my EEE PC
701 (celeron CPU):
haiku[~]> time python pyquit.py
haiku[~]> time ipython ipquit.py
The discrepancy is ~10x, and a startup time of ~1 second is starting
to get really annoying. My old laptop (now dead) was about as slow as
this EEE PC, and back when I had it, I made every effort possible to
keep startup times and performance in check.
I realize that features cost CPU, and we do a lot more than plain python.
But let's not turn ipython into the Microsoft Word of shells. After
0.10, I'd like to add to our test suite a tracking of the ratio of
python/ipython startup times for each release, so we can track at
least how this number evolves. Ideally we'll make it go *down*, and
at the very least it should never go up, across a reasonable range of
machines.
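A rough sketch of what such a tracking script could look like
(hypothetical, just timing subprocess launches; the ipython invocation
is left commented out since the exact flags vary by version and it may
not be installed):

```python
import subprocess
import sys
import time

def startup_seconds(cmd, repeats=5):
    """Median wall-clock time to launch `cmd` and have it exit at once."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True)
        samples.append(time.perf_counter() - t0)
    return sorted(samples)[len(samples) // 2]

py = startup_seconds([sys.executable, "-c", "raise SystemExit(0)"])
# ipy = startup_seconds(["ipython", "-c", "raise SystemExit(0)"])
# print(f"ipython/python startup ratio: {ipy / py:.1f}x")
print(f"python startup: {py * 1000:.1f} ms")
```

Recording the ratio per release would turn "it feels sluggish" into a
number the test suite can flag when it regresses.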
I'd also like to start using Robert's profiler to track import times,
so we can get a better idea of which parts are costing us so much.
So in summary, regarding these hooks: I'm all for the
functionality, but let's see what we can do about the cost. I'm very,
very unhappy that we've gotten so slow, and I'm sure that if we put
our minds to writing tight code, we can get back a zippy ipython...
All the best