[IPython-user] Is ipython too nosy?
Thu Mar 1 15:00:22 CST 2007
Thanks for the feedback.
"Fernando Perez" <email@example.com> writes:
> I don't have time right now for a detailed discussion, unfortunately, but
> could you please try SVN trunk (r >= 2122)?
Thanks -- I'll give it a try in the next couple of days.
> So in general it is a good idea to have objects that respond as
> robustly as possible to getattr() calls.
Agreed, but you'll have to admit that mlabwrap might be forgiven for not
handling access to attributes like "this('is" well ;)
Anyway, I've changed mlabwrap to make sure such attribute lookup won't cause
spurious error messages now and I'll upload it soon.
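The fix can be illustrated with a minimal sketch (this is not mlabwrap's actual code): a __getattr__ that rejects names that aren't valid Python identifiers, so speculative getattr() probes from completion machinery fail cleanly instead of being forwarded to the backend.

```python
class RobustProxy:
    """Sketch of a proxy whose __getattr__ tolerates odd probe names."""

    def __getattr__(self, name):
        # Refuse anything that isn't a plausible attribute name, e.g.
        # "this('is", before it ever reaches the (hypothetical) backend.
        if not name.isidentifier():
            raise AttributeError(name)
        # Placeholder for the real backend lookup.
        return "looked up %s" % name
```

Raising AttributeError is the conventional way to signal "no such attribute"; callers like hasattr() and tab completion handle it silently, whereas any other exception type would surface as a spurious error message.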
> However, the points you made were most certainly bugs, and I hope they
> are resolved now. Please let me know how it goes for you.
BTW, just as a half-baked idea -- have you considered running
tokenize.generate_tokens over the input lines (provided they don't start with
"!") and working from that? I haven't found a quick overview of
the syntax extensions ipython supports, but I would suspect the ones I am
aware of could be implemented on top of a token stream from generate_tokens,
and that this might make for simpler and more robust code and possibly a nicer
interface for syntactic extensions. One benefit is that you'd get future
lexical syntax supported for free; another is that, IIRC, in the past
there have been confusing bugs that resulted from ipython tokenizing
things in a somewhat ad hoc manner on unexpected occasions (such as this
one), which presumably mostly involved getting confused by strings.
 The tokenizer will just spit out error tokens rather than die on
occurrences of "tokens" such as "!" that don't occur in legal Python, so
as long as you don't define lots of compound tokens (e.g. :==>) or a
special string literal syntax that isn't a subset of Python strings, it
might also work reasonably well for dealing with "extended" Python-like
syntax.
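The idea above can be sketched with the stdlib tokenize module (this is an illustration of the suggestion, not IPython's actual implementation); note that characters like "!" come back as tokens rather than aborting the scan, though the token type reported for them varies across Python versions:

```python
import io
import tokenize

def lex(line):
    """Tokenize one input line; generate_tokens wants a readline callable."""
    return [(tokenize.tok_name[tok.type], tok.string)
            for tok in tokenize.generate_tokens(io.StringIO(line).readline)]
```

For an ordinary line like "a = 1" this yields the usual NAME/OP/NUMBER stream, while a shell-escape line like "!ls -l" still tokenizes: the "!" is emitted as its own token and "ls" as a NAME, which is what would let an extension layer pattern-match on the token stream instead of on raw strings.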