[IPython-dev] Testing headaches

Laurent Dufréchou laurent.dufrechou@gmail....
Wed Apr 15 16:47:17 CDT 2009

Just a stupid thought: how about a simple top-level test script?
It would test whether the modules are present and whether their versions are OK.
Depending on the result, it would propose a menu that runs the tests related to
the different parts.
Ex: wx, twisted present, no qt.

0/core test
1/wx  test
2/twisted related test

Note: missing test:
- qt test: missing qt-4.5 dependency

Perhaps this is a dumb approach, but it has the advantage of allowing us to
better control which tests will be run.
And a dumb user like me will know what he is testing. And he/I will not
complain because a test fails stupidly :)
Another advantage is that you can do more checks before launching the tests, to
verify that the installed system conforms to what the tests need, and thus explain to
the user the possible cause of a failure (bad nose version, etc.).

That's the easy part; I'm sure you will tell me that some parts need wx+twisted
:) What do you do in that case...


> -----Original Message-----
> From: ipython-dev-bounces@scipy.org [mailto:ipython-dev-
> bounces@scipy.org] On behalf of Brian Granger
> Sent: Wednesday, April 15, 2009 23:17
> To: IPython Development list
> Subject: [IPython-dev] Testing headaches
> Hi,
> As we settle into the new workflow (code reviews, etc.) one of the
> biggest things that keeps getting us is that our testing system seems
> very easy to break in really weird ways.  This is different from not
> having tests or having tests that simply fail.  In our case, we are
> seeing a number of things that actually break the test suite itself:
> * Both Ville and Jorgen have seen really odd things in the last few
> days.
> * I am seeing a nasty memory leak that is intermittent on OS X.
> * Twisted and nose are somehow very unhappy with each other.
> * Sometimes tests rely on dependencies (wx, twisted, etc.).  Those
> tests get run on systems that don't have the deps and they fail with
> ImportError.  I have tried to add code that tests for dependencies,
> but these keep creeping in.  The problem with this is that the person
> who finds the problem is not the person who wrote the code or tests.
> Part of the difficulty with all of these things is that debugging them
> is HELL.  Often it is difficult to even begin to see where the
> problem is coming from.
> I am wondering what we can do to make our testing framework more
> robust.  We need an approach that can run tests in a more isolated
> manner so we immediately know where such problems are coming from.
> Also, we need to come up with a uniform and consistent way of handling
> dependencies in tests.
> Thoughts?
> Brian
> _______________________________________________
> IPython-dev mailing list
> IPython-dev@scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
