[IPython-dev] Introduction to contributing to IPython
Sat Feb 25 20:01:52 CST 2012
On Fri, Feb 24, 2012 at 2:45 PM, Thomas Kluyver <firstname.lastname@example.org> wrote:
> Just a quick reminder about this: I'll be preparing at the weekend, so
> any tips would be greatly appreciated!
Mmh, it's unfortunately true that we've done a poor job with
organizing small, newcomer-friendly tasks. The big event I had of
this kind was in India, and because of this problem I ended up
directing most people to just do docstring fixes in magics. This
isn't necessarily a bad idea, BTW, for people who may be completely
new to the github workflow, as it's pretty safe and leads to pull
requests that are very easy to review. So you should keep it in mind
as an option in case you have some absolute newbies in the group. But
for more experienced developers, just fixing a docstring isn't really
that interesting.
A few other things that come to mind:
- Helping review existing PRs. This is genuinely useful: they can run
the test suite, report on it, and then comment on the code. It
requires reading the code, which is a good way to learn the codebase,
and it helps the project. Not super interesting perhaps, but doing one
or two before diving into a feature may be a good warmup.
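For anyone new to reviewing, the basic loop can be sketched like this
(the PR number 1234 is purely hypothetical; GitHub exposes each pull
request's head under refs/pull/<n>/head):

```shell
# Fetch the PR's head onto a local review branch (1234 is a made-up number)
git fetch origin pull/1234/head:review-1234
git checkout review-1234
# Run IPython's test suite and note any failures in your review comment
iptest
```

After the tests finish, report the results and any code comments
directly on the PR page.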
- Starting to write the notebook export code. My quick script
(https://gist.github.com/1569580) is just a start, but this is a nice
isolated project that would be very useful. I'd call it 'nbexport' to
start with, and later we can turn it into a standalone sub-command
'ipython nbexport'. A reasonably modular architecture with options to
export to rest for sphinx (html and latex), pure docutils html, and
direct latex, would be great to have.
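As a rough illustration of what the core of such an 'nbexport' could
look like, here is a minimal sketch that walks a notebook dict and
emits reST, assuming the v3-style layout (worksheets containing cells
with 'cell_type', 'input'/'source' fields); the function names are my
own, not part of any existing code:

```python
import json

def nb_to_rst(nb):
    """Convert a notebook dict (v3-style format assumed) to a reST string."""
    lines = []
    for ws in nb.get("worksheets", []):
        for cell in ws["cells"]:
            if cell["cell_type"] == "code":
                # Render the code cell's input as a reST literal block
                lines.append("::")
                lines.append("")
                for ln in cell["input"].splitlines():
                    lines.append("    " + ln)
                lines.append("")
            elif cell["cell_type"] == "markdown":
                # Pass markdown text through for now; a real exporter would
                # translate markdown constructs into proper reST
                lines.append(cell["source"])
                lines.append("")
    return "\n".join(lines)

def nbexport(path):
    """Load a .ipynb file and return its reST rendering."""
    with open(path) as f:
        return nb_to_rst(json.load(f))
```

The modular part would come from swapping nb_to_rst for nb_to_html or
nb_to_latex writers behind a common interface.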
- Starting something for testing the web notebook with selenium.
Stefan and a Berkeley student have been planning to tackle this one,
but if you get a head start on the problem, it would be great. We
*really* need to get a grip on testing the notebook before long.
- Integrating the sympy bot with our PRs. I'd *love* it if every PR
automatically had a link to a report from the bot with the test
results. Initially it could be just on one platform, though
ultimately the best would be to have a summary of tests on various
buildbots. This would make the review process even quicker, as one
would know that the tests are already in good shape. Aaron gave a
great description of their system a few weeks ago.
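For the reporting half, the bot would post its results through the
GitHub API (PR comments go through the issues endpoint). A minimal
sketch of building that request, without actually sending it, might
look like this; the function name and the report URL are hypothetical:

```python
import json

API_ROOT = "https://api.github.com"

def build_comment_request(owner, repo, pr_number, report_url, passed):
    """Build the endpoint URL and JSON body for posting a test report
    as a comment on a PR (GitHub files PR comments under the issue
    with the same number)."""
    status = "passed" if passed else "FAILED"
    body = "Test results: %s. Full report: %s" % (status, report_url)
    url = "%s/repos/%s/%s/issues/%d/comments" % (
        API_ROOT, owner, repo, pr_number)
    return url, json.dumps({"body": body})
```

Actually sending it would require an authenticated POST with the
bot's API token in the request headers.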
- Tagging groups and tests in our test suite with 'slow', so that it's
easy to have a 'quick tests' option in addition to the full suite.
Eventually I hope we'll have three levels: default, 'quick' and
'full', where default takes at most 2 minutes, quick is < 30s, and
'full' takes however long is necessary.
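The tagging itself can be very lightweight. A sketch of the idea,
assuming a plain attribute-based scheme (nose's attrib plugin offers
the same thing via @attr('slow') plus its -a/-A selection flags):

```python
def slow(func):
    """Mark a test as slow by setting an attribute the runner can filter on."""
    func.slow = True
    return func

def select(tests, include_slow=True):
    """Return the tests to run: everything, or only the quick ones."""
    if include_slow:
        return list(tests)
    return [t for t in tests if not getattr(t, "slow", False)]

@slow
def test_full_notebook_roundtrip():
    pass

def test_smoke():
    pass
```

The three levels (quick / default / full) would then just be three
different selection predicates over the same tagged suite.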
I'm afraid these aren't all super newbie-level, unfortunately, but
it's the best I can come up with right now...
Anyone else have other ideas?