Sun Jan 29 17:00:13 CST 2012
I'd like to clarify something about the SymPy review bot. Our review
bot is entirely distributed: all test runs are performed by people on
their own computers. This is most easily seen in the summary that is
posted as a comment on the pull request. For example, one such summary
was run by the user @goodok; it and the one below it show the
difference between passing tests and failing tests.
This system has actually been working pretty well for us. The bot
makes it very easy to run the tests. If you set up a config file with
your GitHub API key (so that you don't have to enter your password),
the whole thing requires a single command (./sympy-bot review
<pull request number>). This downloads the pull request into a
temporary directory, merges with the git master, runs 2to3 if the
chosen interpreter is Python 3, runs the tests, generates a report,
and uploads it to review.sympy.org and to the pull request. Not only
does it automate the whole process of pulling down the branch to run
the tests, but the results are also posted to the review site and to
GitHub, so they
are publicly viewable. We have a general rule that no pull request
should be merged unless there is a passing sympy-bot review after the
last commit in the GitHub discussion.
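The steps that one command performs can be sketched roughly as follows
(a minimal sketch, not sympy-bot's actual internals; the function names
and the refspec-based fetch are my own illustration):

```python
import subprocess

def review_commands(pr_number, interpreter="python"):
    """Build the command sequence for reviewing a pull request.

    A sketch of the flow described above (fetch the PR branch, merge
    master, optionally run 2to3, run the tests); not sympy-bot's real
    code.
    """
    cmds = [
        # Download the pull request branch into the working clone.
        ["git", "fetch", "origin",
         "pull/%d/head:pr-%d" % (pr_number, pr_number)],
        ["git", "checkout", "pr-%d" % pr_number],
        # Merge with git master before testing.
        ["git", "merge", "master"],
    ]
    if interpreter.startswith("python3"):
        # Translate the sources with 2to3 when testing under Python 3.
        cmds.append(["2to3", "-w", "."])
    # Run the test suite; the exit code decides pass/fail.
    cmds.append([interpreter, "-c",
                 "import sys, sympy; sys.exit(not sympy.test())"])
    return cmds

def run_review(pr_number, interpreter="python", cwd="."):
    """Execute the review steps in order.  A real bot would also
    generate a report and upload it to reviews.sympy.org and to the
    pull request."""
    for cmd in review_commands(pr_number, interpreter):
        subprocess.check_call(cmd, cwd=cwd)
```

The 2to3 step only appears when a Python 3 interpreter is chosen, which
matches the flow described above.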
There are plans to extend it to work with a centralized testing
server. Our idea was to set up a dispatch system on App Engine that
keeps track of which pull requests need to be reviewed and prioritizes
them based on some heuristics. Anybody could then just run ./sympy-bot
work, and this would query the server for a pull request to review,
review it, and repeat. It would then be simple to run ./sympy-bot work
on a server (or even on your home computer while you sleep). But this
has not yet been implemented (patches welcome!).
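The planned worker loop might have looked something like this. This is
hypothetical, since the dispatch server was never built; fetch_next and
review stand in for the unimplemented server protocol, and a real
long-running worker would sleep and poll rather than stop when the
queue is empty:

```python
def work_loop(fetch_next, review):
    """Drain a dispatch queue of pull requests.

    fetch_next() returns the next PR number the server wants reviewed,
    or None when nothing is queued; review(pr) runs the tests and
    uploads the report.  Both are stand-ins for the unimplemented
    dispatch protocol.
    """
    reviewed = []
    while True:
        pr = fetch_next()
        if pr is None:
            # A long-running worker would sleep and poll again here
            # instead of returning.
            break
        review(pr)
        reviewed.append(pr)
    return reviewed
```

A server, or a home machine left on overnight, would simply keep
calling this.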
We also planned to have the bot apply a label to the pull request in the
GitHub issue tracker to indicate if the tests passed or not, but this
has not been implemented yet either.
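For the labeling idea, pull requests are labeled through GitHub's
issues API, which accepts a POST of label names. A sketch of how the
bot might do it; the label names and token handling here are made up,
not anything sympy-bot actually uses:

```python
import json
import urllib.request

def label_request(repo, pr_number, passed, token):
    """Build the GitHub API request that would label a pull request.

    Labels on pull requests go through the issues API; the label
    names here are illustrative, not real sympy-bot labels.
    """
    label = "tests-passed" if passed else "tests-failed"
    url = "https://api.github.com/repos/%s/issues/%d/labels" % (repo, pr_number)
    return urllib.request.Request(
        url,
        data=json.dumps([label]).encode("utf-8"),
        headers={"Authorization": "token %s" % token,
                 "Content-Type": "application/json"},
        method="POST",
    )

# Sending it would be one call:
#   urllib.request.urlopen(label_request("sympy/sympy", 1234, True, token))
```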
Setting it up to work with IPython should not be difficult. You'll
probably have to factor out some SymPy-specific code, but that
won't be hard.
On Sat, Jan 28, 2012 at 5:54 PM, Fernando Perez <firstname.lastname@example.org> wrote:
> On Sat, Jan 28, 2012 at 3:54 PM, MinRK <email@example.com> wrote:
>> 1. has the advantage of being automatic rather than voluntary (less
>> vulnerable to "There's no way this tiny change would break anything"),
>> but 2. fits better in our current pattern.
> I think I prefer 2 also because "all tests pass" is still not a
> guarantee of correctness, especially in light of our spotty coverage
> in some areas. So I prefer to keep a human at the wheel, but the more
> we can do to give that human all the information to streamline the
> process, the better.
> The sympy review page is really awesome (btw, the correct link is
> http://reviews.sympy.org). There are two things I'd like on top of
> it:
> 1. A way for authorized users to request a refresh of a specific PR,
> that would both recheck merge status and rerun the tests. Maybe they
> have that already and it's just not visible on the page.
> 2. On the main page, a status indicator in the list indicating whether
> tests passed or not. That way you could see at a glance which ones to
> focus on first.
> But that is a beautiful system... It would be great to set it up for
> ipython as well... Their code is here:
> IPython-dev mailing list