[IPython-User] ipython parallel
Fri Jun 22 12:14:20 CDT 2012
On 2012-06-22, at 12:15 PM, Harald Schilly wrote:
> +1 to the ipython parallel praise :)
> On Fri, Jun 22, 2012 at 6:03 PM, Wolfgang Kerzendorf
> <firstname.lastname@example.org> wrote:
> If I'm not mistaken, you can put this into the config file. look into
> For my application, I created my own profile with its own config
> files. That's really great, too.
Can you send me your config file? It would be great to have an example to work from.
>> I would like to preload this data on all of the engines, so I can just access it as a global variable. How do I do this?
> The cluster I use has an NFS filesystem. I think that is what you
> want. It's much easier to deal with this (+ you'll need shared user
> accounts, of course)
That's not quite what I mean. I want to run a function f(a, b) a thousand times, where a = arange(1000) and b is a constant shared by all 1000 tasks: a 100 MB NumPy array. I don't want to send b with every task; I want to send it once to all engines and then reuse it each time. I also don't want to load it every time from an NFS filesystem.
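The "send once, reuse per task" pattern can be sketched roughly as follows. The core idea is that the constant is pushed into each engine's global namespace a single time, and the task function just looks it up there instead of carrying it in its arguments. The cluster-dependent calls are shown as comments since they need a running cluster; the local stand-in below uses a tiny array in place of the 100 MB one.

```python
# Sketch of the "push once, reuse as a global" pattern, using the
# IPython.parallel API of that era. Names and sizes here are illustrative.
import numpy as np

b = np.arange(5, dtype=float)        # local stand-in for the 100 MB constant

def f(a):
    # b is resolved from the (engine's) global namespace,
    # so it is NOT serialized and re-sent with every task
    return a * b.sum()

# On a live cluster the same pattern would look roughly like:
#   from IPython.parallel import Client
#   rc = Client()
#   rc[:].push({'b': b}, block=True)          # ship b to every engine once
#   results = rc.load_balanced_view().map(f, np.arange(1000))

print(f(2.0))   # locally: 2.0 * (0+1+2+3+4) = 20.0
```

Only the small per-task argument `a` travels over the wire for each call; `b` stays resident on the engines for the lifetime of the session.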
>> I suggest making a little easy config file that contains IP addresses, the number of engines to start, and the connection method, to go along with ipcluster. ipcluster would then start up engines on the specified IP addresses and build a cluster.
> In my config file, mentioned above, I have a small loop that generates
> all the names of the machines.
> ipcluster start --profile=<name> then does the rest :)
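A profile config of the kind described above might look something like this sketch of an `ipcluster_config.py` for SSH-launched engines (created via `ipython profile create <name> --parallel`). The host names, engine counts, and the exact launcher option names are assumptions here; option spellings vary between IPython versions, so check your version's generated config file.

```python
# Hypothetical ipcluster_config.py fragment for an SSH-based profile.
# Hosts and counts below are made up for illustration.
c = get_config()

# Launch engines (and the controller) over SSH; the exact launcher class
# names depend on the IPython version in use.
c.IPClusterEngines.engine_launcher_class = 'SSH'
c.IPClusterStart.controller_launcher_class = 'SSH'

# Small loop generating the machine names, as mentioned above:
engines = {}
for i in range(1, 9):                # node01 .. node08
    engines['node%02d' % i] = 4     # 4 engines per host
c.SSHEngineSetLauncher.engines = engines
```

With that in place, `ipcluster start --profile=<name>` reads the profile and brings up the controller plus all listed engines.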
If you could send it (with private information replaced by xxx's), that would be awesome.