[Numpy-discussion] Revisiting numpy/scipy on 64 bit OSX

Michael Abshoff michael.abshoff@googlemail....
Fri Aug 22 11:34:33 CDT 2008

Robert Kern wrote:
> On Fri, Aug 22, 2008 at 07:00, Chris Kees
> <christopher.e.kees@usace.army.mil> wrote:


>> I've been experimenting with both a non-framework, non-universal 64-bit
>> build and a 4-way universal build of the python (2.6) trunk with numpy
>> 1.1.1. The non-framework 64 build appears to give me exactly the same
>> results from numpy.test() as the standard 32-bit version (as well as
>> allowing large arrays like numpy.zeros((1000,1000,1000),'d') ), which is
>> <unittest._TextTestResult run=1300 errors=0 failures=0>

I used a gcc 4.2.4 built from sources, and the last time I used XCode 
(plus a gfortran built from 4.2.3 sources) most things went fine. 
Note that I am using Python 2.5.2 and that I had to stop configure.in 
from adding some default BASEFLAGS, since those flags only exist in 
the Apple version of gcc. This was also numpy and scipy svn tip :)

>> Our numerical models also seem to run fine with it using 8-16G. The 4-way
>> universal python gives the same results in 32-bit  but when running in
>> 64-bit I get an error in the tests below, which I haven't had time to look
>> at.  It also gives the error
>>>>> a = numpy.zeros((1000,1000,1000),'d')
>> Traceback (most recent call last):
>>  File "<stdin>", line 1, in <module>
>> ValueError: dimensions too large.
> Much of our configuration occurs by compiling small C programs and
> executing them. Probably, one of these got run in 32-bit mode, and
> that fooled the numpy build into thinking that it was for 32-bit only.
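A quick way to spot this kind of mismatch at runtime (my own sketch, not 
part of numpy's build scripts) is to compare the interpreter's pointer 
width against numpy's index type, which is what caps array sizes:

```python
import struct

import numpy as np

# Pointer width of the running interpreter: 32 or 64 bits.
pointer_bits = struct.calcsize("P") * 8

# numpy's index type (intp) should match. If a 32-bit configure probe
# fooled the build, intp is 4 bytes and arrays are capped near 2**31
# elements regardless of how much memory the process can address.
intp_bits = np.dtype(np.intp).itemsize * 8

print("interpreter:", pointer_bits, "bit; np.intp:", intp_bits, "bit")
```

On a correctly configured build the two numbers agree; a 64-bit 
interpreter reporting a 32-bit intp reproduces the "dimensions too 
large" failure above.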

Yeah, building universal binaries is still fraught with issues, since 
much code out there uses values derived at configure time for endianness 
and other constants. IIRC Apple patches Python to turn some of those 
constants into functions, but that recollection might be wrong in the 
case of Python. These days I usually use lipo to make universal binaries 
to get around that limitation, but that means four compilations instead 
of two.
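To illustrate the kind of value that goes stale in a universal build 
(a sketch of the idea, not Apple's actual patch): byte order can always 
be derived at runtime instead of being frozen into a configure-time 
constant:

```python
import struct
import sys

# Byte order as seen by the running interpreter -- valid for whichever
# architecture slice is actually executing.
print(sys.byteorder)  # 'little' on Intel, 'big' on PowerPC

# The same fact derived from raw bytes: native order ('=') matches
# little-endian order ('<') iff packing an int yields identical bytes.
native_is_little = struct.pack("=I", 1) == struct.pack("<I", 1)
print(native_is_little)
```

A configure script that hardcodes the answer it got on the build machine 
bakes the wrong value into the other slices of a fat binary, which is 
exactly why per-arch builds combined with lipo sidestep the problem.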

My main goal in all of this is a 64-bit Sage on OSX (which I am 
reasonably close to getting fully working), but due to the 
above-mentioned problems, for example with gmp, it seems unlikely that 
I can produce a universal version directly, and lipo is a way out of 
this.

> Unfortunately, what you are trying to do is tantamount to
> cross-compiling, and neither distutils nor the additions we have built
> on top of it work very well with cross-compiling. It's possible that
> we could special case the configuration on OS X, though. Instead of
> trusting the results of the executables, we can probably recognize
> each of the 4 OS X variants through #ifdefs and reset the discovered
> results. This isn't easily extended to all platforms (which is why we
> went with the executable approach in the first place), but OS X on
> both 32-bit and 64-bit will be increasingly common but still
> manageable. I would welcome contributions in this area.

I am actually fairly confident that 64-bit Intel will dominate the Apple 
userbase in the short term. Every laptop and workstation Apple has sold 
now and over the last two years or so is 64-bit capable, and for console 
applications OSX 10.4 or higher will do. So I see little benefit in 
doing 32-bit on OSX except for legacy support :). I know that Apple 
hardware tends to stick around longer, though, so even Sage will support 
32-bit OSX for a while.
