[Numpy-discussion] Proposed Roadmap Overview

Nathaniel Smith njs@pobox....
Sun Feb 19 17:39:45 CST 2012

On Sun, Feb 19, 2012 at 7:13 PM, Mark Wiebe <mwwiebe@gmail.com> wrote:
> On Sun, Feb 19, 2012 at 5:25 AM, Nathaniel Smith <njs@pobox.com> wrote:
>> Precompiled headers can help some, but require complex and highly
>> non-portable build-system support. (E.g., gcc's precompiled header
>> constraints are here:
>> http://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html -- only one
>> per source file, etc.)
> This doesn't look too bad, I think it would be worth setting these up in
> NumPy. The complexity you see is because it's pretty close to the only way
> that precompiled headers could be set up.

Sure, so long as you know what headers every file needs. (Or, more
likely, figure out a more-or-less complete set of all the headers any
file might ever need, and then -include that set into every file.)

>> To demonstrate: a trivial hello-world in C using <stdio.h>, versus a
>> trivial version in C++ using <iostream>.
>> On my laptop (gcc 4.5.2), compiling each program 100 times in a loop
>> requires:
>>  C: 2.28 CPU seconds
>>  C compiled with C++ compiler: 4.61 CPU seconds
>>  C++: 17.66 CPU seconds
>> Slowdown for using g++ instead of gcc: 2.0x
>> Slowdown for using C++ standard library: 3.8x
>> Total C++ penalty: 7.8x
>> Lines of code compiled in each case:
>>  $ gcc -E hello.c | wc
>>      855    2039   16934
>>  $ g++ -E hello.cc | wc
>>    18569   40994  437954
>> (I.e., the C++ hello world is almost half a megabyte.)
>> Of course we won't be using <iostream>, but <vector>, <unordered_map>
>> etc. all have the same basic character.
> Thanks for doing the benchmark. It is a bit artificial, however, and when I
> tried these trivial examples with -O0 and -O2, the difference (in gcc 4.7)
> in the C++ compile time was about 4%. In NumPy as it presently is in C, the
> difference between -O0 and -O2 is very significant, and any comparisons need
> to take this kind of thing into account. When I said I thought the
> compile-time differences would be smaller than many people expect, I was
> thinking about how this optimization phase, which is shared between C and
> C++, often dominates the compile times.

Sure -- but the effective increase in code size for STL-using C++
affects the optimizer too; it's effectively re-optimizing all the used
parts of the STL again for each source file. (Presumably in this benchmark
that half megabyte of extra code is mostly unused, and therefore
gets thrown out before the optimizer does any work on it -- but
that doesn't happen if you're actually using the library!) Maybe
things have gotten better in the last year or two, I don't know; if you
run a better benchmark I'll listen. But there's an order-of-magnitude
difference in compile times between most real-world C projects and
most real-world C++ projects. It might not be a deal-breaker and it
might not apply to the subset of C++ you're planning to use, but AFAICT
those are the facts.

-- Nathaniel

More information about the NumPy-Discussion mailing list