[Numpy-discussion] one-offset arrays
nodwell at physics.ubc.ca
Wed Aug 29 00:00:50 CDT 2001
>> Does anyone have a simple method of using one-offset arrays?
>> Specifically, can I define an array "a" so that a refers to the
>> first element?
> Inherit from the UserArray and redefine slicing to your heart's content.
>> Without one-offset indexing, it seems to me that Python is minimally
>> useful for numerical computations. Many, perhaps the majority, of
>> numerical algorithms are one-indexed. Matlab for example is one-based
>> for this reason. In fact it seems strange to me that a "high-level"
>> language like Python should use zero-offset lists.
> Wow, that is a pretty condescending-sounding statement, though I'm sure you
> didn't mean it that way. Python is indeed being used for quite serious
> numerical computations. I have been using Python for quite some time for
> numerical work and find its zero-based indexing combined with the
> leave-last-one-out slicing notation to be much more convenient.
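The zero-based, leave-last-one-out convention praised above can be illustrated with a minimal sketch (the data here is made up for the example):

```python
# Zero-based, half-open ("leave-last-one-out") slicing in Python.
a = [10, 20, 30, 40, 50]

# a[i:j] takes elements i, i+1, ..., j-1, so len(a[i:j]) == j - i.
assert a[0:3] == [10, 20, 30]

# Adjacent slices tile the list with no overlap and no gap,
# which is the convenience being defended here:
k = 2
assert a[:k] + a[k:] == a
```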
Oops, what I really need is a wet-ware (i.e. brain) macro which
enforces the correct order of the sequence (think, write, send)! The
above unconsidered comments arose from executing those steps in the
wrong order.
Obviously there is a lot of numerical work being done with Python and
people are very happy with it. But I still think that, for me, it
would be "minimally useful" without 1-indexed arrays. Here's why:
In my view, the most important reason to prefer 1-based indexing
over 0-based indexing is compatibility. For numerical work, some of
the languages which I use or have used are Matlab, Mathematica, Maple
and Fortran. These are all 1-indexed. (C is by nature 0-indexed
because it is so close to machine architecture, but with a little bit
of not-entirely-clean pointer manipulation, you can easily make
1-indexed arrays and matrices.) Obviously Python can't be compatible
with these languages in a strict sense, but like most people who do
some programming work, I've built up a library of my own commonly used
routines specific to my work; in general it's a trivial matter to
translate numerical routines from one language to another if
translation is just a matter of substituting one set of syntactic
symbols and function names for another. However it can be damn tricky
to convert 1-indexed code to 0-indexed code or vice versa without
introducing any errors - believe me! (Yes it's possible to call nearly
any language from nearly any other language these days so in theory
you don't have to recode, but there are lots of reasons why often
recoding is the preferable route.)
The other reason for choosing 1-based indexing is to keep the code as
near to the standard notation as possible. This is one of the
advantages of using a high level language - it approximates the way
you think about things instead of the way the computer organizes
them. Of course, this can go either way depending on the quantity in
question: as a simple example a spatial vector (x,y,z) is
conventionally labelled 1,2,3 (1-indexed), but a relativistic
four-vector with time included (t,x,y,z) is conventionally labelled
0,1,2,3 (0-indexed). So ideally one would be able to choose the
indexing-type case-by-case. I'm sure that computer programmers will
argue vehemently that code which mixes both 0-indexed and 1-indexed
arrays is hard to understand and maintain, but for experts in a
particular field who are accustomed to certain ingrained notations, it
is the code which breaks the conventional notation which is most
error-prone. In my case, I'm dealing at the moment with crystal
structures that have certain conventional sets of vectors and
tensors associated with them - all 1-indexed by convention. I find it a
delicate enough task to always get the correct vector or tensor
without having to remember that d is actually d3. Defining d1, d2, d3
is not convenient because often the index itself needs to be a
variable.
I guess if I understood the reason for 0-indexed lists and tuples in
Python I would be happier. In normal, everyday usage, sets,
collections and lists are 1-indexed (the first item, the second item,
the third item, and so on). Python is otherwise such an elegant and
natural language. Why the ugly exception of making the user conform to
the underlying mechanism of an array being an address plus an offset?
All this is really neither here nor there, since this debate, at least
as far as Python is concerned, was probably settled 10 years ago and
I'm sure nobody wants to hear anything more about it at this point.
As you point out, I can define my own array type with inheritance. I
will also need my own range command and several other functions which
haven't occurred to me yet. I was hoping that there would be a standard
module to implement this.
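In the absence of such a module, the index arithmetic involved is simple enough to sketch. The following is not Numeric's UserArray itself, just a standalone plain-Python sketch (the class name OneArray is made up) of what a 1-offset wrapper would have to do:

```python
# Plain-Python sketch of a 1-offset sequence. The real suggestion in this
# thread is to subclass Numeric's UserArray; this standalone class just
# shows the index shifting such a subclass would perform.
class OneArray:
    def __init__(self, data):
        self.data = list(data)

    def _shift(self, i):
        # Map a 1-based index onto the 0-based underlying list;
        # negative indices keep their usual Python meaning.
        if i > 0:
            return i - 1
        if i == 0:
            raise IndexError("index 0 is invalid in a 1-offset array")
        return i

    def __getitem__(self, i):
        if isinstance(i, slice):
            start = None if i.start is None else self._shift(i.start)
            stop = None if i.stop is None else self._shift(i.stop)
            return OneArray(self.data[start:stop:i.step])
        return self.data[self._shift(i)]

    def __setitem__(self, i, value):
        self.data[self._shift(i)] = value

    def __len__(self):
        return len(self.data)

a = OneArray([10, 20, 30])
assert a[1] == 10 and a[3] == 30
```

Note that slices here stay half-open (a[1:3] yields two elements, not three as in Matlab), and as the text says, a matching 1-based range function and several other helpers would also be needed.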
By the way, what is leave-last-one-out slicing? Is it … or is it …
or is it something else?
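Whatever the two candidate readings were, the phrase presumably refers to Python's half-open slice convention, in which the endpoint of a[i:j] is left out. A quick sketch of the usual idioms (illustrative list only):

```python
# Python slices are half-open: a[i:j] includes a[i] but leaves out a[j].
a = [1, 2, 3, 4]

assert a[:-1] == [1, 2, 3]    # everything but the last element
assert a[1:] == [2, 3, 4]     # everything but the first element
assert a[0:len(a)] == a       # the endpoint itself is never included
```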