[Numpy-discussion] [Python-3000] PEP 31XX: A Type Hierarchy for Numbers (and other algebraic entities)
Guido van Rossum
guido@python....
Sun Apr 29 17:46:58 CDT 2007
On 4/29/07, Jeffrey Yasskin <jyasskin@gmail.com> wrote:
> On 4/28/07, Baptiste Carvello <baptiste13@altern.org> wrote:
> > 2) In the PEP, the concepts are used *inconsistently*. Complex derives from Ring
> > because the set of complex numbers *is a* ring. Int derives from Complex because
> > integer are complex numbers (or, alternatively, the set of integers *is included
> > in* the set of complex numbers). The consistent way could be to make the Complex
> > class an instance of Ring, not a subclass.
>
> Good point. In this structure, isinstance(3, Ring) really means that 3
> is a member of some (unspecified) ring, not that 3 isa Ring,
To ask whether x is a Ring, you'd use issubclass(x, Ring). Now, in a
different context (still in Python) you might define a class Ring
whose *instances* are rings; but in the current draft of PEP 3141,
Ring is a class whose subclasses are rings.
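Concretely (a sketch using the abc machinery as it eventually shipped; the Ring class here is illustrative, not the PEP's actual code):

```python
from abc import ABCMeta

class Ring(metaclass=ABCMeta):
    """ABC in the current-draft sense: each *subclass* is a ring."""

# Declare that int and complex are rings (their instances are elements).
Ring.register(int)
Ring.register(complex)

print(issubclass(int, Ring))   # True: the class int "is a" Ring
print(isinstance(3, Ring))     # True: 3 is a member of some ring
print(issubclass(str, Ring))   # False: str was never registered
```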
[BTW "isa" is not an English word, nor used anywhere in Python. If
"isa" means "is a" you might as well write proper English; if it has
additional connotations, they're likely lost to this crowd so you
can't count on your readers knowing the distinction.]
> but the ambiguity there is probably the root cause of the problem with
> mixed-mode operations.
(Where by mixed-mode operations you mean e.g. trying to add or
multiply members of two *different* rings, right?)
> We should also say that isinstance(3, Complex)
> means that 3 is a member of some subring of the complex numbers, which
> preserves the claim that Complex is a subtype of Ring.
Hm... it's beginning to look like binary operations in the
mathematical sense just don't have an exact equivalent in OO type
theory. (Or vice versa.) There really isn't a way to define an
operation binop(a: T, b: T) -> T in such a way that it is clear what
should happen if a and b are members of two different subtypes of T,
named T1 and T2. Classic OO seems to indicate that this must be
defined, and there are plenty of examples that seem to agree, e.g.
when T1 and T2 are trivial subtypes of T (maybe each adding a
different inessential method). OTOH we have the counter-example where
T==Ring and T1 and T2 are two different, unrelated Rings, and the
result may not be defined.
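Both cases are easy to exhibit (a hypothetical sketch; Mod5 and Mod7 stand in for two unrelated rings):

```python
# Case 1: trivial subtypes of a common type -- mixed binop is well defined,
# the result simply falls back to the base type.
class T1(int): pass
class T2(int): pass
assert T1(2) + T2(3) == 5

# Case 2: two unrelated rings -- the mixed operation has no sensible meaning.
class Mod5:
    def __init__(self, n): self.n = n % 5
    def __add__(self, other):
        if isinstance(other, Mod5):
            return Mod5(self.n + other.n)
        return NotImplemented          # refuse cross-ring arithmetic

class Mod7:
    def __init__(self, n): self.n = n % 7
    def __add__(self, other):
        if isinstance(other, Mod7):
            return Mod7(self.n + other.n)
        return NotImplemented

try:
    Mod5(3) + Mod7(4)
except TypeError:
    print("cross-ring addition is undefined")
```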
Hmm... Maybe the conclusion to draw from this is that we shouldn't
make Ring a class? Maybe it ought to be a metaclass, so we could ask
isinstance(Complex, Ring)?
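That would look something like this (a sketch; deriving the metaclass from ABCMeta is one plausible choice, not a settled design):

```python
from abc import ABCMeta

class Ring(ABCMeta):
    """Metaclass: its *instances* (i.e. classes built from it) are rings."""

class Complex(metaclass=Ring):
    """The class of complex numbers, itself an instance of Ring."""

print(isinstance(Complex, Ring))   # True: the *class* Complex is a ring
print(isinstance(3 + 4j, Ring))    # False: a value is not itself a ring
```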
Perhaps a similar line of reasoning might apply to PartiallyOrdered
and TotallyOrdered.
> Up to here, things make sense, but because of how ABCs work, we need
> issubclass(rational, Complex). I suppose that's true too, since
> isinstance(3.4, rational) means "3.4 is a member of the rational
> subring of the complex numbers", which implies that "3.4 is a member
> of some subring of the complex numbers."
>
> There may be better names for these concepts. Perhaps suffixing every
> numeric ABC with "Element"? Do you have suggestions?
Maybe we should stop trying to capture radically different
mathematical number systems using classes or types, and limit
ourselves to capturing the systems one learns in high school: C, R, Q,
Z, and (perhaps) N (really N0). The concrete types would be complex <:
C, float <: R, Decimal <: R, int <: Z. NumPy would have many more. One
could
argue that float and Decimal are <:Q, but I'm not sure if that makes
things better pragmatically; I guess I'm coming from the old Algol
school where float was actually called real (and in retrospect I wish
I'd called it that in Python). I'd rather reserve membership of Q for
an infinite precision rational type (which many people have
independently implemented).
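That high-school tower could be sketched as a chain of ABCs with the concrete types registered onto the appropriate level (illustrative names; the stdlib's Fraction, which landed later, plays the infinite-precision rational role here):

```python
from abc import ABCMeta
from decimal import Decimal
from fractions import Fraction

class C(metaclass=ABCMeta): pass   # complex numbers
class R(C): pass                   # reals, a subring of C
class Q(R): pass                   # rationals
class Z(Q): pass                   # integers

C.register(complex)
R.register(float)
R.register(Decimal)                # kept out of Q, per the argument above
Q.register(Fraction)               # an infinite-precision rational type
Z.register(int)

assert isinstance(3, C)            # an int is a member of some subring of C
assert not isinstance(1.5, Q)      # float deliberately stops at R
```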
> Jason Orendorff points out that Haskell typeclasses capture the fact
> that complex is an instance of Ring. I think imitating them as much as
> possible would indeed imply making the numeric ABCs into metaclasses
> (in Haskell terminology, "kinds"). To tell if the arguments to a
> function were in the same total order, you'd check if they had any
> common superclasses that were themselves instances of TotallyOrdered.
> I don't know enough about how metaclasses are typically used to know
> how that would conflict.
The more I think about it, it sounds like the right thing to do. To
take PartiallyOrdered (let's say PO for brevity) as an example, the
Set class should specify PO as a metaclass. The PO metaclass could
require that the class implement __lt__ and __le__. If it found a
class that didn't implement them, it could make the class abstract by
adding the missing methods to its __abstractmethods__ attribute. Or,
if it found that the class implemented one but not the other, it could
inject a default implementation of the other in terms of the one and
__eq__.
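A rough sketch of that injection step (hypothetical code; the __abstractmethods__ branch is omitted for brevity):

```python
class PO(type):
    """Metaclass for partially ordered classes: wants __lt__ and __le__."""
    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        has_lt, has_le = '__lt__' in ns, '__le__' in ns
        if has_lt and not has_le:
            # Inject a default __le__ in terms of __lt__ and __eq__.
            cls.__le__ = lambda self, other: self < other or self == other
        elif has_le and not has_lt:
            cls.__lt__ = lambda self, other: self <= other and self != other

class MySet(metaclass=PO):
    def __init__(self, elems): self.elems = frozenset(elems)
    def __eq__(self, other): return self.elems == other.elems
    def __lt__(self, other): return self.elems < other.elems  # proper subset

a, b = MySet({1}), MySet({1, 2})
print(a < b, a <= b)   # __le__ was injected by the metaclass
```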
This leaves us with the question of how to check whether an object is
partially orderable. Though that may really be the wrong question --
perhaps you should ask whether two objects are partially orderable
relative to each other. For that, you would first have to find the
most derived common base class (if that is even always a defined
operation(*)), and then check whether that class is an instance of PO.
It seems easier to just try the comparison -- duck typing isn't dead
yet! I don't think this is worth introducing a new inspection
primitive ('ismetainstance(x, PO)').
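In that duck-typing spirit, the "check" is just the attempted comparison (comparable() is a hypothetical helper, not a proposed primitive):

```python
def comparable(x, y):
    """True if x and y appear to be partially ordered w.r.t. each other."""
    try:
        x < y              # just try it -- duck typing isn't dead yet
    except TypeError:
        return False
    return True

print(comparable(1, 2.5))   # int and float share an order
print(comparable(1, "a"))   # unorderable in Python 3: TypeError is caught
```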
The PO class may still be useful for introspection: at the meta-level,
it may be useful occasionally to insist that or inquire whether a
given *class* is PO. (Or TO, or a Ring, etc.)
Now, you could argue that Complex should also be a metaclass. While
that may be mathematically meaningful (for all I know there are people
doing complex number theory using Complex[Z/n]), for Python's numeric
classes I think it's better to make Complex a regular class
representing all the usual complex numbers (i.e. a pair of Real
numbers). I expect that the complex subclasses used in practice are
all happy under mixed arithmetic using the usual definition of mixed
arithmetic: convert both arguments to a common base class and compute
the operation in that domain.
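That usual definition can be spelled out for the builtins (a sketch; promote() is a hypothetical name for the coercion step, and it assumes exactly the types int, float, complex):

```python
def promote(a, b):
    """Coerce two numbers to a common domain; the widest type wins."""
    tower = (int, float, complex)          # narrow -> wide
    target = tower[max(tower.index(type(a)), tower.index(type(b)))]
    return target(a), target(b)

def add(a, b):
    a, b = promote(a, b)
    return a + b                           # computed in the common domain

print(add(3, 4.5))       # both promoted to float
print(add(2, 1 + 1j))    # both promoted to complex
```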
(*) consider classes AB derived from (B, A) and BA derived from (A,
B). Would A or B be the most derived base class? Or would we have to
skip both and continue the search with A's and B's base classes?
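The footnote's ambiguity is easy to exhibit: walking each class's MRO for the first class that also appears in the other's gives conflicting answers.

```python
class A: pass
class B: pass
class AB(B, A): pass   # MRO: AB, B, A, object
class BA(A, B): pass   # MRO: BA, A, B, object

# First proper ancestor of AB that is also an ancestor of BA, and vice versa:
first_for_AB = next(c for c in AB.__mro__[1:] if c in BA.__mro__)   # B
first_for_BA = next(c for c in BA.__mro__[1:] if c in AB.__mro__)   # A
print(first_for_AB, first_for_BA)   # they disagree: no unique answer
```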
--
--Guido van Rossum (home page: http://www.python.org/~guido/)