[Numpy-svn] r5681 - in trunk: . doc doc/floatint doc/test numpy numpy/doc numpy/doc/reference

From: numpy-svn@scip...
Date: Sat Aug 23 18:20:13 CDT 2008


Author: stefan
Date: 2008-08-23 18:17:23 -0500 (Sat, 23 Aug 2008)
New Revision: 5681

Added:
   trunk/doc/
   trunk/doc/CAPI.txt
   trunk/doc/DISTUTILS.txt
   trunk/doc/EXAMPLE_DOCSTRING.txt
   trunk/doc/HOWTO_BUILD_DOCS.txt
   trunk/doc/HOWTO_DOCUMENT.txt
   trunk/doc/README.txt
   trunk/doc/cython/
   trunk/doc/example.py
   trunk/doc/floatint/
   trunk/doc/html/
   trunk/doc/newdtype_example/
   trunk/doc/npy-format.txt
   trunk/doc/pep_buffer.txt
   trunk/doc/pyrex/
   trunk/doc/records.txt
   trunk/doc/swig/
   trunk/doc/test/
   trunk/doc/ufuncs.txt
   trunk/numpy/doc/basics.py
   trunk/numpy/doc/broadcasting.py
   trunk/numpy/doc/creation.py
   trunk/numpy/doc/glossary.py
   trunk/numpy/doc/howtofind.py
   trunk/numpy/doc/indexing.py
   trunk/numpy/doc/internals.py
   trunk/numpy/doc/io.py
   trunk/numpy/doc/jargon.py
   trunk/numpy/doc/methods_vs_functions.py
   trunk/numpy/doc/misc.py
   trunk/numpy/doc/performance.py
   trunk/numpy/doc/structured_arrays.py
   trunk/numpy/doc/ufuncs.py
Removed:
   trunk/doc/floatint/__init__.py
   trunk/doc/test/Array.i
   trunk/doc/test/Array1.cxx
   trunk/doc/test/Array1.h
   trunk/doc/test/Array2.cxx
   trunk/doc/test/Array2.h
   trunk/doc/test/Farray.cxx
   trunk/doc/test/Farray.h
   trunk/doc/test/Farray.i
   trunk/doc/test/Fortran.cxx
   trunk/doc/test/Fortran.h
   trunk/doc/test/Fortran.i
   trunk/doc/test/Makefile
   trunk/doc/test/Matrix.cxx
   trunk/doc/test/Matrix.h
   trunk/doc/test/Matrix.i
   trunk/doc/test/Tensor.cxx
   trunk/doc/test/Tensor.h
   trunk/doc/test/Tensor.i
   trunk/doc/test/Vector.cxx
   trunk/doc/test/Vector.h
   trunk/doc/test/Vector.i
   trunk/doc/test/setup.py
   trunk/doc/test/testArray.py
   trunk/doc/test/testFarray.py
   trunk/doc/test/testFortran.py
   trunk/doc/test/testMatrix.py
   trunk/doc/test/testTensor.py
   trunk/doc/test/testVector.py
   trunk/numpy/doc/CAPI.txt
   trunk/numpy/doc/DISTUTILS.txt
   trunk/numpy/doc/EXAMPLE_DOCSTRING.txt
   trunk/numpy/doc/HOWTO_BUILD_DOCS.txt
   trunk/numpy/doc/HOWTO_DOCUMENT.txt
   trunk/numpy/doc/README.txt
   trunk/numpy/doc/cython/
   trunk/numpy/doc/example.py
   trunk/numpy/doc/html/
   trunk/numpy/doc/newdtype_example/
   trunk/numpy/doc/npy-format.txt
   trunk/numpy/doc/pep_buffer.txt
   trunk/numpy/doc/pyrex/
   trunk/numpy/doc/records.txt
   trunk/numpy/doc/reference/basics.py
   trunk/numpy/doc/reference/broadcasting.py
   trunk/numpy/doc/reference/creation.py
   trunk/numpy/doc/reference/glossary.py
   trunk/numpy/doc/reference/howtofind.py
   trunk/numpy/doc/reference/indexing.py
   trunk/numpy/doc/reference/internals.py
   trunk/numpy/doc/reference/io.py
   trunk/numpy/doc/reference/jargon.py
   trunk/numpy/doc/reference/methods_vs_functions.py
   trunk/numpy/doc/reference/misc.py
   trunk/numpy/doc/reference/performance.py
   trunk/numpy/doc/reference/structured_arrays.py
   trunk/numpy/doc/reference/ufuncs.py
   trunk/numpy/doc/swig/
   trunk/numpy/doc/ufuncs.txt
Modified:
   trunk/numpy/__init__.py
   trunk/numpy/doc/__init__.py
Log:
Move documentation outside of the source tree.  Remove the `doc` import from __init__.
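
The hunk for trunk/numpy/__init__.py is not reproduced in this message, so the
following is only a minimal sketch of what the change amounts to (the exact
line that was removed is an assumption, not taken from this diff): the package
stops importing its `doc` subpackage eagerly, and the topic modules that now
live under numpy/doc/*.py are imported on demand instead.

    # Hypothetical sketch of the numpy/__init__.py change -- not the actual hunk.
    # Before r5681 the package did something along the lines of:
    #     import doc            # assumed: eager import of the documentation topics
    # After r5681 that line is gone; the topics are loaded explicitly, e.g.:
    import numpy.doc            # the subpackage is still importable
    import numpy.doc.basics     # one of the topic modules added in this revision
    print(numpy.doc.basics.__doc__[:40])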


Copied: trunk/doc/CAPI.txt (from rev 5669, trunk/numpy/doc/CAPI.txt)

Copied: trunk/doc/DISTUTILS.txt (from rev 5669, trunk/numpy/doc/DISTUTILS.txt)

Copied: trunk/doc/EXAMPLE_DOCSTRING.txt (from rev 5669, trunk/numpy/doc/EXAMPLE_DOCSTRING.txt)

Copied: trunk/doc/HOWTO_BUILD_DOCS.txt (from rev 5669, trunk/numpy/doc/HOWTO_BUILD_DOCS.txt)

Copied: trunk/doc/HOWTO_DOCUMENT.txt (from rev 5669, trunk/numpy/doc/HOWTO_DOCUMENT.txt)

Copied: trunk/doc/README.txt (from rev 5669, trunk/numpy/doc/README.txt)

Copied: trunk/doc/cython (from rev 5669, trunk/numpy/doc/cython)

Copied: trunk/doc/example.py (from rev 5669, trunk/numpy/doc/example.py)

Copied: trunk/doc/floatint (from rev 5669, trunk/numpy/doc/newdtype_example/floatint)

Deleted: trunk/doc/floatint/__init__.py
===================================================================

Copied: trunk/doc/html (from rev 5669, trunk/numpy/doc/html)

Copied: trunk/doc/newdtype_example (from rev 5669, trunk/numpy/doc/newdtype_example)

Copied: trunk/doc/npy-format.txt (from rev 5669, trunk/numpy/doc/npy-format.txt)

Copied: trunk/doc/pep_buffer.txt (from rev 5669, trunk/numpy/doc/pep_buffer.txt)

Copied: trunk/doc/pyrex (from rev 5669, trunk/numpy/doc/pyrex)

Copied: trunk/doc/records.txt (from rev 5669, trunk/numpy/doc/records.txt)

Copied: trunk/doc/swig (from rev 5669, trunk/numpy/doc/swig)

Copied: trunk/doc/test (from rev 5669, trunk/numpy/doc/swig/test)

Deleted: trunk/doc/test/Array.i
===================================================================
--- trunk/numpy/doc/swig/test/Array.i	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Array.i	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,107 +0,0 @@
-// -*- c++ -*-
-
-%module Array
-
-%{
-#define SWIG_FILE_WITH_INIT
-#include "Array1.h"
-#include "Array2.h"
-%}
-
-// Get the NumPy typemaps
-%include "../numpy.i"
-
- // Get the STL typemaps
-%include "stl.i"
-
-// Handle standard exceptions
-%include "exception.i"
-%exception
-{
-  try
-  {
-    $action
-  }
-  catch (const std::invalid_argument& e)
-  {
-    SWIG_exception(SWIG_ValueError, e.what());
-  }
-  catch (const std::out_of_range& e)
-  {
-    SWIG_exception(SWIG_IndexError, e.what());
-  }
-}
-%init %{
-  import_array();
-%}
-
-// Global ignores
-%ignore *::operator=;
-%ignore *::operator[];
-
-// Apply the 1D NumPy typemaps
-%apply (int DIM1  , long* INPLACE_ARRAY1)
-      {(int length, long* data          )};
-%apply (long** ARGOUTVIEW_ARRAY1, int* DIM1  )
-      {(long** data             , int* length)};
-
-// Apply the 2D NumPy typemaps
-%apply (int DIM1 , int DIM2 , long* INPLACE_ARRAY2)
-      {(int nrows, int ncols, long* data          )};
-%apply (int* DIM1 , int* DIM2 , long** ARGOUTVIEW_ARRAY2)
-      {(int* nrows, int* ncols, long** data             )};
-// Note: the %apply for INPLACE_ARRAY2 above gets successfully applied
-// to the constructor Array2(int nrows, int ncols, long* data), but
-// does not get applied to the method Array2::resize(int nrows, int
-// ncols, long* data).  I have no idea why.  For this reason the test
-// for Array2.resize(numpy.ndarray) in testArray.py is commented out.
-
-// Array1 support
-%include "Array1.h"
-%extend Array1
-{
-  void __setitem__(int i, long v)
-  {
-    self->operator[](i) = v;
-  }
-
-  long __getitem__(int i)
-  {
-    return self->operator[](i);
-  }
-
-  int __len__()
-  {
-    return self->length();
-  }
-
-  std::string __str__()
-  {
-    return self->asString();
-  }
-}
-
-// Array2 support
-%include "Array2.h"
-%extend Array2
-{
-  void __setitem__(int i, Array1 & v)
-  {
-    self->operator[](i) = v;
-  }
-
-  Array1 & __getitem__(int i)
-  {
-    return self->operator[](i);
-  }
-
-  int __len__()
-  {
-    return self->nrows() * self->ncols();
-  }
-
-  std::string __str__()
-  {
-    return self->asString();
-  }
-}

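Once the interface above is run through SWIG and compiled (see the Makefile
and setup.py further down), the wrapped classes behave like small sequence
types from Python.  A brief usage sketch, with method names taken from the
%extend blocks above and purely illustrative values:

    import numpy as np
    import Array                      # module generated from Array.i

    a = Array.Array1(5)               # length constructor
    for i in range(len(a)):           # __len__ from the %extend block
        a[i] = i * i                  # __setitem__
    print(a)                          # __str__ -> "[ 0, 1, 4, 9, 16 ]"

    v = a.view()                      # ARGOUTVIEW_ARRAY1: NumPy view of the buffer
    assert isinstance(v, np.ndarray) and v[3] == 9

    m = Array.Array2(2, 3)            # nrows, ncols
    m[0][1] = 7                       # __getitem__ returns an Array1 row
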
Deleted: trunk/doc/test/Array1.cxx
===================================================================
--- trunk/numpy/doc/swig/test/Array1.cxx	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Array1.cxx	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,131 +0,0 @@
-#include "Array1.h"
-#include <iostream>
-#include <sstream>
-
-// Default/length/array constructor
-Array1::Array1(int length, long* data) :
-  _ownData(false), _length(0), _buffer(0)
-{
-  resize(length, data);
-}
-
-// Copy constructor
-Array1::Array1(const Array1 & source) :
-  _length(source._length)
-{
-  allocateMemory();
-  *this = source;
-}
-
-// Destructor
-Array1::~Array1()
-{
-  deallocateMemory();
-}
-
-// Assignment operator
-Array1 & Array1::operator=(const Array1 & source)
-{
-  int len = _length < source._length ? _length : source._length;
-  for (int i=0;  i < len; ++i)
-  {
-    (*this)[i] = source[i];
-  }
-  return *this;
-}
-
-// Equals operator
-bool Array1::operator==(const Array1 & other) const
-{
-  if (_length != other._length) return false;
-  for (int i=0; i < _length; ++i)
-  {
-    if ((*this)[i] != other[i]) return false;
-  }
-  return true;
-}
-
-// Length accessor
-int Array1::length() const
-{
-  return _length;
-}
-
-// Resize array
-void Array1::resize(int length, long* data)
-{
-  if (length < 0) throw std::invalid_argument("Array1 length less than 0");
-  if (length == _length) return;
-  deallocateMemory();
-  _length = length;
-  if (!data)
-  {
-    allocateMemory();
-  }
-  else
-  {
-    _ownData = false;
-    _buffer  = data;
-  }
-}
-
-// Set item accessor
-long & Array1::operator[](int i)
-{
-  if (i < 0 || i >= _length) throw std::out_of_range("Array1 index out of range");
-  return _buffer[i];
-}
-
-// Get item accessor
-const long & Array1::operator[](int i) const
-{
-  if (i < 0 || i >= _length) throw std::out_of_range("Array1 index out of range");
-  return _buffer[i];
-}
-
-// String output
-std::string Array1::asString() const
-{
-  std::stringstream result;
-  result << "[";
-  for (int i=0; i < _length; ++i)
-  {
-    result << " " << _buffer[i];
-    if (i < _length-1) result << ",";
-  }
-  result << " ]";
-  return result.str();
-}
-
-// Get view
-void Array1::view(long** data, int* length) const
-{
-  *data   = _buffer;
-  *length = _length;
-}
-
-// Private methods
- void Array1::allocateMemory()
- {
-   if (_length == 0)
-   {
-     _ownData = false;
-     _buffer  = 0;
-   }
-   else
-   {
-     _ownData = true;
-     _buffer = new long[_length];
-   }
- }
-
- void Array1::deallocateMemory()
- {
-   if (_ownData && _length && _buffer)
-   {
-     delete [] _buffer;
-   }
-   _ownData = false;
-   _length  = 0;
-   _buffer  = 0;
- }

Deleted: trunk/doc/test/Array1.h
===================================================================
--- trunk/numpy/doc/swig/test/Array1.h	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Array1.h	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,55 +0,0 @@
-#ifndef ARRAY1_H
-#define ARRAY1_H
-
-#include <stdexcept>
-#include <string>
-
-class Array1
-{
-public:
-
-  // Default/length/array constructor
-  Array1(int length = 0, long* data = 0);
-
-  // Copy constructor
-  Array1(const Array1 & source);
-
-  // Destructor
-  ~Array1();
-
-  // Assignment operator
-  Array1 & operator=(const Array1 & source);
-
-  // Equals operator
-  bool operator==(const Array1 & other) const;
-
-  // Length accessor
-  int length() const;
-
-  // Resize array
-  void resize(int length, long* data = 0);
-
-  // Set item accessor
-  long & operator[](int i);
-
-  // Get item accessor
-  const long & operator[](int i) const;
-
-  // String output
-  std::string asString() const;
-
-  // Get view
-  void view(long** data, int* length) const;
-
-private:
-  // Members
-  bool _ownData;
-  int _length;
-  long * _buffer;
-
-  // Methods
-  void allocateMemory();
-  void deallocateMemory();
-};
-
-#endif

Deleted: trunk/doc/test/Array2.cxx
===================================================================
--- trunk/numpy/doc/swig/test/Array2.cxx	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Array2.cxx	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,168 +0,0 @@
-#include "Array2.h"
-#include <sstream>
-
-// Default constructor
-Array2::Array2() :
-  _ownData(false), _nrows(0), _ncols(), _buffer(0), _rows(0)
-{ }
-
-// Size/array constructor
-Array2::Array2(int nrows, int ncols, long* data) :
-  _ownData(false), _nrows(0), _ncols(), _buffer(0), _rows(0)
-{
-  resize(nrows, ncols, data);
-}
-
-// Copy constructor
-Array2::Array2(const Array2 & source) :
-  _nrows(source._nrows), _ncols(source._ncols)
-{
-  _ownData = true;
-  allocateMemory();
-  *this = source;
-}
-
-// Destructor
-Array2::~Array2()
-{
-  deallocateMemory();
-}
-
-// Assignment operator
-Array2 & Array2::operator=(const Array2 & source)
-{
-  int nrows = _nrows < source._nrows ? _nrows : source._nrows;
-  int ncols = _ncols < source._ncols ? _ncols : source._ncols;
-  for (int i=0; i < nrows; ++i)
-  {
-    for (int j=0; j < ncols; ++j)
-    {
-      (*this)[i][j] = source[i][j];
-    }
-  }
-  return *this;
-}
-
-// Equals operator
-bool Array2::operator==(const Array2 & other) const
-{
-  if (_nrows != other._nrows) return false;
-  if (_ncols != other._ncols) return false;
-  for (int i=0; i < _nrows; ++i)
-  {
-    for (int j=0; j < _ncols; ++j)
-    {
-      if ((*this)[i][j] != other[i][j]) return false;
-    }
-  }
-  return true;
-}
-
-// Length accessors
-int Array2::nrows() const
-{
-  return _nrows;
-}
-
-int Array2::ncols() const
-{
-  return _ncols;
-}
-
-// Resize array
-void Array2::resize(int nrows, int ncols, long* data)
-{
-  if (nrows < 0) throw std::invalid_argument("Array2 nrows less than 0");
-  if (ncols < 0) throw std::invalid_argument("Array2 ncols less than 0");
-  if (nrows == _nrows && ncols == _ncols) return;
-  deallocateMemory();
-  _nrows = nrows;
-  _ncols = ncols;
-  if (!data)
-  {
-    allocateMemory();
-  }
-  else
-  {
-    _ownData = false;
-    _buffer  = data;
-    allocateRows();
-  }
-}
-
-// Set item accessor
-Array1 & Array2::operator[](int i)
-{
-  if (i < 0 || i >= _nrows) throw std::out_of_range("Array2 row index out of range");
-  return _rows[i];
-}
-
-// Get item accessor
-const Array1 & Array2::operator[](int i) const
-{
-  if (i < 0 || i >= _nrows) throw std::out_of_range("Array2 row index out of range");
-  return _rows[i];
-}
-
-// String output
-std::string Array2::asString() const
-{
-  std::stringstream result;
-  result << "[ ";
-  for (int i=0; i < _nrows; ++i)
-  {
-    if (i > 0) result << "  ";
-    result << (*this)[i].asString();
-    if (i < _nrows-1) result << "," << std::endl;
-  }
-  result << " ]" << std::endl;
-  return result.str();
-}
-
-// Get view
-void Array2::view(int* nrows, int* ncols, long** data) const
-{
-  *nrows = _nrows;
-  *ncols = _ncols;
-  *data  = _buffer;
-}
-
-// Private methods
-void Array2::allocateMemory()
-{
-  if (_nrows * _ncols == 0)
-  {
-    _ownData = false;
-    _buffer  = 0;
-    _rows    = 0;
-  }
-  else
-  {
-    _ownData = true;
-    _buffer = new long[_nrows*_ncols];
-    allocateRows();
-  }
-}
-
-void Array2::allocateRows()
-{
-  _rows = new Array1[_nrows];
-  for (int i=0; i < _nrows; ++i)
-  {
-    _rows[i].resize(_ncols, &_buffer[i*_ncols]);
-  }
-}
-
-void Array2::deallocateMemory()
-{
-  if (_ownData && _nrows*_ncols && _buffer)
-  {
-    delete [] _rows;
-    delete [] _buffer;
-  }
-  _ownData = false;
-  _nrows   = 0;
-  _ncols   = 0;
-  _buffer  = 0;
-  _rows    = 0;
-}

Deleted: trunk/doc/test/Array2.h
===================================================================
--- trunk/numpy/doc/swig/test/Array2.h	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Array2.h	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,63 +0,0 @@
-#ifndef ARRAY2_H
-#define ARRAY2_H
-
-#include "Array1.h"
-#include <stdexcept>
-#include <string>
-
-class Array2
-{
-public:
-
-  // Default constructor
-  Array2();
-
-  // Size/array constructor
-  Array2(int nrows, int ncols, long* data=0);
-
-  // Copy constructor
-  Array2(const Array2 & source);
-
-  // Destructor
-  ~Array2();
-
-  // Assignment operator
-  Array2 & operator=(const Array2 & source);
-
-  // Equals operator
-  bool operator==(const Array2 & other) const;
-
-  // Length accessors
-  int nrows() const;
-  int ncols() const;
-
-  // Resize array
-  void resize(int ncols, int nrows, long* data=0);
-
-  // Set item accessor
-  Array1 & operator[](int i);
-
-  // Get item accessor
-  const Array1 & operator[](int i) const;
-
-  // String output
-  std::string asString() const;
-
-  // Get view
-  void view(int* nrows, int* ncols, long** data) const;
-
-private:
-  // Members
-  bool _ownData;
-  int _nrows;
-  int _ncols;
-  long * _buffer;
-  Array1 * _rows;
-
-  // Methods
-  void allocateMemory();
-  void allocateRows();
-  void deallocateMemory();
-};
-
-#endif

Deleted: trunk/doc/test/Farray.cxx
===================================================================
--- trunk/numpy/doc/swig/test/Farray.cxx	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Farray.cxx	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,122 +0,0 @@
-#include "Farray.h"
-#include <sstream>
-
-// Size constructor
-Farray::Farray(int nrows, int ncols) :
-  _nrows(nrows), _ncols(ncols), _buffer(0)
-{
-  allocateMemory();
-}
-
-// Copy constructor
-Farray::Farray(const Farray & source) :
-  _nrows(source._nrows), _ncols(source._ncols)
-{
-  allocateMemory();
-  *this = source;
-}
-
-// Destructor
-Farray::~Farray()
-{
-  delete [] _buffer;
-}
-
-// Assignment operator
-Farray & Farray::operator=(const Farray & source)
-{
-  int nrows = _nrows < source._nrows ? _nrows : source._nrows;
-  int ncols = _ncols < source._ncols ? _ncols : source._ncols;
-  for (int i=0; i < nrows; ++i)
-  {
-    for (int j=0; j < ncols; ++j)
-    {
-      (*this)(i,j) = source(i,j);
-    }
-  }
-  return *this;
-}
-
-// Equals operator
-bool Farray::operator==(const Farray & other) const
-{
-  if (_nrows != other._nrows) return false;
-  if (_ncols != other._ncols) return false;
-  for (int i=0; i < _nrows; ++i)
-  {
-    for (int j=0; j < _ncols; ++j)
-    {
-      if ((*this)(i,j) != other(i,j)) return false;
-    }
-  }
-  return true;
-}
-
-// Length accessors
-int Farray::nrows() const
-{
-  return _nrows;
-}
-
-int Farray::ncols() const
-{
-  return _ncols;
-}
-
-// Set item accessor
-long & Farray::operator()(int i, int j)
-{
-  if (i < 0 || i >= _nrows) throw std::out_of_range("Farray row index out of range");
-  if (j < 0 || j >= _ncols) throw std::out_of_range("Farray col index out of range");
-  return _buffer[offset(i,j)];
-}
-
-// Get item accessor
-const long & Farray::operator()(int i, int j) const
-{
-  if (i < 0 || i >= _nrows) throw std::out_of_range("Farray row index out of range");
-  if (j < 0 || j >= _ncols) throw std::out_of_range("Farray col index out of range");
-  return _buffer[offset(i,j)];
-}
-
-// String output
-std::string Farray::asString() const
-{
-  std::stringstream result;
-  result << "[ ";
-  for (int i=0; i < _nrows; ++i)
-  {
-    if (i > 0) result << "  ";
-    result << "[";
-    for (int j=0; j < _ncols; ++j)
-    {
-      result << " " << (*this)(i,j);
-      if (j < _ncols-1) result << ",";
-    }
-    result << " ]";
-    if (i < _nrows-1) result << "," << std::endl;
-  }
-  result << " ]" << std::endl;
-  return result.str();
-}
-
-// Get view
-void Farray::view(int* nrows, int* ncols, long** data) const
-{
-  *nrows = _nrows;
-  *ncols = _ncols;
-  *data  = _buffer;
-}
-
-// Private methods
-void Farray::allocateMemory()
-{
-  if (_nrows <= 0) throw std::invalid_argument("Farray nrows <= 0");
-  if (_ncols <= 0) throw std::invalid_argument("Farray ncols <= 0");
-  _buffer = new long[_nrows*_ncols];
-}
-
-inline int Farray::offset(int i, int j) const
-{
-  return i + j * _nrows;
-}

Deleted: trunk/doc/test/Farray.h
===================================================================
--- trunk/numpy/doc/swig/test/Farray.h	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Farray.h	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,56 +0,0 @@
-#ifndef FARRAY_H
-#define FARRAY_H
-
-#include <stdexcept>
-#include <string>
-
-class Farray
-{
-public:
-
-  // Size constructor
-  Farray(int nrows, int ncols);
-
-  // Copy constructor
-  Farray(const Farray & source);
-
-  // Destructor
-  ~Farray();
-
-  // Assignment operator
-  Farray & operator=(const Farray & source);
-
-  // Equals operator
-  bool operator==(const Farray & other) const;
-
-  // Length accessors
-  int nrows() const;
-  int ncols() const;
-
-  // Set item accessor
-  long & operator()(int i, int j);
-
-  // Get item accessor
-  const long & operator()(int i, int j) const;
-
-  // String output
-  std::string asString() const;
-
-  // Get view
-  void view(int* nrows, int* ncols, long** data) const;
-
-private:
-  // Members
-  int _nrows;
-  int _ncols;
-  long * _buffer;
-
-  // Default constructor: not implemented
-  Farray();
-
-  // Methods
-  void allocateMemory();
-  int  offset(int i, int j) const;
-};
-
-#endif

Deleted: trunk/doc/test/Farray.i
===================================================================
--- trunk/numpy/doc/swig/test/Farray.i	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Farray.i	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,73 +0,0 @@
-// -*- c++ -*-
-
-%module Farray
-
-%{
-#define SWIG_FILE_WITH_INIT
-#include "Farray.h"
-%}
-
-// Get the NumPy typemaps
-%include "../numpy.i"
-
- // Get the STL typemaps
-%include "stl.i"
-
-// Handle standard exceptions
-%include "exception.i"
-%exception
-{
-  try
-  {
-    $action
-  }
-  catch (const std::invalid_argument& e)
-  {
-    SWIG_exception(SWIG_ValueError, e.what());
-  }
-  catch (const std::out_of_range& e)
-  {
-    SWIG_exception(SWIG_IndexError, e.what());
-  }
-}
-%init %{
-  import_array();
-%}
-
-// Global ignores
-%ignore *::operator=;
-%ignore *::operator();
-
-// Apply the 2D NumPy typemaps
-%apply (int* DIM1 , int* DIM2 , long** ARGOUTVIEW_FARRAY2)
-      {(int* nrows, int* ncols, long** data              )};
-
-// Farray support
-%include "Farray.h"
-%extend Farray
-{
-  PyObject * __setitem__(PyObject* index, long v)
-  {
-    int i, j;
-    if (!PyArg_ParseTuple(index, "ii:Farray___setitem__",&i,&j)) return NULL;
-    self->operator()(i,j) = v;
-    return Py_BuildValue("");
-  }
-
-  PyObject * __getitem__(PyObject * index)
-  {
-    int i, j;
-    if (!PyArg_ParseTuple(index, "ii:Farray___getitem__",&i,&j)) return NULL;
-    return SWIG_From_long(self->operator()(i,j));
-  }
-
-  int __len__()
-  {
-    return self->nrows() * self->ncols();
-  }
-
-  std::string __str__()
-  {
-    return self->asString();
-  }
-}

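The __setitem__/__getitem__ wrappers above parse their index argument as an
(i, j) tuple, so from Python the Fortran-ordered array is addressed with a
single pair of subscripts.  An illustrative sketch (the values are made up):

    import Farray                     # module generated from Farray.i

    f = Farray.Farray(2, 3)           # size constructor: nrows, ncols
    f[1, 2] = 42                      # index tuple -> operator()(1, 2)
    assert f[1, 2] == 42
    assert len(f) == 2 * 3            # __len__ is nrows * ncols

    a = f.view()                      # ARGOUTVIEW_FARRAY2: Fortran-ordered NumPy view
    assert a[1, 2] == 42
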
Deleted: trunk/doc/test/Fortran.cxx
===================================================================
--- trunk/numpy/doc/swig/test/Fortran.cxx	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Fortran.cxx	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,24 +0,0 @@
-#include <stdlib.h>
-#include <math.h>
-#include <iostream>
-#include "Fortran.h"
-
-#define TEST_FUNCS(TYPE, SNAME) \
-\
-TYPE SNAME ## SecondElement(TYPE * matrix, int rows, int cols) {	  \
-  TYPE result = matrix[1];                                \
-  return result;                                          \
-}                                                         \
-
-TEST_FUNCS(signed char       , schar    )
-TEST_FUNCS(unsigned char     , uchar    )
-TEST_FUNCS(short             , short    )
-TEST_FUNCS(unsigned short    , ushort   )
-TEST_FUNCS(int               , int      )
-TEST_FUNCS(unsigned int      , uint     )
-TEST_FUNCS(long              , long     )
-TEST_FUNCS(unsigned long     , ulong    )
-TEST_FUNCS(long long         , longLong )
-TEST_FUNCS(unsigned long long, ulongLong)
-TEST_FUNCS(float             , float    )
-TEST_FUNCS(double            , double   )

Deleted: trunk/doc/test/Fortran.h
===================================================================
--- trunk/numpy/doc/swig/test/Fortran.h	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Fortran.h	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,21 +0,0 @@
-#ifndef FORTRAN_H
-#define FORTRAN_H
-
-#define TEST_FUNC_PROTOS(TYPE, SNAME) \
-\
-TYPE SNAME ## SecondElement(    TYPE * matrix, int rows, int cols); \
-
-TEST_FUNC_PROTOS(signed char       , schar    )
-TEST_FUNC_PROTOS(unsigned char     , uchar    )
-TEST_FUNC_PROTOS(short             , short    )
-TEST_FUNC_PROTOS(unsigned short    , ushort   )
-TEST_FUNC_PROTOS(int               , int      )
-TEST_FUNC_PROTOS(unsigned int      , uint     )
-TEST_FUNC_PROTOS(long              , long     )
-TEST_FUNC_PROTOS(unsigned long     , ulong    )
-TEST_FUNC_PROTOS(long long         , longLong )
-TEST_FUNC_PROTOS(unsigned long long, ulongLong)
-TEST_FUNC_PROTOS(float             , float    )
-TEST_FUNC_PROTOS(double            , double   )
-
-#endif

Deleted: trunk/doc/test/Fortran.i
===================================================================
--- trunk/numpy/doc/swig/test/Fortran.i	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Fortran.i	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,36 +0,0 @@
-// -*- c++ -*-
-%module Fortran
-
-%{
-#define SWIG_FILE_WITH_INIT
-#include "Fortran.h"
-%}
-
-// Get the NumPy typemaps
-%include "../numpy.i"
-
-%init %{
-  import_array();
-%}
-
-%define %apply_numpy_typemaps(TYPE)
-
-%apply (TYPE* IN_FARRAY2, int DIM1, int DIM2) {(TYPE* matrix, int rows, int cols)};
-
-%enddef    /* %apply_numpy_typemaps() macro */
-
-%apply_numpy_typemaps(signed char       )
-%apply_numpy_typemaps(unsigned char     )
-%apply_numpy_typemaps(short             )
-%apply_numpy_typemaps(unsigned short    )
-%apply_numpy_typemaps(int               )
-%apply_numpy_typemaps(unsigned int      )
-%apply_numpy_typemaps(long              )
-%apply_numpy_typemaps(unsigned long     )
-%apply_numpy_typemaps(long long         )
-%apply_numpy_typemaps(unsigned long long)
-%apply_numpy_typemaps(float             )
-%apply_numpy_typemaps(double            )
-
-// Include the header file to be wrapped
-%include "Fortran.h"

Deleted: trunk/doc/test/Makefile
===================================================================
--- trunk/numpy/doc/swig/test/Makefile	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Makefile	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,34 +0,0 @@
-# SWIG
-INTERFACES = Array.i Farray.i Vector.i Matrix.i Tensor.i Fortran.i
-WRAPPERS   = $(INTERFACES:.i=_wrap.cxx)
-PROXIES    = $(INTERFACES:.i=.py      )
-
-# Default target: build the tests
-.PHONY : all
-all: $(WRAPPERS) Array1.cxx Array1.h Farray.cxx Farray.h Vector.cxx Vector.h \
-	Matrix.cxx Matrix.h Tensor.cxx Tensor.h Fortran.h Fortran.cxx
-	./setup.py build_ext -i
-
-# Test target: run the tests
-.PHONY : test
-test: all
-	python testVector.py
-	python testMatrix.py
-	python testTensor.py
-	python testArray.py
-	python testFarray.py
-	python testFortran.py
-
-# Rule: %.i -> %_wrap.cxx
-%_wrap.cxx: %.i %.h ../numpy.i
-	swig -c++ -python $<
-%_wrap.cxx: %.i %1.h %2.h ../numpy.i
-	swig -c++ -python $<
-
-# Clean target
-.PHONY : clean
-clean:
-	$(RM) -r build
-	$(RM) *.so
-	$(RM) $(WRAPPERS)
-	$(RM) $(PROXIES)

Deleted: trunk/doc/test/Matrix.cxx
===================================================================
--- trunk/numpy/doc/swig/test/Matrix.cxx	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Matrix.cxx	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,112 +0,0 @@
-#include <stdlib.h>
-#include <math.h>
-#include <iostream>
-#include "Matrix.h"
-
-// The following macro defines a family of functions that work with 2D
-// arrays with the forms
-//
-//     TYPE SNAMEDet(    TYPE matrix[2][2]);
-//     TYPE SNAMEMax(    TYPE * matrix, int rows, int cols);
-//     TYPE SNAMEMin(    int rows, int cols, TYPE * matrix);
-//     void SNAMEScale(  TYPE matrix[3][3]);
-//     void SNAMEFloor(  TYPE * array,  int rows, int cols, TYPE floor);
-//     void SNAMECeil(   int rows, int cols, TYPE * array, TYPE ceil);
-//     void SNAMELUSplit(TYPE in[3][3], TYPE lower[3][3], TYPE upper[3][3]);
-//
-// for any specified type TYPE (for example: short, unsigned int, long
-// long, etc.) with given short name SNAME (for example: short, uint,
-// longLong, etc.).  The macro is then expanded for the given
-// TYPE/SNAME pairs.  The resulting functions are for testing numpy
-// interfaces, respectively, for:
-//
-//  * 2D input arrays, hard-coded length
-//  * 2D input arrays
-//  * 2D input arrays, data last
-//  * 2D in-place arrays, hard-coded lengths
-//  * 2D in-place arrays
-//  * 2D in-place arrays, data last
-//  * 2D argout arrays, hard-coded length
-//
-#define TEST_FUNCS(TYPE, SNAME) \
-\
-TYPE SNAME ## Det(TYPE matrix[2][2]) {                          \
-  return matrix[0][0]*matrix[1][1] - matrix[0][1]*matrix[1][0]; \
-}                                                               \
-\
-TYPE SNAME ## Max(TYPE * matrix, int rows, int cols) {	  \
-  int i, j, index;                                        \
-  TYPE result = matrix[0];                                \
-  for (j=0; j<cols; ++j) {                                \
-    for (i=0; i<rows; ++i) {                              \
-      index = j*rows + i;                                 \
-      if (matrix[index] > result) result = matrix[index]; \
-    }                                                     \
-  }                                                       \
-  return result;                                          \
-}                                                         \
-\
-TYPE SNAME ## Min(int rows, int cols, TYPE * matrix) {    \
-  int i, j, index;                                        \
-  TYPE result = matrix[0];                                \
-  for (j=0; j<cols; ++j) {                                \
-    for (i=0; i<rows; ++i) {                              \
-      index = j*rows + i;                                 \
-      if (matrix[index] < result) result = matrix[index]; \
-    }                                                     \
-  }                                                       \
-  return result;                                          \
-}                                                         \
-\
-void SNAME ## Scale(TYPE array[3][3], TYPE val) { \
-  for (int i=0; i<3; ++i)                         \
-    for (int j=0; j<3; ++j)                       \
-      array[i][j] *= val;                         \
-}                                                 \
-\
-void SNAME ## Floor(TYPE * array, int rows, int cols, TYPE floor) { \
-  int i, j, index;                                                  \
-  for (j=0; j<cols; ++j) {                                          \
-    for (i=0; i<rows; ++i) {                                        \
-      index = j*rows + i;                                           \
-      if (array[index] < floor) array[index] = floor;               \
-    }                                                               \
-  }                                                                 \
-}                                                                   \
-\
-void SNAME ## Ceil(int rows, int cols, TYPE * array, TYPE ceil) { \
-  int i, j, index;                                                \
-  for (j=0; j<cols; ++j) {                                        \
-    for (i=0; i<rows; ++i) {                                      \
-      index = j*rows + i;                                         \
-      if (array[index] > ceil) array[index] = ceil;               \
-    }                                                             \
-  }                                                               \
-}								  \
-\
-void SNAME ## LUSplit(TYPE matrix[3][3], TYPE lower[3][3], TYPE upper[3][3]) { \
-  for (int i=0; i<3; ++i) {						       \
-    for (int j=0; j<3; ++j) {						       \
-      if (i >= j) {						 	       \
-	lower[i][j] = matrix[i][j];					       \
-	upper[i][j] = 0;					 	       \
-      } else {							 	       \
-	lower[i][j] = 0;					 	       \
-	upper[i][j] = matrix[i][j];					       \
-      }								 	       \
-    }								 	       \
-  }								 	       \
-}
-
-TEST_FUNCS(signed char       , schar    )
-TEST_FUNCS(unsigned char     , uchar    )
-TEST_FUNCS(short             , short    )
-TEST_FUNCS(unsigned short    , ushort   )
-TEST_FUNCS(int               , int      )
-TEST_FUNCS(unsigned int      , uint     )
-TEST_FUNCS(long              , long     )
-TEST_FUNCS(unsigned long     , ulong    )
-TEST_FUNCS(long long         , longLong )
-TEST_FUNCS(unsigned long long, ulongLong)
-TEST_FUNCS(float             , float    )
-TEST_FUNCS(double            , double   )

Deleted: trunk/doc/test/Matrix.h
===================================================================
--- trunk/numpy/doc/swig/test/Matrix.h	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Matrix.h	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,52 +0,0 @@
-#ifndef MATRIX_H
-#define MATRIX_H
-
-// The following macro defines the prototypes for a family of
-// functions that work with 2D arrays with the forms
-//
-//     TYPE SNAMEDet(    TYPE matrix[2][2]);
-//     TYPE SNAMEMax(    TYPE * matrix, int rows, int cols);
-//     TYPE SNAMEMin(    int rows, int cols, TYPE * matrix);
-//     void SNAMEScale(  TYPE array[3][3]);
-//     void SNAMEFloor(  TYPE * array,  int rows, int cols, TYPE floor);
-//     void SNAMECeil(   int rows, int cols, TYPE * array,  TYPE ceil );
-//     void SNAMELUSplit(TYPE in[3][3], TYPE lower[3][3], TYPE upper[3][3]);
-//
-// for any specified type TYPE (for example: short, unsigned int, long
-// long, etc.) with given short name SNAME (for example: short, uint,
-// longLong, etc.).  The macro is then expanded for the given
-// TYPE/SNAME pairs.  The resulting functions are for testing numpy
-// interfaces, respectively, for:
-//
-//  * 2D input arrays, hard-coded lengths
-//  * 2D input arrays
-//  * 2D input arrays, data last
-//  * 2D in-place arrays, hard-coded lengths
-//  * 2D in-place arrays
-//  * 2D in-place arrays, data last
-//  * 2D argout arrays, hard-coded length
-//
-#define TEST_FUNC_PROTOS(TYPE, SNAME) \
-\
-TYPE SNAME ## Det(    TYPE matrix[2][2]); \
-TYPE SNAME ## Max(    TYPE * matrix, int rows, int cols); \
-TYPE SNAME ## Min(    int rows, int cols, TYPE * matrix); \
-void SNAME ## Scale(  TYPE array[3][3], TYPE val); \
-void SNAME ## Floor(  TYPE * array, int rows, int cols, TYPE floor); \
-void SNAME ## Ceil(   int rows, int cols, TYPE * array, TYPE ceil ); \
-void SNAME ## LUSplit(TYPE matrix[3][3], TYPE lower[3][3], TYPE upper[3][3]);
-
-TEST_FUNC_PROTOS(signed char       , schar    )
-TEST_FUNC_PROTOS(unsigned char     , uchar    )
-TEST_FUNC_PROTOS(short             , short    )
-TEST_FUNC_PROTOS(unsigned short    , ushort   )
-TEST_FUNC_PROTOS(int               , int      )
-TEST_FUNC_PROTOS(unsigned int      , uint     )
-TEST_FUNC_PROTOS(long              , long     )
-TEST_FUNC_PROTOS(unsigned long     , ulong    )
-TEST_FUNC_PROTOS(long long         , longLong )
-TEST_FUNC_PROTOS(unsigned long long, ulongLong)
-TEST_FUNC_PROTOS(float             , float    )
-TEST_FUNC_PROTOS(double            , double   )
-
-#endif

Deleted: trunk/doc/test/Matrix.i
===================================================================
--- trunk/numpy/doc/swig/test/Matrix.i	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Matrix.i	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,45 +0,0 @@
-// -*- c++ -*-
-%module Matrix
-
-%{
-#define SWIG_FILE_WITH_INIT
-#include "Matrix.h"
-%}
-
-// Get the NumPy typemaps
-%include "../numpy.i"
-
-%init %{
-  import_array();
-%}
-
-%define %apply_numpy_typemaps(TYPE)
-
-%apply (TYPE IN_ARRAY2[ANY][ANY]) {(TYPE matrix[ANY][ANY])};
-%apply (TYPE* IN_ARRAY2, int DIM1, int DIM2) {(TYPE* matrix, int rows, int cols)};
-%apply (int DIM1, int DIM2, TYPE* IN_ARRAY2) {(int rows, int cols, TYPE* matrix)};
-
-%apply (TYPE INPLACE_ARRAY2[ANY][ANY]) {(TYPE array[3][3])};
-%apply (TYPE* INPLACE_ARRAY2, int DIM1, int DIM2) {(TYPE* array, int rows, int cols)};
-%apply (int DIM1, int DIM2, TYPE* INPLACE_ARRAY2) {(int rows, int cols, TYPE* array)};
-
-%apply (TYPE ARGOUT_ARRAY2[ANY][ANY]) {(TYPE lower[3][3])};
-%apply (TYPE ARGOUT_ARRAY2[ANY][ANY]) {(TYPE upper[3][3])};
-
-%enddef    /* %apply_numpy_typemaps() macro */
-
-%apply_numpy_typemaps(signed char       )
-%apply_numpy_typemaps(unsigned char     )
-%apply_numpy_typemaps(short             )
-%apply_numpy_typemaps(unsigned short    )
-%apply_numpy_typemaps(int               )
-%apply_numpy_typemaps(unsigned int      )
-%apply_numpy_typemaps(long              )
-%apply_numpy_typemaps(unsigned long     )
-%apply_numpy_typemaps(long long         )
-%apply_numpy_typemaps(unsigned long long)
-%apply_numpy_typemaps(float             )
-%apply_numpy_typemaps(double            )
-
-// Include the header file to be wrapped
-%include "Matrix.h"

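The %apply directives above wire the three argument orderings documented in
Matrix.h to the NumPy typemaps, so the generated functions accept nested
sequences or arrays directly from Python.  A short illustrative sketch
(function names follow the TYPE/SNAME expansion; the values are made up):

    import numpy as np
    import Matrix                         # module generated from Matrix.i

    # IN_ARRAY2: plain nested lists are converted on the way in
    assert Matrix.intDet([[1, 2], [3, 4]]) == -2

    # INPLACE_ARRAY2: the array must already be contiguous with a matching dtype
    m = np.array([[-1, 5], [7, -3]], dtype='l')   # 'l' matches the C long wrappers
    Matrix.longFloor(m, 0)                        # clips everything below 0, in place
    assert (m >= 0).all()

    # ARGOUT_ARRAY2: the hard-coded output arrays become return values
    lower, upper = Matrix.intLUSplit([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
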
Deleted: trunk/doc/test/Tensor.cxx
===================================================================
--- trunk/numpy/doc/swig/test/Tensor.cxx	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Tensor.cxx	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,131 +0,0 @@
-#include <stdlib.h>
-#include <math.h>
-#include <iostream>
-#include "Tensor.h"
-
-// The following macro defines a family of functions that work with 3D
-// arrays with the forms
-//
-//     TYPE SNAMENorm(   TYPE tensor[2][2][2]);
-//     TYPE SNAMEMax(    TYPE * tensor, int rows, int cols, int num);
-//     TYPE SNAMEMin(    int rows, int cols, int num, TYPE * tensor);
-//     void SNAMEScale(  TYPE tensor[3][3][3]);
-//     void SNAMEFloor(  TYPE * array,  int rows, int cols, int num, TYPE floor);
-//     void SNAMECeil(   int rows, int cols, int num, TYPE * array, TYPE ceil);
-//     void SNAMELUSplit(TYPE in[2][2][2], TYPE lower[2][2][2], TYPE upper[2][2][2]);
-//
-// for any specified type TYPE (for example: short, unsigned int, long
-// long, etc.) with given short name SNAME (for example: short, uint,
-// longLong, etc.).  The macro is then expanded for the given
-// TYPE/SNAME pairs.  The resulting functions are for testing numpy
-// interfaces, respectively, for:
-//
-//  * 3D input arrays, hard-coded length
-//  * 3D input arrays
-//  * 3D input arrays, data last
-//  * 3D in-place arrays, hard-coded lengths
-//  * 3D in-place arrays
-//  * 3D in-place arrays, data last
-//  * 3D argout arrays, hard-coded length
-//
-#define TEST_FUNCS(TYPE, SNAME) \
-\
-TYPE SNAME ## Norm(TYPE tensor[2][2][2]) {	     \
-  double result = 0;				     \
-  for (int k=0; k<2; ++k)			     \
-    for (int j=0; j<2; ++j)			     \
-      for (int i=0; i<2; ++i)			     \
-	result += tensor[i][j][k] * tensor[i][j][k]; \
-  return (TYPE)sqrt(result/8);			     \
-}						     \
-\
-TYPE SNAME ## Max(TYPE * tensor, int rows, int cols, int num) { \
-  int i, j, k, index;						\
-  TYPE result = tensor[0];					\
-  for (k=0; k<num; ++k) {					\
-    for (j=0; j<cols; ++j) {					\
-      for (i=0; i<rows; ++i) {					\
-	index = k*rows*cols + j*rows + i;			\
-	if (tensor[index] > result) result = tensor[index];	\
-      }								\
-    }								\
-  }								\
-  return result;						\
-}								\
-\
-TYPE SNAME ## Min(int rows, int cols, int num, TYPE * tensor) {	\
-  int i, j, k, index;						\
-  TYPE result = tensor[0];					\
-  for (k=0; k<num; ++k) {					\
-    for (j=0; j<cols; ++j) {					\
-      for (i=0; i<rows; ++i) {					\
-	index = k*rows*cols + j*rows + i;			\
-	if (tensor[index] < result) result = tensor[index];	\
-      }								\
-    }								\
-  }								\
-  return result;						\
-}								\
-\
-void SNAME ## Scale(TYPE array[3][3][3], TYPE val) { \
-  for (int i=0; i<3; ++i)			     \
-    for (int j=0; j<3; ++j)			     \
-      for (int k=0; k<3; ++k)			     \
-	array[i][j][k] *= val;			     \
-}						     \
-\
-void SNAME ## Floor(TYPE * array, int rows, int cols, int num, TYPE floor) { \
-  int i, j, k, index;							     \
-  for (k=0; k<num; ++k) {						     \
-    for (j=0; j<cols; ++j) {						     \
-      for (i=0; i<rows; ++i) {						     \
-	index = k*cols*rows + j*rows + i;				     \
-	if (array[index] < floor) array[index] = floor;			     \
-      }									     \
-    }									     \
-  }									     \
-}									     \
-\
-void SNAME ## Ceil(int rows, int cols, int num, TYPE * array, TYPE ceil) { \
-  int i, j, k, index;							   \
-  for (k=0; k<num; ++k) {						   \
-    for (j=0; j<cols; ++j) {						   \
-      for (i=0; i<rows; ++i) {						   \
-	index = k*cols*rows + j*rows + i;				   \
-	if (array[index] > ceil) array[index] = ceil;			   \
-      }									   \
-    }									   \
-  }									   \
-}									   \
-\
-void SNAME ## LUSplit(TYPE tensor[2][2][2], TYPE lower[2][2][2], \
-		      TYPE upper[2][2][2]) {			 \
-  int sum;							 \
-  for (int k=0; k<2; ++k) {					 \
-    for (int j=0; j<2; ++j) {					 \
-      for (int i=0; i<2; ++i) {					 \
-	sum = i + j + k;					 \
-	if (sum < 2) {						 \
-	  lower[i][j][k] = tensor[i][j][k];			 \
-	  upper[i][j][k] = 0;					 \
-	} else {						 \
-	  upper[i][j][k] = tensor[i][j][k];			 \
-	  lower[i][j][k] = 0;					 \
-	}							 \
-      }								 \
-    }								 \
-  }								 \
-}
-
-TEST_FUNCS(signed char       , schar    )
-TEST_FUNCS(unsigned char     , uchar    )
-TEST_FUNCS(short             , short    )
-TEST_FUNCS(unsigned short    , ushort   )
-TEST_FUNCS(int               , int      )
-TEST_FUNCS(unsigned int      , uint     )
-TEST_FUNCS(long              , long     )
-TEST_FUNCS(unsigned long     , ulong    )
-TEST_FUNCS(long long         , longLong )
-TEST_FUNCS(unsigned long long, ulongLong)
-TEST_FUNCS(float             , float    )
-TEST_FUNCS(double            , double   )

Deleted: trunk/doc/test/Tensor.h
===================================================================
--- trunk/numpy/doc/swig/test/Tensor.h	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Tensor.h	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,52 +0,0 @@
-#ifndef TENSOR_H
-#define TENSOR_H
-
-// The following macro defines the prototypes for a family of
-// functions that work with 3D arrays with the forms
-//
-//     TYPE SNAMENorm(   TYPE tensor[2][2][2]);
-//     TYPE SNAMEMax(    TYPE * tensor, int rows, int cols, int num);
-//     TYPE SNAMEMin(    int rows, int cols, int num, TYPE * tensor);
-//     void SNAMEScale(  TYPE array[3][3][3]);
-//     void SNAMEFloor(  TYPE * array,  int rows, int cols, int num, TYPE floor);
-//     void SNAMECeil(   int rows, int cols, int num, TYPE * array,  TYPE ceil );
-//     void SNAMELUSplit(TYPE in[3][3][3], TYPE lower[3][3][3], TYPE upper[3][3][3]);
-//
-// for any specified type TYPE (for example: short, unsigned int, long
-// long, etc.) with given short name SNAME (for example: short, uint,
-// longLong, etc.).  The macro is then expanded for the given
-// TYPE/SNAME pairs.  The resulting functions are for testing numpy
-// interfaces, respectively, for:
-//
-//  * 3D input arrays, hard-coded lengths
-//  * 3D input arrays
-//  * 3D input arrays, data last
-//  * 3D in-place arrays, hard-coded lengths
-//  * 3D in-place arrays
-//  * 3D in-place arrays, data last
-//  * 3D argout arrays, hard-coded length
-//
-#define TEST_FUNC_PROTOS(TYPE, SNAME) \
-\
-TYPE SNAME ## Norm(   TYPE tensor[2][2][2]); \
-TYPE SNAME ## Max(    TYPE * tensor, int rows, int cols, int num); \
-TYPE SNAME ## Min(    int rows, int cols, int num, TYPE * tensor); \
-void SNAME ## Scale(  TYPE array[3][3][3], TYPE val); \
-void SNAME ## Floor(  TYPE * array, int rows, int cols, int num, TYPE floor); \
-void SNAME ## Ceil(   int rows, int cols, int num, TYPE * array, TYPE ceil ); \
-void SNAME ## LUSplit(TYPE tensor[2][2][2], TYPE lower[2][2][2], TYPE upper[2][2][2]);
-
-TEST_FUNC_PROTOS(signed char       , schar    )
-TEST_FUNC_PROTOS(unsigned char     , uchar    )
-TEST_FUNC_PROTOS(short             , short    )
-TEST_FUNC_PROTOS(unsigned short    , ushort   )
-TEST_FUNC_PROTOS(int               , int      )
-TEST_FUNC_PROTOS(unsigned int      , uint     )
-TEST_FUNC_PROTOS(long              , long     )
-TEST_FUNC_PROTOS(unsigned long     , ulong    )
-TEST_FUNC_PROTOS(long long         , longLong )
-TEST_FUNC_PROTOS(unsigned long long, ulongLong)
-TEST_FUNC_PROTOS(float             , float    )
-TEST_FUNC_PROTOS(double            , double   )
-
-#endif

Deleted: trunk/doc/test/Tensor.i
===================================================================
--- trunk/numpy/doc/swig/test/Tensor.i	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Tensor.i	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,49 +0,0 @@
-// -*- c++ -*-
-%module Tensor
-
-%{
-#define SWIG_FILE_WITH_INIT
-#include "Tensor.h"
-%}
-
-// Get the NumPy typemaps
-%include "../numpy.i"
-
-%init %{
-  import_array();
-%}
-
-%define %apply_numpy_typemaps(TYPE)
-
-%apply (TYPE IN_ARRAY3[ANY][ANY][ANY]) {(TYPE tensor[ANY][ANY][ANY])};
-%apply (TYPE* IN_ARRAY3, int DIM1, int DIM2, int DIM3)
-      {(TYPE* tensor, int rows, int cols, int num)};
-%apply (int DIM1, int DIM2, int DIM3, TYPE* IN_ARRAY3)
-      {(int rows, int cols, int num, TYPE* tensor)};
-
-%apply (TYPE INPLACE_ARRAY3[ANY][ANY][ANY]) {(TYPE array[3][3][3])};
-%apply (TYPE* INPLACE_ARRAY3, int DIM1, int DIM2, int DIM3)
-      {(TYPE* array, int rows, int cols, int num)};
-%apply (int DIM1, int DIM2, int DIM3, TYPE* INPLACE_ARRAY3)
-      {(int rows, int cols, int num, TYPE* array)};
-
-%apply (TYPE ARGOUT_ARRAY3[ANY][ANY][ANY]) {(TYPE lower[2][2][2])};
-%apply (TYPE ARGOUT_ARRAY3[ANY][ANY][ANY]) {(TYPE upper[2][2][2])};
-
-%enddef    /* %apply_numpy_typemaps() macro */
-
-%apply_numpy_typemaps(signed char       )
-%apply_numpy_typemaps(unsigned char     )
-%apply_numpy_typemaps(short             )
-%apply_numpy_typemaps(unsigned short    )
-%apply_numpy_typemaps(int               )
-%apply_numpy_typemaps(unsigned int      )
-%apply_numpy_typemaps(long              )
-%apply_numpy_typemaps(unsigned long     )
-%apply_numpy_typemaps(long long         )
-%apply_numpy_typemaps(unsigned long long)
-%apply_numpy_typemaps(float             )
-%apply_numpy_typemaps(double            )
-
-// Include the header file to be wrapped
-%include "Tensor.h"

Deleted: trunk/doc/test/Vector.cxx
===================================================================
--- trunk/numpy/doc/swig/test/Vector.cxx	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Vector.cxx	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,100 +0,0 @@
-#include <stdlib.h>
-#include <math.h>
-#include <iostream>
-#include "Vector.h"
-
-// The following macro defines a family of functions that work with 1D
-// arrays with the forms
-//
-//     TYPE SNAMELength( TYPE vector[3]);
-//     TYPE SNAMEProd(   TYPE * series, int size);
-//     TYPE SNAMESum(    int size, TYPE * series);
-//     void SNAMEReverse(TYPE array[3]);
-//     void SNAMEOnes(   TYPE * array,  int size);
-//     void SNAMEZeros(  int size, TYPE * array);
-//     void SNAMEEOSplit(TYPE vector[3], TYPE even[3], TYPE odd[3]);
-//     void SNAMETwos(   TYPE * twoVec, int size);
-//     void SNAMEThrees( int size, TYPE * threeVec);
-//
-// for any specified type TYPE (for example: short, unsigned int, long
-// long, etc.) with given short name SNAME (for example: short, uint,
-// longLong, etc.).  The macro is then expanded for the given
-// TYPE/SNAME pairs.  The resulting functions are for testing numpy
-// interfaces, respectively, for:
-//
-//  * 1D input arrays, hard-coded length
-//  * 1D input arrays
-//  * 1D input arrays, data last
-//  * 1D in-place arrays, hard-coded length
-//  * 1D in-place arrays
-//  * 1D in-place arrays, data last
-//  * 1D argout arrays, hard-coded length
-//  * 1D argout arrays
-//  * 1D argout arrays, data last
-//
-#define TEST_FUNCS(TYPE, SNAME) \
-\
-TYPE SNAME ## Length(TYPE vector[3]) {                   \
-  double result = 0;                                     \
-  for (int i=0; i<3; ++i) result += vector[i]*vector[i]; \
-  return (TYPE)sqrt(result);   			         \
-}                                                        \
-\
-TYPE SNAME ## Prod(TYPE * series, int size) {     \
-  TYPE result = 1;                                \
-  for (int i=0; i<size; ++i) result *= series[i]; \
-  return result;                                  \
-}                                                 \
-\
-TYPE SNAME ## Sum(int size, TYPE * series) {      \
-  TYPE result = 0;                                \
-  for (int i=0; i<size; ++i) result += series[i]; \
-  return result;                                  \
-}                                                 \
-\
-void SNAME ## Reverse(TYPE array[3]) { \
-  TYPE temp = array[0];		       \
-  array[0] = array[2];                 \
-  array[2] = temp;                     \
-}                                      \
-\
-void SNAME ## Ones(TYPE * array, int size) { \
-  for (int i=0; i<size; ++i) array[i] = 1;   \
-}                                            \
-\
-void SNAME ## Zeros(int size, TYPE * array) { \
-  for (int i=0; i<size; ++i) array[i] = 0;    \
-}                                             \
-\
-void SNAME ## EOSplit(TYPE vector[3], TYPE even[3], TYPE odd[3]) { \
-  for (int i=0; i<3; ++i) {					   \
-    if (i % 2 == 0) {						   \
-      even[i] = vector[i];					   \
-      odd[ i] = 0;						   \
-    } else {							   \
-      even[i] = 0;						   \
-      odd[ i] = vector[i];					   \
-    }								   \
-  }								   \
-}								   \
-\
-void SNAME ## Twos(TYPE* twoVec, int size) { \
-  for (int i=0; i<size; ++i) twoVec[i] = 2;  \
-}					     \
-\
-void SNAME ## Threes(int size, TYPE* threeVec) { \
-  for (int i=0; i<size; ++i) threeVec[i] = 3;	 \
-}
-
-TEST_FUNCS(signed char       , schar    )
-TEST_FUNCS(unsigned char     , uchar    )
-TEST_FUNCS(short             , short    )
-TEST_FUNCS(unsigned short    , ushort   )
-TEST_FUNCS(int               , int      )
-TEST_FUNCS(unsigned int      , uint     )
-TEST_FUNCS(long              , long     )
-TEST_FUNCS(unsigned long     , ulong    )
-TEST_FUNCS(long long         , longLong )
-TEST_FUNCS(unsigned long long, ulongLong)
-TEST_FUNCS(float             , float    )
-TEST_FUNCS(double            , double   )

Deleted: trunk/doc/test/Vector.h
===================================================================
--- trunk/numpy/doc/swig/test/Vector.h	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Vector.h	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,58 +0,0 @@
-#ifndef VECTOR_H
-#define VECTOR_H
-
-// The following macro defines the prototypes for a family of
-// functions that work with 1D arrays with the forms
-//
-//     TYPE SNAMELength( TYPE vector[3]);
-//     TYPE SNAMEProd(   TYPE * series, int size);
-//     TYPE SNAMESum(    int size, TYPE * series);
-//     void SNAMEReverse(TYPE array[3]);
-//     void SNAMEOnes(   TYPE * array,  int size);
-//     void SNAMEZeros(  int size, TYPE * array);
-//     void SNAMEEOSplit(TYPE vector[3], TYPE even[3], TYPE odd[3]);
-//     void SNAMETwos(   TYPE * twoVec, int size);
-//     void SNAMEThrees( int size, TYPE * threeVec);
-//
-// for any specified type TYPE (for example: short, unsigned int, long
-// long, etc.) with given short name SNAME (for example: short, uint,
-// longLong, etc.).  The macro is then expanded for the given
-// TYPE/SNAME pairs.  The resulting functions are for testing numpy
-// interfaces, respectively, for:
-//
-//  * 1D input arrays, hard-coded length
-//  * 1D input arrays
-//  * 1D input arrays, data last
-//  * 1D in-place arrays, hard-coded length
-//  * 1D in-place arrays
-//  * 1D in-place arrays, data last
-//  * 1D argout arrays, hard-coded length
-//  * 1D argout arrays
-//  * 1D argout arrays, data last
-//
-#define TEST_FUNC_PROTOS(TYPE, SNAME) \
-\
-TYPE SNAME ## Length( TYPE vector[3]); \
-TYPE SNAME ## Prod(   TYPE * series, int size); \
-TYPE SNAME ## Sum(    int size, TYPE * series); \
-void SNAME ## Reverse(TYPE array[3]); \
-void SNAME ## Ones(   TYPE * array,  int size); \
-void SNAME ## Zeros(  int size, TYPE * array); \
-void SNAME ## EOSplit(TYPE vector[3], TYPE even[3], TYPE odd[3]); \
-void SNAME ## Twos(   TYPE * twoVec, int size); \
-void SNAME ## Threes( int size, TYPE * threeVec); \
-
-TEST_FUNC_PROTOS(signed char       , schar    )
-TEST_FUNC_PROTOS(unsigned char     , uchar    )
-TEST_FUNC_PROTOS(short             , short    )
-TEST_FUNC_PROTOS(unsigned short    , ushort   )
-TEST_FUNC_PROTOS(int               , int      )
-TEST_FUNC_PROTOS(unsigned int      , uint     )
-TEST_FUNC_PROTOS(long              , long     )
-TEST_FUNC_PROTOS(unsigned long     , ulong    )
-TEST_FUNC_PROTOS(long long         , longLong )
-TEST_FUNC_PROTOS(unsigned long long, ulongLong)
-TEST_FUNC_PROTOS(float             , float    )
-TEST_FUNC_PROTOS(double            , double   )
-
-#endif

Deleted: trunk/doc/test/Vector.i
===================================================================
--- trunk/numpy/doc/swig/test/Vector.i	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/Vector.i	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,47 +0,0 @@
-// -*- c++ -*-
-%module Vector
-
-%{
-#define SWIG_FILE_WITH_INIT
-#include "Vector.h"
-%}
-
-// Get the NumPy typemaps
-%include "../numpy.i"
-
-%init %{
-  import_array();
-%}
-
-%define %apply_numpy_typemaps(TYPE)
-
-%apply (TYPE IN_ARRAY1[ANY]) {(TYPE vector[3])};
-%apply (TYPE* IN_ARRAY1, int DIM1) {(TYPE* series, int size)};
-%apply (int DIM1, TYPE* IN_ARRAY1) {(int size, TYPE* series)};
-
-%apply (TYPE INPLACE_ARRAY1[ANY]) {(TYPE array[3])};
-%apply (TYPE* INPLACE_ARRAY1, int DIM1) {(TYPE* array, int size)};
-%apply (int DIM1, TYPE* INPLACE_ARRAY1) {(int size, TYPE* array)};
-
-%apply (TYPE ARGOUT_ARRAY1[ANY]) {(TYPE even[3])};
-%apply (TYPE ARGOUT_ARRAY1[ANY]) {(TYPE odd[ 3])};
-%apply (TYPE* ARGOUT_ARRAY1, int DIM1) {(TYPE* twoVec, int size)};
-%apply (int DIM1, TYPE* ARGOUT_ARRAY1) {(int size, TYPE* threeVec)};
-
-%enddef    /* %apply_numpy_typemaps() macro */
-
-%apply_numpy_typemaps(signed char       )
-%apply_numpy_typemaps(unsigned char     )
-%apply_numpy_typemaps(short             )
-%apply_numpy_typemaps(unsigned short    )
-%apply_numpy_typemaps(int               )
-%apply_numpy_typemaps(unsigned int      )
-%apply_numpy_typemaps(long              )
-%apply_numpy_typemaps(unsigned long     )
-%apply_numpy_typemaps(long long         )
-%apply_numpy_typemaps(unsigned long long)
-%apply_numpy_typemaps(float             )
-%apply_numpy_typemaps(double            )
-
-// Include the header file to be wrapped
-%include "Vector.h"

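For the 1D case the same pattern holds; in particular the ARGOUT_ARRAY1
typemaps turn the output-array arguments into return values, so only the size
(or nothing at all, for the hard-coded length-3 forms) is passed from Python.
A short illustrative sketch (values are made up):

    import numpy as np
    import Vector                          # module generated from Vector.i

    assert Vector.intSum([1, 2, 3]) == 6   # IN_ARRAY1, data last: pass the sequence

    twos = Vector.intTwos(4)               # ARGOUT_ARRAY1: the size is the only input
    assert isinstance(twos, np.ndarray) and (twos == 2).all()

    even, odd = Vector.intEOSplit([1, 2, 3])   # two hard-coded argout arrays -> a tuple
    assert list(even) == [1, 0, 3] and list(odd) == [0, 2, 0]
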
Deleted: trunk/doc/test/setup.py
===================================================================
--- trunk/numpy/doc/swig/test/setup.py	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/setup.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,66 +0,0 @@
-#! /usr/bin/env python
-
-# System imports
-from distutils.core import *
-from distutils      import sysconfig
-
-# Third-party modules - we depend on numpy for everything
-import numpy
-
-# Obtain the numpy include directory.  This logic works across numpy versions.
-try:
-    numpy_include = numpy.get_include()
-except AttributeError:
-    numpy_include = numpy.get_numpy_include()
-
-# Array extension module
-_Array = Extension("_Array",
-                   ["Array_wrap.cxx",
-                    "Array1.cxx",
-                    "Array2.cxx"],
-                   include_dirs = [numpy_include],
-                   )
-
-# Farray extension module
-_Farray = Extension("_Farray",
-                    ["Farray_wrap.cxx",
-                     "Farray.cxx"],
-                    include_dirs = [numpy_include],
-                    )
-
-# _Vector extension module
-_Vector = Extension("_Vector",
-                    ["Vector_wrap.cxx",
-                     "Vector.cxx"],
-                    include_dirs = [numpy_include],
-                    )
-
-# _Matrix extension module
-_Matrix = Extension("_Matrix",
-                    ["Matrix_wrap.cxx",
-                     "Matrix.cxx"],
-                    include_dirs = [numpy_include],
-                    )
-
-# _Tensor extension module
-_Tensor = Extension("_Tensor",
-                    ["Tensor_wrap.cxx",
-                     "Tensor.cxx"],
-                    include_dirs = [numpy_include],
-                    )
-
-_Fortran = Extension("_Fortran",
-                    ["Fortran_wrap.cxx",
-                     "Fortran.cxx"],
-                    include_dirs = [numpy_include],
-                    )
-
-# NumpyTypemapTests setup
-setup(name        = "NumpyTypemapTests",
-      description = "Functions that work on arrays",
-      author      = "Bill Spotz",
-      py_modules  = ["Array", "Farray", "Vector", "Matrix", "Tensor",
-                     "Fortran"],
-      ext_modules = [_Array , _Farray , _Vector , _Matrix , _Tensor,
-                     _Fortran]
-      )

Deleted: trunk/doc/test/testArray.py
===================================================================
--- trunk/numpy/doc/swig/test/testArray.py	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/testArray.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,283 +0,0 @@
-#! /usr/bin/env python
-
-# System imports
-from   distutils.util import get_platform
-import os
-import sys
-import unittest
-
-# Import NumPy
-import numpy as np
-major, minor = [ int(d) for d in np.__version__.split(".")[:2] ]
-if major == 0:
-    BadListError = TypeError
-else:
-    BadListError = ValueError
-
-import Array
-
-######################################################################
-
-class Array1TestCase(unittest.TestCase):
-
-    def setUp(self):
-        self.length = 5
-        self.array1 = Array.Array1(self.length)
-
-    def testConstructor0(self):
-        "Test Array1 default constructor"
-        a = Array.Array1()
-        self.failUnless(isinstance(a, Array.Array1))
-        self.failUnless(len(a) == 0)
-
-    def testConstructor1(self):
-        "Test Array1 length constructor"
-        self.failUnless(isinstance(self.array1, Array.Array1))
-
-    def testConstructor2(self):
-        "Test Array1 array constructor"
-        na = np.arange(self.length)
-        aa = Array.Array1(na)
-        self.failUnless(isinstance(aa, Array.Array1))
-
-    def testConstructor3(self):
-        "Test Array1 copy constructor"
-        for i in range(self.array1.length()): self.array1[i] = i
-        arrayCopy = Array.Array1(self.array1)
-        self.failUnless(arrayCopy == self.array1)
-
-    def testConstructorBad(self):
-        "Test Array1 length constructor, negative"
-        self.assertRaises(ValueError, Array.Array1, -4)
-
-    def testLength(self):
-        "Test Array1 length method"
-        self.failUnless(self.array1.length() == self.length)
-
-    def testLen(self):
-        "Test Array1 __len__ method"
-        self.failUnless(len(self.array1) == self.length)
-
-    def testResize0(self):
-        "Test Array1 resize method, length"
-        newLen = 2 * self.length
-        self.array1.resize(newLen)
-        self.failUnless(len(self.array1) == newLen)
-
-    def testResize1(self):
-        "Test Array1 resize method, array"
-        a = np.zeros((2*self.length,), dtype='l')
-        self.array1.resize(a)
-        self.failUnless(len(self.array1) == len(a))
-
-    def testResizeBad(self):
-        "Test Array1 resize method, negative length"
-        self.assertRaises(ValueError, self.array1.resize, -5)
-
-    def testSetGet(self):
-        "Test Array1 __setitem__, __getitem__ methods"
-        n = self.length
-        for i in range(n):
-            self.array1[i] = i*i
-        for i in range(n):
-            self.failUnless(self.array1[i] == i*i)
-
-    def testSetBad1(self):
-        "Test Array1 __setitem__ method, negative index"
-        self.assertRaises(IndexError, self.array1.__setitem__, -1, 0)
-
-    def testSetBad2(self):
-        "Test Array1 __setitem__ method, out-of-range index"
-        self.assertRaises(IndexError, self.array1.__setitem__, self.length+1, 0)
-
-    def testGetBad1(self):
-        "Test Array1 __getitem__ method, negative index"
-        self.assertRaises(IndexError, self.array1.__getitem__, -1)
-
-    def testGetBad2(self):
-        "Test Array1 __getitem__ method, out-of-range index"
-        self.assertRaises(IndexError, self.array1.__getitem__, self.length+1)
-
-    def testAsString(self):
-        "Test Array1 asString method"
-        for i in range(self.array1.length()): self.array1[i] = i+1
-        self.failUnless(self.array1.asString() == "[ 1, 2, 3, 4, 5 ]")
-
-    def testStr(self):
-        "Test Array1 __str__ method"
-        for i in range(self.array1.length()): self.array1[i] = i-2
-        self.failUnless(str(self.array1) == "[ -2, -1, 0, 1, 2 ]")
-
-    def testView(self):
-        "Test Array1 view method"
-        for i in range(self.array1.length()): self.array1[i] = i+1
-        a = self.array1.view()
-        self.failUnless(isinstance(a, np.ndarray))
-        self.failUnless(len(a) == self.length)
-        self.failUnless((a == [1,2,3,4,5]).all())
-
-######################################################################
-
-class Array2TestCase(unittest.TestCase):
-
-    def setUp(self):
-        self.nrows = 5
-        self.ncols = 4
-        self.array2 = Array.Array2(self.nrows, self.ncols)
-
-    def testConstructor0(self):
-        "Test Array2 default constructor"
-        a = Array.Array2()
-        self.failUnless(isinstance(a, Array.Array2))
-        self.failUnless(len(a) == 0)
-
-    def testConstructor1(self):
-        "Test Array2 nrows, ncols constructor"
-        self.failUnless(isinstance(self.array2, Array.Array2))
-
-    def testConstructor2(self):
-        "Test Array2 array constructor"
-        na = np.zeros((3,4), dtype="l")
-        aa = Array.Array2(na)
-        self.failUnless(isinstance(aa, Array.Array2))
-
-    def testConstructor3(self):
-        "Test Array2 copy constructor"
-        for i in range(self.nrows):
-            for j in range(self.ncols):
-                self.array2[i][j] = i * j
-        arrayCopy = Array.Array2(self.array2)
-        self.failUnless(arrayCopy == self.array2)
-
-    def testConstructorBad1(self):
-        "Test Array2 nrows, ncols constructor, negative nrows"
-        self.assertRaises(ValueError, Array.Array2, -4, 4)
-
-    def testConstructorBad2(self):
-        "Test Array2 nrows, ncols constructor, negative ncols"
-        self.assertRaises(ValueError, Array.Array2, 4, -4)
-
-    def testNrows(self):
-        "Test Array2 nrows method"
-        self.failUnless(self.array2.nrows() == self.nrows)
-
-    def testNcols(self):
-        "Test Array2 ncols method"
-        self.failUnless(self.array2.ncols() == self.ncols)
-
-    def testLen(self):
-        "Test Array2 __len__ method"
-        self.failUnless(len(self.array2) == self.nrows*self.ncols)
-
-    def testResize0(self):
-        "Test Array2 resize method, size"
-        newRows = 2 * self.nrows
-        newCols = 2 * self.ncols
-        self.array2.resize(newRows, newCols)
-        self.failUnless(len(self.array2) == newRows * newCols)
-
-    #def testResize1(self):
-    #    "Test Array2 resize method, array"
-    #    a = np.zeros((2*self.nrows, 2*self.ncols), dtype='l')
-    #    self.array2.resize(a)
-    #    self.failUnless(len(self.array2) == len(a))
-
-    def testResizeBad1(self):
-        "Test Array2 resize method, negative nrows"
-        self.assertRaises(ValueError, self.array2.resize, -5, 5)
-
-    def testResizeBad2(self):
-        "Test Array2 resize method, negative ncols"
-        self.assertRaises(ValueError, self.array2.resize, 5, -5)
-
-    def testSetGet1(self):
-        "Test Array2 __setitem__, __getitem__ methods"
-        m = self.nrows
-        n = self.ncols
-        array1 = [ ]
-        a = np.arange(n, dtype="l")
-        for i in range(m):
-            array1.append(Array.Array1(i*a))
-        for i in range(m):
-            self.array2[i] = array1[i]
-        for i in range(m):
-            self.failUnless(self.array2[i] == array1[i])
-
-    def testSetGet2(self):
-        "Test Array2 chained __setitem__, __getitem__ methods"
-        m = self.nrows
-        n = self.ncols
-        for i in range(m):
-            for j in range(n):
-                self.array2[i][j] = i*j
-        for i in range(m):
-            for j in range(n):
-                self.failUnless(self.array2[i][j] == i*j)
-
-    def testSetBad1(self):
-        "Test Array2 __setitem__ method, negative index"
-        a = Array.Array1(self.ncols)
-        self.assertRaises(IndexError, self.array2.__setitem__, -1, a)
-
-    def testSetBad2(self):
-        "Test Array2 __setitem__ method, out-of-range index"
-        a = Array.Array1(self.ncols)
-        self.assertRaises(IndexError, self.array2.__setitem__, self.nrows+1, a)
-
-    def testGetBad1(self):
-        "Test Array2 __getitem__ method, negative index"
-        self.assertRaises(IndexError, self.array2.__getitem__, -1)
-
-    def testGetBad2(self):
-        "Test Array2 __getitem__ method, out-of-range index"
-        self.assertRaises(IndexError, self.array2.__getitem__, self.nrows+1)
-
-    def testAsString(self):
-        "Test Array2 asString method"
-        result = """\
-[ [ 0, 1, 2, 3 ],
-  [ 1, 2, 3, 4 ],
-  [ 2, 3, 4, 5 ],
-  [ 3, 4, 5, 6 ],
-  [ 4, 5, 6, 7 ] ]
-"""
-        for i in range(self.nrows):
-            for j in range(self.ncols):
-                self.array2[i][j] = i+j
-        self.failUnless(self.array2.asString() == result)
-
-    def testStr(self):
-        "Test Array2 __str__ method"
-        result = """\
-[ [ 0, -1, -2, -3 ],
-  [ 1, 0, -1, -2 ],
-  [ 2, 1, 0, -1 ],
-  [ 3, 2, 1, 0 ],
-  [ 4, 3, 2, 1 ] ]
-"""
-        for i in range(self.nrows):
-            for j in range(self.ncols):
-                self.array2[i][j] = i-j
-        self.failUnless(str(self.array2) == result)
-
-    def testView(self):
-        "Test Array2 view method"
-        a = self.array2.view()
-        self.failUnless(isinstance(a, np.ndarray))
-        self.failUnless(len(a) == self.nrows)
-
-######################################################################
-
-if __name__ == "__main__":
-
-    # Build the test suite
-    suite = unittest.TestSuite()
-    suite.addTest(unittest.makeSuite(Array1TestCase))
-    suite.addTest(unittest.makeSuite(Array2TestCase))
-
-    # Execute the test suite
-    print "Testing Classes of Module Array"
-    print "NumPy version", np.__version__
-    print
-    result = unittest.TextTestRunner(verbosity=2).run(suite)
-    sys.exit(len(result.errors) + len(result.failures))
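
The Array tests above pin down the Python protocol of the wrapped C++ Array1/Array2 classes: length and size queries, item access with bounds checking, asString/__str__ formatting, and a view() method that returns the data as a NumPy ndarray. A hedged usage sketch of the one-dimensional case (assumes the _Array extension has been built and the Array proxy module is importable):

    import numpy as np
    import Array

    a = Array.Array1(5)                  # length constructor
    for i in range(len(a)):              # __len__
        a[i] = i * i                     # __setitem__ with bounds checking
    v = a.view()                         # the wrapped data as an ndarray
    assert isinstance(v, np.ndarray)
    assert (v == [0, 1, 4, 9, 16]).all()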

Deleted: trunk/doc/test/testFarray.py
===================================================================
--- trunk/numpy/doc/swig/test/testFarray.py	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/testFarray.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,158 +0,0 @@
-#! /usr/bin/env python
-
-# System imports
-from   distutils.util import get_platform
-import os
-import sys
-import unittest
-
-# Import NumPy
-import numpy as np
-major, minor = [ int(d) for d in np.__version__.split(".")[:2] ]
-if major == 0: BadListError = TypeError
-else:          BadListError = ValueError
-
-# Add the distutils-generated build directory to the python search path and then
-# import the extension module
-libDir = "lib.%s-%s" % (get_platform(), sys.version[:3])
-sys.path.insert(0,os.path.join("build", libDir))
-import Farray
-
-######################################################################
-
-class FarrayTestCase(unittest.TestCase):
-
-    def setUp(self):
-        self.nrows = 5
-        self.ncols = 4
-        self.array = Farray.Farray(self.nrows, self.ncols)
-
-    def testConstructor1(self):
-        "Test Farray size constructor"
-        self.failUnless(isinstance(self.array, Farray.Farray))
-
-    def testConstructor2(self):
-        "Test Farray copy constructor"
-        for i in range(self.nrows):
-            for j in range(self.ncols):
-                self.array[i,j] = i + j
-        arrayCopy = Farray.Farray(self.array)
-        self.failUnless(arrayCopy == self.array)
-
-    def testConstructorBad1(self):
-        "Test Farray size constructor, negative nrows"
-        self.assertRaises(ValueError, Farray.Farray, -4, 4)
-
-    def testConstructorBad2(self):
-        "Test Farray size constructor, negative ncols"
-        self.assertRaises(ValueError, Farray.Farray, 4, -4)
-
-    def testNrows(self):
-        "Test Farray nrows method"
-        self.failUnless(self.array.nrows() == self.nrows)
-
-    def testNcols(self):
-        "Test Farray ncols method"
-        self.failUnless(self.array.ncols() == self.ncols)
-
-    def testLen(self):
-        "Test Farray __len__ method"
-        self.failUnless(len(self.array) == self.nrows*self.ncols)
-
-    def testSetGet(self):
-        "Test Farray __setitem__, __getitem__ methods"
-        m = self.nrows
-        n = self.ncols
-        for i in range(m):
-            for j in range(n):
-                self.array[i,j] = i*j
-        for i in range(m):
-            for j in range(n):
-                self.failUnless(self.array[i,j] == i*j)
-
-    def testSetBad1(self):
-        "Test Farray __setitem__ method, negative row"
-        self.assertRaises(IndexError, self.array.__setitem__, (-1, 3), 0)
-
-    def testSetBad2(self):
-        "Test Farray __setitem__ method, negative col"
-        self.assertRaises(IndexError, self.array.__setitem__, (1, -3), 0)
-
-    def testSetBad3(self):
-        "Test Farray __setitem__ method, out-of-range row"
-        self.assertRaises(IndexError, self.array.__setitem__, (self.nrows+1, 0), 0)
-
-    def testSetBad4(self):
-        "Test Farray __setitem__ method, out-of-range col"
-        self.assertRaises(IndexError, self.array.__setitem__, (0, self.ncols+1), 0)
-
-    def testGetBad1(self):
-        "Test Farray __getitem__ method, negative row"
-        self.assertRaises(IndexError, self.array.__getitem__, (-1, 3))
-
-    def testGetBad2(self):
-        "Test Farray __getitem__ method, negative col"
-        self.assertRaises(IndexError, self.array.__getitem__, (1, -3))
-
-    def testGetBad3(self):
-        "Test Farray __getitem__ method, out-of-range row"
-        self.assertRaises(IndexError, self.array.__getitem__, (self.nrows+1, 0))
-
-    def testGetBad4(self):
-        "Test Farray __getitem__ method, out-of-range col"
-        self.assertRaises(IndexError, self.array.__getitem__, (0, self.ncols+1))
-
-    def testAsString(self):
-        "Test Farray asString method"
-        result = """\
-[ [ 0, 1, 2, 3 ],
-  [ 1, 2, 3, 4 ],
-  [ 2, 3, 4, 5 ],
-  [ 3, 4, 5, 6 ],
-  [ 4, 5, 6, 7 ] ]
-"""
-        for i in range(self.nrows):
-            for j in range(self.ncols):
-                self.array[i,j] = i+j
-        self.failUnless(self.array.asString() == result)
-
-    def testStr(self):
-        "Test Farray __str__ method"
-        result = """\
-[ [ 0, -1, -2, -3 ],
-  [ 1, 0, -1, -2 ],
-  [ 2, 1, 0, -1 ],
-  [ 3, 2, 1, 0 ],
-  [ 4, 3, 2, 1 ] ]
-"""
-        for i in range(self.nrows):
-            for j in range(self.ncols):
-                self.array[i,j] = i-j
-        self.failUnless(str(self.array) == result)
-
-    def testView(self):
-        "Test Farray view method"
-        for i in range(self.nrows):
-            for j in range(self.ncols):
-                self.array[i,j] = i+j
-        a = self.array.view()
-        self.failUnless(isinstance(a, np.ndarray))
-        self.failUnless(a.flags.f_contiguous)
-        for i in range(self.nrows):
-            for j in range(self.ncols):
-                self.failUnless(a[i,j] == i+j)
-
-######################################################################
-
-if __name__ == "__main__":
-
-    # Build the test suite
-    suite = unittest.TestSuite()
-    suite.addTest(unittest.makeSuite(FarrayTestCase))
-
-    # Execute the test suite
-    print "Testing Classes of Module Farray"
-    print "NumPy version", np.__version__
-    print
-    result = unittest.TextTestRunner(verbosity=2).run(suite)
-    sys.exit(len(result.errors) + len(result.failures))
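
Farray is the Fortran-ordered counterpart: unlike Array2, which is indexed through chained array2[i][j] lookups, Farray takes a (row, col) tuple, and its view() is expected to come back Fortran-contiguous. A small hedged sketch of the behaviour the tests above check (assumes the _Farray extension has been built under build/lib.<platform>-<pyver>/, which the script inserts at the front of sys.path):

    import numpy as np
    import Farray

    f = Farray.Farray(5, 4)              # nrows, ncols constructor
    for i in range(f.nrows()):
        for j in range(f.ncols()):
            f[i, j] = i + j              # tuple indexing
    v = f.view()
    assert isinstance(v, np.ndarray)
    assert v.flags.f_contiguous          # column-major layout
    assert v[2, 3] == 5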

Deleted: trunk/doc/test/testFortran.py
===================================================================
--- trunk/numpy/doc/swig/test/testFortran.py	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/testFortran.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,169 +0,0 @@
-#! /usr/bin/env python
-
-# System imports
-from   distutils.util import get_platform
-import os
-import sys
-import unittest
-
-# Import NumPy
-import numpy as np
-major, minor = [ int(d) for d in np.__version__.split(".")[:2] ]
-if major == 0: BadListError = TypeError
-else:          BadListError = ValueError
-
-import Fortran
-
-######################################################################
-
-class FortranTestCase(unittest.TestCase):
-
-    def __init__(self, methodName="runTests"):
-        unittest.TestCase.__init__(self, methodName)
-        self.typeStr  = "double"
-        self.typeCode = "d"
-
-    # Test (type* IN_FARRAY2, int DIM1, int DIM2) typemap
-    def testSecondElementContiguous(self):
-        "Test luSplit function with a Fortran-array"
-        print >>sys.stderr, self.typeStr, "... ",
-        second = Fortran.__dict__[self.typeStr + "SecondElement"]
-        matrix = np.arange(9).reshape(3, 3).astype(self.typeCode)
-        self.assertEquals(second(matrix), 3)
-
-    def testSecondElementFortran(self):
-        "Test luSplit function with a Fortran-array"
-        print >>sys.stderr, self.typeStr, "... ",
-        second = Fortran.__dict__[self.typeStr + "SecondElement"]
-        matrix = np.asfortranarray(np.arange(9).reshape(3, 3),
-                                   self.typeCode)
-        self.assertEquals(second(matrix), 3)
-
-    def testSecondElementObject(self):
-        "Test luSplit function with a Fortran-array"
-        print >>sys.stderr, self.typeStr, "... ",
-        second = Fortran.__dict__[self.typeStr + "SecondElement"]
-        matrix = np.asfortranarray([[0,1,2],[3,4,5],[6,7,8]], self.typeCode)
-        self.assertEquals(second(matrix), 3)
-
-######################################################################
-
-class scharTestCase(FortranTestCase):
-    def __init__(self, methodName="runTest"):
-        FortranTestCase.__init__(self, methodName)
-        self.typeStr  = "schar"
-        self.typeCode = "b"
-
-######################################################################
-
-class ucharTestCase(FortranTestCase):
-    def __init__(self, methodName="runTest"):
-        FortranTestCase.__init__(self, methodName)
-        self.typeStr  = "uchar"
-        self.typeCode = "B"
-
-######################################################################
-
-class shortTestCase(FortranTestCase):
-    def __init__(self, methodName="runTest"):
-        FortranTestCase.__init__(self, methodName)
-        self.typeStr  = "short"
-        self.typeCode = "h"
-
-######################################################################
-
-class ushortTestCase(FortranTestCase):
-    def __init__(self, methodName="runTest"):
-        FortranTestCase.__init__(self, methodName)
-        self.typeStr  = "ushort"
-        self.typeCode = "H"
-
-######################################################################
-
-class intTestCase(FortranTestCase):
-    def __init__(self, methodName="runTest"):
-        FortranTestCase.__init__(self, methodName)
-        self.typeStr  = "int"
-        self.typeCode = "i"
-
-######################################################################
-
-class uintTestCase(FortranTestCase):
-    def __init__(self, methodName="runTest"):
-        FortranTestCase.__init__(self, methodName)
-        self.typeStr  = "uint"
-        self.typeCode = "I"
-
-######################################################################
-
-class longTestCase(FortranTestCase):
-    def __init__(self, methodName="runTest"):
-        FortranTestCase.__init__(self, methodName)
-        self.typeStr  = "long"
-        self.typeCode = "l"
-
-######################################################################
-
-class ulongTestCase(FortranTestCase):
-    def __init__(self, methodName="runTest"):
-        FortranTestCase.__init__(self, methodName)
-        self.typeStr  = "ulong"
-        self.typeCode = "L"
-
-######################################################################
-
-class longLongTestCase(FortranTestCase):
-    def __init__(self, methodName="runTest"):
-        FortranTestCase.__init__(self, methodName)
-        self.typeStr  = "longLong"
-        self.typeCode = "q"
-
-######################################################################
-
-class ulongLongTestCase(FortranTestCase):
-    def __init__(self, methodName="runTest"):
-        FortranTestCase.__init__(self, methodName)
-        self.typeStr  = "ulongLong"
-        self.typeCode = "Q"
-
-######################################################################
-
-class floatTestCase(FortranTestCase):
-    def __init__(self, methodName="runTest"):
-        FortranTestCase.__init__(self, methodName)
-        self.typeStr  = "float"
-        self.typeCode = "f"
-
-######################################################################
-
-class doubleTestCase(FortranTestCase):
-    def __init__(self, methodName="runTest"):
-        FortranTestCase.__init__(self, methodName)
-        self.typeStr  = "double"
-        self.typeCode = "d"
-
-######################################################################
-
-if __name__ == "__main__":
-
-    # Build the test suite
-    suite = unittest.TestSuite()
-    suite.addTest(unittest.makeSuite(    scharTestCase))
-    suite.addTest(unittest.makeSuite(    ucharTestCase))
-    suite.addTest(unittest.makeSuite(    shortTestCase))
-    suite.addTest(unittest.makeSuite(   ushortTestCase))
-    suite.addTest(unittest.makeSuite(      intTestCase))
-    suite.addTest(unittest.makeSuite(     uintTestCase))
-    suite.addTest(unittest.makeSuite(     longTestCase))
-    suite.addTest(unittest.makeSuite(    ulongTestCase))
-    suite.addTest(unittest.makeSuite( longLongTestCase))
-    suite.addTest(unittest.makeSuite(ulongLongTestCase))
-    suite.addTest(unittest.makeSuite(    floatTestCase))
-    suite.addTest(unittest.makeSuite(   doubleTestCase))
-
-    # Execute the test suite
-    print "Testing 2D Functions of Module Matrix"
-    print "NumPy version", np.__version__
-    print
-    result = unittest.TextTestRunner(verbosity=2).run(suite)
-    sys.exit(len(result.errors) + len(result.failures))
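
The three SecondElement tests above all assert the same value, which is the point: the (type* IN_FARRAY2, int DIM1, int DIM2) typemap should present the wrapped function with a Fortran-ordered buffer whether the input array is C-contiguous or already Fortran-ordered. For a 3x3 arange matrix the element at flat column-major index 1 is matrix[1, 0] == 3. A hedged sketch (assumes the _Fortran extension is built; doubleSecondElement is the double instantiation used as the base case above):

    import numpy as np
    import Fortran

    m = np.arange(9).reshape(3, 3).astype("d")
    # In column-major order m reads 0, 3, 6, 1, ..., so its second element is 3.
    assert Fortran.doubleSecondElement(m) == 3
    assert Fortran.doubleSecondElement(np.asfortranarray(m)) == 3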

Deleted: trunk/doc/test/testMatrix.py
===================================================================
--- trunk/numpy/doc/swig/test/testMatrix.py	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/testMatrix.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,361 +0,0 @@
-#! /usr/bin/env python
-
-# System imports
-from   distutils.util import get_platform
-import os
-import sys
-import unittest
-
-# Import NumPy
-import numpy as np
-major, minor = [ int(d) for d in np.__version__.split(".")[:2] ]
-if major == 0: BadListError = TypeError
-else:          BadListError = ValueError
-
-import Matrix
-
-######################################################################
-
-class MatrixTestCase(unittest.TestCase):
-
-    def __init__(self, methodName="runTests"):
-        unittest.TestCase.__init__(self, methodName)
-        self.typeStr  = "double"
-        self.typeCode = "d"
-
-    # Test (type IN_ARRAY2[ANY][ANY]) typemap
-    def testDet(self):
-        "Test det function"
-        print >>sys.stderr, self.typeStr, "... ",
-        det = Matrix.__dict__[self.typeStr + "Det"]
-        matrix = [[8,7],[6,9]]
-        self.assertEquals(det(matrix), 30)
-
-    # Test (type IN_ARRAY2[ANY][ANY]) typemap
-    def testDetBadList(self):
-        "Test det function with bad list"
-        print >>sys.stderr, self.typeStr, "... ",
-        det = Matrix.__dict__[self.typeStr + "Det"]
-        matrix = [[8,7], ["e", "pi"]]
-        self.assertRaises(BadListError, det, matrix)
-
-    # Test (type IN_ARRAY2[ANY][ANY]) typemap
-    def testDetWrongDim(self):
-        "Test det function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        det = Matrix.__dict__[self.typeStr + "Det"]
-        matrix = [8,7]
-        self.assertRaises(TypeError, det, matrix)
-
-    # Test (type IN_ARRAY2[ANY][ANY]) typemap
-    def testDetWrongSize(self):
-        "Test det function with wrong size"
-        print >>sys.stderr, self.typeStr, "... ",
-        det = Matrix.__dict__[self.typeStr + "Det"]
-        matrix = [[8,7,6], [5,4,3], [2,1,0]]
-        self.assertRaises(TypeError, det, matrix)
-
-    # Test (type IN_ARRAY2[ANY][ANY]) typemap
-    def testDetNonContainer(self):
-        "Test det function with non-container"
-        print >>sys.stderr, self.typeStr, "... ",
-        det = Matrix.__dict__[self.typeStr + "Det"]
-        self.assertRaises(TypeError, det, None)
-
-    # Test (type* IN_ARRAY2, int DIM1, int DIM2) typemap
-    def testMax(self):
-        "Test max function"
-        print >>sys.stderr, self.typeStr, "... ",
-        max = Matrix.__dict__[self.typeStr + "Max"]
-        matrix = [[6,5,4],[3,2,1]]
-        self.assertEquals(max(matrix), 6)
-
-    # Test (type* IN_ARRAY2, int DIM1, int DIM2) typemap
-    def testMaxBadList(self):
-        "Test max function with bad list"
-        print >>sys.stderr, self.typeStr, "... ",
-        max = Matrix.__dict__[self.typeStr + "Max"]
-        matrix = [[6,"five",4], ["three", 2, "one"]]
-        self.assertRaises(BadListError, max, matrix)
-
-    # Test (type* IN_ARRAY2, int DIM1, int DIM2) typemap
-    def testMaxNonContainer(self):
-        "Test max function with non-container"
-        print >>sys.stderr, self.typeStr, "... ",
-        max = Matrix.__dict__[self.typeStr + "Max"]
-        self.assertRaises(TypeError, max, None)
-
-    # Test (type* IN_ARRAY2, int DIM1, int DIM2) typemap
-    def testMaxWrongDim(self):
-        "Test max function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        max = Matrix.__dict__[self.typeStr + "Max"]
-        self.assertRaises(TypeError, max, [0, 1, 2, 3])
-
-    # Test (int DIM1, int DIM2, type* IN_ARRAY2) typemap
-    def testMin(self):
-        "Test min function"
-        print >>sys.stderr, self.typeStr, "... ",
-        min = Matrix.__dict__[self.typeStr + "Min"]
-        matrix = [[9,8],[7,6],[5,4]]
-        self.assertEquals(min(matrix), 4)
-
-    # Test (int DIM1, int DIM2, type* IN_ARRAY2) typemap
-    def testMinBadList(self):
-        "Test min function with bad list"
-        print >>sys.stderr, self.typeStr, "... ",
-        min = Matrix.__dict__[self.typeStr + "Min"]
-        matrix = [["nine","eight"], ["seven","six"]]
-        self.assertRaises(BadListError, min, matrix)
-
-    # Test (int DIM1, int DIM2, type* IN_ARRAY2) typemap
-    def testMinWrongDim(self):
-        "Test min function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        min = Matrix.__dict__[self.typeStr + "Min"]
-        self.assertRaises(TypeError, min, [1,3,5,7,9])
-
-    # Test (int DIM1, int DIM2, type* IN_ARRAY2) typemap
-    def testMinNonContainer(self):
-        "Test min function with non-container"
-        print >>sys.stderr, self.typeStr, "... ",
-        min = Matrix.__dict__[self.typeStr + "Min"]
-        self.assertRaises(TypeError, min, False)
-
-    # Test (type INPLACE_ARRAY2[ANY][ANY]) typemap
-    def testScale(self):
-        "Test scale function"
-        print >>sys.stderr, self.typeStr, "... ",
-        scale = Matrix.__dict__[self.typeStr + "Scale"]
-        matrix = np.array([[1,2,3],[2,1,2],[3,2,1]],self.typeCode)
-        scale(matrix,4)
-        self.assertEquals((matrix == [[4,8,12],[8,4,8],[12,8,4]]).all(), True)
-
-    # Test (type INPLACE_ARRAY2[ANY][ANY]) typemap
-    def testScaleWrongDim(self):
-        "Test scale function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        scale = Matrix.__dict__[self.typeStr + "Scale"]
-        matrix = np.array([1,2,2,1],self.typeCode)
-        self.assertRaises(TypeError, scale, matrix)
-
-    # Test (type INPLACE_ARRAY2[ANY][ANY]) typemap
-    def testScaleWrongSize(self):
-        "Test scale function with wrong size"
-        print >>sys.stderr, self.typeStr, "... ",
-        scale = Matrix.__dict__[self.typeStr + "Scale"]
-        matrix = np.array([[1,2],[2,1]],self.typeCode)
-        self.assertRaises(TypeError, scale, matrix)
-
-    # Test (type INPLACE_ARRAY2[ANY][ANY]) typemap
-    def testScaleWrongType(self):
-        "Test scale function with wrong type"
-        print >>sys.stderr, self.typeStr, "... ",
-        scale = Matrix.__dict__[self.typeStr + "Scale"]
-        matrix = np.array([[1,2,3],[2,1,2],[3,2,1]],'c')
-        self.assertRaises(TypeError, scale, matrix)
-
-    # Test (type INPLACE_ARRAY2[ANY][ANY]) typemap
-    def testScaleNonArray(self):
-        "Test scale function with non-array"
-        print >>sys.stderr, self.typeStr, "... ",
-        scale = Matrix.__dict__[self.typeStr + "Scale"]
-        matrix = [[1,2,3],[2,1,2],[3,2,1]]
-        self.assertRaises(TypeError, scale, matrix)
-
-    # Test (type* INPLACE_ARRAY2, int DIM1, int DIM2) typemap
-    def testFloor(self):
-        "Test floor function"
-        print >>sys.stderr, self.typeStr, "... ",
-        floor = Matrix.__dict__[self.typeStr + "Floor"]
-        matrix = np.array([[6,7],[8,9]],self.typeCode)
-        floor(matrix,7)
-        np.testing.assert_array_equal(matrix, np.array([[7,7],[8,9]]))
-
-    # Test (type* INPLACE_ARRAY2, int DIM1, int DIM2) typemap
-    def testFloorWrongDim(self):
-        "Test floor function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        floor = Matrix.__dict__[self.typeStr + "Floor"]
-        matrix = np.array([6,7,8,9],self.typeCode)
-        self.assertRaises(TypeError, floor, matrix)
-
-    # Test (type* INPLACE_ARRAY2, int DIM1, int DIM2) typemap
-    def testFloorWrongType(self):
-        "Test floor function with wrong type"
-        print >>sys.stderr, self.typeStr, "... ",
-        floor = Matrix.__dict__[self.typeStr + "Floor"]
-        matrix = np.array([[6,7], [8,9]],'c')
-        self.assertRaises(TypeError, floor, matrix)
-
-    # Test (type* INPLACE_ARRAY2, int DIM1, int DIM2) typemap
-    def testFloorNonArray(self):
-        "Test floor function with non-array"
-        print >>sys.stderr, self.typeStr, "... ",
-        floor = Matrix.__dict__[self.typeStr + "Floor"]
-        matrix = [[6,7], [8,9]]
-        self.assertRaises(TypeError, floor, matrix)
-
-    # Test (int DIM1, int DIM2, type* INPLACE_ARRAY2) typemap
-    def testCeil(self):
-        "Test ceil function"
-        print >>sys.stderr, self.typeStr, "... ",
-        ceil = Matrix.__dict__[self.typeStr + "Ceil"]
-        matrix = np.array([[1,2],[3,4]],self.typeCode)
-        ceil(matrix,3)
-        np.testing.assert_array_equal(matrix, np.array([[1,2],[3,3]]))
-
-    # Test (int DIM1, int DIM2, type* INPLACE_ARRAY2) typemap
-    def testCeilWrongDim(self):
-        "Test ceil function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        ceil = Matrix.__dict__[self.typeStr + "Ceil"]
-        matrix = np.array([1,2,3,4],self.typeCode)
-        self.assertRaises(TypeError, ceil, matrix)
-
-    # Test (int DIM1, int DIM2, type* INPLACE_ARRAY2) typemap
-    def testCeilWrongType(self):
-        "Test ceil function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        ceil = Matrix.__dict__[self.typeStr + "Ceil"]
-        matrix = np.array([[1,2], [3,4]],'c')
-        self.assertRaises(TypeError, ceil, matrix)
-
-    # Test (int DIM1, int DIM2, type* INPLACE_ARRAY2) typemap
-    def testCeilNonArray(self):
-        "Test ceil function with non-array"
-        print >>sys.stderr, self.typeStr, "... ",
-        ceil = Matrix.__dict__[self.typeStr + "Ceil"]
-        matrix = [[1,2], [3,4]]
-        self.assertRaises(TypeError, ceil, matrix)
-
-    # Test (type ARGOUT_ARRAY2[ANY][ANY]) typemap
-    def testLUSplit(self):
-        "Test luSplit function"
-        print >>sys.stderr, self.typeStr, "... ",
-        luSplit = Matrix.__dict__[self.typeStr + "LUSplit"]
-        lower, upper = luSplit([[1,2,3],[4,5,6],[7,8,9]])
-        self.assertEquals((lower == [[1,0,0],[4,5,0],[7,8,9]]).all(), True)
-        self.assertEquals((upper == [[0,2,3],[0,0,6],[0,0,0]]).all(), True)
-
-######################################################################
-
-class scharTestCase(MatrixTestCase):
-    def __init__(self, methodName="runTest"):
-        MatrixTestCase.__init__(self, methodName)
-        self.typeStr  = "schar"
-        self.typeCode = "b"
-
-######################################################################
-
-class ucharTestCase(MatrixTestCase):
-    def __init__(self, methodName="runTest"):
-        MatrixTestCase.__init__(self, methodName)
-        self.typeStr  = "uchar"
-        self.typeCode = "B"
-
-######################################################################
-
-class shortTestCase(MatrixTestCase):
-    def __init__(self, methodName="runTest"):
-        MatrixTestCase.__init__(self, methodName)
-        self.typeStr  = "short"
-        self.typeCode = "h"
-
-######################################################################
-
-class ushortTestCase(MatrixTestCase):
-    def __init__(self, methodName="runTest"):
-        MatrixTestCase.__init__(self, methodName)
-        self.typeStr  = "ushort"
-        self.typeCode = "H"
-
-######################################################################
-
-class intTestCase(MatrixTestCase):
-    def __init__(self, methodName="runTest"):
-        MatrixTestCase.__init__(self, methodName)
-        self.typeStr  = "int"
-        self.typeCode = "i"
-
-######################################################################
-
-class uintTestCase(MatrixTestCase):
-    def __init__(self, methodName="runTest"):
-        MatrixTestCase.__init__(self, methodName)
-        self.typeStr  = "uint"
-        self.typeCode = "I"
-
-######################################################################
-
-class longTestCase(MatrixTestCase):
-    def __init__(self, methodName="runTest"):
-        MatrixTestCase.__init__(self, methodName)
-        self.typeStr  = "long"
-        self.typeCode = "l"
-
-######################################################################
-
-class ulongTestCase(MatrixTestCase):
-    def __init__(self, methodName="runTest"):
-        MatrixTestCase.__init__(self, methodName)
-        self.typeStr  = "ulong"
-        self.typeCode = "L"
-
-######################################################################
-
-class longLongTestCase(MatrixTestCase):
-    def __init__(self, methodName="runTest"):
-        MatrixTestCase.__init__(self, methodName)
-        self.typeStr  = "longLong"
-        self.typeCode = "q"
-
-######################################################################
-
-class ulongLongTestCase(MatrixTestCase):
-    def __init__(self, methodName="runTest"):
-        MatrixTestCase.__init__(self, methodName)
-        self.typeStr  = "ulongLong"
-        self.typeCode = "Q"
-
-######################################################################
-
-class floatTestCase(MatrixTestCase):
-    def __init__(self, methodName="runTest"):
-        MatrixTestCase.__init__(self, methodName)
-        self.typeStr  = "float"
-        self.typeCode = "f"
-
-######################################################################
-
-class doubleTestCase(MatrixTestCase):
-    def __init__(self, methodName="runTest"):
-        MatrixTestCase.__init__(self, methodName)
-        self.typeStr  = "double"
-        self.typeCode = "d"
-
-######################################################################
-
-if __name__ == "__main__":
-
-    # Build the test suite
-    suite = unittest.TestSuite()
-    suite.addTest(unittest.makeSuite(    scharTestCase))
-    suite.addTest(unittest.makeSuite(    ucharTestCase))
-    suite.addTest(unittest.makeSuite(    shortTestCase))
-    suite.addTest(unittest.makeSuite(   ushortTestCase))
-    suite.addTest(unittest.makeSuite(      intTestCase))
-    suite.addTest(unittest.makeSuite(     uintTestCase))
-    suite.addTest(unittest.makeSuite(     longTestCase))
-    suite.addTest(unittest.makeSuite(    ulongTestCase))
-    suite.addTest(unittest.makeSuite( longLongTestCase))
-    suite.addTest(unittest.makeSuite(ulongLongTestCase))
-    suite.addTest(unittest.makeSuite(    floatTestCase))
-    suite.addTest(unittest.makeSuite(   doubleTestCase))
-
-    # Execute the test suite
-    print "Testing 2D Functions of Module Matrix"
-    print "NumPy version", np.__version__
-    print
-    result = unittest.TextTestRunner(verbosity=2).run(suite)
-    sys.exit(len(result.errors) + len(result.failures))
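
The Matrix tests (and the Tensor and Vector tests that follow) share one lookup idiom: each C type gets its own wrapped function, and the test case composes the name from its typeStr, e.g. "double" + "Det" gives doubleDet. A hedged sketch of that pattern (assumes the _Matrix extension is built; the determinant value is the one asserted in testDet above):

    import Matrix

    for typeStr in ("int", "long", "float", "double"):
        det = Matrix.__dict__[typeStr + "Det"]   # same as getattr(Matrix, typeStr + "Det")
        assert det([[8, 7], [6, 9]]) == 30       # 8*9 - 7*6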

Deleted: trunk/doc/test/testTensor.py
===================================================================
--- trunk/numpy/doc/swig/test/testTensor.py	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/testTensor.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,401 +0,0 @@
-#! /usr/bin/env python
-
-# System imports
-from   distutils.util import get_platform
-from   math           import sqrt
-import os
-import sys
-import unittest
-
-# Import NumPy
-import numpy as np
-major, minor = [ int(d) for d in np.__version__.split(".")[:2] ]
-if major == 0: BadListError = TypeError
-else:          BadListError = ValueError
-
-import Tensor
-
-######################################################################
-
-class TensorTestCase(unittest.TestCase):
-
-    def __init__(self, methodName="runTests"):
-        unittest.TestCase.__init__(self, methodName)
-        self.typeStr  = "double"
-        self.typeCode = "d"
-        self.result   = sqrt(28.0/8)
-
-    # Test (type IN_ARRAY3[ANY][ANY][ANY]) typemap
-    def testNorm(self):
-        "Test norm function"
-        print >>sys.stderr, self.typeStr, "... ",
-        norm = Tensor.__dict__[self.typeStr + "Norm"]
-        tensor = [[[0,1], [2,3]],
-                  [[3,2], [1,0]]]
-        if isinstance(self.result, int):
-            self.assertEquals(norm(tensor), self.result)
-        else:
-            self.assertAlmostEqual(norm(tensor), self.result, 6)
-
-    # Test (type IN_ARRAY3[ANY][ANY][ANY]) typemap
-    def testNormBadList(self):
-        "Test norm function with bad list"
-        print >>sys.stderr, self.typeStr, "... ",
-        norm = Tensor.__dict__[self.typeStr + "Norm"]
-        tensor = [[[0,"one"],[2,3]],
-                  [[3,"two"],[1,0]]]
-        self.assertRaises(BadListError, norm, tensor)
-
-    # Test (type IN_ARRAY3[ANY][ANY][ANY]) typemap
-    def testNormWrongDim(self):
-        "Test norm function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        norm = Tensor.__dict__[self.typeStr + "Norm"]
-        tensor = [[0,1,2,3],
-                  [3,2,1,0]]
-        self.assertRaises(TypeError, norm, tensor)
-
-    # Test (type IN_ARRAY3[ANY][ANY][ANY]) typemap
-    def testNormWrongSize(self):
-        "Test norm function with wrong size"
-        print >>sys.stderr, self.typeStr, "... ",
-        norm = Tensor.__dict__[self.typeStr + "Norm"]
-        tensor = [[[0,1,0], [2,3,2]],
-                  [[3,2,3], [1,0,1]]]
-        self.assertRaises(TypeError, norm, tensor)
-
-    # Test (type IN_ARRAY3[ANY][ANY][ANY]) typemap
-    def testNormNonContainer(self):
-        "Test norm function with non-container"
-        print >>sys.stderr, self.typeStr, "... ",
-        norm = Tensor.__dict__[self.typeStr + "Norm"]
-        self.assertRaises(TypeError, norm, None)
-
-    # Test (type* IN_ARRAY3, int DIM1, int DIM2, int DIM3) typemap
-    def testMax(self):
-        "Test max function"
-        print >>sys.stderr, self.typeStr, "... ",
-        max = Tensor.__dict__[self.typeStr + "Max"]
-        tensor = [[[1,2], [3,4]],
-                  [[5,6], [7,8]]]
-        self.assertEquals(max(tensor), 8)
-
-    # Test (type* IN_ARRAY3, int DIM1, int DIM2, int DIM3) typemap
-    def testMaxBadList(self):
-        "Test max function with bad list"
-        print >>sys.stderr, self.typeStr, "... ",
-        max = Tensor.__dict__[self.typeStr + "Max"]
-        tensor = [[[1,"two"], [3,4]],
-                  [[5,"six"], [7,8]]]
-        self.assertRaises(BadListError, max, tensor)
-
-    # Test (type* IN_ARRAY3, int DIM1, int DIM2, int DIM3) typemap
-    def testMaxNonContainer(self):
-        "Test max function with non-container"
-        print >>sys.stderr, self.typeStr, "... ",
-        max = Tensor.__dict__[self.typeStr + "Max"]
-        self.assertRaises(TypeError, max, None)
-
-    # Test (type* IN_ARRAY3, int DIM1, int DIM2, int DIM3) typemap
-    def testMaxWrongDim(self):
-        "Test max function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        max = Tensor.__dict__[self.typeStr + "Max"]
-        self.assertRaises(TypeError, max, [0, -1, 2, -3])
-
-    # Test (int DIM1, int DIM2, int DIM3, type* IN_ARRAY3) typemap
-    def testMin(self):
-        "Test min function"
-        print >>sys.stderr, self.typeStr, "... ",
-        min = Tensor.__dict__[self.typeStr + "Min"]
-        tensor = [[[9,8], [7,6]],
-                  [[5,4], [3,2]]]
-        self.assertEquals(min(tensor), 2)
-
-    # Test (int DIM1, int DIM2, int DIM3, type* IN_ARRAY3) typemap
-    def testMinBadList(self):
-        "Test min function with bad list"
-        print >>sys.stderr, self.typeStr, "... ",
-        min = Tensor.__dict__[self.typeStr + "Min"]
-        tensor = [[["nine",8], [7,6]],
-                  [["five",4], [3,2]]]
-        self.assertRaises(BadListError, min, tensor)
-
-    # Test (int DIM1, int DIM2, int DIM3, type* IN_ARRAY3) typemap
-    def testMinNonContainer(self):
-        "Test min function with non-container"
-        print >>sys.stderr, self.typeStr, "... ",
-        min = Tensor.__dict__[self.typeStr + "Min"]
-        self.assertRaises(TypeError, min, True)
-
-    # Test (int DIM1, int DIM2, int DIM3, type* IN_ARRAY3) typemap
-    def testMinWrongDim(self):
-        "Test min function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        min = Tensor.__dict__[self.typeStr + "Min"]
-        self.assertRaises(TypeError, min, [[1,3],[5,7]])
-
-    # Test (type INPLACE_ARRAY3[ANY][ANY][ANY]) typemap
-    def testScale(self):
-        "Test scale function"
-        print >>sys.stderr, self.typeStr, "... ",
-        scale = Tensor.__dict__[self.typeStr + "Scale"]
-        tensor = np.array([[[1,0,1], [0,1,0], [1,0,1]],
-                          [[0,1,0], [1,0,1], [0,1,0]],
-                          [[1,0,1], [0,1,0], [1,0,1]]],self.typeCode)
-        scale(tensor,4)
-        self.assertEquals((tensor == [[[4,0,4], [0,4,0], [4,0,4]],
-                                      [[0,4,0], [4,0,4], [0,4,0]],
-                                      [[4,0,4], [0,4,0], [4,0,4]]]).all(), True)
-
-    # Test (type INPLACE_ARRAY3[ANY][ANY][ANY]) typemap
-    def testScaleWrongType(self):
-        "Test scale function with wrong type"
-        print >>sys.stderr, self.typeStr, "... ",
-        scale = Tensor.__dict__[self.typeStr + "Scale"]
-        tensor = np.array([[[1,0,1], [0,1,0], [1,0,1]],
-                          [[0,1,0], [1,0,1], [0,1,0]],
-                          [[1,0,1], [0,1,0], [1,0,1]]],'c')
-        self.assertRaises(TypeError, scale, tensor)
-
-    # Test (type INPLACE_ARRAY3[ANY][ANY][ANY]) typemap
-    def testScaleWrongDim(self):
-        "Test scale function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        scale = Tensor.__dict__[self.typeStr + "Scale"]
-        tensor = np.array([[1,0,1], [0,1,0], [1,0,1],
-                          [0,1,0], [1,0,1], [0,1,0]],self.typeCode)
-        self.assertRaises(TypeError, scale, tensor)
-
-    # Test (type INPLACE_ARRAY3[ANY][ANY][ANY]) typemap
-    def testScaleWrongSize(self):
-        "Test scale function with wrong size"
-        print >>sys.stderr, self.typeStr, "... ",
-        scale = Tensor.__dict__[self.typeStr + "Scale"]
-        tensor = np.array([[[1,0], [0,1], [1,0]],
-                          [[0,1], [1,0], [0,1]],
-                          [[1,0], [0,1], [1,0]]],self.typeCode)
-        self.assertRaises(TypeError, scale, tensor)
-
-    # Test (type INPLACE_ARRAY3[ANY][ANY][ANY]) typemap
-    def testScaleNonArray(self):
-        "Test scale function with non-array"
-        print >>sys.stderr, self.typeStr, "... ",
-        scale = Tensor.__dict__[self.typeStr + "Scale"]
-        self.assertRaises(TypeError, scale, True)
-
-    # Test (type* INPLACE_ARRAY3, int DIM1, int DIM2, int DIM3) typemap
-    def testFloor(self):
-        "Test floor function"
-        print >>sys.stderr, self.typeStr, "... ",
-        floor = Tensor.__dict__[self.typeStr + "Floor"]
-        tensor = np.array([[[1,2], [3,4]],
-                          [[5,6], [7,8]]],self.typeCode)
-        floor(tensor,4)
-        np.testing.assert_array_equal(tensor, np.array([[[4,4], [4,4]],
-                                                      [[5,6], [7,8]]]))
-
-    # Test (type* INPLACE_ARRAY3, int DIM1, int DIM2, int DIM3) typemap
-    def testFloorWrongType(self):
-        "Test floor function with wrong type"
-        print >>sys.stderr, self.typeStr, "... ",
-        floor = Tensor.__dict__[self.typeStr + "Floor"]
-        tensor = np.array([[[1,2], [3,4]],
-                          [[5,6], [7,8]]],'c')
-        self.assertRaises(TypeError, floor, tensor)
-
-    # Test (type* INPLACE_ARRAY3, int DIM1, int DIM2, int DIM3) typemap
-    def testFloorWrongDim(self):
-        "Test floor function with wrong type"
-        print >>sys.stderr, self.typeStr, "... ",
-        floor = Tensor.__dict__[self.typeStr + "Floor"]
-        tensor = np.array([[1,2], [3,4], [5,6], [7,8]],self.typeCode)
-        self.assertRaises(TypeError, floor, tensor)
-
-    # Test (type* INPLACE_ARRAY3, int DIM1, int DIM2, int DIM3) typemap
-    def testFloorNonArray(self):
-        "Test floor function with non-array"
-        print >>sys.stderr, self.typeStr, "... ",
-        floor = Tensor.__dict__[self.typeStr + "Floor"]
-        self.assertRaises(TypeError, floor, object)
-
-    # Test (int DIM1, int DIM2, int DIM3, type* INPLACE_ARRAY3) typemap
-    def testCeil(self):
-        "Test ceil function"
-        print >>sys.stderr, self.typeStr, "... ",
-        ceil = Tensor.__dict__[self.typeStr + "Ceil"]
-        tensor = np.array([[[9,8], [7,6]],
-                          [[5,4], [3,2]]],self.typeCode)
-        ceil(tensor,5)
-        np.testing.assert_array_equal(tensor, np.array([[[5,5], [5,5]],
-                                                      [[5,4], [3,2]]]))
-
-    # Test (int DIM1, int DIM2, int DIM3, type* INPLACE_ARRAY3) typemap
-    def testCeilWrongType(self):
-        "Test ceil function with wrong type"
-        print >>sys.stderr, self.typeStr, "... ",
-        ceil = Tensor.__dict__[self.typeStr + "Ceil"]
-        tensor = np.array([[[9,8], [7,6]],
-                          [[5,4], [3,2]]],'c')
-        self.assertRaises(TypeError, ceil, tensor)
-
-    # Test (int DIM1, int DIM2, int DIM3, type* INPLACE_ARRAY3) typemap
-    def testCeilWrongDim(self):
-        "Test ceil function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        ceil = Tensor.__dict__[self.typeStr + "Ceil"]
-        tensor = np.array([[9,8], [7,6], [5,4], [3,2]], self.typeCode)
-        self.assertRaises(TypeError, ceil, tensor)
-
-    # Test (int DIM1, int DIM2, int DIM3, type* INPLACE_ARRAY3) typemap
-    def testCeilNonArray(self):
-        "Test ceil function with non-array"
-        print >>sys.stderr, self.typeStr, "... ",
-        ceil = Tensor.__dict__[self.typeStr + "Ceil"]
-        tensor = [[[9,8], [7,6]],
-                  [[5,4], [3,2]]]
-        self.assertRaises(TypeError, ceil, tensor)
-
-    # Test (type ARGOUT_ARRAY3[ANY][ANY][ANY]) typemap
-    def testLUSplit(self):
-        "Test luSplit function"
-        print >>sys.stderr, self.typeStr, "... ",
-        luSplit = Tensor.__dict__[self.typeStr + "LUSplit"]
-        lower, upper = luSplit([[[1,1], [1,1]],
-                                [[1,1], [1,1]]])
-        self.assertEquals((lower == [[[1,1], [1,0]],
-                                     [[1,0], [0,0]]]).all(), True)
-        self.assertEquals((upper == [[[0,0], [0,1]],
-                                     [[0,1], [1,1]]]).all(), True)
-
-######################################################################
-
-class scharTestCase(TensorTestCase):
-    def __init__(self, methodName="runTest"):
-        TensorTestCase.__init__(self, methodName)
-        self.typeStr  = "schar"
-        self.typeCode = "b"
-        self.result   = int(self.result)
-
-######################################################################
-
-class ucharTestCase(TensorTestCase):
-    def __init__(self, methodName="runTest"):
-        TensorTestCase.__init__(self, methodName)
-        self.typeStr  = "uchar"
-        self.typeCode = "B"
-        self.result   = int(self.result)
-
-######################################################################
-
-class shortTestCase(TensorTestCase):
-    def __init__(self, methodName="runTest"):
-        TensorTestCase.__init__(self, methodName)
-        self.typeStr  = "short"
-        self.typeCode = "h"
-        self.result   = int(self.result)
-
-######################################################################
-
-class ushortTestCase(TensorTestCase):
-    def __init__(self, methodName="runTest"):
-        TensorTestCase.__init__(self, methodName)
-        self.typeStr  = "ushort"
-        self.typeCode = "H"
-        self.result   = int(self.result)
-
-######################################################################
-
-class intTestCase(TensorTestCase):
-    def __init__(self, methodName="runTest"):
-        TensorTestCase.__init__(self, methodName)
-        self.typeStr  = "int"
-        self.typeCode = "i"
-        self.result   = int(self.result)
-
-######################################################################
-
-class uintTestCase(TensorTestCase):
-    def __init__(self, methodName="runTest"):
-        TensorTestCase.__init__(self, methodName)
-        self.typeStr  = "uint"
-        self.typeCode = "I"
-        self.result   = int(self.result)
-
-######################################################################
-
-class longTestCase(TensorTestCase):
-    def __init__(self, methodName="runTest"):
-        TensorTestCase.__init__(self, methodName)
-        self.typeStr  = "long"
-        self.typeCode = "l"
-        self.result   = int(self.result)
-
-######################################################################
-
-class ulongTestCase(TensorTestCase):
-    def __init__(self, methodName="runTest"):
-        TensorTestCase.__init__(self, methodName)
-        self.typeStr  = "ulong"
-        self.typeCode = "L"
-        self.result   = int(self.result)
-
-######################################################################
-
-class longLongTestCase(TensorTestCase):
-    def __init__(self, methodName="runTest"):
-        TensorTestCase.__init__(self, methodName)
-        self.typeStr  = "longLong"
-        self.typeCode = "q"
-        self.result   = int(self.result)
-
-######################################################################
-
-class ulongLongTestCase(TensorTestCase):
-    def __init__(self, methodName="runTest"):
-        TensorTestCase.__init__(self, methodName)
-        self.typeStr  = "ulongLong"
-        self.typeCode = "Q"
-        self.result   = int(self.result)
-
-######################################################################
-
-class floatTestCase(TensorTestCase):
-    def __init__(self, methodName="runTest"):
-        TensorTestCase.__init__(self, methodName)
-        self.typeStr  = "float"
-        self.typeCode = "f"
-
-######################################################################
-
-class doubleTestCase(TensorTestCase):
-    def __init__(self, methodName="runTest"):
-        TensorTestCase.__init__(self, methodName)
-        self.typeStr  = "double"
-        self.typeCode = "d"
-
-######################################################################
-
-if __name__ == "__main__":
-
-    # Build the test suite
-    suite = unittest.TestSuite()
-    suite.addTest(unittest.makeSuite(    scharTestCase))
-    suite.addTest(unittest.makeSuite(    ucharTestCase))
-    suite.addTest(unittest.makeSuite(    shortTestCase))
-    suite.addTest(unittest.makeSuite(   ushortTestCase))
-    suite.addTest(unittest.makeSuite(      intTestCase))
-    suite.addTest(unittest.makeSuite(     uintTestCase))
-    suite.addTest(unittest.makeSuite(     longTestCase))
-    suite.addTest(unittest.makeSuite(    ulongTestCase))
-    suite.addTest(unittest.makeSuite( longLongTestCase))
-    suite.addTest(unittest.makeSuite(ulongLongTestCase))
-    suite.addTest(unittest.makeSuite(    floatTestCase))
-    suite.addTest(unittest.makeSuite(   doubleTestCase))
-
-    # Execute the test suite
-    print "Testing 3D Functions of Module Tensor"
-    print "NumPy version", np.__version__
-    print
-    result = unittest.TextTestRunner(verbosity=2).run(suite)
-    sys.exit(len(result.errors) + len(result.failures))
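
One detail worth spelling out in the Tensor tests above: the expected value sqrt(28.0/8) is simply the root-mean-square of the eight entries of the 2x2x2 tensor used in testNorm, and the integer-typed subclasses truncate it with int(...), i.e. to 1. A small worked check of that arithmetic (pure Python; nothing from the extension is needed):

    from math import sqrt

    tensor = [[[0, 1], [2, 3]],
              [[3, 2], [1, 0]]]
    flat = [x for plane in tensor for row in plane for x in row]
    rms = sqrt(sum(x * x for x in flat) / float(len(flat)))   # sqrt(28/8)
    assert abs(rms - sqrt(28.0 / 8)) < 1e-12
    assert int(rms) == 1   # the value the integer-typed subclasses compare against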

Deleted: trunk/doc/test/testVector.py
===================================================================
--- trunk/numpy/doc/swig/test/testVector.py	2008-08-20 23:44:20 UTC (rev 5669)
+++ trunk/doc/test/testVector.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,380 +0,0 @@
-#! /usr/bin/env python
-
-# System imports
-from   distutils.util import get_platform
-import os
-import sys
-import unittest
-
-# Import NumPy
-import numpy as np
-major, minor = [ int(d) for d in np.__version__.split(".")[:2] ]
-if major == 0: BadListError = TypeError
-else:          BadListError = ValueError
-
-import Vector
-
-######################################################################
-
-class VectorTestCase(unittest.TestCase):
-
-    def __init__(self, methodName="runTest"):
-        unittest.TestCase.__init__(self, methodName)
-        self.typeStr  = "double"
-        self.typeCode = "d"
-
-    # Test the (type IN_ARRAY1[ANY]) typemap
-    def testLength(self):
-        "Test length function"
-        print >>sys.stderr, self.typeStr, "... ",
-        length = Vector.__dict__[self.typeStr + "Length"]
-        self.assertEquals(length([5, 12, 0]), 13)
-
-    # Test the (type IN_ARRAY1[ANY]) typemap
-    def testLengthBadList(self):
-        "Test length function with bad list"
-        print >>sys.stderr, self.typeStr, "... ",
-        length = Vector.__dict__[self.typeStr + "Length"]
-        self.assertRaises(BadListError, length, [5, "twelve", 0])
-
-    # Test the (type IN_ARRAY1[ANY]) typemap
-    def testLengthWrongSize(self):
-        "Test length function with wrong size"
-        print >>sys.stderr, self.typeStr, "... ",
-        length = Vector.__dict__[self.typeStr + "Length"]
-        self.assertRaises(TypeError, length, [5, 12])
-
-    # Test the (type IN_ARRAY1[ANY]) typemap
-    def testLengthWrongDim(self):
-        "Test length function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        length = Vector.__dict__[self.typeStr + "Length"]
-        self.assertRaises(TypeError, length, [[1,2], [3,4]])
-
-    # Test the (type IN_ARRAY1[ANY]) typemap
-    def testLengthNonContainer(self):
-        "Test length function with non-container"
-        print >>sys.stderr, self.typeStr, "... ",
-        length = Vector.__dict__[self.typeStr + "Length"]
-        self.assertRaises(TypeError, length, None)
-
-    # Test the (type* IN_ARRAY1, int DIM1) typemap
-    def testProd(self):
-        "Test prod function"
-        print >>sys.stderr, self.typeStr, "... ",
-        prod = Vector.__dict__[self.typeStr + "Prod"]
-        self.assertEquals(prod([1,2,3,4]), 24)
-
-    # Test the (type* IN_ARRAY1, int DIM1) typemap
-    def testProdBadList(self):
-        "Test prod function with bad list"
-        print >>sys.stderr, self.typeStr, "... ",
-        prod = Vector.__dict__[self.typeStr + "Prod"]
-        self.assertRaises(BadListError, prod, [[1,"two"], ["e","pi"]])
-
-    # Test the (type* IN_ARRAY1, int DIM1) typemap
-    def testProdWrongDim(self):
-        "Test prod function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        prod = Vector.__dict__[self.typeStr + "Prod"]
-        self.assertRaises(TypeError, prod, [[1,2], [8,9]])
-
-    # Test the (type* IN_ARRAY1, int DIM1) typemap
-    def testProdNonContainer(self):
-        "Test prod function with non-container"
-        print >>sys.stderr, self.typeStr, "... ",
-        prod = Vector.__dict__[self.typeStr + "Prod"]
-        self.assertRaises(TypeError, prod, None)
-
-    # Test the (int DIM1, type* IN_ARRAY1) typemap
-    def testSum(self):
-        "Test sum function"
-        print >>sys.stderr, self.typeStr, "... ",
-        sum = Vector.__dict__[self.typeStr + "Sum"]
-        self.assertEquals(sum([5,6,7,8]), 26)
-
-    # Test the (int DIM1, type* IN_ARRAY1) typemap
-    def testSumBadList(self):
-        "Test sum function with bad list"
-        print >>sys.stderr, self.typeStr, "... ",
-        sum = Vector.__dict__[self.typeStr + "Sum"]
-        self.assertRaises(BadListError, sum, [3,4, 5, "pi"])
-
-    # Test the (int DIM1, type* IN_ARRAY1) typemap
-    def testSumWrongDim(self):
-        "Test sum function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        sum = Vector.__dict__[self.typeStr + "Sum"]
-        self.assertRaises(TypeError, sum, [[3,4], [5,6]])
-
-    # Test the (int DIM1, type* IN_ARRAY1) typemap
-    def testSumNonContainer(self):
-        "Test sum function with non-container"
-        print >>sys.stderr, self.typeStr, "... ",
-        sum = Vector.__dict__[self.typeStr + "Sum"]
-        self.assertRaises(TypeError, sum, True)
-
-    # Test the (type INPLACE_ARRAY1[ANY]) typemap
-    def testReverse(self):
-        "Test reverse function"
-        print >>sys.stderr, self.typeStr, "... ",
-        reverse = Vector.__dict__[self.typeStr + "Reverse"]
-        vector = np.array([1,2,4],self.typeCode)
-        reverse(vector)
-        self.assertEquals((vector == [4,2,1]).all(), True)
-
-    # Test the (type INPLACE_ARRAY1[ANY]) typemap
-    def testReverseWrongDim(self):
-        "Test reverse function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        reverse = Vector.__dict__[self.typeStr + "Reverse"]
-        vector = np.array([[1,2], [3,4]],self.typeCode)
-        self.assertRaises(TypeError, reverse, vector)
-
-    # Test the (type INPLACE_ARRAY1[ANY]) typemap
-    def testReverseWrongSize(self):
-        "Test reverse function with wrong size"
-        print >>sys.stderr, self.typeStr, "... ",
-        reverse = Vector.__dict__[self.typeStr + "Reverse"]
-        vector = np.array([9,8,7,6,5,4],self.typeCode)
-        self.assertRaises(TypeError, reverse, vector)
-
-    # Test the (type INPLACE_ARRAY1[ANY]) typemap
-    def testReverseWrongType(self):
-        "Test reverse function with wrong type"
-        print >>sys.stderr, self.typeStr, "... ",
-        reverse = Vector.__dict__[self.typeStr + "Reverse"]
-        vector = np.array([1,2,4],'c')
-        self.assertRaises(TypeError, reverse, vector)
-
-    # Test the (type INPLACE_ARRAY1[ANY]) typemap
-    def testReverseNonArray(self):
-        "Test reverse function with non-array"
-        print >>sys.stderr, self.typeStr, "... ",
-        reverse = Vector.__dict__[self.typeStr + "Reverse"]
-        self.assertRaises(TypeError, reverse, [2,4,6])
-
-    # Test the (type* INPLACE_ARRAY1, int DIM1) typemap
-    def testOnes(self):
-        "Test ones function"
-        print >>sys.stderr, self.typeStr, "... ",
-        ones = Vector.__dict__[self.typeStr + "Ones"]
-        vector = np.zeros(5,self.typeCode)
-        ones(vector)
-        np.testing.assert_array_equal(vector, np.array([1,1,1,1,1]))
-
-    # Test the (type* INPLACE_ARRAY1, int DIM1) typemap
-    def testOnesWrongDim(self):
-        "Test ones function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        ones = Vector.__dict__[self.typeStr + "Ones"]
-        vector = np.zeros((5,5),self.typeCode)
-        self.assertRaises(TypeError, ones, vector)
-
-    # Test the (type* INPLACE_ARRAY1, int DIM1) typemap
-    def testOnesWrongType(self):
-        "Test ones function with wrong type"
-        print >>sys.stderr, self.typeStr, "... ",
-        ones = Vector.__dict__[self.typeStr + "Ones"]
-        vector = np.zeros((5,5),'c')
-        self.assertRaises(TypeError, ones, vector)
-
-    # Test the (type* INPLACE_ARRAY1, int DIM1) typemap
-    def testOnesNonArray(self):
-        "Test ones function with non-array"
-        print >>sys.stderr, self.typeStr, "... ",
-        ones = Vector.__dict__[self.typeStr + "Ones"]
-        self.assertRaises(TypeError, ones, [2,4,6,8])
-
-    # Test the (int DIM1, type* INPLACE_ARRAY1) typemap
-    def testZeros(self):
-        "Test zeros function"
-        print >>sys.stderr, self.typeStr, "... ",
-        zeros = Vector.__dict__[self.typeStr + "Zeros"]
-        vector = np.ones(5,self.typeCode)
-        zeros(vector)
-        np.testing.assert_array_equal(vector, np.array([0,0,0,0,0]))
-
-    # Test the (int DIM1, type* INPLACE_ARRAY1) typemap
-    def testZerosWrongDim(self):
-        "Test zeros function with wrong dimensions"
-        print >>sys.stderr, self.typeStr, "... ",
-        zeros = Vector.__dict__[self.typeStr + "Zeros"]
-        vector = np.ones((5,5),self.typeCode)
-        self.assertRaises(TypeError, zeros, vector)
-
-    # Test the (int DIM1, type* INPLACE_ARRAY1) typemap
-    def testZerosWrongType(self):
-        "Test zeros function with wrong type"
-        print >>sys.stderr, self.typeStr, "... ",
-        zeros = Vector.__dict__[self.typeStr + "Zeros"]
-        vector = np.ones(6,'c')
-        self.assertRaises(TypeError, zeros, vector)
-
-    # Test the (int DIM1, type* INPLACE_ARRAY1) typemap
-    def testZerosNonArray(self):
-        "Test zeros function with non-array"
-        print >>sys.stderr, self.typeStr, "... ",
-        zeros = Vector.__dict__[self.typeStr + "Zeros"]
-        self.assertRaises(TypeError, zeros, [1,3,5,7,9])
-
-    # Test the (type ARGOUT_ARRAY1[ANY]) typemap
-    def testEOSplit(self):
-        "Test eoSplit function"
-        print >>sys.stderr, self.typeStr, "... ",
-        eoSplit = Vector.__dict__[self.typeStr + "EOSplit"]
-        even, odd = eoSplit([1,2,3])
-        self.assertEquals((even == [1,0,3]).all(), True)
-        self.assertEquals((odd  == [0,2,0]).all(), True)
-
-    # Test the (type* ARGOUT_ARRAY1, int DIM1) typemap
-    def testTwos(self):
-        "Test twos function"
-        print >>sys.stderr, self.typeStr, "... ",
-        twos = Vector.__dict__[self.typeStr + "Twos"]
-        vector = twos(5)
-        self.assertEquals((vector == [2,2,2,2,2]).all(), True)
-
-    # Test the (type* ARGOUT_ARRAY1, int DIM1) typemap
-    def testTwosNonInt(self):
-        "Test twos function with non-integer dimension"
-        print >>sys.stderr, self.typeStr, "... ",
-        twos = Vector.__dict__[self.typeStr + "Twos"]
-        self.assertRaises(TypeError, twos, 5.0)
-
-    # Test the (int DIM1, type* ARGOUT_ARRAY1) typemap
-    def testThrees(self):
-        "Test threes function"
-        print >>sys.stderr, self.typeStr, "... ",
-        threes = Vector.__dict__[self.typeStr + "Threes"]
-        vector = threes(6)
-        self.assertEquals((vector == [3,3,3,3,3,3]).all(), True)
-
-    # Test the (int DIM1, type* ARGOUT_ARRAY1) typemap
-    def testThreesNonInt(self):
-        "Test threes function with non-integer dimension"
-        print >>sys.stderr, self.typeStr, "... ",
-        threes = Vector.__dict__[self.typeStr + "Threes"]
-        self.assertRaises(TypeError, threes, "threes")
-
-######################################################################
-
-class scharTestCase(VectorTestCase):
-    def __init__(self, methodName="runTest"):
-        VectorTestCase.__init__(self, methodName)
-        self.typeStr  = "schar"
-        self.typeCode = "b"
-
-######################################################################
-
-class ucharTestCase(VectorTestCase):
-    def __init__(self, methodName="runTest"):
-        VectorTestCase.__init__(self, methodName)
-        self.typeStr  = "uchar"
-        self.typeCode = "B"
-
-######################################################################
-
-class shortTestCase(VectorTestCase):
-    def __init__(self, methodName="runTest"):
-        VectorTestCase.__init__(self, methodName)
-        self.typeStr  = "short"
-        self.typeCode = "h"
-
-######################################################################
-
-class ushortTestCase(VectorTestCase):
-    def __init__(self, methodName="runTest"):
-        VectorTestCase.__init__(self, methodName)
-        self.typeStr  = "ushort"
-        self.typeCode = "H"
-
-######################################################################
-
-class intTestCase(VectorTestCase):
-    def __init__(self, methodName="runTest"):
-        VectorTestCase.__init__(self, methodName)
-        self.typeStr  = "int"
-        self.typeCode = "i"
-
-######################################################################
-
-class uintTestCase(VectorTestCase):
-    def __init__(self, methodName="runTest"):
-        VectorTestCase.__init__(self, methodName)
-        self.typeStr  = "uint"
-        self.typeCode = "I"
-
-######################################################################
-
-class longTestCase(VectorTestCase):
-    def __init__(self, methodName="runTest"):
-        VectorTestCase.__init__(self, methodName)
-        self.typeStr  = "long"
-        self.typeCode = "l"
-
-######################################################################
-
-class ulongTestCase(VectorTestCase):
-    def __init__(self, methodName="runTest"):
-        VectorTestCase.__init__(self, methodName)
-        self.typeStr  = "ulong"
-        self.typeCode = "L"
-
-######################################################################
-
-class longLongTestCase(VectorTestCase):
-    def __init__(self, methodName="runTest"):
-        VectorTestCase.__init__(self, methodName)
-        self.typeStr  = "longLong"
-        self.typeCode = "q"
-
-######################################################################
-
-class ulongLongTestCase(VectorTestCase):
-    def __init__(self, methodName="runTest"):
-        VectorTestCase.__init__(self, methodName)
-        self.typeStr  = "ulongLong"
-        self.typeCode = "Q"
-
-######################################################################
-
-class floatTestCase(VectorTestCase):
-    def __init__(self, methodName="runTest"):
-        VectorTestCase.__init__(self, methodName)
-        self.typeStr  = "float"
-        self.typeCode = "f"
-
-######################################################################
-
-class doubleTestCase(VectorTestCase):
-    def __init__(self, methodName="runTest"):
-        VectorTestCase.__init__(self, methodName)
-        self.typeStr  = "double"
-        self.typeCode = "d"
-
-######################################################################
-
-if __name__ == "__main__":
-
-    # Build the test suite
-    suite = unittest.TestSuite()
-    suite.addTest(unittest.makeSuite(    scharTestCase))
-    suite.addTest(unittest.makeSuite(    ucharTestCase))
-    suite.addTest(unittest.makeSuite(    shortTestCase))
-    suite.addTest(unittest.makeSuite(   ushortTestCase))
-    suite.addTest(unittest.makeSuite(      intTestCase))
-    suite.addTest(unittest.makeSuite(     uintTestCase))
-    suite.addTest(unittest.makeSuite(     longTestCase))
-    suite.addTest(unittest.makeSuite(    ulongTestCase))
-    suite.addTest(unittest.makeSuite( longLongTestCase))
-    suite.addTest(unittest.makeSuite(ulongLongTestCase))
-    suite.addTest(unittest.makeSuite(    floatTestCase))
-    suite.addTest(unittest.makeSuite(   doubleTestCase))
-
-    # Execute the test suite
-    print "Testing 1D Functions of Module Vector"
-    print "NumPy version", np.__version__
-    print
-    result = unittest.TextTestRunner(verbosity=2).run(suite)
-    sys.exit(len(result.errors) + len(result.failures))

Copied: trunk/doc/ufuncs.txt (from rev 5669, trunk/numpy/doc/ufuncs.txt)

Modified: trunk/numpy/__init__.py
===================================================================
--- trunk/numpy/__init__.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/__init__.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -36,14 +36,17 @@
 
   >>> np.lookfor('keyword')
 
+Topical documentation is available under the ``doc`` sub-module::
+
+  >>> from numpy import doc
+  >>> help(doc)
+
 Available subpackages
 ---------------------
-core
-    Defines a multi-dimensional array and useful procedures
-    for Numerical computation.
+doc
+    Topical documentation on broadcasting, indexing, etc.
 lib
-    Basic functions used by several sub-packages and useful
-    to have in the main name-space.
+    Basic functions used by several sub-packages.
 random
     Core Random Tools
 linalg
@@ -52,26 +55,16 @@
     Core FFT routines
 testing
     Numpy testing tools
-
-The following sub-packages must be explicitly imported:
-
 f2py
     Fortran to Python Interface Generator.
 distutils
     Enhancements to distutils with support for
     Fortran compilers and more.
 
-Global symbols from subpackages
--------------------------------
-Do not import directly from `core` and `lib`: those functions
-have been imported into the `numpy` namespace.
-
-Utility tools
--------------
+Utilities
+---------
 test
     Run numpy unittests
-pkgload
-    Load numpy packages
 show_config
     Show numpy build configuration
 dual
@@ -147,7 +140,6 @@
     import random
     import ctypeslib
     import ma
-    import doc
 
     # Make these accessible from numpy name-space
     #  but not imported in from numpy import *
@@ -159,4 +151,4 @@
                'show_config'])
     __all__.extend(core.__all__)
     __all__.extend(lib.__all__)
-    __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma', 'doc'])
+    __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma'])

Deleted: trunk/numpy/doc/CAPI.txt
===================================================================
--- trunk/numpy/doc/CAPI.txt	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/CAPI.txt	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,313 +0,0 @@
-===============
-C-API for NumPy
-===============
-
-:Author:          Travis Oliphant
-:Discussions to:  `numpy-discussion@scipy.org`__
-:Created:         October 2005
-
-__ http://www.scipy.org/Mailing_Lists
-
-The C API of NumPy is (mostly) backward compatible with Numeric.
-
-There are a few non-standard Numeric usages (that were not really part
-of the API) that will need to be changed:
-
-* If you used any of the function pointers in the ``PyArray_Descr``
-  structure you will have to modify your usage of those.  First,
-  the pointers are all under the member named ``f``.  So ``descr->cast`` 
-  is now ``descr->f->cast``.  In addition, the
-  casting functions have eliminated the strides argument (use
-  ``PyArray_CastTo`` if you need strided casting). All functions have
-  one or two ``PyArrayObject *`` arguments at the end.  This allows the
-  flexible arrays and mis-behaved arrays to be handled.
-
-* The ``descr->zero`` and ``descr->one`` constants have been replaced with
-  function calls, ``PyArray_Zero``, and ``PyArray_One`` (be sure to read the
-  code and free the resulting memory if you use these calls).
-
-* If you passed ``array->dimensions`` and ``array->strides`` around
-  to functions, you will need to fix some code. These are now
-  ``npy_intp*`` pointers. On 32-bit systems there won't be a problem.
-  However, on 64-bit systems, you will need to make changes to avoid
-  errors and segfaults.
-
-
-The header files ``arrayobject.h`` and ``ufuncobject.h`` contain many defines
-that you may find useful.  The files ``__ufunc_api.h`` and
-``__multiarray_api.h`` contain the available C-API function calls with
-their function signatures.
-
-All of these headers are installed to
-``<YOUR_PYTHON_LOCATION>/site-packages/numpy/core/include``
-
-
-Getting arrays in C-code
-=========================
-
-All new arrays can be created using ``PyArray_NewFromDescr``.  A simple interface
-equivalent to ``PyArray_FromDims`` is ``PyArray_SimpleNew(nd, dims, typenum)``
-and to ``PyArray_FromDimsAndData`` is
-``PyArray_SimpleNewFromData(nd, dims, typenum, data)``.
-
-``PyArray_NewFromDescr`` itself is a very flexible function.
-
-::
-
-  PyObject * PyArray_NewFromDescr(PyTypeObject *subtype, PyArray_Descr *descr,
-                                int nd, npy_intp *dims,
-                                npy_intp *strides, char *data,
-                                int flags, PyObject *obj);
-
-``subtype`` : ``PyTypeObject *``
-    The subtype that should be created (either pass in
-    ``&PyArray_Type``, ``&PyBigArray_Type``, or ``obj->ob_type``,
-    where ``obj`` is an instance of a subtype (or subclass) of
-    ``PyArray_Type`` or ``PyBigArray_Type``).
-
-``descr`` : ``PyArray_Descr *``
-    The type descriptor for the array. This is a Python object (this
-    function steals a reference to it). The easiest way to get one is
-    using ``PyArray_DescrFromType(<typenum>)``. If you want to use a
-    flexible size array, then you need to use
-    ``PyArray_DescrNewFromType(<flexible typenum>)`` and set its ``elsize``
-    parameter to the desired size. The typenum in both of these cases
-    is one of the ``PyArray_XXXX`` enumerated types.
-
-``nd`` : ``int``
-    The number of dimensions (<``MAX_DIMS``)
-
-``*dims`` : ``npy_intp *``
-    A pointer to the size in each dimension. Information will be
-    copied from here.
-
-``*strides`` : ``npy_intp *``
-    The strides this array should have. For new arrays created by this
-    routine, this should be ``NULL``. If you pass in memory for this array
-    to use, then you can pass in the strides information as well
-    (otherwise it will be created for you and default to C-contiguous
-    or Fortran contiguous). Any strides will be copied into the array
-    structure. Do not pass in bad strides information!!!!
-
-    ``PyArray_CheckStrides(...)`` can help but you must call it if you are
-    unsure. You cannot pass in strides information when data is ``NULL``
-    and this routine is creating its own memory.
-
-``*data`` : ``char *``
-    ``NULL`` for creating brand-new memory. If you want this array to wrap
-    another memory area, then pass the pointer here. You are
-    responsible for deleting the memory in that case, but do not do so
-    until the new array object has been deleted. The best way to
-    handle that is to get the memory from another Python object,
-    ``INCREF`` that Python object after passing its data pointer to this
-    routine, and set the ``->base`` member of the returned array to the
-    Python object. *You are responsible for* setting ``PyArray_BASE(ret)``
-    to the base object. Failure to do so will create a memory leak.
-
-    If you pass in a data buffer, the ``flags`` argument will be the flags
-    of the new array. If you create a new array, a non-zero flags
-    argument indicates that you want the array to be in Fortran order.
-
-``flags`` : ``int``
-    Either the flags showing how to interpret the data buffer passed
-    in, or if a new array is created, nonzero to indicate a Fortran
-    order array. See below for an explanation of the flags.
-
-``obj`` : ``PyObject *``
-    If ``subtype`` is ``&PyArray_Type`` or ``&PyBigArray_Type``, this argument is
-    ignored. Otherwise, the ``__array_finalize__`` method of the subtype
-    is called (if present) and passed this object. This is usually an
-    array of the type to be created (so the ``__array_finalize__`` method
-    must handle an array argument. But, it can be anything...)
-
-Note: The returned array object will be uninitialized unless the type is
-``PyArray_OBJECT`` in which case the memory will be set to ``NULL``.
-
-``PyArray_SimpleNew(nd, dims, typenum)`` is a drop-in replacement for
-``PyArray_FromDims`` (except it takes ``npy_intp*`` dims instead of ``int*`` dims
-which matters on 64-bit systems) and it does not initialize the memory
-to zero.
-
-``PyArray_SimpleNew`` is just a macro for ``PyArray_New`` with default arguments.
-Use ``PyArray_FILLWBYTE(arr, 0)``  to fill with zeros.
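-
-As a Python-level illustration of the same distinction (a rough analogy,
-not the C API itself): ``PyArray_SimpleNew`` behaves like ``np.empty``,
-whose contents are arbitrary, while following it with
-``PyArray_FILLWBYTE(arr, 0)`` corresponds to ``np.zeros``::
-
-  >>> import numpy as np
-  >>> a = np.empty(3)   # contents arbitrary, like PyArray_SimpleNew
-  >>> b = np.zeros(3)   # like PyArray_SimpleNew followed by PyArray_FILLWBYTE(arr, 0)
-  >>> b
-  array([ 0.,  0.,  0.])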
-
-The ``PyArray_FromDims`` and family of functions are still available and
-are loose wrappers around this function.  These functions still take
-``int *`` arguments.  This should be fine on 32-bit systems, but on 64-bit
-systems you may run into trouble if you frequently passed
-``PyArray_FromDims`` the dimensions member of the old ``PyArrayObject`` structure
-because ``sizeof(npy_intp) != sizeof(int)``.
-
-
-Getting an arrayobject from an arbitrary Python object
-======================================================
-
-``PyArray_FromAny(...)``
-
-This function replaces ``PyArray_ContiguousFromObject`` and friends (those
-function calls still remain but they are loose wrappers around the
-``PyArray_FromAny`` call).
-
-::
-
-  static PyObject *
-  PyArray_FromAny(PyObject *op, PyArray_Descr *dtype, int min_depth,
-  		  int max_depth, int requires, PyObject *context)
-
-
-``op`` : ``PyObject *``
-    The Python object to "convert" to an array object
-
-``dtype`` : ``PyArray_Descr *``
-    The desired data-type descriptor. This can be ``NULL``, if the
-    descriptor should be determined by the object. Unless ``FORCECAST`` is
-    present in ``flags``, this call will generate an error if the data
-    type cannot be safely obtained from the object.
-
-``min_depth`` : ``int``
-    The minimum depth of array needed, or 0 if it doesn't matter
-
-``max_depth`` : ``int``
-    The maximum depth of array allowed, or 0 if it doesn't matter
-
-``requires`` : ``int``
-    A flag indicating the "requirements" of the returned array. These
-    are the usual ndarray flags (see `NDArray flags`_ below). In
-    addition, there are three flags used only for the ``FromAny``
-    family of functions:
-
-      - ``ENSURECOPY``: always copy the array. Returned arrays always
-        have ``CONTIGUOUS``, ``ALIGNED``, and ``WRITEABLE`` set.
-      - ``ENSUREARRAY``: ensure the returned array is an ndarray (or a
-        bigndarray if ``op`` is one).
-      - ``FORCECAST``: cause a cast to occur regardless of whether or
-        not it is safe.
-
-``context`` : ``PyObject *``
-    If the Python object ``op`` is not a numpy array, but has an
-    ``__array__`` method, context is passed as the second argument to
-    that method (the first is the typecode). Almost always this
-    parameter is ``NULL``.
-
-
-``PyArray_ContiguousFromAny(op, typenum, min_depth, max_depth)`` is
-equivalent to ``PyArray_ContiguousFromObject(...)`` (which is still
-available), except it will return the subclass if op is already a
-subclass of the ndarray. The ``ContiguousFromObject`` version will
-always return an ndarray (or a bigndarray).
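-
-The Python-level counterparts of this distinction (an analogy only, not
-the C calls themselves) are ``np.asarray``, which always returns a
-base-class ndarray, and ``np.asanyarray``, which passes ndarray
-subclasses through unchanged::
-
-  >>> import numpy as np
-  >>> m = np.matrix([[1, 2], [3, 4]])
-  >>> type(np.asarray(m)).__name__     # subclass stripped
-  'ndarray'
-  >>> type(np.asanyarray(m)).__name__  # subclass preserved
-  'matrix'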
-
-Passing Data Type information to C-code
-=======================================
-
-All datatypes are handled using the ``PyArray_Descr *`` structure.
-This structure can be obtained from a Python object using
-``PyArray_DescrConverter`` and ``PyArray_DescrConverter2``.  The former
-returns the default ``PyArray_LONG`` descriptor when the input object
-is None, while the latter returns ``NULL`` when the input object is ``None``.
-
-See the ``arraymethods.c`` and ``multiarraymodule.c`` files for many
-examples of usage.
-
-Getting at the structure of the array.
---------------------------------------
-
-You should use the ``#defines`` provided to access array structure portions:
-
-- ``PyArray_DATA(obj)`` : returns a ``void *`` to the array data
-- ``PyArray_BYTES(obj)`` : return a ``char *`` to the array data
-- ``PyArray_ITEMSIZE(obj)``
-- ``PyArray_NDIM(obj)``
-- ``PyArray_DIMS(obj)``
-- ``PyArray_DIM(obj, n)``
-- ``PyArray_STRIDES(obj)``
-- ``PyArray_STRIDE(obj,n)``
-- ``PyArray_DESCR(obj)``
-- ``PyArray_BASE(obj)``
-
-See ``arrayobject.h`` for more.
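-
-Most of these macros have direct Python-level counterparts, which can be
-a handy reminder of what each one returns (an analogy, not the C API)::
-
-  >>> import numpy as np
-  >>> a = np.zeros((2, 3))
-  >>> a.ndim, a.shape, a.strides    # PyArray_NDIM, PyArray_DIMS, PyArray_STRIDES
-  (2, (2, 3), (24, 8))
-  >>> a.itemsize, a.dtype           # PyArray_ITEMSIZE, PyArray_DESCR
-  (8, dtype('float64'))
-  >>> a.base is None                # PyArray_BASE (NULL for a freshly created array)
-  True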
-
-
-NDArray Flags
-=============
-
-The ``flags`` attribute of the ``PyArrayObject`` structure contains important
-information about the memory used by the array (pointed to by the data member)
-This flags information must be kept accurate or strange results and even
-segfaults may result.
-
-There are 6 (binary) flags that describe the memory area used by the
-data buffer.  These constants are defined in ``arrayobject.h`` and
-determine the bit-position of the flag.  Python exposes a nice attribute-
-based interface as well as a dictionary-like interface for getting 
-(and, if appropriate, setting) these flags.
-
-Memory areas of all kinds can be pointed to by an ndarray, necessitating
-these flags.  If you get an arbitrary ``PyArrayObject`` in C-code,
-you need to be aware of the flags that are set.
-If you need to guarantee a certain kind of array
-(like ``NPY_CONTIGUOUS`` and ``NPY_BEHAVED``), then pass these requirements into the
-PyArray_FromAny function.
-
-
-``NPY_CONTIGUOUS``
-    True if the array is (C-style) contiguous in memory.
-``NPY_FORTRAN``
-    True if the array is (Fortran-style) contiguous in memory.
-
-Notice that contiguous 1-d arrays are always both ``NPY_FORTRAN`` contiguous 
-and C contiguous. Both of these flags can be checked; they are convenience
-flags only, since whether an array is ``NPY_CONTIGUOUS`` or ``NPY_FORTRAN``
-can be determined from the ``strides``, ``dimensions``, and ``itemsize``
-attributes.
-
-``NPY_OWNDATA``
-    True if the array owns the memory (it will try and free it using
-    ``PyDataMem_FREE()`` on deallocation --- so it better really own it).
-
-The following three flags facilitate using a data pointer that points into a
-memory-mapped array, or into part of some larger record array.  But they may
-have other uses...
-
-``NPY_ALIGNED``
-    True if the data buffer is aligned for the type and the strides
-    are multiples of the alignment factor as well.  This can be
-    checked.
-
-``NPY_WRITEABLE``
-    True only if the data buffer can be "written" to.
-
-``NPY_UPDATEIFCOPY``
-    This is a special flag that is set if this array represents a copy
-    made because a user required certain flags in ``PyArray_FromAny`` and
-    a copy had to be made of some other array (and the user asked for
-    this flag to be set in such a situation). The base attribute then
-    points to the "misbehaved" array (which is set read_only). When
-    the array with this flag set is deallocated, it will copy its
-    contents back to the "misbehaved" array (casting if necessary) and
-    will reset the "misbehaved" array to ``WRITEABLE``. If the
-    "misbehaved" array was not ``WRITEABLE`` to begin with then
-    ``PyArray_FromAny`` would have returned an error because ``UPDATEIFCOPY``
-    would not have been possible.
-
-
-``PyArray_UpdateFlags(obj, flags)`` will update the ``obj->flags`` for
-``flags`` which can be any of ``NPY_CONTIGUOUS``, ``NPY_FORTRAN``, ``NPY_ALIGNED``, or
-``NPY_WRITEABLE``.
-
-Some useful combinations of these flags:
-
-- ``NPY_BEHAVED = NPY_ALIGNED | NPY_WRITEABLE``
-- ``NPY_CARRAY = NPY_DEFAULT = NPY_CONTIGUOUS | NPY_BEHAVED``
-- ``NPY_CARRAY_RO = NPY_CONTIGUOUS | NPY_ALIGNED``
-- ``NPY_FARRAY = NPY_FORTRAN | NPY_BEHAVED``
-- ``NPY_FARRAY_RO = NPY_FORTRAN | NPY_ALIGNED``
-
-The macro ``PyArray_CHECKFLAGS(obj, flags)``  can test any combination of flags.
-There are several default combinations defined as macros already
-(see ``arrayobject.h``)
-
-In particular, there are ``ISBEHAVED``, ``ISBEHAVED_RO``, ``ISCARRAY``
-and ``ISFARRAY`` macros that also check to make sure the array is in
-native byte order (as determined by the data-type descriptor).
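-
-The effect of the most common combinations can be observed from Python
-through the ``flags`` attribute (shown here only as an illustration of
-the semantics above)::
-
-  >>> import numpy as np
-  >>> a = np.arange(6).reshape(2, 3)          # a well-behaved C-order array
-  >>> a.flags['C_CONTIGUOUS'], a.flags['ALIGNED'], a.flags['WRITEABLE']
-  (True, True, True)
-  >>> f = np.asfortranarray(a)                # Fortran-order copy
-  >>> f.flags['F_CONTIGUOUS'], f.flags['C_CONTIGUOUS']
-  (True, False)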
-
-There are more C-API enhancements which you can discover in the code,
-or buy the book (http://www.trelgol.com)

Deleted: trunk/numpy/doc/DISTUTILS.txt
===================================================================
--- trunk/numpy/doc/DISTUTILS.txt	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/DISTUTILS.txt	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,546 +0,0 @@
-.. -*- rest -*-
-
-NumPy Distutils - Users Guide
-=============================
-
-:Author: Pearu Peterson <pearu@cens.ioc.ee>
-:Discussions to: scipy-dev@scipy.org
-:Created: October 2005
-:Revision: $LastChangedRevision$
-:SVN source: $HeadURL$
-
-.. contents::
-
-SciPy structure
-'''''''''''''''
-
-Currently the SciPy project consists of two packages:
-
-- NumPy (previously called SciPy core) --- it provides packages like:
-
-  + numpy.distutils - extension to Python distutils
-  + numpy.f2py - a tool to bind Fortran/C codes to Python
-  + numpy.core - future replacement of Numeric and numarray packages
-  + numpy.lib - extra utility functions
-  + numpy.testing - numpy-style tools for unit testing
-  + etc
-
-- SciPy --- a collection of scientific tools for Python.
-
-The aim of this document is to describe how to add new tools to SciPy.
-
-
-Requirements for SciPy packages
-'''''''''''''''''''''''''''''''
-
-SciPy consists of Python packages, called SciPy packages, that are
-available to Python users via the ``scipy`` namespace. Each SciPy package
-may contain other SciPy packages. And so on. Therefore, the SciPy 
-directory tree is a tree of packages with arbitrary depth and width. 
-Any SciPy package may depend on NumPy packages but the dependence on other
-SciPy packages should be kept minimal or zero.
-
-A SciPy package contains, in addition to its sources, the following
-files and directories:
-
-  + ``setup.py`` --- building script
-  + ``info.py``  --- contains documentation and import flags
-  + ``__init__.py`` --- package initializer
-  + ``tests/`` --- directory of unittests
-
-Their contents are described below.
-
-The ``setup.py`` file
-'''''''''''''''''''''
-
-In order to add a Python package to SciPy, its build script (``setup.py``) 
-must meet certain requirements. The most important requirement is that the 
-package define a ``configuration(parent_package='',top_path=None)`` function 
-which returns a dictionary suitable for passing to 
-``numpy.distutils.core.setup(..)``. To simplify the construction of 
-this dictionary, ``numpy.distutils.misc_util`` provides the 
-``Configuration`` class, described below.
-
-SciPy pure Python package example
----------------------------------
-
-Below is an example of a minimal ``setup.py`` file for a pure Scipy package::
-
-  #!/usr/bin/env python
-  def configuration(parent_package='',top_path=None):
-      from numpy.distutils.misc_util import Configuration
-      config = Configuration('mypackage',parent_package,top_path)
-      return config
-
-  if __name__ == "__main__":
-      from numpy.distutils.core import setup
-      #setup(**configuration(top_path='').todict())
-      setup(configuration=configuration)
-
-The arguments of the ``configuration`` function specify the name of
-the parent SciPy package (``parent_package``) and the directory location
-of the main ``setup.py`` script (``top_path``).  These arguments, 
-along with the name of the current package, should be passed to the
-``Configuration`` constructor.
-
-The ``Configuration`` constructor has a fourth optional argument,
-``package_path``, that can be used when package files are located in
-a different location than the directory of the ``setup.py`` file. 
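-
-For example, a package whose Python sources live in a ``lib/``
-subdirectory next to ``setup.py`` might be configured as follows (the
-directory name is only illustrative)::
-
-  config = Configuration('mypackage', parent_package, top_path,
-                         package_path='lib')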
-
-Remaining ``Configuration`` arguments are all keyword arguments that will
-be used to initialize attributes of ``Configuration``
-instance. Usually, these keywords are the same as the ones that
-``setup(..)`` function would expect, for example, ``packages``,
-``ext_modules``, ``data_files``, ``include_dirs``, ``libraries``,
-``headers``, ``scripts``, ``package_dir``, etc.  However, the direct
-specification of these keywords is not recommended, as the content of
-these keyword arguments will not be processed or checked for
-consistency with the SciPy build system.
-
-Finally, ``Configuration`` has a ``.todict()`` method that returns all
-the configuration data as a dictionary suitable for passing on to the
-``setup(..)`` function.
-
-``Configuration`` instance attributes
--------------------------------------
-
-In addition to attributes that can be specified via keyword arguments
-to the ``Configuration`` constructor, a ``Configuration`` instance (let us
-denote it as ``config``) has the following attributes that can be useful
-in writing setup scripts:
-
-+ ``config.name`` - full name of the current package. The names of parent
-  packages can be extracted as ``config.name.split('.')``.
-
-+ ``config.local_path`` - path to the location of current ``setup.py`` file.
-
-+ ``config.top_path`` - path to the location of main ``setup.py`` file.
-
-``Configuration`` instance methods
-----------------------------------
-
-+ ``config.todict()`` --- returns configuration dictionary suitable for
-  passing to ``numpy.distutils.core.setup(..)`` function.
-
-+ ``config.paths(*paths)`` --- applies ``glob.glob(..)`` to items of
-  ``paths`` if necessary. Fixes ``paths`` items that are relative to
-  ``config.local_path``.
-
-+ ``config.get_subpackage(subpackage_name,subpackage_path=None)`` ---
-  returns a list of subpackage configurations. The subpackage is looked
-  for in the current directory under the name ``subpackage_name``, but the
-  path can also be specified via the optional ``subpackage_path`` argument.
-  If ``subpackage_name`` is specified as ``None`` then the subpackage
-  name will be taken as the basename of ``subpackage_path``.
-  Any ``*`` used in subpackage names is expanded as a wildcard.
-
-+ ``config.add_subpackage(subpackage_name,subpackage_path=None)`` ---
-  add SciPy subpackage configuration to the current one. The meaning
-  and usage of arguments is explained above, see
-  ``config.get_subpackage()`` method.
-
-+ ``config.add_data_files(*files)`` --- prepend ``files`` to ``data_files``
-  list. If ``files`` item is a tuple then its first element defines
-  the suffix of where data files are copied relative to package installation
-  directory and the second element specifies the path to data
-  files. By default data files are copied under package installation
-  directory. For example,
-
-  ::
-
-    config.add_data_files('foo.dat',
-	                  ('fun',['gun.dat','nun/pun.dat','/tmp/sun.dat']),
-                          'bar/car.dat',
-                          '/full/path/to/can.dat',
-                          )
-
-  will install data files to the following locations
-
-  ::
-
-    <installation path of config.name package>/
-      foo.dat
-      fun/
-        gun.dat
-        pun.dat
-        sun.dat
-      bar/
-        car.dat
-      can.dat 
-
-  The path to data files can be a function taking no arguments and
-  returning path(s) to data files -- this is useful when data files
-  are generated while building the package. (XXX: explain exactly at
-  which step such functions are called.)
-
-+ ``config.add_data_dir(data_path)`` --- add directory ``data_path``
-  recursively to ``data_files``. The whole directory tree starting at
-  ``data_path`` will be copied under package installation directory.
-  If ``data_path`` is a tuple then its first element defines
-  the suffix of where data files are copied relative to package installation
-  directory and the second element specifies the path to data directory.
-  By default, the data directory is copied under the package installation
-  directory under the basename of ``data_path``. For example,
- 
-  ::
-
-    config.add_data_dir('fun')  # fun/ contains foo.dat bar/car.dat
-    config.add_data_dir(('sun','fun'))
-    config.add_data_dir(('gun','/full/path/to/fun'))
-
-  will install data files to the following locations 
-
-  ::
-
-    <installation path of config.name package>/
-      fun/
-         foo.dat
-         bar/
-            car.dat
-      sun/
-         foo.dat
-         bar/
-            car.dat
-      gun/
-         foo.dat
-         bar/
-            car.dat
-
-+ ``config.add_include_dirs(*paths)`` --- prepend ``paths`` to
-  ``include_dirs`` list. This list will be visible to all extension
-  modules of the current package.
-
-+ ``config.add_headers(*files)`` --- prepend ``files`` to ``headers``
-  list. By default, headers will be installed under 
-  ``<prefix>/include/pythonX.X/<config.name.replace('.','/')>/``
-  directory. If a ``files`` item is a tuple then its first element
-  specifies the installation suffix relative to
-  ``<prefix>/include/pythonX.X/`` path.  This is a Python distutils
-  method; its use is discouraged for NumPy and SciPy in favour of
-  ``config.add_data_files(*files)``.
-
-+ ``config.add_scripts(*files)`` --- prepend ``files`` to ``scripts``
-  list. Scripts will be installed under ``<prefix>/bin/`` directory.
-
-+ ``config.add_extension(name,sources,*kw)`` --- create and add an
-  ``Extension`` instance to ``ext_modules`` list. The first argument 
-  ``name`` defines the name of the extension module that will be
-  installed under ``config.name`` package. The second argument is
-  a list of sources. ``add_extension`` method takes also keyword
-  arguments that are passed on to the ``Extension`` constructor.
-  The list of allowed keywords is the following: ``include_dirs``,
-  ``define_macros``, ``undef_macros``, ``library_dirs``, ``libraries``,
-  ``runtime_library_dirs``, ``extra_objects``, ``extra_compile_args``,
-  ``extra_link_args``, ``export_symbols``, ``swig_opts``, ``depends``,
-  ``language``, ``f2py_options``, ``module_dirs``, ``extra_info``.
-
-  Note that the ``config.paths`` method is applied to all lists that
-  may contain paths. ``extra_info`` is a dictionary or a list
-  of dictionaries whose content will be appended to the keyword arguments.
-  The list ``depends`` contains paths to files or directories
-  that the sources of the extension module depend on. If any path
-  in the ``depends`` list is newer than the extension module, then
-  the module will be rebuilt.
-
-  The list of sources may contain functions ('source generators')
-  with a pattern ``def <funcname>(ext, build_dir): return
-  <source(s) or None>``. If ``funcname`` returns ``None``, no sources
-  are generated. And if the ``Extension`` instance has no sources
-  after processing all source generators, no extension module will
-  be built. This is the recommended way to conditionally define
-  extension modules. Source generator functions are called by the
-  ``build_src`` command of ``numpy.distutils``.
-
-  For example, here is a typical source generator function::
-
-    def generate_source(ext,build_dir):
-        import os
-        from distutils.dep_util import newer
-        target = os.path.join(build_dir,'somesource.c')
-        if newer(target,__file__):
-            # create target file
-        return target
-
-  The first argument contains the Extension instance that can be
-  useful to access its attributes like ``depends``, ``sources``,
-  etc. lists and modify them during the building process.
-  The second argument gives a path to a build directory that must
-  be used when writing files to disk.
-
-+ ``config.add_library(name, sources, **build_info)`` --- add
-  a library to the ``libraries`` list. Allowed keyword arguments
-  are ``depends``, ``macros``, ``include_dirs``,
-  ``extra_compiler_args``, ``f2py_options``. See the ``.add_extension()``
-  method for more information on arguments (a usage sketch follows this
-  list).
-
-+ ``config.have_f77c()`` --- return True if Fortran 77 compiler is
-  available (read: a simple Fortran 77 code compiled successfully).
-
-+ ``config.have_f90c()`` --- return True if Fortran 90 compiler is
-  available (read: a simple Fortran 90 code compiled successfully).
-
-+ ``config.get_version()`` --- return version string of the current package,
-  ``None`` if version information could not be detected. This methods
-  scans files ``__version__.py``, ``<packagename>_version.py``,
-  ``version.py``, ``__svn_version__.py`` for string variables
-  ``version``, ``__version__``, ``<packagename>_version``.
-
-+ ``config.make_svn_version_py()`` --- appends a data function to the
-  ``data_files`` list that will generate the ``__svn_version__.py`` file
-  in the current package directory. The file will be removed from
-  the source directory when Python exits.
-
-+ ``config.get_build_temp_dir()`` --- return a path to a temporary
-  directory. This is the place where one should build temporary
-  files.
-
-+ ``config.get_distribution()`` --- return distutils ``Distribution``
-  instance.
-
-+ ``config.get_config_cmd()`` --- returns ``numpy.distutils`` config
-  command instance.
-
-+ ``config.get_info(*names)`` ---
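-
-As a small sketch of how ``add_library`` and ``add_extension`` fit
-together (all file and module names here are purely illustrative), a
-Fortran helper library can be built once and then linked into an
-extension module::
-
-  def configuration(parent_package='', top_path=None):
-      from numpy.distutils.misc_util import Configuration
-      config = Configuration('mypackage', parent_package, top_path)
-      # compile the Fortran sources into a helper library (illustrative names)
-      config.add_library('fsolve', sources=['src/fsolve.f'])
-      # ... and link it into a C extension module
-      config.add_extension('_solver',
-                           sources=['src/_solvermodule.c'],
-                           libraries=['fsolve'])
-      return config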
-
-Template files
---------------
-
-XXX: Describe how files with extensions ``.f.src``, ``.pyf.src``,
-``.c.src``, etc. are pre-processed by the ``build_src`` command.
-
-Useful functions in ``numpy.distutils.misc_util``
--------------------------------------------------
-
-+ ``get_numpy_include_dirs()`` --- return a list of NumPy base
-  include directories. NumPy base include directories contain
-  header files such as ``numpy/arrayobject.h``, ``numpy/funcobject.h``
-  etc. For an installed NumPy the returned list has length 1,
-  but when building NumPy the list may contain more directories,
-  for example, a path to the ``config.h`` file that
-  ``numpy/base/setup.py`` generates and that is used by ``numpy``
-  header files (a usage sketch follows this list).
-
-+ ``append_path(prefix,path)`` --- smart append ``path`` to ``prefix``.
-
-+ ``gpaths(paths, local_path='')`` --- apply glob to paths and prepend
-  ``local_path`` if needed.
-
-+ ``njoin(*path)`` --- join pathname components + convert ``/``-separated path
-  to ``os.sep``-separated path and resolve ``..``, ``.`` from paths.
-  Ex. ``njoin('a',['b','./c'],'..','g') -> os.path.join('a','b','g')``.
-
-+ ``minrelpath(path)`` --- resolves dots in ``path``.
-
-+ ``rel_path(path, parent_path)`` --- return ``path`` relative to ``parent_path``.
-
-+ ``def get_cmd(cmdname,_cache={})`` --- returns ``numpy.distutils``
-  command instance.
-
-+ ``all_strings(lst)``
-
-+ ``has_f_sources(sources)``
-
-+ ``has_cxx_sources(sources)``
-
-+ ``filter_sources(sources)`` --- return ``c_sources, cxx_sources,
-  f_sources, fmodule_sources``
-
-+ ``get_dependencies(sources)``
-
-+ ``is_local_src_dir(directory)``
-
-+ ``get_ext_source_files(ext)``
-
-+ ``get_script_files(scripts)``
-
-+ ``get_lib_source_files(lib)``
-
-+ ``get_data_files(data)``
-
-+ ``dot_join(*args)`` --- join non-zero arguments with a dot.
-
-+ ``get_frame(level=0)`` --- return frame object from call stack with given level.
-
-+ ``cyg2win32(path)``
-
-+ ``mingw32()`` --- return ``True`` when using mingw32 environment.
-
-+ ``terminal_has_colors()``, ``red_text(s)``, ``green_text(s)``,
-  ``yellow_text(s)``, ``blue_text(s)``, ``cyan_text(s)``
-
-+ ``get_path(mod_name,parent_path=None)`` --- return path of a module
-  relative to parent_path when given. Handles also ``__main__`` and
-  ``__builtin__`` modules.
-
-+ ``allpath(name)`` --- replaces ``/`` with ``os.sep`` in ``name``.
-
-+ ``cxx_ext_match``, ``fortran_ext_match``, ``f90_ext_match``,
-  ``f90_module_name_match``
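-
-One way to use ``get_numpy_include_dirs`` (the package and file names
-below are illustrative only) is to make the NumPy headers visible to a
-hand-written C extension::
-
-  from numpy.distutils.misc_util import Configuration, get_numpy_include_dirs
-
-  def configuration(parent_package='', top_path=None):
-      config = Configuration('mypackage', parent_package, top_path)
-      # the C sources include numpy/arrayobject.h, so add the NumPy headers
-      config.add_extension('_cmodule',
-                           sources=['src/_cmodule.c'],
-                           include_dirs=get_numpy_include_dirs())
-      return config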
-
-``numpy.distutils.system_info`` module
---------------------------------------
-
-+ ``get_info(name,notfound_action=0)``
-+ ``combine_paths(*args,**kws)``
-+ ``show_all()``
-
-``numpy.distutils.cpuinfo`` module
-----------------------------------
-
-+ ``cpuinfo``
-
-``numpy.distutils.log`` module
-------------------------------
-
-+ ``set_verbosity(v)``
-
-
-``numpy.distutils.exec_command`` module
----------------------------------------
-
-+ ``get_pythonexe()``
-+ ``find_executable(exe, path=None)``
-+ ``exec_command( command, execute_in='', use_shell=None, use_tee=None, **env )``
-
-The ``info.py`` file
-''''''''''''''''''''
-
-Scipy package import hooks assume that each package contains an
-``info.py`` file.  This file contains overall documentation about the package
-and variables defining the order of package imports, dependency
-relations between packages, etc.
-
-On import, the following information will be looked for in ``info.py``
-(a minimal example follows the list):
-
-__doc__
-  The documentation string of the package.
-
-__doc_title__
-  The title of the package. If not defined then the first non-empty 
-  line of ``__doc__`` will be used.
-
-__all__
-  List of symbols that package exports. Optional.
-
-global_symbols
-  List of names that should be imported into the numpy namespace. To import
-  all symbols into the ``numpy`` namespace, define ``global_symbols=['*']``.
-
-depends
-  List of names that the package depends on. Prefix ``numpy.``
-  will be automatically added to package names. For example,
-  use ``testing`` to indicate dependence on ``numpy.testing``
-  package. Default value is ``[]``.
-
-postpone_import
-  Boolean variable indicating that importing the package should be
-  postponed until the first attempt to use it. Default value is ``False``.
-  Deprecated.
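-
-A minimal ``info.py`` might therefore look like this (the docstring,
-dependencies, and symbol names are only an example)::
-
-  """
-  Discrete Fourier Transform algorithms
-  =====================================
-  """
-  depends = ['core', 'lib']
-  global_symbols = []
-  postpone_import = True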
-
-The ``__init__.py`` file
-''''''''''''''''''''''''
-
-To speed up the import time and minimize memory usage, numpy
-uses ``ppimport`` hooks to transparently postpone importing large modules,
-which might not be used during the Scipy session. In order to
-have access to the documentation of all Scipy packages, including 
-postponed packages, the docstring from ``info.py`` is imported
-into ``__init__.py``.
-
-The header of a typical ``__init__.py`` is::
-
-  #
-  # Package ... - ...
-  #
-
-  from info import __doc__
-  ...
-
-  from numpy.testing import NumpyTest
-  test = NumpyTest().test
-
-The ``tests/`` directory
-''''''''''''''''''''''''
-
-Ideally, every Python module, extension module, or subpackage in a SciPy
-package directory should have a corresponding ``test_<name>.py``
-file in the ``tests/`` directory.  This file should define classes
-derived from the ``numpy.testing.TestCase`` class (or from 
-``unittest.TestCase``) and have names starting with ``test``. The methods
-of these classes whose names contain ``test`` or start with ``bench`` are 
-automatically picked up by the test machinery. 
-
-A minimal example of a ``test_yyy.py`` file that implements tests for
-a NumPy package module ``numpy.xxx.yyy`` containing a function
-``zzz()``, is shown below::
-
-  import sys
-  from numpy.testing import *
-
-  # import xxx symbols
-  from numpy.xxx.yyy import zzz
-
-
-  class test_zzz(TestCase):
-      def test_simple(self, level=1):
-          assert zzz()=='Hello from zzz'
-      #...
-
-  if __name__ == "__main__":
-      run_module_tests(file)
-
-Note that all classes inherited from the ``TestCase`` class are
-automatically picked up by the test runner.
-
-The ``numpy.testing`` module also provides the following convenience
-functions::
-
-  assert_equal(actual,desired,err_msg='',verbose=1)
-  assert_almost_equal(actual,desired,decimal=7,err_msg='',verbose=1)
-  assert_approx_equal(actual,desired,significant=7,err_msg='',verbose=1)
-  assert_array_equal(x,y,err_msg='')
-  assert_array_almost_equal(x,y,decimal=6,err_msg='')
-  rand(*shape) # returns random array with a given shape
-
-To run all test scripts of the module ``xxx``, execute in Python:
-
-  >>> import numpy
-  >>> numpy.xxx.test()
-
-To run only tests for ``xxx.yyy`` module, execute:
-
-  >>> NumpyTest('xxx.yyy').test(level=1,verbosity=1)
-
-Extra features in NumPy Distutils
-'''''''''''''''''''''''''''''''''
-
-Specifying config_fc options for libraries in a setup.py script
-----------------------------------------------------------------
-
-It is possible to specify config_fc options in setup.py scripts.
-For example, using
-
-  config.add_library('library',
-                     sources=[...],
-                     config_fc={'noopt':(__file__,1)})
-
-will compile the ``library`` sources without optimization flags.
-
-It is recommended to specify in this way only those config_fc options
-that are compiler independent.
-
-Getting extra Fortran 77 compiler options from source 
------------------------------------------------------
-
-Some old Fortran codes need special compiler options in order to
-work correctly. In order to specify compiler options per source
-file, the ``numpy.distutils`` Fortran compiler looks for the following
-pattern::
-
-  CF77FLAGS(<fcompiler type>) = <fcompiler f77flags>
-
-in the first 20 lines of the source and uses the ``f77flags`` for the
-specified fcompiler type (the first character ``C`` is optional).
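-
-For example, to pass an extra flag to the GNU Fortran 77 compiler only
-(both the compiler name and the flag are illustrative), the first lines
-of the source file could contain::
-
-  C CF77FLAGS(gnu) = -fno-automatic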
-
-TODO: This feature can be easily extended for Fortran 90 codes as
-well. Let us know if you would need such a feature.

Deleted: trunk/numpy/doc/EXAMPLE_DOCSTRING.txt
===================================================================
--- trunk/numpy/doc/EXAMPLE_DOCSTRING.txt	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/EXAMPLE_DOCSTRING.txt	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,104 +0,0 @@
-.. Here follows an example docstring for a C-function.  Note that the
-   signature is given.  This is done only for functions written in C,
-   since Python cannot find their signature by inspection.  For all
-   other functions, start with the one line description.
-
-
-multivariate_normal(mean, cov[, shape])
-
-Draw samples from a multivariate normal distribution.
-
-The multivariate normal, multinormal or Gaussian distribution is a
-generalisation of the one-dimensional normal distribution to higher
-dimensions.
-
-Such a distribution is specified by its mean and covariance matrix,
-which are analogous to the mean (average or "centre") and variance
-(standard deviation squared or "width") of the one-dimensional normal
-distribution.
-
-Parameters
-----------
-mean : (N,) ndarray
-    Mean of the N-dimensional distribution.
-cov : (N,N) ndarray
-    Covariance matrix of the distribution.
-shape : tuple of ints, optional
-    Given a shape of, for example, (m,n,k), m*n*k samples are
-    generated, and packed in an m-by-n-by-k arrangement.  Because each
-    sample is N-dimensional, the output shape is (m,n,k,N).  If no
-    shape is specified, a single sample is returned.
-
-Returns
--------
-out : ndarray
-    The drawn samples, arranged according to `shape`.  If the
-    shape given is (m,n,...), then the shape of `out` is
-    (m,n,...,N).
-
-    In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
-    value drawn from the distribution.
-
-See Also
---------
-normal
-scipy.stats.distributions.norm : Provides random variates, as well as
-                                 probability density function, cumulative
-                                 density function, etc.
-
-Notes
------
-The mean is a coordinate in N-dimensional space, which represents the
-location where samples are most likely to be generated.  This is
-analogous to the peak of the bell curve for the one-dimensional or
-univariate normal distribution.
-
-Covariance indicates the level to which two variables vary together.
-From the multivariate normal distribution, we draw N-dimensional
-samples, :math:`X = [x_1, x_2, ... x_N]`.  The covariance matrix
-element :math:`C_{ij}` is the covariance of :math:`x_i` and :math:`x_j`.
-The element :math:`C_{ii}` is the variance of :math:`x_i` (i.e. its
-"spread").
-
-Instead of specifying the full covariance matrix, popular
-approximations include:
-
-  - Spherical covariance (`cov` is a multiple of the identity matrix)
-  - Diagonal covariance (`cov` has non-negative elements, and only on
-    the diagonal)
-
-This geometrical property can be seen in two dimensions by plotting
-generated data-points:
-
->>> mean = [0,0]
->>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis
->>> x,y = np.random.multivariate_normal(mean,cov,5000).T
-
->>> import matplotlib.pyplot as plt
->>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show()
-
-Note that the covariance matrix must be non-negative definite.
-
-References
-----------
-.. [1] A. Papoulis, "Probability, Random Variables, and Stochastic
-       Processes," 3rd ed., McGraw-Hill Companies, 1991
-.. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification,"
-       2nd ed., Wiley, 2001.
-
-Examples
---------
->>> mean = (1,2)
->>> cov = [[1,0],[1,0]]
->>> x = np.random.multivariate_normal(mean,cov,(3,3))
->>> x.shape
-(3, 3, 2)
-
-The following is probably true, given that 0.6 is roughly twice the
-standard deviation:
-
->>> print list( (x[0,0,:] - mean) < 0.6 )
-[True, True]
-
-.. index:
-   :refguide: random:distributions

Deleted: trunk/numpy/doc/HOWTO_BUILD_DOCS.txt
===================================================================
--- trunk/numpy/doc/HOWTO_BUILD_DOCS.txt	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/HOWTO_BUILD_DOCS.txt	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,71 +0,0 @@
-=========================================
-Building the NumPy API and reference docs
-=========================================
-
-Using Sphinx_
--------------
-`Download <https://code.launchpad.net/~stefanv/scipy/numpy-refguide>`_
-the builder.  Follow the instructions in ``README.txt``.
-
-
-Using Epydoc_
--------------
-
-Currently, we recommend that you build epydoc from the trunk::
-
-  svn co https://epydoc.svn.sf.net/svnroot/epydoc/trunk/epydoc epydoc
-  cd epydoc/src
-  sudo python setup.py install
-
-The appearance of some elements can be changed in the epydoc.css
-style sheet.
-
-Emphasized text appearance can be controlled by the definition of the <em>
-tag. For instance, to make them bold, insert::
-
-  em     {font-weight: bold;}
-
-The variables' types are in a span of class rst-classifier, hence can be
-changed by inserting something like::
-
-  span.rst-classifier     {font-weight: normal;}
-
-The first line of the docstring should **not** copy the signature unless
-the function is written in C, in which case it is mandatory.  If the function
-signature is generic (uses ``*args`` or ``**kwds``), then a function signature
-may be included.
-
-Use ``optional`` in the "type" field for parameters that are non-keyword
-optional for C-functions.
-
-Epydoc depends on Docutils for reStructuredText parsing.  You can
-download Docutils from the `Docutils sourceforge
-page <http://docutils.sourceforge.net/>`_.  The version in SVN is
-broken, so use 0.4 or the patched version from Debian.  You may also
-be able to use a package manager like yum to install it::
-
-  $ sudo yum install python-docutils
-
-
-Example
--------
-Here is a short example module,
-`plain text <http://svn.scipy.org/svn/numpy/trunk/numpy/doc/example.py>`_
-or
-`rendered <http://www.scipy.org/doc/example>`_ in HTML.
-
-To try this yourself, simply download the example.py::
-
-  svn co http://svn.scipy.org/svn/numpy/trunk/numpy/doc/example.py .
-
-Then, run epydoc::
-
-  $ epydoc --docformat=restructuredtext example.py
-
-The output is placed in ``./html``, and may be viewed by loading the
-``index.html`` file into your browser.
-
-
-
-.. _epydoc: http://epydoc.sourceforge.net/
-.. _sphinx: http://sphinx.pocoo.org

Deleted: trunk/numpy/doc/HOWTO_DOCUMENT.txt
===================================================================
--- trunk/numpy/doc/HOWTO_DOCUMENT.txt	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/HOWTO_DOCUMENT.txt	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,430 +0,0 @@
-====================================
-A Guide to NumPy/SciPy Documentation
-====================================
-
-.. Contents::
-
-.. Note::
-
-   For an accompanying example, see `example.py
-   <http://svn.scipy.org/svn/numpy/trunk/numpy/doc/example.py>`_.
-
-Overview
---------
-In general, we follow the standard Python style conventions as described here:
- * `Style Guide for C Code <http://www.python.org/peps/pep-0007.html>`_
- * `Style Guide for Python Code <http://www.python.org/peps/pep-0008.html>`_
- * `Docstring Conventions <http://www.python.org/peps/pep-0257.html>`_
-
-Additional PEPs of interest regarding documentation of code:
- * `Docstring Processing Framework <http://www.python.org/peps/pep-0256.html>`_
- * `Docutils Design Specification <http://www.python.org/peps/pep-0258.html>`_
-
-Use a code checker:
- * `pylint <http://www.logilab.org/857>`_
- * ``pyflakes`` --- install with ``easy_install pyflakes``
- * `pep8.py <http://svn.browsershots.org/trunk/devtools/pep8/pep8.py>`_
-
-The following import conventions are used throughout the NumPy source
-and documentation::
-
-   import numpy as np
-   import scipy as sp
-   import matplotlib as mpl
-   import matplotlib.pyplot as plt
-
-It is not necessary to do ``import numpy as np`` at the beginning of
-an example.  However, some sub-modules, such as ``fft``, are not
-imported by default, and you have to include them explicitly::
-
-  import numpy.fft
-
-after which you may use it::
-
-  np.fft.fft2(...)
-
-Docstring Standard
-------------------
-A documentation string (docstring) is a string that describes a module,
-function, class, or method definition.  The docstring is a special attribute
-of the object (``object.__doc__``) and, for consistency, is surrounded by
-triple double quotes, i.e.::
-
-   """This is the form of a docstring.
-
-   It can be spread over several lines.
-
-   """
-
-NumPy, SciPy_, and the scikits follow a common convention for
-docstrings that provides for consistency, while also allowing our
-toolchain to produce well-formatted reference guides.  This document
-describes the current community consensus for such a standard.  If you
-have suggestions for improvements, post them on the `numpy-discussion
-list`_, together with the epydoc output.
-
-Our docstring standard uses `re-structured text (reST)
-<http://docutils.sourceforge.net/rst.html>`_ syntax and is rendered
-using tools like epydoc_ or sphinx_ (pre-processors that understand
-the particular documentation style we are using).  While a rich set of
-markup is available, we limit ourselves to a very basic subset, in
-order to provide docstrings that are easy to read on text-only
-terminals.
-
-A guiding principle is that human readers of the text are given
-precedence over contorting docstrings so our tools produce nice
-output.  Rather than sacrificing the readability of the docstrings, we
-have written pre-processors to assist tools like epydoc_ and sphinx_ in
-their task.
-
-Status
-------
-We are busy converting existing docstrings to the new format,
-expanding them where they are lacking, as well as writing new ones for
-undocumented functions.  Volunteers are welcome to join the effort on
-our new documentation system (see the `Developer Zone
-<http://www.scipy.org/Developer_Zone/DocMarathon2008>`_).
-
-Sections
---------
-The sections of the docstring are:
-
-1. **Short summary**
-
-   A one-line summary that does not use variable names or the function
-   name, e.g.
-
-   ::
-
-     def add(a,b):
-        """The sum of two numbers.
-
-        """
-
-   The function signature is normally found by introspection and
-   displayed by the help function.  For some functions (notably those
-   written in C) the signature is not available, so we have to specify
-   it as the first line of the docstring::
-
-     """
-     add(a,b)
-
-     The sum of two numbers.
-
-     """
-
-2. **Extended summary**
-
-   A few sentences giving an extended description.  This section
-   should be used to clarify *functionality*, not to discuss
-   implementation detail or background theory, which should rather be
-   explored in the **notes** section below.  You may refer to the
-   parameters and the function name, but parameter descriptions still
-   belong in the **parameters** section.
-
-3. **Parameters**
-
-   Description of the function arguments, keywords and their
-   respective types.
-
-   ::
-
-     Parameters
-     ----------
-     x : type
-        Description of parameter `x`.
-
-   Enclose variables in single back-ticks.  If it is not necessary to
-   specify a keyword argument, use ``optional``::
-
-     x : int, optional
-
-   Optional keyword parameters have default values, which are
-   displayed as part of the function signature.  They can also be
-   detailed in the description::
-
-     Description of parameter `x` (the default is -1, which implies summation
-     over all axes).
-
-   When a parameter can only assume one of a fixed set of values,
-   those values can be listed in braces ::
-
-     x : {True, False}
-         Description of `x`.
-
-4. **Returns**
-
-   Explanation of the returned values and their types, of the same
-   format as **parameters**.
-
-5. **Other parameters**
-
-   An optional section used to describe infrequently used parameters.
-   It should only be used if a function has a large number of keyword
-   parameters, to prevent cluttering the **parameters** section.
-
-6. **Raises**
-
-   An optional section detailing which errors get raised and under
-   what conditions::
-
-     Raises
-     ------
-     LinAlgException
-         If the matrix is not numerically invertible.
-
-7. **See Also**
-
-   An optional section used to refer to related code.  This section
-   can be very useful, but should be used judiciously.  The goal is to
-   direct users to other functions they may not be aware of, or have
-   easy means of discovering (by looking at the module docstring, for
-   example).  Routines whose docstrings further explain parameters
-   used by this function are good candidates.
-
-   As an example, for ``numpy.mean`` we would have::
-
-     See Also
-     --------
-     average : Weighted average
-
-   When referring to functions in the same sub-module, no prefix is
-   needed, and the tree is searched upwards for a match.
-
-   Prefix functions from other sub-modules appropriately.  E.g.,
-   whilst documenting the ``random`` module, refer to a function in
-   ``fft`` by
-
-   ::
-
-     fft.fft2 : 2-D fast discrete Fourier transform
-
-   When referring to an entirely different module::
-
-     scipy.random.norm : Random variates, PDFs, etc.
-
-   Functions may be listed without descriptions::
-
-     See Also
-     --------
-     func_a : Function a with its description.
-     func_b, func_c, func_d
-     func_e
-
-8. **Notes**
-
-   An optional section that provides additional information about the
-   code, possibly including a discussion of the algorithm. This
-   section may include mathematical equations, written in
-   `LaTeX <http://www.latex-project.org/>`_ format::
-
-     The FFT is a fast implementation of the discrete Fourier transform:
-
-     .. math:: X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x(n)e^{-j\omega n}
-
-   Equations can also be typeset underneath the math directive::
-
-     The discrete-time Fourier time-convolution property states that
-
-     .. math::
-
-          x(n) * y(n) \Leftrightarrow X(e^{j\omega } )Y(e^{j\omega } )\\
-          another equation here
-
-   Math can furthermore be used inline, i.e.
-
-   ::
-
-     The value of :math:`\omega` is larger than 5.
-
-   Variable names are displayed in typewriter font, obtained by using
-   ``\mathtt{var}``::
-
-     We square the input parameter `alpha` to obtain
-     :math:`\mathtt{alpha}^2`.
-
-   Note that LaTeX is not particularly easy to read, so use equations
-   sparingly.
-
-   Images are allowed, but should not be central to the explanation;
-   users viewing the docstring as text must be able to comprehend its
-   meaning without resorting to an image viewer.  These additional
-   illustrations are included using::
-
-     .. image:: filename
-
-   where filename is a path relative to the reference guide source
-   directory.
-
-9. **References**
-
-   References cited in the **notes** section may be listed here,
-   e.g. if you cited the article below using the text ``[1]_``,
-   include it in the list as follows::
-
-     .. [1] O. McNoleg, "The integration of GIS, remote sensing,
-        expert systems and adaptive co-kriging for environmental habitat
-        modelling of the Highland Haggis using object-oriented, fuzzy-logic
-        and neural-network techniques," Computers & Geosciences, vol. 22,
-        pp. 585-588, 1996.
-
-   which renders as
-
-   .. [1] O. McNoleg, "The integration of GIS, remote sensing,
-      expert systems and adaptive co-kriging for environmental habitat
-      modelling of the Highland Haggis using object-oriented, fuzzy-logic
-      and neural-network techniques," Computers & Geosciences, vol. 22,
-      pp. 585-588, 1996.
-
-   Referencing sources of a temporary nature, like web pages, is
-   discouraged.  References are meant to augment the docstring, but
-   should not be required to understand it.  Follow the `citation
-   format of the IEEE
-   <http://www.ieee.org/pubs/transactions/auinfo03.pdf>`_, which
-   states that references are numbered, starting from one, in the
-   order in which they are cited.
-
-10. **Examples**
-
-    An optional section for examples, using the `doctest
-    <http://www.python.org/doc/lib/module-doctest.html>`_ format.
-    This section is meant to illustrate usage, not to provide a
-    testing framework -- for that, use the ``tests/`` directory.
-    While optional, this section is very strongly encouraged. You can
-    run these examples by doing::
-
-      >>> import doctest
-      >>> doctest.testfile('example.py')
-
-    or, using nose,
-
-    ::
-
-      $ nosetests --with-doctest example.py
-
-    Blank lines are used to separate doctests.  When they occur in the
-    expected output, they should be replaced by ``<BLANKLINE>`` (see
-    `doctest options
-    <http://docs.python.org/lib/doctest-options.html>`_ for other such
-    special strings), e.g.
-
-    ::
-
-      >>> print "a\n\nb"
-      a
-      <BLANKLINE>
-      b
-
-    The examples may assume that ``import numpy as np`` is executed before
-    the example code in *numpy*, and ``import scipy as sp`` in *scipy*.
-    Additional examples may make use of *matplotlib* for plotting, but should
-    import it explicitly, e.g., ``import matplotlib.pyplot as plt``.
-
-11. **Indexing tags**
-
-    Each function needs to be categorised for indexing purposes.  Use
-    the ``index`` directive::
-
-      .. index::
-         :refguide: ufunc, trigonometry
-
-    To index a function as a sub-category of a class, separate index
-    entries by a colon, e.g.
-
-    ::
-
-      :refguide: ufunc, numpy:reshape, other
-
-    A `list of available categories
-    <http://www.scipy.org/Developer_Zone/ReferenceGuide>`_ is
-    available.
-
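-Putting the pieces together, here is a minimal sketch of a docstring
-that uses only the most common sections; the ``multiply`` function is
-purely illustrative, and the complete template is the ``example.py``
-file referenced in the Conclusion below::
-
-  def multiply(a, b):
-      """The product of two numbers.
-
-      Parameters
-      ----------
-      a, b : array_like
-          Values to multiply.
-
-      Returns
-      -------
-      out : ndarray or scalar
-          Element-wise product of `a` and `b`.
-
-      Examples
-      --------
-      >>> multiply(2, 3)
-      6
-
-      """
-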
-Documenting classes
--------------------
-
-Class docstring
-```````````````
-Use the same sections as outlined above (all except ``Returns`` are
-applicable).  The constructor (``__init__``) should also be documented
-here.
-
-An ``Attributes`` section may be used to describe class variables::
-
-  Attributes
-  ----------
-  x : float
-      The X coordinate.
-  y : float
-      The Y coordinate.
-
-In general, it is not necessary to list class methods.  Those that are
-not part of the public API have names that start with an underscore.
-In some cases, however, a class may have a great many methods, of
-which only a few are relevant (e.g., subclasses of ndarray).  Then, it
-becomes useful to have an additional ``Methods`` section::
-
-  class Photo(ndarray):
-      """
-      Array with associated photographic information.
-
-      ...
-
-      Attributes
-      ----------
-      exposure : float
-          Exposure in seconds.
-
-      Methods
-      -------
-      colorspace(c='rgb')
-          Represent the photo in the given colorspace.
-      gamma(n=1.0)
-          Change the photo's gamma exposure.
-
-      """
-
-Note that `self` is *not* listed as the first parameter of methods.
-
-Method docstrings
-`````````````````
-Document these as you would any other function.  Do not include
-``self`` in the list of parameters.
-
-Common reST concepts
---------------------
-For paragraphs, indentation is significant and indicates indentation in the
-output. New paragraphs are marked with a blank line.
-
-Use *italics*, **bold**, and ``courier`` if needed in any explanations
-(but not for variable names and doctest code or multi-line code).
-Variable, module and class names should be written between single
-backticks (```numpy```).
-
-A more extensive example of reST markup can be found in `this example
-document <http://docutils.sourceforge.net/docs/user/rst/demo.txt>`_;
-the `quick reference
-<http://docutils.sourceforge.net/docs/user/rst/quickref.html>`_ is
-useful while editing.
-
-Line spacing and indentation are significant and should be carefully
-followed.
-
-Conclusion
-----------
-
-`An example
-<http://svn.scipy.org/svn/numpy/trunk/numpy/doc/example.py>`_ of the
-format shown here is available.  Refer to `How to Build API/Reference
-Documentation
-<http://svn.scipy.org/svn/numpy/trunk/numpy/doc/HOWTO_BUILD_DOCS.txt>`_
-on how to use epydoc_ or sphinx_ to construct a manual and web page.
-
-This document itself was written in ReStructuredText, and may be converted to
-HTML using::
-
-  $ rst2html HOWTO_DOCUMENT.txt HOWTO_DOCUMENT.html
-
-.. _SciPy: http://www.scipy.org
-.. _numpy-discussion list: http://www.scipy.org/Mailing_Lists
-.. _epydoc: http://epydoc.sourceforge.net/
-.. _sphinx: http://sphinx.pocoo.org

Deleted: trunk/numpy/doc/README.txt
===================================================================
--- trunk/numpy/doc/README.txt	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/README.txt	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,15 +0,0 @@
-Very complete documentation is available from the primary developer of
-NumPy for a small fee.  After a brief period, that documentation
-will become freely available.  See http://www.trelgol.com for
-details. The fee and restriction period are intended to allow people,
-and to encourage companies, to contribute easily to the development
-of NumPy.
-
-This directory will contain all public documentation that becomes available. 
-
-Very good documentation is also available using Python's (and
-especially IPython's) own help system.  Most of the functions have
-docstrings that provide usage assistance.
-
-
-

Modified: trunk/numpy/doc/__init__.py
===================================================================
--- trunk/numpy/doc/__init__.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/__init__.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,2 +1,12 @@
-from numpy.doc.reference import *
-del reference
+import os
+
+ref_dir = os.path.join(os.path.dirname(__file__))
+
+__all__ = [f[:-3] for f in os.listdir(ref_dir) if f.endswith('.py') and
+           not f.startswith('__')]
+__all__.sort()
+
+__doc__ = 'The following topics are available:\n' + \
+          '\n - '.join([''] + __all__)
+
+__all__.extend(['__doc__'])
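
With the new ``__init__.py``, the ``numpy.doc`` package advertises its
topics through the generated ``__doc__`` string.  A minimal usage
sketch (``basics`` is just one of the topic modules copied below; any
listed topic works the same way):

    import numpy.doc
    print(numpy.doc.__doc__)        # lists the available topics
    from numpy.doc import basics    # read one topic's docstring
    print(basics.__doc__)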

Copied: trunk/numpy/doc/basics.py (from rev 5669, trunk/numpy/doc/reference/basics.py)

Copied: trunk/numpy/doc/broadcasting.py (from rev 5669, trunk/numpy/doc/reference/broadcasting.py)

Copied: trunk/numpy/doc/creation.py (from rev 5669, trunk/numpy/doc/reference/creation.py)

Deleted: trunk/numpy/doc/example.py
===================================================================
--- trunk/numpy/doc/example.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/example.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,125 +0,0 @@
-"""This is the docstring for the example.py module.  Modules names should
-have short, all-lowercase names.  The module name may have underscores if
-this improves readability.
-
-Every module should have a docstring at the very top of the file.  The
-module's docstring may extend over multiple lines.  If your docstring does
-extend over multiple lines, the closing three quotation marks must be on
-a line by itself, preferably preceded by a blank line.
-
-"""
-import os # standard library imports first
-
-# Do NOT import using *, e.g. from numpy import *
-#
-# Import the module using
-#
-#   import numpy
-#
-# instead, or import individual functions as needed, e.g.
-#
-#  from numpy import array, zeros
-#
-# If you prefer the use of abbreviated module names, we suggest the
-# convention used by NumPy itself::
-
-import numpy as np
-import scipy as sp
-import matplotlib as mpl
-import matplotlib.pyplot as plt
-
-# These abbreviated names are not to be used in docstrings; users must
-# be able to paste and execute docstrings after importing only the
-# numpy module itself, unabbreviated.
-
-from my_module import my_func, other_func
-
-def foo(var1, var2, long_var_name='hi'):
-    """A one-line summary that does not use variable names or the
-    function name.
-
-    Several sentences providing an extended description. Refer to
-    variables using back-ticks, e.g. `var`.
-
-    Parameters
-    ----------
-    var1 : array_like
-        Array_like means all those objects -- lists, nested lists, etc. --
-        that can be converted to an array.  We can also refer to
-        variables like `var1`.
-    var2 : int
-        The type above can either refer to an actual Python type
-        (e.g. ``int``), or describe the type of the variable in more
-        detail, e.g. ``(N,) ndarray`` or ``array_like``.
-    long_var_name : {'hi', 'ho'}, optional
-        Choices in brackets, default first when optional.
-
-    Returns
-    -------
-    describe : type
-        Explanation
-    output
-        Explanation
-    tuple
-        Explanation
-    items
-        even more explaining
-
-    Other Parameters
-    ----------------
-    only_seldom_used_keywords : type
-        Explanation
-    common_parameters_listed_above : type
-        Explanation
-
-    Raises
-    ------
-    BadException
-        Because you shouldn't have done that.
-
-    See Also
-    --------
-    otherfunc : relationship (optional)
-    newfunc : Relationship (optional), which could be fairly long, in which
-              case the line wraps here.
-    thirdfunc, fourthfunc, fifthfunc
-
-    Notes
-    -----
-    Notes about the implementation algorithm (if needed).
-
-    This can have multiple paragraphs.
-
-    You may include some math:
-
-    .. math:: X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x(n)e^{-j\omega n}
-
-    And even use a Greek symbol like :math:`\omega` inline.
-
-    References
-    ----------
-    Cite the relevant literature, e.g. [1]_.  You may also cite these
-    references in the notes section above.
-
-    .. [1] O. McNoleg, "The integration of GIS, remote sensing,
-       expert systems and adaptive co-kriging for environmental habitat
-       modelling of the Highland Haggis using object-oriented, fuzzy-logic
-       and neural-network techniques," Computers & Geosciences, vol. 22,
-       pp. 585-588, 1996.
-
-    Examples
-    --------
-    These are written in doctest format, and should illustrate how to
-    use the function.
-
-    >>> a=[1,2,3]
-    >>> print [x + 3 for x in a]
-    [4, 5, 6]
-    >>> print "a\n\nb"
-    a
-    <BLANKLINE>
-    b
-
-    """
-
-    pass

Copied: trunk/numpy/doc/glossary.py (from rev 5669, trunk/numpy/doc/reference/glossary.py)

Copied: trunk/numpy/doc/howtofind.py (from rev 5669, trunk/numpy/doc/reference/howtofind.py)

Copied: trunk/numpy/doc/indexing.py (from rev 5669, trunk/numpy/doc/reference/indexing.py)

Copied: trunk/numpy/doc/internals.py (from rev 5669, trunk/numpy/doc/reference/internals.py)

Copied: trunk/numpy/doc/io.py (from rev 5669, trunk/numpy/doc/reference/io.py)

Copied: trunk/numpy/doc/jargon.py (from rev 5669, trunk/numpy/doc/reference/jargon.py)

Copied: trunk/numpy/doc/methods_vs_functions.py (from rev 5669, trunk/numpy/doc/reference/methods_vs_functions.py)

Copied: trunk/numpy/doc/misc.py (from rev 5669, trunk/numpy/doc/reference/misc.py)

Deleted: trunk/numpy/doc/npy-format.txt
===================================================================
--- trunk/numpy/doc/npy-format.txt	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/npy-format.txt	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,294 +0,0 @@
-Title: A Simple File Format for NumPy Arrays
-Discussions-To: numpy-discussion@mail.scipy.org
-Version: $Revision$
-Last-Modified: $Date$
-Author: Robert Kern <robert.kern@gmail.com>
-Status: Draft
-Type: Standards Track
-Content-Type: text/plain
-Created: 20-Dec-2007
-
-
-Abstract
-
-    We propose a standard binary file format (NPY) for persisting
-    a single arbitrary NumPy array on disk.  The format stores all of
-    the shape and dtype information necessary to reconstruct the array
-    correctly even on another machine with a different architecture.
-    The format is designed to be as simple as possible while achieving
-    its limited goals.  The implementation is intended to be pure
-    Python and distributed as part of the main numpy package.
-
-
-Rationale
-
-    A lightweight, omnipresent system for saving NumPy arrays to disk
-    is a frequent need.  Python in general has pickle [1] for saving
-    most Python objects to disk.  This often works well enough with
-    NumPy arrays for many purposes, but it has a few drawbacks:
-
-    - Dumping or loading a pickle file requires the duplication of the
-      data in memory.  For large arrays, this can be a showstopper.
-
-    - The array data is not directly accessible through
-      memory-mapping.  Now that numpy has that capability, it has
-      proved very useful for loading large amounts of data (or more to
-      the point: avoiding loading large amounts of data when you only
-      need a small part).
-
-    Both of these problems can be addressed by dumping the raw bytes
-    to disk using ndarray.tofile() and numpy.fromfile().  However,
-    these have their own problems:
-
-    - The data which is written has no information about the shape or
-      dtype of the array.
-
-    - It is incapable of handling object arrays.
-
-    The NPY file format is an evolutionary advance over these two
-    approaches.  Its design is mostly limited to solving the problems
-    with pickles and tofile()/fromfile().  It does not intend to solve
-    more complicated problems for which more complicated formats like
-    HDF5 [2] are a better solution.
-
-
-Use Cases
-
-    - Neville Newbie has just started to pick up Python and NumPy.  He
-      has not installed many packages, yet, nor learned the standard
-      library, but he has been playing with NumPy at the interactive
-      prompt to do small tasks.  He gets a result that he wants to
-      save.
-
-    - Annie Analyst has been using large nested record arrays to
-      represent her statistical data.  She wants to convince her
-      R-using colleague, David Doubter, that Python and NumPy are
-      awesome by sending him her analysis code and data.  She needs
-      the data to load at interactive speeds.  Since David does not
-      use Python usually, needing to install large packages would turn
-      him off.
-
-    - Simon Seismologist is developing new seismic processing tools.
-      One of his algorithms requires large amounts of intermediate
-      data to be written to disk.  The data does not really fit into
-      the industry-standard SEG-Y schema, but he already has a nice
-      record-array dtype for using it internally.
-
-    - Polly Parallel wants to split up a computation on her multicore
-      machine as simply as possible.  Parts of the computation can be
-      split up among different processes without any communication
-      between processes; they just need to fill in the appropriate
-      portion of a large array with their results.  Having several
-      child processes memory-mapping a common array is a good way to
-      achieve this.
-
-
-Requirements
-
-    The format MUST be able to:
-
-    - Represent all NumPy arrays including nested record
-      arrays and object arrays.
-
-    - Represent the data in its native binary form.
-
-    - Be contained in a single file.
-
-    - Support Fortran-contiguous arrays directly.
-
-    - Store all of the necessary information to reconstruct the array
-      including shape and dtype on a machine of a different
-      architecture.  Both little-endian and big-endian arrays must be
-      supported and a file with little-endian numbers will yield
-      a little-endian array on any machine reading the file.  The
-      types must be described in terms of their actual sizes.  For
-      example, if a machine with a 64-bit C "long int" writes out an
-      array with "long ints", a reading machine with 32-bit C "long
-      ints" will yield an array with 64-bit integers.
-
-    - Be reverse engineered.  Datasets often live longer than the
-      programs that created them.  A competent developer should be
-      able to create a solution in his preferred programming language to
-      read most NPY files that he has been given without much
-      documentation.
-
-    - Allow memory-mapping of the data.
-
-    - Be read from a filelike stream object instead of an actual file.
-      This allows the implementation to be tested easily and makes the
-      system more flexible.  NPY files can be stored in ZIP files and
-      easily read from a ZipFile object.
-
-    - Store object arrays.  Since general Python objects are
-      complicated and can only be reliably serialized by pickle (if at
-      all), many of the other requirements are waived for files
-      containing object arrays.  Files with object arrays do not have
-      to be mmapable since that would be technically impossible.  We
-      cannot expect the pickle format to be reverse engineered without
-      knowledge of pickle.  However, one should at least be able to
-      read and write object arrays with the same generic interface as
-      other arrays.
-
-    - Be read and written using APIs provided in the numpy package
-      itself without any other libraries.  The implementation inside
-      numpy may be in C if necessary.
-
-    The format explicitly *does not* need to:
-
-    - Support multiple arrays in a file.  Since we require filelike
-      objects to be supported, one could use the API to build an ad
-      hoc format that supported multiple arrays.  However, solving the
-      general problem and use cases is beyond the scope of the format
-      and the API for numpy.
-
-    - Fully handle arbitrary subclasses of numpy.ndarray.  Subclasses
-      will be accepted for writing, but only the array data will be
-      written out.  A regular numpy.ndarray object will be created
-      upon reading the file.  The API can be used to build a format
-      for a particular subclass, but that is out of scope for the
-      general NPY format.
-
-
-Format Specification: Version 1.0
-
-    The first 6 bytes are a magic string: exactly "\x93NUMPY".
-
-    The next 1 byte is an unsigned byte: the major version number of
-    the file format, e.g. \x01.
-
-    The next 1 byte is an unsigned byte: the minor version number of
-    the file format, e.g. \x00.  Note: the version of the file format
-    is not tied to the version of the numpy package.
-
-    The next 2 bytes form a little-endian unsigned short int: the
-    length of the header data HEADER_LEN.
-
-    The next HEADER_LEN bytes form the header data describing the
-    array's format.  It is an ASCII string which contains a Python
-    literal expression of a dictionary.  It is terminated by a newline
-    ('\n') and padded with spaces ('\x20') to make the total length of
-    the magic string + 4 + HEADER_LEN be evenly divisible by 16 for
-    alignment purposes.
-
-    The dictionary contains three keys:
-
-        "descr" : dtype.descr
-            An object that can be passed as an argument to the
-            numpy.dtype() constructor to create the array's dtype.
-
-        "fortran_order" : bool
-            Whether the array data is Fortran-contiguous or not.
-            Since Fortran-contiguous arrays are a common form of
-            non-C-contiguity, we allow them to be written directly to
-            disk for efficiency.
-
-        "shape" : tuple of int
-            The shape of the array.
-
-    For repeatability and readability, this dictionary is formatted
-    using pprint.pformat() so the keys are in alphabetic order.
-
-    Following the header comes the array data.  If the dtype contains
-    Python objects (i.e. dtype.hasobject is True), then the data is
-    a Python pickle of the array.  Otherwise the data is the
-    contiguous (either C- or Fortran-, depending on fortran_order)
-    bytes of the array.  Consumers can figure out the number of bytes
-    by multiplying the number of elements given by the shape (noting
-    that shape=() means there is 1 element) by dtype.itemsize.
-
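-    As an illustration only (not part of this specification), a rough
-    Python sketch of parsing a version 1.0 header might look like the
-    following; error handling and safe evaluation of the header
-    dictionary are deliberately omitted:
-
-        import struct
-
-        def read_npy_header(fp):
-            # fp must be opened in binary mode.
-            magic = fp.read(6)
-            assert magic == b'\x93NUMPY'
-            major, minor = struct.unpack('BB', fp.read(2))
-            # Little-endian unsigned short: length of the header text.
-            (header_len,) = struct.unpack('<H', fp.read(2))
-            # The header is the repr of a Python dict, newline terminated
-            # and space padded so that the data start is 16-byte aligned.
-            header = eval(fp.read(header_len))
-            return header['descr'], header['fortran_order'], header['shape']
-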
-
-Conventions
-
-    We recommend using the ".npy" extension for files following this
-    format.  This is by no means a requirement; applications may wish
-    to use this file format but use an extension specific to the
-    application.  In the absence of an obvious alternative, however,
-    we suggest using ".npy".
-
-    For a simple way to combine multiple arrays into a single file,
-    one can use ZipFile to contain multiple ".npy" files.  We
-    recommend using the file extension ".npz" for these archives.
-
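-    As a sketch of this convention (the file names here are made up,
-    and numpy.save is assumed to write the NPY format described
-    above), two arrays could be bundled into an ".npz"-style archive
-    using only the standard library:
-
-        import zipfile
-        import numpy as np
-
-        np.save('weights.npy', np.zeros((3, 4)))
-        np.save('biases.npy', np.zeros(4))
-
-        archive = zipfile.ZipFile('model.npz', 'w')
-        archive.write('weights.npy')
-        archive.write('biases.npy')
-        archive.close()
-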
-
-Alternatives
-
-    The author believes that this system (or one along these lines) is
-    about the simplest system that satisfies all of the requirements.
-    However, one must always be wary of introducing a new binary
-    format to the world.
-
-    HDF5 [2] is a very flexible format that should be able to
-    represent all of NumPy's arrays in some fashion.  It is probably
-    the only widely-used format that can faithfully represent all of
-    NumPy's array features.  It has seen substantial adoption by the
-    scientific community in general and the NumPy community in
-    particular.  It is an excellent solution for a wide variety of
-    array storage problems with or without NumPy.
-
-    HDF5 is a complicated format that more or less implements
-    a hierarchical filesystem-in-a-file.  This fact makes satisfying
-    some of the Requirements difficult.  To the author's knowledge, as
-    of this writing, there is no application or library that reads or
-    writes even a subset of HDF5 files that does not use the canonical
-    libhdf5 implementation.  This implementation is a large library
-    that is not always easy to build.  It would be infeasible to
-    include it in numpy.
-
-    It might be feasible to target an extremely limited subset of
-    HDF5.  Namely, there would be only one object in it: the array.
-    Using contiguous storage for the data, one should be able to
-    implement just enough of the format to provide the same metadata
-    that the proposed format does.  One could still meet all of the
-    technical requirements like mmapability.
-
-    We would accrue a substantial benefit by being able to generate
-    files that could be read by other HDF5 software.  Furthermore, by
-    providing the first non-libhdf5 implementation of HDF5, we would
-    be able to encourage more adoption of simple HDF5 in applications
-    where it was previously infeasible because of the size of the
-    library.  The basic work may encourage similar dead-simple
-    implementations in other languages and further expand the
-    community.
-
-    The remaining concern is about reverse engineerability of the
-    format.  Even the simple subset of HDF5 would be very difficult to
-    reverse engineer given just a file by itself.  However, given the
-    prominence of HDF5, this might not be a substantial concern.
-
-    In conclusion, we are going forward with the design laid out in
-    this document.  If someone writes code to handle the simple subset
-    of HDF5 that would be useful to us, we may consider a revision of
-    the file format.
-
-
-Implementation
-
-    The current implementation is in the trunk of the numpy SVN
-    repository and will be part of the 1.0.5 release.
-
-        http://svn.scipy.org/svn/numpy/trunk
-
-    Specifically, the file format.py in this directory implements the
-    format as described here.
-
-
-References
-
-    [1] http://docs.python.org/lib/module-pickle.html
-
-    [2] http://hdf.ncsa.uiuc.edu/products/hdf5/index.html
-
-
-Copyright
-
-    This document has been placed in the public domain.
-
-
-
-Local Variables:
-mode: indented-text
-indent-tabs-mode: nil
-sentence-end-double-space: t
-fill-column: 70
-coding: utf-8
-End:

Deleted: trunk/numpy/doc/pep_buffer.txt
===================================================================
--- trunk/numpy/doc/pep_buffer.txt	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/pep_buffer.txt	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,869 +0,0 @@
-:PEP: 3118
-:Title: Revising the buffer protocol
-:Version: $Revision$
-:Last-Modified: $Date$
-:Authors: Travis Oliphant <oliphant@ee.byu.edu>, Carl Banks <pythondev@aerojockey.com>
-:Status: Draft
-:Type: Standards Track
-:Content-Type: text/x-rst
-:Created: 28-Aug-2006
-:Python-Version: 3000
-
-Abstract
-========
-
-This PEP proposes re-designing the buffer interface (PyBufferProcs
-function pointers) to improve the way Python allows memory sharing
-in Python 3.0.
-
-In particular, it is proposed that the character buffer portion
-of the API be eliminated and the multiple-segment portion be
-re-designed in conjunction with allowing for strided memory
-to be shared.   In addition, the new buffer interface will
-allow the sharing of any multi-dimensional nature of the
-memory and what data-format the memory contains.
-
-This interface will allow any extension module to either
-create objects that share memory or create algorithms that
-use and manipulate raw memory from arbitrary objects that
-export the interface.
-
-
-Rationale
-=========
-
-The Python 2.X buffer protocol allows different Python types to
-exchange a pointer to a sequence of internal buffers.  This
-functionality is *extremely* useful for sharing large segments of
-memory between different high-level objects, but it is too limited and
-has issues:
-
-1. There is the little used "sequence-of-segments" option
-   (bf_getsegcount) that is not well motivated.
-
-2. There is the apparently redundant character-buffer option
-   (bf_getcharbuffer).
-
-3. There is no way for a consumer to tell the buffer-API-exporting
-   object it is "finished" with its view of the memory and
-   therefore no way for the exporting object to be sure that it is
-   safe to reallocate the pointer to the memory that it owns (for
-   example, the array object reallocating its memory after sharing
-   it with the buffer object which held the original pointer led
-   to the infamous buffer-object problem).
-
-4. Memory is just a pointer with a length. There is no way to
-   describe what is "in" the memory (float, int, C-structure, etc.)
-
-5. There is no shape information provided for the memory.  But,
-   several array-like Python types could make use of a standard
-   way to describe the shape-interpretation of the memory
-   (wxPython, GTK, pyQT, CVXOPT, PyVox, Audio and Video
-   Libraries, ctypes, NumPy, data-base interfaces, etc.)
-
-6. There is no way to share discontiguous memory (except through
-   the sequence of segments notion).
-
-   There are two widely used libraries that use the concept of
-   discontiguous memory: PIL and NumPy.  Their view of discontiguous
-   arrays is different, though.  The proposed buffer interface allows
-   sharing of either memory model.  Exporters will use only one
-   approach and consumers may choose to support discontiguous
-   arrays of each type however they choose.
-
-   NumPy uses the notion of constant striding in each dimension as its
-   basic concept of an array. With this concept, a simple sub-region
-   of a larger array can be described without copying the data.
-   Thus, stride information is the additional information that must be
-   shared.
-
-   The PIL uses a more opaque memory representation. Sometimes an
-   image is contained in a contiguous segment of memory, but sometimes
-   it is contained in an array of pointers to the contiguous segments
-   (usually lines) of the image.  The PIL is where the idea of multiple
-   buffer segments in the original buffer interface came from.
-
-   NumPy's strided memory model is used more often in computational
-   libraries and because it is so simple it makes sense to support
-   memory sharing using this model.  The PIL memory model is sometimes
-   used in C-code where a 2-d array can be then accessed using double
-   pointer indirection:  e.g. image[i][j].
-
-   The buffer interface should allow the object to export either of these
-   memory models.  Consumers are free to either require contiguous memory
-   or write code to handle one or both of these memory models.
-
-Proposal Overview
-=================
-
-* Eliminate the char-buffer and multiple-segment sections of the
-  buffer-protocol.
-
-* Unify the read/write versions of getting the buffer.
-
-* Add a new function to the interface that should be called when
-  the consumer object is "done" with the memory area.
-
-* Add a new variable to allow the interface to describe what is in
-  memory (unifying what is currently done in struct and
-  array)
-
-* Add a new variable to allow the protocol to share shape information
-
-* Add a new variable for sharing stride information
-
-* Add a new mechanism for sharing arrays that must
-  be accessed using pointer indirection.
-
-* Fix all objects in the core and the standard library to conform
-  to the new interface
-
-* Extend the struct module to handle more format specifiers
-
-* Extend the buffer object into a new memory object which places
-  a Python veneer around the buffer interface.
-
-* Add a few functions to make it easy to copy contiguous data
-  in and out of objects supporting the buffer interface.
-
-Specification
-=============
-
-While the new specification allows for complicated memory sharing,
-simple contiguous buffers of bytes can still be obtained from an
-object.  In fact, the new protocol allows a standard mechanism for
-doing this even if the original object is not represented as a
-contiguous chunk of memory.
-
-The easiest way to obtain a simple contiguous chunk of memory is
-to use the provided C-API.
-
-
-Change the PyBufferProcs structure to
-
-::
-
-    typedef struct {
-         getbufferproc bf_getbuffer;
-         releasebufferproc bf_releasebuffer;
-    } PyBufferProcs;
-
-
-::
-
-    typedef int (*getbufferproc)(PyObject *obj, PyBuffer *view, int flags)
-
-This function returns 0 on success and -1 on failure (and raises an
-error). The first variable is the "exporting" object.  The second
-argument is the address to a bufferinfo structure.  If view is NULL,
-then no information is returned but a lock on the memory is still
-obtained.  In this case, the corresponding releasebuffer should also
-be called with NULL.
-
-The third argument indicates what kind of buffer the exporter is
-allowed to return.  It essentially tells the exporter what kind of
-memory area the consumer can deal with.  It also indicates what
-members of the PyBuffer structure the consumer is going to care about.
-
-The exporter can use this information to simplify how much of the PyBuffer
-structure is filled in and/or raise an error if the object can't support
-a simpler view of its memory.
-
-Thus, the caller can request a simple "view" and either receive it or
-have an error raised if it is not possible.
-
-All of the following assume that at least buf, len, and readonly
-will always be utilized by the caller.
-
-Py_BUF_SIMPLE
-
-   The returned buffer will be assumed to be readable (the object may
-   or may not have writeable memory).  Only the buf, len, and readonly
-   variables may be accessed. The format will be assumed to be
-   unsigned bytes.  This is a "stand-alone" flag constant.  It never
-   needs to be \|'d to the others.  The exporter will raise an
-   error if it cannot provide such a contiguous buffer.
-
-Py_BUF_WRITEABLE
-
-   The returned buffer must be writeable.  If it is not writeable,
-   then raise an error.
-
-Py_BUF_READONLY
-
-   The returned buffer must be readonly.  If the object is already
-   read-only or it can make its memory read-only (and there are no
-   other views on the object) then it should do so and return the
-   buffer information.  If the object does not have read-only memory
-   (or cannot make it read-only), then an error should be raised.
-
-Py_BUF_FORMAT
-
-   The returned buffer must have true format information.  This would
-   be used when the consumer is going to be checking for what 'kind'
-   of data is actually stored.  An exporter should always be able
-   to provide this information if requested.
-
-Py_BUF_SHAPE
-
-   The returned buffer must have shape information.  The memory will
-   be assumed C-style contiguous (last dimension varies the fastest).
-   The exporter may raise an error if it cannot provide this kind
-   of contiguous buffer.
-
-Py_BUF_STRIDES (implies Py_BUF_SHAPE)
-
-   The returned buffer must have strides information. This would be
-   used when the consumer can handle strided, discontiguous arrays.
-   Handling strides automatically assumes you can handle shape.
-   The exporter may raise an error if it cannot provide a strided-only
-   representation of the data (i.e. without the suboffsets).
-
-Py_BUF_OFFSETS (implies Py_BUF_STRIDES)
-
-   The returned buffer must have suboffsets information.  This would
-   be used when the consumer can handle indirect array referencing
-   implied by these suboffsets.
-
-Py_BUF_FULL (Py_BUF_OFFSETS | Py_BUF_WRITEABLE | Py_BUF_FORMAT)
-
-Thus, the consumer simply wanting a contiguous chunk of bytes from
-the object would use Py_BUF_SIMPLE, while a consumer that understands
-how to make use of the most complicated cases could use Py_BUF_OFFSETS.
-
-If format information is going to be probed, then Py_BUF_FORMAT must
-be \|'d to the flags otherwise the consumer assumes it is unsigned
-bytes.
-
-There is a C-API that simple exporting objects can use to fill-in the
-buffer info structure correctly according to the provided flags if a
-contiguous chunk of "unsigned bytes" is all that can be exported.
-
-
-The bufferinfo structure is::
-
-  struct bufferinfo {
-       void *buf;
-       Py_ssize_t len;
-       int readonly;
-       const char *format;
-       int ndims;
-       Py_ssize_t *shape;
-       Py_ssize_t *strides;
-       Py_ssize_t *suboffsets;
-       int itemsize;
-       void *internal;
-  } PyBuffer;
-
-Before calling this function, the bufferinfo structure can be filled
-with whatever.  Upon return from getbufferproc, the bufferinfo
-structure is filled in with relevant information about the buffer.
-This same bufferinfo structure must be passed to bf_releasebuffer (if
-available) when the consumer is done with the memory. The caller is
-responsible for keeping a reference to obj until releasebuffer is
-called (i.e. this call does not alter the reference count of obj).
-
-The members of the bufferinfo structure are:
-
-buf
-    a pointer to the start of the memory for the object
-
-len
-    the total bytes of memory the object uses.  This should be the
-    same as the product of the shape array multiplied by the number of
-    bytes per item of memory.
-
-readonly
-    an integer variable to hold whether or not the memory is
-    readonly.  1 means the memory is readonly, zero means the
-    memory is writeable.
-
-format
-    a NULL-terminated format-string (following the struct-style syntax
-    including extensions) indicating what is in each element of
-    memory.  The number of elements is len / itemsize, where itemsize
-    is the number of bytes implied by the format.  For standard
-    unsigned bytes use a format string of "B".
-
-ndims
-    a variable storing the number of dimensions the memory represents.
-    Must be >=0.
-
-shape
-    an array of ``Py_ssize_t`` of length ``ndims`` indicating the
-    shape of the memory as an N-D array.  Note that ``((*shape)[0] *
-    ... * (*shape)[ndims-1])*itemsize = len``.  If ndims is 0 (indicating
-    a scalar), then this must be NULL.
-
-strides
-    address of a ``Py_ssize_t*`` variable that will be filled with a
-    pointer to an array of ``Py_ssize_t`` of length ``ndims`` (or NULL
-    if ndims is 0), indicating the number of bytes to skip to get to
-    the next element in each dimension.  If this is not requested by
-    the caller (BUF_STRIDES is not set), then this member of the
-    structure will not be used and the consumer is assuming the array
-    is C-style contiguous.  If this is not the case, then an error
-    should be raised.  If this member is requested by the caller
-    (BUF_STRIDES is set), then it must be filled in.
-
-
-suboffsets
-    address of a ``Py_ssize_t *`` variable that will be filled with a
-    pointer to an array of ``Py_ssize_t`` of length ``*ndims``.  If
-    these suboffset numbers are >=0, then the value stored along the
-    indicated dimension is a pointer and the suboffset value dictates
-    how many bytes to add to the pointer after de-referencing.  A
-    suboffset value that is negative indicates that no de-referencing
-    should occur (striding in a contiguous memory block).  If all
-    suboffsets are negative (i.e. no de-referencing is needed), then
-    this must be NULL.
-
-    For clarity, here is a function that returns a pointer to the
-    element in an N-D array pointed to by an N-dimensional index when
-    there are both strides and suboffsets::
-
-      void* get_item_pointer(int ndim, void* buf, Py_ssize_t* strides,
-                           Py_ssize_t* suboffsets, Py_ssize_t *indices) {
-          char* pointer = (char*)buf;
-          int i;
-          for (i = 0; i < ndim; i++) {
-              pointer += strides[i]*indices[i];
-              if (suboffsets[i] >= 0) {
-                  pointer = *((char**)pointer) + suboffsets[i];
-              }
-          }
-          return (void*)pointer;
-      }
-
-    Notice the suboffset is added "after" the dereferencing occurs.
-    Thus slicing in the ith dimension would add to the suboffsets in
-    the (i-1)st dimension.  Slicing in the first dimension would change
-    the location of the starting pointer directly (i.e. buf would
-    be modified).
-
-itemsize
-    This is a storage for the itemsize of each element of the shared
-    memory.  It can be obtained using PyBuffer_SizeFromFormat but an
-    exporter may know it without making this call and thus storing it
-    is more convenient and faster.
-
-internal
-    This is for use internally by the exporting object.  For example,
-    this might be re-cast as an integer by the exporter and used to
-    store flags about whether or not the shape, strides, and suboffsets
-    arrays must be freed when the buffer is released.   The consumer
-    should never touch this value.
-
-
-The exporter is responsible for making sure the memory pointed to by
-buf, format, shape, strides, and suboffsets is valid until
-releasebuffer is called.  If the exporter wants to be able to change
-shape, strides, and/or suboffsets before releasebuffer is called then
-it should allocate those arrays when getbuffer is called (pointing to
-them in the buffer-info structure provided) and free them when
-releasebuffer is called.
-
-
-The same bufferinfo struct should be used in the release-buffer
-interface call. The caller is responsible for the memory of the
-bufferinfo structure itself.
-
-``typedef int (*releasebufferproc)(PyObject *obj, PyBuffer *view)``
-    Callers of getbufferproc must make sure that this function is
-    called when memory previously acquired from the object is no
-    longer needed.  The exporter of the interface must make sure that
-    any memory pointed to in the bufferinfo structure remains valid
-    until releasebuffer is called.
-
-    Both of these routines are optional for a type object.
-
-    If the releasebuffer function is not provided then it does not ever
-    need to be called.
-
-Exporters will need to define a releasebuffer function if they can
-re-allocate their memory, strides, shape, suboffsets, or format
-variables which they might share through the struct bufferinfo.
-Several mechanisms could be used to keep track of how many getbuffer
-calls have been made and shared.  Either a single variable could be
-used to keep track of how many "views" have been exported, or a
-linked-list of bufferinfo structures filled in could be maintained in
-each object.
-
-All that is specifically required by the exporter, however, is to
-ensure that any memory shared through the bufferinfo structure remains
-valid until releasebuffer is called on the bufferinfo structure.
-
-
-New C-API calls are proposed
-============================
-
-::
-
-    int PyObject_CheckBuffer(PyObject *obj)
-
-Return 1 if the getbuffer function is available otherwise 0.
-
-::
-
-    int PyObject_GetBuffer(PyObject *obj, PyBuffer *view,
-                           int flags)
-
-This is a C-API version of the getbuffer function call.  It checks to
-make sure the object has the required function pointer and issues the
-call.  Returns -1 and raises an error on failure and returns 0 on
-success.
-
-::
-
-    int PyObject_ReleaseBuffer(PyObject *obj, PyBuffer *view)
-
-This is a C-API version of the releasebuffer function call.  It checks
-to make sure the object has the required function pointer and issues
-the call.  Returns 0 on success and -1 (with an error raised) on
-failure. This function always succeeds if there is no releasebuffer
-function for the object.
-
-::
-
-    PyObject *PyObject_GetMemoryView(PyObject *obj)
-
-Return a memory-view object from an object that defines the buffer interface.
-
-A memory-view object is an extended buffer object that could replace
-the buffer object (but doesn't have to).  Its C-structure is
-
-::
-
-  typedef struct {
-      PyObject_HEAD
-      PyObject *base;
-      int ndims;
-      Py_ssize_t *starts;  /* slice starts */
-      Py_ssize_t *stops;   /* slice stops */
-      Py_ssize_t *steps;   /* slice steps */
-  } PyMemoryViewObject;
-
-This is functionally similar to the current buffer object except only
-a reference to base is kept.  The actual memory for base must be
-re-grabbed using the buffer-protocol, whenever it is needed.
-
-The getbuffer and releasebuffer for this object use the underlying
-base object (adjusted using the slice information).  If the number of
-dimensions of the base object (or the strides or the size) has changed
-when a new view is requested, then the getbuffer will trigger an error.
-
-This memory-view object will support multi-dimensional slicing.  Slices
-of the memory-view object are other memory-view objects. When an
-"element" from the memory-view is returned it is always a tuple of
-bytes object + format string which can then be interpreted using the
-struct module if desired.
-
-::
-
-    int PyBuffer_SizeFromFormat(const char *)
-
-Return the implied itemsize of the data-format area from a struct-style
-description.
-
-::
-
-    int PyObject_GetContiguous(PyObject *obj, void **buf, Py_ssize_t *len,
-                               char **format, char fortran)
-
-Return a contiguous chunk of memory representing the buffer.  If a
-copy is made then return 1.  If no copy was needed return 0.  If an
-error occurred in probing the buffer interface, then return -1.  The
-contiguous chunk of memory is pointed to by ``*buf`` and the length of
-that memory is ``*len``.  If the object is multi-dimensional, then if
-fortran is 'F', the first dimension of the underlying array will vary
-the fastest in the buffer.  If fortran is 'C', then the last dimension
-will vary the fastest (C-style contiguous). If fortran is 'A', then it
-does not matter and you will get whatever the object decides is more
-efficient.
-
-::
-
-    int PyObject_CopyToObject(PyObject *obj, void *buf, Py_ssize_t len,
-                              char fortran)
-
-Copy ``len`` bytes of data pointed to by the contiguous chunk of
-memory pointed to by ``buf`` into the buffer exported by obj.  Return
-0 on success and return -1 and raise an error on failure.  If the
-object does not have a writeable buffer, then an error is raised.  If
-fortran is 'F', then if the object is multi-dimensional, then the data
-will be copied into the array in Fortran-style (first dimension varies
-the fastest).  If fortran is 'C', then the data will be copied into the
-array in C-style (last dimension varies the fastest).  If fortran is 'A', then
-it does not matter and the copy will be made in whatever way is more
-efficient.
-
-::
-
-    void PyBuffer_FreeMem(void *buf)
-
-This function frees the memory returned by PyObject_GetContiguous if a
-copy was made.  Do not call this function unless
-PyObject_GetContiguous returns a 1 indicating that new memory was
-created.
-
-
-These last three C-API calls allow a standard way of getting data in and
-out of Python objects into contiguous memory areas no matter how it is
-actually stored.  These calls use the extended buffer interface to perform
-their work.
-
-::
-
-    int PyBuffer_IsContiguous(PyBuffer *view, char fortran);
-
-Return 1 if the memory defined by the view object is C-style (fortran = 'C')
-or Fortran-style (fortran = 'F') contiguous.  Return 0 otherwise.
-
-::
-
-    void PyBuffer_FillContiguousStrides(int *ndims, Py_ssize_t *shape,
-                                        int itemsize,
-                                        Py_ssize_t *strides, char fortran)
-
-Fill the strides array with byte-strides of a contiguous (C-style if
-fortran is 0 or Fortran-style if fortran is 1) array of the given
-shape with the given number of bytes per element.
-
-::
-
-    int PyBuffer_FillInfo(PyBuffer *view, void *buf,
-                          Py_ssize_t len, int readonly, int infoflags)
-
-Fills in a buffer-info structure correctly for an exporter that can
-only share a contiguous chunk of memory of "unsigned bytes" of the
-given length.  Returns 0 on success and -1 (with raising an error) on
-error.
-
-
-Additions to the struct string-syntax
-=====================================
-
-The struct string-syntax is missing some characters to fully
-implement data-format descriptions already available elsewhere (in
-ctypes and NumPy for example).  The Python 2.5 specification is
-at http://docs.python.org/lib/module-struct.html
-
-Here are the proposed additions:
-
-
-================  ===========
-Character         Description
-================  ===========
-'t'               bit (number before states how many bits)
-'?'               platform _Bool type
-'g'               long double
-'c'               ucs-1 (latin-1) encoding
-'u'               ucs-2
-'w'               ucs-4
-'O'               pointer to Python Object
-'Z'               complex (whatever the next specifier is)
-'&'               specific pointer (prefix before another character)
-'T{}'             structure (detailed layout inside {})
-'(k1,k2,...,kn)'  multi-dimensional array of whatever follows
-':name:'          optional name of the preceding element
-'X{}'             pointer to a function (optional function
-                                         signature inside {})
-' \n\t'           ignored (allow better readability)
-                             -- this may already be true
-================  ===========
-
-The struct module will be changed to understand these as well and
-return appropriate Python objects on unpacking.  Unpacking a
-long-double will return a decimal object or a ctypes long-double.
-Unpacking 'u' or 'w' will return Python unicode.  Unpacking a
-multi-dimensional array will return a list (of lists if >1d).
-Unpacking a pointer will return a ctypes pointer object. Unpacking a
-function pointer will return a ctypes call-object (perhaps). Unpacking
-a bit will return a Python Bool.  White-space in the struct-string
-syntax will be ignored if it isn't already.  Unpacking a named-object
-will return some kind of named-tuple-like object that acts like a
-tuple but whose entries can also be accessed by name. Unpacking a
-nested structure will return a nested tuple.
-
-Endian-specification ('!', '@','=','>','<', '^') is also allowed
-inside the string so that it can change if needed.  The
-previously-specified endian string is in force until changed.  The
-default endian is '@' which means native data-types and alignment.  If
-un-aligned, native data-types are requested, then the endian
-specification is '^'.
-
-According to the struct-module, a number can precede a character
-code to specify how many of that type there are.  The
-(k1,k2,...,kn) extension also allows specifying if the data is
-supposed to be viewed as a (C-style contiguous, last-dimension
-varies the fastest) multi-dimensional array of a particular format.
-
-Functions should be added to ctypes to create a ctypes object from
-a struct description, and to add long-double and ucs-2 to ctypes.
-
-Examples of Data-Format Descriptions
-====================================
-
-Here are some examples of C-structures and how they would be
-represented using the struct-style syntax.
-
-<named> is the constructor for a named-tuple (not-specified yet).
-
-float
-    'f' <--> Python float
-complex double
-    'Zd' <--> Python complex
-RGB Pixel data
-    'BBB' <--> (int, int, int)
-    'B:r: B:g: B:b:' <--> <named>((int, int, int), ('r','g','b'))
-
-Mixed endian (weird but possible)
-    '>i:big: <i:little:' <--> <named>((int, int), ('big', 'little'))
-
-Nested structure
-    ::
-
-        struct {
-             int ival;
-             struct {
-                 unsigned short sval;
-                 unsigned char bval;
-                 unsigned char cval;
-             } sub;
-        }
-        """i:ival:
-           T{
-              H:sval:
-              B:bval:
-              B:cval:
-            }:sub:
-        """
-Nested array
-    ::
-
-        struct {
-             int ival;
-             double data[16*4];
-        }
-        """i:ival:
-           (16,4)d:data:
-        """
-
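-For comparison, the *existing* struct module already handles the plain,
-unnamed forms of some of these descriptions; the named-field, nested,
-and multi-dimensional forms above are part of this proposal and are not
-understood by struct today.  A small sketch::
-
-    import struct
-
-    # An RGB pixel as three unsigned bytes ('BBB' in the current syntax;
-    # the proposed named form would be 'B:r: B:g: B:b:').
-    data = struct.pack('BBB', 255, 128, 0)
-    print(struct.unpack('BBB', data))    # (255, 128, 0)
-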
-
-Code to be affected
-===================
-
-All objects and modules in Python that export or consume the old
-buffer interface will be modified.  Here is a partial list.
-
-* buffer object
-* bytes object
-* string object
-* array module
-* struct module
-* mmap module
-* ctypes module
-
-Anything else using the buffer API.
-
-
-Issues and Details
-==================
-
-It is intended that this PEP will be back-ported to Python 2.6 by
-adding the C-API and the two functions to the existing buffer
-protocol.
-
-The proposed locking mechanism relies entirely on the exporter object
-to not invalidate any of the memory pointed to by the buffer structure
-until a corresponding releasebuffer is called.  If it wants to be able
-to change its own shape and/or strides arrays, then it needs to create
-memory for these in the bufferinfo structure and copy information
-over.
-
-The sharing of strided memory and suboffsets is new and can be seen as
-a modification of the multiple-segment interface.  It is motivated by
-NumPy and the PIL.  NumPy objects should be able to share their
-strided memory with code that understands how to manage strided memory
-because strided memory is very common when interfacing with compute
-libraries.
-
-Also, with this approach it should be possible to write generic code
-that works with both kinds of memory.
-
-Memory management of the format string, the shape array, the strides
-array, and the suboffsets array in the bufferinfo structure is always
-the responsibility of the exporting object.  The consumer should not
-set these pointers to any other memory or try to free them.
-
-Several ideas were discussed and rejected:
-
-    Having a "releaser" object whose release-buffer was called.  This
-    was deemed unacceptable because it caused the protocol to be
-    asymmetric (you called release on something different than you
-    "got" the buffer from).  It also complicated the protocol without
-    providing a real benefit.
-
-    Passing all the struct variables separately into the function.
-    This had the advantage that it allowed one to set NULL to
-    variables that were not of interest, but it also made the function
-    call more difficult.  The flags variable provides the same
-    ability while letting consumers be "simple" in how they call the protocol.
-
-Code
-========
-
-The authors of the PEP promise to contribute and maintain the code for
-this proposal but will welcome any help.
-
-
-
-
-Examples
-=========
-
-Ex. 1
------------
-
-This example shows how an image object that uses contiguous lines might expose its buffer.
-
-::
-
-  struct rgba {
-      unsigned char r, g, b, a;
-  };
-
-  struct ImageObject {
-      PyObject_HEAD;
-      ...
-      struct rgba** lines;
-      Py_ssize_t height;
-      Py_ssize_t width;
-      Py_ssize_t shape_array[2];
-      Py_ssize_t stride_array[2];
-      Py_ssize_t view_count;
-  };
-
-"lines" points to malloced 1-D array of (struct rgba*).  Each pointer
-in THAT block points to a separately malloced array of (struct rgba).
-
-In order to access, say, the red value of the pixel at x=30, y=50, you'd use "lines[50][30].r".
-
-So what does ImageObject's getbuffer do?  Leaving error checking out::
-
-  int Image_getbuffer(PyObject *self, PyBuffer *view, int flags) {
-
-      static Py_ssize_t suboffsets[2] = { 0, -1 };
-
-      view->buf = self->lines;
-      view->len = self->height*self->width;
-      view->readonly = 0;
-      view->ndims = 2;
-      self->shape_array[0] = self->height;
-      self->shape_array[1] = self->width;
-      view->shape = self->shape_array;
-      self->stride_array[0] = sizeof(struct rgba*);
-      self->stride_array[1] = sizeof(struct rgba);
-      view->strides = self->stride_array;
-      view->suboffsets = suboffsets;
-
-      self->view_count ++;
-
-      return 0;
-  }
-
-
-  int Image_releasebuffer(PyObject *self, PyBuffer *view) {
-      self->view_count--;
-      return 0;
-  }
-
-
-Ex. 2
------------
-
-This example shows how an object that wants to expose a contiguous
-chunk of memory (which will never be re-allocated while the object is
-alive) would do that.
-
-::
-
-  int myobject_getbuffer(PyObject *self, PyBuffer *view, int flags) {
-
-      void *buf;
-      Py_ssize_t len;
-      int readonly=0;
-
-      buf = /* Point to buffer */
-      len = /* Set to size of buffer */
-      readonly = /* Set to 1 if readonly */
-
-      return PyObject_FillBufferInfo(view, buf, len, readonly, flags);
-  }
-
-No releasebuffer is necessary because the memory will never
-be re-allocated, so the locking mechanism is not needed.
-
-Ex. 3
------------
-
-A consumer that wants to get only a simple contiguous chunk of bytes
-from a Python object, obj, would do the following:
-
-::
-
-  PyBuffer view;
-  int ret;
-
-  if (PyObject_GetBuffer(obj, &view, Py_BUF_SIMPLE) < 0) {
-       /* error return */
-  }
-
-  /* Now, view.buf is the pointer to memory
-          view.len is the length
-          view.readonly is whether or not the memory is read-only.
-   */
-
-
-  /* After using the information and you don't need it anymore */
-
-  if (PyObject_ReleaseBuffer(obj, &view) < 0) {
-          /* error return */
-  }
-
-
-Ex. 4
------------
-
-A consumer that wants to be able to use any object's memory but is
-writing an algorithm that only handles contiguous memory could do the following:
-
-::
-
-    void *buf;
-    Py_ssize_t len;
-    char *format;
-
-    if (PyObject_GetContiguous(obj, &buf, &len, &format, 0) < 0) {
-       /* error return */
-    }
-
-    /* process memory pointed to by buffer if format is correct */
-
-    /* Optional:
-
-       if, after processing, we want to copy data from buffer back
-       into the object
-
-       we could do
-       */
-
-    if (PyObject_CopyToObject(obj, buf, len, 0) < 0) {
-           /*        error return */
-    }
-
-
-Copyright
-=========
-
-This PEP is placed in the public domain.

Copied: trunk/numpy/doc/performance.py (from rev 5669, trunk/numpy/doc/reference/performance.py)

Deleted: trunk/numpy/doc/records.txt
===================================================================
--- trunk/numpy/doc/records.txt	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/records.txt	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,87 +0,0 @@
-
-The ndarray supports records intrinsically.  None of the default
-descriptors have fields defined, but you can create new descriptors
-easily.  The ndarray even supports nested arrays of records inside of
-a record.  Any record that the array protocol can describe can be
-represented.  The ndarray also supports partial field descriptors.
-Not every byte has to be accounted for.
-
-This was done by adding to the established ``PyArray_Descr *`` structure:
-
-1. A PyObject ``*fields`` member which contains a dictionary of "field
-   name" : (``PyArray_Descr`` ``*field-type``, ``offset``, [optional field
-   title]).  If a title is given, then it is also inserted into the
-   dictionary and used to key the same entry.
-
-2. A byteorder member.  By default this is '=' (native), or '|'
-   (not-applicable).
-
-3. An additional ``PyArray_ArrayDescr`` ``*member`` of the structure, which
-   contains a simple representation of an array of another base-type.
-   The ``PyArray_ArrayDescr`` structure has members
-   ``PyArray_Descr *`` and ``PyObject *``, for holding a reference to
-   the base-type and the shape of the sub-array.
-
-4. The ``PyArray_Descr *`` structure as an official Python object that fully
-   describes a region of memory for the data.
-
-
-Data type conversions
----------------------
-
-We can support additional data-type
-conversions.  The data-type passed in is converted to a
-``PyArray_Descr *`` object.
-
-New possibilities for the "data-type"
-`````````````````````````````````````
-
-**List [data-type 1, data-type 2, ..., data-type n]**
-  Equivalent to  {'names': ['f1', 'f2', ..., 'fn'],
-                 'formats': [data-type 1, data-type 2, ..., data-type n]}
-
-  This is a quick way to specify a record format with default field names.
-
-
-**Tuple  (flexible type, itemsize) or (fixed type, shape)**
-  Gets converted to a new ``PyArray_Descr *`` object with a flexible
-  type. The latter form also sets the ``PyArray_ArrayDescr`` field of the
-  returned ``PyArray_Descr *``.
-
-
-**Dictionary (keys "names", "titles", and "formats")**
-  This will be converted to a ``PyArray_VOID`` type with corresponding
-  fields parameter (the formats list will be converted to actual
-  ``PyArray_Descr *`` objects).
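-
-  A rough sketch of the dictionary form as it works in numpy (the field
-  names and formats here are arbitrary)::
-
-    >>> import numpy as np
-    >>> dt = np.dtype({'names': ['f1', 'f2'],
-    ...                'formats': [np.int32, np.float64]})
-    >>> dt.fields['f1']
-    (dtype('int32'), 0)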
-
-
-**Objects (anything with an .itemsize and .fields attribute)**
-  If it is an instance of (a sub-class of) void type, then a new
-  ``PyArray_Descr*`` structure is created corresponding to its
-  typeobject (and ``PyArray_VOID``) typenumber.  If the type is
-  registered, then the registered type-number is used.
-
-  Otherwise a new ``PyArray_VOID PyArray_Descr*`` structure is created,
-  with ``->elsize`` and ``->fields`` filled in appropriately.
-
-  The itemsize attribute must return a number > 0. The fields
-  attribute must return a dictionary with at least "names" and
-  "formats" entries.  The "formats" entry will be converted to a
-  "proper" descr->fields entry (all generic data-types converted to
-  ``PyArray_Descr *`` structure).
-
-
-Reference counting for ``PyArray_Descr *`` objects.
-```````````````````````````````````````````````````
-
-Most functions that take ``PyArray_Descr *`` as arguments and return a
-``PyObject *`` steal the reference unless otherwise noted in the code.
-
-Functions that return ``PyArray_Descr *`` objects return a new
-reference.
-
-.. tip::
-
-  There is a new function and a new method of array objects, both labelled
-  ``dtypescr``, which can be used to try out the ``PyArray_DescrConverter``.
-

Deleted: trunk/numpy/doc/reference/basics.py
===================================================================
--- trunk/numpy/doc/reference/basics.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/reference/basics.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,137 +0,0 @@
-"""
-============
-Array basics
-============
-
-Array types and conversions between types
-=========================================
-
-Numpy supports a much greater variety of numerical types than Python does.
-This section shows which are available, and how to modify an array's data-type.
-
-==========  =========================================================
-Data type   Description
-==========  =========================================================
-bool        Boolean (True or False) stored as a byte
-int         Platform integer (normally either ``int32`` or ``int64``)
-int8        Byte (-128 to 127)
-int16       Integer (-32768 to 32767)
-int32       Integer (-2147483648 to 2147483647)
-int64       Integer (-9223372036854775808 to 9223372036854775807)
-uint8       Unsigned integer (0 to 255)
-uint16      Unsigned integer (0 to 65535)
-uint32      Unsigned integer (0 to 4294967295)
-uint64      Unsigned integer (0 to 18446744073709551615)
-float       Shorthand for ``float64``.
-float32     Single precision float: sign bit, 8 bits exponent,
-            23 bits mantissa
-float64     Double precision float: sign bit, 11 bits exponent,
-            52 bits mantissa
-complex     Shorthand for ``complex128``.
-complex64   Complex number, represented by two 32-bit floats (real
-            and imaginary components)
-complex128  Complex number, represented by two 64-bit floats (real
-            and imaginary components)
-==========  =========================================================
-
-Numpy numerical types are instances of ``dtype`` (data-type) objects, each
-having unique characteristics.  Once you have imported NumPy using
-
-  ::
-
-    >>> import numpy as np
-
-the dtypes are available as ``np.bool``, ``np.float32``, etc.
-
-Advanced types, not listed in the table above, are explored in
-section `link_here`.
-
-There are 5 basic numerical types representing booleans (bool), integers (int),
-unsigned integers (uint), floating point (float) and complex. Those with numbers
-in their name indicate the bitsize of the type (i.e. how many bits are needed
-to represent a single value in memory).  Some types, such as ``int`` and
-``intp``, have differing bitsizes, dependent on the platforms (e.g. 32-bit
-vs. 64-bit machines).  This should be taken into account when interfacing
-with low-level code (such as C or Fortran) where the raw memory is addressed.
-
-Data-types can be used as functions to convert python numbers to array scalars
-(see the array scalar section for an explanation), python sequences of numbers
-to arrays of that type, or as arguments to the dtype keyword that many numpy
-functions or methods accept. Some examples::
-
-    >>> import numpy as np
-    >>> x = np.float32(1.0)
-    >>> x
-    1.0
-    >>> y = np.int_([1,2,4])
-    >>> y
-    array([1, 2, 4])
-    >>> z = np.arange(3, dtype=np.uint8)
-    >>> z
-    array([0, 1, 2], dtype=uint8)
-
-Array types can also be referred to by character codes, mostly to retain
-backward compatibility with older packages such as Numeric.  Some
-documentation may still refer to these, for example::
-
-  >>> np.array([1, 2, 3], dtype='f')
-  array([ 1.,  2.,  3.], dtype=float32)
-
-We recommend using dtype objects instead.
-
-To convert the type of an array, use the .astype() method (preferred) or
-the type itself as a function. For example: ::
-
-    >>> z.astype(float)
-    array([0.,  1.,  2.])
-    >>> np.int8(z)
-    array([0, 1, 2], dtype=int8)
-
-Note that, above, we use the *Python* float object as a dtype.  NumPy knows
-that ``int`` refers to ``np.int``, ``bool`` means ``np.bool`` and
-that ``float`` is ``np.float``.  The other data-types do not have Python
-equivalents.
-
-To determine the type of an array, look at the dtype attribute::
-
-    >>> z.dtype
-    dtype('uint8')
-
-dtype objects also contain information about the type, such as its bit-width
-and its byte-order. See xxx for details.  The data type can also be used
-indirectly to query properties of the type, such as whether it is an integer::
-
-    >>> d = np.dtype(int)
-    >>> d
-    dtype('int32')
-
-    >>> np.issubdtype(d, int)
-    True
-
-    >>> np.issubdtype(d, float)
-    False
-
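-The bit-width and byte-order mentioned above can be read directly off the
-dtype object (a small sketch using a fixed-width type)::
-
-    >>> d = np.dtype(np.int16)
-    >>> d.itemsize        # width in bytes
-    2
-    >>> d.byteorder       # '=' means native byte order
-    '='
-    >>> d.kind            # 'i' for signed integer
-    'i'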
-
-Array Scalars
-=============
-
-Numpy generally returns elements of arrays as array scalars (a scalar
-with an associated dtype).  Array scalars differ from Python scalars, but
-for the most part they can be used interchangeably (the primary
-exception is for versions of Python older than v2.5, where integer array
-scalars cannot act as indices for lists and tuples).  There are some
-exceptions, such as when code requires very specific attributes of a scalar
-or when it checks specifically whether a value is a Python scalar. Generally,
-problems are easily fixed by explicitly converting array scalars
-to Python scalars, using the corresponding Python type function
-(e.g., ``int``, ``float``, ``complex``, ``str``, ``unicode``).
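-
-For instance (a brief sketch)::
-
-    >>> z = np.arange(3, dtype=np.uint8)
-    >>> z[0]              # an array scalar
-    0
-    >>> int(z[0])         # explicit conversion to a Python int
-    0
-    >>> type(z[0])
-    <type 'numpy.uint8'>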
-
-The primary advantage of using array scalars is that
-they preserve the array type (Python may not have a matching scalar type
-available, e.g. ``int16``).  Therefore, the use of array scalars ensures
-identical behaviour between arrays and scalars, irrespective of whether the
-value is inside an array or not.  NumPy scalars also have many of the same
-methods arrays do.
-
-See xxx for details.
-
-"""

Deleted: trunk/numpy/doc/reference/broadcasting.py
===================================================================
--- trunk/numpy/doc/reference/broadcasting.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/reference/broadcasting.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,176 +0,0 @@
-"""
-========================
-Broadcasting over arrays
-========================
-
-The term broadcasting describes how numpy treats arrays with different
-shapes during arithmetic operations. Subject to certain constraints,
-the smaller array is "broadcast" across the larger array so that they
-have compatible shapes. Broadcasting provides a means of vectorizing
-array operations so that looping occurs in C instead of Python. It does
-this without making needless copies of data and usually leads to
-efficient algorithm implementations. There are, however, cases where
-broadcasting is a bad idea because it leads to inefficient use of memory
-that slows computation.
-
-NumPy operations are usually done element-by-element, which requires two
-arrays to have exactly the same shape::
-
-  >>> a = np.array([1.0, 2.0, 3.0])
-  >>> b = np.array([2.0, 2.0, 2.0])
-  >>> a * b
-  array([ 2.,  4.,  6.])
-
-NumPy's broadcasting rule relaxes this constraint when the arrays'
-shapes meet certain constraints. The simplest broadcasting example occurs
-when an array and a scalar value are combined in an operation:
-
->>> a = np.array([1.0, 2.0, 3.0])
->>> b = 2.0
->>> a * b
-array([ 2.,  4.,  6.])
-
-The result is equivalent to the previous example where ``b`` was an array.
-We can think of the scalar ``b`` being *stretched* during the arithmetic
-operation into an array with the same shape as ``a``. The new elements in
-``b`` are simply copies of the original scalar. The stretching analogy is
-only conceptual.  NumPy is smart enough to use the original scalar value
-without actually making copies, so that broadcasting operations are as
-memory and computationally efficient as possible.
-
-The second example is more efficient than the first, since here broadcasting
-moves less memory around during the multiplication (``b`` is a scalar,
-not an array).
-
-General Broadcasting Rules
-==========================
-
-When operating on two arrays, NumPy compares their shapes element-wise.
-It starts with the trailing dimensions, and works its way forward.  Two
-dimensions are compatible when
-
-1) they are equal, or
-2) one of them is 1
-
-If these conditions are not met, a
-``ValueError: frames are not aligned`` exception is thrown, indicating that
-the arrays have incompatible shapes. The size of the resulting array
-is the maximum size along each dimension of the input arrays.
-
-Arrays do not need to have the same *number* of dimensions.  For example,
-if you have a ``256x256x3`` array of RGB values, and you want to scale
-each color in the image by a different value, you can multiply the image
-by a one-dimensional array with 3 values. Lining up the sizes of the
-trailing axes of these arrays according to the broadcast rules shows that
-they are compatible::
-
-  Image  (3d array): 256 x 256 x 3
-  Scale  (1d array):             3
-  Result (3d array): 256 x 256 x 3
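-
-In code, that scaling might look like this (a sketch with made-up scale
-values)::
-
-  >>> image = np.ones((256, 256, 3))
-  >>> scale = np.array([0.5, 1.5, 2.0])
-  >>> (image * scale).shape
-  (256, 256, 3)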
-
-When either of the dimensions compared is one, the larger of the two is
-used.  In other words, the smaller of two axes is stretched or "copied"
-to match the other.
-
-In the following example, both the ``A`` and ``B`` arrays have axes with
-length one that are expanded to a larger size during the broadcast
-operation::
-
-  A      (4d array):  8 x 1 x 6 x 1
-  B      (3d array):      7 x 1 x 5
-  Result (4d array):  8 x 7 x 6 x 5
-
-Here are some more examples::
-
-  A      (2d array):  5 x 4
-  B      (1d array):      1
-  Result (2d array):  5 x 4
-
-  A      (2d array):  5 x 4
-  B      (1d array):      4
-  Result (2d array):  5 x 4
-
-  A      (3d array):  15 x 3 x 5
-  B      (3d array):  15 x 1 x 5
-  Result (3d array):  15 x 3 x 5
-
-  A      (3d array):  15 x 3 x 5
-  B      (2d array):       3 x 5
-  Result (3d array):  15 x 3 x 5
-
-  A      (3d array):  15 x 3 x 5
-  B      (2d array):       3 x 1
-  Result (3d array):  15 x 3 x 5
-
-Here are examples of shapes that do not broadcast::
-
-  A      (1d array):  3
-  B      (1d array):  4 # trailing dimensions do not match
-
-  A      (2d array):      2 x 1
-  B      (3d array):  8 x 4 x 3 # second from last dimensions mismatch
-
-An example of broadcasting in practice::
-
- >>> x = np.arange(4)
- >>> xx = x.reshape(4,1)
- >>> y = np.ones(5)
- >>> z = np.ones((3,4))
-
- >>> x.shape
- (4,)
-
- >>> y.shape
- (5,)
-
- >>> x + y
- <type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape
-
- >>> xx.shape
- (4, 1)
-
- >>> y.shape
- (5,)
-
- >>> (xx + y).shape
- (4, 5)
-
- >>> xx + y
- array([[ 1.,  1.,  1.,  1.,  1.],
-        [ 2.,  2.,  2.,  2.,  2.],
-        [ 3.,  3.,  3.,  3.,  3.],
-        [ 4.,  4.,  4.,  4.,  4.]])
-
- >>> x.shape
- (4,)
-
- >>> z.shape
- (3, 4)
-
- >>> (x + z).shape
- (3, 4)
-
- >>> x + z
- array([[ 1.,  2.,  3.,  4.],
-        [ 1.,  2.,  3.,  4.],
-        [ 1.,  2.,  3.,  4.]])
-
-Broadcasting provides a convenient way of taking the outer product (or
-any other outer operation) of two arrays. The following example shows an
-outer addition operation of two 1-d arrays::
-
-  >>> a = np.array([0.0, 10.0, 20.0, 30.0])
-  >>> b = np.array([1.0, 2.0, 3.0])
-  >>> a[:, np.newaxis] + b
-  array([[  1.,   2.,   3.],
-         [ 11.,  12.,  13.],
-         [ 21.,  22.,  23.],
-         [ 31.,  32.,  33.]])
-
-Here the ``newaxis`` index operator inserts a new axis into ``a``,
-making it a two-dimensional ``4x1`` array.  Combining the ``4x1`` array
-with ``b``, which has shape ``(3,)``, yields a ``4x3`` array.
-
-See `this article <http://www.scipy.org/EricsBroadcastingDoc>`_
-for illustrations of broadcasting concepts.
-
-"""

Deleted: trunk/numpy/doc/reference/creation.py
===================================================================
--- trunk/numpy/doc/reference/creation.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/reference/creation.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,132 +0,0 @@
-"""
-==============
-Array creation
-==============
-
-Introduction
-============
-
-There are 5 general mechanisms for creating arrays:
-
-1) Conversion from other Python structures (e.g., lists, tuples)
-2) Intrinsic numpy array creation objects (e.g., arange, ones, zeros, etc.)
-3) Reading arrays from disk, either from standard or custom formats
-4) Creating arrays from raw bytes through the use of strings or buffers
-5) Use of special library functions (e.g., random)
-
-This section will not cover means of replicating, joining, or otherwise
-expanding or mutating existing arrays. Nor will it cover creating object
-arrays or record arrays. Both of those are covered in their own sections.
-
-Converting Python array-like objects to numpy arrays
-====================================================
-
-In general, numerical data arranged in an array-like structure in Python can
-be converted to arrays through the use of the array() function. The most obvious
-examples are lists and tuples. See the documentation for array() for details of
-its use. Some objects may support the array-protocol and allow conversion to
-arrays this way. A simple way to find out if the object can be converted to a
-numpy array
-using array() is simply to try it interactively and see if it works! (The
-Python Way).
-
-Examples: ::
-
- >>> x = np.array([2,3,1,0])
- >>> x = np.array([2, 3, 1, 0])
- >>> x = np.array([[1,2.0],[0,0],(1+1j,3.)]) # note mix of tuple and lists, and types
- >>> x = np.array([[ 1.+0.j, 2.+0.j], [ 0.+0.j, 0.+0.j], [ 1.+1.j, 3.+0.j]])
-
-Intrinsic numpy array creation
-==============================
-
-Numpy has built-in functions for creating arrays from scratch:
-
-zeros(shape) will create an array filled with 0 values with the specified
-shape. The default dtype is float64.
-
-::
-
- >>> np.zeros((2, 3))
- array([[ 0.,  0.,  0.],
-        [ 0.,  0.,  0.]])
-
-ones(shape) will create an array filled with 1 values. It is identical to
-zeros in all other respects.
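-
-For example, a minimal call looks like: ::
-
- >>> np.ones((2, 3))
- array([[ 1.,  1.,  1.],
-        [ 1.,  1.,  1.]])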
-
-arange() will create arrays with regularly incrementing values. Check the
-docstring for complete information on the various ways it can be used. A few
-examples will be given here: ::
-
- >>> np.arange(10)
- array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
- >>> np.arange(2, 10, dtype=np.float)
- array([ 2., 3., 4., 5., 6., 7., 8., 9.])
- >>> np.arange(2, 3, 0.1)
- array([ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9])
-
-Note that there are some subtleties regarding the last usage that the user
-should be aware of that are described in the arange docstring.
-
-indices() will create a set of arrays (stacked as a one-higher dimensioned
-array), one per dimension with each representing variation in that dimension.
-An example illustrates this much better than a verbal description: ::
-
- >>> np.indices((3,3))
- array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]], [[0, 1, 2], [0, 1, 2], [0, 1, 2]]])
-
-This is particularly useful for evaluating functions of multiple dimensions on
-a regular grid.
-
-Reading arrays from disk
-========================
-
-This is presumably the most common case of large array creation. The details,
-of course, depend greatly on the format of data on disk and so this section
-can only give general pointers on how to handle various formats.
-
-Standard binary formats
------------------------
-
-Various fields have standard formats for array data. The following lists the
-ones with known python libraries to read them and return numpy arrays (there
-may be others for which it is possible to read and convert to numpy arrays so
-check the last section as well).
-
-HDF5: PyTables
-FITS: PyFITS
-Others? xxx
-
-Examples of formats that cannot be read directly, but for which conversion is
-not hard, are those supported by libraries like PIL (able to read and write
-many image formats such as jpg, png, etc.).
-
-Common ascii formats
---------------------
-
-Comma Separated Value files (CSV) are widely used (and an export and import
-option for programs like Excel). There are a number of ways of reading these
-files in Python. The most convenient ways of reading these are found in pylab
-(part of matplotlib) in the xxx function. (list alternatives xxx)
-
-More generic ascii files can be read using the io package in scipy. xxx a few
-more details needed...
-
-Custom binary formats
----------------------
-
-There are a variety of approaches one can use. If the file has a relatively
-simple format then one can write a simple I/O library and use the numpy
-fromfile() function and .tofile() method to read and write numpy arrays
-directly (mind your byteorder though!). If a good C or C++ library exists that
-reads the data, one can wrap that library with a variety of techniques (see
-xxx) though that certainly is much more work and requires significantly more
-advanced knowledge to interface with C or C++.
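-
-As a minimal sketch of the fromfile/tofile round trip (the file name here is
-arbitrary, and no byte order or shape information is stored in the file)::
-
- >>> a = np.arange(4, dtype=np.float64)
- >>> a.tofile('data.bin')
- >>> np.fromfile('data.bin', dtype=np.float64)
- array([ 0.,  1.,  2.,  3.])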
-
-Use of special libraries
-------------------------
-
-There are libraries that can be used to generate arrays for special purposes
-and it isn't possible to enumerate all of them. The most common use is of the
-many array generation functions in random that can generate arrays of
-random values, and some utility functions to generate special matrices (e.g.
-diagonal, see xxx).
-
-"""

Deleted: trunk/numpy/doc/reference/glossary.py
===================================================================
--- trunk/numpy/doc/reference/glossary.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/reference/glossary.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,367 +0,0 @@
-"""
-=================
-Glossary
-=================
-
-along an axis
-    Axes are defined for arrays with more than one dimension.  A
-    2-dimensional array has two corresponding axes: the first running
-    vertically downwards across rows (axis 0), and the second running
-    horizontally across columns (axis 1).
-
-    Many operations can take place along one of these axes.  For example,
-    we can sum each row of an array, in which case we operate along
-    columns, or axis 1::
-
-      >>> x = np.arange(12).reshape((3,4))
-
-      >>> x
-      array([[ 0,  1,  2,  3],
-             [ 4,  5,  6,  7],
-             [ 8,  9, 10, 11]])
-
-      >>> x.sum(axis=1)
-      array([ 6, 22, 38])
-
-array or ndarray
-    A homogeneous container of numerical elements.  Each element in the
-    array occupies a fixed amount of memory (hence homogeneous), and
-    can be a numerical element of a single type (such as float, int
-    or complex) or a combination (such as ``(float, int, float)``).  Each
-    array has an associated data-type (or ``dtype``), which describes
-    the numerical type of its elements::
-
-      >>> x = np.array([1, 2, 3], float)
-
-      >>> x
-      array([ 1.,  2.,  3.])
-
-      >>> x.dtype # floating point number, 64 bits of memory per element
-      dtype('float64')
-
-
-      # More complicated data type: each array element is a combination of
-      # an integer and a floating point number
-      >>> np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)])
-      array([(1, 2.0), (3, 4.0)],
-            dtype=[('x', '<i4'), ('y', '<f8')])
-
-    Fast element-wise operations, called `ufuncs`_, operate on arrays.
-
-array_like
-    Any sequence that can be interpreted as an ndarray.  This includes
-    nested lists, tuples, scalars and existing arrays.
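-
-    For instance, a mix of nested tuples and lists converts directly
-    (a minimal sketch)::
-
-      >>> np.array([(1, 2), [3, 4]])
-      array([[1, 2],
-             [3, 4]])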
-
-attribute
-    A property of an object that can be accessed using ``obj.attribute``,
-    e.g., ``shape`` is an attribute of an array::
-
-      >>> x = np.array([1, 2, 3])
-      >>> x.shape
-      (3,)
-
-broadcast
-    NumPy can do operations on arrays whose shapes are mismatched::
-
-      >>> x = np.array([1, 2])
-      >>> y = np.array([[3], [4]])
-
-      >>> x
-      array([1, 2])
-
-      >>> y
-      array([[3],
-             [4]])
-
-      >>> x + y
-      array([[4, 5],
-             [5, 6]])
-
-    See `doc.broadcasting`_ for more information.
-
-decorator
-    An operator that transforms a function.  For example, a ``log``
-    decorator may be defined to print debugging information upon
-    function execution::
-
-      >>> def log(f):
-      ...     def new_logging_func(*args, **kwargs):
-      ...         print "Logging call with parameters:", args, kwargs
-      ...         return f(*args, **kwargs)
-      ...
-      ...     return new_logging_func
-
-    Now, when we define a function, we can "decorate" it using ``log``::
-
-      >>> @log
-      ... def add(a, b):
-      ...     return a + b
-
-    Calling ``add`` then yields:
-
-    >>> add(1, 2)
-    Logging call with parameters: (1, 2) {}
-    3
-
-dictionary
-    Resembling a language dictionary, which provides a mapping between
-    words and descriptions thereof, a Python dictionary is a mapping
-    between two objects::
-
-      >>> x = {1: 'one', 'two': [1, 2]}
-
-    Here, `x` is a dictionary mapping keys to values, in this case
-    the integer 1 to the string "one", and the string "two" to
-    the list ``[1, 2]``.  The values may be accessed using their
-    corresponding keys::
-
-      >>> x[1]
-      'one'
-
-      >>> x['two']
-      [1, 2]
-
-    Note that dictionaries are not stored in any specific order.  Also,
-    most mutable (see *immutable* below) objects, such as lists, may not
-    be used as keys.
-
-    For more information on dictionaries, read the
-    `Python tutorial <http://docs.python.org/tut>`_.
-
-immutable
-    An object that cannot be modified after execution is called
-    immutable.  Two common examples are strings and tuples.
-
-instance
-    A class definition gives the blueprint for constructing an object::
-
-      >>> class House(object):
-      ...     wall_colour = 'white'
-
-    Yet, we have to *build* a house before it exists::
-
-      >>> h = House() # build a house
-
-    Now, ``h`` is called a ``House`` instance.  An instance is therefore
-    a specific realisation of a class.
-
-iterable
-    A sequence that allows "walking" (iterating) over items, typically
-    using a loop such as::
-
-      >>> x = [1, 2, 3]
-      >>> [item**2 for item in x]
-      [1, 4, 9]
-
-    It is often used in combination with ``enumerate``::
-
-      >>> keys = ['a', 'b', 'c']
-      >>> for n, k in enumerate(keys):
-      ...     print "Key %d: %s" % (n, k)
-      ...
-      Key 0: a
-      Key 1: b
-      Key 2: c
-
-list
-    A Python container that can hold any number of objects or items.
-    The items do not have to be of the same type, and can even be
-    lists themselves::
-
-      >>> x = [2, 2.0, "two", [2, 2.0]]
-
-    The list `x` contains 4 items, each of which can be accessed individually::
-
-      >>> x[2] # the string 'two'
-      'two'
-
-      >>> x[3] # a list, containing an integer 2 and a float 2.0
-      [2, 2.0]
-
-    It is also possible to select more than one item at a time,
-    using *slicing*::
-
-      >>> x[0:2] # or, equivalently, x[:2]
-      [2, 2.0]
-
-    In code, arrays are often conveniently expressed as nested lists::
-
-
-      >>> np.array([[1, 2], [3, 4]])
-      array([[1, 2],
-             [3, 4]])
-
-    For more information, read the section on lists in the `Python
-    tutorial <http://docs.python.org/tut>`_.  For a mapping
-    type (key-value), see *dictionary*.
-
-mask
-    A boolean array, used to select only certain elements for an operation::
-
-      >>> x = np.arange(5)
-      >>> x
-      array([0, 1, 2, 3, 4])
-
-      >>> mask = (x > 2)
-      >>> mask
-      array([False, False, False, True,  True], dtype=bool)
-
-      >>> x[mask] = -1
-      >>> x
-      array([ 0,  1,  2,  -1, -1])
-
-masked array
-    An array that suppresses values indicated by a mask::
-
-      >>> x = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True])
-      >>> x
-      masked_array(data = [-- 2.0 --],
-            mask = [ True False  True],
-            fill_value=1e+20)
-
-      >>> x + [1, 2, 3]
-      masked_array(data = [-- 4.0 --],
-            mask = [ True False  True],
-            fill_value=1e+20)
-
-    Masked arrays are often used when operating on arrays containing
-    missing or invalid entries.
-
-matrix
-    A 2-dimensional ndarray that preserves its two-dimensional nature
-    throughout operations.  It has certain special operations, such as ``*``
-    (matrix multiplication) and ``**`` (matrix power), defined::
-
-      >>> x = np.mat([[1, 2], [3, 4]])
-
-      >>> x
-      matrix([[1, 2],
-              [3, 4]])
-
-      >>> x**2
-      matrix([[ 7, 10],
-              [15, 22]])
-
-method
-    A function associated with an object.  For example, each ndarray has a
-    method called ``repeat``::
-
-      >>> x = np.array([1, 2, 3])
-
-      >>> x.repeat(2)
-      array([1, 1, 2, 2, 3, 3])
-
-reference
-    If ``a`` is a reference to ``b``, then ``(a is b) == True``.  Therefore,
-    ``a`` and ``b`` are different names for the same Python object.
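-
-    A short sketch::
-
-      >>> b = [1, 2, 3]
-      >>> a = b          # a and b now name the same list
-      >>> a is b
-      True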
-
-self
-    Often seen in method signatures, ``self`` refers to the instance
-    of the associated class.  For example:
-
-      >>> class Paintbrush(object):
-      ...     color = 'blue'
-      ...
-      ...     def paint(self):
-      ...         print "Painting the city %s!" % self.color
-      ...
-      >>> p = Paintbrush()
-      >>> p.color = 'red'
-      >>> p.paint() # self refers to 'p'
-      Painting the city red!
-
-slice
-    Used to select only certain elements from a sequence::
-
-      >>> x = range(5)
-      >>> x
-      [0, 1, 2, 3, 4]
-
-      >>> x[1:3] # slice from 1 to 3 (excluding 3 itself)
-      [1, 2]
-
-      >>> x[1:5:2] # slice from 1 to 5, but skipping every second element
-      [1, 3]
-
-      >>> x[::-1] # slice a sequence in reverse
-      [4, 3, 2, 1, 0]
-
-    Arrays may have more than one dimension, each which can be sliced
-    individually::
-
-      >>> x = np.array([[1, 2], [3, 4]])
-      >>> x
-      array([[1, 2],
-             [3, 4]])
-
-      >>> x[:, 1]
-      array([2, 4])
-
-tuple
-    A sequence that may contain a variable number of types of any
-    kind.  A tuple is immutable, i.e., once constructed it cannot be
-    changed.  Similar to a list, it can be indexed and sliced::
-
-      >>> x = (1, 'one', [1, 2])
-
-      >>> x
-      (1, 'one', [1, 2])
-
-      >>> x[0]
-      1
-
-      >>> x[:2]
-      (1, 'one')
-
-    A useful concept is "tuple unpacking", which allows variables to
-    be assigned to the contents of a tuple::
-
-      >>> x, y = (1, 2)
-      >>> x, y = 1, 2
-
-    This is often used when a function returns multiple values:
-
-      >>> def return_many():
-      ...     return 1, 'alpha', None
-
-      >>> a, b, c = return_many()
-      >>> a, b, c
-      (1, 'alpha', None)
-
-      >>> a
-      1
-      >>> b
-      'alpha'
-
-ufunc
-    Universal function.  A fast element-wise array operation.  Examples include
-    ``add``, ``sin`` and ``logical_or``.
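-
-    For example (a brief sketch)::
-
-      >>> np.add([1, 2], [10, 20])
-      array([11, 22])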
-
-view
-    An array that does not own its data, but refers to another array's
-    data instead.  For example, we may create a view that only shows
-    every second element of another array::
-
-      >>> x = np.arange(5)
-      >>> x
-      array([0, 1, 2, 3, 4])
-
-      >>> y = x[::2]
-      >>> y
-      array([0, 2, 4])
-
-      >>> x[0] = 3 # changing x changes y as well, since y is a view on x
-      >>> y
-      array([3, 2, 4])
-
-wrapper
-    Python is a high-level (highly abstracted, or English-like) language.
-    This abstraction comes at a price in execution speed, and sometimes
-    it becomes necessary to use lower level languages to do fast
-    computations.  A wrapper is code that provides a bridge between
-    the high- and low-level languages, allowing, e.g., Python to execute
-    code written in C or Fortran.
-
-    Examples include ctypes, SWIG and Cython (which wraps C and C++)
-    and f2py (which wraps Fortran).
-
-"""

Deleted: trunk/numpy/doc/reference/howtofind.py
===================================================================
--- trunk/numpy/doc/reference/howtofind.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/reference/howtofind.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,9 +0,0 @@
-"""
-
-=================
-How to Find Stuff
-=================
-
-How to find things in NumPy.
-
-"""

Deleted: trunk/numpy/doc/reference/indexing.py
===================================================================
--- trunk/numpy/doc/reference/indexing.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/reference/indexing.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,384 +0,0 @@
-"""
-==============
-Array indexing
-==============
-
-Array indexing refers to any use of the square brackets ([]) to index
-array values. There are many options to indexing, which give numpy
-indexing great power, but with power comes some complexity and the
-potential for confusion. This section is just an overview of the
-various options and issues related to indexing. Aside from single
-element indexing, the details on most of these options are to be
-found in related sections.
-
-Assignment vs referencing
-=========================
-
-Most of the following examples show the use of indexing when referencing
-data in an array. The examples work just as well when assigning to an
-array. See the section at the end for specific examples and explanations
-on how assignments work.
-
-Single element indexing
-=======================
-
-Single element indexing for a 1-D array is what one expects. It works
-exactly like that for other standard Python sequences. It is 0-based,
-and accepts negative indices for indexing from the end of the array. ::
-
-    >>> x = np.arange(10)
-    >>> x[2]
-    2
-    >>> x[-2]
-    8
-
-Unlike lists and tuples, numpy arrays support multidimensional indexing
-for multidimensional arrays. That means that it is not necessary to
-separate each dimension's index into its own set of square brackets. ::
-
-    >>> x.shape = (2,5) # now x is 2-dimensional
-    >>> x[1,3]
-    8
-    >>> x[1,-1]
-    9
-
-Note that if one indexes a multidimensional array with fewer indices
-than dimensions, one gets a subdimensional array. For example: ::
-
-    >>> x[0]
-    array([0, 1, 2, 3, 4])
-
-That is, each index specified selects the array corresponding to the rest
-of the dimensions selected. In the above example, choosing 0 means that
-the remaining dimension of length 5 is being left unspecified, and that what
-is returned is an array of that dimensionality and size. It must be noted
-that the returned array is not a copy of the original, but points to the
-same values in memory as does the original array (a new view of the same
-data in other words, see xxx for details). In this case,
-the 1-D array at the first position (0) is returned. So using a single
-index on the returned array, results in a single element being returned.
-That is: ::
-
-    >>> x[0][2]
-    2
-
-So note that ``x[0,2] == x[0][2]``, though the second case is less efficient:
-a new temporary array is created after the first index that is subsequently
-indexed by 2.
-
-Note to those used to IDL or Fortran memory order as it relates to indexing.
-Numpy uses C-order indexing. That means that the last index usually (see
-xxx for exceptions) represents the most rapidly changing memory location,
-unlike Fortran or IDL, where the first index represents the most rapidly
-changing location in memory. This difference represents a great potential
-for confusion.
-
-Other indexing options
-======================
-
-It is possible to slice and stride arrays to extract arrays of the same
-number of dimensions, but of different sizes than the original. The slicing
-and striding works exactly the same way it does for lists and tuples except
-that they can be applied to multiple dimensions as well. A few
-examples illustrates best: ::
-
- >>> x = np.arange(10)
- >>> x[2:5]
- array([2, 3, 4])
- >>> x[:-7]
- array([0, 1, 2])
- >>> x[1:7:2]
- array([1, 3, 5])
- >>> y = np.arange(35).reshape(5,7)
- >>> y[1:5:2,::3]
- array([[ 7, 10, 13],
-        [21, 24, 27]])
-
-Note that slices of arrays do not copy the internal array data but
-only produce new views of the original data (see xxx for more
-explanation of this issue).
-
-It is possible to index arrays with other arrays for the purposes of
-selecting lists of values out of arrays into new arrays. There are two
-different ways of accomplishing this. One uses one or more arrays of
-index values (see xxx for details). The other involves giving a boolean
-array of the proper shape to indicate the values to be selected.
-Index arrays are a very powerful tool that allow one to avoid looping
-over individual elements in arrays and thus greatly improve performance
-(see xxx for examples)
-
-It is possible to use special features to effectively increase the
-number of dimensions in an array through indexing so the resulting
-array acquires the shape needed for use in an expression or with a
-specific function. See xxx.
-
-Index arrays
-============
-
-Numpy arrays may be indexed with other arrays (or any other sequence-like
-object that can be converted to an array, such as lists, with the exception
-of tuples; see the end of this document for why this is). The use of index
-arrays ranges from simple, straightforward cases to complex, hard-to-understand
-cases. For all cases of index arrays, what is returned is a copy of the
-original data, not a view as one gets for slices.
-
-Index arrays must be of integer type. Each value in the index array indicates
-which value in the indexed array to use in place of the index. To illustrate: ::
-
- >>> x = np.arange(10,1,-1)
- >>> x
- array([10,  9,  8,  7,  6,  5,  4,  3,  2])
- >>> x[np.array([3, 3, 1, 8])]
- array([7, 7, 9, 2])
-
-
-The index array consisting of the values 3, 3, 1 and 8 correspondingly creates
-an array of length 4 (the same as the index array) where each index is replaced
-by the value that the array being indexed has at that index.
-
-Negative values are permitted and work as they do with single indices or slices: ::
-
- >>> x[np.array([3,3,-3,8])]
- array([7, 7, 4, 2])
-
-It is an error to have index values out of bounds: ::
-
- >>> x[np.array([3, 3, 20, 8])]
- <type 'exceptions.IndexError'>: index 20 out of bounds 0<=index<9
-
-Generally speaking, what is returned when index arrays are used is an array with
-the same shape as the index array, but with the type and values of the array being
-indexed. As an example, we can use a multidimensional index array instead: ::
-
- >>> x[np.array([[1,1],[2,3]])]
- array([[9, 9],
-        [8, 7]])
-
-Indexing Multi-dimensional arrays
-=================================
-
-Things become more complex when multidimensional arrays are indexed, particularly
-with multidimensional index arrays. These tend to be more unusual uses, but they
-are permitted, and they are useful for some problems. We'll  start with the
-simplest multidimensional case (using the array y from the previous examples): ::
-
- >>> y[np.array([0,2,4]), np.array([0,1,2])]
- array([ 0, 15, 30])
-
-In this case, if the index arrays have a matching shape, and there is an index
-array for each dimension of the array being indexed, the resultant array has the
-same shape as the index arrays, and the values correspond to the index set for each
-position in the index arrays. In this example, the first index value is 0 for both
-index arrays, and thus the first value of the resultant array is y[0,0]. The next
-value is y[2,1], and the last is y[4,2].
-
-If the index arrays do not have the same shape, there is an attempt to broadcast
-them to the same shape. Broadcasting won't be discussed here but is discussed in
-detail in xxx. If they cannot be broadcast to the same shape, an exception is
-raised: ::
-
- >>> y[np.array([0,2,4]), np.array([0,1])]
- <type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape
-
-The broadcasting mechanism permits index arrays to be combined with scalars for
-other indices. The effect is that the scalar value is used for all the corresponding
-values of the index arrays: ::
-
- >>> y[np.array([0,2,4]), 1]
- array([ 1, 15, 29])
-
-Jumping to the next level of complexity, it is possible to only partially index an array
-with index arrays. It takes a bit of thought to understand what happens in such cases.
-For example if we just use one index array with y: ::
-
- >>> y[np.array([0,2,4])]
- array([[ 0,  1,  2,  3,  4,  5,  6],
-        [14, 15, 16, 17, 18, 19, 20],
-        [28, 29, 30, 31, 32, 33, 34]])
-
-What results is the construction of a new array where each value of the index array
-selects one row from the array being indexed and the resultant array has the resulting
-shape (number of index elements, size of row).
-
-An example of where this may be useful is for a color lookup table where we want to map
-the values of an image into RGB triples for display. The lookup table could have a shape
-(nlookup, 3). Indexing such an array with an image with shape (ny, nx) with dtype=np.uint8
-(or any integer type so long as values are within the bounds of the lookup table) will
-result in an array of shape (ny, nx, 3) where a triple of RGB values is associated with
-each pixel location.
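-
-A small sketch of such a lookup (the palette values are made up)::
-
- >>> palette = np.array([[0, 0, 0],       # black
- ...                     [255, 0, 0],     # red
- ...                     [0, 0, 255]])    # blue
- >>> image = np.array([[0, 1], [2, 0]], dtype=np.uint8)
- >>> palette[image].shape
- (2, 2, 3)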
-
-In general, the shape of the resultant array will be the concatenation of the shape of
-the index array (or the shape that all the index arrays were broadcast to) with the
-shape of any unused dimensions (those not indexed) in the array being indexed.
-
-Boolean or "mask" index arrays
-==============================
-
-Boolean arrays used as indices are treated in a different manner entirely than index
-arrays. Boolean arrays must be of the same shape as the array being indexed, or
-broadcastable to the same shape. In the most straightforward case, the boolean array
-has the same shape: ::
-
- >>> b = y>20
- >>> y[b]
- array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34])
-
-The result is a 1-D array containing all the elements in the indexed array corresponding
-to all the true elements in the boolean array. As with index arrays, what is returned
-is a copy of the data, not a view as one gets with slices.
-
-With broadcasting, multidimensional arrays may be the result. For example: ::
-
- >>> b[:,5] # use a 1-D boolean that broadcasts with y
- array([False, False, False,  True,  True], dtype=bool)
- >>> y[b[:,5]]
- array([[21, 22, 23, 24, 25, 26, 27],
-        [28, 29, 30, 31, 32, 33, 34]])
-
-Here the 4th and 5th rows are selected from the indexed array and combined to make a
-2-D array.
-
-Combining index arrays with slices
-==================================
-
-Index arrays may be combined with slices. For example: ::
-
- >>> y[np.array([0,2,4]),1:3]
- array([[ 1,  2],
-        [15, 16],
-        [29, 30]])
-
-In effect, the slice is converted to an index array np.array([[1,2]]) (shape (1,2)) that is
-broadcast with the index array to produce a resultant array of shape (3,2).
-
-Likewise, slicing can be combined with broadcasted boolean indices: ::
-
- >>> y[b[:,5],1:3]
- array([[22, 23],
-        [29, 30]])
-
-Structural indexing tools
-=========================
-
-To facilitate easy matching of array shapes with expressions and in
-assignments, the np.newaxis object can be used within array indices
-to add new dimensions with a size of 1. For example: ::
-
- >>> y.shape
- (5, 7)
- >>> y[:,np.newaxis,:].shape
- (5, 1, 7)
-
-Note that there are no new elements in the array, just that the
-dimensionality is increased. This can be handy to combine two
-arrays in a way that otherwise would require explicitly reshaping
-operations. For example: ::
-
- >>> x = np.arange(5)
- >>> x[:,np.newaxis] + x[np.newaxis,:]
- array([[0, 1, 2, 3, 4],
-        [1, 2, 3, 4, 5],
-        [2, 3, 4, 5, 6],
-        [3, 4, 5, 6, 7],
-        [4, 5, 6, 7, 8]])
-
-The ellipsis syntax may be used to indicate selecting in full any
-remaining unspecified dimensions. For example: ::
-
- >>> z = np.arange(81).reshape(3,3,3,3)
- >>> z[1,...,2]
- array([[29, 32, 35],
-        [38, 41, 44],
-        [47, 50, 53]])
-
-This is equivalent to: ::
-
- >>> z[1,:,:,2]
-
-Assigning values to indexed arrays
-==================================
-
-As mentioned, one can select a subset of an array to assign to using
-a single index, slices, and index and mask arrays. The value being
-assigned to the indexed array must be shape consistent (the same shape
-or broadcastable to the shape the index produces). For example, it is
-permitted to assign a constant to a slice: ::
-
- >>> x[2:7] = 1
-
-or an array of the right size: ::
-
- >>> x[2:7] = np.arange(5)
-
-Note that assignments may result in changes if assigning
-higher types to lower types (like floats to ints) or even
-exceptions (assigning complex to floats or ints): ::
-
- >>> x[1] = 1.2
- >>> x[1]
- 1
- >>> x[1] = 1.2j
- <type 'exceptions.TypeError'>: can't convert complex to long; use long(abs(z))
-
-
-Unlike some of the references (such as array and mask indices)
-assignments are always made to the original data in the array
-(indeed, nothing else would make sense!). Note though, that some
-actions may not work as one may naively expect. This particular
-example is often surprising to people: ::
-
- >>> x[np.array([1, 1, 3, 1])] += 1
-
-People might expect that the 1st location would be incremented by 3.
-In fact, it will only be incremented by 1. The reason is that
-a new array is extracted from the original (as a temporary) containing
-the values at 1, 1, 3, 1, then the value 1 is added to the temporary,
-and then the temporary is assigned back to the original array. Thus
-the value x[1] + 1 is assigned to x[1] three times,
-rather than being incremented 3 times.
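-
-A short sketch makes the effect visible::
-
- >>> x = np.arange(10)
- >>> x[np.array([1, 1, 3, 1])] += 1
- >>> x[1], x[3]
- (2, 4)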
-
-Dealing with variable numbers of indices within programs
-========================================================
-
-The index syntax is very powerful but limiting when dealing with
-a variable number of indices. For example, if you want to write
-a function that can handle arguments with various numbers of
-dimensions without having to write special case code for each
-number of possible dimensions, how can that be done? If one
-supplies to the index a tuple, the tuple will be interpreted
-as a list of indices. For example (using the previous definition
-for the array z): ::
-
- >>> indices = (1,1,1,1)
- >>> z[indices]
- 40
-
-So one can use code to construct tuples of any number of indices
-and then use these within an index.
-
-Slices can be specified within programs by using the slice() function
-in Python. For example: ::
-
- >>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2]
- >>> z[indices]
- array([39, 40])
-
-Likewise, ellipsis can be specified by code by using the Ellipsis object: ::
-
- >>> indices = (1, Ellipsis, 1) # same as [1,...,1]
- >>> z[indices]
- array([[28, 31, 34],
-        [37, 40, 43],
-        [46, 49, 52]])
-
-For this reason it is possible to use the output from the np.where()
-function directly as an index since it always returns a tuple of index arrays.
-
-Because of the special treatment of tuples, they are not automatically converted
-to an array as a list would be. As an example: ::
-
- >>> z[[1,1,1,1]]
- ... # produces a large array
- >>> z[(1,1,1,1)]
- 40 # returns a single value
-
-"""

Deleted: trunk/numpy/doc/reference/internals.py
===================================================================
--- trunk/numpy/doc/reference/internals.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/reference/internals.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,162 +0,0 @@
-"""
-===============
-Array Internals
-===============
-
-Internal organization of numpy arrays
-=====================================
-
-It helps to understand a bit about how numpy arrays are handled under the covers.
-This section will not go into great detail; those wishing to understand the full
-details are referred to Travis Oliphant's book "Guide to Numpy".
-
-Numpy arrays consist of two major components, the raw array data (from now on,
-referred to as the data buffer), and the information about the raw array data.
-The data buffer is typically what people think of as arrays in C or Fortran,
-a contiguous (and fixed) block of memory containing fixed sized data items.
-Numpy also contains a significant set of data that describes how to interpret
-the data in the data buffer. This extra information contains (among other things):
-
- 1) The basic data element's size in bytes
- 2) The start of the data within the data buffer (an offset relative to the
-    beginning of the data buffer).
- 3) The number of dimensions and the size of each dimension
- 4) The separation between elements for each dimension (the 'stride'). This
-    does not have to be a multiple of the element size
- 5) The byte order of the data (which may not be the native byte order)
- 6) Whether the buffer is read-only
- 7) Information (via the dtype object) about the interpretation of the basic
-    data element. The basic data element may be as simple as an int or a float,
-    or it may be a compound object (e.g., struct-like), a fixed character field,
-    or Python object pointers.
- 8) Whether the array is to be interpreted as C-order or Fortran-order.
-
-This arrangement allows for very flexible use of arrays. One thing that it allows
-is simple changes of the metadata to change the interpretation of the array buffer.
-Changing the byteorder of the array is a simple change involving no rearrangement
-of the data. The shape of the array can be changed very easily without changing
-anything in the data buffer or any data copying at all.
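-
-A quick sketch of a metadata-only change: transposing swaps the strides but
-reuses the same data buffer::
-
- >>> x = np.arange(6, dtype=np.int32).reshape(2, 3)
- >>> x.strides                 # bytes to step along each dimension
- (12, 4)
- >>> y = x.T                   # a view: new strides, same buffer
- >>> y.strides
- (4, 12)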
-
-Among other things, this makes it possible to create a new array metadata
-object that uses the same data buffer: a new view of that data buffer with a
-different interpretation (e.g., different shape, offset, byte order, strides,
-etc.) but sharing the same data bytes. Many operations in numpy do just this,
-such as slicing. Other operations, such as transpose, don't move data elements
-around in the array, but rather change the information about the shape and
-strides so that the indexing of the array changes, but the data in the buffer
-doesn't move.
-
-Typically these new versions of the array metadata, sharing the same data buffer,
-are new 'views' into the data buffer. There is a different ndarray object, but it
-uses the same data buffer. This is why it is necessary to force copies through
-use of the .copy() method if one really wants to make a new and independent
-copy of the data buffer.
-
-New views into arrays mean that the object reference counts for the data buffer
-increase. Simply doing away with the original array object will not remove the
-data buffer if other views of it still exist.
-
-Multidimensional Array Indexing Order Issues
-============================================
-
-What is the right way to index
-multi-dimensional arrays? Before you jump to conclusions about the one and
-true way to index multi-dimensional arrays, it pays to understand why this is
-a confusing issue. This section will try to explain in detail how numpy
-indexing works and why we adopt the convention we do for images, and when it
-may be appropriate to adopt other conventions.
-
-The first thing to understand is
-that there are two conflicting conventions for indexing 2-dimensional arrays.
-Matrix notation uses the first index to indicate which row is being selected and
-the second index to indicate which column is selected. This is opposite the
-geometrically oriented convention for images, where people generally think the
-first index represents x position (i.e., column) and the second represents y
-position (i.e., row). This alone is the source of much confusion;
-matrix-oriented users and image-oriented users expect two different things with
-regard to indexing.
-
-The second issue to understand is how indices correspond
-to the order the array is stored in memory. In Fortran the first index is the
-most rapidly varying index when moving through the elements of a two
-dimensional array as it is stored in memory. If you adopt the matrix
-convention for indexing, then this means the matrix is stored one column at a
-time (since the first index moves to the next row as it changes). Thus Fortran
-is considered a Column-major language. C has just the opposite convention. In
-C, the last index changes most rapidly as one moves through the array as
-stored in memory. Thus C is a Row-major language. The matrix is stored by
-rows. Note that in both cases it presumes that the matrix convention for
-indexing is being used, i.e., for both Fortran and C, the first index is the
-row. Note this convention implies that the indexing convention is invariant
-and that the data order changes to keep that so.
-
-But that's not the only way
-to look at it. Suppose one has large two-dimensional arrays (images or
-matrices) stored in data files. Suppose the data are stored by rows rather than
-by columns. If we are to preserve our index convention (whether matrix or
-image) that means that depending on the language we use, we may be forced to
-reorder the data if it is read into memory to preserve our indexing
-convention. For example if we read row-ordered data into memory without
-reordering, it will match the matrix indexing convention for C, but not for
-Fortran. Conversely, it will match the image indexing convention for Fortran,
-but not for C. For C, if one is using data stored in row order, and one wants
-to preserve the image index convention, the data must be reordered when
-reading into memory.
-
-In the end, which you do for Fortran or C depends on
-which is more important, not reordering data or preserving the indexing
-convention. For large images, reordering data is potentially expensive, and
-often the indexing convention is inverted to avoid that.
-
-The situation with
-numpy makes this issue yet more complicated. The internal machinery of numpy
-arrays is flexible enough to accept any ordering of indices. One can simply
-reorder indices by manipulating the internal stride information for arrays
-without reordering the data at all. Numpy will know how to map the new index
-order to the data without moving the data.
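-
-For instance (a sketch assuming np is numpy and the default 8-byte float64
-itemsize), transposing only swaps the stride information; no data moves: ::
-
- >>> a = np.zeros((3, 4))
- >>> a.strides             # C-ordered float64: (4*8, 8)
- (32, 8)
- >>> a.T.strides           # same buffer, indices remapped via strides
- (8, 32)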
-
-So if this is true, why not choose
-the index order that matches what you most expect? In particular, why not define
-row-ordered images to use the image convention? (This is sometimes referred
-to as the Fortran convention vs the C convention, thus the 'C' and 'FORTRAN'
-order options for array ordering in numpy.) The drawback of doing this is
-potential performance penalties. It's common to access the data sequentially,
-either implicitly in array operations or explicitly by looping over rows of an
-image. When that is done, the data will be accessed in non-optimal order.
-As the first index is incremented, what is actually happening is that elements
-spaced far apart in memory are being sequentially accessed, with usually poor
-memory access speeds. For example, consider a two-dimensional image 'im'
-defined so that im[0, 10] represents the value at x=0, y=10. To be consistent
-with usual Python behavior, im[0] would then represent a column at x=0. Yet
-that data would be spread over the whole array, since the data are stored in
-row order. Despite the flexibility of numpy's indexing, it can't really paper
-over the fact that basic operations are rendered inefficient by the data
-order, or that getting contiguous subarrays is still awkward (e.g., im[:, 0]
-for the first row, vs im[0]). Thus one can't use an idiom such as 'for row in
-im'; 'for col in im' does work, but doesn't yield contiguous column data.
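-
-A small illustration (the image shape is hypothetical; np is numpy): whether a
-slice is contiguous depends on the memory layout, not on the indexing
-convention one has in mind: ::
-
- >>> im = np.zeros((480, 640))         # C (row-major) order by default
- >>> im[0].flags['C_CONTIGUOUS']       # slice along the first axis: contiguous
- True
- >>> im[:, 0].flags['C_CONTIGUOUS']    # slice along the second axis: strided
- False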
-
-As it turns out, numpy is
-smart enough when dealing with ufuncs to determine which index is the most
-rapidly varying one in memory and uses that for the innermost loop. Thus for
-ufuncs there is no large intrinsic advantage to either approach in most cases.
-On the other hand, use of .flat with a Fortran-ordered array will lead to
-non-optimal memory access as adjacent elements in the flattened array (iterator,
-actually) are not contiguous in memory.
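-
-A brief sketch of why (assuming np is numpy): iteration order and memory order
-disagree for a Fortran-ordered array: ::
-
- >>> f = np.array([[1, 2, 3], [4, 5, 6]], order='F')
- >>> np.array([v for v in f.flat])   # .flat iterates in C (last-index-fastest) order
- array([1, 2, 3, 4, 5, 6])
- >>> f.ravel(order='F')              # but this is the order the elements sit in memory
- array([1, 4, 2, 5, 3, 6])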
-
-Indeed, the fact is that Python
-indexing on lists and other sequences naturally leads to an outside-to-inside
-ordering (the first index gets the largest grouping, the next the next largest,
-and the last gets the smallest element). Since image data are normally stored
-by rows, this corresponds to position within rows being the last item indexed.
-
-If you do want to use Fortran ordering, realize that
-there are two approaches to consider: 1) accept that the first index is just not
-the most rapidly changing in memory and have all your I/O routines reorder
-your data when going from memory to disk or vice versa, or 2) use numpy's
-mechanism for mapping the first index to the most rapidly varying data. We
-recommend the former if possible. The disadvantage of the latter is that many
-of numpy's functions will yield arrays without Fortran ordering unless you are
-careful to use the 'order' keyword, which can be highly inconvenient.
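-
-A rough sketch of the latter approach (assuming np is numpy and 8-byte
-float64 elements): ::
-
- >>> a = np.zeros((3, 4), order='F')
- >>> a.flags['F_CONTIGUOUS']
- True
- >>> a.strides              # the first index now varies fastest in memory
- (8, 24)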
-
-Otherwise we recommend simply learning to reverse the usual order of indices
-when accessing elements of an array. Granted, it goes against the grain, but
-it is more in line with Python semantics and the natural order of the data.
-
-"""

Deleted: trunk/numpy/doc/reference/io.py
===================================================================
--- trunk/numpy/doc/reference/io.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/reference/io.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,9 +0,0 @@
-"""
-
-=========
-Array I/O
-=========
-
-Placeholder for array I/O documentation.
-
-"""

Deleted: trunk/numpy/doc/reference/jargon.py
===================================================================
--- trunk/numpy/doc/reference/jargon.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/reference/jargon.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,9 +0,0 @@
-"""
-
-======
-Jargon
-======
-
-Placeholder for computer science, engineering and other jargon.
-
-"""

Deleted: trunk/numpy/doc/reference/methods_vs_functions.py
===================================================================
--- trunk/numpy/doc/reference/methods_vs_functions.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/reference/methods_vs_functions.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,9 +0,0 @@
-"""
-
-=====================
-Methods vs. Functions
-=====================
-
-Placeholder for Methods vs. Functions documentation.
-
-"""

Deleted: trunk/numpy/doc/reference/misc.py
===================================================================
--- trunk/numpy/doc/reference/misc.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/reference/misc.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,9 +0,0 @@
-"""
-
-=============
-Miscellaneous
-=============
-
-Placeholder for other tips.
-
-"""

Deleted: trunk/numpy/doc/reference/performance.py
===================================================================
--- trunk/numpy/doc/reference/performance.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/reference/performance.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,9 +0,0 @@
-"""
-
-===========
-Performance
-===========
-
-Placeholder for Improving Performance documentation.
-
-"""

Deleted: trunk/numpy/doc/reference/structured_arrays.py
===================================================================
--- trunk/numpy/doc/reference/structured_arrays.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/reference/structured_arrays.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,176 +0,0 @@
-"""
-=====================================
-Structured Arrays (aka Record Arrays)
-=====================================
-
-Introduction
-============
-
-Numpy provides powerful capabilities to create arrays of structs or records.
-These arrays permit one to manipulate the data by the structs or by fields of
-the struct. A simple example will show what is meant: ::
-
- >>> x = np.zeros((2,),dtype=('i4,f4,a10'))
- >>> x[:] = [(1,2.,'Hello'),(2,3.,"World")]
- >>> x
- array([(1, 2.0, 'Hello'), (2, 3.0, 'World')],
-      dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')])
-
-Here we have created a one-dimensional array of length 2. Each element of
-this array is a record that contains three items: a 32-bit integer, a 32-bit
-float, and a string of length 10 or less. If we index this array at the second
-position we get the second record: ::
-
- >>> x[1]
- (2, 3.0, 'World')
-
-The interesting aspect is that we can reference the different fields of the
-array simply by indexing the array with the string representing the name of
-the field. In this case the fields have received the default names of 'f0', 'f1'
-and 'f2': ::
-
- >>> y = x['f1']
- >>> y
- array([ 2.,  3.], dtype=float32)
- >>> y[:] = 2*y
- >>> y
- array([ 4.,  6.], dtype=float32)
- >>> x
- array([(1, 4.0, 'Hello'), (2, 6.0, 'World')],
-       dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')])
-
-In these examples, y is a simple float array consisting of the 2nd field
-in the record. But it is not a copy of the data in the structured array;
-instead it is a view. It shares exactly the same data. Thus when we updated
-this array by doubling its values, the structured array shows the
-corresponding values as doubled as well. Likewise, if one changes the record,
-the field view changes: ::
-
- >>> x[1] = (-1,-1.,"Master")
- >>> x
- array([(1, 4.0, 'Hello'), (-1, -1.0, 'Master')],
-       dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')])
- >>> y
- array([ 4., -1.], dtype=float32)
-
-Defining Structured Arrays
-==========================
-
-The definition of a structured array is all done through the dtype object.
-There are a **lot** of different ways one can define the fields of a
-record. Some of the variants exist to provide backward compatibility with
-Numeric, numarray, or another module, and should not be used except for
-such purposes. These will be so noted. One defines a record structure in
-4 general ways, using an argument (as supplied to a dtype
-function keyword or a dtype object constructor itself) in the form of a:
-1) string, 2) tuple, 3) list, or 4) dictionary. Each of these will be briefly
-described.
-
-1) String argument (as used in the above examples).
-In this case, the constructor is expecting a comma
-separated list of type specifiers, optionally with extra shape information.
-The type specifiers can take 4 different forms: ::
-
-  a) b1, i1, i2, i4, i8, u1, u2, u4, u8, f4, f8, c8, c16, a<n>
-     (representing bytes, ints, unsigned ints, floats, complex and
-      fixed length strings of specified byte lengths)
-  b) int8,...,uint8,...,float32, float64, complex64, complex128
-     (this time with bit sizes)
-  c) older Numeric/numarray type specifications (e.g. Float32).
-     Don't use these in new code!
-  d) Single character type specifiers (e.g., H for unsigned short ints).
-     Avoid using these unless you must. Details can be found in the
-     Numpy book
-
-These different styles can be mixed within the same string (but why would you
-want to do that?). Furthermore, each type specifier can be prefixed
-with a repetition number, or a shape. In these cases an array
-element is created, i.e., an array within a record. That array
-is still referred to as a single field. An example: ::
-
- >>> x = np.zeros(3, dtype='3int8, float32, (2,3)float64')
- >>> x
- array([([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
-        ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
-        ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])],
-       dtype=[('f0', '|i1', 3), ('f1', '>f4'), ('f2', '>f8', (2, 3))])
-
-Using strings to define the record structure precludes naming the
-fields in the original definition. The names can be changed afterwards,
-however, as shown later.
-
-2) Tuple argument: The only relevant tuple case that applies to record
-structures is when a structure is mapped to an existing data type. This
-is done by pairing, in a tuple, the existing data type with a matching
-dtype definition (using any of the variants described here). As
-an example (here defined with a list, so see 3) for further
-details): ::
-
- >>> x = np.zeros(3, dtype=('i4',[('r','u1'), ('g','u1'), ('b','u1'), ('a','u1')]))
- >>> x
- array([0, 0, 0])
- >>> x['r']
- array([0, 0, 0], dtype=uint8)
-
-In this case, an array is produced that looks and acts like a simple int32 array,
-but also has definitions for fields that use only one byte of the int32 (a bit
-like Fortran equivalencing).
-
-3) List argument: In this case the record structure is defined with a list of
-tuples. Each tuple has 2 or 3 elements specifying: 1) The name of the field
-('' is permitted), 2) the type of the field, and 3) the shape (optional).
-For example: ::
-
- >>> x = np.zeros(3, dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))])
- >>> x
- array([(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
-        (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
-        (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]])],
-       dtype=[('x', '>f4'), ('y', '>f4'), ('value', '>f4', (2, 2))])
-
-4) Dictionary argument: two different forms are permitted. The first consists
-of a dictionary with two required keys ('names' and 'formats'), each having an
-equal sized list of values. The format list contains any type/shape specifier
-allowed in other contexts. The names must be strings. There are two optional
-keys: 'offsets' and 'titles'. Each must be a list matching the two required
-lists in length, where 'offsets' holds integer byte offsets for each field, and
-'titles' holds objects containing metadata for each field (these do not have
-to be strings); the value None is permitted. As an example: ::
-
- >>> x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']})
- >>> x
- array([(0, 0.0), (0, 0.0), (0, 0.0)],
-       dtype=[('col1', '>i4'), ('col2', '>f4')])
-
-The other dictionary form permitted is a dictionary of name keys with tuple
-values specifying type, offset, and an optional title: ::
-
- >>> x = np.zeros(3, dtype={'col1':('i1',0,'title 1'), 'col2':('f4',1,'title 2')})
- >>> x
- array([(0, 0.0), (0, 0.0), (0, 0.0)],
-       dtype=[(('title 1', 'col1'), '|i1'), (('title 2', 'col2'), '>f4')])
-
-Accessing and modifying field names
-===================================
-
-The field names are an attribute of the dtype object defining the record structure.
-For the last example: ::
-
- >>> x.dtype.names
- ('col1', 'col2')
- >>> x.dtype.names = ('x', 'y')
- >>> x
- array([(0, 0.0), (0, 0.0), (0, 0.0)],
-      dtype=[(('title 1', 'x'), '|i1'), (('title 2', 'y'), '>f4')])
- >>> x.dtype.names = ('x', 'y', 'z') # wrong number of names
- <type 'exceptions.ValueError'>: must replace all names at once with a sequence of length 2
-
-Accessing field titles
-====================================
-
-The field titles provide a standard place to put associated info for fields.
-They do not have to be strings: ::
-
- >>> x.dtype.fields['x'][2]
- 'title 1'
-
-"""

Deleted: trunk/numpy/doc/reference/ufuncs.py
===================================================================
--- trunk/numpy/doc/reference/ufuncs.py	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/reference/ufuncs.py	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,135 +0,0 @@
-"""
-===================
-Universal Functions
-===================
-
-Ufuncs are, generally speaking, mathematical functions or operations that are
-applied element-by-element to the contents of an array. That is, the result
-in each output array element only depends on the value in the corresponding
-input array (or arrays) and on no other array elements. Numpy comes with a
-large suite of ufuncs, and scipy extends that suite substantially. The simplest
-example is the addition operator: ::
-
- >>> np.array([0,2,3,4]) + np.array([1,1,-1,2])
- array([1, 3, 2, 6])
-
-The ufunc module lists all the available ufuncs in numpy. Additional ufuncs
-are available in xxx in scipy. Documentation on the specific ufuncs may be
-found in those modules. This documentation is intended to address the more
-general aspects of ufuncs common to most of them. All of the ufuncs that
-correspond to Python operators (e.g., +, -, etc.) have equivalent functions
-defined (e.g. add() for +).
-
-Type coercion
-=============
-
-What happens when a binary operator (e.g., +,-,\\*,/, etc) deals with arrays of
-two different types? What is the type of the result? Typically, the result is
-the higher of the two types. For example: ::
-
- float32 + float64 -> float64
- int8 + int32 -> int32
- int16 + float32 -> float32
- float32 + complex64 -> complex64
-
-There are some less obvious cases, generally involving mixes of types
-(e.g. uints, ints and floats), where a type of equal bit size cannot
-hold all the information of the other type. Some examples are int32 vs
-float32 or uint32 vs int32. Generally, the result is the higher kind of
-type with a larger size than both (if available). So: ::
-
- int32 + float32 -> float64
- uint32 + int32 -> int64
-
-Finally, the type coercion behavior when expressions involve Python
-scalars is different from that seen for arrays. Since Python has a
-limited number of types, combining a Python int with a dtype=np.int8
-array does not coerce to the higher type; instead, the type of the
-array prevails. So the rule for Python scalars combined with arrays is
-that the result will be the array equivalent of the Python scalar's type
-if the Python scalar is of a higher 'kind' than the array (e.g., float
-vs. int); otherwise the resulting type will be that of the array.
-For example: ::
-
-  Python int + int8 -> int8
-  Python float + int8 -> float64
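-
-A runnable sketch of these rules (assuming np is numpy; the exact promotions
-can differ slightly between numpy versions): ::
-
- >>> (np.zeros(2, np.int32) + np.zeros(2, np.float32)).dtype
- dtype('float64')
- >>> (np.zeros(2, np.int8) + 3).dtype     # Python int: the array type prevails
- dtype('int8')
- >>> (np.zeros(2, np.int8) + 3.0).dtype   # Python float: the higher kind wins
- dtype('float64')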
-
-ufunc methods
-=============
-
-Binary ufuncs support 4 methods. These methods are explained in detail in xxx
-(or are they? I don't see anything in the ufunc docstring that is useful).
-
-**.reduce(arr)** applies the binary operator to elements of the array in sequence. For example: ::
-
- >>> np.add.reduce(np.arange(10))  # adds all elements of array
- 45
-
-For multidimensional arrays, the first dimension is reduced by default: ::
-
- >>> np.add.reduce(np.arange(10).reshape(2,5))
- array([ 5,  7,  9, 11, 13])
-
-The axis keyword can be used to specify different axes to reduce: ::
-
- >>> np.add.reduce(np.arange(10).reshape(2,5),axis=1)
- array([10, 35])
-
-**.accumulate(arr)** applies the binary operator and generates an equivalently
-shaped array that includes the accumulated amount for each element of the
-array. A couple of examples: ::
-
- >>> np.add.accumulate(np.arange(10))
- array([ 0,  1,  3,  6, 10, 15, 21, 28, 36, 45])
- >>> np.multiply.accumulate(np.arange(1,9))
- array([    1,     2,     6,    24,   120,   720,  5040, 40320])
-
-The behavior for multidimensional arrays is the same as for .reduce(), as is the use of the axis keyword.
-
-**.reduceat(arr,indices)** allows one to apply reduce to selected parts of an array.
-It is a difficult method to understand. See the documentation at:
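-
-A short sketch of what it does (assuming np is numpy): each index starts a new
-reduction that runs up to, but not including, the next index: ::
-
- >>> a = np.arange(8)
- >>> np.add.reduceat(a, [0, 4, 6])   # sums of a[0:4], a[4:6] and a[6:]
- array([ 6,  9, 13])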
-
-**.outer(arr1,arr2)** generates an outer operation on the two arrays arr1 and
-arr2. It will work on multidimensional arrays (the shape of the result is the
-concatenation of the two input shapes): ::
-
- >>> np.multiply.outer(np.arange(3),np.arange(4))
- array([[0, 0, 0, 0],
-        [0, 1, 2, 3],
-        [0, 2, 4, 6]])
-
-Output arguments
-================
-
-All ufuncs accept an optional output array. The array must be of the expected
-output shape. Beware that if the type of the output array is of a
-different (and lower) type than the output result, the results may be silently
-truncated or otherwise corrupted in the downcast to the lower type. This usage
-is useful when one wants to avoid creating large temporary arrays and instead
-allows one to reuse the same array memory repeatedly (at the expense of not
-being able to use more convenient operator notation in expressions). Note that
-when the output argument is used, the ufunc still returns a reference to the
-result: ::
-
- >>> x = np.arange(2)
- >>> np.add(np.arange(2),np.arange(2.),x)
- array([0, 2])
- >>> x
- array([0, 2])
-
-and & or as ufuncs
-==================
-
-Invariably people try to use the Python 'and' and 'or' as logical operators
-(quite understandably). But these keywords do not behave like normal
-operators, since Python treats them specially: they cannot be
-overloaded with array equivalents. Thus using 'and' or 'or' with an array
-results in an error. There are two alternatives:
-
- 1) use the ufunc functions logical_and() and logical_or().
- 2) use the bitwise operators & and \\|. The drawback of these is that if
-    the arguments to these operators are not boolean arrays, the result is
-    likely incorrect. On the other hand, most usages of logical_and and
-    logical_or are with boolean arrays. As long as one is careful, this is
-    a convenient way to apply these operators.
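-
-A minimal sketch of the two alternatives (assuming np is numpy; output reprs
-are omitted since they vary slightly between numpy versions): ::
-
- >>> a = np.array([True, True, False])
- >>> b = np.array([True, False, False])
- >>> np.logical_and(a, b)     # -> [True, False, False]
- >>> a & b                    # same result here, because both arrays are boolean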
-
-"""

Copied: trunk/numpy/doc/structured_arrays.py (from rev 5669, trunk/numpy/doc/reference/structured_arrays.py)

Copied: trunk/numpy/doc/ufuncs.py (from rev 5669, trunk/numpy/doc/reference/ufuncs.py)

Deleted: trunk/numpy/doc/ufuncs.txt
===================================================================
--- trunk/numpy/doc/ufuncs.txt	2008-08-23 22:52:55 UTC (rev 5680)
+++ trunk/numpy/doc/ufuncs.txt	2008-08-23 23:17:23 UTC (rev 5681)
@@ -1,103 +0,0 @@
-BUFFERED General Ufunc explanation
-==================================
-
-.. note::
-
-  This was implemented already, but the notes are kept here for historical
-  and explanatory purposes.
-
-We need to optimize the section of ufunc code that handles mixed-type
-and misbehaved arrays.  In particular, we need to fix it so that items
-are not copied into the buffer if they don't have to be.
-
-Right now, all data is copied into the buffers (even scalars are copied
-multiple times into the buffers even if they are not going to be cast).
-
-Some benchmarks show that this results in a significant slow-down
-(factor of 4) over similar numarray code.
-
-The approach is therefore to loop over the largest dimension (just like
-the NO_BUFFER portion of the code).  All arrays will either have N or
-1 in this last dimension (or there would be a mismatch error). The
-buffer size is B.
-
-If N <= B (and only if needed), we copy the entire last-dimension into
-the buffer as fast as possible using the single-stride information.
-
-Also we only copy into output arrays if needed (otherwise the
-output arrays are used directly in the ufunc code).
-
-Call the function using the appropriate strides information from all the input
-arrays.  Only set the strides to the element-size for arrays that will be copied.
-
-If N > B, then we have to do the above operation in a loop (with an extra loop
-at the end with a different buffer size).
-
-Both of these cases are handled with the following code::
-
-   Compute N = quotient * B + remainder.
-   quotient = N / B  # integer math
-   (store quotient + 1) as the number of innerloops
-   remainder = N % B # integer remainder
-
-Over the inner dimension we will have (quotient + 1) loops, where
-the size passed to the inner function is B for all but the last loop, whose
-niter size is remainder.
-
-So, the code looks very similar to NOBUFFER_LOOP except the inner loop is
-replaced with::
-
-  for(k=0; k<quotient+1; k++) {
-      if (k==quotient) make itersize the remainder size
-      copy only needed items to buffer.
-      swap input buffers if needed
-      cast input buffers if needed
-      call function()
-      cast outputs in buffers if needed
-      swap outputs in buffers if needed
-      copy only needed items back to output arrays.
-      update all data-pointers by strides*niter
-  }
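-
-As a rough Python sketch of the partitioning described above (illustrative
-only; the actual implementation lives in the C ufunc machinery)::
-
-  def chunk_sizes(N, B):
-      # quotient full buffers of size B, plus one final pass of size remainder
-      # (mirrors the "quotient + 1 innerloops" bookkeeping above, so a
-      # zero-size final pass can occur when B divides N exactly)
-      quotient, remainder = divmod(N, B)
-      return [B] * quotient + [remainder]
-
-  # chunk_sizes(10, 4) -> [4, 4, 2]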
-
-
-Reference counting for OBJECT arrays:
-
-If there are object arrays involved then loop->obj gets set to 1.  Then there are two cases:
-
-1) The loop function is an object loop:
-
-   Inputs:
-	    - castbuf starts as NULL and then gets filled with new references.
-	    - function gets called and doesn't alter the reference count in castbuf
-	    - on the next iteration (next value of k), the casting function will
-	      DECREF what is present in castbuf already and place a new object.
-
-	    - At the end of the inner loop (for loop over k), the final new-references
-	      in castbuf must be DECREF'd.  If it's a scalar then a single DECREF suffices.
-	      Otherwise, "bufsize" DECREF's are needed (unless there was only one
-	      loop, then "remainder" DECREF's are needed).
-
-   Outputs:
-            - castbuf contains a new reference as the result of the function call.  This
-	      gets converted to the type of interest.  This new reference in castbuf
-	      will be DECREF'd by later calls to the function.  Thus, only after the
-	      innermost loop do we need to DECREF the remaining references in castbuf.
-
-2) The loop function is of a different type:
-
-   Inputs:
-
-	    - The PyObject input is copied over to the buffer, which receives a "borrowed"
-	      reference.  This reference is then used but not altered by the cast
-	      call.   Nothing needs to be done.
-
-   Outputs:
-
-            - The buffer[i] memory receives the PyObject input after the cast.  This is
-	      a new reference which will be "stolen" as it is copied over into memory.
-	      The only problem is that what is presently in memory must be DECREF'd first.
-
-
-
-
-


