[Scipy-svn] r4587 - in branches/Interpolate1D: . docs
scipysvn@scip...
Thu Jul 31 14:09:03 CDT 2008
Author: fcady
Date: 2008-07-31 14:09:00 -0500 (Thu, 31 Jul 2008)
New Revision: 4587
Modified:
branches/Interpolate1D/docs/tutorial.rst
branches/Interpolate1D/interpolate1d.py
Log:
documentation improved, tweaked the api in the case of an instance of a callable class being passed
Modified: branches/Interpolate1D/docs/tutorial.rst
===================================================================
--- branches/Interpolate1D/docs/tutorial.rst 2008-07-31 16:58:25 UTC (rev 4586)
+++ branches/Interpolate1D/docs/tutorial.rst 2008-07-31 19:09:00 UTC (rev 4587)
@@ -2,25 +2,25 @@
Overview
==================
-Often researchers need to infer from a data set the values at points where measurements
-were not taken. Perhaps they are running a simulation that will demand data points
-never measured. Perhaps to estimate a statistic of a function (say, the integral) they
-want to guess its value everywhere based on a few points. Perhaps they know a function
-which must be evaluated multiple times, but evaluating it is expensive; they want to evaluate
-it several times at first, and make educated guesses in the future. In all these cases, interpolation
-is the tool of choice.
+Interpolation is the process of using known data to guess the value of unknown data. It turns
+a sample of a function into an approximation of that function for every value, and is one of
+the most basic mathematical tools available to a researcher.
The interpolate package provides tools for interpolating and extrapolating new data points from a known set of data points.
Interpolate provides both a functional interface that is flexible and easy to use as well as an object oriented interface that
can be more efficient and flexible for some cases. It is able to interpolate and extrapolate in 1D, 2D, and even N
-dimensions.*[fixme: 1D only right now]*
+dimensions. *[FIXME : 1D only right now]*
For 1D interpolation, it handles linear and spline (cubic, quadratic, and quintic) interpolation for both uniformly and non-uniformly spaced
data points "out of the box." Users can choose the behavior when new values fall outside the range of known data, and
with a little more work, they can incorporate interpolation methods that are specially tailored to their needs.
-For 2D interpolation,
+For 2D interpolation, *[FIXME : include this]*
+This tutorial covers how to use the interpolate module, provides some basic examples, and shows
+them at work in realistic sample sessions. These sessions demonstrate how to use the
+interpolate module, but also highlight some of the uses of interpolation techniques.
+
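The idea behind linear interpolation, described above, can be illustrated in a few lines of plain Python. This is a hedged conceptual sketch, not the package's actual implementation:

```python
# Conceptual sketch of 1D linear interpolation (not the scipy code):
# for a query point inside the data range, find the two bracketing
# samples and blend their y-values; outside the range, return NaN,
# mimicking the package's default extrapolation behavior.
def linear_interp(x, y, new_x):
    if new_x < x[0] or new_x > x[-1]:
        return float('nan')
    for i in range(len(x) - 1):
        if x[i] <= new_x <= x[i + 1]:
            frac = (new_x - x[i]) / (x[i + 1] - x[i])
            return y[i] + frac * (y[i + 1] - y[i])

print(linear_interp([0.0, 1.0, 2.0], [0.0, 10.0, 20.0], 1.5))  # 15.0
```

The real package vectorizes this over NumPy arrays; the loop above only shows the per-point rule.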
================================================
1D Interpolation with the Functional Interface
================================================
@@ -38,36 +38,32 @@
The following example uses the 'interp1d' function to linearly interpolate a sin
curve from a sparse set of values. ::
- # start up ipython for our examples.
- $ ipython -pylab
-
- In [1]: from interpolate import interp1d
-
- # Create our "known" set of 5 points with the x values in one array and the y values in another.
- In [2]: x = linspace(0, 2*pi, 5)
- In [3]: y = sin(x)
+ # start up ipython for our examples.
+ $ ipython -pylab
::
+ In [1]: from interpolate import interp1d
+ # Create our "known" set of 5 points with the x values in one array and the y values in another.
+ In [2]: x = linspace(0, 2*pi, 5)
+ In [3]: y = sin(x)
+
# If we only want a value at a single point, we can pass in a scalar and interp1d
# will return a scalar
- In [9]: interp1d(x, y, 1.2)
- Out [10]: 0.76394372684109768
+ In []: interp1d(x, y, 1.2)
+ Out []: 0.76394372684109768
::

# 0-dimensional arrays are also treated as scalars
- In [9]: interp1d(x, y, array(1.2) )
- Out [10]: 0.76394372684109768
+ In []: interp1d(x, y, array(1.2) )
+ Out []: 0.76394372684109768
# To interpolate from these x,y values at multiple points, possibly to get a more dense set
# of new_x, new_y values to approximate the function, pass a numpy array to interp1d,
# and the return type will also be a numpy array.
- In [4]: new_x = linspace(0, 2*pi, 21)
- In [5]: new_y = interp1d(x, y, new_x)
-
- # Plot the results using matplotlib. [note examples assume you are running in ipython -pylab]
- In [6]: plot(x, y, 'ro', new_x, new_y, 'b')
+ In []: new_x = linspace(0, 2*pi, 21)
+ In []: new_y = interp1d(x, y, new_x)
+
+ # Plot the results using matplotlib. [note examples assume you are running in ipython -pylab]
+ In []: plot(x, y, 'ro', new_x, new_y, 'b')
.. image:: interp1d_linear_simple.png
@@ -83,15 +79,15 @@
# If we attempt to extrapolate values outside the interpolation range, interp1d defaults
# to returning NaN
- In [7]: interp1d(x, y, array([-2, -1, 1, 2]))
- Out [8]: array([ NaN, NaN, 0.63661977, 0.72676046])
+ In []: interp1d(x, y, array([-2, -1, 1, 2]))
+ Out []: array([ NaN, NaN, 0.63661977, 0.72676046])
If we want a type of interpolation other than linear, there is a range of options which we can specify
-with the keyword argument kind, which is usually a string. For example::
+with the keyword argument "kind", which is usually a string. Continuing from the previous example,::
# If we want quadratic (2nd order) spline interpolation, we can use the string 'quadratic'
- In [7]: new_y_quadratic = interp1d(x, y, new_x, kind = 'quadratic')
- In [8]: plot(x, y, 'r', new_x, new_y_quadratic, 'g')
+ In []: new_y_quadratic = interp1d(x, y, new_x, kind = 'quadratic')
+ In []: plot(x, y, 'r', new_x, new_y_quadratic, 'g')
.. image:: interp1d_linear_and_quadratic.png
@@ -105,7 +101,7 @@
#. 'quintic' : 5th order spline interpolation
The same flexibility is afforded for extrapolation by the keywords low and high, which
-specify how to treat values below and above the range of know data: ::
+specify how to treat values below and above the range of known data: ::
In []: z = array([ 1.0, 2.0 ])
In []: interp1d(z, z, array([-5.0, 5.0]), low = 'linear', high = 'linear') # -5 and 5 both out of range
@@ -133,7 +129,7 @@
Removal of Bad Datapoints

-Many datasets have missing or corrupt data which it is desirable to ignore when interpolating,
+Many datasets have missing or corrupt data which we want to ignore when interpolating,
and to this end, interp1d has the keyword argument bad_data.
bad_data defaults to being None. But if it is a list, all "bad" points (x[i], y[i]) will be removed
@@ -144,7 +140,7 @@
Note that bad_data must be either None or a list of numbers. Including NaN or None in the list,
for example, is not supported and will cause errors.
-The following example shows how ::
+The following example demonstrates using this keyword argument ::
# data will be linear, except for artificial bad points
In []: x = arange(10.); y = arange(10.)
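The filtering that bad_data performs can be sketched in plain Python. This is an illustrative stand-in; the package works on NumPy arrays and also always drops NaN coordinates:

```python
# Sketch of bad_data filtering: drop every (x[i], y[i]) pair in which
# either coordinate appears in the bad_data list. (The real code also
# removes pairs containing NaN and operates on NumPy arrays.)
def remove_bad(x, y, bad_data):
    keep = [(a, b) for a, b in zip(x, y)
            if a not in bad_data and b not in bad_data]
    return [a for a, _ in keep], [b for _, b in keep]

x, y = remove_bad([0., 1., 2., 3.], [0., 55., 2., 3.], bad_data=[55.])
print(x, y)  # [0.0, 2.0, 3.0] [0.0, 2.0, 3.0]
```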
@@ -165,12 +161,12 @@
The string interface is designed to conveniently take care of most things a user would want
to do in a way that is easy and, when something goes wrong, informative and helpful.
-If, however, you want more direct control than is afforded by the string interface, that is also possible,
-thought it's a little trickier than using strings. You must be very careful to have correct
-format, and failure to do so can cause a range of errors which won't necessarily result in
+If, however, you want more direct control than is afforded by the string interface, that is also possible.
+If you define your own types, you must be very careful to have correct
+format; failure to do so can cause a range of errors which won't necessarily result in
informative error messages.
-kind (or, equivalently, low and high) can also be set to a function, a callable
+kind (or, equivalently, low or high) can also be set to a function, a callable
class, or an instance of a callable class.
If a function is passed, it will be called when interpolating.
@@ -178,36 +174,45 @@
newy = interp(x, y, newx)
-where x, y, newx, and newy are all numpy arrays.
+where x, y, newx, and newy are all 1D numpy arrays.
-If a callable class is passed, it is assumed to have format::
+If a class is passed, it is assumed to have one of two formats.
+If there is an "init_xy" or "set_xy" method, the class is instantiated
+with no arguments, then the relevant method is called to initialize
+x and y, and the instance is later called with a 1D array as an argument. ::
- instance = Class(x, y).

-which can then be called by ::
+ instance = Class().
+ instance.set_xy(x, y)
+ new_y = instance(new_x)
+If the class does not have an init_xy or set_xy method, the class
+is instantiated with x and y as arguments, and passed a 1D array
+during interpolation. ::
+
+ instance = Class(x, y)
new_y = instance(new_x)
-If a callable object with method "init_xy" or "set_xy" is
-passed, that method will be used to set x and y as follows: ::
+You can also pass an instance of the callable class, rather than the class
+itself. This is useful if the class has other parameters besides x, y, and
+new_x (perhaps smoothing coefficients, orders for polynomials, etc.).
+If the instance has a method "init_xy" or "set_xy",
+that method will be used to set x and y, and the instance will be
+called later: ::
+
instance.set_xy(x, y)

and the object will be called during interpolation. ::

new_y = instance(new_x)
-If the "init_xy" and "set_xy" are not present, it will be called as
+If the instance has no "init_xy" or "set_xy" method, it will be called as ::
 new_y = argument(new_x)

A primitive type which is not a string signifies a function
which is identically that value (e.g. val and
lambda x, y, newx : val are equivalent). ::
+ new_y = kind(x, y, new_x)
+
+Failure to follow these guidelines (say, by having kind require other keyword
+arguments, or having a method "initialize_xy" rather than "init_xy") can result
+in cryptic errors, so be careful. Here is a demo of how to use these features properly:
- # However, this behavior can be overwritten in the same way as linear interpolation,
- # by setting the keyword extrap_low (for values below the range of interpolation) and
- # extrap_high (for values above that range)
+::
+
In []: def dummy(x, y, newx):
# Note that dummy has acceptable form
return 5.7
@@ -222,10 +227,10 @@
In []: y = arange(5.0)
In []: new_x = np.array([ -1, 2.4, 7 ])
In []: new_y = interp1d(x, y,
- kind = Phony,
- low = dummy,
- high = dummy
- )
+ kind = Phony,
+ low = dummy,
+ high = dummy
+ )
In []: new_y
Out []: array([ 5.7, 4.0, 5.7 ])
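The dispatch rules demonstrated above can be mimicked without the interpolate package. The following is a hedged sketch of the logic only; the function name make_rule is illustrative and not part of the API:

```python
# Illustrative sketch of how non-string kind/low/high arguments are
# interpreted: a function is wrapped so it receives (x, y, new_x); a
# class is instantiated with (x, y) and called later; any other
# non-string value acts as a constant function.
import inspect

def make_rule(kind, x, y):
    if inspect.isfunction(kind):
        return lambda new_x: kind(x, y, new_x)
    if inspect.isclass(kind):
        return kind(x, y)          # instance is called with new_x later
    return lambda new_x: kind      # primitive value => constant function

def dummy(x, y, new_x):
    return 5.7

rule = make_rule(dummy, [0., 1.], [0., 1.])
print(rule(3.0))   # 5.7
const = make_rule(2.5, [0., 1.], [0., 1.])
print(const(9.9))  # 2.5
```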
@@ -251,7 +256,7 @@
# The default behavior is virtually the same
In []: x = linspace(0, 2*pi, 5)
- In []: y = sin(x)
+ In []: y = sin(x)
In []: new_x = linspace(0, 2*pi, 21)
In []: new_y1 = interp1d(x, y, new_x)
In []: interp_obj1 = Interpolate1d(x, y)
@@ -279,16 +284,12 @@
Estimating Function Statistics and Displaying Data

-In this session, the engineer
-has a data set of geological data indicating the temperature at various
-depths in the ground. The data set is noisy and large. He wants to 1)
-get a feel for the data, and 2) estimate the average temperature.
+In this session, the geologist
+has a data set indicating the temperature at various
+depths in the ground. He wants to 1) get a visual feel for the data,
+and 2) estimate the average temperature.
::
- # start up ipython for our examples.
- $ ipython -pylab
-
 # load the data from a text file
In []: data_array = loadtxt('dataset1.txt')
In []: shape(data_array)
Out []: (12, 2)
@@ -305,26 +306,29 @@
In []: import interpolate as I
In []: plot( I.interp1d(depth, temp, linspace(0,20,100), bad_data = [1000]) )
- In []: # much better, but he wants to see it smoother too
+ # much better, but he wants to see it smoother too
In []: plot( I.interp1d(depth, temp, linspace(0,20,100), kind='cubic', bad_data = [1000]) )
- # To find the average temp he can't average the data because the samples
- # are not necessarily uniform, but he can uniformly sample the interpolated function
- In []: average_temp = average( I.interp1d(depth, temp, linspace(0,20,100), 'cubic', bad_data=[1000]))
+ # To find the average temp he can't average the data points because the samples
+ # are not necessarily uniform, but it is easy to uniformly sample the interpolated function
+ In []: average_temp = average( I.interp1d(depth, temp, linspace(0,20,100), 'cubic', bad_data=[1000]) )

-Modelling from a small dataset
+Modeling from a small dataset

This computational biologist wants to model the growth rate of
-cancer cells in tissue. He has measurements of the metabolic rate of cancer
-cells at several concentrations of blood glucose. He also has measurements
-of the growth rate of these cells as a function of their CO2 metabolic output. Each data point represents
+cancer cells in tissue. For several levels of blood glucose, he has measurements
+of the CO2 output of the cancer cells. For several different levels of CO2 output,
+he also has measurements of the growth rate of these cells. Each data point represents
a week's work on the part of experimentalists, so though there isn't much
data he'll have to make do. Now, his full simulation takes up hundreds of lines of
-code, but we show here the module object estimate_growth_rate which he wrote.
+code, so we only show the module estimate_growth_rate.py which he wrote.
::
+ """ Contains callable class EstimateGrowthRate, which accepts blood glucose level as
+ an argument and returns interpolated growth rate of cells.
+ """
import numpy as np
import interpolate as I
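The full estimate_growth_rate module is not reproduced here, but the chained-interpolation idea it relies on can be sketched. All names and data below are hypothetical, and a plain-Python linear interpolant stands in for the package's:

```python
# Hypothetical sketch of the chained model: one interpolant maps blood
# glucose to CO2 output, a second maps CO2 output to growth rate, and
# composing the two estimates growth rate directly from glucose.
def make_linear(xs, ys):
    def f(v):
        for i in range(len(xs) - 1):
            if xs[i] <= v <= xs[i + 1]:
                frac = (v - xs[i]) / (xs[i + 1] - xs[i])
                return ys[i] + frac * (ys[i + 1] - ys[i])
        return float('nan')
    return f

glucose_to_co2 = make_linear([0., 1., 2.], [0., 2., 4.])  # made-up data
co2_to_growth = make_linear([0., 2., 4.], [0., 1., 2.])   # made-up data

def estimate_growth_rate(glucose):
    return co2_to_growth(glucose_to_co2(glucose))

print(estimate_growth_rate(0.5))  # 0.5
```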
@@ -387,8 +391,6 @@
In []: where_to_insert = max( find(thickness < guess_thick) ) +1
In []: thickness = insert(thickness, where_to_insert, guess_thick)
In []: performance = insert(performance, where_to_insert, measured_perf)





\ No newline at end of file
+
+More sophisticated optimization tools are also available from the scipy.optimize
+module.
\ No newline at end of file
Modified: branches/Interpolate1D/interpolate1d.py
===================================================================
--- branches/Interpolate1D/interpolate1d.py 2008-07-31 16:58:25 UTC (rev 4586)
+++ branches/Interpolate1D/interpolate1d.py 2008-07-31 19:09:00 UTC (rev 4587)
@@ -1,79 +1,92 @@
# FIXME: information strings giving mathematical descriptions of the actions
# of the functions.
-from interpolate_wrapper import linear, logarithmic, block, block_average_above, atleast_1d_and_contiguous
+from interpolate_wrapper import linear, logarithmic, block, \
+ block_average_above, atleast_1d_and_contiguous
from fitpack_wrapper import Spline
import numpy as np
from numpy import array, arange, empty, float64, NaN
-# dictionary of tuples. First element is a callable (class, instance of a class, or function
-# second argument is dictionary of additional keywords, if any
-dict_of_interp_types = \
- { 'linear' : (linear, {}),
- 'logarithmic' : (logarithmic, {}),
- 'block' : (block, {}),
- 'block_average_above' : (block_average_above, {}),
- 'Spline' : (Spline, {}), 'spline' : (Spline, {}),
- 'Quadratic' : (Spline, {'k':2}), 'quadratic' : (Spline, {'k':2}),
- 'Quad' : (Spline, {'k':2}), 'quad' : (Spline, {'k':2}),
- 'Cubic' : (Spline, {'k':3}), 'cubic' : (Spline, {'k':3}),
- 'Quartic' : (Spline, {'k':4}), 'quartic' : (Spline, {'k':4}),
- 'Quar' : (Spline, {'k':4}), 'quar' : (Spline, {'k':4}),
- 'Quintic' : (Spline, {'k':5}), 'quintic' : (Spline, {'k':5}),
- 'Quin' : (Spline, {'k':5}), 'quin' : (Spline, {'k':5})
+# dictionary of interpolation functions/classes/objects
+method_register = \
+ { 'linear' : linear,
+ 'logarithmic' : logarithmic,
+ 'block' : block,
+ 'block_average_above' : block_average_above,
+ 'Spline' : Spline, 'spline' : Spline,
+ 'Quadratic' : Spline(k=2), 'quadratic' : Spline(k=2),
+ 'Quad' : Spline(k=2), 'quad' : Spline(k=2),
+ 'Cubic' : Spline(k=3), 'cubic' : Spline(k=3),
+ 'Quartic' : Spline(k=4), 'quartic' : Spline(k=4),
+ 'Quar' : Spline(k=4), 'quar' : Spline(k=4),
+ 'Quintic' : Spline(k=5), 'quintic' : Spline(k=5),
+ 'Quin' : Spline(k=5), 'quin' : Spline(k=5)
}
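The lookup behind string kinds can be sketched as a plain dictionary dispatch. This is an illustrative stub, not the module's code; note that dict.get is used here because setdefault would insert the unknown key into the registry as a side effect:

```python
# Sketch of string-kind dispatch: look the string up in a registry and
# fail loudly on unknown names. dict.get leaves the registry unchanged,
# unlike dict.setdefault, which inserts missing keys.
def linear_stub(x, y, new_x):      # stand-in for a real interpolator
    return new_x

method_register = {'linear': linear_stub}

def lookup(kind):
    method = method_register.get(kind)
    if method is None:
        raise TypeError("input string %s not valid" % kind)
    return method

print(lookup('linear') is linear_stub)  # True
```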
+
+# dictionary of types for casting. key = possible datatype, value = datatype it is cast to
+# BEWARE : if you cast things to integers, you will lose interpolation ability
+dtype_register = {np.float32 : np.float32,
+ np.float64 : np.float64
+ }
+dtype_default = np.float64
def interp1d(x, y, new_x,
- interp = 'linear', extrap_low = NaN, extrap_high = NaN,
- interpkw = {}, lowkw = {}, highkw ={},
+ kind = 'linear', low = NaN, high = NaN,
bad_data = None):
# FIXME : allow y to be multidimensional
- # FIXME : update the doc string to match that of Interpolate1d
- """ A function for interpolation of 1D data.
+ # NOTE : This docstring is considered subordinate to that of Interpolate1d.
+ # That is, update Interpolate1d and copy-and-paste.
+ """ A function for interpolation of 1D, real-valued data.
Parameters

 x  list or NumPy array
 x includes the xvalues for the data set to
 interpolate from. It must be sorted in
 ascending order
+ x -- list or 1D NumPy array
+ x includes the x-values for the data set to
+ interpolate from. It must be sorted in
+ ascending order.
+
+ y -- list or 1D NumPy array
+ y includes the y-values for the data set to
+ interpolate from. Note that 2-dimensional
+ y is not currently supported.
- y -- list or NumPy array
- y includes the y-values for the data set to
- interpolate from. Note that y must be
- one-dimensional.

 new_x  list or NumPy array
 points whose value is to be interpolated from x and y.
 new_x must be in sorted order, lowest to highest.

Optional Arguments

- interp -- Usu. function or string. But can be any type.
- Specifies the type of extrapolation to use for values within
- the range of x. If a string is passed, it will look for an object
- or function with that name and call it when evaluating. If
- a function or object is passed, it will be called when interpolating.
- If nothing else, assumes the argument is intended as a value
- to be returned for all arguments. Defaults to linear interpolation.
+ kind -- Usually a string. But can be any type.
+ Specifies the type of interpolation to use for values within
+ the range of x.
+
+ By default, linear interpolation is used.
+
+ See below for details on other options.
+
+ low -- same as for kind
+ How to extrapolate values for inputs below the range of x.
+ Same options as for 'kind'. Defaults to returning numpy.NaN ('not
+ a number') for all values below the range of x.
+
+ high -- same as for kind
+ How to extrapolate values for inputs above the range of x.
+ Same options as for 'kind'. Defaults to returning numpy.NaN ('not
+ a number') for all values above the range of x.
+
+ bad_data -- list
+ List of numerical values (in x or y) which indicate unacceptable data.
+
+ If bad_data is not None (its default), all points whose x or y coordinate is in
+ bad_data, OR one of whose coordinates is NaN, will be removed.
+
+ interpkw -- dictionary
+ If interp is set to a function, class or callable object, this contains
+ additional keywords.
+
+ lowkw (highkw) -- dictionary
+ like interpkw, but for extrap_low and extrap_high
+
- low (high) -- same as for interp
- Same options as for 'interp'. Defaults to returning numpy.NaN ('not
- a number') for all values outside the range of x.

 interpkw  dictionary
 If

- bad_data -- list
- List of values (in x or y) which indicate unacceptable data. All points
- that have x or y value in missing_data will be removed before
- any interpolation is performed if bad_data is not None.

 numpy.NaN is always considered bad data.

- Acceptable Input Strings
+ Some Acceptable Input Strings

"linear" -- linear interpolation : default
@@ -82,8 +95,39 @@
"block_average_above" -- block average above
"Spline" -- spline interpolation. keyword k (defaults to 3)
indicates order of spline
- numpy.NaN -- return numpy.NaN
+ "quad", "quadratic" -- spline interpolation order 2
+ "cubic" -- spline interpolation order 3
+ "quartic" -- spline interpolation order 4
+ "quintic" -- spline interpolation order 5
+
+ Other options for kind, low, and high
+ 
+ If you choose to use a non-string argument, you must
+ be careful to use correct formatting.
+
+ If a function is passed, it will be called when interpolating.
+ It is assumed to have the form
+ newy = kind(x, y, newx),
+ where x, y, newx, and newy are all numpy arrays.
+
+ If a callable class is passed, it is assumed to have format
+ instance = Class(x, y).
+ which can then be called by
+ new_y = instance(new_x)
+
+ If a callable object with method "init_xy" or "set_xy" is
+ passed, that method will be used to set x and y as follows
+ instance.set_xy(x, y)
+ and the object will be called during interpolation.
+ new_y = instance(new_x)
+ If the "init_xy" and "set_xy" are not present, it will be called as
+ new_y = argument(new_x)
+
+ A primitive type which is not a string signifies a function
+ which is identically that value (e.g. val and
+ lambda x, y, newx : val are equivalent).
+
Examples

@@ -96,12 +140,9 @@
array([.2, 2.3, 5.6, NaN])
"""
return Interpolate1d(x, y,
- interp = interp,
- extrap_low = extrap_low,
- extrap_high = extrap_high,
- interpkw = interpkw,
- lowkw = lowkw,
- highkw = highkw,
+ kind = kind,
+ low = low,
+ high = high,
bad_data = bad_data
)(new_x)
@@ -124,7 +165,7 @@
Optional Arguments

- interp -- Usually a string. But can be any type.
+ kind -- Usually a string. But can be any type.
Specifies the type of interpolation to use for values within
the range of x.
@@ -132,12 +173,12 @@
See below for details on other options.
- extrap_low -- same as for kind
+ low -- same as for kind
How to extrapolate values for inputs below the range of x.
Same options as for 'kind'. Defaults to returning numpy.NaN ('not
a number') for all values below the range of x.
- extrap_high -- same as for kind
+ high -- same as for kind
How to extrapolate values for inputs above the range of x.
Same options as for 'kind'. Defaults to returning numpy.NaN ('not
a number') for all values above the range of x.
@@ -148,14 +189,6 @@
If bad_data is not None (its default), all points whose x or y coordinate is in
bad_data, OR one of whose coordinates is NaN, will be removed.
- interpkw -- dictionary
- If interp is set to a function, class or callable object, this contains
- additional keywords.
-
- lowkw (highkw) -- dictionary
- like interpkw, but for extrap_low and extrap_high


Some Acceptable Input Strings

@@ -170,7 +203,7 @@
"quartic" -- spline interpolation order 4
"quintic" -- spline interpolation order 5
- Other options for interp, extrap_low, and extrap_high
+ Other options for kind, low, and high

If you choose to use a nonstring argument, you must
@@ -178,17 +211,17 @@
If a function is passed, it will be called when interpolating.
It is assumed to have the form
- newy = interp(x, y, newx, **kw),
+ newy = interp(x, y, newx),
where x, y, newx, and newy are all numpy arrays.
If a callable class is passed, it is assumed to have format
- instance = Class(x, y, **kw).
+ instance = Class(x, y).
which can then be called by
new_y = instance(new_x)
If a callable object with method "init_xy" or "set_xy" is
passed, that method will be used to set x and y as follows
- instance.set_xy(x, y, **kw)
+ instance.set_xy(x, y)
and the object will be called during interpolation.
new_y = instance(new_x)
If the "init_xy" and "set_xy" are not present, it will be called as
@@ -211,30 +244,32 @@
array([.2, 2.3, 5.6, NaN])
"""
- # FIXME: more informative descriptions of sample arguments
- # FIXME: examples in doc string
- # FIXME : Allow copying or not of arrays. non-copy + remove_bad_data should flash
- # a warning (esp if we interpolate missing values), but work anyway.
def __init__(self, x, y,
- interp = 'linear',
- extrap_low = NaN,
- extrap_high = NaN,
- interpkw = {},
- lowkw = {},
- highkw = {},
+ kind = 'linear',
+ low = NaN,
+ high = NaN,
bad_data = None):
- # FIXME: don't allow copying multiple times.
- # FIXME : allow no copying, in case user has huge dataset
- # remove bad data, if there is any
+ # put data into nice format and store it
+ self._init_xy(x, y, bad_data)
+
+ # store interpolation functions for each range
+ self.interp = self._init_interp_method(kind)
+ self.extrap_low = self._init_interp_method(low)
+ self.extrap_high = self._init_interp_method(high)
+
+ def _init_xy(self, x, y, bad_data):
+ # FIXME : no-copying option, in case user has huge dataset. non-copy + remove bad data should
+ # flash a warning (esp if, in the future, bad values are interpolated).
+
+ # remove bad data if applicable
if bad_data is not None:
 try:
+ try: # check that bad_data contains only numerical values
sum_of_bad_data = sum(bad_data)
except:
raise TypeError, "bad_data must be either None \
 or a list of numerical types"

+ or a list of numbers"
x, y = self._remove_bad_data(x, y, bad_data)
# check acceptable size and dimensions
@@ -245,20 +280,9 @@
assert y.ndim == 1 , "y must be one-dimensional"
assert len(x) == len(y) , "x and y must be of the same length"
- # store data, and remove bad data points if applicable
- # FIXME : may be good to let x and y be initialized later, or changed after-the-fact
- self._init_xy(x, y)

- # store interpolation functions for each range
- self.interp = self._init_interp_method(interp, interpkw)
- self.extrap_low = self._init_interp_method(extrap_low, lowkw)
- self.extrap_high = self._init_interp_method(extrap_high, highkw)

- def _init_xy(self, x, y):

# select proper datatypes and make arrays
- self._xdtype = {np.float32 : np.float32}.setdefault(type(x[0]), np.float64) # unless data is float32, cast to float64
- self._ydtype = {np.float32 : np.float32}.setdefault(type(y[0]), np.float64)
+ self._xdtype = dtype_register.get(type(x[0]), dtype_default)
+ self._ydtype = dtype_register.get(type(y[0]), dtype_default)
self._x = atleast_1d_and_contiguous(x, self._xdtype).copy()
self._y = atleast_1d_and_contiguous(y, self._ydtype).copy()
@@ -266,9 +290,6 @@
""" removes data points whose x or y coordinate is
either in bad_data or is a NaN.
"""
- # FIXME : In the future, it may be good to just replace the bad points with good guesses.
- # Especially in generalizing the higher dimensions
- # FIXME : This step is very inefficient because it iterates over the array
bad_data_mask = np.isnan(x) | np.isnan(y)
for bad_num in bad_data:
@@ -278,20 +299,15 @@
y = y[~bad_data_mask]
return x, y
- def _init_interp_method(self, interp_arg, kw):
- """
- returns the interpolating function specified by interp_arg.
- """
- # FIXME : error checking specific to interpolation method. x and y long
- # enough for order-3 spline if that's indicated, etc. Functions should throw
- # errors themselves, but errors at instantiation would be nice.
-
+ def _init_interp_method(self, interp_arg):
+ """ returns the interpolating function specified by interp_arg.
+ """
from inspect import isclass, isfunction
# primary usage : user passes a string indicating a known function
+ # pick interpolator accordingly
if isinstance(interp_arg, basestring):
- interpolator, kw = dict_of_interp_types.setdefault(interp_arg, (None, {}) )
-
+ interpolator = method_register.get(interp_arg, None)
if interpolator is None:
raise TypeError, "input string %s not valid" % interp_arg
else:
@@ -301,29 +317,29 @@
if hasattr(interpolator, '__call__'):
# function
if isfunction(interpolator):
- result = lambda newx : interpolator(self._x, self._y, newx, **kw)
+ result = lambda newx : interpolator(self._x, self._y, newx)
# callable class
elif isclass(interpolator):
if hasattr(interpolator, 'set_xy'):
- result = interpolator(**kw)
+ result = interpolator()
result.set_xy(self._x, self._y)
elif hasattr(interpolator, 'init_xy'):
- result = interpolator(**kw)
+ result = interpolator()
result.init_xy(self._x, self._y)
else:
- result = interpolator(self._x, self._y, **kw)
+ result = interpolator(self._x, self._y)
# instance of callable class
else:
if hasattr(interpolator, 'init_xy'):
result = interpolator
- result.init_xy(self._x, self._y, **kw)
+ result.init_xy(self._x, self._y)
elif hasattr(interpolator, 'set_xy'):
result = interpolator
- result.set_xy(self._x, self._y, **kw)
+ result.set_xy(self._x, self._y)
else:
- result = interpolator
+ result = lambda new_x : interpolator(self._x, self._y, new_x)
# noncallable : user has passed a default value to always be returned
else:
@@ -332,49 +348,38 @@
return result
def __call__(self, newx):
- """
- Input x must be a list or NumPy array in sorted order.
+ """ Input x must be a list or NumPy array in sorted order.
Breaks x into pieces in-range, below-range, and above-range.
Performs appropriate operation on each and concatenates results.
"""
- # FIXME : atleast_1d_and_contiguous may also be called within the interpolation technique.
- # waste of time, but ok for the time being.
- # if input is scalar or 0-dimensional array, output will be scalar
+ # record if input is scalar or 0-dimensional array, in which case output will be scalar
input_is_scalar = np.isscalar(newx) or \
(
isinstance( newx , np.ndarray ) and
np.shape(newx) == ()
)
- # make
- newx_array = atleast_1d_and_contiguous(newx)
+ # make input into a nice 1d, contiguous array
+ newx_array = atleast_1d_and_contiguous(newx, dtype=self._xdtype)
+ assert newx_array.ndim == 1, "new_x can be at most 1-dimensional"
# masks indicate which elements fall into which interpolation region
low_mask = newx_array<self._x[0]
high_mask = newx_array>self._x[-1]
interp_mask = (~low_mask) & (~high_mask)
- type(newx_array[low_mask])
-
-
- # use correct function for x values in each region
- if len(newx_array[low_mask]) == 0: new_low=np.array([]) # FIXME : remove need for if/else.
- # if/else is a hack, since vectorize is failing
- # to work on lists/arrays of length 0
- # on the computer where this is being
- # developed
+ # use correct function for x values in each region and create output as an array
+ if len(newx_array[low_mask]) == 0: new_low=np.array([]) # FIXME : remove need for if/else hack.
+ # it's there since vectorize is failing on arrays of zero length
else: new_low = self.extrap_low(newx_array[low_mask])
if len(newx_array[interp_mask])==0: new_interp=np.array([])
else: new_interp = self.interp(newx_array[interp_mask])
if len(newx_array[high_mask]) == 0: new_high = np.array([])
else: new_high = self.extrap_high(newx_array[high_mask])
+ result_array = np.concatenate((new_low, new_interp, new_high))
- result_array = np.concatenate((new_low, new_interp, new_high)) # FIXME : deal with mixed datatypes
- # Would be nice to say result = zeros(dtype=?)
- # and fill in
-
# convert to scalar if scalar was passed in
if input_is_scalar:
result = float(result_array)
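The three-region evaluation performed by __call__ can be sketched in plain Python. This per-point version is an assumption-laden simplification: the real code evaluates each region's points as a batch and concatenates the results, which relies on new_x being sorted.

```python
# Sketch of the region split in __call__: each query point is handled
# by the low-extrapolation, interpolation, or high-extrapolation rule
# depending on where it falls relative to the known x range [lo, hi].
def split_eval(new_x, lo, hi, low_rule, interp_rule, high_rule):
    out = []
    for v in new_x:
        if v < lo:
            out.append(low_rule(v))
        elif v > hi:
            out.append(high_rule(v))
        else:
            out.append(interp_rule(v))
    return out

result = split_eval([-1.0, 0.5, 2.0], 0.0, 1.0,
                    low_rule=lambda v: -99.0,
                    interp_rule=lambda v: v,
                    high_rule=lambda v: 99.0)
print(result)  # [-99.0, 0.5, 99.0]
```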