[Numpy-discussion] fixing up datetime
Tue Jun 7 18:16:25 CDT 2011
Thanks for all the feedback on the datetime, it's very useful to help
understand the timeseries ideas, in particular with the many examples you're
providing.
One overall impression I have about timeseries in general is the use of the
term "frequency" synonymously with the time unit. To me, a frequency is a
numerical quantity with a unit of 1/(time unit), so while it's related to
the time unit, naming it the same is a choice made by the specific timeseries
domain. I think the numpy datetime class shouldn't have
anything called "frequency" in it, and I would like to remove the current
usage of that terminology from the codebase.
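For concreteness, in released NumPy the unit is spelled directly in the dtype, so no frequency terminology is needed:

```python
import numpy as np

# The time unit is part of the datetime64 dtype itself, written in brackets:
print(np.dtype('M8[D]'))   # datetime64[D]  (day unit)
print(np.dtype('M8[ns]'))  # datetime64[ns] (nanosecond unit)
```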
In Wes's comment, he said:
> I'm hopeful that the datetime64 dtype will enable scikits.timeseries
> and pandas to consolidate much of their datetime / frequency code.
> scikits.timeseries has a ton of great stuff for generating dates with
> all the standard fixed frequencies.
implying to me that the important functionality needed in time series is the
ability to generate arrays of dates in specific ways. I suspect equating the
specification of the array of dates and the unit of precision used to store
the date isn't good for either the datetime functionality or supporting
timeseries, and I'm presently trying to understand what it is that
timeseries usage actually requires.
On Tue, Jun 7, 2011 at 7:34 AM, Dave Hirschfeld wrote:
> As a user of numpy/scipy in finance I thought I would put in my 2p worth as
> it's something which is of great importance in this area.
> I'm currently a heavy user of the scikits.timeseries package by Matt & Pierre,
> and I'm also following the development of statsmodels and pandas should we
> require more sophisticated statistics in future. Hopefully the numpy datetime
> type will provide a foundation such packages can build upon...
> I'll use the timeseries package for reference since I'm most familiar with it,
> and it's a very good api for my requirements. Apologies to Matt/Pierre if
> I get anything wrong - feel free to correct my misconceptions...
> I think some of the complexity is coming from the definition of the timedelta.
> In the timeseries package each date simply represents the number of periods
> since the epoch, and the difference between dates is therefore just an integer
> with no attached metadata - its meaning is determined by the context it's
> used in. e.g.
> In : M1 = ts.Date('M',"01-Jan-2011")
> In : M2 = ts.Date('M',"01-Jan-2012")
> In : M2 - M1
> Out: 12
> timeseries gets on just fine without a timedelta type - a timedelta is just an
> integer, and if you add an integer to a date it's interpreted as the number of
> periods of that date's frequency. From a usability point of view M1 + 1 is
> much nicer than having to do something like M1 + ts.TimeDelta(M1.freq, 1).
I think the timedelta is important, especially with the large number of
units NumPy's datetime supports. When you're subtracting two nanosecond
datetimes and two minute datetimes in the same code, having the units there
to avoid confusion is pretty useful. Ideally, timedelta would just be a
regular integer or float with a time unit associated, but NumPy doesn't have
a physical units system integrated at present.
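A small sketch of what that buys you: the difference of two datetimes carries its unit along, so mixed-precision code fails loudly instead of silently reinterpreting bare integers.

```python
import numpy as np

# Subtracting datetime64 values yields a timedelta64 that keeps the unit:
t1 = np.datetime64('2011-06-07T12:00', 'm')
t2 = np.datetime64('2011-06-07T12:30', 'm')
dt = t2 - t1
print(dt)        # 30 minutes
print(dt.dtype)  # timedelta64[m]
```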
> Something like the dateutil relativedelta package is very convenient and
> could serve as a template for such functionality:
> In : from dateutil.relativedelta import relativedelta
> In : (D1 + 30).datetime
> Out: datetime.datetime(2011, 1, 31, 0, 0)
> In : (D1 + 30).datetime + relativedelta(months=1)
> Out: datetime.datetime(2011, 2, 28, 0, 0)
> ...but you can still get the same behaviour without a timedelta by asking
> the user to explicitly specify what they mean by "adding one month" to a date
> of a different frequency. e.g.
> In : (D1 + 30)
> Out: <D : 31-Jan-2011>
> In : _62.asfreq('M') + 1
> Out: <M : Feb-2011>
> In : (_62.asfreq('M') + 1).asfreq('D','END')
> Out: <D : 28-Feb-2011>
> In : (_62.asfreq('M') + 1).asfreq('D','START') + _62.day
> Out: <D : 04-Mar-2011>
I don't envision 'asfreq' being a datetime function; this is the kind of
thing that would layer on top in a specialized timeseries library. The
behavior of timedelta follows a more physics-like idea with regard to the
time unit, and I don't think something more complicated belongs at the
bottom layer that is shared among all datetime uses.
Here's a rough approximation of your calculations above:
>>> d = np.datetime64('2011-01-31')
>>> d.astype('M8[M]') + 1
>>> (d.astype('M8[M]') + 2) - np.timedelta64(1, 'D')
>>> (d.astype('M8[M]') + 1) + np.timedelta64(31, 'D')
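(In released NumPy, month and day units can't be mixed directly because months have variable length, so the same results need an explicit cast to day precision; a sketch against the released API:)

```python
import numpy as np

d = np.datetime64('2011-01-31')
m = d.astype('M8[M]')  # truncate to month precision: 2011-01
print(m + 1)                                              # 2011-02
print((m + 2).astype('M8[D]') - np.timedelta64(1, 'D'))   # 2011-02-28
print((m + 1).astype('M8[D]') + np.timedelta64(31, 'D'))  # 2011-03-04
```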
> As Pierre noted, when converting dates from a lower frequency to a higher one
> it's very useful (essential!) to be able to specify whether you want the end
> or the start of the interval. It may also be useful to be able to specify an
> arbitrary offset from either the start or the end of the interval so you can
> do something like:
> In : (_62.asfreq('M') + 1).asfreq('D', offset=0)
> Out: <D : 01-Feb-2011>
> In : (_62.asfreq('M') + 1).asfreq('D', offset=-1)
> Out: <D : 28-Feb-2011>
> In : (_62.asfreq('M') + 1).asfreq('D', offset=15)
> Out: <D : 16-Feb-2011>
I think this kind of functionality belongs at a higher level, but the idea
is to make it reasonable to implement it with the NumPy datetime primitives:
>>> (d.astype('M8[M]') + 1).astype('M8[D]')
>>> ((d.astype('M8[M]') + 1) + 1) - np.timedelta64(1, 'D')
>>> (d.astype('M8[M]') + 1) + np.timedelta64(15, 'D')
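A minimal sketch of how such an offset conversion could layer on top of the primitives (the helper name `month_offset` is mine, not part of NumPy):

```python
import numpy as np

def month_offset(month, offset):
    """Return the day at `offset` from the start of `month` (datetime64[M]);
    negative offsets count back from the end of the month."""
    start = month.astype('M8[D]')
    if offset >= 0:
        return start + np.timedelta64(offset, 'D')
    # Last day of the month, then step back:
    end = (month + 1).astype('M8[D]') - np.timedelta64(1, 'D')
    return end + np.timedelta64(offset + 1, 'D')

feb = np.datetime64('2011-02')
print(month_offset(feb, 0))    # 2011-02-01
print(month_offset(feb, -1))   # 2011-02-28
print(month_offset(feb, 15))   # 2011-02-16
```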
> I don't think it's useful to define higher 'frequencies' as arbitrary multiples
> of lower 'frequencies' unless the conversion is exact, otherwise it leads
> to the following inconsistencies:
> In : days_per_month = 30
> In : D1 = M1.asfreq('D',relation='START')
> In : D2 = M2.asfreq('D','START')
> In : D1, D2
> Out: (<D : 01-Jan-2011>, <D : 01-Jan-2012>)
> In : D1 + days_per_month*(M2 - M1)
> Out: <D : 27-Dec-2011>
> In : D1 + days_per_month*(M2 - M1) == D2
> Out: False
Here's what I get:
>>> d1, d2 = np.datetime64('2011-01-01'), np.datetime64('2012-01-01')
>>> m1, m2 = d1.astype('M8[M]'), d2.astype('M8[M]')
>>> d1 + 30 * (m2 - m1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Cannot get a common metadata divisor for types
dtype('datetime64[D]') and dtype('timedelta64[M]') because they have
incompatible nonlinear base time units
>>> d1 + 30 * (m2 - m1).astype('i8')
> If I want the number of days between M1 and M2 I explicitly do the following:
> In : M2.asfreq('D','START') - M1.asfreq('D','START')
> Out: 365
> thus avoiding any inconsistency:
> In : D1 + (M2.asfreq('D','START') - M1.asfreq('D','START')) == D2
> Out: True
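The same explicit approach round-trips consistently with the NumPy primitives:

```python
import numpy as np

d1, d2 = np.datetime64('2011-01-01'), np.datetime64('2012-01-01')
# Differencing at day precision gives the exact number of days between
# the two dates, so adding the difference back is consistent:
delta = d2 - d1
print(delta)             # 365 days
print(d1 + delta == d2)  # True
```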
> I'm not convinced about the events concept - it seems to add complexity
> for something which could be accomplished better in other ways. A [Y]//4
> dtype is better specified as a [3M] dtype, and a [D]//100 as an [864S] one. There
> may well be a good reason for it, however I can't see the need for it in my
> own applications.
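For what it's worth, those alternatives are already expressible as unit multiples in the dtype notation:

```python
import numpy as np

# Multiples of a base unit are themselves valid datetime64 units:
print(np.dtype('M8[3M]'))   # datetime64[3M] -- quarters
print(np.dtype('M8[7D]'))   # datetime64[7D] -- 7-day weeks
```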
> In the timeseries package, because the difference between dates represents the
> number of periods between the dates, they must be of the same frequency to
> unambiguously define what a "period" means:
> In : M1 - D1
> ValueError Traceback (most recent call last)
> C:\dev\code\<ipython console> in <module>()
> ValueError: Cannot subtract Date objects with different frequencies.
> I would argue that in the spirit that we're all consenting adults,
> adding dates of the same frequency can be a useful thing, for example
> in finding the mid-point between two dates:
> In : M1.asfreq('S','START')
> Out: <S : 01-Jan-2011 00:00:00>
> In : M2.asfreq('S','START')
> Out: <S : 01-Jan-2012 00:00:00>
> In : ts.Date('S', (_64.value + _65.value)//2)
> Out: <S : 02-Jul-2011 12:00:00>
Adding dates definitely doesn't work, because datetimes have no zero, but I
would express it like this:
>>> s1, s2 = m1.astype('M8[s]'), m2.astype('M8[s]')
>>> s1 + (s2 - s1)/2
>>> np.datetime_as_string(s1 + (s2 - s1)/2)
Printing times in the local timezone by default makes that first printout a
bit weird, but I really like having that default, so this looks good.
> I think any errors which arose from adding or multiplying dates would be
> easy to spot in your code.
> As Robert mentioned, the economic data we use is often supplied as weekly,
> monthly, quarterly or annual data. So these frequencies are critical if we're
> to use the array as a container for economic data. Such data would
> represent either the sum or the average over that period, so it's very easy to
> get a consistent "linear-time" representation by interpolating down to a
> frequency such as daily or hourly.
> I really like the idea of being able to specify multiples of the base unit
> - e.g. [7D] is equivalent to [W] - not least because it provides an easy
> way to specify quarters [3M] or seasons [6M], which are important in my work.
> NB: I also deal with half-hourly and quarter-hourly timeseries, and I'm sure
> there are many other examples which are all made possible by allowing multiples.
> One aspect of this is that the origin becomes important - i.e. does the
> [7D] start on Monday/Tuesday etc. In scikits.timeseries this is solved by
> defining a different weekly frequency for each day of the week, a different
> annual frequency starting at each month etc...
> I'm thinking however that it may be possible to use the origin
> attribute to define the start of such periods - e.g. if you had a weekly
> frequency and the origin was 01-Jan-1970 then each week would be defined as
> Thursday-Thursday week. To get a Monday-Monday week you could supply
> 05-Jan-1970 as the origin attribute.
This is one of the things where I think mixing the datetime storage
precision with timeseries frequency seems counterproductive. Having
different origins for datetime64 starting on different weekdays near
1970-01-01 doesn't seem like the right way to tackle the problem to me. I
see other valid reasons for reintroducing the origin metadata, but this one
I don't really like.
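For reference, the epoch anchoring Dave describes is visible in the week unit today (a sketch with released NumPy; 1970-01-01 was a Thursday):

```python
import numpy as np

# Casting to the week unit truncates to the week containing the date,
# with weeks anchored at the epoch, 1970-01-01 (a Thursday):
d = np.datetime64('2011-06-07')   # a Tuesday
w = d.astype('M8[W]')
print(w.astype('M8[D]'))          # 2011-06-02, the preceding Thursday
```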
> Unfortunately business days and holidays are also very important in finance,
> however I agree that this may be better suited to a calendar API. I would
> suggest that leap seconds would be something which could also be handled by
> this API rather than having such complex functionality baked in by default.
I've got a business day API in development, and will post it for feedback
> I'm not sure how this could be implemented in practice except for some vague
> thoughts about providing hooks where users could provide functions which
> converted to and from an integer representation for their particular
> calendar. Creating a weekday calendar would be a good test-case for such
> an API.
> Apologies for the very long post! I guess it can be mostly summarised as:
> you've got a pretty good template for functionality in scikits.timeseries!
> Pandas/statsmodels may have more sophisticated requirements though so their
> input on the finance/econometric side would be useful...
Thanks again for the feedback!