[Numpy-discussion] Reading a big netcdf file
Wed Aug 3 11:46:18 CDT 2011
Here are my values for your comparison:
The test.nc file is about 715 MB. The details are below:
In : import netCDF4
In : netCDF4.__version__
In : import numpy as np
In : np.__version__
In : from netCDF4 import Dataset
In : f = Dataset("test.nc")
In : f.variables['reflectivity'].shape
Out: (6, 18909, 506)
In : f.variables['reflectivity'].size
Out: 57407724
In : f.variables['reflectivity'][:].dtype
In : timeit z = f.variables['reflectivity'][:]
1 loops, best of 3: 731 ms per loop
How long does it take on your side to read that big array?
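
As for making it faster: for the GEBCO grid described in your message below,
reading the whole variable with one full slice and then reshaping the result
should be much faster than looping over elements or rows through the netCDF4
interface. A minimal sketch, assuming the file is named gebco_1min.nc and the
elevation variable is called 'z' (check f.variables.keys() for the actual
names in your file):

from netCDF4 import Dataset

f = Dataset("gebco_1min.nc")   # assumed file name
var = f.variables["z"]         # assumed variable name
# A single full-slice read pulls the whole 1-D array of 2-byte signed
# integers into memory in one call; reshape it to one row per minute
# of latitude (10801 rows of 21601 longitude values each).
elev = var[:].reshape(10801, 21601)
f.close()

With the layout described below, elev[0] should then be the 90°N band and
elev[-1] the 90°S band.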
On Wed, Aug 3, 2011 at 10:30 AM, Kiko <email@example.com> wrote:
> I'm trying to read a big netCDF file (445 MB) using netcdf4-python.
> The data are described as:
> The GEBCO gridded data set is stored in NetCDF as a one-dimensional array
> of 2-byte signed integers that represent integer elevations in metres.
> The complete data set gives global coverage. It consists of 21601 x 10801
> data values, one for each one minute of latitude and longitude, for
> 233312401 points in total.
> The data start at position 90°N, 180°W and are arranged in bands of 360
> degrees x 60 points/degree + 1 = 21601 values. The data range eastward from
> 180°W longitude to 180°E longitude, i.e. the 180° value is repeated.
> The problem is that it is very slow (or I am quite a newbie).
> Does anyone have a suggestion for getting these data into a numpy array faster?
> Thanks in advance.