era5_hourly_reanalysis_single_levels_sa
ERA5 hourly estimates of variables on single levels
Load in Python
```python
from intake import open_catalog

cat = open_catalog("https://raw.githubusercontent.com/pangeo-data/pangeo-datastore/master/intake-catalogs/atmosphere.yaml")
ds = cat["era5_hourly_reanalysis_single_levels_sa"].to_dask()
```
Working with requester pays data
Several of the datasets within the cloud data catalog are contained in requester pays storage buckets. This means that a user requesting data must provide their own billing project (created and authenticated through Google Cloud Platform) to be billed for the charges associated with accessing a dataset. To set up a GCP billing project and use it for authentication in applications:

- Create a project on GCP; if this is your first time using GCP, a prompt will appear asking you to choose a Google account to link to all GCP-related activities.
- Create a Cloud Billing account associated with the project and enable billing for the project through this account.
- Using Google Cloud IAM, add the Service Usage Consumer role to your account, which enables it to make billed requests on behalf of the project.
- On the command line, install the Google Cloud SDK; this can be done using conda:

```shell
conda install -c conda-forge google-cloud-sdk
```

- Initialize the gcloud command line interface, logging into the account used to create the aforementioned project and selecting it as the default project; this allows the project to be used for requester pays access through the command line:

```shell
gcloud auth login
gcloud init
```

- Finally, use gcloud to establish application default credentials; these allow the project to be used for requester pays access through applications:

```shell
gcloud auth application-default login
```
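Once application default credentials are in place, the billing project can be passed to cloud-reading tools. A minimal sketch, assuming a gcsfs/fsspec-style keyword interface (the project ID below is a placeholder, and the exact option names should be checked against your installed gcsfs version):

```python
# Sketch: storage options for reading from a requester pays bucket via
# fsspec/gcsfs-backed libraries such as xarray or intake.
# "your-billing-project" is a hypothetical placeholder for your GCP project ID.
storage_options = {
    "requester_pays": True,           # bill read requests to the project below
    "project": "your-billing-project",
    "token": "google_default",        # use the application default credentials
}

# These options would then be forwarded to the filesystem layer, e.g.:
#   ds = xarray.open_zarr("gs://bucket/path", storage_options=storage_options)
print(storage_options["requester_pays"])
```

The same dictionary shape is what fsspec forwards to `gcsfs.GCSFileSystem` under the hood, so it can also be passed directly when constructing a filesystem object.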
Metadata
url  | https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-single-levels
tags | ocean, model, atmosphere
Dataset Contents
```
xarray.Dataset
Dimensions:    (latitude: 721, longitude: 1440, time: 350640)
Coordinates:
  * latitude   (latitude)  float32  90.0 89.75 89.5 ... -89.5 -89.75 -90.0
        long_name: latitude
        units:     degrees_north
  * longitude  (longitude) float32  0.0 0.25 0.5 ... 359.25 359.5 359.75
        long_name: longitude
        units:     degrees_east
  * time       (time)      datetime64[ns]  1979-01-01 ... 2018-12-31T23:00:00
        long_name: time
Data variables:
    asn        (time, latitude, longitude)  float32  dask.array<chunksize=(31, 721, 1440), meta=np.ndarray>
        long_name: Snow albedo
        units:     (0 - 1)
    d2m        (time, latitude, longitude)  float32  dask.array<chunksize=(31, 721, 1440), meta=np.ndarray>
        long_name: 2 metre dewpoint temperature
        units:     K
```

Each data variable has shape (350640, 721, 1440), about 1.46 TB of float32 data, stored as 11311 dask chunks of shape (31, 721, 1440) (128.74 MB each).
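The per-variable byte counts reported above follow directly from the array shapes and the 4-byte float32 dtype; a quick arithmetic check:

```python
from math import prod

itemsize = 4                      # bytes per float32 element
shape = (350640, 721, 1440)       # (time, latitude, longitude)
chunk = (31, 721, 1440)           # dask chunk shape: 31 time steps per block

total_bytes = itemsize * prod(shape)   # full variable size
chunk_bytes = itemsize * prod(chunk)   # one chunk's size

print(f"{total_bytes / 1e12:.2f} TB, {chunk_bytes / 1e6:.2f} MB")
# → 1.46 TB, 128.74 MB
```

Because each chunk spans the full horizontal grid but only 31 hourly steps, time-series extractions at a single point still have to read every chunk, while global snapshots touch only one.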