Title: Get Spanish Origin-Destination Data
Description: Enables access to origin-destination (OD) data provided by the Spanish Ministry of Transport, hosted at <https://www.transportes.gob.es/ministerio/proyectos-singulares/estudios-de-movilidad-con-big-data/opendata-movilidad>. It contains functions for downloading zone boundaries and associated origin-destination data. The OD datasets are large. The package eases working with them by using the database interface package 'duckdb', by using an optional environment variable 'SPANISH_OD_DATA_DIR' to avoid repeated downloads, and by providing documentation demonstrating how to collect subsets of the resulting databases into memory.
Authors: Egor Kotov [aut, cre], Robin Lovelace [aut], Eugeni Vidal-Tortosa [ctb]
Maintainer: Egor Kotov <[email protected]>
License: MIT + file LICENSE
Version: 0.0.1
Built: 2024-11-19 05:12:51 UTC
Source: https://github.com/rOpenSpain/spanishoddata
Get a table with links to available data files for the specified data version. Optionally check (see arguments) whether certain files have already been downloaded into the cache directory specified with the SPANISH_OD_DATA_DIR environment variable or a custom path specified with the data_dir argument.
spod_available_data( ver = 2, check_local_files = FALSE, quiet = FALSE, data_dir = spod_get_data_dir() )
ver |
Integer. Can be 1 or 2. The version of the data to use. v1 spans 2020-2021, v2 covers 2022 and onwards. |
check_local_files |
Whether to check if the local files exist. Defaults to FALSE. |
quiet |
A logical. If TRUE, suppresses messages and progress output. Defaults to FALSE. |
data_dir |
The directory where the data is stored. Defaults to the value returned by spod_get_data_dir(). |
A tibble with links, release dates of files in the data, dates of data coverage, local paths to files, and the download status.
character. The URL link to the data file.
POSIXct. The timestamp of when the file was published.
character. The file extension of the data file (e.g., 'tar', 'gz').
Date. The year and month of the data coverage, if available.
Date. The specific date of the data coverage, if available.
character. The local file path where the data is stored.
logical. Indicator of whether the data file has been downloaded locally.
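A minimal usage sketch, assuming the package is attached and the cache directory path below (which is purely illustrative) exists:

```r
library(spanishoddata)

# Point the package at a cache directory (illustrative path)
Sys.setenv(SPANISH_OD_DATA_DIR = "~/spanish_od_data")

# List all files available in v2 of the data and check the local cache
available <- spod_available_data(ver = 2, check_local_files = TRUE)

# Inspect the columns: links, publication timestamps, coverage dates,
# local paths, and download status
dplyr::glimpse(available)
```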
Opens relevant vignette.
spod_codebook(ver = 1)
ver |
An integer: 1 or 2, the version of the data codebook vignette to open. |
Nothing, calls relevant vignette.
This function allows the user to quickly connect to the data converted to DuckDB with the spod_convert_to_duckdb() function. It is a simplification of the connection process.
spod_connect(
  data_path,
  target_table_name = NULL,
  quiet = FALSE,
  max_mem_gb = max(4, spod_available_ram() - 4),
  max_n_cpu = parallelly::availableCores() - 1,
  temp_path = spod_get_temp_dir()
)
data_path |
A path to the DuckDB database file or to the folder of parquet files created with spod_convert(). |
target_table_name |
The name of the table to connect to in the database. Default is NULL. |
quiet |
A logical. If TRUE, suppresses messages and progress output. Defaults to FALSE. |
max_mem_gb |
The maximum memory to use in GB. Defaults to the larger of 4 GB and the available RAM minus 4 GB, which should be enough for resaving the data to DuckDB from a folder of CSV.gz files while being small enough to fit in the memory of even most older computers. For data analysis using the already converted data (in DuckDB or Parquet format) or with the raw CSV.gz data, it is recommended to increase it according to available resources. |
max_n_cpu |
The maximum number of threads to use. Defaults to the number of available cores minus 1. |
temp_path |
The path to the temp folder for DuckDB for intermediate spilling in case the set memory limit and/or physical memory of the computer is too low to perform the query. By default this is set to the value returned by spod_get_temp_dir(). |
A DuckDB table connection object.
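A usage sketch, assuming data was previously converted with spod_convert(); the file path and the `date` column name are hypothetical, so check your converted data for the actual names:

```r
library(spanishoddata)
library(dplyr)

# Connect to a previously converted DuckDB file (hypothetical path)
mydata <- spod_connect("~/spanish_od_data/od_districts.duckdb")

# Lazy query; nothing is loaded into memory until collect()
daily_counts <- mydata |>
  group_by(date) |>       # `date` is an assumed column name
  summarise(n_rows = n()) |>
  collect()

# Close the connection when done
spod_disconnect(mydata)
```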
Converts data for faster analysis into either a DuckDB file or into parquet files in a hive-style directory structure. Running analysis on these files is sometimes 100 times faster than working with raw CSV files, especially when these are in gzip archives. To connect to converted data, please use mydata <- spod_connect(), passing the path to where the data was saved. The connected mydata can be analysed using dplyr functions such as select(), filter(), mutate(), group_by(), summarise(), etc. At the end of any sequence of commands you will need to add collect() to execute the whole chain of data manipulations and load the results into memory in an R data.frame/tibble. For more in-depth usage of such data, please refer to the DuckDB documentation and examples at https://duckdb.org/docs/api/r#dbplyr . Some more useful examples can be found at https://arrow-user2022.netlify.app/data-wrangling#combining-arrow-with-duckdb . You may also use the arrow package to work with parquet files: https://arrow.apache.org/docs/r/ .
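The convert-then-connect workflow described above can be sketched as follows (the date range is illustrative):

```r
library(spanishoddata)

# Convert one week of district-level OD data to DuckDB for faster analysis
db_path <- spod_convert(
  type = "od",
  zones = "distr",
  dates = c(start = "2020-03-01", end = "2020-03-07"),
  save_format = "duckdb"
)

# Connect to the converted data and analyse it lazily with dplyr
mydata <- spod_connect(db_path)
```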
spod_convert(
  type = c("od", "origin-destination", "os", "overnight_stays", "nt", "number_of_trips"),
  zones = c("districts", "dist", "distr", "distritos", "municipalities", "muni", "municip", "municipios"),
  dates = NULL,
  save_format = "duckdb",
  save_path = NULL,
  overwrite = FALSE,
  data_dir = spod_get_data_dir(),
  quiet = FALSE,
  max_mem_gb = max(4, spod_available_ram() - 4),
  max_n_cpu = parallelly::availableCores() - 1,
  max_download_size_gb = 1
)
type |
The type of data to download. Can be "od" (or "origin-destination") for origin-destination data, "os" (or "overnight_stays") for overnight stays data, or "nt" (or "number_of_trips") for the number of trips data. |
zones |
The zones for which to download the data. Can be "districts" (or "dist", "distr", "distritos") or "municipalities" (or "muni", "municip", "municipios"). |
dates |
A vector of dates to process. Possible values: a character or Date vector of dates (e.g. c("2020-03-20", "2020-03-24")), a named character vector with start and end dates (e.g. c(start = "2020-03-20", end = "2020-03-24")), or a regular expression matching dates (e.g. "2020032[0-4]"). |
save_format |
A character: either "duckdb" (the default) or "parquet". |
save_path |
A character path to where the converted data should be saved. Default is NULL. |
overwrite |
A logical. Whether to overwrite existing converted data at save_path. Defaults to FALSE. |
data_dir |
The directory where the data is stored. Defaults to the value returned by spod_get_data_dir(). |
quiet |
A logical. If TRUE, suppresses messages and progress output. Defaults to FALSE. |
max_mem_gb |
The maximum memory to use in GB. Defaults to the larger of 4 GB and the available RAM minus 4 GB, which should be enough for resaving the data to DuckDB from a folder of CSV.gz files while being small enough to fit in the memory of even most older computers. For data analysis using the already converted data (in DuckDB or Parquet format) or with the raw CSV.gz data, it is recommended to increase it according to available resources. |
max_n_cpu |
The maximum number of threads to use. Defaults to the number of available cores minus 1. |
max_download_size_gb |
The maximum download size in gigabytes. Defaults to 1. |
Path to the saved DuckDB file (or to the folder of parquet files, if save_format = "parquet").
This function ensures that DuckDB connections to CSV.gz files (created via spod_get()), as well as to DuckDB files or folders of parquet files (created via spod_convert()), are closed properly to prevent conflicting connections. Essentially this is just a wrapper around DBI::dbDisconnect() that reaches into the .$src$con object of the tbl_duckdb_connection connection object that is returned to the user via spod_get() and spod_connect(). After disconnecting the database, it also frees up memory by running gc().
spod_disconnect(tbl_con, free_mem = TRUE)
tbl_con |
A tbl_duckdb_connection connection object obtained via spod_get() or spod_connect(). |
free_mem |
A logical. Whether to free memory by running gc() after disconnecting. Defaults to TRUE. |
## Not run:
od_distr <- spod_get(
  "od",
  zones = "distr",
  dates = c("2020-01-01", "2020-01-02")
)
spod_disconnect(od_distr)
## End(Not run)
This function downloads the data files of the specified type, zones, dates and data version.
spod_download(
  type = c("od", "origin-destination", "os", "overnight_stays", "nt", "number_of_trips"),
  zones = c("districts", "dist", "distr", "distritos", "municipalities", "muni", "municip", "municipios", "lua", "large_urban_areas", "gau", "grandes_areas_urbanas"),
  dates = NULL,
  max_download_size_gb = 1,
  data_dir = spod_get_data_dir(),
  quiet = FALSE,
  return_local_file_paths = FALSE
)
type |
The type of data to download. Can be "od" (or "origin-destination") for origin-destination data, "os" (or "overnight_stays") for overnight stays data, or "nt" (or "number_of_trips") for the number of trips data. |
zones |
The zones for which to download the data. Can be "districts" (or "dist", "distr", "distritos"), "municipalities" (or "muni", "municip", "municipios"), or "lua" (or "large_urban_areas", "gau", "grandes_areas_urbanas"). |
dates |
A vector of dates to process. Possible values: a character or Date vector of dates (e.g. c("2020-03-20", "2020-03-24")), a named character vector with start and end dates (e.g. c(start = "2020-03-20", end = "2020-03-24")), or a regular expression matching dates (e.g. "2020032[0-4]"). |
max_download_size_gb |
The maximum download size in gigabytes. Defaults to 1. |
data_dir |
The directory where the data is stored. Defaults to the value returned by spod_get_data_dir(). |
quiet |
A logical. If TRUE, suppresses messages and progress output. Defaults to FALSE. |
return_local_file_paths |
Logical. If TRUE, returns a character vector of local paths to the downloaded files. Defaults to FALSE. |
Nothing. If return_local_file_paths = TRUE, a character vector of the paths to the downloaded files.
## Not run:
# Download the origin-destination data on district level for a date range in March 2020
spod_download(
  type = "od", zones = "districts",
  dates = c(start = "2020-03-20", end = "2020-03-24")
)
# Download the origin-destination data on district level for select dates in 2020 and 2021
spod_download(
  type = "od", zones = "dist",
  dates = c("2020-03-20", "2020-03-24", "2021-03-20", "2021-03-24")
)
# Download the origin-destination data on municipality level using regex for a date range
# in March 2020 (the regex will capture the dates 2020-03-20 to 2020-03-24)
spod_download(
  type = "od", zones = "municip",
  dates = "2020032[0-4]"
)
## End(Not run)
This function creates a DuckDB lazy table connection object from the specified type and zones. It checks for missing data and downloads it if necessary. The connection is made to the raw CSV files in gzip archives, so analysing the data through this connection may be slow if you select more than a few days. You can manipulate this object using {dplyr} functions such as select(), filter(), mutate(), group_by(), summarise(), etc. At the end of any sequence of commands you will need to add collect() to execute the whole chain of data manipulations and load the results into memory in an R data.frame/tibble. See the codebooks for v1 and v2 data in the vignettes with spod_codebook(1) and spod_codebook(2).
If you want to analyse longer periods of time (especially several months or even the whole data over several years), consider using spod_convert() and then spod_connect().
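A sketch of the lazy dplyr workflow described above. The id_origin and id_destination column names are documented for the zones data below; n_trips is an assumed name for the trip-count column, so check the codebook via spod_codebook() for the exact name:

```r
library(spanishoddata)
library(dplyr)

# Connect lazily to one day of district-level OD data
od <- spod_get(type = "od", zones = "distr", dates = "2020-03-14")

# Build a lazy query; collect() at the end pulls the result into memory
top_flows <- od |>
  group_by(id_origin, id_destination) |>
  summarise(total = sum(n_trips, na.rm = TRUE), .groups = "drop") |>  # n_trips is assumed
  arrange(desc(total)) |>
  head(10) |>
  collect()

spod_disconnect(od)
```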
spod_get(
  type = c("od", "origin-destination", "os", "overnight_stays", "nt", "number_of_trips"),
  zones = c("districts", "dist", "distr", "distritos", "municipalities", "muni", "municip", "municipios", "lua", "large_urban_areas", "gau", "grandes_areas_urbanas"),
  dates = NULL,
  data_dir = spod_get_data_dir(),
  quiet = FALSE,
  max_mem_gb = max(4, spod_available_ram() - 4),
  max_n_cpu = parallelly::availableCores() - 1,
  max_download_size_gb = 1,
  duckdb_target = ":memory:",
  temp_path = spod_get_temp_dir()
)
type |
The type of data to download. Can be "od" (or "origin-destination") for origin-destination data, "os" (or "overnight_stays") for overnight stays data, or "nt" (or "number_of_trips") for the number of trips data. |
zones |
The zones for which to download the data. Can be "districts" (or "dist", "distr", "distritos"), "municipalities" (or "muni", "municip", "municipios"), or "lua" (or "large_urban_areas", "gau", "grandes_areas_urbanas"). |
dates |
A vector of dates to process. Possible values: a character or Date vector of dates (e.g. c("2020-03-20", "2020-03-24")), a named character vector with start and end dates (e.g. c(start = "2020-03-20", end = "2020-03-24")), or a regular expression matching dates (e.g. "2020032[0-4]"). |
data_dir |
The directory where the data is stored. Defaults to the value returned by spod_get_data_dir(). |
quiet |
A logical. If TRUE, suppresses messages and progress output. Defaults to FALSE. |
max_mem_gb |
The maximum memory to use in GB. Defaults to the larger of 4 GB and the available RAM minus 4 GB, which should be enough for resaving the data to DuckDB from a folder of CSV.gz files while being small enough to fit in the memory of even most older computers. For data analysis using the already converted data (in DuckDB or Parquet format) or with the raw CSV.gz data, it is recommended to increase it according to available resources. |
max_n_cpu |
The maximum number of threads to use. Defaults to the number of available cores minus 1. |
max_download_size_gb |
The maximum download size in gigabytes. Defaults to 1. |
duckdb_target |
(Optional) The path to the DuckDB file to save the data to, if a conversion from CSV is requested. Defaults to ":memory:" (an in-memory database). |
temp_path |
The path to the temp folder for DuckDB for intermediate spilling in case the set memory limit and/or physical memory of the computer is too low to perform the query. By default this is set to the value returned by spod_get_temp_dir(). |
A DuckDB lazy table connection object of class tbl_duckdb_connection.
## Not run:
# create a connection to the v1 data
Sys.setenv(SPANISH_OD_DATA_DIR = "~/path/to/your/cache/dir")
dates <- c("2020-02-14", "2020-03-14", "2021-02-14", "2021-02-14", "2021-02-15")
od_dist <- spod_get(type = "od", zones = "distr", dates = dates)
# od_dist is a table view filtered to the specified dates
# access the source connection with all dates
# list tables
DBI::dbListTables(od_dist$src$con)
## End(Not run)
Get valid dates for the specified data version
spod_get_valid_dates(ver = NULL)
ver |
Integer. Can be 1 or 2. The version of the data to use. v1 spans 2020-2021, v2 covers 2022 and onwards. |
A vector of type Date with all possible valid dates for the specified data version (v1 for 2020-2021 and v2 for 2022 onwards).
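For example, to check date availability before downloading:

```r
library(spanishoddata)

# All dates covered by the v1 (2020-2021) data
valid_dates <- spod_get_valid_dates(ver = 1)
range(valid_dates)                      # earliest and latest available dates
as.Date("2020-03-14") %in% valid_dates  # is a specific date covered?
```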
Get spatial zones for the specified data version. Supports both v1 (2020-2021) and v2 (2022 onwards) data.
spod_get_zones(
  zones = c("districts", "dist", "distr", "distritos", "municipalities", "muni", "municip", "municipios", "lua", "large_urban_areas", "gau", "grandes_areas_urbanas"),
  ver = NULL,
  data_dir = spod_get_data_dir(),
  quiet = FALSE
)
zones |
The zones for which to download the data. Can be "districts" (or "dist", "distr", "distritos"), "municipalities" (or "muni", "municip", "municipios"), or "lua" (or "large_urban_areas", "gau", "grandes_areas_urbanas"). |
ver |
Integer. Can be 1 or 2. The version of the data to use. v1 spans 2020-2021, v2 covers 2022 and onwards. |
data_dir |
The directory where the data is stored. Defaults to the value returned by spod_get_data_dir(). |
quiet |
A logical. If TRUE, suppresses messages and progress output. Defaults to FALSE. |
An sf object (Simple Feature collection).
The columns for v1 (2020-2021) data include:
A character vector containing the unique identifier for each district, assigned by the data provider. This id matches the id_origin, id_destination, and id in district-level origin-destination and number of trips data.
A string with semicolon-separated identifiers of census districts classified by the Spanish Statistical Office (INE) that are spatially bound within the polygons for each id.
A string with semicolon-separated municipality identifiers (as assigned by the data provider) corresponding to each district id.
A string with semicolon-separated municipality identifiers classified by the Spanish Statistical Office (INE) corresponding to each id.
A string with semicolon-separated district names (from the v2 version of this data) corresponding to each district id in v1.
A string with semicolon-separated district identifiers (from the v2 version of this data) corresponding to each district id in v1.
A MULTIPOLYGON column containing the spatial geometry of each district, stored as an sf object. The geometry is projected in the ETRS89 / UTM zone 30N coordinate reference system (CRS), with XY dimensions.
The columns for v2 (2022 onwards) data include:
A character vector containing the unique identifier for each zone, assigned by the data provider.
A character vector with the name of each district.
A numeric vector representing the population of each district (as of 2022).
A string with semicolon-separated identifiers of census sections corresponding to each district.
A string with semicolon-separated identifiers of census districts as classified by the Spanish Statistical Office (INE) corresponding to each district.
A string with semicolon-separated identifiers of municipalities classified by the Spanish Statistical Office (INE) corresponding to each district.
A string with semicolon-separated identifiers of municipalities, as assigned by the data provider, that correspond to each district.
A string with semicolon-separated identifiers of LUAs (Local Urban Areas) from the provider, associated with each district.
A string with semicolon-separated district identifiers from v1 data corresponding to each district in v2. If no match exists, it is marked as NA.
A MULTIPOLYGON column containing the spatial geometry of each district, stored as an sf object. The geometry is projected in the ETRS89 / UTM zone 30N coordinate reference system (CRS), with XY dimensions.
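A usage sketch for fetching and inspecting the zone boundaries (assuming the sf package is installed; `id` is an assumed name for the zone identifier column):

```r
library(spanishoddata)

# Download v2 district boundaries as an sf object
zones <- spod_get_zones(zones = "distr", ver = 2)

# Quick look at the geometries
plot(sf::st_geometry(zones))

# The identifier column joins to id_origin / id_destination in the OD data
head(zones$id)  # `id` is an assumed column name
```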