Title: | Connecting to Various Database Platforms |
---|---|
Description: | An R 'DataBase Interface' ('DBI') compatible interface to various database platforms ('PostgreSQL', 'Oracle', 'Microsoft SQL Server', 'Amazon Redshift', 'Microsoft Parallel Database Warehouse', 'IBM Netezza', 'Apache Impala', 'Google BigQuery', 'Snowflake', 'Spark', and 'SQLite'). Also includes support for fetching data as 'Andromeda' objects. Uses either 'Java Database Connectivity' ('JDBC') or other 'DBI' drivers to connect to databases. |
Authors: | Martijn Schuemie [aut, cre], Marc Suchard [aut], Observational Health Data Science and Informatics [cph], Microsoft Inc. [cph] (SQL Server JDBC driver), PostgreSQL Global Development Group [cph] (PostgreSQL JDBC driver), Oracle Inc. [cph] (Oracle JDBC driver), Amazon Inc. [cph] (RedShift JDBC driver) |
Maintainer: | Martijn Schuemie <[email protected]> |
License: | Apache License |
Version: | 6.3.2 |
Built: | 2024-11-03 05:07:07 UTC |
Source: | https://github.com/ohdsi/databaseconnector |
Asserts the temp emulation schema is set for DBMSs requiring temp table emulation.
If you know your code uses temp tables, it is a good idea to call this function first, so it can throw an informative error if the user forgot to set the temp emulation schema.
assertTempEmulationSchemaSet( dbms, tempEmulationSchema = getOption("sqlRenderTempEmulationSchema") )
dbms |
The type of DBMS running on the server. See |
tempEmulationSchema |
The temp emulation schema specified by the user. |
Does not return anything. Throws an error if the DBMS requires temp emulation but the temp emulation schema is not set.
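For example, a script that uses temp tables can guard itself at the top. This is a minimal sketch; the schema name "scratch_schema" is a placeholder:

```r
library(DatabaseConnector)

# Set the temp emulation schema once per session (placeholder name):
options(sqlRenderTempEmulationSchema = "scratch_schema")

# On platforms that emulate temp tables (e.g. Oracle), this now passes
# silently; without the option set it would throw an informative error:
assertTempEmulationSchemaSet(dbms = "oracle")
```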
Compute a hash of the data in the database schema. If the data changes, this should produce a different hash code. Specifically, the hash is based on the field names, field types, and table row counts.
computeDataHash(connection, databaseSchema, tables = NULL, progressBar = TRUE)
connection |
The connection to the database server created using either
|
databaseSchema |
The name of the database schema. See details for platform-specific details. |
tables |
(Optional) A list of tables to restrict to. |
progressBar |
When TRUE, a progress bar is shown based on the number of tables in the database schema. |
The databaseSchema
argument is interpreted differently according to the different platforms:
SQL Server and PDW: The databaseSchema should specify both the database and the schema, e.g.
'my_database.dbo'. Impala: the databaseSchema should specify the database. Oracle:
The databaseSchema should specify the Oracle 'user'. All others: The databaseSchema should
specify the schema.
A string representing the MD5 hash code.
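A minimal sketch of computing and comparing hashes, assuming a live connection and a schema named "cdm" exist (both placeholders):

```r
library(DatabaseConnector)

conn <- connect(connectionDetails)  # created earlier with createConnectionDetails()

# Restrict the hash to two tables (placeholder names):
hash <- computeDataHash(
  connection = conn,
  databaseSchema = "cdm",
  tables = c("person", "observation_period")
)

# Persist the hash; if a later run produces a different string,
# the underlying data (or its structure) has changed:
print(hash)
disconnect(conn)
```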
Creates a connection to a database server. There are four ways to call this function:
connect(dbms, user, password, server, port, extraSettings, oracleDriver, pathToDriver)
connect(connectionDetails)
connect(dbms, connectionString, pathToDriver)
connect(dbms, connectionString, user, password, pathToDriver)
Depending on the DBMS, the function arguments have slightly different interpretations:
Oracle:
user
. The user name used to access the server
password
. The password for that user
server
. This field contains the SID, the host and service name, or the TNSName:
'sid', 'host/sid', 'host/service name', or 'tnsname'
port
. Specifies the port on the server (default = 1521)
extraSettings
. The configuration settings for the connection (i.e. SSL Settings such
as "(PROTOCOL=tcps)")
oracleDriver
. The driver to be used. Choose between "thin" or "oci".
pathToDriver
. The path to the folder containing the Oracle JDBC driver JAR files.
Microsoft SQL Server:
user
. The user used to log in to the server. If the user is not specified, Windows
Integrated Security will be used, which requires the SQL Server JDBC drivers to be installed
(see details below).
password
. The password used to log on to the server
server
. This field contains the host name of the server
port
. Not used for SQL Server
extraSettings
. The configuration settings for the connection (i.e. SSL Settings such
as "encrypt=true; trustServerCertificate=false;")
pathToDriver
. The path to the folder containing the SQL Server JDBC driver JAR files.
Microsoft PDW:
user
. The user used to log in to the server. If the user is not specified, Windows
Integrated Security will be used, which requires the SQL Server JDBC drivers to be installed
(see details below).
password
. The password used to log on to the server
server
. This field contains the host name of the server
port
. Not used for SQL Server
extraSettings
. The configuration settings for the connection (i.e. SSL Settings such
as "encrypt=true; trustServerCertificate=false;")
pathToDriver
. The path to the folder containing the SQL Server JDBC driver JAR files.
PostgreSQL:
user
. The user used to log in to the server
password
. The password used to log on to the server
server
. This field contains the host name of the server and the database holding the
relevant schemas: host/database
port
. Specifies the port on the server (default = 5432)
extraSettings
. The configuration settings for the connection (i.e. SSL Settings such
as "ssl=true")
pathToDriver
. The path to the folder containing the PostgreSQL JDBC driver JAR files.
Redshift:
user
. The user used to log in to the server
password
. The password used to log on to the server
server
. This field contains the host name of the server and the database holding the
relevant schemas: host/database
port
. Specifies the port on the server (default = 5439)
extraSettings
. The configuration settings for the connection (i.e. SSL Settings such as "ssl=true&sslfactory=com.amazon.redshift.ssl.NonValidatingFactory")
pathToDriver
. The path to the folder containing the RedShift JDBC driver JAR files.
Netezza:
user
. The user used to log in to the server
password
. The password used to log on to the server
server
. This field contains the host name of the server and the database holding the
relevant schemas: host/database
port
. Specifies the port on the server (default = 5480)
extraSettings
. The configuration settings for the connection (i.e. SSL Settings such
as "ssl=true")
pathToDriver
. The path to the folder containing the Netezza JDBC driver JAR file
(nzjdbc.jar).
Impala:
user
. The user name used to access the server
password
. The password for that user
server
. The host name of the server
port
. Specifies the port on the server (default = 21050)
extraSettings
. The configuration settings for the connection (i.e. SSL Settings such
as "SSLKeyStorePwd=*****")
pathToDriver
. The path to the folder containing the Impala JDBC driver JAR files.
SQLite:
server
. The path to the SQLite file.
Spark / Databricks:
Currently both JDBC and ODBC connections are supported for Spark. Set the
connectionString
argument to use JDBC, otherwise ODBC is used:
connectionString
. The JDBC connection string (e.g. something like
'jdbc:databricks://my-org.cloud.databricks.com:443/default;transportMode=http;ssl=1;AuthMech=3;httpPath=/sql/1.0/warehouses/abcde12345;').
user
. The user name used to access the server. This can be set to 'token' when using a personal token (recommended).
password
. The password for that user. This should be your personal token when using a personal token (recommended).
server
. The host name of the server (when using ODBC), e.g. 'my-org.cloud.databricks.com'
port
. Specifies the port on the server (when using ODBC)
extraSettings
. Additional settings for the ODBC connection, for example
extraSettings = list(HTTPPath = "/sql/1.0/warehouses/abcde12345", SSL = 1, ThriftTransport = 2, AuthMech = 3)
Snowflake:
connectionString
. The connection string (e.g. starting with
'jdbc:snowflake://host:port/?db=database').
user
. The user name used to access the server.
password
. The password for that user.
To be able to use Windows authentication for SQL Server (and PDW), you have to install the JDBC
driver. Download the version 9.2.0 .zip from Microsoft
and extract its contents to a folder. In the extracted folder you will find the file
sqljdbc_9.2/enu/auth/x64/mssql-jdbc_auth-9.2.0.x64.dll (64-bits) or
sqljdbc_9.2/enu/auth/x86/mssql-jdbc_auth-9.2.0.x86.dll (32-bits), which needs to be moved to a
location on the system path, for example to c:/windows/system32. If you do not have write access to
any folder in the system path, you can also specify the path to the folder containing the dll by
setting the environment variable PATH_TO_AUTH_DLL, for example
Sys.setenv("PATH_TO_AUTH_DLL" = "c:/temp")
Note that the environment variable needs to be
set before calling connect()
for the first time.
connectionDetails |
An object of class |
dbms |
The type of DBMS running on the server. Valid values are
|
user |
The user name used to access the server. |
password |
The password for that user. |
server |
The name of the server. |
port |
(optional) The port on the server to connect to. |
extraSettings |
(optional) Additional configuration settings specific to the database
provider, for example to configure SSL security. For connections using
JDBC these will be appended to the end of the connection string. For
connections using DBI, these settings will additionally be used to call
|
oracleDriver |
Specify which Oracle driver you want to use. Choose between |
connectionString |
The JDBC connection string. If specified, the |
pathToDriver |
Path to a folder containing the JDBC driver JAR files. See
|
This function creates a connection to a database.
An object that extends DBIConnection
in a database-specific manner. This object is used to
direct commands to the database engine.
## Not run:
connectionDetails <- createConnectionDetails(
  dbms = "postgresql",
  server = "localhost/postgres",
  user = "root",
  password = "xxx"
)
conn <- connect(connectionDetails)
dbGetQuery(conn, "SELECT COUNT(*) FROM person")
disconnect(conn)

conn <- connect(dbms = "sql server", server = "RNDUSRDHIT06.jnj.com")
dbGetQuery(conn, "SELECT COUNT(*) FROM concept")
disconnect(conn)

conn <- connect(
  dbms = "oracle",
  server = "127.0.0.1/xe",
  user = "system",
  password = "xxx",
  pathToDriver = "c:/temp"
)
dbGetQuery(conn, "SELECT COUNT(*) FROM test_table")
disconnect(conn)

conn <- connect(
  dbms = "postgresql",
  connectionString = "jdbc:postgresql://127.0.0.1:5432/cmd_database"
)
dbGetQuery(conn, "SELECT COUNT(*) FROM person")
disconnect(conn)
## End(Not run)
Creates a list containing all details needed to connect to a database. There are three ways to call this function:
createConnectionDetails(dbms, user, password, server, port, extraSettings, oracleDriver, pathToDriver)
createConnectionDetails(dbms, connectionString, pathToDriver)
createConnectionDetails(dbms, connectionString, user, password, pathToDriver)
Depending on the DBMS, the function arguments have slightly different interpretations:
Oracle:
user
. The user name used to access the server
password
. The password for that user
server
. This field contains the SID, the host and service name, or the TNSName:
'sid', 'host/sid', 'host/service name', or 'tnsname'
port
. Specifies the port on the server (default = 1521)
extraSettings
. The configuration settings for the connection (i.e. SSL Settings such
as "(PROTOCOL=tcps)")
oracleDriver
. The driver to be used. Choose between "thin" or "oci".
pathToDriver
. The path to the folder containing the Oracle JDBC driver JAR files.
Microsoft SQL Server:
user
. The user used to log in to the server. If the user is not specified, Windows
Integrated Security will be used, which requires the SQL Server JDBC drivers to be installed
(see details below).
password
. The password used to log on to the server
server
. This field contains the host name of the server
port
. Not used for SQL Server
extraSettings
. The configuration settings for the connection (i.e. SSL Settings such
as "encrypt=true; trustServerCertificate=false;")
pathToDriver
. The path to the folder containing the SQL Server JDBC driver JAR files.
Microsoft PDW:
user
. The user used to log in to the server. If the user is not specified, Windows
Integrated Security will be used, which requires the SQL Server JDBC drivers to be installed
(see details below).
password
. The password used to log on to the server
server
. This field contains the host name of the server
port
. Not used for SQL Server
extraSettings
. The configuration settings for the connection (i.e. SSL Settings such
as "encrypt=true; trustServerCertificate=false;")
pathToDriver
. The path to the folder containing the SQL Server JDBC driver JAR files.
PostgreSQL:
user
. The user used to log in to the server
password
. The password used to log on to the server
server
. This field contains the host name of the server and the database holding the
relevant schemas: host/database
port
. Specifies the port on the server (default = 5432)
extraSettings
. The configuration settings for the connection (i.e. SSL Settings such
as "ssl=true")
pathToDriver
. The path to the folder containing the PostgreSQL JDBC driver JAR files.
Redshift:
user
. The user used to log in to the server
password
. The password used to log on to the server
server
. This field contains the host name of the server and the database holding the
relevant schemas: host/database
port
. Specifies the port on the server (default = 5439)
extraSettings
. The configuration settings for the connection (i.e. SSL Settings such as "ssl=true&sslfactory=com.amazon.redshift.ssl.NonValidatingFactory")
pathToDriver
. The path to the folder containing the RedShift JDBC driver JAR files.
Netezza:
user
. The user used to log in to the server
password
. The password used to log on to the server
server
. This field contains the host name of the server and the database holding the
relevant schemas: host/database
port
. Specifies the port on the server (default = 5480)
extraSettings
. The configuration settings for the connection (i.e. SSL Settings such
as "ssl=true")
pathToDriver
. The path to the folder containing the Netezza JDBC driver JAR file
(nzjdbc.jar).
Impala:
user
. The user name used to access the server
password
. The password for that user
server
. The host name of the server
port
. Specifies the port on the server (default = 21050)
extraSettings
. The configuration settings for the connection (i.e. SSL Settings such
as "SSLKeyStorePwd=*****")
pathToDriver
. The path to the folder containing the Impala JDBC driver JAR files.
SQLite:
server
. The path to the SQLite file.
Spark / Databricks:
Currently both JDBC and ODBC connections are supported for Spark. Set the
connectionString
argument to use JDBC, otherwise ODBC is used:
connectionString
. The JDBC connection string (e.g. something like
'jdbc:databricks://my-org.cloud.databricks.com:443/default;transportMode=http;ssl=1;AuthMech=3;httpPath=/sql/1.0/warehouses/abcde12345;').
user
. The user name used to access the server. This can be set to 'token' when using a personal token (recommended).
password
. The password for that user. This should be your personal token when using a personal token (recommended).
server
. The host name of the server (when using ODBC), e.g. 'my-org.cloud.databricks.com'
port
. Specifies the port on the server (when using ODBC)
extraSettings
. Additional settings for the ODBC connection, for example
extraSettings = list(HTTPPath = "/sql/1.0/warehouses/abcde12345", SSL = 1, ThriftTransport = 2, AuthMech = 3)
Snowflake:
connectionString
. The connection string (e.g. starting with
'jdbc:snowflake://host:port/?db=database').
user
. The user name used to access the server.
password
. The password for that user.
To be able to use Windows authentication for SQL Server (and PDW), you have to install the JDBC
driver. Download the version 9.2.0 .zip from Microsoft
and extract its contents to a folder. In the extracted folder you will find the file
sqljdbc_9.2/enu/auth/x64/mssql-jdbc_auth-9.2.0.x64.dll (64-bits) or
sqljdbc_9.2/enu/auth/x86/mssql-jdbc_auth-9.2.0.x86.dll (32-bits), which needs to be moved to a
location on the system path, for example to c:/windows/system32. If you do not have write access to
any folder in the system path, you can also specify the path to the folder containing the dll by
setting the environment variable PATH_TO_AUTH_DLL, for example
Sys.setenv("PATH_TO_AUTH_DLL" = "c:/temp")
Note that the environment variable needs to be
set before calling connect()
for the first time.
dbms |
The type of DBMS running on the server. Valid values are
|
user |
The user name used to access the server. |
password |
The password for that user. |
server |
The name of the server. |
port |
(optional) The port on the server to connect to. |
extraSettings |
(optional) Additional configuration settings specific to the database
provider, for example to configure SSL security. For connections using
JDBC these will be appended to the end of the connection string. For
connections using DBI, these settings will additionally be used to call
|
oracleDriver |
Specify which Oracle driver you want to use. Choose between |
connectionString |
The JDBC connection string. If specified, the |
pathToDriver |
Path to a folder containing the JDBC driver JAR files. See
|
This function creates a list containing all details needed to connect to a database. The list can
then be used in the connect()
function.
It is highly recommended to use a secure approach to storing credentials, so as not to have your
credentials in plain text in your R scripts. The examples demonstrate how to use the
keyring
package.
A list with all the details needed to connect to a database.
## Not run:
# Needs to be done only once on a machine. Credentials will then be stored in
# the operating system's secure credential manager:
keyring::key_set_with_value("server", password = "localhost/postgres")
keyring::key_set_with_value("user", password = "root")
keyring::key_set_with_value("password", password = "secret")

# Create connection details using keyring. Note: the connection details will
# not store the credentials themselves, but the reference to get the credentials.
connectionDetails <- createConnectionDetails(
  dbms = "postgresql",
  server = keyring::key_get("server"),
  user = keyring::key_get("user"),
  password = keyring::key_get("password")
)
conn <- connect(connectionDetails)
dbGetQuery(conn, "SELECT COUNT(*) FROM person")
disconnect(conn)
## End(Not run)
For advanced users only. This function will allow DatabaseConnector
to wrap any DBI driver. Using a driver that
DatabaseConnector
hasn't been tested with may give unpredictable performance. Use at your own risk. No
support will be provided.
createDbiConnectionDetails(dbms, drv, ...)
dbms |
The type of DBMS running on the server. Valid values are
|
drv |
An object that inherits from DBIDriver, or an existing DBIConnection object (in order to clone an existing connection). |
... |
authentication arguments needed by the DBMS instance; these typically include user, password, host, port, dbname, etc. For details see the appropriate DBIDriver |
A list with all the details needed to connect to a database.
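A minimal sketch of wrapping a DBI driver, assuming the RSQLite package is installed (the in-memory database is a placeholder):

```r
library(DatabaseConnector)

# Wrap the RSQLite DBI driver; the extra arguments are passed on to the
# driver when the connection is opened:
details <- createDbiConnectionDetails(
  dbms = "sqlite",
  drv = RSQLite::SQLite(),
  dbname = ":memory:"
)
conn <- connect(details)
disconnect(conn)
```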
Compress files and/or folders into a single zip file
createZipFile(zipFile, files, rootFolder = getwd(), compressionLevel = 9)
zipFile |
The path to the zip file to be created. |
files |
The files and/or folders to be included in the zip file. Folders will be included recursively. |
rootFolder |
The root folder. All files will be stored with relative paths relative to this folder. |
compressionLevel |
A number between 1 and 9. 9 compresses best, but it also takes the longest. |
Uses Java's compression library to create a zip file. It is similar to utils::zip
, except
that it does not require an external zip tool to be available on the system path.
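A minimal sketch; the file and folder names are placeholders:

```r
library(DatabaseConnector)

# Zip the 'output' folder (included recursively), storing entries with
# paths relative to the current working directory:
createZipFile(
  zipFile = "results.zip",
  files = "output",
  rootFolder = getwd(),
  compressionLevel = 9
)
```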
Create a DatabaseConnectorDriver object
DatabaseConnectorDriver()
This function is provided primarily to be used together with dbplyr
when querying
a database. It will also work in dplyr
against data frames.
dateAdd(interval, number, date)
interval |
Unit for the interval. Can be "day", "week", "month", "year". |
number |
The number of units to add to the date. |
date |
The date to add to. |
A new date.
dateAdd("day", 10, as.Date("2000-01-01"))
This function is provided primarily to be used together with dbplyr
when querying
a database. It will also work in dplyr
against data frames.
dateDiff(interval, date1, date2)
interval |
Unit for the interval. Can be "day", "week", "month", "year". |
date1 |
The first date. |
date2 |
The second date. |
The numeric value of the difference.
dateDiff("day", as.Date("2000-01-01"), as.Date("2000-03-01"))
This function is provided primarily to be used together with dbplyr
when querying
a database. It will also work in dplyr
against data frames.
dateFromParts(year, month, day)
year |
The calendar year. |
month |
The calendar month (1 = January). |
day |
The day of the month. |
The date.
dateFromParts(2000, 1, 5)
This function is provided primarily to be used together with dbplyr
when querying
a database. It will also work in dplyr
against data frames.
day(date)
date |
The date. |
The day of the month.
day(as.Date("2000-02-01"))
The dbAppendTable()
method assumes that the table has been created
beforehand, e.g. with dbCreateTable()
.
The default implementation calls sqlAppendTableTemplate()
and then
dbExecute()
with the param
argument. Backends compliant to
ANSI SQL 99 which use ?
as a placeholder for prepared queries don't need
to override it. Backends with a different SQL syntax which use ?
as a placeholder for prepared queries can override sqlAppendTable()
.
Other backends (with different placeholders or with entirely different
ways to create tables) need to override the dbAppendTable()
method.
## S4 method for signature 'DatabaseConnectorConnection,character'
dbAppendTable(
  conn,
  name,
  value,
  databaseSchema = NULL,
  temporary = FALSE,
  oracleTempSchema = NULL,
  tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
  ...,
  row.names = NULL
)
conn |
A DBIConnection object, as returned by
|
name |
The table name, passed on to
|
value |
A data frame of values. The column names must be consistent with those in the target table in the database. |
databaseSchema |
The name of the database schema. See details for platform-specific details. |
temporary |
Should the table be created as a temp table? |
oracleTempSchema |
DEPRECATED: use |
tempEmulationSchema |
Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created. |
... |
Other parameters passed on to methods. |
row.names |
Must be |
The databaseSchema
argument is interpreted differently according to the different platforms:
SQL Server and PDW: The databaseSchema should specify both the database and the schema, e.g.
'my_database.dbo'. Impala: the databaseSchema should specify the database. Oracle:
The databaseSchema should specify the Oracle 'user'. All others: The databaseSchema should
specify the schema.
dbAppendTable()
returns a
scalar
numeric.
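A minimal sketch, assuming an open DatabaseConnector connection `conn` and a writable schema named "scratch" (both placeholders):

```r
# Create the target table first, then append rows to it from a data frame
# whose column names match the table's columns:
dbCreateTable(conn, name = "my_cars", fields = cars, databaseSchema = "scratch")
rowsAdded <- dbAppendTable(conn, name = "my_cars", value = cars, databaseSchema = "scratch")
```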
Other DBIConnection generics:
DBIConnection-class
,
dbCreateTable()
,
dbDataType()
,
dbDisconnect()
,
dbExecute()
,
dbExistsTable()
,
dbGetException()
,
dbGetInfo()
,
dbGetQuery()
,
dbIsReadOnly()
,
dbIsValid()
,
dbListFields()
,
dbListObjects()
,
dbListResults()
,
dbListTables()
,
dbReadTable()
,
dbRemoveTable()
,
dbSendQuery()
,
dbSendStatement()
,
dbWriteTable()
Frees all resources (local and remote) associated with a result set.
This step is mandatory for all objects obtained by calling
dbSendQuery()
or dbSendStatement()
.
## S4 method for signature 'DatabaseConnectorDbiResult' dbClearResult(res, ...)
res |
An object inheriting from DBIResult. |
... |
Other arguments passed on to methods. |
dbClearResult()
returns TRUE
, invisibly, for result sets obtained from
both dbSendQuery()
and dbSendStatement()
.
Other DBIResult generics:
DBIResult-class
,
dbBind()
,
dbColumnInfo()
,
dbFetch()
,
dbGetInfo()
,
dbGetRowCount()
,
dbGetRowsAffected()
,
dbGetStatement()
,
dbHasCompleted()
,
dbIsReadOnly()
,
dbIsValid()
,
dbQuoteIdentifier()
,
dbQuoteLiteral()
,
dbQuoteString()
,
dbUnquoteIdentifier()
Frees all resources (local and remote) associated with a result set.
This step is mandatory for all objects obtained by calling
dbSendQuery()
or dbSendStatement()
.
## S4 method for signature 'DatabaseConnectorJdbcResult' dbClearResult(res, ...)
res |
An object inheriting from DBIResult. |
... |
Other arguments passed on to methods. |
dbClearResult()
returns TRUE
, invisibly, for result sets obtained from
both dbSendQuery()
and dbSendStatement()
.
Other DBIResult generics:
DBIResult-class
,
dbBind()
,
dbColumnInfo()
,
dbFetch()
,
dbGetInfo()
,
dbGetRowCount()
,
dbGetRowsAffected()
,
dbGetStatement()
,
dbHasCompleted()
,
dbIsReadOnly()
,
dbIsValid()
,
dbQuoteIdentifier()
,
dbQuoteLiteral()
,
dbQuoteString()
,
dbUnquoteIdentifier()
Produces a data.frame that describes the output of a query. The data.frame should have as many rows as there are output fields in the result set, and each column in the data.frame describes an aspect of the result set field (field name, type, etc.)
## S4 method for signature 'DatabaseConnectorDbiResult' dbColumnInfo(res, ...)
res |
An object inheriting from DBIResult. |
... |
Other arguments passed on to methods. |
dbColumnInfo()
returns a data frame
with at least two columns "name"
and "type"
(in that order)
(and optional columns that start with a dot).
The "name"
and "type"
columns contain the names and types
of the R columns of the data frame that is returned from dbFetch()
.
The "type"
column is of type character
and only for information.
Do not compute on the "type"
column, instead use dbFetch(res, n = 0)
to create a zero-row data frame initialized with the correct data types.
Other DBIResult generics:
DBIResult-class
,
dbBind()
,
dbClearResult()
,
dbFetch()
,
dbGetInfo()
,
dbGetRowCount()
,
dbGetRowsAffected()
,
dbGetStatement()
,
dbHasCompleted()
,
dbIsReadOnly()
,
dbIsValid()
,
dbQuoteIdentifier()
,
dbQuoteLiteral()
,
dbQuoteString()
,
dbUnquoteIdentifier()
Produces a data.frame that describes the output of a query. The data.frame should have as many rows as there are output fields in the result set, and each column in the data.frame describes an aspect of the result set field (field name, type, etc.)
## S4 method for signature 'DatabaseConnectorJdbcResult' dbColumnInfo(res, ...)
res |
An object inheriting from DBIResult. |
... |
Other arguments passed on to methods. |
dbColumnInfo()
returns a data frame
with at least two columns "name"
and "type"
(in that order)
(and optional columns that start with a dot).
The "name"
and "type"
columns contain the names and types
of the R columns of the data frame that is returned from dbFetch()
.
The "type"
column is of type character
and only for information.
Do not compute on the "type"
column, instead use dbFetch(res, n = 0)
to create a zero-row data frame initialized with the correct data types.
Other DBIResult generics:
DBIResult-class
,
dbBind()
,
dbClearResult()
,
dbFetch()
,
dbGetInfo()
,
dbGetRowCount()
,
dbGetRowsAffected()
,
dbGetStatement()
,
dbHasCompleted()
,
dbIsReadOnly()
,
dbIsValid()
,
dbQuoteIdentifier()
,
dbQuoteLiteral()
,
dbQuoteString()
,
dbUnquoteIdentifier()
Connect to a database. This function is synonymous with the connect()
function, except that
a dummy driver needs to be specified.
## S4 method for signature 'DatabaseConnectorDriver' dbConnect(drv, ...)
drv |
The result of the |
... |
Other parameters. These are the same as expected by the |
Returns a DatabaseConnectorConnection object that can be used with most of the other functions in this package.
## Not run:
conn <- dbConnect(DatabaseConnectorDriver(),
  dbms = "postgresql",
  server = "localhost/ohdsi",
  user = "joe",
  password = "secret"
)
querySql(conn, "SELECT * FROM cdm_synpuf.person;")
dbDisconnect(conn)
## End(Not run)
The default dbCreateTable()
method calls sqlCreateTable()
and
dbExecute()
.
Backends compliant to ANSI SQL 99 don't need to override it.
Backends with a different SQL syntax can override sqlCreateTable()
,
backends with entirely different ways to create tables need to
override this method.
## S4 method for signature 'DatabaseConnectorConnection'
dbCreateTable(
  conn,
  name,
  fields,
  databaseSchema = NULL,
  oracleTempSchema = NULL,
  tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
  ...,
  row.names = NULL,
  temporary = FALSE
)
conn |
A DBIConnection object, as returned by
|
name |
The table name, passed on to
|
fields |
Either a character vector or a data frame. A named character vector: Names are column names, values are types.
Names are escaped with A data frame: field types are generated using
|
databaseSchema |
The name of the database schema. See details for platform-specific details. |
oracleTempSchema |
DEPRECATED: use |
tempEmulationSchema |
Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created. |
... |
Other parameters passed on to methods. |
row.names |
Must be |
temporary |
Should the table be created as a temp table? |
The databaseSchema argument is interpreted differently according to the platform: SQL Server and PDW: the databaseSchema should specify both the database and the schema, e.g. 'my_database.dbo'. Impala: the databaseSchema should specify the database. Oracle: the databaseSchema should specify the Oracle 'user'. All others: the databaseSchema should specify the schema.
dbCreateTable()
returns TRUE
, invisibly.
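A minimal sketch of typical usage, not taken from the original documentation. It uses an in-memory SQLite connection (as in the dbms() example elsewhere on this page); the table and column names are illustrative:

```r
library(DatabaseConnector)

conn <- connect(dbms = "sqlite", server = ":memory:")

# Fields given as a named character vector: names are column names,
# values are SQL types
dbCreateTable(conn, "person", fields = c(person_id = "INTEGER", year_of_birth = "INTEGER"))
dbExistsTable(conn, "person")  # TRUE
dbDisconnect(conn)
```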
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(), dbGetQuery(), dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(), dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
This closes the connection, discards all pending work, and frees resources (e.g., memory, sockets).
## S4 method for signature 'DatabaseConnectorConnection' dbDisconnect(conn)
conn |
A DBIConnection object, as returned by
|
dbDisconnect()
returns TRUE
, invisibly.
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(), dbDataType(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(), dbGetQuery(), dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(), dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
Executes a statement and returns the number of rows affected.
dbExecute()
comes with a default implementation
(which should work with most backends) that calls
dbSendStatement()
, then dbGetRowsAffected()
, ensuring that
the result is always freed by dbClearResult()
.
For passing query parameters, see dbBind()
, in particular
the "The command execution flow" section.
## S4 method for signature 'DatabaseConnectorConnection,character' dbExecute(conn, statement, translate = TRUE, ...)
conn |
A DBIConnection object, as returned by
|
statement |
a character string containing SQL. |
translate |
Translate the query using SqlRender? |
... |
Other parameters passed on to methods. |
You can also use dbExecute()
to call a stored procedure
that performs data manipulation or other actions that do not return a result set.
To execute a stored procedure that returns a result set,
or a data manipulation query that also returns a result set
such as INSERT INTO ... RETURNING ...
, use dbGetQuery()
instead.
dbExecute()
always returns a
scalar
numeric
that specifies the number of rows affected
by the statement.
For queries: dbSendQuery()
and dbGetQuery()
.
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(), dbDataType(), dbDisconnect(), dbExistsTable(), dbGetException(), dbGetInfo(), dbGetQuery(), dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(), dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
Returns whether a table with the given name exists in the database.
## S4 method for signature 'DatabaseConnectorConnection,character' dbExistsTable(conn, name, databaseSchema = NULL, ...)
conn |
A DBIConnection object, as returned by
|
name |
The table name, passed on to
|
databaseSchema |
The name of the database schema. See details for platform-specific details. |
... |
Other parameters passed on to methods. |
The databaseSchema argument is interpreted differently according to the platform: SQL Server and PDW: the databaseSchema should specify both the database and the schema, e.g. 'my_database.dbo'. Impala: the databaseSchema should specify the database. Oracle: the databaseSchema should specify the Oracle 'user'. All others: the databaseSchema should specify the schema.
dbExistsTable()
returns a logical scalar, TRUE
if the table or view
specified by the name
argument exists, FALSE
otherwise.
This includes temporary tables if supported by the database.
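A minimal sketch, not taken from the original documentation, using an in-memory SQLite connection; the table name is illustrative:

```r
library(DatabaseConnector)

conn <- connect(dbms = "sqlite", server = ":memory:")
dbExistsTable(conn, "person")  # FALSE: no tables yet
dbExecute(conn, "CREATE TABLE person (person_id INT);")
dbExistsTable(conn, "person")  # TRUE
dbDisconnect(conn)
```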
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(), dbDataType(), dbDisconnect(), dbExecute(), dbGetException(), dbGetInfo(), dbGetQuery(), dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(), dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
Fetch the next n
elements (rows) from the result set and return them
as a data.frame.
## S4 method for signature 'DatabaseConnectorDbiResult' dbFetch(res, n = -1, ...)
res |
An object inheriting from DBIResult, created by
|
n |
maximum number of records to retrieve per fetch. Use |
... |
Other arguments passed on to methods. |
fetch()
is provided for compatibility with older DBI clients - for all
new code you are strongly encouraged to use dbFetch()
. The default
implementation for dbFetch()
calls fetch()
so that it is compatible with
existing code. Modern backends should implement dbFetch() only.
dbFetch()
always returns a data.frame
with as many rows as records were fetched and as many
columns as fields in the result set,
even if the result is a single value
or has one
or zero rows.
Close the result set with dbClearResult()
as soon as you
finish retrieving the records you want.
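A minimal sketch of chunked fetching, not taken from the original documentation, using an in-memory SQLite connection; the table name and chunk size are illustrative:

```r
library(DatabaseConnector)

conn <- connect(dbms = "sqlite", server = ":memory:")
dbExecute(conn, "CREATE TABLE test (x INT);")
dbExecute(conn, "INSERT INTO test (x) VALUES (1), (2), (3);")

res <- dbSendQuery(conn, "SELECT x FROM test;")
while (!dbHasCompleted(res)) {
  chunk <- dbFetch(res, n = 2)  # fetch at most 2 rows per call
  message("Fetched ", nrow(chunk), " rows")
}
dbClearResult(res)  # always clear the result when done
dbDisconnect(conn)
```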
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(), dbGetInfo(), dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(), dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
Fetch the next n
elements (rows) from the result set and return them
as a data.frame.
## S4 method for signature 'DatabaseConnectorJdbcResult' dbFetch(res, n = -1, ...)
res |
An object inheriting from DBIResult, created by
|
n |
maximum number of records to retrieve per fetch. Use |
... |
Other arguments passed on to methods. |
fetch()
is provided for compatibility with older DBI clients - for all
new code you are strongly encouraged to use dbFetch()
. The default
implementation for dbFetch()
calls fetch()
so that it is compatible with
existing code. Modern backends should implement dbFetch() only.
dbFetch()
always returns a data.frame
with as many rows as records were fetched and as many
columns as fields in the result set,
even if the result is a single value
or has one
or zero rows.
Close the result set with dbClearResult()
as soon as you
finish retrieving the records you want.
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(), dbGetInfo(), dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(), dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
Retrieves information on objects of class DBIDriver, DBIConnection or DBIResult.
## S4 method for signature 'DatabaseConnectorConnection' dbGetInfo(dbObj, ...)
dbObj |
An object inheriting from DBIObject, i.e. DBIDriver, DBIConnection, or a DBIResult |
... |
Other arguments to methods. |
For objects of class DBIDriver, dbGetInfo()
returns a named list
that contains at least the following components:
driver.version
: the package version of the DBI backend,
client.version
: the version of the DBMS client library.
For objects of class DBIConnection, dbGetInfo()
returns a named list
that contains at least the following components:
db.version
: version of the database server,
dbname
: database name,
username
: username to connect to the database,
host
: hostname of the database server,
port
: port on the database server.
It must not contain a password
component.
Components that are not applicable should be set to NA
.
For objects of class DBIResult, dbGetInfo()
returns a named list
that contains at least the following components:
statement
: the statement used with dbSendQuery()
or dbExecute()
,
as returned by dbGetStatement()
,
row.count
: the number of rows fetched so far (for queries),
as returned by dbGetRowCount()
,
rows.affected
: the number of rows affected (for statements),
as returned by dbGetRowsAffected(),
has.completed
: a logical that indicates
if the query or statement has completed,
as returned by dbHasCompleted()
.
Other DBIDriver generics: DBIDriver-class, dbCanConnect(), dbConnect(), dbDataType(), dbDriver(), dbIsReadOnly(), dbIsValid(), dbListConnections()
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(), dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetQuery(), dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(), dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(), dbFetch(), dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(), dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
Retrieves information on objects of class DBIDriver, DBIConnection or DBIResult.
## S4 method for signature 'DatabaseConnectorDriver' dbGetInfo(dbObj, ...)
dbObj |
An object inheriting from DBIObject, i.e. DBIDriver, DBIConnection, or a DBIResult |
... |
Other arguments to methods. |
For objects of class DBIDriver, dbGetInfo()
returns a named list
that contains at least the following components:
driver.version
: the package version of the DBI backend,
client.version
: the version of the DBMS client library.
For objects of class DBIConnection, dbGetInfo()
returns a named list
that contains at least the following components:
db.version
: version of the database server,
dbname
: database name,
username
: username to connect to the database,
host
: hostname of the database server,
port
: port on the database server.
It must not contain a password
component.
Components that are not applicable should be set to NA
.
For objects of class DBIResult, dbGetInfo()
returns a named list
that contains at least the following components:
statement
: the statement used with dbSendQuery()
or dbExecute()
,
as returned by dbGetStatement()
,
row.count
: the number of rows fetched so far (for queries),
as returned by dbGetRowCount()
,
rows.affected
: the number of rows affected (for statements),
as returned by dbGetRowsAffected(),
has.completed
: a logical that indicates
if the query or statement has completed,
as returned by dbHasCompleted()
.
Other DBIDriver generics: DBIDriver-class, dbCanConnect(), dbConnect(), dbDataType(), dbDriver(), dbIsReadOnly(), dbIsValid(), dbListConnections()
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(), dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetQuery(), dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(), dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(), dbFetch(), dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(), dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
Returns the result of a query as a data frame.
dbGetQuery()
comes with a default implementation
(which should work with most backends) that calls
dbSendQuery()
, then dbFetch()
, ensuring that
the result is always freed by dbClearResult()
.
For retrieving chunked/paged results or for passing query parameters,
see dbSendQuery()
, in particular the "The data retrieval flow" section.
## S4 method for signature 'DatabaseConnectorConnection,character' dbGetQuery(conn, statement, translate = TRUE, ...)
conn |
A DBIConnection object, as returned by
|
statement |
a character string containing SQL. |
translate |
Translate the query using SqlRender? |
... |
Other parameters passed on to methods. |
This method is for SELECT
queries only
(incl. other SQL statements that return a SELECT
-alike result,
e.g. execution of a stored procedure or data manipulation queries
like INSERT INTO ... RETURNING ...
).
To execute a stored procedure that does not return a result set,
use dbExecute()
.
Some backends may
support data manipulation statements through this method for compatibility
reasons. However, callers are strongly advised to use
dbExecute()
for data manipulation statements.
dbGetQuery()
always returns a data.frame
with as many rows as records were fetched and as many
columns as fields in the result set,
even if the result is a single value
or has one
or zero rows.
For updates: dbSendStatement()
and dbExecute()
.
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(), dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(), dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(), dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
Returns the total number of rows actually fetched with calls to dbFetch()
for this result set.
## S4 method for signature 'DatabaseConnectorDbiResult' dbGetRowCount(res, ...)
res |
An object inheriting from DBIResult. |
... |
Other arguments passed on to methods. |
dbGetRowCount()
returns a scalar number (integer or numeric),
the number of rows fetched so far.
After calling dbSendQuery()
,
the row count is initially zero.
After a call to dbFetch()
without limit,
the row count matches the total number of rows returned.
Fetching a limited number of rows
increases the number of rows by the number of rows returned,
even if fetching past the end of the result set.
For queries with an empty result set,
zero is returned
even after fetching.
For data manipulation statements issued with
dbSendStatement()
,
zero is returned before
and after calling dbFetch()
.
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(), dbFetch(), dbGetInfo(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(), dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
Returns the total number of rows actually fetched with calls to dbFetch()
for this result set.
## S4 method for signature 'DatabaseConnectorJdbcResult' dbGetRowCount(res, ...)
res |
An object inheriting from DBIResult. |
... |
Other arguments passed on to methods. |
dbGetRowCount()
returns a scalar number (integer or numeric),
the number of rows fetched so far.
After calling dbSendQuery()
,
the row count is initially zero.
After a call to dbFetch()
without limit,
the row count matches the total number of rows returned.
Fetching a limited number of rows
increases the number of rows by the number of rows returned,
even if fetching past the end of the result set.
For queries with an empty result set,
zero is returned
even after fetching.
For data manipulation statements issued with
dbSendStatement()
,
zero is returned before
and after calling dbFetch()
.
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(), dbFetch(), dbGetInfo(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(), dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
This method returns the number of rows that were added, deleted, or updated by a data manipulation statement.
## S4 method for signature 'DatabaseConnectorDbiResult' dbGetRowsAffected(res, ...)
res |
An object inheriting from DBIResult. |
... |
Other arguments passed on to methods. |
dbGetRowsAffected()
returns a scalar number (integer or numeric),
the number of rows affected by a data manipulation statement
issued with dbSendStatement()
.
The value is available directly after the call
and does not change after calling dbFetch()
.
For queries issued with dbSendQuery()
,
zero is returned before
and after the call to dbFetch()
.
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(), dbFetch(), dbGetInfo(), dbGetRowCount(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(), dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
This method returns the number of rows that were added, deleted, or updated by a data manipulation statement.
## S4 method for signature 'DatabaseConnectorJdbcResult' dbGetRowsAffected(res, ...)
res |
An object inheriting from DBIResult. |
... |
Other arguments passed on to methods. |
dbGetRowsAffected()
returns a scalar number (integer or numeric),
the number of rows affected by a data manipulation statement
issued with dbSendStatement()
.
The value is available directly after the call
and does not change after calling dbFetch()
.
For queries issued with dbSendQuery()
,
zero is returned before
and after the call to dbFetch()
.
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(), dbFetch(), dbGetInfo(), dbGetRowCount(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(), dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
Returns the statement that was passed to dbSendQuery()
or dbSendStatement()
.
## S4 method for signature 'DatabaseConnectorDbiResult' dbGetStatement(res, ...)
res |
An object inheriting from DBIResult. |
... |
Other arguments passed on to methods. |
dbGetStatement()
returns a string, the query used in
either dbSendQuery()
or dbSendStatement()
.
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(), dbFetch(), dbGetInfo(), dbGetRowCount(), dbGetRowsAffected(), dbHasCompleted(), dbIsReadOnly(), dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
Returns the statement that was passed to dbSendQuery()
or dbSendStatement()
.
## S4 method for signature 'DatabaseConnectorJdbcResult' dbGetStatement(res, ...)
res |
An object inheriting from DBIResult. |
... |
Other arguments passed on to methods. |
dbGetStatement()
returns a string, the query used in
either dbSendQuery()
or dbSendStatement()
.
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(), dbFetch(), dbGetInfo(), dbGetRowCount(), dbGetRowsAffected(), dbHasCompleted(), dbIsReadOnly(), dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
This method returns whether the operation has completed.
A SELECT
query is completed if all rows have been fetched.
A data manipulation statement is always completed.
## S4 method for signature 'DatabaseConnectorDbiResult' dbHasCompleted(res, ...)
res |
An object inheriting from DBIResult. |
... |
Other arguments passed on to methods. |
dbHasCompleted()
returns a logical scalar.
For a query initiated by dbSendQuery()
with non-empty result set,
dbHasCompleted()
returns FALSE
initially
and TRUE
after calling dbFetch()
without limit.
For a query initiated by dbSendStatement()
,
dbHasCompleted()
always returns TRUE
.
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(), dbFetch(), dbGetInfo(), dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbIsReadOnly(), dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
This method returns whether the operation has completed.
A SELECT
query is completed if all rows have been fetched.
A data manipulation statement is always completed.
## S4 method for signature 'DatabaseConnectorJdbcResult' dbHasCompleted(res, ...)
res |
An object inheriting from DBIResult. |
... |
Other arguments passed on to methods. |
dbHasCompleted()
returns a logical scalar.
For a query initiated by dbSendQuery()
with non-empty result set,
dbHasCompleted()
returns FALSE
initially
and TRUE
after calling dbFetch()
without limit.
For a query initiated by dbSendStatement()
,
dbHasCompleted()
always returns TRUE
.
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(), dbFetch(), dbGetInfo(), dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbIsReadOnly(), dbIsValid(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
This generic tests whether a database object is still valid (i.e. it hasn't been disconnected or cleared).
## S4 method for signature 'DatabaseConnectorDbiConnection' dbIsValid(dbObj, ...)
dbObj |
An object inheriting from DBIObject, i.e. DBIDriver, DBIConnection, or a DBIResult |
... |
Other arguments to methods. |
dbIsValid()
returns a logical scalar,
TRUE
if the object specified by dbObj
is valid,
FALSE
otherwise.
A DBIConnection object is initially valid,
and becomes invalid after disconnecting with dbDisconnect()
.
For an invalid connection object (e.g., for some drivers if the object
is saved to a file and then restored), the method also returns FALSE
.
A DBIResult object is valid after a call to dbSendQuery()
,
and stays valid even after all rows have been fetched;
only clearing it with dbClearResult()
invalidates it.
A DBIResult object is also valid after a call to dbSendStatement()
,
and stays valid after querying the number of rows affected;
only clearing it with dbClearResult()
invalidates it.
If the connection to the database system is dropped (e.g., due to
connectivity problems, server failure, etc.), dbIsValid()
should return
FALSE
. This is not tested automatically.
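A minimal sketch, not taken from the original documentation, using an in-memory SQLite connection:

```r
library(DatabaseConnector)

conn <- connect(dbms = "sqlite", server = ":memory:")
dbIsValid(conn)  # TRUE: the connection is open
dbDisconnect(conn)
dbIsValid(conn)  # FALSE: disconnected connections are invalid
```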
Other DBIDriver generics: DBIDriver-class, dbCanConnect(), dbConnect(), dbDataType(), dbDriver(), dbGetInfo(), dbIsReadOnly(), dbListConnections()
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(), dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(), dbGetQuery(), dbIsReadOnly(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(), dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(), dbFetch(), dbGetInfo(), dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
This generic tests whether a database object is still valid (i.e. it hasn't been disconnected or cleared).
## S4 method for signature 'DatabaseConnectorJdbcConnection' dbIsValid(dbObj, ...)
dbObj |
An object inheriting from DBIObject, i.e. DBIDriver, DBIConnection, or a DBIResult |
... |
Other arguments to methods. |
dbIsValid()
returns a logical scalar,
TRUE
if the object specified by dbObj
is valid,
FALSE
otherwise.
A DBIConnection object is initially valid,
and becomes invalid after disconnecting with dbDisconnect()
.
For an invalid connection object (e.g., for some drivers if the object
is saved to a file and then restored), the method also returns FALSE
.
A DBIResult object is valid after a call to dbSendQuery()
,
and stays valid even after all rows have been fetched;
only clearing it with dbClearResult()
invalidates it.
A DBIResult object is also valid after a call to dbSendStatement()
,
and stays valid after querying the number of rows affected;
only clearing it with dbClearResult()
invalidates it.
If the connection to the database system is dropped (e.g., due to
connectivity problems, server failure, etc.), dbIsValid()
should return
FALSE
. This is not tested automatically.
Other DBIDriver generics: DBIDriver-class, dbCanConnect(), dbConnect(), dbDataType(), dbDriver(), dbGetInfo(), dbIsReadOnly(), dbListConnections()
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(), dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(), dbGetQuery(), dbIsReadOnly(), dbListFields(), dbListObjects(), dbListResults(), dbListTables(), dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
Other DBIResult generics: DBIResult-class, dbBind(), dbClearResult(), dbColumnInfo(), dbFetch(), dbGetInfo(), dbGetRowCount(), dbGetRowsAffected(), dbGetStatement(), dbHasCompleted(), dbIsReadOnly(), dbQuoteIdentifier(), dbQuoteLiteral(), dbQuoteString(), dbUnquoteIdentifier()
Returns the field names of a remote table as a character vector.
## S4 method for signature 'DatabaseConnectorConnection,character' dbListFields(conn, name, databaseSchema = NULL, ...)
conn |
A DBIConnection object, as returned by
|
name |
The table name, passed on to
|
databaseSchema |
The name of the database schema. See details for platform-specific details. |
... |
Other parameters passed on to methods. |
The databaseSchema argument is interpreted differently according to the platform: SQL Server and PDW: the databaseSchema should specify both the database and the schema, e.g. 'my_database.dbo'. Impala: the databaseSchema should specify the database. Oracle: the databaseSchema should specify the Oracle 'user'. All others: the databaseSchema should specify the schema.
dbListFields()
returns a character vector
that enumerates all fields
in the table in the correct order.
This also works for temporary tables if supported by the database.
The returned names are suitable for quoting with dbQuoteIdentifier()
.
dbColumnInfo()
to get the type of the fields.
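A minimal sketch, not taken from the original documentation, using an in-memory SQLite connection; the table and column names are illustrative:

```r
library(DatabaseConnector)

conn <- connect(dbms = "sqlite", server = ":memory:")
dbExecute(conn, "CREATE TABLE person (person_id INT, year_of_birth INT);")
dbListFields(conn, "person")  # column names in table order
dbDisconnect(conn)
```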
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(), dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(), dbGetQuery(), dbIsReadOnly(), dbIsValid(), dbListObjects(), dbListResults(), dbListTables(), dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
Returns the unquoted names of remote tables accessible through this connection. This should include views and temporary objects, but not all database backends (in particular RMariaDB and RMySQL) support this.
## S4 method for signature 'DatabaseConnectorConnection' dbListTables(conn, databaseSchema = NULL, ...)
conn |
A DBIConnection object, as returned by
|
databaseSchema |
The name of the database schema. See details for platform-specific details. |
... |
Not used |
The databaseSchema argument is interpreted differently according to the platform: SQL Server and PDW: the databaseSchema should specify both the database and the schema, e.g. 'my_database.dbo'. Impala: the databaseSchema should specify the database. Oracle: the databaseSchema should specify the Oracle 'user'. All others: the databaseSchema should specify the schema.
dbListTables()
returns a character vector
that enumerates all tables
and views
in the database.
Tables added with dbWriteTable()
are part of the list.
As soon as a table is removed from the database,
it is also removed from the list of database tables.
The same applies to temporary tables if supported by the database.
The returned names are suitable for quoting with dbQuoteIdentifier()
.
Other DBIConnection generics: DBIConnection-class, dbAppendTable(), dbCreateTable(), dbDataType(), dbDisconnect(), dbExecute(), dbExistsTable(), dbGetException(), dbGetInfo(), dbGetQuery(), dbIsReadOnly(), dbIsValid(), dbListFields(), dbListObjects(), dbListResults(), dbReadTable(), dbRemoveTable(), dbSendQuery(), dbSendStatement(), dbWriteTable()
The SqlRender package provides functions that translate SQL from OHDSI-SQL to
a target SQL dialect. These functions need the name of the database platform to
translate to. The dbms
function returns the dbms for any DBI
connection that can be passed along to SqlRender translation functions (see example).
dbms(connection)
connection |
The connection to the database server created using either
|
The name of the database (dbms) used by SqlRender
library(DatabaseConnector)
con <- connect(dbms = "sqlite", server = ":memory:")
dbms(con)
#> [1] "sqlite"
SqlRender::translate("DATEADD(d, 365, dateColumn)", targetDialect = dbms(con))
#> "CAST(STRFTIME('%s', DATETIME(dateColumn, 'unixepoch', (365)||' days')) AS REAL)"
disconnect(con)
Reads a database table to a data frame, optionally converting a column to row names and converting the column names to valid R identifiers.
## S4 method for signature 'DatabaseConnectorConnection,character'
dbReadTable(
  conn,
  name,
  databaseSchema = NULL,
  oracleTempSchema = NULL,
  tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
  ...
)
conn |
A DBIConnection object, as returned by
|
name |
The table name, passed on to
|
databaseSchema |
The name of the database schema. See details for platform-specific details. |
oracleTempSchema |
DEPRECATED: use |
tempEmulationSchema |
Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created. |
... |
Other parameters passed on to methods. |
The databaseSchema argument is interpreted differently according to the platform: SQL Server and PDW: the databaseSchema should specify both the database and the schema, e.g. 'my_database.dbo'. Impala: the databaseSchema should specify the database. Oracle: the databaseSchema should specify the Oracle 'user'. All others: the databaseSchema should specify the schema.
dbReadTable()
returns a data frame that contains the complete data
from the remote table, effectively the result of calling dbGetQuery()
with SELECT * FROM <name>
.
An empty table is returned as a data frame with zero rows.
The presence of rownames depends on the row.names
argument,
see sqlColumnToRownames()
for details:
If FALSE
or NULL
, the returned data frame doesn't have row names.
If TRUE
, a column named "row_names" is converted to row names.
If NA
, a column named "row_names" is converted to row names if it exists,
otherwise no translation occurs.
If a string, this specifies the name of the column in the remote table that contains the row names.
The default is row.names = FALSE
.
If the database supports identifiers with special characters,
the columns in the returned data frame are converted to valid R
identifiers
if the check.names
argument is TRUE
.
If check.names = FALSE
, the returned table has non-syntactic column names without quotes.
Other DBIConnection generics:
DBIConnection-class
,
dbAppendTable()
,
dbCreateTable()
,
dbDataType()
,
dbDisconnect()
,
dbExecute()
,
dbExistsTable()
,
dbGetException()
,
dbGetInfo()
,
dbGetQuery()
,
dbIsReadOnly()
,
dbIsValid()
,
dbListFields()
,
dbListObjects()
,
dbListResults()
,
dbListTables()
,
dbRemoveTable()
,
dbSendQuery()
,
dbSendStatement()
,
dbWriteTable()
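As an illustration, a round trip against an in-memory SQLite database (this sketch assumes the RSQLite package is installed; the table and column names are made up for the example):

```r
library(DatabaseConnector)

# Connect to an in-memory SQLite database (uses RSQLite, not JDBC)
conn <- connect(dbms = "sqlite", server = ":memory:")

# Write a small table, then read it back as a data frame
dbWriteTable(conn, "cars", head(cars, 3))
df <- dbReadTable(conn, "cars")
print(df)

disconnect(conn)
```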
Remove a remote table (e.g., created by dbWriteTable()
)
from the database.
## S4 method for signature 'DatabaseConnectorConnection,ANY' dbRemoveTable( conn, name, databaseSchema = NULL, oracleTempSchema = NULL, tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"), ... )
conn |
A DBIConnection object, as returned by
|
name |
The table name, passed on to
|
databaseSchema |
The name of the database schema. See details for platform-specific details. |
oracleTempSchema |
DEPRECATED: use |
tempEmulationSchema |
Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created. |
... |
Other parameters passed on to methods. |
The databaseSchema
argument is interpreted differently according to the different platforms:
SQL Server and PDW: The databaseSchema should specify both the database and the schema, e.g.
'my_database.dbo'. Impala: The databaseSchema should specify the database. Oracle:
The databaseSchema should specify the Oracle 'user'. All others: The databaseSchema should
specify the schema.
dbRemoveTable()
returns TRUE
, invisibly.
Other DBIConnection generics:
DBIConnection-class
,
dbAppendTable()
,
dbCreateTable()
,
dbDataType()
,
dbDisconnect()
,
dbExecute()
,
dbExistsTable()
,
dbGetException()
,
dbGetInfo()
,
dbGetQuery()
,
dbIsReadOnly()
,
dbIsValid()
,
dbListFields()
,
dbListObjects()
,
dbListResults()
,
dbListTables()
,
dbReadTable()
,
dbSendQuery()
,
dbSendStatement()
,
dbWriteTable()
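For example, creating and then removing a table on an in-memory SQLite database (a sketch assuming the RSQLite package is installed; the table name is illustrative):

```r
library(DatabaseConnector)

conn <- connect(dbms = "sqlite", server = ":memory:")
dbWriteTable(conn, "scratch_table", data.frame(x = 1:3))

dbExistsTable(conn, "scratch_table")  # the table now exists
dbRemoveTable(conn, "scratch_table")
dbExistsTable(conn, "scratch_table")  # the table is gone

disconnect(conn)
```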
The dbSendQuery()
method only submits and synchronously executes the
SQL query to the database engine. It does not extract any
records — for that you need to use the dbFetch()
method, and
then you must call dbClearResult()
when you finish fetching the
records you need. For interactive use, you should almost always prefer
dbGetQuery()
.
## S4 method for signature 'DatabaseConnectorDbiConnection,character' dbSendQuery(conn, statement, translate = TRUE, ...)
conn |
A DBIConnection object, as returned by
|
statement |
a character string containing SQL. |
translate |
Translate the query using SqlRender? |
... |
Other parameters passed on to methods. |
This method is for SELECT
queries only. Some backends may
support data manipulation queries through this method for compatibility
reasons. However, callers are strongly encouraged to use
dbSendStatement()
for data manipulation statements.
The query is submitted to the database server and the DBMS executes it,
possibly generating vast amounts of data. Where these data live
is driver-specific: some drivers may choose to leave the output on the server
and transfer them piecemeal to R, others may transfer all the data to the
client – but not necessarily to the memory that R manages. See individual
drivers' dbSendQuery()
documentation for details.
dbSendQuery()
returns
an S4 object that inherits from DBIResult.
The result set can be used with dbFetch()
to extract records.
Once you have finished using a result, make sure to clear it
with dbClearResult()
.
For updates: dbSendStatement()
and dbExecute()
.
Other DBIConnection generics:
DBIConnection-class
,
dbAppendTable()
,
dbCreateTable()
,
dbDataType()
,
dbDisconnect()
,
dbExecute()
,
dbExistsTable()
,
dbGetException()
,
dbGetInfo()
,
dbGetQuery()
,
dbIsReadOnly()
,
dbIsValid()
,
dbListFields()
,
dbListObjects()
,
dbListResults()
,
dbListTables()
,
dbReadTable()
,
dbRemoveTable()
,
dbSendStatement()
,
dbWriteTable()
The dbSendQuery()
method only submits and synchronously executes the
SQL query to the database engine. It does not extract any
records — for that you need to use the dbFetch()
method, and
then you must call dbClearResult()
when you finish fetching the
records you need. For interactive use, you should almost always prefer
dbGetQuery()
.
## S4 method for signature 'DatabaseConnectorJdbcConnection,character' dbSendQuery(conn, statement, translate = TRUE, ...)
conn |
A DBIConnection object, as returned by
|
statement |
a character string containing SQL. |
translate |
Translate the query using SqlRender? |
... |
Other parameters passed on to methods. |
This method is for SELECT
queries only. Some backends may
support data manipulation queries through this method for compatibility
reasons. However, callers are strongly encouraged to use
dbSendStatement()
for data manipulation statements.
The query is submitted to the database server and the DBMS executes it,
possibly generating vast amounts of data. Where these data live
is driver-specific: some drivers may choose to leave the output on the server
and transfer them piecemeal to R, others may transfer all the data to the
client – but not necessarily to the memory that R manages. See individual
drivers' dbSendQuery()
documentation for details.
dbSendQuery()
returns
an S4 object that inherits from DBIResult.
The result set can be used with dbFetch()
to extract records.
Once you have finished using a result, make sure to clear it
with dbClearResult()
.
For updates: dbSendStatement()
and dbExecute()
.
Other DBIConnection generics:
DBIConnection-class
,
dbAppendTable()
,
dbCreateTable()
,
dbDataType()
,
dbDisconnect()
,
dbExecute()
,
dbExistsTable()
,
dbGetException()
,
dbGetInfo()
,
dbGetQuery()
,
dbIsReadOnly()
,
dbIsValid()
,
dbListFields()
,
dbListObjects()
,
dbListResults()
,
dbListTables()
,
dbReadTable()
,
dbRemoveTable()
,
dbSendStatement()
,
dbWriteTable()
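The submit / fetch / clear pattern described above can be sketched against an in-memory SQLite database (assuming the RSQLite package is installed; table and column names are illustrative):

```r
library(DatabaseConnector)

conn <- connect(dbms = "sqlite", server = ":memory:")
dbWriteTable(conn, "person", data.frame(person_id = 1:10))

# Submit the query without fetching any rows yet
result <- dbSendQuery(conn, "SELECT person_id FROM person")

# Fetch in pieces: first four rows, then the remainder
batch <- dbFetch(result, n = 4)
rest  <- dbFetch(result)

# Always release the result when done fetching
dbClearResult(result)

disconnect(conn)
```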
The dbSendStatement()
method only submits and synchronously executes the
SQL data manipulation statement (e.g., UPDATE
, DELETE
,
INSERT INTO
, DROP TABLE
, ...) to the database engine. To query
the number of affected rows, call dbGetRowsAffected()
on the
returned result object. You must also call dbClearResult()
after
that. For interactive use, you should almost always prefer
dbExecute()
.
## S4 method for signature 'DatabaseConnectorConnection,character' dbSendStatement(conn, statement, translate = TRUE, ...)
conn |
A DBIConnection object, as returned by
|
statement |
a character string containing SQL. |
translate |
Translate the query using SqlRender? |
... |
Other parameters passed on to methods. |
dbSendStatement()
comes with a default implementation that simply
forwards to dbSendQuery()
, to support backends that only
implement the latter.
dbSendStatement()
returns
an S4 object that inherits from DBIResult.
The result set can be used with dbGetRowsAffected()
to
determine the number of rows affected by the query.
Once you have finished using a result, make sure to clear it
with dbClearResult()
.
For queries: dbSendQuery()
and dbGetQuery()
.
Other DBIConnection generics:
DBIConnection-class
,
dbAppendTable()
,
dbCreateTable()
,
dbDataType()
,
dbDisconnect()
,
dbExecute()
,
dbExistsTable()
,
dbGetException()
,
dbGetInfo()
,
dbGetQuery()
,
dbIsReadOnly()
,
dbIsValid()
,
dbListFields()
,
dbListObjects()
,
dbListResults()
,
dbListTables()
,
dbReadTable()
,
dbRemoveTable()
,
dbSendQuery()
,
dbWriteTable()
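A sketch of the statement / rows-affected / clear pattern on an in-memory SQLite database (assuming the RSQLite package is installed; table and column names are illustrative):

```r
library(DatabaseConnector)

conn <- connect(dbms = "sqlite", server = ":memory:")
dbWriteTable(conn, "person", data.frame(person_id = 1:10))

# Submit a data manipulation statement
result <- dbSendStatement(conn, "DELETE FROM person WHERE person_id > 5")

# Query how many rows the statement affected, then release the result
dbGetRowsAffected(result)
dbClearResult(result)

disconnect(conn)
```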
These methods are deprecated; please consult the documentation of the individual backends for the construction of driver instances.
dbDriver()
is a helper method used to create a new driver object
given the name of a database or the corresponding R package. It works
through convention: all DBI-extending packages should provide an exported
object with the same name as the package. dbDriver()
just looks for
this object in the right places: if you know what database you are connecting
to, you should call the function directly.
dbUnloadDriver()
is not implemented for modern backends.
## S4 method for signature 'DatabaseConnectorDriver' dbUnloadDriver(drv, ...)
drv |
an object that inherits from |
... |
any other arguments are passed to the driver |
The client part of the database communication is
initialized (typically dynamically loading C code, etc.) but note that
connecting to the database engine itself needs to be done through calls to
dbConnect
.
In the case of dbDriver
, a driver object whose class extends
DBIDriver
. This object may be used to create connections to the
actual DBMS engine.
In the case of dbUnloadDriver
, a logical indicating whether the
operation succeeded or not.
Other DBIDriver generics:
DBIDriver-class
,
dbCanConnect()
,
dbConnect()
,
dbDataType()
,
dbGetInfo()
,
dbIsReadOnly()
,
dbIsValid()
,
dbListConnections()
Writes, overwrites or appends a data frame to a database table, optionally converting row names to a column and specifying SQL data types for fields.
## S4 method for signature 'DatabaseConnectorConnection,ANY' dbWriteTable( conn, name, value, databaseSchema = NULL, overwrite = FALSE, append = FALSE, temporary = FALSE, oracleTempSchema = NULL, tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"), ... )
conn |
A DBIConnection object, as returned by
|
name |
The table name, passed on to
|
value |
a data.frame (or coercible to data.frame). |
databaseSchema |
The name of the database schema. See details for platform-specific details. |
overwrite |
Overwrite an existing table (if exists)? |
append |
Append to existing table? |
temporary |
Should the table be created as a temp table? |
oracleTempSchema |
DEPRECATED: use |
tempEmulationSchema |
Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created. |
... |
Other parameters passed on to methods. |
The databaseSchema
argument is interpreted differently according to the different platforms:
SQL Server and PDW: The databaseSchema should specify both the database and the schema, e.g.
'my_database.dbo'. Impala: The databaseSchema should specify the database. Oracle:
The databaseSchema should specify the Oracle 'user'. All others: The databaseSchema should
specify the schema.
dbWriteTable()
returns TRUE
, invisibly.
Other DBIConnection generics:
DBIConnection-class
,
dbAppendTable()
,
dbCreateTable()
,
dbDataType()
,
dbDisconnect()
,
dbExecute()
,
dbExistsTable()
,
dbGetException()
,
dbGetInfo()
,
dbGetQuery()
,
dbIsReadOnly()
,
dbIsValid()
,
dbListFields()
,
dbListObjects()
,
dbListResults()
,
dbListTables()
,
dbReadTable()
,
dbRemoveTable()
,
dbSendQuery()
,
dbSendStatement()
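A sketch of the create-then-append behavior on an in-memory SQLite database (assuming the RSQLite package is installed; table and column names are illustrative):

```r
library(DatabaseConnector)

conn <- connect(dbms = "sqlite", server = ":memory:")

# First call creates the table; second call appends to it
dbWriteTable(conn, "events", data.frame(id = 1:2))
dbWriteTable(conn, "events", data.frame(id = 3:4), append = TRUE)

nrow(dbReadTable(conn, "events"))  # all four rows are now present

disconnect(conn)
```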
Close the connection to the server.
disconnect(connection)
connection |
The connection to the database server created using either
|
## Not run: connectionDetails <- createConnectionDetails( dbms = "postgresql", server = "localhost", user = "root", password = "blah" ) conn <- connect(connectionDetails) count <- querySql(conn, "SELECT COUNT(*) FROM person") disconnect(conn) ## End(Not run)
Download the DatabaseConnector JDBC drivers from https://ohdsi.github.io/DatabaseConnectorJars/
downloadJdbcDrivers( dbms, pathToDriver = Sys.getenv("DATABASECONNECTOR_JAR_FOLDER"), method = "auto", ... )
dbms |
The type of DBMS to download JAR files for.
|
pathToDriver |
The full path to the folder where the JDBC driver .jar files should be downloaded to. By default the value of the environment variable "DATABASECONNECTOR_JAR_FOLDER" is used. |
method |
The method used for downloading files. See |
... |
Further arguments passed on to |
The following versions of the JDBC drivers are currently used:
PostgreSQL: V42.2.18
RedShift: V2.1.0.9
SQL Server: V9.2.0
Oracle: V19.8
Spark: V2.6.21
Snowflake: V3.13.22
BigQuery: V1.3.2.1003
Invisibly returns the destination if the download was successful.
## Not run: downloadJdbcDrivers("redshift") ## End(Not run)
On some DBMSs, like Oracle and BigQuery, DatabaseConnector
through SqlRender
emulates temp tables
in a schema provided by the user. Ideally, these tables are deleted by the application / R script creating them,
but for various reasons orphan temp tables may remain. This function drops all emulated temp tables created in this
session only.
dropEmulatedTempTables( connection, tempEmulationSchema = getOption("sqlRenderTempEmulationSchema") )
connection |
The connection to the database server created using either
|
tempEmulationSchema |
Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created. |
Invisibly returns the list of deleted emulated temp tables.
This function is provided primarily to be used together with dbplyr
when querying
a database. It will also work in dplyr
against data frames.
eoMonth(date)
date |
A date in the month for which we need the end. |
The date of the last day of the month.
eoMonth(as.Date("2000-02-01"))
This function executes SQL consisting of one or more statements.
executeSql( connection, sql, profile = FALSE, progressBar = !as.logical(Sys.getenv("TESTTHAT", unset = FALSE)), reportOverallTime = TRUE, errorReportFile = file.path(getwd(), "errorReportSql.txt"), runAsBatch = FALSE )
connection |
The connection to the database server created using either
|
sql |
The SQL to be executed |
profile |
When true, each separate statement is written to file prior to sending to the server, and the time taken to execute a statement is displayed. |
progressBar |
When true, a progress bar is shown based on the statements in the SQL code. |
reportOverallTime |
When true, the function will display the overall time taken to execute all statements. |
errorReportFile |
The file where an error report will be written if an error occurs. Defaults to 'errorReportSql.txt' in the current working directory. |
runAsBatch |
When true the SQL statements are sent to the server as a single batch, and executed there. This will be faster if you have many small SQL statements, but there will be no progress bar, and no per-statement error messages. If the database platform does not support batched updates the query is executed without batching. |
This function splits the SQL into separate statements and sends them to the server for execution. If an error occurs during SQL execution, this error is written to a file to facilitate debugging. Optionally, a progress bar is shown and the total time taken to execute the SQL is displayed. Optionally, each separate SQL statement is written to file, and the execution time per statement is shown to aid in detecting performance issues.
## Not run: connectionDetails <- createConnectionDetails( dbms = "postgresql", server = "localhost", user = "root", password = "blah", schema = "cdm_v4" ) conn <- connect(connectionDetails) executeSql(conn, "CREATE TABLE x (k INT); CREATE TABLE y (k INT);") disconnect(conn) ## End(Not run)
Checks whether a table exists. Accounts for surrounding escape characters. Case insensitive.
existsTable(connection, databaseSchema, tableName)
connection |
The connection to the database server created using either
|
databaseSchema |
The name of the database schema. See details for platform-specific details. |
tableName |
The name of the table to check. |
The databaseSchema
argument is interpreted differently according to the different platforms:
SQL Server and PDW: The databaseSchema should specify both the database and the schema, e.g.
'my_database.dbo'. Impala: The databaseSchema should specify the database. Oracle:
The databaseSchema should specify the Oracle 'user'. All others: The databaseSchema should
specify the schema.
A logical value indicating whether the table exists.
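For example, using SQLite's default 'main' schema on an in-memory database (a sketch assuming the RSQLite package is installed; the table name is illustrative):

```r
library(DatabaseConnector)

conn <- connect(dbms = "sqlite", server = ":memory:")
insertTable(conn, "main", "my_table", data.frame(x = 1))

# The check is case insensitive, so mixed case still matches
existsTable(conn, "main", "MY_TABLE")  # the table exists
existsTable(conn, "main", "missing")   # no such table

disconnect(conn)
```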
ParallelLogger
log fileWhen using the ParallelLogger
default file logger, and using options(LOG_DATABASECONNECTOR_SQL = TRUE)
,
DatabaseConnector
will log all SQL sent to the server, and the time to get a response.
This function parses the log file, producing a data frame with time per query.
extractQueryTimes(logFileName)
logFileName |
Name of the |
A data frame with queries and their run times in milliseconds.
connection <- connect(dbms = "sqlite", server = ":memory:") logFile <- tempfile(fileext = ".log") ParallelLogger::addDefaultFileLogger(fileName = logFile, name = "MY_LOGGER") options(LOG_DATABASECONNECTOR_SQL = TRUE) executeSql(connection, "CREATE TABLE test (x INT);") querySql(connection, "SELECT * FROM test;") extractQueryTimes(logFile) ParallelLogger::unregisterLogger("MY_LOGGER") unlink(logFile) disconnect(connection)
For debugging purposes: get the available Java heap space.
getAvailableJavaHeapSpace()
The Java heap space (in bytes).
This function returns the names of all tables in a database schema.
getTableNames(connection, databaseSchema = NULL, cast = "lower")
connection |
The connection to the database server created using either
|
databaseSchema |
The name of the database schema. See details for platform-specific details. |
cast |
Should the table names be cast to uppercase or lowercase before being returned? Valid options are "upper", "lower" (default), and "none" (no casting is done). |
The databaseSchema
argument is interpreted differently according to the different platforms:
SQL Server and PDW: The databaseSchema should specify both the database and the schema, e.g.
'my_database.dbo'. Impala: The databaseSchema should specify the database. Oracle:
The databaseSchema should specify the Oracle 'user'. All others: The databaseSchema should
specify the schema.
A character vector of table names.
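A minimal sketch on an in-memory SQLite database (assuming the RSQLite package is installed; the table names are illustrative):

```r
library(DatabaseConnector)

conn <- connect(dbms = "sqlite", server = ":memory:")
insertTable(conn, "main", "table_a", data.frame(x = 1))
insertTable(conn, "main", "table_b", data.frame(x = 1))

getTableNames(conn, "main")                  # lowercase names by default
getTableNames(conn, "main", cast = "upper")  # uppercase names

disconnect(conn)
```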
Can be used with dplyr::tbl()
to indicate a table in a specific database schema.
inDatabaseSchema(databaseSchema, table)
databaseSchema |
The name of the database schema. See details for platform-specific details. |
table |
The name of the table in the database schema. |
The databaseSchema
argument is interpreted differently according to the different platforms:
SQL Server and PDW: The databaseSchema should specify both the database and the schema, e.g.
'my_database.dbo'. Impala: The databaseSchema should specify the database. Oracle:
The databaseSchema should specify the Oracle 'user'. All others: The databaseSchema should
specify the schema.
An object representing the table and database schema.
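A sketch of the dbplyr use case on an in-memory SQLite database (this assumes the RSQLite, dplyr, and dbplyr packages are installed; table and column names are illustrative):

```r
library(DatabaseConnector)
library(dplyr)

conn <- connect(dbms = "sqlite", server = ":memory:")
insertTable(conn, "main", "person", data.frame(person_id = 1:5))

# Point dplyr::tbl() at a table in a specific database schema
person <- tbl(conn, inDatabaseSchema("main", "person"))
person %>%
  count() %>%
  collect()

disconnect(conn)
```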
This function sends the data in a data frame to a table on the server. Either a new table is created, or the data is appended to an existing table.
insertTable( connection, databaseSchema = NULL, tableName, data, dropTableIfExists = TRUE, createTable = TRUE, tempTable = FALSE, oracleTempSchema = NULL, tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"), bulkLoad = Sys.getenv("DATABASE_CONNECTOR_BULK_UPLOAD"), useMppBulkLoad = Sys.getenv("USE_MPP_BULK_LOAD"), progressBar = FALSE, camelCaseToSnakeCase = FALSE )
connection |
The connection to the database server created using either
|
databaseSchema |
The name of the database schema. See details for platform-specific details. |
tableName |
The name of the table where the data should be inserted. |
data |
The data frame containing the data to be inserted. |
dropTableIfExists |
Drop the table if the table already exists before writing? |
createTable |
Create a new table? If false, will append to existing table. |
tempTable |
Should the table be created as a temp table? |
oracleTempSchema |
DEPRECATED: use |
tempEmulationSchema |
Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created. |
bulkLoad |
If using Redshift, PDW, Hive or Postgres, use more performant bulk loading techniques. Does not work for temp tables (except for HIVE). See Details for requirements for the various platforms. |
useMppBulkLoad |
DEPRECATED. Use |
progressBar |
Show a progress bar when uploading? |
camelCaseToSnakeCase |
If TRUE, the data frame column names are assumed to use camelCase and are converted to snake_case before uploading. |
The databaseSchema
argument is interpreted differently according to the different platforms:
SQL Server and PDW: The databaseSchema should specify both the database and the schema, e.g.
'my_database.dbo'. Impala: The databaseSchema should specify the database. Oracle:
The databaseSchema should specify the Oracle 'user'. All others: The databaseSchema should
specify the schema.
This function sends the data in a data frame to a table on the server. Either a new table is created, or the data is appended to an existing table. NA values are inserted as null values in the database.
Bulk uploading:
Redshift: The MPP bulk loading relies upon the CloudyR S3 library to test a connection to an S3 bucket using AWS S3 credentials. Credentials are configured directly into the System Environment using the following keys: Sys.setenv("AWS_ACCESS_KEY_ID" = "some_access_key_id", "AWS_SECRET_ACCESS_KEY" = "some_secret_access_key", "AWS_DEFAULT_REGION" = "some_aws_region", "AWS_BUCKET_NAME" = "some_bucket_name", "AWS_OBJECT_KEY" = "some_object_key", "AWS_SSE_TYPE" = "server_side_encryption_type").
PDW: The MPP bulk loading relies upon the client having a Windows OS and the DWLoader executable installed, and the following permissions granted: --Grant BULK Load permissions - needed at a server level USE master; GRANT ADMINISTER BULK OPERATIONS TO user; --Grant Staging database permissions - we will use the user db. USE scratch; EXEC sp_addrolemember 'db_ddladmin', user; Set the R environment variable DWLOADER_PATH to the location of the binary.
PostgreSQL: Uses the 'psql' executable to upload. Set the POSTGRES_PATH environment variable to the Postgres binary path, e.g. 'C:/Program Files/PostgreSQL/11/bin' on Windows or '/Library/PostgreSQL/16/bin' on macOS.
## Not run: connectionDetails <- createConnectionDetails( dbms = "mysql", server = "localhost", user = "root", password = "blah" ) conn <- connect(connectionDetails) data <- data.frame(x = c(1, 2, 3), y = c("a", "b", "c")) insertTable(conn, "my_schema", "my_table", data) disconnect(conn) ## bulk data insert with Redshift or PDW connectionDetails <- createConnectionDetails( dbms = "redshift", server = "localhost", user = "root", password = "blah", schema = "cdm_v5" ) conn <- connect(connectionDetails) data <- data.frame(x = c(1, 2, 3), y = c("a", "b", "c")) insertTable( connection = conn, databaseSchema = "scratch", tableName = "somedata", data = data, dropTableIfExists = TRUE, createTable = TRUE, tempTable = FALSE, bulkLoad = TRUE ) # or, Sys.setenv("DATABASE_CONNECTOR_BULK_UPLOAD" = TRUE) ## End(Not run)
This function checks a character vector against a predefined list of reserved SQL words.
isSqlReservedWord(sqlNames, warn = FALSE)
sqlNames |
A character vector containing table or field names to check. |
warn |
(logical) Should a warning be thrown if invalid SQL names are found? |
A logical vector with length equal to sqlNames that is TRUE for each name that is reserved and FALSE otherwise.
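For instance (the names checked here are illustrative):

```r
library(DatabaseConnector)

# Returns a logical vector: TRUE where the name is a reserved SQL word
isSqlReservedWord(c("SELECT", "person_id", "TABLE"))

# Also emit a warning when reserved words are found
isSqlReservedWord(c("SELECT", "person_id"), warn = TRUE)
```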
Below are instructions for downloading JDBC drivers for the various data platforms. Once downloaded
use the pathToDriver
argument in the connect()
or createConnectionDetails()
functions to point to the driver. Alternatively, you can set the 'DATABASECONNECTOR_JAR_FOLDER' environment
variable, for example in your .Renviron file (recommended).
Use the downloadJdbcDrivers()
function to download these drivers from the OHDSI GitHub pages.
Read the instructions here on how to obtain the Netezza JDBC driver.
Go to Cloudera's site, pick your OS version, and click "GET IT NOW!". Register, and you should be able to download the driver.
For SQLite we actually don't use a JDBC driver. Instead, we use the RSQLite package, which can be installed
using install.packages("RSQLite")
.
This function executes a single SQL statement.
lowLevelExecuteSql(connection, sql)
connection |
The connection to the database server created using either
|
sql |
The SQL to be executed |
This is the equivalent of the querySql()
function, except no error report is written
when an error occurs.
lowLevelQuerySql( connection, query, datesAsString = FALSE, integerAsNumeric = getOption("databaseConnectorIntegerAsNumeric", default = TRUE), integer64AsNumeric = getOption("databaseConnectorInteger64AsNumeric", default = TRUE) )
connection |
The connection to the database server created using either
|
query |
The SQL statement to retrieve the data |
datesAsString |
Logical: Should dates be imported as character vectors, or should they be converted to R's date format? |
integerAsNumeric |
Logical: should 32-bit integers be converted to numeric (double) values? If FALSE
32-bit integers will be represented using R's native |
integer64AsNumeric |
Logical: should 64-bit integers be converted to numeric (double) values? If FALSE
64-bit integers will be represented using |
Retrieves data from the database server and stores it in a data frame. Null values in the database are converted to NA values in R.
A data frame containing the data retrieved from the server.
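A minimal sketch on an in-memory SQLite database (assuming the RSQLite package is installed; table and column names are illustrative):

```r
library(DatabaseConnector)

conn <- connect(dbms = "sqlite", server = ":memory:")
insertTable(conn, "main", "person", data.frame(person_id = 1:3))

# Like querySql(), but no error report file is written on failure
lowLevelQuerySql(conn, "SELECT COUNT(*) AS n FROM main.person")

disconnect(conn)
```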
This is the equivalent of the querySqlToAndromeda()
function, except no error report is
written when an error occurs.
lowLevelQuerySqlToAndromeda( connection, query, andromeda, andromedaTableName, datesAsString = FALSE, appendToTable = FALSE, snakeCaseToCamelCase = FALSE, integerAsNumeric = getOption("databaseConnectorIntegerAsNumeric", default = TRUE), integer64AsNumeric = getOption("databaseConnectorInteger64AsNumeric", default = TRUE) )
connection |
The connection to the database server created using either
|
query |
The SQL statement to retrieve the data |
andromeda |
An open Andromeda object, for example as created
using |
andromedaTableName |
The name of the table in the local Andromeda object where the results of the query will be stored. |
datesAsString |
Should dates be imported as character vectors, or should they be converted to R's date format? |
appendToTable |
If FALSE, any existing table in the Andromeda with the same name will be replaced with the new data. If TRUE, data will be appended to an existing table, assuming it has the exact same structure. |
snakeCaseToCamelCase |
If true, field names are assumed to use snake_case, and are converted to camelCase. |
integerAsNumeric |
Logical: should 32-bit integers be converted to numeric (double) values? If FALSE
32-bit integers will be represented using R's native |
integer64AsNumeric |
Logical: should 64-bit integers be converted to numeric (double) values? If FALSE
64-bit integers will be represented using |
Retrieves data from the database server and stores it in a local Andromeda object. This allows very large data sets to be retrieved without running out of memory. Null values in the database are converted to NA values in R. If a table with the same name already exists in the local Andromeda object, it is replaced.
Invisibly returns the andromeda. The Andromeda object will have a table added with the query results.
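This reference includes no example for lowLevelQuerySqlToAndromeda, so here is a minimal sketch in the style of the other examples in this document; the server, credentials, schema, and table names are placeholders and the code is not meant to be run as-is.

```r
## Not run — sketch only; connection details and table names are placeholders.
library(DatabaseConnector)
connectionDetails <- createConnectionDetails(
  dbms = "postgresql",
  server = "localhost",
  user = "root",
  password = "blah"
)
conn <- connect(connectionDetails)
andromeda <- Andromeda::andromeda()
lowLevelQuerySqlToAndromeda(
  connection = conn,
  query = "SELECT person_id, year_of_birth FROM cdm_v4.person",
  andromeda = andromeda,
  andromedaTableName = "person",
  snakeCaseToCamelCase = TRUE
)
disconnect(conn)
andromeda$person
```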
This function is provided primarily to be used together with dbplyr when querying a database. It will also work in dplyr against data frames.
month(date)
date |
The date. |
The month
month(as.Date("2000-02-01"))
This function sends SQL to the server, and returns the results.
querySql( connection, sql, errorReportFile = file.path(getwd(), "errorReportSql.txt"), snakeCaseToCamelCase = FALSE, integerAsNumeric = getOption("databaseConnectorIntegerAsNumeric", default = TRUE), integer64AsNumeric = getOption("databaseConnectorInteger64AsNumeric", default = TRUE) )
connection |
The connection to the database server, created using either connect() or dbConnect(). |
sql |
The SQL to be sent. |
errorReportFile |
The file where an error report will be written if an error occurs. Defaults to 'errorReportSql.txt' in the current working directory. |
snakeCaseToCamelCase |
If true, field names are assumed to use snake_case, and are converted to camelCase. |
integerAsNumeric |
Logical: should 32-bit integers be converted to numeric (double) values? If FALSE, 32-bit integers will be represented using R's native integer class. |
integer64AsNumeric |
Logical: should 64-bit integers be converted to numeric (double) values? If FALSE, 64-bit integers will be represented using bit64::integer64. |
This function sends the SQL to the server and retrieves the results. If an error occurs during SQL execution, this error is written to a file to facilitate debugging. Null values in the database are converted to NA values in R.
A data frame.
## Not run: connectionDetails <- createConnectionDetails( dbms = "postgresql", server = "localhost", user = "root", password = "blah", schema = "cdm_v4" ) conn <- connect(connectionDetails) count <- querySql(conn, "SELECT COUNT(*) FROM person") disconnect(conn) ## End(Not run)
This function sends SQL to the server and returns the results in a local Andromeda object.
querySqlToAndromeda( connection, sql, andromeda, andromedaTableName, errorReportFile = file.path(getwd(), "errorReportSql.txt"), snakeCaseToCamelCase = FALSE, appendToTable = FALSE, integerAsNumeric = getOption("databaseConnectorIntegerAsNumeric", default = TRUE), integer64AsNumeric = getOption("databaseConnectorInteger64AsNumeric", default = TRUE) )
connection |
The connection to the database server, created using either connect() or dbConnect(). |
sql |
The SQL to be sent. |
andromeda |
An open Andromeda object, for example as created using Andromeda::andromeda(). |
andromedaTableName |
The name of the table in the local Andromeda object where the results of the query will be stored. |
errorReportFile |
The file where an error report will be written if an error occurs. Defaults to 'errorReportSql.txt' in the current working directory. |
snakeCaseToCamelCase |
If true, field names are assumed to use snake_case, and are converted to camelCase. |
appendToTable |
If FALSE, any existing table in the Andromeda with the same name will be replaced with the new data. If TRUE, data will be appended to an existing table, assuming it has the exact same structure. |
integerAsNumeric |
Logical: should 32-bit integers be converted to numeric (double) values? If FALSE, 32-bit integers will be represented using R's native integer class. |
integer64AsNumeric |
Logical: should 64-bit integers be converted to numeric (double) values? If FALSE, 64-bit integers will be represented using bit64::integer64. |
Retrieves data from the database server and stores it in a local Andromeda object. This allows very large data sets to be retrieved without running out of memory. If an error occurs during SQL execution, this error is written to a file to facilitate debugging. Null values in the database are converted to NA values in R. If a table with the same name already exists in the local Andromeda object it is replaced.
Invisibly returns the andromeda. The Andromeda object will have a table added with the query results.
## Not run: andromeda <- Andromeda::andromeda() connectionDetails <- createConnectionDetails( dbms = "postgresql", server = "localhost", user = "root", password = "blah", schema = "cdm_v4" ) conn <- connect(connectionDetails) querySqlToAndromeda( connection = conn, sql = "SELECT * FROM person;", andromeda = andromeda, andromedaTableName = "foo" ) disconnect(conn) andromeda$foo ## End(Not run)
This function renders, translates, and executes SQL consisting of one or more statements.
renderTranslateExecuteSql( connection, sql, profile = FALSE, progressBar = TRUE, reportOverallTime = TRUE, errorReportFile = file.path(getwd(), "errorReportSql.txt"), runAsBatch = FALSE, oracleTempSchema = NULL, tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"), ... )
connection |
The connection to the database server, created using either connect() or dbConnect(). |
sql |
The SQL to be executed |
profile |
When true, each separate statement is written to file prior to sending to the server, and the time taken to execute a statement is displayed. |
progressBar |
When true, a progress bar is shown based on the statements in the SQL code. |
reportOverallTime |
When true, the function will display the overall time taken to execute all statements. |
errorReportFile |
The file where an error report will be written if an error occurs. Defaults to 'errorReportSql.txt' in the current working directory. |
runAsBatch |
When true, the SQL statements are sent to the server as a single batch and executed there. This will be faster if you have many small SQL statements, but there will be no progress bar and no per-statement error messages. If the database platform does not support batched updates, the statements are executed ordinarily. |
oracleTempSchema |
DEPRECATED: use tempEmulationSchema instead. |
tempEmulationSchema |
Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created. |
... |
Parameters that will be used to render the SQL. |
This function calls the render and translate functions in the SqlRender package before calling executeSql().
## Not run: connectionDetails <- createConnectionDetails( dbms = "postgresql", server = "localhost", user = "root", password = "blah", schema = "cdm_v4" ) conn <- connect(connectionDetails) renderTranslateExecuteSql(conn, sql = "SELECT * INTO #temp FROM @schema.person;", schema = "cdm_synpuf" ) disconnect(conn) ## End(Not run)
This function renders and translates SQL, sends it to the server, and processes the results in batches with a callback function. Note that the callback function should perform a row-wise operation. This is designed to work with massive data sets that won't fit in memory.
The batch sizes are determined by the Java virtual machine and will depend on the data.
renderTranslateQueryApplyBatched( connection, sql, fun, args = list(), errorReportFile = file.path(getwd(), "errorReportSql.txt"), snakeCaseToCamelCase = FALSE, oracleTempSchema = NULL, tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"), integerAsNumeric = getOption("databaseConnectorIntegerAsNumeric", default = TRUE), integer64AsNumeric = getOption("databaseConnectorInteger64AsNumeric", default = TRUE), ... )
connection |
The connection to the database server, created using either connect() or dbConnect(). |
sql |
The SQL to be sent. |
fun |
Function to apply to batch. Must take data.frame and integer position as parameters. |
args |
List of arguments to be passed to function call. |
errorReportFile |
The file where an error report will be written if an error occurs. Defaults to 'errorReportSql.txt' in the current working directory. |
snakeCaseToCamelCase |
If true, field names are assumed to use snake_case, and are converted to camelCase. |
oracleTempSchema |
DEPRECATED: use tempEmulationSchema instead. |
tempEmulationSchema |
Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created. |
integerAsNumeric |
Logical: should 32-bit integers be converted to numeric (double) values? If FALSE, 32-bit integers will be represented using R's native integer class. |
integer64AsNumeric |
Logical: should 64-bit integers be converted to numeric (double) values? If FALSE, 64-bit integers will be represented using bit64::integer64. |
... |
Parameters that will be used to render the SQL. |
This function calls the render and translate functions in the SqlRender package before calling querySql().
Invisibly returns a list of outputs from each call to the provided function.
## Not run: connectionDetails <- createConnectionDetails( dbms = "postgresql", server = "localhost", user = "root", password = "blah", schema = "cdm_v4" ) connection <- connect(connectionDetails) # First example: write data to a large CSV file: filepath <- "myBigFile.csv" writeBatchesToCsv <- function(data, position, ...) { write.csv(data, filepath, append = position != 1) return(NULL) } renderTranslateQueryApplyBatched(connection, "SELECT * FROM @schema.person;", schema = "cdm_synpuf", fun = writeBatchesToCsv ) # Second example: write data to Andromeda # (Alternative to querySqlToAndromeda if some local computation needs to be applied) bigResults <- Andromeda::andromeda() writeBatchesToAndromeda <- function(data, position, ...) { data$p <- EmpiricalCalibration::computeTraditionalP(data$logRr, data$logSeRr) if (position == 1) { bigResults$rrs <- data } else { Andromeda::appendToTable(bigResults$rrs, data) } return(NULL) } sql <- "SELECT target_id, comparator_id, log_rr, log_se_rr FROM @schema.my_results;" renderTranslateQueryApplyBatched(connection, sql, fun = writeBatchesToAndromeda, schema = "my_results", snakeCaseToCamelCase = TRUE ) disconnect(connection) ## End(Not run)
This function renders and translates SQL, sends it to the server, and returns the results as a data frame.
renderTranslateQuerySql( connection, sql, errorReportFile = file.path(getwd(), "errorReportSql.txt"), snakeCaseToCamelCase = FALSE, oracleTempSchema = NULL, tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"), integerAsNumeric = getOption("databaseConnectorIntegerAsNumeric", default = TRUE), integer64AsNumeric = getOption("databaseConnectorInteger64AsNumeric", default = TRUE), ... )
connection |
The connection to the database server, created using either connect() or dbConnect(). |
sql |
The SQL to be sent. |
errorReportFile |
The file where an error report will be written if an error occurs. Defaults to 'errorReportSql.txt' in the current working directory. |
snakeCaseToCamelCase |
If true, field names are assumed to use snake_case, and are converted to camelCase. |
oracleTempSchema |
DEPRECATED: use tempEmulationSchema instead. |
tempEmulationSchema |
Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created. |
integerAsNumeric |
Logical: should 32-bit integers be converted to numeric (double) values? If FALSE, 32-bit integers will be represented using R's native integer class. |
integer64AsNumeric |
Logical: should 64-bit integers be converted to numeric (double) values? If FALSE, 64-bit integers will be represented using bit64::integer64. |
... |
Parameters that will be used to render the SQL. |
This function calls the render and translate functions in the SqlRender package before calling querySql().
A data frame.
## Not run: connectionDetails <- createConnectionDetails( dbms = "postgresql", server = "localhost", user = "root", password = "blah", schema = "cdm_v4" ) conn <- connect(connectionDetails) persons <- renderTranslateQuerySql(conn, sql = "SELECT TOP 10 * FROM @schema.person", schema = "cdm_synpuf" ) disconnect(conn) ## End(Not run)
This function renders and translates SQL, sends it to the server, and stores the results in a local Andromeda object.
renderTranslateQuerySqlToAndromeda( connection, sql, andromeda, andromedaTableName, errorReportFile = file.path(getwd(), "errorReportSql.txt"), snakeCaseToCamelCase = FALSE, appendToTable = FALSE, oracleTempSchema = NULL, tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"), integerAsNumeric = getOption("databaseConnectorIntegerAsNumeric", default = TRUE), integer64AsNumeric = getOption("databaseConnectorInteger64AsNumeric", default = TRUE), ... )
connection |
The connection to the database server, created using either connect() or dbConnect(). |
sql |
The SQL to be sent. |
andromeda |
An open Andromeda object, for example as created using Andromeda::andromeda(). |
andromedaTableName |
The name of the table in the local Andromeda object where the results of the query will be stored. |
errorReportFile |
The file where an error report will be written if an error occurs. Defaults to 'errorReportSql.txt' in the current working directory. |
snakeCaseToCamelCase |
If true, field names are assumed to use snake_case, and are converted to camelCase. |
appendToTable |
If FALSE, any existing table in the Andromeda with the same name will be replaced with the new data. If TRUE, data will be appended to an existing table, assuming it has the exact same structure. |
oracleTempSchema |
DEPRECATED: use tempEmulationSchema instead. |
tempEmulationSchema |
Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created. |
integerAsNumeric |
Logical: should 32-bit integers be converted to numeric (double) values? If FALSE, 32-bit integers will be represented using R's native integer class. |
integer64AsNumeric |
Logical: should 64-bit integers be converted to numeric (double) values? If FALSE, 64-bit integers will be represented using bit64::integer64. |
... |
Parameters that will be used to render the SQL. |
This function calls the render and translate functions in the SqlRender package before calling querySqlToAndromeda().
Invisibly returns the andromeda. The Andromeda object will have a table added with the query results.
## Not run: connectionDetails <- createConnectionDetails( dbms = "postgresql", server = "localhost", user = "root", password = "blah", schema = "cdm_v4" ) conn <- connect(connectionDetails) andromeda <- Andromeda::andromeda() renderTranslateQuerySqlToAndromeda(conn, sql = "SELECT * FROM @schema.person", schema = "cdm_synpuf", andromeda = andromeda, andromedaTableName = "foo" ) disconnect(conn) andromeda$foo ## End(Not run)
Does the DBMS require temp table emulation?
requiresTempEmulation(dbms)
dbms |
The type of DBMS running on the server. |
TRUE if the DBMS requires temp table emulation, FALSE otherwise.
requiresTempEmulation("postgresql") requiresTempEmulation("oracle")
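A sketch combining requiresTempEmulation() with assertTempEmulationSchemaSet() (described at the top of this reference); the platform string and schema name are placeholders.

```r
# Guard code that uses temp tables: set the emulation schema only when the
# platform needs it, then assert it is set before running temp-table SQL.
library(DatabaseConnector)
dbms <- "oracle"  # placeholder platform
if (requiresTempEmulation(dbms)) {
  options(sqlRenderTempEmulationSchema = "scratch_schema")  # placeholder schema
}
assertTempEmulationSchemaSet(dbms)  # throws an informative error if unset
```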
This function is provided primarily to be used together with dbplyr when querying a database. It will also work in dplyr against data frames.
year(date)
date |
The date. |
The year
year(as.Date("2000-02-01"))
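As the descriptions note, year() and month() are aimed at dbplyr pipelines but also work in dplyr against plain data frames. A small local sketch; the data frame and column names are illustrative:

```r
# Apply the year()/month() helpers inside a dplyr pipeline on a data frame;
# against a database tbl, dbplyr would translate them to the equivalent SQL.
library(DatabaseConnector)
library(dplyr)
observations <- data.frame(
  observation_date = as.Date(c("2000-02-01", "1995-07-15"))
)
observations %>%
  mutate(
    obsYear = year(observation_date),
    obsMonth = month(observation_date)
  )
```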