Last time we saw how to connect to Redshift from Spark running in EMR. That solution worked well, but it only allowed reading data. Sometimes we want to run arbitrary DDL or DML queries, not just simple read statements.
To do that, we need to connect to Redshift directly over JDBC. I assume you have configured your cluster the same way as in the previous part. Now use this code:
```scala
def executeNonQuery(query: String): Int = {
  import java.sql.DriverManager
  import java.util.Properties

  // Empty properties are fine here; the credentials live in the connection string
  val connectionProps = new Properties()
  val connection = DriverManager.getConnection(getConnectionString(), connectionProps)
  try {
    val statement = connection.createStatement()
    statement.executeUpdate(query) // returns the number of affected rows (0 for DDL)
  } finally {
    connection.close()
  }
}
```
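The snippet assumes the getConnectionString() helper from the previous part is in scope. If you need a stand-in, a minimal sketch might look like the one below; the endpoint, port, database, and credentials are placeholders for illustration, not values from the original setup:

```scala
// Hypothetical stand-in for the helper from the previous part.
// Replace the endpoint, port, database, and credentials with your cluster's values.
def getConnectionString(): String =
  "jdbc:redshift://my-cluster.abc123xyz.eu-west-1.redshift.amazonaws.com:5439/mydb" +
    "?user=admin&password=secret"
```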
We first import the JDBC classes. Next, we create a new Properties object for the connection, which can stay empty because the credentials are embedded in the connection string. Then we open the connection through the JDBC DriverManager, create a statement, and execute it. Keep in mind that executeUpdate returns an affected-row count, so it is intended for DDL and DML statements, not for SELECT queries.
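If you do need to fetch rows over the same kind of direct connection, JDBC's executeQuery returns a ResultSet instead. A minimal sketch, reusing the assumed getConnectionString() helper and reading just the first column, could look like this:

```scala
// Minimal read-side counterpart: runs a SELECT and collects the first column as strings.
def executeQuery(query: String): List[String] = {
  import java.sql.DriverManager
  import java.util.Properties

  val connection = DriverManager.getConnection(getConnectionString(), new Properties())
  try {
    val resultSet = connection.createStatement().executeQuery(query)
    val rows = scala.collection.mutable.ListBuffer.empty[String]
    while (resultSet.next()) {
      rows += resultSet.getString(1) // column indexes are 1-based in JDBC
    }
    rows.toList
  } finally {
    connection.close()
  }
}
```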
We can use it like this:
```scala
%spark
executeNonQuery(s"""DROP TABLE IF EXISTS table""")
```
The query can be pretty much anything you can run in SQL Workbench. It works with temporary tables as well (unlike prepareStatement).
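For example, a hypothetical maintenance session (the table name and columns are made up for illustration) could chain several statements:

```scala
%spark
// Any DDL or DML that SQL Workbench accepts should work here as well
executeNonQuery("CREATE TABLE IF NOT EXISTS events (id INT, payload VARCHAR(256))")
executeNonQuery("INSERT INTO events VALUES (1, 'hello')")
executeNonQuery("DELETE FROM events WHERE id = 1")
```

One caveat: as written, each call opens and closes its own connection, so session-scoped objects like a temporary table only live for the duration of a single call. To run several statements against one temporary table, keep them in the same session, for example by refactoring the helper to accept an already-open connection.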