

Connect to Spark as an External Data Source using PolyBase



Use the CData ODBC Driver for Apache Spark and PolyBase to create an external data source in SQL Server 2019 with access to live Spark data.

PolyBase for SQL Server allows you to query external data by using the same Transact-SQL syntax used to query a database table. When paired with the CData ODBC Driver for Apache Spark, you get access to your Spark data directly alongside your SQL Server data. This article describes creating an external data source and external tables to grant access to live Spark data using T-SQL queries.

NOTE: PolyBase connectivity to generic ODBC sources such as this one is only available on SQL Server 2019 and above, and only in standalone SQL Server (it is not available in Azure SQL Database).

The CData ODBC drivers offer unmatched performance for interacting with live Spark data using PolyBase due to optimized data processing built into the driver. When you issue complex SQL queries from SQL Server to Spark, the driver pushes down supported SQL operations, like filters and aggregations, directly to Spark and utilizes the embedded SQL engine to process unsupported operations (often SQL functions and JOIN operations) client-side. And with PolyBase, you can also join SQL Server data with Spark data, using a single query to pull data from distributed sources.

Connect to Spark

If you have not already, first specify connection properties in an ODBC DSN (data source name). This is the last step of the driver installation. You can use the Microsoft ODBC Data Source Administrator to create and configure ODBC DSNs. To create an external data source in SQL Server using PolyBase, you will need a System DSN (the installer automatically creates one named CData Spark Sys).

Set the Server, Database, User, and Password connection properties to connect to SparkSQL.
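
For reference, the DSN values for a Spark instance running locally might look like the following (all values below are placeholders; substitute the details of your own environment):

Server=127.0.0.1
Database=default
User=admin
Password=admin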

Click "Test Connection" to ensure that the DSN is connected to Spark properly. Navigate to the Tables tab to review the table definitions for Spark.

Create an External Data Source for Spark Data

After configuring the connection, you need to create a master encryption key and a database scoped credential for the external data source.

Creating a Master Encryption Key

Execute the following SQL command to create a new database master key that will be used to encrypt the credentials for the external data source.

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password';
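
A database can hold only one master key, so this command fails if a key already exists. If you are unsure, you can check the sys.symmetric_keys catalog view first; the database master key appears under the name ##MS_DatabaseMasterKey##:

-- Returns a row if this database already has a master key
SELECT name
FROM sys.symmetric_keys
WHERE name = '##MS_DatabaseMasterKey##';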

Creating a Database Scoped Credential

Execute the following SQL command to create a database scoped credential for the external data source connected to Spark data.

NOTE: If your Spark instance does not require a User or Password to authenticate, you may use whatever values you wish for IDENTITY and SECRET.


CREATE DATABASE SCOPED CREDENTIAL sparksql_creds
WITH IDENTITY = 'username', SECRET = 'password';
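
To verify that the credential was created, query the sys.database_scoped_credentials catalog view:

-- Lists database scoped credentials and the identities they map to
SELECT name, credential_identity
FROM sys.database_scoped_credentials;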

Create an External Data Source for Spark

Execute a CREATE EXTERNAL DATA SOURCE SQL command to create an external data source for Spark with PolyBase:

  • Set the LOCATION parameter, using the DSN and credentials configured earlier.

PUSHDOWN is set to ON by default, meaning the ODBC Driver can leverage server-side processing for complex queries.


CREATE EXTERNAL DATA SOURCE cdata_sparksql_source
WITH ( 
  LOCATION = 'odbc://SERVER_URL',
  CONNECTION_OPTIONS = 'DSN=CData Spark Sys',
  -- PUSHDOWN = ON | OFF,
  CREDENTIAL = sparksql_creds
);
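
To confirm that the external data source was registered with the options you expect, query the sys.external_data_sources catalog view:

-- Shows registered external data sources, including their ODBC locations
SELECT name, location, connection_options
FROM sys.external_data_sources;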

Create External Tables for Spark

After creating the external data source, use CREATE EXTERNAL TABLE statements to link to Spark data from your SQL Server instance. The table column definitions must match those exposed by the CData ODBC Driver for Apache Spark. You can refer to the Tables tab of the DSN Configuration Wizard to see the table definition.

Sample CREATE TABLE Statement

The statement to create an external table based on a Spark Customers table would look similar to the following:

CREATE EXTERNAL TABLE Customers(
  City [nvarchar](255) NULL,
  Balance [nvarchar](255) NULL,
  ...
) WITH ( 
  LOCATION='Customers',
  DATA_SOURCE=cdata_sparksql_source
);
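
Once the external table exists, standard T-SQL against it is forwarded to Spark through the driver. For example (the filter value below is purely illustrative), the WHERE clause in this query is eligible for pushdown to Spark:

SELECT City, Balance
FROM Customers
WHERE City = 'New York';

And as a sketch of the cross-source joins described earlier, the following query joins a hypothetical local table, dbo.LocalOrders (assumed here only for illustration), to the external Spark table in a single statement:

-- dbo.LocalOrders is a hypothetical local SQL Server table;
-- Customers is the external table backed by live Spark data
SELECT o.OrderID, c.City, c.Balance
FROM dbo.LocalOrders o
JOIN Customers c ON c.City = o.CustomerCity;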

Having created external tables for Spark in your SQL Server instance, you can now query local and remote data simultaneously. Thanks to the built-in query processing in the CData ODBC Driver, as much of the query as possible is pushed down to Spark, freeing up local resources and computing power. Download a free, 30-day trial of the ODBC Driver for Apache Spark and start working with live Spark data alongside your SQL Server data today.