A PostgreSQL Interface for Databricks Data



Use the Remoting features of the Databricks JDBC Driver to create a PostgreSQL entry point for data access.

A vast number of PostgreSQL clients are available on the Internet. From standard drivers to BI and analytics tools, PostgreSQL is a popular interface for data access. Using the remoting features of our JDBC drivers, you can create PostgreSQL entry points that you can connect to from any standard client.

To access Databricks data as a PostgreSQL database, use the Remoting feature of the CData JDBC Driver for Databricks together with the MySQL foreign data wrapper (FDW) from EnterpriseDB. In this article, we install the FDW and query Databricks data from PostgreSQL.

About Databricks Data Integration

Accessing and integrating live data from Databricks has never been easier with CData. Customers rely on CData connectivity to:

  • Access all versions of Databricks, from Runtime versions 9.1 - 13.X to both the Pro and Classic Databricks SQL versions.
  • Leave Databricks in their preferred environment thanks to compatibility with any hosting solution.
  • Securely authenticate in a variety of ways, including personal access token, Azure Service Principal, and Azure AD.
  • Upload data to Databricks using Databricks File System, Azure Blob Storage, and AWS S3 Storage.

While many customers use CData's solutions to migrate data from different systems into their Databricks data lakehouse, several use our live connectivity solutions to federate connectivity between their databases and Databricks. These customers use SQL Server Linked Servers or PolyBase to get live access to Databricks from within their existing RDBMS.

Read more about common Databricks use-cases and how CData's solutions help solve data problems in our blog: What is Databricks Used For? 6 Use Cases.


Getting Started


Configure the Connection to Databricks

Follow the steps below to configure the driver's MySQL daemon with the credentials and other connection properties needed to connect to Databricks. The MySQL daemon exposes Databricks data as a MySQL database whose name is the key you define in the databases section of the daemon's configuration file (databricks in the example below). The configuration file is located in the lib subfolder of the installation directory for the driver.

Below is a typical entry in the databases section, including the connection string:

[databases]
databricks = "Server=127.0.0.1;Port=443;TransportMode=HTTP;HTTPPath=MyHTTPPath;UseSSL=True;User=MyUser;Password=MyPassword;"

Additionally, create a user in the users section.
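
A minimal users entry might look like the following. This is a sketch: the exact syntax is covered in the help documentation, and the admin/test credentials below are assumptions chosen to match the user mapping created later in this article:

[users]
admin = "Password=test;"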

You can find all of the configuration options for the MySQL daemon in the help documentation.

Start the Remoting Service

Follow the steps below to enable the MySQL Remoting feature of the CData JDBC Driver for Databricks.

  1. The driver creates a default configuration suitable for testing: simply start the service to connect to Databricks data.

  2. Start the MySQL Remoting Service with the following command:

java -jar cdata.jdbc.databricks.jar -f cdata.jdbc.databricks.remoting.ini
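
Once the daemon is running, you can optionally verify that it accepts connections from any standard MySQL client. A sketch, assuming the daemon listens on port 3309 (the port used when creating the server object later in this article) and the admin user defined in the users section:

mysql --host=127.0.0.1 --port=3309 --user=admin --password databricks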

Build and Install the MySQL Foreign Data Wrapper

The Foreign Data Wrapper can be installed as an extension to PostgreSQL, without recompiling PostgreSQL.

If the pgxn client is available for your operating system, you can install the FDW with the following command:

pgxn install mysql_fdw USE_PGXS=1

Otherwise, follow the steps below to build it yourself:

  1. Install the MySQL C client library and obtain the source for the EnterpriseDB FDW for MySQL (from GitHub, for example).
  2. Build the FDW, adding the pg_config and mysql_config executables to your PATH (a consolidated sketch follows this list): env PATH=/usr/local/pgsql/bin:/usr/local/mysql/bin:$PATH make USE_PGXS=1
  3. Install the FDW: make USE_PGXS=1 install
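
Put together, a typical build might look like the following shell session. This is a sketch: it assumes PostgreSQL is installed under /usr/local/pgsql, the MySQL client library under /usr/local/mysql, and that you are building the EnterpriseDB sources from GitHub:

# Obtain the FDW source (repository location is an assumption; adjust as needed)
git clone https://github.com/EnterpriseDB/mysql_fdw.git
cd mysql_fdw

# Build and install against the PostgreSQL installation found via pg_config
env PATH=/usr/local/pgsql/bin:/usr/local/mysql/bin:$PATH make USE_PGXS=1
env PATH=/usr/local/pgsql/bin:/usr/local/mysql/bin:$PATH make USE_PGXS=1 install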

To complete the installation, you will need to make the libmysqlclient library available to the PostgreSQL server process; for example, by adding its location to the system library search path.
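
On Linux, for example, you might add the library directory to LD_LIBRARY_PATH before starting PostgreSQL (the path below is an assumption; adjust it to wherever your MySQL client library is installed):

# Make libmysqlclient visible to the PostgreSQL server process
export LD_LIBRARY_PATH=/usr/local/mysql/lib:$LD_LIBRARY_PATH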

Query Databricks Data as a PostgreSQL Database

After you have installed the extension, follow the steps below to start executing queries against Databricks data:

  1. Log into your database.
  2. Load the extension for the database: postgres=# CREATE EXTENSION mysql_fdw;
  3. Create a server object for Databricks data: postgres=# CREATE SERVER Databricks FOREIGN DATA WRAPPER mysql_fdw OPTIONS (host '127.0.0.1', port '3309');
  4. Create a user mapping for the username and password of a user known to the MySQL daemon. postgres=# CREATE USER MAPPING for postgres SERVER Databricks OPTIONS (username 'admin', password 'test');
  5. Create the local schema: postgres=# CREATE SCHEMA Databricks_db;
  6. Import all the tables in the databricks database you defined in the daemon configuration file: postgres=# IMPORT FOREIGN SCHEMA "databricks" FROM SERVER Databricks INTO Databricks_db;
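
To confirm the import, you can list the foreign tables that were created. This uses the standard PostgreSQL information_schema; note that the schema name folds to lowercase because it was created without quotes:

postgres=# SELECT foreign_table_name FROM information_schema.foreign_tables WHERE foreign_table_schema = 'databricks_db';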

You can now execute read/write commands to Databricks:

postgres=# SELECT * FROM Databricks_db."customers";
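
Writes are pushed through the daemon to Databricks as well. The statements below are sketches with hypothetical column names; adjust them to match the columns of your customers table:

postgres=# SELECT * FROM Databricks_db."customers" WHERE "City" = 'Chapel Hill';
postgres=# INSERT INTO Databricks_db."customers" ("City") VALUES ('Chapel Hill');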

Ready to get started?

Download a free trial of the CData JDBC Driver for Databricks:

 Download Now

Learn more:

Databricks JDBC Driver

Rapidly create and deploy powerful Java applications that integrate with Databricks.