How to work with Microsoft Dataverse Data in Apache Spark using SQL



Access and process Microsoft Dataverse Data in Apache Spark using the CData JDBC Driver.

Apache Spark is a fast and general engine for large-scale data processing. When paired with the CData JDBC Driver for Microsoft Dataverse, Spark can work with live Microsoft Dataverse data. This article describes how to connect to and query Microsoft Dataverse data from a Spark shell.

The CData JDBC Driver offers unmatched performance for interacting with live Microsoft Dataverse data due to optimized data processing built into the driver. When you issue complex SQL queries to Microsoft Dataverse, the driver pushes supported SQL operations, like filters and aggregations, directly to Microsoft Dataverse and utilizes the embedded SQL engine to process unsupported operations (often SQL functions and JOIN operations) client-side. With built-in dynamic metadata querying, you can work with and analyze Microsoft Dataverse data using native data types.
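
As a concrete illustration of that push-down behavior, the short Scala sketch below queries Dataverse directly over JDBC (outside of Spark) with a filtered SELECT; the WHERE clause is a supported operation, so the driver hands it to Microsoft Dataverse rather than filtering client-side. This is a minimal sketch: the OrganizationUrl and the 'MyAccount' value are placeholders, and the table and column names follow the Accounts example used later in this article.

    import java.sql.DriverManager

    // A minimal sketch (not production code): query Dataverse directly over JDBC.
    object DataverseFilterExample extends App {
      // Register the driver class explicitly (JDBC 4.0+ can also discover it from the JAR).
      Class.forName("cdata.jdbc.cds.CDSDriver")

      // Minimum connection properties covered later in this article.
      val url = "jdbc:cds:InitiateOAuth=GETANDREFRESH;OrganizationUrl=https://myorganization.crm.dynamics.com/"

      val conn = DriverManager.getConnection(url)
      try {
        val stmt = conn.createStatement()
        // The WHERE filter is a supported operation, so the driver pushes it to Dataverse.
        val rs = stmt.executeQuery("SELECT AccountId, Name FROM Accounts WHERE Name = 'MyAccount'")
        while (rs.next()) {
          println(rs.getString(1) + "\t" + rs.getString(2)) // AccountId, Name
        }
      } finally {
        conn.close()
      }
    }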

About Microsoft Dataverse Data Integration

CData provides the easiest way to access and integrate live data from Microsoft Dataverse (formerly the Common Data Service). Customers use CData connectivity to:

  • Access both Dataverse Entities and Dataverse system tables to work with exactly the data they need.
  • Authenticate securely with Microsoft Dataverse in a variety of ways, including Azure Active Directory, Azure Managed Service Identity credentials, and Azure Service Principal using either a client secret or a certificate.
  • Use SQL stored procedures to manage Microsoft Dataverse entities - listing, creating, and removing associations between entities.

CData customers use our Dataverse connectivity solutions for a variety of reasons, whether they're looking to replicate their data into a data warehouse (alongside other data sources) or to analyze live Dataverse data from their preferred data tools inside the Microsoft ecosystem (Power BI, Excel, etc.) or with external tools (Tableau, Looker, etc.).


Getting Started


Install the CData JDBC Driver for Microsoft Dataverse

Download the CData JDBC Driver for Microsoft Dataverse installer, unzip the package, and run the JAR file to install the driver.

Start a Spark Shell and Connect to Microsoft Dataverse Data

  1. Open a terminal and start the Spark shell with the CData JDBC Driver for Microsoft Dataverse JAR file passed via the --jars parameter (quote the path, since it contains spaces):

    $ spark-shell --jars "/CData/CData JDBC Driver for Microsoft Dataverse/lib/cdata.jdbc.cds.jar"
  2. With the shell running, you can connect to Microsoft Dataverse with a JDBC URL and use the SQL Context load() function to read a table.

    You can connect without supplying your user credentials as connection properties; the OAuth flow handles the login in your browser. Below are the minimum connection properties required to connect.

    • InitiateOAuth: Set this to GETANDREFRESH. You can use InitiateOAuth to avoid repeating the OAuth exchange and manually setting the OAuthAccessToken.
    • OrganizationUrl: Set this to the organization URL you are connecting to, such as https://myorganization.crm.dynamics.com.
    • Tenant (optional): Set this if you wish to authenticate to a different tenant than your default. This is required to work with an organization not on your default Tenant.

    When you connect, the Microsoft Dataverse (Common Data Service) OAuth endpoint opens in your default browser. Log in and grant permissions, and the OAuth process completes automatically.

    Built-in Connection String Designer

    For assistance in constructing the JDBC URL, use the connection string designer built into the Microsoft Dataverse JDBC Driver. Either double-click the JAR file or execute it from the command line:

    java -jar cdata.jdbc.cds.jar

    Fill in the connection properties and copy the connection string to the clipboard.

    Configure the connection to Microsoft Dataverse, using the connection string generated above.

    scala> val cds_df = spark.sqlContext.read.format("jdbc").option("url", "jdbc:cds:InitiateOAuth=GETANDREFRESH;OrganizationUrl=https://myaccount.crm.dynamics.com/").option("dbtable","Accounts").option("driver","cdata.jdbc.cds.CDSDriver").load()
  3. Once you connect and the data is loaded, you will see the table schema displayed.
  4. Register the Microsoft Dataverse data as a temporary table:

    scala> cds_df.createOrReplaceTempView("accounts")
  5. Perform custom SQL queries against the data using commands like the one below:

    scala> cds_df.sqlContext.sql("SELECT AccountId, Name FROM accounts WHERE Name = 'MyAccount'").collect.foreach(println)

    You will see the results displayed in the console. From here, the registered view can be used for further processing in Spark, as sketched below.
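
With the view registered, you can bring Spark's own processing to bear on the live data. The hedged sketch below, continuing in the same shell session, filters the accounts view, counts the matches, and writes a snapshot to Parquet. The LIKE pattern and the /tmp output path are arbitrary placeholders for this example; depending on your Spark version, Spark may in turn push the filter through the JDBC source to Dataverse.

    scala> val myAccounts = spark.sql("SELECT AccountId, Name FROM accounts WHERE Name LIKE 'My%'")
    scala> myAccounts.count()
    scala> // Persist a local snapshot of the results for downstream Spark jobs
    scala> myAccounts.write.mode("overwrite").parquet("/tmp/dataverse_accounts_snapshot")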

Using the CData JDBC Driver for Microsoft Dataverse in Apache Spark, you can perform fast and complex analytics on Microsoft Dataverse data, combining the power and utility of Spark with your data. Download a free, 30-day trial of any of the 200+ CData JDBC Drivers and get started today.
