How to work with Okta Data in Apache Spark using SQL



Access and process Okta Data in Apache Spark using the CData JDBC Driver.

Apache Spark is a fast and general engine for large-scale data processing. When paired with the CData JDBC Driver for Okta, Spark can work with live Okta data. This article describes how to connect to and query Okta data from a Spark shell.

The CData JDBC Driver offers unmatched performance for interacting with live Okta data due to optimized data processing built into the driver. When you issue complex SQL queries to Okta, the driver pushes supported SQL operations, like filters and aggregations, directly to Okta and utilizes the embedded SQL engine to process unsupported operations (often SQL functions and JOIN operations) client-side. With built-in dynamic metadata querying, you can work with and analyze Okta data using native data types.

Install the CData JDBC Driver for Okta

Download the CData JDBC Driver for Okta installer, unzip the package, and run the JAR file to install the driver.

Start a Spark Shell and Connect to Okta Data

  1. Open a terminal and start the Spark shell with the CData JDBC Driver for Okta JAR file as the jars parameter (quote the path, since it contains spaces): $ spark-shell --jars "/CData/CData JDBC Driver for Okta/lib/cdata.jdbc.okta.jar"
  2. With the shell running, you can connect to Okta with a JDBC URL and use the SQL Context load() function to read a table.

    To connect to Okta, set the Domain connection string property to your Okta domain.

    You will use OAuth to authenticate with Okta, so you need to create a custom OAuth application.

    Creating a Custom OAuth Application

    From your Okta account:

    1. Sign in to your Okta developer edition organization with your administrator account.
    2. In the Admin Console, go to Applications > Applications.
    3. Click Create App Integration.
    4. For the Sign-in method, select OIDC - OpenID Connect.
    5. For Application type, choose Web Application.
    6. Enter a name for your custom application.
    7. Set the Grant Type to Authorization Code. If you want the token to be automatically refreshed, also check Refresh Token.
    8. Set the callback URL:
      • For desktop applications and headless machines, use http://localhost:33333 or another port number of your choice. The URI you set here becomes the CallbackURL property.
      • For web applications, set the callback URL to a trusted redirect URL. This URL is the web location the user returns to with the token that verifies that your application has been granted access.
    9. In the Assignments section, either select Limit access to selected groups and add a group, or skip group assignment for now.
    10. Save the OAuth application.
    11. The application's Client Id and Client Secret are displayed on the application's General tab. Record these for future use. You will use the Client Id to set the OAuthClientId and the Client Secret to set the OAuthClientSecret.
    12. Check the Assignments tab to confirm that all users who must access the application are assigned to the application.
    13. On the Okta API Scopes tab, select the scopes you wish to grant to the OAuth application. These scopes determine the data that the app has permission to read, so a scope for a particular view must be granted for the driver to have permission to query that view. To confirm the scopes required for each view, see the view-specific pages under Data Model > Views in the Help documentation.
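    With the OAuth application created, the connection properties gathered above can be combined into a JDBC URL. The sketch below uses placeholder values and assumes a desktop OAuth flow with InitiateOAuth set to GETANDREFRESH (a common CData driver property for automatic token acquisition and refresh; confirm the exact property names against the driver's Help documentation):

    jdbc:okta:Domain=your-domain.okta.com;OAuthClientId=your_client_id;OAuthClientSecret=your_client_secret;CallbackURL=http://localhost:33333;InitiateOAuth=GETANDREFRESH;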

    Built-in Connection String Designer

    For assistance in constructing the JDBC URL, use the connection string designer built into the Okta JDBC Driver. Either double-click the JAR file or execute it from the command line.

    java -jar cdata.jdbc.okta.jar

    Fill in the connection properties and copy the connection string to the clipboard.

    Configure the connection to Okta, using the connection string generated above.

    scala> val okta_df = spark.sqlContext.read.format("jdbc").option("url", "jdbc:okta:Domain=dev-44876464.okta.com;").option("dbtable","Users").option("driver","cdata.jdbc.okta.OktaDriver").load()
  3. Once you connect and the data is loaded, you will see the table schema displayed.
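    You can also inspect the inferred schema yourself by calling printSchema() on the loaded DataFrame. The columns shown in the comments below are illustrative, not a complete listing of the Users view:

    scala> okta_df.printSchema()
    // root
    //  |-- Id: string (nullable = true)
    //  |-- ProfileFirstName: string (nullable = true)
    //  ... (remaining Users columns)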
  4. Register the Okta data as a temporary table:

    scala> okta_df.createOrReplaceTempView("users")
  5. Perform custom SQL queries against the data using commands like the one below:

    scala> okta_df.sqlContext.sql("SELECT Id, ProfileFirstName FROM Users WHERE Status = 'Active'").collect.foreach(println)
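    The same query can be expressed with the DataFrame API against the okta_df defined above. With the CData driver, a filter like this is a candidate for pushdown to Okta as a WHERE clause rather than being evaluated client-side (a sketch; spark-shell imports the $ column syntax automatically):

    scala> okta_df.select("Id", "ProfileFirstName")
                  .filter($"Status" === "Active")
                  .show()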

    You will see the results displayed in the console.

Using the CData JDBC Driver for Okta in Apache Spark, you can perform fast and complex analytics on Okta data, combining the power and utility of Spark with your data. Download a free, 30-day trial of any of the 200+ CData JDBC Drivers and get started today.


Learn more:

Okta JDBC Driver

Rapidly create and deploy powerful Java applications that integrate with Okta.