How to work with Google Cloud Storage Data in Apache Spark using SQL
Access and process Google Cloud Storage Data in Apache Spark using the CData JDBC Driver.
Apache Spark is a fast and general engine for large-scale data processing. When paired with the CData JDBC Driver for Google Cloud Storage, Spark can work with live Google Cloud Storage data. This article describes how to connect to and query Google Cloud Storage data from a Spark shell.
The CData JDBC Driver offers unmatched performance for interacting with live Google Cloud Storage data due to optimized data processing built into the driver. When you issue complex SQL queries to Google Cloud Storage, the driver pushes supported SQL operations, like filters and aggregations, directly to Google Cloud Storage and utilizes the embedded SQL engine to process unsupported operations (often SQL functions and JOIN operations) client-side. With built-in dynamic metadata querying, you can work with and analyze Google Cloud Storage data using native data types.
Install the CData JDBC Driver for Google Cloud Storage
Download the CData JDBC Driver for Google Cloud Storage installer, unzip the package, and run the JAR file to install the driver.
Start a Spark Shell and Connect to Google Cloud Storage Data
- Open a terminal and start the Spark shell with the CData JDBC Driver for Google Cloud Storage JAR file as the jars parameter:
$ spark-shell --jars /CData/CData JDBC Driver for Google Cloud Storage/lib/cdata.jdbc.googlecloudstorage.jar
- With the shell running, you can connect to Google Cloud Storage with a JDBC URL and use the SQL Context load() function to read a table.
Authenticate with a User Account
You can connect without setting any connection properties for your user credentials. Simply set InitiateOAuth to GETANDREFRESH and you are ready to connect.
When you connect, the Google Cloud Storage OAuth endpoint opens in your default browser. Log in and grant permissions, and the OAuth process completes.
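For user-account authentication, the JDBC URL then needs only the project and the OAuth setting. A minimal example (the ProjectId value is a placeholder; substitute your own project):

```
jdbc:googlecloudstorage:ProjectId='project1';InitiateOAuth=GETANDREFRESH;
```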
Authenticate with a Service Account
Service accounts authenticate silently, without prompting a user in the browser. You can also use a service account to delegate enterprise-wide access scopes.
This flow requires you to create an OAuth application. See the Help documentation for more information. After setting the following connection properties, you are ready to connect:
- InitiateOAuth: Set this to GETANDREFRESH.
- OAuthJWTCertType: Set this to "PFXFILE".
- OAuthJWTCert: Set this to the path to the .p12 file you generated.
- OAuthJWTCertPassword: Set this to the password of the .p12 file.
- OAuthJWTCertSubject: Set this to "*" to pick the first certificate in the certificate store.
- OAuthJWTIssuer: In the service accounts section, click Manage Service Accounts and set this field to the email address displayed in the service account Id field.
- OAuthJWTSubject: Set this to your enterprise Id if your subject type is set to "enterprise" or your app user Id if your subject type is set to "user".
- ProjectId: Set this to the Id of the project you want to connect to.
The OAuth flow for a service account then completes.
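Assembled into a single JDBC URL, the service-account properties above might look like the following sketch (every value is a placeholder to be replaced with your own; conditional properties such as OAuthJWTSubject are appended the same way):

```
jdbc:googlecloudstorage:ProjectId='project1';InitiateOAuth=GETANDREFRESH;OAuthJWTCertType='PFXFILE';OAuthJWTCert='/path/to/key.p12';OAuthJWTCertPassword='p12password';OAuthJWTCertSubject='*';OAuthJWTIssuer='my-account@my-project.iam.gserviceaccount.com';
```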
Built-in Connection String Designer
For assistance in constructing the JDBC URL, use the connection string designer built into the Google Cloud Storage JDBC Driver. Either double-click the JAR file or execute it from the command line:
java -jar cdata.jdbc.googlecloudstorage.jar
Fill in the connection properties and copy the connection string to the clipboard.
Configure the connection to Google Cloud Storage, using the connection string generated above.
scala> val googlecloudstorage_df = spark.sqlContext.read.format("jdbc").option("url", "jdbc:googlecloudstorage:ProjectId='project1';").option("dbtable","Buckets").option("driver","cdata.jdbc.googlecloudstorage.GoogleCloudStorageDriver").load()
- Once you connect and the data is loaded, you will see the table schema displayed.
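To inspect the schema yourself from the shell, you can print it explicitly (this assumes the googlecloudstorage_df DataFrame created in the step above):

```
scala> googlecloudstorage_df.printSchema()
```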
Register the Google Cloud Storage data as a temporary table:
scala> googlecloudstorage_df.createOrReplaceTempView("buckets")
- Perform custom SQL queries against the data using commands like the one below:
scala> googlecloudstorage_df.sqlContext.sql("SELECT Name, OwnerId FROM Buckets WHERE Name = 'TestBucket'").collect.foreach(println)
You will see the results displayed in the console.
Using the CData JDBC Driver for Google Cloud Storage in Apache Spark, you can perform fast and complex analytics on Google Cloud Storage data, combining the power and utility of Spark with your data. Download a free, 30-day trial of any of the 200+ CData JDBC Drivers and get started today.