
How to Visualize Spark Data in Python with pandas



Use pandas and other modules to analyze and visualize live Spark data in Python.

The rich ecosystem of Python modules lets you get to work quickly and integrate your systems more effectively. With the CData Python Connector for Apache Spark, the pandas and Matplotlib modules, and the SQLAlchemy toolkit, you can build Spark-connected Python applications and scripts for visualizing Spark data. This article shows how to use pandas, SQLAlchemy, and Matplotlib to connect to Spark data, execute queries, and visualize the results.

With built-in optimized data processing, the CData Python Connector offers unmatched performance for interacting with live Spark data in Python. When you issue complex SQL queries to Spark, the driver pushes supported SQL operations, like filters and aggregations, directly to Spark and uses its embedded SQL engine to process unsupported operations (often SQL functions and JOINs) client-side.

Connecting to Spark Data

Connecting to Spark data looks just like connecting to any relational data source. Create a connection string using the required connection properties. For this article, you will pass the connection string as a parameter to the create_engine function.

Set the Server, Database, User, and Password connection properties to connect to SparkSQL.
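For example, a connection string carrying all four properties can be assembled like this. The server address and credentials below are placeholders, and joining the properties with "&" as in a standard URL query string is an assumption; consult the connector's documentation for the exact format your version expects.

```python
# Hypothetical values -- substitute your own Spark server details.
props = {
    "Server": "127.0.0.1",
    "Database": "default",
    "User": "admin",
    "Password": "admin",
}

# Assumption: properties are appended as URL query parameters joined by "&".
connection_string = "sparksql:///?" + "&".join(
    f"{key}={value}" for key, value in props.items()
)
print(connection_string)
# sparksql:///?Server=127.0.0.1&Database=default&User=admin&Password=admin
```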

Follow the procedure below to install the required modules and start accessing Spark through Python objects.

Install Required Modules

Use the pip utility to install the pandas & Matplotlib modules and the SQLAlchemy toolkit:

pip install pandas
pip install matplotlib
pip install sqlalchemy

Be sure to import the modules with the following:

import pandas
import matplotlib.pyplot as plt
from sqlalchemy import create_engine

Visualize Spark Data in Python

You can now connect with a connection string. Use the create_engine function to create an Engine for working with Spark data.

engine = create_engine("sparksql:///?Server=127.0.0.1")

Execute SQL to Spark

Use the read_sql function from pandas to execute any SQL statement and store the result set in a DataFrame.

df = pandas.read_sql("SELECT City, Balance FROM Customers WHERE Country = 'US'", engine)
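The same read_sql pattern works with any SQLAlchemy engine. As a self-contained illustration that needs no live Spark endpoint, here is the identical flow against an in-memory SQLite database seeded with a made-up Customers table:

```python
import pandas
from sqlalchemy import create_engine, text

# In-memory SQLite stands in for the Spark engine; the table and rows are illustrative.
engine = create_engine("sqlite:///:memory:")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE Customers (City TEXT, Balance REAL, Country TEXT)"))
    conn.execute(text(
        "INSERT INTO Customers VALUES "
        "('Austin', 100.0, 'US'), ('Boston', 250.0, 'US'), ('Lyon', 75.0, 'FR')"
    ))

# Same call as above: SQL in, DataFrame out.
df = pandas.read_sql("SELECT City, Balance FROM Customers WHERE Country = 'US'", engine)
print(df)
```

Only the connection string changes when you point the engine at Spark; the pandas side of the code stays the same.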

Visualize Spark Data

With the query results stored in a DataFrame, use the DataFrame's plot method to build a chart displaying the Spark data. The plt.show function then displays the chart in a new window.

df.plot(kind="bar", x="City", y="Balance")
plt.show()
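The chart can also be labeled and written to a file instead of shown interactively. A minimal sketch, using made-up sample data in place of the live query results and an off-screen Matplotlib backend:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this and call plt.show() for a window
import matplotlib.pyplot as plt
import pandas

# Sample data standing in for the Spark query results.
df = pandas.DataFrame({"City": ["Austin", "Boston"], "Balance": [100.0, 250.0]})

ax = df.plot(kind="bar", x="City", y="Balance", legend=False)
ax.set_ylabel("Balance")
ax.set_title("Customer Balance by City")
plt.tight_layout()
plt.savefig("spark_balances.png")
```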

Free Trial & More Information

Download a free, 30-day trial of the CData Python Connector for Apache Spark to start building Python apps and scripts with connectivity to Spark data. Reach out to our Support Team if you have any questions.



Full Source Code

import pandas
import matplotlib.pyplot as plt
from sqlalchemy import create_engine

engine = create_engine("sparksql:///?Server=127.0.0.1")
df = pandas.read_sql("SELECT City, Balance FROM Customers WHERE Country = 'US'", engine)

df.plot(kind="bar", x="City", y="Balance")
plt.show()