How to Query Live Databricks Data in Natural Language in Python using LlamaIndex
Use LlamaIndex to query live Databricks data in natural language using Python.
Start querying live data from Databricks using the CData Python Connector for Databricks. Leverage the power of AI with LlamaIndex and retrieve insights using simple English, eliminating the need for complex SQL queries. Benefit from real-time data access that enhances your decision-making process, while easily integrating with your existing Python applications.
With built-in, optimized data processing, the CData Python Connector offers unmatched performance for interacting with live Databricks data in Python. When you issue complex SQL queries from Python, the driver pushes supported SQL operations, like filters and aggregations, directly to Databricks and utilizes the embedded SQL engine to process unsupported operations client-side (often SQL functions and JOIN operations).
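As an illustration, the short sketch below (with a hypothetical Customers table and placeholder connection values) issues a filtered aggregation through the connector; a query like this can be pushed down to Databricks for processing, while operations the source cannot evaluate are handled by the connector's embedded SQL engine.
# Minimal sketch: run a SQL query whose filter and aggregation can be pushed
# down to Databricks. The "Customers" table and connection values are
# placeholders; substitute your own (see "Create a Database Connection" below).
import cdata.databricks as mod
from sqlalchemy import create_engine, text
engine = create_engine("cdata_databricks_2:///?Server=MyServerHostname;HTTPPath=MyHTTPPath;Token=MyAccessToken;")
with engine.connect() as conn:
    rows = conn.execute(text(
        "SELECT Country, COUNT(*) AS CustomerCount "
        "FROM Customers WHERE Country = 'US' GROUP BY Country"
    ))
    for row in rows:
        print(row)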
Whether you're analyzing trends, generating reports, or visualizing data, our Python connectors enable you to harness the full potential of your live data source with ease.
About Databricks Data Integration
Accessing and integrating live data from Databricks has never been easier with CData. Customers rely on CData connectivity to:
- Access all versions of Databricks from Runtime Versions 9.1 - 13.X to both the Pro and Classic Databricks SQL versions.
- Leave Databricks in their preferred environment thanks to compatibility with any hosting solution.
- Securely authenticate in a variety of ways, including personal access token, Azure Service Principal, and Azure AD.
- Upload data to Databricks using Databricks File System, Azure Blob Storage, and AWS S3 Storage.
While many customers use CData's solutions to migrate data from different systems into their Databricks data lakehouse, several customers use our live connectivity solutions to federate connectivity between their databases and Databricks. These customers use SQL Server Linked Servers or PolyBase to get live access to Databricks from within their existing RDBMS.
Read more about common Databricks use-cases and how CData's solutions help solve data problems in our blog: What is Databricks Used For? 6 Use Cases.
Getting Started
Overview
Here's how to query live data with CData's Python connector for Databricks data using LlamaIndex:
- Import required Python, CData, and LlamaIndex modules for logging, database connectivity, and NLP.
- Retrieve your OpenAI API key for authenticating API requests from your application.
- Connect to live Databricks data using the CData Python Connector.
- Initialize OpenAI and create instances of SQLDatabase and NLSQLTableQueryEngine for handling natural language queries.
- Create the query engine and specific database instance.
- Execute natural language queries (e.g., "Who are the top-earning employees?") to get structured responses from the database.
- Analyze retrieved data to gain insights and inform data-driven decisions.
Import Required Modules
Import the necessary modules for CData, database connections, and natural language querying.
import os
import logging
import sys
# Configure logging
logging.basicConfig(stream=sys.stdout, level=logging.INFO, force=True)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
# Import required modules for CData and LlamaIndex
import cdata.databricks as mod
from sqlalchemy import create_engine
from llama_index.core.query_engine import NLSQLTableQueryEngine
from llama_index.core import SQLDatabase
from llama_index.llms.openai import OpenAI
Set Your OpenAI API Key
To use OpenAI's language model, you need to set your API key as an environment variable. Make sure you have your OpenAI API key available in your system's environment variables.
# Retrieve the OpenAI API key from the environment variables
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
As an alternative, you can add your API key directly within your code (though this method is not recommended for production environments due to security risks):
# Directly set the API key (not recommended for production use)
OPENAI_API_KEY = "your-api-key-here"
Create a Database Connection
Next, establish a connection to Databricks with the CData connector, using a connection string that contains the required connection properties.
To connect to a Databricks cluster, set the properties as described below.
Note: The needed values can be found in your Databricks instance by navigating to Clusters, selecting the desired cluster, and selecting the JDBC/ODBC tab under Advanced Options.
- Server: Set to the Server Hostname of your Databricks cluster.
- HTTPPath: Set to the HTTP Path of your Databricks cluster.
- Token: Set to your personal access token (this value can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).
Connecting to Databricks
# Create a database engine using the CData Python Connector for Databricks
engine = create_engine("cdata_databricks_2:///?User=Server=127.0.0.1;Port=443;TransportMode=HTTP;HTTPPath=MyHTTPPath;UseSSL=True;User=MyUser;Password=MyPassword;")
Initialize the OpenAI Instance
Create an instance of the OpenAI language model. Here, you can specify parameters like temperature and the model version.
# Initialize the OpenAI language model instance
llm = OpenAI(temperature=0.0, model="gpt-3.5-turbo")
Set Up the Database and Query Engine
Now, set up the SQL database and the query engine. The NLSQLTableQueryEngine allows you to perform natural language queries against your SQL database.
# Create a SQL database instance
sql_db = SQLDatabase(engine) # This includes all tables
# Initialize the query engine for natural language SQL queries
query_engine = NLSQLTableQueryEngine(sql_database=sql_db, llm=llm)
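If your workspace exposes many tables, you can optionally narrow what the query engine sees. The sketch below assumes a hypothetical Employees table; the include_tables argument on SQLDatabase and the tables argument on NLSQLTableQueryEngine limit the schema information sent to the language model.
# Optional: restrict querying to a single (hypothetical) "Employees" table
sql_db = SQLDatabase(engine, include_tables=["Employees"])
query_engine = NLSQLTableQueryEngine(sql_database=sql_db, tables=["Employees"], llm=llm)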
Execute a Query
Now, you can execute a natural language query against your live data source. In this example, we will query for the top-earning employees.
# Define your query string
query_str = "Who are the top earning employees?"
# Get the response from the query engine
response = query_engine.query(query_str)
# Print the response
print(response)
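If you want to verify what was actually run against Databricks, recent LlamaIndex versions attach the generated SQL to the response metadata; the sql_query key shown below reflects that behavior and may vary by version.
# Optional: inspect the SQL statement generated from the natural language query
print(response.metadata.get("sql_query"))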
Download a free, 30-day trial of the CData Python Connector for Databricks and start querying your live data seamlessly. Experience the power of natural language processing and unlock valuable insights from your data today.