To write a PySpark DataFrame to a table in a SQL database using JDBC, we need a few things.
First, we have to make the JDBC driver available on the driver node and the worker nodes. We can do that with the --jars
option when submitting the PySpark job:
spark-submit --deploy-mode cluster \
--jars s3://some_bucket/jdbc_driver.jar \
s3://some_bucket/pyspark_job.py
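If the cluster can reach a Maven repository, an alternative is the --packages option, which downloads the driver by its Maven coordinates instead of pointing at a jar in S3 (the PostgreSQL driver version below is only an example):
spark-submit --deploy-mode cluster \
--packages org.postgresql:postgresql:42.7.3 \
s3://some_bucket/pyspark_job.py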
After that, we have to prepare the JDBC connection URL. The URL consists of three parts: the database engine name (the JDBC subprotocol, for example postgresql), the host with an optional port, and the database (schema) name. If I want to connect to Postgres running on the local machine, the URL should look like this:
url = "jdbc:postgresql://localhost/database_name"
In addition to that, I have to prepare a dictionary of properties, which contains the username and password used to connect to the database:
properties = {
    "user": "the_username",
    "password": "the_password"
}
Please do not store the credentials in the code. It is better to use AWS Secrets Manager (if you run your code on EMR) or any other method of passing the credentials securely from an external source.
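For example, on EMR the credentials could be fetched with boto3 at runtime. This is only a sketch; the secret name and the JSON structure of the secret are assumptions:

import json
import boto3

# Hypothetical secret name; the secret is assumed to store a JSON document
# with "username" and "password" keys.
secret = boto3.client("secretsmanager").get_secret_value(
    SecretId="prod/postgres/credentials"
)
credentials = json.loads(secret["SecretString"])

properties = {
    "user": credentials["username"],
    "password": credentials["password"]
}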
I also have to decide how Spark should behave when the table already contains some data. Let’s assume that I want to overwrite the existing data with the content of the DataFrame df. In this case, I have to set the write mode to ‘overwrite’.
The last piece of information I need is the name of the table that will be populated with the DataFrame. When I have all of the required information, I can call the write.jdbc
function:
df.write.jdbc(url=url, table="the_table_name", mode='overwrite', properties=properties)
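The same write can also be expressed with the generic DataFrameWriter API, which may be more convenient when extra JDBC options (such as batchsize or an explicit driver class) are needed; the option values below are illustrative:

df.write \
    .format("jdbc") \
    .option("url", url) \
    .option("dbtable", "the_table_name") \
    .option("user", properties["user"]) \
    .option("password", properties["password"]) \
    .option("driver", "org.postgresql.Driver") \
    .mode("overwrite") \
    .save()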