When we read multiple Parquet files using Apache Spark, we may end up with a problem caused by schema differences. When Spark gets a list of files to read, it picks the schema from either the Parquet summary file or a randomly chosen input file:
spark.read.parquet(
  List(
    "file_a",
    "file_b",
    "file_c"): _*
)
Most likely, you don’t have a Parquet summary file, because generating one is not a popular solution. In that case, Spark applies the schema of a randomly chosen file to every file in the list.
This is an annoying problem: if some files contain additional columns, we may end up with a dataset that silently drops them, because Spark read the schema from a file that does not have those columns.
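To see the behavior, here is a minimal sketch. The /tmp/demo paths, the column names, and the local SparkSession are hypothetical and used only for illustration:

import org.apache.spark.sql.SparkSession

// Hypothetical local session and paths, for illustration only.
val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// file_a has only an "id" column; file_b has an extra "name" column.
Seq(1, 2).toDF("id").write.mode("overwrite").parquet("/tmp/demo/file_a")
Seq((3, "alice")).toDF("id", "name").write.mode("overwrite").parquet("/tmp/demo/file_b")

// Without mergeSchema, Spark infers the schema from a single file.
// If it happens to pick file_a, the "name" column disappears from the result.
spark.read.parquet("/tmp/demo/file_a", "/tmp/demo/file_b").printSchema()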
How to merge Parquet schemas in Apache Spark?
To solve the issue, we must instruct Apache Spark to merge the schemas of all given files into one common schema. We can do that using the mergeSchema configuration parameter:
spark.read.option("mergeSchema", "true").parquet(...)
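Reusing the hypothetical paths from the sketch above, the merged read returns the union of both schemas, and rows coming from files that lack a column get null in that column:

// With mergeSchema, the schema is the union of all file schemas.
val merged = spark.read
  .option("mergeSchema", "true")
  .parquet("/tmp/demo/file_a", "/tmp/demo/file_b")

merged.printSchema()
// root
//  |-- id: integer (nullable = true)
//  |-- name: string (nullable = true)

The same behavior can also be enabled for every Parquet read in the session with the spark.sql.parquet.mergeSchema configuration. Keep in mind that schema merging has to inspect the footers of all files, so it makes the read more expensive, which is why it is disabled by default.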