Speed up counting the distinct elements in a Spark DataFrame

Why does counting unique elements in Spark take so long? Let’s look at the classic example used to demonstrate big data problems: counting words in a book.

When we use Spark to do that, it finds the unique words in every partition, reshuffles the data using the words as the partitioning keys (so all occurrences of a particular word end up on the same node), and merges the partial aggregates to get the final result.
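For comparison, here is a minimal sketch of the exact version of that computation. The words DataFrame and its word column are hypothetical; countDistinct triggers exactly the partial aggregation and shuffle described above.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.countDistinct

// Hypothetical example: an exact distinct count of words.
// countDistinct deduplicates within each partition, shuffles by word,
// and merges the partial results into the final count.
val spark = SparkSession.builder().appName("count-distinct-words").getOrCreate()
import spark.implicits._

val words = Seq("spark", "counts", "words", "spark", "counts").toDF("word")
words.agg(countDistinct("word")).show()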

We immediately see that exchanging partial results between nodes in the cluster will slow down the whole operation. It is also quite memory-intensive because we must keep track of every distinct word in the text.

Suppose we don’t need an exact count, and an approximation is good enough. In that case, we can count the unique values using the approx_count_distinct function (there is also a version that lets you define the maximum estimation error).

When we use that function, Spark counts the distinct elements using the HyperLogLog++ algorithm, a variant of HyperLogLog.

import org.apache.spark.sql.functions.approx_count_distinct

// approximate distinct count with the default estimation error (5% relative standard deviation)
df.agg(approx_count_distinct("column_name"))
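If we want to control the trade-off between accuracy and memory, we can pass the maximum estimation error (relative standard deviation) as the second argument. The 0.01 value below is just an example; the default is 0.05.

// allow at most 1% relative standard deviation in the estimate
df.agg(approx_count_distinct("column_name", 0.01))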