---
title: "How to combine two DataFrames with no common columns in Apache Spark"
description: "Use full outer join to combine two Apache Spark DataFrames with no common columns"
author: "Bartosz Mikulski"
author_bio: "Principal AI Engineer & MLOps Architect. I bridge the gap between \"it works in a notebook\" and \"it works for 200 million users.\""
author_url: https://mikulskibartosz.name
author_linkedin: https://www.linkedin.com/in/mikulskibartosz/
author_github: https://github.com/mikulskibartosz
canonical_url: https://mikulskibartosz.name/combine-dataframes-with-no-common-columns
---

In this article, I will show you how to combine two Spark DataFrames that have no common columns.

For example, suppose we have the following two DataFrames:

```scala
// assumes a SparkSession named `spark` and `import spark.implicits._` for toDF
val df1 = Seq(
  ("001", "002", "003"),
  ("004", "005", "006")
).toDF("A", "B", "C")
val df2 = Seq(
  ("011", "022", "033"),
  ("044", "055", "066")
).toDF("D", "E", "F")
```

The output I want to get looks like this:

```
+----+----+----+----+----+----+
|   A|   B|   C|   D|   E|   F|
+----+----+----+----+----+----+
| 001| 002| 003|null|null|null|
| 004| 005| 006|null|null|null|
|null|null|null| 011| 022| 033|
|null|null|null| 044| 055| 066|
+----+----+----+----+----+----+
```

This can be easily achieved with a full outer join whose join condition is always `false`:

```scala
import org.apache.spark.sql.functions.lit

df1.join(df2, lit(false), "full")
```

It works because a full outer join keeps every row from both DataFrames, and the `lit(false)` join condition guarantees that no row from one DataFrame ever matches a row from the other. Every row is therefore unmatched, so the columns coming from the other DataFrame are filled with `null`, producing exactly the output shown above.
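Putting it all together, here is a minimal end-to-end sketch. It assumes `spark-sql` is on the classpath (for example, the code is pasted into `spark-shell`); the session-building lines and the `local[*]` master are assumptions for running it as a standalone script:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.lit

// Assumption: a local session for demonstration purposes;
// in spark-shell this `spark` value already exists.
val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df1 = Seq(("001", "002", "003"), ("004", "005", "006")).toDF("A", "B", "C")
val df2 = Seq(("011", "022", "033"), ("044", "055", "066")).toDF("D", "E", "F")

// lit(false) guarantees no row matches, so the full outer join keeps
// every row from both sides and pads the missing columns with nulls.
val combined = df1.join(df2, lit(false), "full")
combined.show()
```

The result has all six columns and all four rows, with `null` in the columns that came from the other DataFrame.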

