---
title: "How to add an EMR step from AWS Lambda"
description: "How to configure a new EMR step using AWS Lambda in Python"
author: "Bartosz Mikulski"
author_bio: "Principal AI Engineer & MLOps Architect. I bridge the gap between \"it works in a notebook\" and \"it works for 200 million users.\""
author_url: https://mikulskibartosz.name
author_linkedin: https://www.linkedin.com/in/mikulskibartosz/
author_github: https://github.com/mikulskibartosz
canonical_url: https://mikulskibartosz.name/add-emr-step-from-aws-lambda
---

This short tutorial shows how to configure and add a new EMR step using Python running in AWS Lambda. Because the code runs inside AWS Lambda, we don't have to configure AWS credentials ourselves; the function's execution role provides them. We can just import boto3 and use it to get the EMR client:

```python
import boto3

emr = boto3.client("emr")
```
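
Note that the Lambda execution role needs the `elasticmapreduce:AddJobFlowSteps` permission to submit steps. Also, if the Lambda function runs in a different region than the EMR cluster, we can pass the region explicitly when creating the client (the region below is just an example):

```python
emr = boto3.client("emr", region_name="us-east-1")
```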

After that, we have to define the EMR step. For example, if I want to run a Scala Spark job, I have to call the `spark-submit` script. Note that `spark-submit` also needs the location of the application JAR, so the class name and the S3 path below are placeholders to replace with your own values:

```python
step_args = "spark-submit --master yarn --deploy-mode client --class class_name --executor-memory 32G --driver-memory 8G s3://your_bucket/your_application.jar"
```

Now, we have to create a new `JobDefinition` object and add it to the EMR cluster:

```python
step = JobDefinition._prepare_step_dict("step_name", step_args=step_args)
return emr.add_job_flow_steps(JobFlowId=cluster_id, Steps=[step])
```
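
If we want to inspect the result instead of returning it from the handler directly, `add_job_flow_steps` responds with a dictionary containing the ids of the newly submitted steps:

```python
response = emr.add_job_flow_steps(JobFlowId=cluster_id, Steps=[step])
# The response contains the ids of the newly added steps, for example:
# {'StepIds': ['s-XXXXXXXXXXXXX']}
print(response["StepIds"])
```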

## What is a JobDefinition?

When you open the AWS EMR web interface, you see EMR clusters with their Steps. The JobDefinition API object defines a single step executed on the EMR cluster.

The JobFlow is the entire queue of steps running on an EMR cluster. That's why we use the cluster id as the JobFlow id; they are the same thing.
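
If we don't have the cluster id at hand, we can look it up with the same EMR client. A minimal sketch, assuming the cluster is named `my-cluster` (a hypothetical name) and is still able to accept steps:

```python
# List clusters that can accept new steps and pick ours by name.
clusters = emr.list_clusters(ClusterStates=["RUNNING", "WAITING"])
cluster_id = next(
    c["Id"] for c in clusters["Clusters"] if c["Name"] == "my-cluster"
)
```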

The `_prepare_step_dict` function creates the JSON object describing a single step. In this article, it takes two arguments: the step name and the command to run on the cluster (the `spark-submit` call).
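
This helper is not part of boto3, so here is a minimal sketch of what it can look like. It assumes the standard `command-runner.jar` that EMR ships for running shell commands (such as `spark-submit`) as steps:

```python
class JobDefinition:
    @staticmethod
    def _prepare_step_dict(step_name, step_args):
        return {
            "Name": step_name,
            # Keep the cluster running even if this step fails.
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                # command-runner.jar executes the given command on the
                # master node, which is how EMR runs spark-submit as a step.
                "Jar": "command-runner.jar",
                # boto3 expects the command as a list of arguments.
                "Args": step_args.split(),
            },
        }
```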