This short tutorial shows how to configure and add a new EMR step using Python running in AWS Lambda. Because the code runs in AWS Lambda, we don't have to configure credentials for the AWS client; the Lambda execution role provides them. We can just import boto3 and use it to get the EMR client:

import boto3

# Lambda's execution role supplies the credentials, and boto3 picks up
# the region from the Lambda environment, so no extra configuration is needed
emr = boto3.client("emr")

After that, we have to define the EMR step. For example, if I want to run a Scala Spark job, I have to call the spark-submit script (the class name and the JAR location below are placeholders):

step_args = 'spark-submit --master yarn --deploy-mode client --class class_name --executor-memory 32G --driver-memory 8G s3://your-bucket/your-job.jar'

Now, we have to create a new step using the JobDefinition helper and add it to the EMR cluster (cluster_id is the ID of the target cluster):

step = JobDefinition._prepare_step_dict("step_name", step_args=step_args)
return emr.add_job_flow_steps(JobFlowId=cluster_id, Steps=[step])
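
If we want to inspect the result instead of returning it directly, add_job_flow_steps responds with the IDs of the newly added steps. A minimal sketch, assuming the cluster_id and step variables from above:

response = emr.add_job_flow_steps(JobFlowId=cluster_id, Steps=[step])

# add_job_flow_steps returns one ID per submitted step
step_id = response["StepIds"][0]

# the step ID can be used to poll the step state later
state = emr.describe_step(ClusterId=cluster_id, StepId=step_id)["Step"]["Status"]["State"]
print(state)  # e.g., PENDING, RUNNING, COMPLETED, or FAILED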

What is a JobDefinition?

When you open the AWS EMR web interface, you will see EMR clusters with their Steps. The JobDefinition object defines a single step executed by the EMR cluster.
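
The list of steps you see in the web interface is also available programmatically. A small sketch that prints the steps of a cluster using the list_steps call:

for step_summary in emr.list_steps(ClusterId=cluster_id)["Steps"]:
    print(step_summary["Id"], step_summary["Name"], step_summary["Status"]["State"])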

A JobFlow is the entire queue of steps running on an EMR cluster. That's why we pass the cluster ID as the JobFlowId: they are the same identifier.
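
If the Lambda function doesn't receive the cluster ID as an input parameter, we can look it up with list_clusters. A sketch that prints the active clusters; the Id field is exactly the value we pass as JobFlowId:

clusters = emr.list_clusters(ClusterStates=["WAITING", "RUNNING"])
for cluster in clusters["Clusters"]:
    print(cluster["Id"], cluster["Name"])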

The _prepare_step_dict function creates the JSON object describing a single step. In this article, it takes two arguments: the step name and the command to run on the cluster (the spark-submit invocation).
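
The JobDefinition helper is not part of boto3; it is our own wrapper. A minimal sketch of what _prepare_step_dict might look like, assuming the command is executed through EMR's command-runner.jar and that a failed step should not terminate the cluster:

class JobDefinition:
    @staticmethod
    def _prepare_step_dict(step_name, step_args):
        # command-runner.jar runs an arbitrary command on the master node;
        # Args is the command split into individual tokens
        return {
            "Name": step_name,
            "ActionOnFailure": "CONTINUE",  # assumption: keep the cluster alive on failure
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": step_args.split(),
            },
        }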
