This short tutorial shows how to configure and add a new EMR step using Python running in AWS Lambda. Because the code runs in AWS Lambda, we don't have to configure the AWS client ourselves; boto3 picks up the credentials from the Lambda execution role. We can just import boto3 and use it to get the EMR client:
import boto3
emr = boto3.client("emr")
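If you run the same code outside Lambda, for local testing for example, boto3 will not have an execution role to fall back on, so you may need to point it at a region (and credentials) explicitly. The region below is just an example:

import boto3
emr = boto3.client("emr", region_name="us-east-1")  # example region, adjust to your setup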
After that, we have to define the EMR step. For example, if I want to run a Scala Spark job, I have to call the spark-submit script:
step_args = 'spark-submit --master yarn --deploy-mode client --class class_name --executor-memory 32G --driver-memory 8G s3://bucket/application.jar'

Note that spark-submit also needs the path to the application JAR after the options; the S3 path above is a placeholder, just like class_name.
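One thing worth knowing before we wrap this string in a step: the EMR API does not accept the command as a single string. A step runs the command through command-runner.jar, which expects the arguments as a list, so the string has to be split first:

args = step_args.split()  # ['spark-submit', '--master', 'yarn', '--deploy-mode', 'client', ...]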
Now, we have to create a new JobDefinition object and add it to the EMR cluster:
step = JobDefinition._prepare_step_dict("step_name", step_args=step_args)
return emr.add_job_flow_steps(JobFlowId=cluster_id, Steps=[step])
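To make the whole flow concrete, here is a minimal sketch of how these pieces could fit together in a Lambda function. The event structure (a cluster_id key) is an assumption, and JobDefinition is the helper class discussed below, not part of boto3:

import boto3

emr = boto3.client("emr")

def lambda_handler(event, context):
    cluster_id = event["cluster_id"]  # assumed to arrive in the triggering event
    step_args = (
        "spark-submit --master yarn --deploy-mode client "
        "--class class_name --executor-memory 32G --driver-memory 8G "
        "s3://bucket/application.jar"
    )
    # JobDefinition is the helper class described below, not a boto3 class
    step = JobDefinition._prepare_step_dict("step_name", step_args=step_args)
    return emr.add_job_flow_steps(JobFlowId=cluster_id, Steps=[step])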
What is a JobDefinition?
When you open the AWS EMR web interface, you will see EMR clusters with Steps. The JobDefinition object defines a single step executed on the EMR cluster.
The JobFlow is the entire queue of steps running on an EMR cluster. That's why we use the cluster id as the JobFlow id; they are the same thing.
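Because the two identifiers are interchangeable, you can pass the cluster id anywhere the API asks for a job flow, for example to list the steps and their states:

# list_steps takes the cluster id, even though steps belong to the "job flow"
response = emr.list_steps(ClusterId=cluster_id)
for step in response["Steps"]:
    print(step["Name"], step["Status"]["State"])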
The _prepare_step_dict function creates the JSON object describing a single step. In this article, it takes two arguments: the step name and the command to run on the cluster (spark-submit).
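The implementation of _prepare_step_dict is not shown here, but based on the structure add_job_flow_steps expects, a minimal sketch could look like this; the ActionOnFailure value is an assumption:

class JobDefinition:
    @staticmethod
    def _prepare_step_dict(step_name, step_args):
        # command-runner.jar executes an arbitrary command (here: spark-submit) on the cluster
        return {
            "Name": step_name,
            "ActionOnFailure": "CONTINUE",  # assumption; TERMINATE_CLUSTER etc. are also valid
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": step_args.split(),  # the API expects a list of arguments, not one string
            },
        }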