How to pass a machine learning engineer interview

Many of you may be looking for a machine learning engineer job right now. Let me tell you what problems I observed while interviewing machine learning engineers for a senior position so that you can prepare better.

Table of Contents

  1. Angry rant
  2. No clue about testing
  3. Falling into own trap
  4. Throwing the model over the fence
  5. Attention to detail

Some of those things are easy fixes. It takes five seconds to decide not to say something during an interview. You won’t need more than an hour to fix your CV, but that hour may be the best time investment you make today. On the other hand, some of my advice will take several weeks to apply.

Angry rant

We asked a candidate to say something about themselves. Would you expect to hear an angry rant at this point in an interview? I didn’t either.

Yet, the candidate took the opportunity to tell us why they feel superior to other machine learning engineers. In their opinion, the others are incompetent and have no clue what they are doing. The candidate felt special because they had a degree in a field not related to IT. Apparently, it makes you a better machine learning engineer. Maybe it does. I won’t have an opportunity to see it.

I must say I admired the candidate’s confidence. Ranting during an interview is a bold move. It felt like a well-rehearsed part of their pitch, and we were surely not the only ones who had heard it.

On the other hand, who would like to work with someone who insults them within the first five minutes of an interview? I could swallow my pride and learn how to work with such a person. Unfortunately, the rant was the only well-rehearsed part. The candidate struggled to answer most of the machine learning questions.

Well, I am sure the candidate didn’t feel bad about the outcome of this interview. Perhaps, they blamed us because we “envy their superiority.” I bet their huge ego found a convenient explanation.

No clue about testing

People still write code without tests. How is that even possible? What’s even worse, some candidates have no clue why we should write tests.

One candidate claimed tests are used only to optimize code performance. OK, fine. Tests are a crucial part of performance optimization. Without them, you will write fast code that doesn’t work correctly. However, that isn’t the only reason to write tests!

Another candidate told me they had no tests because they didn’t need testing. When asked about the biggest issue in their current project, the candidate said… they had a problem getting it to work because every code change broke something. Seriously?
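That problem is exactly what a regression test exists to catch. Here is a minimal sketch of one; `clip_outliers` is a hypothetical preprocessing helper used purely for illustration, not code from any candidate’s project:

```python
# test_preprocessing.py -- run with `pytest`.
# `clip_outliers` is a hypothetical preprocessing helper used only for illustration.
import numpy as np


def clip_outliers(values, lower=0.0, upper=1.0):
    """Clip feature values into the [lower, upper] range."""
    return np.clip(np.asarray(values, dtype=float), lower, upper)


def test_values_are_clipped_into_range():
    result = clip_outliers([-0.5, 0.3, 1.7])
    assert result.min() >= 0.0
    assert result.max() <= 1.0


def test_in_range_values_are_unchanged():
    np.testing.assert_allclose(clip_outliers([0.2, 0.8]), [0.2, 0.8])
```

With a handful of tests like this in place, “every code change breaks something” turns into “the test suite tells me what I broke before I ship it.”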

Falling into own trap

Tell me, what do you expect to happen when you say you like to get into the details of machine learning models? I think it’s reasonable to expect questions about those details.

That’s what we did. Yet the candidate who was so in love with the details of machine learning models couldn’t explain how a decision forest works or the difference between bagging and boosting.
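For the record, a minimal sketch of that difference, assuming scikit-learn: bagging (for example, a random forest) trains many deep trees independently on bootstrap samples and averages their votes to reduce variance, while boosting trains shallow trees sequentially, each one correcting the errors of the ensemble built so far.

```python
# Bagging vs. boosting in scikit-learn -- an illustrative sketch, not a benchmark.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Bagging: independent deep trees trained on bootstrap samples, predictions averaged.
bagging = RandomForestClassifier(n_estimators=200, random_state=42)

# Boosting: shallow trees trained sequentially, each fitting the mistakes of the previous ones.
boosting = GradientBoostingClassifier(n_estimators=200, max_depth=3, random_state=42)

for name, model in [("random forest (bagging)", bagging), ("gradient boosting", boosting)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```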

Don’t say something because you think you should say it. People will verify what you said.

Throwing the model over the fence

We were looking for someone with experience in running ML models in production. It was crucial because, today, everyone can train a model. It may not be the best model possible, but everyone can do it. We have so many AutoML tools, tools for feature engineering, and widespread knowledge about ML models and neural network architectures. Making a working model is no longer impressive.

We didn’t want a “Kaggle data scientist” either. In real life, you don’t get a predefined dataset and a metric to measure how well you did the job. In a real project, you have to figure out what you can do with ML and how to measure whether you succeeded. You must know how to choose valuable data and get the missing data from other teams. In real life, the problem is solved when we are satisfied with the business outcome, not when the metric says you have the perfect model.

Even more important, we wanted someone who had experience working with MLOps engineers. I don’t want to explain for the hundredth time why sending me a file with a saved model or a Jupyter notebook is not enough. Where is the preprocessing code? What libraries did you use? Which versions? How will I know the model works well in production? Do you have any test cases?
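For contrast, here is a minimal sketch of a handoff that answers at least some of those questions. The dataset, pipeline, and file names are illustrative assumptions, not a prescribed standard:

```python
# train_and_export.py -- ship the preprocessing together with the model and pin the environment.
import joblib
import sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# The preprocessing lives inside the artifact, so production applies
# exactly the same transforms as training did.
pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X, y)

joblib.dump(pipeline, "model.joblib")

# Record the versions the artifact depends on (a real project would use
# a lock file or a container image instead of this hand-rolled list).
with open("requirements.txt", "w") as f:
    f.write(f"scikit-learn=={sklearn.__version__}\n")
    f.write(f"joblib=={joblib.__version__}\n")
```

A deployment test can then load model.joblib and assert that a handful of known inputs produce the expected predictions, which gives whoever runs the model in production something concrete to verify.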

No surprise, we didn’t want to hire the candidate who told us they had been training models for three years but had never deployed one in production. Three years. Three years without any business impact. Some companies waste a lot of money.

Attention to detail

Is the CV important? Yes and no. In general, HR filters out most candidates, so we get only the people who fulfill the criteria. Because of that, I shouldn’t care about the CV. It is helpful for HR, but not for me.

However, the CV is the first document written by the candidate I will ever see. It tells me what to expect from the documents the candidate writes in the future. Writing and documentation are crucial, especially when working remotely. If a candidate does a sloppy job writing the CV, I expect similar quality later.

Therefore, I didn’t like the CV that looked like fragments copied from multiple documents with no consistent formatting. From another person, we received CVs with placeholders like “Reference line item 1” instead of the actual reference. I have seen a CV where the author couldn’t decide whether to write out the month names (January) or abbreviate them (Jan), so they switched between both notations.

And the typos. There is no excuse for typos. They are too easy to fix. Typos say you either can’t use a word processor or don’t care. We will remember two candidates for the typos they made. The first made a typo in the name of their country of residence. The other had a Ph.D. but couldn’t spell the name of the university that awarded it.

Was I nitpicking? I don’t think so. After all, programming = dealing with details, and working remotely = writing a lot.

