---
title: "Bootstrapping vs. Bagging: Key Differences in Machine Learning Ensemble Methods"
description: "Understanding the difference between bootstrapping and bagging in machine learning"
author: "Bartosz Mikulski"
author_bio: "Principal AI Engineer & MLOps Architect. I bridge the gap between \"it works in a notebook\" and \"it works for 200 million users.\""
author_url: https://mikulskibartosz.name
author_linkedin: https://www.linkedin.com/in/mikulskibartosz/
author_github: https://github.com/mikulskibartosz
canonical_url: https://mikulskibartosz.name/bootstrapping-vs-bagging
---

These two words often appear in the same texts and tutorials, and some people seem to use them as synonyms.

They are not the same. They are not even similar.
Sure, we see them in the same context, but they describe two different steps of a single machine learning process.

## Bootstrapping

Bootstrapping is a method of sample selection. The formal definition describes it as “random sampling with replacement.”
Never mind the formal definition for a while; let’s build some intuition around the term.

In short, it allows us to choose duplicates while sampling (for example, when selecting observations to be used for training).
It may be useful when we have a small dataset, but the algorithm requires a lot of data. Don’t get too excited, though. It won’t magically let you successfully use deep learning when you have only 10 examples in the training set.
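A minimal sketch of what “sampling with replacement” means in practice (the dataset and seed are made up for illustration):

```python
import random

random.seed(42)  # just to make the example reproducible
dataset = list(range(10))  # ten observations: 0, 1, ..., 9

# Sampling WITH replacement: the same observation can be drawn more than once,
# so the bootstrap sample may contain duplicates (and miss some observations).
bootstrap_sample = [random.choice(dataset) for _ in range(len(dataset))]
print(bootstrap_sample)
```

The sample has the same size as the original dataset, but some observations appear twice while others are missing entirely.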

## Bagging

Now, we can move on to “bagging.” Bagging is a technique of fitting multiple classifiers and combining them into a single ensemble model.

Each of the classifiers gets a different training set, which is why the words “bootstrapping” and “bagging” are often used together. The training set for each classifier may be generated using bootstrapping.
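The idea can be sketched with a toy “classifier” that simply predicts the most common label it saw during training. Everything here (function names, the dataset, the seed) is invented for illustration; the point is the structure: bootstrap a training set per classifier, then combine predictions by voting.

```python
import random
from collections import Counter

def bootstrap(data):
    """Draw a bootstrap sample: same size as the data, with replacement."""
    return [random.choice(data) for _ in range(len(data))]

def train_majority_classifier(training_set):
    """A toy 'classifier' that always predicts its training set's most common label."""
    label = Counter(y for _, y in training_set).most_common(1)[0][0]
    return lambda x: label

def majority_vote(predictions):
    """Combine the ensemble's predictions by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

random.seed(0)
# Toy dataset: (feature, label) pairs
data = [(i, "a" if i < 6 else "b") for i in range(10)]

# Bagging: each classifier is trained on its own bootstrap sample...
ensemble = [train_majority_classifier(bootstrap(data)) for _ in range(5)]

# ...and their predictions are combined by voting.
prediction = majority_vote(clf(3) for clf in ensemble)
print(prediction)
```

Real bagging implementations use actual learning algorithms (e.g., decision trees) instead of this toy classifier, but the bootstrap-then-vote structure is the same.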

## Bootstrapping and bagging

In Scikit-learn, the problem is nicely encapsulated (and not so nicely generalized). We have the `sklearn.ensemble.BaggingClassifier` classifier.

In its default configuration, `BaggingClassifier` uses bootstrapping to choose the samples for each classifier's training set, but it can also be configured to choose a random subset of features or to use random sampling without replacement.