Understanding the Keras layer input shapes

When creating a Sequential model in Keras, we have to specify the input shape of only the first layer. The number of expected values in the shape tuple depends on the type of that layer. I have made a list of common layers and their input shape parameters.

Table of Contents

  1. Batch size
  2. Layer input shape parameters
    1. Dense
    2. Conv2D
    3. LSTM
    4. ConvLSTM2D

Batch size

(Almost) every kind of layer has the batch size as the first element of its input shape, but we usually don’t specify it as part of the input definition. We set it later, during training, so I am going to skip the batch size in my examples.
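As a minimal sketch (with made-up data shapes), this is how the batch size shows up at training time rather than in the input_shape:

import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation='relu', input_shape=(16,)),  # no batch size here
    keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')

X = np.random.rand(100, 16)  # 100 samples, 16 features each (dummy data)
y = np.random.rand(100, 1)

model.fit(X, y, batch_size=10, epochs=1)  # the batch size is defined here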

Layer input shape parameters

Dense

The actual shape depends on the number of dimensions. In the case of a one-dimensional array of n features, the input shape looks like this: (batch_size, n). As I mentioned before, we can skip the batch_size when we define the model structure, so in the code, we write:

keras.layers.Dense(32, activation='relu', input_shape=(16,))
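To illustrate that only the first layer needs input_shape, here is a short sketch (the layer sizes are arbitrary) in which the following layers infer their input size from the previous layer’s output:

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation='relu', input_shape=(16,)),
    keras.layers.Dense(8, activation='relu'),  # input size (32) inferred automatically
    keras.layers.Dense(1)
])
model.summary()  # output shapes: (None, 32), (None, 8), (None, 1); None is the batch size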

Conv2D

Surprisingly, the convolutional layer used for images needs four-dimensional input. As usual, the first parameter is the batch size and (as usual) we skip it. Next come two parameters that denote the height and width of the image in pixels. As the last parameter, we put the number of channels. In the case of a standard RGB image, the number of channels is 3.

It is also possible to specify the number of channels before the size. To do this, we also have to set the data_format to “channels_first.”

keras.layers.Conv2D(128, kernel_size=3, activation='relu', input_shape=(64,64,3)) #channels_last (the default style)

# or

keras.layers.Conv2D(128, kernel_size=3, activation='relu', input_shape=(3,64,64), data_format='channels_first')
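As a sanity check, here is a sketch (using a random batch of 64x64 RGB images as dummy input) showing that the data passed to the model is four-dimensional, (batch_size, height, width, channels) in the channels_last layout:

import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Conv2D(128, kernel_size=3, activation='relu', input_shape=(64,64,3)),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation='softmax')
])

images = np.random.rand(8, 64, 64, 3)  # a batch of 8 RGB images
print(model.predict(images).shape)     # (8, 10)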

LSTM

In the case of LSTM, we have three parameters: the implicit batch size, which we don’t define (as usual), the number of timesteps (in this example, 4), and the number of features of every timestep (16).

keras.layers.LSTM(units=8, input_shape=(4,16))

Note that I also had to specify the number of units (the number 8 in the first parameter). That is the size of the output, and it is going to be used as the input size of the next layer.
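Here is a short sketch (with dummy data and an arbitrary Dense layer on top) showing the three-dimensional input, (batch_size, timesteps, features), and the 8 output values feeding the next layer:

import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.LSTM(units=8, input_shape=(4,16)),
    keras.layers.Dense(1)  # receives the 8 values produced by the LSTM
])

sequences = np.random.rand(32, 4, 16)  # (batch_size, timesteps, features)
print(model.predict(sequences).shape)  # (32, 1)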

ConvLSTM2D

Very similar to Conv2D. We put the additional time parameter right after the batch size, so it is always the first element of the input_shape tuple; even when data_format='channels_first' is used, the channels come second, after the timesteps. In this example, 4 denotes the number of timesteps.

keras.layers.ConvLSTM2D(filters=128, kernel_size=3, activation='relu', input_shape=(4,64,64,3)) #channels_last (the default style)

# or

keras.layers.ConvLSTM2D(filters=128, kernel_size=3, activation='relu', input_shape=(4,3,64,64), data_format='channels_first')
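And a short sketch (using a random batch of 4-frame, 64x64 RGB “clips” as dummy input) showing the five-dimensional input, (batch_size, timesteps, height, width, channels) in the channels_last layout:

import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.ConvLSTM2D(filters=128, kernel_size=3, activation='relu', input_shape=(4,64,64,3))
])

clips = np.random.rand(2, 4, 64, 64, 3)  # a batch of 2 clips, 4 timesteps each
print(model.predict(clips).shape)        # (2, 62, 62, 128)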