# ValueError: Negative dimension size caused by subtracting 2 from 1 for MaxPool1D with input shapes: [?,1,1,128]. Full code, output & error in the post:

## Issue

I have an error while making the following CNN model:

```
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv1D, MaxPooling1D,
                                     BatchNormalization, Flatten, Dense)

features_train = np.reshape(features_train, (2363, 2, -1))
features_test = np.reshape(features_test, (591, 2, -1))
features_train = np.array(features_train)
features_test = np.array(features_test)
print('Data Shape:', features_train.shape, features_test.shape)
print('Training & Testing Data:', features_train, features_test)

model_2 = Sequential()
model_2.add(Conv1D(256, kernel_size=1, activation='relu', input_shape=(2, 1)))
model_2.add(BatchNormalization())
model_2.add(MaxPooling1D())
model_2.add(Conv1D(128, kernel_size=1, activation='relu'))
model_2.add(BatchNormalization())
model_2.add(MaxPooling1D())
model_2.add(Conv1D(64, kernel_size=1, activation='relu'))
model_2.add(BatchNormalization())
model_2.add(MaxPooling1D())
model_2.add(Conv1D(32, kernel_size=1, activation='relu'))
model_2.add(BatchNormalization())
model_2.add(MaxPooling1D())
model_2.add(Flatten())
model_2.add(Dense(4, kernel_initializer="uniform", activation='relu'))
model_2.add(Dense(1, kernel_initializer="uniform", activation='softmax'))
```

The output and the error while executing:

```
Data Shape: (2363, 2, 1) (591, 2, 1)
Training & Testing Data:
[[[0.5000063 ]
[0.4999937 ]]
[[0.5000012 ]
[0.4999988 ]]
[[0.50005335]
[0.49994668]]
...
[[0.50000364]
[0.49999636]]
[[0.5000013 ]
[0.49999866]]
[[0.49999487]
[0.5000052 ]]]
[[[0.50000024]
[0.4999998 ]]
[[0.5000017 ]
[0.49999833]]
[[0.50003964]
[0.49996033]]
...
[[0.5000441 ]
[0.4999559 ]]
[[0.5 ]
[0.5 ]]
[[0.5000544 ]
[0.4999456 ]]]
```

```
ValueError: Negative dimension size caused by subtracting 2 from 1 for '{{node max_pooling1d_1/MaxPool}} = MaxPool[T=DT_FLOAT, data_format="NHWC", explicit_paddings=[], ksize=[1, 2, 1, 1], padding="VALID", strides=[1, 2, 1, 1]](max_pooling1d_1/ExpandDims)' with input shapes: [?,1,1,128].
```

The data I'm trying to input is `features_train`, which has the shape `(2363, 2, 1)`. I believe this is some issue with `input_shape` and dimensions. I'm new to neural networks, so any help would be appreciated. Thanks

## Solution

`MaxPooling1D` (with its default `pool_size=2`) halves the sequence length, so the output of the first pooling layer has length 1. The later pooling layers then fail, because a length of 1 cannot be downsampled by 2 again with `padding='valid'`.

Therefore, you cannot have more than one pooling layer in this model.
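To see why, the output length of a valid-padded pooling layer is `floor((L - pool_size) / strides) + 1`. A minimal sketch (pure Python, no Keras needed) tracing the lengths through the original model:

```python
def pooled_length(length, pool_size=2, strides=None):
    """Output length of MaxPooling1D with padding='valid' (Keras defaults)."""
    if strides is None:
        strides = pool_size  # Keras defaults strides to pool_size
    return (length - pool_size) // strides + 1

length = 2                      # input_shape=(2, 1): sequence length is 2
length = pooled_length(length)  # first MaxPooling1D: 2 -> 1
print(length)                   # 1
# second MaxPooling1D: (1 - 2) // 2 + 1 is not a positive length,
# which TensorFlow reports as the negative-dimension ValueError
print(pooled_length(length))
```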

Also, I would not suggest using a `MaxPooling1D` layer on such a small input.

Another thing: you have 1 unit on the final layer with a *softmax* activation, which makes no sense. *softmax* normalizes across the layer's units, so with a single unit it will always return 1.

So, I think you want to use *sigmoid*, not *softmax*.
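A quick numeric check of that claim, using a plain NumPy softmax so no Keras is required:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(x):
    """Logistic sigmoid, mapping any logit into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# A single-unit "layer": softmax collapses to exp(x) / exp(x) == 1
for logit in (-5.0, 0.0, 3.2):
    print(softmax(np.array([logit])))  # always [1.]

# sigmoid, by contrast, gives a usable binary probability
print(sigmoid(0.0))  # 0.5
```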

Your model should look like this:

```
model_2 = Sequential()
model_2.add(Conv1D(64, kernel_size=1, activation='relu', input_shape=(2,1)))
model_2.add(BatchNormalization())
model_2.add(Conv1D(32, kernel_size=1, activation='relu'))
model_2.add(BatchNormalization())
model_2.add(Flatten())
model_2.add(Dense(10, kernel_initializer="uniform", activation='relu'))
model_2.add(Dense(1, kernel_initializer="uniform", activation='sigmoid'))
```
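By the same valid-padding length formula, `kernel_size=1` convolutions leave the sequence length untouched, so the fixed model never runs out of timesteps. A quick sanity trace in pure Python (layer widths taken from the model above):

```python
def conv1d_length(length, kernel_size, strides=1):
    """Output length of Conv1D with padding='valid' (the Keras default)."""
    return (length - kernel_size) // strides + 1

length = 2                         # input_shape=(2, 1)
length = conv1d_length(length, 1)  # Conv1D(64, kernel_size=1): 2 -> 2
length = conv1d_length(length, 1)  # Conv1D(32, kernel_size=1): 2 -> 2
flattened = length * 32            # Flatten over shape (2, 32)
print(length, flattened)           # 2 64
```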

Answered By – Mushfirat Mohaimin

**This answer, collected from Stack Overflow, is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.**