# Neural Networks - Exercise: Convolution and Pooling

## Introduction

In this notebook, you will implement two of the key operations of a convolutional neural network: convolution and pooling. For now, the focus is on implementing the operations themselves with preset filters, outside the context of a network. In subsequent notebooks, you'll focus on an efficient implementation with learnable kernels.

# Requirements

## Knowledge

By now you should be familiar with the convolution operation; if you need a recap, revisit the corresponding material before continuing.

## Python-Modules

# third party
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal

from PIL import Image

## Data

For the exercise I used the 'Photo of the Day' (6.6.2018) from Unsplash.com by Adrian Trinkaus. You can change it at will.

# Open an image
img = Image.open('pics/berlin_adrian-trinkaus.jpg')

# Convert it to greyscale and RGB
img_gry = img.convert('L')
img_rgb = img.convert('RGB')

# Create numpy arrays and squash all values into the interval [0,1];
# the RGB array has dimensions (height, width, channel),
# the grayscale array (height, width)
img_gry = np.asarray(img_gry)/255.
img_rgb = np.asarray(img_rgb)/255.

# Print array shapes
print('grayscale shape:', img_gry.shape)
print('rgb shape:', img_rgb.shape)

# Example plot
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(15,15))
ax1.imshow(img_gry, cmap="Greys_r")
ax2.imshow(img_rgb)
plt.show()

# Convolutional Layer

## Typical Computer Vision Filter

Because the filters are not learned during this exercise, we need preset ones for your experiments. Some filters are given, and you should create at least two more. Do a little research on 'image processing filters' and pick what you like. Remember that your kernels need to have the same depth as your input; you may want to account for this in your implementation of the convolution operation.

class Filter:
    """Provides some typical computer vision filters as attributes.

    Attributes:
        edge_detector: omnidirectional edge detection kernel
        sobel: horizontal Sobel kernel (detects horizontal edges)
        vertical_edge: vertical Sobel kernel (detects vertical edges)
        sharpener: sharpening kernel
        blur: normalized Gaussian blur kernel
    """
    def __init__(self):
        """Initializes common filters as attributes."""
        self.edge_detector = np.array([[-1., -1., -1.],
                                       [-1.,  8., -1.],
                                       [-1., -1., -1.]])

        self.sobel = np.array([[ 1.,  2.,  1.],
                               [ 0.,  0.,  0.],
                               [-1., -2., -1.]])

        self.vertical_edge = np.array([[-1., 0., 1.],
                                       [-2., 0., 2.],
                                       [-1., 0., 1.]])

        self.sharpener = np.array([[ 0., -1.,  0.],
                                   [-1.,  5., -1.],
                                   [ 0., -1.,  0.]])

        self.blur = self.gaussian_blur(3)

    def gaussian_blur(self, size):
        """Creates a squared Gaussian blur kernel.

        Args:
            size: length of one axis of the squared kernel

        Returns:
            A squared matrix with normalized Gaussian values
        """
        gauss_values = scipy.signal.get_window(('gaussian', 1.), size)
        gauss_matrix = np.outer(gauss_values, gauss_values)
        return gauss_matrix / gauss_matrix.sum()
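The exercise asks you to add at least two filters of your own. As a hedged starting point, here are two common image-processing kernels you could add as further attributes (the names `emboss` and `box_blur` are illustrative, not part of the exercise specification):

```python
import numpy as np

# Emboss kernel: emphasizes edges along one diagonal, giving a
# relief-like effect.
emboss = np.array([[-2., -1., 0.],
                   [-1.,  1., 1.],
                   [ 0.,  1., 2.]])

# Box blur: averages the 3x3 neighborhood; dividing by the number of
# entries keeps the overall image brightness unchanged.
box_blur = np.ones((3, 3)) / 9.
```

Any kernel whose entries sum to 1 preserves average brightness, which is why blur kernels are typically normalized.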

## Convolution

Create a Conv class that implements a (naive) convolution operation on one image at a time. Do not use any modules besides numpy; your goal is to get a better understanding of a 2d-conv operation. If your input has more than one channel, apply the same conv operation to each channel. Document your code and follow the specification. After your implementation, give a statement about the runtime of your algorithm in $O$ notation.

class Conv:
    def __init__(self, kernel, stride=1, padding=True, verbose=None):
        """
        Args:
            kernel: a filter for the convolution
            stride: step size with which the kernel slides over the image
            padding: if set, zero padding will be applied to keep the image dimensions
            verbose: if set, additional information will be printed, e.g., input and output dimensions
        """
        raise NotImplementedError("This is your duty")

    def forward(self, image):
        """Executes a convolution on the given image with the init params.

        Args:
            image (ndarray): squared image

        Returns:
            ndarray: activation map
        """
        raise NotImplementedError("This is your duty")
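Once your implementation works, it can help to sanity-check it against SciPy on a tiny input. This sketch assumes your `forward` with `stride=1` and `padding=False` computes a 'valid' cross-correlation (the usual convention for CNN layers; `scipy.signal.convolve2d` would additionally flip the kernel):

```python
import numpy as np
import scipy.signal

# Tiny test input and kernel with easily checkable values.
image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[0., 1.],
                   [1., 0.]])

# Reference result: 'valid' cross-correlation, i.e. no padding.
reference = scipy.signal.correlate2d(image, kernel, mode='valid')

# Without padding, an n x n input and k x k kernel at stride 1 yield an
# output of size (n - k + 1) x (n - k + 1): here 3 x 3.
print(reference.shape)  # (3, 3)
```

Comparing `Conv(kernel, stride=1, padding=False).forward(image)` against `reference` (e.g. with `np.allclose`) catches most indexing mistakes early.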

## Convolution Experiments

Use the test image (you may also want to try some other images) and run some experiments with your implementation to get more familiar with the operations in a convolutional layer. Plot and compare your results. Some suggestions:

• Try different filters
• Create new filters with a bigger size
• Use different stride values
• Stack several convolution operations

### Testing Kernels

#### RGB

fig, axs = plt.subplots(2, 3, figsize=(15, 10))
fig.subplots_adjust(hspace=.5, wspace=.001)
axs = axs.ravel()

k = Filter()
i = 0
for attr, value in vars(k).items():
    c = Conv(value, verbose=True, padding=False)
    img = c.forward(img_rgb)
    axs[i].imshow(img, cmap="Greys_r")
    axs[i].set_title(attr)
    i += 1
plt.show()

#### B/W

fig, axs = plt.subplots(2, 3, figsize=(15, 10))
fig.subplots_adjust(hspace=.5, wspace=.001)
axs = axs.ravel()

i = 0
for attr, value in vars(k).items():
    c = Conv(value, padding=True)
    img = c.forward(img_gry)
    axs[i].imshow(img, cmap="Greys_r")
    axs[i].set_title(attr)
    i += 1
plt.show()

### Testing Stride

#### Without Padding

fig, axs = plt.subplots(2, 3, figsize=(15, 10))
fig.subplots_adjust(hspace=.5, wspace=.001)
axs = axs.ravel()

for i in range(6):
    c = Conv(k.blur, stride=i+1, verbose=True, padding=False)
    img = c.forward(img_rgb)
    axs[i].imshow(img, cmap="Greys_r")
    axs[i].set_title("Stride size: " + str(i+1))
plt.show()

#### With Padding

fig, axs = plt.subplots(2, 3, figsize=(15, 10))
fig.subplots_adjust(hspace=.5, wspace=.001)
axs = axs.ravel()

for i in range(6):
    c = Conv(k.blur, stride=i+1, padding=True)
    img = c.forward(img_rgb)
    axs[i].imshow(img, cmap="Greys_r")
    axs[i].set_title("Stride size: " + str(i+1))
plt.show()

# Pooling Layer

## Pooling

Create a Pooling class that implements the pooling operation with different functions (max, sum, mean) on a given image. Document your code and follow the specification.

class Pooling:
    def __init__(self, pooling_function=None, pooling_size=2, stride=2, verbose=None):
        """
        Args:
            pooling_function: defines the pooling operator: 'max' (default), 'mean' or 'sum'
            pooling_size: size of one axis of the squared pooling filter
            stride: step size with which the filter slides over the image
            verbose: if set, additional information will be printed, e.g., input and output dimensions
        """
        raise NotImplementedError("This is your duty")

    def forward(self, image):
        """Executes pooling on the given image with the init params.

        Args:
            image (ndarray): squared image

        Returns:
            ndarray: activation map
        """
        raise NotImplementedError("This is your duty")
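For the special case `pooling_size == stride` (non-overlapping windows), a reshape trick gives a compact reference you can check your general implementation against. This is a minimal sketch, assuming a single-channel image whose side lengths are divisible by the window size; the helper name `pool2x2` is illustrative:

```python
import numpy as np

def pool2x2(image, func=np.max):
    """Non-overlapping 2x2 pooling via reshaping.

    Splits the image into 2x2 blocks and reduces each block with the
    given function (np.max, np.mean or np.sum).
    """
    h, w = image.shape
    blocks = image.reshape(h // 2, 2, w // 2, 2)
    # Axes 1 and 3 index positions inside each 2x2 block.
    return func(blocks, axis=(1, 3))

x = np.array([[1., 2., 5., 6.],
              [3., 4., 7., 8.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])

print(pool2x2(x))           # max pooling
print(pool2x2(x, np.mean))  # mean pooling
```

Your general `Pooling` class must additionally handle arbitrary strides and window sizes, which this shortcut cannot express.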

## Pooling Experiments

Test your implementation with different values for pooling_size and stride, and compare the three pooling functions.

fig, axs = plt.subplots(1, 3, figsize=(15, 10))
fig.subplots_adjust(hspace=.5, wspace=.001)
axs = axs.ravel()

pooling_func = ['max', 'mean', 'sum']

for i in range(3):
    p = Pooling(pooling_func[i], pooling_size=2, stride=2)
    img = p.forward(img_rgb)
    axs[i].imshow(img, cmap="Greys_r")
    axs[i].set_title("Function: " + pooling_func[i])
plt.show()

### Different Strides and Pool Functions

#### Max-Pool

fig, axs = plt.subplots(2, 3, figsize=(15, 10))
fig.subplots_adjust(hspace=.5, wspace=.001)
axs = axs.ravel()

for i in range(6):
    p = Pooling('max', pooling_size=2, stride=2+i*2)
    img = p.forward(img_rgb)
    axs[i].imshow(img, cmap="Greys_r")
    axs[i].set_title("Stride: " + str(i*2+2))

plt.show()

#### Mean-Pool

fig, axs = plt.subplots(2, 3, figsize=(15, 10))
fig.subplots_adjust(hspace=.5, wspace=.001)
axs = axs.ravel()

for i in range(6):
    p = Pooling('mean', pooling_size=2, stride=2+i*2)
    img = p.forward(img_rgb)
    axs[i].imshow(img, cmap="Greys_r")
    axs[i].set_title("Stride: " + str(i*2+2))

plt.show()

#### Sum-Pool

fig, axs = plt.subplots(2, 3, figsize=(15, 10))
fig.subplots_adjust(hspace=.5, wspace=.001)
axs = axs.ravel()

for i in range(6):
    p = Pooling('sum', pooling_size=2, stride=2+i*2)
    img = p.forward(img_rgb)
    axs[i].imshow(img, cmap="Greys_r")
    axs[i].set_title("Stride: " + str(i*2+2))

plt.show()

### Different pooling sizes

fig, axs = plt.subplots(2, 3, figsize=(15, 10))
fig.subplots_adjust(hspace=.5, wspace=.001)
axs = axs.ravel()

for i in range(6):
    p = Pooling('max', pooling_size=i*2+2, stride=2)
    img = p.forward(img_rgb)
    axs[i].imshow(img, cmap="Greys_r")
    axs[i].set_title("Pooling size: " + str(i*2+2))

plt.show()

# Licenses

## Notebook License (CC-BY-SA 4.0)

The following license applies to the complete notebook, including code cells. It does not, however, apply to any referenced external media (e.g., images).

Neural Networks - Exercise: Convolution and Pooling
by Benjamin Voigt
is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at https://gitlab.com/deep.TEACHING.

## Code License (MIT)

The following license only applies to code cells of the notebook.

Copyright 2018 Benjamin Voigt

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.