Densely Connected Networks (DenseNet)
=====================================
ResNet significantly changed the view of how to parametrize the
functions in deep networks. DenseNet is to some extent the logical
extension of this. To understand how to arrive at it, let’s take a small
detour to theory. Recall the Taylor expansion for functions. For scalars
it can be written as
.. math:: f(x) = f(0) + f'(0) x + \frac{1}{2} f''(0) x^2 + \frac{1}{6} f'''(0) x^3 + o(x^3)
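As a quick numerical illustration (a plain-Python sketch, not part of the original text), the partial sums of this expansion for :math:`e^x` approximate the function increasingly well as higher-order terms are added:

```python
import math

def taylor_exp(x, order):
    # Partial sum of the Taylor expansion of exp(x) around 0:
    # sum of x**k / k! for k = 0 .. order
    return sum(x**k / math.factorial(k) for k in range(order + 1))

# The approximation error at x = 0.5 shrinks as the order grows
for order in (1, 2, 3):
    print(order, abs(taylor_exp(0.5, order) - math.exp(0.5)))
```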
Function Decomposition
----------------------
The key point is that it decomposes the function into increasingly
higher order terms. In a similar vein, ResNet decomposes functions into
.. math:: f(\mathbf{x}) = \mathbf{x} + g(\mathbf{x})
That is, ResNet decomposes :math:`f` into a simple linear term and a
more complex nonlinear one. What if we want to go beyond two terms? A
solution was proposed by :cite:`Huang.Liu.Van-Der-Maaten.ea.2017` in
the form of DenseNet, an architecture that reported record performance
on the ImageNet dataset.
.. figure:: ../img/densenet-block.svg

    The main difference between ResNet (left) and DenseNet (right) in
    cross-layer connections: use of addition and use of concatenation.
The key difference between ResNet and DenseNet is that in the latter
case outputs are *concatenated* rather than added. As a result we
perform a mapping from :math:`\mathbf{x}` to its values after applying
an increasingly complex sequence of functions.
.. math:: \mathbf{x} \to \left[\mathbf{x}, f_1(\mathbf{x}), f_2(\mathbf{x}, f_1(\mathbf{x})), f_3(\mathbf{x}, f_1(\mathbf{x}), f_2(\mathbf{x}, f_1(\mathbf{x})), \ldots\right]
In the end, all these functions are combined in an MLP to reduce the
number of features again. In terms of implementation this is quite
simple: rather than adding terms, we concatenate them. The name
DenseNet arises from the fact that the dependency graph between
variables becomes quite dense. The last layer of such a chain is densely
connected to all previous layers. The main components that compose a
DenseNet are dense blocks and transition layers. The former defines how
the inputs and outputs are concatenated, while the latter controls the
number of channels so that it is not too large.
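As a minimal sketch of this idea (using NumPy rather than Gluon, with illustrative shapes), concatenating on the channel dimension keeps the input and every intermediate output available to later layers:

```python
import numpy as np

# Illustrative shapes only: a 3-channel input and two hypothetical block
# outputs with 10 channels each; the point is the channel arithmetic
x  = np.ones((1, 3, 8, 8))    # (batch, channels, height, width)
f1 = np.ones((1, 10, 8, 8))   # output of a first block applied to x
f2 = np.ones((1, 10, 8, 8))   # output of a second block applied to [x, f1]

# Channel-wise concatenation: 3 + 10 + 10 = 23 channels
dense = np.concatenate([x, f1, f2], axis=1)
print(dense.shape)  # -> (1, 23, 8, 8)
```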
.. figure:: ../img/densenet.svg

    Dense connections in DenseNet
Dense Blocks
------------
DenseNet uses the modified “batch normalization, activation, and
convolution” architecture of ResNet (see the exercise in
:numref:`chapter_resnet`). First, we implement this architecture in
the ``conv_block`` function.
.. code:: python

    import d2l
    from mxnet import gluon, nd
    from mxnet.gluon import nn

    def conv_block(num_channels):
        blk = nn.Sequential()
        blk.add(nn.BatchNorm(),
                nn.Activation('relu'),
                nn.Conv2D(num_channels, kernel_size=3, padding=1))
        return blk
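Note that the :math:`3\times 3` convolution in ``conv_block`` uses padding 1 and stride 1, so it preserves height and width; only the channel count changes when outputs are later concatenated. A quick check with the standard output-size formula (a plain-Python sketch):

```python
def conv_out(n, kernel, stride, padding):
    # Standard output-size formula for one spatial dimension
    return (n + 2 * padding - kernel) // stride + 1

# A 3x3 convolution with padding 1 and stride 1 keeps an 8x8 input at 8x8
print(conv_out(8, kernel=3, stride=1, padding=1))  # -> 8
```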
A dense block consists of multiple ``conv_block`` units, each using the
same number of output channels. In the forward computation, however, we
concatenate the input and output of each block on the channel dimension.
.. code:: python

    class DenseBlock(nn.Block):
        def __init__(self, num_convs, num_channels, **kwargs):
            super(DenseBlock, self).__init__(**kwargs)
            self.net = nn.Sequential()
            for _ in range(num_convs):
                self.net.add(conv_block(num_channels))

        def forward(self, X):
            for blk in self.net:
                Y = blk(X)
                # Concatenate the input and output of each block on the
                # channel dimension
                X = nd.concat(X, Y, dim=1)
            return X
In the following example, we define a ``DenseBlock`` instance with two
convolution blocks of 10 output channels each. When using an input with
3 channels, we will get an output with :math:`3+2\times 10=23` channels.
The number of convolution block channels controls the increase in the
number of output channels relative to the number of input channels. This
is also referred to as the growth rate.
.. code:: python

    blk = DenseBlock(2, 10)
    blk.initialize()
    X = nd.random.uniform(shape=(4, 3, 8, 8))
    Y = blk(X)
    Y.shape
.. parsed-literal::
    :class: output

    (4, 23, 8, 8)
Transition Layers
-----------------
Since each dense block increases the number of channels, adding too many
of them will lead to an excessively complex model. A transition layer is
used to control the complexity of the model. It reduces the number of
channels by using a :math:`1\times 1` convolutional layer, and halves
the height and width by using an average pooling layer with a stride
of 2, further reducing the complexity of the model.
.. code:: python

    def transition_block(num_channels):
        blk = nn.Sequential()
        blk.add(nn.BatchNorm(), nn.Activation('relu'),
                nn.Conv2D(num_channels, kernel_size=1),
                nn.AvgPool2D(pool_size=2, strides=2))
        return blk
Apply a transition layer with 10 channels to the output of the dense
block in the previous example. This reduces the number of output
channels to 10, and halves the height and width.
.. code:: python

    blk = transition_block(10)
    blk.initialize()
    blk(Y).shape
.. parsed-literal::
    :class: output

    (4, 10, 4, 4)
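To see the spatial halving in isolation, here is a small NumPy sketch (illustrative, not Gluon) of :math:`2\times 2` average pooling with stride 2 on a single :math:`4\times 4` feature map:

```python
import numpy as np

# One 4x4 feature map, laid out as (batch, channels, height, width)
x = np.arange(16.0).reshape(1, 1, 4, 4)

# 2x2 average pooling with stride 2: split height and width into
# non-overlapping 2x2 blocks and average within each block
pooled = x.reshape(1, 1, 2, 2, 2, 2).mean(axis=(3, 5))
print(pooled.shape)   # -> (1, 1, 2, 2)
print(pooled[0, 0])   # each entry is the mean of a 2x2 block
```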
DenseNet Model
--------------
Next, we will construct a DenseNet model. DenseNet first uses the same
single convolutional layer and maximum pooling layer as ResNet.
.. code:: python

    net = nn.Sequential()
    net.add(nn.Conv2D(64, kernel_size=7, strides=2, padding=3),
            nn.BatchNorm(), nn.Activation('relu'),
            nn.MaxPool2D(pool_size=3, strides=2, padding=1))
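With a :math:`96\times 96` input (the size used for training later in this section), this stem reduces the spatial resolution by a factor of 4. A quick check with the usual output-size formula (a plain-Python sketch):

```python
def conv_out(n, kernel, stride, padding):
    # Output size of a convolution or pooling layer along one dimension
    return (n + 2 * padding - kernel) // stride + 1

n = 96
n = conv_out(n, kernel=7, stride=2, padding=3)  # 7x7 conv, stride 2 -> 48
n = conv_out(n, kernel=3, stride=2, padding=1)  # 3x3 max pool, stride 2 -> 24
print(n)  # -> 24
```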
Then, similar to the four residual blocks that ResNet uses, DenseNet
uses four dense blocks. Similar to ResNet, we can set the number of
convolutional layers used in each dense block. Here, we set it to 4,
consistent with the ResNet-18 in the previous section. Furthermore, we
set the number of channels (i.e., the growth rate) for the convolutional
layers in the dense block to 32, so each dense block will add 128
channels.
In ResNet, the height and width are reduced between each module by a
residual block with a stride of 2. Here, we use the transition layer to
halve the height and width and halve the number of channels.
.. code:: python

    # num_channels: the current number of channels
    num_channels, growth_rate = 64, 32
    num_convs_in_dense_blocks = [4, 4, 4, 4]

    for i, num_convs in enumerate(num_convs_in_dense_blocks):
        net.add(DenseBlock(num_convs, growth_rate))
        # This is the number of output channels in the previous dense block
        num_channels += num_convs * growth_rate
        # A transition layer that halves the number of channels is added
        # between the dense blocks
        if i != len(num_convs_in_dense_blocks) - 1:
            num_channels //= 2
            net.add(transition_block(num_channels))
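The channel bookkeeping in this loop can be traced with plain Python (same numbers, no Gluon needed); the count after each stage is what the transition layers keep in check:

```python
# Trace the channel count through the four stages, mirroring the loop
# above: each dense block adds 4 * 32 = 128 channels, and every
# transition layer except the last halves the total
num_channels, growth_rate = 64, 32
stage_channels = []
for i, num_convs in enumerate([4, 4, 4, 4]):
    num_channels += num_convs * growth_rate
    if i != 3:
        num_channels //= 2
    stage_channels.append(num_channels)
print(stage_channels)  # -> [96, 112, 120, 248]
```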
Similar to ResNet, a global pooling layer and fully connected layer are
connected at the end to produce the output.
.. code:: python

    net.add(nn.BatchNorm(),
            nn.Activation('relu'),
            nn.GlobalAvgPool2D(),
            nn.Dense(10))
Data Acquisition and Training
-----------------------------
Since we are using a deeper network here, in this section, we will
reduce the input height and width from 224 to 96 to simplify the
computation.
.. code:: python

    lr, num_epochs, batch_size = 0.1, 10, 256
    train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=96)
    d2l.train_ch5(net, train_iter, test_iter, num_epochs, lr)
.. parsed-literal::
    :class: output

    loss 0.142, train acc 0.948, test acc 0.884
    5118.5 examples/sec on gpu(0)
.. image:: output_densenet_1bae0e_17_1.svg
Summary
-------
- In terms of cross-layer connections, unlike ResNet, where inputs and
outputs are added together, DenseNet concatenates inputs and outputs
on the channel dimension.
- The main units that compose DenseNet are dense blocks and transition
layers.
- We need to keep the dimensionality under control when composing the
network by adding transition layers that shrink the number of
channels again.
Exercises
---------
1. Why do we use average pooling rather than max-pooling in the
transition layer?
2. One of the advantages mentioned in the DenseNet paper is that its
model parameters are smaller than those of ResNet. Why is this the
case?
3. One problem for which DenseNet has been criticized is its high memory
consumption.
- Is this really the case? Try to change the input shape to
:math:`224\times 224` to see the actual (GPU) memory consumption.
- Can you think of an alternative means of reducing the memory
consumption? How would you need to change the framework?
4. Implement the various DenseNet versions presented in Table 1 of
:cite:`Huang.Liu.Van-Der-Maaten.ea.2017`.
5. Why do we not need to concatenate terms if we are just interested in
:math:`\mathbf{x}` and :math:`f(\mathbf{x})` for ResNet? Why do we
need this for more than two layers in DenseNet?
6. Design a DenseNet for fully connected networks and apply it to the
Housing Price prediction task.
Scan the QR Code to `Discuss `__
-----------------------------------------------------------------
|image0|
.. |image0| image:: ../img/qr_densenet.svg