I'm probably missing something obvious, but I'm trying to understand the code provided in this e-book on deep learning, and it doesn't explain where the n_in=40*4*4 comes from. The 40 is from the 40 feature maps in the previous layer, but where does the 4*4 come from?
>>> net = Network([
        ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                      filter_shape=(20, 1, 5, 5),
                      poolsize=(2, 2),
                      activation_fn=ReLU),
        ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),
                      filter_shape=(40, 20, 5, 5),
                      poolsize=(2, 2),
                      activation_fn=ReLU),
        FullyConnectedLayer(
            n_in=40*4*4, n_out=1000, activation_fn=ReLU, p_dropout=0.5),
        FullyConnectedLayer(
            n_in=1000, n_out=1000, activation_fn=ReLU, p_dropout=0.5),
        SoftmaxLayer(n_in=1000, n_out=10, p_dropout=0.5)],
        mini_batch_size)
>>> net.SGD(expanded_training_data, 40, mini_batch_size, 0.03,
            validation_data, test_data)
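My best guess is that 4*4 is the spatial size of each feature map after the two conv/pool layers, so I wrote the little helper below to trace the sizes. I'm assuming a "valid" convolution (output = input - filter + 1) and non-overlapping max-pooling that floors odd sizes (output = input // pool); those assumptions are exactly what I'm unsure about:

>>> def trace_size(size, layers):
...     # Trace one spatial dimension through a list of (filter, pool)
...     # layers, assuming 'valid' convolution and non-overlapping pooling.
...     for filt, pool in layers:
...         size = size - filt + 1   # conv: output = input - filter + 1
...         size = size // pool      # pool: floors odd sizes (my assumption)
...     return size
...
>>> trace_size(28, [(5, 2), (5, 2)])  # 28 -> 24 -> 12 -> 8 -> 4
4

If that tracing is right, each of the 40 final feature maps is 4x4, which would give the n_in=40*4*4.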
For instance, if I do a similar analysis in 1D, as shown below, what should the n_in term be?
>>> net = Network([
        ConvPoolLayer(image_shape=(mini_batch_size, 1, 81, 1),
                      filter_shape=(20, 1, 5, 1),
                      poolsize=(2, 1),
                      activation_fn=ReLU),
        ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 1),
                      filter_shape=(40, 20, 5, 1),
                      poolsize=(2, 1),
                      activation_fn=ReLU),
        FullyConnectedLayer(
            n_in=40*???, n_out=1000, activation_fn=ReLU, p_dropout=0.5),
        FullyConnectedLayer(
            n_in=1000, n_out=1000, activation_fn=ReLU, p_dropout=0.5),
        SoftmaxLayer(n_in=1000, n_out=10, p_dropout=0.5)],
        mini_batch_size)
>>> net.SGD(expanded_training_data, 40, mini_batch_size, 0.03,
            validation_data, test_data)
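Applying the same helper to the 81-long column (again, only valid if my assumptions about the convolution and pooling arithmetic hold):

>>> trace_size(81, [(5, 2), (5, 2)])  # 81 -> 77 -> 38 -> 34 -> 17
17

That would suggest n_in=40*17*1, and also that the second ConvPoolLayer's image_shape should be (mini_batch_size, 20, 38, 1) rather than the 12 I copied over from the 2D version. But I'm not confident about how the pooling handles the odd size 77 (38 vs. 39?), so I'd appreciate a sanity check on the whole tracing.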
Thanks!