
Understanding CNN (Part 2)

Published Jul 27, 2017

This is the continuation of Part 1 of this series. If you are just getting started with the subject, please review Part 1 first.

Training Perceptron

(Figure: perceptron)

Dataset

```
import numpy as np

X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
Y = np.array([0, 1, 1, 0])
```

We start with this truth table to train our perceptron model.

X: the feature vector of each sample

Y: the label of each sample
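Written out as a truth table (the rows of X with their targets from Y), the samples are:

| x1 | x2 | x3 | y |
|----|----|----|---|
| 0  | 0  | 1  | 0 |
| 1  | 1  | 1  | 1 |
| 1  | 0  | 1  | 1 |
| 0  | 1  | 1  | 0 |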

(Figure: weights update process visualization)

We initialize our perceptron model with random values of weights.

```
W = np.random.rand(1, X.shape[1] + 1)
```
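The extra `+ 1` is for a bias weight. A minimal sketch of how the shapes line up (the variable names here are illustrative; the training code below inserts the constant column at index 2, but the position of the bias column does not affect the model):

```
# X has shape (4, 3); appending a constant-1 column makes it (4, 4),
# which matches W of shape (1, 4), so we get one output per sample.
X_b = np.insert(X, X.shape[1], values=1, axis=1)  # shape (4, 4)
outputs = X_b.dot(W.T)                            # shape (4, 1)
```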

As explained in the Training Algorithm section, we aim to improve performance at the tasks in T. In each epoch, we feed the samples forward through our perceptron to predict the output. When the predicted (calculated) output is not consistent with the desired one, we update the weights of the perceptron. Updating the weights means the values of W are either increased or decreased. This delta change is what the training algorithm dictates in order to minimize the overall error/loss as defined.

Whenever we train a model, we have a loss defined over which the weights are optimized. A typical training-loss curve looks like the plots shown below.

The learning rate controls how large each weight update is, and therefore how quickly training proceeds.

In our case we have defined our loss as the squared error. As explained in the previous part, the delta weight update works out to this:
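Concretely, for a squared-error loss on a linear unit, the update applied to each weight is Δw = −η · (ŷ − y) · x, where ŷ is the predicted output, y the true label, x the corresponding input value, and η the learning rate; the `lambda_param` of −1 in the code below supplies the minus sign.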

```
def delta_weight(eta, true_label, predicted_label, x):
    """
    :param eta: learning rate
    :param true_label: desired output for this sample
    :param predicted_label: output predicted by the perceptron
    :param x: the input value multiplied by this weight
    """
    lambda_param = -1
    delta_w = lambda_param * eta * (predicted_label - true_label) * x
    return delta_w
```

So, we have hyper-parameters to choose when we start training a model. In our case, the hyper-parameters are:

"""
eta: learning rate of the training
lambda_param: to limit the maximum delta change in weights
"""

With different learning rates we may get different loss plots. If the learning rate is high, the weight updates are large, which *may* result in faster training.

With learning rate eta = 0.0001, training took more epochs to reach the desired minimum loss.

(Figure: training-loss graph for eta = 0.0001)

With learning rate eta = 0.01, training speeds up by almost a factor of 10.

(Figure: training-loss graph for eta = 0.01)

```
import logging
import time

import numpy as np
import pandas as pd
from sklearn.utils import shuffle  # shuffles X and Y in unison


def training_perceptron(eta, X, Y, number_of_epoch=5000):
    """
    :param eta: learning rate of the perceptron
    :param X: the feature set for training
    :param Y: the target values for the feature set
    :param number_of_epoch: maximum number of passes over the data
    """
    logging.info('Training Config:\nNumber_of_epoch: {} Eta: {}'.format(number_of_epoch, eta))
    W = np.random.rand(1, X.shape[1] + 1)  # one extra weight for the bias
    loss_log = []
    X = np.insert(X, 2, values=1, axis=1)  # constant-1 column so the bias weight has an input
    for epoch in range(number_of_epoch):
        X, Y = shuffle(X, Y)
        loss = 0.0

        for index, (feature_row, true_label) in enumerate(zip(X, Y)):
            theta = np.dot(np.array(feature_row), W.T)
            # predicted_output = 1 if theta > 0 else 0
            predicted_output = float(theta)

            loss += (true_label - predicted_output) ** 2
            delta_W = [delta_weight(eta, true_label, predicted_output, x) for x in feature_row]
            logging.debug([feature_row, true_label, np.around(W, decimals=1), predicted_output, theta, delta_W])

            W = np.add(W, delta_W)
        if epoch % 50 == 0:
            loss_log.append([epoch, loss])
        logging.info('Epoch Summary : Epoch: {} Loss: {}'.format(epoch, loss))
        if loss < 0.001:
            break

        time.sleep(0.001)
    df = pd.DataFrame(loss_log, columns=['Epoch', 'Loss'])
    logging.info(df)
    df.to_csv('training_log.csv')
    return number_of_epoch
```
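A minimal way to reproduce the two runs above (a sketch, assuming `delta_weight` and `training_perceptron` are defined as shown and that logging has been configured):

```
logging.basicConfig(level=logging.INFO)

X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
Y = np.array([0, 1, 1, 0])

# Slow run: the small learning rate needs many more epochs to reach the loss threshold.
training_perceptron(0.0001, X, Y)

# Faster run: roughly a 10x speed-up on this dataset.
training_perceptron(0.01, X, Y)
```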






This post originally appeared [here](http://amitkushwaha.co.in/understanding-cnn-part-2.html).