
MACHINE LEARNING COURSERA ASSIGNMENT ANSWERS WEEK 4

 

lrCostFunction.m :

function [J, grad] = lrCostFunction(theta, X, y, lambda)
  %LRCOSTFUNCTION Compute cost and gradient for logistic regression with 
  %regularization
  %   J = LRCOSTFUNCTION(theta, X, y, lambda) computes the cost of using
  %   theta as the parameter for regularized logistic regression and the
  %   gradient of the cost w.r.t. the parameters. 
  
  % Initialize some useful values
  m = length(y); % number of training examples
  
  % You need to return the following variables correctly 
  J = 0;
  grad = zeros(size(theta));
  
  % ====================== YOUR CODE HERE ======================
  % Instructions: Compute the cost of a particular choice of theta.
  %               You should set J to the cost.
  %               Compute the partial derivatives and set grad to the partial
  %               derivatives of the cost w.r.t. each parameter in theta
  %
  % Hint: The computation of the cost function and gradients can be
  %       efficiently vectorized. For example, consider the computation
  %
  %           sigmoid(X * theta)
  %
  %       Each row of the resulting matrix will contain the value of the
  %       prediction for that example. You can make use of this to vectorize
  %       the cost function and gradient computations. 
  %
  % Hint: When computing the gradient of the regularized cost function, 
  %       there're many possible vectorized solutions, but one solution
  %       looks like:
  %           grad = (unregularized gradient for logistic regression)
  %           temp = theta; 
  %           temp(1) = 0;   % because we don't add anything for j = 0  
  %           grad = grad + YOUR_CODE_HERE (using the temp variable)
  %
  
  %DIMENSIONS: 
  %   theta = (n+1) x 1
  %   X     = m x (n+1)
  %   y     = m x 1
  %   grad  = (n+1) x 1
  %   J     = Scalar
  
  z   = X * theta;   % m x 1
  h_x = sigmoid(z);  % m x 1 
  
  reg_term = (lambda/(2*m)) * sum(theta(2:end).^2);
  
  J = (1/m)*sum((-y.*log(h_x))-((1-y).*log(1-h_x))) + reg_term; % scalar
  
  grad(1) = (1/m) * (X(:,1)'*(h_x-y));                                    % 1 x 1
  grad(2:end) = (1/m) * (X(:,2:end)'*(h_x-y)) + (lambda/m)*theta(2:end);  % n x 1
  
  % =============================================================
  
  grad = grad(:);
end
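For reference, these are the regularized logistic-regression cost and gradient that the vectorized code above implements (note that the bias term theta_0 is not regularized), written in LaTeX:

J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\Big[-y^{(i)}\log\big(h_\theta(x^{(i)})\big) - (1-y^{(i)})\log\big(1-h_\theta(x^{(i)})\big)\Big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2

\frac{\partial J}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)x_0^{(i)}, \qquad
\frac{\partial J}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)x_j^{(i)} + \frac{\lambda}{m}\theta_j \quad (j \ge 1)

where h_\theta(x) = \mathrm{sigmoid}(\theta^T x).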

 

oneVsAll.m :

function [all_theta] = oneVsAll(X, y, num_labels, lambda)
  %ONEVSALL trains multiple logistic regression classifiers and returns all
  %the classifiers in a matrix all_theta, where the i-th row of all_theta 
  %corresponds to the classifier for label i
  %   [all_theta] = ONEVSALL(X, y, num_labels, lambda) trains num_labels
  %   logistic regression classifiers and returns each of these classifiers
  %   in a matrix all_theta, where the i-th row of all_theta corresponds 
  %   to the classifier for label i
  
  % num_labels = No. of output classes (Here, it is 10)
  
  % Some useful variables
  m = size(X, 1);        % No. of Training Samples == No. of Images : (Here, 5000) 
  n = size(X, 2);        % No. of features == No. of pixels in each Image : (Here, 400)
  
  % You need to return the following variables correctly 
  all_theta = zeros(num_labels, n + 1);  
  %DIMENSIONS: num_labels x (input_layer_size+1) == num_labels x (no_of_features+1) == 10 x 401
  
  %DIMENSIONS: X = m x input_layer_size
  %Here, 1 row in X represents 1 training Image of pixel 20x20
  
  % Add ones to the X data matrix
  X = [ones(m, 1) X];   %DIMENSIONS: X = m x (input_layer_size+1) = m x (no_of_features+1)
  
  % ====================== YOUR CODE HERE ======================
  % Instructions: You should complete the following code to train num_labels
  %               logistic regression classifiers with regularization
  %               parameter lambda. 
  %
  % Hint: theta(:) will return a column vector.
  %
  % Hint: You can use y == c to obtain a vector of 1's and 0's that tell you
  %       whether the ground truth is true/false for this class.
  %
  % Note: For this assignment, we recommend using fmincg to optimize the cost
  %       function. It is okay to use a for-loop (for c = 1:num_labels) to
  %       loop over the different classes.
  %
  %       fmincg works similarly to fminunc, but is more efficient when we
  %       are dealing with large number of parameters.
  %
  % Example Code for fmincg:
  %
  %     % Set Initial theta
  %     initial_theta = zeros(n + 1, 1);
  %     
  %     % Set options for fminunc
  %     options = optimset('GradObj', 'on', 'MaxIter', 50);
  % 
  %     % Run fmincg to obtain the optimal theta
  %     % This function will return theta and the cost 
  %     [theta] = ...
  %         fmincg (@(t)(lrCostFunction(t, X, (y == c), lambda)), ...
  %                 initial_theta, options);
  %
  
  initial_theta = zeros(n+1, 1);
  options = optimset('GradObj', 'on', 'MaxIter', 50);
  
  for c = 1:num_labels
      all_theta(c,:) = ...
          fmincg (@(t)(lrCostFunction(t, X, (y == c), lambda)), ...
                  initial_theta, options);
  end
  
  % =========================================================================
end
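A minimal usage sketch for oneVsAll, assuming the ex3 data file (ex3data1.mat, which provides X and y) is available and that lrCostFunction.m and fmincg.m are on the path; lambda = 0.1 is the value ex3.m uses:

load('ex3data1.mat');       % provides X (5000 x 400) and y (5000 x 1)
num_labels = 10;            % ten digit classes, with "0" stored as label 10
lambda = 0.1;               % regularization strength, as in ex3.m
all_theta = oneVsAll(X, y, num_labels, lambda);     % 10 x 401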

 

predictOneVsAll.m :

function p = predictOneVsAll(all_theta, X)
  %PREDICT Predict the label for a trained one-vs-all classifier. The labels
  %are in the range 1..K, where K = size(all_theta, 1).
  %  p = PREDICTONEVSALL(all_theta, X) will return a vector of predictions
  %  for each example in the matrix X. Note that X contains the examples in
  %  rows. all_theta is a matrix where the i-th row is a trained logistic
  %  regression theta vector for the i-th class. You should set p to a vector
  %  of values from 1..K (e.g., p = [1; 3; 1; 2] predicts classes 1, 3, 1, 2
  %  for 4 examples)
  
  m = size(X, 1);     % No. of Input Examples to Predict (Each row = 1 Example)
  num_labels = size(all_theta, 1); % No. of output classifiers
  
  % You need to return the following variables correctly
  p = zeros(size(X, 1), 1);    % No_of_Input_Examples x 1 == m x 1
  
  % Add ones to the X data matrix
  X = [ones(m, 1) X];
  
  % ====================== YOUR CODE HERE ======================
  % Instructions: Complete the following code to make predictions using
  %               your learned logistic regression parameters (one-vs-all).
  %               You should set p to a vector of predictions (from 1 to
  %               num_labels).
  %
  % Hint: This code can be done all vectorized using the max function.
  %       In particular, the max function can also return the index of the
  %       max element, for more information see 'help max'. If your examples
  %       are in rows, then, you can use max(A, [], 2) to obtain the max
  %       for each row.
  %
  % num_labels = No. of output classes (Here, it is 10)
  % DIMENSIONS:
  % all_theta = 10 x 401 = num_labels x (input_layer_size+1) == num_labels x (no_of_features+1)
  
  prob_mat = X * all_theta';      % 5000 x 10 == no_of_input_images x num_labels
  [prob, p] = max(prob_mat,[],2); % m x 1
  %returns the largest value in each row and its index, for each input image
  %p: predicted output (the class index with the highest score)
  %prob: the winning score; sigmoid is monotonic, so the largest value of X*all_theta'
  %      picks the same class as the largest probability would
  
  %%%%%%%% WORKING: Computation per input image %%%%%%%%%
  % for i = 1:m                               % To iterate through each input sample
  %     one_image = X(i,:);                   % 1 x 401 == 1 x no_of_features
  %     prob_mat = one_image * all_theta';    % 1 x 10  == 1 x num_labels
  %     [prob, out] = max(prob_mat);
  %     %out: predicted output
  %     %prob: probability of predicted output
  %     p(i) = out;
  % end
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  
  %%%%%%%% WORKING %%%%%%%%%
  % for i = 1:m
  %     RX = repmat(X(i,:),num_labels,1);
  %     RX = RX .* all_theta;
  %     SX = sum(RX,2);
  %     [val, index] = max(SX);
  %     p(i) = index;
  % end
  %%%%%%%%%%%%%%%%%%%%%%%%%%
  % =========================================================================
end
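Continuing the sketch above (assuming all_theta was just trained and X, y are still loaded), the one-vs-all predictions and training accuracy can be checked like this:

pred = predictOneVsAll(all_theta, X);                % m x 1 vector of labels in 1..10
fprintf('Training accuracy: %f\n', mean(double(pred == y)) * 100);   % roughly 95%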

predict.m :

function p = predict(Theta1, Theta2, X)
  %PREDICT Predict the label of an input given a trained neural network
  %   p = PREDICT(Theta1, Theta2, X) outputs the predicted label of X given the
  %   trained weights of a neural network (Theta1, Theta2)
  
  % Useful values
  m = size(X, 1);
  num_labels = size(Theta2, 1);
  
  % You need to return the following variables correctly 
  p = zeros(size(X, 1), 1);  % m x 1
  
  % ====================== YOUR CODE HERE ======================
  % Instructions: Complete the following code to make predictions using
  %               your learned neural network. You should set p to a 
  %               vector containing labels between 1 to num_labels.
  %
  % Hint: The max function might come in useful. In particular, the max
  %       function can also return the index of the max element, for more
  %       information see 'help max'. If your examples are in rows, then, you
  %       can use max(A, [], 2) to obtain the max for each row.
  %
  %DIMENSIONS:
  % theta1 = 25 x 401
  % theta2 = 10 x 26
  
  % layer1 (input)  = 400 nodes + 1 bias
  % layer2 (hidden) = 25 nodes + 1 bias 
  % layer3 (output) = 10 nodes
  % 
  % theta dimensions = S_(j+1) x ((S_j)+1)
  % theta1 = 25 x 401
  % theta2 = 10 x 26
  
  % theta1:
  %     row i:    thetas connecting all nodes of layer1 to node i of layer2
  %     column j: thetas connecting node j of layer1 to all nodes of layer2
  %
  % theta2:
  %     row i:    thetas connecting all nodes of layer2 to node i of layer3
  %     column j: thetas connecting node j of layer2 to all nodes of layer3
      
  a1 = [ones(m,1) X]; % 5000 x 401 == no_of_input_images x no_of_features % Adding 1 in X 
  %No. of rows = no. of input images
  %No. of Column = No. of features in each image
  
  z2 = a1 * Theta1';  % 5000 x 25
  a2 = sigmoid(z2);   % 5000 x 25
 
  a2 =  [ones(size(a2,1),1) a2];  % 5000 x 26
  
  z3 = a2 * Theta2';  % 5000 x 10
  a3 = sigmoid(z3);  % 5000 x 10
  
  [prob, p] = max(a3,[],2); 
  %returns the maximum element in each row == max. probability and its index for each input image
  %p: predicted output (index)
  %prob: probability of predicted output
  
  % =========================================================================
end
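A usage sketch for predict, assuming the pre-trained weights shipped with ex3 (ex3weights.mat, which provides Theta1 and Theta2) and the same X, y as before:

load('ex3weights.mat');     % provides the pre-trained Theta1 (25 x 401) and Theta2 (10 x 26)
pred = predict(Theta1, Theta2, X);
fprintf('Training accuracy: %f\n', mean(double(pred == y)) * 100);   % roughly 97.5%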

MACHINE LEARNING COURSERA QUIZ ANSWERS WEEK 4

    1. Which of the following statements are true? Check all that apply.
        •  Any logical function over binary-valued (0 or 1) inputs x1 and x2 can be (approximately) represented using some neural network.
        •  Suppose you have a multi-class classification problem with three classes, trained with a 3 layer network. Let a_1^(3) = (h_Θ(x))_1 be the activation of the first output unit, and similarly a_2^(3) = (h_Θ(x))_2 and a_3^(3) = (h_Θ(x))_3. Then for any input x, it must be the case that a_1^(3) + a_2^(3) + a_3^(3) = 1.
        •  A two layer (one input layer, one output layer; no hidden layer) neural network can represent the XOR function.
        •  The activation values of the hidden units in a neural network, with the sigmoid activation function applied at every layer, are always in the range (0, 1).

    2. Consider the following neural network, which takes two binary-valued inputs
       x1, x2 ∈ {0, 1} and outputs h_Θ(x). Which of the following logical functions does it (approximately) compute?
       (figure not reproduced)

        •  AND

           This network outputs approximately 1 only when both inputs are 1 (see the sketch after this list).

        •  NAND (meaning “NOT AND”)
        •  OR
        •  XOR (exclusive OR)
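
      The quiz figure is not reproduced above, so as an illustration only, here is a quick check of an AND-style network using the weights from the lecture (-30, +20, +20); these are assumed values, not read from the quiz image:

      sig = @(z) 1 ./ (1 + exp(-z));              % sigmoid
      x = [0 0; 0 1; 1 0; 1 1];                   % every combination of the two binary inputs
      h = sig(-30 + 20*x(:,1) + 20*x(:,2));       % approx. 0, 0, 0, 1  ->  AND
      disp([x round(h)]);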

 


    3. Consider the following neural network, which takes two binary-valued inputs
       x1, x2 ∈ {0, 1} and outputs h_Θ(x). Which of the following logical functions does it (approximately) compute?
       (figure not reproduced)

        •  AND
        •  NAND (meaning “NOT AND”)
        •  OR

           This network outputs approximately 1 when at least one input is 1 (see the sketch after this list).

        •  XOR (exclusive OR)
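
      Likewise, an OR-style network can be checked with the lecture's assumed weights (-10, +20, +20); again these are illustrative values, not read from the quiz image:

      sig = @(z) 1 ./ (1 + exp(-z));
      x = [0 0; 0 1; 1 0; 1 1];
      h = sig(-10 + 20*x(:,1) + 20*x(:,2));       % approx. 0, 1, 1, 1  ->  OR
      disp([x round(h)]);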

 


    4. Consider the neural network given below. Which of the following equations correctly computes the activation a_1^(3)? Note: g(z) is the sigmoid activation function.
      (figure not reproduced)

        •  

           This correctly uses the first row of Θ^(2) and includes the “+1” term of a_0^(2).

        •  
        •  
        •  

 


  5. You have the following neural network:
    (figure not reproduced)
    You’d like to compute the activations of the hidden layer a^(2). One way to do
    so is with a for loop over the hidden units (the quiz shows this as Octave code, not reproduced here).
    You want to have a vectorized implementation of this (i.e., one that does not use for loops). Which of the following implementations correctly compute a^(2)? Check all
    that apply.

      •  z = Theta1 * x; a2 = sigmoid (z);

          This version computes a^(2) correctly in two steps: first the matrix multiplication, then the sigmoid activation (see the sketch after this list).

      •  a2 = sigmoid (x * Theta1);
      •  a2 = sigmoid (Theta2 * x);
    •  z = sigmoid(x); a2 = sigmoid (Theta1 * z);
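
    A dimension check makes the vectorization argument concrete. The sizes below are made up for illustration (they are not taken from the quiz figure):

    sig = @(z) 1 ./ (1 + exp(-z));
    Theta1 = rand(2, 4);       % hypothetical: 2 hidden units, 3 inputs plus the bias
    x = [1; rand(3, 1)];       % (n+1) x 1 column vector, bias term included
    z = Theta1 * x;            % (2 x 4) * (4 x 1) = 2 x 1
    a2 = sig(z);               % the correct option: multiply first, then apply the sigmoid
    % a2 = sig(x * Theta1);    % wrong: (4 x 1) * (2 x 4) is not a valid matrix product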

 

 

  6. You are using the neural network pictured below and have learned the parameters Θ^(1) (used to compute a^(2)) and Θ^(2) (used to compute h_Θ(x) as a function of a^(2)). Suppose you swap the parameters for the first hidden layer between its two units, so the corresponding rows of Θ^(1) are exchanged, and also swap the output layer’s weights for those two units in Θ^(2). How will this change the value of the output h_Θ(x)?
    (figure not reproduced)

      •  It will stay the same.

         Swapping the rows of Θ^(1) swaps the two hidden units’ outputs in a^(2). But the matching swap of the Θ^(2) weights cancels out that change, so the output remains unchanged (the sketch below demonstrates this numerically).

      •  It will increase.
      •  It will decrease
    •  Insufficient information to tell: it may increase or decrease.
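
    A small numeric sketch of the swap argument, with made-up weights (the quiz network's actual weights are not reproduced here):

    sig = @(z) 1 ./ (1 + exp(-z));
    Theta1 = [0.5 -1.0 2.0; -0.3 0.8 -1.5];   % 2 x 3, hypothetical weights (bias column first)
    Theta2 = [0.2 1.1 -0.7];                  % 1 x 3, hypothetical weights
    x = [1; 0.4; -0.9];                       % one input, bias term included

    h  = sig(Theta2 * [1; sig(Theta1 * x)]);             % original output

    Theta1s = Theta1([2 1], :);               % swap the two hidden units (rows of Theta1)
    Theta2s = Theta2(:, [1 3 2]);             % swap the matching weights of Theta2 (bias stays)
    hs = sig(Theta2s * [1; sig(Theta1s * x)]);           % identical to h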

 

MACHINE LEARNING COURSERA ASSIGNMENT ANSWERS WEEK 5

 

sigmoidGradient.m :

function g = sigmoidGradient(z)
  %SIGMOIDGRADIENT returns the gradient of the sigmoid function
  %evaluated at z
  %   g = SIGMOIDGRADIENT(z) computes the gradient of the sigmoid function
  %   evaluated at z. This should work regardless if z is a matrix or a
  %   vector. In particular, if z is a vector or matrix, you should return
  %   the gradient for each element.
  
  g = zeros(size(z));
  
  % ====================== YOUR CODE HERE ======================
  % Instructions: Compute the gradient of the sigmoid function evaluated at
  %               each value of z (z can be a matrix, vector or scalar).
  
  g = sigmoid(z).*(1-sigmoid(z));
  
  % =============================================================
end
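A quick sanity check, similar to the one ex4.m performs:

g = sigmoidGradient([-1 -0.5 0 0.5 1])
% g(3) is exactly 0.25, the maximum of the sigmoid gradient;
% the other entries shrink toward 0 as |z| grows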

 

randInitializeWeights.m :

function W = randInitializeWeights(L_in, L_out)
  %RANDINITIALIZEWEIGHTS Randomly initialize the weights of a layer with L_in
  %incoming connections and L_out outgoing connections
  %   W = RANDINITIALIZEWEIGHTS(L_in, L_out) randomly initializes the weights 
  %   of a layer with L_in incoming connections and L_out outgoing 
  %   connections. 
  %
  %   Note that W should be set to a matrix of size(L_out, 1 + L_in) as
  %   the first column of W handles the "bias" terms
  %
  
  % You need to return the following variables correctly 
  W = zeros(L_out, 1 + L_in);
  
  % ====================== YOUR CODE HERE ======================
  % Instructions: Initialize W randomly so that we break the symmetry while
  %               training the neural network.
  %
  % Note: The first column of W corresponds to the parameters for the bias unit
  %
  % epsilon_init = 0.12;
  
  epsilon_init = sqrt(6)/(sqrt(L_in)+sqrt(L_out));
  W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;
  
  % =========================================================================
end
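A quick check that the initialization breaks symmetry while keeping the weights small (layer sizes taken from this assignment's network):

W = randInitializeWeights(400, 25);                 % 25 x 401, as used for Theta1
epsilon_init = sqrt(6) / (sqrt(400) + sqrt(25));    % about 0.098 for these layer sizes
all(abs(W(:)) < epsilon_init)                       % 1: weights are small but not identical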

nnCostFunction.m :

function [J, grad] = nnCostFunction(nn_params, ...
      input_layer_size, ...
      hidden_layer_size, ...
      num_labels, ...
      X, y, lambda)
  %NNCOSTFUNCTION Implements the neural network cost function for a two layer
  %neural network which performs classification
  %   [J grad] = NNCOSTFUNCTION(nn_params, hidden_layer_size, num_labels, ...
  %   X, y, lambda) computes the cost and gradient of the neural network. The
  %   parameters for the neural network are "unrolled" into the vector
  %   nn_params and need to be converted back into the weight matrices.
  %
  %   The returned parameter grad should be a "unrolled" vector of the
  %   partial derivatives of the neural network.
  %
  
  % Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
  % for our 2 layer neural network
  % DIMENSIONS:
  % Theta1 = 25 x 401
  % Theta2 = 10 x 26
  
  Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
      hidden_layer_size, (input_layer_size + 1));
  
  Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
      num_labels, (hidden_layer_size + 1));
  
  % Setup some useful variables
  m = size(X, 1);
  
  % You need to return the following variables correctly
  J = 0;
  Theta1_grad = zeros(size(Theta1)); %25 x401
  Theta2_grad = zeros(size(Theta2)); %10 x 26
  
  % ====================== YOUR CODE HERE ======================
  % Instructions: You should complete the code by working through the
  %               following parts.
  %
  % Part 1: Feedforward the neural network and return the cost in the
  %         variable J. After implementing Part 1, you can verify that your
  %         cost function computation is correct by verifying the cost
  %         computed in ex4.m
  %
  % Part 2: Implement the backpropagation algorithm to compute the gradients
  %         Theta1_grad and Theta2_grad. You should return the partial derivatives of
  %         the cost function with respect to Theta1 and Theta2 in Theta1_grad and
  %         Theta2_grad, respectively. After implementing Part 2, you can check
  %         that your implementation is correct by running checkNNGradients
  %
  %         Note: The vector y passed into the function is a vector of labels
  %               containing values from 1..K. You need to map this vector into a
  %               binary vector of 1's and 0's to be used with the neural network
  %               cost function.
  %
  %         Hint: We recommend implementing backpropagation using a for-loop
  %               over the training examples if you are implementing it for the
  %               first time.
  %
  % Part 3: Implement regularization with the cost function and gradients.
  %
  %         Hint: You can implement this around the code for
  %               backpropagation. That is, you can compute the gradients for
  %               the regularization separately and then add them to Theta1_grad
  %               and Theta2_grad from Part 2.
  %
  
  %%%%%%%%%%% Part 1: Calculating J w/o Regularization %%%%%%%%%%%%%%%
  
  X = [ones(m,1), X];  % Adding 1 as first column in X
  
  a1 = X; % 5000 x 401
  
  z2 = a1 * Theta1';  % m x hidden_layer_size == 5000 x 25
  a2 = sigmoid(z2); % m x hidden_layer_size == 5000 x 25
  a2 = [ones(size(a2,1),1), a2]; % Adding 1 as first column in z = (Adding bias unit) % m x (hidden_layer_size + 1) == 5000 x 26
  
  z3 = a2 * Theta2';  % m x num_labels == 5000 x 10
  a3 = sigmoid(z3); % m x num_labels == 5000 x 10
  
  h_x = a3; % m x num_labels == 5000 x 10
  
  %Converting y into vector of 0's and 1's for multi-class classification
  
  %%%%% WORKING %%%%%
  % y_Vec = zeros(m,num_labels);
  % for i = 1:m
  %     y_Vec(i,y(i)) = 1;
  % end
  %%%%%%%%%%%%%%%%%%%
  
  y_Vec = (1:num_labels)==y; % m x num_labels == 5000 x 10
  
  % Cost function without regularization
  J = (1/m) * sum(sum((-y_Vec.*log(h_x))-((1-y_Vec).*log(1-h_x))));  %scalar
  
  
  %%%%%%%%%%% Part 2: Implementing Backpropagation for Theta_grad w/o Regularization %%%%%%%%%%%%%
  
  %%%%%%% WORKING: Backpropagation using for loop %%%%%%%
  % for t=1:m
  %     % Here X is including 1 column at begining
  %     
  %     % for layer-1
  %     a1 = X(t,:)'; % (n+1) x 1 == 401 x 1
  %     
  %     % for layer-2
  %     z2 = Theta1 * a1;  % hidden_layer_size x 1 == 25 x 1
  %     a2 = [1; sigmoid(z2)]; % (hidden_layer_size+1) x 1 == 26 x 1
  %   
  %     % for layer-3
  %     z3 = Theta2 * a2; % num_labels x 1 == 10 x 1    
  %     a3 = sigmoid(z3); % num_labels x 1 == 10 x 1    
  % 
  %     yVector = (1:num_labels)'==y(t); % num_labels x 1 == 10 x 1    
  %     
  %     %calculating delta values
  %     delta3 = a3 - yVector; % num_labels x 1 == 10 x 1    
  %     
  %     delta2 = (Theta2' * delta3) .* [1; sigmoidGradient(z2)]; % (hidden_layer_size+1) x 1 == 26 x 1
  %     
  %     delta2 = delta2(2:end); % hidden_layer_size x 1 == 25 x 1 %Removing delta2 for bias node  
  %     
  %     % delta_1 is not calculated because we do not associate error with the input  
  %     
  %     % CAPITAL delta update
  %     Theta1_grad = Theta1_grad + (delta2 * a1'); % 25 x 401
  %     Theta2_grad = Theta2_grad + (delta3 * a2'); % 10 x 26
  %  
  % end
  % 
  % Theta1_grad = (1/m) * Theta1_grad; % 25 x 401
  % Theta2_grad = (1/m) * Theta2_grad; % 10 x 26
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  
  %%%%%% WORKING: Backpropagation (Vectorized Implementation) %%%%%%%
  % Here X already includes the bias column of ones
  A1 = X; % 5000 x 401
  
  Z2 = A1 * Theta1';  % m x hidden_layer_size == 5000 x 25
  A2 = sigmoid(Z2); % m x hidden_layer_size == 5000 x 25
  A2 = [ones(size(A2,1),1), A2]; % Adding 1 as first column in z = (Adding bias unit) % m x (hidden_layer_size + 1) == 5000 x 26
  
  Z3 = A2 * Theta2';  % m x num_labels == 5000 x 10
  A3 = sigmoid(Z3); % m x num_labels == 5000 x 10
  
  % h_x = a3; % m x num_labels == 5000 x 10
  
  y_Vec = (1:num_labels)==y; % m x num_labels == 5000 x 10
  
  DELTA3 = A3 - y_Vec; % 5000 x 10
  DELTA2 = (DELTA3 * Theta2) .* [ones(size(Z2,1),1) sigmoidGradient(Z2)]; % 5000 x 26
  DELTA2 = DELTA2(:,2:end); % 5000 x 25 %Removing delta2 for bias node
  
  Theta1_grad = (1/m) * (DELTA2' * A1); % 25 x 401
  Theta2_grad = (1/m) * (DELTA3' * A2); % 10 x 26
  
  %%%%%%%%%%%% WORKING: DIRECT CALCULATION OF THETA GRADIENT WITH REGULARISATION %%%%%%%%%%%
  % %Regularization term is later added in Part 3
  % Theta1_grad = (1/m) * Theta1_grad + (lambda/m) * [zeros(size(Theta1, 1), 1) Theta1(:,2:end)]; % 25 x 401
  % Theta2_grad = (1/m) * Theta2_grad + (lambda/m) * [zeros(size(Theta2, 1), 1) Theta2(:,2:end)]; % 10 x 26
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  
  
  %%%%%%%%%%%% Part 3: Adding Regularization terms to J and Theta_grad %%%%%%%%%%%%%
  reg_term = (lambda/(2*m)) * (sum(sum(Theta1(:,2:end).^2)) + sum(sum(Theta2(:,2:end).^2))); %scalar
  
  % Cost function with regularization
  J = J + reg_term; %scalar
  
  %Calculating gradients for the regularization
  Theta1_grad_reg_term = (lambda/m) * [zeros(size(Theta1, 1), 1) Theta1(:,2:end)]; % 25 x 401
  Theta2_grad_reg_term = (lambda/m) * [zeros(size(Theta2, 1), 1) Theta2(:,2:end)]; % 10 x 26
  
  %Adding regularization term to earlier calculated Theta_grad
  Theta1_grad = Theta1_grad + Theta1_grad_reg_term;
  Theta2_grad = Theta2_grad + Theta2_grad_reg_term;
  
  % -------------------------------------------------------------
  
  % =========================================================================
  
  % Unroll gradients
  grad = [Theta1_grad(:) ; Theta2_grad(:)];
end
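A minimal training sketch in the style of ex4.m, assuming X and y are loaded (e.g., from ex4data1.mat) and fmincg.m from the assignment is on the path:

input_layer_size  = 400;
hidden_layer_size = 25;
num_labels        = 10;

initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);
initial_nn_params = [initial_Theta1(:); initial_Theta2(:)];

lambda  = 1;
options = optimset('MaxIter', 50);
costFunction = @(p) nnCostFunction(p, input_layer_size, hidden_layer_size, ...
                                   num_labels, X, y, lambda);
[nn_params, cost] = fmincg(costFunction, initial_nn_params, options);

Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));
pred = predict(Theta1, Theta2, X);          % predict.m from the earlier exercise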

MACHINE LEARNING COURSERA QUIZ ANSWERS WEEK 5

 

    1. You are training a three layer neural network and would like to use backpropagation to compute the gradient of the cost function. In the backpropagation algorithm, one of the steps is to update Δ^(2)_ij := Δ^(2)_ij + δ^(3)_i * (a^(2))_j
       for every i, j. Which of the following is a correct vectorization of this step?

        •  
        •  
        •  
        •  

           This version is correct, as it takes the “outer product” of the two vectors δ^(3) and a^(2), which is a matrix such that the (i,j)-th entry is δ^(3)_i * (a^(2))_j, as desired (see the sketch below).
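
           A small sketch of what that vectorized update does, with made-up sizes matching this assignment's output and hidden layers:

           delta3 = rand(10, 1);             % hypothetical output-layer errors for one example
           a2     = rand(26, 1);             % hypothetical hidden-layer activations (bias included)

           Delta2_loop = zeros(10, 26);
           for i = 1:10
             for j = 1:26
               Delta2_loop(i, j) = Delta2_loop(i, j) + delta3(i) * a2(j);
             end
           end

           Delta2_vec = delta3 * a2';        % the outer product: the same 10 x 26 matrix in one step
           max(abs(Delta2_loop(:) - Delta2_vec(:)))    % 0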

 


    2. Suppose Theta1 is a 5×3 matrix, and Theta2 is a 4×6 matrix. You set thetaVec = [Theta1(:); Theta2(:)]. Which of the following correctly recovers Theta2?
        •  reshape(thetaVec(16 : 39), 4, 6)

           This choice is correct, since Theta1 has 15 elements, so Theta2 begins at index 16 and ends at index 16 + 24 – 1 = 39 (see the sketch after this list).

        •  reshape(thetaVec(15 : 38), 4, 6)
        •  reshape(thetaVec(16 : 24), 4, 6)
        •  reshape(thetaVec(15 : 39), 4, 6)
        • reshape(thetaVec(16 : 39), 6, 4)
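
      A quick Octave check of the indexing (using semicolon concatenation so the two unrolled matrices are stacked into one column vector):

      Theta1 = reshape(1:15, 5, 3);                     % 5 x 3  -> 15 elements
      Theta2 = reshape(101:124, 4, 6);                  % 4 x 6  -> 24 elements
      thetaVec = [Theta1(:); Theta2(:)];                % 39 x 1 unrolled vector
      isequal(reshape(thetaVec(16:39), 4, 6), Theta2)   % 1: elements 16..39 are Theta2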

 


    3. Let J(θ) = 2θ^3 + 2, let θ = 1, and ε = 0.01. Use the formula (J(θ + ε) − J(θ − ε)) / (2ε) to numerically compute an approximation to the derivative at θ = 1. What value do you get? (When θ = 1, the true/exact derivative is dJ(θ)/dθ = 6.)
        •  8
        •  6.0002

           We compute (J(1.01) − J(0.99)) / (2 · 0.01) = ((2(1.01)^3 + 2) − (2(0.99)^3 + 2)) / 0.02 = 6.0002 (see the sketch after this list).

        •  6
        •  5.9998
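
      The same computation as a quick Octave sketch, assuming the cost J(θ) = 2θ^3 + 2 stated above:

      J = @(theta) 2*theta.^3 + 2;
      theta = 1;  EPSILON = 0.01;
      approx = (J(theta + EPSILON) - J(theta - EPSILON)) / (2*EPSILON)   % 6.0002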

  4. Which of the following statements are true? Check all that apply.
      •  For computational efficiency, after we have performed gradient checking to verify that our backpropagation code is correct, we usually disable gradient checking before using backpropagation to train the network.

         Checking the gradient numerically is a debugging tool: it helps ensure a correct implementation, but it is too slow to use as a method for actually computing gradients (a minimal sketch of the idea follows this list).

      •  Computing the gradient of the cost function in a neural network has the same efficiency when we use backpropagation or when we numerically compute it using the method of gradient checking.
      •  Using gradient checking can help verify if one’s implementation of backpropagation is bug-free.

         If the gradient computed by backpropagation is the same as one computed numerically with gradient checking, this is very strong evidence that you have a correct implementation of backpropagation.

    •  Gradient checking is useful if we are using one of the advanced optimization methods (such as in fminunc) as our optimization algorithm. However, it serves little purpose if we are using gradient descent.
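
    A minimal sketch of the two-sided difference used for gradient checking; this is the idea behind the assignment's computeNumericalGradient.m, not that file verbatim:

    function numgrad = numericalGradient(J, theta)
      % Approximate the gradient of the cost handle J at theta, one parameter at a time.
      numgrad = zeros(size(theta));
      perturb = zeros(size(theta));
      e = 1e-4;
      for p = 1:numel(theta)
        perturb(p) = e;
        numgrad(p) = (J(theta + perturb) - J(theta - perturb)) / (2*e);
        perturb(p) = 0;
      end
    end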
  5. Which of the following statements are true? Check all that apply.
      •  If we are training a neural network using gradient descent, one reasonable “debugging” step to make sure it is working is to plot J(Θ) as a function of the number of iterations, and make sure it is decreasing (or at least non-increasing) after each iteration.

         Since gradient descent uses the gradient to take a step toward parameters with lower cost (i.e., lower J(Θ)), the value of J(Θ) should be equal to or lower than the previous value at each iteration if the gradient computation is correct and the learning rate is set properly.

      •  Suppose you have a three layer network with parameters Θ^(1) (controlling the function mapping from the inputs to the hidden units) and Θ^(2) (controlling the mapping from the hidden units to the outputs). If we set all the elements of Θ^(1) to be 0, and all the elements of Θ^(2) to be 1, then this suffices for symmetry breaking, since the neurons are no longer all computing the same function of the input.
      •  Suppose you are training a neural network using gradient descent. Depending on your random initialization, your algorithm may converge to different local optima (i.e., if you run the algorithm twice with different random initializations, gradient descent may converge to two different solutions).

         The cost function for a neural network is non-convex, so it may have multiple minima. Which minimum you find with gradient descent depends on the initialization.

    •  If we initialize all the parameters of a neural network to ones instead of zeros, this will suffice for the purpose of “symmetry breaking” because the parameters are no longer symmetrically equal to zero.

 

 

 
