I am trying to code an ML algorithm (logistic regression) in MATLAB. These are my functions:
sigmoid.m:
function g = sigmoid(z)
g = zeros(size(z));      % pre-allocate the output (same size as z)
g = 1 ./ (1 + exp(z));   % note: 1/(1+e^z), i.e. the logistic of -z
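Note the sign convention: this computes 1/(1+e^z), which is the standard logistic function evaluated at -z; costFunction.m below negates its argument (z = -X * theta), so the two negations cancel. A quick sanity check:

sigmoid(0)     % ans = 0.5
sigmoid(-5)    % ans = 0.9933, i.e. the standard logistic of +5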
costFunction.m:
function [J, grad] = costFunction(theta, X, y)
m = length(y);               % number of training examples
z = -X * theta;              % negated, to match the sign convention in sigmoid.m
g = sigmoid(z);              % hypothesis h_theta(x) for every example
J = 1/m * ((-y * log(g)') - ((1 - y) * log(1 - g)'));
grad = zeros(size(theta));   % pre-allocate; the gradient has the same size as theta
grad = (1/m) * (X' * (g - y));
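For reference, these lines are meant to compute the standard (vectorized) logistic regression cost and gradient, where h_\theta(x^{(i)}) corresponds to g above:

J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + \left(1 - y^{(i)}\right) \log\left(1 - h_\theta(x^{(i)})\right) \right]

\frac{\partial J}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}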
ex2.m (this is the main file of my project; below are the relevant lines that produce the error message):
options = optimset('GradObj', 'on', 'MaxIter', 400);
[theta, cost] = ...
    fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);
The error message:
Error using fminunc (line 348)
Supplied objective function must return a scalar value.

Error in ex2 (line 97)
fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);
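Checking what the objective actually returns makes the mismatch visible:

[J0, grad0] = costFunction(initial_theta, X, y);
size(J0)    % 100x100 with my costFunction above, not the 1x1 scalar fminunc requires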
I don't know whether the information above is enough; if not, let me know and I will add more.
I changed the following line of code:
J = 1/m * ((-y * log(g)') - ((1 - y) * log(1 - g)'));
To the following line of code:
J = 1/m * (((-y)' * log(g)) - ((1 - y)' * log(1 - g)));
And the problem was solved!

Both y and g were 100x1 column vectors. With the previous code, -y * log(g)' is a (100x1)*(1x100) outer product, so J came out as a 100x100 matrix; with the new code, (-y)' * log(g) is a (1x100)*(100x1) inner product, so J is a 1x1 matrix, i.e. the scalar that fminunc requires.
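To see this concretely, here is a quick shape check with stand-in vectors (random values just to illustrate the dimensions; the real y and g come from the exercise data):

y = rand(100, 1);        % stand-in labels, 100x1
g = rand(100, 1);        % stand-in sigmoid outputs, 100x1
size(-y * log(g)')       % ans = 100 100  (outer product)
size((-y)' * log(g))     % ans = 1 1      (inner product, a scalar)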