matlab - What is wrong with my Gradient Descent algorithm
Hi, I'm trying to implement a gradient descent algorithm for the function E(u,v) = (u*e^v - 2v*e^(-u))^2.
My starting point for the algorithm is w = (u,v) = (2,2), the learning rate is eta = 0.01 and the bound is 10^-14. Here is my MATLAB code:
function [resulttable, bounditer] = gradientdescent(w, iters, bound, eta)
% function [resulttable, bounditer] = gradientdescent(w, iters, bound, eta)
%
% Description:
%   Gradient descent error minimization for the function
%   E(u,v) = (u*exp(v) - 2*v*exp(-u))^2.
%
% Inputs:
%   'w'      1-by-2 vector with the initial weights w = [u,v]
%   'iters'  positive integer: the number of gradient descent iterations
%   'bound'  real number: the error lower bound
%   'eta'    positive real number: the learning rate of the GD algorithm
%
% Outputs:
%   'resulttable'  (iters+1)-by-6 table with the error, the partial
%                  derivatives and the weights at each GD iteration
%   'bounditer'    positive integer: the GD iteration at which the error
%                  function fell below the given error bound 'bound'

% error function
E = @(u,v) (u*exp(v) - 2*v*exp(-u))^2;
% partial derivative of E with respect to u
pEpu = @(u,v) 2*(u*exp(v) - 2*v*exp(-u))*(exp(v) + 2*v*exp(-u));
% partial derivative of E with respect to v
pEpv = @(u,v) 2*(u*exp(v) - 2*v*exp(-u))*(u*exp(v) - 2*exp(-u));

% initialize bounditer
bounditer = 0;
% create the table that holds the results
resulttable = zeros(iters+1, 6);

% iteration number
resulttable(1, 1) = 0;
% error at iteration 0
resulttable(1, 2) = E(w(1), w(2));
% value of pEpu at the initial w = (u,v)
resulttable(1, 3) = pEpu(w(1), w(2));
% value of pEpv at the initial w = (u,v)
resulttable(1, 4) = pEpv(w(1), w(2));
% initial u
resulttable(1, 5) = w(1);
% initial v
resulttable(1, 6) = w(2);

% loop over the iterations
for i = 2:iters+1
    % save the iteration number
    resulttable(i, 1) = i-1;
    % update the weights (simultaneously, via temporaries)
    temp1 = w(1) - eta*pEpu(w(1), w(2));
    temp2 = w(2) - eta*pEpv(w(1), w(2));
    w(1) = temp1;
    w(2) = temp2;
    % evaluate the error function at the new weights
    resulttable(i, 2) = E(w(1), w(2));
    % evaluate pEpu at the new point
    resulttable(i, 3) = pEpu(w(1), w(2));
    % evaluate pEpv at the new point
    resulttable(i, 4) = pEpv(w(1), w(2));
    % save the new weights
    resulttable(i, 5) = w(1);
    resulttable(i, 6) = w(2);
    % if the error function is below the specified bound, save the
    % iteration index
    if E(w(1), w(2)) < bound
        bounditer = i-1;
    end
end

This is an exercise in a machine learning course, and for some reason my results are wrong, so there must be something wrong in the code. I have tried debugging and debugging and haven't found anything wrong... Can you identify the problem here? In other words, can you check that this code is a valid gradient descent algorithm for the given function?
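For anyone who wants to cross-check the numbers outside MATLAB, here is a minimal Python re-implementation of the same update rule (the error function and its partial derivatives are copied from the code above; the function names `dEdu`/`dEdv` and the `gradient_descent` helper are my own, not from the original):

```python
import math

def E(u, v):
    # error function E(u,v) = (u*e^v - 2v*e^(-u))^2
    return (u * math.exp(v) - 2 * v * math.exp(-u)) ** 2

def dEdu(u, v):
    # partial derivative of E with respect to u
    return 2 * (u * math.exp(v) - 2 * v * math.exp(-u)) * (math.exp(v) + 2 * v * math.exp(-u))

def dEdv(u, v):
    # partial derivative of E with respect to v
    return 2 * (u * math.exp(v) - 2 * v * math.exp(-u)) * (u * math.exp(v) - 2 * math.exp(-u))

def gradient_descent(u, v, eta, iters):
    # returns a list of (iteration, error, u, v) tuples, like resulttable
    history = [(0, E(u, v), u, v)]
    for i in range(1, iters + 1):
        # simultaneous update, matching the temp1/temp2 trick in the MATLAB code
        u, v = u - eta * dEdu(u, v), v - eta * dEdv(u, v)
        history.append((i, E(u, v), u, v))
    return history

for it, err, u, v in gradient_descent(2.0, 2.0, 0.01, 5):
    print(f"iter {it}: E = {err:.4f}, u = {u:.4f}, v = {v:.4f}")
```

Running this with w = (2,2) and eta = 0.01 shows the error shrinking at every step and the weights heading toward roughly (0.63, -1.67) after 5 iterations, which agrees with the values discussed below.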
Please let me know if the question is unclear or if you need more info :)
Thanks for your effort and help! =)
Here are my results for 5 iterations and the results other people got:
Parameters: w = [2,2], eta = 0.01, bound = 10^-14, iters = 5
As discussed below the question: the others were wrong... Minimization should lead to smaller values of E(u,v); check:
E(1.4, 1.6) = 37.8 >> 3.6 = E(0.63, -1.67)
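That sanity check is easy to reproduce; a small Python sketch of the two evaluations (the function definition is copied from the MATLAB code above):

```python
import math

def E(u, v):
    # error function E(u,v) = (u*e^v - 2v*e^(-u))^2
    return (u * math.exp(v) - 2 * v * math.exp(-u)) ** 2

print(E(1.4, 1.6))     # approximately 37.8
print(E(0.63, -1.67))  # approximately 3.6
```

So the point reached by the code above does indeed have the smaller error of the two.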