Cost optimization plays an important role in any product. Gradient descent is an optimization algorithm used to solve local minimization problems and to minimize the cost function in various machine learning models. We discuss the main variants of the gradient descent algorithm: batch gradient descent, stochastic gradient descent, and mini-batch gradient descent. This paper examines the different aspects of the gradient descent and stochastic gradient descent algorithms and presents a comparative study of both.
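The three variants named above differ only in how much data each parameter update uses. A minimal sketch of all three, applied to a tiny linear-regression cost (mean squared error) — the function names, learning rate, and toy dataset here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def batch_gradient_descent(X, y, lr=0.1, epochs=100):
    """Batch GD: one update per epoch, using the gradient over the full dataset."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def stochastic_gradient_descent(X, y, lr=0.1, epochs=100, seed=0):
    """SGD: one update per training example, visited in shuffled order."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            grad = (X[i] @ w - y[i]) * X[i]  # gradient on a single example
            w -= lr * grad
    return w

def minibatch_gradient_descent(X, y, lr=0.1, epochs=100, batch_size=2, seed=0):
    """Mini-batch GD: one update per small shuffled batch, a compromise
    between the stable batch gradient and the cheap stochastic one."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        idx = rng.permutation(len(y))
        for start in range(0, len(y), batch_size):
            b = idx[start:start + batch_size]
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

# Toy problem: y = 2*x, so all three should recover a weight near 2.0
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])
w_batch = batch_gradient_descent(X, y)
w_sgd = stochastic_gradient_descent(X, y)
w_mini = minibatch_gradient_descent(X, y)
```

On this noiseless toy problem all three converge to the same weight; in practice the trade-off the paper compares is that batch GD takes smooth but expensive steps, while SGD takes cheap, noisy steps that scale to large datasets.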
©2023 Authors. Published by AIP Publishing.