Artificial neural networks have been successfully applied to many different problems, among which a large class relates to computer vision; in this area, convolutional neural networks are particularly successful. Most existing neural network architectures are trained on large clusters that require substantial computational resources. Optimizing neural networks, which can mean both increasing their performance and reducing the computing power they consume, is therefore a pressing task. In this paper, we propose a method for optimizing a convolutional neural network (increasing its performance and reducing its resource consumption) that is applicable when the input data are redundant. Using the Caltech256 dataset and the VGG16 network architecture, we show that the proposed method can improve network performance by 10% while maintaining accuracy and reducing resource consumption by 25%.
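The experiments are based on the VGG16 architecture (ref. 8), whose size is what makes the reported 25% resource reduction meaningful. As a self-contained sanity check, its parameter count can be derived directly from the published layer configuration; a minimal sketch in plain Python, assuming the standard 224x224-input, 1000-class ImageNet head (the abstract does not state the classifier layout used for Caltech256):

```python
# Parameter count of the VGG16 "configuration D" network from
# Simonyan & Zisserman (ref. 8): thirteen 3x3 convolutions plus
# three fully connected layers.

def conv_params(in_ch: int, out_ch: int, k: int = 3) -> int:
    """Weights plus one bias per output channel for a k x k convolution."""
    return (k * k * in_ch + 1) * out_ch

def fc_params(in_dim: int, out_dim: int) -> int:
    """Weights plus biases for a fully connected layer."""
    return (in_dim + 1) * out_dim

def vgg16_param_count(num_classes: int = 1000) -> int:
    # Output channels of the 13 conv layers, in network order.
    channels = [64, 64, 128, 128, 256, 256, 256, 512, 512, 512, 512, 512, 512]
    total, in_ch = 0, 3  # RGB input
    for out_ch in channels:
        total += conv_params(in_ch, out_ch)
        in_ch = out_ch
    # After five 2x2 max-poolings, a 224x224 input shrinks to 7x7x512.
    total += fc_params(512 * 7 * 7, 4096)
    total += fc_params(4096, 4096)
    total += fc_params(4096, num_classes)
    return total

print(vgg16_param_count())  # 138357544 — the widely quoted ~138M figure
```

Note that the two 4096-unit fully connected layers dominate this total, which is why most of VGG16's memory footprint sits in the classifier rather than the convolutional backbone.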

1. A. V. Belko, K. S. Dobratulin, and A. V. Kuznetsov, Computer Optics 45 (5), 728–735 (2021).
2. A. V. Astafiev, D. V. Titov, A. L. Zhiznyakov, and A. A. Demidov, Computer Optics 45 (2), 277–285 (2021).
3. G. Sapunov, Speeding up BERT: How to make BERT models faster (2019), https://blog.inten.to
4. C. Bucilua, R. Caruana, and A. Niculescu-Mizil, "Model compression," in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (Philadelphia, PA, USA, 2006), pp. 535–541.
5. G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," arXiv preprint arXiv:1503.02531 (2015).
6. G. Griffin, A. Holub, and P. Perona, "Caltech-256 object category dataset" (California Institute of Technology, 2007).
7. M. Stone, Journal of the Royal Statistical Society: Series B (Methodological) 36 (2), 111–133 (1974).
8. K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556 (2014).
9. D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980 (2014).
10. MakiResearchTeam, MakiFlow Framework (2021), https://github.com
11. S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in International Conference on Machine Learning (Lille, France, 2015), pp. 448–456.
12. N. Japkowicz and S. Stephen, Intelligent Data Analysis 6 (5), 429–449 (2002).