Algorithm for Calculating the Weight Values of a Convolutional Neural Network without Training
- Authors: Geidarov P.S.
Affiliations:
- Institute of Control Systems, Ministry of Science and Education of Azerbaijan
- Issue: No 3 (2024)
- Pages: 54-70
- Section: Machine Learning, Neural Networks
- URL: https://bakhtiniada.ru/2071-8594/article/view/265359
- DOI: https://doi.org/10.14357/20718594240305
- EDN: https://elibrary.ru/JVKGJU
- ID: 265359
Abstract
This study describes an algorithm by which the weights and thresholds of a convolutional neural network, as well as the number of channels in its layers, are calculated analytically. Based on the proposed algorithm, a series of experiments was carried out on recognition of the MNIST database. The experimental results described in the work show that the time needed to calculate the weights of a convolutional neural network is relatively short, amounting to fractions of a second or of a minute. The results also show that, using only 10 selected images from the MNIST database, analytically calculated convolutional neural networks are able to recognize more than half of the images in the MNIST test set without the use of neural network training algorithms. Preliminary analytical calculation of the weight values of a convolutional neural network makes it possible to speed up its subsequent training.
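The general idea — setting convolutional weights analytically from a handful of reference images instead of learning them by gradient descent — can be illustrated with a minimal sketch. Note that this is an assumption-laden toy (three synthetic 8x8 "images" standing in for selected MNIST samples, one matched-filter kernel per class channel), not the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# One reference image per class; in the paper's setting these would be
# selected MNIST digits, here they are synthetic 8x8 arrays.
templates = {c: rng.random((8, 8)) for c in range(3)}

def make_kernel(img):
    # Zero-mean, unit-norm kernel: correlation with it acts as a matched filter.
    k = img - img.mean()
    return k / np.linalg.norm(k)

# "Calculated" weights: one kernel per class, no training loop involved.
kernels = {c: make_kernel(t) for c, t in templates.items()}

def classify(img):
    # Forward pass: one correlation score per class channel, argmax decides.
    scores = {c: float(np.sum(make_kernel(img) * k)) for c, k in kernels.items()}
    return max(scores, key=scores.get)

# Each reference image is recovered by its own matched filter.
print(all(classify(t) == c for c, t in templates.items()))  # prints True
```

By Cauchy-Schwarz, a normalized image correlates most strongly with its own kernel, which is why the weights can be written down directly from the reference samples; the paper's contribution lies in extending this kind of analytic construction to full convolutional layers and thresholds.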
About the authors
Polad Geidarov
Institute of Control Systems, Ministry of Science and Education of Azerbaijan
Corresponding author.
Email: plbaku2010@gmail.com
Doctor of Technical Sciences, Associate Professor, Leading Researcher
Azerbaijan, Baku
