Современная наука и инновации (Modern Science and Innovations)

A METHOD FOR JOINT OPTIMIZATION OF THE WEIGHTS AND STRUCTURE OF FEEDFORWARD ARTIFICIAL NEURAL NETWORKS IN DEEP MULTI-AGENT REINFORCEMENT LEARNING

https://doi.org/10.37493/2307-910X.2021.2.8

Abstract

The growing intelligence of the tasks solved by mobile cyber-physical systems (MCPS) calls for the use of artificial neural networks (ANNs) and deep multi-agent reinforcement learning (DMARL) methods.
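This page does not describe the paper's algorithm itself, so as a generic, hedged illustration of what "joint optimization of weights and structure" can mean in practice, the sketch below interleaves gradient-descent weight updates with magnitude-based pruning of connections in a tiny feedforward network. All names and hyperparameters are hypothetical choices for the example, not the authors' method.

```python
import numpy as np

# Illustrative sketch only: gradient weight updates interleaved with
# magnitude pruning (structure optimization) on a tiny feedforward net.
rng = np.random.default_rng(0)

# Toy regression data: target is the sum of the inputs
X = rng.normal(size=(256, 8))
y = X.sum(axis=1, keepdims=True)

W1 = rng.normal(scale=0.5, size=(8, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))
mask1 = np.ones_like(W1)   # structural mask: 0 = pruned connection
mask2 = np.ones_like(W2)

lr = 0.01
for step in range(500):
    # Forward pass through the masked (sparse) network
    h = np.maximum(X @ (W1 * mask1), 0.0)          # ReLU hidden layer
    pred = h @ (W2 * mask2)
    err = pred - y

    # Backward pass; gradients respect the mask, so pruned weights stay frozen
    gW2 = (h.T @ err) / len(X) * mask2
    gh = err @ (W2 * mask2).T * (h > 0)
    gW1 = (X.T @ gh) / len(X) * mask1
    W1 -= lr * gW1
    W2 -= lr * gW2

    # Structure step: every 100 updates, prune the ~10% of surviving
    # connections with the smallest magnitude
    if (step + 1) % 100 == 0:
        for W, m in ((W1, mask1), (W2, mask2)):
            alive = np.abs(W)[m == 1]
            if alive.size:
                m[np.abs(W) < np.quantile(alive, 0.1)] = 0.0

loss = float(np.mean((np.maximum(X @ (W1 * mask1), 0.0) @ (W2 * mask2) - y) ** 2))
sparsity = 1.0 - (mask1.sum() + mask2.sum()) / (mask1.size + mask2.size)
print(f"final MSE: {loss:.3f}, pruned fraction: {sparsity:.2f}")
```

Interleaving the two steps lets the surviving topology adapt to the remaining weights, which is the basic idea behind joint weight-and-structure schemes; production approaches differ in when and what they prune.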

About the Author

V. I. Petrenko
North-Caucasus Federal University
Russia



For citation (in Russian):


Петренко В.И. МЕТОД СОВМЕСТНОЙ ОПТИМИЗАЦИИ ВЕСОВ И СТРУКТУРЫ ИСКУССТВЕННЫХ НЕЙРОННЫХ СЕТЕЙ ПРЯМОГО РАСПРОСТРАНЕНИЯ ПРИ ГЛУБОКОМ МУЛЬТИАГЕНТНОМ ОБУЧЕНИИ С ПОДКРЕПЛЕНИЕМ. Современная наука и инновации. 2021;(2):91-100. https://doi.org/10.37493/2307-910X.2021.2.8

For citation:


Petrenko V.I. METHOD FOR JOINT OPTIMIZATION FEEDFORWARD ARTIFICIAL NEURAL NETWORKS WEIGHTS AND STRUCTURE IN DEEP MULTI-AGENT REINFORCEMENT LEARNING. Modern Science and Innovations. 2021;(2):91-100. (In Russ.) https://doi.org/10.37493/2307-910X.2021.2.8


Content is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2307-910X (Print)