Comparison of Neural Network Models for Automatic Quadcopter Flight Control Along a Given Trajectory
Abstract
Keywords
References
Åström K. J., Hägglund T. The future of PID control // Control Engineering Practice, Vol. 9, Is. 11, 2001, pp. 1163-1175. URL: https://doi.org/10.1016/S0967-0661(01)00062-4
Bannwarth J. X. J., Chen Z. J., Stol K. A., MacDonald B. A. Disturbance accommodation control for wind rejection of a quadcopter // International Conference on Unmanned Aircraft Systems (ICUAS), Arlington, VA, USA, 2016. URL: https://doi.org/10.1109/ICUAS.2016.7502632
Berner C., Brockman G. et al. Dota 2 with Large Scale Deep Reinforcement Learning // OpenAI, 2019. URL: https://doi.org/10.48550/arXiv.1912.06680
Bock S., Weiß M. A proof of local convergence for the Adam optimizer // International Joint Conference on Neural Networks (IJCNN), IEEE, 2019. URL: https://doi.org/10.1109/IJCNN.2019.8852239
Ding L., He Q., Wang C., Qi R. Disturbance rejection attitude control for a quadrotor: Theory and experiment // International Journal of Aerospace Engineering, 2021, Vol. 2, pp. 1-15. URL: https://doi.org/10.1155/2021/8850071.
Engstrom L., Ilyas A., Santurkar S., Tsipras D., Janoos F., Rudolph L., Madry A. Implementation Matters in Deep Policy Gradients: A Case Study on PPO and TRPO // International Conference on Learning Representations (ICLR), 2020. URL: https://doi.org/10.48550/arXiv.2005.12729
Fan J., Saadeghvaziri M. Applications of Drones in Infrastructures: Challenges and Opportunities // International Journal of Mechanical, Industrial and Aerospace Sciences, Vol. 12, No. 10, 2019. URL: https://doi.org/10.5281/zenodo.3566281
Grondman I., Busoniu L., Lopes G. A. D., Babuska R. A Survey of Actor-Critic Reinforcement Learning: Standard and Natural Policy Gradients // IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 42, no. 6, pp. 1291-1307, 2012. URL: https://doi.org/10.1109/TSMCC.2012.2218595
Gupta O. Precision Agriculture with Drones: A New Age of Farming // AkiNik Publications, 2025. 82 pp. URL: https://doi.org/10.22271/ed.book.3126
Henderson P., Islam R., Bachman P., Pineau J., Precup D., Meger D. Deep reinforcement learning that matters // Thirty-Second AAAI Conference on Artificial Intelligence, 2018. URL: https://doi.org/10.1609/aaai.v32i1.11694
Iman S., Aria A. Self-Tuning PID Control via a Hybrid Actor-Critic-Based Neural Structure for Quadcopter Control // The 30th Annual International Conference of Iranian Society of Mechanical Engineers, 2022, Iran. URL: https://doi.org/10.48550/arXiv.2307.01312
Kingma D.P., Ba J. Adam: A method for stochastic optimization // 3rd International Conference for Learning Representations, San Diego, 2015. URL: https://doi.org/10.48550/arXiv.1412.6980
Li Y., Zhu Q., Elahi A. Quadcopter trajectory tracking control based on flatness model predictive control and neural network // Actuators 2024, Vol. 13, 154, 20 p. URL: https://doi.org/10.3390/act13040154.
Lopez-Sanchez I., Moreno-Valenzuela J. PID control of quadrotor UAVs: A survey // Annual Reviews in Control. 2023. Vol. 56. p. 100900. URL: https://doi.org/10.1016/j.arcontrol.2023.100900.
Mahran Y., Gamal Z., El-Badawy A. Reinforcement Learning Position Control of a Quadrotor Using Soft Actor-Critic (SAC) // 6th Novel Intelligent and Leading Emerging Sciences Conference (NILES). IEEE, 2024. URL: https://doi.org/10.1109/NILES63360.2024.10753187
Nguyen N. P., Mung N. X., Thanh H. L. N. N., Huynh T. T., Lam N. T., Hong S. K. Adaptive Sliding Mode Control for Attitude and Altitude System of a Quadcopter UAV via Neural Network // IEEE Access, vol. 9, pp. 40076-40085, 2021. URL: https://doi.org/10.1109/ACCESS.2021.3064883.
Pounds P.E.I., Bersak D.R., Dollar A.M. Stability of small-scale UAV helicopters and quadrotors with added payload mass under PID control // Autonomous Robots, Vol. 33, 2012, pp. 129-142. URL: https://doi.org/10.1007/s10514-012-9280-5.
Rumelhart D.E., Hinton G.E., Williams R.J. Learning representations by back-propagating errors // Nature, Vol. 323, 1986, pp. 533-536. URL: https://doi.org/10.1038/323533a0.
Schulman J., Wolski F., Dhariwal P., Radford A., Klimov O. Proximal Policy Optimization Algorithms // OpenAI, 2017. URL: https://doi.org/10.48550/arXiv.1707.06347.
Shahmoradi J., Talebi E., Roghanchi P., Hassanalian M. A Comprehensive Review of Applications of Drone Technology in the Mining Industry // Drones 2020, vol. 4, no. 3: 34. URL: https://doi.org/10.3390/drones4030034.
Sutton R.S., Barto A.G. Reinforcement learning: An introduction // IEEE Transactions on Neural Networks, Vol. 9, Is. 5, 1998, p. 1054. URL: https://doi.org/10.1109/TNN.1998.712192.
Tangkaratt V., Abdolmaleki A., Sugiyama M. Guide Actor-Critic for Continuous Control // International Conference on Learning Representations (ICLR), 2018. URL: https://doi.org/10.48550/arXiv.1705.07606.
Tripathi V.K., Behera L., Verma N. Design of sliding mode and backstepping controllers for a quadcopter // 39th National Systems Conference (NSC), IEEE, 2015. URL: https://doi.org/10.1109/NATSYS.2015.7489097.
Waharte S., Trigoni N. Supporting Search and Rescue Operations with UAVs // International Conference on Emerging Security Technologies, Canterbury, UK, 2010. URL: https://doi.org/10.1109/EST.2010.31.
Zhang J., Rivera C.E.O., Tyni K., Nguyen S., Leal U. S. C., Shoukry Y. AirPilot Drone Controller: Enabling Interpretable On-the-Fly PID Auto-Tuning via DRL // IEEE 6th International Conference on Civil Aviation Safety and Information Technology (ICCASIT), 2024. URL: https://doi.org/10.1109/ICCASIT62299.2024.10828099.
Zhou J., Wang H., Wei J., Liu L., Huang X., Gao S., Liu W., Li J., Yu C., Li Z. Adaptive moment estimation for polynomial nonlinear equalizer in PAM8-based optical interconnects // Optics Express, 2019, Vol. 27, No. 22, pp. 32210-32216. URL: https://doi.org/10.1364/OE.27.032210
Gurchinsky M. M., Tebueva F. B. Intruder Detection by Agents of Swarm Robotic Systems in a Non-Deterministic Operating Environment // SIIT, 2024, Vol. 6, No. 3(18), pp. 71-82. (In Russian). URL: https://doi.org/10.54708/2658-5014-SIIT-2024-no3-p71. EDN AUVYOX.
Muslimov T. Z. Methods and Algorithms for Formation Control of Fixed-Wing Unmanned Aerial Vehicles // SIIT, 2024, Vol. 6, No. 1(16), pp. 3-15. (In Russian). URL: https://doi.org/10.54708/2658-5014-SIIT-2024-no1-p3. EDN HOTUZU.
Prikhodko V. E., Teplyashin P. N., Plotnikov A. V., Shebukhov O. A. Practical Implementation of a Mobile Group Communication System Based on Neural Networks // SIIT, 2025, Vol. 7, No. 1(20), pp. 96-104. (In Russian). URL: https://doi.org/10.54708/2658-5014-SIIT-2025-no1-p96. EDN UYDDVC.
Saitova G. A., Gabdullina E. R. Methodology for Determining Field Projective Cover Based on Remote Monitoring // SIIT, 2025, Vol. 7, No. 2(21), pp. 48-55. (In Russian). URL: https://doi.org/10.54708/2658-5014-SIIT-2025-no2-p48. EDN XTKJHQ.
DOI: https://doi.org/10.54708/2658-5014-SIIT-2025-no5-p86
(c) 2025 R. D. Khalilov, T. Z. Muslimov




