LIANG Yongxun, ZHEN Ziyang, LI Suning, et al. Research on Obstacle Avoidance Method of UAV Based on Transformer Module and CNN[J]. Machinery & Electronics, 2023, 41(05): 56-61.

Research on Obstacle Avoidance Method of UAV Based on Transformer Module and CNN

Machinery & Electronics (机械与电子) [ISSN: 1001-2257 / CN: 52-1052/TH]

Volume: 41
Issue: 2023, No. 05
Pages: 56-61
Column: Intelligent Engineering
Publication date: 2023-05-25

Article Information

Title:
Research on Obstacle Avoidance Method of UAV Based on Transformer Module and CNN
Article ID: 1001-2257(2023)05-0056-06
Author(s):
LIANG Yongxun, ZHEN Ziyang, LI Suning, LI Xiaoxuan, YAN Chuan
(College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China)
Keywords:
UAV obstacle avoidance; Swin Transformer; CNN; monocular camera
CLC number: V279
Document code: A
Abstract:
To address the problems that traditional CNN-based obstacle avoidance methods cannot obtain a global receptive field and require a large amount of computation for image feature extraction, a UAV obstacle avoidance method that improves the CNN model with the Swin Transformer module is proposed, taking a quadrotor UAV as the research object. First, Swin Transformer modules replace the Conv2D layers of the CNN model to extract global information features. Then, a Swin Transformer network with three residually connected blocks is constructed, which outputs the UAV's steering prediction and collision prediction for the current flight environment. Finally, a UAV multi-attitude mapping control system is designed to output the obstacle avoidance control commands. Experimental results show that the average accuracy of collision prediction is 96.8% and the root mean square error (RMSE) of steering prediction is 0.068, meeting the requirements of autonomous UAV obstacle avoidance.
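The abstract describes a Swin Transformer backbone feeding two output heads, one regressing a steering command and one classifying collision risk, which then drive a multi-attitude mapping controller. The following is a minimal sketch of such a two-head network, not the authors' implementation: it assumes torchvision's swin_t as a stand-in for the paper's custom three-block residual Swin network, a 224×224 monocular RGB input, a steering output scaled to [-1, 1], and a collision output expressed as a probability.

```python
# Hedged sketch (not the paper's code): a Swin Transformer backbone with two
# heads -- steering regression and collision classification -- as suggested by
# the abstract. Backbone choice, head sizes, and output scaling are assumptions.
import torch
import torch.nn as nn
from torchvision.models import swin_t


class SwinObstacleNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Stand-in backbone: torchvision Swin-Tiny, randomly initialized.
        self.backbone = swin_t(weights=None)
        feat_dim = self.backbone.head.in_features      # 768 for Swin-Tiny
        self.backbone.head = nn.Identity()             # keep pooled features only
        # Steering head: single value squashed to [-1, 1] (assumed scaling).
        self.steer_head = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Tanh())
        # Collision head: probability of imminent collision in [0, 1].
        self.coll_head = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, img):
        feats = self.backbone(img)                     # (B, 768) pooled features
        return self.steer_head(feats), self.coll_head(feats)


if __name__ == "__main__":
    net = SwinObstacleNet()
    frame = torch.randn(1, 3, 224, 224)                # one monocular RGB frame
    steer, collision = net(frame)
    print(steer.shape, collision.shape)                # torch.Size([1, 1]) twice
```

In a setup like this, the collision probability could gate forward velocity while the steering value maps to an attitude command, which is the role the abstract assigns to the multi-attitude mapping control system.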


Memo

Received: 2022-10-29
About the authors: LIANG Yongxun (1999- ), male, from Jiujiang, Jiangxi, master's degree candidate; research interest: UAV obstacle avoidance. ZHEN Ziyang (1981- ), male, from Jinhua, Zhejiang, Ph.D., professor; research interests: carrier-based aircraft / UAV carrier landing guidance and control, cooperative control and decision-making for UAV swarm formations, etc.
Last update: 2023-05-24