[1]钟世龙,张代聪,罗宝琪,等. 点云虚实融合数据增强的物体识别研究[J].机械与电子,2026,44(01):72-80.
 ZHONG Shilong,ZHANG Daicong,LUO Baoqi,et al. Research on Object Recognition Using Point Cloud Virtual-real Fusion Data Augmentation[J].Machinery & Electronics,2026,44(01):72-80.

点云虚实融合数据增强的物体识别研究

《机械与电子》[ISSN:1001-2257/CN:52-1052/TH]

卷:
44
期数:
2026年01期
页码:
72-80
栏目:
智能制造
出版日期:
2026-01-27

文章信息/Info

Title:
 Research on Object Recognition Using Point Cloud Virtual-real Fusion Data Augmentation
文章编号:
1001-2257(2026)01-0072-09
作者:
钟世龙,张代聪,罗宝琪,张伟,曲政,秦志超
 (西安工程大学机电工程学院,陕西 西安 710048)
Author(s):
ZHONG Shilong,ZHANG Daicong,LUO Baoqi,ZHANG Wei,QU Zheng,QIN Zhichao
 (School of Mechanical and Electronic Engineering,Xi’an Polytechnic University,Xi’an 710048,China)
关键词:
虚实点云;多视角的不完整点云;点云虚实融合;伪彩色图像
Keywords:
virtual-real point cloud;multi-view incomplete point cloud;point cloud virtual-real fusion;pseudo-color image
分类号:
TH692.3;TP391.4
文献标志码:
A
摘要:
针对仓储环境中物体识别任务面临的三维点云数据采集成本高与标注效率低的问题,提出了一种融合真实与虚拟数据的点云增强方法。该方法首先通过三维建模与STL格式转换构建虚拟物体模型,并基于三角面片顶点生成初始点云;进而通过线性插值增加点云密度,并基于多视角投影生成不完整虚拟点云;在虚实融合阶段,对真实点云进行中位数高度校正,并删除与虚拟包围盒重叠的区域,以消除“浮空”或“穿透”问题;最后将点云法向量转化为二维高度图与梯度场,生成伪彩色图像以增强特征表达。在煤炭仓储场景中的实验表明,基于融合数据训练的识别模型准确率、召回率和F1分数分别达到99.3%、99.6%和99.4%。该研究为仓储环境下的物体识别提供了一种低成本、高精度的点云增强解决方案。
Abstract:
To address the high data acquisition cost and low annotation efficiency of 3D point cloud data in object recognition tasks within warehouse environments, a point cloud augmentation method that integrates real and virtual data is proposed. First, the method constructs virtual object models through 3D modeling and STL format conversion, generating initial point clouds from the triangular mesh vertices. Point cloud density is then increased via linear interpolation, and incomplete virtual point clouds are generated through multi-view projection. In the fusion stage, median height correction is applied to the real point clouds, and regions overlapping the virtual bounding boxes are removed to eliminate floating or penetration artifacts. Finally, the point cloud normal vectors are converted into 2D height maps and gradient fields to generate pseudo-color images that enhance feature representation. Experiments conducted in coal storage scenarios demonstrate that the recognition model trained on the fused data achieves 99.3% accuracy, 99.6% recall, and a 99.4% F1 score. This study provides a low-cost, high-precision point cloud augmentation solution for object recognition in storage environments.
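The virtual-real fusion step described in the abstract (median height correction of the real cloud, followed by deleting the real points that overlap the virtual model's bounding box) can be sketched as follows. This is a minimal illustration of the idea only; the function name, the use of an axis-aligned bounding box, and the `margin` parameter are assumptions, not the authors' actual implementation:

```python
import numpy as np

def fuse_virtual_real(real_pts, virt_pts, margin=0.0):
    """Sketch of a virtual-real point cloud fusion step.

    real_pts, virt_pts : (N, 3) arrays of XYZ points, z = height.
    margin : optional padding around the virtual bounding box (assumed).
    """
    # 1) Median height correction: shift the real cloud so that its
    #    median height coincides with the virtual model's base plane (z = 0).
    z_offset = np.median(real_pts[:, 2])
    real_corrected = real_pts - np.array([0.0, 0.0, z_offset])

    # 2) Remove real points falling inside the virtual model's
    #    axis-aligned bounding box, so the inserted virtual object
    #    neither "floats" above nor "penetrates" the real surface.
    lo = virt_pts.min(axis=0) - margin
    hi = virt_pts.max(axis=0) + margin
    inside = np.all((real_corrected >= lo) & (real_corrected <= hi), axis=1)
    real_kept = real_corrected[~inside]

    # 3) Concatenate the carved real cloud with the virtual cloud.
    return np.vstack([real_kept, virt_pts])
```

After fusion, every surviving real point lies outside the virtual object's bounding box, so the combined cloud can be projected to height maps and gradient fields without overlap artifacts.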

参考文献/References:

[1] LI X,WEI M Q,CHEN S C.PointSmile:point self-supervised learning via curriculum mutual information[J].Science China Information Sciences,2024,67:212104.
[2] QI C R,YI L,SU H,et al.PointNet++:deep hierarchical feature learning on point sets in a metric space[J].Advances in neural information processing systems,2017,30:5100-5109.
[3] 吴登禄,薛喜辉,张东文,等.基于PointNet++的室外场景三维点云多目标检测方法[J].自动化与信息工程,2019,40(4):5-10.
[4] 孟琮棠,赵银娣,韩文泉,等.基于RandLA-Net的机载激光雷达点云城市建筑物变化检测[J].自然资源遥感,2022,34(4):113-121.
[5] GUO Y L,WANG H Y,HU Q Y,et al.Deep learning for 3D point clouds:a survey[J].IEEE Transactions on pattern analysis and machine intelligence,2020,43(12):4338-4364.
[6] 王焱乘.基于深度视频的三维人体行为识别技术研究[D].武汉:华中科技大学,2021.
[7] CHARLES R Q,SU H,KAICHUN M,et al.PointNet:deep learning on point sets for 3D classification and segmentation[C]∥2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).New York:IEEE,2017:77-85.
[8] NGUYEN-PHUOC T,LI C,THEIS L,et al.HoloGAN:unsupervised learning of 3D representations from natural images[C]∥2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW).New York:IEEE,2019:7588-7597.
[9] SHU D W,PARK S W,KWON J.3D point cloud generative adversarial network based on tree structured graph convolutions[C]∥Proceedings of the IEEE/CVF International Conference on Computer Vision,2019:3859-3868.
[10] YOO J H,KIM Y,KIM J,et al.3D-CVF:generating joint camera and LiDAR features using cross-view spatial feature fusion for 3D object detection[C]∥Computer Vision-ECCV 2020,2020:720-736.
[11] ZHANG H,LUO G Y,TIAN Y L,et al.A virtual-real interaction approach to object instance segmentation in traffic scenes[J].IEEE Transactions on intelligent transportation systems,2020,22(2):863-875.
[12] KARRAS T,LAINE S,AILA T.A style-based generator architecture for generative adversarial networks[C]∥2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).New York:IEEE,2019:4401-4410.
[13] NAUSHEEN N,SEAL A,KHANNA P,et al.A FPGA based implementation of Sobel edge detection[J].Microprocessors and microsystems,2018,56:84-91.
[14] BEN-SHABAT Y,LINDENBAUM M,FISCHER A.Nesti-Net:normal estimation for unstructured 3D point clouds using convolutional neural networks[C]∥2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).New York:IEEE,2019:10112-10120.
[15] 王素,刘恒,朱心雄.STL模型的分层邻接排序快速切片算法[J].计算机辅助设计与图形学学报,2011,26(4):4-6.
[16] 王杰,李洪兴,王加银,等.一种图像快速线性插值的实现方案与分析[J].电子学报,2009,37(7):1481-1486.
[17] QI C R,SU H,MO K,et al.PointNet:deep learning on point sets for 3D classification and segmentation[C]∥2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).New York:IEEE,2017:652-660.
[18] 赵红涛,裴四宝.平面网格点中无直角的最大点集构造[J].汕头大学学报(自然科学版),2018,33(1):26-30.
[19] 邱天旭,王涛,张艳,等.融合RGB图像特征的LiDAR点云道路目标检测[J].地球信息科学学报,2025,27(10):2387-2403.
[20] LIU H,XING F.Non-planar slicing algorithm based on AABB and OBB bounding boxes for additive manufacturing[C]∥2024 10th International Forum on Manufacturing Technology and Engineering Materials (ICMTEM 2024),2025,2951:012096.
[21] HE K M,GKIOXARI G,DOLLÁR P,et al.Mask R-CNN[C]∥2017 IEEE International Conference on Computer Vision (ICCV).New York:IEEE,2017:2961-2969.
[22] 朱立新,王平安,夏德深.基于梯度场均衡化的图像对比度增强[J].计算机辅助设计与图形学学报,2007(12):1546-1552.
[23] REDMON J,DIVVALA S,GIRSHICK R,et al.You only look once:unified,real-time object detection[C]∥2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).New York:IEEE,2016:779-788.

备注/Memo

备注/Memo:
收稿日期:2025-09-05
作者简介:钟世龙 (2000-),男,山东烟台人,硕士研究生,研究方向为点云处理、机器视觉;张代聪 (1986-),男,山东青岛人,博士,副教授,硕士研究生导师,研究方向为增材制造、计算机图形学和3D视觉,通信作者,E-mail:391000644@qq.com。
更新日期/Last Update: 2026-03-09