Abstract: |
In vision-based military target detection, precise camera calibration is essential for accurate target measurement and forms the foundation for subsequent image processing, target tracking, and 3D reconstruction. The crux of camera calibration lies in accurately detecting calibration feature points in images. Focusing on the widely used chessboard calibration method, this paper addresses the difficulty of extracting feature points from calibration images degraded by disturbances such as blur, heavy noise, extreme poses, and significant lens distortion. We propose a camera calibration feature point detection algorithm that integrates an enhanced YOLOv7-tiny deep learning network with Harris corner detection. To address shortcomings of the original network in detecting camera calibration feature regions, we introduce a Gather-and-Distribute (GD) information aggregation mechanism to replace the Feature Pyramid Network (FPN) in YOLOv7-tiny, strengthening feature fusion across different layers. In addition, a BiFormer attention mechanism is added after the backbone feature extraction stage to improve the detection of small feature-point candidate regions, and in the Head section an improved Efficient Decoupled Head raises accuracy while keeping computational overhead low. Test results show that the improved YOLOv7-tiny network significantly improves the accuracy of feature-point candidate region detection, reaching 95.3%, demonstrating the effectiveness and feasibility of the enhanced network.