Abstract:
Aircraft detection is an important area of object detection that has attracted significant interest from researchers,
especially with the progress of deep learning techniques, and it is now widely employed in civil and military
domains. On the civil side, automated aircraft detection systems play a crucial role in preventing crashes, controlling
airspace, and improving air traffic and safety. In military operations, detection systems are essential for quickly
locating aircraft for surveillance purposes, enabling decisive strategies in real time. This article proposes a system
that accurately detects airplanes regardless of their type, model, size, and color. The diversity of aircraft images,
with variations in size, illumination, resolution, and other visual factors, poses challenges to detection performance;
an aircraft detection system must therefore distinguish airplanes reliably regardless of the aircraft's position,
rotation, or visibility. The methodology
involves three main steps: feature extraction, detection, and evaluation. First, deep features are extracted using a
pre-trained VGG19 model via transfer learning. The extracted feature vectors are then fed to a One-Class Support Vector
Machine (OCSVM) for detection. Finally, the results are assessed using evaluation criteria to confirm the effectiveness and
accuracy of the proposed system. The experimental evaluations were conducted on three distinct datasets: Caltech-101,
the Military dataset, and the MTARSI dataset. Furthermore, the study compares its experimental results with those of
comparable publications from the past three years. The findings demonstrate the efficacy of the proposed approach,
achieving F1-scores of 96% on Caltech-101 and 99% on both the Military and MTARSI datasets.