Automatic Object Detection in Oil Palm Plantation using a Hybrid Feature Extractor of YOLO-based Model
Synopsis
Manual harvesting of oil palm fruit is laborious and time-consuming. A machine vision-based automated harvesting system could reduce operational costs and increase productivity. This paper develops a one-stage object detection model with high accuracy, a lightweight size, and low computational cost. A novel PalmYOLO model is proposed by modifying the YOLOv3-tiny architecture to localize and detect oil palm trees, grabbers, and Fresh Fruit Bunches (FFB) under varied environmental conditions. PalmYOLO employs a lightweight hybrid feature extractor composed of a densely connected network and mobile inverted bottleneck modules, together with a multi-scale detection architecture, the Mish activation function, and the Complete Intersection over Union (CIoU) loss. The proposed model achieved a mAP of 97.20% and an F1 score of 0.91, while requiring only 26.732 BFLOPS and a model size of 46.7 MB. These extensive results demonstrate PalmYOLO's ability to accurately detect objects in oil palm plantations.
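The Mish activation named above is defined as x · tanh(softplus(x)), where softplus(x) = ln(1 + eˣ). A minimal sketch in plain Python is shown below; the function names are illustrative, not from the paper's implementation:

```python
import math

def softplus(x: float) -> float:
    # Numerically stable softplus: ln(1 + e^x),
    # rewritten to avoid overflow for large positive x.
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def mish(x: float) -> float:
    # Mish activation: smooth, non-monotonic, unbounded above.
    return x * math.tanh(softplus(x))
```

Unlike ReLU, Mish is smooth everywhere and allows small negative outputs, which is often credited with slightly better gradient flow in detection backbones.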
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.