Departamento de Informática
Browsing Departamento de Informática by Sustainable Development Goals (SDG) "12: Responsible Consumption and Production"
- Portable, multi-task, on-the-edge and low-cost computer vision framework based on deep learning: Precision agriculture application
Publication. Assunção, Eduardo Timóteo; Proença, Hugo Pedro Martins Carriço; Gaspar, Pedro Miguel de Figueiredo Dinis Oliveira

Precision agriculture is a new concept that has been introduced worldwide to increase production, reduce labor, and ensure efficient management of fertilizers and irrigation processes. Computer vision is an essential component of precision agriculture and plays an important role in many agricultural tasks. It serves as a perceptual tool for the mechanical interface between robots and environments or sensed objects, as well as for many other tasks such as crop yield prediction. Another important consideration is that some vision applications must run on edge devices, which typically have very limited processing power and memory. Therefore, computer vision models intended to run on edge devices must be optimized to achieve good performance. Due to the significant impact of deep learning and the advent of mobile devices with accelerators, research on computer vision for general-purpose applications has increased in recent years, with the potential to improve the efficiency of precision agriculture tasks.

This thesis explores how optimization affects deep learning models running on edge devices, namely in terms of inference accuracy and inference time. Lightweight models for weed segmentation, peach fruit detection, and fruit disease classification are the case studies.

First, a case study of peach fruit detection was performed with the well-known Faster R-CNN object detector, using the breakthrough AlexNet Convolutional Neural Network (CNN) as the image feature extractor. A detection accuracy of 0.90 was achieved using the Average Precision (AP) metric. AlexNet, however, is not a model optimized for mobile devices. To explore a lightweight model, a case study of peach fruit disease classification was then conducted using the MobileNet CNN. MobileNet was trained on a small dataset of images of healthy, rotten, mouldy, and scabby peach fruit and achieved an F1 score of 0.96. Lessons learned from this work led to using this model as the baseline CNN for the other computer vision applications (e.g., fruit detection and weed segmentation).

Next, a study was conducted on robotic weed control using an automated herbicide spot sprayer. The DeepLab semantic segmentation model with a MobileNet backbone was used to segment weeds and determine the spatial coordinates for the spraying mechanism. The model was optimized, deployed on a Jetson Nano device, and integrated with the robotic vehicle to evaluate real-time performance. An inference time of 0.04 s was achieved, and the results provide insight into how the performance of a plant and weed semantic segmentation model degrades when it is adapted, through optimization, to run on edge devices.

Finally, to extend the application of lightweight deep learning models and the use of edge devices and accelerators, the Single Shot Detector (SSD) was trained to detect peach fruit of three different varieties and deployed on a Raspberry Pi device with an integrated Tensor Processing Unit (TPU) accelerator. Several MobileNet variants were explored as backbones to investigate the tradeoff between accuracy and inference time.
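As a rough illustration of the kind of model optimization referred to above (not the thesis's actual pipeline), the following minimal Python sketch converts a trained Keras model to a fully integer-quantized TFLite model, a common preparation step before deploying detection or segmentation networks on integer-only edge accelerators. The names trained_model and representative_images are illustrative placeholders.

import numpy as np
import tensorflow as tf

def quantize_to_tflite(trained_model, representative_images):
    """Convert a float Keras model to a fully int8-quantized TFLite model."""
    def representative_dataset():
        # Small calibration set drawn from the training distribution,
        # used to estimate activation ranges for quantization.
        for image in representative_images[:100]:
            yield [image[np.newaxis, ...].astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(trained_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Restrict ops to int8 so the model can run on integer-only accelerators.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    return converter.convert()

The resulting flat buffer can be written to disk and loaded by a TFLite interpreter on the target device; the accuracy drop introduced by this step is the kind of degradation the abstract refers to.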
MobileNetV1 yielded the best inference time, at 21.01 Frames Per Second (FPS), while MobileDet achieved the best detection accuracy (88.2% AP). In addition, an image dataset of three peach cultivars from Portugal was developed and published. This thesis aims to contribute to the next steps in the development of precision agriculture and agricultural robotics, especially where computer vision must be processed on small devices.
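For context on how throughput figures such as the 21.01 FPS above are typically obtained, the following minimal sketch times repeated inference of a quantized TFLite detector through the Edge TPU delegate on a device such as a Raspberry Pi. The model file name is a hypothetical placeholder, and this is an assumed measurement setup rather than the procedure used in the thesis.

import time
import numpy as np
import tflite_runtime.interpreter as tflite

# Load a quantized detection model and offload supported ops to the Edge TPU.
interpreter = tflite.Interpreter(
    model_path="ssd_mobilenet_quant_edgetpu.tflite",  # hypothetical file name
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()

# Dummy uint8 frame matching the model's expected input shape.
frame = np.zeros(input_details["shape"], dtype=np.uint8)

n_runs = 100
start = time.perf_counter()
for _ in range(n_runs):
    interpreter.set_tensor(input_details["index"], frame)
    interpreter.invoke()
    boxes = interpreter.get_tensor(output_details[0]["index"])  # detections (output order depends on the model)
elapsed = time.perf_counter() - start
print(f"Average throughput: {n_runs / elapsed:.2f} FPS")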