Title Deployment and Verification of Custom Autonomous Low-Budget IoT Devices for Image Feature Extraction in Wheat
Authors MARTÍNEZ, FATIMA BELÉN, ROMAINE, JAMES BRIAN, MANZANO CRESPO, JOSÉ MARÍA, IERARDI, CARMELINA, MILLÁN GATA, PABLO
External publication No
Means IEEE Access
Scope Article
Nature Scientific
JCR Quartile 2
SJR Quartile 1
Web https://www.scopus.com/inward/record.uri?eid=2-s2.0-85203491921&doi=10.1109%2fACCESS.2024.3453993&partnerID=40&md5=208d701106cc0960e8286d67375d152e
Publication date 05/09/2024
ISI 001316116800001
Scopus Id 2-s2.0-85203491921
DOI 10.1109/ACCESS.2024.3453993
Abstract Given the need for effective crop monitoring with reduced human intervention and workload, it is necessary to deploy devices that operate autonomously and are durable. These devices must communicate over long distances, operate with low energy requirements, and withstand climatic adversities and external factors such as dust and water in order to function effectively in rural areas. In this work, we introduce a low-cost autonomous IP67 IoT vision device designed and implemented in a real-world scenario. The device automatically detects wheat characteristics in order to optimise human resources when monitoring crop growth and health. Equipped with algorithms capable of capturing and processing images, the device has been designed to be computationally efficient, cost effective, and power efficient. It communicates via the LoRaWAN protocol, requires 2.878 W, and offers 3.5 months of autonomy. Furthermore, it leverages low-computational-cost algorithms that operate effectively on low-resolution images. To test the device in a real-world scenario, an algorithm that measures wheat height using classical vision techniques was introduced, achieving 97% accuracy. In addition, the device incorporates ad-hoc trained YOLOv8 and YOLOv10 object detection machine learning models for the detection of spikes and stubble areas: YOLOv8 achieves a recall of 71.1% and a precision of 79.5%, while YOLOv10 achieves a recall of 70% and a precision of 77%. The results are validated against expert agronomists' annotations and data collected via a custom-built web platform for remote visualisation and decision-making tasks.
Keywords Crops; Robots; Monitoring; Robot kinematics; Cameras; Accuracy; YOLO; Computer vision; Smart agriculture; object detection; YOLOv8; YOLOv10; spiking; stubble; wheat; height; IoT
Universidad Loyola members