Open-source deep learning for precision agriculture

This paper presents a newly developed open-source system for precision agriculture in lettuce production. The system, known as AirSurf, uses a fixed-wing light aircraft to gather images of lettuce fields; a deep learning algorithm then assesses the state of the lettuce crop on several characteristics, including lettuce size and the number of lettuces per field.
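The aerial images used by AirSurf come from normalised difference vegetation index (NDVI) sensors. NDVI is a standard measure, (NIR − Red) / (NIR + Red), that separates vegetation from soil. The sketch below illustrates the idea on a toy image; the band values and the 0.3 threshold are made up for illustration and are not taken from the paper.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalised difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 4x4 bands: high NIR over vegetation, high red over bare soil.
nir = np.array([[0.8, 0.8, 0.1, 0.1],
                [0.8, 0.8, 0.1, 0.1],
                [0.1, 0.1, 0.8, 0.8],
                [0.1, 0.1, 0.8, 0.8]])
red = np.array([[0.1, 0.1, 0.8, 0.8],
                [0.1, 0.1, 0.8, 0.8],
                [0.8, 0.8, 0.1, 0.1],
                [0.8, 0.8, 0.1, 0.1]])

# Simple vegetation mask: NDVI above an illustrative 0.3 cut-off.
mask = ndvi(nir, red) > 0.3
vegetation_pixels = int(mask.sum())  # 8 of 16 pixels flagged as vegetation
```

In the real system, regions flagged this way are passed to a trained classifier rather than counted directly.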

As shown in the figure below, the system enables users to see where lettuces of different sizes are located in fields.

Image: Figure 8, Bauer et al. a–c AirSurf-Lettuce is applied to count and classify millions of lettuce heads (small coloured blue, medium green, large red) in three plantation fields in Cambridgeshire, UK. d The overall quantification of lettuce heads and size categories in the three fields.

The researchers note that assessing the state of crops in fields can currently be both time-consuming and labour-intensive. They hope that the new system could help farmers to plan the distribution and trading of harvested crops at the right time, as well as harvest at the right time to maximise yield.

In the future, the researchers suggest, the platform could be developed into a more detailed analytical tool to help increase yield. In particular, monitoring the planting density of crops such as rice and wheat is important because of the close links between density and yield.


Abstract

Aerial imagery is regularly used by crop researchers, growers and farmers to monitor crops during the growing season. To extract meaningful information from large-scale aerial images collected from the field, high-throughput phenotypic analysis solutions are required, which not only produce high-quality measures of key crop traits, but also support professionals to make prompt and reliable crop management decisions. Here, we report AirSurf, an automated and open-source analytic platform that combines modern computer vision, up-to-date machine learning, and modular software engineering in order to measure yield-related phenotypes from ultra-large aerial imagery. To quantify millions of in-field lettuces acquired by fixed-wing light aircraft equipped with normalised difference vegetation index (NDVI) sensors, we customised AirSurf by combining computer vision algorithms and a deep-learning classifier trained with over 100,000 labelled lettuce signals. The tailored platform, AirSurf-Lettuce, is capable of scoring and categorising iceberg lettuces with high accuracy (>98%). Furthermore, novel analysis functions have been developed to map lettuce size distribution across the field, based on which associated global positioning system (GPS) tagged harvest regions have been identified to enable growers and farmers to conduct precision agricultural practices in order to improve the actual yield as well as crop marketability before the harvest.
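The size-distribution mapping described in the abstract amounts to binning each detected lettuce head into a size category and summarising the counts per field. The sketch below shows that final step only; the pixel areas and the cut-off values are invented for illustration, whereas AirSurf-Lettuce's categories come from its trained classifier.

```python
from collections import Counter

# Hypothetical pixel areas for detected lettuce heads in one field.
areas = [120, 340, 610, 95, 410, 700, 260, 530]

def size_category(area, small_max=200, medium_max=450):
    """Bin a head's pixel area into small/medium/large (illustrative cut-offs)."""
    if area <= small_max:
        return "small"
    if area <= medium_max:
        return "medium"
    return "large"

# Per-field size distribution, analogous to panel d of the figure above.
distribution = Counter(size_category(a) for a in areas)
# e.g. {'small': 2, 'medium': 3, 'large': 3}
```

In the paper, these per-field distributions are combined with GPS tags so that harvest regions of uniform head size can be identified before harvest.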


Reference

Bauer, A., Bostrom, A.G., Ball, J., Applegate, C., Cheng, T., Laycock, S., Rojas, S.M., Kirwan, J. and Zhou, J., 2019. Combining computer vision and deep learning to enable ultra-scale aerial phenotyping and precision agriculture: A case study of lettuce production. Horticulture Research, 6(1), p.70.

Read the full paper here. The project source code can be found on GitHub here: AirSurf-Lettuce. See also the Foodsource building block What is sustainable intensification?

Publication
25 Jun 2019
Image: Michael, Green Plants Field, Pexels, Pexels license