Rice, a major food crop, is cultivated on nearly 162 million hectares of land worldwide. One of the most commonly used methods to quantify rice production is rice plant counting. This technique is used to estimate yield, diagnose growth, and assess losses in paddy fields. Most rice counting processes across the world are still carried out manually. However, this is extremely tedious, laborious, and time-consuming, indicating the need for faster and more efficient machine-based solutions.
Researchers from China and Singapore have recently developed a far more sophisticated alternative to manual rice counting, one that involves the use of unmanned aerial vehicles (UAVs), or drones.
According to Professor Jianguo Yao from Nanjing University of Posts and Telecommunications in China, who led the study, “The new technique uses UAVs to capture RGB images (images composed of red, green, and blue color channels) of the paddy field. These images are then processed using a deep learning network that we have developed, called RiceNet, which can accurately estimate the density of rice plants in the field, as well as provide higher-level semantic features, such as crop location and size.”
Their paper has been published in Plant Phenomics.
The RiceNet network architecture consists of a feature extractor at the front end, which analyzes the input images, followed by three feature decoder modules responsible for estimating the density, the location, and the size of the plants in the paddy field, respectively. The latter two outputs are particularly important for future research on automated crop management techniques, such as fertilizer spraying.
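To make this layout concrete, here is a minimal, hypothetical sketch of a shared feature extractor feeding three decoder heads. The layer sizes, channel counts, and class name are illustrative assumptions only, not the authors' actual RiceNet design.

```python
# Toy sketch of a one-extractor / three-decoder layout (NOT the real RiceNet).
import torch
import torch.nn as nn

class ToyRiceNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared front-end feature extractor applied to the UAV RGB image.
        self.extractor = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Three decoder heads: plant density, plant location, plant size.
        self.density_head = nn.Conv2d(64, 1, 1)
        self.location_head = nn.Conv2d(64, 1, 1)
        self.size_head = nn.Conv2d(64, 1, 1)

    def forward(self, x):
        features = self.extractor(x)
        return {
            "density": self.density_head(features),
            "location": torch.sigmoid(self.location_head(features)),
            "size": self.size_head(features),
        }

# Example: one 256x256 RGB image tile from a UAV.
model = ToyRiceNet()
outputs = model(torch.randn(1, 3, 256, 256))
print({k: tuple(v.shape) for k, v in outputs.items()})
# A total plant count can be estimated by summing the density map.
print(float(outputs["density"].sum()))
```

The design choice illustrated here is that a single backbone is computed once and reused by all three heads, which is what makes it possible to predict density, location, and size from the same UAV image in one pass.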
As part of the study, the research team flew a camera-equipped UAV over rice fields in the Chinese city of Nanchang and subsequently analyzed the acquired images with RiceNet. The researchers then split the data into a training dataset and a test dataset: the former was used to train the network, while the latter was used to validate its predictions.
More specifically, out of the 355 images with 257,793 manually labeled points, 246 were randomly selected and used as training images, whereas the remaining 109 were used as test images. Each image contained an average of 726 rice plants.
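For readers who want to see what such a split looks like in practice, the snippet below reproduces a random 246/109 partition of 355 images. The file names and the use of a fixed random seed are hypothetical; the paper's actual split procedure may differ.

```python
# Illustrative random split of 355 images into 246 training and 109 test images.
import random

image_ids = [f"rice_{i:03d}.jpg" for i in range(355)]  # hypothetical file names
random.seed(0)                                          # fixed seed for repeatability
random.shuffle(image_ids)

train_images = image_ids[:246]   # images used to train the network
test_images  = image_ids[246:]   # 109 held-out images used for evaluation
print(len(train_images), len(test_images))  # 246 109
```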
According to the team, the RiceNet technique has a good signal-to-noise ratio; in other words, it can efficiently distinguish rice plants from the background, thereby improving the quality of the generated plant density maps.
The results of the study showed that the mean absolute error and root mean square error of the RiceNet technique were 8.6 and 11.2, respectively. In other words, the plant counts estimated by RiceNet were in close agreement with those obtained by manual counting.
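As a reminder of how these two error measures are typically computed for per-image plant counts, here is a short sketch. The counts below are made up purely for illustration; the study itself reports MAE = 8.6 and RMSE = 11.2 over its test set.

```python
# Mean absolute error (MAE) and root mean square error (RMSE) for plant counts.
import math

manual_counts    = [720, 731, 705, 742]   # hypothetical ground-truth counts
predicted_counts = [712, 739, 710, 730]   # hypothetical model estimates

errors = [p - m for p, m in zip(predicted_counts, manual_counts)]
mae  = sum(abs(e) for e in errors) / len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"MAE = {mae:.1f}, RMSE = {rmse:.1f}")
```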
Moreover, based on their observations, the team also shared a few key recommendations. For instance, they do not recommend acquiring images on rainy days, and they suggest collecting UAV-based images within four hours of sunrise, so as to minimize the presence of fog as well as the occurrence of rice leaf curling, both of which adversely affect the quality of the output images.
“In addition to this, we further validated the performance of our technique using two other popular crop datasets. The results showed that our method significantly outperforms other state-of-the-art techniques. This underscores the potential of RiceNet to replace the traditional method of manual rice counting,” concludes Professor Yao.
RiceNet further paves the way toward other UAV- and deep learning-based crop analysis techniques, which can in turn guide decisions and strategies to improve the production of food and cash crops worldwide.