Semi-supervised learning for accurate weed mapping of UAV imagery

Huasheng Huang, Yubin Lan, Jizhong Deng, Yali Zhang, Aqing Yang

Abstract


Weed mapping is essential for Site-Specific Weed Management (SSWM) applications. Semantic segmentation is the mainstream algorithm for weed mapping at the pixel level and has been shown to be superior to traditional Object-Based Image Analysis (OBIA) approaches. However, semantic segmentation requires a large amount of annotated data for parameter updating, and the development of such methods is currently limited by the shortage of annotated data in the SSWM community. To address this problem, this paper proposes a semi-supervised learning method for accurate weed mapping of UAV imagery. First, we applied the limited annotated training data to train the classifiers of the OBIA method. Second, we used the trained OBIA model to produce pseudo labels for the remaining training data without annotations. Finally, we trained the semantic segmentation models on both the manual annotations and the generated pseudo labels. The proposed method was compared with mainstream semantic segmentation at different training-set sizes. Experimental results showed that the proposed semi-supervised learning method significantly improved prediction precision at all training-set sizes. Furthermore, the proposed algorithm effectively alleviated the overfitting problem of supervised learning on extremely small training sets. The proposed semi-supervised method is expected to reduce manual annotation effort and advance weed mapping research in the context of SSWM applications.
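To make the three-step pipeline above concrete, the following is a minimal Python sketch, not the authors' implementation: it assumes a scikit-learn RandomForestClassifier as a stand-in for the OBIA classifier, random feature vectors as placeholder data, and hypothetical helper names (train_obia_classifier, generate_pseudo_labels, build_training_set); the confidence threshold is illustrative only.

# Hedged sketch of the semi-supervised pipeline described in the abstract.
# Assumptions (not from the paper): a random forest stands in for the OBIA
# classifier, and plain feature vectors stand in for segmented image objects.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_obia_classifier(X_labeled, y_labeled):
    # Step 1: fit a classifier on the small annotated subset.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_labeled, y_labeled)
    return clf

def generate_pseudo_labels(clf, X_unlabeled, confidence=0.8):
    # Step 2: predict labels for unannotated samples and keep only
    # sufficiently confident predictions as pseudo labels.
    proba = clf.predict_proba(X_unlabeled)
    keep = proba.max(axis=1) >= confidence
    pseudo = clf.classes_[proba.argmax(axis=1)]
    return pseudo[keep], keep

def build_training_set(X_labeled, y_labeled, X_unlabeled, clf):
    # Step 3: merge manual annotations with pseudo labels; this combined set
    # would then be used to train the semantic segmentation model.
    pseudo, keep = generate_pseudo_labels(clf, X_unlabeled)
    X = np.vstack([X_labeled, X_unlabeled[keep]])
    y = np.concatenate([y_labeled, pseudo])
    return X, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_lab = rng.normal(size=(200, 5))        # annotated samples (placeholder features)
    y_lab = rng.integers(0, 3, 200)          # three placeholder classes (e.g. crop / weed / background)
    X_unlab = rng.normal(size=(1000, 5))     # unannotated samples (placeholder features)
    clf = train_obia_classifier(X_lab, y_lab)
    X_train, y_train = build_training_set(X_lab, y_lab, X_unlab, clf)
    print(X_train.shape, y_train.shape)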

Keywords: weed mapping; UAV imagery; OBIA; semantic segmentation; semi-supervised learning

DOI: 10.33440/j.ijpaa.20220501.190

 

Citation: Huang H S, Lan Y B, Deng J Z, Zhang Y L, Yang A Q.  Semi-supervised learning for accurate weed mapping of UAV imagery.  Int J Precis Agric Aviat, 2022; 5(1): 29–34.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.