Crowdsourcing experiment and fully convolutional neural networks for coastal remote sensing of seagrass and macro-algae

Brandon Hobley, Michal Mackiewicz, Julie Bremner, Tony Dolphin, Riccardo Arosio

Research output: Contribution to journal › Article › peer-review


Recently, convolutional neural networks (CNNs) and fully convolutional neural networks (FCNs) have been successfully used for monitoring coastal marine ecosystems, in particular vegetation. However, even with recent advances in computational modelling and data acquisition, deep learning models require substantial amounts of good quality reference data to effectively self-learn internal representations of input imagery. The classical approach for coastal mapping requires experts to transcribe in situ records and delineate polygons from high-resolution imagery so that FCNs can self-learn. However, labelling by a single individual limits the volume of training data, whereas crowdsourcing labels can increase that volume but may compromise label quality and consistency. In this paper we assessed the reliability of crowdsourced labels on a complex multi-class problem domain for estuarine vegetation and unvegetated sediment. An inter-observer variability experiment was conducted to assess the statistical differences in crowdsourced annotations of plant species and sediment. The participants were grouped by discipline and level of expertise, and the statistical differences were evaluated using Cochran's Q-test together with the annotation accuracy of each group to test for observation biases. Given the crowdsourced labels, FCNs were trained with majority-vote annotations from each group to check whether observation biases propagated to FCN performance. Two scenarios were examined: first, FCNs trained with transcribed in situ labels were compared directly against FCNs trained with crowdsourced labels from each group. Then, transcribed in situ labels were supplemented with crowdsourced labels to investigate the feasibility of training FCNs with crowdsourced labels in coastal mapping applications.
We show that annotations sourced from discipline experts (ecologists and geomorphologists) familiar with the study site were more accurate than experts wi...
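As an illustrative sketch only (the paper's own code is not shown here), the two aggregation and agreement steps described in the abstract can be expressed in a few lines of Python: a per-pixel majority vote over annotator labels, and Cochran's Q-test on a binary item-by-rater agreement matrix. Function names, array shapes, and the toy data below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.stats import chi2

def majority_vote(labels):
    """Per-pixel majority vote across annotators.

    labels: integer class array of shape (n_annotators, n_pixels).
    Ties are broken toward the lowest class index (argmax convention).
    """
    n_classes = labels.max() + 1
    # For each pixel, count votes per class, then take the modal class.
    counts = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=n_classes), 0, labels)
    return counts.argmax(axis=0)

def cochrans_q(x):
    """Cochran's Q statistic and p-value.

    x: binary matrix of shape (n_items, k_raters), e.g. 1 where a rater's
    annotation matched the reference label for that item.
    Q is asymptotically chi-squared with k-1 degrees of freedom.
    """
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    col = x.sum(axis=0)          # per-rater totals
    row = x.sum(axis=1)          # per-item totals
    t = x.sum()                  # grand total
    q = (k - 1) * (k * (col ** 2).sum() - t ** 2) / (k * t - (row ** 2).sum())
    return q, chi2.sf(q, k - 1)
```

Under this sketch, a significant Q (small p-value) indicates that the annotator groups do not agree equally often with the reference labels, which is the kind of observation bias the experiment screens for before training FCNs on the majority-vote maps.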
Original language: English
Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Early online date: 7 Sep 2023
Publication status: E-pub ahead of print - 7 Sep 2023
