Question about labelling strategy
Hello!
Firstly, thanks for creating this amazing dataset and making it available!
I have a question about the labelling strategy used for this dataset. Which bands were used by the labellers? Did they use (i.e. visualize) all bands, including cirrus? We ask because we have found some examples where a thin cloud is not visible in RGB, but appears as "haze" in the cirrus band. Would such a phenomenon have been labelled as haze, even though it is mostly invisible in RGB?
Thanks!
Hello!
In our protocol we label a pixel as thin cloud if the labeler observed cloud contamination in at least one of the visible, NIR or SWIR bands (excluding the 60m bands). Therefore, if the cloud is only visible in the cirrus band, it won't be marked as cloud. The labelers had access to the cirrus band, and to previous images at that location, in our labeling tool.
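The decision rule above can be sketched as a simple boolean reduction. This is a minimal illustration, not the actual labeling tool; the per-band contamination flags are hypothetical inputs standing in for a labeler's judgment:

```python
import numpy as np

# Hypothetical per-pixel flags: True where a labeler saw cloud
# contamination in that band group (flag values are made up).
visible = np.array([False, True, False])
nir     = np.array([False, False, False])
swir    = np.array([False, False, True])
cirrus  = np.array([True, True, True])   # 60 m band: deliberately ignored

# Protocol: thin cloud if contamination appears in ANY of the
# visible / NIR / SWIR bands; the cirrus band does not count.
thin_cloud = visible | nir | swir
print(thin_cloud.tolist())  # [False, True, True]
```

Note that the third pixel is flagged from SWIR alone, while no pixel is flagged because of the cirrus band, matching the protocol described above.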
From CloudSEN12plus paper:
Thin cloud: Semitransparent clouds that alter the surface spectral signal but still allow to recognize the background. This is the hardest class to identify. We utilize CloudApp [1] to better understand the background, both with and without cloud cover.
That is to say, the case you described is one of the most challenging, so some errors may occur. If you would like us to look a bit deeper, could you post the images where you found the problems and their corresponding IDs?
Thanks for your feedback!
Thanks for the response @gonzmg88, very clear. This was something we observed during inference on a scene outside of the CloudSen12Plus dataset, so we are unable to provide a corresponding sample. However, for reference, I will attach some pictures of an example RGB along with the output of ESA's cloud mask, which uses cirrus thresholding. In these images the RGB provides no indication of cloud, but cirrus-based thresholding, according to ESA, indicates the presence of a cloud.
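For context, cirrus-based thresholding of the kind ESA's mask applies can be sketched as flagging pixels whose cirrus-band (B10) reflectance exceeds a fixed cutoff. This is an assumption-laden illustration: the threshold value and the synthetic reflectance patch below are made up, not ESA's actual parameters:

```python
import numpy as np

def cirrus_cloud_mask(b10_reflectance, threshold=0.01):
    """Flag pixels whose cirrus-band (B10) reflectance exceeds a
    threshold. The 0.01 cutoff is illustrative, not ESA's value."""
    return b10_reflectance > threshold

# Synthetic 2x2 B10 reflectance patch (values invented for illustration)
b10 = np.array([[0.002, 0.015],
                [0.030, 0.004]])
mask = cirrus_cloud_mask(b10)
print(mask.tolist())  # [[False, True], [True, False]]
```

A mask like this can fire on pixels that look completely clear in RGB, which is exactly the discrepancy described above.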
The fact that a model trained on CloudSen12+ does not predict haze for this kind of scene makes sense if cirrus was not used to make the labels. This is not necessarily an issue, I just wanted to confirm whether the model's performance was consistent with the labelling strategy.