Detection task
Template code link
To get you started, we created a repository that contains the minimum code for running your algorithm; it can be found here: https://gitlab.com/arise-biodiversity/DSI/algorithms/arise-challenge-detection-algorithm-template

Data
The training set for the detection task consists of 3,237 images of DIOPSIS screens containing zero or more insects. The public DIOPSIS detection data with images and labels can be found here.
Expected output
A folder with one JSON file per image. Each JSON file is named [image_name].json and contains the following JSON:

    {
        "annotations": [
            {
                "labels": [
                    {
                        "probability": 0.85,
                        "name": "Object"
                    }
                ],
                "shape": {
                    "x": 96,
                    "y": 1710,
                    "width": 156,
                    "height": 111
                }
            }
        ]
    }
Evaluation
A script for evaluating the performance of the insect detection model using different performance measures can be found here. The same script will be used to compute the measures on the hold-out test set. Please note that the script might be updated during the first phase of the challenge; we will let you know when this happens and when the final script is ready. The performance measures are described below. An intersection-over-union (IoU) threshold of 0.8 is used to decide whether a predicted box matches a ground-truth box.
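For reference, the IoU of two boxes in the (x, y, width, height) format used in the output JSON can be computed as in the sketch below. This is an illustration of the measure, not the official evaluation code.

    def iou(box_a, box_b):
        # Boxes are dicts with "x", "y", "width", "height", as in the output JSON.
        ax1, ay1 = box_a["x"], box_a["y"]
        ax2, ay2 = ax1 + box_a["width"], ay1 + box_a["height"]
        bx1, by1 = box_b["x"], box_b["y"]
        bx2, by2 = bx1 + box_b["width"], by1 + box_b["height"]
        # Intersection rectangle; width/height are clamped to zero when disjoint.
        iw = max(0, min(ax2, bx2) - max(ax1, bx1))
        ih = max(0, min(ay2, by2) - max(ay1, by1))
        inter = iw * ih
        union = (box_a["width"] * box_a["height"]
                 + box_b["width"] * box_b["height"] - inter)
        return inter / union if union > 0 else 0.0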
mean Average Precision (mAP)
The area under the precision-recall curve.
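To make the definition concrete, the sketch below integrates precision over recall using the trapezoidal rule; the exact interpolation convention used by the official script may differ.

    import numpy as np

    def average_precision(precisions, recalls):
        # precisions/recalls: per-threshold values traced out as the
        # confidence threshold is lowered. The trapezoidal rule is an
        # assumption; the official script may use a different convention.
        order = np.argsort(recalls)
        r = np.asarray(recalls, dtype=float)[order]
        p = np.asarray(precisions, dtype=float)[order]
        return float(np.trapz(p, r))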
Precision/recall/F1 at max F1
In practice, when deploying a model, a (precision, recall) operating point has to be chosen. We choose the operating point that best balances precision and recall, i.e. the point at which the F1 score (the harmonic mean of precision and recall) is maximal. Precision, recall, and F1 are reported at this maximum-F1 point.
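Given per-threshold precision and recall values, the maximum-F1 operating point can be found as sketched below; again an illustration, not the official script.

    import numpy as np

    def max_f1_operating_point(precisions, recalls):
        # precisions/recalls: per-threshold values, aligned element-wise.
        p = np.asarray(precisions, dtype=float)
        r = np.asarray(recalls, dtype=float)
        # F1 is the harmonic mean of precision and recall;
        # guard against division by zero when p + r == 0.
        f1 = np.where(p + r > 0, 2 * p * r / np.maximum(p + r, 1e-12), 0.0)
        best = int(np.argmax(f1))
        return p[best], r[best], f1[best]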
mAP/Precision/recall/F1 per size class
TODO