Hi,
I believe there's an issue with the get_confidence_list() function.
When I used your pre-trained model, I couldn't achieve the same performance metrics as you reported.
I think the else clause should be removed because it calculates values that should not be considered.
The function should only handle cases where the prediction is a true positive and matches well with the ground truth (GT).
If a prediction does not match a ground truth well, the function should not append 0 to the same list (true_positive_list).
Appending 0 incorrectly includes non-matching predictions in the calculation, which creates problems when calculating precision and recall.
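For reference, here is a minimal sketch of the behavior I mean, not the repository's actual code; the signature, the `iou` helper, the IoU threshold, and the box/confidence layout are assumptions used purely for illustration:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union if union > 0 else 0.0

def get_confidence_list(predictions, ground_truths, iou_threshold=0.5):
    """Collect confidence scores only for predictions that match a GT box."""
    true_positive_list = []
    for pred in predictions:
        # A prediction counts as a true positive only when it overlaps
        # some ground-truth box above the IoU threshold.
        if any(iou(pred["box"], gt) >= iou_threshold for gt in ground_truths):
            true_positive_list.append(pred["confidence"])
        # Proposed change: no else branch here. Appending 0 for non-matching
        # predictions mixes false positives into true_positive_list and
        # distorts the precision/recall computed from it.
    return true_positive_list
```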
I will wait for your reply.
Thanks