
Incorrect Precision and Recall Metrics #38

@YoungseokOh

Description


Hi,

I believe there's an issue with the get_confidence_list() function.

When I used your pre-trained model, I couldn't achieve the same performance metrics as you reported.

I think the else clause should be removed, because it feeds values into the metric that should not be counted.

The function should only record a prediction when it is a true positive, i.e., when it matches a ground-truth (GT) box well.

When a prediction does not match any ground truth, the else branch still appends 0 to the same list (true_positive_list). Including these non-matching predictions skews the precision and recall computed from that list, as sketched below.
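To make the problem concrete, here is a minimal sketch of the structure I believe the function has. The iou() helper, the prediction/box layout, and the match_threshold name are my assumptions for illustration, not code copied from this repo:

```python
# Hypothetical sketch of get_confidence_list(); iou(), the box layout,
# and match_threshold are my assumptions, not the repo's actual code.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def get_confidence_list(preds: List[Tuple[Box, float]],
                        gts: List[Box],
                        match_threshold: float = 0.5):
    confidence_list, true_positive_list = [], []
    for box, conf in preds:
        # Best overlap of this prediction with any ground-truth box.
        best_iou = max((iou(box, gt) for gt in gts), default=0.0)
        if best_iou >= match_threshold:
            # True positive: record its confidence and a 1.
            confidence_list.append(conf)
            true_positive_list.append(1)
        else:
            # The branch I believe should be removed: it appends 0 to
            # true_positive_list for predictions that match no GT,
            # which then skews precision and recall downstream.
            confidence_list.append(conf)
            true_positive_list.append(0)
    return confidence_list, true_positive_list
```

With the else branch removed, true_positive_list would only contain entries for matched predictions, which is the behavior I would expect when reproducing the reported metrics.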

I look forward to your reply.

Thanks
