LayerNorm in prototype learning #12

@Trainingzy

Description

Thanks for your great work!
I notice that you apply LayerNorm to the final features before the classifier, and also to the predictions. This seems quite uncommon in prototype learning (correct me if I am wrong).

Could you please explain this design choice? And if the two LayerNorm layers were removed, would the performance degrade?

self.feat_norm = nn.LayerNorm(in_channels)
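
For context, here is a minimal sketch of the pattern I am asking about, assuming a simple cosine-similarity prototype classifier. The class name `PrototypeClassifier`, the attribute `mask_norm`, and the parameter `num_prototypes` are illustrative names I made up, not taken from this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeClassifier(nn.Module):
    """Hypothetical prototype classifier with LayerNorm on features and logits."""

    def __init__(self, in_channels: int, num_prototypes: int):
        super().__init__()
        # LayerNorm on the final features, before comparing them to prototypes.
        self.feat_norm = nn.LayerNorm(in_channels)
        # LayerNorm on the resulting similarity scores (the "predictions").
        self.mask_norm = nn.LayerNorm(num_prototypes)
        # Learnable prototypes, one per class in the simplest case.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, in_channels))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, in_channels)
        feats = self.feat_norm(feats)
        feats = F.normalize(feats, dim=-1)
        protos = F.normalize(self.prototypes, dim=-1)
        # Cosine-similarity logits between features and prototypes.
        logits = feats @ protos.t()
        return self.mask_norm(logits)
```

In this sketch the LayerNorm layers sit on top of the usual L2-normalized cosine similarity, which is why I am curious whether they are needed and what removing them would do.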
