
Conversation


@expanse88 expanse88 commented Oct 19, 2024

Summary of the Change:
The evaluate_model.py script adds functionality to evaluate image generation models by computing two standard metrics: Fréchet Inception Distance (FID) and Learned Perceptual Image Patch Similarity (LPIPS). FID measures how closely the distribution of generated images matches that of real images, while LPIPS measures perceptual similarity between image pairs, making quality evaluation of image synthesis models more systematic.
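The exact implementation lives in evaluate_model.py (not shown in this conversation). As a rough sketch of what the FID half of such a script computes: FID is the Fréchet distance between two Gaussians fitted to Inception-v3 features of the real and generated image sets. A minimal NumPy version of that distance is below (the function name is hypothetical; real pipelines typically use torchmetrics' `FrechetInceptionDistance` and `LearnedPerceptualImagePatchSimilarity` rather than hand-rolling this):

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2).

    For FID, the means and covariances are estimated from Inception-v3
    activations of the real and generated image sets.
    """
    diff = mu1 - mu2
    # Tr((sigma1 @ sigma2)^{1/2}) equals the sum of square roots of the
    # eigenvalues of sigma1 @ sigma2; for PSD inputs these are real and
    # non-negative, so clip tiny negative values from numerical noise.
    eig = np.linalg.eigvals(sigma1 @ sigma2).real
    tr_covmean = np.sqrt(np.clip(eig, 0.0, None)).sum()
    return float(diff @ diff
                 + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * tr_covmean)
```

With identical statistics the distance is zero; with identity covariances it reduces to the squared distance between the means, which gives a quick sanity check for the metric code.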

- [x] New feature (non-breaking change which adds functionality)
- [x] Breaking change (fix or feature that would cause existing functionality to not work as expected)

Checklist:

- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] New and existing unit tests pass locally with my changes

Screenshots (if applicable)

Additional Information

@sohambuilds sohambuilds self-requested a review October 20, 2024 13:33
@sohambuilds (Collaborator) commented:

Update main.py to calculate FID and LPIPS scores after image generation.

