Feed for all publications #88
Conversation
Looks good!
To really judge if this is generating output as expected, please add a test case!
tests/test_publications_api.py should be a good starting point. The test should insert some test data (with different geometries) and then check that the created feed content is as expected. You can also store the generated XML in a file as a reference and compare the two files; that is also quite illustrative.
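A minimal sketch of what such a test could look like. The model name, field names, import path, and feed URL below are assumptions for illustration, not taken from this repository; adapt them to the actual Publication model and URL configuration.

```python
from pathlib import Path

from django.contrib.gis.geos import Point, Polygon  # assuming GeoDjango geometries
from django.test import TestCase

from publications.models import Publication  # hypothetical import path


class PublicationFeedTestCase(TestCase):
    def setUp(self):
        # Insert test data with different geometry types.
        Publication.objects.create(
            title="Point publication",
            geometry=Point(7.0, 51.0),
        )
        Publication.objects.create(
            title="Polygon publication",
            geometry=Polygon(((7.0, 51.0), (7.1, 51.0), (7.1, 51.1), (7.0, 51.0))),
        )

    def test_feed_matches_reference(self):
        response = self.client.get("/feeds/publications/")  # hypothetical feed URL
        self.assertEqual(response.status_code, 200)

        # Compare the generated feed with a reference file stored next to the tests.
        reference = Path(__file__).parent / "data" / "publications_feed_reference.xml"
        self.assertEqual(response.content.decode(), reference.read_text())
```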
I have updated the code and added the test case. I just wanted to ask: should I delete the generated XML after comparing the results?
We're getting there 😄 See my review comments below, and also the PR comment at #88 (comment)
tests/test_feeds.py (Outdated)
self.maxDiff = None
self.assertEqual(
In principle this is good enough to catch if something changed, and I agree it will likely work in most circumstances.
However, I see a case where this might fail, and we can also provide more helpful output showing what actually differs between the files. A plain string comparison might fail on a different operating system or with a different underlying XML library, e.g., the whitespace might be different.
Please give https://xmldiff.readthedocs.io/en/stable/api.html and https://github.com/Exirel/python-xmlunittest/tree/master a try and decide to use one (or both) to create more meaningful failure outputs (which elements in the XML changed how).
The former even allows validating against a schema, so we can double-check that the XML files are valid. We can assume that Django generates valid output, but it doesn't hurt to make sure.
Let me know if you need help digging up the schema locations. The validation could work against a URL, but it is certainly more performant if we save the schema files in our test directory. They are probably large, but we only need to add them once...
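As a rough sketch of what that could look like (file paths are placeholders, and the helper names are made up for illustration): xmldiff's `diff_files` returns a list of edit actions, so an empty list means the documents match and a non-empty list shows exactly which elements changed; schema validation could then be done separately with lxml against a locally stored XSD.

```python
from lxml import etree
from xmldiff import main as xmldiff_main


def assert_feed_equals_reference(test_case, generated_path, reference_path):
    # diff_files returns a list of edit actions describing how the left document
    # would need to change to become the right one; an empty list means no
    # differences were found, and a non-empty list names the changed elements.
    diff = xmldiff_main.diff_files(str(generated_path), str(reference_path))
    test_case.assertEqual(diff, [], msg=f"Feed XML differs from reference: {diff}")


def assert_feed_valid(generated_path, schema_path):
    # Optional: validate the generated feed against a schema saved in the test
    # directory (schema_path is a placeholder for the downloaded XSD file).
    schema = etree.XMLSchema(etree.parse(str(schema_path)))
    document = etree.parse(str(generated_path))
    schema.assertValid(document)  # raises DocumentInvalid with a descriptive message
```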
Please commit the reference file to the repo, and yes, delete the generated one. Or even better: create it in a temp directory and then forget about it, see https://docs.pytest.org/en/6.2.x/tmpdir.html (see the end of that page for pytest's deletion policy). The fixtures seem to be pretty straightforward to use.
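A minimal sketch of that approach, assuming a pytest-django setup; the feed URL and the reference file location are placeholders for illustration.

```python
from pathlib import Path

import pytest
from django.test import Client

# Reference file committed to the repository (path is an assumption).
REFERENCE = Path(__file__).parent / "data" / "publications_feed_reference.xml"


@pytest.mark.django_db
def test_feed_matches_committed_reference(tmp_path):
    response = Client().get("/feeds/publications/")  # hypothetical feed URL

    # tmp_path is a per-test temporary directory managed by pytest; old run
    # directories are pruned automatically, so the generated file never needs
    # manual cleanup in the test itself.
    generated = tmp_path / "generated_feed.xml"
    generated.write_bytes(response.content)

    assert generated.read_text() == REFERENCE.read_text()
```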
@uxairibrar unit tests are running again.
Closes #49