Cushion Normal Improvements #248
Conversation
…omposition instead of coordinate transformation
Codecov Report ❌ Patch coverage is
Additional details and impacted files (Coverage Diff):

|          | main   | #248   | +/-    |
|----------|--------|--------|--------|
| Coverage | 44.22% | 45.34% | +1.11% |
| Files    | 127    | 144    | +17    |
| Lines    | 8554   | 10103  | +1549  |
| Hits     | 3783   | 4581   | +798   |
| Misses   | 4771   | 5522   | +751   |
ekiefl left a comment
> It avoids quaternions and the coordinate system transformation, though, which means the whole thing can be numba-ized.
This is really nice. It seems to be about 60x faster, and in absolute terms this fn call used to take around 160 µs, which is pretty slow considering a 9-ball break shot currently takes 42 ms on my computer. Since that shot had 15 ball-cushion collisions, around 2 ms of it went to resolving ball-cushion collisions.
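For reference, a micro-benchmark sketch of how a per-call number like 160 µs can be measured; the function below is a made-up stand-in, not the actual pooltool call:

```python
import timeit

import numpy as np

# Hypothetical stand-in for the collision-normal routine being timed; the real
# function lives in pooltool and takes the engine's own data structures.
def fake_get_normal(xyz: np.ndarray) -> np.ndarray:
    v = np.array([xyz[0], xyz[1], 0.0])
    return v / np.linalg.norm(v)

xyz = np.array([0.3, 0.7, 0.0])

n_calls = 10_000
seconds = timeit.timeit(lambda: fake_get_normal(xyz), number=n_calls)
print(f"{seconds / n_calls * 1e6:.1f} us per call")
```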
> One concern I have is the precision for sphere_half_space_collision seems a little bit worse this way. I had to increase the tolerance in the symmetry unit test.
I find this concerning, too. It's quite a concession to bump the atol/rtol to 1e-2 when their defaults are 1e-5/1e-8. My biggest fear is that there is some numerical imprecision that's palatable under most conditions, but becomes problematic when doing sensitive parameter sweeps, like the numerical imprecision Ersin discovered in Mathavan ball-ball when he was fitting his 3-cushion trajectories.
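To put that concession in perspective, a toy comparison assuming the symmetry test uses an np.allclose-style check (the actual helper it calls may differ):

```python
import numpy as np

# An error of ~5e-3 per component sails through rtol/atol of 1e-2 but fails
# under numpy's default tolerances (rtol=1e-5, atol=1e-8 for np.allclose).
expected = np.array([0.123456, -0.654321, 0.0])
observed = expected + 5e-3

print(np.allclose(expected, observed))                        # False
print(np.allclose(expected, observed, rtol=1e-2, atol=1e-2))  # True
```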
Is it clear to you why this imprecision exists?
> I noticed that the decomposition function can return negative magnitudes. I added a test case with a FIXME highlighting the issue. It seems sensible that maybe that's a desired behavior and that these really aren't magnitudes but signed components or something. If that's the case, maybe we could change the language so there's less potential for misinterpretation.
Thanks for improving test parametrization w.r.t
  return ptmath.unit_vector(np.array([self.lx, self.ly, 0]))

- def get_normal_xy(self, rvw: NDArray[np.float64]) -> NDArray[np.float64]:
+ def get_normal_xy(self, xyz: NDArray[np.float64]) -> NDArray[np.float64]:
Thanks for creating this symmetry in the call signatures.
        ),
    ],
)
def test_decompose_normal_tangent(
Thanks for adding these tests
No. Maybe due to the cross products in the non-transformed coordinate system. There are probably more chances for error to accumulate this way vs. in the collision-normal coordinate system. Or maybe I made a subtle error.
|
Well, since I'm worried about it, I'll find time to do a sensitivity test and see if it really is an issue in practice.
I suspect it’s due to the subtractions in the cross product calculation. There could be “catastrophic cancellation” in some cases. And probably some other stuff I’m unaware of. Maybe better to just leave it how it was.
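A deliberately exaggerated toy case of that failure mode, showing just the mechanism; the magnitudes have nothing to do with pooltool's actual numbers:

```python
import numpy as np

# The z component of a cross product is a_x*b_y - a_y*b_x. When the two
# products are large and nearly equal, float64 rounds each product before
# the subtraction, and the true difference can vanish entirely.
a = np.array([1e8, 1e8 + 1, 0.0])
b = np.array([1e8 - 1, 1e8, 0.0])

z_float = np.cross(a, b)[2]

# Same arithmetic with exact integers (all of these inputs are exact in float64).
z_exact = int(1e8) * int(1e8) - (int(1e8) + 1) * (int(1e8) - 1)

print(z_float)  # 0.0 -> the answer was lost to cancellation
print(z_exact)  # 1
```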
This is my instinct. And I suspect you're bang on about catastrophic cancellation. Should we just stomach the loss and revert that change, or should we dig deeper? Your call.
…ctor decomposition instead of coordinate transformation"
This reverts commit f4a418e.
Reverting for now. Not worth the trouble, I think.
That's an interesting analysis. Yeah, Mathavan doesn't look great there. Could maybe fiddle with max_steps and delta_p.
ekiefl left a comment
LGTM! Merging now. Thanks for these changes.
ekiefl left a comment
@derek-mcblane, just noticed this straggler:
> I noticed that the decomposition function can return negative magnitudes. I added a test case with a FIXME highlighting the issue. It seems sensible that maybe that's a desired behavior and that these really aren't magnitudes but signed components or something. If that's the case, maybe we could change the language so there's less potential for misinterpretation.
Am I right that this is a signed component?
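If so, a tiny illustration of the distinction (illustrative only, not the actual decomposition helper):

```python
import numpy as np

# The projection of a velocity onto a unit normal is a signed scalar, not a
# magnitude: it is negative whenever the velocity points against the normal.
n = np.array([1.0, 0.0, 0.0])   # unit cushion normal
v = np.array([-2.0, 3.0, 0.0])  # ball moving into the cushion

signed_component = float(np.dot(v, n))
print(signed_component)       # -2.0
print(abs(signed_component))  # 2.0, what a true "magnitude" would be
```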




These are a few minor changes. See the individual commits.
One concern I have is the precision for sphere_half_space_collision seems a little bit worse this way. I had to increase the tolerance in the symmetry unit test. It avoids quaternions and the coordinate system transformation, though, which means the whole thing can be numba-ized.
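For later readers, a minimal sketch of the decomposition idea and why it is numba-friendly; the names and signatures are illustrative, not the actual pooltool implementation:

```python
import numpy as np
from numba import njit

# Illustrative sketch only. Splitting a velocity into its signed normal
# component and the tangential rest needs nothing beyond dot products and
# array arithmetic, which numba's nopython mode compiles happily; no
# quaternions or change-of-basis matrices required.
@njit(cache=True)
def decompose_normal_tangent(v, n):
    v_n = v[0] * n[0] + v[1] * n[1] + v[2] * n[2]  # signed component along unit normal n
    v_normal = v_n * n
    v_tangent = v - v_normal
    return v_n, v_normal, v_tangent

v = np.array([-1.0, 0.5, 0.0])  # ball velocity
n = np.array([1.0, 0.0, 0.0])   # unit cushion normal
print(decompose_normal_tangent(v, n))
# (-1.0, array([-1., 0., 0.]), array([0. , 0.5, 0. ]))
```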
TODO: