
Conversation


@jglaser jglaser commented Nov 6, 2018

I am opening this PR to solicit input at an early design stage and to track progress. It aims to eliminate the need to go through snapshots and expensive device/host copies when computing CVs on the GPU, by allowing the CVs to be computed inside the engine and the biasing forces to be accumulated there as well.

The minimal information needed to bias a system along a CV is a constant scaling factor applied to all forces, together with the value of the CV itself. The new CollectiveVariable::ApplyBias method takes care of this. The new collective variable HOOMDCV can take a user-defined HOOMD md.force object and wrap its C++-side implementation into a SSAGES CV.
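A minimal sketch of the idea, not the actual SSAGES interface: the names `Vec3`, `SumFxCV`, and the exact `ApplyBias` signature below are illustrative assumptions; only the method name `CollectiveVariable::ApplyBias` comes from this PR.

```cpp
#include <cassert>
#include <vector>

struct Vec3 { double x, y, z; };

// Hypothetical interface: bias = constant scaling factor for all forces,
// return value = the current value of the CV itself.
class CollectiveVariable {
public:
    virtual ~CollectiveVariable() = default;
    virtual double ApplyBias(std::vector<Vec3>& forces, double bias) = 0;
};

// Toy CV for illustration only: the sum of the x-components of the forces.
class SumFxCV : public CollectiveVariable {
public:
    double ApplyBias(std::vector<Vec3>& forces, double bias) override {
        double cv = 0.0;
        for (auto& f : forces) {
            cv += f.x;        // reduce the CV value
            f.x *= bias;      // apply the constant scaling factor in place
            f.y *= bias;
            f.z *= bias;
        }
        return cv;
    }
};
```

The point of the single-pass design is that the engine never has to hand the full particle data back to the host: one scaling factor in, one scalar out.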

At this point, the changes only compile and most likely do not yet do anything sensible. The most important missing piece is calling the HalfStepHook before HOOMD's computeNetForce(), in addition to after it (which is the current behavior), so that the force can be biased before it is accumulated.
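The intended call order can be sketched schematically; everything below other than the names HalfStepHook and computeNetForce is a placeholder, not HOOMD's real integrator internals.

```cpp
#include <functional>
#include <string>
#include <vector>

// Schematic of one MD step with the proposed extra hook invocation.
struct Integrator {
    std::function<void()> halfStepHook;   // stands in for SSAGES's HalfStepHook
    std::vector<std::string>& log;        // records the call order

    void step() {
        log.push_back("first half-step");
        halfStepHook();                    // NEW: bias forces *before* ...
        log.push_back("computeNetForce");  // ... they are accumulated here
        halfStepHook();                    // existing call site (afterwards)
        log.push_back("second half-step");
    }
};
```

With only the existing (post-accumulation) call site, the bias arrives one step too late to affect the accumulated net force, which is why the pre-accumulation call is the key missing piece.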

When that is done, the snapshot copying can be optimized out entirely, depending on whether all active CVs have the modifiesParticleForces() flag set.
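A sketch of that optimization, assuming a hypothetical per-CV flag interface (only the flag name modifiesParticleForces() is from this PR):

```cpp
#include <cassert>
#include <memory>
#include <vector>

class CV {
public:
    virtual ~CV() = default;
    // true: this CV biases forces inside the engine, no snapshot needed.
    virtual bool modifiesParticleForces() const = 0;
};

struct InEngineCV : CV {
    bool modifiesParticleForces() const override { return true; }
};
struct SnapshotCV : CV {
    bool modifiesParticleForces() const override { return false; }
};

// The expensive device/host snapshot copy is only required if at least
// one active CV still works on the host-side snapshot.
bool needSnapshotCopy(const std::vector<std::unique_ptr<CV>>& active) {
    for (const auto& cv : active)
        if (!cv->modifiesParticleForces())
            return true;   // this CV requires the snapshot path
    return false;          // all CVs bias in-engine: skip the copies
}
```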

- compiles, untested
- also update to HOOMD 2.4

jglaser commented Nov 6, 2018

Obviously, a GPU kernel that multiplies the particle forces and reduces the CV will have to be implemented as well.
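A CPU analogue of what that kernel would compute (assumption: the real implementation would be a CUDA kernel with one thread per particle plus a parallel tree reduction; treating the fourth force component as the per-particle CV contribution is also an assumption of this sketch):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// HOOMD-style packed force: x, y, z components plus a scalar in w.
struct Force4 { double x, y, z, w; };

// Serial stand-in for the GPU kernel: scale every force by the constant
// bias factor and reduce the per-particle CV contributions to one scalar.
double scaleForcesAndReduceCV(std::vector<Force4>& force, double bias) {
    double cv = 0.0;
    for (std::size_t i = 0; i < force.size(); ++i) {
        cv += force[i].w;     // reduction (tree reduction on the GPU)
        force[i].x *= bias;   // per-particle work (one thread each)
        force[i].y *= bias;
        force[i].z *= bias;
    }
    return cv;
}
```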


bdice commented Nov 6, 2018

@mquevill @sesevgen Hi! I talked a bit with @jglaser while he was conceptualizing this. It'd be great to chat with you and/or any interested SSAGES devs about this idea. It seems like it could offer a reasonable extension of existing behavior that is highly performant (on the GPU, no host-device copies needed). Jens has written some advanced sampling methods for HOOMD before, and this would pave the way to run those through SSAGES instead of having our own plugins/libraries for every advanced sampling method. Maybe we can set up a call? Let me know.
