How large do you want to make L? We've tested interactions up to (L1, L2, L3) = (7, 7, 7) with high GPU utilization; see line 87 of tests/benchmark.py. You can benchmark it yourself by modifying the irreps in the example at the top of the README to use high L values.

We haven't benchmarked against those methods (but you are welcome to). In terms of memory for our implementation: the nonzeros of the TP are coded into the instruction stream, so as L increases we don't really run into memory constraints (though the instruction cache will eventually spill, causing a slowdown). I'd say we are still pretty memory- and compute-efficient overall :)
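To get a feel for how the instruction stream grows with L, here is a standalone sketch (not this repo's implementation) that counts the nonzero Clebsch-Gordan coefficients of an (L1, L2) → L3 interaction via the exact Racah sum over rationals; each nonzero corresponds to one coded multiply-add:

```python
from fractions import Fraction
from math import factorial

def cg_is_nonzero(j1, m1, j2, m2, j3, m3):
    """True iff the Clebsch-Gordan coefficient <j1 m1; j2 m2 | j3 m3>
    is nonzero (integer j's). The square-root prefactors in Racah's
    formula never vanish when the selection rules hold, so the
    coefficient is zero exactly when the alternating k-sum is zero;
    we evaluate that sum exactly with Fractions."""
    if m1 + m2 != m3:
        return False
    if not (abs(j1 - j2) <= j3 <= j1 + j2):
        return False
    if abs(m1) > j1 or abs(m2) > j2 or abs(m3) > j3:
        return False
    # k ranges over values keeping every factorial argument nonnegative.
    kmin = max(0, j2 - j3 - m1, j1 - j3 + m2)
    kmax = min(j1 + j2 - j3, j1 - m1, j2 + m2)
    s = Fraction(0)
    for k in range(kmin, kmax + 1):
        denom = (factorial(k) * factorial(j1 + j2 - j3 - k)
                 * factorial(j1 - m1 - k) * factorial(j2 + m2 - k)
                 * factorial(j3 - j2 + m1 + k)
                 * factorial(j3 - j1 - m2 + k))
        s += Fraction((-1) ** k, denom)
    return s != 0

def nnz(j1, j2, j3):
    """Count nonzeros in the (2j1+1) x (2j2+1) x (2j3+1) CG tensor."""
    return sum(cg_is_nonzero(j1, m1, j2, m2, j3, m1 + m2)
               for m1 in range(-j1, j1 + 1)
               for m2 in range(-j2, j2 + 1)
               if abs(m1 + m2) <= j3)

if __name__ == "__main__":
    for L in range(1, 8):
        print(f"L = {L}: {nnz(L, L, L)} nonzeros")
```

Running the loop shows the nonzero count (and hence the instruction count per TP) growing polynomially in L, which is why the instruction cache, rather than global memory, becomes the bottleneck at very high L.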

The SO(2) convolution is very clever, but you have to rotate irreps by a uni…

Answer selected by vbharadwaj-bk