Add LazyKernelMatrix and lazykernelmatrix #515
sethaxen wants to merge 4 commits into JuliaGaussianProcesses:master from …
Conversation
Yes, I imagine this would be useful also in the context of #93 (comment). Though there, the non-lazy version might often be sufficient (or even preferable).
module KernelFunctions

export kernelmatrix, kernelmatrix!, kernelmatrix_diag, kernelmatrix_diag!
export LazyKernelMatrix, lazykernelmatrix
Do we have to export both? Would exporting only lazykernelmatrix be sufficient, maybe?
It looks really nice already! The only thing that makes me a bit uneasy is the output type... I am not sure there is a strong guarantee that the type of the first evaluated entry would correspond to the rest of the matrix. But I also don't see how it could be solved easily...
The result is a matrix with the same entries as [`kernelmatrix(κ, x)`](@ref) but where the
entries are not computed until they are needed.
"""
lazykernelmatrix(κ::Kernel, x) = lazykernelmatrix(κ, x, x)
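To make the idea concrete, here is a minimal sketch of what such a lazy matrix could look like. This is a hypothetical illustration, not the PR's actual `LazyKernelMatrix` implementation; the name `SimpleLazyKernelMatrix` and the callable-kernel assumption are mine.

```julia
# Hypothetical sketch: a lazy kernel matrix that computes entries on
# demand in getindex, allocating no O(n^2) storage up front.
struct SimpleLazyKernelMatrix{T,Tk,Tx,Ty} <: AbstractMatrix{T}
    κ::Tk   # kernel (here: any callable k(a, b))
    x::Tx   # first input collection
    y::Ty   # second input collection
end

function SimpleLazyKernelMatrix(κ, x, y)
    # Guess the element type from one evaluation; as noted in the review,
    # there is no strong guarantee this matches all other entries.
    T = typeof(κ(first(x), first(y)))
    return SimpleLazyKernelMatrix{T,typeof(κ),typeof(x),typeof(y)}(κ, x, y)
end

Base.size(K::SimpleLazyKernelMatrix) = (length(K.x), length(K.y))
Base.getindex(K::SimpleLazyKernelMatrix, i::Int, j::Int) = K.κ(K.x[i], K.y[j])

# Usage: entries are only computed when indexed.
k(a, b) = exp(-abs2(a - b))
K = SimpleLazyKernelMatrix(k, 1:3, 1:3)
K[1, 1]
```

Subtyping `AbstractMatrix` means all generic linear-algebra fallbacks work out of the box, at the cost of one kernel evaluation per `getindex` call.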
It would be good to optimize this for the symmetric case, IMO, similar to kernelmatrix (which IIRC often does not use such a fallback but more optimized methods).
The optimization for the symmetric case is only calculating half the matrix, let's say the top half, and then redirecting all queries from the bottom half to the top half. (Actually, Distances simply copies the top half into the bottom half.)
Since this is lazy, there is probably no point in this optimization, because you do not do the calculation up front. And when you call getindex, it does not matter whether you calculate the element in the top or bottom half.
At least with other lazy iterators in Base it's a common pattern to collect results at some point (e.g., after filtering, mapping, etc.). In this case it seems beneficial to know that the lazy matrix is symmetric.
Good point, I'll see if there are ops we can optimize without too much extra code complexity.
Is there an interface for that? I mean, you could use LinearAlgebra.Symmetric, since that just wraps the original matrix AFAIK, but it also simply redirects queries, so collect would still cause two calculations, since you do a calculation per query.
I mean, you could just specialize collect, I guess.
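A specialized collect for the symmetric case could look roughly like the following. This is a hedged sketch of the idea in the comment above, written as a free-standing function (`collect_symmetric` is a hypothetical name, not part of KernelFunctions.jl):

```julia
# Sketch: materialize a symmetric lazy kernel matrix while evaluating the
# kernel only on the upper triangle, mirroring into the lower triangle.
function collect_symmetric(κ, x)
    n = length(x)
    T = typeof(κ(first(x), first(x)))  # same eltype-guess caveat as elsewhere
    K = Matrix{T}(undef, n, n)
    for j in 1:n, i in 1:j
        K[i, j] = κ(x[i], x[j])
        K[j, i] = K[i, j]  # copy, rather than recompute, the mirrored entry
    end
    return K
end

k(a, b) = a * b
collect_symmetric(k, [1.0, 2.0, 3.0])
```

This halves the number of kernel evaluations compared to a naive per-entry collect, which is exactly the benefit of knowing the lazy matrix is symmetric.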
Relevant part: For this, a lazy ProductArray would also be a neat abstraction: now the lazy kernelmatrix is simply a lazy mappedarray of this product array. As mappedarray has a field for its mapping, you can write custom code for when …
Synergy tangent: This is neat, because the ProductArray abstraction is also helpful as a multioutput abstraction: cf. https://github.com/lazyLibraries/ProductArrays.jl (not yet a package, JuliaRegistries/General#84683)
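The mapped-product pattern described above can be sketched with Base tools only (the actual suggestion uses MappedArrays.jl / ProductArrays.jl, which are assumptions outside this snippet): build a lazy collection of input pairs, then lazily map the kernel over it.

```julia
# Sketch of "lazy kernelmatrix = lazy map over a lazy product array",
# using only Base iterators as a stand-in for the packages mentioned.
lazy_pairs(x, y) = Iterators.product(x, y)                        # lazy "product array"
lazy_kernel(κ, x, y) = (κ(a, b) for (a, b) in lazy_pairs(x, y))   # lazy map

k(a, b) = a + b
G = lazy_kernel(k, 1:2, 1:2)
collect(G)  # entries are only computed here, with the product's 2×2 shape
```

Because the mapping (here the generator's function) is stored separately from the product of inputs, special dispatches can hook on the mapping, which is the point made in the comment.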
I have the same feeling as @sethaxen:
A separate dedicated type provides more information about the context and allows us to define dispatches, special operations and optimizations that might be less relevant or not well defined in the general case. |
I edited my reply to explain how you can still do special dispatches |
I actually found this pull request because I wanted to ask: does it ever make sense for kernelmatrix to be eager? I mean, memory access is expensive and you take …
I wonder if it's safe to ask Julia to infer the return type of …
MappedArrays does eltype inference 🤷
A separate option would be to be a little less ambitious with this initial implementation: do not implement a new matrix type, and just provide an interface for the operation that we want. Specifically, add the following function to the interface:

…

where the above methods are the default implementations and specify the semantics. The nice thing about doing things this way is that we avoid having to e.g. guess at the output type of …

Does this cover our needs? I guess really I'm just asking whether we actually need to go to the trouble of implementing a new matrix type, and can instead just implement a single function that can be overloaded. If there is a range of functionality that we require, then maybe a matrix type is needed, but if we really only have one operation in mind, maybe it's not?

edit: or we could add an additional argument to the function that says how things should be computed when you're doing block-wise operations, e.g. …
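One way to read the "single overloadable function" alternative is sketched below. The collapsed code in the comment above is not recoverable, so this is my own hypothetical version (`kernelmatrix_cols` is an invented name): a generic method that lazily yields columns of the kernel matrix and serves as the default semantics, which kernel types could then overload.

```julia
# Hypothetical sketch: instead of a lazy matrix type, expose one function
# whose generic method defines the semantics and can be overloaded.
# It lazily yields the columns of the kernel matrix, one at a time.
kernelmatrix_cols(κ, x, y) = ([κ(a, b) for a in x] for b in y)

k(a, b) = a * b
cols = kernelmatrix_cols(k, 1:2, 1:3)
first(cols)  # only the first column is computed
```

This avoids committing to an element type for a whole matrix (each column's type is determined when it is built), at the cost of not being an `AbstractMatrix` that generic linear algebra can consume.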
ProductArrays v1.0.0 is now online, so you could do … and then implement additional functionality for … if you want. I can also write a convenience method to skip …
Summary
This PR introduces functionality for lazily representing kernel matrices, which is necessary when the matrix might be too large to store in memory. Fixes #514
Proposed changes
- `lazykernelmatrix`: supports similar semantics as `kernelmatrix` but constructs a lazy representation
- `AbstractMatrix` subtype `LazyKernelMatrix`, constructed by the `lazykernelmatrix` default.

What alternatives have you considered?
- … `lazykernelmatrix`, but this could allow even more structured matrix types to be defined for specific kernel types and returned in the future.
- Should `LazyKernelMatrix` also store `obsdim1` and `obsdim2`? Currently we require the user has passed a vector, e.g. `RowVecs` or `ColVecs`.
- Should `y` being a `nothing` define a symmetric kernel matrix? This would allow a couple of checks to be done at compile time. In particular, we could support `+(::LazyKernelMatrix{T,Tk,Tx,Nothing}, ::Diagonal) -> LazyKernelMatrix`.

To-Do