mgplvm.models.svgp module
- class mgplvm.models.svgp.Svgp(kernel, n, m, n_samples, z, likelihood, whiten=True, tied_samples=True)[source]
Bases: mgplvm.models.svgp.SvgpBase
- property msg
- name = 'Svgp'
- property prms: Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
- Return type
Tuple[Tensor, Tensor, Tensor]
- training: bool
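A minimal construction sketch based on the signature above. The kernel, likelihood, and inducing variable `z` are taken as pre-built arguments because their constructors are not documented in this section; only the documented `Svgp` signature is used.

```python
from mgplvm.models.svgp import Svgp

def build_svgp(kernel, likelihood, z, n, m, n_samples):
    """Assemble an Svgp model using only the constructor signature documented above.

    `kernel`, `likelihood` and the inducing variable `z` are assumed to be
    constructed elsewhere with mgplvm's kernel, likelihood and inducing-point
    utilities, which are not documented in this section.
    """
    return Svgp(kernel, n, m, n_samples, z, likelihood,
                whiten=True, tied_samples=True)
```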
- class mgplvm.models.svgp.SvgpBase(kernel, n, m, n_samples, n_inducing, likelihood, q_mu=None, q_sqrt=None, whiten=True, tied_samples=True)[source]
Bases: mgplvm.models.gp_base.GpBase
- elbo(y, x, sample_idxs=None, m=None)[source]
- Parameters
- y : Tensor
data tensor with dimensions (n_samples x n x m)
- x : Tensor (single kernel) or Tensor list (product kernels)
input tensor(s) with dimensions (n_mc x n_samples x d x m)
- m : Optional[int]
used to scale the SVGP likelihood. If not provided, self.m (set at initialization) is used. This parameter is useful when we subsample data but want to weight the prior as if it were the full dataset, e.g. in cross-validation.
- Returns
- lik, prior_kl : Tuple[torch.Tensor, torch.Tensor]
lik has dimensions (n_mc x n); prior_kl has dimensions (n)
Notes
The implementation largely follows the derivation of the ELBO presented here; a usage sketch follows this entry.
- Return type
Tuple[Tensor, Tensor]
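A sketch of how the two returned terms might be combined into a training objective, assuming an already-constructed Svgp model and appropriately shaped tensors. The averaging and sign conventions below are illustrative assumptions, not mgplvm's documented training loop.

```python
def training_step(svgp, y, x, optimizer):
    """One hypothetical gradient step on the negative ELBO.

    y : data tensor with dimensions (n_samples x n x m)
    x : latent inputs with dimensions (n_mc x n_samples x d x m), single-kernel case
    """
    lik, prior_kl = svgp.elbo(y, x)  # lik: (n_mc x n), prior_kl: (n)
    # Monte Carlo average over the n_mc dimension, sum over the n outputs,
    # and subtract the KL term; maximizing the ELBO is written here as
    # minimizing its negative.
    loss = -(lik.mean(dim=0).sum() - prior_kl.sum())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.detach()
```

When training on a subset of conditions, the optional m argument of elbo can be passed so that the likelihood is weighted as if the full dataset were used, as described above.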
- predict(x, full_cov, sample_idxs=None)[source]
- Parameters
- x : Tensor (single kernel) or Tensor list (product kernels)
test input tensor(s) with dimensions (n_b x n_samples x d x m)
- full_cov : bool
if True, the full covariance is returned; otherwise only the diagonal is returned
- Returns
- mu : Tensor
mean of the predictive density at the test inputs
- v : Tensor
variance/covariance of the predictive density at the test inputs; the full covariance if full_cov is True, otherwise the diagonal variance (see the sketch following this entry)
- Return type
Tuple[Tensor, Tensor]
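A hypothetical helper built on the documented predict signature, turning the diagonal predictive variance into a simple mean plus or minus two standard deviation band; the clamping and band width are illustrative choices.

```python
def predictive_band(svgp, x_test, n_std=2.0):
    """Return (lower, upper) bands around the predictive mean at `x_test`.

    x_test : test inputs with dimensions (n_b x n_samples x d x m), single-kernel case.
    full_cov=False is used so that `v` holds the diagonal variance.
    """
    mu, v = svgp.predict(x_test, full_cov=False)
    sd = v.clamp(min=0.0).sqrt()  # guard against small negative values from numerics
    return mu - n_std * sd, mu + n_std * sd
```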
- sample(query, n_mc=1000, square=False, noise=True)[source]
- Parameters
- query : Tensor (single kernel)
test input tensor with dimensions (n_samples x d x m)
- n_mc : int
number of samples to return
- square : bool
determines whether to square the output
- noise : bool
determines whether we also sample explicitly from the noise model or simply return samples of the mean
- Returns
- y_samps : Tensor
samples from the model with dimensions (n_mc x n_samples x d x m); a usage sketch follows this entry
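A thin wrapper around the documented sample signature, drawing posterior samples at query points; the default n_mc value and the noise-flag handling here are illustrative.

```python
def posterior_draws(svgp, query, n_mc=500, with_noise=True):
    """Draw n_mc samples from the fitted model at `query`.

    query : test input tensor with dimensions (n_samples x d x m), single-kernel case.
    With with_noise=False, samples of the mean are returned rather than draws
    from the noise model, following the `noise` flag described above.
    """
    return svgp.sample(query, n_mc=n_mc, square=False, noise=with_noise)
```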
- training: bool