mgplvm.models.gplvm module
- class mgplvm.models.gplvm.Gplvm(obs, lat_dist, lprior, n, m, n_samples)[source]
Bases: torch.nn.modules.module.Module
- calc_LL(data, n_mc, kmax=5, m=None)[source]
- Parameters
- data : Tensor
data with dimensionality (n_samples x n x m)
- n_mc : int
number of MC samples
- kmax : int
parameter for estimating entropy for several manifolds (not used for some manifolds)
- m : Optional int
used to scale the svgp likelihood and sgp prior. If not provided, self.m (set at initialization) is used. This parameter is useful when subsampling the data but weighting the prior as if it were the full dataset, e.g. in cross-validation.
- Returns
- LL : Tensor
E_mc[p(Y)] (Burda et al.) (scalar)
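Where held-out log likelihoods are needed (e.g. for cross-validation), calc_LL can be called directly on a data tensor. A minimal sketch, assuming `model` is an already-constructed Gplvm instance and `Y` has shape (n_samples, n, m); the helper name `estimate_test_LL` is hypothetical:

```python
import torch

def estimate_test_LL(model, Y, n_mc=64):
    """Importance-weighted estimate of the data log likelihood.

    `model` is assumed to be a constructed Gplvm instance and `Y` a tensor
    of shape (n_samples, n, m), as documented above.
    """
    with torch.no_grad():
        LL = model.calc_LL(Y, n_mc=n_mc)  # scalar Tensor
    return LL.item()
```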
- elbo(data, n_mc, kmax=5, batch_idxs=None, sample_idxs=None, neuron_idxs=None, m=None, analytic_kl=False)[source]
- Parameters
- data : Tensor
data with dimensionality (n_samples x n x m)
- n_mc : int
number of MC samples
- kmax : int
parameter for estimating entropy for several manifolds (not used for some manifolds)
- batch_idxs : Optional int list
if None, all data are used (batch_size == m); otherwise, batch_size == len(batch_idxs)
- sample_idxs : Optional int list
if None, all data are used; otherwise, the ELBO is computed only for the selected samples
- neuron_idxs : Optional int list
if None, all data are used; otherwise, the ELBO is computed only for the selected neurons
- m : Optional int
used to scale the svgp likelihood and sgp prior. If not provided, self.m (set at initialization) is used. This parameter is useful when subsampling the data but weighting the prior as if it were the full dataset, e.g. in cross-validation.
- Returns
- svgp_elbo : Tensor
evidence lower bound of the sparse GP per neuron and MC sample for the current batch (n_mc x n). Note that this is the ELBO for the batch, which is proportional to an unbiased estimator of the full-data ELBO.
- kl : Tensor
estimated KL divergence between the variational distribution and the prior for the batch, per MC sample (n_mc)
Notes
The ELBO of the model for the batch is [svgp_elbo - kl].
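As a sketch of how the two returned terms combine per the note above, the following assumes `model` is a constructed Gplvm instance and `Y` has shape (n_samples, n, m); `batch_elbo` is a hypothetical helper name:

```python
import torch

def batch_elbo(model, Y, n_mc=32, batch_idxs=None):
    """Per-MC-sample batch ELBO, combining the terms as svgp_elbo - kl.

    `model` is assumed to be a constructed Gplvm instance and `Y` a tensor
    of shape (n_samples, n, m).
    """
    svgp_elbo, kl = model.elbo(Y, n_mc=n_mc, batch_idxs=batch_idxs)
    # svgp_elbo has shape (n_mc, n); kl has shape (n_mc,)
    return svgp_elbo.sum(-1) - kl  # shape (n_mc,)
```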
- forward(data, n_mc, kmax=5, batch_idxs=None, sample_idxs=None, neuron_idxs=None, m=None, analytic_kl=False)[source]
- Parameters
- data : Tensor
data with dimensionality (n_samples x n x m)
- n_mc : int
number of MC samples
- kmax : int
parameter for estimating entropy for several manifolds (not used for some manifolds)
- batch_idxs : Optional int list
if None, all data are used (batch_size == m); otherwise, batch_size == len(batch_idxs)
- sample_idxs : Optional int list
if None, all data are used; otherwise, the ELBO is computed only for the selected samples
- neuron_idxs : Optional int list
if None, all data are used; otherwise, the ELBO is computed only for the selected neurons
- m : Optional int
used to scale the svgp likelihood and sgp prior. If not provided, self.m (set at initialization) is used. This parameter is useful when subsampling the data but weighting the prior as if it were the full dataset, e.g. in cross-validation.
- Returns
- elbo : Tensor
evidence lower bound of the GPLVM model, averaged across MC samples and summed over n, m, and n_samples (scalar)
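Since forward returns a scalar ELBO, it can be plugged into a standard PyTorch optimization loop by minimizing its negation. A minimal sketch, assuming `model` is a constructed Gplvm instance and `Y` a tensor of shape (n_samples, n, m); `train_step` is a hypothetical helper name:

```python
import torch

def train_step(model, Y, optimizer, n_mc=32, batch_idxs=None):
    """One gradient step on the negative ELBO returned by forward()."""
    optimizer.zero_grad()
    elbo = model.forward(Y, n_mc=n_mc, batch_idxs=batch_idxs)
    loss = -elbo  # maximize the ELBO by minimizing its negation
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage (assuming `model` and `Y` already exist):
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# for _ in range(1000):
#     train_step(model, Y, opt, n_mc=32)
```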
- name = 'Gplvm'
- training: bool