mgplvm.models.bfa module
- class mgplvm.models.bfa.Bfa(n, d, sigma=None, learn_sigma=True, Y=None, learn_neuron_scale=False, ard=False, learn_scale=None)[source]
Bases: mgplvm.models.gp_base.GpBase
Bayesian Factor Analysis. Assumes Gaussian observation noise; computes log_prob and posterior predictions exactly.
- property dim_scale
- elbo(y, x, sample_idxs=None, m=None)[source]
- Parameters
- y : Tensor
data tensor with dimensions (n_samples x n x m)
- x : Tensor (single kernel) or Tensor list (product kernels)
input tensor(s) with dimensions (n_mc x n_samples x d x m)
- Returns
- lik, prior_kl : Tuple[torch.Tensor, torch.Tensor]
lik has dimensions (n_mc x n); prior_kl has dimensions (n) and is zero
- Return type
Tuple[Tensor, Tensor]
- property msg
- name = 'Bfa'
- property neuron_scale
- property prms: torch.Tensor
- Return type
Tensor
- property scale
- property sigma: torch.Tensor
- Return type
Tensor
- training: bool
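A minimal usage sketch for Bfa, based only on the constructor signature and tensor shapes documented above; the dimension values are hypothetical:

```python
import torch
from mgplvm.models.bfa import Bfa

# Hypothetical dimensions: n neurons, d latent dims, m time points
n, d, m, n_samples, n_mc = 10, 3, 50, 2, 5

model = Bfa(n, d)  # Gaussian-noise Bayesian FA; sigma is learned by default

y = torch.randn(n_samples, n, m)        # data: (n_samples x n x m)
x = torch.randn(n_mc, n_samples, d, m)  # latent inputs: (n_mc x n_samples x d x m)

lik, prior_kl = model.elbo(y, x)
# lik has shape (n_mc, n); prior_kl has shape (n,) and is zero,
# since Bfa marginalizes the loadings exactly rather than using a variational bound.
```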
- class mgplvm.models.bfa.Bvfa(n, d, m, n_samples, likelihood, q_mu=None, q_sqrt=None, tied_samples=True, Y=None, learn_neuron_scale=False, ard=False, learn_scale=None, rel_scale=1, scale=None, dim_scale=None, neuron_scale=None)[source]
Bases: mgplvm.models.gp_base.GpBase
- property dim_scale
- elbo(y, x, sample_idxs=None, m=None)[source]
- Parameters
- y : Tensor
data tensor with dimensions (n_samples x n x m)
- x : Tensor (single kernel) or Tensor list (product kernels)
input tensor(s) with dimensions (n_mc x n_samples x d x m)
- m : Optional[int]
used to scale the SVGP likelihood. If not provided, self.m (set at initialization) is used. This parameter is useful if we subsample the data but want to weight the prior as if it were the full dataset; we use this e.g. in cross-validation (see the sketch below).
- Returns
- lik, prior_kl : Tuple[torch.Tensor, torch.Tensor]
lik has dimensions (n_mc x n); prior_kl has dimensions (n)
- Return type
Tuple[Tensor, Tensor]
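A sketch of the subsampling use of m described above. The likelihood construction is an assumption: this page does not document mgplvm's likelihood classes, so the import path and the Gaussian constructor below are hypothetical stand-ins for whatever noise model you use:

```python
import torch
from mgplvm.models.bfa import Bvfa
from mgplvm import likelihoods  # hypothetical import path for the noise models

n, d, m, n_samples, n_mc = 10, 3, 100, 2, 5
lik_gauss = likelihoods.Gaussian(n)  # hypothetical constructor
model = Bvfa(n, d, m, n_samples, lik_gauss)

y = torch.randn(n_samples, n, m)
x = torch.randn(n_mc, n_samples, d, m)

# Train on a random subsample of m_sub time points, but scale the
# likelihood as if all m points were observed (e.g. for cross-validation):
m_sub = 20
idxs = torch.randperm(m)[:m_sub]
lik, prior_kl = model.elbo(y[..., idxs], x[..., idxs], m=m)
```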
- property msg
- name = 'Bvfa'
- property neuron_scale
- predict(x, full_cov, sample_idxs=None)[source]
- Parameters
- x : Tensor (single kernel) or Tensor list (product kernels)
test input tensor(s) with dimensions (n_b x n_samples x d x m)
- full_cov : bool
returns the full covariance if True, otherwise returns the diagonal
- Returns
- mu : Tensor
mean of the predictive density at the test inputs
- v : Tensor
variance/covariance of the predictive density at the test inputs; if full_cov is True this is the full covariance, otherwise the diagonal variance
- Return type
Tuple[Tensor, Tensor]
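A sketch of posterior prediction at test inputs, continuing the hypothetical model above; shapes follow the documented (n_b x n_samples x d x m) convention:

```python
# Posterior predictions at m_test new input locations
n_b, m_test = 1, 30
x_test = torch.randn(n_b, n_samples, d, m_test)

mu, v = model.predict(x_test, full_cov=False)    # diagonal predictive variance
mu2, cov = model.predict(x_test, full_cov=True)  # full predictive covariance
```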
- property prms: Tuple[torch.Tensor, torch.Tensor]
- Return type
Tuple
[Tensor
,Tensor
]
- property q_mu
- property q_sqrt
- sample(query, n_mc=1000, square=False, noise=True)[source]
- Parameters
- query : Tensor (single kernel)
test input tensor with dimensions (n_samples x d x m)
- n_mc : int
number of samples to return
- square : bool
determines whether to square the output
- noise : bool
determines whether we also sample explicitly from the noise model or simply return samples of the mean
- Returns
- y_samps : Tensor
samples from the model (n_mc x n_samples x d x m)
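And a sketch of drawing samples from the fitted model; note that query drops the leading Monte Carlo dimension used by elbo:

```python
query = torch.randn(n_samples, d, m)  # (n_samples x d x m)

# posterior samples including an explicit draw from the noise model
y_samps = model.sample(query, n_mc=1000, noise=True)

# samples of the mean only (no noise draw)
f_samps = model.sample(query, n_mc=1000, noise=False)
```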
- property scale
- training: bool
- class mgplvm.models.bfa.Fa(n, d, sigma=None, learn_sigma=True, Y=None, C=None)[source]
Bases: mgplvm.models.gp_base.GpBase
Standard non-Bayesian Factor Analysis. Assumes Gaussian observation noise; computes log_prob and posterior predictions exactly.
- elbo(y, x, sample_idxs=None, m=None)[source]
- Parameters
- y : Tensor
data tensor with dimensions (n_samples x n x m)
- x : Tensor (single kernel) or Tensor list (product kernels)
input tensor(s) with dimensions (n_mc x n_samples x d x m)
- Returns
- lik, prior_kl : Tuple[torch.Tensor, torch.Tensor]
lik has dimensions (n_mc x n); prior_kl has dimensions (n) and is zero
- Return type
Tuple[Tensor, Tensor]
- log_prob(y, x)[source]
Compute \(p(y \mid X) = \mathcal{N}(y \mid CX, I)\). x is (n_mc x n_samples x d x m); y is (n_samples x n x m).
- property msg
- name = 'Fa'
- property prms: torch.Tensor
- Return type
Tensor
- sample(query, n_mc=1000, square=False, noise=True)[source]
- Parameters
- query : Tensor (single kernel)
test input tensor with dimensions (n_samples x d x m)
- n_mc : int
number of samples to return
- square : bool
determines whether to square the output
- noise : bool
determines whether we also sample explicitly from the noise model or simply return samples of the mean
- Returns
- y_samps : Tensor
samples from the model (n_mc x n_samples x d x m)
- property sigma: torch.Tensor
- Return type
Tensor
- training: bool
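A usage sketch for Fa mirroring the Bfa example; log_prob evaluates the exact Gaussian density documented above (dimension values are hypothetical):

```python
import torch
from mgplvm.models.bfa import Fa

n, d, m, n_samples, n_mc = 10, 3, 50, 2, 5
model = Fa(n, d)

y = torch.randn(n_samples, n, m)
x = torch.randn(n_mc, n_samples, d, m)

lp = model.log_prob(y, x)         # exact log p(y|X) under Gaussian noise
lik, prior_kl = model.elbo(y, x)  # prior_kl is zero: Fa is not Bayesian over C
```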
- mgplvm.models.bfa.batch_capacitance_tril(W, D)[source]
Copied from the PyTorch source code. Computes the Cholesky factor of \(I + W^\top D^{-1} W\) for a batch of matrices \(W\) and a batch of vectors \(D\).
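A small numerical check of what this helper computes, assuming W has shape (..., n, k) and D has shape (..., n) as in the PyTorch original:

```python
import torch
from mgplvm.models.bfa import batch_capacitance_tril

W = torch.randn(4, 5, 2)    # batch of 4 matrices, each (5 x 2)
D = torch.rand(4, 5) + 0.5  # batch of 4 positive vectors of length 5

L = batch_capacitance_tril(W, D)  # (4 x 2 x 2) lower-triangular factors

# Direct computation for the first batch element:
K = torch.eye(2) + W[0].T @ torch.diag(1.0 / D[0]) @ W[0]
assert torch.allclose(L[0], torch.linalg.cholesky(K), atol=1e-5)
```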
- class mgplvm.models.bfa.vFa(n, d, m, n_samples, likelihood, Y=None, rel_scale=1, C=None)[source]
Bases: mgplvm.models.gp_base.GpBase
Variational non-Bayesian Factor Analysis. Allows for non-Gaussian noise.
- elbo(y, x, sample_idxs=None, m=None)[source]
- Parameters
- y : Tensor
data tensor with dimensions (n_samples x n x m)
- x : Tensor (single kernel) or Tensor list (product kernels)
input tensor(s) with dimensions (n_mc x n_samples x d x m)
- Returns
- lik, prior_kl : Tuple[torch.Tensor, torch.Tensor]
lik has dimensions (n_mc x n); prior_kl has dimensions (n) and is zero
- Return type
Tuple[Tensor, Tensor]
- property msg
- name = 'vFa'
- property prms: torch.Tensor
- Return type
Tensor
- sample(query, n_mc=1000, square=False, noise=True)[source]
- Parameters
- query : Tensor (single kernel)
test input tensor with dimensions (n_samples x d x m)
- n_mc : int
number of samples to return
- square : bool
determines whether to square the output
- noise : bool
determines whether we also sample explicitly from the noise model or simply return samples of the mean
- Returns
- y_samps : Tensor
samples from the model (n_mc x n_samples x d x m)
- training: bool
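Finally, a sketch of vFa with a non-Gaussian noise model. As above, the likelihood import path and Poisson constructor are assumptions (they are not documented on this page); substitute the noise model appropriate for your data:

```python
import torch
from mgplvm.models.bfa import vFa
from mgplvm import likelihoods  # hypothetical import path

n, d, m, n_samples, n_mc = 10, 3, 50, 2, 5
lik_poisson = likelihoods.Poisson(n)  # hypothetical constructor
model = vFa(n, d, m, n_samples, lik_poisson)

y = torch.poisson(torch.ones(n_samples, n, m))  # count data: (n_samples x n x m)
x = torch.randn(n_mc, n_samples, d, m)

lik, prior_kl = model.elbo(y, x)  # prior_kl is zero for this non-Bayesian model
```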