Linear#
- class tinygp.transforms.Linear(scale: JAXArray, kernel: Kernel)[source]#
Bases: Kernel
Apply a linear transformation to the input coordinates of the kernel
For example, the following transformed kernels are all equivalent, but the second supports more flexible transformations:
>>> import numpy as np
>>> from tinygp import kernels, transforms
>>> kernel0 = kernels.Matern32(4.5)
>>> kernel1 = transforms.Linear(1.0 / 4.5, kernels.Matern32())
>>> np.testing.assert_allclose(
...     kernel0.evaluate(0.5, 0.1), kernel1.evaluate(0.5, 0.1)
... )
- Parameters:
scale (JAXArray) – A 0-, 1-, or 2-dimensional array specifying the scale of this transform; see the sketch after this parameter list for a 2-dimensional example.
kernel (Kernel) – The kernel to use in the transformed space.
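As a sketch (the matrix values here are hypothetical, not taken from this reference), a 2-dimensional scale mixes the input dimensions before the base kernel measures distances:

>>> import numpy as np
>>> from tinygp import kernels, transforms
>>> scale = np.array([[1.0 / 1.5, 0.0], [0.3, 1.0 / 2.5]])
>>> kernel = transforms.Linear(scale, kernels.Matern32())
>>> x1 = np.array([0.5, -0.2])
>>> x2 = np.array([0.1, 0.4])
>>> value = kernel.evaluate(x1, x2)  # scalar kernel value for one pair of 2-D points

The 0-dimensional case reduces to the simple rescaling shown in the equivalence example above.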
- evaluate(X1: tinygp.helpers.JAXArray, X2: tinygp.helpers.JAXArray) → tinygp.helpers.JAXArray[source]#
Evaluate the kernel at a pair of input coordinates
This should be overridden by subclasses to return the kernel-specific value. Two things to note:

- Users shouldn’t generally call Kernel.evaluate(). Instead, always “call” the kernel instance directly; for example, you can evaluate the Matern-3/2 kernel using Matern32(1.5)(x1, x2), for arrays of input coordinates x1 and x2.
- When implementing a custom kernel, this method should treat X1 and X2 as single datapoints. In other words, these inputs will typically either be scalars or have shape n_dim, where n_dim is the number of input dimensions, rather than n_data or (n_data, n_dim), and you should let the Kernel vmap magic handle all the broadcasting for you; see the sketch after this list.
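As a sketch of the second point (MyExpSquared is a hypothetical kernel, not part of tinygp), a custom kernel’s evaluate() only ever sees one pair of points and never needs to loop over the dataset:

>>> import jax.numpy as jnp
>>> from tinygp import kernels
>>> class MyExpSquared(kernels.Kernel):
...     def evaluate(self, X1, X2):
...         # X1 and X2 are single datapoints (scalars for 1-D inputs);
...         # the base Kernel class broadcasts over full datasets.
...         return jnp.exp(-0.5 * jnp.square(X1 - X2))
>>> x = jnp.linspace(0.0, 1.0, 5)
>>> K = MyExpSquared()(x, x)  # full (5, 5) kernel matrix, built by the vmap machinery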
- evaluate_diag(X: tinygp.helpers.JAXArray) → tinygp.helpers.JAXArray#
Evaluate the kernel on its diagonal
The default implementation simply calls Kernel.evaluate() with X as both arguments, but subclasses can use this to make diagonal calculations more efficient, as in the sketch below.
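Continuing the hypothetical MyExpSquared sketch from above, a stationary kernel can shortcut the computation on the diagonal, where X1 == X2:

>>> import jax.numpy as jnp
>>> from tinygp import kernels
>>> class MyExpSquared(kernels.Kernel):
...     def evaluate(self, X1, X2):
...         return jnp.exp(-0.5 * jnp.square(X1 - X2))
...     def evaluate_diag(self, X):
...         # On the diagonal the exponent is exactly zero, so return the
...         # known value directly instead of computing the full expression.
...         return jnp.ones(())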