mccd.grads

GRADIENTS.

Defines the gradient classes used in the optimization procedures of the ModOpt package.

Authors: Tobias Liaudat <tobiasliaudat@gmail.com>

class CoeffLocGrad(data, weights, S, VT, H_glob, flux, sig, ker, ker_rot, SNR_weights, D, save_iter_cost=False, data_type='float', input_data_writeable=False, verbose=True)[source]

Bases: modopt.opt.gradient.GradParent, modopt.math.matrix.PowerMethod

Gradient class for the local coefficient update.

Local Alpha, \(\alpha_{k}\).

Parameters
  • data (numpy.ndarray) – Observed data.

  • weights (numpy.ndarray) – Corresponding pixel-wise weights.

  • S (numpy.ndarray) – Current eigenPSFs \(S\).

  • VT (numpy.ndarray) – Matrix enforcing the spatial constraint (in the MCCD-RCA case, the matrix of concatenated graph Laplacians).

  • H_glob (numpy.ndarray) – Current estimation of the global model.

  • flux (numpy.ndarray) – Per-object flux value.

  • sig (numpy.ndarray) – Noise levels.

  • ker (numpy.ndarray) – Shifting kernels.

  • ker_rot (numpy.ndarray) – Inverted shifting kernels.

  • SNR_weights (numpy.ndarray) – Array of per-star weights.

  • D (float) – Upsampling factor.

  • save_iter_cost (bool) – To save iteration diagnostic data. Default is False.

  • data_type (str) – Data type to be used. Default is 'float'.

  • input_data_writeable (bool) – Option to make the observed data writeable. Default is False. See ModOpt for more info.

  • verbose (bool) – Option for verbose output. Default is True. See ModOpt for more info.
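
Because the class also inherits from modopt.math.matrix.PowerMethod, an estimate of the degradation operator's spectral radius is computed by power iteration when the object is built, and it gives a natural step size for the optimizer. A minimal usage sketch, where coeff_loc_grad is a hypothetical, already-constructed CoeffLocGrad instance:

# PowerMethod exposes an estimate of the operator's spectral radius, which
# bounds the gradient's Lipschitz constant; its inverse is a safe step size.
lipschitz_bound = coeff_loc_grad.spec_rad
step_size = coeff_loc_grad.inv_spec_rad  # equals 1 / spec_rad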

reset_iter_cost()[source]

Reset iteration cost.

get_iter_cost()[source]

Get current iteration cost.
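
When the object is built with save_iter_cost=True, the data-fidelity value of each iteration is recorded and the history can be retrieved for diagnostics. A hedged sketch, with grad_op standing in for any of the gradient classes on this page:

grad_op.reset_iter_cost()               # clear previously recorded values
# ... run a block of optimizer iterations that evaluate cost() ...
cost_history = grad_op.get_iter_cost()  # recorded per-iteration cost values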

update_S(new_S, update_spectral_radius=True)[source]

Update current eigenPSFs.

update_H_glob(new_H_glob)[source]

Update current global model.

MX(alpha)[source]

Apply degradation operator and renormalize.

Parameters

alpha (numpy.ndarray) – Current coefficients (after factorization by \(V^{\top}\)).

MtX(x)[source]

Adjoint to degradation operator MX().

Parameters

x (numpy.ndarray) – Set of finer-grid images.
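
MX() and MtX() form a linear operator and its adjoint, so the pair can be sanity-checked numerically with the inner-product identity \(\langle M\alpha, x \rangle = \langle \alpha, M^{\top}x \rangle\). A generic check sketch (the grad_op instance and both shapes are hypothetical):

import numpy as np

alpha = np.random.randn(*alpha_shape)  # alpha_shape: shape of the coefficients
x = np.random.randn(*image_shape)      # image_shape: shape of the fine-grid images
lhs = np.sum(grad_op.MX(alpha) * x)    # <M alpha, x>
rhs = np.sum(alpha * grad_op.MtX(x))   # <alpha, M^T x>
assert np.isclose(lhs, rhs, rtol=1e-5)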

cost(x, y=None, verbose=False)[source]

Compute data fidelity term.

Notes

y is unused; it is only present so that modopt.opt.algorithms.Condat can pass the dual variable.

get_grad(x)[source]

Compute current iteration’s gradient.
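
For a weighted least-squares data-fidelity term, the gradient reduces to back-projecting the current residual through the adjoint. A hedged sketch of the generic computation (the actual implementation may fold in the pixel-wise and SNR weights differently):

residual = grad_op.MX(alpha) - data  # model minus observations
gradient = grad_op.MtX(residual)     # residual back-projected by the adjoint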

class CoeffGlobGrad(data, weights, S, Pi, H_loc, flux, sig, ker, ker_rot, D, SNR_weights, save_iter_cost=False, data_type='float', input_data_writeable=False, verbose=True)[source]

Bases: modopt.opt.gradient.GradParent, modopt.math.matrix.PowerMethod

Gradient class for the global coefficient update.

Global Alpha, \(\tilde{\alpha}\).

Parameters
  • data (numpy.ndarray) – Observed data.

  • weights (numpy.ndarray) – Corresponding pixel-wise weights.

  • S (numpy.ndarray) – Current eigenPSFs \(S\).

  • Pi (numpy.ndarray) – Matrix of position polynomials; see the sketch after this list.

  • H_loc (numpy.ndarray) – Current estimation of the local models.

  • flux (numpy.ndarray) – Per-object flux value.

  • sig (numpy.ndarray) – Noise levels.

  • ker (numpy.ndarray) – Shifting kernels.

  • ker_rot (numpy.ndarray) – Inverted shifting kernels.

  • SNR_weights (numpy.ndarray) – Array of per-star weights.

  • D (float) – Upsampling factor.

  • save_iter_cost (bool) – To save iteration diagnostic data. Default is False.

  • data_type (str) – Data type to be used. Default is 'float'.

  • input_data_writeable (bool) – Option to make the observed data writeable. Default is False. See ModOpt for more info.

  • verbose (bool) – Option for verbose output. Default is True. See ModOpt for more info.
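
The Pi parameter is easiest to picture concretely: each column stacks the monomials of one star's focal-plane position up to a maximum degree, so coefficients multiplied by \(\Pi\) vary polynomially across the field. A hedged illustration (the function name and normalization are hypothetical; MCCD builds its own version internally):

import numpy as np

def poly_pi(xs, ys, max_deg=2):
    # One row per monomial x**dx * y**dy with dx + dy <= max_deg,
    # one column per star position.
    rows = [xs ** dx * ys ** (d - dx)
            for d in range(max_deg + 1)
            for dx in range(d + 1)]
    return np.vstack(rows)

xs, ys = np.random.rand(10), np.random.rand(10)  # hypothetical star positions
Pi = poly_pi(xs, ys)                             # shape (6, 10) for max_deg=2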

reset_iter_cost()[source]

Reset iteration cost.

get_iter_cost()[source]

Get current iteration cost.

update_S(new_S, update_spectral_radius=True)[source]

Update current eigenPSFs.

update_H_loc(new_H_loc)[source]

Update current local models.

MX(alpha)[source]

Apply degradation operator and renormalize.

Parameters

alpha (numpy.ndarray) – Current coefficients (after factorization by \(\Pi\)).

MtX(x)[source]

Adjoint to degradation operator MX().

Parameters

x (numpy.ndarray) – Set of finer-grid images.

cost(x, y=None, verbose=False)[source]

Compute data fidelity term.

Notes

y is unused; it is only present so that modopt.opt.algorithms.Condat can pass the dual variable.

get_grad(x)[source]

Compute current iteration’s gradient.

class SourceLocGrad(data, weights, A, H_glob, flux, sig, ker, ker_rot, SNR_weights, D, filters, save_iter_cost=False, data_type='float', input_data_writeable=False, verbose=True)[source]

Bases: modopt.opt.gradient.GradParent, modopt.math.matrix.PowerMethod

Gradient class for the local eigenPSF update.

Local S, \(S_{k}\).

Parameters
  • data (numpy.ndarray) – Input data array: an array of 2D observed images (i.e., with noise).

  • weights (numpy.ndarray) – Corresponding pixel-wise weights.

  • A (numpy.ndarray) – Current estimation of corresponding coefficients.

  • H_glob (numpy.ndarray) – Current estimation of the global model.

  • flux (numpy.ndarray) – Per-object flux value.

  • sig (numpy.ndarray) – Noise levels.

  • ker (numpy.ndarray) – Shifting kernels.

  • ker_rot (numpy.ndarray) – Inverted shifting kernels.

  • SNR_weights (numpy.ndarray) – Array of per-star weights.

  • D (float) – Upsampling factor.

  • filters (numpy.ndarray) – Set of filters for the wavelet transform; see the sketch after this list.

  • save_iter_cost (bool) – To save iteration diagnostic data. Default is False.

  • data_type (str) – Data type to be used. Default is 'float'.

  • input_data_writeable (bool) – Option to make the observed data writeable. Default is False. See ModOpt for more info.

  • verbose (bool) – Option for verbose output. Default is True. See ModOpt for more info.
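
Note that the optimization variable of this class lives in a transformed domain: MX() takes the eigenPSFs' wavelet coefficients, which is precisely what makes it cheap to promote sparsity between gradient steps with a soft threshold. A minimal stand-alone sketch of that idea (ModOpt provides its own proximity operators for the real pipeline):

import numpy as np

def soft_threshold(transf_S, thresh):
    # Shrink wavelet coefficients toward zero; small, noise-dominated
    # coefficients are set exactly to zero.
    return np.sign(transf_S) * np.maximum(np.abs(transf_S) - thresh, 0.0)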

reset_iter_cost()[source]

Reset iteration cost.

get_iter_cost()[source]

Get current iteration cost.

update_A(new_A, update_spectral_radius=True)[source]

Update current coefficients.

update_H_glob(new_H_glob)[source]

Update current global model.

MX(transf_S)[source]

Apply degradation operator and renormalize.

Parameters

transf_S (numpy.ndarray) – Current eigenPSFs in wavelet (by default Starlet) space.

Returns

result

Return type

numpy.ndarray

MtX(x)[source]

Adjoint to degradation operator MX().

cost(x, y=None, verbose=False)[source]

Compute data fidelity term.

Notes

y is unused; it is only present so that modopt.opt.algorithms.Condat can pass the dual variable.

get_grad(x)[source]

Compute current iteration’s gradient.

class SourceGlobGrad(data, weights, A, H_loc, flux, sig, ker, ker_rot, SNR_weights, D, filters, save_iter_cost=False, data_type='float', input_data_writeable=False, verbose=True)[source]

Bases: modopt.opt.gradient.GradParent, modopt.math.matrix.PowerMethod

Gradient class for the global eigenPSF update.

Global S, \(\tilde{S}\).

Parameters
  • data (numpy.ndarray) – Input data array: an array of 2D observed images (i.e., with noise).

  • weights (numpy.ndarray) – Corresponding pixel-wise weights.

  • A (numpy.ndarray) – Current estimation of corresponding coefficients.

  • H_loc (numpy.ndarray) – Current estimation of the local models.

  • flux (numpy.ndarray) – Per-object flux value.

  • sig (numpy.ndarray) – Noise levels.

  • ker (numpy.ndarray) – Shifting kernels.

  • ker_rot (numpy.ndarray) – Inverted shifting kernels.

  • SNR_weights (numpy.ndarray) – Array of per-star weights.

  • D (float) – Upsampling factor.

  • filters (numpy.ndarray) – Set of filters for the wavelet transform.

  • save_iter_cost (bool) – To save iteration diagnostic data. Default is False.

  • data_type (str) – Data type to be used. Default is 'float'.

  • input_data_writeable (bool) – Option to make the observed data writeable. Default is False. See ModOpt for more info.

  • verbose (bool) – Option for verbose output. Default is True. See ModOpt for more info.

reset_iter_cost()[source]

Reset iteration cost.

get_iter_cost()[source]

Get current iteration cost.

update_A(new_A, update_spectral_radius=True)[source]

Update current coefficients.

update_H_loc(new_H_loc)[source]

Update current local models.

MX(transf_S)[source]

Apply degradation operator and renormalize.

Parameters

transf_S (numpy.ndarray) – Current eigenPSFs in Starlet space.

Returns

result

Return type

numpy.ndarray

MtX(x)[source]

Adjoint to degradation operator MX().

cost(x, y=None, verbose=False)[source]

Compute data fidelity term.

Notes

y is unused; it is only present so that modopt.opt.algorithms.Condat can pass the dual variable.

get_grad(x)[source]

Compute current iteration’s gradient.
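
Taken together, the four classes cover both halves of both models: coefficients and eigenPSFs, local and global. A highly simplified, hypothetical sketch of the block-alternating scheme they serve (all names are placeholders; the real orchestration lives elsewhere in the mccd package):

for outer_it in range(n_outer_iterations):
    # Local block: optimize the local eigenPSFs (SourceLocGrad), then the
    # local coefficients (CoeffLocGrad), keeping each operator in sync:
    coeff_loc_grad.update_S(new_S_loc)
    source_loc_grad.update_A(new_A_loc)
    # Global block: same pattern with SourceGlobGrad / CoeffGlobGrad, then
    # cross-update the complementary model that each block held fixed:
    coeff_loc_grad.update_H_glob(new_H_glob)
    coeff_glob_grad.update_H_loc(new_H_loc)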