Picard: a very fast algorithm for Independent Component Analysis.
Parameters

n_components
    Number of components to use. If None is passed, all are used.
ortho
    If True, uses Picard-O and enforces an orthogonal constraint. Otherwise, uses the standard Picard algorithm.
extended
    If True, uses the extended algorithm to separate sub- and super-Gaussian sources. If None (default), it is set to True if ortho == True, and False otherwise. With extended=True we recommend keeping the density set to ‘tanh’. See notes below.
whiten
    If False, the data is assumed to already be whitened, and no whitening is performed.
fun
    Either a built-in density model (‘tanh’, ‘exp’ or ‘cube’), or a custom density. A custom density is a class implementing two methods, ‘log_lik’ and ‘score_and_der’. See examples in the densities.py file and the sketch after this parameter list.
max_iter
    Maximum number of iterations during fit.
tol
    Tolerance on the update at each iteration.
w_init
    The mixing matrix used to initialize the algorithm.
m
    Size of L-BFGS’s memory.
ls_tries
    Number of attempts during the backtracking line search.
lambda_min
    Threshold on the eigenvalues of the Hessian approximation. Any eigenvalue below lambda_min is shifted to lambda_min.
random_state
    Used to initialize w_init when it is not specified, with a normal distribution. Pass an int for reproducible results across multiple function calls.
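For the custom density option of fun, here is a minimal sketch of what such a class might look like, re-implementing the ‘tanh’ density. The conventions assumed here (elementwise log-likelihood for ‘log_lik’, the pair (score, score derivative) for ‘score_and_der’, and passing an instance rather than a class) are modelled on the built-in densities and should be checked against densities.py.

>>> import numpy as np
>>> from picard import Picard
>>> class CustomTanh:
...     """Hypothetical custom density mimicking the built-in 'tanh'."""
...     def log_lik(self, Y):
...         # numerically stable log cosh(Y), up to an additive constant
...         return np.abs(Y) + np.log1p(np.exp(-2 * np.abs(Y)))
...     def score_and_der(self, Y):
...         score = np.tanh(Y)            # derivative of log cosh(Y)
...         return score, 1 - score ** 2  # score and its derivative
>>> transformer = Picard(n_components=7, fun=CustomTanh(), random_state=0)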
Notes
Using a different density than ‘tanh’ may lead to erratic behavior of the algorithm: when extended=True, the non-linearity used by the algorithm is x +/- fun(x). The non-linearity should correspond to a density, hence fun should be dominated by x ** 2. Further, x + fun(x) should separate super-Gaussian sources and x - fun(x) should separate sub-Gaussian sources. This set of requirements is met by ‘tanh’, as the quick check below illustrates.
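As a quick numerical sanity check (illustrative only, not part of the library): for the ‘tanh’ non-linearity, fun(x) = tanh(x) is bounded by 1 and is therefore dominated by x ** 2 away from the origin.

>>> import numpy as np
>>> x = np.linspace(1, 100, 50)
>>> bool(np.all(np.tanh(x) < x ** 2))  # tanh is bounded, so x ** 2 dominates it
True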
Examples
>>> from sklearn.datasets import load_digits
>>> from picard import Picard
>>> X, _ = load_digits(return_X_y=True)
>>> transformer = Picard(n_components=7,
... random_state=0)
>>> X_transformed = transformer.fit_transform(X)
>>> X_transformed.shape
(1797, 7)
Attributes

components_
    The linear operator to apply to the data to get the independent sources. This is equal to the unmixing matrix when whiten is False, and equal to np.dot(unmixing_matrix, self.whitening_) when whiten is True.
mixing_
    The pseudo-inverse of components_. It is the linear operator that maps independent sources to the data.
mean_
    The mean over features. Only set if self.whiten is True.
whitening_
    Only set if whiten is True. This is the pre-whitening matrix that projects the data onto the first n_components principal components.
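As an illustration of how these attributes fit together, the sketch below recovers the sources and reconstructs the data directly from components_, mixing_ and mean_. The exact formulas mirror scikit-learn's FastICA conventions and are an assumption here, not a documented guarantee.

>>> import numpy as np
>>> from sklearn.datasets import load_digits
>>> from picard import Picard
>>> X, _ = load_digits(return_X_y=True)
>>> ica = Picard(n_components=7, random_state=0).fit(X)
>>> S = np.dot(X - ica.mean_, ica.components_.T)    # plays the role of transform(X)
>>> X_back = np.dot(S, ica.mixing_.T) + ica.mean_   # plays the role of inverse_transform(S)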
Methods

fit
    Fit the model to X.
fit_transform
    Fit the model and recover the sources from X.
get_feature_names_out
    Get output feature names for transformation.
get_params
    Get parameters for this estimator.
inverse_transform
    Transform the sources back to the mixed data (apply mixing matrix).
set_params
    Set the parameters of this estimator.
transform
    Recover the sources from X (apply the unmixing matrix).
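A minimal sketch of how these methods chain together in the usual scikit-learn fashion (the round trip through inverse_transform is illustrative; with n_components smaller than the number of features the reconstruction is not exact):

>>> from sklearn.datasets import load_digits
>>> from picard import Picard
>>> X, _ = load_digits(return_X_y=True)
>>> ica = Picard(n_components=7, random_state=0)
>>> S = ica.fit_transform(X)           # fit the model and recover the sources
>>> X_back = ica.inverse_transform(S)  # map the sources back to the mixed data
>>> params = ica.get_params()          # standard scikit-learn parameter access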