Perform Independent Component Analysis.
Training vector, where n_samples is the number of samples and n_features is the number of features.
Either a built-in density model ('tanh', 'exp', or 'cube'), or a custom density. A custom density is a class that should implement two methods called 'log_lik' and 'score_and_der'. See examples in the densities.py file.
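As a sketch of what such a custom density could look like, here is a tanh-style class with the two required methods. The exact signatures and return conventions are assumptions based on the description above; the reference implementations live in densities.py of the picard package.

```python
import numpy as np


class CustomTanhDensity:
    """Sketch of a custom density (signatures assumed, see densities.py)."""

    def log_lik(self, Y):
        # Log-likelihood of the model density, up to an additive constant.
        # For the tanh score, the log-density is log(cosh(y)).
        return np.log(np.cosh(Y))

    def score_and_der(self, Y):
        # Score function (derivative of log_lik) and its derivative.
        score = np.tanh(Y)
        score_der = 1.0 - score ** 2
        return score, score_der
```

A quick consistency check is that `score_and_der` returns the numerical derivative of `log_lik`.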
Number of components to extract. If None no dimension reduction is performed.
If True, uses Picard-O. Otherwise, uses the standard Picard.
If True, uses the extended algorithm to separate sub- and super-Gaussian sources. By default, True if ortho is True, False otherwise. Using a density other than 'tanh' may lead to erratic behavior of the algorithm: when extended=True, the non-linearity used by the algorithm is x +/- fun(x). The non-linearity should correspond to a density, hence fun should be dominated by x ** 2. Further, x + fun(x) should separate super-Gaussian sources and x - fun(x) should separate sub-Gaussian sources. These requirements are met by 'tanh'.
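Concretely, with the default fun='tanh', the two extended non-linearities described above can be written out with NumPy. This is only an illustration of the formulas x +/- fun(x), not picard's internal code:

```python
import numpy as np


def g_super(x):
    # Non-linearity for super-Gaussian sources: x + tanh(x)
    return x + np.tanh(x)


def g_sub(x):
    # Non-linearity for sub-Gaussian sources: x - tanh(x)
    return x - np.tanh(x)
```

Both are odd, monotone non-decreasing functions, as expected of a score derived from a valid density.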
If True, perform an initial whitening of the data. If False, the data is assumed to have already been preprocessed: it should be centered, normed, and white; otherwise you will get incorrect results. In this case, the parameter n_components is ignored.
If True, X_mean is returned as well. X_mean equals 0 if centering is False.
Whether or not to return the number of iterations.
If True, X is mean-corrected.
Maximum number of iterations to perform.
A positive scalar giving the tolerance at which the un-mixing matrix is considered to have converged.
Size of L-BFGS's memory.
Number of attempts during the backtracking line-search.
Threshold on the eigenvalues of the Hessian approximation. Any eigenvalue below lambda_min is shifted to lambda_min.
Whether to check the fun provided by the user at the beginning of the run. Setting it to False is not safe.
Initial un-mixing array of shape (n_components, n_components). If None (default), a random rotation is used.
If an int, perform fastica_it iterations of FastICA before running Picard. It may help to start from a better point.
Used to perform a random initialization when w_init is not provided. If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
Prints information about the state of the algorithm if True.
If whiten is True, K is the pre-whitening matrix that projects the data onto the first n_components principal components. If whiten is False, K is None.
Estimated un-mixing matrix. The mixing matrix can be obtained by:

    w = np.dot(W, K)
    A = np.dot(w.T, np.linalg.inv(np.dot(w, w.T)))
Estimated source matrix.
The mean over features. Returned only if return_X_mean is True.
Number of iterations taken to converge. This is returned only when return_n_iter is set to True.
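The mixing-matrix formula given under W above can be checked with plain NumPy: A defined this way is the right pseudo-inverse of w = np.dot(W, K), so np.dot(w, A) recovers the identity. The sketch below uses random matrices as stand-ins for picard's outputs; the shapes are assumptions for illustration.

```python
import numpy as np

rng = np.random.RandomState(0)
n_components, n_features = 3, 5

# Stand-ins for picard's outputs: W is square (n_components, n_components),
# K projects from n_features down to n_components.
W = rng.randn(n_components, n_components)
K = rng.randn(n_components, n_features)

# Mixing matrix, following the formula from the docstring above.
w = np.dot(W, K)
A = np.dot(w.T, np.linalg.inv(np.dot(w, w.T)))

# A is the right pseudo-inverse of w: w @ A gives the identity.
print(np.allclose(np.dot(w, A), np.eye(n_components)))
```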