Perform Independent Slow Feature Analysis on the input data.

**Internal variables of interest**

``self.RP``
    The global rotation-permutation matrix. This is the filter
    applied to the input data to obtain the output data.

``self.RPC``
    The *complete* global rotation-permutation matrix. This is a
    matrix of dimension ``input_dim x input_dim`` (i.e., the 'outer
    space' is retained).

``self.covs``
    A `mdp.utils.MultipleCovarianceMatrices` instance containing
    the current time-delayed covariance matrices of the input data.
    After convergence the uppermost ``output_dim x output_dim``
    submatrices should be almost diagonal.
    ``self.covs[n-1]`` is the covariance matrix relative to the
    ``n``-th time-lag.

    Note: the matrices are not cleared after convergence. If you need
    to free some memory, you can safely delete them with::

        >>> del self.covs

``self.initial_contrast``
    A dictionary with the starting contrast and its SFA and ICA parts.

``self.final_contrast``
    Like the above, but after convergence.

Note: if you intend to use this node on large datasets, have a look
at the ``stop_training`` method documentation for ways to speed
things up.
References:
Blaschke, T., Zito, T., and Wiskott, L. (2007).
Independent Slow Feature Analysis and Nonlinear Blind Source Separation.
Neural Computation 19(4):994-1021.
http://itb.biologie.hu-berlin.de/~wiskott/Publications/BlasZitoWisk2007-ISFA-NeurComp.pdf
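To make the time-delayed covariance matrices stored in ``self.covs`` concrete, here is a minimal numpy sketch (numpy and the helper name ``delayed_cov`` are assumptions of this example, valid for lags ``tau >= 1``; inside the node, `mdp.utils.MultipleCovarianceMatrices` computes these internally)::

```python
import numpy as np

def delayed_cov(x, tau):
    """Symmetrized time-delayed covariance E[x(t) x(t+tau)^T].
    x has observations on rows and variables on columns; tau >= 1."""
    x = x - x.mean(axis=0)            # remove the mean first
    a, b = x[:-tau], x[tau:]          # x(t) and x(t + tau)
    c = a.T @ b / (len(x) - tau)      # raw time-delayed covariance
    return 0.5 * (c + c.T)            # symmetrize

rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 3))
c1 = delayed_cov(x, 1)                # covariance at time-lag 1
```

After convergence, the uppermost ``output_dim x output_dim`` blocks of such matrices (in the rotated coordinates) should be nearly diagonal.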
**Instance variables inherited from Node**

``_train_seq``
    List of tuples.
``dtype``
    dtype.
``input_dim``
    Input dimensions.
``output_dim``
    Output dimensions.
``supported_dtypes``
    Supported dtypes.
Perform Independent Slow Feature Analysis.

The notation is the same as in the paper by Blaschke et al. Please
refer to the paper for more information.

:Parameters:
  lags
    list of time-lags used to generate the time-delayed covariance
    matrices (in the paper this is the set of \tau). If ``lags`` is
    an integer, the time-lags 1, 2, ..., ``lags`` are used.
    Note that time-lag == 0 (instantaneous correlation) is always
    implicitly used.
  sfa_ica_coeff
    a list of two floats defining the weights of the SFA and ICA
    parts of the objective function. They are called b_{SFA} and
    b_{ICA} in the paper.
  sfaweights
    weighting factors for the covariance matrices relative to the
    SFA part of the objective function (called \kappa_{SFA}^{\tau}
    in the paper). Default is [1., 0., ..., 0.].
    For possible values see the description of ``icaweights``.
  icaweights
    weighting factors for the covariance matrices relative to the
    ICA part of the objective function (called \kappa_{ICA}^{\tau}
    in the paper). Default is 1. Possible values are:

    - an integer ``n``: all matrices are weighted the same
      (note that it does not make sense to have ``n != 1``)
    - a list or array of floats with ``len == len(lags)``: each
      element is used to weight the corresponding matrix
    - ``None``: use the default values.
  whitened
    ``True`` if the input data is already white, ``False`` otherwise
    (in which case the data will be whitened internally).
  white_comp
    if ``whitened`` is ``False``, you can set ``white_comp`` to the
    number of whitened components to keep during the calculation
    (i.e., the input dimensions are reduced to ``white_comp`` by
    keeping the components of largest variance).
  white_parm
    a dictionary with additional parameters for whitening. It is
    passed directly to the ``WhiteningNode`` constructor.
    Example: ``white_parm = {'svd': True}``.
  eps_contrast
    convergence is achieved when the relative improvement in the
    contrast falls below this threshold. Values in the range
    [1E-4, 1E-10] are usually reasonable.
  max_iter
    if the algorithm does not achieve convergence within ``max_iter``
    iterations, an exception is raised. Should be larger than 100.
  RP
    starting rotation-permutation matrix. It is an
    ``input_dim x input_dim`` matrix used to initially rotate the
    input components. If not set, the identity matrix is used. In
    the paper this is used to start the algorithm at the SFA
    solution (which is often quite near to the optimum).
  verbose
    print progress information during convergence. This can slow
    down the algorithm, but it is the only way to see the rate of
    improvement and to immediately spot if something is going wrong.
  output_dim
    sets the number of independent components to be extracted. Note
    that if this is not smaller than ``input_dim``, the problem can
    be solved linearly and plain SFA would give the same solution,
    only much faster.
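The ``whitened``/``white_comp`` parameters refer to standard PCA whitening. As a rough sketch of what "reduce to ``white_comp`` components of largest variance and whiten" means (numpy and the helper name ``whiten`` are assumptions of this example; the node itself delegates to ``WhiteningNode``)::

```python
import numpy as np

def whiten(x, n_comp):
    """PCA-whiten x, keeping the n_comp components of largest variance.
    Rows are observations, columns are variables."""
    x = x - x.mean(axis=0)
    cov = x.T @ x / (len(x) - 1)
    eigval, eigvec = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(eigval)[::-1][:n_comp]     # largest-variance first
    w = eigvec[:, order] / np.sqrt(eigval[order]) # scale to unit variance
    return x @ w

rng = np.random.default_rng(1)
x = rng.standard_normal((2000, 5)) @ rng.standard_normal((5, 5))
xw = whiten(x, 3)
# the covariance of the whitened data is the identity matrix
```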
Return the list of dtypes supported by this node. Floating point types of size greater than or equal to 64 bits are supported.
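As a sketch (numpy assumed; the actual node exposes this through ``supported_dtypes``), the rule "floating point types of at least 64 bits" can be expressed as::

```python
import numpy as np

# keep only floating point dtypes whose size is >= 64 bits (8 bytes)
candidates = [np.dtype(t) for t in ('float32', 'float64', 'longdouble')]
supported = [d for d in candidates if d.kind == 'f' and d.itemsize * 8 >= 64]
```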
Stop the training phase.

If the node is used on large datasets it may be wise to first learn
the covariance matrices, and then tune the parameters until a suitable
parameter set has been found (learning the covariance matrices is the
slowest part in this case). This could be done for example in the
following way (assuming the data is already white)::

    >>> covs = [mdp.utils.DelayCovarianceMatrix(dt, dtype=dtype)
    ...         for dt in lags]
    >>> for block in data:
    ...     for i in range(len(lags)):
    ...         covs[i].update(block)

You can then initialize the ISFANode with the desired parameters,
perform a fake training with some random data to set up the internal
node structure, and then call ``stop_training`` with the stored
covariance matrices. For example::

    >>> isfa = ISFANode(lags, .....)
    >>> x = mdp.numx_rand.random((100, input_dim)).astype(dtype)
    >>> isfa.train(x)
    >>> isfa.stop_training(covs=covs)

This trick has been used in the paper to apply ISFA to surrogate
matrices, i.e. covariance matrices that were not learned on a real
dataset.
Process the data contained in `x`. If the object is still in the training phase, the function `stop_training` will be called first. `x` is a matrix having different variables on different columns and observations on the rows. Subclasses should overwrite `_execute` to implement their execution phase; the docstring of the `_execute` method then overwrites this docstring.
Invert `y`. If the node is invertible, compute the input ``x`` such that ``y = execute(x)``. Subclasses should overwrite `_inverse` to implement their `inverse` function; the docstring of the `_inverse` method then overwrites this docstring.
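For the rotation-permutation filter described earlier, forward and inverse mappings are particularly simple: an orthogonal matrix is inverted by its transpose. A minimal numpy sketch (numpy and the variable names are assumptions of this example; a random orthogonal matrix stands in for the learned filter)::

```python
import numpy as np

rng = np.random.default_rng(2)
# random orthogonal matrix as a stand-in for the rotation-permutation filter
rp, _ = np.linalg.qr(rng.standard_normal((4, 4)))

x = rng.standard_normal((10, 4))   # observations on rows
y = x @ rp                         # forward mapping (execute)
x_rec = y @ rp.T                   # inverse: the transpose undoes the rotation
```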
Update the internal structures according to the input data `x`. `x` is a matrix having different variables on different columns and observations on the rows. Subclasses should overwrite `_train` to implement their training phase; the docstring of the `_train` method then overwrites this docstring. Note: a subclass supporting multiple training phases should implement the *same* signature for all the training phases and document the meaning of the arguments in the `_train` docstring. Having consistent signatures is a requirement for using the node in a flow.
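Blockwise training (and the ``DelayCovarianceMatrix.update`` trick shown for ``stop_training``) amounts to accumulating sufficient statistics over data blocks. A minimal numpy sketch of such incremental accumulation (the class and names are illustrative, not the mdp API)::

```python
import numpy as np

class RunningCov:
    """Accumulate a covariance matrix incrementally over data blocks."""
    def __init__(self, dim):
        self.sum = np.zeros(dim)            # running sum of observations
        self.outer = np.zeros((dim, dim))   # running sum of outer products
        self.n = 0

    def update(self, block):
        self.sum += block.sum(axis=0)
        self.outer += block.T @ block
        self.n += len(block)

    def cov(self):
        mean = self.sum / self.n
        return self.outer / self.n - np.outer(mean, mean)

rng = np.random.default_rng(3)
data = rng.standard_normal((1000, 3))
rc = RunningCov(3)
for block in np.array_split(data, 10):      # feed the data block by block
    rc.update(block)
# the accumulated result matches the batch covariance of the full dataset
```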
Generated by Epydoc 3.0.1 on Thu Mar 10 15:27:38 2016 (http://epydoc.sourceforge.net)