FDAFeatureUnion#
- class skfda.preprocessing.feature_construction.FDAFeatureUnion(transformer_list, *, n_jobs=1, transformer_weights=None, verbose=False, array_output=False)#
Concatenates results of multiple functional transformer objects.
This estimator applies a list of transformer objects in parallel to the input data and then concatenates the results (which can be FDataGrid or FDataBasis objects, or plain multivariate data). This is useful to combine several feature extraction mechanisms into a single transformer. Parameters of the transformers may be set using the transformer's name and the parameter name separated by a '__' (see the sketch after the parameter list). A transformer may be replaced entirely by setting the parameter with its name to another transformer, or removed by setting it to 'drop'.
- Parameters:
transformer_list (Sequence[Tuple[str, TransformerMixin[Any, Any, Any]]]) – List of (str, transformer) tuples. The first element of each tuple is the name given to the transformer, while the second element is a scikit-learn transformer instance. The transformer instance can also be "drop", in which case it is ignored.
n_jobs (int) – Number of jobs to run in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. The default value is 1.
transformer_weights (Mapping[str, float] | None) – Multiplicative weights for features per transformer. Keys are transformer names, values the weights. Raises ValueError if a key is not present in transformer_list.
verbose (bool) – If True, the time elapsed while fitting each transformer will be printed as it is completed. By default the value is False.
array_output (bool) – If True, the transformed data is returned as a NumPy array. By default the value is False.
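As a minimal sketch of the construction and nested-parameter conventions described above (the sub-parameter name in the final comment is hypothetical and not taken from this page):

>>> from skfda.preprocessing.dim_reduction.variable_selection import (
...     RecursiveMaximaHunting,
... )
>>> from skfda.preprocessing.feature_construction import (
...     EvaluationTransformer,
...     FDAFeatureUnion,
... )
>>> feature_union = FDAFeatureUnion(
...     [
...         ("rmh", RecursiveMaximaHunting()),
...         ("eval", EvaluationTransformer()),
...     ],
... )
>>> # Sub-transformer parameters are addressed as "<name>__<parameter>",
>>> # e.g. feature_union.set_params(rmh__some_param=value), where
>>> # "some_param" stands in for a real parameter of RecursiveMaximaHunting.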
Examples
First, we will fetch the Berkeley Growth Study data set:
>>> from skfda.datasets import fetch_growth
>>> X, y = fetch_growth(return_X_y=True)
Then we need to import the transformers we want to use. In our case we will use the Recursive Maxima Hunting method to select important features. We will concatenate the original curves, evaluated with an EvaluationTransformer, to the results of the previous method.
>>> from skfda.preprocessing.feature_construction import (
...     FDAFeatureUnion,
... )
>>> from skfda.preprocessing.dim_reduction.variable_selection import (
...     RecursiveMaximaHunting,
... )
>>> from skfda.preprocessing.feature_construction import (
...     EvaluationTransformer,
... )
>>> import numpy as np
Finally we apply fit and transform.
>>> union = FDAFeatureUnion(
...     [
...         ("rmh", RecursiveMaximaHunting()),
...         ("eval", EvaluationTransformer()),
...     ],
...     array_output=True,
... )
>>> np.around(union.fit_transform(X, y), decimals=2)
array([[ 195.1, 141.1, 163.8, ..., 193.8, 194.3, 195.1],
       [ 178.7, 133. , 148.1, ..., 176.1, 177.4, 178.7],
       [ 171.5, 126.5, 143.6, ..., 170.9, 171.2, 171.5],
       ...,
       [ 166.8, 132.8, 152.2, ..., 166. , 166.3, 166.8],
       [ 168.6, 139.4, 161.6, ..., 168.3, 168.4, 168.6],
       [ 169.2, 138.1, 161.7, ..., 168.6, 168.9, 169.2]],
      shape=(93, 37))
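For contrast, a brief sketch of the default array_output=False case, reusing the transformers above; the exact container of the concatenated functional results is not specified on this page (it is assumed here to keep the functional objects rather than flatten them), so the snippet only records its type:

>>> union_fd = FDAFeatureUnion(
...     [
...         ("rmh", RecursiveMaximaHunting()),
...         ("eval", EvaluationTransformer()),
...     ],
... )
>>> X_block = union_fd.fit_transform(X, y)
>>> container_type = type(X_block)  # non-array container holding the results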
Methods
fit(X[, y]) – Fit all transformers using X.
fit_transform(X[, y]) – Fit all transformers, transform the data and concatenate results.
get_feature_names_out([input_features]) – Get output feature names for transformation.
get_metadata_routing() – Get metadata routing of this object.
get_params([deep]) – Get parameters for this estimator.
set_output(*[, transform]) – Set the output container when "transform" and "fit_transform" are called.
set_params(**kwargs) – Set the parameters of this estimator.
transform(X, **params) – Transform X separately by each transformer, concatenate results.
- fit(X, y=None, **fit_params)#
Fit all transformers using X.
- Parameters:
X (iterable or array-like, depending on transformers) – Input data, used to fit transformers.
y (array-like of shape (n_samples, n_outputs), default=None) – Targets for supervised learning.
**fit_params (dict, default=None) –
If enable_metadata_routing=False (default): Parameters directly passed to the fit methods of the sub-transformers.
If enable_metadata_routing=True: Parameters safely routed to the fit methods of the sub-transformers. See Metadata Routing User Guide for more details.
Changed in version 1.5: **fit_params can be routed via metadata routing API.
- Returns:
self – FeatureUnion class instance.
- Return type:
object
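A minimal usage sketch, assuming the union, X and y from the example above; since fit returns the union itself, the call can be chained:

>>> fitted = union.fit(X, y)   # fit returns the union instance
>>> X_t = fitted.transform(X)  # so it can be chained with transform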
- fit_transform(X, y=None, **params)#
Fit all transformers, transform the data and concatenate results.
- Parameters:
X (iterable or array-like, depending on transformers) – Input data to be transformed.
y (array-like of shape (n_samples, n_outputs), default=None) – Targets for supervised learning.
**params (dict, default=None) –
If enable_metadata_routing=False (default): Parameters directly passed to the fit methods of the sub-transformers.
If enable_metadata_routing=True: Parameters safely routed to the fit methods of the sub-transformers. See Metadata Routing User Guide for more details.
Changed in version 1.5: **params can now be routed via metadata routing API.
- Returns:
X_t – The hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers.
- Return type:
array-like or sparse matrix of shape (n_samples, sum_n_components)
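A short sketch reusing the example above: the rows of the stacked output are samples, and the columns are the concatenated outputs of the two transformers.

>>> X_t = union.fit_transform(X, y)
>>> n_samples, sum_n_components = X_t.shape  # (93, 37) in the example above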
- get_feature_names_out(input_features=None)#
Get output feature names for transformation.
- Parameters:
input_features (array-like of str or None, default=None) – Input features.
- Returns:
feature_names_out – Transformed feature names.
- Return type:
ndarray of str objects
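Whether this call succeeds depends on every sub-transformer implementing get_feature_names_out, which is an assumption not stated on this page; the sketch below guards for that case:

>>> try:
...     feature_names = union.get_feature_names_out()
... except AttributeError:
...     feature_names = None  # some sub-transformers may not expose names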
- get_metadata_routing()#
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
Added in version 1.5.
- Returns:
routing – A MetadataRouter encapsulating routing information.
- Return type:
MetadataRouter
- get_params(deep=True)#
Get parameters for this estimator.
Returns the parameters given in the constructor as well as the estimators contained within the transformer_list of the FeatureUnion.
- Parameters:
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
params – Parameter names mapped to their values.
- Return type:
mapping of string to any
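A sketch of the nesting described above, assuming the union from the example: named transformers appear as top-level keys, and their own parameters under "<name>__<param>".

>>> params = union.get_params(deep=True)
>>> rmh = params["rmh"]  # the RecursiveMaximaHunting instance, by name
>>> rmh_params = {k: v for k, v in params.items() if k.startswith("rmh__")}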
- set_output(*, transform=None)#
Set the output container when “transform” and “fit_transform” are called.
set_output will set the output of all estimators in transformer_list.
- Parameters:
transform ({"default", "pandas", "polars"}, default=None) –
Configure output of transform and fit_transform.
"default": Default output format of a transformer
"pandas": DataFrame output
"polars": Polars output
None: Transform configuration is unchanged
- Returns:
self – Estimator instance.
- Return type:
estimator instance
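A one-line sketch, assuming pandas is installed and every sub-transformer supports the scikit-learn set_output API (how this interacts with array_output is not specified here):

>>> union_pandas = union.set_output(transform="pandas")  # returns the estimator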
- set_params(**kwargs)#
Set the parameters of this estimator.
Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in transformer_list.
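A sketch of the behaviours described above, done on a clone so the example union is left untouched: constructor parameters can be set directly, and a transformer can be disabled by setting its name to "drop".

>>> from sklearn.base import clone
>>> union_copy = clone(union)                       # work on a copy
>>> union_copy = union_copy.set_params(n_jobs=2)    # a constructor parameter
>>> union_copy = union_copy.set_params(rmh="drop")  # disable "rmh" by name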
- transform(X, **params)#
Transform X separately by each transformer, concatenate results.
- Parameters:
X (iterable or array-like, depending on transformers) – Input data to be transformed.
**params (dict, default=None) –
Parameters routed to the transform method of the sub-transformers via the metadata routing API. See Metadata Routing User Guide for more details.
Added in version 1.5.
- Returns:
X_t – The hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers.
- Return type:
array-like or sparse matrix of shape (n_samples, sum_n_components)
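A closing sketch assuming the union, X and y from the example above; transform applies each fitted sub-transformer to the given data and stacks the results, so the union must be fitted first:

>>> _ = union.fit(X, y)           # fit first; transform alone does not fit
>>> X_new_t = union.transform(X)  # here the training data stands in for new data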