Automl

Package Contents

Classes Summary

AutoMLSearch

Automated Pipeline search.

EngineBase

Helper class that provides a standard way to create an ABC using inheritance.

SequentialEngine

The default engine for the AutoML search. Trains and scores pipelines locally and sequentially.

Functions

get_default_primary_search_objective

Get the default primary search objective for a problem type.

make_data_splitter

Given the training data and ML problem parameters, compute a data splitting method to use during AutoML search.

search

Given data and configuration, run an automl search.

search_iterative

Given data and configuration, run an automl search.

tune_binary_threshold

Tunes the threshold of a binary pipeline using the X and y thresholding data.

Contents

class evalml.automl.AutoMLSearch(X_train=None, y_train=None, problem_type=None, objective='auto', max_iterations=None, max_time=None, patience=None, tolerance=None, data_splitter=None, allowed_component_graphs=None, allowed_model_families=None, start_iteration_callback=None, add_result_callback=None, error_callback=None, additional_objectives=None, alternate_thresholding_objective='F1', random_seed=0, n_jobs=-1, tuner_class=None, optimize_thresholds=True, ensembling=False, max_batches=None, problem_configuration=None, train_best_pipeline=True, pipeline_parameters=None, custom_hyperparameters=None, sampler_method='auto', sampler_balanced_ratio=0.25, _ensembling_split_size=0.2, _pipelines_per_batch=5, _automl_algorithm='iterative', engine='sequential', verbose=False)[source]

Automated Pipeline search.

Parameters
  • X_train (pd.DataFrame) – The input training data of shape [n_samples, n_features]. Required.

  • y_train (pd.Series) – The target training data of length [n_samples]. Required for supervised learning tasks.

  • problem_type (str or ProblemTypes) – Type of supervised learning problem. See evalml.problem_types.ProblemType.all_problem_types for a full list.

  • objective (str, ObjectiveBase) –

    The objective to optimize for. Used to propose and rank pipelines, but not for optimizing each pipeline during fit-time. When set to ‘auto’, chooses:

    • LogLossBinary for binary classification problems,

    • LogLossMulticlass for multiclass classification problems, and

    • R2 for regression problems.

  • max_iterations (int) – Maximum number of iterations to search. If neither max_iterations nor max_time is set, max_iterations defaults to 5.

  • max_time (int, str) – Maximum time to search for pipelines. This will not start a new pipeline search after the duration has elapsed. If it is an integer, then the time will be in seconds. For strings, time can be specified as seconds, minutes, or hours.

  • patience (int) – Number of iterations without improvement to stop search early. Must be positive. If None, early stopping is disabled. Defaults to None.

  • tolerance (float) – Minimum percentage difference to qualify as score improvement for early stopping. Only applicable if patience is not None. Defaults to None.

  • allowed_component_graphs (dict) –

    A dictionary of lists or ComponentGraphs indicating the component graphs allowed in the search. The format should follow { “Name_0”: [list_of_components], “Name_1”: [ComponentGraph(…)] }

    The default of None indicates all pipeline component graphs for this problem type are allowed. Setting this field will cause allowed_model_families to be ignored.

    e.g. allowed_component_graphs = { “My_Graph”: [“Imputer”, “One Hot Encoder”, “Random Forest Classifier”] }

  • allowed_model_families (list(str, ModelFamily)) – The model families to search. The default of None searches over all model families. Run evalml.pipelines.components.utils.allowed_model_families(“binary”) to see options. Change binary to multiclass or regression depending on the problem type. Note that if allowed_component_graphs is provided, this parameter will be ignored.

  • data_splitter (sklearn.model_selection.BaseCrossValidator) – Data splitting method to use. Defaults to StratifiedKFold.

  • tuner_class – The tuner class to use. Defaults to SKOptTuner.

  • optimize_thresholds (bool) – Whether or not to optimize the binary pipeline threshold. Defaults to True.

  • start_iteration_callback (callable) – Function called before each pipeline training iteration. Callback function takes two positional parameters: the pipeline instance and the AutoMLSearch object.

  • add_result_callback (callable) – Function called after each pipeline training iteration. Callback function takes three positional parameters: A dictionary containing the training results for the new pipeline, an untrained_pipeline containing the parameters used during training, and the AutoMLSearch object.

  • error_callback (callable) – Function called when search() errors and raises an Exception. Callback function takes three positional parameters: the Exception raised, the traceback, and the AutoMLSearch object. Must also accept kwargs so that AutoMLSearch can pass along other appropriate parameters by default. Defaults to None, which will call log_error_callback.

  • additional_objectives (list) – Custom set of objectives to score on. Will override default objectives for problem type if not empty.

  • alternate_thresholding_objective (str) – The objective to use for thresholding binary classification pipelines if the main objective provided isn’t tuneable. Defaults to F1.

  • random_seed (int) – Seed for the random number generator. Defaults to 0.

  • n_jobs (int or None) – Integer describing the level of parallelism used for pipelines. None and 1 are equivalent. If set to -1, all CPUs are used. For n_jobs below -1, (n_cpus + 1 + n_jobs) are used.

  • ensembling (boolean) – If True, runs ensembling in a separate batch after every allowed pipeline class has been iterated over. If the number of unique pipelines to search over per batch is one, ensembling will not run. Defaults to False.

  • max_batches (int) – The maximum number of batches of pipelines to search. The max_time and max_iterations parameters take precedence over max_batches when stopping the search.

  • problem_configuration (dict, None) – Additional parameters needed to configure the search. For example, in time series problems, values should be passed in for the date_index, gap, forecast_horizon, and max_delay variables.

  • train_best_pipeline (boolean) – Whether or not to train the best pipeline before returning it. Defaults to True.

  • pipeline_parameters (dict) –

    A dict of the parameters used to initialize a pipeline with. Keys should consist of the component names and values should specify parameter values

    e.g. pipeline_parameters = { ‘Imputer’ : { ‘numeric_impute_strategy’: ‘most_frequent’ } }

  • custom_hyperparameters (dict) –

    A dict of the hyperparameter ranges used to iterate over during search. Keys should consist of the component names and values should specify a singular value or skopt.Space.

    e.g. custom_hyperparameters = { ‘Imputer’ : { ‘numeric_impute_strategy’: Categorical([‘most_frequent’, ‘median’]) } }

  • sampler_method (str) – The data sampling component to use in the pipelines if the problem type is classification and the target balance is smaller than the sampler_balanced_ratio. Either ‘auto’, which will use our preferred sampler for the data, ‘Undersampler’, ‘Oversampler’, or None. Defaults to ‘auto’.

  • sampler_balanced_ratio (float) – The minority:majority class ratio that we consider balanced, so a 1:4 ratio would be equal to 0.25. If the class balance is larger than this provided value, then we will not add a sampler since the data is then considered balanced. Overrides the sampler_ratio of the samplers. Defaults to 0.25.

  • _ensembling_split_size (float) – The amount of the training data we’ll set aside for training ensemble metalearners. Only used when ensembling is True. Must be between 0 and 1, exclusive. Defaults to 0.2.

  • _pipelines_per_batch (int) – The number of pipelines to train for every batch after the first one. The first batch will train a baseline pipeline + one of each pipeline family allowed in the search.

  • _automl_algorithm (str) – The automl algorithm to use. Currently the two choices are ‘iterative’ and ‘default’. Defaults to iterative.

  • engine (EngineBase or str) – The engine instance used to evaluate pipelines. Dask or concurrent.futures engines can also be chosen by providing a string from the list [“sequential”, “cf_threaded”, “cf_process”, “dask_threaded”, “dask_process”]. If a parallel engine is selected this way, the maximum amount of parallelism, as determined by the engine, will be used. Defaults to “sequential”.

  • verbose (boolean) – Whether or not to display semi-real-time updates to stdout while search is running. Defaults to False.
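A minimal usage sketch is shown below; it uses one of EvalML’s demo datasets for illustration, and any pandas DataFrame/Series pair works in its place:

    import evalml
    from evalml.automl import AutoMLSearch

    # Load a small demo dataset for illustration.
    X, y = evalml.demos.load_breast_cancer()

    automl = AutoMLSearch(
        X_train=X,
        y_train=y,
        problem_type="binary",
        objective="auto",   # resolves to Log Loss Binary for binary problems
        max_batches=1,      # keep the search short for the example
        verbose=True,
    )
    automl.search()

    # Scoring results for the best parameter set of each pipeline searched.
    print(automl.rankings)

    # Trained best pipeline (train_best_pipeline defaults to True).
    best_pipeline = automl.best_pipeline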

Methods

add_to_rankings

Fits and evaluates a given pipeline, then adds the results to the automl rankings. Requires that automl search has already been run.

best_pipeline

Returns a trained instance of the best pipeline and parameters found during automl search. If train_best_pipeline is set to False, returns an untrained pipeline instance.

close_engine

Function to explicitly close the engine, client, parallel resources.

describe_pipeline

Describe a pipeline.

full_rankings

Returns a pandas.DataFrame with scoring results from all pipelines searched.

get_pipeline

Given the ID of a pipeline training result, returns an untrained instance of the specified pipeline.

load

Loads AutoML object at file path.

plot

rankings

Returns a pandas.DataFrame with scoring results from the highest-scoring set of parameters used with each pipeline.

results

Class that allows access to a copy of the results from automl_search.

save

Saves AutoML object at file path.

score_pipelines

Score a list of pipelines on the given holdout data.

search

Find the best pipeline for the data set.

train_pipelines

Train a list of pipelines on the training data.

add_to_rankings(self, pipeline)[source]

Fits and evaluates a given pipeline, then adds the results to the automl rankings. Requires that automl search has already been run.

Parameters

pipeline (PipelineBase) – pipeline to train and evaluate.
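For example, continuing from the AutoMLSearch sketch above (the component graph here is illustrative; any PipelineBase instance works):

    from evalml.pipelines import BinaryClassificationPipeline

    custom_pipeline = BinaryClassificationPipeline(
        component_graph=["Imputer", "Random Forest Classifier"]
    )

    # Requires that automl.search() has already been run.
    automl.add_to_rankings(custom_pipeline)
    print(automl.rankings)  # now includes the custom pipeline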

property best_pipeline(self)

Returns a trained instance of the best pipeline and parameters found during automl search. If train_best_pipeline is set to False, returns an untrained pipeline instance.

Returns

A trained instance of the best pipeline and parameters found during automl search. If train_best_pipeline is set to False, returns an untrained pipeline instance.

Return type

PipelineBase

close_engine(self)[source]

Function to explicitly close the engine, client, parallel resources.

describe_pipeline(self, pipeline_id, return_dict=False)[source]

Describe a pipeline.

Parameters
  • pipeline_id (int) – pipeline to describe

  • return_dict (bool) – If True, return dictionary of information about pipeline. Defaults to False.

Returns

Description of specified pipeline. Includes information such as type of pipeline components, problem, training time, cross validation, etc.
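For example, continuing from the sketch above (pipeline ids come from the rankings DataFrame):

    # Print a description of the top-ranked pipeline.
    best_id = automl.rankings.iloc[0]["id"]
    automl.describe_pipeline(best_id)

    # Or capture the same information as a dictionary.
    description = automl.describe_pipeline(best_id, return_dict=True)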

property full_rankings(self)

Returns a pandas.DataFrame with scoring results from all pipelines searched.

get_pipeline(self, pipeline_id)[source]

Given the ID of a pipeline training result, returns an untrained instance of the specified pipeline initialized with the parameters used to train that pipeline during automl search.

Parameters

pipeline_id (int) – pipeline to retrieve

Returns

untrained pipeline instance associated with the provided ID

Return type

PipelineBase
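For example, a searched pipeline can be retrieved and refit on data of your choosing (best_id as in the describe_pipeline sketch above):

    # Untrained copy, initialized with the parameters found during search.
    pipeline = automl.get_pipeline(best_id)
    pipeline.fit(X, y)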

static load(file_path, pickle_type='cloudpickle')[source]

Loads AutoML object at file path.

Parameters
  • file_path (str) – location to find file to load

  • {"pickle" (pickle_type) – the pickling library to use. Currently not used since the standard pickle library can handle cloudpickles.

  • "cloudpickle"} – the pickling library to use. Currently not used since the standard pickle library can handle cloudpickles.

Returns

AutoMLSearch object

property plot(self)
property rankings(self)

Returns a pandas.DataFrame with scoring results from the highest-scoring set of parameters used with each pipeline.

property results(self)

Class that allows access to a copy of the results from automl_search.

Returns a dict containing pipeline_results (a dict with results from each pipeline) and search_order (a list describing the order in which the pipelines were searched).

save(self, file_path, pickle_type='cloudpickle', pickle_protocol=cloudpickle.DEFAULT_PROTOCOL)[source]

Saves AutoML object at file path.

Parameters
  • file_path (str) – location to save file

  • {"pickle" (pickle_type) – the pickling library to use.

  • "cloudpickle"} – the pickling library to use.

  • pickle_protocol (int) – the pickle data stream format.

Returns

None
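A round-trip sketch using save and load together (the file name is arbitrary):

    # Persist the AutoMLSearch object to disk, then restore it later.
    automl.save("automl.pkl")
    restored = AutoMLSearch.load("automl.pkl")
    print(restored.rankings)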

score_pipelines(self, pipelines, X_holdout, y_holdout, objectives)[source]

Score a list of pipelines on the given holdout data.

Parameters
  • pipelines (list(PipelineBase)) – List of pipelines to score.

  • X_holdout (pd.DataFrame) – Holdout features.

  • y_holdout (pd.Series) – Holdout targets for scoring.

  • objectives (list(str), list(ObjectiveBase)) – Objectives used for scoring.

Returns

Dictionary keyed by pipeline name that maps to a dictionary of scores. Note that any pipelines that error out during scoring will not be included in the dictionary, but the exception and stacktrace will be displayed in the log.

Return type

Dict[str, Dict[str, float]]
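For example, assuming X_holdout and y_holdout are held-out data that were not used during the search:

    # Score the (already trained) best pipeline against two objectives.
    scores = automl.score_pipelines(
        [automl.best_pipeline],
        X_holdout,
        y_holdout,
        objectives=["F1", "AUC"],
    )
    print(scores)  # {pipeline_name: {"F1": ..., "AUC": ...}}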

search(self, show_iteration_plot=True)[source]

Find the best pipeline for the data set.

Parameters
  • show_iteration_plot (boolean, True) – Shows an iteration vs. score plot in Jupyter notebook. Disabled by default in non-Jupyter environments.

train_pipelines(self, pipelines)[source]

Train a list of pipelines on the training data.

This can be helpful for training pipelines once the search is complete.

Parameters

pipelines (list(PipelineBase)) – List of pipelines to train.

Returns

Dictionary keyed by pipeline name that maps to the fitted pipeline. Note that any pipelines that error out during training will not be included in the dictionary, but the exception and stacktrace will be displayed in the log.

Return type

Dict[str, PipelineBase]
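For example, several searched pipelines can be retrained on the full training data at once:

    # Retrain the three top-ranked pipelines; returns name -> fitted pipeline.
    pipelines = [automl.get_pipeline(i) for i in automl.rankings["id"][:3]]
    fitted = automl.train_pipelines(pipelines)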

class evalml.automl.EngineBase[source]

Helper class that provides a standard way to create an ABC using inheritance.

Methods

setup_job_log

submit_evaluation_job

Submit job for pipeline evaluation during AutoMLSearch.

submit_scoring_job

Submit job for pipeline scoring.

submit_training_job

Submit job for pipeline training.

static setup_job_log()[source]
abstract submit_evaluation_job(self, automl_config, pipeline, X, y)[source]

Submit job for pipeline evaluation during AutoMLSearch.

abstract submit_scoring_job(self, automl_config, pipeline, X, y, objectives)[source]

Submit job for pipeline scoring.

abstract submit_training_job(self, automl_config, pipeline, X, y)[source]

Submit job for pipeline training.

evalml.automl.get_default_primary_search_objective(problem_type)[source]

Get the default primary search objective for a problem type.

Parameters

problem_type (str or ProblemType) – problem type of interest.

Returns

primary objective instance for the problem type.

Return type

ObjectiveBase
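For example:

    from evalml.automl import get_default_primary_search_objective

    objective = get_default_primary_search_objective("binary")
    print(objective.name)  # "Log Loss Binary"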

evalml.automl.make_data_splitter(X, y, problem_type, problem_configuration=None, n_splits=3, shuffle=True, random_seed=0)[source]

Given the training data and ML problem parameters, compute a data splitting method to use during AutoML search.

Parameters
  • X (pd.DataFrame) – The input training data of shape [n_samples, n_features].

  • y (pd.Series) – The target training data of length [n_samples].

  • problem_type (ProblemType) – The type of machine learning problem.

  • problem_configuration (dict, None) – Additional parameters needed to configure the search. For example, in time series problems, values should be passed in for the date_index, gap, and max_delay variables. Defaults to None.

  • n_splits (int, None) – The number of CV splits, if applicable. Defaults to 3.

  • shuffle (bool) – Whether or not to shuffle the data before splitting, if applicable. Defaults to True.

  • random_seed (int) – Seed for the random number generator. Defaults to 0.

Returns

Data splitting method.

Return type

sklearn.model_selection.BaseCrossValidator
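For example, the returned splitter can be used like any sklearn cross-validator (X and y as in the AutoMLSearch sketch above):

    from evalml.automl import make_data_splitter
    from evalml.problem_types import ProblemTypes

    splitter = make_data_splitter(X, y, ProblemTypes.BINARY, n_splits=3)
    for train_idx, valid_idx in splitter.split(X, y):
        # Standard sklearn-style train/validation index arrays.
        pass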

evalml.automl.search(X_train=None, y_train=None, problem_type=None, objective='auto', mode='fast', max_time=None, patience=None, tolerance=None, problem_configuration=None)[source]

Given data and configuration, run an automl search.

This method will run EvalML’s default suite of data checks. If the data checks produce errors, the data check results will be returned before running the automl search. In that case we recommend you alter your data to address these errors and try again.

This method is provided for convenience. If you’d like more control over when each of these steps is run, consider making calls directly to the various pieces like the data checks and AutoMLSearch, instead of using this method.

Parameters
  • X_train (pd.DataFrame) – The input training data of shape [n_samples, n_features]. Required.

  • y_train (pd.Series) – The target training data of length [n_samples]. Required for supervised learning tasks.

  • problem_type (str or ProblemTypes) – Type of supervised learning problem. See evalml.problem_types.ProblemType.all_problem_types for a full list.

  • objective (str, ObjectiveBase) –

    The objective to optimize for. Used to propose and rank pipelines, but not for optimizing each pipeline during fit-time. When set to ‘auto’, chooses:

    • LogLossBinary for binary classification problems,

    • LogLossMulticlass for multiclass classification problems, and

    • R2 for regression problems.

  • mode (str) – mode for DefaultAlgorithm. There are two modes: fast and long, where fast is a subset of long. Please look at DefaultAlgorithm for more details.

  • max_time (int, str) – Maximum time to search for pipelines. This will not start a new pipeline search after the duration has elapsed. If it is an integer, then the time will be in seconds. For strings, time can be specified as seconds, minutes, or hours.

  • patience (int) – Number of iterations without improvement to stop search early. Must be positive. If None, early stopping is disabled. Defaults to None.

  • tolerance (float) – Minimum percentage difference to qualify as score improvement for early stopping. Only applicable if patience is not None. Defaults to None.

  • problem_configuration (dict) – Additional parameters needed to configure the search. For example, in time series problems, values should be passed in for the date_index, gap, and max_delay variables.

Returns

the automl search object containing pipelines and rankings, and the results from running the data checks. If the data check results contain errors, automl search will not be run and an automl search object will not be returned.

Return type

(AutoMLSearch, dict)
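A usage sketch; this assumes that when the data checks produce errors, no search object is returned in the tuple:

    from evalml.automl import search

    automl, data_check_results = search(
        X_train=X, y_train=y, problem_type="binary", mode="fast"
    )
    if automl is None:
        # Data checks produced errors; address them and try again.
        print(data_check_results)
    else:
        print(automl.rankings)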

evalml.automl.search_iterative(X_train=None, y_train=None, problem_type=None, objective='auto', problem_configuration=None, **kwargs)[source]

Given data and configuration, run an automl search.

This method will run EvalML’s default suite of data checks. If the data checks produce errors, the data check results will be returned before running the automl search. In that case we recommend you alter your data to address these errors and try again.

This method is provided for convenience. If you’d like more control over when each of these steps is run, consider making calls directly to the various pieces like the data checks and AutoMLSearch, instead of using this method.

Parameters
  • X_train (pd.DataFrame) – The input training data of shape [n_samples, n_features]. Required.

  • y_train (pd.Series) – The target training data of length [n_samples]. Required for supervised learning tasks.

  • problem_type (str or ProblemTypes) – Type of supervised learning problem. See evalml.problem_types.ProblemType.all_problem_types for a full list.

  • objective (str, ObjectiveBase) –

    The objective to optimize for. Used to propose and rank pipelines, but not for optimizing each pipeline during fit-time. When set to ‘auto’, chooses:

    • LogLossBinary for binary classification problems,

    • LogLossMulticlass for multiclass classification problems, and

    • R2 for regression problems.

  • problem_configuration (dict) – Additional parameters needed to configure the search. For example, in time series problems, values should be passed in for the date_index, gap, forecast_horizon, and max_delay variables.

Other keyword arguments which are provided will be passed to AutoMLSearch.

Returns

the automl search object containing pipelines and rankings, and the results from running the data checks. If the data check results contain errors, automl search will not be run and an automl search object will not be returned.

Return type

(AutoMLSearch, dict)

class evalml.automl.SequentialEngine[source]

The default engine for the AutoML search. Trains and scores pipelines locally and sequentially.

Methods

close

setup_job_log

submit_evaluation_job

Submit job for pipeline evaluation during AutoMLSearch.

submit_scoring_job

Submit job for pipeline scoring.

submit_training_job

Submit job for pipeline training.

close(self)[source]
static setup_job_log()
submit_evaluation_job(self, automl_config, pipeline, X, y)[source]

Submit job for pipeline evaluation during AutoMLSearch.

submit_scoring_job(self, automl_config, pipeline, X, y, objectives)[source]

Submit job for pipeline scoring.

submit_training_job(self, automl_config, pipeline, X, y)[source]

Submit job for pipeline training.

evalml.automl.tune_binary_threshold(pipeline, objective, problem_type, X_threshold_tuning, y_threshold_tuning)[source]

Tunes the threshold of a binary pipeline using the X and y thresholding data.

Parameters
  • pipeline (Pipeline) – Pipeline instance to threshold.

  • objective (ObjectiveBase) – The objective we want to tune with. If not tuneable and best_pipeline is True, will use F1.

  • problem_type (ProblemType) – The problem type of the pipeline.

  • X_threshold_tuning (pd.DataFrame) – Features to tune pipeline to.

  • y_threshold_tuning (pd.Series) – Target data to tune pipeline to.
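A usage sketch, assuming pipeline is a fitted binary classification pipeline and X_tune / y_tune are held-out thresholding data; this also assumes the tuned threshold is set on the pipeline in place:

    from evalml.automl import tune_binary_threshold
    from evalml.objectives import F1
    from evalml.problem_types import ProblemTypes

    tune_binary_threshold(pipeline, F1(), ProblemTypes.BINARY, X_tune, y_tune)
    print(pipeline.threshold)  # the tuned decision threshold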