Automl
=======================

.. py:module:: evalml.automl

.. autoapi-nested-parse::

   AutoMLSearch and related modules.


Subpackages
-----------
.. toctree::
   :titlesonly:
   :maxdepth: 3

   automl_algorithm/index.rst
   engine/index.rst


Submodules
----------
.. toctree::
   :titlesonly:
   :maxdepth: 1

   automl_search/index.rst
   callbacks/index.rst
   pipeline_search_plots/index.rst
   progress/index.rst
   utils/index.rst


Package Contents
----------------

Classes Summary
~~~~~~~~~~~~~~~

.. autoapisummary::

   evalml.automl.AutoMLSearch
   evalml.automl.EngineBase
   evalml.automl.Progress
   evalml.automl.SequentialEngine


Functions
~~~~~~~~~

.. autoapisummary::
   :nosignatures:

   evalml.automl.get_default_primary_search_objective
   evalml.automl.get_threshold_tuning_info
   evalml.automl.make_data_splitter
   evalml.automl.resplit_training_data
   evalml.automl.search
   evalml.automl.search_iterative
   evalml.automl.tune_binary_threshold


Contents
~~~~~~~~~~~~~~~~~~~

.. py:class:: AutoMLSearch(X_train=None, y_train=None, X_holdout=None, y_holdout=None, problem_type=None, objective='auto', max_iterations=None, max_time=None, patience=None, tolerance=None, data_splitter=None, allowed_component_graphs=None, allowed_model_families=None, features=None, start_iteration_callback=None, add_result_callback=None, error_callback=None, additional_objectives=None, alternate_thresholding_objective='F1', random_seed=0, n_jobs=-1, tuner_class=None, optimize_thresholds=True, ensembling=False, max_batches=None, problem_configuration=None, train_best_pipeline=True, search_parameters=None, sampler_method='auto', sampler_balanced_ratio=0.25, allow_long_running_models=False, _pipelines_per_batch=5, automl_algorithm='default', engine='sequential', verbose=False, timing=False, exclude_featurizers=None, holdout_set_size=0)

   Automated pipeline search.

   :param X_train: The input training data of shape [n_samples, n_features]. Required.
   :type X_train: pd.DataFrame
   :param y_train: The target training data of length [n_samples]. Required for supervised learning tasks.
   :type y_train: pd.Series
   :param X_holdout: The input holdout data of shape [n_samples, n_features].
   :type X_holdout: pd.DataFrame
   :param y_holdout: The target holdout data of length [n_samples].
   :type y_holdout: pd.Series
   :param problem_type: Type of supervised learning problem. See evalml.problem_types.ProblemType.all_problem_types for a full list.
   :type problem_type: str or ProblemTypes
   :param objective: The objective to optimize for. Used to propose and rank pipelines, but not for optimizing each pipeline during fit-time. When set to 'auto', chooses:

      - LogLossBinary for binary classification problems,
      - LogLossMulticlass for multiclass classification problems, and
      - R2 for regression problems.
   :type objective: str, ObjectiveBase
   :param max_iterations: Maximum number of iterations to search. If neither max_iterations nor max_time is set, max_iterations defaults to 5.
   :type max_iterations: int
   :param max_time: Maximum time to search for pipelines. No new pipeline search will be started after this duration has elapsed. If an integer, the time is in seconds. For strings, time can be specified in seconds, minutes, or hours.
   :type max_time: int, str
   :param patience: Number of iterations without improvement before stopping the search early. Must be positive. If None, early stopping is disabled. Defaults to None.
   :type patience: int
   :param tolerance: Minimum percentage difference to qualify as score improvement for early stopping. Only applicable if patience is not None. Defaults to None.

   :type tolerance: float
   :param allowed_component_graphs: A dictionary of lists or ComponentGraphs indicating the component graphs allowed in the search. The format should follow { "Name_0": [list_of_components], "Name_1": ComponentGraph(...) }. The default of None indicates all pipeline component graphs for this problem type are allowed. Setting this field will cause allowed_model_families to be ignored.

      e.g. allowed_component_graphs = { "My_Graph": ["Imputer", "One Hot Encoder", "Random Forest Classifier"] }
   :type allowed_component_graphs: dict
   :param allowed_model_families: The model families to search. The default of None searches over all model families. Run evalml.pipelines.components.utils.allowed_model_families("binary") to see options. Change `binary` to `multiclass` or `regression` depending on the problem type. Note that if allowed_pipelines is provided, this parameter will be ignored.
   :type allowed_model_families: list(str, ModelFamily)
   :param features: List of features to run DFS on in AutoML pipelines. Defaults to None. Features will only be computed if the columns used by the feature exist in the search input and if the feature itself is not in the search input. If features is an empty list, the DFS Transformer will not be included in pipelines.
   :type features: list
   :param data_splitter: Data splitting method to use. Defaults to StratifiedKFold.
   :type data_splitter: sklearn.model_selection.BaseCrossValidator
   :param tuner_class: The tuner class to use. Defaults to SKOptTuner.
   :param optimize_thresholds: Whether or not to optimize the binary pipeline threshold. Defaults to True.
   :type optimize_thresholds: bool
   :param start_iteration_callback: Function called before each pipeline training iteration. Callback function takes two positional parameters: the pipeline instance and the AutoMLSearch object.
   :type start_iteration_callback: callable
   :param add_result_callback: Function called after each pipeline training iteration. Callback function takes three positional parameters: a dictionary containing the training results for the new pipeline, an untrained_pipeline containing the parameters used during training, and the AutoMLSearch object.
   :type add_result_callback: callable
   :param error_callback: Function called when `search()` errors and raises an Exception. Callback function takes three positional parameters: the Exception raised, the traceback, and the AutoMLSearch object. Must also accept kwargs, so AutoMLSearch is able to pass along other appropriate parameters by default. Defaults to None, which will call `log_error_callback`.
   :type error_callback: callable
   :param additional_objectives: Custom set of objectives to score on. Will override default objectives for the problem type if not empty.
   :type additional_objectives: list
   :param alternate_thresholding_objective: The objective to use for thresholding binary classification pipelines if the main objective provided isn't tuneable. Defaults to F1.
   :type alternate_thresholding_objective: str
   :param random_seed: Seed for the random number generator. Defaults to 0.
   :type random_seed: int
   :param n_jobs: Integer describing the level of parallelism used for pipelines. None and 1 are equivalent. If set to -1, all CPUs are used. For n_jobs below -1, (n_cpus + 1 + n_jobs) are used.
   :type n_jobs: int or None
   :param ensembling: If True, runs ensembling in a separate batch after every allowed pipeline class has been iterated over. If the number of unique pipelines to search over per batch is one, ensembling will not run. Defaults to False.

   :type ensembling: boolean
   :param max_batches: The maximum number of batches of pipelines to search. The max_time and max_iterations parameters take precedence over max_batches for stopping the search.
   :type max_batches: int
   :param problem_configuration: Additional parameters needed to configure the search. For example, in time series problems, values should be passed in for the time_index, gap, forecast_horizon, and max_delay variables.
   :type problem_configuration: dict, None
   :param train_best_pipeline: Whether or not to train the best pipeline before returning it. Defaults to True.
   :type train_best_pipeline: boolean
   :param search_parameters: A dict of the hyperparameter ranges or pipeline parameters used to iterate over during search. Keys should consist of the component names, and values should specify a singular value/list for pipeline parameters or skopt.Space for hyperparameter ranges. In the example below, the Imputer parameters would be passed to the hyperparameter ranges, and the Label Encoder parameters would be used as the component parameter.

      e.g. search_parameters = { 'Imputer' : { 'numeric_impute_strategy': Categorical(['most_frequent', 'median']) }, 'Label Encoder': {'positive_label': True} }
   :type search_parameters: dict
   :param sampler_method: The data sampling component to use in the pipelines if the problem type is classification and the target balance is smaller than the sampler_balanced_ratio. Either 'auto', which will use our preferred sampler for the data, 'Undersampler', 'Oversampler', or None. Defaults to 'auto'.
   :type sampler_method: str
   :param sampler_balanced_ratio: The minority:majority class ratio that we consider balanced, so a 1:4 ratio would be equal to 0.25. If the class balance is larger than this provided value, then we will not add a sampler since the data is then considered balanced. Overrides the `sampler_ratio` of the samplers. Defaults to 0.25.
   :type sampler_balanced_ratio: float
   :param allow_long_running_models: Whether or not to allow longer-running models for large multiclass problems. If False and no pipelines, component graphs, or model families are provided, AutoMLSearch will not use Elastic Net or XGBoost when there are more than 75 multiclass targets and will not use CatBoost when there are more than 150 multiclass targets. Defaults to False.
   :type allow_long_running_models: bool
   :param _ensembling_split_size: The amount of the training data we'll set aside for training ensemble metalearners. Only used when ensembling is True. Must be between 0 and 1, exclusive. Defaults to 0.2.
   :type _ensembling_split_size: float
   :param _pipelines_per_batch: The number of pipelines to train for every batch after the first one. The first batch will train a baseline pipeline + one of each pipeline family allowed in the search.
   :type _pipelines_per_batch: int
   :param automl_algorithm: The automl algorithm to use. Currently the two choices are 'iterative' and 'default'. Defaults to 'default'.
   :type automl_algorithm: str
   :param engine: The engine instance used to evaluate pipelines. Dask or concurrent.futures engines can also be chosen by providing a string from the list ["sequential", "cf_threaded", "cf_process", "dask_threaded", "dask_process"]. If a parallel engine is selected this way, the maximum amount of parallelism, as determined by the engine, will be used. Defaults to "sequential".
   :type engine: EngineBase or str
   :param verbose: Whether or not to display semi-real-time updates to stdout while search is running. Defaults to False.

   :type verbose: boolean
   :param timing: Whether or not to write pipeline search times to the logger. Defaults to False.
   :type timing: boolean
   :param exclude_featurizers: A list of featurizer components to exclude from the pipelines built by search. Valid options are "DatetimeFeaturizer", "EmailFeaturizer", "URLFeaturizer", "NaturalLanguageFeaturizer", and "TimeSeriesFeaturizer".
   :type exclude_featurizers: list[str]
   :param holdout_set_size: The size of the holdout set that AutoML search will take for datasets larger than 500 rows. If set to 0, no holdout set will be taken, regardless of the number of rows. Must be between 0 and 1, exclusive. Defaults to 0.1.
   :type holdout_set_size: float

   **Methods**

   .. autoapisummary::
      :nosignatures:

      evalml.automl.AutoMLSearch.add_to_rankings
      evalml.automl.AutoMLSearch.best_pipeline
      evalml.automl.AutoMLSearch.close_engine
      evalml.automl.AutoMLSearch.describe_pipeline
      evalml.automl.AutoMLSearch.full_rankings
      evalml.automl.AutoMLSearch.get_ensembler_input_pipelines
      evalml.automl.AutoMLSearch.get_pipeline
      evalml.automl.AutoMLSearch.load
      evalml.automl.AutoMLSearch.plot
      evalml.automl.AutoMLSearch.rankings
      evalml.automl.AutoMLSearch.results
      evalml.automl.AutoMLSearch.save
      evalml.automl.AutoMLSearch.score_pipelines
      evalml.automl.AutoMLSearch.search
      evalml.automl.AutoMLSearch.train_pipelines

   .. py:method:: add_to_rankings(self, pipeline)

      Fits and evaluates a given pipeline, then adds the results to the automl rankings. Requires that automl search has been run.

      :param pipeline: Pipeline to train and evaluate.
      :type pipeline: PipelineBase

   .. py:method:: best_pipeline(self)
      :property:

      Returns a trained instance of the best pipeline and parameters found during automl search. If `train_best_pipeline` is set to False, returns an untrained pipeline instance.

      :returns: A trained instance of the best pipeline and parameters found during automl search. If `train_best_pipeline` is set to False, returns an untrained pipeline instance.
      :rtype: PipelineBase
      :raises PipelineNotFoundError: If this is called before .search() is called.

   .. py:method:: close_engine(self)

      Function to explicitly close the engine, client, and parallel resources.

   .. py:method:: describe_pipeline(self, pipeline_id, return_dict=False)

      Describe a pipeline.

      :param pipeline_id: Pipeline to describe.
      :type pipeline_id: int
      :param return_dict: If True, return a dictionary of information about the pipeline. Defaults to False.
      :type return_dict: bool

      :returns: Description of the specified pipeline. Includes information such as type of pipeline components, problem, training time, cross validation, etc.
      :raises PipelineNotFoundError: If pipeline_id is not a valid ID.

   .. py:method:: full_rankings(self)
      :property:

      Returns a pandas.DataFrame with scoring results from all pipelines searched.

   .. py:method:: get_ensembler_input_pipelines(self, ensemble_pipeline_id)

      Returns a list of input pipeline IDs given an ensembler pipeline ID.

      :param ensemble_pipeline_id: Ensemble pipeline ID to get input pipeline IDs from.
      :type ensemble_pipeline_id: id
      :returns: A list of ensemble input pipeline IDs.
      :rtype: list[int]
      :raises ValueError: If `ensemble_pipeline_id` does not correspond to a valid ensemble pipeline ID.

   .. py:method:: get_pipeline(self, pipeline_id)

      Given the ID of a pipeline training result, returns an untrained instance of the specified pipeline initialized with the parameters used to train that pipeline during automl search.

      :param pipeline_id: Pipeline to retrieve.
      :type pipeline_id: int
      :returns: Untrained pipeline instance associated with the provided ID.
      :rtype: PipelineBase
      :raises PipelineNotFoundError: If pipeline_id is not a valid ID.

   .. py:method:: load(file_path, pickle_type='cloudpickle')
      :staticmethod:

      Loads an AutoML object from the file path.

      :param file_path: Location of the file to load.
      :type file_path: str
      :param pickle_type: The pickling library to use. Currently not used since the standard pickle library can handle cloudpickles.
      :type pickle_type: {"pickle", "cloudpickle"}

      :returns: AutoSearchBase object

   .. py:method:: plot(self)
      :property:

      Return an instance of the plot with the latest scores.

   .. py:method:: rankings(self)
      :property:

      Returns a pandas.DataFrame with scoring results from the highest-scoring set of parameters used with each pipeline.

   .. py:method:: results(self)
      :property:

      Class that allows access to a copy of the results from `automl_search`.

      :returns: Dictionary containing `pipeline_results`, a dict with results from each pipeline, and `search_order`, a list describing the order the pipelines were searched.
      :rtype: dict

   .. py:method:: save(self, file_path, pickle_type='cloudpickle', pickle_protocol=cloudpickle.DEFAULT_PROTOCOL)

      Saves the AutoML object at the file path.

      :param file_path: Location to save the file.
      :type file_path: str
      :param pickle_type: The pickling library to use.
      :type pickle_type: {"pickle", "cloudpickle"}
      :param pickle_protocol: The pickle data stream format.
      :type pickle_protocol: int

      :raises ValueError: If pickle_type is not "pickle" or "cloudpickle".

   .. py:method:: score_pipelines(self, pipelines, X_holdout, y_holdout, objectives)

      Score a list of pipelines on the given holdout data.

      :param pipelines: List of pipelines to score.
      :type pipelines: list[PipelineBase]
      :param X_holdout: Holdout features.
      :type X_holdout: pd.DataFrame
      :param y_holdout: Holdout targets for scoring.
      :type y_holdout: pd.Series
      :param objectives: Objectives used for scoring.
      :type objectives: list[str], list[ObjectiveBase]

      :returns: Dictionary keyed by pipeline name that maps to a dictionary of scores. Note that any pipelines that error out during scoring will not be included in the dictionary, but the exception and stacktrace will be displayed in the log.
      :rtype: dict[str, Dict[str, float]]

   .. py:method:: search(self, interactive_plot=True)

      Find the best pipeline for the data set.

      :param interactive_plot: Shows an iteration vs. score plot in Jupyter notebook. Disabled by default in non-Jupyter environments.
      :type interactive_plot: boolean, True

      :raises AutoMLSearchException: If all pipelines in the current AutoML batch produced a score of np.nan on the primary objective.

      :returns: Dictionary keyed by batch number that maps to the timings for pipelines run in that batch, as well as the total time for each batch. Pipelines within a batch are labeled by pipeline name.
      :rtype: Dict[int, Dict[str, Timestamp]]

   .. py:method:: train_pipelines(self, pipelines)

      Train a list of pipelines on the training data.

      This can be helpful for training pipelines once the search is complete.

      :param pipelines: List of pipelines to train.
      :type pipelines: list[PipelineBase]

      :returns: Dictionary keyed by pipeline name that maps to the fitted pipeline. Note that any pipelines that error out during training will not be included in the dictionary, but the exception and stacktrace will be displayed in the log.
      :rtype: Dict[str, PipelineBase]

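A minimal usage sketch of AutoMLSearch, not part of the generated reference: the column names, toy data, and the choice of ``max_batches=1`` below are illustrative only.

.. code-block:: python

   import pandas as pd
   from evalml.automl import AutoMLSearch

   # Hypothetical binary-classification data; any pd.DataFrame / pd.Series pair works.
   X_train = pd.DataFrame({"age": [25, 32, 47, 51, 62, 23, 44, 36],
                           "income": [40, 60, 80, 75, 90, 35, 65, 55]})
   y_train = pd.Series([0, 0, 1, 1, 1, 0, 1, 0])

   # objective="auto" selects LogLossBinary for binary problems (see above).
   automl = AutoMLSearch(
       X_train=X_train,
       y_train=y_train,
       problem_type="binary",
       objective="auto",
       max_batches=1,  # keep the sketch fast; see max_time / max_iterations above
   )
   automl.search()

   # rankings is a pandas.DataFrame; best_pipeline is trained because
   # train_best_pipeline defaults to True.
   print(automl.rankings.head())
   best = automl.best_pipeline
   print(best.predict(X_train))
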
.. py:class:: EngineBase

   Base class for EvalML engines.

   **Methods**

   .. autoapisummary::
      :nosignatures:

      evalml.automl.EngineBase.setup_job_log
      evalml.automl.EngineBase.submit_evaluation_job
      evalml.automl.EngineBase.submit_scoring_job
      evalml.automl.EngineBase.submit_training_job

   .. py:method:: setup_job_log()
      :staticmethod:

      Set up logger for job.

   .. py:method:: submit_evaluation_job(self, automl_config, pipeline, X, y, X_holdout=None, y_holdout=None)
      :abstractmethod:

      Submit job for pipeline evaluation during AutoMLSearch.

   .. py:method:: submit_scoring_job(self, automl_config, pipeline, X, y, objectives, X_train=None, y_train=None)
      :abstractmethod:

      Submit job for pipeline scoring.

   .. py:method:: submit_training_job(self, automl_config, pipeline, X, y, X_holdout=None, y_holdout=None)
      :abstractmethod:

      Submit job for pipeline training.

.. py:function:: get_default_primary_search_objective(problem_type)

   Get the default primary search objective for a problem type.

   :param problem_type: Problem type of interest.
   :type problem_type: str or ProblemType

   :returns: Primary objective instance for the problem type.
   :rtype: ObjectiveBase

.. py:function:: get_threshold_tuning_info(automl_config, pipeline)

   Determine, for a given automl config and pipeline, what the threshold tuning objective should be and whether or not the training data should be further split to achieve proper threshold tuning.

   Can also be used after automl search has been performed to determine whether the full training data was used to train the pipeline.

   :param automl_config: The AutoMLSearch's config object. Used to determine the threshold tuning objective and whether the data needs resplitting.
   :type automl_config: AutoMLConfig
   :param pipeline: The pipeline instance to threshold.
   :type pipeline: Pipeline

   :returns: threshold_tuning_objective, data_needs_resplitting (str, bool)

.. py:function:: make_data_splitter(X, y, problem_type, problem_configuration=None, n_splits=3, shuffle=True, random_seed=0)

   Given the training data and ML problem parameters, compute a data splitting method to use during AutoML search.

   :param X: The input training data of shape [n_samples, n_features].
   :type X: pd.DataFrame
   :param y: The target training data of length [n_samples].
   :type y: pd.Series
   :param problem_type: The type of machine learning problem.
   :type problem_type: ProblemType
   :param problem_configuration: Additional parameters needed to configure the search. For example, in time series problems, values should be passed in for the time_index, gap, and max_delay variables. Defaults to None.
   :type problem_configuration: dict, None
   :param n_splits: The number of CV splits, if applicable. Defaults to 3.
   :type n_splits: int, None
   :param shuffle: Whether or not to shuffle the data before splitting, if applicable. Defaults to True.
   :type shuffle: bool
   :param random_seed: Seed for the random number generator. Defaults to 0.
   :type random_seed: int

   :returns: Data splitting method.
   :rtype: sklearn.model_selection.BaseCrossValidator
   :raises ValueError: If problem_configuration is not given for a time-series problem.

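A short sketch of inspecting the splitter returned by make_data_splitter; the toy data is illustrative, and the returned object can also be passed back to AutoMLSearch via its data_splitter parameter.

.. code-block:: python

   import pandas as pd
   from evalml.automl import make_data_splitter
   from evalml.problem_types import ProblemTypes

   X = pd.DataFrame({"feature": range(12)})
   y = pd.Series([0, 1] * 6)

   # For small classification datasets this is typically a stratified
   # K-fold splitter with the requested number of splits.
   splitter = make_data_splitter(X, y, ProblemTypes.BINARY, n_splits=3, random_seed=0)
   print(splitter)

   # The result follows the sklearn cross-validator API.
   for train_idx, test_idx in splitter.split(X, y):
       print(len(train_idx), len(test_idx))
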
.. py:class:: Progress(max_time=None, max_batches=None, max_iterations=None, patience=None, tolerance=None, automl_algorithm=None, objective=None, verbose=False)

   Progress object holding stopping criteria and progress information.

   :param max_time: Maximum time to search for pipelines.
   :type max_time: int
   :param max_iterations: Maximum number of iterations to search.
   :type max_iterations: int
   :param max_batches: The maximum number of batches of pipelines to search. The max_time and max_iterations parameters take precedence over max_batches for stopping the search.
   :type max_batches: int
   :param patience: Number of iterations without improvement before stopping the search early.
   :type patience: int
   :param tolerance: Minimum percentage difference to qualify as score improvement for early stopping.
   :type tolerance: float
   :param automl_algorithm: The automl algorithm to use. Used to calculate the number of iterations if max_batches is selected as the stopping criterion.
   :type automl_algorithm: str
   :param objective: The objective used in search.
   :type objective: str, ObjectiveBase
   :param verbose: Whether or not to log out stopping information.
   :type verbose: boolean

   **Methods**

   .. autoapisummary::
      :nosignatures:

      evalml.automl.Progress.elapsed
      evalml.automl.Progress.return_progress
      evalml.automl.Progress.should_continue
      evalml.automl.Progress.start_timing

   .. py:method:: elapsed(self)

      Return time elapsed using the start time and current time.

   .. py:method:: return_progress(self)

      Return information about the current and end state of each stopping criterion, in order of priority.

      :returns: List of dictionaries containing information about each stopping criterion.
      :rtype: List[Dict[str, Any]]

   .. py:method:: should_continue(self, results, interrupted=False, mid_batch=False)

      Given AutoML results, return whether or not the search should continue.

      :param results: AutoMLSearch results.
      :type results: dict
      :param interrupted: Whether AutoMLSearch was given a keyboard interrupt. Defaults to False.
      :type interrupted: bool
      :param mid_batch: Whether this method was called while in the middle of a batch or not. Defaults to False.
      :type mid_batch: bool

      :returns: True if the search should continue, False otherwise.
      :rtype: bool

   .. py:method:: start_timing(self)

      Sets start time to current time.

.. py:function:: resplit_training_data(pipeline, X_train, y_train)

   Further split the training data for a given pipeline. This is needed for binary pipelines in order to properly tune the threshold.

   Can be used after automl search has been performed to recreate the data that was used to train a pipeline.

   :param pipeline: The pipeline whose training data we are splitting.
   :type pipeline: PipelineBase
   :param X_train: Training data of shape [n_samples, n_features].
   :type X_train: pd.DataFrame or np.ndarray
   :param y_train: Training target data of length [n_samples].
   :type y_train: pd.Series or np.ndarray

   :returns: Feature and target data, each split into train and threshold tuning sets.
   :rtype: pd.DataFrame, pd.DataFrame, pd.Series, pd.Series

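A hedged sketch combining get_threshold_tuning_info and resplit_training_data after a search. It assumes ``automl``, ``best``, ``X_train``, and ``y_train`` from the earlier AutoMLSearch example, that the pipeline is binary, and that the AutoMLSearch instance exposes its config object as ``automl_config`` (the structure passed to the engines).

.. code-block:: python

   from evalml.automl import get_threshold_tuning_info, resplit_training_data

   # Determine which objective was used for thresholding and whether the
   # training data was further split before fitting this pipeline.
   objective_name, needs_resplit = get_threshold_tuning_info(
       automl.automl_config, best  # automl_config attribute is an assumption
   )

   if needs_resplit:
       # Recreate the same train / threshold-tuning split used internally;
       # returns features then targets, each split into fit and tuning sets.
       X_fit, X_tune, y_fit, y_tune = resplit_training_data(best, X_train, y_train)
       print(X_fit.shape, X_tune.shape)
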
.. py:function:: search(X_train=None, y_train=None, problem_type=None, objective='auto', mode='fast', max_time=None, patience=None, tolerance=None, problem_configuration=None, n_splits=3, verbose=False, timing=False)

   Given data and configuration, run an automl search.

   This method will run EvalML's default suite of data checks. If the data checks produce errors, the data check results will be returned before running the automl search. In that case we recommend you alter your data to address these errors and try again.

   This method is provided for convenience. If you'd like more control over when each of these steps is run, consider making calls directly to the various pieces like the data checks and AutoMLSearch, instead of using this method.

   :param X_train: The input training data of shape [n_samples, n_features]. Required.
   :type X_train: pd.DataFrame
   :param y_train: The target training data of length [n_samples]. Required for supervised learning tasks.
   :type y_train: pd.Series
   :param problem_type: Type of supervised learning problem. See evalml.problem_types.ProblemType.all_problem_types for a full list.
   :type problem_type: str or ProblemTypes
   :param objective: The objective to optimize for. Used to propose and rank pipelines, but not for optimizing each pipeline during fit-time. When set to 'auto', chooses:

      - LogLossBinary for binary classification problems,
      - LogLossMulticlass for multiclass classification problems, and
      - R2 for regression problems.
   :type objective: str, ObjectiveBase
   :param mode: Mode for DefaultAlgorithm. There are two modes: fast and long, where fast is a subset of long. Please look at DefaultAlgorithm for more details.
   :type mode: str
   :param max_time: Maximum time to search for pipelines. No new pipeline search will be started after this duration has elapsed. If an integer, the time is in seconds. For strings, time can be specified in seconds, minutes, or hours.
   :type max_time: int, str
   :param patience: Number of iterations without improvement before stopping the search early. Must be positive. If None, early stopping is disabled. Defaults to None.
   :type patience: int
   :param tolerance: Minimum percentage difference to qualify as score improvement for early stopping. Only applicable if patience is not None. Defaults to None.
   :type tolerance: float
   :param problem_configuration: Additional parameters needed to configure the search. For example, in time series problems, values should be passed in for the time_index, gap, forecast_horizon, and max_delay variables.
   :type problem_configuration: dict
   :param n_splits: Number of splits to use with the default data splitter.
   :type n_splits: int
   :param verbose: Whether or not to display semi-real-time updates to stdout while search is running. Defaults to False.
   :type verbose: boolean
   :param timing: Whether or not to write pipeline search times to the logger. Defaults to False.
   :type timing: boolean

   :returns: The automl search object containing pipelines and rankings, and the results from running the data checks. If the data check results contain errors, automl search will not be run and an automl search object will not be returned.
   :rtype: (AutoMLSearch, dict)
   :raises ValueError: If the search configuration is not valid.

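A quick sketch of this convenience entry point, reusing the illustrative ``X_train`` / ``y_train`` from the first example:

.. code-block:: python

   from evalml.automl import search

   # Runs the default data checks first; per the docs above, if they produce
   # errors the automl search is not run and no search object is returned.
   automl_search, data_check_results = search(
       X_train=X_train,
       y_train=y_train,
       problem_type="binary",
       mode="fast",
   )
   if automl_search is not None:
       print(automl_search.rankings.head())
   else:
       print(data_check_results)
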
.. py:function:: search_iterative(X_train=None, y_train=None, problem_type=None, objective='auto', problem_configuration=None, n_splits=3, timing=False, **kwargs)

   Given data and configuration, run an automl search.

   This method will run EvalML's default suite of data checks. If the data checks produce errors, the data check results will be returned before running the automl search. In that case we recommend you alter your data to address these errors and try again.

   This method is provided for convenience. If you'd like more control over when each of these steps is run, consider making calls directly to the various pieces like the data checks and AutoMLSearch, instead of using this method.

   :param X_train: The input training data of shape [n_samples, n_features]. Required.
   :type X_train: pd.DataFrame
   :param y_train: The target training data of length [n_samples]. Required for supervised learning tasks.
   :type y_train: pd.Series
   :param problem_type: Type of supervised learning problem. See evalml.problem_types.ProblemType.all_problem_types for a full list.
   :type problem_type: str or ProblemTypes
   :param objective: The objective to optimize for. Used to propose and rank pipelines, but not for optimizing each pipeline during fit-time. When set to 'auto', chooses:

      - LogLossBinary for binary classification problems,
      - LogLossMulticlass for multiclass classification problems, and
      - R2 for regression problems.
   :type objective: str, ObjectiveBase
   :param problem_configuration: Additional parameters needed to configure the search. For example, in time series problems, values should be passed in for the time_index, gap, forecast_horizon, and max_delay variables.
   :type problem_configuration: dict
   :param n_splits: Number of splits to use with the default data splitter.
   :type n_splits: int
   :param timing: Whether or not to write pipeline search times to the logger. Defaults to False.
   :type timing: boolean
   :param \*\*kwargs: Other keyword arguments which are provided will be passed to AutoMLSearch.

   :returns: The automl search object containing pipelines and rankings, and the results from running the data checks. If the data check results contain errors, automl search will not be run and an automl search object will not be returned.
   :rtype: (AutoMLSearch, dict)
   :raises ValueError: If the search configuration is invalid.

.. py:class:: SequentialEngine

   The default engine for the AutoML search.

   Trains and scores pipelines locally and sequentially.

   **Methods**

   .. autoapisummary::
      :nosignatures:

      evalml.automl.SequentialEngine.close
      evalml.automl.SequentialEngine.setup_job_log
      evalml.automl.SequentialEngine.submit_evaluation_job
      evalml.automl.SequentialEngine.submit_scoring_job
      evalml.automl.SequentialEngine.submit_training_job

   .. py:method:: close(self)

      No-op.

   .. py:method:: setup_job_log()
      :staticmethod:

      Set up logger for job.

   .. py:method:: submit_evaluation_job(self, automl_config, pipeline, X, y, X_holdout=None, y_holdout=None)

      Submit a job to evaluate a pipeline.

      :param automl_config: Structure containing data passed from AutoMLSearch instance.
      :param pipeline: Pipeline to evaluate.
      :type pipeline: pipeline.PipelineBase
      :param X: Input data for modeling.
      :type X: pd.DataFrame
      :param y: Target data for modeling.
      :type y: pd.Series
      :param X_holdout: Holdout input data for holdout scoring.
      :type X_holdout: pd.DataFrame
      :param y_holdout: Holdout target data for holdout scoring.
      :type y_holdout: pd.Series

      :returns: Computation result.
      :rtype: SequentialComputation

   .. py:method:: submit_scoring_job(self, automl_config, pipeline, X, y, objectives, X_train=None, y_train=None)

      Submit a job to score a pipeline.

      :param automl_config: Structure containing data passed from AutoMLSearch instance.
      :param pipeline: Pipeline to score.
      :type pipeline: pipeline.PipelineBase
      :param X: Input data for modeling.
      :type X: pd.DataFrame
      :param y: Target data for modeling.
      :type y: pd.Series
      :param X_train: Training features. Used for feature engineering in time series.
      :type X_train: pd.DataFrame
      :param y_train: Training target. Used for feature engineering in time series.
      :type y_train: pd.Series
      :param objectives: List of objectives to score on.
      :type objectives: list[ObjectiveBase]

      :returns: Computation result.
      :rtype: SequentialComputation

   .. py:method:: submit_training_job(self, automl_config, pipeline, X, y)

      Submit a job to train a pipeline.

      :param automl_config: Structure containing data passed from AutoMLSearch instance.
      :param pipeline: Pipeline to train.
      :type pipeline: pipeline.PipelineBase
      :param X: Input data for modeling.
      :type X: pd.DataFrame
      :param y: Target data for modeling.
      :type y: pd.Series

      :returns: Computation result.
      :rtype: SequentialComputation

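Engines are usually selected through AutoMLSearch rather than driven directly; a brief sketch of both styles, using only the engine strings documented above and reusing the illustrative training data from the first example:

.. code-block:: python

   from evalml.automl import AutoMLSearch, SequentialEngine

   # Equivalent defaults: pass the string or an explicit engine instance.
   automl_default = AutoMLSearch(
       X_train=X_train, y_train=y_train, problem_type="binary",
       engine="sequential",
   )
   automl_explicit = AutoMLSearch(
       X_train=X_train, y_train=y_train, problem_type="binary",
       engine=SequentialEngine(),
   )

   # Parallel engines can be chosen by string, e.g. "cf_threaded"; the
   # engine then determines its own maximum level of parallelism.
   automl_parallel = AutoMLSearch(
       X_train=X_train, y_train=y_train, problem_type="binary",
       engine="cf_threaded",
   )
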
.. py:function:: tune_binary_threshold(pipeline, objective, problem_type, X_threshold_tuning, y_threshold_tuning, X=None, y=None)

   Tunes the threshold of a binary pipeline to the X and y thresholding data.

   :param pipeline: Pipeline instance to threshold.
   :type pipeline: Pipeline
   :param objective: The objective we want to tune with. If not tuneable and best_pipeline is True, will use F1.
   :type objective: ObjectiveBase
   :param problem_type: The problem type of the pipeline.
   :type problem_type: ProblemType
   :param X_threshold_tuning: Features to which the pipeline will be tuned.
   :type X_threshold_tuning: pd.DataFrame
   :param y_threshold_tuning: Target data to which the pipeline will be tuned.
   :type y_threshold_tuning: pd.Series
   :param X: Features to which the pipeline will be trained (used for time series binary). Defaults to None.
   :type X: pd.DataFrame
   :param y: Target to which the pipeline will be trained (used for time series binary). Defaults to None.
   :type y: pd.Series

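A final hedged sketch tying the thresholding utilities together. The fitted pipeline ``best`` and the tuning split ``X_tune`` / ``y_tune`` are assumed to come from the earlier examples, and the objective lookup uses evalml's get_objective helper.

.. code-block:: python

   from evalml.automl import tune_binary_threshold
   from evalml.objectives import get_objective
   from evalml.problem_types import ProblemTypes

   # Tune the decision threshold of an already-fitted binary pipeline
   # against a held-out threshold-tuning set.
   tune_binary_threshold(
       pipeline=best,
       objective=get_objective("F1", return_instance=True),
       problem_type=ProblemTypes.BINARY,
       X_threshold_tuning=X_tune,
       y_threshold_tuning=y_tune,
   )

   # Binary classification pipelines expose the tuned value on their
   # `threshold` attribute.
   print(best.threshold)
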