lightgbm_regressor

Module Contents

Classes Summary

LightGBMRegressor

LightGBM Regressor.

Contents

class evalml.pipelines.components.estimators.regressors.lightgbm_regressor.LightGBMRegressor(boosting_type='gbdt', learning_rate=0.1, n_estimators=20, max_depth=0, num_leaves=31, min_child_samples=20, bagging_fraction=0.9, bagging_freq=0, n_jobs=-1, random_seed=0, **kwargs)[source]

LightGBM Regressor.

Parameters
  • boosting_type (string) – Type of boosting to use. Defaults to "gbdt". "gbdt" uses traditional Gradient Boosting Decision Tree, "dart" uses Dropouts meet Multiple Additive Regression Trees, "goss" uses Gradient-based One-Side Sampling, and "rf" uses Random Forest.

  • learning_rate (float) – Boosting learning rate. Defaults to 0.1.

  • n_estimators (int) – Number of boosted trees to fit. Defaults to 20.

  • max_depth (int) – Maximum tree depth for base learners, <=0 means no limit. Defaults to 0.

  • num_leaves (int) – Maximum tree leaves for base learners. Defaults to 31.

  • min_child_samples (int) – Minimum number of data needed in a child (leaf). Defaults to 20.

  • bagging_fraction (float) – LightGBM will randomly select a subset of the training data (rows) on each iteration (tree) without resampling if this is smaller than 1.0. For example, if set to 0.8, LightGBM will select 80% of the data before training each tree. This can be used to speed up training and deal with overfitting. Defaults to 0.9.

  • bagging_freq (int) – Frequency for bagging. 0 means bagging is disabled; k means perform bagging every k iterations. At every k-th iteration, LightGBM will randomly select bagging_fraction * 100% of the data to use for the next k iterations. Defaults to 0.

  • n_jobs (int or None) – Number of threads to run in parallel. -1 uses all threads. Defaults to -1.

  • random_seed (int) – Seed for the random number generator. Defaults to 0.
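For orientation, a minimal usage sketch follows; it assumes evalml (with its lightgbm dependency) and numpy are installed, and the synthetic data and parameter values are illustrative only, not recommended settings.

    import numpy as np
    from evalml.pipelines.components.estimators.regressors.lightgbm_regressor import LightGBMRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))              # 200 samples, 5 features
    y = 3.0 * X[:, 0] + rng.normal(size=200)   # noisy linear target

    regressor = LightGBMRegressor(learning_rate=0.05, n_estimators=50, random_seed=0)
    regressor.fit(X, y)
    predictions = regressor.predict(X)         # pd.Series of predicted values
    print(predictions.head())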

Attributes

hyperparameter_ranges

{ "learning_rate": Real(0.000001, 1), "boosting_type": ["gbdt", "dart", "goss", "rf"], "n_estimators": Integer(10, 100), "max_depth": Integer(0, 10), "num_leaves": Integer(2, 100), "min_child_samples": Integer(1, 100), "bagging_fraction": Real(0.000001, 1), "bagging_freq": Integer(0, 1) }

model_family

ModelFamily.LIGHTGBM

modifies_features

True

modifies_target

False

name

LightGBM Regressor

predict_uses_y

False

SEED_MAX

SEED_BOUNDS.max_bound

SEED_MIN

0

supported_problem_types

[ProblemTypes.REGRESSION]
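The class-level attributes above can be read directly off the class without instantiating it; a short sketch (expected values per the listing above):

    from evalml.pipelines.components.estimators.regressors.lightgbm_regressor import LightGBMRegressor

    print(LightGBMRegressor.name)                     # LightGBM Regressor
    print(LightGBMRegressor.model_family)             # ModelFamily.LIGHTGBM
    print(LightGBMRegressor.supported_problem_types)  # [ProblemTypes.REGRESSION]
    print(LightGBMRegressor.hyperparameter_ranges)    # search space shown above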

Methods

clone

Constructs a new component with the same parameters and random state.

default_parameters

Returns the default parameters for this component.

describe

Describe a component and its parameters.

feature_importance

Returns importance associated with each feature.

fit

Fits component to data.

load

Loads component at file path.

needs_fitting

Returns boolean determining if component needs fitting before calling predict, predict_proba, transform, or feature_importances.

parameters

Returns the parameters which were used to initialize the component.

predict

Make predictions using selected features.

predict_proba

Make probability estimates for labels.

save

Saves component at file path.

clone(self)

Constructs a new component with the same parameters and random state.

Returns

A new instance of this component with identical parameters and random state.
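A minimal sketch of clone; the parameter values below are arbitrary:

    from evalml.pipelines.components.estimators.regressors.lightgbm_regressor import LightGBMRegressor

    original = LightGBMRegressor(n_estimators=50, random_seed=42)  # arbitrary values
    copy = original.clone()
    # The clone carries identical parameters and random state, but is unfitted.
    assert copy.parameters == original.parameters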

default_parameters(cls)

Returns the default parameters for this component.

Our convention is that Component.default_parameters == Component().parameters.

Returns

default parameters for this component.

Return type

dict
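Since the documented convention is Component.default_parameters == Component().parameters, this can be checked directly; a minimal sketch:

    from evalml.pipelines.components.estimators.regressors.lightgbm_regressor import LightGBMRegressor

    # Class-level defaults match a freshly constructed component's parameters.
    assert LightGBMRegressor.default_parameters() == LightGBMRegressor().parameters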

describe(self, print_name=False, return_dict=False)

Describe a component and its parameters.

Parameters
  • print_name (bool, optional) – whether to print the name of the component

  • return_dict (bool, optional) – whether to return the description as a dictionary in the format {"name": name, "parameters": parameters}

Returns

prints and returns a dictionary

Return type

None or dict
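A minimal sketch of describe, assuming the default parameters shown in the constructor signature above:

    from evalml.pipelines.components.estimators.regressors.lightgbm_regressor import LightGBMRegressor

    regressor = LightGBMRegressor()
    # Print the component's name and parameters, and also capture them as a
    # dictionary of the form {"name": ..., "parameters": ...}.
    description = regressor.describe(print_name=True, return_dict=True)
    print(description["parameters"]["learning_rate"])  # 0.1 by default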

property feature_importance(self)

Returns importance associated with each feature.

Returns

Importance associated with each feature

Return type

np.ndarray
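Because the importances come from the fitted booster, the property is read after fit; a minimal sketch on synthetic, illustrative data:

    import numpy as np
    from evalml.pipelines.components.estimators.regressors.lightgbm_regressor import LightGBMRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = X[:, 0] + 0.1 * rng.normal(size=200)

    # fit returns self, so the calls can be chained.
    regressor = LightGBMRegressor().fit(X, y)
    print(regressor.feature_importance)  # np.ndarray, one importance per feature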

fit(self, X, y=None)[source]

Fits component to data.

Parameters
  • X (list, pd.DataFrame or np.ndarray) – The input training data of shape [n_samples, n_features]

  • y (list, pd.Series, np.ndarray, optional) – The target training data of length [n_samples]

Returns

self
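A minimal sketch of fit with pandas inputs; since fit returns self, predict can be chained onto it. The column names and values are illustrative only:

    import numpy as np
    import pandas as pd
    from evalml.pipelines.components.estimators.regressors.lightgbm_regressor import LightGBMRegressor

    # Features may be a list, pd.DataFrame, or np.ndarray; the target is optional
    # and of matching length.
    X = pd.DataFrame({"a": np.tile([1.0, 2.0, 3.0, 4.0], 25), "b": np.arange(100.0)})
    y = pd.Series(np.arange(100, dtype=float))

    predictions = LightGBMRegressor(random_seed=0).fit(X, y).predict(X)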

static load(file_path)

Loads component at file path.

Parameters

file_path (str) – Location to load file

Returns

ComponentBase object

needs_fitting(self)

Returns boolean determining if component needs fitting before calling predict, predict_proba, transform, or feature_importances. This can be overridden to False for components that do not need to be fit or whose fit methods do nothing.

property parameters(self)

Returns the parameters which were used to initialize the component.
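A minimal sketch, assuming the constructor arguments shown; parameters simply echoes what the component was initialized with:

    from evalml.pipelines.components.estimators.regressors.lightgbm_regressor import LightGBMRegressor

    regressor = LightGBMRegressor(n_estimators=50, max_depth=5)  # arbitrary values
    print(regressor.parameters["n_estimators"])  # 50
    print(regressor.parameters["max_depth"])     # 5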

predict(self, X)[source]

Make predictions using selected features.

Parameters

X (pd.DataFrame, np.ndarray) – Data of shape [n_samples, n_features]

Returns

Predicted values

Return type

pd.Series

predict_proba(self, X)

Make probability estimates for labels.

Parameters

X (pd.DataFrame or np.ndarray) – Features

Returns

Probability estimates

Return type

pd.Series

save(self, file_path, pickle_protocol=cloudpickle.DEFAULT_PROTOCOL)

Saves component at file path.

Parameters
  • file_path (str) – Location to save file

  • pickle_protocol (int) – The pickle data stream format.

Returns

None
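A minimal save/load round-trip sketch; the file name is illustrative, and the restored object is expected to carry the same parameters:

    from evalml.pipelines.components.estimators.regressors.lightgbm_regressor import LightGBMRegressor

    regressor = LightGBMRegressor(n_estimators=50)
    regressor.save("lightgbm_regressor.pkl")  # illustrative file name

    restored = LightGBMRegressor.load("lightgbm_regressor.pkl")
    assert restored.parameters == regressor.parameters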