Use the gradient boosting classes in Scikit-Learn to solve different classification and regression problems
In the first part of this article, we presented the gradient boosting algorithm and showed its implementation in pseudocode.
In this part of the article, we will explore the classes in Scikit-Learn that implement this algorithm, discuss their various parameters, and demonstrate how to use them to solve several classification and regression problems.
Although the XGBoost library (which will be covered in a future article) provides a more optimized and highly scalable implementation of gradient boosting, for small to medium-sized datasets it is often easier to use the gradient boosting classes in Scikit-Learn, which have a simpler interface and significantly fewer hyperparameters to tune.
Scikit-Learn provides the following classes that implement the gradient-boosted decision trees (GBDT) model:
- GradientBoostingClassifier is used for classification problems.
- GradientBoostingRegressor is used for regression problems.
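Before looking at the parameters, here is a minimal sketch (not taken from the original article) of fitting both classes with their default settings on synthetic data; the dataset sizes and random seeds are arbitrary choices for illustration.

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Classification with GradientBoostingClassifier
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
clf = GradientBoostingClassifier(random_state=42)
clf.fit(X_train, y_train)
print("classifier accuracy:", clf.score(X_test, y_test))

# Regression with GradientBoostingRegressor
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
reg = GradientBoostingRegressor(random_state=42)
reg.fit(X_train, y_train)
print("regressor R^2:", reg.score(X_test, y_test))
```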
In addition to the standard parameters of decision trees, such as criterion, max_depth (set by default to 3) and min_samples_split, these classes provide the following parameters (a short sketch showing how to set them follows the list):
- loss — the loss function to be optimized. In GradientBoostingClassifier, this function can be ‘log_loss’ (the default) or ‘exponential’ (which makes gradient boosting behave like the AdaBoost algorithm). In GradientBoostingRegressor, this function can be ‘squared_error’ (the default), ‘absolute_error’, ‘huber’, or ‘quantile’.
- n_estimators — the number of boosting iterations (defaults to 100).
- learning_rate — a factor that shrinks the contribution of each tree (defaults to 0.1).
- subsample — the fraction of samples to use for training each tree (defaults to 1.0).
- max_features — the number of features to consider when searching for the best split at each node. The options are to specify an integer for the…
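As a minimal sketch of the parameters listed above, the following instantiation sets each of them explicitly; the specific values are illustrative choices, not tuned recommendations.

```python
from sklearn.ensemble import GradientBoostingRegressor

reg = GradientBoostingRegressor(
    loss='huber',         # robust loss that blends squared and absolute error
    n_estimators=200,     # number of boosting iterations (trees)
    learning_rate=0.05,   # shrinks the contribution of each tree
    subsample=0.8,        # train each tree on 80% of the samples
    max_features='sqrt',  # features considered when searching for the best split
    max_depth=3,          # depth of the individual regression trees
    random_state=42,
)
```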