How to use the mlopt.lgb_tune.LGBMOpt class in mlopt

To help you get started, we’ve selected a few mlopt examples based on popular ways it is used in public projects.


Source: arnaudvl/ml-parameter-optimization (GitHub), examples/ml_optimization.py
# X_slice, y_slice, params_cv and save_dir are defined earlier in the source
# script. The lightgbm tuner is mlopt.lgb_tune.LGBMOpt; the xgboost tuner is
# assumed to live in mlopt.xgb_tune alongside it.
from mlopt.xgb_tune import XGBoostOpt
from mlopt.lgb_tune import LGBMOpt

# steps 1-5 of the xgboost tuning recipe precede this excerpt
# 6. reduce learning rate and start over until the stopping criterion is reached
xgb = XGBoostOpt(X_slice, y_slice, params_cv=params_cv, max_rounds=2,
                 model_name='xgb_porto_seguro', save_dir=save_dir)
xgb.tune_params()
print('Best model score: %f.' % xgb.best_score)
print('Best model parameters:')
print(xgb.best_model)
xgb.save_model()

# a similar approach is taken for lightgbm:
# 1. fix learning rate and number of estimators for tuning tree-based parameters
# 2. tune num_leaves and min_data_in_leaf
# 3. tune min_gain_to_split
# 4. tune bagging_fraction + bagging_freq and feature_fraction
# 5. tune lambda_l2
# 6. reduce learning rate and start over until the stopping criterion is reached
lgb = LGBMOpt(X_slice, y_slice, params_cv=params_cv, max_rounds=2,
              model_name='lgb_porto_seguro', save_dir=save_dir)
lgb.tune_params()
print('Best model score: %f.' % lgb.best_score)
print('Best model parameters:')
print(lgb.best_model)
lgb.save_model()
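
Both examples rely on X_slice, y_slice, params_cv and save_dir being prepared earlier in ml_optimization.py. The sketch below shows one way that setup might look, using toy data in place of the Porto Seguro dataset. The constructor arguments mirror the snippet above, but the params_cv keys shown ('cv_folds', 'scoring') are assumptions about the expected cross-validation settings rather than confirmed API, so check the project's README for the exact schema.

from sklearn.datasets import make_classification
from mlopt.lgb_tune import LGBMOpt

# toy binary-classification data standing in for the Porto Seguro dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_slice, y_slice = X[:800], y[:800]  # tune on a slice, as the variable names suggest

# hypothetical cross-validation settings; the exact keys params_cv accepts
# are an assumption here, not confirmed against the mlopt source
params_cv = {'cv_folds': 5, 'scoring': 'roc_auc'}
save_dir = './models/'

lgb = LGBMOpt(X_slice, y_slice, params_cv=params_cv, max_rounds=2,
              model_name='lgb_demo', save_dir=save_dir)
lgb.tune_params()                 # runs the staged recipe described above
print('Best model score: %f.' % lgb.best_score)
lgb.save_model()                  # writes the tuned model to save_dir

Note that max_rounds appears to cap how many times the tune-then-reduce-learning-rate loop (step 6) repeats; raising it trades longer runtime for a potentially better score.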