# Model description

This is an XGBoost regression model, wrapped in a scikit-learn Pipeline with one-hot encoding, trained to predict students' daily alcohol consumption.
## Training Procedure

### Hyperparameters

The model is trained with the hyperparameters below.
Hyperparameter | Value |
---|---|
memory | |
steps | [('onehotencoder', OneHotEncoder(handle_unknown='ignore', sparse=False)), ('xgbregressor', XGBRegressor(base_score=None, booster=None, callbacks=None, colsample_bylevel=None, colsample_bynode=None, colsample_bytree=None, early_stopping_rounds=None, enable_categorical=False, eval_metric=None, feature_types=None, gamma=None, gpu_id=None, grow_policy=None, importance_type=None, interaction_constraints=None, learning_rate=None, max_bin=None, max_cat_threshold=None, max_cat_to_onehot=None, max_delta_step=None, max_depth=5, max_leaves=None, min_child_weight=None, missing=nan, monotone_constraints=None, n_estimators=100, n_jobs=None, num_parallel_tree=None, predictor=None, random_state=None, ...))] |
verbose | False |
onehotencoder | OneHotEncoder(handle_unknown='ignore', sparse=False) |
xgbregressor | XGBRegressor(base_score=None, booster=None, callbacks=None, colsample_bylevel=None, colsample_bynode=None, colsample_bytree=None, early_stopping_rounds=None, enable_categorical=False, eval_metric=None, feature_types=None, gamma=None, gpu_id=None, grow_policy=None, importance_type=None, interaction_constraints=None, learning_rate=None, max_bin=None, max_cat_threshold=None, max_cat_to_onehot=None, max_delta_step=None, max_depth=5, max_leaves=None, min_child_weight=None, missing=nan, monotone_constraints=None, n_estimators=100, n_jobs=None, num_parallel_tree=None, predictor=None, random_state=None, ...) |
onehotencoder__categories | auto |
onehotencoder__drop | |
onehotencoder__dtype | <class 'numpy.float64'> |
onehotencoder__handle_unknown | ignore |
onehotencoder__sparse | False |
xgbregressor__objective | reg:squarederror |
xgbregressor__base_score | |
xgbregressor__booster | |
xgbregressor__callbacks | |
xgbregressor__colsample_bylevel | |
xgbregressor__colsample_bynode | |
xgbregressor__colsample_bytree | |
xgbregressor__early_stopping_rounds | |
xgbregressor__enable_categorical | False |
xgbregressor__eval_metric | |
xgbregressor__feature_types | |
xgbregressor__gamma | |
xgbregressor__gpu_id | |
xgbregressor__grow_policy | |
xgbregressor__importance_type | |
xgbregressor__interaction_constraints | |
xgbregressor__learning_rate | |
xgbregressor__max_bin | |
xgbregressor__max_cat_threshold | |
xgbregressor__max_cat_to_onehot | |
xgbregressor__max_delta_step | |
xgbregressor__max_depth | 5 |
xgbregressor__max_leaves | |
xgbregressor__min_child_weight | |
xgbregressor__missing | nan |
xgbregressor__monotone_constraints | |
xgbregressor__n_estimators | 100 |
xgbregressor__n_jobs | |
xgbregressor__num_parallel_tree | |
xgbregressor__predictor | |
xgbregressor__random_state | |
xgbregressor__reg_alpha | |
xgbregressor__reg_lambda | |
xgbregressor__sampling_method | |
xgbregressor__scale_pos_weight | |
xgbregressor__subsample | |
xgbregressor__tree_method | |
xgbregressor__validate_parameters | |
xgbregressor__verbosity | |
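For orientation, the pipeline described by this table could be reconstructed roughly as follows. This is a minimal sketch: only `max_depth`, `n_estimators`, `handle_unknown`, and `sparse` are taken from the card; every other argument is left at its default, and `X_train` / `y_train` are placeholders, since the card does not ship the training data.

```python
# Minimal sketch of the pipeline in this card. Only the non-default
# hyperparameters from the table above are set explicitly.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
from xgboost import XGBRegressor

pipe = make_pipeline(
    # sparse=False matches this card; scikit-learn >= 1.2 renames it to sparse_output.
    OneHotEncoder(handle_unknown="ignore", sparse=False),
    XGBRegressor(max_depth=5, n_estimators=100),
)
# pipe.fit(X_train, y_train)  # X_train / y_train are placeholders
```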
### Model Plot
The model plot is below.
```
Pipeline(steps=[('onehotencoder',
                 OneHotEncoder(handle_unknown='ignore', sparse=False)),
                ('xgbregressor',
                 XGBRegressor(base_score=None, booster=None, callbacks=None,
                              colsample_bylevel=None, colsample_bynode=None,
                              colsample_bytree=None, early_stopping_rounds=None,
                              enable_categorical=False, eval_metric=None,
                              feature_types=None, gamma=None, gpu_id=None,
                              grow_policy=None, importance_type=None,
                              interaction_constraints=None, learning_rate=None,
                              max_bin=None, max_cat_threshold=None,
                              max_cat_to_onehot=None, max_delta_step=None,
                              max_depth=5, max_leaves=None,
                              min_child_weight=None, missing=nan,
                              monotone_constraints=None, n_estimators=100,
                              n_jobs=None, num_parallel_tree=None,
                              predictor=None, random_state=None, ...))])
```
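To use a serialized copy of this pipeline for prediction, something along these lines should work. This is a hedged sketch: the artifact name `model.pkl` and the input `X_new` are hypothetical placeholders, as this card does not specify the file name or the raw input schema.

```python
# Usage sketch: load a serialized copy of the fitted pipeline and predict.
# "model.pkl" and X_new are hypothetical placeholders, not given by this card.
import joblib

pipe = joblib.load("model.pkl")    # hypothetical artifact name
predictions = pipe.predict(X_new)  # X_new: 2D array / DataFrame of raw features
```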
## Evaluation Results

You can find the details about the evaluation process and the evaluation results below.
Metric | Value |
---|---|
R squared | 0.382 |
Mean Squared Error | 0.43055 |
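These two metrics could be recomputed from a fitted pipeline along the following lines (a sketch; `X_test` / `y_test` stand in for the card's unspecified evaluation split):

```python
# Sketch: recompute the reported metrics with scikit-learn.
# X_test / y_test are placeholders for the card's evaluation split.
from sklearn.metrics import mean_squared_error, r2_score

y_pred = pipe.predict(X_test)
print("R squared:", r2_score(y_test, y_pred))
print("Mean Squared Error:", mean_squared_error(y_test, y_pred))
```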
## Feature Importance Plot
Explained as: feature importances
XGBoost feature importances; values are numbers 0 <= x <= 1; all values sum to 1.
Weight | Feature |
---|---|
0.3592 | x26_5 |
0.0499 | x26_1 |
0.0383 | x26_4 |
0.0325 | x23_3 |
0.0256 | x28_0 |
0.0229 | x30_10 |
0.0222 | x8_health |
0.0203 | x29_10 |
0.0200 | x14_2 |
0.0200 | x7_3 |
0.0199 | x31_16 |
0.0179 | x28_8 |
0.0155 | x28_6 |
0.0155 | x11_mother |
0.0149 | x29_12 |
0.0145 | x26_2 |
0.0138 | x21_no |
0.0112 | x6_2 |
0.0098 | x14_0 |
0.0092 | x18_no |
… 161 more … | |
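The feature names above (e.g. `x26_5`) are the one-hot encoder's generated names: input column `x26`, category `5`. A table like this could be rebuilt from a fitted pipeline roughly as follows (a sketch; the step names assume `make_pipeline`'s lower-cased defaults, as listed in the hyperparameter table):

```python
# Sketch: recover the importance table from a fitted pipeline like the one above.
import numpy as np

enc = pipe.named_steps["onehotencoder"]
reg = pipe.named_steps["xgbregressor"]

names = enc.get_feature_names_out()  # e.g. 'x26_5': input column 26, category 5
weights = reg.feature_importances_   # normalised so they sum to 1

# Print the 20 most important one-hot features, largest first.
for i in np.argsort(weights)[::-1][:20]:
    print(f"{weights[i]:.4f}  {names[i]}")
```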