Mind Map: XGBoost

1. sklearn

1.1. Explains all the parameters used in XGBoost, including feature importances and scores: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingRegressor.html
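
A minimal sketch of how those pieces look in code, on a small synthetic dataset; the parameter values are illustrative, not recommendations from the linked page:

    # Fit scikit-learn's GradientBoostingRegressor and inspect feature
    # importances and the R^2 score mentioned above.
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor

    # Synthetic data stands in for a real dataset.
    X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)

    model = GradientBoostingRegressor(
        n_estimators=200,   # number of boosting stages (trees)
        learning_rate=0.1,  # shrinkage applied to each tree's contribution
        max_depth=3,        # depth of the individual weak learners
        random_state=0,
    )
    model.fit(X, y)

    print(model.feature_importances_)  # one importance value per feature
    print(model.score(X, y))           # R^2 on the training data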

2. Regressor in Python

2.1. https://blog.paperspace.com/implementing-gradient-boosting-regression-python/ https://www.datarobot.com/blog/gradient-boosted-regression-trees/ https://towardsdatascience.com/machine-learning-part-18-boosting-algorithms-gradient-boosting-in-python-ef5ae6965be4
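
A minimal regression sketch in the spirit of the linked tutorials, here using the xgboost package's scikit-learn-style XGBRegressor (assumed installed); the dataset and parameter values are made up for illustration:

    # Train a gradient-boosted regressor and evaluate it on held-out data.
    from sklearn.datasets import make_regression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error
    from xgboost import XGBRegressor

    X, y = make_regression(n_samples=1000, n_features=8, noise=15.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = XGBRegressor(n_estimators=300, learning_rate=0.05, max_depth=4)
    model.fit(X_train, y_train)

    pred = model.predict(X_test)
    print(mean_squared_error(y_test, pred))  # test-set MSE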

3. Explanation of how XGBoost tries to minimize the residuals

3.1. https://towardsdatascience.com/simplifying-gradient-boosting-5dcd934e9c76 https://towardsdatascience.com/all-you-need-to-know-about-gradient-boosting-algorithm-part-1-regression-2520a34a502 https://machinelearningmastery.com/gentle-introduction-gradient-boosting-algorithm-machine-learning/ https://www.analyticsvidhya.com/blog/2021/09/gradient-boosting-algorithm-a-complete-guide-for-beginners/

4. In the gradient descent method, we try to reach the minimum of an objective function (loss function) by walking through the solution space along the negative gradient. You can consider every added tree a further step in the direction of steepest descent. Since the negative gradient of loss functions such as log loss or mean squared error happens to be the difference between observation and prediction, i.e. the residual, you can interpret every tree as being fitted to the previous model's residuals. Here is a simple example of fitting to a previous residual: https://blog.mlreview.com/gradient-boosting-from-scratch-1e317ae4587d
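
A toy sketch of that residual-fitting step, in the spirit of the linked from-scratch example (synthetic data and shallow trees chosen only for illustration):

    # The second tree is fitted to the residuals (prediction errors) of the
    # first; adding its output is one step in the steepest-descent direction
    # for squared error.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X).ravel() + rng.normal(scale=0.2, size=200)

    tree1 = DecisionTreeRegressor(max_depth=2).fit(X, y)
    residuals = y - tree1.predict(X)   # negative gradient of squared error (up to a factor)

    tree2 = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    combined = tree1.predict(X) + tree2.predict(X)

    print(np.mean((y - tree1.predict(X)) ** 2))  # error of the single tree
    print(np.mean((y - combined) ** 2))          # smaller after the residual step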

5. Classification

5.1. https://blog.paperspace.com/gradient-boosting-for-classification/ https://towardsdatascience.com/understanding-gradient-boosting-machines-9be756fe76ab https://stackabuse.com/gradient-boosting-classifiers-in-python-with-scikit-learn/ https://affine.ai/gradient-boosting-trees-for-classification-a-beginners-guide/
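
A minimal classification sketch along the lines of the linked tutorials, using scikit-learn's GradientBoostingClassifier on synthetic data (parameter values are illustrative):

    # Boosted classification: fit on a train split, check accuracy and
    # class probabilities on held-out data.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
    clf.fit(X_train, y_train)

    print(accuracy_score(y_test, clf.predict(X_test)))
    print(clf.predict_proba(X_test[:5]))  # class probabilities from the boosted ensemble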

6. What's the basic idea behind gradient boosting? Instead of creating a single powerful model, boosting combines multiple simple models into a single composite model. The idea is that, as we introduce more and more simple models, the overall model becomes stronger and stronger. In boosting terminology, the simple models are called weak models or weak learners. To improve its predictions, gradient boosting looks at the difference between its current approximation, ŷ, and the known correct target vector, y; this difference, y − ŷ, is called the residual. It then trains a weak model that maps the feature vector x to that residual vector. Adding a residual predicted by a weak model to an existing model's approximation nudges the model towards the correct target. Adding lots of these nudges improves the overall model's approximation. https://explained.ai/gradient-boosting/faq.html https://www.analyticsvidhya.com/blog/2021/03/gradient-boosting-machine-for-data-scientists/ https://www.frontiersin.org/articles/10.3389/fnbot.2013.00021/full
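
A toy sketch of that additive "nudge" loop (constant initial model, weak trees fitted to the residual y − ŷ, a small learning rate); the data and settings are made up for illustration:

    # Start from a constant approximation and repeatedly nudge it toward the
    # target by adding a fraction of a weak tree fitted to the residuals.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(300, 1))
    y = X.ravel() ** 2 + rng.normal(scale=0.5, size=300)

    y_hat = np.full_like(y, y.mean())   # initial approximation
    learning_rate = 0.1
    weak_learners = []

    for _ in range(100):
        residual = y - y_hat                               # how far off we still are
        tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
        y_hat = y_hat + learning_rate * tree.predict(X)    # nudge toward the target
        weak_learners.append(tree)

    print(np.mean((y - y_hat) ** 2))  # training error shrinks as nudges accumulate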