Decision Tree
Decision Trees (DT) have the following features:
• They are non-parametric supervised learning methods used for classification and regression.
• The model predicts the value of a target variable by learning simple decision rules (splits) inferred from the data features.
• The algorithm has the following main hyper-parameters: the cost function used to measure the quality of a split, the maximum depth of the tree, and the minimum number of samples required at a leaf node (see the sketch after this list).
• Unlike random forests, the DT algorithm splits each node using the best split among all features, not among a random subset of them.
• They do not suffer from numerical issues, so no normalization of the data is needed.
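The hyper-parameters listed above map directly onto scikit-learn's DecisionTreeClassifier. The snippet below is a minimal illustrative sketch: the entry itself refers to the Knime environment, so the use of Python, the Iris toy dataset, and the chosen parameter values are assumptions for illustration only, not part of the reviewed material.

# Minimal sketch: fitting a decision tree with the hyper-parameters named above
# (split criterion, maximum depth, minimum samples per leaf).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = DecisionTreeClassifier(
    criterion="gini",      # cost function measuring the quality of a split
    max_depth=3,           # maximum depth of the tree
    min_samples_leaf=5,    # minimum number of samples required at a leaf node
)
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))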
Scientific Area:
Learning
Language/Environments:
Knime
Target Group:
Basic
Cite as:
Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, C. J., Classification and Regression Trees, Routledge, 2017.
Author of the review:
Giulia Cademartori
University of Genoa