Introduction to Tree Methods
Tree methods are supervised learning methods: there is a pre-determined output variable (categorical for classification, continuous for regression) that we will attempt to predict.
The tree attempts to recursively partition the predictor space to model the relationship between the predictors and the response variable. There are several advantages to trees over other methods:
- Trees are non-parametric, requiring few statistical assumptions
- Trees can be applied to various data types including ordered, categorical, and continuous variables (or any combination thereof)
- Trees perform stepwise variable selection, complexity reduction, and interaction handling in an automatic manner
- Trees are invariant under all monotone transformations of the individual ordered predictors
- Trees are efficient in handling missing values, and there are methods we can use to determine variable importance for the output
- The output is easily understood and interpreted. An individual observation follows a path through the tree that we can interpret directly. This is preferable to "black box" methods such as Neural Networks or Support Vector Machines
- There is a tradeoff: trees are robust, but they are also unstable and depend heavily on the data supplied
This tutorial will cover trees as presented in CART by Breiman, Friedman, Olshen, and Stone (1984). This methodology addressed some of the problems of earlier tree methods, and it forms the foundation of some of the ensemble methods we will explore later.
Terminology
- Trees are built from root nodes (top) to leaf or terminal nodes (bottom)
- A record (observation) first enters the root node. A split (test) is applied to determine which child node it should go to next. This process is repeated until the record arrives at a terminal node
- The path from the root to the leaf node expresses a decision rule; for example, a path might read "if $x_1 \le 5$ and $x_3 \in \{a, b\}$, then predict class 1" (the variables and values here are purely illustrative)
CART Methodology
CART, standing for Classification And Regression Trees, prescribes a methodology to build and select a specific tree.
- Grow a large initial tree
- Iteratively prune branches of the initial tree to obtain a sequence of optimally pruned (nested) subtrees
- Select the best tree size based on validation, using either a separate test sample (not used to build the tree) or cross-validation (CV, covered later)
Grow a Large Initial Tree
To grow the initial tree, you need 4 elements.
- A set of binary questions to induce a split
- A goodness of split criterion to evaluate a split
- A stop-splitting rule
- A rule for assigning every terminal node to a class (0 or 1)
Binary Questions
Binary questions are questions whose answers can be represented as a 0 or 1.
- For a continuous variable $x_j$, the question takes the form "Is $x_j \le c$?" for some cutpoint $c$. If yes, output 1; if no, output 0
- For ordinal variables, we can generally regard them as continuous and treat them in the same way
- For a categorical variable $x_j$, the binary question asks whether the observation belongs to a specific subset of the possible values. If the possible values for $x_j$ are $\{b_1, b_2, \ldots, b_L\}$, the question takes the form "Is $x_j \in A$?" for some subset $A \subseteq \{b_1, \ldots, b_L\}$. If yes, output 1; if no, output 0
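As a concrete illustration (not part of the original notes), the following Python sketch shows how these binary questions map an observation to 0 or 1; the cutpoint `c` and the category subset `A` are arbitrary example values.

```python
# Minimal sketch of CART-style binary questions (illustrative only).

def continuous_question(x, c):
    """Is x <= c?  Returns 1 for yes, 0 for no."""
    return 1 if x <= c else 0

def categorical_question(x, A):
    """Is x in the subset A of possible categories?  Returns 1 for yes, 0 for no."""
    return 1 if x in A else 0

# Example usage with hypothetical values.
print(continuous_question(3.7, c=5.0))                 # 1: 3.7 <= 5.0
print(categorical_question("red", A={"red", "blue"}))  # 1: "red" is in A
```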
Goodness of Split Criterion
We will use a node-impurity-based splitting criterion to evaluate the splits. In general, the impurity of a node $t$ can be defined as $i(t) = \phi(p_0, p_1)$, a nonnegative function of $p_0 = p(0 \mid t)$ and $p_1 = p(1 \mid t)$, the proportions of cases in node $t$ belonging to classes 0 and 1, respectively.
The impurity function is largest when the two classes are equally mixed, and smallest when the node contains only one class. Thus $\phi$ has the following properties:
- Attains its minimum when $p_1 = 0$ or $p_1 = 1$ (the node is pure)
- Attains its maximum when $p_0 = p_1 = 1/2$
- $\phi(p_0, p_1) = \phi(p_1, p_0)$; i.e., $\phi$ is symmetric about $p_0 = p_1 = 1/2$
Common choices for $\phi$ include the Minimum Error, the Entropy function, and the Gini Index.
Minimum or Bayes Error
For two classes, this measure is $\phi(p_0, p_1) = \min(p_0, p_1)$, which corresponds to the misclassification rate when a majority vote is used. This error is rarely used in practice because it does not sufficiently reward purer nodes.
Entropy Function
For two classes, the entropy function is $\phi(p_0, p_1) = -p_0 \log p_0 - p_1 \log p_1$ (with $0 \log 0$ taken to be 0). Using entropy is equivalent to using the likelihood-ratio statistic for association between the branches and the target categories.
Gini Index
For two classes, the Gini index is $\phi(p_0, p_1) = p_0 p_1$. This is the measure proposed in the original CART method. It has since been observed that the rule has an undesirable end-cut preference problem: it gives preference to splits that result in two child nodes of extremely unbalanced size. To mitigate this, a modified splitting method has been adopted in which no cut is made within the last few percent of the data at either end.
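The following Python sketch (an illustration, using the standard two-class formulas above) computes the three impurity measures and confirms that each is zero for a pure node and largest at $p_1 = 1/2$.

```python
import numpy as np

# Two-class node impurity functions discussed above; p1 is the proportion
# of class-1 cases in the node (so p0 = 1 - p1).

def bayes_error(p1):
    """Minimum (Bayes) error: min(p0, p1)."""
    return min(p1, 1.0 - p1)

def entropy(p1):
    """Entropy: -p0*log(p0) - p1*log(p1), with 0*log(0) taken to be 0."""
    p = np.array([p1, 1.0 - p1])
    p = p[p > 0]                      # drop zero proportions to avoid log(0)
    return float(-(p * np.log(p)).sum())

def gini(p1):
    """Gini index for two classes: p0 * p1."""
    return p1 * (1.0 - p1)

for p1 in (0.0, 0.1, 0.5):
    print(p1, bayes_error(p1), round(entropy(p1), 3), gini(p1))
# All three are 0 for a pure node (p1 = 0) and largest at p1 = 0.5.
```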
Goodness of Split Measure
Generally speaking, the goodness of split measure is a measure of the change in node impurity. Let $s$ be a candidate split and suppose $s$ divides node $t$ into $t_L$ and $t_R$ such that the proportions of cases that go into $t_L$ and $t_R$ are $p_L$ and $p_R$, respectively. We can define the reduction in node impurity as
$$\Delta i(s, t) = i(t) - p_L\, i(t_L) - p_R\, i(t_R),$$
which provides the goodness of split measure for $s$. The best split $s^*$ for node $t$ is the one that maximizes the impurity reduction.
Node $t$ is then split into $t_L$ and $t_R$ according to $s^*$, and the search procedure is repeated on $t_L$ and $t_R$ separately. A node becomes a terminal node when pre-specified stopping conditions are satisfied.
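As an illustrative sketch (not from the notes), the following Python code evaluates the goodness of split $\Delta i(s, t)$ for candidate cutpoints on a toy node, using the Gini index as the impurity function.

```python
import numpy as np

def gini_node(y):
    """Two-class Gini impurity of a node containing labels y (0/1)."""
    if len(y) == 0:
        return 0.0
    p1 = np.mean(y)
    return p1 * (1.0 - p1)

def impurity_reduction(x, y, c):
    """Goodness of split  delta_i(s, t) = i(t) - pL*i(tL) - pR*i(tR)
    for the candidate split 'is x <= c?' on node data (x, y)."""
    left, right = y[x <= c], y[x > c]
    p_left, p_right = len(left) / len(y), len(right) / len(y)
    return gini_node(y) - p_left * gini_node(left) - p_right * gini_node(right)

# Toy node: the split "x <= 3" separates the classes perfectly, so its
# impurity reduction is larger than that of "x <= 1".
x = np.array([1, 2, 3, 4, 5, 6])
y = np.array([0, 0, 0, 1, 1, 1])
print(impurity_reduction(x, y, c=3))   # 0.25 (larger reduction)
print(impurity_reduction(x, y, c=1))   # 0.05 (smaller reduction)
```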
Tree Pruning
Going back to the drawbacks of tree methods, trees are robust yet unstable. They heavily depend on the data that was given to them to build the tree, so they adapt to both the pattern of the data (what we want to see) and the noise of the data (what we’re trying to avoid). To obtain a tree model that can generalize well to new data, we trim back the tree iteratively to obtain a nested sequence of optimally pruned subtrees.
Note that there exists a balance between bias and variance within tree models. A well-fitted tree has low bias (it adapts to the signal) and low variance (it does not adapt to the noise). For a given dataset, the more complex a tree is, the more it adapts to noise and therefore the higher its variance. An under-fitted tree has high bias and low variance; an overfitted tree has low bias and high variance.
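The overfitting behaviour described above can be seen in a quick scikit-learn experiment; this is only a sketch, and the pruning strength ccp_alpha=0.02 is an arbitrary illustrative value, not a recommendation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fully grown tree: adapts to noise (low bias, high variance).
full = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
# Heavily pruned tree: simpler and more stable (higher bias, lower variance).
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.02).fit(X_train, y_train)

for name, tree in [("full", full), ("pruned", pruned)]:
    print(name,
          "train error:", round(1 - tree.score(X_train, y_train), 3),
          "test error:",  round(1 - tree.score(X_test, y_test), 3))
# Typically the full tree has ~0 training error yet does not generalize
# better than the pruned tree, illustrating the overfitting described above.
```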
Cost Complexity Measure
In CART, the complexity of a tree is determined by the total number of terminal nodes it has. We will again need to define some terminology:
- Descendant and ancestor nodes refer to the relationship between nodes: a node $t'$ is called a descendant of a higher node $t$ (the ancestor node) if there is a path leading down the tree from $t$ to $t'$
- Let $\tilde{T}$ denote the set of all terminal nodes of $T$, and $T - \tilde{T}$ the set of all internal nodes of $T$. Furthermore, $|\cdot|$ denotes cardinality, i.e. the number of elements in a set, so $|\tilde{T}|$ represents the number of terminal nodes in tree $T$
- A tree $T'$ is a subtree of $T$ if $T'$ has the same root node as $T$ and every node of $T'$ is also a node of $T$. We denote a subtree using the notation $T' \preceq T$
- A tree $T_t$ is called a branch of $T$ if $T_t$ has root node $t \in T$ and contains $t$ together with all descendants of $t$ in $T$
- We prune a branch $T_t$ off $T$ by cutting off all descendants of its root node $t$ in $T$. The pruned tree is denoted $T - T_t$
- The misclassification cost of a terminal node $t$ is $R(t) = r(t)\, p(t)$, where $r(t)$ is the proportion of cases in $t$ misclassified by the node's class assignment and $p(t)$ is the proportion of all cases falling into $t$; the misclassification cost of the tree is $R(T) = \sum_{t \in \tilde{T}} R(t)$
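For completeness, the cost complexity measure itself (not written out above) is, in standard CART notation,
$$R_\alpha(T) = R(T) + \alpha\,|\tilde{T}|,$$
where $\alpha \ge 0$ is the complexity parameter that penalizes tree size. For each value of $\alpha$ there is a smallest subtree of the initial tree minimizing $R_\alpha$, and as $\alpha$ increases these minimizing subtrees form the nested sequence of optimally pruned subtrees referred to throughout these notes.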
Tree Size Selection
As the third step in the CART algorithm, we are tasked with choosing the most appropriate tree size to give the final model. A natural choice is to pick the subtree that optimizes an estimate of the performance measure. However, because of the adaptive nature of trees, naively testing the tree on the same data used to train it results in an overly optimistic estimate of the model's performance. We have a couple of ways to get around this.
Test Sample Method
- Split the data randomly into two sets: $\mathcal{L}_1$, the training set, and $\mathcal{L}_2$, the test set
- The ratio at which we split the data is subjective. If we have a huge amount of data, we can afford to give a large share (50%) to the test sample; if we have little data, we may give as little as 25% to the test sample
- Using the training sample only, grow a large initial tree and prune it back to obtain the nested sequence of subtrees $T_1 \succ T_2 \succ \cdots \succ T_K$
- Send the test sample down each subtree and compute the test-sample misclassification cost $R^{ts}(T_k)$ for each subtree $T_k$
- The subtree having the smallest test-sample misclassification cost is selected as the best subtree $T^*$
- Once the best subtree $T^*$ is determined, $R^{ts}(T^*)$ is used as an estimate of its misclassification cost
There are advantages and disadvantages to the test sample method. It is very straightforward and simple. However, it does not use all the data: the sequence of subtrees from which the best tree model is selected is based on only part of the data.
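A rough sketch of the test sample method using scikit-learn, whose DecisionTreeClassifier exposes minimal cost-complexity pruning through the ccp_alpha parameter (this illustrates the idea and is not the notes' own implementation):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Split the data into a training sample L1 and a test sample L2.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Grow a large initial tree on L1 and get the nested sequence of
# cost-complexity pruned subtrees (one per alpha value).
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)

# Send the test sample down each subtree and record its misclassification rate.
test_error = []
for alpha in path.ccp_alphas:
    tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_train, y_train)
    test_error.append(1.0 - tree.score(X_test, y_test))

# The subtree with the smallest test-sample error is selected as the best subtree.
best_alpha = path.ccp_alphas[int(np.argmin(test_error))]
print("best alpha:", best_alpha, "test error:", min(test_error))
```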
Cross-Validation
Cross-validation arises to correct some of the shortcomings of the test sample method. It allows us to generate reliable out of sample error estimates without wasting data. The method explained here covers V-fold cross-validation, where we make V subsets of the data, train and calculate the out of sample error for each subset, and pick the complexity of the tree that minimizes it.
V-fold Cross-Validation
The whole sample $\mathcal{L}$ is randomly divided into $V$ subsets $\mathcal{L}_v$, $v = 1, \ldots, V$, whose sample sizes should be approximately equal. The $v$th learning sample is denoted $\mathcal{L}^{(v)} = \mathcal{L} - \mathcal{L}_v$.
The $v$th subset $\mathcal{L}_v$ is used as the test sample corresponding to $\mathcal{L}^{(v)}$. CART suggests that we set $V = 10$, training on 90% of the data and performing the out-of-sample test on 10% of the data at a time.
- For each fixed $v$, grow a large initial tree using the learning sample $\mathcal{L}^{(v)}$, then use the pruning procedure to obtain a nested sequence of pruned subtrees
- Grow and prune a tree based on all the data to obtain the nested sequence of best-pruned subtrees and the corresponding sequence of complexity parameters
- Find the complexity parameter (CP value) associated with the minimum cross-validated misclassification cost
- Choose the tree associated with that CP value as the optimal tree
In the interest of making tree models generalize well to new data, we may adopt a rule for choosing the optimal complexity parameter. The 1-SE rule advises us to choose the complexity parameter of the simplest tree whose cross-validated error is no greater than the minimum cross-validated error plus one standard error of that minimum. Use of this rule is left up to the researcher; you are best off looking at a plot of tree complexity versus cross-validated error and making your own choice.
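A sketch of cross-validated tree size selection with the 1-SE rule, again using scikit-learn's cost-complexity pruning as a stand-in for the CART procedure described above; the dataset is just a convenient built-in example.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Candidate complexity parameters from the full-data pruning sequence.
ccp_alphas = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y).ccp_alphas

# 10-fold cross-validated misclassification error for each alpha,
# plus the standard error of each estimate across the folds.
cv_err, cv_se = [], []
for alpha in ccp_alphas:
    scores = cross_val_score(DecisionTreeClassifier(random_state=0, ccp_alpha=alpha),
                             X, y, cv=10)
    err = 1.0 - scores
    cv_err.append(err.mean())
    cv_se.append(err.std(ddof=1) / np.sqrt(len(err)))

cv_err, cv_se = np.array(cv_err), np.array(cv_se)

# Minimum-error choice of alpha.
i_min = int(np.argmin(cv_err))

# 1-SE rule: the largest alpha (simplest tree) whose CV error is within
# one standard error of the minimum CV error.
threshold = cv_err[i_min] + cv_se[i_min]
i_1se = int(np.where(cv_err <= threshold)[0].max())
print("alpha (min error):", ccp_alphas[i_min])
print("alpha (1-SE rule):", ccp_alphas[i_1se])
```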
Additional Information
The material contained in this section was derived from Dr. Fan’s lecture notes: ctree.pdf. These notes develop the solid mechanical underpinnings of the CART algorithm.
See also this YouTube presentation by mathematicalmonk.