Cost-Sensitive Stochastic Gradient Boosting Within a Quantitative Regression Framework
Brian Kriegler
Ph.D., 2007
Advisor: Richard Berk
In a typical regression problem, one constructs a model using a symmetric loss function (e.g., squared error) so that the magnitude of the errors is minimized; consequently, overestimating and underestimating the response are weighted equally. However, if one type of deviation is more costly than the other given the subject matter, the estimates that yield the smallest overall error will likely not provide the most satisfactory vector of predictions. In this dissertation, unequal error costs are incorporated into a quantitative regression framework using Friedman's stochastic gradient boosting machine. Stochastic gradient boosting has proven to be a highly effective learning algorithm in part because it can be applied to any function estimation setting, provided the loss function is differentiable and admits an optimal solution. Herein, the focus is on obtaining estimates subject to three distinct loss criteria: absolute error, squared error, and Poisson deviance. Our methodology entails weighting overestimates and underestimates in the loss function so that the subsequently derived gradient and predicted outcomes are also weighted. Results from a case study on counting the number of homeless in Los Angeles County, in which unequal estimation costs are relevant, are reported. General statements about the characteristics of cost-sensitive boosting are made when appropriate, and a number of prospective research topics are described.
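The abstract does not reproduce the weighted loss functions themselves; as an illustrative sketch (an assumption about the form, not the dissertation's exact notation), a cost-weighted absolute-error criterion with cost ratio $c > 0$ penalizing underestimates relative to overestimates could be written as

\[
L_c(y, F) \;=\; c\,(y - F)\,\mathbf{1}\{y > F\} \;+\; (F - y)\,\mathbf{1}\{y \le F\},
\qquad
-\frac{\partial L_c(y, F)}{\partial F} \;=\; c\,\mathbf{1}\{y > F\} \;-\; \mathbf{1}\{y \le F\}.
\]

Under this form, each boosting iteration fits the base learner to residual signs weighted by $c$, and the loss-minimizing prediction is a weighted quantile of the response (the $c/(1+c)$ quantile) rather than the unweighted median recovered by symmetric absolute error; analogous weightings can be applied to the squared-error and Poisson-deviance criteria.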