# Heteroskedasticity in the Logit Model

The purpose of these notes is to provide an example of heteroskedasticity and to explain how we can account for the unequal variances of our residuals in a weighted least squares model. The specific example of heteroskedasticity that we will use is an analysis of proportions data in a logit model.

Suppose that we observe our dependent variable as a proportion or percentage:

$0 \le p_i \le 1$

which reflects an underlying "true" probability:

$0 \le \pi_i \le 1$

If we used the raw percentage as our dependent variable, then our predictions of the dependent variable would not be bounded by zero and one. Instead, it is possible that some of our predictions of the dependent variable would be less than zero and some would be greater than one.

Converting the raw percentage to log odds gives us an unbounded dependent variable:

$\ln\left(\frac{p_i}{1-p_i}\right) = \alpha + \beta X_i + \epsilon_i$
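As a quick illustration, here is the log odds transform applied to a few hypothetical proportions (the values are made up for the sketch):

```python
import numpy as np

# Hypothetical observed proportions, avoiding exactly 0 or 1,
# where the log odds would be undefined.
p = np.array([0.05, 0.25, 0.50, 0.75, 0.95])

# Log odds (logit) transform: maps (0, 1) onto the whole real line.
log_odds = np.log(p / (1 - p))
```

The transform is monotone and symmetric around $p = 0.5$, where the log odds equal zero.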

but the residuals in such a logit model do not have constant variance:

$\operatorname{var}\left(\epsilon_i\right) = \frac{1}{n_i \pi_i \left(1-\pi_i\right)}$

Instead, the residual variance will be larger when the true probability is closer to zero or one.
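A quick numeric check of the variance formula above, for an illustrative group size $n_i = 100$:

```python
import numpy as np

# Residual variance implied by var(eps_i) = 1 / (n_i * pi_i * (1 - pi_i)),
# for a fixed group size and a range of true probabilities.
n = 100
pi = np.array([0.05, 0.25, 0.50, 0.75, 0.95])

resid_var = 1 / (n * pi * (1 - pi))
```

The variance is smallest at $\pi_i = 0.5$ and grows symmetrically as $\pi_i$ approaches zero or one.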

Assuming that our explanatory variable is correlated with the true probability, the residual variance will be larger at extreme values of our explanatory variable.

Because we know the nature of the heteroskedasticity, we can modify our regression model to account for it. All we need is an unbiased estimate of the true probability for each observation.

OLS will provide such an unbiased estimate if there is no correlation between the residual and the explanatory variable (i.e. if the explanatory variable only affects the residual variance, not the residual itself).

So our first step is to estimate the regression coefficients with OLS:

$\ln\left(\frac{p_i}{1-p_i}\right) = \alpha + \beta X_i + \epsilon_i$

and then use the estimated coefficients to predict the true log odds:

$\ln\left(\frac{\hat{\pi}_i}{1-\hat{\pi}_i}\right) = \hat{\alpha} + \hat{\beta} X_i$

But what we really want is an unbiased prediction of the true probability:

$\hat{\pi}_i = \frac{1}{1+\exp\left(-\hat{\alpha} - \hat{\beta} X_i\right)}$
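The inverse-logit step looks like this in Python, with hypothetical first-stage estimates standing in for $\hat{\alpha}$ and $\hat{\beta}$:

```python
import numpy as np

# Recover a predicted probability from the predicted log odds.
# alpha_hat and beta_hat stand in for first-stage OLS estimates
# (illustrative values only, not from these notes).
alpha_hat, beta_hat = -1.0, 0.5
x_i = 2.0

pi_hat = 1 / (1 + np.exp(-alpha_hat - beta_hat * x_i))
# pi_hat always lies strictly between 0 and 1.
```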

which we can use to weight each observation:

$w_i = \sqrt{n_i \hat{\pi}_i \left(1-\hat{\pi}_i\right)}$
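Note that this weight is the reciprocal of the estimated residual standard deviation, which is why it works. A minimal check, using illustrative values for $n_i$ and $\hat{\pi}_i$:

```python
import numpy as np

# w_i^2 = n_i * pi_hat * (1 - pi_hat) is the reciprocal of the
# residual variance, so w_i is the reciprocal of the residual s.d.
n_i = 100        # illustrative group size
pi_hat = 0.25    # illustrative predicted probability

w = np.sqrt(n_i * pi_hat * (1 - pi_hat))
resid_sd = np.sqrt(1 / (n_i * pi_hat * (1 - pi_hat)))

# w * resid_sd == 1: weighting rescales every residual to unit variance.
```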

And in the second step, we apply the weight to each variable and estimate the regression coefficients:

$w_i \ln\left(\frac{p_i}{1-p_i}\right) = \alpha w_i + \beta w_i X_i + w_i \epsilon_i$

Weighting each observation in this manner accounts for the heteroskedasticity, and the residuals in our regression model should now have constant variance.
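The full two-step procedure can be sketched end-to-end in Python on simulated proportion data. Everything here (the sample sizes, the true coefficients, the random seed, the clipping guard) is an illustrative assumption, not part of the notes above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate grouped binary data with known coefficients.
alpha_true, beta_true = -1.0, 0.5
N = 200                                # number of groups
n_i = rng.integers(50, 500, size=N)    # observations per group
x = rng.normal(size=N)

# True probabilities, and observed proportions via binomial sampling.
pi = 1 / (1 + np.exp(-(alpha_true + beta_true * x)))
p = rng.binomial(n_i, pi) / n_i
p = np.clip(p, 1e-6, 1 - 1e-6)         # guard against p = 0 or p = 1

y = np.log(p / (1 - p))                # empirical log odds

# Step 1: OLS on the log odds.
X = np.column_stack([np.ones(N), x])
ols_coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predicted probabilities from the first-stage fit.
pi_hat = 1 / (1 + np.exp(-(X @ ols_coef)))

# Weights: w_i = sqrt(n_i * pi_hat * (1 - pi_hat)).
w = np.sqrt(n_i * pi_hat * (1 - pi_hat))

# Step 2: weighted least squares -- multiply every variable,
# including the constant, by w_i.
wls_coef, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
```

With enough groups, `wls_coef` should recover the true coefficients closely; in practice one would use a WLS routine (e.g. `statsmodels.api.WLS`) rather than hand-rolled `lstsq` calls, but the weighting logic is the same.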