Extensions
There are many ways of extending the previous model. For instance, we could allow for different distributions for z_i (see Koop et al., 1995) or for multiple outputs (see Fernandez, Koop and Steel, 2000). Here we focus on two other extensions which are interesting in and of themselves, but which also allow us to discuss some useful Bayesian techniques.
Explanatory variables in the efficiency distribution
Consider, for instance, a case where data are available for many firms, but some are private companies and others are state owned. Interest centers on investigating whether private companies tend to be more efficient than state-owned ones. This type of question can be formally handled by stochastic frontier models if we extend them to allow for explanatory variables in the efficiency distribution. Suppose that data exist on m variables which may affect the efficiency of firms (i.e. w_ij, for i = 1, …, N and j = 1, …, m). We assume w_i1 = 1 is an intercept and the w_ij are 0-1 dummy variables for j = 2, …, m. The latter assumption could be relaxed at the cost of increasing the complexity of the computational methods. Since λ, the mean of the inefficiency distribution, is a positive random variable, a logical extension of the previous model is to allow it to vary over firms in the following manner:
λ_i^{-1} = ∏_{j=1}^{m} φ_j^{w_ij},    (24.12)
where the φ_j > 0 are unknown parameters. Note that if φ_j = 1 for j = 2, …, m then this model reduces to the previous one. To aid in interpretation, observe how this specification allows, for instance, for private and state-owned firms to have different inefficiency distributions. If w_i2 = 1 indicates that firm i is private, then φ_2 > 1 implies that the mean of the inefficiency distribution is lower for private firms and, hence, that private firms tend to be more efficient than state-owned ones. We stress that such a finding would not imply that every private firm is more efficient than every state-owned one, but rather that the former are drawing their efficiencies from a distribution with a higher mean. Such a specification is well suited to many sorts of policy issues and immediately allows for out-of-sample predictions.
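As a concrete numerical illustration of (24.12), the following sketch (our own, with made-up numbers; the values in `phi` and `w` are purely hypothetical) computes the firm-specific inefficiency means for one state-owned and one private firm when φ_2 = 2:

```python
import numpy as np

# Hypothetical illustration of equation (24.12): lambda_i^{-1} = prod_j phi_j^{w_ij}.
# Column 1 of w is the intercept (w_i1 = 1); column 2 is a 0-1 private-ownership dummy.
phi = np.array([2.0, 2.0])           # phi_1 (baseline), phi_2 (private-firm effect)
w = np.array([[1, 0],                # firm 1: state owned
              [1, 1]])               # firm 2: private

lam_inv = np.prod(phi ** w, axis=1)  # lambda_i^{-1} for each firm
lam = 1.0 / lam_inv                  # mean of each firm's exponential inefficiency

print(lam)  # [0.5, 0.25]: the private firm's mean inefficiency is halved
```

With φ_2 > 1, the private firm draws its inefficiency from a distribution with a lower mean, exactly the interpretation given in the text.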
For the new parameters, φ = (φ_1, …, φ_m)′, we assume independent gamma priors: p(φ) = p(φ_1) … p(φ_m) with p(φ_j) = f_G(φ_j | a_j, b_j) for j = 1, …, m. If the explanatory variables have no role to play (i.e. φ_2 = … = φ_m = 1), then φ_1 is equivalent to λ^{-1} in the previous model. This suggests one may want to follow the prior elicitation rule discussed above and set a_1 = 1 and b_1 = -ln(τ*). The other prior hyperparameters, a_j and b_j for j = 2, …, m, can be selected in the context of particular applications, with moderate values yielding a relatively noninformative prior. See Koop et al. (1997) for details.
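The role of the rule a_1 = 1, b_1 = -ln(τ*) can be checked by simulation: under a gamma prior with shape 1 and rate b_1 on φ_1, the implied prior median of the efficiency exp(-z_i) is τ*. A small Monte Carlo sketch of this check (our own verification, not code from the chapter; numpy's gamma takes a scale argument, i.e. 1/rate):

```python
import numpy as np

rng = np.random.default_rng(2)

# Prior elicitation check: a_1 = 1, b_1 = -ln(tau*) should give prior
# median efficiency equal to tau*. Here tau* = 0.75 is an arbitrary choice.
tau_star = 0.75
b1 = -np.log(tau_star)

phi1 = rng.gamma(1.0, 1.0 / b1, size=200_000)  # phi_1 ~ f_G(1, b_1), rate b_1
z = rng.exponential(1.0 / phi1)                # z | phi_1 ~ Exp with mean 1/phi_1
eff = np.exp(-z)                               # efficiency tau_i = exp(-z_i)

print(np.median(eff))  # close to 0.75
```

Analytically, P(z > t) = E[exp(-φ_1 t)] = b_1/(b_1 + t), so the median of z is b_1 and the median efficiency is exp(-b_1) = τ*, which the simulation confirms.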
A posterior simulator using Gibbs sampling with data augmentation can be set up as a straightforward extension of the one considered above. In fact, the posterior conditionals for β and h (i.e. equations (24.8) and (24.9)) are completely unaffected, and the conditional for z in (24.11) is only affected in that λ^{-1}ι_N must be replaced by the vector η = (λ_1^{-1}, …, λ_N^{-1})′, where λ_i^{-1} is given in equation (24.12). It can also be verified that, for j = 1, …, m:11
p(φ_j | y, x, z, β, h, w, φ_{(-j)}) = f_G(φ_j | a_j + ∑_{i=1}^{N} w_ij, b_j + ∑_{i=1}^{N} w_ij z_i ∏_{s≠j} φ_s^{w_is}),    (24.13)
where φ_{(-j)} = (φ_1, …, φ_{j-1}, φ_{j+1}, …, φ_m)′. Hence, Bayesian inference in this model can again be conducted through sequential drawing from tractable distributions.
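A single Gibbs update of φ_j from the gamma full conditional above can be sketched as follows. This is a toy implementation under our own naming conventions (function and variable names are not from the chapter); note the gamma conditional is parameterized by shape and rate, so numpy's scale argument is 1/rate:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_phi_j(j, phi, w, z, a, b, rng):
    """One Gibbs draw of phi_j from its gamma full conditional.

    Shape:  a_j + sum_i w_ij
    Rate:   b_j + sum_i w_ij * z_i * prod_{s != j} phi_s^{w_is}
    Assumes, as in the text, that the w columns for j >= 2 are 0-1 dummies.
    """
    wj = w[:, j]
    # prod over s != j of phi_s^{w_is}, one value per firm i
    others = np.prod(np.delete(phi, j) ** np.delete(w, j, axis=1), axis=1)
    shape = a[j] + wj.sum()
    rate = b[j] + np.sum(wj * z * others)
    return rng.gamma(shape, 1.0 / rate)

# toy data: 4 firms, intercept plus one ownership dummy (all values made up)
w = np.array([[1, 0], [1, 1], [1, 0], [1, 1]])
z = np.array([0.3, 0.1, 0.5, 0.2])    # current inefficiency draws from (24.11)
phi = np.array([1.5, 1.0])
a = np.array([1.0, 2.0])
b = np.array([0.3, 2.0])

phi_new = draw_phi_j(1, phi, w, z, a, b, rng)
```

Within a full sampler, this step would be cycled over j = 1, …, m together with the draws of β, h, and z.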
So far, we have focused on posterior inference. This stochastic frontier model with varying efficiency distribution can be used to illustrate Bayesian model comparison. Suppose m = 2 and we are interested in calculating the Bayes factor comparing model M_1, where φ_2 = 1 (i.e. there is no tendency for state-owned and private firms to differ in their efficiency distributions), against model M_2 with φ_2 ≠ 1. The prior for M_2 is given above. Define γ = (β′, h, φ_{(-2)})′ as the parameters in model M_1 and let p_l(·) indicate a density under M_l for l = 1, 2. If we make the reasonable assumption that p_2(γ | φ_2 = 1) = p_1(γ), then the Bayes factor in favor of M_1 can be written as the Savage-Dickey density ratio (see Verdinelli and Wasserman, 1995):

B_12 = p(φ_2 = 1 | y, x, w) / p(φ_2 = 1),    (24.14)

the ratio of posterior to prior density values, both under M_2, at the point being tested. Note that the denominator of (24.14) is trivial to calculate since it is merely the gamma prior for φ_2 evaluated at a point. The numerator is also easy to calculate using (24.13). As Verdinelli and Wasserman (1995) stress, a good estimator of p(φ_2 = 1 | y, x, w) on the basis of R Gibbs replications is:
(1/R) ∑_{r=1}^{R} p(φ_2 = 1 | y, x, z^{(r)}, β^{(r)}, h^{(r)}, w, φ_{(-2)}^{(r)}),    (24.15)
where the superscript (r) denotes the rth draw in the Gibbs sampling algorithm. That is, we can simply evaluate (24.13) at φ_2 = 1 for each draw and average. Bayes factors for hypotheses such as this can thus be calculated easily, without recourse to evaluating the likelihood function or adding steps to the simulation algorithm (as in the more general methods of Gelfand and Dey, 1994, and Chib, 1995, respectively).
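The estimator (24.15) amounts to averaging a gamma density evaluated at φ_2 = 1 over the Gibbs draws, then dividing by the prior density at the same point, as in (24.14). A minimal sketch with mock draws (the shape and rate sequences below are fabricated stand-ins for the per-draw conditional parameters of (24.13), not output from a real sampler):

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(1)

# Mock Gibbs output: per-draw (shape, rate) of phi_2's gamma full conditional.
# In a real run these would come from z^{(r)} and phi_{(-2)}^{(r)}; here they
# are made-up numbers purely to show the mechanics.
R = 5000
shapes = 2.0 + rng.random(R)   # stand-in for a_2 + sum_i w_i2
rates = 2.0 + rng.random(R)    # stand-in for b_2 + sum_i w_i2 z_i ...

# Numerator of (24.14): Rao-Blackwellized estimate (24.15) of the posterior
# density of phi_2 at the point phi_2 = 1 (scipy's gamma uses scale = 1/rate).
post_at_1 = np.mean(gamma.pdf(1.0, a=shapes, scale=1.0 / rates))

# Denominator of (24.14): the gamma prior f_G(phi_2 | a_2, b_2) at phi_2 = 1.
a2, b2 = 1.0, 1.0
prior_at_1 = gamma.pdf(1.0, a=a2, scale=1.0 / b2)

bayes_factor = post_at_1 / prior_at_1  # Savage-Dickey ratio in favor of M_1
```

No likelihood evaluation is needed: every quantity used is either a prior density or a conditional density already available from the Gibbs sampler.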