# Identification in Parametric Models

Paul Bekker and Tom Wansbeek *

Identification is a notion of essential importance in quantitative empirical branches of science like economics and the social sciences. To the extent that statistical inference in such branches of science extends beyond a mere exploratory analysis, the generic approach is to use the subject matter theory to construct a stochastic model where the parameters in the distributions of the various random variables have to be estimated from the available evidence. Roughly stated, a model is then called identified when meaningful estimates for these parameters can be obtained. If that is not the case, the model is called underidentified. In an underidentified model, different sets of parameter values agree equally well with the statistical evidence. Hence, preference for one set of parameter values over another is arbitrary. Scientific conclusions drawn on the basis of such arbitrariness are in the best case void and in the worst case dangerous.
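As a minimal numerical illustration of underidentification (not taken from the chapter; the model and all names below are hypothetical), consider data whose distribution depends on two parameters only through their sum. Any two parameter points with the same sum are observationally equivalent: they yield exactly the same likelihood, so the data cannot prefer one over the other.

```python
import numpy as np

# Hypothetical underidentified model: y ~ N(mu1 + mu2, 1), so only the
# sum mu1 + mu2 enters the distribution of the data.
rng = np.random.default_rng(0)
y = rng.normal(loc=3.0, scale=1.0, size=1000)  # generated with mu1 + mu2 = 3

def log_likelihood(mu1, mu2, y):
    # Gaussian log-likelihood with mean mu1 + mu2 and unit variance.
    m = mu1 + mu2
    return -0.5 * np.sum((y - m) ** 2) - 0.5 * len(y) * np.log(2 * np.pi)

# Two different parameter points with the same sum are observationally
# equivalent: the likelihood is identical at both.
ll_a = log_likelihood(1.0, 2.0, y)
ll_b = log_likelihood(-4.0, 7.0, y)
print(np.isclose(ll_a, ll_b))
```

Here the individual parameters are underidentified, while the function mu1 + mu2 of them is identified; the distinction between identification of the whole parameter vector and of functions of it returns in Section 7.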

So assessing the state of identification of a model is crucial. In this chapter we present a self-contained treatment of identification in parametric models. Some of the results can also be found in e.g. Fisher (1966), Rothenberg (1971), Bowden (1973), Richmond (1974), and Hsiao (1983, 1987). The pioneering work in the field is due to Haavelmo (1943), which contained the first identification theory for stochastic models to be developed in econometrics; see Aldrich (1994) for an extensive discussion.

The set-up of the chapter is as follows. In Section 2 we introduce the basic concepts of observational equivalence of two parameter points, leading to the definitions of local and global identification. The motivating connection between the notions of identification on the one hand and the existence of a consistent estimator on the other hand is discussed. In Section 3 an important theorem is presented that can be employed to assess the identification of a particular model. It provides the link between identification and the rank of the information matrix. A further step towards practical usefulness is taken in Section 4, where the information matrix criterion is elaborated and an identification criterion is presented in terms of the rank of a Jacobian matrix. In Section 5 the role played by additional restrictions is considered.

All criteria presented have the practical drawback that they involve the rank evaluation of a matrix whose elements are functions of the parameters. The relevant rank is the rank for the true values of the parameters. These, however, are obviously unknown. Section 6 shows that this is fortunately not a matter of great concern due to considerations of rank constancy.

Up till then, the discussion involves the identification of the whole parameter vector. Now it may happen that the latter is not identified but some individual elements are. How to recognize such a situation is investigated in Section 7. The classical econometric context in which the identification issue figures predominantly is the simultaneous equations model. This issue has become a standard feature of almost every econometric textbook. See, e.g., Pesaran (1987) for a brief overview. In Section 8 we give the relevant theory for the classical simultaneous equations model. Section 9 concludes.

As the title shows, the chapter is restricted to identification in parametric models.1 It is moreover limited in a number of other respects. It essentially deals with the identification of "traditional" models, i.e. models for observations that are independently identically distributed (iid). Hence, dynamic models, with their different and often considerably more complicated identification properties, are not discussed. See, e.g., Deistler and Seifert (1978), Hsiao (1983, 1997), Hannan and Deistler (1988), and Johansen (1995). Also, the models to be considered here are linear in the variables. For a discussion of nonlinear models, which in general have a more favorable identification status, see, e.g., McManus (1992).

We consider identification based on sample information and on exact restrictions on the parameters that may be assumed to hold. We do not pay attention to a Bayesian approach where non-exact restrictions on the parameters in the form of prior distributions are considered. For this approach, see, e.g., Zellner (1971), Drèze (1975), Kadane (1975), Leamer (1978), and Poirier (1998).

As to notation, we employ the following conventions. A superscript 0, as in $p^0$, indicates the "true" value of a parameter, i.e. its value in the data generating process. When no confusion is possible, however, we may omit this superscript. We use the semicolon as the vertical delimiter when stacking subvectors or submatrices, with the comma as the horizontal delimiter:

$$(A_1; A_2) \equiv (A_1', A_2')'$$

We will use this notation in particular for $a_1$ and $a_2$ being vectors. Covariance matrices are indicated by Σ. When it has a single subscript, Σ is the variance-covariance matrix of the vector in the subscript. When it has a double subscript, Σ is the matrix of covariances between two random vectors, as indicated by the subscripts.
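The stacking convention above can be checked mechanically; a minimal sketch in NumPy terms (the matrices are arbitrary examples):

```python
import numpy as np

# (A1; A2) stacks vertically, (A1, A2) concatenates horizontally, and
# (A1; A2) equals the transpose of (A1', A2'), i.e. (A1', A2')'.
A1 = np.arange(6).reshape(2, 3)
A2 = np.arange(6, 12).reshape(2, 3)

vertical = np.vstack([A1, A2])    # (A1; A2), shape (4, 3)
horizontal = np.hstack([A1, A2])  # (A1, A2), shape (2, 6)

# Verify (A1; A2) == (A1', A2')': put the transposes side by side,
# then transpose the result.
assert np.array_equal(vertical, np.hstack([A1.T, A2.T]).T)
```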
