Elements of Econometrics, by Jan Kmenta


This classic text has proven its worth in university classrooms and as a tool kit in research—selling over 40,000 copies in the United States and abroad in its first edition alone. Users have included undergraduate and graduate students of economics and business, and students and researchers in political science, sociology, and other fields where regression models and their extensions are relevant. The book has also served as a handy reference in the 'real world' for people who need a clear and accurate explanation of techniques that are used in empirical research. Throughout the book the emphasis is on simplification whenever possible, assuming the readers know college algebra and basic calculus. Jan Kmenta explains all methods within the simplest framework, and generalizations are presented as logical extensions of simple cases.

And while a relatively high degree of rigor is preserved, every conflict between rigor and clarity is resolved in favor of the latter. Apart from its clear exposition, the book's strength lies in emphasizing the basic ideas rather than just presenting formulas to learn and rules to apply. The book consists of two parts, which could be considered jointly or separately. Part one covers the basic elements of the theory of statistics and provides readers with a good understanding of the process of scientific generalization from incomplete information. Part two contains a thorough exposition of all basic econometric methods and includes some of the more recent developments in several areas.


Some of the questions at the end of the chapters have been taken from examinations at several U.S. universities, and from P. Phillips and M. Wickens, Exercises in Econometrics (Cambridge, Massachusetts: Ballinger Publishing Co., 1978).

As a textbook, Elements of Econometrics is intended for upper-level undergraduate and master's degree courses and may usefully serve as a supplement for traditional Ph.D. courses in econometrics. Researchers in the social sciences will find it an invaluable reference tool. NOTE: The solutions manual (Paper ISBN: 978-0-472-08476-0) is available for teachers who adopt the text for coursework. Please email for more information.

Econometrics is concerned with the application of statistical methods to economic data. Economists often apply statistical methods to data in order to quantify or test their theories or to make forecasts. However, traditional statistical methods are not always appropriate for application to economic data, in the sense that the assumptions underlying these methods may fail to be satisfied. Basically, this is so because much of traditional statistics has been developed with an eye toward application in the natural sciences, where data are generated by experimentation. In economics, data are virtually always nonexperimental. (This is, of course, also the case in the other social sciences; not surprisingly, there is substantial overlap in their statistical methodologies.) Furthermore, the nature of the economist's view of the world is such that the mechanism viewed as generating the data creates some statistical problems which are distinctly "econometric," and whose solution constitutes a large portion of econometric theory. Of course, when the assumptions of the general linear model are not satisfied, the least squares estimator does not have such nice properties.

Accordingly, for each of the assumptions above, it is reasonable to ask what damage is done by its violation, and what cure (if any) exists for this damage. This line of inquiry is by no means peculiar to econometrics.

Nevertheless, the consequences of violations of the assumptions of the general linear model (and cures thereof) do receive considerable attention in all econometrics texts and in current econometric research. First, consider the assumption that the regressors are linearly independent. Its violation is a condition called multicollinearity, under which the regression coefficients are not estimable. The term "multicollinearity" is also applied to the case in which this assumption "almost" fails, due to one of the regressors being highly (although not perfectly) correlated with a linear combination of the other regressors. In this case, the coefficients are estimable, but only imprecisely. The "solution" that is most commonly advanced is to attempt to reduce mean squared error by shrinking the least squares estimator toward zero, through the use of ridge or Stein-type estimators. Good surveys (by econometricians) include Vinod and Judge and Bock.
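An illustrative sketch of this shrinkage idea (the simulated data and the penalty value are assumptions for illustration, not from the text): with two nearly collinear regressors, ordinary least squares estimates the coefficients only imprecisely, while a ridge penalty pulls the estimates toward zero.

```python
import numpy as np

# Two nearly collinear regressors: x2 is almost a copy of x1.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)
X = np.column_stack([x1, x2])

# Ordinary least squares: solve (X'X) b = X'y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge: solve (X'X + kI) b = X'y, shrinking the estimates toward zero
k = 1.0
beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)

# The ridge estimate always has a smaller norm than the OLS estimate;
# under collinearity the OLS components are individually very noisy.
print(np.linalg.norm(beta_ols), np.linalg.norm(beta_ridge))
```

Note that the well-identified quantity (here the sum of the two coefficients, close to 2) is estimated precisely by both methods; it is the individual coefficients that collinearity leaves imprecise.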

Specification error is a serious problem because it potentially invalidates all the results of a regression; it causes biased and inconsistent estimators and invalid tests of hypotheses. There is no cure except to make sure that one's model is (more or less) correctly specified. On the other hand, there are ways to test the hypothesis of correct specification. Besides such heuristic (but useful) methods as looking for patterns in the least-squares residuals, a number of more formal specification error tests have been developed.
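One such formal test is Ramsey's RESET. A minimal sketch (the simulated data are an assumption for illustration): fit the candidate model, then test whether powers of the fitted values add explanatory power; a large F statistic flags misspecification.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = x + x**2 + rng.normal(size=n)        # true model is quadratic

def ols_rss(X, y):
    """OLS fit of y on X; return residual sum of squares and fitted values."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid, X @ beta

X0 = np.column_stack([np.ones(n), x])    # (mis)specified linear model
rss0, yhat = ols_rss(X0, y)

# Augment with powers of the fitted values and refit
X1 = np.column_stack([X0, yhat**2, yhat**3])
rss1, _ = ols_rss(X1, y)

q = 2                                    # number of added regressors
F = ((rss0 - rss1) / q) / (rss1 / (n - X1.shape[1]))
print(F)                                 # large F suggests misspecification
```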

Good sources for these include Ramsey and Hausman. We now return to the assumption that the regressors are fixed, nonrandom variables. This assumption will often be appropriate in the analysis of experimental data, since the explanatory variables will generally represent conditions of the experiment that were fixed by the experimenter. However, it is generally an unreasonable assumption when one is dealing with nonexperimental data. Thus, it is necessary to consider models in which the randomness of the regressors is explicitly recognized. As an example, suppose that one has cross-sectional data on individuals and is trying to explain income as a function of the individual's age, education, sex, and other demographic variables.

Clearly, although individuals are endowed at birth with a given birth date and sex (and even the latter is not as permanently fixed as it used to be!), they are not so endowed at birth with either education or income. Both are subject to random influences over the course of the individual's lifetime. In this sense, education is no more “fixed” than income is.

Given a set of regressors, at least some of which are random, it should not be surprising that the properties of the least-squares estimator depend on the relationship between these regressors and the disturbances. To consider the simplest case first, suppose that the regressors and disturbances are independent. In such a case, one is justified in treating the regressors as if they were non-random, in the sense that the least-squares estimates remain unbiased and consistent, and the usual tests remain valid. As a result, the assumption that the regressors are independent of the disturbance is the random-regressor case counterpart to the assumption that the regressors are nonrandom. Philosophically, the assumption that the regressors are independent of the disturbances is tied to the notions of exogeneity and unidirectional causality.
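A small simulation can make this concrete (the data-generating process is an assumption for illustration): when the regressor is random but drawn independently of the disturbance, the least-squares slope is unbiased, just as in the fixed-regressor case.

```python
import numpy as np

rng = np.random.default_rng(2)
true_beta = 2.0
estimates = []
for _ in range(500):
    x = rng.normal(size=50)              # random (stochastic) regressor
    u = rng.normal(size=50)              # disturbance, independent of x
    y = true_beta * x + u
    estimates.append((x @ y) / (x @ x))  # OLS slope (no intercept)

# Averaged over many samples, the estimate is centered on the true value
print(np.mean(estimates))
```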

Clearly, given the nature of a regression equation, any random effect on a regressor must cause an effect on the dependent variable. However, the assumption of independence of regressors and disturbances implies that the converse is not true—random effects on the dependent variable, as captured by the disturbance, do not affect the regressors. In other words, the assumption that the disturbances and regressors are independent is roughly equivalent to the notion that the regressors cause the dependent variable, but not vice versa. Attempts have been made to make this statement more precise, but they have not been entirely successful because it is hard to get agreement on a definition of causality. For one fairly rigorous such attempt, see Granger and Sims. For present purposes, it is sufficient simply to use the word exogenous for regressors that are independent of the disturbances.

Exogenous variables are determined apart from the model under consideration. To carry through with the previous example, it may be plausible to assume that education is exogenous in an earnings function. This assumes that random effects on one's earnings do not affect one's educational level. Of course, to argue the other way, it is conceivable that earnings do affect education.

This could happen, for example, if unexpectedly high earnings increased one's ability to afford higher education. Models allowing for this type of feedback are of considerable importance and will be discussed in the next section. A second case worth considering is one in which any observation on the regressors is independent of the corresponding observation on the disturbance, although it may not be independent of all observations on the disturbance. The typical example of this occurs in a time-series context, when one or more of the regressors is a lagged value of the dependent variable. In such a case, the desirable large-sample properties of least squares (consistency and asymptotic normality) still hold, although its desirable small-sample properties are lost. The proof of this assertion is complicated, since it requires establishing a central limit theorem for a sum of dependent random variables; this problem was first solved by Mann and Wald. Models with lagged dependent variables as regressors are quite common in a time-series context, especially in dealing with aggregate economic data.

Indeed, such models had routinely been fitted by least squares by economic forecasters for some time prior to the Mann and Wald article just cited; this article is a significant one in the history of econometrics because it was one of the first to identify a violation of the usual assumptions of the general linear model (which, furthermore, is distinctly due to the nonexperimental nature of the data) and to consider its consequences. The conjunction of lagged dependent variables and autocorrelated errors causes other substantial difficulties worth mentioning.

For one thing, the usual tests for autocorrelation (e.g., the Durbin–Watson test) are invalidated with lagged dependent variables among the regressors. Asymptotically valid tests are given by Durbin. Another problem, closely related to the testing problem, is that the usual estimates of the serial correlation pattern of the disturbances (e.g., sample autocorrelations of the least-squares residuals) are inconsistent. Consistent estimates of the serial correlation pattern of the disturbances can be obtained from instrumental variables residuals, where reasonable instruments for the lagged dependent variables might be the lagged values of other regressors. Finally, generalized least squares based on a consistent estimate of the disturbance covariance matrix is asymptotically inefficient (relative to maximum likelihood) in this case. An asymptotically efficient two-step estimator has been suggested by Hatanaka, however.

For reasons based in economic theory, economists tend to view the world as determining the values of economic variables by the solution of sets of equations, each of which holds simultaneously.

For example, no one can escape a first course in economics without seeing a graph like that of Figure. It depicts the determination of the price and quantity sold of some commodity by the intersection of supply and demand curves. Quantity supplied depends positively on price, as given by S. The quantity demanded depends negatively on price, as given by D. Price and quantity are determined at the point where quantity supplied equals quantity demanded.

Note that the reduced form gives the solution for each endogenous variable, and that this solution will in general depend on every exogenous variable and on every disturbance. Thus, every endogenous variable will in general be correlated with every disturbance. The implication of this is that least squares will give biased and inconsistent estimates when applied to structural equations that have endogenous variables as right-hand-side variables. This phenomenon is referred to as the simultaneous equations bias of least squares, and was first systematically identified by Haavelmo. Its obvious implication is that least squares is not an appropriate way to estimate structural equations. However, it should be noted explicitly that least squares can be used to estimate reduced-form equations consistently since the explanatory variables in reduced-form equations are exogenous.
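A simulation sketch of simultaneous equations bias (the supply and demand parameters below are illustrative assumptions): because price is determined jointly with quantity, it is correlated with the demand disturbance, and OLS applied to the demand equation misses the true slope badly.

```python
import numpy as np

# Illustrative structure:
#   demand: Q = 10 - 1.0*P + u
#   supply: Q =  2 + 1.0*P + 0.5*W + v   (W = exogenous weather shifter)
rng = np.random.default_rng(5)
n = 5000
W = rng.normal(size=n)
u = rng.normal(size=n)                   # demand disturbance
v = rng.normal(size=n)                   # supply disturbance

# Reduced form: solve the two equations for P, then get Q from demand
P = (10 - 2 - 0.5 * W + u - v) / 2.0
Q = 10 - 1.0 * P + u

# OLS of Q on (1, P) for the demand equation
X = np.column_stack([np.ones(n), P])
beta = np.linalg.lstsq(X, Q, rcond=None)[0]
print(beta[1])                           # far from the true slope of -1.0
```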

Before turning to the problem of estimation of structural parameters, it should be noted that there is a problem of identification of the structural parameters. The reduced-form parameters are always identified, but the structural parameters are identified if and only if it is possible to solve for them uniquely from the reduced-form parameters.

In general, this is not possible; many different sets of structural parameters would imply the same reduced-form parameters. (For example, in Figure, many different supply and demand curves could yield the same intersection point.) If there are sufficient a priori restrictions on the structural parameters, they may be identified. Usually, these restrictions take the form of the exclusion of some variables from some equations. For example, the variable W (weather) appears in the supply equation but not in the demand equation, and this suffices to identify the structural parameters of the demand equation. The structural parameters of the supply equation are not identified, however, without further restrictions.
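This exclusion-counting logic is often summarized as the textbook order condition, a necessary (though not sufficient) condition for identification. A minimal sketch, applied to the supply-demand example:

```python
# Order condition (necessary only): an equation is identified only if the
# number of exogenous variables excluded from it is at least the number of
# endogenous variables included in it as regressors.
def order_condition(excluded_exog, included_endog_regressors):
    return excluded_exog >= included_endog_regressors

# Demand equation: W is excluded (1 excluded exogenous variable),
# and price P is the single included endogenous regressor.
print(order_condition(1, 1))   # satisfied

# Supply equation: no exogenous variable is excluded from it.
print(order_condition(0, 1))   # not satisfied
```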

A very complete treatment of the identification of structural parameters can be found in Fisher, which also treats identification under other kinds of theoretical restrictions than exclusions of variables from particular equations.

The oldest, and at first glance simplest, method of estimating structural parameters consistently is indirect least squares. The procedure is first to estimate the reduced form by least squares, and then to solve for estimates of the structural parameters in terms of the estimated reduced-form parameters.

Such a solution should be possible if the structural parameters are identified. The consistency of the indirect least-squares estimates follows directly from the consistency of the least-squares estimates of the reduced-form parameters.

As an example, consider the supply and demand model above. The supply curve is not identified and cannot be estimated consistently by any method. However, the demand curve is identified, and we would like to estimate its parameters.

If we estimate the reduced form by least squares, we get consistent estimates of the reduced-form parameters, from which we can solve for consistent estimates of the structural parameters in the demand equation. For example, the parameter b can be consistently estimated by the ratio of the estimated coefficient of W in the reduced-form equation for quantity to the estimated coefficient of W in the reduced-form equation for price. The simplest procedure that estimates structural parameters consistently and also handles the overidentified case reasonably is two-stage least squares. If the equation being estimated is exactly identified, then two-stage least squares and indirect least squares are identical.
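The ratio construction just described can be sketched as follows (the model parameters are illustrative assumptions, with a true demand slope of -1):

```python
import numpy as np

# demand: Q = 10 - 1.0*P + u;  supply: Q = 2 + P + 0.5*W + v.
# W appears only in the supply equation, so the demand slope equals the
# ratio of the reduced-form coefficients of W in the Q and P equations.
rng = np.random.default_rng(6)
n = 20000
W = rng.normal(size=n)
u = rng.normal(size=n)
v = rng.normal(size=n)
P = (8 - 0.5 * W + u - v) / 2.0          # reduced form for price
Q = 10 - P + u                           # quantity from the demand equation

X = np.column_stack([np.ones(n), W])
pi_P = np.linalg.lstsq(X, P, rcond=None)[0][1]   # coeff of W in P equation
pi_Q = np.linalg.lstsq(X, Q, rcond=None)[0][1]   # coeff of W in Q equation
b_ils = pi_Q / pi_P                      # indirect least-squares estimate
print(b_ils)                             # consistent for the true slope -1
```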

If the equation is overidentified, then two-stage least squares is more efficient than indirect least squares. (It can also be expressed as a weighted average of the possible indirect least-squares solutions.) Numerous other estimators exist that are asymptotically equivalent to two-stage least squares, but two-stage least squares is the most widely used because of its simplicity. It is still possible to find more efficient estimates than the two-stage least-squares estimates if (as seems reasonable) the disturbances in the different equations are correlated. These more efficient techniques estimate the parameters of all (identified) equations jointly and are thus somewhat burdensome computationally. One such technique is three-stage least squares, which is beyond the scope of this survey but can be found in any econometric text. Another technique that is conceptually straightforward is maximum likelihood (sometimes called "full information" maximum likelihood), which involves maximizing the likelihood function of the system numerically with respect to all the structural parameters, usually by some iterative procedure. This method is feasible only for fairly small systems.
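A minimal sketch of the two stages, using the same illustrative supply-demand setup as above (parameters are assumptions; W instruments for the endogenous price):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000
W = rng.normal(size=n)
u = rng.normal(size=n)
v = rng.normal(size=n)
P = (8 - 0.5 * W + u - v) / 2.0          # reduced form for price
Q = 10 - P + u                           # demand: true slope is -1

# Stage 1: regress the endogenous regressor P on the instruments (1, W)
Z = np.column_stack([np.ones(n), W])
P_hat = Z @ np.linalg.lstsq(Z, P, rcond=None)[0]

# Stage 2: regress Q on (1, P_hat)
X2 = np.column_stack([np.ones(n), P_hat])
beta_2sls = np.linalg.lstsq(X2, Q, rcond=None)[0]
print(beta_2sls[1])                      # close to the true slope -1
```

Because the demand equation here is exactly identified, this numerically reproduces the indirect least-squares estimate.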

Finally, it should be admitted that the notion of a simultaneous set of structural equations, although by now deeply rooted in the intuition of most economists, is not accepted by all. There is a long-standing dispute over whether the world is, or could be, simultaneous, or whether things merely seem that way (e.g., because the time lag in people's actions is small relative to the period of observation of the data). This dispute is rather philosophical, and has really never been resolved; however, most practicing econometricians appear to have revealed a preference for the use of simultaneous models. Another objection to the structural systems discussed above is that they rely on theoretical restrictions (exclusions of certain variables from certain equations) for identification. Some argue that one rarely has a strong theoretical basis for such restrictions, and that identification may therefore be illusory.

The alternative is some sort of "unrestricted" model, usually based on time-series methods. For an example, see Sargent and Sims. The comments by Klein are also of interest, since they give the traditional defense of structural models, which is really to argue the strength and relevance of economic theory.

One notable recent development is work on unobservable variables. This work is tied to earlier statistical work on measurement error, since for most unobservables there exist one or more observable measures, of varying accuracy. (For example, "intelligence" is unobservable, but it has various observable measures, such as various test scores.) Now, in a single equation, there is not much that can be done about measurement error, since measurement error on an explanatory variable makes the model underidentified.

However, it turns out that in structural models, the overidentification due to exclusions of variables can be used to compensate for the underidentification due to measurement error. In this way, certain classes of simultaneous models with measurement error can be identified and estimated consistently. Suppose the unobservable x* is related to a set of observables y_1, …, y_K by

y_i = β_i x* + ε_i,  i = 1, …, K.  (11)

The ys are called indicators. The variable y_i may be a measure of x*, in which case β_i = 1, or it may be some variable that x* affects in some other way. [There may be other exogenous variables as regressors in equation (11).]

With K > 1, we have multiple indicators; hence, half of the name used above. Under the assumption that the εs are independent, the multiple-indicators model (without multiple causes) is identified for K ≥ 3. To add the notion of multiple causes, suppose that we add the specification that x* is itself determined by a set of observable exogenous causes, plus a disturbance.

However, in the last 10 years or so, there has been a spectacular rise in the use of time-series methodology in econometrics. To some extent, this is a reflection of the influence of the work of Box and Jenkins. As their ARIMA models were applied to economic data, a striking thing occurred.

It was quickly discovered that very simple, univariate ARIMA models provided forecasts of economic time series, such as gross national product, that were about as good as those provided by elaborate structural models. This was a bit of a blow to forecasters who used large models, and it illustrated at the least that they might wish to pay more attention to the time-series aspects of their models. On the other hand, the model builders have argued (somewhat convincingly, in my view) that at least with a structural model the source of forecasting errors is more easily identified, so that it is easier to learn from one's mistakes and, hopefully, avoid them in the future.

(With ARIMA models, forecast errors are just random events, which is not very informative.) Also, ARIMA models are sometimes criticized for being “mechanical” and for not making use of economic theory. This criticism assumes that economic theory is worth using, of course, and the relatively good performance of ARIMA models may bring this assumption into some question. Since both structural models and time-series methods appear useful, it is reasonable to try to combine them. This is the aim of recent work originating with Zellner and Palm. Suppose that one sets up a structural model and also identifies ARIMA processes for the exogenous variables. These, plus the form of the structural equations, imply very specific ARIMA processes for the endogenous variables. These implied ARIMA processes can be compared to the ARIMA processes actually found by analyzing the endogenous variables separately.

Such a comparison can be viewed as a test of the structural specification. From this point of view, one reason why structural models do not forecast better may be that the structure and the time-series properties of the data are not compatible.
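As a small illustration of the univariate time-series approach discussed above (an AR(1) model is used here as an assumed, minimal stand-in for a full ARIMA specification): fit the autoregression by least squares and forecast one step ahead.

```python
import numpy as np

# Simulate an AR(1) series: y_t = 1.0 + 0.7*y_{t-1} + e_t
rng = np.random.default_rng(8)
T = 400
y = np.zeros(T)
for t in range(1, T):
    y[t] = 1.0 + 0.7 * y[t - 1] + rng.normal()

# Fit the AR(1) by OLS: regress y_t on (1, y_{t-1})
X = np.column_stack([np.ones(T - 1), y[:-1]])
c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]

forecast = c + phi * y[-1]               # one-step-ahead forecast
print(phi, forecast)
```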

Things are slightly more complicated when the variable to be analyzed is polytomous (has three or more possible values). Here the logit and probit specifications diverge in a fundamental way.

The logit model assumes a purely qualitative variable; that is, it is unordered in the sense that there is no numerical comparison whatever between the values of the dependent variable. An equation such as (10) is specified for each of the K − 1 distinct comparisons possible, where K is the number of possible values of the dependent variable.

Probabilities such as those in (11) can be expressed using nothing more complicated than exponentiation, so estimation is fairly easy. Good surveys, from rather different points of view, are McFadden and Nerlove and Press. The most natural polytomous probit specification assumes an ordered response. For example, if we know only that individuals are poor, middle class, or rich, we do not know how to assign numbers to these classes, but we do know in what order the numbers would have to be. The specification is basically the same as (12), but (13) is replaced by an equation that splits the range of y* into K possible subsets, with K − 1 dividing points to be estimated.
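A sketch of the ordered specification just described (the index value and cutpoints below are illustrative assumptions): the latent y* falls into one of K classes at K − 1 cutpoints, and each category probability is a difference of normal CDFs.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordered_probit_probs(xb, cuts):
    """P(class k) for latent index xb and sorted cutpoints cuts (K-1 of them)."""
    bounds = [-math.inf] + list(cuts) + [math.inf]
    return [norm_cdf(b - xb) - norm_cdf(a - xb)
            for a, b in zip(bounds[:-1], bounds[1:])]

# Three ordered classes (e.g., poor / middle class / rich), two cutpoints
probs = ordered_probit_probs(xb=0.3, cuts=[-0.5, 0.8])
print(probs)                              # three probabilities summing to one
```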

Probabilities are given by a univariate normal integral, so estimation is again not too difficult. See Amemiya for more details.

The development of simultaneous equation models to use in forecasting economic time series was the historical genesis of econometrics as a distinct field, and it remains an important part of econometrics today. However, the current trend seems to be in the direction of a less distinctively "econometric" methodology in economics. Partly, this is the result of the increasing influence of so-called time-series methods in problems of forecasting and control, and partly it is the result of the increasing availability of good cross-sectional data, the analysis of which has created bridges to the methodologies of the other social sciences.

This broadening of the field will no doubt continue.