Goodness-of-fit procedure



A goodness-of-fit test for parametric models based on dependently truncated data

Takeshi Emura

E-mail: emura@stat.ncu.edu.tw

  Graduate Institute of Statistics, National Central University,

Jhongda Road, Taoyuan, Taiwan

and

Yoshihiko Konno[1]

E-mail: konno@fc.jwu.ac.jp

Department of Mathematical and Physical Sciences, Japan Women’s University,

2-8-1 Mejirodai, Bunkyo-ku, Tokyo 112-8681 Japan

ABSTRACT

Suppose that one can observe bivariate random variables [pic] only when [pic] holds. Such data are called left-truncated data and are found in many fields, such as experimental education and epidemiology. Recently, a method of fitting a parametric model to [pic] has been considered, which can easily incorporate the dependence structure between the two variables. A primary concern for the parametric analysis is the goodness-of-fit of the imposed parametric forms. Due to the complexity of dependent truncation models, traditional goodness-of-fit procedures, such as Kolmogorov-Smirnov type tests based on the Bootstrap approximation to the null distribution, may not be computationally feasible. In this article, we develop a computationally attractive and reliable algorithm for the goodness-of-fit test based on the asymptotic linear expression. By applying the multiplier central limit theorem to the asymptotic linear expression, we obtain an asymptotically valid goodness-of-fit test. Monte Carlo simulations show that the proposed test has correct type I error rates and desirable empirical power. It is also shown that the method significantly reduces the computational time compared with the commonly used parametric Bootstrap method. An analysis of law school data is provided for illustration. R codes for implementing the proposed procedure are available in the supplementary material.

Key words Central limit theorem • Empirical process • Truncation • Maximum likelihood • Parametric Bootstrap • Shrinkage estimator

1. Introduction

Truncated data arise when part of the sample is entirely excluded from observation. For instance, in the study of aptitude test scores in experimental education, only those individuals whose test scores are above (or below) a threshold may appear in the sample (Schiel, 1998; Schiel and Harmston, 2000). Many different types of truncation are possible depending on the truncation criterion. A classical parametric method for analyzing truncated data is based on fixed truncation. That is, a variable [pic] of interest can be included in the sample if it exceeds a fixed value [pic], where [pic] is known. Parametric estimation for the normal distribution of [pic] has been given by Cohen (1991). Other examples of fixed truncation include the zero-truncated Poisson model in which [pic] is a Poisson random variable and [pic].

A more general truncation scheme is the so-called “left-truncation” in which the sample is observed only when a variable [pic] exceeds another random variable [pic]. Left-truncated data are commonly seen in studies of biomedicine, epidemiology and astronomy (Klein and Moeschberger, 2003). Construction of nonparametric estimators for [pic] under left-truncation has been extensively studied (e.g., Woodroofe, 1985; Wang et al., 1986). It is well known that the nonparametric methods rely on the independence assumption between [pic] and [pic]. Accordingly, Tsai (1990), Martin and Betensky (2005), Chen et al. (1996), and Emura and Wang (2010) have presented methods for testing the independence assumption. For positive random variables [pic] and [pic], the semiparametric approaches proposed by Lakhal-Chaieb et al. (2006) and Emura et al. (2011) are alternatives in the absence of the independence assumption, where the association structure between [pic] and [pic] is modeled via an Archimedean copula.

Compared with the nonparametric and semiparametric inferences, there is not much in the literature on the analysis of left-truncated data based on parametric modeling. Although parametric modeling easily incorporates the dependence structure between [pic] and [pic], it involves strong distributional assumptions, and the inference procedure may not be robust to departures from these assumptions (Emura and Konno, 2010). Nevertheless, parametric modeling is still useful in many applications where parameters in the model provide useful interpretation or a particular parametric form is supported by subject-matter knowledge. For instance, in the study of aptitude test scores in educational research, researchers may be interested in estimating the mean and standard deviation of the test score [pic] rather than [pic] (Schiel and Harmston, 2000; Emura and Konno, 2009). Hence, parameters of the normal distribution usually provide useful summary information (see Section 5 for details). For another example, the study of count data in epidemiological research often encounters the zero-modified Poisson model (Dietz and Böhning, 2000) for [pic] (see Example 3 in Appendix A for details). For count data, the main focus is to estimate the intensity parameter of the Poisson distribution rather than [pic]. In the preceding two examples, one needs to specify the parametric forms of [pic]. If goodness-of-fit tests are used appropriately, the robustness concern about the parametric analysis can be addressed.

In this article, we develop a computationally attractive and reliable algorithm for the goodness-of-fit test by utilizing the multiplier central limit theorem. The basic idea behind the proposed approach follows the goodness-of-fit procedure for copula models (Kojadinovic and Yan, 2011; Kojadinovic, et al., 2011), though the technical details and the computational advantages in the present setting are different. The rest of the paper is organized as follows: Section 2 briefly reviews the parametric formulation given in Emura and Konno (2010). Section 3 presents the theory and algorithm of the proposed goodness-of-fit test based on the multiplier central limit theorem. Simulations and data analysis are presented in Sections 4 and 5, respectively. Section 6 concludes this article.

2. Parametric inference for dependently truncated data

In this section, we introduce the parametric approach to dependently truncated data based on Emura and Konno (2010) and derive the asymptotic results for the maximum likelihood estimator (MLE) that are the basis for the subsequent developments.

Let [pic] be a density or probability function of a bivariate random variable [pic], where [pic] is a [pic]-variate vector of parameters and where [pic] is a parameter space. In a truncated sample, a pair [pic] is observed when [pic] holds. For observed data [pic] subject to [pic], the likelihood function has the form

[pic], (1)

where [pic]. Let [pic] be the column vector of partial derivatives (with respect to the components of [pic]) of [pic], i.e., [pic] for [pic], and let [pic] be the maximum likelihood estimator (MLE) that maximizes (1) in [pic]. Emura and Konno (2010) noted that, for computing the MLE, it is crucial that a simple formula for [pic] is available. This also plays a crucial role in the subsequent developments for the proposed goodness-of-fit test procedure. For easy reference, Appendix A lists three examples of parametric forms that permit a tractable form of [pic]. The following lemma is the basis for deriving the asymptotic expression for the goodness-of-fit statistics.

Lemma 1: Suppose that (R1) through (R7) listed in Appendix B hold. Then,

[pic], (2)

where [pic] is the Fisher information matrix and [pic] is the transposed vector of [pic].

3. Goodness-of-fit procedure under truncation

3.1 Asymptotic linear expression of the goodness-of-fit process

Let [pic] be a given parametric family. Also, let [pic] be the underlying (true) density or probability function of a bivariate random variable [pic]. Given the observed data [pic], we wish to test the null hypothesis

[pic] against [pic].

One of the popular classes of goodness-of-fit tests consists of comparing the distance between [pic] and [pic], where [pic] is the indicator function and

[pic]

The Kolmogorov-Smirnov type test is based on

[pic]. (3)

The calculation of [pic] requires numerical integration (or summation) of [pic] at [pic] different points in [pic]. A computationally attractive alternative is the Cramér-von Mises type statistic

[pic]. (4)

This requires exactly [pic] evaluations of [pic]. The null distributions for [pic] and [pic] have not been derived and depend on the true value of [pic].
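As a concrete illustration, the following is a minimal R sketch of the Cramér-von Mises type computation, assuming that statistic (4) sums the squared differences between the empirical CDF of the truncated pairs and the fitted truncated CDF evaluated at the [pic] observed points; model_cdf and model_c are hypothetical placeholders for the model-specific formulas of Section 3.3, and the overall scaling should follow the displayed equation (4).

## Minimal sketch of a Cramer-von Mises type statistic: squared distances
## between the empirical CDF of the truncated pairs and the fitted
## truncated CDF, evaluated at the n observed points.
## 'model_cdf(x, y, theta)' and 'model_c(theta)' are hypothetical
## placeholders for F(x, y; theta) and c(theta) of the chosen model.
cvm_statistic <- function(x, y, theta_hat, model_cdf, model_c) {
  n  <- length(x)
  Fn <- sapply(seq_len(n), function(i) mean(x <= x[i] & y <= y[i]))  # empirical joint CDF
  Ft <- model_cdf(x, y, theta_hat) / model_c(theta_hat)              # fitted truncated CDF
  sum((Fn - Ft)^2)   # rescale as in equation (4) if a factor of n is required
}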

Empirical process techniques are useful for analyzing the goodness-of-fit process [pic] defined on [pic]. Let [pic] be [pic], where [pic] is a collection of bounded functions on [pic] that are continuous from above, equipped with the uniform norm [pic]. Under assumptions (R1), (R3), (R4) and (R8) listed in Appendix B, the map [pic] is shown to be Hadamard differentiable with derivative given by

[pic],

where [pic] and [pic] are the column vectors of partial derivatives (with respect to the components of [pic]) of [pic] and [pic], respectively. Applying the functional delta method under the regularity conditions for Theorem 20.8 of van der Vaart (1998), we have

[pic],

where [pic] is uniform in [pic]. Making the above arguments rigorous, we obtain the following theorem whose proof is given in Appendix C:

Theorem 1: Suppose that (R1) through (R8) listed in Appendix B hold. Under [pic],

[pic], (5)

where [pic].

3.2 Algorithm based on the multiplier central limit theorem

Equation (5) is the basis for developing a resampling scheme based on the multiplier central limit theorem (van der Vaart and Wellner, 1996). Let [pic] be independent random variables with [pic] and [pic]. Consider

[pic]. (6)

Conditional on [pic], the only random quantities in equation (6) are [pic], which will be called the “multiplier”. The conditional distribution of (6) can approximate the asymptotic distribution of [pic].

The random variable [pic] contains [pic], whose analytical expression is usually difficult or impossible to obtain due to truncation. Define the observed Fisher information matrix [pic]. Replacing [pic] and [pic] with [pic] and [pic], respectively, equation (6) is approximated by

[pic],

where [pic]. The following lemma shows that the substitution does not alter the asymptotic behavior; the proof is given in Appendix D.

Lemma 2: Suppose that (R1) through (R8) listed in Appendix B hold. Under [pic],

[pic].

Therefore, the distribution of [pic] in (3) can be approximated by

[pic].

Also, the distribution of [pic] in (4) can be approximated by

[pic],

where [pic] is a matrix whose [pic] element is [pic], [pic] and [pic] for a vector [pic]. Once the matrix [pic] is calculated from the observed data, one can easily generate an approximate version of [pic] by multiplying [pic] by [pic] and then taking the norm. In terms of finite-sample accuracy, the standard normal multiplier [pic] typically performs better than other choices, such as a two-point distribution with [pic] (see Section 1 of the supplementary material).

The testing algorithm is as follows; an R sketch of the steps is given after the algorithm. Modifications of the algorithm for the statistic [pic] in (3) are straightforward.

Algorithm based on the multiplier method:

Step 0: Calculate the statistic [pic] and matrix [pic].

Step 1: Generate [pic].

Step 2: Calculate [pic],

where [pic] and [pic].

Step 3: Reject [pic] at the [pic]% significance level if [pic].
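The following R sketch illustrates Steps 1-3, assuming that Step 0 has already produced the observed statistic (cvm_obs) and the n-by-n matrix (A_hat) of Section 3.2; the exact scaling of the resampled statistic should follow the displayed formula in Step 2.

## Schematic multiplier resampling (Steps 1-3), assuming Step 0 has produced
## the observed Cramer-von Mises value 'cvm_obs' and the n x n matrix 'A_hat'.
multiplier_test <- function(cvm_obs, A_hat, B = 1000, alpha = 0.05) {
  n <- nrow(A_hat)
  cvm_star <- replicate(B, {
    Z <- rnorm(n)                 # Step 1: standard normal multipliers
    sum((A_hat %*% Z)^2) / n      # Step 2: one resampled statistic (scaling as in the paper)
  })
  crit <- quantile(cvm_star, 1 - alpha)          # Step 3: empirical (1 - alpha) quantile
  list(reject = (cvm_obs > crit),
       p_value = mean(cvm_star >= cvm_obs))
}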

The algorithm based on the parametric Bootstrap method (Efron and Tibshirani, 1993) is provided for comparison with the multiplier method.

Algorithm based on the parametric Bootstrap method:

Step 0: Calculate the statistic [pic].

Step 1: Generate [pic] which follows the truncated distribution of [pic], subject to [pic], for [pic].

Step 2: Calculate [pic],

where [pic] and where [pic] and [pic] are the empirical CDF and MLE based on [pic].

Step 3: Reject [pic] at the [pic]% significance level if [pic].

In Step 1 of the multiplier method, it is fairly easy to generate [pic]. On the other hand, Step 1 of the parametric Bootstrap approach is more difficult since data are generated from a given bivariate distribution function subject to truncation. Usually, the following accept-reject algorithm is used: (i) a pair [pic] is generated from the distribution function [pic]; (ii) if [pic], we accept the sample and set [pic]; otherwise we reject [pic] and return to (i). This algorithm can be time-consuming, especially when the acceptance rate [pic] is small.
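A minimal R sketch of this accept-reject step is given below, assuming the truncation criterion is of the form y <= x and that rXY(theta) is a hypothetical user-supplied function drawing one untruncated pair from the model.

## Accept-reject sampler for Step 1 of the parametric Bootstrap: draw pairs
## from the untruncated model and keep only those satisfying the truncation.
## 'rXY(theta)' is a hypothetical function returning one untruncated pair c(x, y).
rtrunc_pairs <- function(n, theta, rXY) {
  out <- matrix(NA_real_, n, 2)
  filled <- 0
  while (filled < n) {
    xy <- rXY(theta)             # (i) draw an untruncated pair
    if (xy[2] <= xy[1]) {        # (ii) accept only if the truncation criterion holds
      filled <- filled + 1
      out[filled, ] <- xy
    }
  }
  out
}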

In Step 2, the multiplier method only needs to multiply [pic] by the standard normal vector. On the other hand, the parametric Bootstrap method needs to calculate [pic] for [pic]. This is very time-consuming since each maximization requires repeated evaluations of the likelihood function.

Besides the computational cost, the parametric Bootstrap method can produce erroneous results when some of [pic], [pic], are local maxima. If [pic] is a local maximum, it is usually not close to [pic], and the Bootstrap version of the Cramér-von Mises statistic [pic] tends to be very large. Although this error depends on the ability of the numerical maximization routine, it is not always easy to locate the global maximum in a large number of resampling steps (usually [pic] times). The multiplier method is free from this error and can be implemented more reliably than the parametric Bootstrap.

Finally, we establish the validity of the method along the lines of Kojadinovic et al. (2011). Let [pic] and [pic] for [pic]. The proof of the following theorem is given in Appendix E.

Theorem 2: Suppose that (R1) through (R8) listed in Appendix B hold. Under [pic],

[pic]

in [pic], where [pic] is the mean-zero Gaussian process whose covariance for [pic] is given as

[pic]

where [pic], and [pic] are independent copies of [pic].

Under [pic], [pic] are no longer approximately independent copies of [pic]. In particular, the goodness-of-fit process [pic] usually converges in probability to [pic] at some [pic], while the multiplier realization [pic] converges weakly to a tight Gaussian process. This yields the consistency of the goodness-of-fit test.

3.3 Computational aspects

To compute the proposed goodness-of-fit statistic based on the multiplier method, one needs to calculate [pic], [pic], [pic], and [pic]. Although these can be calculated for each specific parametric model, the formulas are not always easy to derive. In what follows, we describe how to calculate these quantities for the bivariate normal, bivariate Poisson, and zero-modified Poisson models discussed in Examples 1-3 of Appendix A.

For the bivariate normal, bivariate Poisson, and zero-modified Poisson models, the formulas are, respectively,

[pic],

[pic],

[pic]

where [pic] is the cumulative distribution function for [pic], and [pic]. Since it is not very easy to obtain analytical expressions for [pic] and [pic], we propose to use [pic] and [pic], where [pic] is a small value and [pic]. Computer programs, such as the “numDeriv” package (Gilbert, 2010) in R, are also useful. We conducted a simulation to examine the correctness of the proposed numerical derivative (simulation results are given in the supplementary material). The results show that the proposed numerical derivative with [pic] is virtually identical to both the analytical derivative and the numerical derivative from the numDeriv package. The proposed numerical derivative requires less programming effort than the analytical one.
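The forward-difference approximation described above can be coded in a few lines; in this sketch, model_cdf is the same hypothetical placeholder for the model CDF as before, and h is a small step whose specific value follows the supplementary material. The numDeriv package offers grad() and jacobian() as an alternative.

## Forward-difference approximation to the partial derivatives of
## F(x, y; theta) with respect to each component of theta.
cdf_gradient <- function(x, y, theta, model_cdf, h = 1e-6) {
  base <- model_cdf(x, y, theta)
  sapply(seq_along(theta), function(j) {
    theta_j <- theta
    theta_j[j] <- theta_j[j] + h           # perturb the j-th component
    (model_cdf(x, y, theta_j) - base) / h  # one-sided difference quotient
  })
}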

The formula of [pic] under the bivariate normal model is given in Emura and Konno (2010). Under the bivariate Poisson model, one has [pic] where

[pic],

[pic],

[pic],

where [pic]. Under the zero-modified Poisson model, one has [pic], where

[pic],

[pic].

3.4 Generalization to other estimators for [pic]

Although the proposed goodness-of-fit procedure is developed for the case where [pic] is estimated by the MLE, it is not difficult to modify it for more general estimators. The fundamental requirement is that the estimator is asymptotically linear as in (2). A particularly interesting example for dependently truncated data is the shrinkage “testimator” (Waiker et al., 1984). Suppose that the parameter can be written as [pic], where [pic] is the correlation between [pic] and [pic], and [pic] is the MLE under the assumption of [pic]. Let [pic], and

[pic],

where [pic] is a sequence of positive constants. If [pic] is the cutoff value for the chi-square distribution with one degree of freedom, then [pic] if [pic] is accepted or [pic] if [pic] is rejected by the likelihood ratio test. By definition, [pic] is shrunk to [pic], borrowing strength from the variance of [pic], which is smaller than that of [pic]. Emura and Konno (2010) show by simulations that, under the bivariate normal model, there is a substantial efficiency gain near [pic]. This is closely related to the superefficiency phenomenon due to Hodges (see Lehmann and Casella, 1998). The proof of the following lemma is given in Appendix F.

Lemma 3: Suppose that [pic] is inside the parameter space and that (R1) through (R7) in Appendix B hold. Also, suppose that either one of the following conditions holds: (i) [pic]; (ii) [pic] for all [pic] and [pic]. Then,

[pic].

Remark 1: According to condition (R1) in Appendix B, the parameter space for [pic] should be an open set. This condition ensures that “the true parameter” is located inside the parameter space. When the correlation between [pic] is introduced by the random effect, as mentioned in Example 2 of Appendix A, we have [pic], and the case of [pic] is defined as the limit [pic], which is outside the parameter space. Under such circumstances, the likelihood ratio test for [pic] becomes one-sided and the statistic does not have the asymptotic chi-squared distribution with one degree of freedom. As a result, the statement of Lemma 3 does not hold.

Theorem 3: Suppose that (R1) through (R8) listed in Appendix B hold. Further suppose that either one of the following conditions holds: (i) [pic]; (ii) [pic] for all [pic] and [pic]. Then, equation (5) holds when [pic] is replaced by [pic].

The proof of Theorem 3 is similar to that of Theorem 1 based on the result of Lemma 3. Therefore, the algorithm developed in Section 3.2 is applicable by replacing [pic] with [pic].
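As an illustration of the switching rule that defines the testimator, the following R sketch compares the likelihood ratio statistic for independence with the chi-square(1) cutoff; loglik is a hypothetical log-likelihood function of the full model, and theta_hat and theta_hat0 denote the unrestricted and restricted (independence) MLEs.

## Sketch of the shrinkage "testimator": keep the restricted MLE when the
## likelihood ratio test accepts independence, otherwise use the full MLE.
## 'loglik(theta, data)' is a hypothetical log-likelihood of the full model.
testimator <- function(theta_hat, theta_hat0, loglik, data, alpha = 0.05) {
  lrt <- 2 * (loglik(theta_hat, data) - loglik(theta_hat0, data))  # LR statistic
  c_n <- qchisq(1 - alpha, df = 1)                                 # chi-square(1) cutoff
  if (lrt <= c_n) theta_hat0 else theta_hat                        # shrink toward independence
}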

4. Simulation results

To investigate the performance of the proposed goodness-of-fit test, we conducted extensive simulations using R. All results reported in this section are based on the standard normal multiplier [pic]; the results based on the two-point multiplier are given in Section 2 of the supplementary material.

4.1 Simulations under the null distribution

In the first part, we chose the same design as in Emura and Konno (2010), in which [pic] follows the bivariate normal distribution with mean vector and covariance matrix given respectively by

[pic], [pic],

where [pic] 0.70, 0.35, 0, -0.35, or -0.70. In this design, the population parameters of Japanese test scores (mean = 60.82, SD = 16.81) and English test scores (mean = 62.63, SD = 19.64) are determined from the 2008 records of the National Center Test for University Admissions in Japan. We set the null hypothesis [pic], where [pic] is the family of bivariate normal distributions with [pic].

The mvrnorm routine in the R MASS package (Ripley et al., 2011) is used to generate random samples from the bivariate normal distribution. Truncated data [pic] subject to [pic] represent those samples whose sum of Japanese and English scores is above 120. We set [pic] as the common number of prospective students in any one department of Japanese universities (Emura and Konno, 2010). For the truncated data, we calculate the MLE [pic] (and [pic]) and the Cramér-von Mises type statistic [pic] given by (4). Here, [pic] is obtained by minimizing [pic] using the non-linear minimization routine nlm in R. For each data realization, we compute [pic] under [pic]. Then we record the rejection/acceptance status of the goodness-of-fit test at the [pic]% level. We also record the sample mean and standard deviation (SD) of [pic] to be compared with the sampling distribution of [pic]. The results for 1,000 repetitions are shown in Table 1. In all configurations, the rejection rates are in good agreement with the selected nominal sizes ([pic] = 0.01, 0.05, and 0.10). Also, the sample mean and SD of [pic] are close to their resampling versions based on [pic]. The results provide empirical evidence that the multiplier method for approximating the null distribution works well under the bivariate normal model.
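For reference, the data generation just described can be sketched in a few lines of R; the truncated sample size n below is a hypothetical placeholder for the value stated above, and the accept-reject loop mirrors the criterion “Japanese + English > 120”.

## Sketch of the first simulation design: bivariate normal scores truncated
## by "sum of Japanese and English scores above 120".  The value of n below
## is a hypothetical placeholder.
library(MASS)                                    # for mvrnorm
rho   <- 0.35
mu    <- c(60.82, 62.63)                         # Japanese and English means
Sigma <- matrix(c(16.81^2,             rho * 16.81 * 19.64,
                  rho * 16.81 * 19.64, 19.64^2), 2, 2)
n   <- 500                                       # hypothetical truncated sample size
dat <- matrix(numeric(0), 0, 2)
while (nrow(dat) < n) {
  xy <- mvrnorm(1, mu, Sigma)
  if (sum(xy) > 120) dat <- rbind(dat, xy)       # keep pairs whose total exceeds 120
}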

Insert Table 1

In the second part, we carried out simulations under the bivariate Poisson model and the zero-modified Poisson model. Hence, the null hypothesis is [pic], where [pic] is either the family of bivariate Poisson models or the family of zero-modified Poisson models, with the form of [pic] given in Examples 2 and 3 of Appendix A. For the bivariate Poisson model, we set [pic] (1, 1) or (1, 2), and set [pic] so that the correlation [pic] becomes 0.35 or 0.7. Unlike the bivariate normal model, the bivariate Poisson model only permits positive association between [pic]. For the zero-modified Poisson model, we set [pic] or [pic] and [pic] 0.3 or [pic] 0.7. Simulation results for [pic] are summarized in Table 2. The rejection rates essentially have the correct values ([pic] 0.01, 0.05, and 0.10), and the sample mean and SD of the statistic [pic] are correctly approximated by the multiplier method under both the bivariate Poisson and zero-modified Poisson models. Unlike the bivariate normal model, one cannot apply [pic] under the bivariate Poisson model since independence is defined as the limit [pic] (see Remark 1).

Insert Table 2

In the final part, we compare the computational time of the proposed method and the parametric Bootstrap method under the bivariate normal and bivariate Poisson models. For a fixed dataset, we calculate the required computational time (in seconds) of the two competing methods using the routine proc.time() in R. As shown in Table 3, the required computational time for the multiplier method is much smaller than that for the parametric Bootstrap method for all entries. In particular, the use of the multiplier method under the bivariate Poisson distribution reduces the computational time by a factor of about 1,000.

Insert Table 3

4.2 Power property

To investigate the power of the proposed test, we generated data from the bivariate t-distribution (Lange et al., 1989) while performing the goodness-of-fit test under the null hypothesis of the bivariate normal distribution. The mean and scale matrix of the bivariate t-distribution are chosen to be the same as the mean and covariance matrix of the bivariate normal model in Section 4.1. As the degrees-of-freedom parameter [pic] is related to the discrepancy from the null hypothesis, we draw empirical power curves based on 1,000 repetitions for selected values of [pic]. Figure 1 shows the empirical power curves under [pic] with [pic] and 800. The empirical power increases as [pic] gets larger, and it approaches the nominal sizes ([pic] 0.10 or 0.05) as [pic]. The curves for [pic] 800 dominate those for [pic] 400. The empirical power curves for [pic] 0.35 and 0.00, not shown here, are very similar to those in Figure 1. In general, the value of [pic] does not have much influence on the power properties.

Insert Figure 1

5. Data analysis

The proposed method is illustrated using the law school data available in Efron and Tibshirani (1993). The data consist of the average score of the LSAT (the national law test) and average GPA (grade point average) for [pic][pic] American law schools. We denote a pair of LSAT and GPA by [pic] for [pic]. The average scores of the LSAT and GPA for the 82 schools are 597.55 and 3.13, respectively. The correlation coefficient between the LSAT and GPA is 0.76.

We consider a situation in which one can only observe a vector [pic] whose sum of LSAT and [pic] GPA is above the threshold 600 + [pic] 3.0 = 900. The number of such samples is [pic], and the inclusion rate is therefore [pic] = 0.598. In this sampling design, we observe [pic], subject to [pic], where [pic] and [pic]. Under the (working) independence assumption between LSAT and GPA, the Lynden-Bell nonparametric estimator (Lynden-Bell, 1971) for the mean LSAT score is defined by

[pic],

where

[pic].
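A sketch of this product-limit computation is given below, under the working independence assumption and with the convention that the observed pairs are (v, u) with v <= u, u being the LSAT score; the displayed definition above should be consulted for the exact expression, and ties are handled here by processing the ordered distinct values of u.

## Sketch of the Lynden-Bell product-limit estimate of the mean, for
## left-truncated pairs (v, u) with v <= u (here u = LSAT).
lynden_bell_mean <- function(u, v) {
  xs <- sort(unique(u))
  S  <- 1; Fprev <- 0; mu <- 0
  for (x in xs) {
    d <- sum(u == x)               # number of observations at x
    R <- sum(v <= x & x <= u)      # size of the risk set at x
    S <- S * (1 - d / R)           # product-limit survival estimate
    Fx <- 1 - S
    mu <- mu + x * (Fx - Fprev)    # accumulate the mean from the CDF jumps
    Fprev <- Fx
  }
  mu
}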

The resulting estimate is [pic] 618.63, which is somewhat larger than the average LSAT score of 597.55. This bias may result from the incorrect independence assumption between LSAT and GPA.

We fit the bivariate normal distribution model to the truncated data. The p-values of the goodness-of-fit test for bivariate normality are 0.884 for the multiplier method (required computational time = 1.25 seconds) and 0.645 for the parametric Bootstrap method (required computational time = 222.46 seconds). Both methods reach the same conclusion that there is no evidence to reject the bivariate normality assumption.

We proceed to the parametric analysis under bivariate normality. The estimated population mean of LSAT is [pic] 591.32. This is much closer to the average LSAT score of 597.55 than the estimate based on Lynden-Bell’s method. The estimated inclusion probability [pic] = 0.545 is also close to the inclusion rate [pic] = 0.598. Our study shows that the parametric analysis of dependently truncated data produces reliable results when the goodness-of-fit test favors the fitted model. Note that all the analysis results are easily reproduced by the R codes in the supplementary material.

A reviewer pointed out the difference between the p-values of the two methods. This is explained by the right skewness of the resampling distribution for the multiplier method compared with that for the parametric Bootstrap (Figure 2). The difference between the two resampling distributions may be attributed to a slight departure of the data-generating mechanism for the LSAT and GPA values from the bivariate normal model. In particular, a few ties in the LSAT and GPA values indicate that the data do not exactly follow the bivariate normal model. This implies that the two resampling procedures can yield different powers in real applications.

Insert Figure 2

6. Conclusion and discussion

The main objective of the present paper is to develop a new goodness-of-fit procedure for parametric models based on dependently truncated data. The method utilizes the multiplier central limit theorem and is less computationally demanding than the parametric Bootstrap procedure, since it avoids the complicated resampling scheme required under dependent truncation. Note that the method is easily implemented by the R codes available in the supplementary material.

Although the multiplier method has already been applied in many different contexts (Spiekerman and Lin, 1998; Jin et al., 2003; Bücher and Dette, 2010; Kojadinovic and Yan, 2011), the computational advantage for dependent truncation models is remarkable. The idea can be applied to a variety of problems in which the Bootstrap resampling involves truncation. In particular, generalizations of the proposed method to even more complicated truncation schemes, such as double truncation and multivariate truncation, deserve further investigation.

Acknowledgments

We would like to thank the associate editor and the two reviewers for helpful comments. The research was financially supported by the grant NSC100-2118-M008-006 (the first author) and by the Japan Society for the Promotion of Science through Grants-in-Aid for Scientific Research (C)(No. 21500283) (the second author).

Appendix A: Examples of parametric models

Example 1: Bivariate t- and normal distributions.

Let [pic] be a positive number and

[pic], [pic],

where the positive definite matrix [pic] is called the scale matrix. The density of the bivariate t-distribution is given by

[pic] ,

where [pic] is the gamma function,

[pic],

and [pic] are unknown parameters. Using the fact that [pic] is also a t-distribution (Fang, et al., 1990), one can derive

[pic],

where [pic] is the cumulative distribution function for the standard t-distribution with [pic] degrees of freedom. In the special case of [pic], [pic] is a normal distribution, and the inclusion probability is [pic]. Tables for [pic] and [pic] are available in well-known software packages, such as the pt and pnorm routines in R. □
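For instance, assuming the truncation criterion is of the form y <= x, the inclusion probability under the bivariate normal special case reduces to a single pnorm evaluation, because the difference of the two components is again normal; the sketch below uses mu_x, mu_y, s_x, s_y, rho for the means, standard deviations, and correlation.

## Inclusion probability under the bivariate normal model, assuming the
## truncation criterion is y <= x: X - Y is normal with mean mu_x - mu_y and
## variance s_x^2 + s_y^2 - 2*rho*s_x*s_y, so c(theta) = P(Y <= X) is one
## pnorm evaluation.
c_normal <- function(mu_x, mu_y, s_x, s_y, rho) {
  sd_diff <- sqrt(s_x^2 + s_y^2 - 2 * rho * s_x * s_y)
  pnorm((mu_x - mu_y) / sd_diff)
}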

Example 2: Random effect model

Suppose that [pic] and [pic], where [pic], [pic] and [pic] are independent random variables. In this model, the correlation between [pic] and [pic] can be explained by the common latent variable [pic], called a random effect. The model is particularly convenient in the present context since [pic] has the simple form

[pic],

where [pic] and [pic]. Hence [pic] is free of the distribution of [pic]. For bivariate discrete distributions, the above integration is replaced by summation. Since the integrand [pic] can be easily calculated by software packages, computation of [pic] only requires one-dimensional numerical integration or summation. For instance, if [pic], [pic] and [pic] are independent Poisson random variables with means [pic], this leads to the classical bivariate Poisson model (Holgate, 1964). The density function and inclusion probability are given respectively by

[pic],

[pic].

[pic] can be calculated with standard routines, for example, “ppois” in R. □
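As a sketch of this one-dimensional summation, assuming the truncation criterion is of the form y <= x: with X = X1 + X0 and Y = X2 + X0, the common term X0 cancels from the inequality, so the inclusion probability is P(X2 <= X1), computed with dpois and ppois and with the infinite sum truncated at a negligible tail.

## Inclusion probability under the bivariate Poisson (random effect) model,
## assuming the truncation criterion is y <= x.  Since X = X1 + X0 and
## Y = X2 + X0 share the common term X0, c = P(X2 <= X1).
c_bpois <- function(lambda1, lambda2, kmax = 1000) {
  k <- 0:kmax
  sum(dpois(k, lambda1) * ppois(k, lambda2))   # sum over values of X1
}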

Example 3: Zero-modified Poisson model

Suppose that [pic] is a binary variable taking [pic] with probability [pic] and [pic] with probability [pic]. Also suppose that [pic] follows a Poisson distribution with parameter [pic] and is independent of [pic]. The joint density function is

[pic],

where [pic], and the inclusion probability is

[pic].

The observed distribution of [pic] is

[pic]

Re-parameterizing by [pic], the above distribution is the zero-modified Poisson distribution with zero-modification parameter [pic] (Dietz and Böhning, 2000). The special case [pic] corresponds to the zero-truncated Poisson distribution. □
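Assuming the truncation criterion is y <= x, with the binary truncation variable and the Poisson count of this example, the inclusion probability has a closed form: the pair is lost only when the binary variable equals one and the count equals zero. A one-line sketch:

## Inclusion probability for Example 3, assuming the truncation criterion is
## y <= x with P(Y = 1) = p and X ~ Poisson(lambda): the pair is excluded only
## when Y = 1 and X = 0, so c = 1 - p * exp(-lambda).
c_zmpois <- function(lambda, p) {
  1 - p * exp(-lambda)
}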

Appendix B: Regularity conditions

By modifying the standard regularity conditions (e.g., p. 257 of Knight (2000)) to accommodate truncated data, the following conditions are sufficient for verifying the asymptotic results:

(R1) The parameter space [pic] is an open subset of [pic].

(R2) The set [pic] does not depend on [pic].

(R3) For some [pic] and for every [pic], it holds that [pic].

(R4) [pic] is three times continuously differentiable (all of the third partial derivatives exist and they are continuous) for every [pic]. Also, [pic] is three times continuously differentiable.

(R5) [pic].

(R6) [pic] exists and is positive definite.

(R7) For some [pic] satisfying [pic],

[pic].

(R8) A map [pic] is Fréchet differentiable. That is,

[pic] as [pic].

Remarks on condition (R8): The Fréchet differentiability of [pic] requires that [pic] is uniformly bounded on [pic]. Under the bivariate normal model, partial derivatives [pic] have explicit bounds (given in Section 4 of the supplementary material).

Appendix C: Proof of Theorem 1

Under conditions (R3) and (R4), all the partial derivatives of [pic] exist and they are continuous. This implies that [pic] is differentiable in the sense that, for [pic],

[pic] as [pic]. (C1)

By straightforward calculations,

[pic]

Under (R8) and by (C1), the last three terms have the order [pic]. Therefore,

[pic] as [pic],

which leads to the Fréchet differentiability of [pic]. The Hadamard differentiability of [pic] immediately follows since it is a weaker condition than Fréchet differentiability. In fact, Fréchet differentiability and Hadamard differentiability are the same condition here since the domain of [pic] (i.e., [pic]) is finite-dimensional (see Example 3.9.2 of van der Vaart and Wellner (1996)). Applying the functional delta method (p. 297 of van der Vaart, 1998) to the result of Lemma 1, we have

[pic],

where [pic] is uniform in [pic]. Therefore,

[pic]

□

Appendix D: Proof of Lemma 2

Let [pic]. By a Taylor expansion,

[pic].

Also, by the weak law of large numbers,

[pic],

where [pic] is uniform in [pic]. The preceding two equations lead to

[pic]

□

Appendix E: Proof of Theorem 2

By Lemmas 1 and 2, [pic] are asymptotically uncorrelated. Therefore, to show the joint convergence [pic] in [pic], we only need to show [pic] and [pic] in [pic] for each [pic].

To show [pic], we first check the marginal convergence. By the Cramér-Wold device, it suffices to show that, for [pic] and [pic],

[pic] (E1)

in distribution. By Theorem 1 and the central limit theorem,

[pic]

in distribution, where

[pic]

By the definition of [pic], [pic] is a zero-mean normal distribution with variance [pic]. Hence, (E1) is proven. Next, the tightness of [pic] is verified by showing the tightness of [pic] and [pic], respectively. The tightness of the former follows from the empirical process theory for the multi-dimensional distribution function (Example 2.5.4 of van der Vaart and Wellner, 1996). The tightness of the latter follows from the functional delta method applied to the Hadamard differentiable map [pic] (Appendix C). Therefore, [pic] in [pic] is proven.

Similarly, to show [pic][pic] in [pic], we first check

[pic] (E2)

in distribution. By Lemma 2,

[pic],

where

[pic]

Hence, (E2) holds. The tightness of [pic] is proven by the same approach as the tightness proof for [pic]. □

Appendix F: Proof of Lemma 3

It is enough to show that

[pic]. (F1)

By definition,

[pic].

Under (R1) through (R7), the likelihood ratio test is consistent under [pic]. Hence,

[pic], (F2)

where [pic] is the percentile of the chi-squared distribution with one degree of freedom. First, we prove (F1) when [pic]. For any [pic], (F2) and [pic] together imply

[pic],

which proves (F1). Next, we prove (F1) when [pic] for all [pic] and [pic]. Under (R1) through (R7), the likelihood ratio test is consistent under [pic], so that

[pic],

which proves (F1). □

REFERENCES

Bücher, A., Dette, H., 2010. A note on bootstrap approximations for the empirical copula process. Statistics & Probability Letters 80, 1925-1932.

Chen, C.-H., Tsai, W.-Y., Chao, W.-H., 1996. The product-moment correlation coefficient and linear regression for truncated data. Journal of the American Statistical Association 91, 1181-1186.

Cohen, A. C., 1991. Truncated and Censored Samples: Theory and Applications. Marcel Dekker, New York.

Dietz, E., Böhning, D., 2000. On estimation of the Poisson parameter in zero-modified Poisson models. Computational Statistics & Data Analysis 34, 441-459.

Efron, B., Tibshirani, R. J., 1993. An Introduction to the Bootstrap. Chapman & Hall, London.

Emura, T., Konno, Y., 2009. Multivariate parametric approaches for dependently left-truncated data. Technical Reports of Mathematical Sciences, Chiba University 25, No. 2.

Emura, T., Konno, Y., 2010. Multivariate normal distribution approaches for dependently truncated data. Statistical Papers, in press.

Emura, T., Wang, W., 2010. Testing quasi-independence for truncation data. Journal of Multivariate Analysis 101, 223-239.

Emura, T., Wang, W., Hung, H., 2011. Semi-parametric inference for copula models for truncated data. Statistica Sinica 21, 349-367.

Fang, K.-T., Kotz, S., Ng, K. W., 1990. Symmetric Multivariate and Related Distributions. Chapman & Hall, New York.

Gilbert, P., 2010. numDeriv: Accurate Numerical Derivatives. R package version 2010.11-1.

Holgate, P., 1964. Estimation for the bivariate Poisson distribution. Biometrika 51, 241-245.

Jin, Z., Lin, D. Y., Wei, L. J., 2003. Rank-based inference for the accelerated failure time model. Biometrika 90, 341-353.

Klein, J. P., Moeschberger, M. L., 2003. Survival Analysis: Techniques for Censored and Truncated Data. New York, Springer.

Knight, K., 2000. Mathematical Statistics. Chapman & Hall.

Kojadinovic, I., Yan, J., Holmes, M., 2011. Fast large-sample goodness-of-fit tests for copulas. Statistica Sinica 21, 841-871.

Kojadinovic, I., Yan, J., 2011. A goodness-of-fit test for multivariate multiparameter copulas based on multiplier central limit theorems. Statistics and Computing 21, 17-30.

Lakhal-Chaieb, L., Rivest, L. -P., Abdous, B., 2006. Estimating survival under a dependent truncation. Biometrika 93, 665-669.

Lange, K. L., Little, R. J. A., Taylor, J. M. G., 1989. Robust statistical modeling using the t distribution. Journal of the American Statistical Association 84, 881-896.

Lehmann, E. L., Casella, G., 1998. Theory of Point Estimation, 2nd Edition. Springer-Verlag, New York.

Lynden-Bell, D., 1971. A method of allowing for known observational selection in small samples applied to 3CR quasars. Monthly Notices of the Royal Astronomical Society 155, 95-118.

Martin, E. C., Betensky, R. A., 2005. Testing quasi-independence of failure and truncation via conditional Kendall’s tau. Journal of the American Statistical Association 100, 484-492.

Ripley, B., Hornik, K., Gebhardt, A., 2011. Support Functions and Datasets for Venables and Ripley’s MASS. R package version 7.3-13.

Schiel, J. L., 1998. Estimating conditional probabilities of success and other course placement validity statistics under soft truncation. ACT Research Report Series 1998-2, Iowa City, IA: ACT.

Schiel, J. L., Harmston, M., 2000. Validating two-stage course placement systems when data are truncated. ACT Research Report Series 2000-3, Iowa City, IA: ACT.

Spiekerman, C. F., Lin, D. Y., 1998. Marginal regression models for multivariate failure time data. Journal of the American Statistical Association 93, 1164-1175.

Tsai, W. -Y., 1990. Testing the assumption of independence of truncation time and failure time. Biometrika 77, 169-177.

van der Vaart, A. W., 1998. Asymptotic Statistics. Cambridge Series in Statistical and Probabilistic Mathematics, Vol. 3. Cambridge University Press, Cambridge.

van der Vaart, A. W., Wellner, J. A., 1996. Weak Convergence and Empirical Processes. Springer Series in Statistics, Springer-Verlag, New York.

Waiker, V. B., Schuurmann, F. J., Raghunathan, T. E., 1984. On a two-stage shrinkage testimator of the mean of a normal distribution. Communications in Statistics – Theory and Methods 13, 1901-1913.

Wang, M. C., Jewell, N. P., Tsai, W. Y., 1986. Asymptotic properties of the product-limit estimate under random truncation. Annals of Statistics 14, 1597-1605.

Woodroofe, M., 1985. Estimating a distribution function with truncated data. Annals of Statistics 13, 163-177.

-----------------------

[1] Corresponding author: TEL:+81-3-5981-3642, FAX +81-3-5981-3637.
