Laplace approximation in high-dimensional Bayesian regression
Abstract
We consider Bayesian variable selection in sparse high-dimensional regression, where the number of covariates $p$ may be large relative to the sample size $n$, but at most a moderate number $q$ of covariates are active. Specifically, we treat generalized linear models. For a single fixed sparse model with well-behaved prior distribution, classical theory proves that the Laplace approximation to the marginal likelihood of the model is accurate for sufficiently large sample size $n$. We extend this theory by giving results on uniform accuracy of the Laplace approximation across all models in a high-dimensional scenario in which $p$ and $q$, and thus also the number of considered models, may increase with $n$. Moreover, we show how this connection between marginal likelihood and Laplace approximation can be used to obtain consistency results for Bayesian approaches to variable selection in high-dimensional regression.
Keywords: Bayesian inference, generalized linear models, Laplace approximation, logistic regression, model selection, variable selection
1 Introduction
A key issue in Bayesian approaches to model selection is the evaluation of the marginal likelihood, also referred to as the evidence, of the different models that are being considered. While the marginal likelihood may sometimes be available in closed form when adopting suitable priors, most problems require approximation techniques. In particular, this is the case for variable selection in generalized linear models such as logistic regression, which are the models treated in this paper. Different strategies to approximate the marginal likelihood are reviewed by Friel and Wyse (2012). Our focus will be on the accuracy of the Laplace approximation that is derived from large-sample theory; see also Bishop (2006, Section 4.4).
Suppose we have $n$ independent observations of a response variable, and along with each observation we record a collection of $p$ covariates. Write $L(\beta)$ for the likelihood function of a generalized linear model relating the response to the covariates, where $\beta \in \mathbb{R}^p$ is a vector of coefficients in the linear predictor (McCullagh and Nelder, 1989). Let $\pi$ be a prior distribution, and let $\hat\beta$ be the maximum likelihood estimator (MLE) of the parameter vector. Then the evidence for the (saturated) regression model is the integral
\[ \int_{\mathbb{R}^p} L(\beta)\, \pi(\beta)\, d\beta, \]
and the Laplace approximation is the estimate
\[ L(\hat\beta)\, \pi(\hat\beta)\, (2\pi)^{p/2} \det H(\hat\beta)^{-1/2}, \]
where $H(\beta)$ denotes the negative Hessian of the log-likelihood function $\log L(\beta)$.
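To make the two displays concrete, the following sketch (our own code and function names, not an implementation from the paper) computes the log of the Laplace approximation for a logistic regression with a standard normal prior; the evidence integral itself can then be checked by Monte Carlo averaging of the likelihood over prior draws.

```python
import numpy as np
from scipy.optimize import minimize

def log_lik(beta, X, y):
    # Logistic regression log-likelihood with the canonical logit link.
    eta = X @ beta
    return y @ eta - np.sum(np.log1p(np.exp(eta)))

def laplace_log_evidence(X, y, sigma2=1.0):
    """Log of L(bhat) * pi(bhat) * (2*pi)^(p/2) * det(H(bhat))^(-1/2),
    where bhat is the MLE, H is the negative Hessian of the log-likelihood,
    and the prior is N(0, sigma2 * I)."""
    n, p = X.shape
    bhat = minimize(lambda b: -log_lik(b, X, y), np.zeros(p), method="BFGS").x
    mu = 1.0 / (1.0 + np.exp(-(X @ bhat)))       # fitted success probabilities
    H = (X * (mu * (1.0 - mu))[:, None]).T @ X   # negative Hessian at the MLE
    _, logdet = np.linalg.slogdet(H)
    log_prior = -0.5 * p * np.log(2 * np.pi * sigma2) - 0.5 * bhat @ bhat / sigma2
    return log_lik(bhat, X, y) + log_prior + 0.5 * p * np.log(2 * np.pi) - 0.5 * logdet
```

For small $p$, averaging the likelihood over draws from the prior reproduces this quantity to within Monte Carlo error.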
Classical asymptotic theory for large sample size but fixed number of covariates shows that the Laplace approximation is accurate with high probability (Haughton, 1988). With fixed, this then clearly also holds for variable selection problems in which we would consider every one of the finitely many models given by the subsets of covariates. This accuracy result justifies the use of the Laplace approximation as a proxy for an actual model evidence. The Laplace approximation is also useful for proving frequentist consistency results about Bayesian methods for variable selection for a general class of priors. This is again discussed in Haughton (1988). The ideas go back to the work of Schwarz (1978) on the Bayesian information criterion (BIC).
In this paper, we set out to give analogous results on the interplay between Laplace approximation, model evidence, and frequentist consistency in variable selection for regression problems that are high-dimensional, possibly with $p > n$, and sparse in that we consider only models that involve small subsets of covariates. We denote by $q$ an upper bound on the number of active covariates. In variable selection for sparse high-dimensional regression, the number of considered models is very large, on the order of $p^q$. Our interest is then in bounds on the approximation error of Laplace approximations that, with high probability, hold uniformly across all sparse models. Theorem 1, our main result, gives such uniform bounds (see Section 3). A numerical experiment supporting the theorem is described in Section 4.
In Section 5, we show that when adopting suitable priors on the space of all sparse models, model selection by maximizing the product of model prior and Laplace approximation is consistent in an asymptotic scenario in which $p$ and $q$ may grow with $n$. As a corollary, we obtain a consistency result for fully Bayesian variable selection methods. We note that the class of priors on models we consider is the same as the one that has been used to define extensions of BIC that have consistency properties for high-dimensional variable selection problems (see, for example, Bogdan, Ghosh and Doerge, 2004; Chen and Chen, 2008; Żak-Szatkowska and Bogdan, 2011; Chen and Chen, 2012; Frommlet et al., 2012; Luo and Chen, 2013; Luo, Xu and Chen, 2015; Barber and Drton, 2015). The prior has also been discussed by Scott and Berger (2010).
2 Setup and assumptions
In this section, we provide the setup for the studied problem and the assumptions needed for our results.
2.1 Problem setup
We treat generalized linear models for $n$ independent observations of a response, which we denote as $y_1, \dots, y_n$. Each observation $y_i$ follows a distribution from a univariate exponential family with density
\[ f(y \mid \theta) \;=\; \exp\{\theta y - b(\theta)\}, \]
where the density is defined with respect to some measure on $\mathbb{R}$. Let $\theta_i$ be the (natural) parameter indexing the distribution of $y_i$. The vector $\theta = (\theta_1, \dots, \theta_n)$ is then assumed to lie in the linear space spanned by the columns of a design matrix $X \in \mathbb{R}^{n \times p}$, that is, $\theta = X\beta$ for a parameter vector $\beta \in \mathbb{R}^p$. Our work is framed in a setting with a fixed/deterministic design $X$. In the language of McCullagh and Nelder (1989), our basic setup uses a canonical link, no dispersion parameter, and an exponential family whose natural parameter space is the entire real line. This covers, for instance, logistic and Poisson regression. However, extensions beyond this setting are possible; see for instance the related work of Luo and Chen (2013), whose discussion of Bayesian information criteria encompasses other link functions.
We write $x_i$ for the $i$th row of $X$, that is, the vector of covariate values for observation $y_i$. The regression model for the responses then has log-likelihood, score, and negative Hessian functions
\[
\ell(\beta) \;=\; \sum_{i=1}^n \bigl[ y_i\, x_i^T\beta - b(x_i^T\beta) \bigr], \qquad
s(\beta) \;=\; \sum_{i=1}^n \bigl[ y_i - b'(x_i^T\beta) \bigr] x_i, \qquad
H(\beta) \;=\; \sum_{i=1}^n b''(x_i^T\beta)\, x_i x_i^T.
\]
The results in this paper rely on conditions on the Hessian $H(\beta)$, and we note that, implicitly, these are actually conditions on the design $X$.
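These three functions can be written down generically once the cumulant function $b$ and its derivatives are supplied. The sketch below (our own helper, with hypothetical names) instantiates them for Poisson regression, where $b(\theta) = e^\theta$:

```python
import numpy as np

def glm_functions(X, y, b, b_prime, b_double_prime):
    """Log-likelihood, score, and negative-Hessian functions of a canonical-link
    GLM with cumulant function b: l(beta) = sum_i [y_i x_i'beta - b(x_i'beta)]."""
    def loglik(beta):
        theta = X @ beta
        return float(y @ theta - np.sum(b(theta)))
    def score(beta):
        theta = X @ beta
        return X.T @ (y - b_prime(theta))                   # sum_i [y_i - b'(x_i'beta)] x_i
    def neg_hessian(beta):
        theta = X @ beta
        return (X * b_double_prime(theta)[:, None]).T @ X   # sum_i b''(x_i'beta) x_i x_i'
    return loglik, score, neg_hessian

# Poisson regression has b = b' = b'' = exp, so one would call:
# loglik, score, neg_hessian = glm_functions(X, y, np.exp, np.exp, np.exp)
```

A finite-difference check confirms that the score is the gradient of the log-likelihood and that the negative Hessian is symmetric positive semidefinite, as used throughout the paper.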
We are concerned with a sparsity scenario in which the joint distribution of $y_1, \dots, y_n$ is determined by a true parameter vector $\beta^0$ supported on a (small) set $S^0$, that is, $\beta^0_j \ne 0$ if and only if $j \in S^0$. Our interest is in the recovery of the set $S^0$ when knowing an upper bound $q$ on the cardinality of $S^0$, so $|S^0| \le q$. To this end, we consider the different submodels given by the linear spaces spanned by subsets $S$ of the columns of the design matrix $X$, where $|S| \le q$.
For notational convenience, we take $S$ to mean either an index set for the covariates or the resulting regression model. The regression coefficients in model $S$ form a vector of length $|S|$. We index such vectors by the elements of $S$, and we write $\mathbb{R}^S$ for the Euclidean space containing all these coefficient vectors. This way the coefficient and the covariate it belongs to always share a common index. In other words, the coefficient for the $j$th coordinate of a covariate vector is denoted by $\beta_j$ in any model $S$ with $j \in S$. Furthermore, it is at times convenient to identify a vector $\beta \in \mathbb{R}^S$ with the vector in $\mathbb{R}^p$ that is obtained from $\beta$ by filling in zeros outside of the set $S$. As this is clear from the context, we simply write $\beta$ again when referring to this sparse vector in $\mathbb{R}^p$. Finally, $x_{i,S}$ and $H_S(\beta)$ denote the subvector and submatrix of $x_i$ and $H(\beta)$, respectively, obtained by extracting entries indexed by $S$. These depend only on the subvectors $x_{i,S}$ of the covariate vectors.
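In code, this indexing convention amounts to plain subsetting; a small NumPy sketch (the values are our own toy example):

```python
import numpy as np

p = 5
S = [1, 3]                                   # a sparse model: indices of active covariates
x_i = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # covariate vector of one observation
beta_S = np.array([0.5, -2.0])               # coefficients indexed by the elements of S

x_iS = x_i[S]                                # subvector x_{i,S}

beta_full = np.zeros(p)                      # identify beta in R^S with a sparse vector in R^p
beta_full[S] = beta_S

# the linear predictor agrees whether computed in model S or in R^p
assert np.isclose(x_iS @ beta_S, x_i @ beta_full)
```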
2.2 Assumptions
Recall that $n$ is the sample size, $p$ is the number of covariates, $q$ is an upper bound on the model size, and $\beta^0$ is the true parameter vector. We assume the following conditions to hold for all considered regression problems:

(A1) The Euclidean norm of the true signal is bounded, that is, $\|\beta^0\|_2 \le R$ for a fixed constant $R$.

(A2) There is a decreasing function $\lambda(\cdot) > 0$ and an increasing function $\Lambda(\cdot)$ such that for all $S$ with $|S| \le 2q$ and all $\beta \in \mathbb{R}^S$, the Hessian of the negative log-likelihood function is bounded as
\[ n\, \lambda(\|\beta\|_2)\, I \;\preceq\; H_S(\beta) \;\preceq\; n\, \Lambda(\|\beta\|_2)\, I. \]

(A3) There is a constant $L$ such that for all $S$ with $|S| \le 2q$ and all $\beta, \tilde\beta \in \mathbb{R}^S$,
\[ \| H_S(\beta) - H_S(\tilde\beta) \| \;\le\; n\, L\, \|\beta - \tilde\beta\|_2, \]
where $\|\cdot\|$ is the spectral norm of a matrix.
Assumption (A2) provides control of the spectrum of the Hessian of the negative log-likelihood function, and (A3) yields control of the change of the Hessian. Together, (A2) and (A3) imply that for all $\delta \in (0,1)$, there is an $\epsilon > 0$ such that
(2.1) \; $(1-\delta)\, H_S(\beta) \;\preceq\; H_S(\tilde\beta) \;\preceq\; (1+\delta)\, H_S(\beta)$
for all $S$ with $|S| \le 2q$ and $\beta, \tilde\beta \in \mathbb{R}^S$ with $\|\beta - \tilde\beta\|_2 \le \epsilon$; see Prop. 2.1 in Barber and Drton (2015). Note also that we consider sets $S$ with cardinality $|S| \le 2q$ in (A2) and (A3) because it allows us to make arguments concerning false models $S \not\supseteq S^0$, with $|S| \le q$, using properties of the true model given by the union $S \cup S^0$.
Remark 2.1.
When treating generalized linear models, some control of the size of the true coefficient vector is indeed needed. For instance, in logistic regression, if the norm of $\beta^0$ is too large, then the binary response will take on one of its values with overwhelming probability. Keeping with the setting of logistic regression, Barber and Drton (2015) show how assumptions (A2) and (A3) hold with high probability in certain settings in which the covariates are generated as an i.i.d. sample. Assumptions (A2) and (A3), or the implication from (2.1), also appear in earlier work on Bayesian information criteria for high-dimensional problems, such as Chen and Chen (2012) or Luo and Chen (2013).
Let $\{\pi_S\}$ be a family of probability density functions that we use to define prior distributions in all sparse models. We say that the family is log-Lipschitz with respect to radius $r$ and has bounded log-density ratios if there exist two constants $\kappa, c > 0$ such that the following conditions hold for all $S$ with $|S| \le q$:

(B1) The function $\log \pi_S$ is Lipschitz on the ball $B_r(0) = \{\beta \in \mathbb{R}^S : \|\beta\|_2 \le r\}$, i.e., for all $\beta, \tilde\beta \in B_r(0)$, we have
\[ |\log \pi_S(\beta) - \log \pi_S(\tilde\beta)| \;\le\; \kappa\, \|\beta - \tilde\beta\|_2. \]

(B2) For all $\beta \in B_r(0)$,
\[ \Bigl| \log \frac{\pi_S(\beta)}{\pi_S(0)} \Bigr| \;\le\; c. \]
Example 2.1.
If we take $\pi_S$ to be the density of an $|S|$-fold product of a centered normal distribution with variance $\sigma^2$, then the above conditions hold with $\kappa = r/\sigma^2$ and $c = r^2/(2\sigma^2)$.
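As a quick check of this example (our own derivation; $r$ denotes the radius from condition (B1)), the constants follow directly from the explicit log-density:

```latex
\log \pi_S(\beta) = -\tfrac{|S|}{2}\log(2\pi\sigma^2) - \frac{\|\beta\|_2^2}{2\sigma^2},
\qquad
\|\nabla \log \pi_S(\beta)\|_2 = \frac{\|\beta\|_2}{\sigma^2} \le \frac{r}{\sigma^2}
\quad \text{on } \|\beta\|_2 \le r,
```

so $\log \pi_S$ is Lipschitz on the ball with constant $r/\sigma^2$, and for $\|\beta\|_2 \le r$ the log-density ratio satisfies $|\log(\pi_S(\beta)/\pi_S(0))| = \|\beta\|_2^2/(2\sigma^2) \le r^2/(2\sigma^2)$.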
3 Laplace approximation
This section provides our main result. For a high-dimensional regression problem, we show that a Laplace approximation to the marginal likelihood of each sparse model $S$,
\[ L(S) \;=\; \int_{\mathbb{R}^S} L_S(\beta)\, \pi_S(\beta)\, d\beta, \]
leads to an approximation error that, with high probability, is bounded uniformly across all models. To state our result, we adopt the notation
Theorem 1.
Suppose conditions (A1)–(A3) hold. Then, there are constants depending only on the quantities in these conditions such that if
then, with high probability, the following two statements are true for all sparse models $S$ with $|S| \le q$:
(i) The MLE $\hat\beta_S$ exists and satisfies a norm bound that is uniform over these models.
Proof.
(i) Bounded MLEs. It follows from Barber and Drton (2015, Sect. B.2) that, with the claimed probability, the norms $\|\hat\beta_S\|_2$ for true models (i.e., $S \supseteq S^0$ and $|S| \le q$) are bounded by a constant. (In the proof of this theorem, we cite several results from Barber and Drton (2015, Sect. B.2, Lem. B.1). Although that paper treats the specific case of logistic regression, inspection of the proofs of the cited results shows that they hold more broadly for general GLMs as long as the Hessian conditions (A1)–(A3) are assumed, and therefore we may use them in the setting considered here.) The cited result makes reference to an event on which all the claims we make subsequently are true. The bound on the norm of an MLE of a true model was obtained by comparing the maximal likelihood to the likelihood at the true parameter $\beta^0$. As we show now, for false sparse models, we may argue similarly but comparing to the likelihood at 0.
Recall that $R$ is the bound on the norm of $\beta^0$ assumed in (A1) and that the two functions in (A2) are decreasing and increasing in the norm of $\beta$, respectively. Throughout this part, we use the abbreviations
First, we lower-bound the likelihood at 0 via a Taylor expansion using the true model $S^0$. For some intermediate point on the segment between 0 and $\beta^0$, we have that
where we have applied (A2). Lemma B.1 in Barber and Drton (2015) yields that
where can be bounded by a constant multiple of . By our sample size assumption (i.e., the existence of the constant ), we thus have that
(3.1) 
for some constant .
Second, we may consider the true model $S \cup S^0$ instead of $S^0$ and apply (B.17) in Barber and Drton (2015) to obtain the bound
(3.2) 
where the remainder can be bounded by a constant multiple of the same quantity as before. Choosing our sample size constant large enough, we may deduce from (3.2) that there is a constant such that the bound holds
whenever the stated condition is met. Using the fact that model $S$ is nested in $S \cup S^0$, we may deduce from (3.2) that the same type of bound holds for model $S$
whenever the analogous condition is met. Together with (3.1), this implies that the log-likelihood ratio at $\hat\beta_S$ is bounded by a constant. Having assumed (A2), we may conclude that the norm of $\hat\beta_S$ is bounded by a constant.
(ii) Laplace approximation. Fix $S$ with $|S| \le q$. In order to analyze the evidence of model $S$, we split the integration domain $\mathbb{R}^S$ into two regions, namely, a neighborhood $U$ of the MLE $\hat\beta_S$ and the complement $\mathbb{R}^S \setminus U$. More precisely, we choose the neighborhood of the MLE as
Then the marginal likelihood $L(S)$ is the sum of the two integrals
We will estimate the integral over $U$ via a quadratic approximation to the log-likelihood function. Outside of the region $U$, the quadratic approximation may no longer be accurate, but due to concavity of the log-likelihood function, the integrand can be bounded by an exponentially decaying function for an appropriately chosen constant, which allows us to show that the integral over the complement is negligible when $n$ is sufficiently large.
We now approximate the two integrals separately. Throughout this part we assume that we have a bound on the norms of the MLEs $\hat\beta_S$ in sparse models $S$ with $|S| \le q$. For notational convenience, we now let
(iia) Approximation of the integral over $U$. By a Taylor expansion, for any $\beta \in U$ there is an intermediate point between $\beta$ and $\hat\beta_S$ such that
By (A3) and using that $\beta \in U$,
Hence,
(3.3) 
Next, observe that (A2) implies that
We deduce that for any vector ,
(3.4) 
This gives
(3.5) 
Choosing the constant large enough, we can ensure that the upper bound in (3.4) is no larger than 1 for all points $\beta \in U$. By our assumption that the norms of the MLEs are bounded, the set $U$ is thus contained in the ball
Since, by (B1), the logarithm of the prior density is Lipschitz on this ball, it follows from (3.4) that
(3.6) 
Plugging (3.5) and (3.6) into , and writing to denote , we find that
(3.7) 
In the last integral, change variables to to see that
(3.8) 
where we use a tail bound for the $\chi^2$ distribution stated in Lemma A.1. We now substitute (3.8) into (3.7) and simplify the result. We find that
(3.9) 
when the constant is chosen large enough.
(iib) Approximation of integral . Let be a point on the boundary of . It then holds that
We may deduce from (3.5) that
for sufficiently small, which can be ensured by choosing large enough. The concavity of the loglikelihood function now implies that for all we have
(3.10) 
Moreover, using first assumption (A2) and then assumption (A1), we have that
Since , it thus holds that
(3.11) 
Combining the bounds from (3.10) and (3.11), the integral over the complement of $U$ can be bounded as
(3.12) 
Changing variables to and applying Lemma A.2, we may bound the integral in (3.12) as
Stirling’s lower bound on the Gamma function gives
Using this inequality, and returning to (3.12), we see that
(3.13) 
Based on this fact, we certainly have the very loose bound that
(3.14) 
for all sufficiently large $n$.
Remark 3.1.
The proof of Theorem 1 could be modified to handle other situations of interest. For instance, instead of a fixed Lipschitz constant for all log prior densities, one could consider the case where $\log \pi_S$ is Lipschitz with respect to a constant that grows with the cardinality of $S$, in which case the rates in the theorem would change accordingly. The additional term that would then appear in (3.15) could be compensated by using (3.13) in a less crude way than when moving to (3.14).
4 Numerical experiment for sparse Bayesian logistic regression
In this section, we perform a simulation study to assess the approximation error in Laplace approximations to the marginal likelihood of logistic regression models. To this end, we generate independent covariate vectors $x_1, \dots, x_n$ with i.i.d. entries. For each choice of a (small) value of $|S^0|$, we take the true parameter vector $\beta^0$ to have the first $|S^0|$ entries equal to two and the rest of the entries equal to zero. So, $\|\beta^0\|_2 = 2\sqrt{|S^0|}$. We then generate independent binary responses $y_1, \dots, y_n$, with values in $\{0, 1\}$ and distributed as $y_i \sim \mathrm{Bernoulli}(p_i)$, where
\[ p_i \;=\; \frac{\exp(x_i^T \beta^0)}{1 + \exp(x_i^T \beta^0)}, \]
based on the usual (and canonical) logit link function.
We record that the logistic regression model with covariates indexed by $S$ has the likelihood function
(4.1) \; $L_S(\beta) \;=\; \prod_{i=1}^n \left( \frac{e^{x_{i,S}^T \beta}}{1 + e^{x_{i,S}^T \beta}} \right)^{y_i} \left( \frac{1}{1 + e^{x_{i,S}^T \beta}} \right)^{1 - y_i},$
where, as previously defined, $x_{i,S}$ denotes the subvector of covariates for model $S$. The negative Hessian of the log-likelihood function is
\[ H_S(\beta) \;=\; \sum_{i=1}^n \frac{e^{x_{i,S}^T \beta}}{(1 + e^{x_{i,S}^T \beta})^2}\, x_{i,S}\, x_{i,S}^T. \]
For Bayesian inference in the logistic regression model given by $S$, we consider as a prior distribution a standard normal distribution on $\mathbb{R}^S$, that is, the distribution of a random vector with $|S|$ independent $N(0,1)$ coordinates. As in the previous section, we denote the resulting prior density by $\pi_S$. We then wish to approximate the evidence or marginal likelihood
\[ L(S) \;=\; \int_{\mathbb{R}^S} L_S(\beta)\, \pi_S(\beta)\, d\beta. \]
As a first approximation, we use a Monte Carlo approach in which we simply draw independent samples $\beta^{(1)}, \dots, \beta^{(M)}$ from the prior and estimate the evidence as
\[ \hat{L}_{\mathrm{MC}}(S) \;=\; \frac{1}{M} \sum_{m=1}^M L_S\bigl(\beta^{(m)}\bigr), \]
where we use a common large value of $M$ in all of our simulations. As a second method, we compute the Laplace approximation
\[ \hat{L}(S) \;=\; L_S(\hat\beta_S)\, \pi_S(\hat\beta_S)\, (2\pi)^{|S|/2} \det H_S(\hat\beta_S)^{-1/2}, \]
where $\hat\beta_S$ is the maximum likelihood estimator in model $S$. For each choice of the number of covariates $p$, the model size $|S^0|$, and the sample size $n$, we calculate the Laplace approximation error as the maximal absolute difference between $\log \hat{L}(S)$ and $\log \hat{L}_{\mathrm{MC}}(S)$ over the sparse models $S$.
We consider several values of $n$ in our experiment. Since we wish to compute the Laplace approximation error of every sparse model, and the number of possible models is on the order of $p^q$, we restrict to small values of $p$ and $q$. The Laplace approximation error, averaged across 100 independent simulations, is shown in Figure 1. We remark that the error in the Monte Carlo approximation to the marginal likelihood is negligible compared to the quantity plotted in Figure 1. With two independent runs of our Monte Carlo integration routine, we found the Monte Carlo error to be on the order of 0.05.
For each configuration, Figure 1 shows a decrease in the Laplace approximation error as $n$ increases. We emphasize that $p$, and thus also the number of considered sparse models, increases with $n$. As we increase the number of active covariates, the Laplace approximation error increases. These facts are in agreement with Theorem 1. This said, the scope of this simulation experiment is clearly limited by the fact that only small values of $p$ and $q$ and moderate sample sizes are computationally feasible.
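A miniature version of this experiment can be sketched as follows; the code and the settings ($n$, $p$, $q$, and the number of Monte Carlo draws) are our own illustrative choices, much smaller than any systematic study:

```python
import itertools
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(0)

def log_lik(beta, X, y):
    # logistic log-likelihood with the canonical logit link
    eta = X @ beta
    return y @ eta - np.sum(np.log1p(np.exp(eta)))

def log_evidence_laplace(X, y):
    """Log Laplace approximation with a standard normal prior on the coefficients."""
    k = X.shape[1]
    bhat = minimize(lambda b: -log_lik(b, X, y), np.zeros(k), method="BFGS").x
    mu = 1.0 / (1.0 + np.exp(-(X @ bhat)))
    H = (X * (mu * (1.0 - mu))[:, None]).T @ X   # negative Hessian at the MLE
    _, logdet = np.linalg.slogdet(H)
    log_prior = -0.5 * k * np.log(2 * np.pi) - 0.5 * bhat @ bhat
    return log_lik(bhat, X, y) + log_prior + 0.5 * k * np.log(2 * np.pi) - 0.5 * logdet

def log_evidence_mc(X, y, M=4000):
    """Monte Carlo estimate: average the likelihood over M prior draws."""
    draws = rng.normal(size=(M, X.shape[1]))
    eta = draws @ X.T
    lls = eta @ y - np.log1p(np.exp(eta)).sum(axis=1)
    return logsumexp(lls) - np.log(M)

# one simulated data set with a single active covariate
n, p, q = 150, 6, 2
beta0 = np.zeros(p)
beta0[0] = 2.0
X = rng.normal(size=(n, p))
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ beta0)))).astype(float)

# maximal discrepancy between the two log-evidence approximations over all sparse models
models = [S for k in range(1, q + 1) for S in itertools.combinations(range(p), k)]
errs = [abs(log_evidence_laplace(X[:, list(S)], y) - log_evidence_mc(X[:, list(S)], y))
        for S in models]
print(len(models), max(errs))
```

Even at this toy scale, enumerating all $\binom{p}{1} + \binom{p}{2}$ models makes the cost of an exhaustive comparison apparent.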
5 Consistency of Bayesian variable selection
In this section, we apply the result on uniform accuracy of the Laplace approximation (Theorem 1) to prove a high-dimensional consistency result for Bayesian variable selection. Here, consistency refers to the property that the probability of choosing the most parsimonious true model tends to one in a large-sample limit. As discussed in the Introduction, we consider priors of the form
(5.1) \; $P(S) \;\propto\; \binom{p}{|S|}^{-\kappa}, \qquad |S| \le q,$
where $\kappa \in [0, 1]$ is a parameter that allows one to interpolate between the case of a uniform distribution on models ($\kappa = 0$) and a prior for which the model cardinality is uniformly distributed ($\kappa = 1$).
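Writing the prior in the binomial form used in the extended-BIC literature cited in the Introduction, $P(S) \propto \binom{p}{|S|}^{-\kappa}$, a short computation (our own code) illustrates the two endpoints of the $\kappa$ interpolation:

```python
import itertools
import math

def model_log_prior(S, p, kappa):
    """Unnormalized log prior: log P(S) = -kappa * log C(p, |S|) + const."""
    return -kappa * math.log(math.comb(p, len(S)))

# With kappa = 1, the total prior mass assigned to each model size is the same:
p, q, kappa = 8, 3, 1.0
mass_per_size = [
    sum(math.exp(model_log_prior(S, p, kappa))
        for S in itertools.combinations(range(p), k))
    for k in range(q + 1)
]
print(mass_per_size)  # each entry equals 1 (up to rounding): cardinality is uniform
```

With $\kappa = 0$, by contrast, every individual model receives the same mass, so larger cardinalities carry far more total prior weight.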
Bayesian variable selection is based on maximizing the (unnormalized) posterior probability
(5.2) \; $P(S)\, L(S)$
over $S$, $|S| \le q$. Approximate Bayesian variable selection can be based on maximizing instead the quantity
(5.3) \; $P(S)\, \hat{L}(S),$
where $\hat{L}(S)$ denotes the Laplace approximation to the marginal likelihood $L(S)$. We will identify asymptotic scenarios under which maximization of (5.3) yields consistent variable selection. Using Theorem 1, we obtain as a corollary that fully Bayesian variable selection, i.e., maximization of (5.2), is consistent as well.
To study consistency, we consider a sequence of variable selection problems indexed by the sample size $n$, where the $n$th problem has $p$ covariates, true parameter $\beta^0$ with support $S^0$, and signal strength $\min_{j \in S^0} |\beta^0_j|$, all of which may depend on $n$.
In addition, let $q$ be the upper bound on the size of the considered models. The following consistency result is similar to the related results for extensions of the Bayesian information criterion (see, for instance, Chen and Chen, 2012; Barber and Drton, 2015).
Theorem 2.
Suppose that for , that for , that for , and that . Assume that (A1) holds for a fixed constant $R$ and that there are fixed functions with respect to which the covariates satisfy the Hessian conditions (A2) and (A3) for all $S$ with $|S| \le 2q$. Moreover, assume that for the considered family of prior densities there are constants