
Bayesian inference for nondecomposable graphical Gaussian models


Petros Dellaportas (Department of Statistics, Athens University of Economics, 76 Patission St, 10434 Athens, Greece), Paolo Giudici (Dipartimento di Economia Politica e Metodi Quantitativi, University of Pavia, Via San Felice 5, 27100 Pavia, Italy) and Gareth Roberts (Statistical Laboratory, University of Cambridge, 16 Mill Lane, Cambridge CB2 1SB, UK)

Abstract

In this paper we propose a method to calculate the posterior probability of a nondecomposable graphical Gaussian model. Our proposal is based on a new device to sample from Wishart distributions conditional on the graphical constraints. As a result, our methodology allows Bayesian model selection within the whole class of graphical Gaussian models, including nondecomposable ones.

1 INTRODUCTION

Let $G$ be a conditional independence graph describing the association structure of a vector of random variables, say $X$. A graphical model is a family of probability distributions $P_G$ which is Markov over $G$. In particular, when all the random variables in $X$ are continuous, a graphical Gaussian model is obtained by assuming $P_G = N(\mu, \Sigma_G)$, with $\Sigma_G$ positive definite and such that $P_G$ is Markov over $G$. For an introduction to graphical models see, for instance, Lauritzen (1996) or Whittaker (1990).

Typically the association structure of $X$ is uncertain and thus has to be inferred from the data. This calls for a statistical procedure to select a class of graphs and, consequently, of graphical models, which can account for most of the model uncertainty.

A Bayesian approach is particularly suited for the latter purpose; see, for instance, the paper by Madigan and Raftery (1995). It requires calculating the posterior probability of each graph included in the comparison.

A graphical Gaussian model is parameterised by its precision matrix $K_G = \Sigma_G^{-1}$, so that Bayesian inference for these models requires the introduction of a class of prior distributions on $K_G$. Dawid and Lauritzen (1993) proposed a class of hyper Markov priors which, essentially, reproduces the factorisations of the likelihood at the prior level. Such a class presents many advantages, particularly from an operational point of view, since it allows most of the inferences to be performed locally, on a suitably chosen collection of subvectors of $X$ related to the cliques and the separators of $G$. However, this method can deal only with decomposable graphical models.

A different class of priors, named global, has been proposed in Giudici (1996). Such a class can be derived for any graphical structure, following the conditional approach in Dickey (1971). However, when the graphical model is not decomposable, an analytic derivation of the global prior is typically not attainable and, consequently, it is not possible to obtain closed-form expressions for the posterior probability of the model.

The main objective of this paper is to provide a simulation-based method to calculate the posterior probability of a nondecomposable graphical Gaussian model. Our proposed methodology allows Bayesian model selection within the whole class of graphical models, including nondecomposable ones. Nondecomposable graphs arise frequently, as discussed in Cox and Wermuth (1993), and thus being able to deal with them also has an important practical motivation.

The plan of the paper is as follows. After some preliminary background in Section 2, Section 3 states the required computational task, whereas Section 4 contains the proposed Monte Carlo method to solve it. In Section 5 the methodology is applied to perform Bayesian model selection on Frets' data (see, for instance, Whittaker, 1990). Finally, Section 6 contains some concluding remarks.

2 SOME PRELIMINARY BACKGROUND

Let $X = (X_1, \ldots, X_p)^T$ be a vector of $p \ge 3$ continuous random variables. Denote by $V = \{1, \ldots, p\}$ the index set and let, for $A \subseteq V$, $X_A = (X_a : a \in A)$ be a collection of random variables. Let $P$ be the probability distribution of $X$, defined on the measurable space $(\mathcal{X}, \mathcal{F})$, with $\mathcal{X} \subseteq \mathbb{R}^p$ and $\mathcal{F}$ a Borel $\sigma$-algebra on $\mathcal{X}$.

Let then $G = (V, E)$ be an undirected graph, in which $V$ is a set of nodes and $E$ is a set of undirected edges between them. Given a graph $G$, we shall assume $P$ Markov over $G$. In order to express this relationship between $P$ and $G$, the probability distribution will be denoted by $P_G$. More precisely, we shall assume
\[
P_G = N_p(0, K_G^{-1}), \qquad (1)
\]
where $N_p(0, K_G^{-1})$ indicates a multivariate Gaussian distribution with zero expected value and precision $K_G$, such that $P_G$ is Markov over $G$ (see, e.g., Dawid and Lauritzen, 1993). The statistical model specified in (1) is known as a graphical Gaussian model (Speed and Kiiveri, 1986). To understand the constraints imposed on $K_G$ by the graphical structure, we need a preliminary definition.

Consider an arbitrary multivariate Gaussian distribution, with $K_G$ positive definite. Let then $(A, B, S)$ be a partition of $V$, for which
\[
K_G = \begin{pmatrix} K_{AA} & K_{AB} & K_{AS} \\ K_{BA} & K_{BB} & K_{BS} \\ K_{SA} & K_{SB} & K_{SS} \end{pmatrix}.
\]
It is then known that the class of probability distributions in (1) is Markov over $G$ if and only if, for each such partition $(A, B, S)$ with $A$ and $B$ separated by $S$, $K_{AB} = 0$ (see, e.g., Dempster, 1972). As a consequence, a missing edge between any two nodes, say $i$ and $j$, is equivalent to setting $k_{ij} = 0$ in the precision matrix, which in turn is equivalent to the conditional independence statement $X_i \perp\!\!\!\perp X_j \mid X_{V \setminus \{i,j\}}$.

Given a random sample of $n$ $p$-variate observations from $P_G$, $X^{(n)} = x^{(n)}$, our objective is to perform structural learning, that is, to establish which graphs (and thus which graphical models) can explain most of the model uncertainty, in the sense of Madigan and Raftery (1995). To account for model uncertainty, we shall follow a Bayesian approach and calculate the posterior probabilities of each graph.
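To make the zero-constraint concrete, here is a small numerical illustration (ours, not from the paper; the matrix and the helper partial_corr are made up for the example) of the equivalence between a zero off-diagonal precision entry and a zero partial correlation, assuming numpy is available.

```python
# Illustrative only: a 4x4 precision matrix with k_14 = k_23 = 0
# (1-based indices as in the text), and the implied partial correlations.
import numpy as np

K = np.array([[ 2.0, -0.5, -0.4,  0.0],
              [-0.5,  2.0,  0.0, -0.3],
              [-0.4,  0.0,  2.0, -0.6],
              [ 0.0, -0.3, -0.6,  2.0]])

def partial_corr(K, i, j):
    """Partial correlation of X_i and X_j given the remaining variables."""
    return -K[i, j] / np.sqrt(K[i, i] * K[j, j])

print(partial_corr(K, 0, 3))  # 0.0: edge (1,4) missing from the graph
print(partial_corr(K, 1, 2))  # 0.0: edge (2,3) missing from the graph
print(partial_corr(K, 0, 1))  # nonzero: edge (1,2) present
```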

Let $\{G_1, \ldots, G_m\}$ be a collection of graphs describing alternative association structures for $X$.

Conditionally on a graph, say $G_i$, let $S = (x^{(n)})(x^{(n)})^T$ indicate the observed sum-of-products matrix. The likelihood of the graphical Gaussian model specified by $P_{G_i}$ is then
\[
p(x^{(n)} \mid K_{G_i}) = (2\pi)^{-\frac{np}{2}} \left(\det K_{G_i}\right)^{\frac{n}{2}} \exp\left\{-\tfrac{1}{2}\,\mathrm{tr}(S K_{G_i})\right\}. \qquad (2)
\]
Let $Q_i$ indicate the class of all $p \times p$ symmetric and positive definite matrices $K_{G_i}$ such that $P_{G_i}$ is Markov over $G_i$. Following the Bayesian approach, in the next section we introduce a conjugate prior on $Q_i$. A suitable prior, able to deal with all graphical Gaussian models, is the global prior proposed in Giudici (1996).
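As a side note, the log of the likelihood (2) is straightforward to evaluate; the following minimal sketch (ours, assuming numpy) may help fix the notation.

```python
# Log of the likelihood (2) for a given precision matrix K and
# sum-of-products matrix S computed from n observations.  Illustrative only.
import numpy as np

def log_likelihood(K, S, n):
    p = K.shape[0]
    sign, logdet = np.linalg.slogdet(K)
    if sign <= 0:
        raise ValueError("K must be positive definite")
    return (-0.5 * n * p * np.log(2.0 * np.pi)
            + 0.5 * n * logdet
            - 0.5 * np.trace(S @ K))
```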

3 PRIOR DISTRIBUTIONS

3.1 Decomposable graphs

We first need to recall briefly some important graph-theoretic notions; for more details see, for instance, Dawid and Lauritzen (1993). First of all, a graph is said to be complete if all vertices are joined. A clique is a maximal complete subset, and a subset of vertices, say $C \subseteq V$, is said to separate two subsets, say $A$ and $B$, if every sequence of connected vertices from $A$ to $B$ intersects $C$. A pair $(A, B)$ of subsets of the vertex set $V$ of an undirected graph $G$ is said to form a decomposition of $G$ if: (i) $V = A \cup B$; (ii) $A \cap B$ is complete; (iii) $A \cap B$ separates $A$ from $B$. If both $A$ and $B$ are proper subsets of $V$ the decomposition is said to be proper. Finally, an undirected graph $G$ is said to be decomposable if it is complete, or if there exists a proper decomposition $(A, B)$ into decomposable subgraphs $G_A$ and $G_B$. More operationally, a graph is decomposable if it can be decomposed into its cliques.

Suppose now that $G_i$ is complete, so that the concentration matrix $K_{G_i}$ in (2) is not constrained. In this case we shall write $K$ instead of $K_{G_i}$. A prior distribution on $K$, conjugate to the likelihood in (2), is the Wishart distribution, denoted by $W_p(\delta, \Sigma)$, which takes values in the space of positive definite $p \times p$ matrices. Its density with respect to $p(p+1)/2$-dimensional Lebesgue measure (for the components $k_{j_1 j_2}$, $j_1 \le j_2$) can be written
\[
f(K) = f\big((k_{j_1 j_2},\ j_1 \le j_2)\big) = \frac{(\det \Sigma)^{-\frac{\delta}{2}}\,(\det K)^{\frac{\delta - p - 1}{2}} \exp\left\{-\tfrac{1}{2}\,\mathrm{tr}(K \Sigma^{-1})\right\}}{2^{\frac{\delta p}{2}}\, \Gamma_p\!\left(\tfrac{\delta}{2}\right)}, \qquad (3)
\]
where $\delta > p - 1$ is a fixed constant and $\Sigma$ a $p \times p$ fixed positive definite matrix. Furthermore, $\Gamma_p(\tfrac{\delta}{2})$ is the multivariate gamma function,
\[
\Gamma_p\!\left(\tfrac{\delta}{2}\right) = \pi^{\frac{p(p-1)}{4}} \prod_{i=1}^{p} \Gamma\!\left(\frac{\delta - i + 1}{2}\right).
\]
We recall that, when $K$ is distributed as specified in (3), $E(K) = \delta \Sigma$.
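For reference, a minimal sketch (ours, assuming numpy and scipy are available; illustrative only) of the log of the density (3) in this parameterisation is given below.

```python
# Log of the Wishart density (3): K ~ W_p(delta, Sigma) with E(K) = delta * Sigma.
import numpy as np
from scipy.special import multigammaln

def log_wishart_density(K, delta, Sigma):
    p = Sigma.shape[0]
    logdet_K = np.linalg.slogdet(K)[1]
    logdet_S = np.linalg.slogdet(Sigma)[1]
    return (-0.5 * delta * logdet_S
            + 0.5 * (delta - p - 1) * logdet_K
            - 0.5 * np.trace(K @ np.linalg.inv(Sigma))
            - 0.5 * delta * p * np.log(2.0)
            - multigammaln(0.5 * delta, p))

# Example: log-density of the identity matrix under W_3(4, I).
print(log_wishart_density(np.eye(3), 4.0, np.eye(3)))
```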

When the graph is not complete, $K_{G_i}$ has to incorporate the graphical constraints on $K$, which can be expressed in terms of zero concentrations. Hence $K_{G_i}$ cannot be distributed as in (3). Let $A_i$ indicate the set of off-diagonal elements of $K$ constrained to be zero by $G_i = (V, E_i)$:
\[
A_i = \{k_{j_1 j_2} : (j_1, j_2) \notin E_i\}.
\]
A way to incorporate the knowledge of $A_i$ into the prior distribution is to follow the conditional approach proposed by Dickey (1971), namely to condition upon the graphical constraints, assuming that the distribution of $K_{G_i}$ has a density specified by
\[
l(K_{G_i}) = f(K \mid A_i) \propto f(K)\, I_{\{A_i\}}(K). \qquad (4)
\]
When $G_i$ is decomposable the expression of the prior density (4) can be obtained analytically and, consequently, when the graphs to be compared are all decomposable, model selection can also be performed exactly. More specifically, model selection will be based on the posterior probability of each graph, $p(G_i \mid x^{(n)})$, obtained as
\[
p(G_i \mid x^{(n)}) \propto p(x^{(n)} \mid G_i)\, p(G_i), \qquad (5)
\]
where $p(G_i)$ is an appropriate prior on the class of graphs under comparison.

When the considered graph is decomposable, Giudici (1996) has shown that, using a global prior, the marginal likelihood of $G_i$, $p(x^{(n)} \mid G_i)$, is given by
\[
p(x^{(n)} \mid G_i) = \frac{\prod_{C \in \mathcal{C}_i} p(x^{(n)}_C)}{\prod_{S \in \mathcal{S}_i} p(x^{(n)}_S)}, \qquad (6)
\]
where $\mathcal{C}_i$ and $\mathcal{S}_i$ are, respectively, the classes of all cliques and separators of $G_i$. The above result is similar to that obtained by Dawid and Lauritzen (1993), using a local class of priors which is, however, applicable only to decomposable graphs.

3.2 Nondecomposable graphs

When a graph is not decomposable, numerical integration methods are needed to derive the prior and to draw subsequent inferences. As a consequence, the marginal likelihood $p(x^{(n)} \mid A_i)$ needs to be calculated numerically. More precisely, let $G_0$ indicate a nondecomposable graph. In order to obtain $p(x^{(n)} \mid A_0)$, we need to calculate
\[
p(x^{(n)} \mid G_0) = \frac{\int p(x^{(n)} \mid K)\, f(K)\, I_{\{A_0\}}(K) \prod_{j_1 \le j_2,\ k_{j_1 j_2} \notin A_0} dk_{j_1 j_2}}{\int f(K)\, I_{\{A_0\}}(K) \prod_{j_1 \le j_2,\ k_{j_1 j_2} \notin A_0} dk_{j_1 j_2}} \qquad (7)
\]
\[
= \frac{\int (\det K)^{\frac{\delta + n - p - 1}{2}} \exp\left\{-\tfrac{1}{2}\,\mathrm{tr}\big(K(\Sigma^{-1} + S)\big)\right\} I_{\{A_0\}}(K) \prod_{j_1 \le j_2,\ k_{j_1 j_2} \notin A_0} dk_{j_1 j_2}}{\int (\det K)^{\frac{\delta - p - 1}{2}} \exp\left\{-\tfrac{1}{2}\,\mathrm{tr}(K \Sigma^{-1})\right\} I_{\{A_0\}}(K) \prod_{j_1 \le j_2,\ k_{j_1 j_2} \notin A_0} dk_{j_1 j_2}}. \qquad (8)
\]
Care has to be taken when interpreting conditional probabilities when conditioning on events of probability zero (see Borel's paradox; for example, Billingsley, 1979). In the above we are considering the regular conditional probabilities of the Wishart matrix, conditional on certain off-diagonal elements taking prescribed values, evaluated where the prescribed values are set to zero. These conditional probabilities are unique under certain smoothness conditions in the conditioned values.

A simulation-based method to calculate the above ratio of integrals for any graphical model will be presented in the next section. However, we believe that, particularly when large graphs are considered, it is important to exploit any possible localisation of the required simulations. We are now going to recall an important result in this direction. We first need a preliminary definition.

Definition. A subgraph $G_A$, $A \subseteq V$, is said to be an irreducible component of $G$ if it cannot be further decomposed. A graph is not decomposable if and only if it contains at least one irreducible component which is not complete.
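Decomposability coincides with chordality of the undirected graph, so it is easy to check in practice. The following small sketch (ours, assuming the networkx package is available) illustrates this for the chordless four-cycle that will be used as the running example in Section 4.1.

```python
# An undirected graph is decomposable iff it is chordal.
import networkx as nx

four_cycle = nx.Graph([(1, 2), (2, 4), (4, 3), (3, 1)])   # the chordless four-cycle
print(nx.is_chordal(four_cycle))          # False: the graph is nondecomposable
print(list(nx.find_cliques(four_cycle)))  # its maximal cliques are just the four edges

chordal = nx.Graph([(1, 2), (2, 4), (4, 3), (3, 1), (1, 4)])  # add a chord
print(nx.is_chordal(chordal))             # True: now the graph is decomposable
```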

Let $\mathcal{R}$ indicate the class of all maximal irreducible components of a graph. Let $G^R = (V^R, E_{V^R})$ be the subgraph of $G$ containing all vertices that belong only to irreducible components which are complete (cliques). It then follows that
\[
p(x^{(n)} \mid G_0) = p(x^{(n)}_{V^R} \mid G^R) \times p(x^{(n)}_{V \setminus V^R} \mid x_{V^R}, G),
\]
and, consequently, $p(x^{(n)} \mid G_0)$ can be broken into two factors, the first of which, involving the subgraph $G^R$, can be obtained exactly, as in (6). On the other hand, the second term requires calculating numerically two integrals, involving kernels essentially similar to those involved in (8) and, consequently, computable with our proposed method. Therefore, exact and Monte Carlo methods can be combined in order to reduce the computational burden involved in graphical model selection.

4 THE PROPOSED MONTE CARLO METHOD

Our objective is to calculate the marginal likelihood $p(x^{(n)} \mid G_0)$ in (8). Each integrand in (8) is the kernel of a Wishart distribution, the numerator having parameters $(\delta + n, (\Sigma^{-1} + S)^{-1})$ and the denominator having parameters $(\delta, \Sigma)$. These kernels are integrated over a restricted space defined by $A_0$. We propose an importance sampling methodology for computing both the numerator and the denominator, that is, for obtaining the normalising constants of the corresponding conditional Wishart densities.

Consider the problem of sampling a random symmetric matrix from a Wishart $W(\delta^*, \Sigma^*)$, conditional on $A_0$, with $\delta^*$ an integer such that $\delta^* > p - 1$. Let $Z_i^T = (Z_{i1}, \ldots, Z_{ip})$, for $1 \le i \le \delta^*$, be a collection of $\delta^*$ random vectors, distributed as i.i.d. $N_p(0, \Sigma^*)$. Then it is well known that
\[
U = \sum_{i=1}^{\delta^*} Z_i Z_i^T \sim W(\delta^*, \Sigma^*).
\]
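This construction is easy to illustrate numerically; the sketch below (ours, assuming numpy; the value of Sigma_star is arbitrary) forms a Wishart draw as a sum of outer products.

```python
# Summing delta_star outer products of i.i.d. N_p(0, Sigma_star) vectors
# gives a draw from the Wishart distribution W(delta_star, Sigma_star).
import numpy as np

rng = np.random.default_rng(1)
p, delta_star = 4, 10
Sigma_star = 0.5 * np.eye(p) + 0.5 * np.ones((p, p))   # an arbitrary positive definite matrix

Z = rng.multivariate_normal(np.zeros(p), Sigma_star, size=delta_star)
U = Z.T @ Z                     # equals sum_i Z_i Z_i^T
print(np.allclose(U, sum(np.outer(z, z) for z in Z)))   # True
print(np.linalg.eigvalsh(U).min() > 0)                  # positive definite (a.s., since delta_star > p - 1)
```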

We intend to translate the conditioning constraint on $K$ into a more tractable constraint on the $Z_i$ vectors. First we compute the marginal distribution of $U$ restricted to $E_0 \cap \{i < j\}$, for $i \in V$ and $j \in V$.

Let $T = \{(i, j) : (i, j) \notin E_0,\ i < j\}$ and $T^i = \{j : (i, j) \in T\}$. For each $i$ such that $T^i \ne \emptyset$, partition $\Sigma^*$ as
\[
\Sigma^{*i} = \begin{pmatrix} \Sigma^{*i}_{11} & \Sigma^{*i}_{12} \\ \Sigma^{*i}_{21} & \Sigma^{*i}_{22} \end{pmatrix},
\]
by a suitable permutation matrix which leads to $\Sigma^{*i}_{11}$ containing all elements $\sigma^*_{jk}$ such that $j \in T_i$ and $k \in T_i$. Now we can express the joint density of the $\{Z_i\}$ as
\[
f(z_1, \ldots, z_{\delta^*}) = \prod_{i=1}^{\delta^*} f(z^-_i)\, f(z^+_i \mid z^-_i)
\propto \exp\Big\{-\tfrac{1}{2} \sum_{i=1}^{\delta^*} (z^-_i)^T (\Sigma^{*i}_{22})^{-1} z^-_i\Big\}
\times \exp\Big\{-\tfrac{1}{2} \sum_{i=1}^{\delta^*} \big[z^+_i - \Sigma^{*i}_{12} (\Sigma^{*i}_{22})^{-1} z^-_i\big]^T \big[\Sigma^{*i}_{11.2}\big]^{-1} \big[z^+_i - \Sigma^{*i}_{12} (\Sigma^{*i}_{22})^{-1} z^-_i\big]\Big\},
\]
where $\Sigma^{*i}_{11.2} = \Sigma^{*i}_{11} - \Sigma^{*i}_{12} (\Sigma^{*i}_{22})^{-1} \Sigma^{*i}_{21}$, $z^+_i = \{z_{ij} : j \in T_i,\ i < j\}$ and $z^-_i$ denotes the remaining components of $Z_i$.

Therefore, the marginal density of $(z^-_1, \ldots, z^-_{\delta^*})$ is given by
\[
f(z^-_1, \ldots, z^-_{\delta^*}) \propto \exp\Big\{-\tfrac{1}{2} \sum_{i=1}^{\delta^*} (z^-_i)^T (\Sigma^{*i}_{22})^{-1} z^-_i\Big\}. \qquad (9)
\]
Sampling from the unconditional Wishart distribution, we shall use (9) as our importance sampling density. Now introduce the following transformation of the $\{z_i\}$:
\[
Z'_{ij} = \begin{cases} H_{ij} & \text{if } j \in T_i \text{ and } i < j, \\ Z_{ij} & \text{otherwise}, \end{cases}
\qquad \text{where } H_{ij} = \sum_{l=1}^{\delta^*} z_{li} z_{lj}.
\]
Let $(Z')^T = (z'_{11}, z'_{12}, \ldots, z'_{1p}, z'_{21}, z'_{22}, \ldots, z'_{2p}, \ldots, z'_{\delta^* 1}, z'_{\delta^* 2}, \ldots, z'_{\delta^* p})$. The Jacobian of the transformation $Z' = g(Z)$, $|J| = \big|\tfrac{d(Z')}{d(Z)}\big|$, is easily computed for specific examples.

Notice that the conditioning event $A_0$ can be written as a constraint on $Z'$: $A_0 = \{z'_{ij} = 0,\ \forall (i, j) \in T\}$.

Finally, the joint density of $Z'$ can be written as
\[
f_{Z'}(z') = |J|^{-1} f_Z\big(g^{-1}(z')\big), \qquad (10)
\]
and therefore the conditional density of $Z' \mid A_0$ is proportional to $f_{Z'}(z')\, I_{\{A_0\}}(K)$.

Thus, by sampling from the unconditional Wishart and from (9) and (10), using as importance weight the ratio between expression (10) and (9), we have constructed an importance sampling algorithm for generating from a conditional Wishart distribution and, therefore, for solving our computational problem.

4.1 Example

Consider the simplest nondecomposable graph, the chordless four-cycle in Figure 1 below.

Figure 1 about here

The graph in Figure 1 implies that $T = \{(1, 4), (2, 3)\}$, $T^1 = \{4\}$ and $T^2 = \{3\}$. Each $Z_i$ is a 4-dimensional random vector, $Z_i = (Z_{i1}, \ldots, Z_{i4})$. The graphical constraints are translated onto the vectors $Z_1$ and $Z_2$, namely $z^+_1 = \{z_{14}\}$ and $z^+_2 = \{z_{23}\}$. Correspondingly, we set
\[
Z'_{14} = H_{14}, \qquad Z'_{23} = H_{23}, \qquad Z'_{ij} = Z_{ij} \ \text{otherwise},
\]
where $H_{14} = \sum_{l=1}^{\delta^*} z_{l1} z_{l4}$ and $H_{23} = \sum_{l=1}^{\delta^*} z_{l2} z_{l3}$. Consequently, since $\partial H_{14}/\partial z_{14} = z_{11}$ and $\partial H_{23}/\partial z_{23} = z_{22}$, we have $|J| = |z_{11} z_{22}|$.

Finally, the 10 distinct elements of $\Sigma^*$ are partitioned twice, for $i = 1$ and $i = 2$. For the former we have
\[
\Sigma^{*1}_{11} = \begin{pmatrix} \sigma_{11} & \sigma_{14} \\ \sigma_{41} & \sigma_{44} \end{pmatrix}, \qquad
\Sigma^{*1}_{12} = \begin{pmatrix} \sigma_{13} & \sigma_{12} \\ \sigma_{34} & \sigma_{24} \end{pmatrix}, \qquad
\Sigma^{*1}_{22} = \begin{pmatrix} \sigma_{22} & \sigma_{23} \\ \sigma_{32} & \sigma_{33} \end{pmatrix}.
\]
On the other hand, for the latter,
\[
\Sigma^{*2}_{11} = \begin{pmatrix} \sigma_{22} & \sigma_{23} \\ \sigma_{32} & \sigma_{33} \end{pmatrix}, \qquad
\Sigma^{*2}_{12} = \begin{pmatrix} \sigma_{13} & \sigma_{12} \\ \sigma_{34} & \sigma_{24} \end{pmatrix}, \qquad
\Sigma^{*2}_{22} = \begin{pmatrix} \sigma_{11} & \sigma_{14} \\ \sigma_{41} & \sigma_{44} \end{pmatrix}.
\]
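Under our reading of the scheme, the importance sampling estimator for this example can be sketched as follows. This is our illustration, not the authors' Fortran routine; the function names and the Monte Carlo size are made up, and the code is unoptimised. It estimates the joint density, at zero, of the constrained entries $(U_{14}, U_{23})$ of an unconstrained Wishart draw, which is the factor needed for each restricted integral in (8), using the weight (10)/(9), that is, the conditional Gaussian density of the solved components divided by the Jacobian $|z_{11} z_{22}|$.

```python
# Importance sampling estimate of the joint density at (0, 0) of the
# constrained Wishart entries (U_14, U_23) for the chordless four-cycle,
# where U = sum_l Z_l Z_l^T with Z_l i.i.d. N_4(0, Sigma_star).  Ours; illustrative.
import numpy as np

rng = np.random.default_rng(0)

def conditional_normal(Sigma, target, given_idx, given_vals):
    """Mean and variance of Z[target] | Z[given_idx] = given_vals under N(0, Sigma)."""
    S12 = Sigma[target, given_idx]
    S22 = Sigma[np.ix_(given_idx, given_idx)]
    coef = np.linalg.solve(S22, S12)
    return coef @ given_vals, Sigma[target, target] - coef @ S12

def estimate_constrained_density(Sigma_star, delta_star, n_mc=20_000):
    p, est = 4, 0.0
    for _ in range(n_mc):
        # Draw all replicates; only the free components z^- are actually used.
        Z = rng.multivariate_normal(np.zeros(p), Sigma_star, size=delta_star)
        # Solve U_14 = 0 for z_{1,4} and U_23 = 0 for z_{2,3} (0-based: Z[0,3], Z[1,2]).
        z14 = -(Z[1:, 0] @ Z[1:, 3]) / Z[0, 0]
        z23 = -(np.delete(Z[:, 1], 1) @ np.delete(Z[:, 2], 1)) / Z[1, 1]
        # Weight: conditional Gaussian density of the solved components given the
        # free components of the same replicate, divided by the Jacobian |z_11 z_22|.
        m1, v1 = conditional_normal(Sigma_star, 3, [0, 1, 2], Z[0, :3])
        m2, v2 = conditional_normal(Sigma_star, 2, [0, 1, 3], Z[1, [0, 1, 3]])
        w1 = np.exp(-0.5 * (z14 - m1) ** 2 / v1) / np.sqrt(2 * np.pi * v1)
        w2 = np.exp(-0.5 * (z23 - m2) ** 2 / v2) / np.sqrt(2 * np.pi * v2)
        est += w1 * w2 / abs(Z[0, 0] * Z[1, 1])
    return est / n_mc
```

The same function can be called with the prior parameters $(\delta, \Sigma)$ and with the posterior parameters $(\delta + n, (\Sigma^{-1} + S)^{-1})$; note that the importance weights can be heavy-tailed, so the estimator may be noisy for small Monte Carlo sizes.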

4.2 Structure of the Jacobian

It is interesting to study further the structure of the Jacobian. We have the following proposition, which we state without proof. Let $Y$ be the $2 \times |T|$ matrix
\[
Y = \begin{pmatrix} 1 & \ldots & 1 & 2 & \ldots & 2 & \ldots \\ j^1_1 & \ldots & j^{|T_1|}_1 & j^1_2 & \ldots & j^{|T_2|}_2 & \ldots \end{pmatrix},
\]
where $\{j^k_i,\ 1 \le k \le |T_i|\}$ lists the elements of $T_i$. In other words, the second row of $Y$ lists the elements of $T$ lexicographically.

Proposition 1. Let $M$ be the $|T| \times |T|$ matrix such that
\[
M_{lm} = \begin{cases} Z_{i,i} & \text{if } l = m,\ Y_{1l} = i, \\ Z_{i_1, i_2} & \text{if } Y_{2l} = Y_{2m},\ Y_{1l} \ne Y_{1m}, \\ 0 & \text{otherwise}. \end{cases}
\]
Then $|J| = |M|$.

Remark. Therefore, for any graph with the property that every vertex is connected to at least all but one of the other vertices,
\[
|J| = \prod_{i=1}^{p-1} (Z_{ii})^{|T_i|}.
\]
Unfortunately the vast majority of such graphs are decomposable, the previous example being a notable exception.
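The Jacobian structure is easy to check numerically. The short sketch below (ours, assuming numpy) differentiates the map $(z_{14}, z_{23}) \mapsto (H_{14}, H_{23})$ for the four-cycle by finite differences and compares the determinant with $|z_{11} z_{22}|$, as given by the Remark.

```python
# Numerical check that |J| = |z_11 z_22| for the four-cycle of Section 4.1.
import numpy as np

rng = np.random.default_rng(2)
delta_star = 6
Z = rng.normal(size=(delta_star, 4))      # rows play the role of Z_1, ..., Z_delta*

def constrained_entries(z14, z23, Z):
    Zc = Z.copy()
    Zc[0, 3], Zc[1, 2] = z14, z23         # z_{14} and z_{23} (1-based indices in the text)
    U = Zc.T @ Zc
    return np.array([U[0, 3], U[1, 2]])   # (H_14, H_23)

eps = 1e-6
base = constrained_entries(Z[0, 3], Z[1, 2], Z)
J = np.column_stack([
    (constrained_entries(Z[0, 3] + eps, Z[1, 2], Z) - base) / eps,
    (constrained_entries(Z[0, 3], Z[1, 2] + eps, Z) - base) / eps,
])
print(abs(np.linalg.det(J)), abs(Z[0, 0] * Z[1, 1]))   # agree up to floating-point error
```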

5 APPLICATION

To illustrate our method, we shall now analyse Frets' data, described in Whittaker (1990), concerning head measurements on pairs of sons in a sample of 25 families. The random variables of interest are: $X_1$ = head length of the first son; $X_2$ = head breadth of the first son; $X_3$ = head length of the second son; $X_4$ = head breadth of the second son. The observed sum-of-products matrix is
\[
S = \begin{pmatrix}
2287.05 & 1268.85 & 1671.87 & 1106.67 \\
1268.85 & 1304.65 & 1231.47 & 841.27 \\
1671.87 & 1231.47 & 2419.35 & 1356.95 \\
1106.67 & 841.27 & 1356.95 & 1080.55
\end{pmatrix}.
\]
Since $p = 4$, the number of possible graphs is equal to $2^{\binom{4}{2}} = 64$, including three which are not decomposable. However, in order to better illustrate and compare our results, we have decided to consider only the graphs suggested by subject-matter considerations, namely those connecting $X_1$ with $X_2$ and $X_3$ with $X_4$, with no more than two missing edges.

Therefore, the number of competing graphs is equal to 11. Only two of the latter are nondecomposable. However, for completeness, we consider all three possible nondecomposable graphs, along with the 9 decomposable ones, for a total of 12 graphs. It can be shown that, when all 64 possible graphs are considered, the posterior probability attached to the excluded graphs is practically negligible.

Although the marginal likelihoods of the decomposable graphs can be obtained analytically, as in (6), for comparability purposes all marginal likelihoods have been computed following the Monte Carlo method outlined in the last section, which we have implemented in a Fortran routine.

Let $\Sigma$ be the prior precision matrix corresponding to the unconstrained model in (3). It is necessary to fix $\delta$ and $\Sigma$. As a prior precision matrix we shall consider a $4 \times 4$ intra-class correlation structure:
\[
\Sigma = \tau \begin{pmatrix}
1 & \rho & \rho & \rho \\
\rho & 1 & \rho & \rho \\
\rho & \rho & 1 & \rho \\
\rho & \rho & \rho & 1
\end{pmatrix}. \qquad (11)
\]
Notice that, to guarantee that $|\Sigma| > 0$, since $|\Sigma| = \tau^p (1 - \rho)^{p-1} (1 + \rho(p - 1))$, we need to have $\rho > -\tfrac{1}{p-1}$ and $\rho < 1$. Three hyperparameters have to be specified a priori: $\delta$, $\rho$ and $\tau$. Recall that $E(K) = \delta \Sigma$. Thus, $\rho$ determines the prior partial correlation between each pair of random variables (according to the inverse variance lemma; see, e.g., Whittaker, 1990), $\tau^{-1}$ determines the (common) prior scale of each random variable and, finally, $\delta$ regulates the relative importance of the prior.

For illustrative purposes, in this application we present results for $\delta$ equal to its minimum possible integer value ($\delta = 4 > p - 1$). Concerning the remaining hyperparameters, we perform a sensitivity analysis with respect to the values $\rho = (-.25, 0, .25)$ and $\tau = (10^{-3}, 1, 10^{3})$.

For each of the considered graphs, and the nine hyperparameter combinations just discussed, we obtain, following the Monte Carlo method proposed in the last section, a sample from a Wishart distribution with parameters $(S + \Sigma^{-1})^{-1}$ and $(n + \delta)$, conditional on the precision elements constrained to be zero, and we approximate the corresponding marginal likelihood in (8).
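To show how the pieces fit together, the following sketch (ours, not the authors' Fortran implementation; it assumes numpy and scipy, and refers in a comment to the hypothetical estimate_constrained_density function from the Section 4.1 sketch) assembles, for one graph and one hyperparameter combination, the quantities entering the log of the marginal likelihood (8).

```python
# Assembling the log marginal likelihood for the four-cycle graph on Frets' data:
# the intra-class prior scale (11), the sum-of-products matrix S, and the two
# unconstrained Wishart normalising constants appearing in (8).  Illustrative only.
import numpy as np
from scipy.special import multigammaln

S = np.array([[2287.05, 1268.85, 1671.87, 1106.67],
              [1268.85, 1304.65, 1231.47,  841.27],
              [1671.87, 1231.47, 2419.35, 1356.95],
              [1106.67,  841.27, 1356.95, 1080.55]])
n, p, delta = 25, 4, 4
tau, rho = 1.0, 0.25
Sigma = tau * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))   # equation (11)

def log_wishart_norm(nu, V):
    """log of int det(K)^((nu-p-1)/2) exp(-tr(V^{-1}K)/2) dK over positive definite K."""
    p = V.shape[0]
    return (0.5 * nu * p * np.log(2.0)
            + 0.5 * nu * np.linalg.slogdet(V)[1]
            + multigammaln(0.5 * nu, p))

V_post = np.linalg.inv(np.linalg.inv(Sigma) + S)   # posterior Wishart scale
log_c_post = log_wishart_norm(delta + n, V_post)
log_c_prior = log_wishart_norm(delta, Sigma)
# Up to the factor (2*pi)^{-np/2}, the log marginal likelihood is
#   log_c_post - log_c_prior
#   + log m_post(0) - log m_prior(0),
# where m_post(0) and m_prior(0) are the constrained-entry densities at zero,
# estimated e.g. by estimate_constrained_density(V_post, delta + n) and
# estimate_constrained_density(Sigma, delta) from the Section 4.1 sketch.
```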

Finally, we have taken, as a prior on the graphs under comparison, a simple uniform prior, $p(G) \propto m^{-1}$, where $m$ is the number of graphs under comparison. This allows the posterior probability of each graph to be obtained as
\[
p(G_i \mid x^{(n)}) \propto p(x^{(n)} \mid G_i).
\]
The resulting posterior probabilities $p(G_i \mid x^{(n)})$ are reported in Table 1. Each graph is described by its missing edges.

Table 1 about here

Table 1 indicates that the graph receiving the highest posterior probability is the nondecomposable graph with missing edges $(1,4)$ and $(2,3)$. In general, the graphs with the highest posterior probabilities are typically those with exactly two missing edges, a result which is consistent with physical intuition. An exception to this occurs for very low values of $\tau$. Notice also that the posterior probabilities of the graphs do not appear to be overly sensitive to $\rho$. However, when $\rho$ conflicts in absolute value with the observed correlation coefficient (for instance, in our case, when $\rho = 0$), more complex graphs receive even lower posterior probability.

Finally, for completeness, we have also performed exact calculations for the 9 decomposable graphs. The results of these calculations are in agreement with the obtained numerical results. We remark that the nondecomposable graph corresponding to the first constraint is the one which would be selected by a classical procedure (see, e.g., Whittaker, 1990).

6 COMMENTS AND EXTENSIONS

An alternative procedure for performing the sampling from the conditional Wishart, using the same importance weights as derived in Section 4, uses the independence sampler (see, e.g., Tierney, 1994). Using the independence sampler allows flexibility in extending our methodology in several ways.

For instance, a mean parameter $\mu$ can parametrise the graphical Gaussian model of Section 2. Such a nuisance term can be given an appropriate prior distribution, for instance $p$-variate normal, with mean $m$ and variance-covariance matrix proportional to $(K_G)^{-1}$. The hyperparameters, $m$ and the proportionality constant, can be taken as fixed, or given appropriate hyperprior distributions.

Similarly, a hyperprior distribution can be placed on the pair $(\delta, \Sigma)$, although care is needed in the specification of the structure of the latter, and a fully general prior is not possible because of the presence of normalising constants which are not available for nondecomposable graphs.

Note that, by Proposition 1, when $T$ is large, the Jacobian appearing in the importance weights will be highly variable and the resulting estimation procedure extremely inefficient. Therefore, the decomposition strategy described in Section 3 is crucial to the practical implementation of our method for large graphs.

One problem with conditional priors is the difficulty caused by the possibility of Borel's paradox. We argue that in many applications, and particularly those common in graphical modelling, a natural parameterisation exists, so that there is a correspondingly natural regular conditional probability distribution reflecting our constraints.

REFERENCES

Billingsley, P. (1979). Probability and measure. Wiley, New York.

Cox, D.R. and Wermuth, N. (1993). Linear dependencies represented by chain graphs. Statist. Sci. 8, 204-283.

Dawid, A.P. and Lauritzen, S.L. (1993). Hyper Markov laws in the statistical analysis of decomposable graphical models. Ann. Statist. 21, 1272-1317.

Dickey, J.M. (1971). The weighted likelihood ratio, linear hypotheses on normal location parameters. Ann. Math. Statist. 42, 204-223.

Dempster, A.P. (1972). Covariance selection. Biometrics 28, 157-175.

Giudici, P. (1996). Learning in graphical Gaussian models. In Bayesian Statistics 5 (J.M. Bernardo, J.O. Berger, A.P. Dawid and A.F.M. Smith, eds), Oxford University Press, Oxford.

Lauritzen, S.L. (1996). Graphical models. Oxford University Press, Oxford.

Madigan, D. and Raftery, A.E. (1995). Model selection and accounting for model uncertainty in graphical models using Occam's window. J. Amer. Statist. Assoc. 89, 1535-1546.

Speed, T.P. and Kiiveri, H. (1986). Gaussian Markov distributions over finite graphs. Ann. Statist. 14, 138-150.

Tierney, L. (1994). Markov chains for exploring posterior distributions. Ann. Statist. 22, 1701-1762.

Whittaker, J. (1990). Graphical models in applied multivariate statistics. Wiley, New York.

Figure 1: Minimal nondecomposable graph.

[Figure 1 shows the chordless four-cycle on vertices 1, 2, 3 and 4, with edges (1,2), (1,3), (2,4) and (3,4); edges (1,4) and (2,3) are absent.]


Table 1: Posterior probabilities of the considered graphs.

Missing edges     ρ = -.25                      ρ = 0                         ρ = .25
                  τ=1      τ=10^3    τ=10^-3    τ=1      τ=10^3    τ=10^-3    τ=1      τ=10^3    τ=10^-3
(1,2), (3,4)      .106     .151      8E-2       .114     .085      .064       .113     .092      .088
(1,4), (2,3)      .398     .575      .013       .428     .448      .104       .453     .492      .121
(1,3), (2,4)      .350     .273      .011       .362     .391      .085       .356     .343      .112
(2,3), (2,4)      .036     3E-2      5E-2       .026     .015      .021       .034     .021      .051
(1,3), (2,3)      .039     3E-2      6E-2       .047     .034      .021       .043     .045      .048
(1,3), (1,4)      .070     1E-2      6E-2       7E-2     .026      .035       9E-2     .006      .112
(1,4), (2,4)      7E-3     5E-3      2E-2       .022     7E-2      .042       5E-3     5E-3      .049
(2,4)             2E-3     6E-8      .010       1E-3     5E-8      .085       6E-4     2E-5      .072
(2,3)             7E-3     1E-7      .015       4E-3     1E-7      .127       1E-3     5E-5      .096
(1,4)             3E-3     9E-8      .010       1E-3     7E-8      .090       7E-4     2E-8      .072
(1,3)             6E-3     1E-7      .012       4E-3     1E-7      .104       1E-3     5E-7      .084
Complete          1E-5     1E-14     .928       4E-5     3E-14     .222       1E-6     9E-13     .094


