%This is a plain Tex file \magnification=1200 \overfullrule=0pt \def\Square{\hbox{\vrule \vbox{\hrule\phantom{o}\hrule}\vrule}} \def\square{\kern 1pt\hbox{\vrule\vbox{\hrule\phantom{o}\hrule}\vrule}\kern 1pt} \def\l{\ell} \def\parno{\par \noindent} \def\ref#1{\lbrack {#1}\rbrack} \def\leq#1#2{$${#2}\leqno(#1)$$} \def\vvekv#1#2#3{$$\leqalignno{&{#2}&({#1})\cr &{#3}\cr}$$} \def\vvvekv#1#2#3#4{$$\leqalignno{&{#2}&({#1})\cr &{#3}\cr &{#4}\cr}$$} \def\vvvvekv#1#2#3#4#5{$$\leqalignno{&{#2}&({#1})\cr &{#3}\cr &{#4}\cr &{#5}\cr}$$} \def\ekv#1#2{$${#2}\eqno(#1)$$} \def\eekv#1#2#3{$$\eqalignno{&{#2}&({#1})\cr &{#3}\cr}$$} \def\eeekv#1#2#3#4{$$\eqalignno{&{#2}&({#1})\cr &{#3}\cr &{#4}\cr}$$} \def\eeeekv#1#2#3#4#5{$$\eqalignno{&{#2}&({#1})\cr &{#3}\cr &{#4}\cr &{#5}\cr}$$} \def\eeeeekv#1#2#3#4#5#6{$$\eqalignno{&{#2}&({#1})\cr &{#3}\cr &{#4}\cr &{#5}\cr &{#6}\cr}$$} \def\eeeeeekv#1#2#3#4#5#6#7{$$\eqalignno{&{#2}&({#1})\cr &{#3}\cr &{#4}\cr &{#5}\cr&{#6}\cr&{#7}\cr}$$} \def\iint{\int\hskip -2mm\int} \def\iiint{\int\hskip -2mm\int\hskip -2mm\int} \font\liten=cmr10 at 8pt \font\stor=cmr10 at 12pt \font\Stor=cmbx10 at 14pt \centerline{\stor {\bf Supersymmetric Measures}} \centerline{\stor {\bf and Maximum Principles in the Complex Domain}} \vskip 2pt \centerline{Exponential Decay of Green's Function} \vskip 0.5cm \centerline{J.Sj\"ostrand\footnote{*}{\liten Centre de Math\'ematiques, Ecole Polytechnique, F-91128 Palaiseau, France and URA 169, CNRS} and W.M.Wang\footnote{**}{\liten D\'ept. de Math\'ematiques, Universit\'e de Paris Sud, F-91405 Orsay cedex and URA 760, CNRS}} \vskip 1cm \par\noindent \it R\'esum\'e: \liten Nous \'etudions une classe de mesures holomorphes complexes, proches d'une gaussienne complexe. Nous montrons que ces mesures peuvent \^etre r\'eduites \`a un produit de gaussiennes r\'eelles \`a l'aide d'un principe de maximum dans le domaine complexe. 
La motivation de ce probl\`eme est l'\'etude d'une classe d'op\'erateurs de Schr\"odinger al\'eatoires, pour lesquels nous montrons que l'esp\'erance de la fonction de Green d\'ecro\^{\i}t exponentiellement. \bigskip \par\noindent \it Abstract: \liten We study a class of holomorphic complex measures, which is close in an appropriate sense to a complex Gaussian. We show that these measures can be reduced to a product measure of real Gaussians with the aid of a maximum principle in the complex domain. The formulation of this problem has its origin in the study of a certain class of random Schr\"odinger operators, for which we show that the expectation value of the Green's function decays exponentially. \rm \vskip 1cm \par\noindent \it Acknowledgements: \liten We thank B. Helffer for first pointing out the possible applications of the results here to statistical mechanics. The second author also thanks J. M. Bismut for useful conversations regarding supersymmetry. We acknowledge the support of the European Network TMR program FMRX-CT 960001. \rm \vfill\eject \centerline{\bf 1. Introduction.} \medskip We study a class of (normalized) complex holomorphic measures of the form $e^{-\psi_n(x)}d^{2n}x$ in ${\bf R}^{2n}$, where $\psi_n(x)$ is holomorphic in $x$, ${\rm Re\,}\psi_n\ge 0$, and ${\rm Re\,}\psi_n$ grows sufficiently fast at infinity, so that the integral is well defined. (It is not presumed that $e^{-\psi_n(x)}d^{2n}x$ is a product measure.) Moreover we assume that $e^{-\psi_n(x)}$ is ``close", in some sense, to a complex Gaussian in certain regions of the complex space. An example of a normalized complex Gaussian is: $$\det [i(\Delta-E)+1]\,e^{-(i(\sum_{j,k,\vert j-k\vert _1=1}x_j\cdot x_k -\sum_jEx_j\cdot x_j)+\sum_jx_j\cdot x_j)}\prod _{j=1}^n{d^2x_j\over \pi},$$ where $E\in{\bf R}$, $x_j\in{\bf R}^2$, and $x_j\cdot x_k$ is the usual scalar product in ${\bf R}^2$. 
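In the simplest case $n=1$ (a single site, so that the nearest-neighbour sum is empty) the measure above reduces to $(1-iE)\,e^{-(1-iE)x\cdot x}\,{d^2x\over\pi }$, and in polar coordinates ($u=x\cdot x$) its integral is $(1-iE)\int_0^\infty e^{-(1-iE)u}\,du=1$. A minimal numerical confirmation of this normalization (our illustration, assuming Python with scipy):

```python
import numpy as np
from scipy.integrate import quad

# One-site case of the normalized complex Gaussian: Delta = 0, and the
# measure is (1 - iE) exp(-(1 - iE) x.x) d^2x / pi with x in R^2.
# In polar coordinates (u = |x|^2) the integral becomes
#   (1 - iE) * int_0^infty exp(-(1 - iE) u) du  =  1.
E = 1.7                      # an arbitrary real energy
a = 1 - 1j * E

re, _ = quad(lambda u: (a * np.exp(-a * u)).real, 0, 60)
im, _ = quad(lambda u: (a * np.exp(-a * u)).imag, 0, 60)
print(re, im)                # close to 1 and 0: the complex measure is normalized
```

For $n\ge 2$ the determinant prefactor plays the same normalizing role, while, as discussed below, the total variation of the measure grows with $n$.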
Assuming that $f$ does not grow too fast at infinity, we are interested in estimates of integrals of the form $$\int f(x)e^{-\psi_n(x)}d^{2n}x,$$ which are {\it uniform} in $n$, so that eventually we can take the limit $n\to\infty $. Assume (for argument's sake) that $\Vert f\Vert _{\infty }={\cal O}(1)$. Then if $\psi_n(x)$ were real, we would immediately have $$\int f(x)e^{-\psi_n(x)}d^{2n}x={\cal O}(1)$$ uniformly in $n$. However, when $\psi_n(x)$ is complex the same argument will clearly not give us a bound which is uniform in $n$, since typically $$\int |e^{-\psi_n(x)}|d^{2n}x\to\infty$$ as $n\to\infty$, even though $$\int e^{-\psi_n(x)}d^{2n}x=1$$ for all $n$.\smallskip \par In the following, we show that under appropriate conditions (convexity, domain of holomorphy, etc.), this class of measures can be reduced, {\bf uniformly} with respect to the dimension of the space, to a product of real Gaussians. Hence the usual estimates of integrals with respect to positive measures become applicable. \smallskip \par The initial inspiration for this paper comes from random Schr\"odinger operators (which we describe below), where the expectation values of certain spectral quantities can be naturally expressed as the correlation functions of some normalized complex measures in even dimensions. (However, as we will see later, the evenness of the dimension plays no role in our constructions.) Other examples of complex measures arise, for instance, from questions of analyticity of certain quantities in statistical mechanics. 
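For the Gaussian example of this introduction, the dichotomy is completely explicit: the phase is the quadratic form $x\cdot \lbrack (1+i(\Delta -E))\otimes I_2\rbrack x$, so the normalized integral equals $1$ for every $n$, while $\int \vert e^{-\psi _n(x)}\vert d^{2n}x=\vert \det (1+i(\Delta -E))\vert $ grows exponentially with $n$. A short numerical illustration (ours, assuming numpy) on one-dimensional chains:

```python
import numpy as np

# For the normalized Gaussian example, the signed integral is 1 for every n,
# but the total variation  int |e^{-psi_n}| d^{2n}x = |det(1 + i(Delta - E))|
# blows up exponentially.  We compute log|det| for 1-d chains of n sites.
E = 0.3                      # an arbitrary real energy

def log_mass(n):
    delta = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    _, logabsdet = np.linalg.slogdet(np.eye(n) + 1j * (delta - E * np.eye(n)))
    return logabsdet         # log of the total variation mass

masses = [log_mass(n) for n in (5, 20, 80)]
print(masses)                # positive and roughly linear in n: mass ~ e^{cn}
```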
However for concreteness, we only state our results in the random Schr\"odinger case, although it is our belief that the method presented here should prove to be of a general nature, with possible applications to other fields.\bigskip \par We now describe the discrete random Schr\"odinger operator on $\l ^2({\bf Z}^d)$: \ekv{1.1}{H=t\Delta +V,\qquad (0<t\le 1).} The second change of contour is generated by a vector field adapted to the new phase $L$; it requires a convexity condition of the form ${\rm Re\,Hess\,}L\ge c>0$ and some additional conditions on $\nabla L$, which ensure that the resulting flow stays in tube domains around the real axis. (This is in fact why we need to find the first vector field $v_t$ to ensure that the new phase $L$ is such that $\nabla L$ has the required properties.) (See sect. 7, appendix C.)\smallskip \par Under these two changes of contours, the final measure takes the simple form $$e^{-\sum_jz_j\cdot z_j}\prod_j {d^2z_j\over\pi}.$$ We then obtain in sect. 8 that for $t/(|E|+1)$ sufficiently small and $E$ in the appropriate range (depending on $g$), $\langle G_\Lambda(\mu,\nu;E+i\eta)\rangle$ decays exponentially in $|\mu-\nu|$ for all $\Lambda$ sufficiently large, by using weighted estimates on $M_\Lambda^{-1}(\mu,\nu;E)$. The precise estimate is formulated in Theorem 2.1 in sect. 2. \smallskip \par We should mention here that the region of analyticity in $t$ is uniform in $\Lambda $. The construction above does not depend on the fact that we have a nearest neighbour Laplacian (1.2). It works the same way if $\Delta$ is replaced by any other symmetric matrix whose off-diagonal matrix elements decay sufficiently fast. \smallskip \par As we have seen earlier, $\langle G \rangle$ can be expressed as a correlation function of a normalized complex measure. In fact (1.5) shows clearly the link between the present problem and problems in statistical mechanics. ((1.7) is special to the present problem. Our main constructions however do not depend on these special equalities arising from the symmetries of the present problem.)\smallskip \par Before the first in a series of works of B. Helffer and J. 
Sj\"ostrand [HS], where the equation (1.10) (to our knowledge) first appeared in the context of statistical mechanics, one of the main tools to study correlation functions was the cluster expansion--an algebraic way of rearranging the perturbation series (e.g. in $t$). (1.10) provides an alternative way of treating such problems. The advantage (in our opinion) is that there is no combinatorics involved. The mathematics involved is purely analytical and self-contained. Moreover the convexity condition on $L$ that one meets is the natural one.\smallskip \par Another general (more probabilistic) approach to statistical mechanics is by using semi-groups or heat equations. It seems interesting to us to understand what would be the analogue of the construction presented here. \smallskip \par Although, as mentioned earlier, the inspiration for the present paper comes from quite a different source--random Schr\"odinger operators, in the end, the work presented here should be seen as a logical extension of the works of B. Helffer and J. Sj\"ostrand [HS,S1,S2] in statistical mechanics. Indeed one can take the standard example of studying the correlation function for the measure $${{e^{-\sum_{j,k\in\Lambda ,\vert j-k\vert _1=1}tx _j\cdot x _k}\prod _{j\in \Lambda}e^{-k(x_j^2)} dx_j}\over {\int e^{-\sum_{j,k\in\Lambda ,\vert j-k\vert _1=1}tx _j\cdot x _k}\prod _{j\in \Lambda}e^{-k(x_j^2)} dx_j}},\qquad x_j\in {\bf R}.$$ (Assuming that $k$ is such that the measure is well defined.) It seems clear to us that under appropriate conditions on $k$, which essentially amount to assuming that $k$ is analytic and $k\ne 0$ on ${\bf R}^+$, that $k$ does not grow faster than linearly at infinity, and some convexity conditions on $k$ (see Lemma 3.1), the analyticity of the correlation function in $t$ for small $t$ should be a direct consequence of the constructions here. \vskip 1cm \centerline{\bf 2. 
The supersymmetric representation and statement of the main result.} \medskip Let $t\in ]0,1]$ and let $H$ be the discrete Schr\"odinger operator on $\l ^2({\bf Z}^d)$ defined earlier in sect. 1. For convenience, we recall it here: \ekv{2.1}{H=t\Delta +V,} where $\Delta $ is the discrete Laplacian with matrix elements \ekv{2.2}{\Delta _{i,j}=1\hbox{, when }\vert i-j\vert _1=1,\hbox{ and }=0 \hbox{ otherwise.} } $V$ is a multiplication operator, $(Vu)(j)=v_ju_j$, with $v_j\in{\bf R}$ and $\vert \cdot \vert _1$ is the $\l^1$ norm. We assume that the $v_j$ are independent random variables with a common distribution density $g(v_j)$. For real $E$, we consider the inverse operator \ekv{2.3}{G(E+i\eta )=(H-E-i\eta )^{-1},} and more specifically, we are interested in the expectation value of the kernel (i.e. matrix) of $G(E+i\eta )$ (the so-called Green's function): $\langle G(\mu ,\nu;E+i\eta )\rangle $ in the limit $\eta \searrow 0$. We will write, \ekv{2.4}{\langle G(\mu ,\nu ;E+i0)\rangle :=\lim_{\eta \searrow 0}\langle G(\mu ,\nu; E+i\eta )\rangle ,} if the right hand side (RHS) exists. \par We proceed by taking $\Lambda \subset {\bf Z}^d$ to be a finite set or to be a large discrete torus of the form $({\bf Z}/N{\bf Z})^d$. The corresponding discrete Laplacian $\Delta _\Lambda $ on $\Lambda $ is then defined as in (2.2), with $i,j$ in $\Lambda $. Define \ekv{2.5}{H_\Lambda =t\Delta _\Lambda +V,} on $\l^2(\Lambda )$. Let \ekv{2.6}{G_\Lambda =(H_\Lambda -E)^{-1},} for complex $E$, whenever the inverse is well-defined. We also consider the expectation values $\langle G_\Lambda (\mu ,\nu ;E+i\eta )\rangle $ for $E\in{\bf R}$, $\eta >0$, and the corresponding limits when $\eta \searrow 0$. The aim of the game is of course to have estimates which are uniform in $\Lambda $, and in this way we get information about $\langle G\rangle $ whenever we can take the infinite volume limit $\Lambda \to {\bf Z}^d$. 
(The possibility of taking this limit can be obtained by [SW] and we will not enter into the details in this paper, even though the present methods can give that limit too.) \par We use the supersymmetric formalism to express $\langle G_\Lambda \rangle $. (In order not to make too much of a digression, we will only write the few lines that are necessary to reach the representation (2.9), and we refer to appendix A and references therein for a more complete discussion.) Using Gaussian integrals ((A.9) in appendix A), we have the following expression for the Green's function: \eekv{2.7}{G_\Lambda (\mu ,\nu ;E+i\eta )=i\int x_\mu \cdot x_\nu \det [i(H-(E+i\eta ))]\times }{\hskip 3cm \exp[-i\sum_{j,k}(H-(E+i\eta ))_{j,k}x_j\cdot x_k]\prod_{j\in\Lambda }{d^2x_j\over\pi },} where $x_j\in{\bf R}^2$, $\mu ,\nu \in\Lambda $ and we sometimes drop the subscript $\Lambda $ and write $H$ instead of $H_\Lambda $. \par Let $|\Lambda|$ be the number of points in $\Lambda$. We use the Grassmann algebra of $2\vert \Lambda \vert $ generators to express $\det[i(H-E)]$. This algebra is generated by $2\vert \Lambda \vert $ anticommuting variables $\xi _i$, $\eta _i$, $i\in\Lambda $, satisfying the relations: \eeekv{2.8}{[\xi _i,\eta _j]=\xi _i\eta _j+\eta _j\xi _i=0,} {[\xi _i,\xi _j]=\xi _i\xi _j+\xi _j\xi _i=0,} {[\eta _i,\eta _j]=\eta _i\eta _j+\eta _j\eta _i=0,} where we write $[a,b]=ab+ba$ for the anti-commutator. It is denoted by $\mit \Lambda [\xi _1,\eta _1,..,\xi _{\vert \Lambda \vert} ,\eta _{\vert \Lambda \vert }]$ (if we identify $\Lambda $ with $\{1,..,\vert \Lambda \vert \}$). ``$C^\infty $ functions" $F(\xi_i ,\eta_j )$ of these anticommuting variables are defined by Taylor's formula at $(0,0)$, which contains a finite number of terms because of nilpotency. In this way $F(\xi ,\eta )$ becomes an element of the Grassmann algebra. 
For example if \ekv{2.9}{F(\xi ,\eta ):=e^{A_{i,j}\xi _i\eta _j},} then \ekv{2.10}{F(\xi ,\eta )=1+A_{i,j}\xi _i\eta _j.} This is the function that we need in writing the determinant. We also need to define the notions of differentiation and integration. Define: \ekv{2.11}{{\partial\over\partial \xi _i}(\xi _i)=1,} \ekv{2.12}{{\partial \over \partial \eta _i}(\eta _i)=1.} We also require that these differentiations be linear operators and that Leibniz' rule hold. We can then define integrals (with respect to $\partial $) as follows: \ekv{2.13}{\int 1 d\xi _i=0,\,\,\int \xi_i d\xi _i=1,\,\,\int 1 d\eta _i=0,\,\, \int \eta_i d\eta _i=1.} A multiple integral is defined to be a repeated integral. For example, \ekv{2.14}{\int\xi _i\eta _jd\xi _id\eta _j=-\int\eta _j\xi _id\xi _id\eta _j=-\int\eta _jd\eta _j=-1.} Using (2.10), (2.14), we get \ekv{2.15}{\det[i(H-E-i\eta )]=\int e^{{-i\sum_{j,k\in\Lambda }(H-E-i\eta )_{j,k}\xi _j\eta _k}}\prod_{j\in\Lambda }(d\eta _jd\xi_j).} Combining (2.7) with (2.15), we obtain the following expression: \ekv{2.16}{G(\mu ,\nu ;E+i\eta )=i\int x_\mu \cdot x_\nu e^{-i\sum_{j,k\in\Lambda }(H-E-i\eta )_{j,k}X_j\cdot X_k}\prod _{j\in\Lambda }d^2 X_j,} where $$\eqalignno{X_j:&=(x_j, \xi_j,\eta_j),\cr X _j\cdot X _k:&=x_j\cdot x_k +{1\over 2}(\eta_j\xi_k+\eta_k\xi_j),\cr d^2X_j:&={d^2x_j\over\pi }d\eta _jd\xi _j. 
&(2.17)\cr}$$ Hence, \eeekv{2.18} {\langle G(\mu ,\nu ;E+i\eta )\rangle =i\int x_\mu \cdot x_\nu e^{-i(\sum_{j,k\in\Lambda, \vert j-k\vert _1=1 }tX _j\cdot X _k-\sum_{j\in\Lambda }(E+i\eta )X _j\cdot X _j)}\times } {\hskip 5cm\prod_{j\in\Lambda }e^{-iv_jX _j\cdot X _j}\prod g(v_j)dv_j\prod d^2X _j} {=i\int x_\mu \cdot x_\nu e^{-i(\sum_{j,k\in\Lambda ,\vert j-k\vert _1=1}tX _j\cdot X _k-\sum_j(E+i\eta )X _j\cdot X _j)}\prod _j\widehat{g}(X_j\cdot X _j) \prod_jd^2 X _j,} where \ekv{2.19}{\widehat{g}(X _j\cdot X _j)=\widehat{g}(x_j\cdot x_j+\eta _j\xi _j):=\widehat{g}(x_j\cdot x_j)+\widehat{g}'(x_j\cdot x_j)\eta _j\xi _j,} is the (super-)Fourier transform. Assume that $\widehat{g}$ is in ${\cal S}$ away from $0$. Then the above integral is well defined. We can take the limit $\eta \searrow 0$ and obtain \ekv{2.20}{\langle G(\mu ,\nu ;E+i0)\rangle =i\int x_\mu \cdot x_\nu e^{-i(\sum tX _j\cdot X _k-\sum E X _j\cdot X_j)}\prod _j\widehat{g}(X _j\cdot X _j)\prod d^2X _j. } \par Note that by using (2.17), the integrand in (2.20) is a sum of terms of the form $$f_{j_1\cdots j_n,k_1\cdots k_n}(x)\xi_{j_1}\cdots\xi_{j_n}\eta_{k_1}\cdots\eta_{k_n}\qquad (n \le |\Lambda|),$$ where the $f$'s are called coefficients. Note that apart from the factor $x_\mu \cdot x_\nu $, the integrand in (2.20) is only a ``function" of the $X _j\cdot X _k$. Such ``functions" are called supersymmetric functions. Using Theorem A.2 in appendix A, we have: \ekv{2.21} {\int e^{-i(\sum_{\vert j-k\vert _1=1}tX _j\cdot X _k-\sum E X _j\cdot X _j)}\prod \widehat{g}(X_j\cdot X _j)\prod d^2 X _j=1,} for all $\Lambda $, all $t$. Hence $\langle G(\mu ,\nu ;E+i0)\rangle $ can be seen as a correlation function associated to the normalized supersymmetric ``measure" in (2.21). By integrating out the anti-commutative variables $\xi ,\eta $, this measure can be further reduced to a (normalized) complex measure. Assume for example that $\widehat{g}(\tau )=e^{-k(\tau )}\ne 0$. 
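The determinant formula (2.15) is a purely algebraic identity, and it can be checked mechanically. The following self-contained sketch (ours, in Python) implements a small Grassmann algebra with the conventions (2.13)-(2.14) and verifies $\int e^{\sum_{j,k}A_{j,k}\xi _j\eta _k}\prod_j(d\eta _jd\xi _j)=\det A$ for a random complex matrix; the overall sign depends on the chosen ordering of the repeated integral, so this illustrates the mechanism behind (2.15) rather than reproducing its exact conventions:

```python
import numpy as np
from itertools import product

# A tiny Grassmann-algebra calculator.  Generators are integers:
# xi_j -> 2j, eta_j -> 2j + 1.  An element is a dict mapping an ordered
# tuple of distinct generators (a monomial) to its complex coefficient.

def mul(a, b):
    """Product of two elements, with anticommutation signs."""
    out = {}
    for (ma, ca), (mb, cb) in product(a.items(), b.items()):
        if set(ma) & set(mb):
            continue                 # repeated generator: vanishes
        m, sign = list(ma) + list(mb), 1
        for i in range(len(m)):      # sort, tracking the permutation sign
            for j in range(i + 1, len(m)):
                if m[i] > m[j]:
                    m[i], m[j] = m[j], m[i]
                    sign = -sign
        key = tuple(m)
        out[key] = out.get(key, 0) + sign * ca * cb
    return out

def berezin(a, g):
    """Integrate d(theta_g), convention (2.13):  int X theta dtheta = X."""
    out = {}
    for m, c in a.items():
        if g in m:
            p = m.index(g)           # move theta_g to the rightmost slot
            out[tuple(v for v in m if v != g)] = (-1) ** (len(m) - 1 - p) * c
    return out

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# exp(sum_jk A_jk xi_j eta_k) = prod_jk (1 + A_jk xi_j eta_k) by nilpotency
F = {(): 1.0 + 0j}
for j in range(n):
    for k in range(n):
        F = mul(F, {(): 1.0, (2 * j, 2 * k + 1): A[j, k]})

for j in range(n):                   # integrate d(eta_j) d(xi_j), j = 0,1,2
    F = berezin(F, 2 * j + 1)
    F = berezin(F, 2 * j)

print(F[()], np.linalg.det(A))       # the two values agree
```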
Then using (2.15), (2.17), we obtain: \ekv{2.22} {\langle G(\mu ,\nu ;E+i0)\rangle =i\int x_\mu \cdot x_\nu [\det (iM)e^{-i(\sum_{\vert j-k\vert _1=1}tx_j\cdot x_k -\sum_jEx_j\cdot x_j-i\sum_jk(x_j\cdot x_j))}]\prod _{j\in\Lambda }{d^2x_j\over \pi},} where \ekv{2.23}{ {M=t\Delta -E-i\,{\rm diag\,}(k'(x_j\cdot x_j))}.} \par Using an integration by parts, established in Proposition A.3 in appendix A or equivalently (B.19) in appendix B, (2.22) can be further put in a more transparent form: \ekv{2.24} {\langle G(\mu ,\nu ;E+i0)\rangle =\int M^{-1}(\mu ,\nu ;E)[\det (iM)e^{-i(\sum tx_j\cdot x_k-\sum Ex_j\cdot x_j-i\sum k(x_j\cdot x_j))}]\prod_{j\in\Lambda }d^2x_j.} The rest of the paper will be essentially devoted to the study of the resulting complex measure as defined in (2.22), (2.24) in an appropriate region in $({\bf C}^2)^{\Lambda}$. \par Note that if $g$ is the Cauchy distribution, $g_0(v)={1\over\pi }{1\over v^2+1}$, then $k(\tau )=\vert \tau \vert $ for real $\tau $ and we have corresponding holomorphic extensions from each half axis (and we shall only use the one from the positive half axis, which is given by $k(\tau )=\tau $). Using (2.24), we then obtain another derivation of the fact that \ekv{2.25} {\langle G(\mu ,\nu ;E)\rangle ={(t\Delta -E-i)^{-1}}_{\mu ,\nu },} for the Cauchy distribution. (A more direct proof based on the Cauchy formula can easily be found either as an exercise or by looking in [Ec]). \par We now specify the class of densities $g$ that we shall allow. We assume that $g$ is of the form: \ekv{2.26} {g(v)=(1+{\cal O}(\epsilon ))g_0 (v)+r_\epsilon (v),} where $$g_0(v)={1\over \pi }{1\over v^2+1}$$ and $r_\epsilon $ has the following properties: \smallskip \par\noindent (a) $r_\epsilon $ is smooth and real on ${\bf R}$ and satisfies \ekv{2.27} {\vert {\partial^k r_\epsilon \over \partial v^k}\vert \le C_k\epsilon \hbox{ for all }k\in{\bf N},} for some fixed constants $C_0,C_1,..$ . 
\smallskip \par\noindent (b) There is a compact $\epsilon $-independent set $K\subset{\bf C}$, symmetric around ${\bf R}$ with $i\not\in K$, such that $r_\epsilon $ has a holomorphic extension to ${\bf C}\setminus K$ (also denoted by $r_\epsilon $) with \ekv{2.28} {r_\epsilon (v)={\cal O}(\epsilon ){1\over 1+\vert v\vert ^2}\hbox{ in }{\bf C}\setminus K.} \par The ${\cal O}(\epsilon )$ in (2.26) is determined by the requirement that $\int g(v)dv=1$. Assuming also that $\epsilon \ge 0$ is small enough, as we shall always do in the following, it follows that $g(v)\ge 0$, so that $g(v)\,dv$ is a probability measure. \smallskip \par\noindent \it Remark. \rm As mentioned in the introduction, and as will become clear later in the proof, the conditions for our constructions to be valid bear rather on the Fourier transform $\widehat{g}$ of $g$. But for concreteness, we shall state our main theorem only for the class of densities above. \smallskip \par For all $\lambda>2d$, introduce the convex open bounded set \ekv{2.30}{W(\lambda):=\{\eta \in{\bf R}^d;\,2\sum_1^d\cosh \eta _j <\lambda\}.} Let \ekv{2.31}{p_{\lambda}(x):=\sup_{\eta \in {W(\lambda)}}x\cdot \eta } be the support function of $W(\lambda)$ so that $p_{\lambda}(x)$ is convex, even, positively homogeneous of degree 1. Moreover $p_\lambda(x)\ge 0$ with equality precisely at $0$. In other words $p_\lambda(x)$ is a norm.\smallskip \par In sect. 
8, by using weighted estimates, we show that there exist $C_0\ge 1$, $C_1\ge 0$, such that if $\vert E\vert \ge C_0^2,$ $F\le {\vert E\vert \over C_0}$ and $V={\rm diag\,}(v_j)$, with $\vert v_j\vert \le F$, then \ekv{2.32} {\vert (\Delta +V-E)^{-1}(\mu ,\nu )\vert \le C_1e^{-p_{\vert E\vert }(\mu -\nu )+{C_1(1+F)\over \vert E\vert }\vert \mu -\nu \vert }.} A special case of this is that if $E\in{\bf R}$, $V={\rm diag\,}(v_j)$ with $\vert v_j\vert \le \epsilon >0$, $t\in ]0,1]$, $t/\vert E+i\vert \ll 1$, $\epsilon /\vert E+i\vert \ll 1$, then \ekv{2.33} {(\Delta +{1\over t}V-{E+i\over t})^{-1}(\mu ,\nu )={\cal O}(1)e^{-p_{\vert E+i\vert /t}(\mu -\nu )+{\cal O}(1){t+\epsilon \over\vert E+i\vert }\vert \mu -\nu \vert },} for all $\mu ,\nu \in{\bf Z}^d$.\smallskip \par Moreover, we show in sect. 8 that (2.33) is likely to be optimal by studying the inverse of $\Delta -E$ on $\l ^2({\bf Z}^d)$, when $E\in{\bf C}$, $\vert E\vert \gg 1$. After a suitable Fourier transform we see that this operator is unitarily equivalent to the operator of multiplication by $\delta(\xi )-E$ on $L^2({\bf T}^d)$, where $\delta(\xi )=2\sum \cos \xi_j$ and ${\bf T}^d=({\bf R}/2\pi {\bf Z})^d$ is the standard torus. By Bochner's tube theorem we know that the largest open connected set of the form ${\bf R}^d+iW$ containing ${\bf R}^d$ where $\delta(\xi)-E\ne 0$, is of the form ${\bf R}^d+iW(E)$, where $W(E)\subset{\bf R}^d$ is an open convex neighborhood of $0$. In sect. 8 we shall see that $W(E)$ is bounded, and we also note that $W(E)$ is symmetric around $0$ since $\delta$ is an even function. As in the case $E$ real, we define \ekv{2.34}{p_E(x):=\sup_{\eta \in W(E)}x\cdot \eta } to be the support function of $W(E)$ so that $p_E(x)$ is convex, even, positively homogeneous of degree 1. Moreover $p_E(x)\ge 0$ with equality precisely at $0$. In other words $p_E(x)$ is a norm. \smallskip \par In sect. 
8, we shall see that \ekv{2.35}{p_E(x)=p_{\vert E\vert }(x)+{\cal O}({1\over \vert E\vert }\vert x\vert ),} \ekv{2.36}{W(\vert E\vert )=\{\eta \in{\bf R}^d;\,2\sum_1^d\cosh \eta _j <\vert E\vert \},} \ekv{2.37} {\vert (\Delta -E)^{-1}(\mu ,\nu )\vert \le {\cal O}(1) e^{-p_{\vert E\vert }(\mu -\nu )+{{\cal O}(1)\over \vert E\vert }\vert \mu -\nu \vert },} uniformly in $E$, $\mu ,\nu $, when $\vert E\vert $ is large enough. \par Equip the extended line $\overline{{\bf R}}:=\{-\infty \}\cup {\bf R}\cup\{+\infty \}$ with the natural topology (i.e. the one induced from the topology on $[-1 ,+1 ]$ under the map $f:\overline{{\bf R}}\to [-1,1]$, where $f(\pm \infty )=\pm 1$, $f(x)=x/\sqrt{1+x^2}$, $x\in{\bf R}$). We define a subset ${\cal E}\subset\overline{{\bf R}}$ in the following way: \par When $E\in{\bf R}$, we say that $E\in{\cal E}$ if and only if (iff) the following holds: The line $L_E$ through $-i$ which is orthogonal to the vector $E+i$ (the direction of the segment joining $-i$ to $E$) does not intersect $K_-:=\{ z\in K;{\rm Im\,}z\le 0\}$ and separates $K_-$ from $E$, in the sense that if $P_+$ is the open half-plane containing $E$ with boundary $L_E$, and $P_-$ the opposite open half-plane, then $K_-\subset P_-$. \par When $E\in\{\pm \infty \}$, we say that $E\in{\cal E}$ iff the above holds with $L_E=i{\bf R}$. \par Note that a necessary condition for ${\cal E}$ to be non-empty is that $-i$ does not belong to the convex hull of $K_-$. It is also clear that ${\cal E}$ is open and connected. 
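In dimension $d=1$, (2.36) gives $W(\vert E\vert )=\{\eta ;\,2\cosh \eta <\vert E\vert \}$, so that $p_{\vert E\vert }(x)=\vert x\vert \,{\rm arccosh}(\vert E\vert /2)$, and (2.37) predicts off-diagonal decay of $(\Delta -E)^{-1}$ at exactly this rate, up to ${\cal O}(1/\vert E\vert )$ corrections. This is easy to confirm on a large finite chain (our illustration, assuming numpy):

```python
import numpy as np

# d = 1 check of (2.36)-(2.37): the decay rate of (Delta - E)^{-1}(mu, nu)
# should be p_{|E|}(1) = arccosh(|E|/2), up to O(1/|E|) corrections.
E, n = 10.0, 400
delta = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
G = np.linalg.inv(delta - E * np.eye(n))

mid = n // 2
ks = np.arange(1, 30)
rates = -np.log(np.abs(G[mid, mid + ks])) / ks   # measured exponents
predicted = np.arccosh(E / 2.0)                  # p_{|E|} in d = 1

print(rates[-1], predicted)                      # agree to a few percent
```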
\par Let $d_{\vert E\vert }(\mu ,\nu )$ be the distance on $\Lambda $ associated to the norm $p_{\vert E\vert }(\mu -\nu )$, so that $$d_{\vert E\vert }(\mu ,\nu)=p_{\vert E\vert }(\mu -\nu )$$ when $\Lambda $ is a finite set and $$d_{\vert E\vert }(\mu ,\nu )=\inf_{\widetilde{\mu }\in\pi _\Lambda ^{-1}(\mu ),\,\widetilde{\nu }\in\pi _\Lambda ^{-1}(\nu )}p_{\vert E\vert}(\widetilde{\mu }-\widetilde{\nu }),$$ in the case when $\Lambda $ is a torus, with $\pi _\Lambda :{\bf Z}^d\to \Lambda $ denoting the natural projection. \par We can now state the main theorem of this paper. \smallskip \par\noindent \bf Theorem 2.1. \sl For every ${\cal E}'\subset\subset{\cal E}$, there are constants $t_0>0$, $\epsilon _0>0$, such that if $0\le \epsilon \le \epsilon _0$, $t\in ]0,1]$, $E\in {\cal E}'$, ${t\over \vert E+i\vert }\le t_0$, then for $\Lambda $ sufficiently large we have uniformly in $t,$ $\epsilon $, $E$: \ekv{2.38} {\vert \langle G(\mu ,\nu ;E+i0)\rangle\vert \le {1\over t}e^{-d_{\vert E+i\vert /t}(\mu ,\nu )+{\cal O}({t+\epsilon \over\vert E+i\vert })\rho(\mu ,\nu )},\hbox{ }\mu ,\nu \in\Lambda .} Here $\rho$ denotes the standard Euclidean distance in $\Lambda $. \rm \vskip 1cm \centerline{\bf 3. Rotation of coordinates.} \medskip \par We make the assumptions of Theorem 2.1. Then for $E\in{\cal E}\cap{\bf R}$, we have \ekv{3.1}{\langle G(\mu ,\nu ;E)\rangle =i\int x_\mu \cdot x_\nu e^{-i(\sum tX _j\cdot X _k-\sum E X _j\cdot X_j)}\prod _j\widehat{g}(X _j\cdot X _j)\prod d^2X _j. } The corresponding normalized ``measure" is \ekv{3.2}{e^{-i(\sum tX _j\cdot X _k-\sum E X _j\cdot X_j)}\prod _j\widehat{g}(X _j\cdot X _j)\prod d^2X _j, } where $\widehat{g}(X _j\cdot X_j)$ is as in (2.19). Our aim in this section is to make an appropriate change of contour, so that on the new contour, after integrating out the anti-commutative variables, the phase of the normalized complex measure is almost real. 
\par Recall from the preceding section that, \ekv{3.3} {\widehat{g}_0(\sigma )=e^{-\sigma },\hbox{ for }{\rm Re}\,\sigma >0,} and if we replace $g$ by $g_0$ in (3.1), (3.2), we are naturally led to consider the factors, \ekv{3.4} {e^{(iE-1)x_j\cdot x_j}=e^{-(1-iE)x_j\cdot x_j},} (see also (2.22), (2.24) with $k(x_j\cdot x_j)=x_j\cdot x_j$), which in some sense can be expected to be dominant when $t >0$ is small or $E$ is large. With $\sigma =x_j\cdot x_j$, this factor becomes real after the change of variables, $$\sigma =e^{i\theta (E)}s={1+iE\over\vert 1+iE\vert }s,\hbox{ where }\theta (E)=\arg (1+iE)\in ]-{\pi \over 2},{\pi \over 2}[.$$ Put $\theta (\pm \infty )=\pm {\pi \over 2}$. \smallskip \par\noindent \bf Lemma 3.1. \sl Let $E\in{\cal E}$ and let $S(E)$ be the closed convex sector in the complex plane bounded by the two half-lines $[0,+\infty [$ and $e^{i\theta (E)}[0,+\infty [$. The function $\widehat{g}(\sigma )$ has an entire extension from the positive half-axis, that we also denote by $\widehat{g}$, which has the following properties: \smallskip \item{(a)} If $E\ne 0$ and $\vert E\vert <\infty $, then for every $\gamma \in [0,1[$ and for all $k,N\in{\bf N}$: \ekv{3.5} {\partial _\sigma ^k\widehat{g}(\sigma )={\cal O}_{N,k,\gamma}\langle \sigma \rangle ^{-N}e^{-{\gamma \over E}{\rm Im\,}\sigma },} where $\sigma \in S(E)$ and $\langle\sigma\rangle=(1+|\sigma|^2)^{1/2}$. \item{(b)} If $E\in\{+\infty ,-\infty \}$, then there exists $\epsilon _0>0$, such that for every $\delta >0$ \ekv{3.6} {\partial _\sigma ^k\widehat{g}(\sigma )={\cal O}_{N,k,\delta }\langle \sigma \rangle ^{-N}(e^{-\epsilon _0\vert {\rm Im\,}\sigma\vert }+e^{-(1-\delta ){\rm Re\,}\sigma }),\hbox{ }\sigma \in S(E).} \item{(c)} Recall that $g=g_\epsilon $. 
For every ${\cal E}'\subset\subset{\cal E}$, there exists an $\epsilon _0>0$ such that if $E$ is confined to ${\cal E}'$ and $0\le \epsilon \le \epsilon _0$, then there exists an open neighborhood $\Omega (E)\subset{\bf C}$ of $e^{i\theta (E)}[0,+\infty [$, which is conic near infinity, and a holomorphic function $k$ on $\Omega (E)$ such that \ekv{3.7} {\widehat{g}(\sigma )=e^{-k(\sigma )},\hbox{ }k(\sigma )=\sigma +{\cal O}(\epsilon),\hbox{ }\sigma \in\Omega (E).} \rm \smallskip \par\noindent \bf Proof. \rm If $\sigma >0$, then in the defining integral, $$\widehat{g}(\sigma )=\int_{\bf R}e^{-ix \sigma }g(x)dx$$ we may replace the real axis by a closed curve $\gamma $ in $\{{\rm Im\,}z\le 0\}$, which in the case when $E$ is finite stays on the opposite side of the line $L_E$ (introduced in the definition of the set ${\cal E}$ in the preceding section) from $E$, except in an arbitrarily small neighborhood of $-i$. We then get the entire extension from $]0,+\infty [$ by: \ekv{3.8} {\widehat{g}(\sigma )=\int_\gamma e^{-iz\sigma }g(z)dz,} so for every $N\in{\bf N}$, \ekv{3.9} {\vert \widehat{g}(\sigma )\vert \le C_N\langle \sigma \rangle ^{-N}e^{H_\gamma (\sigma )},} where \ekv{3.10} {H_\gamma (\sigma )=\sup_{z\in\gamma }{\rm Im\,}(z\sigma ).} \par Now consider the situation in (a) and assume in order to fix the ideas that $E>0$. It is straightforward to study $H_\gamma $ and we see that for every sufficiently small $\delta >0$, we can choose $\gamma $ as above such that: \ekv{3.11} {H_\gamma (\sigma )\le -{\rm Re\,}\sigma +\delta \vert \sigma \vert ,\hbox{ }\theta (E)-\delta \le \arg \sigma \le \theta (E)+\delta ,} \ekv{3.12} {H_\gamma (\sigma )\le -{1\over E}{\rm Im\,}\sigma ,\hbox{ when }0\le\arg \sigma \le \theta (E)-\delta .} Note that ${\rm Re\,}\sigma ={1\over E}{\rm Im\,}\sigma $ when $\arg\sigma =\theta (E)$. From this, we get part (a). \par For part (b), we may assume for instance that $E=+\infty $. 
Then $L_E$ is the imaginary axis, and we can choose $\gamma $ confined to the intersection of the lower and the left half-planes except in a small neighborhood of $-i$. Then there exists $\epsilon _0>0$, such that for any small $\delta >0$, we can choose $\gamma $ such that (3.11) holds and \ekv{3.13} {H_\gamma (\sigma )\le -\epsilon _0{\rm Im\,}\sigma \hbox{ when }0\le\arg\sigma \le\theta (E)-\delta . } Part (b) follows. \par In order to get part (c), we use the decomposition (2.26) and (3.3) as well as the fact that, if we represent $\widehat{r}_\epsilon $ as in (3.8), then the contour can be pushed across $-i$ and consequently, \ekv{3.14} {\vert \widehat{r}_\epsilon (\sigma )\vert \le{\cal O}(\epsilon )e^{-{\rm Re\,}\sigma -\delta \vert \sigma \vert }\hbox{ in }\Omega (E),} if $\delta >0$ and $\Omega (E)$ are small enough.\hfill{$\#$}\bigskip \par In the various integrals involving the density (3.2), we want to replace the integration variables $x\in ({\bf R}^2)^\Lambda $, by $x=e^{i\alpha /2}y$, with $y\in ({\bf R}^2)^\Lambda $ and $\alpha =\theta (E)$. As mentioned earlier in sect. 2, the integrand in (3.1) is a sum of terms of the form $$f_{j_1\cdots j_n,k_1\cdots k_n}(x)\xi_{j_1}\cdots\xi_{j_n}\eta_{k_1}\cdots\eta_{k_n}\qquad (n \le |\Lambda|),\eqno(3.15)$$ which is polynomial in $\xi$, $\eta$ and where the $f$'s (coefficients) are holomorphic functions in $x$. For the purpose of change of contours in $x$, we can view $\xi$, $\eta$ as mere ``parameters". (See appendix A for a more formal presentation of this simple fact. See also appendix B where $\xi$ and $\eta$ are not explicitly invoked.) The change of contours can be justified by means of Stokes' formula, if we can show that the coefficients $f$ decay fast enough on all the intermediate contours $x=e^{i\alpha /2}y$, $y\in ({\bf R}^2)^\Lambda $, for $0\le \alpha \le \theta (E)$, (where we assume for simplicity that $E>0$). 
\par Using (2.16), (2.9), (2.10), $f$ is proportional to \ekv{3.16}{e^{-i(\sum tx _j\cdot x _k-\sum E x _j\cdot x_j)}\prod _jh_j(x _j\cdot x _j), } where $h_j=\widehat g$ or $h_j={\widehat g}'$. Using (3.5) and without uniformity w.r.t. $\Lambda $, we have that $$|f|\le{\cal O}_N(1)(e^{t\sum_{\vert j-k\vert _1=1}{\rm Im\,}(x_j\cdot x_k)}\prod_j(\langle x_j\cdot x_j\rangle ^{-N}e^{-\gamma (E+1/E){\rm Im \,}(x_j\cdot x_j)})),$$ \noindent where ${\cal O}_N(1)$ also depends on $t$, $E$. Here ${\rm Im\,}(x_j\cdot x_k)=(\sin \alpha )y_j\cdot y_k$, and we get $$|f|\le{\cal O}_N(1)\exp[(\sin\alpha )(t\Vert \Delta \Vert -\gamma (E+{1\over E}))\Vert y\Vert ^2]\prod \langle y_j\rangle ^{-2N}.$$ Since $E+1/E\ge 2$, and we can choose $\gamma $ arbitrarily close to 1, this quantity is ${\cal O}_N(1)\prod_j\langle y_j\rangle ^{-2N}$ uniformly in $\alpha $, when \ekv{3.17} {t\Vert \Delta \Vert <2,} or when \ekv{3.18} {0\le t\le 1\hbox{ and }E\hbox{ is large enough.}} Stokes' formula can now be applied and we obtain \ekv{3.19}{\langle G(\mu ,\nu ;E+i0)\rangle =i\int x_\mu \cdot x_\nu \, e^{-i(\sum tX _j\cdot X _k-\sum E X _j\cdot X_j)}\prod _j\widehat{g}(X _j\cdot X _j)\prod d^2X _j, } where $x\in (e^{i\theta(E)/2}{\bf R}^2)^{\Lambda}$ and $\xi_j$, $\eta_j$ are the corresponding Grassmann algebra generators. (See (A.4) in appendix A.) \par From (c) of Lemma 3.1, we notice that for every compact subset ${\cal E}'\subset{\cal E}$, there exists $\epsilon _0>0$, such that $k(\sigma )=k_\epsilon (\sigma )=\sigma +r_\epsilon (\sigma )$ is holomorphic with $r_\epsilon (\sigma )={\cal O}(\epsilon )$ in some neighborhood of $e^{i\theta ({\cal E}')}[0,+\infty [$, which is conic near infinity. Let $m=|\Lambda|$. 
Integrating out the anticommutative variables, we get for $E\in{\cal E}'$: \eeekv{3.20} {\langle G(\mu ,\nu ;E+i0)\rangle } {=i\int_{e^{i\theta (E)/2}{\bf R}^{2m}}x_\mu \cdot x_\nu \det (iM)e^{-i((t\Delta -E)x\cdot x-i\sum k(x_j\cdot x_j))}{d^{2m}x\over\pi ^m}} {=\int_{e^{i\theta (E)/2}{\bf R}^{2m}}{(M^{-1})}_{\mu ,\nu }\det (iM)e^{-i((t\Delta -E)x\cdot x-i\sum k(x_j\cdot x_j))}{d^{2m}x\over\pi ^m},} where $M(x)=t\Delta -i{\rm diag\,}(k'(x_j\cdot x_j))-E$. The corresponding normalized complex measure becomes: \ekv{3.21} {i^m(\det M) e^{-i((t\Delta -E)x\cdot x-i\sum k(x_j\cdot x_j))}{d^{2m}x\over\pi ^m}. } \par It will be convenient to specify the domains where we shall work, and for $\alpha ,\beta >0$, we introduce the neighborhood $\Omega (E,\alpha ,\beta )$ of the half line $e^{i\theta (E)}[0,+\infty [$ asymptotically conic near infinity, by \ekv{3.22} {\vert {\rm Im\,}(e^{-i\theta (E)}\tau )\vert <\alpha {\rm Re\,}(e^{-i\theta (E)}\tau )+\beta .} Then with ${\cal E}'$, ${\cal E}$ as above and with ${\cal E}'$ connected and with $\alpha ,\beta >0$ small enough, we have \ekv{3.23} {k(\tau )=\tau +r(\tau ),\hbox{ }r(\tau )={\cal O}(\epsilon )\hbox{ in }\Omega ({\cal E}',\alpha ,\beta ),} where we have put $\Omega ({\cal E}',\alpha ,\beta )=\cup_{E\in{\cal E}'}\Omega (E,\alpha ,\beta )$. 
\par After the rotation of variables, \ekv{3.24} {x_j=e^{i\theta (E)/2}y_j,} the density (3.21) becomes \ekv{3.25} {\vert 1+iE\vert ^m\det \left(1+\widetilde{t}\Delta +{\rm diag\,}\left(\widetilde{r}'\left (y_j\cdot y_j\right)\right)\right)e^{-\vert 1+iE\vert (y\cdot y+\widetilde{t}\Delta y\cdot y+\sum\widetilde{r}(y_j\cdot y_j))}{d^{2m}y\over\pi ^m},} where \ekv{3.26} {\widetilde{t}={it\over (1-iE)},} \ekv{3.27} {\widetilde{r}(\tau )={1\over\vert 1+iE\vert }r(e^{i\theta (E)}\tau ),} so that \ekv{3.28} {\widetilde{r}(\tau )={\cal O}(\widetilde{\epsilon })\hbox{ for }\tau \in\Omega (0,\alpha ,\beta ),} where \ekv{3.29} {\widetilde{\epsilon }={\epsilon \over\vert 1+iE\vert }.} \vskip 1cm \centerline{\bf 4. Elimination of $t\Delta $: The deforming vectorfield.} \medskip We consider the density (3.25) together with (3.26)--(3.29). From now on we drop the tildes and write ``$x$'' instead of ``$y$'', so that the exponential in (3.25) can be written as $e^{-\vert 1+iE\vert Q_t(x)}$, where \ekv{4.1} {Q_t(x)=x\cdot x+t\Delta x\cdot x+\sum r(x_j\cdot x_j).} We look for a complex change of variables $x=x_t(y)$, generated by a $t$-dependent vector field $v=v_t(x)\cdot {\partial \over\partial x}$, holomorphic in $t,x$: \ekv{4.2} {{\partial \over\partial t}x_t(y)=v_t(x_t(y)),\hbox{ }x_0(y)=y,} such that \ekv{4.3} {Q_t(x_t(y))=Q_0(y).} Differentiating this equation with respect to $t$, we get \ekv{4.4} {\partial _tQ_t+\nabla _xQ_t\cdot v_t=0.} Letting $m\times m$ matrices act on ${\bf C}^{2m}$ in the natural way, we have \ekv{4.5} {\nabla _xQ_t=2(I+t\Delta +{\rm diag\,}(r'(x_j\cdot x_j)))x.} Note the appearance of the same matrix as in the determinant in (3.25).
\par Looking for $v=v_t$ of the form $v(x)=B(x)x$ where $B$ is a ($t$-dependent) $m\times m$ matrix, and using that $\partial _tQ_t(x)=\Delta x\cdot x$, (4.4) becomes \ekv{4.6} {-\langle \Delta x,x\rangle =2\langle (I+t\Delta +{\rm diag\,}(r'(x_j\cdot x_j)))x,B(x)x\rangle ,} and it suffices to find $B(x)$ such that \ekv{4.7} {-\Delta ={}^tB(x)\circ (I+t\Delta +{\rm diag\,}(r'(x_j\cdot x_j)))+(I+t\Delta +{\rm diag\,}(r'(x_j\cdot x_j)))\circ B(x).} We shall take $B=B_t(x_1\cdot x_1,..,x_m\cdot x_m).$ \par A possible choice would be $B(x)=-{1\over 2}(I+t\Delta +{\rm diag\,}(r'(x_j\cdot x_j)))^{-1}\Delta $, but it turns out that the corresponding vector field is not sufficiently small in some components, and we cannot exclude that the corresponding flow will take us out of the region where $r$ is well-defined. A better vectorfield can be constructed by means of a certain cut-off function, and before doing so, we specify in which region of $({\bf C}^2)^\Lambda $ we want to work. \par For $a\in ]0,1[$, $b>0$, let $V(a,b)\subset {\bf C}_{x_j}^2$ be the neighborhood of ${\bf R}^2$, given by \ekv{4.8} {({\rm Im\,}x_j)^2\le a({\rm Re\,}x_j)^2+b,} \noindent where $({\rm Im\,}x_j)^2=({\rm Im\,}x_{j,1})^2+({\rm Im\,}x_{j,2})^2$ and similarly we define $({\rm Re\,}x_j)^2$. From simple estimates, we see that the map ${\bf C}^2\ni x_j\mapsto x_j\cdot x_j\in{\bf C}$ takes $V(a,b)$ into $\Omega (0,\alpha ,\beta )$, if \ekv{4.9} {\alpha ={2\sqrt{a}\over 1-a},\hbox{ }\beta ={2\sqrt{a}b\over 1-a}+{b\over\sqrt{a}},} and we can have $\alpha ,\beta $ as small as we like by taking $a,\, b,\, b/\sqrt{a}$ sufficiently small.
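\par For completeness, here is the elementary computation behind (4.9). Write $R={\rm Re\,}x_j$, $I={\rm Im\,}x_j$, $\tau =x_j\cdot x_j$, so that ${\rm Re\,}\tau =R^2-I^2$ and ${\rm Im\,}\tau =2R\cdot I$. If $x_j\in V(a,b)$, then (4.8) and the inequality $2\vert R\cdot I\vert \le \sqrt{a}R^2+I^2/\sqrt{a}$ give $$\vert {\rm Im\,}\tau \vert \le \sqrt{a}R^2+{aR^2+b\over \sqrt{a}}=2\sqrt{a}R^2+{b\over \sqrt{a}},\qquad {\rm Re\,}\tau \ge (1-a)R^2-b,$$ and hence, with $\alpha ,\beta $ as in (4.9), $$\alpha \,{\rm Re\,}\tau +\beta \ge {2\sqrt{a}\over 1-a}\left((1-a)R^2-b\right)+{2\sqrt{a}b\over 1-a}+{b\over \sqrt{a}}=2\sqrt{a}R^2+{b\over \sqrt{a}}\ge \vert {\rm Im\,}\tau \vert ,$$ which gives the required membership in $\Omega (0,\alpha ,\beta )$.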
\par We assume that $a$, $b$, $b/\sqrt{a}$, $\alpha $, $\beta $ in (4.9) are small enough, so that \ekv{4.10} {\tau \in\Omega (0,\alpha ,\beta )\Rightarrow \vert 1+\tau \vert \ge {1\over 2}(1+\vert \tau \vert ),\hbox{ }\vert \arg (1+\tau )\vert \le {\pi \over 2},} \ekv{4.11} {x_j\in V(a,b)\Rightarrow \vert 1+x_j\cdot x_j\vert \ge{1\over 2}(1+\vert x_j\vert ^2).} For $x_j\in V(a,b)$ we can define in a natural way $\langle x_j\rangle =\sqrt{1+x_j\cdot x_j}$, and combining (4.10), (4.11), we see that for $x_j\in V(a,b)$: \ekv{4.12} {{1\over\sqrt{2}}\langle \vert x_j\vert \rangle \le\vert \langle x_j\rangle \vert \le \langle \vert x_j\vert \rangle ,\hbox{ }\vert \arg \langle x_j\rangle \vert \le {\pi \over 4}.} Here $\langle \vert x_j\vert \rangle =\sqrt{1+\vert x_j\vert ^2}$ is of the same order of magnitude as $1+\vert x_j\vert $. It follows from (4.12) that for $x_j,x_k\in V(a,b)$: \ekv{4.13} {\vert \langle x_j\rangle +\langle x_k\rangle \vert \ge {1\over 2}(\vert \langle x_j\rangle \vert +\vert \langle x_k\rangle \vert ).} \par Put \ekv{4.14} {\chi (t,s)={t\over t+s},} \ekv{4.15} {\chi _{j,k}(x)=\chi (\langle x_j\rangle ,\langle x_k\rangle ),\,\,x_j,x_k\in V(a,b).} Notice that $\chi _{j,k}+\chi _{k,j}=1$ and that \ekv{4.16} {\vert \chi _{j,k}(x)\vert \le {2\vert \langle x_j\rangle \vert \over \vert \langle x_j\rangle \vert +\vert \langle x_k\rangle \vert }\le 2,} by (4.13). \par Let $\chi (x)$ denote the $m\times m$ matrix $(\chi _{j,k}(x))_{1\le j,k\le m}$ and let $*$ denote the operation of elementwise multiplication of $m\times m$-matrices: $(a*b)_{j,k}=a_{j,k}b_{j,k}$. We look for a solution $B(x)$ of (4.7) of the form \ekv{4.17} {B(x)=\chi (x)*A(x),\,\, x\in V(a,b)^m,} with $A(x)$ symmetric.
Then ${}^t(\chi *A)={}^t\chi *A=A*{}^t\chi $, and (4.7) becomes \eeekv{4.18} {\hskip 3cm -\Delta =} {(A*{}^t\chi )\circ (I+t\Delta +{\rm diag\,}(r'(x_j\cdot x_j)))+(I+t\Delta +{\rm diag\,}(r'(x_j\cdot x_j)))\circ (\chi *A)} {\hskip 4cm ={\cal D}(x)(A)+t{\cal R}(x)(A),\,\, (A=A(x)),} where \ekv{4.19} {{\cal D}(x)(A):=(A*{}^t\chi )\circ D(x)+D(x)\circ (\chi *A),\,\, D(x):=I+{\rm diag\,}(r'(x_j\cdot x_j)),} \ekv{4.20} {{\cal R}(x)(A):=(A*{}^t\chi )\circ \Delta +\Delta \circ (\chi *A).} Write $D=D(x)={\rm diag\,}(d_j(x))$. On the level of matrix elements, ${\cal D}(x)$ is the map \ekv{4.21} {a_{j,k}\mapsto(d_j\chi _{j,k}+d_k\chi _{k,j})a_{j,k},} and we contemplate the multiplier: \ekv{4.22} {d_j\chi _{j,k}+d_k\chi _{k,j}=1+\chi _{j,k}r'(x_j\cdot x_j)+\chi _{k,j}r'(x_k\cdot x_k).} Combining (3.23) (where the tildes have been dropped) with the Cauchy inequality, we see, after a slight decrease of $a,\, b,\, \alpha ,\, \beta $, that \ekv{4.23} {\vert r'(x_j\cdot x_j)\vert \le C\epsilon ,\,\, x_j\in V(a,b),} where $C=C_{{\rm (4.23)}}$ depends on $a,\, b,\, \alpha ,\, \beta $ and on how much we decreased $V(a,b)$. Using this in (4.22) with (4.16), we get \ekv{4.24} {\vert d_j\chi _{j,k}+d_k\chi _{k,j}-1\vert \le 4C_{{\rm (4.23)}}\epsilon .} \par Clearly (4.24) implies the invertibility of the map ${\cal D}(x)$, and we shall introduce weighted $\ell^\infty $-norms on the $m\times m$-matrices, for which (4.18) can be solved by a perturbation argument. Let $\rho :\Lambda \times \Lambda \to{\bf R}$ be a symmetric function: $\rho (j,k)=\rho (k,j)$, satisfying \ekv{4.25} {\vert \rho (j_1,k_1)-\rho (j_2,k_2)\vert \le \delta (\vert j_1-j_2\vert_1 +\vert k_1-k_2\vert _1),} for some $\delta >0$, where $|\cdot|_1=|\cdot|_{\ell^1}$ is the $\ell^1$ norm in ${\bf Z}^d$. The smallest possible $\delta $ will be denoted by $\Vert \rho \Vert _{{\rm Lip}}$.
If $B=(b_{j,k})$ is an $m\times m$-matrix, we put \ekv{4.26} {\Vert B\Vert _{\ell_\rho ^\infty }=\max_{j,k} e^{\rho (j,k)}\vert b_{j,k}\vert .} Then according to (4.16): \ekv{4.27} {\Vert \chi *A\Vert _{\ell_\rho ^\infty },\, \Vert A*{}^t\chi \Vert _{\ell_\rho ^\infty }\le 2\Vert A\Vert _{\ell_\rho ^\infty },} and (4.24) implies that \ekv{4.28} {\Vert {\cal D}(x)\Vert _{{\cal L}(\ell_\rho ^\infty ,\ell_\rho ^\infty )}\le 1+4C_{{\rm (4.23)}}\epsilon ,\,\,\Vert {\cal D}(x)^{-1}\Vert _{{\cal L}(\ell_\rho ^\infty ,\ell_\rho ^\infty )}\le{1\over 1-4C_{{\rm (4.23)}}\epsilon }.} \par In order to estimate the norm of ${\cal R}(x)$, write $$(\Delta \circ B)_{j,k}=\sum_{\vert j-\ell\vert _1=1}b_{\ell,k},$$ $$e^{\rho (j,k)}(\Delta \circ B)_{j,k}=\sum_{\vert j-\ell\vert _1=1}e^{\rho (j,k)-\rho (\ell ,k)}e^{\rho (\ell ,k)}b_{\ell ,k},$$ and conclude that \ekv{4.29} {\Vert \Delta \circ B\Vert _{\ell_\rho ^\infty }\le 2d e^{\Vert \rho \Vert _{{\rm Lip}}}\Vert B\Vert _{\ell_\rho ^\infty },} where we recall that $d$ is the dimension of the lattice. 
Similarly, \ekv{4.30} {\Vert B\circ \Delta \Vert _{\ell_\rho ^\infty }\le 2de^{\Vert \rho \Vert _{{\rm Lip}}}\Vert B\Vert _{\ell_\rho ^\infty }.} Combining this with (4.27), we get \ekv{4.31} {\Vert {\cal R}(x)\Vert _{{\cal L}(\ell_\rho ^\infty ,\ell_\rho ^\infty )}\le 8de^{\Vert \rho \Vert _{{\rm Lip}}}.} Write (4.18): \ekv{4.32} {-\Delta ={\cal D}\circ (I+t{\cal D}^{-1}\circ {\cal R})(A).} \par Assume from now on, \ekv{4.33} {4C_{{\rm (4.23)}}\epsilon \le {1\over 2},\,\, 32\vert t\vert de\le 1,} and choose $\rho $ with \ekv{4.34} {32\vert t\vert de^{\Vert \rho \Vert _{{\rm Lip}}}\le 1.} Then $\Vert {\cal D}^{-1}\Vert _{{\cal L}(\ell_\rho ^\infty ,\ell_\rho ^\infty )}\le 2$, $\Vert t{\cal D}^{-1}{\cal R}\Vert _{{\cal L}(\ell_\rho ^\infty ,\ell_\rho ^\infty )}\le {1\over 2}$, and (4.32) has a unique solution $A=A(x)$, satisfying \ekv{4.35} {\Vert A\Vert _{\ell_\rho ^\infty }\le 4\Vert \Delta \Vert _{\ell_\rho ^\infty }.} Naturally, we may replace $\Delta $ in (4.32), (4.35) by any symmetric matrix. \par We sum up the discussion of the existence of a deforming vector field: \smallskip \par\noindent \bf Proposition 4.1. \sl Let $V(a,b)$ be sufficiently small, so that (4.10), (4.11) hold when $\Omega (0)=\Omega (0,\alpha ,\beta )$ is determined by (4.9). Assume (3.28), (4.23) (with the tildes dropped), where $\epsilon >0$ satisfies (4.33). Then there is a holomorphic vectorfield $v=v_t(x)\cdot {\partial \over \partial x}$, defined for $t\in{\bf C}$, $\vert t\vert <1/(32\,de)$, $x\in V(a,b)^m$, which satisfies (4.4), of the form $v(x)=B(x)x$, with $B=B_t(x)$ of the form (4.17), where $A(x)$ is the unique symmetric $m\times m$-matrix satisfying (4.18). If $\rho $ satisfies (4.34), then (4.35) holds.\rm\bigskip \par In the remainder of this section, we shall derive various estimates on $A$ and $v$ under the assumptions of Proposition 4.1. According to (4.33), (4.34), a possible choice of $\rho $ in (4.35) is $\rho (j,k)=\vert j-k\vert _1$. 
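\par Let us make the norm bound (4.35) explicit: under (4.33), (4.34) we have $\Vert t{\cal D}^{-1}\circ {\cal R}\Vert _{{\cal L}(\ell_\rho ^\infty ,\ell_\rho ^\infty )}\le {1\over 2}$ by (4.28), (4.31), so the Neumann series for (4.32) converges and $$A=-(I+t{\cal D}^{-1}\circ {\cal R})^{-1}{\cal D}^{-1}(\Delta ),\qquad \Vert A\Vert _{\ell_\rho ^\infty }\le {1\over 1-{1\over 2}}\cdot 2\,\Vert \Delta \Vert _{\ell_\rho ^\infty }=4\Vert \Delta \Vert _{\ell_\rho ^\infty }.$$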
(4.35) then gives \ekv{4.36} {\vert a_{j,k}(x)\vert \le 4e^{1-\vert j-k\vert _1},} which implies that \ekv{4.37} {\Vert A(x)\Vert _{{\cal L}(\ell^p,\ell^p)}\le 4e\Big( {e+1\over e-1}\Big)^d,} for every $p\in[1,\infty ]$. \par We apply this with $p=\infty $ to the expression \ekv{4.38} {v_j(x)=\sum_ka_{j,k}(x)\chi _{j,k}(x)x_k,} together with the estimate which follows from (4.16), (4.12): \eekv{4.39} {\vert \chi _{j,k}(x)x_k\vert \le {2\vert \langle x_j\rangle \vert\, \vert x_k\vert \over \vert \langle x_j\rangle \vert +\vert \langle x_k\rangle \vert}\le {2\vert \langle x_j\rangle \vert \langle \vert x_k\vert \rangle \over \vert \langle x_j\rangle \vert +\vert \langle x_k\rangle \vert }} {\hskip 2cm \le 2\sqrt{2}{\vert \langle x_j\rangle \vert \, \vert \langle x_k\rangle \vert \over \vert \langle x_j\rangle \vert +\vert \langle x_k\rangle \vert }\le 2\sqrt{2}\vert \langle x_j\rangle \vert ,} and conclude that \ekv{4.40} {\vert v_j(x)\vert \le 2\sqrt{2} \vert \langle x_j\rangle \vert \sum_k\vert a_{j,k}(x)\vert \le 8\sqrt{2}e\Big({e+1\over e-1}\Big) ^d\vert \langle x_j\rangle \vert .} This estimate implies that if $0<\dots$ \par From (4.42)--(4.45), we get \ekv{4.46} {\partial _{x_j}^\alpha ({}^t\chi (x)\circ D(x))=\Pi _j\circ {\cal O}(\langle x_j\rangle ^{-\vert \alpha \vert })+{\cal O}(\langle x_j\rangle ^{-\vert \alpha \vert })\circ \Pi _j,\,\,\alpha \ne 0,} \eekv{4.47} {\hskip 2cm \partial _{x_j}^\alpha \partial _{x_k}^\beta ({}^t\chi (x)\circ D(x))=} {\Pi _j\circ {\cal O}(\langle x_j\rangle ^{-\vert \alpha \vert }\langle x_k\rangle ^{-\vert \beta \vert })\circ \Pi _k+\Pi _k\circ {\cal O}(\langle x_j\rangle ^{-\vert \alpha \vert }\langle x_k\rangle ^{-\vert \beta \vert })\circ \Pi _j,\,j\ne k,\, \alpha \ne 0\ne \beta ,} \ekv{4.48} {\partial _{x_j}^\alpha \partial _{x_k}^\beta \partial _{x_\ell}^\gamma ({}^t\chi (x)\circ D(x))=0,\,j\ne k\ne \ell\ne j,\,\alpha \ne 0,\, \beta \ne 0, \, \gamma \ne 0.} \par We can now study the derivatives of ${\cal D}(x)+t{\cal R}(x)$ in (4.18).
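\par The passage from (4.36) to (4.37) is a standard application of Schur's test: since the bound (4.36) is symmetric in $j,k$, the quantity $$\max_j\sum_k\vert a_{j,k}(x)\vert \le 4e\sum_{k\in{\bf Z}^d}e^{-\vert k\vert _1}=4e\Big(\sum_{n\in{\bf Z}}e^{-\vert n\vert }\Big)^d=4e\Big({1+e^{-1}\over 1-e^{-1}}\Big)^d=4e\Big({e+1\over e-1}\Big)^d$$ bounds the norms of $A(x)$ both on $\ell^1$ and on $\ell^\infty $, and the bound for general $p\in [1,\infty ]$ follows by interpolation.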
If $C$ is an $m\times m$-matrix, let ${\cal S}(C)=C+{}^tC$. Then, \ekv{4.49} {{\cal D}(x)(A)={\cal S}((A*{}^t\chi (x))\circ D(x))={\cal S}(A*({}^t\chi (x)\circ D(x))),} \ekv{4.50} {{\cal R}(x)(A)={\cal S}((A*{}^t\chi (x))\circ \Delta ).} \par If $\rho _1$, $\rho _2$ are symmetric weights, then for all symmetric $A$'s in the three cases in (4.46)--(4.48): \eekv{4.51} {\Vert (\partial _{x_j}^\alpha {\cal D})(x)(A)\Vert _{\ell_{\rho _2}^\infty }={\cal O}(1)\langle x_j\rangle ^{-\vert \alpha \vert }\Vert A\Vert _{\ell_{\rho _1}^\infty },\hbox{ if}} {\rho _2\le \rho _1\hbox{ on }L(j):=(\{ j\} \times \Lambda )\cup (\Lambda \times \{ j\} ),} \eekv{4.52} {\Vert (\partial _{x_j}^\alpha \partial _{x_k}^\beta {\cal D})(x)(A)\Vert _{\ell_{\rho _2}^\infty }={\cal O}(1)\langle x_j\rangle ^{-\vert \alpha \vert }\langle x_k\rangle ^{-\vert \beta \vert }\Vert A\Vert _{\ell_{\rho _1}^\infty },\hbox{ if}} {\rho _2\le \rho _1\hbox{ on }\{ (j,k),\,(k,j)\} =L(j)\cap L(k),} \ekv{4.53} {(\partial _{x_j}^\alpha \partial _{x_k}^\beta \partial _{x_\ell}^\gamma {\cal D})(x)(A)=0.} We have the same estimates for the map $A\mapsto A*{}^t\chi (x)$, and if we assume in addition that $\Vert \rho _1\Vert _{{\rm Lip}}$, $\Vert \rho _2\Vert _{{\rm Lip}}\le r$, then \ekv{4.54} {\Vert (\partial _{x_j}^\alpha {\cal R})(x)(A)\Vert _{\ell_{\rho _2}^\infty }={\cal O}(e^r)\langle x_j\rangle ^{-\vert \alpha \vert }\Vert A\Vert _{\ell_{\rho _1}^\infty },\hbox{ if }\alpha \ne 0,\hbox{ and }\rho _2\le \rho _1\hbox{ on }L(j),} \eekv{4.55} {\Vert (\partial _{x_j}^\alpha \partial _{x_k}^\beta {\cal R})(x)(A)\Vert _{\ell_{\rho _2}^\infty }={\cal O}(e^r)\langle x_j\rangle ^{-\vert \alpha \vert }\langle x_k\rangle ^{-\vert \beta \vert }\Vert A\Vert _{\ell_{\rho _1}^\infty },\,j\ne k,\, \alpha \ne 0\ne \beta ,}{\hbox{if }\rho _2\le \rho _1\hbox{ on }L(j)\cap L(k),} \ekv{4.56} {\left(\partial _{x_j}^\alpha \partial _{x_k}^\beta \partial _{x_\ell}^\gamma {\cal R}(x)\right)(A)=0,\,j\ne k\ne \ell\ne j,\,\alpha \ne 0,\, \beta \ne 0,\, \gamma \ne 0.} If we assume
\ekv{4.57} {\vert t\vert e^r={\cal O}(1),} then (4.51)--(4.53) are valid with ${\cal D}$ replaced by ${\cal E}:={\cal D }+t{\cal R}$, but now with the restriction $\Vert \rho _1\Vert _{{\rm Lip}},\, \Vert \rho _2\Vert _{{\rm Lip}}\le r$. For a given such $\rho _1$, the optimal choice of $\rho _2$ in (4.51) (with ${\cal D}$ replaced by ${\cal E}$) is $$\rho _2(a)=\min_{b\in L(j)}(\rho _1(b)+r\vert a-b\vert _1).$$ Similarly, the optimal choice of $\rho _2$ in (4.52) (with ${\cal D}$ replaced by ${\cal E}$) is $$\min_{b\in L(j)\cap L(k)}(\rho _1(b)+r\vert a-b\vert _1)\ge\min_{b_1\in L(j),\,b_2\in L(k)}(\rho _1(b_1)+r\vert b_1-b_2\vert _1+r\vert b_2-a\vert _1)=:\rho _2(a).$$ \par We shall now differentiate the equation (4.18), which we write as \ekv{4.58} {{\cal E}_t(x)(A)=\Delta .} Let $r\ge 1$ satisfy \ekv{4.59} {32\,\vert t\vert de^r\le 1,} so that according to (4.35): \ekv{4.60} {\Vert A\Vert _{\ell^\infty _{r\vert \cdot -\cdot \vert _1}}\le 4\cdot 2de^r.} We use the remark after (4.35), on the differentiated equation, with $j_1,..,j_N$ pairwise distinct and with $\alpha _j\ne 0$: \eeekv{4.61} {{\cal E}_t(x)(\partial _{x_{j_1}}^{\alpha _1}..\partial _{x_{j_N}}^{\alpha _N}A)=\hbox{ a linear combination of terms}} {\hskip 15mm (\partial _{x_{j_k}}^{\alpha _k'}{\cal E}_t(x))(\partial _{x_{j_1}}^{\alpha _1}..\partial _{x_{j_k}}^{\alpha _k-\alpha _k'}..\partial _{x_{j_N}}^{\alpha _N}A)\hbox{ and of terms}} {\hskip 3cm (\partial _{x_{j_k}}^{\alpha _k'}\partial _{x_{j_\ell}}^{\alpha _\ell '}{\cal E}_t(x))(\partial _{x_{j_1}}^{\alpha _1}..\partial _{x_{j_k}}^{\alpha _k-\alpha _k'}..\partial _{x_{j_\ell}}^{\alpha _\ell-\alpha _\ell'}..\partial _{x_{j_N}}^{\alpha _N}A),} with $0<\alpha _k'\le\alpha _k$ for the first kind of terms and with $k\ne \ell$, $0<\alpha _k'\le \alpha _k$, $0<\alpha _\ell '\le \alpha _\ell$ for the second kind.
\par Using the observation after (4.57) and an induction argument based on (4.61), we get \ekv{4.62} {\Vert \partial _{x_{j_1}}^{\alpha _1}..\partial _{x_{j_N}}^{\alpha _N}A\Vert _{\ell_\rho ^\infty }={\cal O}(e^r)\langle x_{j_1}\rangle ^{-\vert \alpha _1\vert }..\langle x_{j_N}\rangle ^{-\vert \alpha _N\vert },} (where ${\cal O}(e^r)$ comes from $\Vert \Delta\Vert _{\ell_\rho ^\infty }={\cal O}(e^r)$), when $j_1,..,j_N$ are distinct, $\alpha _1,..,\alpha _N\ne 0$ and \eekv{4.63} {\rho (\mu ,\nu )=r\min_{\pi \in {\rm Perm\,}(j_1,..,j_N)}\mathop{{\rm min\,}}_{{{{{b_N\in L(\pi (j_N))}\atop .}\atop.}\atop{b_1\in L(\pi (j_1))}}\atop {b_0\in {\rm diag\,}(\Lambda \times \Lambda )}}(\vert (\mu ,\nu )-b_N\vert _1} {\hskip7cm +\vert b_N-b_{N-1}\vert _1+..+\vert b_1-b_0\vert _1).} Here ${\rm Perm\,}(j_1,..,j_N)$ denotes the group of permutations of $(j_1,..,j_N)$. For given $\pi $ and $b_0,b_1,..,b_N$ as in (4.63), we write $b_k=(b_{k,1},b_{k,2})$, so that $$\eqalign{& \vert (\mu ,\nu )-b_N\vert _1+\vert b_N-b_{N-1}\vert _1+..+\vert b_1-b_0\vert _1= \cr &\hskip 5cm \vert \mu -b_{N,1}\vert _1+\vert b_{N,1}-b_{N-1,1}\vert _1+..\cr &\hskip 1cm +\vert b_{1,1}-b_{0,1}\vert _1+\vert b_{0,1}-b_{1,2}\vert _1+\vert b_{1,2}-b_{2,2}\vert _1+..+\vert b_{N-1,2}-b_{N,2}\vert _1+\vert b_{N,2}-\nu \vert _1.}$$ Here for each $k\ge 1$, one of $b_{k,1},b_{k,2}$ is equal to $\pi (j_k)$ while the other component is ``free''. $b_{0,1}=b_{0,2}$ is also free. Taking the infimum over the free components, we get $$\vert \mu -\widetilde{\pi }(j_N)\vert _1+\vert \widetilde{\pi }(j_N)-\widetilde{\pi }(j_{N-1})\vert _1+..+\vert \widetilde{\pi }(j_1)-\nu \vert _1,$$ for some new permutation $\widetilde{\pi }$ (which can be arbitrary, when varying $\pi $ and the choice of free and unfree components).
We then arrive at the simpler expression for $\rho $ in (4.62): \ekv{4.64} {\rho (\mu ,\nu )=r\min_{\pi \in{\rm Perm\,}(1,..,N)}\vert \mu -j_{\pi (N)}\vert _1+\vert j_{\pi (N)}-j_{\pi (N-1)}\vert _1+..+\vert j_{\pi (1)}-\nu \vert _1.} We may say that $\rho $ is $r$ times the $\ell^1$ distance from $\mu $ to $\nu $, when passing through the points $j_1,..,j_N$ in the shortest possible fashion. With this description of $\rho $ it is quite obvious that we can drop the assumption that $j_1,..,j_N$ are distinct in (4.62). \par It is easy to get the corresponding estimates for the matrix $B$ in (4.17). It will be convenient to sharpen (4.41) a little by using the middle bound in (4.16) and the Cauchy inequalities, to get \ekv{4.65} {\vert \chi _{j,k}(x)\vert \le 2\min (1,{\vert \langle x_j\rangle \vert \over \vert \langle x_k\rangle \vert }),} \ekv{4.66} {\vert \partial _{x_j}^\alpha \partial _{x_k}^\beta \chi _{j,k}(x)\vert ={\cal O}(1)\min (1,{\vert \langle x_j\rangle \vert \over \vert \langle x_k\rangle \vert })\langle x_j\rangle ^{-\vert \alpha \vert }\langle x_k\rangle ^{-\vert \beta \vert }.} \par Combining this with (4.62), (4.60), we get \ekv{4.67} {\partial _{x_{j_1}}^{\alpha _1}..\partial _{x_{j_N}}^{\alpha _N}b_{\mu ,\nu }(x)={\cal O}(e^r)\min (1,{\vert \langle x_\mu \rangle \vert \over \vert \langle x_\nu \rangle \vert })e^{-\rho (\mu ,\nu )}\langle x_{j_1}\rangle ^{-\vert \alpha _1\vert }..\langle x_{j_N}\rangle ^{-\vert \alpha _N\vert },} with $\rho $ given by (4.64). (We always have the option of replacing $r$ by a smaller value in (4.64), (4.67).) \par Recall that we have already estimated the vectorfield $v$ in (4.40). We now estimate the derivatives.
For $\vert \alpha \vert =1$, consider \ekv{4.68} {\partial _{x_k}^\alpha v_j(x)=\partial _{x_k}^\alpha (\sum_{\mu }b_{j,\mu }(x)x_\mu )=b_{j,k}(x)+\sum_\mu \partial _{x_k}^\alpha (b_{j,\mu }(x))x_\mu .} Here the first term can be estimated by means of (4.67) and we use (4.67) also for the last sum in (4.68): $$\eqalign{ &\sum_\mu \partial _{x_k}^\alpha (b_{j,\mu }(x))x_\mu =\sum_\mu {\cal O}(e^r){\langle x_j\rangle \over \langle x_\mu \rangle }\cdot {1\over\langle x_k\rangle }e^{-r(\vert j-k\vert _1+\vert k-\mu \vert _1)}x_\mu \cr &={\cal O}(1){\langle x_j\rangle \over\langle x_k\rangle }\sum_\mu e^re^{-r(\vert j-k\vert _1+\vert k-\mu \vert _1)}={\cal O}(1){\langle x_j\rangle \over \langle x_k\rangle }e^re^{-r\vert j-k\vert _1},}$$ where in the last estimate, we first assume a strictly positive lower bound on $r$. Writing the Jacobian matrix ${\partial v\over \partial x}=({\partial v_j\over\partial x_k})$, where ${\partial v_j\over\partial x_k}$ is a $2\times 2$-matrix, we obtain \ekv{4.69} {{\partial v_j\over\partial x_k}={\cal O}(1){\langle x_j\rangle \over \langle x_k\rangle }e^re^{-r\vert j-k\vert _1}.} It follows that if $\Vert \rho \Vert _{{\rm Lip}}\le \theta r$, where $\theta \in [0,1[$ is some fixed constant, then \ekv{4.70} {\Vert {1\over\langle x\rangle }\circ {\partial v\over\partial x}\circ \langle x\rangle \Vert _{{\cal L}(\ell_\rho ^p,\ell_\rho ^p)}={\cal O}(e^r),} for $1\le p\le \infty $. Here we write $\langle x\rangle ={\rm diag\,}(\langle x_j\rangle )$. \par We next generalize (4.69) to higher derivatives. Let $N\ge 1$ be fixed and let $k_1,..,k_N\in\Lambda $. With a slight abuse of notation, we have \ekv{4.71} {\partial _{x_{k_1}}..\partial _{x_{k_N}}v_j=\sum_\mu (\partial _{x_{k_1}}..\partial _{x_{k_N}}b_{j,\mu }(x))x_\mu +\sum_{\ell =1}^N\partial _{x_{k_1}}..\widehat{\partial _{x_{k_\ell}}}..\partial _{x_{k_N}}b_{j,k_\ell}(x),} where the hat indicates the absence of the corresponding factor. 
From (4.67), we get \ekv{4.72} {(\partial _{x_{k_1}}..\partial _{x_{k_N}}b_{j,\mu }(x))x_\mu ={\cal O}(1){\langle x_j\rangle \over\langle x_{k_1}\rangle ..\langle x_{k_N}\rangle }e^re^{-\rho (j,\mu )},} where \ekv{4.73} {\rho (j,\mu )=r\min_{\pi \in{\rm Perm\,}(1,..,N)}\vert j-k_{\pi (N)}\vert _1+\vert k_{\pi (N)}-k_{\pi (N-1)}\vert _1+..+\vert k_{\pi (1)}-\mu \vert _1.} It follows that the first term of the RHS in (4.71) is \ekv{4.74} {{\cal O}(1){\langle x_j\rangle \over\langle x_{k_1}\rangle ..\langle x_{k_N}\rangle }e^re^{-\rho (j;k_1,..,k_N)},} where \ekv{4.75} {\rho (j;k_1,..,k_N)=r\min_{\pi \in{\rm Perm\,}(1,..,N)}\vert j-k_{\pi (N)}\vert _1+\vert k_{\pi (N)}-k_{\pi (N-1)}\vert _1+..+\vert k_{\pi (2)}-k_{\pi (1)}\vert_1.} Every term in the last sum in (4.71) is also of the form (4.74), so the same holds for $\partial _{x_{k_1}}..\partial _{x_{k_N}}v_j$. We did not assume $k_1,..,k_N$ to be distinct, and the resulting estimate can therefore be given the apparently more general form: \ekv{4.76} {\partial _{x_{k_1}}^{\alpha _1}..\partial _{x_{k_N}}^{\alpha _N}\partial _{x_j}^{\beta _j}v_j={\cal O}(1){\langle x_j\rangle ^{1-\vert \beta _j\vert }\over\langle x_{k_1}\rangle^{\vert \alpha _1\vert } ..\langle x_{k_N}\rangle ^{\vert \alpha _N\vert }}e^re^{-\rho (j;k_1,..,k_N)},} with $\rho $ given in (4.75), when $\vert \alpha _1\vert ,..,\vert \alpha _N\vert \ge 1$. \par It follows that when $\vert \alpha _1\vert ,..,\vert \alpha _N\vert \ge 1$: \ekv{4.77}{\partial _{x_{k_1}}^{\alpha _1}..\partial _{x_{k_N}}^{\alpha _N}{\rm div\,} v={\cal O}(1)\langle x_{k_1}\rangle ^{-\vert \alpha _1\vert }..\langle x_{k_N}\rangle ^{-\vert \alpha _N\vert }e^re^{-\rho (k_1,..,k_N)},} where \ekv{4.78} {\rho (k_1,..,k_N)=r\min_{\pi \in{\rm Perm\,}(1,..,N)}(\vert k_{\pi (N)}-k_{\pi (N-1)}\vert _1+..+\vert k_{\pi (2)}-k_{\pi (1)}\vert _1).} Note that there is no reason to expect nice (i.e.\ uniform in $\Lambda $) estimates for ${\rm div\,}v$ itself.
We notice the special cases: \ekv{4.79} {\partial _{x_k}^\alpha {\rm div\,}v={\cal O}(1)\langle x_k\rangle ^{-1},\hbox{ when }\vert \alpha \vert =1,} \ekv{4.80} {\partial _{x_j}^\alpha \partial _{x_k}^\beta {\rm div\,}v={\cal O}(1)\langle x_j\rangle ^{-1}\langle x_k\rangle ^{-1}e^re^{-r\vert j-k\vert _1}, \hbox{ when }\vert \alpha \vert =\vert \beta \vert =1.} \vskip 1cm \centerline{\bf 5. Elimination of $t\Delta $: The flow of the deforming vectorfield.} \medskip \par In this section we shall study the flow of the vectorfield $v=v_t$ constructed in the preceding section. The constructions of that section extend to sufficiently small complex $t$, and we shall here work with complex $t$ satisfying \ekv{5.1} {32\,\vert t\vert de^r\le 1.} Let $0<a'<a$, $0<b'<b$, so that $V(a',b')\subset V(a,b)$. Then there exists $\epsilon _1>0$ depending only on $a,b,a',b',d$ (but not on $r$ in (5.1)), such that if $y\in V(a',b')^m$ and $\vert t\vert \le \epsilon _1$, then the flow $x_t(y)$ of (4.2) is well defined, stays in $V(a,b)^m$, and there exists $C>0$ depending only on $a,b,d$, such that \ekv{5.2} {{1\over C}\le {\vert \langle (x_t(y))_j\rangle \vert \over \vert \langle y_j\rangle \vert }\le C.} \par In order to estimate the differential and higher order derivatives (w.r.t. $y$) of $x(t,y)=x_t(y)$, we shall give a slightly weakened variant of (4.76). Introduce \ekv{5.3} {d(j;k_1,..,k_N)=\min_{\pi \in {\rm Perm\,}(1,..,N)}(\vert j-k_{\pi (N)}\vert +\vert k_{\pi (N)}-k_{\pi (N-1)}\vert +..+\vert k_{\pi (2)}-k_{\pi (1)}\vert ),} so that $\rho (j;k_1,..,k_N)$ in (4.76) is of the form $rd(j;k_1,..,k_N)$. Fix $\theta \in ]0,1[$.
We claim that \eekv{5.4} {\Vert {1\over \langle x\rangle }\langle \nabla _x^Nv,\tau_1\otimes ..\otimes \tau_N\rangle \Vert _{\ell_\rho ^p}\le C_Ne^r\Vert {1\over \langle x\rangle }\tau_1\Vert _{\ell_{\rho _1}^{p_1}}..\Vert {1\over \langle x\rangle }\tau_N\Vert _{\ell_{\rho _N}^{p_N}},} {\tau_j\in ({\bf C}^2)^\Lambda ,\hbox{ if }p,p_1,..,p_N\in [1,+\infty ],\,\,{1\over p}={1\over p_1}+..+{1\over p_N},} provided that the weights $\rho ,\rho _1,..,\rho _N:\Lambda \to {\bf R}$ satisfy \ekv{5.5} {\rho (j)\le \theta rd(j;k_1,..,k_N)+\rho _1(k_1)+..+\rho _N(k_N),\,\, j,k_1,..,k_N\in\Lambda .} Here $C_N$ is independent of the weights and the exponents, and we recall the notations: $\langle x\rangle ={\rm diag\,}(\langle x_j\rangle )$, ${1/ \langle x\rangle }=\langle x\rangle ^{-1}$. $\nabla _x^Nv$ is the symmetric tensor of the $N$th order derivatives of $v$. To see (5.4), write with $\tau_\nu =(\tau_{\nu,1},..,\tau_{\nu,m})$, $s=(s_1,..,s_m)$, and with a slight abuse of notation (since $\tau_{\nu ,j}$, $s_j$ are 2-vectors and not scalars): $$\eqalign{ &\langle s,\langle \nabla _x^Nv,\tau_1\otimes ..\otimes \tau_N\rangle \rangle =\sum_{j}\sum_{k_1}..\sum_{k_N}s_j(\partial _{x_{k_1}}..\partial _{x_{k_N}}v_j)\tau_{1,k_1}..\tau_{N,k_N}= \cr &\sum_{j\in\Lambda }\sum_{k\in\Lambda ^N}{\cal O}_N(1)e^re^{\rho (j)-rd(j;k_1,..,k_N)-\rho _1(k_1)-..-\rho _N(k_N)}(\langle x_j\rangle s_je^{-\rho (j)})\times \cr &\hskip 7cm{e^{\rho _1(k_1)}\tau_{1,k_1}\over\langle x_{k_1}\rangle }..{e^{\rho _N(k_N)}\tau_{N,k_N}\over \langle x_{k_N}\rangle }. }$$ Here the exponent is $$\eqalign{ & -(1-\theta )rd(j;k_1,..,k_N)-(\theta rd(j;k_1,..,k_N)+\rho _1(k_1)+..+\rho _N(k_N)-\rho (j)) \cr &\hskip 7cm \le -(1-\theta )rd(j;k_1,..,k_N). }$$ It is easy to see that $\sum^{(\nu )}e^{-(1-\theta )rd(j;k_1,..,k_N)}={\cal O}_N(1)$, $\nu =0,..,N$, where $\sum^{(\nu )}$ denotes the sum over all the variables $j,k_1,..,\widehat{k_\nu },..,k_N$ (with the exception of $k_\nu $) and with the convention that $k_0=j$.
It follows that $$\langle s,\langle \nabla _x^Nv,\tau_1\otimes ..\otimes \tau_N\rangle \rangle ={\cal O}_N(1)e^r\Vert s\Vert _{\ell_{-\rho }^q}\Vert \tau_1\Vert _{\ell_{\rho _1}^{p_1}}..\Vert \tau_N\Vert _{\ell_{\rho _N}^{p_N}},$$ for $q,p_1,..,p_N\in [1,+\infty ]$ with $1={1/q}+{1/ p_1}+..+{1/p_N}$, first in the case when precisely one of the $q,p_1,..,p_N$ is $=1$ and the others $=+\infty $, then by interpolation in the general case. The last estimate is equivalent to (5.4), since $\ell_{-\rho }^q$ is the dual space to $\ell_\rho ^p$. \par If $\rho _1,..,\rho _N$ are given, then the optimal choice of $\rho $ in (5.5) is given by \eekv{5.6} {\rho (j)=R_{\theta r,N}(\rho _1,..,\rho _N)(j):=} {\inf_{k_1,..,k_N\in\Lambda }\theta rd(j;k_1,..,k_N)+\rho _1(k_1)+..+\rho _N(k_N).} \smallskip \par\noindent \bf Proposition 5.1. \sl Assume that \ekv{5.7} {\Vert \sum_{j\in K}\rho _j\Vert _{{\rm Lip}}\le\theta r\hbox{ for every }K\subset\{ 1,..,N\}.} Then $R_{\theta r,N}(\rho _1,..,\rho _N)=\rho _1+..+\rho _N$. \smallskip \par\noindent \bf Proof. \rm It suffices to prove that $R_{\theta r,N}(\rho _1,..,\rho _N)\ge \rho _1+..+\rho _N$, since the opposite inequality is obvious. We have the proposition in the case $N=1$. Assume we have proved the proposition with $N$ replaced by $N-1$, for some $N\ge 2$. Let $\pi \in{\rm Perm\,}(1,2,..,N)$. 
Then if $\pi (N-1)=\nu $, $\pi (N)=\mu $: $$\eqalign{ &\theta r(\vert j-k_{\pi (1)}\vert +\vert k_{\pi (1)}-k_{\pi (2)}\vert +..+\vert k_{\pi (N-1)}-k_{\pi (N)}\vert )+\rho _1(k_1)+..+\rho _{N}(k_{N}) \cr & \ge \theta r(\vert j-k_{\pi (1)}\vert +..+\vert k_{\pi (N-2)}-k_{\pi (N-1)}\vert )+ \sum_{i\not\in\{ \nu ,\mu \}}\rho _i(k_i)+(\rho _\nu +\rho _\mu )(k_\nu ).}$$ Here $\{ \pi (1),..,\pi (N-1)\} =\{ 1,..,\widehat{\mu },..,N\}$, so the last expression is $$\ge R_{\theta r,N-1}(\rho _1,..,\rho _\nu +\rho _\mu ,..,\widehat{\rho _\mu },..,\rho _N)(j)\ge (\rho _1+..+\rho _N)(j).$$ \hfill{$\#$} \smallskip \par Now consider (4.2) and differentiate once w.r.t. $y\in V(a',b')^m$: \eekv{5.8} {{\partial \over \partial t}\langle \nabla _yx(t,y),\tau_1\rangle =\langle \nabla _xv_t(x(t,y)),\langle \nabla _yx(t,y),\tau_1\rangle \rangle } {\langle \nabla _yx(0,y),\tau_1\rangle =\tau_1.} Here $\tau_1\in({\bf C}^2)^\Lambda $ is independent of $t$. If $\Vert \rho _1\Vert _{{\rm Lip}}\le \theta r$, then we get from this, together with (5.2), (5.4) and Proposition 5.1, that for $p\in [1,\infty ]$ \ekv{5.9} {{\partial \over \partial t}\Vert {1\over\langle y\rangle }\langle \nabla _yx(t,y),\tau_1\rangle \Vert _{\ell_{\rho _1}^p }\le{\cal O}(1)e^r\Vert {1\over \langle y\rangle }\langle \nabla _yx(t,y),\tau_1\rangle \Vert _{\ell_{\rho _1}^p }.} Here, we also used that if $t\mapsto z(t)\in B$ is a $C^1$-curve in a Banach space $B$, then $t\mapsto \Vert z(t)\Vert _B$ is Lipschitz and the a.e.
defined derivative satisfies $$\vert {d\over dt}\Vert z(t)\Vert_B\vert \le \Vert {dz(t)\over dt}\Vert _B.$$ Also recall that for Lipschitz functions, we have $$f(t)-f(s)=\int_s^t{\partial f\over\partial \sigma }(\sigma )d\sigma.$$ Combining the differential inequality (5.9) and the initial condition in (5.8), we get \ekv{5.10} {\Vert {1\over\langle y\rangle }\langle \nabla _yx(t,y),\tau_1\rangle \Vert_{\ell_{\rho _1}^p }\le e^{{\cal O}(e^r\vert t\vert )}\Vert {1\over \langle y\rangle }\tau_1\Vert _{\ell_{\rho _1}^p}.} This can be reformulated as \ekv{5.11}{\Vert {1\over\langle y\rangle }\circ {\partial x(t,y)\over\partial y}\circ \langle y\rangle \Vert _{{\cal L}(\ell_{\rho _1}^p,\ell_{\rho _1}^p)}\le e^{{\cal O}(e^r\vert t\vert )}.} (Compare with (4.70).) \par Considering also (5.8) with initial condition at some fixed $t$ instead of at $t=0$, we get an estimate for the inverse of the differential in the same way: \ekv{5.12}{\Vert {1\over\langle y\rangle }\circ {({\partial x(t,y)\over\partial y})}^{-1}\circ \langle y\rangle \Vert _{{\cal L}(\ell_{\rho _1}^p ,\ell_{\rho _1}^p)}\le e^{{\cal O}(e^r\vert t\vert )}.} \par Differentiating (5.8) $N-1$ times, we get for $N\ge 2$: \eeeekv{5.13} {{\partial \over\partial t}\langle \nabla _y^Nx(t,y),\tau_1\otimes ..\otimes \tau_N\rangle -\langle (\nabla _xv_t)(x(t,y)),\langle \nabla _y^Nx(t,y),\tau_1\otimes ..\otimes \tau_N\rangle \rangle =} {\hbox{a linear combination of terms of the type}} {\langle (\nabla _x^Lv_t)(x(t,y)),\langle \nabla _y^{\sharp K_1}x,\bigotimes_{k\in K_1}\tau_k\rangle \otimes ..\otimes \langle \nabla _y^{\sharp K_L}x,\bigotimes_{k\in K_L}\tau_k\rangle \rangle ,} {\hbox{with }L\ge 2,\,\,K_1\cup ..\cup K_L=\{ 1,..,N\} ,\,\,K_\nu \cap K_\mu =\emptyset\hbox{ for }\nu \ne\mu ,\,\,K_\nu \ne \emptyset .} The initial condition is now: \ekv{5.14} {\langle \nabla _yx(0,y),\tau_j\rangle =\tau_j,\,\, \nabla _y^Mx(0,y)=0\hbox{ for }M\ge 2.} Let $\rho _1,..,\rho _N:\Lambda \to {\bf R}$ be weights satisfying (5.7). 
Using (5.4), Proposition 5.1, (5.13), we get by induction over $N$: \ekv{5.15} {\Vert {1\over\langle y\rangle} \langle \nabla _y^Nx(t,y),\tau_1\otimes ..\otimes \tau_N\rangle \Vert _{\ell_{\rho _1+..+\rho _N}^p}\le C_Ne^r\vert t\vert \prod_1^N\Vert {1\over \langle y\rangle }\tau_j\Vert _{\ell_{\rho _j}^{p_j}},} for $N\ge 2$ and for weights $\rho _1,..,\rho _N:\Lambda \to{\bf R}$ satisfying (5.7) and for exponents $p_1,..,p_N,p\in [1,\infty ]$ satisfying \ekv{5.16} {{1\over p}={1\over p_1}+..+{1\over p_N}.} The constant $C_N$ in (5.15) only depends on $\theta $ in (5.7) and not on the choice of the $\rho _1,..,\rho _N$ and $p_1,..,p_N$. \vskip 1cm \centerline{\bf 6. Elimination of $t\Delta $: The end.} \medskip \par We start with some formal considerations about how to transform integrals, to be justified in each case by convenient choices of contours along which the integrands decay fast enough near infinity. All functions are assumed to be holomorphic in $x$ and sufficiently smooth in $t$ where $t$ varies in some interval. (The case of complex $t$ with holomorphic dependence on $t$ works the same way.) Let $\lambda $ be some parameter $\ne 0$ and let $v_t$ be a (holomorphic) vector field such that \ekv{6.1} {{\partial \phi _t\over\partial t}+v_t(\phi _t)-{1\over \lambda }{\rm div\,}(v_t)+{1\over \lambda }r_t=0,} where $r_t$ is a remainder and $\phi _t$ a phase. 
\par Then \eeeekv{6.2} {{\partial \over \partial t}\int f_t(x)e^{-\lambda \phi _t(x)}dx=\int{\partial f_t\over\partial t}(x)e^{-\lambda \phi _t(x)}dx-\lambda \int f_t(x)e^{-\lambda \phi _t(x)}{\partial \phi _t\over\partial t}dx} {=\int {\partial f_t\over\partial t}(x)e^{-\lambda \phi _t(x)}dx+\lambda \int f_t(x)e^{-\lambda \phi _t(x)}(v_t(\phi _t)-{1\over\lambda }{\rm div\,}(v_t)+{1\over\lambda }r_t)dx} {=\int ({\partial f_t\over\partial t}+r_tf_t)e^{-\lambda \phi _t(x)}dx-\int f_t(x)(v_t+{\rm div\,}(v_t))(e^{-\lambda \phi _t(x)})dx} {=\int ({\partial f_t\over\partial t}+r_tf_t+v_t(f_t(x)))e^{-\lambda \phi _t(x)}dx,} where we used an integration by parts in the next-to-last integral. We conclude that the integral $\int f_t(x)e^{-\lambda \phi _t(x)}dx$ is independent of $t$, if \ekv{6.3} {{\partial f_t\over\partial t}+v_t(f_t)+r_tf_t=0.} Let $t\mapsto x_t(y)$ be an integral curve of $v_t$: $\partial _tx_t(y)=v_t(x_t(y))$. Writing $u(t)=f_t(x_t(y))$, (6.3) amounts to $${d\over dt}u(t)+r_t(x_t(y))u(t)=0,$$ so $$u(t)=u(0)e^{-\int_0^tr_s(x_s(y))ds}.$$ In other words, the solutions of (6.3) (at least locally) are of the form $f_t(x)$, with \ekv{6.4} {f_t(x_t(y))=f_0(y)e^{-\int_0^tr_s(x_s(y))ds}.} The way we have set up things, $f_t$ is given for some $t$ and we look for $f_0$, so we rewrite (6.4): \ekv{6.5} {f_0(y)=e^{\int_0^tr_s(x_s(y))ds}f_t(x_t(y)),} leading to the identity, \ekv{6.6} {\int f(x)e^{-\lambda \phi _t(x)}dx=\int f(x_t(y))e^{-\lambda \phi _0(y)+\int_0^tr_s(x_s(y))ds}dy.} \par Let $$M_t(x):=1+{t}\Delta +{\rm diag\,}{r}'(x_j\cdot x_j),$$ which first appeared in (3.25). Note that $\det M_t\ne 0$ in $V(a,b)^m$, so $\log \det M_t$ is holomorphic there. We apply (6.6) with $\phi _t(x)=Q_t(x)-\left (1/\lambda \right )\log\det M_t$, where $Q_t$ is as in (4.1) and $\lambda =\vert 1+iE\vert $. Let $v_t$ be the vector field constructed in sect. 4. Let $f(x)$ be a holomorphic function on $V(a,b)^m$ of at most polynomial growth at infinity.
We also recall that $\vert t\vert \le T_0$, with $T_0$ so small that $x_t(y)\in V(a,b)^m$ when $y\in V(a',b')^m$, for some fixed $a'$, $b'$. Let $T>0$, and write $\Omega =\Omega (T)=\{ x;\,\vert x_j\vert <T\hbox{ for all }j\}$. We check that the assumptions of Theorem C.8 are satisfied, when $\delta $ is small enough: \par Let $x\in\partial \Omega$, so that $\vert x_j\vert \le T$ with equality for some $j=j_0$. The $j_0$:th component of $\nu $ is $$\lambda (2x_{j_0}+\nabla _{x_{j_0}}R(x))=\lambda (2x_{j_0}+{\cal O}(\delta )).$$ For the corresponding real vectorfield $\nu _{\bf R}$, we therefore have $\nu _{\bf R}(\vert x_{j_0}\vert )>0$ at the point under consideration. The outgoing condition (C.32) follows. The conditions (C.25) and (C.31) are clearly fulfilled, and the vectorfield $\nu $ therefore satisfies all the required conditions. \par Let $B=\ell_\rho ^\infty $ with $\Vert \rho \Vert _{{\rm Lip}}\le \theta r$. Then as a special case of (6.15): \ekv{7.8} {\Vert R''(x)\Vert _{{\cal L}(\ell_\rho ^\infty ,\ell_\rho ^\infty )}={\cal O}(\delta ).} We then get (C.33) with ``$\delta $" there equal to $\lambda $: \eekv{7.9} {\hbox{If }x\in\overline{\Omega },\, u\in B,\, v\in B^*\hbox{ and }{\rm Re\,}(u\vert v)=\Vert u\Vert _B\Vert v\Vert _{B^*},} {\hbox{then }{\rm Re\,}(V(x)u\vert v)\ge \lambda \Vert u\Vert _B\Vert v\Vert _{B^*}.} It follows that if $v\in C_b(\overline{\Omega })\cap{\rm Hol}(\Omega )$, then there exists $u\in E$ (the space defined in Theorem C.8) such that \ekv{7.10} {-\Delta u+\lambda \nabla \phi _t\cdot {\partial \over\partial x}u+\lambda \phi _t''(x)u=v,} and \ekv{7.11} {\sup_{\overline{\Omega }}\Vert u\Vert _{\ell_\rho ^\infty }\le{1\over\lambda }\sup_{\overline{\Omega }}\Vert v\Vert _{\ell_\rho ^\infty }.} We also recall from the proof of Theorem C.8, that if $v\in{\cal S}(\overline{\Omega })\cap{\rm Hol}$, then $u$ is in the same space. \par We shall next use the maximum principle as in appendix C, to estimate the derivatives of $u$, when $u,v\in{\cal S}(\overline{\Omega })\cap{\rm Hol}$.
To simplify the notations, we divide (7.10) by $\lambda $ and then take the scalar product with the constant vector $\tau$: \ekv{7.12} {-{1\over\lambda }\Delta \langle u,\tau\rangle +\langle \langle \nabla u,\nabla \phi \rangle ,\tau\rangle +\langle \langle \nabla ^2\phi ,u\rangle ,\tau\rangle =\langle {v\over\lambda },\tau\rangle .} Now differentiate (7.12) in the constant direction $s$, using the identity \ekv{7.13} {s_1(\partial _x)\circ ..\circ s_k(\partial _x)u=\langle \nabla ^ku,s_1\otimes ..\otimes s_k\rangle ,} when $s_1,..,s_k$ are constant directions: $$\eqalign{ &-{1\over\lambda }\Delta \langle \langle \nabla u,s\rangle ,\tau\rangle +\langle \langle \langle \nabla ^2u,s\rangle ,\nabla \phi \rangle ,\tau\rangle +\langle \langle \nabla u,\langle \nabla ^2\phi ,s\rangle \rangle ,\tau\rangle +\langle \langle \nabla ^2\phi ,\langle \nabla u,s\rangle \rangle ,\tau\rangle \cr & \hskip 3cm ={1\over\lambda }\langle \langle \nabla v,s\rangle ,\tau\rangle -\langle \langle \langle \nabla ^3\phi ,s\rangle ,u\rangle ,\tau\rangle . }$$ The second and third terms of the LHS can be rewritten, and we get, \eekv{7.14} {-{1\over\lambda }\Delta \langle \langle \nabla u,s\rangle ,\tau\rangle +\nabla \phi \cdot {\partial \over\partial x}\langle \langle \nabla u,s\rangle ,\tau\rangle +\langle \langle \nabla ^2\phi ,s\rangle ,\langle {}^t(\nabla u),\tau\rangle \rangle +} {\hskip 3cm\langle \langle \nabla ^2\phi ,\langle \nabla u,s\rangle \rangle ,\tau\rangle ={1\over \lambda }\langle \langle \nabla v,s\rangle ,\tau\rangle -\langle \langle \langle \nabla ^3\phi ,s\rangle ,u\rangle ,\tau\rangle .} \par Let $B=\ell_\rho ^\infty $ with $\Vert \rho \Vert _{{\rm Lip}}\le\theta r$, so that (7.9) holds with $V=\lambda \nabla ^2\phi $. 
Let $x_0\in\overline{\Omega }$ be a point where $\Vert \nabla u\Vert _{{\cal L}(B,B)}$ ($=\Vert {}^t(\nabla u)\Vert _{{\cal L}(B^*,B^*)}$) is maximal $=:m$, and choose $s\in B$, $\tau\in B^*$ normalized, such that $$\eqalign{ &\langle \langle \nabla u(x_0),s\rangle ,\tau\rangle =\langle s,\langle {}^t\nabla u(x_0),\tau\rangle \rangle =m \cr & =\Vert \langle \nabla u(x_0),s\rangle \Vert _B\Vert \tau\Vert _{B^*}=\Vert s\Vert _B\Vert \langle {}^t\nabla u(x_0),\tau \rangle \Vert _{B^*}, }$$ so that $\overline{\Omega }\ni x\mapsto{\rm Re\,}\langle \langle \nabla u(x),\tau\rangle ,s\rangle $ attains its maximum ($m$) at $x_0$. Hence the real part of the first term in (7.14) is $\ge 0$ at $x=x_0$ and the same holds for the second term by the outgoing condition. In view of (7.9), the real parts of the 3:rd and the 4:th terms in (7.14) at $x=x_0$, are both $\ge m$, so we end up with the estimate \ekv{7.15} {2\sup_{\overline{\Omega }}\Vert \nabla u\Vert _{{\cal L}(B,B)}\le{1\over\lambda }\sup_{\overline{\Omega }}\Vert \nabla v\Vert _{{\cal L}(B,B)}+\sup_{\overline{\Omega }}\Vert \nabla ^3\phi \Vert _{(B\otimes E\otimes B^*)^*}\Vert u\Vert _E,} where $E\simeq ({\bf C}^2)^\Lambda $ is any Banach space with $({\bf C}^2)^\Lambda $ as the underlying vector space and \break $\Vert \nabla ^3\phi \Vert _{(B\otimes E\otimes B^*)^*}$ denotes the norm of $\nabla ^3\phi $ as a trilinear form on $B\times E\times B^*$. \par Notice that $u$ is the gradient of a holomorphic function in $\Omega $ iff $\nabla u$ is symmetric.
The same holds for $v$ of course, and we now rewrite (7.14) in the form: \ekv{7.16} {-{1\over\lambda }\Delta \nabla u+(\nabla \phi \cdot {\partial \over\partial x})(\nabla u)+\nabla u\circ \nabla ^2\phi +\nabla ^2\phi \circ \nabla u+\langle \nabla ^3\phi ,u\rangle ={1\over\lambda }\nabla v.} Here $\langle \nabla ^3\phi ,u\rangle $ is symmetric, so if we transpose the last equation and then take the difference between (7.16) and its transpose, we get \eekv{7.17} {-{1\over \lambda }\Delta (\nabla u-{}^t\nabla u)+\nabla \phi \cdot {\partial \over\partial x}(\nabla u-{}^t\nabla u)+(\nabla u-{}^t\nabla u)\circ \nabla ^2\phi +} {\nabla ^2\phi \circ (\nabla u-{}^t\nabla u)={1\over \lambda }(\nabla v-{}^t\nabla v).} The maximum principle (used after going back to an equation of the type (7.14)) shows that \ekv{7.18} {2\sup_{\overline{\Omega }}\Vert \nabla u-{}^t\nabla u\Vert _{{\cal L}(B,B)}\le {1\over\lambda }\sup_{\overline{\Omega }}\Vert \nabla v-{}^t\nabla v\Vert _{{\cal L}(B,B)}.} In particular, if $v$ is a gradient, so that $\nabla v-{}^t\nabla v=0$, then the same holds for $u$. In this case, if $u=\nabla f$, $v=\nabla g$, we see that the LHS in (7.10) is the gradient of $-\Delta f+\lambda \nabla \phi _t\cdot {\partial \over\partial x}f$ and we get \ekv{7.19} {-\Delta f+\lambda \nabla \phi _t\cdot {\partial \over\partial x}f-E_t=g,} where $E_t$ is a constant. \par We now want to estimate higher derivatives and we start from (7.19) with $\nabla f,\nabla g\in{\cal S}(\overline{\Omega })\cap{\rm Hol}(\Omega )$. 
Let $s_1,..,s_N$ be constant directions, and apply $s_1(\partial _x)\circ ..\circ s_N(\partial _x)$ to (7.19): $$\eqalign{ & -{1\over \lambda }\Delta (s_1(\partial _x)\circ ..\circ s_N(\partial _x)f)+\nabla \phi \cdot {\partial \over\partial x}(s_1(\partial _x)\circ ..\circ s_N(\partial _x)f)+ \cr & \sum_{j=1}^N\nabla (s_j(\partial _x)\phi )\cdot \nabla (s_1(\partial _x)\circ ..\widehat{s_j(\partial _x)}\circ ..\circ s_N(\partial _x)f)+ \cr & \sum_{J\cup K=\{ 1,..,N\} ,J\cap K=\emptyset ,\sharp J\ge 2} \nabla ((\prod_J s_j(\partial _x))\phi )\cdot \nabla (\prod_K s_k(\partial _x)f)={1\over \lambda }s_1(\partial _x )\circ ..\circ s_N(\partial _x)g,}$$ with the convention that $\prod_Ks_k(\partial _x)=1$, when $K=\emptyset$. This can also be written \eeekv{7.20}{ -{1\over \lambda }\Delta \langle \nabla ^Nf,s_1\otimes ..\otimes s_N\rangle +\nabla \phi \cdot {\partial \over\partial x}\langle \nabla ^Nf,s_1\otimes ..\otimes s_N\rangle } {\hskip 4cm +\sum_{j=1}^N\langle \langle \nabla ^2\phi ,s_j\rangle ,\langle \nabla ^Nf,s_1\otimes ..\widehat{s_j}..\otimes s_N\rangle \rangle = } { {1\over\lambda }\langle \nabla ^Ng,s_1\otimes ..\otimes s_N\rangle -\sum_{J\cup K=\{ 1,..,N\} ,J\cap K=\emptyset,\sharp J\ge 2}\langle \langle \nabla ^{1+\sharp K}f,\bigotimes_{k\in K}s_k\rangle ,\langle \nabla ^{1+\sharp J}\phi ,\bigotimes_{j\in J}s_j\rangle \rangle . } Let $\rho _1,..,\rho _N:\Lambda \to{\bf R}$ satisfy (5.7), and put $\rho _K=\sum_{k\in K}\rho _k$, when $K\subset \{1,..,N\}$ is non-empty, and $\rho _\emptyset =0$.
Let $p_1,..,p_N\in [1,+\infty ]$ satisfy \ekv{7.21} {1={1\over p_1}+..+{1\over p_N}.} If $K \subset\{ 1,..,N\}$, define $p_K\in [1,+\infty ]$, by \ekv{7.22} {{1\over p_K}=\sum_{k\in K}{1\over p_k},\hbox{ for }K\ne\emptyset,\,\,p_\emptyset=+\infty .} Let $x_0\in\overline{\Omega }$ be a point where \ekv{7.23} {\sup_{x\in\overline{\Omega }}\Vert \nabla ^Nf(x)\Vert _{(\ell_{\rho _1}^{p_1}\otimes ..\otimes\ell_{\rho _N}^{p_N})^*}=:m,} is attained, and observe that $\Vert \nabla ^Nf(x)\Vert _{(\ell_{\rho _1}^{p_1}\otimes ..\otimes\ell_{\rho _N}^{p_N})^*}$, (defined as after (7.15)), is also the norm of $\nabla ^Nf(x)$ as a multilinear map: $\ell_{\rho _1}^{p_1}\times ..\widehat{\ell_{\rho _j}^{p_j}}..\times \ell_{\rho _N}^{p_N}\to \ell_{-\rho _j}^{q_j}$, where $q_j$ is the conjugate index to $p_j$: ${1\over q_j}+{1\over p_j}=1$, so that $q_j=p_{\{ 1,..\widehat{j}..,N\} }$. (When $N=1$, we interpret $\ell_{\rho _1}^{p_1}\times ..\widehat{\ell_{\rho _j}^{p_j}}..\times \ell_{\rho _N}^{p_N}$ as ${\bf C}$ and our identification remains valid trivially.) The latter norm will be denoted $$\Vert \nabla ^Nf(x)\Vert _{{\cal L}(\ell_{\rho _1}^{p_1}\otimes ..\widehat{\ell_{\rho _j}^{p_j}}..\otimes\ell_{\rho _N}^{p_N};\ell_{-\rho _j}^{q_j})}.$$ Let $s_j\in\ell_{\rho _j}^{p_j}$ be normalized vectors with \ekv{7.24} {\langle \nabla ^Nf(x_0),s_1\otimes ..\otimes s_N\rangle =m.} We notice here that (7.8), (7.9) remain valid, if we replace ``$\infty $" there by some arbitrary $p\in [1,\infty ]$.
Since \ekv{7.25} {m={\rm Re\,}\langle s_j,\langle \nabla ^Nf(x_0),s_1\otimes ..\widehat{s_j}..\otimes s_N\rangle \rangle =\Vert s_j\Vert _{\ell_{\rho _j}^{p_j}}\Vert \langle \nabla ^Nf(x_0), s_1\otimes ..\widehat{s_j}..\otimes s_N\rangle \Vert _{\ell_{-\rho _j}^{q_j}},} and $\ell_{-\rho _j}^{q_j}$ is the dual of $\ell_{\rho _j}^{p_j}$, it follows from the above mentioned extension of (7.9), that \ekv{7.26} {{\rm Re\,}\langle \langle \nabla ^2\phi (x_0),s_j\rangle ,\langle \nabla ^Nf(x_0),s_1\otimes ..\widehat{s_j}..\otimes s_N\rangle \rangle \ge m.} (When $N=1$, we use the convention: $\langle \nabla ^Nf(x),s_1\otimes ..\widehat{s_j}..\otimes s_N\rangle =\nabla f(x)$.) \par Taking the real part of (7.20) and putting $x=x_0$, we can apply the maximum principle as before, and get \eekv{7.27} {N\sup_{x\in\overline{\Omega }} \Vert \nabla ^Nf\Vert _{(\ell_{\rho_1}^{p_1}\otimes \cdots \otimes\ell_{\rho_N}^{p_N})^*} \le{1\over \lambda}\sup_{x\in\overline{\Omega }}\Vert \nabla ^Ng\Vert _{(\ell_{\rho _1}^{p_1}\otimes ..\otimes\ell_{\rho _N}^{p_N})^*}+} {\sum_{J\cup K=\{ 1,..,N\},\, J\cap K=\emptyset,\,\sharp J\ge 2}\inf_\rho\,\sup_{x\in\overline{\Omega}} (\Vert \nabla ^{1+\sharp K}f\Vert _{{\cal L}(\bigotimes_{k\in K}\ell_{\rho _k}^{p_k};\ell_\rho ^{p_K})} \Vert \nabla^{1+\sharp J}\phi\Vert _{{\cal L} (\bigotimes_{j\in J}\ell_{\rho _j}^{p_j};\ell_{-\rho}^{p_J})}),} where we also used that ${1/ p_K}+{1/ p_J}=1$, so that $(\ell_\rho ^{p_K})^*=\ell_{-\rho }^{p_J}$. If we also have $\rho _1+..+\rho _N=0$, then a natural choice for $\rho $, to bound the infimum above, may be $\rho =\rho _K$, since then $-\rho =\rho _J$. \par We return to the equation (7.4).
Approximating ${\cal R}_t$ by the functions $e^{-\epsilon x^2}{\cal R}_t(x)\in{\cal S}(\overline{\Omega }) \cap{\rm Hol}\,(\Omega )$, we see that (7.4) has a solution $u=u_t$ with $\nabla u\in C_b^\infty (\overline{\Omega })\cap{\rm Hol\,}(\Omega )$ and such that the estimates we made for the equation (7.19), can be applied with $g=-\lambda {\cal R}_t$, $f=u$. \par Let $\rho _1,..,\rho _N:\Lambda \to{\bf R}$ be a system of weights which satisfies (5.7) for some fixed $\theta $ and assume, \ekv{7.28} {\rho _1+..+\rho _N=0.} Let $p_1,..,p_N\in[1,+\infty ]$ satisfy (7.21). We shall derive estimates for $\nabla ^Nu$, which depend on $\theta $ in (5.7), but not on the choice of $\rho _j$ and $p_j$ satisfying (5.7), (7.28) and (7.21). If $\emptyset\ne K\subset \{ 1,..,N\}$, then $\rho _k$, $k\in K$, $-\rho _K$ satisfy (5.7), (7.28) with $N$ replaced by $1+\sharp K$, and if $q_K$ is the exponent conjugate to $p_K$, then $p_k$, $k\in K$, $q_K$ satisfy (7.21): $\sum_K{1\over p_k}+{1\over q_K}=1.$ Using this remark, we can make an ``induction over $N$": Let $m(N)$ be the infimum of all constants $C=C_t$ such that \ekv{7.29} {\vert \langle \nabla ^Nu,\tau_1\otimes ..\otimes \tau_N\rangle \vert \le C\Vert \tau_1\Vert _{\ell_{\rho _1}^{p_1}}..\Vert \tau_N\Vert _{\ell_{\rho _N}^{p_N}},} for all $\tau_j\in{\bf C}^2$, $p_j\in[1,+\infty ]$ satisfying (7.21), $\rho _j$ satisfying (5.7) (where $\theta $ is fixed) and (7.28).
\par In (7.27) we choose $\rho $ as in the subsequent remark, and get \eeekv{7.30} {N\sup_{\overline{\Omega }}\Vert \nabla ^Nu\Vert _{(\ell_{\rho _1}^{p_1}\otimes ..\otimes \ell_{\rho _N}^{p_N})^*}\le} {\hskip 1cm \sup_{\overline{\Omega }}\Vert \nabla ^N{\cal R}\Vert _{(\ell_{\rho _1}^{p_1}\otimes ..\otimes \ell_{\rho _N}^{p_N})^*}+}{\hskip 2cm \sum_{J\cup K=\{ 1,..,N\} ,J\cap K=\emptyset ,\sharp J\ge 2}m(1+\sharp K)\sup_{\overline{\Omega }}\Vert \nabla ^{1+\sharp J}\phi \Vert _{{\cal L}(\bigotimes_{j\in J}\ell_{\rho _j}^{p_j};\ell_{\rho _J}^{p_J})}.} Now recall (6.19): $$\Vert \nabla ^N{\cal R}_t\Vert _{(\ell_{\rho _1}^{p_1}\otimes ..\otimes\ell_{\rho _N}^{p_N})^*}\le C_N{1\over \lambda }(e^r+\epsilon )=C_N\delta ,$$ and that $\delta ={\cal O}(1)$, by assumption. It follows from (7.1), that $$\Vert \nabla ^N\phi \Vert _{(\ell_{\rho _1}^{p_1}\otimes ..\otimes\ell_{\rho _N}^{p_N})^*}\le C_N,$$ so $$\Vert \nabla ^{1+\sharp J}\phi \Vert _{{\cal L}(\bigotimes_{j\in J}\ell_{\rho _j}^{p_j};\ell_{\rho _J}^{p_J})}\le C_{1+\sharp J}.$$ From (7.30), we get with a new constant $C_N$: \ekv{7.31} {m(N)\le C_N(\delta +\sum_1^{N-1}m(k)),} so with a new constant $C_N$: \ekv{7.32} {m(N)\le C_N\delta .} Summing up, we have proved: \smallskip \par\noindent \bf Proposition 7.1. \sl Fix $\theta \in ]0,1[$, and let $t$ be such that $\vert t\vert \delta ={\vert t\vert \over \lambda }(\epsilon +e^r)$ is sufficiently small. Then (7.4) has a solution $u=u_t$, with $\nabla u\in C_b^\infty (\overline{\Omega })\cap{\rm Hol}$.
Moreover, for $N\in \{ 1,2,..\} $ there exists $C_N>0$, such that \ekv{7.33} {\Vert \nabla ^Nu\Vert _{(\ell_{\rho _1}^{p_1}\otimes ..\otimes\ell_{\rho _N}^{p_N})^*}\le C_N\delta ,} for all weights $\rho _1,..,\rho _N:\Lambda \to {\bf R}$ satisfying (5.7), (7.28) and all exponents $p_1,..,p_N\in [1,+\infty ]$ satisfying (7.21).\rm\smallskip \par We shall now establish that the second deforming vector field $\nabla u_t$ depends holomorphically on $t$ for $t$ such that $|t|\delta$ is sufficiently small. \par Let $v_t$ be holomorphic in $(t,x)$; more precisely, $v_t\in{\cal S}(\overline{\Omega })\cap{\rm Hol\,}(\Omega )$ and $v_t$ is holomorphic in $t$ for $t$ sufficiently small. Let $u_t$ be the solution with $\nabla u_t\in{\cal S}(\overline{\Omega })\cap{\rm Hol\,}(\Omega )$ of $$-{1\over\lambda }\Delta u_t+\nabla \phi _t\cdot {\partial \over\partial x}u_t-E_t=v_t.$$ Return to the equation for the gradient: \ekv{7.34} {-{1\over\lambda }\Delta \nabla u_t+\nabla \phi _t\cdot {\partial \over\partial x}\nabla u_t+\nabla ^2\phi _t\nabla u_t=\nabla v_t.} Let us first show that $\nabla u_t$ depends continuously on $t$ in a slightly smaller tube $\Omega '=\Omega (T')$, $T'<T$, and that, for $T'>0$ small enough, \ekv{7.41} {\widetilde{x}_t=\widetilde{x}(t,\cdot ):\Omega (T')\to\Omega (T).} We fix such a $T'$. \par The final decoupling can now be carried out: We recall that the RHS of (6.8) is of the form \ekv{7.42} {\int_{({\bf R}^2)^\Lambda }g(x)e^{-\lambda \phi _t(x)}dx,} where $\phi _t$ is given by (7.1), and where $g(x)=f(x_t(x))$, with $x_t$ here denoting the earlier $v$-flow, so that $g$ is holomorphic and of at most polynomial growth in the tube $\Omega (T)$. Using Stokes' formula, we replace $({\bf R}^2)^\Lambda $ in (7.42) by $\widetilde{x}_t(({\bf R}^2)^\Lambda )$, then a second application of Stokes' formula gives us as in sect.
6, that the integral (7.42) is equal to $$\int_{({\bf R}^2)^\Lambda }g(\widetilde{x}_t(x))e^{-\lambda x\cdot x-\int_0^tE_sds}dx.$$ Here we also use that the vectorfield in Proposition 7.1 is holomorphic in $t$. \par Using finally that $\int e^{-\lambda x\cdot x}dx=\int e^{-\lambda \phi _t(x)}dx$, we see that $\int_0^t E_sds=0$. (This is fundamentally due to the fact that the measure in its original form (3.2) is a supersymmetric function and is normalized independently of $t$; see (2.20) and Theorem A.2.) The RHS of (6.8) is then of the form \ekv{7.43} {\int_{({\bf R}^2)^\Lambda }f(x_t\circ \widetilde{x}_t(x))e^{-\lambda x\cdot x}dx.} \vskip 1cm \centerline{\bf 8. Exponentially weighted estimates and end of the proof of Theorem 2.1.} \medskip \par We consider weighted estimates for $\Delta+V-E$, where $E\in{\bf C}$ and $V={\rm diag\,}(v_j)$ with $v\in\ell^\infty ({\bf Z}^d)$. Let \ekv{8.1}{q(\eta ):=2\sum_1^d\cosh \eta _j.} We then have $$\Vert e^{(\cdot )\eta }\Delta e^{-(\cdot )\eta }\Vert _{{\cal L}(\ell^2,\ell^2)}\le q(\eta).$$ Writing \ekv{8.2}{e^{(\cdot )\eta }(\Delta +V-E)e^{-(\cdot )\eta }=V-E+e^{(\cdot )\eta }\Delta e^{-(\cdot )\eta },} we observe that $\vert v-E\vert \ge \vert E\vert -\vert v\vert _\infty $ everywhere on ${\bf Z}^d$. Let $2d<\lambda $ with $\lambda +\vert v\vert _\infty <\vert E\vert $, and assume $q(\eta )\le \lambda $. \par Passing to the matrices, and using that every entry of a matrix is bounded by the norm of the matrix in ${\cal L}(\ell^2,\ell^2)$, we get from (8.2): \ekv{8.3} {\vert (\Delta +V-E)^{-1}(\mu ,\nu )\vert \le {1\over \vert E\vert -\lambda -\vert v\vert _\infty }e^{-(\mu -\nu )\cdot \eta }.} Define the convex set $$W(\lambda)=\{\eta\in{\bf R}^d;\quad q(\eta)<\lambda\}.$$ We introduce the support function of $W(\lambda)$: \ekv{8.4} {p_\lambda(x)=\sup_{\eta \in W(\lambda)}x\cdot \eta ,\,\, x\in{\bf R}^d.} Then $p_\lambda$ is even, continuous, convex, positively homogeneous of degree 1, and $p_\lambda(x)>0$ for $x\ne 0$. In other words, $p_\lambda$ is a norm on ${\bf R}^d$.
Varying $\eta \in W(\lambda )$ in (8.3), and using (8.4), we get \ekv{8.5} {\vert (\Delta +V-E)^{-1}(\mu ,\nu )\vert \le {1\over \vert E\vert -\lambda -\vert v\vert _\infty }e^{-p_\lambda (\mu -\nu )}.} \par We now assume $|E|>>d$. In order to get a precise control on $p_\lambda$, we start by considering $(\Delta -E)^{-1}$ on $\ell^2({\bf Z}^d)$, when $E\in{\bf C}$ and $\vert E\vert >>1$. Let ${\bf T}={\bf R}/2\pi {\bf Z}$. The Fourier transform ${\cal F}:\ell^2({\bf Z}^d)\to L^2({\bf T}^d;{1\over (2\pi )^d}d\xi )$ given by: \ekv{8.6} {{\cal F}u(\xi )=\sum_{j\in {\bf Z}^d}e^{ij\xi }u(j),} is unitary and has the inverse, \ekv{8.7} {{\cal F}^{-1}v(j)={1\over (2\pi )^d}\int e^{-ij\xi }v(\xi )d\xi .} Conjugation by ${\cal F}$ shows that $\Delta $ is unitarily equivalent to the operator of multiplication on $L^2({\bf T}^d)$ by \ekv{8.8} {p(\xi ):=2\sum_1^d\cos\xi _j.} Whenever convenient, we view $p$ as a $(2\pi {\bf Z})^d$-periodic function on ${\bf R}^d$ and it will be natural to consider $p$ also as a function on ${\bf C}^d$: \ekv{8.9} {p(\zeta )=2\sum_1^d\cos\zeta _j=2\sum_1^d(\cos\xi _j\cosh\eta _j-i\sin\xi _j\sinh\eta _j),} with $\zeta =\xi +i\eta \in{\bf C}^d$. \par We are interested in points where $p(\zeta )-E\ne 0$. Our analysis will be based on a certain approximate translation invariance. Observe that \ekv{8.10} {\cosh \eta _j={1\over 2}e^{\vert \eta _j\vert }+{\cal O}(e^{-\vert \eta _j\vert }),\,\, \sinh \eta _j={1\over 2}({\rm sgn\,}\eta _j)e^{\vert \eta _j\vert }+{\cal O}(e^{-\vert \eta _j\vert }),} so that \ekv{8.11} {p(\zeta )=\sum_1^de^{-i({\rm sgn\,}\eta _j)\xi _j}e^{\vert \eta _j\vert }+{\cal O}(1).} Here ${\rm sgn\,}\eta _j=+1$, when $\eta _j\ge 0$, and $=-1$, when $\eta _j<0$. (The choice for $\eta _j=0$ is unimportant.)
Put \ekv{8.12} {s(\eta )=({\rm sgn\,}\eta _1,..,{\rm sgn\,}\eta _d).} Then uniformly for $t\in{\bf R}$: \ekv{8.13} {p(\xi +ts(\eta )+i\eta )=e^{-it}p(\xi +i\eta )+{\cal O}(1).} \par For $E\in{\bf C}\setminus [-2d,2d]$, let $\Omega (E)={\bf R}^d+iW(E)$ be the largest connected open tube (i.e. set of the form ${\bf R}^d+iW$) containing ${\bf R}^d$, where $p(\zeta )-E\ne 0$. Bochner's tube theorem implies that $W(E)$ is convex. \par When $E>2d$, this coincides with the earlier definition: \ekv{8.14} {W(E)=\{ \eta \in{\bf R}^d;\,2\sum_1^d\cosh \eta _j<E\} .} For $E_2\ge E_1>2d$: \ekv{8.16} {W(E_1)\subset W(E_2)\subset W(E_1)+B(0,{\cal O}(1)\log{E_2\over E_1}).} (The first inclusion holds more generally for $E_2\ge E_1>d$.) \smallskip \par (8.16) is all we need to have a precise control on $p_\lambda$ in (8.5). However for completeness we now go on to consider the case of complex $E$. We first recall the estimate obtained in the proof of (8.14): \ekv{8.17} {\vert p(\zeta )\vert <E\hbox{ for }E>2d,\,\zeta \in{\bf R}^d+iW(E).} \par Consider now the case of general $E\in{\bf C}\setminus [-2d,2d]$. It follows from (8.17) that $\vert p(\zeta )\vert <\vert E\vert $ for $\zeta \in{\bf R}^d+iW(\vert E\vert ),$ so \ekv{8.18} {W(\vert E\vert )\subset W(E).} In the other direction, we have: \smallskip \par\noindent \bf Proposition 8.1. \sl There exists a constant $C>0$, such that \ekv{8.19} {\vert p(\zeta )\vert <\vert E\vert +C\hbox{ for all }\zeta \in{\bf R}^d+iW(E).} In particular, \ekv{8.20} {W(E)\subset W(\vert E\vert +C).} \rm\smallskip \par\noindent \bf Proof. \rm Let $\zeta =\xi +i\eta \in{\bf R}^d+iW(E)$ and assume that $\vert p(\zeta )\vert =R>>1$. Consider the closed curve $$\gamma :\, {\bf R}/2\pi {\bf Z}\ni t\mapsto p(\xi +its(\eta )+i\eta )=e^{-it}p(\zeta )+{\cal O}(1),$$ which winds once around $0$ in the negative direction at a distance $\ge R-C$ from $0$.
Since the set of values of $p\vert _{{\bf R}^d+iW(E)}$ is simply connected and contains the image of $\gamma $, it also has to contain the closed disc $\overline{D(0,R-C)}$. By definition of $W(E)$, $E$ cannot belong to $p({\bf R}^d+iW(E))$, so $\vert E\vert >R-C$, and $\vert p(\zeta )\vert <\vert E\vert +C$, as claimed.\hfill{$\#$} \medskip \par We now go back to estimate (8.5). We assume that $\vert E\vert >>2d$ and that $1+\vert v\vert _\infty \le {1\over 2}\vert E\vert $. Choose $\lambda =\vert E\vert -\vert v\vert _\infty -1$. (8.16) gives $$W(\lambda )\subset W(\vert E\vert )\subset W(\lambda )+B(0,{\cal O}(1)\log {\vert E\vert \over \vert E\vert -\vert v\vert _\infty -1}).$$ Here $${\vert E\vert \over \vert E\vert -(\vert v\vert _\infty +1)}=1+{\cal O}({1+\vert v\vert _\infty \over\vert E\vert }),$$ so $$W(\lambda )\subset W(\vert E\vert )\subset W(\lambda )+B(0,{\cal O}(1){1+\vert v\vert _\infty \over\vert E\vert }).$$ Consequently, $$p_\lambda (x)\le p_{\vert E\vert }(x)\le p_\lambda (x)+{\cal O}(1){1+\vert v\vert _\infty \over \vert E\vert }\vert x\vert .$$ Substitution into (8.5) gives on ${\bf Z}^d\times {\bf Z}^d$: \ekv{8.21} {\vert (\Delta +V-E)^{-1}(\mu ,\nu )\vert \le e^{-p_{\vert E\vert }(\mu -\nu )+{\cal O}(1){1+\vert v\vert _\infty\over \vert E\vert }\vert \mu -\nu \vert }.} \par We shall derive similar estimates for $(\Delta _\Lambda +V-E)^{-1}$, when $\Lambda $ is a discrete torus or a subset of ${\bf Z}^d$. As a preparation, we first establish that, \ekv{8.22} {(\log \lambda -\log 2d)\vert x\vert _{\ell^1}\le p_\lambda (x)\le (\log \lambda )\vert x\vert _{\ell^1},} when $\lambda >>1$. Recall that $p_\lambda $ is the support function of the set $W(\lambda )$ in ${\bf R}^d$, defined by $2\sum_1^d\cosh \eta _j\le \lambda $, so if $\eta \in W(\lambda )$, we have $e^{\vert \eta _j\vert }\le \lambda $ for every $j$, or equivalently, $\eta \in B_{\ell^\infty }(0,\log \lambda )$.
In the other direction we notice that if $\vert \eta _j\vert \le \log \lambda -\log (2d)$, then $2\cosh \eta _j \le 2e^{\vert \eta _j\vert }\le {\lambda \over d }$, so $2\sum_1^d\cosh \eta _j\le \lambda $ and hence $\eta \in W(\lambda )$. We have shown that \ekv{8.23} {B_{\ell^\infty }(0,\log \lambda -\log 2d)\subset W(\lambda )\subset B_{\ell^\infty }(0,\log \lambda ).} (8.22) now follows, since the support function of $B_{\ell^\infty }(0,1)$ is $\vert x\vert _{\ell^1}$. \par Let $\Lambda =({\bf Z}/N{\bf Z})^d$, $N>>1$ be a discrete torus, and consider $\Delta _\Lambda +V-E$, where $V={\rm diag\,}(v_j)$, $j\in\Lambda $. We also view $v$ as an $N{\bf Z}^d$-periodic function on ${\bf Z}^d$ in the natural way. If $\pi :{\bf Z}^d\to \Lambda $ is the natural projection, and $\widetilde{\nu }\in{\bf Z}^d$ some point in the pre-image of $\nu $, \ekv{8.24} {(\Delta _\Lambda +V-E)^{-1}(\mu ,\nu )=\sum_{\widetilde{\mu }\in\pi ^{-1}(\mu )}(\Delta +V-E)^{-1}(\widetilde{\mu },\widetilde{\nu }).} Let $$d_\lambda (\mu ,\nu )=\min_{\widetilde{\mu }\in\pi ^{-1}(\mu ),\,\widetilde{\nu }\in\pi ^{-1}(\nu )}p_\lambda (\widetilde{\mu }-\widetilde{\nu })$$ be the distance on $\Lambda $, induced by the norm $p_\lambda $. Observe that in (8.21) we can introduce an arbitrarily small (but fixed) prefactor in the RHS, by modifying the choice of $\lambda$ by ${\cal O}(1)$, which increases the ${\cal O}(1)$ in the exponent. Using also (8.22), we see that (8.24) converges as a geometric series and that only a fixed finite number of terms may contribute to the leading behaviour. It follows that \ekv{8.25} {\vert (\Delta _\Lambda +V-E)^{-1}(\mu ,\nu )\vert \le e^{-d_{\vert E\vert }(\mu ,\nu )+{\cal O}(1){1+\vert v\vert _{\infty }\over\vert E\vert }\rho(\mu ,\nu )},} where $\rho$ denotes the Euclidean distance on $\Lambda $. \par Consider next the case when $\Lambda $ is a subset of ${\bf Z}^d$. Let $V={\rm diag\,}(v_j)$, $v\in\ell^\infty $ and let $\Delta _\Lambda $ be the discrete Laplacian on $\Lambda $.
The observation after (8.1) extends: $$\Vert e^{(\cdot )\eta }\Delta _\Lambda e^{-(\cdot )\eta }\Vert _{{\cal L}(\ell^2,\ell^2)}\le q(\eta ),$$ and the argument there shows that \ekv{8.26} {\Vert e^{(\cdot )\eta }(\Delta _\Lambda +V-E)^{-1}e^{-(\cdot )\eta }\Vert _{{\cal L}(\ell^2,\ell^2)}\le {1\over \vert E\vert -\lambda -\vert v\vert _\infty },} when $\eta \in W(\lambda )$ and $\lambda +\vert v\vert _\infty <\vert E\vert $. \bigskip \par From the rules for manipulating power series it follows that substitution (A.3) possesses the natural property ``associativity": the result of two consecutive substitutions does not depend on the ``arrangement of brackets". Thus one can deal with non-linear transformations of even and odd variables just as with changes of variables in classical analysis. \smallskip \noindent {\it Differentiation} \smallskip \noindent Derivatives with respect to odd variables are defined by algebraic rules: $${\partial\over \partial\xi} (\xi)=1$$ together with linearity and the super-Leibniz formula (see below). For differentiating with respect to even variables one differentiates the coefficients in (A.1). We now use collective notations--we let $x^A$ stand for both $x^a$ and $\xi^{\mu}$. For simplicity, we let $|A|$ denote the parity of $x^A$, i.e. $|A|=\widetilde{x^A}$. If $x^A$ is even, then $|A|=0$. If $x^A$ is odd, then $|A|=1$. Let $c$ be a (numerical) constant. The properties of partial derivatives (in collective notation) are: \smallskip \noindent ({\it linearity}) $$\eqalign{ {\partial\over \partial x^A}(cf) &= c{\partial f\over \partial x^A},\cr {\partial\over \partial x^A}(f+g) &={\partial f\over \partial x^A}+{\partial g\over \partial x^A},\cr}$$ ({\it the Leibniz formula}) $${\partial\over \partial x^A}(fg)= {\partial f\over \partial x^A}g+(-1)^{|A|\tilde f}f {\partial g\over \partial x^A},$$ ({\it derivative of a function of a function}) $${\partial\over \partial x^A}(f(y(x)))= {\partial y^B\over \partial x^A}{\partial f\over \partial y^B}.$$ (Note the order.)
The parity of the derivative is equal to the parity of the corresponding variable ({\it i.e.} $\partial/\partial x^A$ maps even to even and odd to odd, or exchanges even and odd according to whether $x^A$ is even or odd). The partial derivatives {\it commute}: $${\partial^2 f\over {\partial x^A\partial x^B}}=(-1)^{|A||B|} {\partial^2 f\over {\partial x^B\partial x^A}},$$ and {\it Taylor's formula} is valid: $$ f(x+h)=f(x)+h^A{\partial f\over \partial x^A}(x)+{1\over 2}h^Bh^A {\partial^2 f\over {\partial x^A\partial x^B}}(x)+\cdots +{\bf O}(h^{k+1}).$$ (Note the order. The symbol ${\bf O}$ has its natural meaning.) \smallskip By analogy, one can also define the notion of (super)vector fields, which we do not elaborate here. See however (A.5) for an example of such a vector field. \smallskip In general all naturally formulated analogues of the assertions in an analysis course carry over to the supercase. The most important of them is the {\it implicit function theorem}: the system of equations $$F^A(x,y)=0$$ is uniquely solvable with respect to the variables $x=(x^A)$ if the matrix of partial derivatives ($\partial F^A/{\partial x^B}$) is invertible (see below). Then the solution ($x^A$) can be expressed as a smooth function of the variables $y=(y^K)$ (a square matrix is invertible if and only if its even-even and odd-odd blocks are invertible, see below). \smallskip \noindent {\it Example.} The change of variables $$\eqalign {x^a&=x^a(x',\xi')=x^a_0(x')+{\bf O}({\xi'}^2),\cr \xi^{\mu}&=\xi^{\mu}(x',\xi')={\xi'}^{\mu'}T^{\mu}_{\mu'}(x')+{\bf O}({\xi'}^3). \cr}$$ The variables $x'$, $\xi'$ will be expressible in terms of the variables $x$, $\xi$ if the numerical matrices ($\partial x_0^a/\partial x^{b'}$) and ($T^{\mu}_{\mu'}$) are invertible. (This should be compared with the fact that an element of the Grassmann algebra of the form $g=g_1+g_2$, where $g_1$ is a scalar and $g_2$ the nilpotent part has an inverse, if and only if $g_1\neq 0$.) 
Such a change is called {\it non-degenerate}. \smallskip \noindent {\it The algebra} ${\cal H} (U^{n|m})$ \smallskip In this paper, we are in fact more concerned with expressions of the type (A.2) whose coefficients are holomorphic functions of $n$ variables $z_1,\cdots ,z_n$ in an open set $ U^n\subset {\bf C}^n$. Complex odd coordinates are $\zeta_j=\xi_j+i\eta_j$ and $\bar\zeta_j=\xi_j-i\eta_j$, where $\xi_j$, $\eta_j$ ($j=1,\cdots ,m$) are the generators of a Grassmann algebra. A holomorphic function, i.e. an element of ${\cal H} (U^{n|m})$, is then of the form $$\eqalign{ f(z,\zeta) &= f(z_1,\cdots,z_n,\zeta_1,\cdots,\zeta_m)\cr &= f_0(z) + \zeta^1 f_1(z)+ \cdots +\zeta^m f_m(z) + \cdots +\zeta^{\mu_1} \cdots \zeta^{\mu_m} f_{\mu_1\dots\mu_m}(z),\cr}$$ where the coefficients are holomorphic functions of $z$ in $U^n\subset {\bf C}^n$. We have therefore $${\cal H} (U^{n|m}) = {\cal H} (U^n) \otimes \Lambda[\zeta_1,\cdots,\zeta_m].$$ Naturally, all the statements that we have made so far carry over with holomorphic functions replacing $C^{\infty}$ functions. \bigskip \noindent {\bf 3. The Berezin Integral} \smallskip \noindent {\it The integral for a differential algebra} \smallskip \noindent The definition of an integral with respect to odd variables emerges from the following general algebraic construction, obtained from a formal variational calculation. Suppose we have a commutative algebra $A$ with an operator $\partial$ -- a `differential' (not required to satisfy any relation of the type $\partial^2=0$). Then the equivalence class of $f$ mod $\partial A$ is called the {\it integral} of the element $f\in A$. If $\partial$ is a differentiation of the algebra $A$, then `integration by parts' works. This construction is used to model the integral of functions of a single variable. 
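To see the quotient construction at work in the simplest odd case (a toy sketch of our own, anticipating the Berezin integral defined next): take $A=\Lambda[\xi]$, store an element $f=f_0+\xi f_1$ as the pair $(f_0,f_1)$, and take $\partial =\partial /\partial \xi$. The class of $f$ modulo $\partial A$ is then labelled by the top coefficient $f_1$.

```python
# Element f = f0 + xi*f1 of the Grassmann algebra on one generator,
# stored as the pair (f0, f1).
def d(f):
    """The 'differential' d/dxi: (f0 + xi*f1) -> f1."""
    f0, f1 = f
    return (f1, 0.0)

def integral(f):
    """Class of f modulo the image of d.  Since d(xi*c) = (c, 0),
    every even part can be removed; the class is labelled by f1."""
    f0, f1 = f
    return f1

# f and g differ by d(h) with h = (0, f0 - g0), so they have the
# same integral precisely when their top coefficients agree.
f, g = (2.0, 5.0), (-3.0, 5.0)
h = (0.0, f[0] - g[0])
assert (f[0] - d(h)[0], f[1] - d(h)[1]) == g
assert integral(f) == integral(g) == 5.0
```

The pair encoding and the names `d`, `integral` are ours; the point is only that the quotient by the image of the differential picks out the top coefficient, which is exactly the Berezin integral below.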
\smallskip \noindent {\it Example.} On the algebra of functions of compact support $C_0^{\infty}({\bf R})$, taking $\partial$ to be the ordinary derivative, the integral coincides with the ordinary integral over ${\bf R}$. \smallskip \noindent {\it The Berezin integral over} ${\bf R}^{n|m}$ \smallskip \noindent We first consider the algebra $C^{\infty}({\bf R}^{0|1})$. It is spanned by the functions $1$ and $\xi$. The operator $\partial/\partial\xi$ annihilates $1$ and turns $\xi$ into $1$. The corresponding integral of the function $f=f_0+\xi f_1$ is therefore equal to the coefficient $f_1$ up to normalization. We write: $$\int_{{\bf R}^{0|1}}d\xi\quad 1=0,\qquad \int_{{\bf R}^{0|1}}d\xi\quad \xi=1.$$ \smallskip The operation of integration is odd. We assign parity $1$ to the symbol $d\xi$. Therefore its permutation with functions follows the supercommutator rule. \smallskip We define a multiple integral over ${\bf R}^{n|m}$ to be a repeated integral. To do this we assign parity $1$ to $dx$ in ${\bf R}={\bf R}^{1|0}$. We define $$\eqalign{ d(x,\xi)&=d(x^1,\cdots,x^n,\xi^1,\cdots,\xi^m)\cr &=dx^1dx^2\cdots dx^nd\xi^1\cdots d\xi^m\cr}$$ for ${\bf R}^{n|m}$. \smallskip Let $f\in C^{\infty}({\bf R}^{n|m})$ be such that all of its coefficients are in ${\cal S}({\bf R}^{n})$. If the term of highest degree in $\xi$ is $\xi^m\cdots\xi^1a(x)$, we obtain by using the parity conventions for $dx$, $d\xi$, $$\int_{{\bf R}^{n|m}}d(x,\xi)f(x,\xi)=(-1)^{n(n-1)/2}\int_{{\bf R}^n}a(x)dx^1\cdots dx^n.$$ \smallskip Let $x=(x^A)$ be the collective symbol of ($x,\xi$). Let $dx$ denote $d(x,\xi)$. Let $c$ be a (numerical) constant. 
Then the following properties can be verified directly: \smallskip \noindent ({\it linearity}) $$\eqalign{ \int_{{\bf R}^{n|m}}(f(x)+g(x))dx&=\int_{{\bf R}^{n|m}}f(x)dx+\int_{{\bf R}^{n|m}}g(x)dx,\cr \int_{{\bf R}^{n|m}}cf(x)dx&=c\int_{{\bf R}^{n|m}}f(x)dx,\cr}$$ ({\it differentiation under the integral sign}) $${\partial\over \partial y}\int_{{\bf R}^{n|m}}f(x,y)dx= (-1)^{n\tilde y}\int_{{\bf R}^{n|m}}{\partial f\over \partial y}(x,y)dx,$$ ({\it integral of a derivative and integration by parts}) $$\eqalign{ \int_{{\bf R}^{n|m}}{\partial\over \partial x^A}(f(x))dx&=0,\cr \int_{{\bf R}^{n|m}}{\partial f\over \partial x^A}gdx&=(-1)^{|A|\tilde f} \int_{{\bf R}^{n|m}}f{\partial g\over \partial x^A}dx,\cr}$$ and ({\it Fubini's theorem-reduction to a repeated integral}) $$\int_{{\bf R}^{n|m}\times{\bf R}^{p|q}}dxdyf(x,y)= (-1)^{(n+m)p}\int_{{\bf R}^{n|m}}dx\int_{{\bf R}^{p|q}}dyf(x,y).$$ Clearly all the above properties hold in the case ${\bf C}^{n|m}$ under appropriate conditions on the coefficients. \smallskip In the special case $n=m$, the Berezin integral can be seen as follows. 
We consider an inhomogeneous differential form on ${\bf R}^n$ as a function of the variables $x^a$ and $dx^a$, where $\widetilde {dx^a}=1$: $$\omega(x,dx)=\omega^{(0)}+\omega^{(1)}+\cdots+\omega^{(n)}.$$ Then $$\int_{{\bf R}^n}\omega= \int_{{\bf R^n}}\omega^{(n)} =\pm\int_{{\bf R}^{n|n}}\omega(x,dx)d(x,dx).$$ \smallskip \noindent {\it Change of variables in the integral} \smallskip \noindent Suppose we have a non-degenerate coordinate transformation $$\eqalign{ x^a&=x^a(x',\xi')\cr \xi^{\mu}&=\xi^{\mu}(x',\xi')\cr}$$ with Jacobian matrix $$J:={\partial(x,\xi)\over \partial(x',\xi')} :=\pmatrix {\partial x/{\partial x'}&{\partial \xi}/{\partial x'}\cr \partial x/\partial\xi'&\partial\xi/\partial \xi'\cr}.$$ It can be shown (see {\it e.g.} [V]) from general algebraic considerations that there exists an essentially unique scalar function, ({\it i.e.} a function which is of degree zero in $\xi$), associated with $J$, called the {\it Berezinian} of $J$, denoted by ${\rm Ber\,} J$. It is the generalization (counterpart) in ${\bf R}^{n|m}$ of the notion of the determinant in ${\bf R}^n$. Let $g_{i,j}$ ($i,j=0,1$) be the blocks of $J$, {\it i.e.} $$J(x',\xi'): =\pmatrix {g_{00}(x',\xi')&g_{01}(x',\xi')\cr g_{10}(x',\xi')&g_{11}(x',\xi')\cr}.$$ Then it can be shown by using Gauss's method that $$\eqalign{ ({\rm Ber\,}J)(x')&={\det (g_{00}(x',0)-[g_{01}g_{11}^{-1}g_{10}](x',0))\over \det g_{11}(x',0)}\cr &={\det g_{00}(x',0)\over \det (g_{11}(x',0)-[g_{10}g_{00}^{-1}g_{01}](x',0))}. \cr}$$ Define $${d(x,\xi)\over d(x',\xi')}:={\rm Ber\,} J ={\rm Ber\,} \left\{{\partial(x,\xi)\over \partial(x',\xi')}\right\}.$$ We then have \smallskip \noindent {\bf Theorem A.1} [V] \sl Let the function $f(x,\xi)$ on ${\bf R}^{n|m}$ be such that all of its coefficients are in ${\cal S}({\bf R}^n)$. 
Then we have the equality $$\eqalign{ \int_{{\bf R}^{n|m}}f(x,\xi)d(x,\xi) =&{\rm sign}\left\{\det\left[({\partial x}/{\partial x'})(x',0)\right]\right\}\cr &\int_{{\bf R}^{n|m}}f(x(x',\xi'),\xi(x',\xi')){d(x,\xi)\over d(x',\xi')}d(x',\xi'). \cr}$$ \rm \noindent {\it Change of contours in} $U^{n|m}$ \smallskip Let $f(z,\zeta)\in{\cal H}(U^{n|m})$. Let $\Gamma^n$ be an open set in $U^n$. Assume all the coefficients of $f$ are rapidly decreasing in $\Gamma^n$ so that contour integration in $\Gamma^n$ is well defined. The superanalogue of the usual Stokes' formula (see e.g. [V]) then allows us to make a change of contours in $U^{n|m}$. Specifically, assuming ${\bf R}^n\subset\Gamma^n\subset U^n$, $(e^{i\theta}{\bf R})^n\subset\Gamma^n\subset U^n$ for some $\theta\ne 0$, we make the following change of contours in sect. 3: $$\int_{{\bf R}^{n|m}}f(z,\zeta) d(z,\zeta)=\int_{(e^{i\theta}{\bf R})^{n|m}}f(z,\zeta) d(z,\zeta) \eqno({\rm A}.4)$$ by using the super Stokes formula. \bigskip \noindent {\bf 4. Supersymmetry on} ${\bf R}^{2n|2n}$ \smallskip \noindent We will now consider the special case of the superspace ${\bf R}^{2n|2n}$. It will be more convenient to change our notation. We group the $4n$ (super)commuting variables into $2n$ pairs of coordinates: let $x_i\in{\bf R}^2$ ($i=1,\cdots,n$) be the {\it even} commuting coordinates; $\xi_i$, $\eta_i$ ($i=1,\cdots,n$) be the {\it odd} commuting coordinates: $$\eqalign{ [\xi_i, \xi_j]&=0\cr [ \eta_i, \eta_j]&=0\cr [\eta_i, \xi_j]&=0.\cr}$$ We use the composite notation $X_i=(x_i,\xi_i,\eta_i)$. We define the (super)dot product: $$\eqalign{X_i\cdot X_j&:=D(X_i,X_j)\cr &:=f_0(x_i,x_j)+f_1(x_i,x_j)\eta_i\xi_j+ f_2(x_i,x_j)\eta_j\xi_i\cr &:=x_i\cdot x_j+{1\over 2}(\eta_i \xi_j+\eta_j \xi_i)\cr}$$ where $x_i\cdot x_j$ denotes the usual inner product of $x_i$ and $x_j$ in ${\bf R}^2$. 
Note that when $i=j$, $$X_i\cdot X_i=x_i\cdot x_i+\eta_i\xi_i.$$ \smallskip Supersymmetries are defined to be the set of coordinate transformations that leave the above dot product invariant. Two obvious transformations that leave $D$ invariant are the usual rotations $O$ in ${\bf R}^2$, $$x_i=x_i'O\qquad (i=1,\cdots,n)$$ and the transformations $A\in Sp(2)$ acting on $\xi_i$, $\eta_i$ ($i=1,\cdots,n$) such that $$(\xi_i,\eta_i)=(\xi'_i,\eta'_i)A,$$ where $\{x'_i,\xi'_i,\eta'_i\}_{i=1}^{n}$ is another set of coordinates on ${\bf R}^{2n|2n}$, $x'_i$ being the even ones and $\xi'_i$, $\eta'_i$ the odd ones. We put $X'_i=(x'_i,\xi'_i,\eta'_i), \, (i=1,\cdots,n)$. Aside from these two linear transformations, supersymmetries also include transformations generated by (super)vector fields of the type: $$\eqalignno{ V&=\sum_i V_i\cr &=\sum_i(\xi_i a+\eta_i b){\partial\over \partial x_i}+2(b\cdot x_i){\partial\over \partial \xi_i} -2(a\cdot x_i){\partial\over \partial \eta_i},& ({\rm A}.5)\cr}$$ where $a$, $b\in {\bf R^2}$, and $$\eqalignno{a {\partial\over \partial x_i}& :=a_1{\partial\over \partial x_{i,1}}+a_2{\partial\over \partial x_{i,2}},\cr b{\partial\over \partial x_i}& :=b_1{\partial\over \partial x_{i,1}}+b_2{\partial\over \partial x_{i,2}}.& ({\rm A}.6)\cr}$$ (Note that it is the same transformation in all the $X_i$.) As before the above transformation is to be understood in the algebraic sense. We check that $VD(X_i,X_j)=0$. We check also that the Berezinian corresponding to such a change of variables is $1$. \smallskip Let $\tau$ be a supersymmetric transformation. Let $X_i=\tau X'_i$ ($i=1,\cdots,n$). \smallskip\noindent {\bf Definition} \sl A superfunction $F$ is supersymmetric if it is invariant under all supersymmetries: $$F(X_1,\cdots,X_n)=F(X'_1,\cdots,X'_n),$$ for all supersymmetric transformations $\tau $.\rm \smallskip Clearly, supersymmetric functions belong to a rather restricted class of functions. 
For example, in ${\bf R}^{2|2}$, $F$ is supersymmetric if and only if there exists $f$: $[0,\infty)\mapsto {\bf R}$ of class $C^{\infty}$, such that $$F(X)=f(X\cdot X)=f(x\cdot x)+f'(x\cdot x )\eta\xi.$$ For the general classification in ${\bf R}^{2n|2n}$, see {\it e.g.} [KS]. \smallskip Define $dX_i= (d^2 x_i/\pi)d\eta_i d\xi_i$ ($i=1,\cdots,n$). One of the most useful properties of the supersymmetric functions is the following:\smallskip \noindent {\bf Theorem A.2} (see {\it e.g.} [K]) \sl If $F$ is supersymmetric with all of its coefficients in ${\cal S}({\bf R}^{2n})$, then $$\int F(X_1,\cdots,X_n) dX_1\cdots dX_n =F(0,\cdots ,0).\eqno ({\rm A}.7)$$ \rm \bigskip \noindent {\bf 5. An Expression for the Inverse of a Matrix} \smallskip \noindent Let $A$ be an operator on $\ell^2(\Lambda)$, where $\Lambda$ is some finite index set. Let $|\Lambda|$ be the number of elements in $\Lambda$. Assume $A=A_1+iA_2$, where $A_1$, $A_2$ are real symmetric matrices with $A_1>0$. We then have the following well-known Gaussian integrals on ${\bf R}^{2|\Lambda |}$: $$\int e^{-\sum_{i,j\in \Lambda }A_{ij}x_i\cdot x_j} \prod_{j\in \Lambda}{d^2x_j\over \pi}={1\over \det A} \eqno ({\rm A}.8)$$ and $$\int x_a\cdot x_b e^{-\sum_{i,j\in \Lambda }A_{ij}x_i\cdot x_j} \prod_{j\in \Lambda}{d^2 x_j\over \pi}={(A^{-1})_{ab}\over \det A} \eqno ({\rm A}.9)$$ where $a$, $b\in \Lambda $. Using the construction made so far in this section, we also have the following counterpart on ${\bf R}^{0|(2|\Lambda |)}$: $$\int e^{-\sum_{i,j\in \Lambda}A_{ij}\eta_i \xi_j} \prod_{j\in \Lambda}d\eta_j d\xi_j=\det A \eqno ({\rm A}.10)$$ and $$\int \xi_a\eta_b e^{-\sum_{i,j\in \Lambda}A_{ij}\eta_i \xi_j} \prod_{j\in \Lambda}d\eta_j d\xi_j=(A^{-1})_{ab}(\det A). 
\eqno ({\rm A}.11)$$ Combining (A.9) with (A.10), and (A.8) with (A.11), we finally have the following expressions for the inverse of $A$, expressed as a Berezin integral: $$\eqalignno{ (A^{-1})_{ab}&= \int x_a\cdot x_b e^{-\sum_{i,j\in \Lambda }A_{ij}X_i\cdot X_j} \prod_{j\in \Lambda}dX_j\cr &=\int \xi_a\eta_b e^{-\sum_{i,j\in \Lambda }A_{ij}X_i\cdot X_j} \prod_{j\in \Lambda}dX_j.&(A.12)\cr} $$ This is precisely the representation that we used in sect. 2. \bigskip \noindent {\bf 6. An Integration by Parts} \smallskip \noindent We now give the details of the integration by parts which leads from (2.22) to (2.24) in sect. 2. It was first derived by using superanalysis (i.e. using supervector fields etc.). In order not to venture too far in that direction, we present below a ``translated" version which uses standard analysis. Define $$L=i\left (\sum_{\vert j-k\vert _1=1} t x_j\cdot x_k-\sum_j Ex_j\cdot x_j-i\sum_j k(x_j\cdot x_j)\right )-\log \det M(x),$$ where $$M(x)=t\Delta -E-i\,{\rm diag\,}(k'(x_j\cdot x_j)),\qquad (\det M\ne 0),$$ as in (2.23) in sect. 2. Let $\langle G(\mu,\nu;E+i0)\rangle$ be as in (2.22). Let $m=|\Lambda|$. Then we have \par\noindent {\bf Proposition A.3.} $$\langle G(\mu ,\nu ;E+i0)\rangle =i^m\int M^{-1}(\mu ,\nu ;E)e^{-L(x)}\prod_{j\in\Lambda}{d^2x_j\over \pi}.$$ \noindent {\bf Proof.} Define $$\phi(x)=i\left(\sum_{\vert j-k\vert _1=1} t x_j\cdot x_k-\sum_j Ex_j\cdot x_j-i\sum_j k(x_j\cdot x_j)\right).$$ We first look for a vector field $v$, such that $$x_{\mu}e^{-\phi}=v\cdot \nabla (e^{-\phi}),\eqno ({\rm A}.13)$$ \noindent so $$x_{\mu}=-v\cdot \nabla \phi.\eqno ({\rm A}.14)$$ \noindent Since $$\nabla \phi=2iMx, \eqno ({\rm A}.15)$$ we look for $v$ of the form: $v=Bu$, where $B$ is a matrix and $u_j=1$ for all $j$. Let $\pi_{\mu}$ be the matrix such that $(\pi_{\mu})_{ij}=\delta_{i\mu}\delta_{j\mu}$. Then (A.14) can be written as $$(\pi_{\mu})x=-2i(B^{t}\circ M)x.$$ \noindent Therefore $$B={i\over 2}M^{-1}\circ \pi_{\mu}$$ \noindent is a solution. 
Hence $$v={i\over 2}M^{-1}\circ \pi_{\mu}u \eqno ({\rm A}.16)$$ satisfies (A.14). We now show that $v$ in fact verifies $$x_{\mu}e^{-L}=v\cdot \nabla (e^{-L})+({\rm div\,}v) e^{-L}.\eqno ({\rm A}.17)$$ Comparing (A.14) with (A.17), we see that we only need to show that $$v\cdot (\nabla\log\det M)+{\rm div\,}v=0.\eqno ({\rm A}.18)$$ Using the fact that $$\partial _j\log\det M=-2i k''(x_j\cdot x_j)(M^{-1})_{jj}x_j$$ \noindent and the expression for $v$ in (A.16), we easily verify that (A.18) holds. Hence (A.17) holds. We then have $$\eqalign{\langle G(\mu ,\nu ;E)\rangle &=i^{m+1}\int x_{\mu}\cdot x_{\nu} e^{-L(x)}\prod_{j\in\Lambda}{d^2x_j\over \pi}\cr &=i^{m+1}\int x_{\nu}\cdot (v\cdot \nabla+{\rm div\,}v)e^{-L(x)}\prod_{j\in\Lambda}{d^2x_j\over \pi}\cr &=i^m\int M^{-1}(\mu ,\nu ;E)e^{-L(x)}\prod_{j\in\Lambda}{d^2x_j\over \pi}\cr}$$ \noindent by integration by parts.\hfill{$\#$} \vfill \eject \centerline{\bf Appendix B. Direct approach to some basic formulas.} \medskip In this appendix, we give direct proofs of some of the basic formulas which were first derived using supersymmetry. \par Let $x_1,..,x_m\in{\bf R}^2$. 
If $A=(a_{j,k})$ is a complex symmetric $m\times m$-matrix with ${\rm Re\,}A>0$, then \ekv{{\rm B}.1}{\int\det A\,e^{-\sum a_{j,k}x_j\cdot x_k}{d^{2m}x\over \pi ^m}=1.} Let \ekv{{\rm B}.2}{\kappa :{\bf R}^{2m}\ni(x_1,..,x_m)\mapsto (x_j\cdot x_k)_{1\le j,k\le m}\in{\bf R}^{m^2},} and introduce \ekv{{\rm B}.3} {f_A(\tau )=e^{-\sum a_{j,k}\tau _{j,k}}=e^{-A\cdot \tau },} so that $f_A$ is invariant under the maps \eekv{{\rm B}.4} {\gamma _{j,k}:\tau \mapsto s\hbox{ with }s_{\widetilde{j},\widetilde{k}}= \tau _{\widetilde{k},\widetilde{j}},\hbox{ when }(\widetilde{j},\widetilde{k})=(j,k)\hbox{ or }(k,j) } {\hbox{and }s_{\widetilde{j},\widetilde{k}}= \tau _{\widetilde{j},\widetilde{k}}\hbox{ otherwise.}} (B.1) can be written \ekv{{\rm B}.5} {\int (\det (-{\partial \over\partial \tau _{j,k}})f_A)\circ \kappa \,\, {d^{2m}x\over \pi ^m}=f_A(0).} Let $\mu (A)$ be a distribution with compact support on the space of complex symmetric $m\times m$-matrices $A$ with ${\rm Re\,}A>0$. For $\tau \in {\bf R}^{m^2}$ we can define the Laplace transform: \ekv{{\rm B}.6} {\widehat{\mu }(\tau )=\int f_A(\tau ) \mu (A)dA=\int e^{-A\cdot \tau }\mu (A)dA,} and if we integrate (B.5) against $\mu (A)$ we get \ekv{{\rm B}.7} {\int (\det (-{\partial \over \partial \tau _{j,k}})\widehat{\mu })\circ \kappa\,\, {d^{2m}x\over \pi ^m}=\widehat{\mu }(0).} With $\mu $ as above we notice that ${1\over 2}(\tau _{j,k}+\tau _{k,j})\widehat{\mu }(\tau )$ is also the Laplace transform of a distribution with compact support in the space of symmetric matrices with positive real part, so ({\rm B}.7) gives \ekv{{\rm B}.8} {\int (\det (-{\partial \over\partial \tau })({1\over 2}(\tau _{j,k}+\tau _{k,j})\widehat{\mu }))\circ \kappa\, \, {d^{2m}x\over \pi ^m}=0.} We write this as \eekv{{\rm B}.9} {\int ([\det (-{\partial \over\partial \tau }),{1\over 2}(\tau _{j,k}+\tau _{k,j})]\widehat{\mu })\circ \kappa\, \, {d^{2m}x\over \pi ^m}+} {\hskip 3cm \int ({1\over 2}(\tau _{j,k}+\tau _{k,j})\det (-{\partial \over 
\partial \tau })\widehat{\mu })\circ \kappa\,\, {d^{2m}x\over \pi ^m}=0,} or \eekv{{\rm B}.10} {\int ({1\over 2}(\tau _{j,k}+ \tau _{k,j})\det (-{\partial \over\partial \tau })\widehat{\mu })\circ \kappa\,\, {d^{2m}x\over \pi ^m}=} {\hskip 3cm \int ([{1\over 2}(\tau _{j,k}+\tau _{k,j}),\det (-{\partial \over\partial \tau })]\widehat{\mu })\circ \kappa\, \, {d^{2m}x\over \pi ^m}.} \par Put $$M_{j,k}(-{\partial \over \partial \tau })=[\tau _{k,j},\det (-{\partial \over \partial \tau })]=[\det (-{\partial \over \partial \tau }),-\tau _{k,j}]$$ and notice that $M_{k,j}$ is the minor obtained from $$\det (-{\partial \over\partial \tau })$$ by replacing $-{\partial \over \partial \tau _{j,k}}$ by $1$ and all other elements in the $j$:th row and in the $k$:th column by $0$. Consequently, by summing over a column: $$\det (-{\partial \over \partial \tau })=\sum_jM_{k,j}(-{\partial \over \partial \tau })(-{\partial \over \partial \tau _{j,k}}),$$ and more generally, $$\det (-{\partial \over \partial \tau })\delta _{k,\widetilde{k}}=\sum_jM_{k,j}(-{\partial \over \partial \tau })(-{\partial \over \partial \tau _{j,\widetilde{k}}}),$$ so if $M=(M_{j,k})$, we obtain: $$M(-{\partial \over \partial \tau })\circ (-{\partial \over \partial \tau })=(-{\partial \over \partial \tau })\circ M(-{\partial \over \partial \tau })=\det (-{\partial \over \partial \tau })\otimes I.$$ Formally we can write: $$M(-{\partial \over \partial \tau })={(-{\partial \over \partial \tau })}^{-1}\det (-{\partial \over \partial \tau })\otimes I. 
$$ Rewrite (B.10): \ekv{{\rm B}.11} {\int {1\over 2}(M_{j,k}(-{\partial \over \partial \tau })+M_{k,j}(-{\partial \over \partial \tau }))\widehat{\mu }\circ \kappa\, \, {d^{2m}x\over \pi ^m}=\int ({1\over 2}(\tau _{j,k}+\tau _{k,j})\det (-{\partial \over \partial \tau })\widehat{\mu })\circ \kappa\,\, {d^{2m}x\over \pi ^m}.} Since $$M_{j,k}(-{\partial \over\partial \tau })e^{-\sum a_{j,k}\tau _{j,k}}=(A^{-1})_{j,k}e^{-\sum a_{j,k}\tau _{j,k}}\det A,$$ we can apply this to $\widehat{\mu }=f_A$ and get \eekv{{\rm B}.12} {{A^{-1}}_{j,k}=\int {1\over 2}(M_{j,k}(-{\partial \over \partial \tau })+M_{k,j}(-{\partial \over \partial \tau }))e^{-\sum a_{j,k}\tau _{j,k}}\circ \kappa\,\, {d^{2m}x\over \pi ^m}} {\hskip 11mm =\int x_j\cdot x_k(\det (-{\partial \over \partial \tau })e^{-\sum a_{j,k}\tau _{j,k}})\circ \kappa\,\, {d^{2m}x\over\pi ^m}.} \par Let $\mu $ be a probability measure with compact support on the space of complex symmetric matrices with real part $>0$. If $\langle \cdot \rangle $ denotes the corresponding expectation value then we get from (B.12): \eekv{{\rm B}.13} {\langle (A^{-1})_{j,k}\rangle =\int ({1\over 2}(M_{j,k}(-{\partial \over \partial \tau })+M_{k,j}(-{\partial \over \partial \tau }))\widehat{\mu }(\tau ))\circ \kappa\, \, {d^{2m}x\over \pi ^m}} {=\int x_j\cdot x_k (\det (-{\partial \over\partial \tau })\widehat{\mu }(\tau ))\circ \kappa \,\, {d^{2m}x\over \pi ^m}.} \par We now assume that the probability measure has the property that \ekv{{\rm B}.14} {\widehat{\mu }(\tau )=e^{-\phi (\tau )},} where $\phi (\tau )$ is invariant under the maps $\gamma _{j,k}$ and satisfies: \ekv{{\rm B}.15} {{\partial ^2\phi \over \partial \tau _{j,k}\partial \tau _{\widetilde{j},\widetilde{k}}}=0,\hbox{ when }(j,k)\ne (\widetilde{j},\widetilde{k}).} Then, $$M_{j,k}(-{\partial \over \partial \tau })e^{-\phi (\tau )}=M_{j,k}({\partial \phi \over \partial \tau })e^{-\phi (\tau )}=\det ({\partial \phi \over \partial \tau }){{({\partial \phi \over \partial \tau 
})}^{-1}}_{j,k}e^{-\phi (\tau )},$$ $$\det (-{\partial \over \partial \tau })e^{-\phi (\tau )}=\det ({\partial \phi \over \partial \tau })e^{-\phi (\tau )},$$ and (B.13) takes the form: \eekv{{\rm B}.16} {\langle (A^{-1})_{j,k}\rangle =\int {1\over 2}({{({\partial \phi \over \partial \tau })}^{-1}}_{j,k}+{{({\partial \phi \over \partial \tau })}^{-1}}_{k,j})e^{-\phi }\det ({\partial \phi \over \partial \tau })\circ \kappa\,\, {d^{2m}x\over \pi ^m}} {\hskip 3cm =\int x_j\cdot x_k(e^{-\phi}\det ({\partial \phi \over \partial \tau }))\circ \kappa\, \, {d^{2m}x\over \pi ^m}.} Using that ${\partial \phi \over \partial \tau }$ is symmetric on the image of $\kappa $, we get \eekv{{\rm B}.17} {\langle {A^{-1}}_{j,k}\rangle =\int [({({\partial \phi \over \partial \tau })^{-1})}_{j,k}e^{-\phi }\det ({\partial \phi \over \partial \tau })]\circ \kappa\,\, {d^{2m}x\over \pi ^m}} {=\int x_j\cdot x_k(e^{-\phi }\det ({\partial \phi \over\partial \tau }))\circ \kappa\, \, {d^{2m}x\over \pi ^m}.} \par In the main text we apply the above discussion with $$A=i(t\Delta +{\rm diag\,}(v_j)-(E+i\eta ))=i(H-(E+i\eta )),$$ with $v_j$ and $E$ real and $\eta >0$ to start with, and then with $\eta =0$, whenever we can pass to the limit. Hence to start with ${\rm Re\,}A=\eta I>0$. Then we have (B.5), (B.12) and if we take the expectation value w.r.t. 
$\prod _1^mg(v_j)dv_j$, where $g\ge 0$ has integral 1, we get: \ekv{{\rm B}.18} {1=\int \det (-{\partial \over \partial \tau _{j,k}})(e^{-i(t\Delta -(E+i\eta ))\cdot \tau }\prod_j \widehat{g}(\tau _{j,j}))\circ \kappa \,\, {d^{2m}x\over \pi ^m},} \eeekv{{\rm B}.19} {\langle {A^{-1}}_{j,k}\rangle =} {\int {1\over 2}(M_{j,k}(-{\partial \over\partial \tau })+M_{k,j}(-{\partial \over\partial \tau }))(e^{-i(t\Delta -(E+i\eta ))\cdot \tau} \prod_j\widehat{g}(\tau _{j,j}))\circ \kappa\, \, {d^{2m}x\over\pi ^m}} {\hskip 2cm =\int x_j\cdot x_k(\det (-{\partial \over\partial \tau })e^{-i(t\Delta -(E+i\eta ))\cdot \tau} \prod_j\widehat{g}(\tau _{j,j}))\circ \kappa\, \, {d^{2m}x\over \pi ^m}.} If the Fourier transform $\widehat{g}$ and all its derivatives decay rapidly near infinity, we can let $\eta $ tend to zero and we get two expressions for $\langle (t\Delta +{\rm diag\,}(v_j)-(E+i0))^{-1}\rangle $ which reduce to the formulas (2.22), (2.24), if we further assume that $\widehat{g}(\tau )=e^{-k(\tau )}$. \par We shall next write formulas for the action of certain vector fields. If $\gamma =\gamma _{j,k}$ and we write $(\tau \circ \gamma )_{\nu ,\mu }=\tau _{\gamma (\nu ,\mu )}$, then if $f(\tau \circ \gamma )=f(\tau )$, we have $${\partial f\over\partial \tau _{\gamma (\nu ,\mu )}}(\tau )={\partial f\over\partial \tau _{\nu ,\mu }}(\tau \circ \gamma ),$$ and in particular, $${\partial f\over\partial \tau _{\nu ,\mu }}={\partial f\over\partial \tau _{\gamma (\nu ,\mu )}}$$ on the image of $\kappa $. Let $f(\tau )$ satisfy $f(\tau )=f(\tau\circ \gamma _{j,k})$, $\forall \,(j,k)$. 
Identify $\tau $ and $\kappa (x)$: Then $$x_k\cdot {\partial f\over\partial x_j}=\sum_\nu {\partial f(\tau )\over\partial \tau _{j,\nu }}\tau_{k,\nu }+\sum_\nu {\partial f\over\partial \tau_{\nu ,j}}\tau_{\nu ,k}.$$ Using the symmetry, we get $$x_k\cdot {\partial \over\partial x_j}f=\sum_\nu {1\over 2}(\tau _{k,\nu }+\tau _{\nu ,k})({\partial \over\partial \tau _{j,\nu }}+{\partial \over\partial \tau_{\nu ,j}})f,$$ so we can say that a lift of $x_k\cdot {\partial \over\partial x_j}$ to the $\tau $-variables, which commutes with all the $\gamma _{j',k'}$, is given by \ekv{{\rm B}.20} { \sum_\nu {1\over 2}(\tau _{k,\nu }+\tau _{\nu ,k})({\partial \over\partial \tau _{j,\nu }}+{\partial \over\partial \tau _{\nu ,j}}). } Consider a vector field \ekv{{\rm B}.21} {v(x,{\partial \over \partial x})=\sum_j\sum_k b_{j,k}(\tau )x_k\cdot {\partial \over\partial x_j},} where each coefficient satisfies $b_{j,k}(\tau \circ \gamma _{\nu ,\mu })=b_{j,k}(\tau )$, but where we do not assume that $(b_{j,k})$ is symmetric. Let us compute the divergence: \eekv{{\rm B}.22} { {\rm div\,}v=\sum_j\sum_k{\partial \over\partial x_j}\cdot (b_{j,k}(\tau )x_k)=2{\rm tr\,}(b_\cdot )+\sum_j\sum_k x_k\cdot {\partial \over\partial x_j}(b_{j,k}) } { =2{\rm tr\,}(b_\cdot )+{1\over 2}\sum_j\sum_k\sum_\nu (\tau _{k,\nu }+\tau _{\nu ,k})({\partial \over\partial \tau _{j,\nu }}+{\partial \over\partial \tau _{\nu ,j}})b_{j,k}. } \par Now look at a deformation problem: Let $\phi =\phi ^s(\tau )$, where $\phi ^s(\tau \circ \gamma _{j,k})=\phi ^s(\tau )$, $\forall\,(j,k)$, and assume that $\phi ^s$ vanishes at $\tau=0$. Then we can write \ekv{{\rm B}.23} { \phi ^s(\tau )={1\over 2}\sum_{j,k}\Phi ^s_{j,k}(\tau )(\tau _{j,k}+\tau _{k,j}), } where $\Phi^s_{j,k}(\tau )=\Phi ^s_{k,j}(\tau )=\Phi ^s_{j,k}(\tau \circ \gamma _{j',k'}) $, $\forall j,k,j',k'$. 
Notice that after composition with $\kappa $, we get $$\phi ^s(x)=\sum_{j,k}\Phi _{j,k}^s(\tau )x_j\cdot x_k.$$ Look for a vector field $v=v^s$ of the form (B.21) with $b_{j,k}=b_{j,k}^s(\tau )$, $b_{j,k}^s(\tau \circ \gamma _{\nu ,\mu })=b_{j,k}^s(\tau )$, such that \ekv{{\rm B}.24} {{\partial\phi^s\over\partial s}= v^s(x, {\partial\over\partial x})\phi ^s.} We can write this as \ekv{{\rm B}.25} {{1\over 2}\sum_j\sum_k{\partial \over\partial s}(\Phi _{j,k}^s(\tau ))(\tau _{j,k}+\tau _{k,j})={1\over 2}\sum_\mu \sum_j\sum_k b_{\mu ,j}^s(\tau )(\tau _{j,k}+\tau _{k,j})({\partial \over\partial \tau _{\mu ,k}}+{\partial \over\partial \tau _{k,\mu }})\phi ^s.} It suffices to choose $B^s=(b^s_\cdot )$ so that \ekv{{\rm B}.26_{\rm strong}} {{\partial \over\partial s}\Phi ^s={^tB^s}\circ ({\partial \over\partial \tau }\phi ^s+{^t{({\partial \over\partial \tau }\phi ^s)}}),} or so that we have the weaker condition \ekv{{\rm B}.26_{\rm weak}} { {\partial \over\partial s}\Phi ^s=[{^tB^s}\circ {1\over 2}({\partial \phi ^s\over\partial \tau }+{^t({\partial \phi ^s\over\partial \tau })})+{1\over 2}({\partial \phi ^s\over\partial \tau }+{^t({\partial \phi ^s\over\partial \tau })})\circ B^s], } obtained by taking the symmetric part of $({\rm B}.26_{\rm strong})$. Assume for simplicity that ${\partial \phi ^s\over\partial \tau }$ is a symmetric matrix. Then we get \ekv{{\rm B}.27_{\rm strong}} {{\partial \Phi ^s\over\partial s}=2{^t\hskip -2pt B^s}\circ {\partial \phi ^s\over\partial \tau },} or \ekv{{\rm B}.27_{\rm strong\,alt}} {{\partial \Phi ^s\over\partial s}=2{\partial \phi ^s\over\partial \tau }\circ B^s,} and \ekv{{\rm B}.27_{\rm weak}} {{\partial \Phi ^s\over\partial s}={^tB^s}\circ {\partial \phi ^s\over\partial \tau }+{\partial \phi ^s\over\partial \tau }\circ B^s.} \par Let ${\cal L}_v$ denote the Lie derivative w.r.t. $v$. 
We want to compute the following logarithmic derivative: \eekv{{\rm B}.28} { {({\partial \over\partial s}-{\cal L}_v)((\det ({\partial \phi ^s\over\partial \tau })e^{-\phi ^s})\circ \kappa\,\, d^{2m}x)\over (\det ({\partial \phi ^s\over\partial \tau })e^{-\phi ^s})\circ \kappa\,\, d^{2m}x}= } { -({\partial \phi ^s\over\partial s}-v(x,\partial _x)\phi ^s)+{\rm tr\,}[(({\partial \over\partial s}-v(x,\partial _x))({\partial \phi ^s\over\partial \tau })){({\partial \phi ^s\over\partial \tau })}^{-1}]-{\rm div\,}(v). } Here the first term vanishes in view of (B.24), and the same would be the case with the second term if we could commute $({\partial \over\partial s}-v(x,\partial _x))$ and ${\partial \over\partial \tau }$. Instead we get a commutator term: \eekv{{\rm B}.29} { {({\partial \over\partial s}-{\cal L}_v)((\det ({\partial \phi ^s\over\partial \tau})e^{-\phi ^s})\circ \kappa\,\, d^{2m}x)\over (\det ({\partial \phi ^s\over\partial \tau })e^{-\phi ^s})\circ \kappa\,\, d^{2m}x}= } {\hskip 2cm {\rm tr\,}(([{\partial \over\partial \tau },v(x,\partial _x)]\phi ^s){({\partial \phi ^s\over\partial \tau })}^{-1})-{\rm div\,}v. 
} In the $\tau $-variables, $v(x,\partial _x)$ can be lifted to $${1\over 2}\sum_\nu \sum_\mu \sum_\rho b_{\mu ,\rho} (\tau )(\tau _{\rho ,\nu }+\tau _{\nu ,\rho })({\partial \over\partial \tau _{\mu ,\nu }}+{\partial \over \partial \tau _{\nu ,\mu }}),$$ so $$\eqalign{& {[{\partial \over\partial \tau },v]}_{j,k}=[{\partial \over\partial \tau _{j,k}},v]= \cr& \hskip 1cm {1\over 2}\sum_\mu b_{\mu ,j}({\partial \over\partial \tau _{\mu ,k}}+{\partial \over\partial \tau _{k,\mu }})+{1\over 2}\sum_\mu b_{\mu ,k}({\partial \over\partial \tau _{\mu ,j}}+{\partial \over\partial \tau _{j,\mu }}) \cr&\hskip 3cm +{1\over 2}\sum_\nu \sum_\mu \sum_\rho {\partial b_{\mu ,\rho }(\tau )\over\partial \tau _{j,k}}(\tau _{\rho ,\nu }+\tau _{\nu ,\rho })({\partial \over\partial \tau _{\mu ,\nu }}+{\partial \over\partial \tau _{\nu ,\mu }}).}$$ Hence, \ekv{{\rm B}.30} { [{\partial \over\partial \tau },v]\phi ^s={^tB}\circ {\partial \phi ^s\over\partial \tau }+{\partial \phi ^s\over\partial \tau }\circ B+{\rm tr\,}({\partial B\over\partial \tau }\circ \tau \circ {\partial \phi ^s\over\partial \tau })+{\rm tr\,}({\partial \hskip 1pt{^t\hskip -2pt B}\over\partial \tau }\circ {\partial \phi ^s\over\partial \tau }\circ \tau ). } Here we have to specify the notation used in the last two terms and below: for instance the third term denotes the matrix whose entry of index $j,k$ is $${\rm tr\,}({\partial B\over\partial \tau_{j,k} }\circ \tau \circ {\partial \phi ^s\over\partial \tau }).$$ Let $A\cdot B={\rm tr\,}({^tA}\circ B)$ denote the natural ``real" scalar product of matrices. 
It follows that $${\rm tr\,}(([{\partial \over\partial \tau },v]\phi ^s){({\partial \phi ^s\over\partial \tau })}^{-1})=2{\rm tr\,}B+{\rm tr\,}[({({\partial \phi ^s\over \partial \tau })}^{-1}\cdot {\partial \over \partial \tau }B)\circ \tau \circ {\partial \phi ^s\over \partial \tau }]+{\rm tr\,}[({({\partial \phi ^s\over \partial \tau })}^{-1}\cdot {\partial \over \partial \tau }{^tB})\circ {\partial \phi ^s\over \partial \tau }\circ \tau ].$$ Using this and (B.22) in (B.29), we get: \eeekv{{\rm B}.31} { {({\partial \over\partial s}-{\cal L}_v)((\det ({\partial \phi ^s\over\partial \tau })e^{-\phi ^s})\circ \kappa\, \,d^{2m}x)\over (\det ({\partial \phi ^s\over\partial \tau })e^{-\phi ^s})\circ \kappa\, \,d^{2m}x}= } { \hskip 1cm {\rm tr\,}(({({\partial \phi ^s\over\partial \tau })}^{-1}\cdot {\partial \over\partial \tau }B)\circ \tau \circ {\partial \phi ^s\over\partial \tau })+{\rm tr\,}(({({\partial \phi ^s\over \partial \tau })}^{-1}\cdot {\partial \over\partial \tau }{^tB})\circ {\partial \phi ^s\over\partial \tau }\circ \tau ) } {\hskip 2cm -{1\over 2}\sum_j\sum_k\sum_\nu (\tau _{k,\nu }+\tau _{\nu ,k})({\partial \over\partial \tau _{j,\nu }}+{\partial \over\partial \tau _{\nu ,j}})b_{j,k}. } \par We are interested in further cancellations and consider the special case: $$\phi ^s=i(\sum_{\vert j-k\vert _1=1}s\tau _{j,k})+\sum_j(k(\tau _{j,j})-iE\tau _{j,j}),$$ with $${\partial \phi ^s\over\partial s}=i\sum_{\vert j-k\vert _1=1}\tau _{j,k},\quad {\partial \phi ^s\over\partial \tau }=is\Delta -iE+{\rm diag\,}(k'(\tau _{j,j}))=:M,$$ so that $M$ is a symmetric matrix. Then $({\rm B}.27_{\rm strong\, alt})$ becomes: $$i\Delta =2M\circ B,$$ so we can take $B={i\over 2}M^{-1}\Delta $. 
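As a quick numerical sanity check (our own sketch; the values standing in for $k'(\tau _{j,j})$ and for $s$, $E$ are arbitrary placeholders), one can verify on a small lattice that this choice of $B$ solves $({\rm B}.27_{\rm strong\,alt})$ and that $M$ is symmetric:

```python
import numpy as np

rng = np.random.default_rng(0)
m, s, E = 4, 0.7, 1.3
# nearest-neighbour coupling matrix (adjacency of a path), playing the
# role of Delta in the text
Delta = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
kprime = rng.uniform(1.0, 2.0, m)          # placeholder values of k'(tau_jj)
M = 1j * s * Delta - 1j * E * np.eye(m) + np.diag(kprime)
B = 0.5j * np.linalg.solve(M, Delta)       # B = (i/2) M^{-1} Delta
assert np.allclose(M, M.T)                 # M is symmetric
assert np.allclose(2 * M @ B, 1j * Delta)  # i*Delta = 2 M B, (B.27 strong alt)
```

The check is elementary algebra, but it confirms the signs and the factor $1/2$ in the formula for $B$.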
\par We try to simplify the expression (B.31) and get $$\eqalign{ & {\rm tr\,}(((M^{-1}\cdot {\partial \over\partial \tau }){i\over 2}(M^{-1}\Delta ))\circ \tau \circ M)+{\rm tr\,}(((M^{-1}\cdot {\partial \over\partial \tau })({i\over 2}\Delta M^{-1}))\circ M\circ \tau ) \cr & \hskip 2cm -{1\over 2}\sum_j\sum_k\sum_\nu (\tau _{k,\nu }+\tau _{\nu ,k})({\partial \over\partial \tau _{j,\nu}}+{\partial \over\partial \tau _{\nu ,j}})({i\over 2}M^{-1}\Delta )_{j,k} \cr & =-{i\over 2}[{\rm tr\,}(M^{-1}(M^{-1}\cdot {\partial \over\partial \tau })(M)M^{-1}\Delta \tau M)+{\rm tr\,}(\Delta M^{-1}(M^{-1}\cdot {\partial \over\partial \tau })(M)M^{-1}M\tau ) \cr &\hskip 2cm -{1\over 2}\sum_j\sum_k\sum_\nu (\tau _{k,\nu} +\tau _{\nu ,k})(M^{-1}({\partial \over\partial \tau _{j,\nu }}+{\partial \over\partial \tau _{\nu ,j}})(M)M^{-1}\Delta )_{j,k}]. }$$ Notice the cancellation between an $M$ and an $M^{-1}$ in each of the first and second terms in the bracket (using also the cyclicity of the trace in the first term). We also take into account that $M$ only depends on $\tau _{j,j}$ at the $j$:th diagonal place and get: $$\eqalign{ & -{i\over 2}[{\rm tr\,}({\rm diag\,}((M^{-1})_{j,j}k''(\tau _{j,j}))\circ M^{-1}\circ \Delta \circ \tau )+ \cr &\hskip 3cm {\rm tr\,}(\tau\circ \Delta \circ M^{-1}\circ {\rm diag\,}((M^{-1})_{j,j}k''(\tau _{j,j}))) \cr & \hskip 5cm -\sum_j\sum_k(\tau _{k,j}+\tau _{j,k})(M^{-1}{\rm diag\,}(\delta _{\cdot,j}k''(\tau _{j,j}))M^{-1}\Delta )_{j,k}] \cr &\hskip 3mm =-{i\over 2}[\sum_{j,\nu ,k}((M^{-1})_{j,j}k''(\tau _{j,j}))(M^{-1})_{j,\nu }\Delta _{\nu ,k}\tau _{k,j}+\sum_{j,\nu ,k}\tau _{j,k}\Delta _{k,\nu }(M^{-1})_{\nu ,j}(M^{-1})_{j,j}k''(\tau _{j,j}) \cr & \hskip 5cm -\sum_{j,k}(\tau _{k,j}+\tau _{j,k})(M^{-1})_{j,j}k''(\tau _{j,j})(M^{-1}\circ \Delta )_{j,k}]. 
}$$ Using that $M^{-1}$, $\Delta $ are symmetric, the last expression reduces to $$\eqalign{ & =-{i\over 2}[\sum_{j,\nu ,k}(\tau _{k,j}+\tau _{j,k})(M^{-1})_{j,\nu }\Delta _{\nu ,k}((M^{-1})_{j,j}k''(\tau _{j,j})) \cr & \hskip 2cm -\sum_{j,\nu ,k}(\tau _{k,j}+\tau _{j,k})((M^{-1})_{j,j}k''(\tau _{j,j}))(M^{-1})_{j,\nu }\Delta _{\nu ,k}]=0, }$$ so in this special case, the logarithmic derivative (B.28) vanishes. \vfill\eject \def\re{{\rm Re\,}} \def\im{{\rm Im\,}} \def\o{\over} \centerline{\bf Appendix C. An equation in a tube.\rm} \medskip \par We start by developing some $L^2$-theory on the real space ${\bf R}^N$, and later we use these results to obtain more precise $L^\infty $ estimates, using a version of the maximum principle. It is only in this last part that we make estimates which are uniform with respect to the dimension. \par Let $C_b^\infty =C_b^\infty ({\bf R}^N)$ denote the space of all $C^\infty $-functions $a$ on ${\bf R}^N$ such that for every multiindex $\alpha \in {\bf N}^N$, there exists a constant $C=C_{a,\alpha }$, such that $\vert \partial ^\alpha a(x)\vert \le C$ on ${\bf R}^N$. Here $a$ is a scalar (real or complex) function, but we may similarly define the space of vector-valued functions $C_b^\infty ({\bf R}^N;E)$, if $E$ is a finite dimensional vector space. \par We consider a differential operator of the form $$P=-\Delta +\nu (x,{\partial \over \partial x})+V(x),\,\,x\in {\bf R}^N,$$ where $\nu =\sum_1^N\nu _j(x){\partial \over \partial x_j}$ is a complex but scalar vector field and $V\in C_b^\infty ({\bf R}^N;{\rm Mat}_M({\bf C}))$ a function of class $C_b^\infty $ with values in the space of complex $M\times M$-matrices. This means of course that the operator $P$ acts on functions with values in ${\bf C}^M$. We assume that the vector field $\nu $ satisfies: $\im \nu _j\in C_b^\infty $, $\nabla \re \nu _j\in C_b^\infty $. Here $\Delta $ denotes the usual Laplace operator, and $\nabla $ the standard gradient.
\par We start by deriving two basic a priori estimates. Let $u\in{\cal S}({\bf R}^N)$, $z=z_1+iz_2\in{\bf C}$, and consider the equation $$(P+z)u=v. \eqno{({\rm C}.1)}$$ We will assume that $z_1\ge C_0$ for some sufficiently large constant $C_0\ge 0$. It will also be convenient to use the notation $D_{x_j}={1\over i}{\partial \over\partial x_j}$, so that $i\nu (x,D_x)=\nu (x,{\partial \over \partial x})$. Notice that the formal adjoint of $\nu (x,D_x)$ is given by $$\nu ^*(x,D_x)=-i\overline{{\rm div\,}\nu }(x)-2i(\im \nu )(x,D_x)+\nu (x,D_x).$$ The assumptions on $\nu $ tell us that the first term on the right belongs to $C_b^\infty $ and that the second term is a vector field with coefficients in $C_b^\infty $. Write $\nu (x,D_x)=\nu _1(x,D_x)+i\nu _2(x,D_x)$, with $\nu _1={1\over 2}(\nu +\nu ^*)$, $\nu _2={1\over 2i}(\nu -\nu ^*)$. Then $$\eqalign{&\nu _1\equiv \nu\,\, {\rm mod\,}(C_b^\infty +\sum_1^NC_b^\infty {\partial \over\partial x_j})\cr &\nu _2\equiv 0 \,\, {\rm mod\,}(C_b^\infty +\sum_1^NC_b^\infty {\partial \over\partial x_j}),}$$ where until further notice, we write $\nu =\nu (x,D_x)$ and similarly for $\nu _j$, $j=1,2$. Similarly, write $V(x)=V_1(x)+iV_2(x)$, where $V_1$, $V_2$ are Hermitian, and $P+z=(P_1+z_1)+i(P_2+z_2)$, where $P_1=-\Delta -\nu _2+V_1(x)$, $P_2=\nu _1+V_2(x)$.
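As an illustration (this example is not needed for the arguments below), take the radial field $\nu (x,{\partial \over \partial x})=\sum_1^Nx_j{\partial \over \partial x_j}$, which satisfies our assumptions, since $\im \nu _j=0$ and $\nabla \re \nu _j$ is constant. Then ${\rm div\,}\nu =N$, and the formula for the adjoint gives

```latex
\nu ^*(x,D_x)=\nu (x,D_x)-iN,\qquad
\nu _1=\nu (x,D_x)-{i\over 2}N,\qquad \nu _2={N\over 2},
```

in agreement with $\nu _1\equiv \nu $, $\nu _2\equiv 0$ modulo $C_b^\infty $.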
From (C.1), we get: $$\Vert v\Vert ^2=\Vert (P_1+z_1)u\Vert ^2+\Vert (P_2+z_2)u\Vert ^2+i((P_2+z_2)u\vert (P_1+z_1)u)-i\left((P_1+z_1)u\vert (P_2+z_2)u\right).\eqno{({\rm C}.2)}$$ The sum of the last two terms can be written $(i[P_1,P_2]u\vert u)$. \par Here, $$\eqalign{&[P_1,P_2]=\cr & [-\Delta ,\nu _1(x,D_x)]+[-\Delta ,V_2(x)]-(\nu _2(x,D_x)\nu _1(x,D_x)-\nu _1(x,D_x)\nu _2(x,D_x))\cr &-[\nu _2(x,D_x),V_2(x)]+V_1(x)\circ \nu _1(x,D_x)-\nu _1(x,D_x)\circ V_1(x)+[V_1(x),V_2(x)]\cr &=(-\nu _2(x,D_x)+V_1(x))\circ \nu _1(x,D_x)-\nu _1(x,D_x)\circ (-\nu _2(x,D_x)+V_1(x))+{1\over i}Q(x,D_x),}$$ where $Q$ is a second order formally self-adjoint operator with coefficients in $C_b^\infty ({\bf R}^N)$. We rewrite (C.2) as $$\eqalignno{&\Vert v\Vert ^2=\Vert (-\Delta -\nu _2(x,D_x)+V_1(x)+z_1)u\Vert ^2+\Vert (\nu _1(x,D_x)+V_2(x)+z_2)u\Vert ^2&{({\rm C}.3)}\cr &\hskip 1cm +(Q(x,D_x)u\vert u)+i((\nu _1(x,D_x)+z_2)u\vert (-\nu _2(x,D_x)+V_1(x))u)\cr & \hskip 2cm-i((-\nu _2(x,D_x)+V_1(x))u\vert (\nu _1(x,D_x)+z_2)u),}$$ where we judged it convenient to reintroduce $z_2$. \par Since $\nu _2$ has coefficients in $C_b^\infty $, we can apply a standard a priori estimate to the first term of the RHS, and using also the famous inequality $\Vert a+b\Vert ^2\le 2\Vert a\Vert ^2+2\Vert b\Vert ^2$, in the form: $${1\over 2}\Vert (\nu _1(x,D_x)+z_2)u\Vert ^2\le \Vert (\nu _1(x,D_x)+V_2(x)+z_2)u\Vert ^2+\Vert V_2(x)u\Vert ^2,$$ we get for $z_1\ge C_0$ large enough: $$\eqalignno{&\Vert v\Vert ^2\ge {1\over C_1}\Vert u\Vert _{H^2}^2+{z_1\over C_1}\Vert u\Vert _{H^1}^2+{z_1^2\over C_1}\Vert u\Vert ^2+{1\over 2}\Vert (\nu _1(x,D_x)+z_2)u\Vert ^2 &({\rm C}.4)\cr & \hskip 1cm -{\cal O}(1)\Vert u\Vert ^2-{\cal O}(1)\Vert u\Vert _{H^2}\Vert u\Vert -{\cal O}(1)\Vert (\nu _1(x,D_x)+z_2)u\Vert \Vert u\Vert_{H^1} .
}$$ After increasing $C_0$, $C_1$, we can absorb the last three terms and get the basic a priori estimate $$C_1\Vert v\Vert ^2\ge \Vert u\Vert _{H^2}^2+z_1\Vert u\Vert _{H^1}^2+z_1^2\Vert u\Vert ^2+\Vert (\nu _1(x,D_x)+z_2)u\Vert ^2,\eqno{({\rm C}.5)}$$ for solutions to (C.1) of class ${\cal S}$, when $z=z_1+iz_2$ and $z_1\ge C_0$ with $C_0$ sufficiently large. In this estimate, we can also replace $\nu _1$ by $\nu $, if we so wish. We notice that this estimate is equally valid when $u\in H_{{\rm comp}}^2({\bf R}^N)$. Our second basic $L^2$-estimate will be of semi-boundedness type, and very simple to obtain: For $u\in{\cal S}$, we simply notice that $$\eqalignno{&\re ((P+z)u\vert u)=(-\Delta u\vert u)+(-\nu _2(x,D_x)u\vert u)+(V_1(x)u\vert u)+z_1\Vert u\Vert ^2&{({\rm C}.6)}\cr &\hskip 2cm\ge {1\over 2}\Vert u\Vert _{H^1}^2+(z_1-{\cal O}(1))\Vert u\Vert ^2-\Vert u\Vert _{H^1}\Vert u\Vert \cr &\hskip 3cm\ge {1\over 3}\Vert u\Vert _{H^1}^2+(z_1-{\cal O}(1))\Vert u\Vert ^2.}$$ Let $H_{z_1}^1$ be the space $H^1$ equipped with the norm $\Vert (\vert D_x\vert +\sqrt{z_1})u\Vert $, and let $H_{z_1}^{-1}$ be the corresponding dual space, equipped with the norm $\Vert (\vert D_x\vert +\sqrt{z_1})^{-1}u\Vert $. Assuming as before that $z_1\ge C_0$, with $C_0$ sufficiently large, we can rewrite the preceding estimate as $$\Vert u\Vert _{H_{z_1}^1}^2\le {\cal O}(1)\Vert (P+z)u\Vert _{H_{z_1}^{-1}}\Vert u\Vert_{H_{z_1}^1},$$ so $$\Vert u\Vert _{H_{z_1}^1}\le C\Vert (P+z)u\Vert _{H_{z_1}^{-1}},\,\, u\in{\cal S}.\eqno{({\rm C}.7)}$$ We have the same estimate for the adjoint: $$\Vert u\Vert _{H_{z_1}^1}\le C\Vert (P+z)^*u\Vert _{H_{z_1}^{-1}},\,\, u\in{\cal S}.\eqno{({\rm C}.8)}$$ \par Using this estimate we now start to consider the existence of solutions to (C.1). Let $v\in H_{z_1}^{-1}$, and consider the antilinear form: $\ell_v:{\cal S}\ni \phi \mapsto (v\vert \phi )$.
Then $$\vert \ell_v(\phi )\vert \le \Vert v\Vert _{H_{z_1}^{-1}}\Vert \phi \Vert _{H_{z_1}^1}\le C\Vert v\Vert _{H_{z_1}^{-1}}\Vert (P+z)^*\phi \Vert _{H_{z_1}^{-1}}.$$ By the Hahn-Banach theorem, there exists $u\in H_{z_1}^1$ with $\Vert u\Vert _{H_{z_1}^1}\le C\Vert v\Vert _{H_{z_1}^{-1}}$, such that $\ell_v(\phi )=(u\vert (P+z)^*\phi )$, $\forall \phi \in{\cal S}$. Consequently, we have shown: \medskip \par\noindent \bf Proposition C.1. \it There exists a constant $C_0>0$, such that if $z_1\ge C_0$, and $v\in H_{z_1}^{-1}$, then there exists $u\in H_{z_1}^1$, such that $$(P+z)u=v,$$ in the sense of distributions, and $$\Vert u\Vert _{H_{z_1}^1}\le C_0\Vert v\Vert_{H_{z_1}^{-1}}.\eqno{({\rm C}.9)} $$ \rm \medskip \par Notice that this applies if $v\in L^2$, since $v$ then also belongs to $H_{z_1}^{-1}$, and $$\Vert v\Vert _{H_{z_1}^{-1}}=\Vert (\vert D_x\vert +\sqrt{z_1})^{-1}v\Vert \le {1\over\sqrt{z_1}}\Vert v\Vert .$$ Consequently, for $v\in L^2$, we get a solution $u\in H_{z_1}^1$ of (C.1), which satisfies $$\sqrt{z_1}\Vert u\Vert _{H_{z_1}^1}\le C_0\Vert v\Vert ,$$ or more explicitly, $$z_1\Vert u\Vert +\sqrt{z_1}\Vert \vert D_x\vert u\Vert \le {\cal O}(1)\Vert v\Vert .\eqno{({\rm C}.10)}$$ \par In order to complete most of the $L^2$-theory, we have to consider the regularity of $H^1$-solutions of (C.1). Let $u\in H^1$, $v\in L^2$ and assume that (C.1) holds. Let $\chi \in C_0^\infty ({\bf R}^N)$ be equal to $1$ near $0$ and put $\chi _R(x)=\chi ({1\over R}x)$, $R\ge 1$. Using the fact that the $\nu _j$ grow at most linearly, we see that $$[P,\chi _R]={1\over R}{\cal O}(1)\cdot {\partial \o\partial x}+{\cal O}(1),$$ where ${\cal O}(1)$ indicates functions which belong to some bounded set in $C_b^\infty $. It follows that $$(P+z)(\chi _Ru)=\chi _Rv+{\cal O}({1\over R})\cdot {\partial \o \partial x}u+{\cal O}(1)u,\eqno{({\rm C}.11)}$$ so the RHS is ${\cal O}(1)$ in $L^2$.
Since $\chi _Ru$ has compact support, the local ellipticity implies that $\chi _Ru\in H^2$ and we can apply the basic a priori estimate, with $v$ replaced by the RHS of the preceding equation, and we get: $$\eqalignno{&\Vert \chi _Ru\Vert _{H^2}^2+z_1\Vert \chi _Ru\Vert _{H^1}^2+z_1^2\Vert \chi _Ru\Vert ^2+\Vert (\nu _1(x,D_x)+z_2)\chi _Ru\Vert ^2&{({\rm C}.12)}\cr &\hskip 4cm\le {\cal O}(1)(\Vert v\Vert ^2+\Vert u\Vert _{H^1}^2).}$$ Here $(\nu _1(x,D_x)+z_2)\chi _Ru=\chi _R(\nu _1(x,D_x)+z_2)u+{\cal O}(1)u$, so $$\eqalignno{&\Vert \chi _Ru\Vert _{H^2}+\sqrt{z_1}\Vert \chi _Ru\Vert _{H^1}+z_1\Vert \chi _Ru\Vert +\Vert \chi _R(\nu _1(x,D_x)+z_2)u\Vert &{({\rm C}.13)}\cr &\le {\cal O}(1)(\Vert v\Vert +\Vert u\Vert _{H^1}).}$$ Letting $R$ tend to infinity, we see that $u\in H^2$, $(\nu _1(x,D_x)+z_2)u\in L^2$, and $$\Vert u\Vert _{H^2}+\sqrt{z_1}\Vert u\Vert _{H^1}+z_1\Vert u\Vert +\Vert (\nu _1(x,D_x)+z_2)u\Vert \le {\cal O}(1)(\Vert v\Vert +\Vert u\Vert _{H^1}).$$ Possibly after increasing $C_0$, we get: \medskip \par\noindent \bf Proposition C.2. \it There exists a constant $C_0>0$, such that if $z_1\ge C_0$, and $u\in H^1$ solves (C.1) in the sense of distributions with $v\in L^2$, then we have $u\in H^2$, $(\nu _1(x,D_x)+z_2)u\in L^2$, and $$\Vert u\Vert _{H^2}+\sqrt{z_1}\Vert u\Vert _{H^1}+z_1\Vert u\Vert +\Vert (\nu _1(x,D_x)+z_2)u\Vert \le C_0\Vert v\Vert .\eqno{({\rm C}.14)}$$\rm \medskip \par Notice that (C.14) implies uniqueness. Summing up, we have proved: \medskip \par\noindent \bf Theorem C.3. \it There exists $C_0>0$, such that if $z_1\ge C_0$, and $v\in L^2$, then (C.1) has a unique solution $u$ of class $H^1$. Moreover $u\in H^2$, $(\nu _1(x,D_x)+z_2)u\in L^2$ and (C.14) holds.\rm\medskip \par When $v$ has more regularity, we can differentiate (C.1).
If for instance $v\in H^1$, we get for every $\alpha \in {\bf N}^N$ of length $1$: $$(P+z)(D^\alpha u)=D^\alpha v-[P,D^\alpha ]u,\eqno{({\rm C}.15)}$$ and $$[P,D^\alpha ]=i[\nu (x,D),D^\alpha ]+[V,D^\alpha ]\in \sum C_b^\infty D_{x_j}+C_b^\infty ,$$ and knowing that $u\in H^2$, we see that the RHS of (C.15) is in $L^2$. Since we also know that $D^\alpha u\in H^1$, the preceding proposition implies that $D^\alpha u\in H^2$, $(\nu _1(x,D)+z_2)D^\alpha u\in L^2$. By iteration, we get: \medskip \par\noindent \bf Theorem C.4. \it Let $C_0$ be as in the preceding theorem, let $m\in {\bf N}$, $v\in H^m$, $z_1\ge C_0$ and let $u$ be the solution of (C.1), given by the preceding theorem. Then $u\in H^{m+2}$, $(\nu _1(x,D)+z_2)u\in H^m$ and we have $$\Vert u\Vert _{H^{m+2}}+\sqrt{z_1}\Vert u\Vert _{H^{m+1}}+z_1\Vert u\Vert_{H^m} +\Vert (\nu _1(x,D_x)+z_2)u\Vert_{H^m} \le C_m\Vert v\Vert_{H^m}.\eqno{({\rm C}.16)} $$\rm\medskip \par It remains to make two routine extensions. The first one concerns the decay of $u$ if $v$ decays. Let $f:[1,+\infty [\to ]0,+\infty [$ with $f$, $1/f$ bounded by some constant that will not enter into the estimates and assume that $f$ is smooth with $f^{(k)}(t)={\cal O}_k(1)f(t)t^{-k}$. Then $f(\langle x\rangle )^{-1}\circ P\circ f(\langle x\rangle )$ has the same properties as $P$. We can approximate the function $F(t)=t$ by the functions $f_\epsilon (t)=t/(1+\epsilon t)$, $0<\epsilon \le 1$, for which $f_\epsilon ^{(k)}(t)={\cal O}_k(1)f_\epsilon (t)t^{-k}$ uniformly w.r.t. $\epsilon $. From this it is easy to see that we can gain power decay for $u$, if $v$ has such a power decay. More precisely, we can prove the following theorem, where we let $H^{k,m}$ for $k,\,m\in{\bf N}$ denote the weighted Sobolev space of all $u\in {\cal S}'$ s.t. $\langle x\rangle ^kD^\alpha u\in L^2$ for $\vert \alpha \vert \le m$: \medskip \par\noindent \bf Theorem C.5.
\it Same as the preceding theorem after the substitutions: $m\mapsto (k,m)\in{\bf N}^2$, $H^m\mapsto H^{k,m}$, $H^{m+1}\mapsto H^{k,m+1}$, $H^{m+2}\mapsto H^{k,m+2}$ everywhere. \rm \medskip \par The second extension concerns parameters. Let $W\subset {\bf R}^N$ be open, and let $\nu (x,y,{\partial \o\partial x})$ be a complex vector field, $V=V(x,y)$. We assume $$V,\,\im \nu ,\,\nabla \re \nu \in C_b^\infty ({\bf R}^N\times W),\eqno{({\rm C}.17)}$$ $$\re \nu ={\cal O}(\langle x\rangle ).\eqno{({\rm C}.18)}$$ Of course, we have the estimate in (C.18) for every fixed $y$, by (C.17), but the point of (C.18) is that the estimate holds uniformly with respect to $y$. It is clear that the preceding estimates hold uniformly with respect to $y$. If the function $v=v(x,y)$ depends sufficiently smoothly on $y$, we can also differentiate the equation (C.1) with respect to $y$, and we get the following result: \medskip \par\noindent \bf Theorem C.6. \it There exist $C_k>0$ for all $k\in{\bf N}$ such that the following holds: Let $\ell ,\,k,\, m\,\in {\bf N}$, and let $v=v(x,y)$ be a measurable function on ${\bf R}^N\times W$, such that $D_y^\beta v\in H^{k,m}({\bf R}^N)$ with locally bounded norm, for $y\in W$, $\vert \beta \vert \le \ell$. Let $z_1\ge C_k$ and let $u=u(x,y)$ be the unique solution of (C.1) which belongs to $H^1$ for every $y$. Then $D_y^\beta u\in H^{k,m+2}$, $(\nu _1(x,y,D_x)+z_2)D_y^\beta u\in H^{k,m}$ for $\vert \beta \vert \le \ell$ with locally bounded norms for $y\in W$, and we have $$\eqalignno{&\sum_{\vert \beta \vert \le \ell}(\Vert D_y^\beta u\Vert _{H^{k,m+2}}+\sqrt{z_1}\Vert D_y^\beta u\Vert _{H^{k,m+1}}+z_1\Vert D_y^\beta u\Vert _{H^{k,m}}+&{({\rm C}.19)}\cr &\hskip 1cm\Vert (\nu _1(x,y,D_x)+z_2)D_y^\beta u \Vert _{H^{k,m}})\le C_{\ell,k,m}\sum_{\vert \beta \vert \le \ell}\Vert D_y^\beta v\Vert _{H^{k,m}},\,\, y\in W,}$$ where $C_{\ell,k,m}$ is independent of $y$.\rm \medskip \par We return temporarily to the parameter independent situation.
By combining Theorem C.3 and the second basic a priori estimate (C.6), we see that $P$ is a closed unbounded operator on $L^2({\bf R}^N)$ with domain $\{ u\in H^2;\,\nu _1u\in L^2\}$, such that $\{ z\in{\bf C};\,z_1<-C_0\}$ is contained in the resolvent set and such that for $z_1>C_0$: $$\Vert (z+P)^{-1}\Vert _{{\cal L}(L^2)}\le {1\over z_1-C_0}.\eqno{({\rm C}.20)}$$ We can apply the Hille-Yosida theorem to conclude that $-P$ is the generator of a strongly continuous semigroup, $$[0,+\infty [\ni t\mapsto T_t=e^{-tP},\eqno{({\rm C}.21)}$$ with $$\Vert e^{-tP}\Vert _{{\cal L}(L^2)}\le e^{C_0t}.\eqno{({\rm C}.22)}$$ Applying Theorem C.5 with $m=0$, and the observation leading to that result, we see that $e^{-tP}$ is also a strongly continuous semigroup on $H^{k,0}$ for every $k\in {\bf N}$, and $$\Vert e^{-tP}\Vert _{{\cal L}(H^{k,0})}\le e^{C_kt}.\eqno{({\rm C}.23)}$$ To obtain this, we consider $P$ as an unbounded operator in $H^{k,0}$ with the analogous domain, and we identify the two semigroups using a limiting sequence of weights as above. In both cases, we notice that $e^{-tP}$ is a strongly continuous semigroup on ${\cal D}(P^m)$ for every fixed $m$. Playing with $k,m$, we conclude that if $u\in {\cal S}({\bf R}^N)$, then $e^{-tP}u\in C^\infty ([0,+\infty [;{\cal S}({\bf R}^N))$ and we have in the classical sense: $$({\partial \over \partial t}+P(x,D_x))(e^{-tP}u(x))=0.\eqno{({\rm C}.24)}$$ \par We now consider equations in tube domains and we start by applying the $L^2$ theory above. Let $W\subset\subset{\bf R}^N$ be open, connected and satisfying a cone condition, so that if $u\in H^m(\Omega )$, $\Omega ={\bf R}^N+iW$ and $m>N$, then $u\in C(\overline{\Omega })$.
Let $V(z)\in C_b^\infty (\overline{\Omega };{\rm Mat}_{M}({\bf C}))$ be holomorphic in $\Omega $ and let $\nu (z,{\partial \o \partial z})=\sum_1^N\nu _j(z){\partial \o \partial z_j}$ have holomorphic coefficients $\nu _j$ which are also of class $C^\infty (\overline{\Omega })$, and which satisfy: $$\im \nu _j,\,\nabla \re \nu _j\,\in C_b^\infty (\overline{\Omega }).\eqno{({\rm C}.25)}$$ A typical example of such a vector field is $\nu (z,{\partial \over \partial z})=\sum_1^Nz_j{\partial \o \partial z_j}$. If $u$ is holomorphic in $\Omega $, we notice that $$\nu (z,{\partial \o \partial z})u=\widetilde{\nu }(x,y,{\partial \o \partial x})u=\nu _{\bf R}(x,y,{\partial \o \partial x},{\partial \o \partial y})u,\eqno{({\rm C}.26)}$$ where we write $z=x+iy$, and where $$\widetilde{\nu }(x,y,{\partial \over \partial x})=\sum_1^N\nu _j(z){\partial \o \partial x_j},\eqno{({\rm C}.27)}$$ $$\nu _{\bf R}(x,y,{\partial \o \partial x},{\partial \o \partial y})=\sum_1^N(\re (\nu _j(z)){\partial \o \partial x_j}+(\im \nu _j(z)){\partial \o \partial y_j}).\eqno{({\rm C}.28)}$$ $\nu _{\bf R}$ is the real vector field determined by the direction $(\nu _1,\dots ,\nu _N)\in {\bf C}^N\simeq {\bf R}^{2N}$. \par Let ${\cal H}^m(\Omega )=\{u\in H^m(\Omega ); u\hbox{ is holomorphic }\}$, $m\in {\bf N}$, ${\cal H}(\Omega )={\cal H}^0(\Omega )$ and more generally for $k,m\in {\bf N}$: $${\cal H}^{k,m}(\Omega )=\{ u\in H^m(\Omega ); u\hbox{ is holomorphic, }\langle z\rangle ^kD_z^\alpha u\in L^2(\Omega ),\,\vert \alpha \vert \le m\} .$$ Similarly, define $$\widetilde{H}^{k,m}(\Omega )=\{ u\in H^m(\Omega ); \langle z\rangle ^kD_x^\alpha u\in L^2(\Omega ),\, \vert \alpha \vert \le m\}.$$ Let $$P=-\Delta _{\bf C}+\nu (z,{\partial \o \partial z})+V(z),\eqno{({\rm C}.29)}$$ $$\widetilde{P}=-\Delta _{\bf R}+\widetilde{\nu }+V,\eqno{({\rm C}.30)}$$ where $\Delta _{\bf C}=\sum_1^N({\partial \o \partial z_j})^2$, $\Delta _{\bf R}=\sum_1^N({\partial \o \partial x_j})^2$.
Notice that our two Laplace operators have the same action on holomorphic functions. For this reason we shall sometimes drop the subscripts ${\bf R}$, ${\bf C}$. Also, when $u$ is holomorphic, $Pu=\widetilde{P}u$. We can apply the preceding results and see that $\widetilde{P}:H^0(\Omega )\to H^0(\Omega )$ is a closed operator with domain $\{ u\in \widetilde{H}^2(\Omega );\, \widetilde{\nu }(x,y,{\partial \over \partial x})u\in H^0(\Omega )\}$ and resolvent set containing the half-plane $z_1<-C_0$. Moreover $\Vert (\widetilde{P}-z)^{-1}\Vert \le 1/(-C_0-z_1)$ for $z$ in that half-plane. We have the completely analogous result for $\widetilde{P}:\widetilde{H}^{k,0}\to \widetilde{H}^{k,0}$. If $v\in {\cal H}^0(\Omega )$, let $u\in {\cal D}(\widetilde{P})$ be the solution of $(\widetilde{P}-z)u=v$ for $z_1<-C_0$. Notice that ${\partial \o \partial \overline{z}_j}$ formally commutes with $\widetilde{P}$. If $W'\subset\subset W$, $\Omega '={\bf R}^N+iW'$, then ${\partial \o \partial \overline{z}_j}u\in \widetilde{H}^1(\Omega ')$, and we get $(\widetilde{P}-z)({\partial \o \partial \overline{z}_j}u)=0$, implying ${\partial \o \partial \overline{z}_j}u=0$ in $\Omega '$ and hence in $\Omega $ if we take a sequence of $\Omega '$ converging to $\Omega $. We have shown that $u$ is holomorphic and $(P-z)u=v$. We get: \medskip \par\noindent \bf Theorem C.7. \it $P:{\cal H }(\Omega )\to {\cal H}(\Omega )$ is a closed operator with domain $\{ u\in {\cal H}^2(\Omega );\, \nu (z,{\partial \o \partial z})u\in {\cal H}^0(\Omega )\}$ and resolvent set containing the half-plane $z_1<-C_0$. Moreover $\Vert (P-z)^{-1}\Vert \le 1/(-C_0-z_1)$ for $z$ in that half-plane.
The same result is valid with the substitutions: ${\cal H}^0\mapsto {\cal H}^{k,0}$, ${\cal H}^2\mapsto {\cal H}^{k,2}$, $C_0\mapsto C_k$.\rm\medskip \par The Hille-Yosida theorem allows us to define the strongly continuous semigroup $T_t=e^{-tP}: {\cal H}(\Omega )\to {\cal H }(\Omega )$, $t\ge 0$, with $\Vert e^{-tP}\Vert _{{\cal L}({\cal H}(\Omega ))}\le e^{C_0t}$, and more generally $\Vert e^{-tP}\Vert _{{\cal L}({\cal H}^{k,0})}\le e^{C_kt}$. Notice also that $T_t$ acts as a strongly continuous semigroup in the domain of any positive integer power of $P:{\cal H }^{k,0}\to{\cal H}^{k,0}$. It follows that if $u\in {\cal S}(\overline{\Omega})$ in the sense that $u\in C^\infty (\overline{\Omega })$ and all derivatives tend to zero at infinity faster than any negative power of $\langle z\rangle $, and if $u$ is holomorphic in $\Omega $, then $e^{-tP}u\in C^\infty ([0,+\infty [;{\cal S}(\overline{\Omega })\cap{\rm Hol}(\Omega ))$, and the heat equation $({\partial \o \partial t}+P)e^{-tP}u=0$ holds in the classical sense. Moreover for such $u$'s we also have $e^{-tP}Pu=Pe^{-tP}u$. \par Finally, we are ready for the $L^\infty $ estimates, but we will have to add an assumption on $\nu $ and an assumption on $V$. $$\eqalignno{&\hbox{There is a real vector field }\mu\, \hbox{in ${\bf C}^N$ with smooth coefficients}&{({\rm C}.31)}\cr &\hbox{of at most linear growth, such that }\mu _{\vert \Omega }=\nu _{\bf R},}$$ $$\hbox{If }z\in \Omega ,\hbox{ then }\exp (-t\mu )(z )\in \Omega ,\,\,t\ge 0.\eqno{({\rm C}.32)}$$ \par Now equip ${\bf C}^M$ with some norm, and view ${\bf C}^M$ correspondingly as a Banach space $B$, with dual $B^*$. Let $(u\vert v)$ be the corresponding sesquilinear scalar product on $B\times B^*$.
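Assumptions (C.31) and (C.32) are easily verified for the model field $\nu (z,{\partial \over \partial z})=\sum_1^Nz_j{\partial \over \partial z_j}$ mentioned earlier (a sketch, under the additional assumption that $W$ is starshaped with respect to $y=0$): by (C.28),

```latex
\nu _{\bf R}=\sum_1^N\Bigl( x_j{\partial \over \partial x_j}
+y_j{\partial \over \partial y_j}\Bigr) ,\qquad
\exp (-t\mu )(z)=e^{-t}z,
```

where $\mu $ is the radial field on all of ${\bf C}^N$, clearly with linear growth, and $e^{-t}z=e^{-t}x+ie^{-t}y\in \Omega $ for all $t\ge 0$, since the segment $]0,y]$ stays in $W$.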
We view $V(z)$ as a map $B\to B$, and make the following assumption on $V$: $$\eqalignno{&\hbox{There exists }\delta >0, \hbox{ such that if }z\in\overline{\Omega },\,u\in B,\,v\in B^*,\hbox{ and}&{({\rm C}.33)}\cr &\re (u\vert v)=\Vert u\Vert _B\Vert v\Vert _{B^*},\hbox{ then }\re (V(z)u\vert v)\ge \delta \Vert u\Vert _B\Vert v\Vert _{B^*}.}$$ \par Let $u(t,z)\in C^\infty ([0,+\infty [;{\cal S}(\overline{\Omega };B))$ be holomorphic in $z$, and assume that $u$ solves the equation: $${\partial \o \partial t}u+Pu=0.\eqno{({\rm C}.34)}$$ Let $$m(t)=\sup_{z\in \overline{\Omega }}\Vert u(t,z)\Vert _B.\eqno{({\rm C}.35)}$$ Notice that $$m(t)=\max_{(z,e)\in\overline{\Omega }\times S(B^*)}\re (u(t,z)\vert e),\eqno{({\rm C}.36)}$$ where $S(B^*)=\{ e\in B^*;\,\Vert e\Vert _{B^*}=1\}$. Let $M(t)$ be the set of points in $\overline{\Omega }\times S(B^*)$, where the maximum is attained in ({\rm C}.36). It follows that $m(t)$ is a locally Lipschitz function on $[0,+\infty [$ whose (a.e. defined) derivative satisfies: $$m'(t)\le \sup_{(z,e)\in M(t)}\re ({\partial \o \partial t}u(t,z)\vert e).\eqno{({\rm C}.37)}$$ \par Consider $$\re (Pu(t,z)\vert e)=-\Delta _{\bf R}\re (u(t,z)\vert e)+\nu _{\bf R}(x,y,{\partial \o \partial x},{\partial \o \partial y})\re (u(t,z)\vert e)+\re (V(z)u(t,z)\vert e).$$ If $(z,e)\in M(t)$, then $w\mapsto \re (u(t,w)\vert e)$ has a maximum at $z$, so $-\Delta _{\bf R}\re (u(t,z)\vert e)\ge 0$. On the other hand the assumptions (C.31), (C.32) imply that $\nu _{\bf R}\re (u(t,z)\vert e)\ge 0$, and since $\re (u(t,z)\vert e)=\Vert u(t,z)\Vert _B\Vert e\Vert _{B^*}$, we have $\re (V(z)u(t,z)\vert e)\ge \delta \Vert u(t,z)\Vert _B\Vert e\Vert _{B^*}$. From (C.34), we get $\re ({\partial u\o \partial t}\vert e)=-\re (Pu\vert e)$, so for $(z,e)\in M(t)$: $\re ({\partial u\o \partial t}\vert e)\le -\delta m(t)$, and (C.37) implies that $$m'(t)\le -\delta m(t),\eqno{({\rm C}.38)}$$ and hence that $m(t)\le e^{-\delta t}m(0)$.
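The contraction mechanism in (C.38) can be illustrated by a finite-dimensional toy computation (a sketch under simplifying assumptions, not part of the argument: the operator is replaced by a matrix $P$ whose symmetric part is bounded below by $\delta $, and $B$ carries the Euclidean norm):

```python
import numpy as np

def expm(A):
    # matrix exponential via eigendecomposition (adequate for this toy example)
    w, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

delta = 0.5
rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
S = (S - S.T) / 2                 # skew-symmetric: no contribution to Re(Pu|u)
P = delta * np.eye(4) + S         # symmetric part of P equals delta * I

# analogue of m(t) <= e^{-delta t} m(0): the semigroup contracts at rate delta
for t in (0.1, 1.0, 5.0):
    assert np.linalg.norm(expm(-t * P), 2) <= np.exp(-delta * t) + 1e-10

# the inverse Q = P^{-1} = \int_0^\infty e^{-tP} dt then has norm at most 1/delta
Q = np.linalg.inv(P)
assert np.linalg.norm(Q, 2) <= 1.0 / delta + 1e-10
```

Here the skew part of $P$ plays the role of the first order terms of the operator, which, like the terms $-\Delta _{\bf R}$ and $\nu _{\bf R}$ in the argument above, do not weaken the decay rate.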
\par Summing up, we have shown that if $u\in {\cal S}(\overline{\Omega })$ is holomorphic in $\Omega $, then $$\sup_{z\in \Omega }\Vert e^{-tP}u(z)\Vert _B\le e^{-\delta t}\sup_{z\in \Omega }\Vert u(z)\Vert _B.\eqno{({\rm C}.39)}$$ \par For the same $u$'s we have $Pe^{-tP}u=e^{-tP}Pu=-{\partial \o \partial t}e^{-tP}u$, so if we put $$Qu=\int_0^\infty e^{-tP}udt,$$ we get $$PQu=QPu=-\int_0^\infty {\partial \o \partial t}(e^{-tP}u)dt=u.\eqno{({\rm C}.40)}$$ We also have $$\sup_{z\in \Omega }\Vert Qu(z)\Vert _B\le {1\o \delta }\sup_{z\in \Omega }\Vert u(z)\Vert _B.\eqno{({\rm C}.41)}$$ \par Put $f_\epsilon (t)={t\over 1+\epsilon t}$, and $F_\epsilon (z)=f_\epsilon (\langle z/C\rangle )$ (with $\langle z\rangle =\sqrt{1+z^2}$) where $C$ is large enough, so that the latter function is well-defined in $\overline{\Omega }$, when $0<\epsilon \le 1$. Then (C.39) remains valid if we replace $P$ by $F_\epsilon ^{-1}\circ P\circ F_\epsilon $ and $\delta $ by $\delta /2$, provided that $\epsilon $ is small enough. Examining the earlier arguments, we see that (C.41) also holds with $Q$ replaced by $F_\epsilon ^{-1}\circ Q\circ F_\epsilon $ and with $\delta $ replaced by $\delta /2$. \medskip \par\noindent \it Definition. \rm Let $u_j,u\in C_b(\overline{\Omega })\cap{\rm Hol}(\Omega)$, for $j\in{\bf N}$. We say that $u_j\to u$ narrowly when $j\to \infty $ if $\sup_\Omega \Vert u_j\Vert _B$ is bounded by a constant independent of $j$ and $u_j\to u$ uniformly on every compact subset of $\overline{\Omega }$. \medskip \par Let $u_j,u\in{\cal S}(\overline{\Omega })\cap{\rm Hol}(\Omega )$ and assume that $u_j\to u$ narrowly, when $j\to \infty $. Then $\sup_\Omega \Vert F_\epsilon ^{-1}(u_j-u)\Vert _B\to 0$, so $\sup_\Omega \Vert F_\epsilon ^{-1}(Qu_j-Qu)\Vert _B\to 0$ and we see that $Qu_j\to Qu$ narrowly when $j\to \infty $. In other words, $Q$ preserves narrow convergence of sequences in ${\cal S}(\overline{\Omega })$ with limits in the same space.
From (C.40), we then get: \medskip \bf \noindent Theorem C.8. \it Let $E\subset C_b(\overline{\Omega })\cap{\rm Hol}(\Omega)$ be the closure of ${\cal S}(\overline{\Omega })\cap{\rm Hol}(\Omega)$ for narrow convergence. Then: \smallskip \par\noindent a) $Q:E\to E$ is well-defined and (C.41) holds for $u\in E$. \smallskip \par\noindent b) If $v\in E$, then $PQv=v$. \smallskip \par\noindent c) Let $u\in E$ and assume that there is a sequence $u_j\in{\cal S}(\overline{\Omega })\cap{\rm Hol\,}(\Omega )$ with $u_j\to u$ and $Pu_j\to Pu$ narrowly (so that $Pu\in E$). Then $QPu=u$.\rm \medskip \par Naturally we want to know if there is a simpler characterization of the spaces that appear here. \medskip \par\noindent \bf Proposition C.9. \it If $W$ is starshaped with respect to $y=0$, then $E=C_b(\overline{\Omega })\cap{\rm Hol}(\Omega)$. \medskip \par\noindent \bf Proof. \rm Let $u\in C_b(\overline{\Omega })\cap{\rm Hol}(\Omega)$. Then $\widetilde{u}_j\to u$ narrowly, where $\widetilde{u}_j(z)=u(\theta _jz)$ and $\theta _j=(1-{1\over j})$. Put $u_j(z)=e^{-\epsilon _j\langle z/C\rangle }\widetilde{u}_j(z)\in {\cal S}(\overline{\Omega })\cap{\rm Hol}(\Omega)$, where $C>0$ is sufficiently large and $\epsilon _j\searrow 0$. Then $u_j\to u$ narrowly.\hfill{$\#$} \medskip \par We leave the following question open until an answer is needed: Make the assumptions of the last proposition and assume that $\partial ^\alpha u,\,Pu\in C_b(\overline{\Omega })\cap{\rm Hol\,}(\Omega )$, for $\vert \alpha \vert \le 2$. Is it true that $u$ satisfies the assumption of c) in the last theorem? \vfill\eject \centerline{\bf References.} \medskip \item {[A]} P. W. Anderson, {\it Absence of diffusion in certain random lattices}, Phys. Rev. {\bf 109}, 1492 (1958). \item {[AM]} M. Aizenman and S. Molchanov, {\it Localization at large disorder and at extreme energies: an elementary derivation}, Commun. Math. Phys. {\bf 157}, 245 (1993). \item {[Be]} F. A.
Berezin, {\it The method of second quantization}, New York: Academic Press, 1966. \item{[BCKP]} A. Bovier, M. Campanino, A. Klein, and F. Perez, {\it Smoothness of the density of states in the Anderson model at high disorder}, Commun. Math. Phys. {\bf 114}, 439-461 (1988). \item{[CFS]} F. Constantinescu, J. Fr\"ohlich, and T. Spencer, {\it Analyticity of the density of states and replica method for random Schr\"odinger operators on a lattice}, J. Stat. Phys. {\bf 34}, 571-596 (1984). \item {[DK]} H. von Dreifus and A. Klein, {\it A new proof of localization in the Anderson tight binding model}, Commun. Math. Phys. {\bf 124}, 285-299 (1989). \item {[Ec]} E. N. Economou, {\it Green's functions in quantum physics}, Springer Series in Solid State Sciences 7, 1979. \item{[FMSS]} J. Fr\"ohlich, F. Martinelli, E. Scoppola and T. Spencer, {\it Constructive proof of localization in the Anderson tight binding model}, Commun. Math. Phys. {\bf 101}, 21-46 (1985). \item{[FS]} J. Fr\"ohlich and T. Spencer, {\it Absence of diffusion in the Anderson tight binding model for large disorder or low energy}, Commun. Math. Phys. {\bf 88}, 151-184 (1983). \item {[HS]} B. Helffer and J. Sj\"ostrand, {\it On the correlation for Kac-like models in the convex case}, J. Stat. Phys. (1994). \item {[K]} A. Klein, {\it The supersymmetric replica trick and smoothness of the density of states for the random Schr\"odinger operators}, Proceedings of Symposia in Pure Mathematics, {\bf 51}, 1990. \item{[KS]} A. Klein and A. Spies, {\it Smoothness of the density of states in the Anderson model on a one dimensional strip}, Annals of Physics {\bf 183}, 352-398 (1988). \item{[S1]} J. Sj\"ostrand, {\it Ferromagnetic integrals, correlations and maximum principle}, Ann. Inst. Fourier {\bf 44}, 601-628 (1994). \item{[S2]} J. Sj\"ostrand, {\it Correlation asymptotics and Witten Laplacians}, Algebra and Analysis {\bf 8} (1996). \item {[V]} T.
Voronov, {\it Geometric integration theory on supermanifolds}, Mathematical Physics Review, USSR Academy of Sciences, Moscow, 1993. \item {[W1]} W. M. Wang, {\it Asymptotic expansion for the density of states of the magnetic Schr\"odinger operator with a random potential}, Commun. Math. Phys. {\bf 172}, 401-425 (1995). \item{[W2]} W. M. Wang, {\it Supersymmetry and density of states of the magnetic Schr\"odinger operator with a random potential revisited}, (submitted). \end