\documentclass{amsart}
\usepackage{color}
\usepackage{hyperref}

\title{Symmetric Kronecker products and\\ semiclassical wave packets}

\author[George A. Hagedorn]{George A. Hagedorn}
\address[George A. Hagedorn]{Department of Mathematics and Center for Statistical Mechanics, Mathematical Physics, and Theoretical Chemistry, Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24061-0123, U.S.A.}
\email{hagedorn@math.vt.edu}

\author[Caroline Lasser]{Caroline Lasser}
\address[Caroline Lasser]{Zentrum Mathematik M3, Technische Universit\"at M\"unchen, D-80290 M\"unchen, Germany}
\email{classer@ma.tum.de}

\date{\today}
\keywords{Kronecker product, symmetry, semiclassical wave packet}
\subjclass[2010]{15A69, 15B10, 81Q20}

\def\eps{\varepsilon}
\def\C{{\mathbb C}}
\def\N{{\mathbb N}}
\def\R{{\mathbb R}}
\def\GL{{\rm GL}}
\def\Id{{\rm Id}}
\def\I{{\mathcal I}}
\def\Rr{{\mathcal R}}

\newtheorem{theorem}{Theorem}
\newtheorem{proposition}{Proposition}
\newtheorem{lemma}{Lemma}
\newtheorem{corollary}{Corollary}
\newtheorem{definition}{Definition}

\begin{document}
%\tableofcontents

\begin{abstract}
We investigate the iterated Kronecker product of a square matrix with itself and prove an invariance property for symmetric subspaces. This motivates the definition of an iterated symmetric Kronecker product and the derivation of an explicit formula for its action on vectors. We apply our result to describe a linear change in the matrix parametrization of semiclassical wave packets.
\end{abstract}

\maketitle

\section{Introduction}
The Kronecker product of matrices is known to be ubiquitous \cite{VL00}, and our aim here is to investigate the $n$-fold Kronecker product of a complex square matrix $M\in\C^{d\times d}$ with itself,
\[
M^{n\otimes} \ = \ \underbrace{M \otimes \cdots \otimes M}_{n\,\text{times}},\qquad n\in\N,
\]
and to apply our findings to the parametrization of semiclassical wave packets.

\subsection{Two-fold symmetric Kronecker products}
In semidefinite programming (see, for example, \cite{AHO98} or \cite[Appendix~E]{Kle02}), the two-fold Kronecker product notably appears in combination with subspaces that have a particular symmetry property. One considers the space
\[
X_2 \ =\ \left\{x\in\C^{d^2}: x = {\rm vec}(X), \,X\ =\ X^t\in\C^{d\times d}\right\},
\]
which contains those vectors that can be obtained by the row-wise vectorization of a complex symmetric $d\times d$ matrix. The dimension of the space $X_2$ is
\[
L_2 \ =\ \tfrac12\,d\,(d+1).
\]
One can prove that this space is invariant under Kronecker products, in the sense that for all matrices $M\in\C^{d\times d}$, one has
\[
(M\otimes M)\,x\in X_2,\quad \text{whenever}\;\; x\in X_2.
\]
Now one uses the standard basis of $\C^{d^2}$ to construct an orthonormal basis of the subspace $X_2$ and defines a corresponding sparse $L_2\times d^2$ matrix~$P_2$ that has the basis vectors as its rows.
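To illustrate the construction in the simplest case $d=2$: the space $X_2=\left\{x\in\C^4: x_2=x_3\right\}$ has the orthonormal basis $e_1$, $\tfrac{1}{\sqrt2}(e_2+e_3)$, $e_4$, so that
\[
P_2\ =\ \begin{pmatrix}1&0&0&0\\ 0&\frac{1}{\sqrt2}&\frac{1}{\sqrt2}&0\\ 0&0&0&1\end{pmatrix},
\]
a matrix that reappears in Section~\ref{sec:sym}. For $M=\begin{pmatrix}a&b\\ c&d\end{pmatrix}$, a direct computation gives
\[
P_2\,(M\otimes M)\,P_2^*\ =\ \begin{pmatrix}a^2&\sqrt2\,ab&b^2\\ \sqrt2\,ac&ad+bc&\sqrt2\,bd\\ c^2&\sqrt2\,cd&d^2\end{pmatrix},
\]
which is precisely the two-fold symmetric Kronecker product defined next.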
The symmetric Kronecker product of $M$ with itself is then the $L_2\times L_2$ matrix
\[
S_2(M) \ =\ P_2\,\left(M\otimes M\right)\, P_2^*.
\]

\subsection{$\bf n$-fold symmetric Kronecker products}
How does one extend this construction to symmetrizing $n$-fold Kronecker products? It is instructive to revisit the second order space in two dimensions and to write a vector $x\in X_2$ as
\[
x\ =\ (x_{(2,0)},\,x_{(1,1)},\,x_{(1,1)},\,x_{(0,2)})^t.
\]
This labelling uses the multi-indices $k=(k_1,\,k_2)\in\N^2$ with $k_1+k_2=2$ in the redundant enumeration
\[
\nu_2 \ =\ \left((2,0),\,(1,1),\,(1,1),\,(0,2)\right)^t.
\]
This description allows for a straightforward extension to higher order $n$ and dimension $d$. One works with a redundant enumeration $\nu_n$ of the multi-indices $k=(k_1,\,\ldots,\,k_d)\in\N^d$ with $k_1+\cdots+k_d=n$, and defines
\[
X_n \ = \ \left\{ x\in\C^{d^n}: x_{j} = x_{j'}\;\;\text{if}\;\; \nu_n(j)=\nu_n(j')\right\}.
\]
The dimension of $X_n$ equals the number of multi-indices in $\N^d$ of order $n$, that is, the binomial coefficient
\[
L_n\ =\ \binom{n+d-1}{n}.
\]
Again, we can prove invariance in the sense that for all $M\in\C^{d\times d}$,
\[
M^{n\otimes}\,x\in X_n,\quad\text{whenever}\;\; x\in X_n.
\]
See Proposition~\ref{prop:kron}. Then, we use the standard basis of $\C^{d^n}$ to build an orthonormal basis of $X_n$ and assemble the corresponding sparse $L_n\times d^n$ matrix $P_n$. All this motivates the definition of the $n$-fold symmetric Kronecker product as
\[
S_n(M) \ =\ P_n\,M^{n\otimes}\,P_n^*.
\]
The matrix $S_n(M)$ is of size $L_n\times L_n$ and inherits structural properties such as invertibility or unitarity from the matrix $M$. See Lemma~\ref{lem:str}.

\bigskip

Our main result, Theorem~\ref{theo:main}, provides an explicit formula for the action of the matrix $S_n(M)$ in terms of multinomial coefficients and powers of the entries of the original matrix~$M$. Labelling the components of a vector $y\in\C^{L_n}$ by multi-indices of order $n$, we obtain for all $k\in\N^d$ with $|k|=n$ that
\begin{align*}
&\left(S_n(M)\,y\right)_k\\
&=\,\frac{1}{\sqrt{k!}}\sum_{|\alpha_1|=k_1}\cdots \sum_{|\alpha_d|=k_d}\,\binom{k_1}{\alpha_1}\cdots \binom{k_d}{\alpha_d}\,m_1^{\alpha_1}\cdots m_d^{\alpha_d}\, \sqrt{(\alpha_1+\cdots+\alpha_d)!}\ y_{\alpha_1+\cdots+\alpha_d},
\end{align*}
where $m_1,\,\ldots,\,m_d\in\C^d$ denote the row vectors of $M$. The summations weighted with multinomial coefficients stem from the $n$-fold Kronecker product $M^{n\otimes}$, whereas the square roots of the factorials originate in the orthonormalization of the row vectors of the matrix~$P_n$.

\subsection{Application to semiclassical wave packets}
Our interest in $n$-fold symmetric Kronecker products emerged when studying linear changes in the para\-metrization of semiclassical wave packets. Semiclassical wave packets were first proposed in \cite{Hag85} as a multivariate non-isotropic generalization of the Hermite functions. See also \cite{Hag98}. A family of semiclassical wave packets
\[
\left\{\varphi_k[A,\,B]: k\in\N^d\right\}
\]
is parametrized by two invertible matrices $A,\,B\in{\rm GL}(d,\C)$ and forms an orthonormal basis of the Hilbert space of square-integrable functions. More recently, semiclassical wave packets have also been used for the numerical discretization of semiclassical quantum dynamics by E.~Faou, V.~Gradinaru, and C.~Lubich \cite{FGL09,L}.
The computationally demanding step of this method is the assembly of the Galerkin matrix for the potential function $V:\R^d\to\R$ according to
\[
\left\langle \varphi_k[A,\,B], V\varphi_l[A,\,B]\right\rangle\ =\ \int_{\R^d}\,\overline{\varphi_k[A,\,B](x)}\, V(x)\,\varphi_l[A,\,B](x)\,dx.
\]
In an effort to develop a new variant \cite{BGH} that performs the quadrature for semiclassical wave packets with different parametrizing matrices $A',\,B'$, the search began for the matrices describing a linear change in the parametrization. Our main application is Corollary~\ref{MainResult} in Section~\ref{Section5.3}. It gives an explicit formula in terms of an $n$-fold symmetric Kronecker product.

\subsection{Organization of the paper}
In Section~\ref{sec:com}, we start with some combinatorics for explicitly relating the lexicographic enumeration of multi-indices of order $n$ to the redundant enumeration $\nu_n$. Then we introduce the symmetric subspaces $X_n$ in Section~\ref{sec:sym} and construct an orthonormal basis together with the corresponding matrix $P_n$. In Section~\ref{sec:kron}, we define the $n$-fold symmetric Kronecker product and prove our main result, Theorem~\ref{theo:main}. An introduction to semiclassical wave packets and the description of linear changes in their parametrization by symmetric Kronecker products is given in Section~\ref{sec:sem}.

\subsection{Notation}
On some occasions we shall use the binomial coefficient
\[
\binom{n}{j}\ =\ \frac{n!}{(n-j)!\,j!},\qquad \text{for non-negative integers }n\ge j.
\]
We write a multi-index $k\in\N^d$ as a row vector $k=(k_1,\,\ldots,\,k_d)$. We use the modulus $|k|=k_1+\cdots+k_d$, and the multinomial coefficient
\[
\binom{|k|}{k}\ =\ \frac{|k|!}{k_1!\,\cdots\,k_d!},\qquad \text{for}\;\;k\in\N^d.
\]
We adopt the convention that any multinomial coefficient with a negative argument is defined to be $0$. We also use the $k$th power of a vector,
\[
x^k\ =\ x_1^{k_1}\,\cdots\,x_d^{k_d},\qquad x\in\C^d.
\]

\section{Combinatorics}\label{sec:com}

\subsection{Reverse lexicographic ordering}
First we enumerate the set of multi-indices of order $n$ in $d$ dimensions,
\[
\left\{k\in\N^d: |k|=n\right\},\qquad n\in\N,
\]
in reverse lexicographic ordering and collect them as components of a formal vector denoted by $\ell_n$. The length of the vector $\ell_n$ is the binomial coefficient
\[
L_n\ =
%\ \frac{(n+d-1)!}{n!\ (d-1)!}\ =
\ \binom{n+d-1}{n}.
%=\ \binom{n+d-1}{d-1}
\]
%Indeed, the length $L_n$ of the vector $\ell_n$ is the number of ways of
%obtaining the integer~$n$ as a summation of $d$ non-negative integers,
%which is $(d+n-1)$ choose $n$. See {\it e.g.}, \cite[Chapter 1, \S7]{Ber71}.
%Alternatively,
One can think of this in the following way \cite{JHT}: The multi-indices $k$ of order $n$ in $d$ dimensions are in a one-to-one correspondence with the sequences of $n$ identical balls and $d-1$ identical sticks. The sticks partition the line into $d$ bins into which one can insert the $n$ balls. (The first bin is to the left of all the sticks, and contains $k_1$ balls; the last bin is to the right of all the sticks, and contains $k_d$ balls; for $2\le j\le d-1$, the $j^{\mbox{\scriptsize th}}$ bin is between sticks $j-1$ and $j$, and it contains $k_j$ balls.)
{\it E.g.}, the multi-index $(3,\,2,\,0,\,1)$ in four dimensions corresponds to
$$
{\color{red}\bullet}\quad{\color{red}\bullet}\quad{\color{red}\bullet}\quad {\color{blue}|}\quad{\color{red}\bullet}\quad{\color{red}\bullet}\quad {\color{blue}|}\quad{\color{blue}|}\quad{\color{red}\bullet}.
$$
If all these objects were distinguishable, there would be $(n+d-1)!$ permutations, but since the balls are all identical, one must divide by $n!$, and since the sticks are all identical, one must divide by $(d-1)!$.

\subsection{A redundant enumeration}
Next we redundantly enumerate and collect multi-indices of modulus $n$ in a vector $\nu_n$ of length $d^n$. We proceed recursively and set $\nu_0=\left((0,\,\ldots,\,0)\right)^t$,\quad $\nu_1=(e_1^*,\,\ldots,\,e_d^*)^t$,\quad and
\[
\nu_{n+1}\ =\ {\rm vec} \begin{pmatrix}\nu_n(1)+e_1^* & \ldots & \nu_n(d^n)+e_1^*\\ \vdots & & \vdots\\ \nu_n(1)+e_d^*& \ldots & \nu_n(d^n)+e_d^*\end{pmatrix},\qquad n\ge0,
\]
where $e_1,\ldots,e_d\in\C^d$ are the standard basis vectors of $\C^d$, and ${\rm vec}$ denotes the row-wise vectorization of a matrix.

\vskip 5mm

For example, for $d=2$, we have
\begin{align*}
\nu_2\ &=\ \left((2,0),\,(1,1),\,(1,1),\,(0,2)\right)^t,\\
\nu_3\ &=\ \left((3,0),\,(2,1),\,(2,1),\,(1,2),\,(2,1),\,(1,2),\,(1,2),\, (0,3)\right)^t.
\end{align*}

\subsection{A partition}
To relate the lexicographic and the redundant enumerations, we define the mapping
\[
\sigma_n: \{1,\,\ldots,\,L_n\}\to {\mathcal P}(\{1,\,\ldots,\,d^n\})
\]
so that for all $i\in\{1,\,\ldots,\,L_n\}$ and $j\in\{1,\,\ldots,\,d^n\}$ the following holds:
\[
j\in\sigma_n(i)\ \Longleftrightarrow\ \nu_n(j)=\ell_n(i).
\]

\vskip 5mm

For example, for $d=2$, we have
\[
\sigma_2(1)=\{1\},\quad \sigma_2(2)=\{2,\,3\},\quad \sigma_2(3)=\{4\}
\]
and
\[
\sigma_3(1)=\{1\},\quad \sigma_3(2)=\{2,\,3,\,5\}, \quad\sigma_3(3)=\{4,\,6,\,7\},\quad\sigma_3(4)=\{8\}.
\]

\vskip 5mm

We observe the following partition property.

\begin{lemma}
We have
\[
\#\sigma_n(i)\ =\ \binom{n}{\ell_n(i)},\qquad i=1,\,\ldots,\,L_n,
\]
and $\displaystyle\bigcup_{i=1,\,\ldots,\,L_n}\sigma_n(i) = \{1,\,\ldots,\,d^n\}$, where the union is pairwise disjoint.
\end{lemma}

\begin{proof}
We first prove the partition property. For any $j\in\{1,\,\ldots,\,d^n\}$ there exists $i\in\{1,\,\ldots,\,L_n\}$ such that $\nu_n(j)=\ell_n(i)$. So, we clearly have
\[
\bigcup_{i=1,\ldots,L_n} \sigma_n(i) = \{1,\,\ldots,\,d^n\}.
\]
Moreover, since $j\in\sigma_n(i)\cap\sigma_n(i')$ is equivalent to $\ell_n(i)=\ell_n(i')$, that is, $i=i'$, the union is disjoint.

To prove the claimed cardinality, we argue by induction. For $n=0$, we have
\[
\ell_{0}\ =\ \left((0,\,\ldots,\,0)\right)\ =\ \nu_{0},\qquad \sigma_{0}(1) = \{1\},\qquad \#\sigma_{0}(1) = 1.
\]
For the inductive step, we observe that in the redundant enumeration $\nu_n$, the multi-index $k=(k_1,\,\ldots,\,k_d)$ can be generated from $d$ possible entries in $\nu_{n-1}$,
\[
(k_1-1,\,k_2,\,\ldots,\,k_d),\ \ldots,\ (k_1,\,\ldots,\,k_{d-1},\,k_d-1),
\]
by adding $e_1^*,\,\ldots,\,e_d^*$, respectively. Of course, such an entry only belongs to $\nu_{n-1}$ if all its components are non-negative. For each of these indices with all entries non-negative, there is a unique number $j\in\{1,\,2,\,\dots,\,L_{n-1}\}$ such that $\ell_{n-1}(j)$ is the given index. If one of these indices has a negative entry, we define $j=-1$ and $\sigma_{n-1}(-1)$ to be the empty set, {\it i.e.},
\[
\sigma_{n-1}(-1)\ =\ \{\},\quad\mbox{whose cardinality is } 0.
\] We list the $d$ numbers defined this way as $i_1,\,\ldots,\,i_d$, and note that all the positive values in this list must be distinct. Then, \begin{eqnarray}\nonumber \#\sigma_n(i)&=& \sum_{m=1}^d\,\#\sigma_{n-1}(i_m) \\[3mm]\nonumber &=& \sum_{m=1}^d\,\binom{n-1}{k_1,\,\ldots,\,k_m-1,\,\ldots,\,k_d} \\[3mm]\nonumber &=& \sum_{m=1}^d\,\left. \begin{cases}\frac{(n-1)!}{k_1!\,\cdots(k_m-1)!\,\cdots\,k_d!}&k_m>0\\ 0&k_m=0\end{cases} \right\} \\[3mm]\nonumber &=& \frac{(n-1)!\,(k_1+\cdots+k_d)}{k_1!\,\cdots\,k_d!} \\[3mm]\nonumber &=&\binom{n}{k}. \end{eqnarray} \end{proof} \section{Symmetric subspaces}\label{sec:sym} We next analyze the symmetric spaces \[ X_n\ =\ \left\{x\in\C^{d^n}: x_j = x_{j'}\;\;\text{if}\;\; \nu_n(j)=\nu_n(j')\right\},\qquad n\in\N. \] We have $X_0=\C$ and $X_1=\C^d$, whereas $X_n$ is a proper subset of $\C^{d^n}$ for $n\ge2$. \vskip 5mm For example, for $d=2$, \begin{align*} X_2\ &=\ \left\{x\in\C^4: x_2=x_3\right\},\\ X_3\ &=\ \left\{x\in\C^8: x_2=x_3=x_5,\ x_4=x_6=x_7\right\}. \end{align*} \vskip 5mm Any vector $x\in X_n$ has $d^n$ components, but the components that correspond to the same multi-index in the redundant enumeration $\nu_n(1),\,\ldots,\,\nu_n(d^n)$ have the same value. Hence, at most $L_n$ components of $x\in X_n$ are different. They may be labelled by the multi-indices $\ell_n(1),\,\ldots,\,\ell_n(L_n)$, and we often refer to them by \[ x_{\ell_n(i)},\qquad i=1,\,\ldots,\,L_n. \] \vskip 5mm The symmetric subspace $X_2$ has also been considered in \cite{VLV15}, and we next relate the alternative second order constructions to ours. \subsection{The second order space} The second order subspace \[ X_2\ =\ \left\{x\in\C^{d^2}: x_j = x_{j'}\;\;\text{if}\;\; \nu_2(j)=\nu_2(j')\right\} \] can also be described in terms of matrices. Since \[ \nu_2 = {\rm vec}\begin{pmatrix}e_1^*+e_1^* & \ldots & e_d^*+e_1^*\\ \vdots & & \vdots\\ e_1^*+e_d^* & \ldots & e_d^*+e_d^*\end{pmatrix}, \] we may write \[ X_2\ =\ \left\{ x\in\C^{d^2}: x={\rm vec}(X),\;\; X = X^t\in\C^{d\times d}\right\}. \] Alternatively, as in \cite[\S2.3]{VLV15}, one may permute the standard basis vectors $e_1,\,\ldots,\,e_{d^2}\in\C^{d^2}$ according to the $d^2\times d^2$ permutation matrix \[ \Pi_{dd}\ =\ \left( e_{1+0\cdot d},\,e_{1+1\cdot d},\,\ldots,\,e_{1+(d-1)\cdot d},\, \ldots,\,e_{d+0\cdot d},\,e_{d+1\cdot d},\,\ldots,\,e_{d^2} \right) \] and characterize the symmetric subspace as \[ X_2\ =\ \left\{x\in\C^{d^2}: \Pi_{dd}\,x =x\right\}. \] This way of writing the second order subspace corresponds to the definition of symmetric tensor spaces by permutations as given in \cite[Chapter 3.5]{Hack12}. \subsection{Relation between the subspaces} Due to the recursive definition of the redundant multi-index enumeration, the symmetric subspaces of neighboring order can be related to each other as follows. \begin{lemma}\label{lem:dec} The symmetric subspace $X_{n+1}$ is contained in the $d$-ary Cartesian product of the symmetric subspace $X_n$, \[ X_{n+1} \subseteq X_n \times \cdots \times X_n,\qquad n\in\N. \] \end{lemma} \begin{proof} We decompose a vector \[ x\ =\ (x^{(1)},\,\ldots,\,x^{(d)})^t\in X_{n+1} \] into $d$ subvectors with $d^n$ components each. The $d^{n+1}$ components of $x$ can be labelled by the multi-indices \[ \nu_n(1)+e_1^*,\,\ldots,\,\nu_n(d^n)+e_1^*,\,\ldots,\, \nu_n(1)+e_d^*,\,\ldots,\,\nu_n(d^n)+e_d^*, \] so that the components of the subvector $x^{(m)}$, $m=1,\ldots,d$, can be labelled by \[ \nu_n(1)+e_m^*,\,\ldots,\,\nu_n(d^n)+e_m^*. 
\]
Hence, $x^{(m)}_j = x^{(m)}_{j'}$ if $\nu_n(j) = \nu_n(j')$, and $x^{(m)}\in X_n$ for all $m=1,\,\ldots,\,d$.
\end{proof}

The two-dimensional example $X_1=\C^2$ and $X_2=\{x\in\C^4: x_2=x_3\}$ shows that the inclusion of Lemma~\ref{lem:dec} is in general not an equality.

\subsection{An orthonormal basis}
We now use the standard basis of $\C^{d^n}$ to construct an orthonormal basis of $X_n$.

\begin{lemma}
Let $e_1,\ldots,e_{d^n}$ be the standard basis vectors of $\C^{d^n}$, and define the vectors
\[
p_i\ =\ \frac{1}{\sqrt{\# \sigma_n(i)}}\ \sum_{j\in\sigma_n(i)}e_j,\qquad i=1,\,\ldots,\,L_n.
\]
Then, $\{p_1,\,\ldots,\,p_{L_n}\}$ forms an orthonormal basis of the space $X_n$.
\end{lemma}

\begin{proof}
For all $i=1,\,\ldots,\,L_n$ and $j,\,j'=1,\,\ldots,\,d^n$, we have
\[
(p_i)_j\ =\ \left\{\begin{array}{ll} (\#\sigma_n(i))^{-1/2}, &\,\text{if}\ j\in\sigma_n(i),\\ 0, &\,\text{otherwise.}\end{array}\right.
\]
Since $j\in\sigma_n(i)$ if and only if $\nu_n(j)=\ell_n(i)$, we have
\[
(p_i)_j\ =\ (p_i)_{j'} \quad\text{if}\quad \nu_n(j)=\nu_n(j'),
\]
and thus $p_i\in X_n$. We also observe that for all $i,\,i'=1,\,\ldots,\,L_n$,
\[
\langle p_i,\,p_{i'}\rangle\ =\ \frac{1}{\sqrt{\#\sigma_n(i)\cdot\#\sigma_n(i')}}\, \sum_{j\in\sigma_n(i)}\,\sum_{j'\in\sigma_n(i')}\, \langle e_j,\,e_{j'}\rangle\ =\ \delta_{i,i'}.
\]
Hence, the vectors $p_1,\,\ldots,\,p_{L_n}$ are orthonormal. Moreover, for all $x\in X_n$, we have
\[
\langle p_i,\,x\rangle\ =\ \frac{1}{\sqrt{\#\sigma_n(i)}}\, \sum_{j\in\sigma_n(i)}\,\langle e_j,\,x\rangle\ =\ \sqrt{\#\sigma_n(i)}\,x_{\ell_n(i)},
\]
and therefore
\begin{align*}
x\ &=\ \sum_{j=1}^{d^n}\,\langle e_j,\,x\rangle\,e_j\ =\ \sum_{i=1}^{L_n}\,\sum_{j\in\sigma_n(i)}\,\langle e_j,\,x\rangle\,e_j\ =\ \sum_{i=1}^{L_n}\,x_{\ell_n(i)}\,\sqrt{\#\sigma_n(i)}\,p_i\\
&=\ \sum_{i=1}^{L_n}\,\langle p_i,\,x\rangle\,p_i.
\end{align*}
\end{proof}

\subsection{An orthonormal matrix}
The orthonormal basis vectors $p_1,\ldots,p_{L_n}\in X_n$ allow us to define the sparse rectangular $L_n\times d^n$ matrix
\[
P_n\ =\ \begin{pmatrix}p_1^*\\ \vdots\\ p_{L_n}^*\end{pmatrix}
\]
that has the $L_n$ basis vectors as its rows. For example, for $d=2$, we have
\begin{align*}
P_2\ &=\ \begin{pmatrix}e_1^*\\ \frac{1}{\sqrt{2}}(e_2^*+e_3^*)\\ e_4^*\end{pmatrix}\ =\ \begin{pmatrix}1&0&0&0\\ 0&\frac{1}{\sqrt2}&\frac{1}{\sqrt2}&0\\ 0&0&0&1\end{pmatrix},\\*[2ex]
P_3\ &=\ \begin{pmatrix}e_1^*\\ \frac{1}{\sqrt{3}}(e_2^*+e_3^*+e_5^*)\\ \frac{1}{\sqrt3}(e_4^*+e_6^*+e_7^*)\\ e_8^*\end{pmatrix} \ =\ \begin{pmatrix}1&0&0&0&0&0&0&0\\ 0&\frac{1}{\sqrt3}&\frac{1}{\sqrt3}&0&\frac{1}{\sqrt3}&0&0&0\\ 0&0&0&\frac{1}{\sqrt3}&0&\frac{1}{\sqrt3}&\frac{1}{\sqrt3}&0\\ 0&0&0&0&0&0&0&1\end{pmatrix}.
\end{align*}
We summarize some properties of the matrix $P_n$ and of its adjoint $P_n^*$ and calculate explicit formulas for their actions on vectors.

\begin{proposition}\label{prop:pn}
The $L_n\times d^n$ matrix $P_n$ and its adjoint $P_n^*$ satisfy
\[
P_n\,P_n^*\ =\ \Id_{L_n\times L_n},\qquad {\rm range}(P_n^*) \ =\ X_n.
\]
Moreover, for all $x\in X_n$,
\[
(P_n\,x)_i \ =\ \sqrt{\binom{n}{\ell_n(i)}}\ x_{\ell_n(i)},\qquad i=1,\,\ldots,\,L_n,
\]
and for all $y\in\C^{L_n}$,
\[
(P_n^*\,y)_{\ell_n(i)}\ =\ 1\left/ \sqrt{\binom{n}{\ell_n(i)}}\right.\ y_i, \qquad i=1,\,\ldots,\,L_n.
\]
In particular,
\[
P_n^*\,P_n\,x \ =\ x,\quad\text{whenever}\;\; x\in X_n.
\]
\end{proposition}

\begin{proof}
The two properties $P_n\,P_n^*\ =\ \Id_{L_n\times L_n}$ and ${\rm range}(P_n^*) \ =\ X_n$ equivalently say that the row vectors of $P_n$ form an orthonormal basis of $X_n$.

\medskip

For any $y\in\C^{L_n}$, the vector $P_n^*\,y$ is a linear combination of the vectors $p_1,\,\ldots,\,p_{L_n}$ and therefore in $X_n$. Labelling its components by $\ell_n(1),\,\ldots,\,\ell_n(L_n)$, we obtain
\[
(P_n^*\,y)_{\ell_n(i)}\ =\ \sum_{i'=1}^{L_n}\,(p_{i'})_{\ell_n(i)}\,y_{i'} \ =\ \frac{1}{\sqrt{\#\sigma_n(i)}}\ y_i,\qquad i=1,\,\ldots,\,L_n.
\]
For $x\in X_n$ and $i=1,\,\ldots,\,L_n$, we obtain
\begin{align*}
(P_n\,x)_i\ &=\ \sum_{j=1}^{d^n} (p_i)_{\nu_n(j)}\, x_{\nu_n(j)} = \frac{\#\sigma_n(i)}{\sqrt{\#\sigma_n(i)}}\ x_{\ell_n(i)}\\
&=\ \sqrt{\#\sigma_n(i)}\ x_{\ell_n(i)},
\end{align*}
since $\#\sigma_n(i)$ components of $p_i$ do not vanish. In particular,
\[
(P_n^*\,P_n\,x)_{\ell_n(i)}\ =\ \frac{1}{\sqrt{\#\sigma_n(i)}}\ (P_n x)_i\ =\ x_{\ell_n(i)}.
\]
\end{proof}

\section{Symmetric Kronecker products}\label{sec:kron}

\subsection{Iterated Kronecker products}
We next investigate the action of an $n$-fold Kronecker product on the symmetric subspace $X_n$, $n\in\N$.

\begin{proposition}\label{prop:kron}
Let $M\in\C^{d\times d}$, and denote by $m_1,\,\ldots,\,m_d\in\C^d$ the row vectors of the matrix $M$. Then,
\[
M^{n\otimes}x \in X_n,\qquad\mbox{whenever }x\in X_n.
\]
Moreover, for all $k\in\N^d$ with $|k|=n$, and all $x\in X_n$,
\[
\left(M^{n\otimes}x\right)_k\ =\ \sum_{|\alpha_1|=k_1}\,\cdots\, \sum_{|\alpha_d|=k_d}\,\binom{k_1}{\alpha_1}\,\cdots\, \binom{k_d}{\alpha_d}\,m_1^{\alpha_1}\,\cdots\,m_d^{\alpha_d}\ x_{\alpha_1+\cdots+\alpha_d},
\]
where $\alpha_1,\,\ldots,\,\alpha_d\in\N^d$.
\end{proposition}

\begin{proof}
For $n=1$, we have $M^{n\otimes} = M$ and $X_n=\C^d$, and our formula reduces to the usual matrix-vector multiplication written as
\[
(Mx)_{e_k}\ =\ \sum_{j=1}^d\,m_k^{e_j}\,x_{e_j},\qquad k=1,\,\ldots,\,d.
\]
For the inductive step, we consider $x=(x^{(1)},\,\ldots,\,x^{(d)})\in X_{n+1}$ decomposed as in Lemma~\ref{lem:dec} with $x^{(j)}\in X_n$. We compute
\begin{align*}
M^{(n+1)\otimes}x\ &=\ \begin{pmatrix} m_{11}\,M^{n\otimes}&\ldots &m_{1d}\,M^{n\otimes}\\ \vdots &&\vdots\\ m_{d1}\,M^{n\otimes}&\ldots&m_{dd}\,M^{n\otimes} \end{pmatrix}\begin{pmatrix}x^{(1)}\\ \vdots \\x^{(d)} \end{pmatrix} \\*[2ex]
&=\ \begin{pmatrix} m_{11}\,M^{n\otimes} x^{(1)} +\cdots +m_{1d}\,M^{n\otimes}x^{(d)}\\ \vdots \\ m_{d1}\,M^{n\otimes} x^{(1)}+\cdots +m_{dd}\,M^{n\otimes} x^{(d)} \end{pmatrix}.
\end{align*}
By the inductive hypothesis, we have for all $j=1,\,\ldots,\,d$, that
\[
m_{j1}\,M^{n\otimes} x^{(1)}+\cdots+m_{jd}\,M^{n\otimes}x^{(d)} \in X_n.
\]
The components of these vectors can be labelled by $k\in\N^d$ with $|k|=n$, and we have
\begin{align*}
&\left(m_{j1}M^{n\otimes}x^{(1)}+\cdots+ m_{jd}\,M^{n\otimes}x^{(d)}\right)_k\\
&=\ \sum_{|\alpha_1|=k_1}\,\cdots\,\sum_{|\alpha_d|=k_d}\, \binom{k_1}{\alpha_1}\,\cdots\,\binom{k_d}{\alpha_d}\, \left(m_{j1} m_1^{\alpha_1}\cdots m_d^{\alpha_d}\, x^{(1)}_{\alpha_1+\cdots+\alpha_d}+\,\cdots\right.\\
&\hspace*{21em} \left.+\ m_{jd}\,m_1^{\alpha_1}\cdots m_d^{\alpha_d} \,x^{(d)}_{\alpha_1+\cdots+\alpha_d}\right).
\end{align*}
We now observe that
\begin{align*}
& \sum_{|\alpha_j|=k_j}\,\binom{k_j}{\alpha_j}\, \left(m_j^{\alpha_j+e_1}\,x^{(1)}_{\alpha_1+\cdots+\alpha_d}+\, \cdots\,+m_j^{\alpha_j+e_d}\,x^{(d)}_{\alpha_1+\cdots+\alpha_d}\right) \\
&=\ \sum_{|\alpha_j-e_1|=k_j}\,\binom{k_j}{\alpha_j-e_1}\, m_j^{\alpha_j}\, x^{(1)}_{\alpha_1+\cdots+(\alpha_j-e_1)+\cdots+\alpha_d}+\,\cdots\,+\\
&\hspace{12em} \sum_{|\alpha_j-e_d|=k_j}\,\binom{k_j}{\alpha_j-e_d}\, m_j^{\alpha_j}\, x^{(d)}_{\alpha_1+\cdots+(\alpha_j-e_d)+\cdots+\alpha_d}\\
&=\ \sum_{|\alpha_j|=k_j+1}\,\binom{k_j+1}{\alpha_j}\, m_j^{\alpha_j}\,x_{\alpha_1+\cdots+\alpha_d}.
\end{align*}
Therefore, for all $j=1,\,\ldots,\,d$,
\begin{align*}
&\left(m_{j1} M^{n\otimes}x^{(1)}+\,\cdots\,+ m_{jd}\, M^{n\otimes}x^{(d)}\right)_k\ =\\
&\sum_{|\alpha_1|=k_1}\cdots\sum_{|\alpha_j|=k_j+1}\cdots \sum_{|\alpha_d|=k_d}\,\binom{k_1}{\alpha_1}\cdots \binom{k_j+1}{\alpha_j}\,\cdots\,\binom{k_d}{\alpha_d}\, m_1^{\alpha_1}\cdots m_d^{\alpha_d}\ x_{\alpha_1+\cdots+\alpha_d}.
\end{align*}
Altogether, we have proven that for all $k\in\N^d$ with $|k|=n+1$ and all $x\in X_{n+1}$,
\[
\left(M^{(n+1)\otimes}x\right)_k\ =\ \sum_{|\alpha_1|=k_1}\,\cdots\,\sum_{|\alpha_d|=k_d}\, \binom{k_1}{\alpha_1}\,\cdots\,\binom{k_d}{\alpha_d}\, m_1^{\alpha_1}\,\cdots\,m_d^{\alpha_d}\ x_{\alpha_1+\cdots+\alpha_d}.
\]
\end{proof}

\subsection{Symmetric Kronecker products}
Having proven that $n$-fold Kronecker products leave the $n$th symmetric subspace invariant, we define the $n$-fold symmetric Kronecker product as follows:

\begin{definition}
For $M\in\C^{d\times d}$ and $n\in\N$, we define the $L_n\times L_n$ matrix
\[
S_n(M) \ =\ P_n \,M^{n\otimes}\, P_n^*
\]
and call it the {\em $n$-fold symmetric Kronecker product} of the matrix $M$.
\end{definition}

The $n$-fold symmetric Kronecker product has useful structural properties.

\begin{lemma}\label{lem:str}
The $n$-fold symmetric Kronecker product $S_n(M)$ of a matrix $M\in\C^{d\times d}$ satisfies $S_n(M)^* = S_n(M^*)$. If $M\in{\rm GL}(d,\C)$, then
\[
S_n(M)\in{\rm GL}(L_n,\C)\quad\text{with}\quad S_n(M)^{-1}\ =\ S_n(M^{-1}).
\]
In particular, if $M\in U(d)$, then $S_n(M)\in U(L_n)$.
\end{lemma}

\begin{proof}
Since $(M\otimes M)^*=M^*\otimes M^*$ and $(M^{n\otimes})^*=(M^*)^{n\otimes}$, we have
\[
S_n(M)^*\ =\ P_n\,(M^{n\otimes})^*\,P_n^*\ =\ P_n\,(M^*)^{n\otimes}\,P_n^*\ =\ S_n(M^*).
\]
If $M$ is invertible, then $M\otimes M$ is invertible with $(M\otimes M)^{-1} = M^{-1}\otimes M^{-1}$. By Proposition~\ref{prop:kron},
\[
(M^{n\otimes})^{-1}p_j\in X_n,\qquad j=1,\,\ldots,\,L_n.
\]
Proposition~\ref{prop:pn} yields $P_n^*\,P_n\,x=x$ for all $x\in X_n$ and
\[
P_n\,p_j \ =\ e_j,\qquad j=1,\,\ldots,\,L_n,
\]
where $e_1,\,\ldots,\,e_{L_n}\in\C^{L_n}$ are the standard basis vectors. Therefore,
\[
S_n(M)\,S_n(M^{-1})\,e_j \ =\ P_n\,M^{n\otimes}\,P_n^*\,P_n\,(M^{n\otimes})^{-1}p_j\ =\ P_n\,p_j\ =\ e_j,
\]
that is,
\[
S_n(M)\,S_n(M^{-1})\ = \ {\rm Id}_{L_n\times L_n}.
\]
\end{proof}

\subsection{The main result}
The explicit formula of Proposition~\ref{prop:kron} for the $n$-fold Kronecker product also allows a detailed description of the $n$-fold symmetric Kronecker product.

\begin{theorem}\label{theo:main}
Let $M\in\C^{d\times d}$, and denote by $m_1,\,\ldots,\,m_d\in\C^d$ the row vectors of the matrix $M$.
Then, the $n$-fold symmetric Kronecker product satisfies
\begin{align*}
&\left(S_n(M)\,y\right)_k\\
&=\,\frac{1}{\sqrt{k!}}\sum_{|\alpha_1|=k_1}\cdots \sum_{|\alpha_d|=k_d}\,\binom{k_1}{\alpha_1}\cdots \binom{k_d}{\alpha_d}\,m_1^{\alpha_1}\cdots m_d^{\alpha_d}\, \sqrt{(\alpha_1+\cdots+\alpha_d)!}\ y_{\alpha_1+\cdots+\alpha_d}
\end{align*}
for all $y\in\C^{L_n}$ and $k\in\N^d$ with $|k|=n$.
\end{theorem}

\begin{proof}
By Proposition~\ref{prop:pn}, we have $P_n^*\,y\in X_n$ and
\[
(P_n^*\,y)_{\ell_n(i)}\ =\ 1\left/\sqrt{\binom{n}{\ell_n(i)}}\ \right.y_i, \qquad\mbox{for }i=1,\,\ldots,\,L_n.
\]
By Proposition~\ref{prop:kron}, the $n$-fold Kronecker product leaves $X_n$ invariant, and we have for all $k\in\N^d$ with $|k|=n$,
\begin{align*}
&\left(M^{n\otimes}\,P_n^*\,y\right)_k\ =\ \sum_{|\alpha_1|=k_1}\,\cdots\,\sum_{|\alpha_d|=k_d} \binom{k_1}{\alpha_1}\,\cdots\,\binom{k_d}{\alpha_d}\, m_1^{\alpha_1}\,\cdots\,m_d^{\alpha_d}\, (P_n^*\,y)_{\alpha_1+\cdots+\alpha_d}\\
&=\ \sum_{|\alpha_1|=k_1}\,\cdots\,\sum_{|\alpha_d|=k_d}\, \binom{k_1}{\alpha_1}\,\cdots\,\binom{k_d}{\alpha_d}\, m_1^{\alpha_1}\,\cdots\,m_d^{\alpha_d}\, \sqrt{\frac{(\alpha_1+\cdots+\alpha_d)!}{n!}}\ y_{\alpha_1+\cdots+\alpha_d}.
\end{align*}
By Proposition~\ref{prop:pn}, we have
\[
(P_n\,x)_i\ =\ \sqrt{\binom{n}{\ell_n(i)}}\ x_{\ell_n(i)}, \qquad\mbox{for } i=1,\,\ldots,\,L_n,
\]
so that
\begin{align*}
&\left(P_n\,M^{n\otimes}\,P_n^*\,y\right)_k\\
&=\ \sqrt{\frac{n!}{k!}}\sum_{|\alpha_1|=k_1}\cdots \sum_{|\alpha_d|=k_d} \binom{k_1}{\alpha_1}\cdots\binom{k_d}{\alpha_d} m_1^{\alpha_1}\cdots m_d^{\alpha_d} \sqrt{\frac{(\alpha_1+\cdots+\alpha_d)!}{n!}}\, y_{\alpha_1+\cdots+\alpha_d}\\
&=\ \frac{1}{\sqrt{k!}}\sum_{|\alpha_1|=k_1}\cdots \sum_{|\alpha_d|=k_d}\,\binom{k_1}{\alpha_1}\cdots \binom{k_d}{\alpha_d} m_1^{\alpha_1}\cdots m_d^{\alpha_d} \sqrt{(\alpha_1+\cdots+\alpha_d)!}\ y_{\alpha_1+\cdots+\alpha_d}.
\end{align*}
\end{proof}

\section{Application to semiclassical wave packets}\label{sec:sem}

\subsection{Parametrizing Gaussians}
We consider two complex invertible matrices $A,\,B\in{\rm GL}(d,\,\C)$ that satisfy the conditions
\begin{align}\label{eq:mat1}
A^tB - B^t A\ &=\ 0, \\
\label{eq:mat2}
A^*B + B^* A\ &=\ 2\,{\rm Id}_{d\times d}.
\end{align}
These two conditions imply that $B\,A^{-1}$ is a complex symmetric matrix whose real part
\[
{\rm Re}(B\,A^{-1})\ =\ (A\,A^*)^{-1}
\]
is a Hermitian and positive definite matrix; see \cite{Hag80}. Let $\hbar>0$ and define the multivariate complex-valued Gaussian function
\begin{align*}
&\varphi_0[A,\,B]: \R^d\to\C,\\
&\varphi_0[A,\,B](x)\ =\ (\pi\,\hbar)^{-d/4}\,\det(A)^{-1/2}\, \exp\left(\,-\,\frac{\langle x,\,B\,A^{-1}\,x\rangle}{2\,\hbar}\right).
\end{align*}
Then, $\varphi_0[A,\,B]$ is a square-integrable function, and the constant factor $\det(A)^{-1/2}$ ensures normalization according to
\[
\int_{\R^d}\,\left|\varphi_0[A,\,B](x)\right|^2 \,dx\ =\ 1.
\]
Changing the parametrization by a unitary matrix changes the Gaussian function only by a constant multiplicative factor of modulus one:

\begin{lemma}\label{lem:gauss}
Let $A,\,B\in{\rm GL}(d,\,\C)$ satisfy the conditions (\ref{eq:mat1}--\ref{eq:mat2}). Then, for all unitary matrices $U\in U(d)$, the matrices $A'=A\,U$ and $B'=B\,U$ also satisfy the conditions (\ref{eq:mat1}--\ref{eq:mat2}). Moreover,
\[
\varphi_0[A',\,B'] = \det(U)^{-1/2} \,\varphi_0[A,\,B].
\]
\end{lemma}

\begin{proof}
We observe that
\begin{align*}
(A')^tB' -(B')^t A'\ &=\ U^t (A^t B - B^t A)\,U\ =\ 0,\\
(A')^*B' + (B')^*A'\ &=\ U^*(A^* B + B^* A)\,U\ =\ 2\,{\rm Id}_{d\times d},
\end{align*}
and $B'\,(A')^{-1} = B\,U\,U^*A^{-1} = BA^{-1}$. Therefore,
\begin{align*}
\varphi_0[A',\,B']\ &=\ (\pi\,\hbar)^{-d/4}\,\det(A\,U)^{-1/2}\, \exp\left(\,-\,\frac{\langle x,\,B\,A^{-1}\,x\rangle}{2\,\hbar}\right)\\
&=\ \det(U)^{-1/2}\,\varphi_0[A,\,B].
\end{align*}
\end{proof}

\subsection{Semiclassical wave packets}
Following the construction of \cite{Hag98}, we consider $A,\,B\in{\rm GL}(d,\,\C)$ satisfying the conditions (\ref{eq:mat1}--\ref{eq:mat2}) and introduce the vector of raising operators
\[
\Rr[A,\,B]\ =\ \frac{1}{\sqrt{2\hbar}}\left(B^*\,x - iA^*(-i\hbar\nabla_x)\right)
\]
that consists of $d$ components,
\[
\Rr[A,\,B]\ = \ \begin{pmatrix}\Rr_1[A,\,B]\\ \vdots \\ \Rr_d[A,\,B]\end{pmatrix}.
\]
The raising operator acts on Schwartz functions $\psi:\R^d\to\C$ as
\[
\left(\Rr[A,\,B]\psi\right)(x)\ =\ \frac{1}{\sqrt{2\hbar}} \left(B^*\,x\,\psi(x) - iA^*(-i\hbar\nabla_x\psi)(x)\right),\qquad x\in\R^d.
\]
Powers of the raising operator generate the semiclassical wave packets according to
\[
\varphi_k[A,\,B]\ =\ \frac{1}{\sqrt{k!}}\Rr[A,B]^k\varphi_0[A,\,B],\qquad k\in\N^d,
\]
where the $k$th power of the raising operator
\[
\Rr[A,\,B]^k = \Rr_1[A,\,B]^{k_1}\,\cdots\,\Rr_d[A,B]^{k_d}
\]
does not depend on the ordering, since the components commute with one another due to the compatibility conditions (\ref{eq:mat1}--\ref{eq:mat2}). The set
\[
\left\{\varphi_k[A,\,B]: k\in\N^d\right\}
\]
forms an orthonormal basis of the space of square-integrable functions $L^2(\R^d,\,\C)$.

\subsection{Hermite polynomials}
By its construction, the $k$th semiclassical wave packet is a multivariate polynomial of degree $|k|$ times the Gaussian function, that is,
\[
\varphi_k[A,\,B]\ =\ \frac{1}{\sqrt{2^{|k|}\,k!}}\,p_k[A]\,\varphi_0[A,\,B].
\]
The polynomials $p_k[A]$ are determined by the matrix $A\in{\rm GL}(d,\,\C)$ and satisfy the three-term recurrence relation
\[
\left(p_{k+e_j}[A]\right)_{j=1}^d\ =\ \frac{2}{\sqrt{\hbar}} A^{-1}x\,p_k[A] - 2A^{-1}\overline{A} \left(k_j\,p_{k-e_j}[A]\right)_{j=1}^d.
\]
Whenever the parameter matrix $A$ has all entries real, $A\in{\rm GL}(d,\,\R)$, the polynomials factorize according to
\[
p_k[A](x)\ =\ \prod_{j=1}^d\, H_{k_j}\!\left(\tfrac{1}{\sqrt{\hbar}}(A^{-1}x)_j\right),\qquad x\in\R^d,
\]
where $H_n$ is the $n$th Hermite polynomial, $n\in\N$, defined by the univariate three-term recurrence relation
\[
H_{n+1}(y)\ =\ 2\,y\,H_n(y)\,-\,2\,n\,H_{n-1}(y),\qquad y\in\R.
\]
The real parameter case, however, is rather exceptional when using semiclassical wave packets for their key application in molecular quantum dynamics. There, the parameter matrices $A(t)$ and $B(t)$, $t\in\R$, are time-dependent and determined by a system of ordinary differential equations. For the particularly simple, but instructive example of
%1--dimensional
harmonic oscillator motion, one can even write the solution explicitly as
\begin{align*}
A(t)\ &=\ \cos(t)\,A(0) + i\,\sin(t)\,B(0),\\
B(t)\ &=\ i\,\sin(t)\,A(0) + \cos(t)\,B(0).
\end{align*}
Hence, the matrix $A(t)$ cannot be expected to have only real entries, and the crucial matrix factor $A(t)^{-1}\overline{A(t)}$ in the three-term recurrence relation generates multivariate polynomials beyond a tensor product representation.
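As a brief consistency check (a direct computation that is not needed in the sequel), this explicit solution preserves the compatibility conditions (\ref{eq:mat1}--\ref{eq:mat2}) for all times: expanding the products, the mixed terms involving $A(0)^tA(0)$ and $B(0)^tB(0)$, respectively $A(0)^*A(0)$ and $B(0)^*B(0)$, cancel, and one is left with
\begin{align*}
A(t)^t B(t) - B(t)^t A(t)\ &=\ \left(\cos^2(t)+\sin^2(t)\right)\left(A(0)^t B(0) - B(0)^t A(0)\right)\ =\ 0,\\
A(t)^* B(t) + B(t)^* A(t)\ &=\ \left(\cos^2(t)+\sin^2(t)\right)\left(A(0)^* B(0) + B(0)^* A(0)\right)\ =\ 2\,{\rm Id}_{d\times d}.
\end{align*}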
\subsection{Changing the parametrization}\label{Section5.3}
If $A,\,B\in{\rm GL}(d,\,\C)$ satisfy the compatibility conditions (\ref{eq:mat1}--\ref{eq:mat2}), then $|A| = \sqrt{AA^*}$ is a real symmetric, positive definite matrix, and the singular value decomposition of $A$,
\[
A\ =\ V\,\Sigma W^*\quad\text{with}\quad \Sigma={\rm diag}(\sigma_1,\ldots,\sigma_d)\;\;\text{positive definite},
\]
is given by an orthogonal matrix $V\in O(d)$ and a unitary matrix $W\in U(d)$. This provides two natural ways of transforming to a real matrix $A' = A\,U$ with $A'\in{\rm GL}(d,\,\R)$ and $U\in U(d)$. One may work with the polar decomposition of $A$,
\[
A'\ =\ |A|\ =\ V\,\Sigma\,V^*\quad\text{and}\quad U=W\,V^*,
\]
or alternatively with
\[
A'\ =\ V\,\Sigma\quad\text{and}\quad U=W.
\]
Both choices provide a unitary transformation to the real case, and we ask how to relate different families of wave packets that correspond to unitarily linked parametrizations. For an explicit description, we collect the semiclassical wave packets of order $n$ in one formal vector
\[
\vec\varphi_n[A,\,B]\ =\ \begin{pmatrix}\varphi_{\ell_n(1)}[A,\,B]\\ \vdots\\ \varphi_{\ell_n(L_n)}[A,\,B]\end{pmatrix},
\]
whose components are labelled by the multi-indices $\ell_n(1),\,\ldots,\,\ell_n(L_n)$. Then, we use the $n$-fold symmetric Kronecker product in the following way:

\begin{corollary}\label{MainResult}
Let $A,\,B\in{\rm GL}(d,\,\C)$ satisfy the conditions (\ref{eq:mat1}--\ref{eq:mat2}), and consider the matrices $A'=A\,U$, $B'=B\,U$ with $U\in U(d)$. Then,
\[
\vec\varphi_n[A',\,B']\ =\ \det(U)^{-1/2}\,S_n(U)\, \vec\varphi_n[A,\,B],\qquad n\in\N.
\]
\end{corollary}

\begin{proof}
We observe that the raising operators transform according to
\[
\Rr[A',\,B']\ =\ \Rr[A\,U,\,B\,U] \ =\ U^*\,\Rr[A,\,B],
\]
which means for the components that
\[
\Rr_j[A',\,B']\ =\ \sum_{m=1}^d\, \overline{u_{mj}}\ \Rr_m[A,\,B],\qquad j=1,\,\ldots,\,d.
\]
Since all components of the raising operators commute with each other, we can use the multinomial theorem and obtain for all $n\in\N$ that
\[
\Rr_j[A',\,B']^n\ =\ \sum_{|\alpha|=n}\, \binom{n}{\alpha}\,\overline{u_j^\alpha}\ \Rr[A,\,B]^\alpha,
\]
where $u_1,\,\ldots,\,u_d\in\C^d$ denote the column vectors of the matrix $U$. This implies for any $k\in\N^d$ with $|k|=n$,
\[
\Rr[A',\,B']^k\ =\ \sum_{|\alpha_1| = k_1}\,\cdots\, \sum_{|\alpha_d|=k_d}\, \binom{k_1}{\alpha_1}\,\cdots\,\binom{k_d}{\alpha_d}\, \overline{u_1^{\alpha_1}}\,\cdots\,\overline{u_d^{\alpha_d}}\ \Rr[A,\,B]^{\alpha_1+\cdots+\alpha_d},
\]
where the $d$ summations run over $\alpha_1,\,\ldots,\,\alpha_d\in\N^d$. Together with Lemma~\ref{lem:gauss}, we therefore obtain
\begin{align*}
&\varphi_k[A',\,B']\ =\ \frac{1}{\sqrt{k!}}\,\Rr[A',\,B']^k \,\varphi_0[A',\,B'] \\
&=\ \frac{1}{\sqrt{\det(U)\,k!}}\,\sum_{|\alpha_1| = k_1}\,\cdots\, \sum_{|\alpha_d|=k_d}\, \binom{k_1}{\alpha_1}\,\cdots\,\binom{k_d}{\alpha_d}\, \overline{u_1^{\alpha_1}}\,\cdots\,\overline{u_d^{\alpha_d}}\\
&\hspace*{22em}\times\quad\Rr[A,\,B]^{\alpha_1+\cdots+\alpha_d}\, \varphi_0[A,B]\\[3mm]
&=\ \frac{1}{\sqrt{\det(U)\,k!}}\,\sum_{|\alpha_1|=k_1}\,\cdots\, \sum_{|\alpha_d|=k_d}\,\binom{k_1}{\alpha_1}\,\cdots\, \binom{k_d}{\alpha_d} \;\overline{u_1^{\alpha_1}}\,\cdots\,\overline{u_d^{\alpha_d}}\\
&\hspace*{19em}\times\,\sqrt{(\alpha_1+\cdots+\alpha_d)!}\ \varphi_{\alpha_1+\cdots+\alpha_d}[A,\,B].
\end{align*}
By Theorem~\ref{theo:main}, we then obtain
\[
\vec\varphi_n[A',\,B']\ =\ \det(U)^{-1/2}\,S_n(U)\,\vec\varphi_n[A,\,B].
\]
\end{proof}

\subsection*{Acknowledgements}
This research was partially supported by the U.S.~National Science Foundation Grant DMS--1210982 and the German Research Foundation (DFG), Collaborative Research Center SFB/TRR 109.

%\appendix
%\section{The matrix $P_2$}
%
%For $n=2$, we have $L_2 = \frac12 d(d+1)$, and the $L_2\times d^2$
%matrix $P_2$ can also be introduced via symmetric vectorisation, see
%\cite[Appendix~E]{K02}. For a symmetric matrix $S\in\R^{d\times d}$,
%one considers the columnwise vectorisation
%\[
%{\rm vec}_c(S) = \left(s_{11},s_{21},\ldots,s_{d1},s_{12},s_{22},\ldots,s_{d2},\ldots,s_{1d},s_{2d},\ldots,s_{dd}\right)^t\in\R^{d^2}
%\]
%and the symmetric vectorisation
%\[
%{\rm svec}(S) = \left(s_{11},\sqrt2 s_{21},\ldots,\sqrt2 s_{d1},s_{22},\sqrt2 s_{32},\ldots,\sqrt2 s_{d2},\ldots,s_{dd}\right)^t\in\R^{L_2}.
%\]
%Then,
%\[
%{\rm svec}(S) = P_2 {\rm vec}_c(S).
%\]

\begin{thebibliography}{99}

\bibitem[AHO98]{AHO98} F. Alizadeh, J. Haeberly, M. Overton, Primal-dual interior-point methods for semidefinite programming: convergence rates, stability and numerical results, {\em SIAM J. Optim.}~{\bf 8}, no.~3, 746--768 (1998)

%\bibitem[Ber71]{Ber71}
%C. Berge, {\em Principles of Combinatorics}, Academic Press, 1971.

\bibitem[BGH]{BGH} R. Bourquin, V. Gradinaru, and G.A. Hagedorn, In Preparation.

\bibitem[FGL09]{FGL09} E. Faou, V. Gradinaru, and C. Lubich, Computing semiclassical quantum dynamics with Hagedorn wavepackets, {\em SIAM J. Sci. Comp.}~{\bf 31}, 3027--3041 (2009)

\bibitem[Hack12]{Hack12} W. Hackbusch, {\em Tensor Spaces and Numerical Tensor Calculus}, Springer, 2012.

\bibitem[Hag80]{Hag80} G.A. Hagedorn, Semiclassical quantum mechanics I: The $\hbar\to0$ limit for coherent states, {\em Commun. Math. Phys.}~{\bf 71}, 77--93 (1980)

\bibitem[Hag85]{Hag85} G.A. Hagedorn, Semiclassical quantum mechanics IV: The large order asymptotics and more general states in more than one dimension, {\em Ann. Inst. Henri Poincar\'e} Sect. A {\bf 42}, 363--374 (1985)

\bibitem[Hag98]{Hag98} G.A. Hagedorn, Raising and lowering operators for semi-classical wave packets, {\em Ann. Physics}~{\bf 269}, 77--104 (1998)

%\bibitem[LT14]{LT14}
%C. Lasser, S. Troppmann, Hagedorn wavepackets in time-frequency and
%phase space,
%{\em J. Fourier An. Appl.}~{\bf 20}, 679--714 (2014)

\bibitem[JHT]{JHT} J.H. Toloza, Private Communication.

\bibitem[Kle02]{Kle02} E. de Klerk, {\em Aspects of Semidefinite Programming}, Kluwer, 2002.

\bibitem[Lub08]{L} C. Lubich, {\em From Quantum to Classical Molecular Dynamics: Reduced Models and Numerical Analysis}, European Math. Soc., 2008.

\bibitem[VL00]{VL00} C. Van Loan, The ubiquitous Kronecker product, {\em J. Comput. Appl. Math.}~{\bf 123}, 85--100 (2000)

\bibitem[VLV15]{VLV15} C. Van Loan, J. Vokt, Approximating matrices with multiple symmetries, {\em SIAM J. Matrix Anal. Appl.}~{\bf 36}, no.~3, 974--993 (2015)

\end{thebibliography}

\end{document}