Waves in composites and metamaterials/Transformation-based cloaking in mechanics

The content of these notes is based on the lectures by Prof. Graeme W. Milton (University of Utah) given in a course on metamaterials in Spring 2007.

Recap
In the previous lecture we showed that for a material with a real, symmetric, and positive definite conductivity tensor ($$\boldsymbol{\sigma}$$), we can set up a variational principle for minimal power dissipation. We also showed that the minimizer of this principle satisfies

$$
\boldsymbol{\nabla} \cdot (\boldsymbol{\sigma}\cdot\boldsymbol{\nabla} u) = \boldsymbol{\nabla} \cdot \mathbf{J}(\mathbf{x}) = 0 ~.
$$
Next we showed that under a coordinate transformation $$\mathbf{x} \rightarrow \mathbf{x}'$$, the conductivity tensor transforms as

$$
\boldsymbol{\sigma}'(\mathbf{x}') = \cfrac{1}{J}~\boldsymbol{A}(\mathbf{x})\cdot\boldsymbol{\sigma}(\mathbf{x})\cdot\boldsymbol{A}^T(\mathbf{x})
$$
where

$$
J = \det(\boldsymbol{A}) ~; \qquad A_{ij} := \frac{\partial x'_i}{\partial x_j} ~.
$$
We also found that the variational principle in transformed coordinates has the alternative interpretation that the function $$u'(\mathbf{x}') = u(\mathbf{x})$$ minimizes $$W$$ in a body $$\Omega'$$ filled with material with conductivity $$\boldsymbol{\sigma}'(\mathbf{x}')$$, with $$x'_1,x'_2,x'_3$$ as Cartesian coordinates in $$\mathbf{x}'$$-space.
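As a quick numerical sketch (not from the lecture), the rule $$\boldsymbol{\sigma}' = \boldsymbol{A}\cdot\boldsymbol{\sigma}\cdot\boldsymbol{A}^T/\det(\boldsymbol{A})$$ can be checked to preserve symmetry and positive definiteness for an invertible map with positive Jacobian determinant; the matrices below are arbitrary illustrative examples.

```python
import numpy as np

# Arbitrary example Jacobian A_ij = dx'_i/dx_j of some smooth map (det > 0)
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 0.5],
              [0.0, 0.0, 1.0]])
# Arbitrary symmetric, positive definite conductivity
sigma = np.diag([1.0, 2.0, 3.0])

J = np.linalg.det(A)                 # here J = 2
sigma_prime = A @ sigma @ A.T / J    # transformation rule for the conductivity

assert np.allclose(sigma_prime, sigma_prime.T)      # still symmetric
assert np.all(np.linalg.eigvalsh(sigma_prime) > 0)  # still positive definite
```

The transformed tensor is generally anisotropic even when $$\boldsymbol{\sigma}$$ is diagonal, which is exactly why transformation media require anisotropic materials.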

Next we derived the transformation rule for currents:

$$
\mathbf{J}'(\mathbf{x}') = \cfrac{1}{J}~\boldsymbol{A}(\mathbf{x})\cdot\boldsymbol{\sigma}(\mathbf{x})\cdot\boldsymbol{\nabla} u(\mathbf{x}) = \cfrac{\boldsymbol{A}\cdot\mathbf{J}(\mathbf{x})}{\det(\boldsymbol{A})} ~; \qquad \boldsymbol{\nabla}'\cdot\mathbf{J}'(\mathbf{x}') = 0 ~.
$$
We also saw that the transformation rule for the electric field could be written as

$$
\mathbf{E}'(\mathbf{x}') = (\boldsymbol{A}^T)^{-1}\cdot\mathbf{E}(\mathbf{x}) ~.
$$
Next, based on insights obtained from electrical impedance tomography, we found that a transformation-based cloaking effect could be obtained using the Greenleaf-Lassas-Uhlmann mapping (Greenleaf03). This mapping is singular and of the form

$$
\mathbf{x}'(\mathbf{x}) = \begin{cases} \left(\cfrac{|\mathbf{x}|}{2} + 1\right)~\cfrac{\mathbf{x}}{|\mathbf{x}|} & \text{if}~|\mathbf{x}| < 2 \\ \mathbf{x} & \text{if}~|\mathbf{x}| > 2 ~. \end{cases}
$$
The effect of this mapping is shown in Figure 1.
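The geometric action of this map is easy to verify numerically: points with $$0 < |\mathbf{x}| < 2$$ are pushed into the shell $$1 < |\mathbf{x}'| < 2$$, so the ball $$|\mathbf{x}'| < 1$$ is the "hidden" region. The sketch below implements the formula from the text directly (in 3D).

```python
import numpy as np

def glu_map(x):
    """Greenleaf-Lassas-Uhlmann map: blows the origin up to the unit sphere."""
    r = np.linalg.norm(x)
    if r < 2:
        return (r / 2 + 1) * x / r   # singular as r -> 0
    return x

inner = glu_map(np.array([0.5, 0.0, 0.0]))   # |x| = 0.5 maps to |x'| = 1.25
outer = glu_map(np.array([3.0, 0.0, 0.0]))   # identity outside |x| = 2

assert np.isclose(np.linalg.norm(inner), 1.25)
assert np.allclose(outer, [3.0, 0.0, 0.0])
```

Note the singularity at the origin: as $$|\mathbf{x}| \rightarrow 0$$ the image tends to the whole sphere $$|\mathbf{x}'| = 1$$, which is the source of the singular material parameters needed for this cloak.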

Some other unusual transformations
One unusual mapping that can be used to achieve cloaking is to fold back space upon itself (Leonhardt06,Pendry06). An example of such a mapping is

$$
\mathbf{x}' = \begin{cases} \mathbf{x} & \text{if}~x_1 < 0 \\ (-x_1, x_2, x_3) & \text{if}~d > x_1 > 0 \\ \mathbf{x} - (2d, 0, 0) & \text{if}~x_1 > d ~. \end{cases}
$$
The effect of this transformation is shown in Figure~2. Note that there is a sharp (discontinuous) fold; the separation shown in the thickness direction is simply for the purpose of illustration. In reality, space is folded upon itself, and the determinant of the Jacobian inside the fold is $$-1$$.
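A small numerical sketch makes the orientation reversal concrete: implementing the piecewise map above (with an illustrative value of $$d$$, which the text leaves symbolic) and differencing it at a point inside the fold gives a Jacobian with determinant $$-1$$.

```python
import numpy as np

d = 1.0  # slab thickness (illustrative value; the text only calls it d)

def fold(x):
    """Fold-back map from the text: identity, reflection, then translation."""
    x1 = x[0]
    if x1 < 0:
        return np.array(x)
    if x1 < d:
        return np.array([-x1, x[1], x[2]])
    return np.array(x) - np.array([2 * d, 0.0, 0.0])

# Finite-difference Jacobian A_ij = dx'_i/dx_j at a point inside the fold
x0 = np.array([0.5, 0.2, 0.3])
h = 1e-6
cols = [(fold(x0 + h * e) - fold(x0 - h * e)) / (2 * h) for e in np.eye(3)]
A = np.column_stack(cols)

assert np.isclose(np.linalg.det(A), -1.0)  # orientation-reversing inside the fold
```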

Transformations of Maxwell's equations
We also showed in the previous lecture that Maxwell's equations at fixed frequency are invariant with respect to coordinate transformations. Thus the equations in $$\mathbf{x}$$-space

$$
\begin{align} \boldsymbol{\nabla} \times \mathbf{E} + i\omega\boldsymbol{\mu}\cdot\mathbf{H} &= \boldsymbol{0} \\ \boldsymbol{\nabla} \times \mathbf{H} - i\omega\boldsymbol{\epsilon}\cdot\mathbf{E} &= \boldsymbol{0} \end{align}
$$
transform, in $$\mathbf{x}'$$-space, to

$$
\begin{align} \boldsymbol{\nabla}'\times\mathbf{E}' + i\omega\boldsymbol{\mu}'\cdot\mathbf{H}' &= \boldsymbol{0} \\ \boldsymbol{\nabla}'\times\mathbf{H}' - i\omega\boldsymbol{\epsilon}'\cdot\mathbf{E}' &= \boldsymbol{0} ~. \end{align}
$$
In the transformed equations,

$$
\mathbf{E}'(\mathbf{x}') = (\boldsymbol{A}^T)^{-1}~\mathbf{E}(\mathbf{x}) ~; \qquad \mathbf{H}'(\mathbf{x}') = (\boldsymbol{A}^T)^{-1}~\mathbf{H}(\mathbf{x})
$$
and
 * $$ \text{(1)} \qquad \boldsymbol{\mu}'(\mathbf{x}') = \cfrac{\boldsymbol{A}\cdot\boldsymbol{\mu}(\mathbf{x})\cdot\boldsymbol{A}^T}{\det(\boldsymbol{A})} ~; \qquad \boldsymbol{\epsilon}'(\mathbf{x}') = \cfrac{\boldsymbol{A}\cdot\boldsymbol{\epsilon}(\mathbf{x})\cdot\boldsymbol{A}^T}{\det(\boldsymbol{A})} ~. $$

Let us consider the effect of the fold-back transformation shown in Figure~2 on Maxwell's equations.

Recall that this transformation has the form

$$
\mathbf{x}' = \begin{cases} \mathbf{x} & \text{if}~x_1 < 0 \\ (-x_1, x_2, x_3) & \text{if}~d > x_1 > 0 \\ \mathbf{x} - (2d, 0, 0) & \text{if}~x_1 > d ~. \end{cases}
$$

The perfect lens
Now, let us suppose that $$\boldsymbol{\epsilon} = \boldsymbol{\mu} = \boldsymbol{\mathit{1}}$$ everywhere in the original ($$\mathbf{x}$$) space. Since the Jacobian of the transformation is

$$
A_{ki} = \frac{\partial x'_k}{\partial x_i}
$$
we have

$$
\boldsymbol{A} = \begin{cases} \boldsymbol{\mathit{1}} & \text{if}~x_1 < 0~\text{or}~x_1 > d \\ \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \text{if}~d > x_1 > 0 ~. \end{cases}
$$
This implies that in the region $$d > x_1 > 0$$ we have

$$
\boldsymbol{A}\cdot\boldsymbol{A}^T = \boldsymbol{\mathit{1}} ~; \qquad \det(\boldsymbol{A}) = -1 ~.
$$
Since the materials in the region are isotropic, i.e., $$\boldsymbol{\mu} = \boldsymbol{\mathit{1}}$$ and $$\boldsymbol{\epsilon} = \boldsymbol{\mathit{1}}$$, from equation (1) we see that in the region $$d > x_1 > 0$$,

$$
\mu'(\mathbf{x}') = -1 ~; \qquad \epsilon'(\mathbf{x}') = -1 ~.
$$
Therefore the fold-back transformation is realized in a geometry that is equivalent to the perfect lens that we discussed earlier. Figure 3 shows the geometry involved. For a source that is less than a distance $$d$$ from the first interface, the fields blow up to infinity and there is no solution. For a solution to exist, we need to regularize the problem by adding a small loss $$\delta$$, i.e., $$\epsilon = \mu = -1 + i\delta$$ in the lens. In that case, the fields blow up to infinity in two strips of length $$d - d_0$$, where $$d_0$$ is the distance of the source from the first interface. Outside this region, the fields converge to those expected from the Pendry solution (see Figure 4 for a schematic).
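The tensor algebra leading to this result can be checked mechanically: applying the transformation rule (1) with the fold's Jacobian $$\boldsymbol{A} = \text{diag}(-1,1,1)$$ and isotropic $$\boldsymbol{\mu} = \boldsymbol{\epsilon} = \boldsymbol{\mathit{1}}$$ gives $$-\boldsymbol{\mathit{1}}$$ for both transformed tensors. This sketch just re-derives the text's result numerically.

```python
import numpy as np

A = np.diag([-1.0, 1.0, 1.0])  # Jacobian of the fold in d > x1 > 0
I = np.eye(3)                  # isotropic mu = eps = identity
detA = np.linalg.det(A)

# Rule (1): mu' = A . mu . A^T / det(A), and likewise for eps'
mu_prime = A @ I @ A.T / detA
eps_prime = A @ I @ A.T / detA

assert np.isclose(detA, -1.0)
assert np.allclose(mu_prime, -I)    # mu'  = -1 inside the slab
assert np.allclose(eps_prime, -I)   # eps' = -1 inside the slab
```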

If $$d - d_0 > d_0$$, i.e., $$d_0 < d/2$$, then the source lies in a region of enormous fields. In fact, the source produces infinite energy per unit time in such regions as the loss $$\delta \rightarrow 0$$, which is clearly unphysical. So any realistic point or line source with finite energy, such as a polarizable particle, must have an amplitude that goes to zero as $$\delta \rightarrow 0$$. This means that the particle will have become cloaked!

Figures 5(a) and (b) show the cloaking caused by a cylindrical perfect lens with a small loss (Milton06). When a polarizable dipole is located close to the lens, the field is barely perturbed. However, when the dipole is farther from the lens, the field shows significant perturbations.

Magnification
So far we have not dealt with the issue of magnification. Is there a coordinate transformation that leads to magnification? One such possible transformation is illustrated in Figure 6.

In this case, in the region of dilation, the transformation is

$$
\mathbf{x}' = \beta~\mathbf{x} ~; \qquad \beta > 1 ~.
$$
Therefore, the Jacobian of the transformation is

$$
\boldsymbol{A} = \beta~\boldsymbol{\mathit{1}} ~; \qquad \det(\boldsymbol{A}) = \beta^3 ~.
$$
Hence the material tensors in the region of dilation transform as

$$
\boldsymbol{\epsilon}' = \cfrac{\boldsymbol{\epsilon}}{\beta} ~; \qquad \boldsymbol{\mu}' = \cfrac{\boldsymbol{\mu}}{\beta} ~.
$$
However, in the folded region, the $$\boldsymbol{\epsilon}$$ and $$\boldsymbol{\mu}$$ tensors are anisotropic and negative. Such a transformation therefore acts like a magnifying lens.
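The dilation case is another one-line application of rule (1) that can be verified numerically: with $$\boldsymbol{A} = \beta\,\boldsymbol{\mathit{1}}$$ the two factors of $$\beta$$ from $$\boldsymbol{A}\cdot\boldsymbol{\epsilon}\cdot\boldsymbol{A}^T$$ are divided by $$\det(\boldsymbol{A}) = \beta^3$$, leaving a single factor of $$1/\beta$$ (the value of $$\beta$$ below is illustrative).

```python
import numpy as np

beta = 2.0                # illustrative magnification factor, beta > 1
A = beta * np.eye(3)      # Jacobian of the dilation x' = beta * x
eps = np.eye(3)           # isotropic permittivity as an example
detA = np.linalg.det(A)

eps_prime = A @ eps @ A.T / detA   # rule (1)

assert np.isclose(detA, beta ** 3)            # det(A) = beta^3
assert np.allclose(eps_prime, eps / beta)     # eps' = eps / beta
```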

Transformation-based cloaking in elasticity
It turns out that the Willis equations of elastodynamics also transform in a manner that is very similar to the Maxwell equations of electromagnetism. Before we describe the Willis equations, let us briefly discuss ensemble averaging (as opposed to volume averaging). The hope is that the ensemble average is a good descriptor of the behavior of individual realizations.

Examples of ensembles
Some examples of ensembles are:
 * Periodic media with a period $$\delta$$ where the fields are not necessarily periodic (see Figure 7(a)). The ensemble is the material and all translations of it. Of course, a translation that is equal to the period gives back the same material.
 * Media generated by some translation-invariant statistical process. This means that a particular realization and its translations are equally likely to occur (roughly speaking). An example is a medium generated by a Poisson process. We can represent the ensemble by constructing a Voronoi tessellation and assigning constants to each cell at random (see Figure 7(b)).
 * Media generated by some statistical process where the statistics depend slowly on position.

Willis' equations
Recall that the equations governing the motion of a linear elastic body are
 * $$ \text{(2)} \qquad \begin{align} & \boldsymbol{\nabla} \cdot \boldsymbol{\sigma} + \mathbf{f} = \dot{\mathbf{p}} \\ & \boldsymbol{\varepsilon} = \frac{1}{2}~[\boldsymbol{\nabla}\mathbf{u} + (\boldsymbol{\nabla}\mathbf{u})^T] \\ & \dot{\mathbf{u}} = \frac{\partial \mathbf{u}}{\partial t} ~; \dot{\mathbf{p}} = \frac{\partial \mathbf{p}}{\partial t} \end{align} $$

where $$\mathbf{p}$$ is the momentum, $$\boldsymbol{\sigma}$$ is the stress, and $$\mathbf{f}$$ is the body force. We assume that the body force is independent of the realization. The microscopic constitutive relations are assumed to be
 * $$ \text{(3)} \qquad \boldsymbol{\sigma} = \boldsymbol{\mathsf{C}}\star\boldsymbol{\varepsilon} ~; \qquad \mathbf{p} = \rho~\dot{\mathbf{u}} ~. $$

Here,

$$
\boldsymbol{\mathsf{C}}\star\boldsymbol{\varepsilon} \equiv \int_{-\infty}^t \boldsymbol{\mathsf{C}}(t-\tau) : \boldsymbol{\varepsilon}(\tau)~\text{d}\tau ~.
$$
By ensemble averaging (2) we get
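This $$\star$$ operation is a causal convolution in time. A minimal scalar discretization (with an arbitrary relaxation kernel and step-strain history standing in for the tensors $$\boldsymbol{\mathsf{C}}$$ and $$\boldsymbol{\varepsilon}$$; none of these specific choices come from the lecture) shows the key property that the stress at time $$t$$ depends only on the strain history up to $$t$$:

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 2.0, dt)
C = np.exp(-t)                       # illustrative relaxation kernel C(t)
eps = np.where(t >= 0.5, 1.0, 0.0)   # strain history: step applied at t = 0.5

# Causal discrete convolution: sigma[n] ~ sum_{m <= n} C[n - m] * eps[m] * dt
sigma = np.convolve(C, eps)[: len(t)] * dt

# No stress before the strain is applied (causality)
assert sigma[t < 0.5].max() < 1e-12
# Late-time stress approaches the exact integral 1 - exp(-(t - 0.5))
assert abs(sigma[-1] - (1 - np.exp(-(t[-1] - 0.5)))) < 0.02
```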

$$
\begin{align} & \boldsymbol{\nabla} \cdot \left\langle \boldsymbol{\sigma} \right\rangle + \mathbf{f} = \left\langle \dot{\mathbf{p}} \right\rangle \\ & \left\langle \boldsymbol{\varepsilon} \right\rangle = \frac{1}{2}~[\boldsymbol{\nabla} \left\langle \mathbf{u} \right\rangle + (\boldsymbol{\nabla} \left\langle \mathbf{u} \right\rangle)^T] \end{align}
$$
where $$\left\langle (\bullet) \right\rangle$$ is the ensemble average over realizations and not a volume average.

However, simply ensemble averaging (3) does not give a closed relation, since

$$
\left\langle \boldsymbol{\sigma} \right\rangle \ne \left\langle \boldsymbol{\mathsf{C}} \right\rangle\star\left\langle \boldsymbol{\varepsilon} \right\rangle \quad \text{and} \quad \left\langle \mathbf{p} \right\rangle \ne \left\langle \rho \right\rangle~\left\langle \dot{\mathbf{u}} \right\rangle ~.
$$
We therefore need some effective constitutive relation. Willis (Willis81,Willis81a,Willis83,Willis97,Milton07) found that

$$
\begin{align} \left\langle \boldsymbol{\sigma} \right\rangle & = \boldsymbol{\mathsf{C}}_\text{eff}\star\left\langle \boldsymbol{\varepsilon} \right\rangle + \boldsymbol{\mathsf{S}}_\text{eff}\star\left\langle \dot{\mathbf{u}} \right\rangle\\ \left\langle \mathbf{p} \right\rangle & = \boldsymbol{\mathsf{S}}_\text{eff}^\dagger \star\left\langle \boldsymbol{\varepsilon} \right\rangle + \rho_\text{eff}\star\left\langle \dot{\mathbf{u}} \right\rangle \end{align}
$$
where all the operators are nonlocal in time (and in general also nonlocal in space).

By the adjoint operator (represented by the superscript $$\dagger$$), we mean

$$
\int \boldsymbol{\pi} \star (\boldsymbol{\mathsf{S}}^\dagger_\text{eff} \star \boldsymbol{\tau})~\text{d}\mathbf{x} = \int \boldsymbol{\tau} \star (\boldsymbol{\mathsf{S}}_\text{eff} \star \boldsymbol{\pi})~\text{d}\mathbf{x}
$$
for all fields $$\boldsymbol{\pi}$$ and $$\boldsymbol{\tau}$$, at any time $$t$$.
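In a discrete analogue where the fields become vectors and $$\boldsymbol{\mathsf{S}}_\text{eff}$$ becomes a matrix, this definition forces the adjoint to be the transpose. The sketch below uses a random matrix as an illustrative stand-in, not the actual (nonlocal, coupled) Willis operator:

```python
import numpy as np

# Discrete analogue: <pi, S_dagger tau> = <tau, S pi> implies S_dagger = S^T
rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))   # stand-in for S_eff
S_dagger = S.T                    # its adjoint in the Euclidean inner product
pi = rng.standard_normal(4)
tau = rng.standard_normal(4)

assert np.isclose(pi @ (S_dagger @ tau), tau @ (S @ pi))
```

For the continuum operator, the same identity defines $$\boldsymbol{\mathsf{S}}^\dagger_\text{eff}$$ with respect to the space-time pairing written above.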

In the next lecture we will show how Willis' equations are derived.