Waves in composites and metamaterials/Hierarchical laminates and Hilbert space formalism

The content of these notes is based on the lectures by Prof. Graeme W. Milton (University of Utah) given in a course on metamaterials in Spring 2007.

Hierarchical Laminates
In the previous lecture we found that, for rank-1 laminates, the effective permittivity can be calculated using the formula of Tartar-Murat-Lurie-Cherkaev. In this lecture we extend the ideas used to arrive at that formula to hierarchical laminates.

An example of a hierarchical laminate is shown in Figure 1. The idea of such materials goes back to Maxwell. In the rank-2 laminate shown in the figure there are two length scales which are assumed to be sufficiently separated so that the ideas in the previous lecture can be exploited. There has to be a separation of length scales so that the layer material can be replaced by its effective tensor.

Recall the Tartar-Murat-Lurie-Cherkaev formula for the effective permittivity of a rank-1 laminate:

$$ f_1~[\epsilon_2~\boldsymbol{\mathit{1}} - \boldsymbol{\epsilon}_\text{eff}]^{-1} = [\epsilon_2~\boldsymbol{\mathit{1}} - \boldsymbol{\epsilon}_1]^{-1} - \cfrac{f_2}{\epsilon_2}~\boldsymbol{\mathit{\Gamma}}_1(\mathbf{n}) ~. $$

By iterating this formula one gets, for a rank-$$m$$ laminate,

$$ \text{(1)} \qquad f_1~[\epsilon_2~\boldsymbol{\mathit{1}} - \boldsymbol{\epsilon}_\text{eff}]^{-1} = [\epsilon_2~\boldsymbol{\mathit{1}} - \boldsymbol{\epsilon}_1]^{-1} - \cfrac{f_2}{\epsilon_2}~\boldsymbol{M} $$

where

$$ \boldsymbol{M} := \sum_{j=1}^m c_j~\boldsymbol{\mathit{\Gamma}}_1(\mathbf{n}_j) ~;~~ c_j = \cfrac{f^{(j-1)} - f^{(j)}}{1 - f_1} \ge 0 ~, $$

$$m$$ is the number of laminates in the hierarchy, $$f^{(j)}$$ is the proportion of phase $$1$$ in a rank-$$j$$ laminate, and $$\mathbf{n}_j$$ is the orientation (layer normal) of the $$j$$-th laminate.

In particular, $$f^{(m)} = f_1$$ and $$f^{(0)} = 1$$, and since each lamination step dilutes phase $$1$$ with phase $$2$$, the proportions decrease monotonically:

$$ f^{(0)} = 1 > f^{(1)} > \dots > f^{(m)} = f_1 ~. $$

Then the sum telescopes:

$$ \sum_{j=1}^m c_j = \cfrac{f^{(0)} - f^{(m)}}{1 - f_1} = 1~. $$
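
The telescoping property of the weights $$c_j$$ can be verified with a short computation. The sequence of fractions below is a hypothetical example, not data from the lecture; any decreasing sequence with $$f^{(0)} = 1$$ and $$f^{(m)} = f_1$$ works.

```python
# Numerical check of the telescoping sum for the weights c_j.
# f holds a hypothetical sequence f^(0), f^(1), f^(2), f^(3).

f = [1.0, 0.7, 0.5, 0.3]      # f^(0) = 1, f^(m) = f_1 = 0.3
f1 = f[-1]

c = [(f[j - 1] - f[j]) / (1.0 - f1) for j in range(1, len(f))]

assert all(cj >= 0.0 for cj in c)       # each weight is non-negative
assert abs(sum(c) - 1.0) < 1e-12        # the weights sum to 1
```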

For a rank-3 laminate, since $$\boldsymbol{\mathit{\Gamma}}_1(\mathbf{n}_j) = \mathbf{n}_j\otimes\mathbf{n}_j$$, we have

$$ \boldsymbol{M} = c_1~\mathbf{n}_1\otimes\mathbf{n}_1 + c_2~\mathbf{n}_2\otimes\mathbf{n}_2 + c_3~\mathbf{n}_3\otimes\mathbf{n}_3 ~. $$

If the normals $$\mathbf{n}_1$$, $$\mathbf{n}_2$$, and $$\mathbf{n}_3$$ are mutually orthogonal unit vectors, and we choose the $$f^{(j)}$$s so that

$$ c_1 = c_2 = c_3 = \frac{1}{3} $$

then

$$ \boldsymbol{M} = \frac{1}{3}~(\mathbf{n}_1\otimes\mathbf{n}_1 + \mathbf{n}_2\otimes\mathbf{n}_2 + \mathbf{n}_3\otimes\mathbf{n}_3) = \frac{1}{3}~\boldsymbol{\mathit{1}} ~. $$

In this case, equation (1) coincides with the solution for the Hashin sphere assemblage!

This implies that different geometries can have the same  $$\boldsymbol{\epsilon}_\text{eff}(\epsilon_2,\boldsymbol{\epsilon}_1)$$.
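
The coincidence with the Hashin sphere assemblage can be verified numerically in the scalar (isotropic-phase) case. The permittivities and volume fraction below are arbitrary test values; the second expression is the standard Hashin-Shtrikman coated-sphere formula with phase 2 as the coating.

```python
# Check (scalar phases) that equation (1) with M = (1/3) 1 reproduces
# the Hashin coated-sphere (Hashin-Shtrikman) formula.

eps1, eps2 = 1.0, 5.0          # phase permittivities (arbitrary test values)
f1 = 0.3                       # volume fraction of phase 1
f2 = 1.0 - f1

# Equation (1) with M = (1/3) 1:
#   f1/(eps2 - eps_eff) = 1/(eps2 - eps1) - f2/(3 eps2)
eps_lam = eps2 - f1 / (1.0 / (eps2 - eps1) - f2 / (3.0 * eps2))

# Hashin sphere assemblage (phase-1 cores coated with phase 2)
eps_hs = eps2 + f1 / (1.0 / (eps1 - eps2) + f2 / (3.0 * eps2))

assert abs(eps_lam - eps_hs) < 1e-12
```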

Is there a formula as simple as the Tartar-Murat-Lurie-Cherkaev formula when $$\boldsymbol{\epsilon}_1$$ and $$\boldsymbol{\epsilon}_2$$ are both anisotropic?
The answer is yes.

In this case we use an anisotropic reference material $$\boldsymbol{\epsilon}_0$$ and define the polarization as

$$ \text{(2)} \qquad \mathbf{P}(\mathbf{x}) := [\boldsymbol{\epsilon}(\mathbf{x}) - \boldsymbol{\epsilon}_0]\cdot\mathbf{E}(\mathbf{x}) = \mathbf{D}(\mathbf{x}) - \boldsymbol{\epsilon}_0\cdot\mathbf{E}(\mathbf{x}) ~. $$

The volume average of this field is given by

$$ \langle \mathbf{P} \rangle := \langle \boldsymbol{\epsilon}\cdot\mathbf{E} \rangle - \boldsymbol{\epsilon}_0\cdot\langle \mathbf{E} \rangle = \langle \mathbf{D} \rangle - \boldsymbol{\epsilon}_0\cdot\langle \mathbf{E} \rangle ~. $$

Therefore, the difference between the field and its volume average is

$$ \text{(3)} \qquad \mathbf{P}(\mathbf{x}) - \langle \mathbf{P} \rangle = [\mathbf{D}(\mathbf{x}) - \langle \mathbf{D} \rangle] - \boldsymbol{\epsilon}_0\cdot[\mathbf{E}(\mathbf{x}) - \langle \mathbf{E} \rangle] ~. $$

Let us introduce a new matrix $$\boldsymbol{\mathit{\Gamma}}(\mathbf{n})$$ defined through its action on a vector $$\mathbf{a}$$, i.e.,

$$ \text{(4)} \qquad \mathbf{b} = \boldsymbol{\mathit{\Gamma}}(\mathbf{n}) \cdot \mathbf{a} \qquad\text{if and only if}\qquad \begin{cases} \boldsymbol{\mathit{\Gamma}}_1(\mathbf{n})\cdot\mathbf{b} = \mathbf{b} \\ \boldsymbol{\mathit{\Gamma}}_1(\mathbf{n})\cdot(\mathbf{a} - \boldsymbol{\epsilon}_0\cdot\mathbf{b}) = \boldsymbol{0} \end{cases} $$

where $$\boldsymbol{\mathit{\Gamma}}_1(\mathbf{n}) = \mathbf{n}\otimes\mathbf{n}$$ projects parallel to $$\mathbf{n}$$. Therefore,

$$ \boldsymbol{\mathit{\Gamma}}_1(\mathbf{n})\cdot\mathbf{b} = \mathbf{b} \qquad \implies \qquad \mathbf{b} = \alpha~\mathbf{n} \qquad \text{where} \quad \alpha = \mathbf{b}\cdot\mathbf{n} ~. $$

Also,

$$ \boldsymbol{\mathit{\Gamma}}_1(\mathbf{n})\cdot(\mathbf{a} - \boldsymbol{\epsilon}_0\cdot\mathbf{b}) = \boldsymbol{0} \qquad \implies \qquad \mathbf{a}\cdot\mathbf{n} - \alpha~(\mathbf{n}\cdot\boldsymbol{\epsilon}_0\cdot\mathbf{n}) = 0~. $$

Therefore,

$$ \alpha = \cfrac{\mathbf{a}\cdot\mathbf{n}}{\mathbf{n}\cdot\boldsymbol{\epsilon}_0\cdot\mathbf{n}} \qquad \text{and} \qquad \mathbf{b} = \left(\cfrac{\mathbf{a}\cdot\mathbf{n}}{\mathbf{n}\cdot\boldsymbol{\epsilon}_0\cdot\mathbf{n}} \right)~\mathbf{n} = \cfrac{(\mathbf{n}\otimes\mathbf{n})\cdot\mathbf{a}}{\mathbf{n}\cdot\boldsymbol{\epsilon}_0\cdot\mathbf{n}} = \cfrac{\boldsymbol{\mathit{\Gamma}}_1(\mathbf{n})\cdot\mathbf{a}}{\mathbf{n}\cdot\boldsymbol{\epsilon}_0\cdot\mathbf{n}} ~. $$

From the definition of $$\boldsymbol{\mathit{\Gamma}}(\mathbf{n})$$ we then have

$$ \text{(5)} \qquad \boldsymbol{\mathit{\Gamma}}(\mathbf{n}) = \cfrac{\boldsymbol{\mathit{\Gamma}}_1(\mathbf{n})}{\mathbf{n}\cdot\boldsymbol{\epsilon}_0\cdot\mathbf{n}} ~. $$

Taking the projection of both sides of equation (3) we get

$$ \boldsymbol{\mathit{\Gamma}}_1(\mathbf{n})\cdot[\mathbf{P}(\mathbf{x}) - \langle \mathbf{P} \rangle] = \boldsymbol{\mathit{\Gamma}}_1(\mathbf{n})\cdot[\mathbf{D}(\mathbf{x}) - \langle \mathbf{D} \rangle] - \boldsymbol{\mathit{\Gamma}}_1(\mathbf{n})\cdot\{\boldsymbol{\epsilon}_0\cdot[\mathbf{E}(\mathbf{x}) - \langle \mathbf{E} \rangle]\} ~. $$

Now, continuity of the normal component of $$\mathbf{D}$$ across the layers and the piecewise constant nature of the fields imply that the normal component of $$\mathbf{D}$$ is constant throughout the laminate. Therefore,

$$ \boldsymbol{\mathit{\Gamma}}_1(\mathbf{n})\cdot[\mathbf{D}(\mathbf{x}) - \langle \mathbf{D} \rangle] = [\mathbf{n}\cdot\mathbf{D}(\mathbf{x}) - \mathbf{n}\cdot\langle \mathbf{D} \rangle]~\mathbf{n} = \boldsymbol{0} ~. $$

Hence we have

$$ \text{(6)} \qquad \boldsymbol{\mathit{\Gamma}}_1(\mathbf{n})\cdot\left\{\mathbf{P}(\mathbf{x}) - \langle \mathbf{P} \rangle + \boldsymbol{\epsilon}_0\cdot[\mathbf{E}(\mathbf{x}) - \langle \mathbf{E} \rangle]\right\} = \boldsymbol{0} ~. $$

Recall from the previous lecture that
$$ \text{(7)} \qquad \boldsymbol{\mathit{\Gamma}}_1(\mathbf{n})\cdot\mathbf{E}(\mathbf{x}) = \mathbf{E}(\mathbf{x}) - \langle \mathbf{E} \rangle + \boldsymbol{\mathit{\Gamma}}_1(\mathbf{n})\cdot\langle \mathbf{E} \rangle \quad \implies \quad \boldsymbol{\mathit{\Gamma}}_1(\mathbf{n})\cdot[\mathbf{E}(\mathbf{x}) - \langle \mathbf{E} \rangle] = \mathbf{E}(\mathbf{x}) - \langle \mathbf{E} \rangle ~. $$

Equations (6) and (7) show that the conditions in (4) are satisfied with

$$ \mathbf{a} := \mathbf{P}(\mathbf{x}) - \langle \mathbf{P} \rangle \quad \text{and} \quad \mathbf{b} := -[\mathbf{E}(\mathbf{x}) - \langle \mathbf{E} \rangle] ~. $$

From the definition of $$\boldsymbol{\mathit{\Gamma}}$$ we then have

$$ \text{(8)} \qquad -[\mathbf{E}(\mathbf{x}) - \langle \mathbf{E} \rangle] = \boldsymbol{\mathit{\Gamma}}(\mathbf{n})\cdot[\mathbf{P}(\mathbf{x}) - \langle \mathbf{P} \rangle] ~. $$

Now, from equation (2) we have

$$ \mathbf{E}(\mathbf{x}) = [\boldsymbol{\epsilon}(\mathbf{x}) - \boldsymbol{\epsilon}_0]^{-1}\cdot\mathbf{P}(\mathbf{x}) ~. $$

Plugging this into (8) gives

$$ -[\boldsymbol{\epsilon}(\mathbf{x}) - \boldsymbol{\epsilon}_0]^{-1}\cdot\mathbf{P}(\mathbf{x}) + \langle \mathbf{E} \rangle = \boldsymbol{\mathit{\Gamma}}(\mathbf{n})\cdot[\mathbf{P}(\mathbf{x}) - \langle \mathbf{P} \rangle] $$

or

$$ \{[\boldsymbol{\epsilon}(\mathbf{x}) - \boldsymbol{\epsilon}_0]^{-1} + \boldsymbol{\mathit{\Gamma}}(\mathbf{n})\}\cdot\mathbf{P}(\mathbf{x}) = \langle \mathbf{E} \rangle + \boldsymbol{\mathit{\Gamma}}(\mathbf{n})\cdot\langle \mathbf{P} \rangle ~. $$

Define

$$ \mathbf{V} := \langle \mathbf{E} \rangle + \boldsymbol{\mathit{\Gamma}}(\mathbf{n})\cdot\langle \mathbf{P} \rangle $$

and note that this quantity is constant throughout the laminate. Therefore we can write

$$ \{[\boldsymbol{\epsilon}(\mathbf{x}) - \boldsymbol{\epsilon}_0]^{-1} + \boldsymbol{\mathit{\Gamma}}(\mathbf{n})\}\cdot\mathbf{P}(\mathbf{x}) = \mathbf{V} $$

or

$$ \mathbf{P}(\mathbf{x}) = \{[\boldsymbol{\epsilon}(\mathbf{x}) - \boldsymbol{\epsilon}_0]^{-1} + \boldsymbol{\mathit{\Gamma}}(\mathbf{n})\}^{-1}\cdot\mathbf{V} ~. $$

If we now take a volume average, we get
$$ \text{(9)} \qquad \langle \mathbf{P} \rangle = \langle \{[\boldsymbol{\epsilon}(\mathbf{x}) - \boldsymbol{\epsilon}_0]^{-1} + \boldsymbol{\mathit{\Gamma}}(\mathbf{n})\}^{-1} \rangle \cdot\mathbf{V} ~. $$

Also, from the definition of $$\mathbf{P}(\mathbf{x})$$ we have

$$ \langle \mathbf{P} \rangle = [\boldsymbol{\epsilon}_\text{eff} - \boldsymbol{\epsilon}_0]\cdot\langle \mathbf{E} \rangle \qquad \implies \qquad \langle \mathbf{E} \rangle = [\boldsymbol{\epsilon}_\text{eff} - \boldsymbol{\epsilon}_0]^{-1}\cdot\langle \mathbf{P} \rangle ~. $$

Therefore,

$$ \mathbf{V} = \{[\boldsymbol{\epsilon}_\text{eff} - \boldsymbol{\epsilon}_0]^{-1} + \boldsymbol{\mathit{\Gamma}}(\mathbf{n})\}\cdot\langle \mathbf{P} \rangle $$

or

$$ \text{(10)} \qquad \langle \mathbf{P} \rangle = \{[\boldsymbol{\epsilon}_\text{eff} - \boldsymbol{\epsilon}_0]^{-1} + \boldsymbol{\mathit{\Gamma}}(\mathbf{n})\}^{-1}\cdot\mathbf{V}~. $$

Comparing equations (9) and (10), and invoking the arbitrariness of the applied field $$\langle \mathbf{E} \rangle$$ (and hence of $$\mathbf{V}$$), we get

$$ \text{(11)} \qquad \{[\boldsymbol{\epsilon}_\text{eff} - \boldsymbol{\epsilon}_0]^{-1} + \boldsymbol{\mathit{\Gamma}}(\mathbf{n})\}^{-1} = \langle \{[\boldsymbol{\epsilon}(\mathbf{x}) - \boldsymbol{\epsilon}_0]^{-1} + \boldsymbol{\mathit{\Gamma}}(\mathbf{n})\}^{-1} \rangle ~. $$

This relation has a simple form and can be used even when the phases are anisotropic.
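
As a sanity check on the construction of $$\boldsymbol{\mathit{\Gamma}}(\mathbf{n})$$ in equations (4) and (5), the sketch below verifies the two defining conditions numerically. The normal, vector, and reference tensor are arbitrary test data.

```python
# Verify that b = Gamma(n) . a satisfies the conditions in (4):
# Gamma_1(n) . b = b  and  Gamma_1(n) . (a - eps0 . b) = 0.
import math

n = [1.0 / math.sqrt(3)] * 3              # unit normal (arbitrary)
a = [0.4, -1.2, 2.5]                      # arbitrary vector
eps0 = [[3.0, 0.5, 0.0],                  # symmetric positive-definite
        [0.5, 2.0, 0.1],                  # anisotropic reference tensor
        [0.0, 0.1, 4.0]]

dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
matvec = lambda M, v: [dot(row, v) for row in M]

# Equation (5): Gamma(n) . a = (n . a) n / (n . eps0 . n)
alpha = dot(n, a) / dot(n, matvec(eps0, n))
b = [alpha * ni for ni in n]

# Condition 1: Gamma_1(n) . b = (n . b) n = b
g1b = [dot(n, b) * ni for ni in n]
assert all(abs(x - y) < 1e-12 for x, y in zip(g1b, b))

# Condition 2: Gamma_1(n) . (a - eps0 . b) = 0, i.e. n . (a - eps0 . b) = 0
r = [ai - ei for ai, ei in zip(a, matvec(eps0, b))]
assert abs(dot(n, r)) < 1e-12
```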

For a simple (rank-1) laminate with the choice $$\boldsymbol{\epsilon}_0 = \boldsymbol{\epsilon}_2$$, equation (11) reduces to

$$ f_1~(\boldsymbol{\epsilon}_\text{eff} - \boldsymbol{\epsilon}_2)^{-1} = (\boldsymbol{\epsilon}_1 - \boldsymbol{\epsilon}_2)^{-1} + f_2~\boldsymbol{\mathit{\Gamma}}(\mathbf{n}) \qquad \text{where} \qquad \boldsymbol{\mathit{\Gamma}}(\mathbf{n}) = \cfrac{\mathbf{n}\otimes\mathbf{n}}{\mathbf{n}\cdot\boldsymbol{\epsilon}_2\cdot\mathbf{n}} ~. $$
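
When both tensors are diagonal and the layers are normal to the $$x_1$$-axis, this formula should reproduce the classical laminate averages: the harmonic mean along the normal and the arithmetic mean in the plane of the layers. A small check, with arbitrary test values:

```python
# Check the rank-1 formula with eps0 = eps2 against the classical
# laminate averages.  Diagonal test tensors, layering normal n = e1.

f1, f2 = 0.4, 0.6
e1 = [2.0, 5.0, 3.0]     # diagonal of eps_1 (arbitrary test values)
e2 = [8.0, 1.0, 6.0]     # diagonal of eps_2 (arbitrary test values)

# Normal component: Gamma_11 = 1/e2[0], so
#   f1/(x - e2) = 1/(e1 - e2) + f2/e2  =>  harmonic mean
x_normal = e2[0] + f1 / (1.0 / (e1[0] - e2[0]) + f2 / e2[0])
assert abs(x_normal - 1.0 / (f1 / e1[0] + f2 / e2[0])) < 1e-12

# Transverse components: Gamma_22 = Gamma_33 = 0  =>  arithmetic mean
for i in (1, 2):
    x_t = e2[i] + f1 * (e1[i] - e2[i])
    assert abs(x_t - (f1 * e1[i] + f2 * e2[i])) < 1e-12
```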

Linear Elastic Laminates
For elasticity, exactly the same analysis can be applied. In this case we introduce a reference stiffness tensor $$\boldsymbol{\mathsf{C}}_0$$ and define the second-order polarization tensor as

$$ \boldsymbol{P}(\mathbf{x}) = [\boldsymbol{\mathsf{C}}(\mathbf{x}) - \boldsymbol{\mathsf{C}}_0] : \boldsymbol{\varepsilon}(\mathbf{x}) $$

where the strain $$\boldsymbol{\varepsilon}(\mathbf{x})$$ is given by

$$ \boldsymbol{\varepsilon}(\mathbf{x}) = \frac{1}{2}~[\boldsymbol{\nabla}\mathbf{u} + (\boldsymbol{\nabla}\mathbf{u})^T] ~. $$

Following the same process as before, we can show that the effective elastic stiffness of a hierarchical laminate can be determined from the formula

$$ \text{(12)} \qquad \{[\boldsymbol{\mathsf{C}}_\text{eff} - \boldsymbol{\mathsf{C}}_0]^{-1} + \boldsymbol{\mathit{\Gamma}}(\mathbf{n})\}^{-1} = \langle \{[\boldsymbol{\mathsf{C}}(\mathbf{x}) - \boldsymbol{\mathsf{C}}_0]^{-1} + \boldsymbol{\mathit{\Gamma}}(\mathbf{n})\}^{-1} \rangle $$

where (the components are in a rectangular Cartesian basis)

$$ [\boldsymbol{\mathit{\Gamma}}(\mathbf{n})]_{ijlm} = \cfrac{1}{4}~\left[n_i~\{\boldsymbol{C}^{-1}(\mathbf{n})\}_{jl}~n_m + n_i~\{\boldsymbol{C}^{-1}(\mathbf{n})\}_{jm}~n_l + n_j~\{\boldsymbol{C}^{-1}(\mathbf{n})\}_{il}~n_m + n_j~\{\boldsymbol{C}^{-1}(\mathbf{n})\}_{im}~n_l\right] $$

and $$\boldsymbol{C}^{-1}(\mathbf{n})$$ is the inverse of

$$ \boldsymbol{C}(\mathbf{n}) = \mathbf{n} \cdot \boldsymbol{\mathsf{C}}_0 \cdot \mathbf{n} ~. $$

Note that $$\boldsymbol{C}(\mathbf{n})$$ is the acoustic tensor of the reference medium.

If $$\boldsymbol{\mathsf{C}}_0$$ is isotropic, i.e.,

$$ \boldsymbol{\mathsf{C}}_0:\boldsymbol{\varepsilon} = \lambda_0~\text{tr}(\boldsymbol{\varepsilon})~\boldsymbol{\mathit{1}} + 2~\mu_0~\boldsymbol{\varepsilon} $$

where $$\lambda_0$$ is the Lamé modulus and $$\mu_0$$ is the shear modulus, then $$\boldsymbol{\mathit{\Gamma}}(\mathbf{n})$$ simplifies to

$$ [\boldsymbol{\mathit{\Gamma}}(\mathbf{n})]_{ijlm} = \left(\cfrac{1}{\lambda_0 + 2~\mu_0} - \cfrac{1}{\mu_0}\right)~n_i~n_j~n_l~n_m + \cfrac{1}{4~\mu_0}~(n_i~\delta_{jl}~n_m + n_i~\delta_{jm}~n_l + n_j~\delta_{il}~n_m + n_j~\delta_{im}~n_l) ~. $$
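
The reduction from the general expression (built from the inverse acoustic tensor) to this closed isotropic form can be checked componentwise; the moduli and normal below are arbitrary test values.

```python
# Check that Gamma(n) built from the inverse acoustic tensor
# A(n) = n . C_0 . n reduces to the closed isotropic expression.

lam0, mu0 = 2.0, 1.5                    # arbitrary test moduli
n = [0.6, 0.0, 0.8]                     # unit normal
d = lambda i, j: 1.0 if i == j else 0.0

# Acoustic tensor of the isotropic reference: A_jk = mu0 d_jk + (lam0+mu0) n_j n_k
A = [[mu0 * d(j, k) + (lam0 + mu0) * n[j] * n[k] for k in range(3)] for j in range(3)]

# Its closed-form inverse: Ainv_jk = d_jk/mu0 + (1/(lam0+2mu0) - 1/mu0) n_j n_k
Ainv = [[d(j, k) / mu0 + (1.0 / (lam0 + 2 * mu0) - 1.0 / mu0) * n[j] * n[k]
         for k in range(3)] for j in range(3)]
for j in range(3):                      # sanity: A . Ainv = identity
    for k in range(3):
        assert abs(sum(A[j][p] * Ainv[p][k] for p in range(3)) - d(j, k)) < 1e-12

def gamma_general(i, j, l, m):
    return 0.25 * (n[i] * Ainv[j][l] * n[m] + n[i] * Ainv[j][m] * n[l]
                   + n[j] * Ainv[i][l] * n[m] + n[j] * Ainv[i][m] * n[l])

def gamma_iso(i, j, l, m):
    return ((1.0 / (lam0 + 2 * mu0) - 1.0 / mu0) * n[i] * n[j] * n[l] * n[m]
            + (n[i] * d(j, l) * n[m] + n[i] * d(j, m) * n[l]
               + n[j] * d(i, l) * n[m] + n[j] * d(i, m) * n[l]) / (4 * mu0))

for i in range(3):
    for j in range(3):
        for l in range(3):
            for m in range(3):
                assert abs(gamma_general(i, j, l, m) - gamma_iso(i, j, l, m)) < 1e-12
```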

Hilbert Space Formulation
The methods discussed above can be generalized if we think in terms of a Hilbert space formalism. Recall that our goal is to find a general formula for $$\boldsymbol{\epsilon}_\text{eff}$$ and $$\boldsymbol{\mathsf{C}}_\text{eff}$$.

Let us consider a periodic material with unit cell $$Q$$. We will call such materials $$Q$$-periodic.

Electromagnetism
Consider the Hilbert space $$\mathcal{H}$$ of square-integrable, $$Q$$-periodic, complex vector fields with the inner product

$$ (\mathbf{a}, \mathbf{b}) = \int_Q \overline{\mathbf{a}(\mathbf{x})}\cdot\mathbf{b}(\mathbf{x})~\text{d}\mathbf{x} $$

where $$\mathbf{a}$$ and $$\mathbf{b}$$ are vector fields and $$\overline{(\bullet)}$$ denotes the complex conjugate. We can use Parseval's theorem to express the inner product in Fourier space as

$$ (\mathbf{a}, \mathbf{b}) = \int_{\hat{Q}} \overline{\widehat{\mathbf{a}}(\mathbf{k})}\cdot\widehat{\mathbf{b}}(\mathbf{k})~\text{d}\mathbf{k} $$

where $$\mathbf{k}$$ is the wave vector.
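
A discrete analogue of Parseval's theorem is easy to verify. The sketch below uses a one-dimensional four-point example with the normalization $$\widehat{a}(k) = \tfrac{1}{N}\sum_n a(n)\,e^{-2\pi i k n/N}$$, which is an assumed convention for this illustration, not necessarily the lecture's.

```python
# Discrete Parseval check: with a_hat(k) = (1/N) sum_n a(n) e^{-2 pi i k n/N},
# the cell average of |a|^2 equals the sum of |a_hat|^2 over the modes.
import cmath

a = [1.0 + 2.0j, -0.5, 3.0j, 2.0 - 1.0j]    # arbitrary complex samples
N = len(a)
a_hat = [sum(a[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N)) / N
         for k in range(N)]

lhs = sum(abs(x) ** 2 for x in a) / N        # cell average of |a|^2
rhs = sum(abs(x) ** 2 for x in a_hat)        # sum over Fourier modes
assert abs(lhs - rhs) < 1e-12
```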

The Hilbert space $$\mathcal{H}$$ can be decomposed into three orthogonal subspaces.


 * 1) The subspace $$\mathcal{U}$$ of uniform fields, i.e., $$\mathbf{a}(\mathbf{x})$$ is independent of $$\mathbf{x}$$, or in Fourier space, $$\widehat{\mathbf{a}}(\mathbf{k}) = \boldsymbol{0}$$ unless $$\mathbf{k} = \boldsymbol{0}$$.
 * 2) The subspace $$\mathcal{J}$$ of zero divergence, zero average value fields, i.e., $$\boldsymbol{\nabla} \cdot \mathbf{a}(\mathbf{x}) = 0$$ and $$\langle \mathbf{a}(\mathbf{x}) \rangle = \boldsymbol{0}$$, or in Fourier space, $$\widehat{\mathbf{a}}(\boldsymbol{0}) = \boldsymbol{0}$$ and $$\mathbf{k}\cdot\widehat{\mathbf{a}}(\mathbf{k}) = 0$$.
 * 3) The subspace $$\mathcal{E}$$ of zero curl, zero average value fields, i.e., $$\boldsymbol{\nabla} \times \mathbf{a}(\mathbf{x}) = \boldsymbol{0}$$ and $$\langle \mathbf{a}(\mathbf{x}) \rangle = \boldsymbol{0}$$, or in Fourier space, $$\widehat{\mathbf{a}}(\boldsymbol{0}) = \boldsymbol{0}$$ and $$\widehat{\mathbf{a}}(\mathbf{k}) = \alpha(\mathbf{k})~\mathbf{k}$$.

Thus we can write

$$ \mathcal{H} = \mathcal{U} \oplus \mathcal{J} \oplus \mathcal{E} ~. $$

In Fourier space, we can clearly see that

$$ (\mathbf{a}, \mathbf{b}) = 0 $$

if we choose $$\mathbf{a}$$ from any one of $$\mathcal{U}, \mathcal{J}, \mathcal{E}$$ and $$\mathbf{b}$$ from a different subspace. Therefore the three subspaces are orthogonal.
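
For instance, a single Fourier mode in $$\mathcal{E}$$ (with $$\widehat{\mathbf{a}}$$ parallel to $$\mathbf{k}$$) is orthogonal to a single mode in $$\mathcal{J}$$ (with $$\mathbf{k}\cdot\widehat{\mathbf{b}} = 0$$). A minimal check with real test data:

```python
# Orthogonality of single Fourier modes: an E-mode has a_hat parallel to k,
# a J-mode has k . b_hat = 0; their pointwise inner product vanishes.

k = [1.0, 2.0, -1.0]                          # arbitrary wave vector
a_hat = [0.7 * ki for ki in k]                # in E: proportional to k
b_hat = [2.0, -1.0, 0.0]                      # in J: k . b_hat = 0

dot = lambda u, v: sum(x * y for x, y in zip(u, v))
assert abs(dot(k, b_hat)) < 1e-12             # b_hat is admissible for J
assert abs(dot(a_hat, b_hat)) < 1e-12         # E-mode orthogonal to J-mode
```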

Elasticity
Similarly, for elasticity, $$\mathcal{H}$$ is the Hilbert space of square-integrable, $$Q$$-periodic, complex, symmetric matrix-valued fields with inner product

$$ (\boldsymbol{A}, \boldsymbol{B}) = \int_Q \overline{\boldsymbol{A}(\mathbf{x})} : \boldsymbol{B}(\mathbf{x})~\text{d}\mathbf{x} ~. $$

In Fourier space, we have

$$ (\boldsymbol{A}, \boldsymbol{B}) = \int_{\hat{Q}} \overline{\widehat{\boldsymbol{A}}(\mathbf{k})} : \widehat{\boldsymbol{B}}(\mathbf{k})~\text{d}\mathbf{k} ~. $$

Again we decompose the space $$\mathcal{H}$$ into three orthogonal subspaces $$\mathcal{U}$$, $$\mathcal{J}$$, and $$\mathcal{E}$$ where


 * 1) $$\mathcal{U}$$ is the subspace of uniform fields, i.e., $$\boldsymbol{A}(\mathbf{x})$$ is independent of $$\mathbf{x}$$, or in Fourier space, $$\widehat{\boldsymbol{A}}(\mathbf{k}) = \boldsymbol{\mathit{0}}$$ unless $$\mathbf{k} = \boldsymbol{0}$$.
 * 2) $$\mathcal{J}$$ is the subspace of zero divergence, zero average value fields, i.e., $$\boldsymbol{\nabla} \cdot \boldsymbol{A}(\mathbf{x}) = \boldsymbol{0}$$ and $$\langle \boldsymbol{A}(\mathbf{x}) \rangle = \boldsymbol{\mathit{0}}$$, or in Fourier space, $$\widehat{\boldsymbol{A}}(\boldsymbol{0}) = \boldsymbol{\mathit{0}}$$ and $$\widehat{\boldsymbol{A}}(\mathbf{k})\cdot\mathbf{k} = \boldsymbol{0}$$.
 * 3) $$\mathcal{E}$$ is the subspace of zero average "strain" fields, i.e., symmetrized gradients of $$Q$$-periodic displacements with $$\langle \boldsymbol{A}(\mathbf{x}) \rangle = \boldsymbol{\mathit{0}}$$, or in Fourier space, $$\widehat{\boldsymbol{A}}(\boldsymbol{0}) = \boldsymbol{\mathit{0}}$$ and $$\widehat{\boldsymbol{A}}(\mathbf{k}) = \mathbf{k}\otimes\mathbf{u}(\mathbf{k}) + \mathbf{u}(\mathbf{k})\otimes\mathbf{k}$$ for some vector $$\mathbf{u}(\mathbf{k})$$.
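
As in the electromagnetic case, a single "strain" mode is orthogonal to a single divergence-free mode; a minimal check with arbitrary real test data:

```python
# Orthogonality of single Fourier modes in elasticity: a strain mode
# A_hat = k (x) u + u (x) k and a symmetric divergence-free mode B_hat
# (with B_hat . k = 0) have zero inner product.

k = [1.0, 2.0, 2.0]                             # arbitrary wave vector
u = [0.5, -1.0, 3.0]                            # arbitrary displacement amplitude
A = [[k[i] * u[j] + u[i] * k[j] for j in range(3)] for i in range(3)]

# Symmetric B with B . k = 0, built from vectors orthogonal to k
p, q = [2.0, -1.0, 0.0], [2.0, 2.0, -3.0]       # both orthogonal to k
B = [[p[i] * p[j] + q[i] * q[j] for j in range(3)] for i in range(3)]
assert all(abs(sum(B[i][j] * k[j] for j in range(3))) < 1e-12 for i in range(3))

inner = sum(A[i][j] * B[i][j] for i in range(3) for j in range(3))
assert abs(inner) < 1e-12
```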

Problem of determining the effective tensor in an abstract setting
Let us first consider the problem of determining the effective permittivity. The approach will be to split the relevant fields into components that belong to orthogonal subspaces of $$\mathcal{H}$$.

Since $$\boldsymbol{\nabla}\cdot\mathbf{D} = 0$$, we can split $$\mathbf{D}(\mathbf{x})$$ into two parts

$$ \mathbf{D}(\mathbf{x}) = \mathbf{D}_0 + \mathbf{D}_1(\mathbf{x}) $$

where $$\mathbf{D}_0 \in \mathcal{U}$$ and $$\mathbf{D}_1 \in \mathcal{J}$$.

Also, since $$\boldsymbol{\nabla}\times\mathbf{E} = \boldsymbol{0}$$, we can split $$\mathbf{E}(\mathbf{x})$$ into two parts

$$ \mathbf{E}(\mathbf{x}) = \mathbf{E}_0 + \mathbf{E}_1(\mathbf{x}) $$

where $$\mathbf{E}_0 \in \mathcal{U}$$ and $$\mathbf{E}_1 \in \mathcal{E}$$.

The constitutive relation linking $$\mathbf{D}$$ and $$\mathbf{E}$$ is

$$ \mathbf{D}(\mathbf{x}) = \boldsymbol{\epsilon}(\mathbf{x})\cdot\mathbf{E}(\mathbf{x}) \equiv \boldsymbol{\epsilon}(\mathbf{E}) $$

where $$\boldsymbol{\epsilon}(\bullet)$$ can be thought of as an operator which is local in real space and maps $$\mathcal{H}$$ to $$\mathcal{H}$$. Therefore, we can write

$$ \mathbf{D}_0 + \mathbf{D}_1 = \boldsymbol{\epsilon}(\mathbf{E}_0 + \mathbf{E}_1) ~. $$

The effective permittivity $$\boldsymbol{\epsilon}_\text{eff}$$ is defined through the relation

$$ \mathbf{D}_0 = \boldsymbol{\epsilon}_\text{eff}(\mathbf{E}_0) ~. $$

Let $$\boldsymbol{\mathit{\Gamma}}_1$$ denote the operator that projects any field in $$\mathcal{H}$$ onto the subspace $$\mathcal{E}$$. This projection is local in Fourier space. We can show that, if

$$ \mathbf{b}(\mathbf{x}) = \boldsymbol{\mathit{\Gamma}}_1\cdot\mathbf{a}(\mathbf{x}) $$

then

$$ \widehat{\mathbf{b}}(\mathbf{k}) = \boldsymbol{\mathit{\widehat{\Gamma}}}_1(\mathbf{k})\cdot\widehat{\mathbf{a}}(\mathbf{k}) $$

where

$$ \boldsymbol{\mathit{\widehat{\Gamma}}}_1(\mathbf{k}) = \begin{cases} \cfrac{\mathbf{k}\otimes\mathbf{k}}{|\mathbf{k}|^2} = \boldsymbol{\mathit{\Gamma}}_1(\mathbf{n}) & \text{with}~\mathbf{n} = \mathbf{k}/|\mathbf{k}| ~,~\text{if}~\mathbf{k} \ne \boldsymbol{0} \\ \boldsymbol{\mathit{0}} & \text{if}~\mathbf{k} = \boldsymbol{0} ~. \end{cases} $$

More generally, if we choose some reference tensor $$\boldsymbol{\epsilon}_0$$, we can define an operator $$\boldsymbol{\mathit{\Gamma}}$$ which is local in Fourier space via the relation

$$ \mathbf{b}(\mathbf{x}) = \boldsymbol{\mathit{\Gamma}}\cdot\mathbf{a}(\mathbf{x}) $$

if and only if

$$ \mathbf{b} \in \mathcal{E} \qquad \text{and} \qquad \mathbf{a} - \boldsymbol{\epsilon}_0(\mathbf{b}) \in \mathcal{U} \oplus \mathcal{J} ~, \quad \text{i.e.,} \quad \boldsymbol{\nabla} \cdot [\mathbf{a} - \boldsymbol{\epsilon}_0(\mathbf{b})] = 0 ~. $$

In Fourier space,

$$ \widehat{\mathbf{b}}(\mathbf{k}) = \boldsymbol{\mathit{\widehat{\Gamma}}}(\mathbf{k})\cdot\widehat{\mathbf{a}}(\mathbf{k}) $$

where

$$ \boldsymbol{\mathit{\widehat{\Gamma}}}(\mathbf{k}) = \begin{cases} \cfrac{\mathbf{k}\otimes\mathbf{k}}{\mathbf{k}\cdot\boldsymbol{\epsilon}_0\cdot\mathbf{k}} = \boldsymbol{\mathit{\Gamma}}(\mathbf{n}) & \text{with}~\mathbf{n} = \mathbf{k}/|\mathbf{k}| ~,~\text{if}~\mathbf{k} \ne \boldsymbol{0} \\ \boldsymbol{\mathit{0}} & \text{if}~\mathbf{k} = \boldsymbol{0} ~. \end{cases} $$

In the next lecture we will derive relations for the effective tensors using these ideas.
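
As a closing sanity check, the multiplier $$\boldsymbol{\mathit{\widehat{\Gamma}}}_1(\mathbf{k})$$ defined above is the orthogonal projection onto the direction of $$\mathbf{k}$$; the sketch below verifies idempotency and that it fixes $$\mathbf{k}$$, using an arbitrary test vector.

```python
# Gamma_1_hat(k) = k k^T / |k|^2 is a projection: G . G = G and G . k = k.

k = [3.0, -1.0, 2.0]                    # arbitrary nonzero wave vector
k2 = sum(ki * ki for ki in k)
G = [[k[i] * k[j] / k2 for j in range(3)] for i in range(3)]

GG = [[sum(G[i][p] * G[p][j] for p in range(3)) for j in range(3)] for i in range(3)]
for i in range(3):
    for j in range(3):
        assert abs(GG[i][j] - G[i][j]) < 1e-12          # idempotent: G . G = G

Gk = [sum(G[i][j] * k[j] for j in range(3)) for i in range(3)]
assert all(abs(x - y) < 1e-12 for x, y in zip(Gk, k))   # fixes k: G . k = k
```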