Waves in composites and metamaterials/Effective tensors using Hilbert space formalism

The content of these notes is based on the lectures by Prof. Graeme W. Milton (University of Utah) given in a course on metamaterials in Spring 2007.

Recap
In the previous lecture we introduced the Hilbert space $$\mathcal{H}$$ of square-integrable, $$Q$$-periodic, complex vector fields with the inner product

$$ (\mathbf{a}, \mathbf{b}) = \int_Q \overline{\mathbf{a}(\mathbf{x})}\cdot\mathbf{b}(\mathbf{x})~\text{d}\mathbf{x} \equiv \int_{\hat{Q}} \overline{\widehat{\mathbf{a}}(\mathbf{k})}\cdot\widehat{\mathbf{b}}(\mathbf{k})~\text{d}\mathbf{k} ~. $$

We then decomposed $$\mathcal{H}$$ into three orthogonal subspaces.


 * 1) The subspace $$\mathcal{U}$$ of uniform fields, i.e., $$\mathbf{a}(\mathbf{x})$$ is independent of $$\mathbf{x}$$, or in Fourier space, $$\widehat{\mathbf{a}}(\mathbf{k}) = \boldsymbol{0}$$ unless $$\mathbf{k} = \boldsymbol{0}$$.
 * 2) The subspace $$\mathcal{J}$$ of zero divergence, zero average value fields, i.e., $$\boldsymbol{\nabla} \cdot \mathbf{a}(\mathbf{x}) = 0$$ and $$\langle \mathbf{a}(\mathbf{x}) \rangle = \boldsymbol{0}$$, or in Fourier space, $$\widehat{\mathbf{a}}(\boldsymbol{0}) = \boldsymbol{0}$$ and $$\mathbf{k}\cdot\widehat{\mathbf{a}}(\mathbf{k}) = 0$$.
 * 3) The subspace $$\mathcal{E}$$ of zero curl, zero average value fields, i.e., $$\boldsymbol{\nabla} \times \mathbf{a}(\mathbf{x}) = \boldsymbol{0}$$ and $$\langle \mathbf{a}(\mathbf{x}) \rangle = \boldsymbol{0}$$, or in Fourier space, $$\widehat{\mathbf{a}}(\boldsymbol{0}) = \boldsymbol{0}$$ and $$\widehat{\mathbf{a}}(\mathbf{k}) = \alpha(\mathbf{k})~\mathbf{k}$$.

To determine the effective permittivity, we introduced the operator $$\boldsymbol{\mathit{\Gamma}}$$ with the properties
 * $$ \text{(1)} \qquad \mathbf{b}(\mathbf{x}) = \boldsymbol{\mathit{\Gamma}}\cdot\mathbf{a}(\mathbf{x}) $$ if and only if $$ \mathbf{b} \in \mathcal{E} \qquad \text{and} \qquad \mathbf{a} - \boldsymbol{\epsilon}_0\cdot\mathbf{b} \in \mathcal{U} \oplus \mathcal{J} \quad \implies \quad \boldsymbol{\nabla} \cdot (\mathbf{a} - \boldsymbol{\epsilon}_0\cdot\mathbf{b}) = 0 ~. $$ We also found that in Fourier space, $$ \widehat{\mathbf{b}}(\mathbf{k}) = \boldsymbol{\mathit{\widehat{\Gamma}}}(\mathbf{k})\cdot\widehat{\mathbf{a}}(\mathbf{k}) $$ where
 * $$ \text{(2)} \qquad \boldsymbol{\mathit{\widehat{\Gamma}}}(\mathbf{k}) = \begin{cases} \cfrac{\mathbf{k}\otimes\mathbf{k}}{\mathbf{k}\cdot\boldsymbol{\epsilon}_0\cdot\mathbf{k}} = \boldsymbol{\mathit{\Gamma}}(\mathbf{n}) & \qquad \text{with } \mathbf{n} = \cfrac{\mathbf{k}}{|\mathbf{k}|} ~\text{ if } \mathbf{k} \ne \boldsymbol{0} \\ \boldsymbol{\mathit{0}} & \qquad \text{if } \mathbf{k} = \boldsymbol{0} ~. \end{cases} $$
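Equations (1) and (2) can be verified for a single Fourier mode in a few lines of numpy. This is an illustrative sketch (the tensor $$\boldsymbol{\epsilon}_0$$, wave vector $$\mathbf{k}$$, and coefficient $$\widehat{\mathbf{a}}$$ are arbitrary): the output $$\widehat{\mathbf{b}} = \boldsymbol{\mathit{\widehat{\Gamma}}}(\mathbf{k})\cdot\widehat{\mathbf{a}}$$ is parallel to $$\mathbf{k}$$ (so $$\mathbf{b} \in \mathcal{E}$$), and $$\mathbf{k}\cdot(\widehat{\mathbf{a}} - \boldsymbol{\epsilon}_0\cdot\widehat{\mathbf{b}}) = 0$$ (so $$\mathbf{a} - \boldsymbol{\epsilon}_0\cdot\mathbf{b}$$ is divergence free):

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary symmetric positive-definite reference tensor eps0.
A = rng.standard_normal((3, 3))
eps0 = A @ A.T + 3.0 * np.eye(3)

k = rng.standard_normal(3)            # a nonzero wave vector
G = np.outer(k, k) / (k @ eps0 @ k)   # Gamma_hat(k) from equation (2)

a = rng.standard_normal(3)            # Fourier coefficient a_hat(k)
b = G @ a                             # b_hat(k) = Gamma_hat(k) . a_hat(k)

# b_hat is of the form alpha(k) k, i.e. b lies in the subspace E.
assert np.allclose(np.cross(b, k), 0.0)

# k . (a_hat - eps0 . b_hat) = 0, i.e. a - eps0 . b is divergence free.
assert np.isclose(k @ (a - eps0 @ b), 0.0)
```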

Deriving a formula for the effective permittivity
Let us now derive a formula for the effective tensor. Recall that the polarization is defined as
 * $$ \text{(3)} \qquad \mathbf{P} = (\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0)\cdot\mathbf{E} = \mathbf{D} - \boldsymbol{\epsilon}_0\cdot\mathbf{E} $$ where the permittivity tensor is thought of as an operator acting on $$\mathbf{E}$$.

Also notice that
 * $$ \text{(4)} \qquad \langle \mathbf{E} \rangle - \mathbf{E} \in \mathcal{E} \qquad \text{(curl free)}. $$ From (3) we have
 * $$ \text{(5)} \qquad \mathbf{P} - \boldsymbol{\epsilon}_0\cdot[\langle \mathbf{E} \rangle - \mathbf{E}] = \mathbf{D} - \boldsymbol{\epsilon}_0\cdot\langle \mathbf{E} \rangle \in \mathcal{U}\oplus\mathcal{J} \qquad \text{(divergence free)}. $$ From the definition of $$\boldsymbol{\mathit{\Gamma}}$$ (equations (1) and (2)) and using (4) and (5), we can show that
 * $$ \text{(6)} \qquad \boldsymbol{\mathit{\Gamma}}\cdot\mathbf{P} = \langle \mathbf{E} \rangle - \mathbf{E} ~. $$ Let $$\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0$$ act on both sides of (6). Then we get $$ (\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0)\cdot\boldsymbol{\mathit{\Gamma}}\cdot\mathbf{P} = (\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0)\cdot\langle \mathbf{E} \rangle - (\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0)\cdot\mathbf{E} = (\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0)\cdot\langle \mathbf{E} \rangle - \mathbf{P} ~. $$ Therefore,
 * $$ \text{(7)} \qquad [\boldsymbol{\mathit{1}} + (\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0)\cdot\boldsymbol{\mathit{\Gamma}}]\cdot\mathbf{P} = (\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0)\cdot\langle \mathbf{E} \rangle ~. $$ Inverting (7) gives
 * $$ \text{(8)} \qquad \mathbf{P} = [\boldsymbol{\mathit{1}} + (\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0)\cdot\boldsymbol{\mathit{\Gamma}}]^{-1}\cdot (\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0)\cdot\langle \mathbf{E} \rangle ~. $$ Averaging (8) gives
 * $$ \text{(9)} \qquad \langle \mathbf{P} \rangle = \langle [\boldsymbol{\mathit{1}} + (\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0)\cdot\boldsymbol{\mathit{\Gamma}}]^{-1} \cdot (\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0) \rangle\cdot\langle \mathbf{E} \rangle ~. $$ Averaging (3) leads to the relation
 * $$ \text{(10)} \qquad \langle \mathbf{P} \rangle = (\boldsymbol{\epsilon}_\text{eff} - \boldsymbol{\epsilon}_0)\cdot\langle \mathbf{E} \rangle ~. $$ Comparing (9) and (10) shows us that
 * $$ \text{(11)} \qquad { \boldsymbol{\epsilon}_\text{eff} = \boldsymbol{\epsilon}_0 + \langle [\boldsymbol{\mathit{1}} + (\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0)\cdot\boldsymbol{\mathit{\Gamma}}]^{-1} \cdot(\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0)\rangle~. } $$ Recall from the previous lecture that, in equation (11), the operator $$\boldsymbol{\epsilon}$$ is local in real space while the operator $$\boldsymbol{\mathit{\Gamma}}$$ is local in Fourier space. It is therefore not obvious how one can invert $$[\boldsymbol{\mathit{1}} + (\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0)\cdot\boldsymbol{\mathit{\Gamma}}]$$.
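The inversion becomes tractable once the inverse is expanded in a series, as is done next. As a reminder of why such an expansion works, here is the finite-dimensional analogue in numpy (a sketch; the matrix $$A$$ is an arbitrary stand-in for $$(\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0)\cdot\boldsymbol{\mathit{\Gamma}}$$, rescaled so that its norm is below one):

```python
import numpy as np

rng = np.random.default_rng(2)

# A stands in for (eps - eps0) . Gamma; rescale so that ||A|| < 1.
A = rng.standard_normal((4, 4))
A *= 0.5 / np.linalg.norm(A, 2)

# Partial sums of sum_n (-A)^n converge to [1 + A]^{-1}.
S, term = np.zeros((4, 4)), np.eye(4)
for _ in range(200):
    S += term
    term = -A @ term

assert np.allclose(S, np.linalg.inv(np.eye(4) + A))
```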

Let us define

$$ \boldsymbol{\delta} := \boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0~. $$ Then (8) can be written as
$$ \mathbf{P} = [\boldsymbol{\mathit{1}} + \boldsymbol{\delta}\cdot\boldsymbol{\mathit{\Gamma}}]^{-1}\cdot \boldsymbol{\delta}\cdot\langle \mathbf{E} \rangle ~. $$ Assuming that $$\lVert\boldsymbol{\delta}\cdot\boldsymbol{\mathit{\Gamma}}\rVert < 1$$, we can expand the inverse in a Neumann series, i.e.,

$$ [\boldsymbol{\mathit{1}} + \boldsymbol{\delta}\cdot\boldsymbol{\mathit{\Gamma}}]^{-1} = \sum_{n=0}^\infty (-\boldsymbol{\delta}\cdot\boldsymbol{\mathit{\Gamma}})^n ~. $$ Then we have
$$ \mathbf{P} = \left[\sum_{n=0}^\infty (-\boldsymbol{\delta}\cdot\boldsymbol{\mathit{\Gamma}})^n\right]\cdot \boldsymbol{\delta}\cdot\langle \mathbf{E} \rangle ~. $$ Also, from the definition of $$\mathbf{P}$$, we have
$$ \mathbf{P} = \boldsymbol{\delta}\cdot\mathbf{E} ~. $$ Hence,
$$ \boldsymbol{\delta}\cdot\mathbf{E} = \left[\sum_{n=0}^\infty (-\boldsymbol{\delta}\cdot\boldsymbol{\mathit{\Gamma}})^n\right]\cdot \boldsymbol{\delta}\cdot\langle \mathbf{E} \rangle ~. $$ Now,
$$ (-\boldsymbol{\delta}\cdot\boldsymbol{\mathit{\Gamma}})^n\cdot\boldsymbol{\delta} = (-1)^n~\boldsymbol{\delta}\cdot\boldsymbol{\mathit{\Gamma}}\cdot\boldsymbol{\delta}\cdot\boldsymbol{\mathit{\Gamma}}\cdots\boldsymbol{\delta}\cdot\boldsymbol{\mathit{\Gamma}}\cdot\boldsymbol{\delta} = \boldsymbol{\delta}\cdot(-\boldsymbol{\mathit{\Gamma}}\cdot\boldsymbol{\delta})^n ~. $$ Therefore,

$$ \boldsymbol{\delta}\cdot\mathbf{E} = \boldsymbol{\delta}\cdot\left[\sum_{n=0}^\infty (-\boldsymbol{\mathit{\Gamma}}\cdot\boldsymbol{\delta})^n\right]\cdot \langle \mathbf{E} \rangle $$ or,
 * $$ \text{(12)} \qquad \mathbf{E} = \left[\sum_{n=0}^\infty (-\boldsymbol{\mathit{\Gamma}}\cdot\boldsymbol{\delta})^n\right]\cdot \langle \mathbf{E} \rangle ~. $$ Let us now define
 * $$ \text{(13)} \qquad \begin{align} \mathbf{P}^{(m)} & := \left[\sum_{n=0}^m (-\boldsymbol{\delta}\cdot\boldsymbol{\mathit{\Gamma}})^n\right]\cdot \boldsymbol{\delta}\cdot\langle \mathbf{E} \rangle \\ \mathbf{E}^{(m)} & := \left[\sum_{n=0}^m (-\boldsymbol{\mathit{\Gamma}}\cdot\boldsymbol{\delta})^n\right]\cdot\langle \mathbf{E} \rangle ~. \end{align} $$ Then we can write
 * $$ \text{(14)} \qquad \boldsymbol{\mathit{\Gamma}}\cdot\mathbf{P}^{(m)} = \langle \mathbf{E} \rangle - \mathbf{E}^{(m+1)} \qquad \text{and} \qquad \boldsymbol{\delta} \cdot \mathbf{E}^{(m)} = \mathbf{P}^{(m)} ~. $$ These recurrence relations may be used to compute the fields inductively. An algorithm that may be used is outlined below:


 * Set $$m = 0$$. Then $$ \mathbf{E}^{(0)} = \langle \mathbf{E} \rangle ~. $$
 * Compute $$\mathbf{P}^{(m)}(\mathbf{x})$$ in real space using the relation $$ \mathbf{P}^{(m)} = \boldsymbol{\delta}\cdot\mathbf{E}^{(m)} ~. $$
 * Take a fast Fourier transform to find $$\widehat{\mathbf{P}}^{(m)}(\mathbf{k})$$.
 * Compute $$\widehat{\mathbf{E}}^{(m+1)}(\mathbf{k})$$ in Fourier space; from (14) we get $$ \widehat{\mathbf{E}}^{(m+1)}(\mathbf{k}) = \begin{cases} - \boldsymbol{\mathit{\widehat{\Gamma}}}(\mathbf{k})\cdot\widehat{\mathbf{P}}^{(m)}(\mathbf{k}) & \text{for } \mathbf{k} \ne \boldsymbol{0} \\ \langle \mathbf{E} \rangle & \text{for } \mathbf{k} = \boldsymbol{0} ~. \end{cases} $$
 * Take an inverse fast Fourier transform to find $$\mathbf{E}^{(m+1)}(\mathbf{x})$$ in real space.
 * Increment $$m$$ by 1 and repeat.

This is the method of Moulinec and Suquet (Mouli94). The method also extends to nonlinear problems (Mouli98). However, there are other iterative methods that have faster convergence.
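The steps above can be sketched in a few lines of numpy for a two-dimensional cell with isotropic scalar phases and isotropic $$\epsilon_0$$ (so that $$\boldsymbol{\mathit{\widehat{\Gamma}}}(\mathbf{k}) = \mathbf{k}\otimes\mathbf{k}/(\epsilon_0|\mathbf{k}|^2)$$). This is a minimal illustration, not a faithful reproduction of the published code; the laminate geometry, grid size, and permittivity values are arbitrary choices, picked because a laminate's $$\epsilon_{\text{eff},xx}$$ is known exactly (the harmonic mean), which checks the iteration:

```python
import numpy as np

N = 32                      # grid points per side of the unit cell
x = (np.arange(N) + 0.5) / N

# Two-phase laminate: eps depends on x only (layers normal to e_x).
eps = np.where(x[:, None] < 0.5, 1.0, 3.0) * np.ones((N, N))
eps0 = 2.0                  # isotropic reference permittivity
E_avg = np.array([1.0, 0.0])  # applied average field along e_x

# Wave vectors and Gamma_hat(k) = k (x) k / (eps0 |k|^2).
k = np.stack(np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N),
                         indexing="ij"), axis=-1)
k2 = np.sum(k * k, axis=-1)
k2[0, 0] = 1.0              # avoid 0/0; the k = 0 mode is set separately

E = np.broadcast_to(E_avg, (N, N, 2)).copy()
for m in range(200):
    P = (eps - eps0)[..., None] * E           # P^(m) in real space
    Ph = np.fft.fft2(P, axes=(0, 1))          # FFT -> P_hat^(m)
    kP = np.sum(k * Ph, axis=-1)              # k . P_hat
    Eh = -k * (kP / (eps0 * k2))[..., None]   # -Gamma_hat . P_hat
    Eh[0, 0] = E_avg * N * N                  # k = 0 mode carries <E>
    E = np.fft.ifft2(Eh, axes=(0, 1)).real    # back to real space

# <D> = eps_eff . <E>; for this laminate eps_eff,xx is the harmonic mean.
D_avg = np.mean(eps[..., None] * E, axis=(0, 1))
print(D_avg[0])   # ~ 1.5 = 2/(1/1 + 1/3), the harmonic mean
```

Here $$\gamma = \max_i|\epsilon_i - \epsilon_0|/|\epsilon_0| = 0.5$$, so the iteration contracts by at least a factor of two per step.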

Convergence of expansions
For simplicity, let us assume that $$\boldsymbol{\epsilon}_0$$ is isotropic, i.e., $$\boldsymbol{\epsilon}_0 = \epsilon_0~\boldsymbol{\mathit{1}}$$. Then,

$$ \boldsymbol{\mathit{\Gamma}} = \cfrac{\boldsymbol{\mathit{\Gamma}}_1}{\epsilon_0} $$ where $$\boldsymbol{\mathit{\Gamma}}_1$$ is the projection from $$\mathcal{H}$$ onto $$\mathcal{E}$$.

Define the norm of a field $$\mathbf{P}$$ as

$$ |\mathbf{P}| := (\mathbf{P}, \mathbf{P})^{1/2} = \langle \overline{\mathbf{P}}\cdot\mathbf{P} \rangle^{1/2} ~. $$ Also, define the norm of a linear operator $$\boldsymbol{A}$$ acting on fields $$\mathbf{P} \in \mathcal{H}$$ as

$$ \lVert\boldsymbol{A}\rVert := \sup_{\mathbf{P}\in\mathcal{H},~ |\mathbf{P}| = 1} |\boldsymbol{A}\cdot\mathbf{P}| ~. $$ Therefore,
$$ |\boldsymbol{A}\cdot\mathbf{P}| \le \lVert\boldsymbol{A}\rVert\cdot|\mathbf{P}| ~. $$ Hence,
$$ |\boldsymbol{A}\cdot\boldsymbol{B}\cdot\mathbf{P}| \le \lVert\boldsymbol{A}\rVert\cdot|\boldsymbol{B}\cdot\mathbf{P}| \le \lVert\boldsymbol{A}\rVert\cdot\lVert\boldsymbol{B}\rVert\cdot|\mathbf{P}| ~. $$ So
$$ \lVert\boldsymbol{A}\cdot\boldsymbol{B}\rVert \le \lVert\boldsymbol{A}\rVert\cdot\lVert\boldsymbol{B}\rVert ~. $$ In addition, we have the triangle inequality
$$ \lVert\boldsymbol{A} + \boldsymbol{B}\rVert \le \lVert\boldsymbol{A}\rVert + \lVert\boldsymbol{B}\rVert ~. $$ So from (12) and (13), we have

$$ \begin{align} |\mathbf{E} - \mathbf{E}^{(m)}| & = \left|\sum_{n=m+1}^\infty (-\boldsymbol{\mathit{\Gamma}}\cdot\boldsymbol{\delta})^n\cdot\langle \mathbf{E} \rangle\right| \\ & \le \left\lVert\sum_{n=m+1}^\infty (-\boldsymbol{\mathit{\Gamma}}\cdot\boldsymbol{\delta})^n\right\rVert \cdot|\langle \mathbf{E} \rangle| \\ & \le \left(\sum_{n=m+1}^\infty \left\lVert\cfrac{\boldsymbol{\delta}}{\epsilon_0}\right\rVert^n\cdot \lVert\boldsymbol{\mathit{\Gamma}}_1\rVert^n\right) \cdot|\langle \mathbf{E} \rangle| ~. \end{align} $$ But $$\lVert\boldsymbol{\mathit{\Gamma}}_1\rVert = 1$$ since it is a projection onto $$\mathcal{E}$$. Hence,

$$ |\mathbf{E} - \mathbf{E}^{(m)}| \le \left(\sum_{n=m+1}^\infty \left\lVert\cfrac{\boldsymbol{\delta}}{\epsilon_0}\right\rVert^n\right) \cdot|\langle \mathbf{E} \rangle| ~. $$ Therefore the series converges provided
$$ \left\lVert\cfrac{\boldsymbol{\delta}}{\epsilon_0}\right\rVert < 1 ~. $$ In that case
$$ |\mathbf{E} - \mathbf{E}^{(m)}| \le \cfrac{\gamma^{m+1}}{1-\gamma}~|\langle \mathbf{E} \rangle| $$ where
$$ \gamma := \left\lVert\cfrac{\boldsymbol{\delta}}{\epsilon_0}\right\rVert~. $$ To get a better understanding of the norm $$\gamma$$, let us consider an $$n$$-phase composite with isotropic phases, i.e.,

$$ \boldsymbol{\epsilon} = \sum_{i=1}^n \chi_i~\epsilon_i~\boldsymbol{\mathit{1}} $$ where
$$ \chi_i = \begin{cases} 1 & \text{in phase } i \\ 0 & \text{otherwise}. \end{cases} $$ In this case,

$$ \gamma = \left\lVert \cfrac{\boldsymbol{\delta}}{\epsilon_0} \right\rVert = \sup_{\mathbf{P}\in\mathcal{H},~|\mathbf{P}|=1} \left[\sum_{i=1}^n \left\langle \overline{\left(\cfrac{\epsilon_i-\epsilon_0}{\epsilon_0}~\mathbf{P}_i\right)} \cdot \left(\cfrac{\epsilon_i-\epsilon_0}{\epsilon_0}~\mathbf{P}_i\right)\right\rangle\right]^{1/2} $$ where
$$ \mathbf{P}_i = \chi_i~\mathbf{P} \quad \text{and} \quad |\mathbf{P}| = \left(\sum_i \langle \overline{\mathbf{P}}_i\cdot\mathbf{P}_i \rangle\right)^{1/2} = 1 ~. $$ Hence,
$$ \gamma = \sup_{\mathbf{P}\in\mathcal{H},~|\mathbf{P}|=1} \left[\sum_{i=1}^n \left|\cfrac{\epsilon_i-\epsilon_0}{\epsilon_0}\right|^2 \langle \overline{\mathbf{P}}_i \cdot\mathbf{P}_i\rangle\right]^{1/2} ~. $$ Since the quantities $$\langle \overline{\mathbf{P}}_i \cdot\mathbf{P}_i\rangle$$ are weights that sum to one, the supremum is attained by concentrating all the weight in the phase where $$|\epsilon_i - \epsilon_0|$$ is largest. Hence, we can write
$$ \gamma = \max_i \left|\cfrac{\epsilon_i-\epsilon_0}{\epsilon_0}\right| = \max_i \cfrac{|\epsilon_i-\epsilon_0|}{|\epsilon_0|} ~. $$ For $$\gamma$$ to be less than 1, we therefore require that, for all $$i$$,

$$ |\epsilon_i-\epsilon_0| < |\epsilon_0| ~. $$ A geometrical representation of this situation is shown in Figure 1. If $$|\epsilon_0|$$ is sufficiently large, then we get convergence whenever all the $$\epsilon_i$$ lie in one half of the complex plane (shown by the green line in the figure).
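The convergence test is easy to evaluate for given phase permittivities. A short sketch (the values of $$\epsilon_i$$ and $$\epsilon_0$$ are arbitrary lossy-dielectric choices) that also evaluates the tail bound derived above:

```python
import numpy as np

# Phase permittivities lying in one half of the complex plane (lossy phases).
eps = np.array([2 + 1j, 5 + 0.5j])

# A reference eps0 chosen "deep" in the same half-plane.
eps0 = 4 + 4j

gamma = np.max(np.abs(eps - eps0)) / np.abs(eps0)
print(gamma)    # ~ 0.64, so gamma < 1 and the series converges

# The tail bound |E - E^(m)| <= gamma^(m+1)/(1 - gamma) |<E>| after m terms:
m = 20
tail = gamma ** (m + 1) / (1 - gamma)
assert gamma < 1 and tail < 1e-3
```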

Similarly, we can expand

$$ \boldsymbol{\epsilon}_\text{eff} = \boldsymbol{\epsilon}_0 + \langle [\boldsymbol{\mathit{1}} + (\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_0)\cdot\boldsymbol{\mathit{\Gamma}}]^{-1} \cdot(\boldsymbol{\epsilon}-\boldsymbol{\epsilon}_0)\rangle $$ in the form
 * $$ \text{(15)} \qquad \boldsymbol{\epsilon}_\text{eff} = \boldsymbol{\epsilon}_0 + \sum_{j=0}^{\infty} \boldsymbol{\mathit{\Gamma}}_0\cdot\boldsymbol{\delta} \cdot(-\boldsymbol{\mathit{\Gamma}}\cdot\boldsymbol{\delta})^j\cdot\boldsymbol{\mathit{\Gamma}}_0 $$ where $$\boldsymbol{\mathit{\Gamma}}_0$$ is the projection onto $$\mathcal{U}$$, i.e.,

$$ \boldsymbol{\mathit{\Gamma}}_0\cdot\mathbf{P} = \langle \mathbf{P} \rangle ~. $$ In this case, we find that the series converges provided
$$ |\epsilon_i-\epsilon_0| < |\epsilon_0| \qquad \text{for all } i ~. $$ Note that each term in (15) is an analytic function of $$\epsilon_1$$ (in fact a polynomial). So, if we truncate the series, we have an analytic function of $$\epsilon_1$$.

Since a sequence of analytic functions which is uniformly convergent in a domain converges to a function which is analytic in that domain (see, for example, Rudin76), we deduce that $$\boldsymbol{\epsilon}_\text{eff}$$ is an analytic function of $$\epsilon_1$$ in any disk $$|\epsilon_1 - \epsilon_0| < r$$ with $$r < |\epsilon_0|$$ (see Figure 1), provided $$|\epsilon_i - \epsilon_0| < |\epsilon_0|$$ for $$i = 2, 3, \dots, n$$.

Similarly, the effective tensor is an analytic function of $$\epsilon_2$$, $$\epsilon_3$$, etc.

Since $$\boldsymbol{\epsilon}_\text{eff}$$ is independent of $$\epsilon_0$$, by taking the union of all such regions of analyticity, we conclude that $$\boldsymbol{\epsilon}_\text{eff}$$ is an analytic function of $$\epsilon_1, \dots, \epsilon_n$$ provided all the $$\epsilon_i$$ lie inside a half-plane (see Figure 1). This means that there exists a $$\theta$$ such that

$$ \text{Re}(e^{i\theta}~\epsilon_j) > 0 \quad \text{for all } j ~. $$

Corollary:
A corollary of the above observations is the following. If each $$\epsilon_i(\omega)$$ is an analytic function of $$\omega$$ for $$\text{Im}(\omega)>0$$ (which is what one expects with $$\omega$$ as the frequency) and $$\text{Im}[\epsilon_i(\omega)] > 0$$ for all $$\omega$$ with $$\text{Re}(\omega) > 0$$, then $$\boldsymbol{\epsilon}_\text{eff}(\omega)$$ will be analytic in $$\omega$$.

Another interesting property:
Now, if $$\mathbf{D} = \boldsymbol{\epsilon}\cdot\mathbf{E}$$, we have

$$ \lambda~\mathbf{D} = \lambda~\boldsymbol{\epsilon}\cdot\mathbf{E} \qquad \text{for all constants}~\lambda ~. $$ Therefore,
$$ \langle \mathbf{D} \rangle = \boldsymbol{\epsilon}_\text{eff}\cdot\langle \mathbf{E} \rangle \qquad \implies \qquad \langle \lambda~\mathbf{D} \rangle = \lambda~\boldsymbol{\epsilon}_\text{eff}\cdot\langle \mathbf{E} \rangle ~. $$ This means that
$$ (\lambda~\boldsymbol{\epsilon})_\text{eff} = \lambda~\boldsymbol{\epsilon}_\text{eff} ~. $$ Therefore, $$\boldsymbol{\epsilon}_\text{eff}(\epsilon_1,\epsilon_2, \dots, \epsilon_n)$$ is homogeneous of degree one and
$$ { \boldsymbol{\epsilon}_\text{eff}(\lambda~\epsilon_1, \lambda~\epsilon_2, \dots, \lambda~\epsilon_n) = \lambda~\boldsymbol{\epsilon}_\text{eff}(\epsilon_1, \epsilon_2, \dots, \epsilon_n)~. } $$ For a two-phase composite, if we take
$$ \lambda = \cfrac{1}{\epsilon_2} $$ we get
$$ \boldsymbol{\epsilon}_\text{eff}(\epsilon_1,\epsilon_2) = \epsilon_2~\boldsymbol{\epsilon}_\text{eff}(\epsilon_1/\epsilon_2, 1) ~. $$ Therefore, it suffices to study the analytic function $$\boldsymbol{\epsilon}_\text{eff}(\epsilon_1, 1) = \boldsymbol{\epsilon}_\text{eff}(\epsilon_1)$$. For further details see Milton02.
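Both scaling identities are immediate to check on any closed-form effective permittivity. Here a simple series laminate, for which $$\epsilon_\text{eff} = \langle \epsilon^{-1} \rangle^{-1}$$ (the harmonic mean, a standard exact result used purely as an example), serves as a test case:

```python
import numpy as np

def eps_eff(e1, e2, f1=0.5):
    """Effective permittivity of a series laminate (harmonic mean)."""
    return 1.0 / (f1 / e1 + (1 - f1) / e2)

e1, e2, lam = 2 + 1j, 5 + 0.5j, 3 - 2j

# Homogeneity of degree one: eps_eff(lam e1, lam e2) = lam eps_eff(e1, e2).
assert np.isclose(eps_eff(lam * e1, lam * e2), lam * eps_eff(e1, e2))

# Scaling by lam = 1/e2 reduces the study to one variable e1/e2.
assert np.isclose(eps_eff(e1, e2), e2 * eps_eff(e1 / e2, 1.0))
```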