MyOpenMath/Solutions/Gauss law (TF)/Proof

Gauss's law is based on a coincidence that might not strike you as very remarkable: the surface area of a sphere grows as the square of the radius, while the Coulomb force falls as the inverse square of the radius:
 * $$F=qE=\frac{1}{4\pi\varepsilon_0}\frac{qQ}{r^2}$$

Serious consequences for the theory of electromagnetism (and the nature of light) would result if it were ever found necessary to replace $$r^2$$ by something like $$r^{2.0001}$$ in this formula. For this reason, our discussion of Gauss's law begins with the area of a sphere. First we review the radian and the fact that the circumference of a circle is $$2\pi r$$ (it is also important to be aware of the formula for the surface area of a sphere: $$A_\text{sphere}=4\pi r^2$$).

The radian and the steradian


The radian is defined as arclength divided by radius: $$\theta=s/r$$. If $$s=r$$, then we have an angle of 1 radian, as shown to the left. A full circle measures $$2\pi$$ radians.

For solid angle, we replace the circle by a sphere of radius $$r$$, and we replace the arclength by an area on that sphere. Instead of the radian, defined as the ratio of two lengths, $$\theta=s/r$$, we use the steradian, defined as the ratio of two quantities that are squares of lengths: $$\Omega = A/r^2$$, where $$\Omega$$ is the capital Greek omega and $$A$$ is an area situated on a sphere of radius $$r$$. Since the area of a sphere can be shown to be $$4\pi r^2$$, the solid angle of an entire sphere is $$4\pi$$.
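A quick numerical sketch of this last fact: integrating $$d\Omega = \sin\theta\, d\theta\, d\varphi$$ over the full sphere should give $$4\pi$$. (The grid size `n` below is an arbitrary choice, not anything from the text.)

```python
import math

# Midpoint Riemann sum of dΩ = sinθ dθ dφ over the full sphere.
n = 2000
dtheta = math.pi / n
total = 0.0
for i in range(n):
    theta = (i + 0.5) * dtheta                       # midpoint rule in θ
    total += math.sin(theta) * dtheta * 2 * math.pi  # φ integrates to 2π
print(total)   # ≈ 12.566, i.e. 4π
```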

For a sufficiently small solid angle, the portion of the sphere where the area $$A$$ is calculated is so small that we may calculate the area as if the sphere were a flat surface. In contrast with plane angles, where two arcs described by the same $$\theta$$ and $$r$$ have the same shape, there is no restriction on the shape of the area associated with a solid angle.

The electric field near a two-dimensional surface
Gauss's law is about an integral over a closed surface. When thinking about surface integrals, one needs to imagine dividing up the surface into small sections, typically small quadrilaterals. A closed surface has an "inside" and "outside", such as the bent peanut shown to the left.

To construct these differential surface elements it helps to think about differential (small) solid angles. Consider a small shape of area $$dA$$ on the surface of a sphere with a sufficiently large radius $$r$$:

$$\;\; d\Omega = \frac{dA}{r^2} \text{  (valid only for a sphere).}$$

Shown to the right is a solid angle centered at point O, defined by the circle shown in yellow (dotted outline) at the far right of the figure. Since this is a 3-D image, the circle is depicted as an ellipse from this perspective. The Gaussian surface in this figure surrounds point O, and the surface is shaped like a bent peanut so that the cone exits, re-enters, and then again exits the Gaussian surface.

If the surface's outward unit normal $$\hat n$$ is not oriented along the $$\vec r$$ vector (from origin to surface), we cannot use the differential area $$dA$$ to calculate the differential solid angle $$d\Omega$$, because the differential area of the Gaussian surface is too large. This is illustrated below, where the solid angle differential is now a small rectangle. The surface with polka dots represents a portion of the Gaussian surface, and not all the points on this surface are equidistant from the origin. To calculate the solid angle we require the yellow surface, which is, strictly speaking, a portion of the surface of a sphere of radius $$r$$.

This figure also allows us to visualize the components of a differential surface area. The polka-dotted surface area is the sum of two surface areas that are perpendicular to each other:
 * $$d\vec S = \hat n dA = d\vec A_\parallel + d\vec A_\perp$$,

where $$dA\equiv |d\vec S|$$, and,


 * $$d\vec A_\parallel = \hat r(\hat r\cdot\hat n)\,dA$$

is the component of $$d\vec S $$ parallel to $$\vec r$$. The perpendicular component $$d\vec A_\perp$$ is shown in the figure as the unmarked bottom rectangle in the right triangular prism whose other two sides are the polka-dotted and yellow shaded rectangles in the figure. The reader can verify the Pythagorean identity, $$dA^2=dA_\parallel^2+ dA_\perp^2$$.
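This decomposition and the Pythagorean identity can be checked with explicit vectors. In the sketch below, the patch area $$dA$$, the tilt angle $$\theta$$, and the choice $$\hat r = \hat z$$ are all arbitrary values for illustration.

```python
import math

# Verify dA² = dA∥² + dA⊥² for a tilted surface element.
dA = 1e-4
theta = 0.7                                   # angle between r̂ and n̂
r_hat = (0.0, 0.0, 1.0)
n_hat = (math.sin(theta), 0.0, math.cos(theta))

def norm2(v):
    return sum(c * c for c in v)

dot = sum(a * b for a, b in zip(r_hat, n_hat))    # r̂·n̂ = cosθ
dS = tuple(c * dA for c in n_hat)                 # dS⃗ = n̂ dA
dA_par = tuple(c * dot * dA for c in r_hat)       # dA⃗∥ = r̂(r̂·n̂) dA
dA_perp = tuple(s - p for s, p in zip(dS, dA_par))
assert abs(norm2(dS) - (norm2(dA_par) + norm2(dA_perp))) < 1e-20
```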

We can now express the solid angle differential in terms of a small area that is not necessarily perpendicular to the radius:
 * $$d\Omega = \frac{\cos\theta \,dA}{r^2} = \frac{\hat r\cdot\hat n \,dA}{r^2}$$,

where $$\theta$$ is the angle between $$\hat r$$ and $$\hat n$$. This identity will be used to construct our "proof" of Gauss's law.

Vector fields


A vector field is a vector function of the three spatial coordinates $$(x,y,z)$$ (it can also be a function of time $$t$$). If you include non-Cartesian coordinate systems, vector fields can be described in a number of ways. For example,

$$\vec F(\vec r) = F_x(x,y,z)\hat i + F_y(x,y,z)\hat j + F_z(x,y,z)\hat k$$

$$\vec F(\vec r) = F_r(r,\theta,\varphi)\hat r + F_\theta(r,\theta,\varphi)\hat\theta + F_\varphi(r,\theta,\varphi)\hat\varphi$$

define the same field, first in Cartesian coordinates, and then in spherical coordinates.
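As a check on the equivalence of the two representations, the inverse-square field $$\vec F = \hat r/r^2$$ can be evaluated both ways at one point (the test point below is an arbitrary choice):

```python
import math

# Spherical form:  F⃗ = r̂ / r²   (pure radial component, F_r = 1/r²)
# Cartesian form:  F⃗ = (x, y, z) / r³
x, y, z = 1.0, 2.0, -0.5          # arbitrary test point
r = math.sqrt(x*x + y*y + z*z)
cartesian = (x / r**3, y / r**3, z / r**3)
r_hat = (x / r, y / r, z / r)
spherical = tuple(c / r**2 for c in r_hat)
assert all(abs(a - b) < 1e-12 for a, b in zip(cartesian, spherical))
```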

A theorem for radially directed fields
If the only non-vanishing component of a vector field is the radial one, then,


 * $$F_\theta= F_\varphi= 0$$,

which leaves us with only one component of the vector field:
 * $$\vec F=F_r (r,\theta,\varphi)\hat r$$

It is not always easy to find all the components of the surface elements $$d\vec S = \hat n dA$$ (where we have defined $$dA\equiv |d\vec S|$$.) But fortunately, we have already derived a simple formula for $$\hat r\cdot d\vec S$$:


 * $$\hat r\cdot d\vec S = \hat r\cdot\hat n dA = r^2d\Omega$$,

where $$d\Omega$$ is the differential solid angle as measured from the origin (which is the tail of $$\vec r$$). If a vector field of the three spatial variables is always directed towards or away from the origin, then the surface integral over any shape that encloses the origin is given by:


 * $$\oint \vec F\cdot d\vec S = \oint F_r \hat r\cdot\hat n dA = \oint F_r(r,\theta,\varphi)\,r^2 d\Omega$$,

where in a calculus class you might use $$d\Omega = \sin\theta d\theta d\varphi$$. If the origin is situated inside a simple shape like an ellipsoid or even a cube, we just need to define the distance to the origin as a function of the two angular variables:
 * $$ r = R(\theta, \varphi)$$,

where $$R(\theta, \varphi)$$ is some function. Two simple examples involve any constant value of $$R_0>0$$:
 * $$r=R_0$$ is a sphere of radius $$R_0$$
 * $$r=\frac{R_0}{\sin\theta}$$ is a cylinder of radius $$R_0$$ aligned along the z axis (and $$\theta$$ is measured relative to that axis.)
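The cylinder example can be verified numerically: a point at spherical coordinates $$(r,\theta,\varphi)$$ sits at distance $$r\sin\theta$$ from the z axis, so $$r = R_0/\sin\theta$$ should pin that distance to $$R_0$$. (The numeric radius and sample angles below are arbitrary choices.)

```python
import math

R0 = 2.5                          # hypothetical cylinder radius
for theta in (0.3, 0.9, 1.5, 2.4):
    r = R0 / math.sin(theta)
    x = r * math.sin(theta) * math.cos(0.7)   # any φ works
    y = r * math.sin(theta) * math.sin(0.7)
    # distance from the z axis should equal R0
    assert abs(math.hypot(x, y) - R0) < 1e-12
```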

Later we discuss the complexity associated with more complicated shapes such as the "bent peanut" described above, where it is necessary to introduce r as a multi-valued function because a ray directed from the origin intersects the surface more than once.

A radially directed vector field ($$F_\theta=F_\varphi=0$$) can be integrated over a simple Gaussian surface defined by $$r=R(\theta,\varphi)$$, using this expression:


 * $$\vec F \cdot d\vec S= F_r \hat r\cdot \hat n dA = F_r(r,\theta,\varphi) \,r^2 d\Omega$$

In the last step we set $$F_r=F_r(r,\theta,\varphi)$$ to highlight the fact that no restriction is placed on the vector field, other than the fact that it always points in the radial ($$\hat r$$) direction. Defining the Gaussian surface for the "bent-peanut" shape shown above is a bit tricky because for one orientation (i.e., one value of $$\theta$$ and $$\varphi$$) one ray will pierce the surface at more than one location.

Special case: $$F_r$$ does not depend on $$\theta$$ or $$\varphi$$
The simplest application of this theorem is the case where $$F_r$$ depends only on $$r$$, and something interesting happens when the dependence is inverse square:


 * $$\oint \frac{\hat n\cdot d\vec S}{r^2}\equiv\oint \frac{\hat r\cdot \hat n}{r^2} dA=4\pi \text{ if the origin is inside the closed surface,}$$
 * $$\oint \frac{\hat n\cdot d\vec S}{r^2}\equiv\oint \frac{\hat r\cdot \hat n}{r^2} dA=0 \text{ if the origin is outside the closed surface,}$$

where the origin is the point where $$\vec r = 0$$ and $$\hat n dA \equiv d \vec S$$ denotes integration over a closed surface of any shape. To understand why the integral vanishes when the origin is outside the Gaussian surface, note that any ray (a $$\vec r$$ vector) that pierces the Gaussian surface going inward must also exit it, contributing with the opposite sign. For any such ray (i.e., one that originates outside the Gaussian surface) the terms of the Riemann sum $$\sum\Delta\Omega$$ occur in pairs and do not sum to $$4\pi$$. Instead they cancel as equal and opposite pairs:


 * $$\Delta\Omega_\text{out} = -\Delta\Omega_\text{in}$$
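Both cases can be checked numerically. The sketch below (cube size and grid resolution are arbitrary choices) forms the midpoint Riemann sum of $$\oint \hat r\cdot\hat n\, dA/r^2$$ over a cube, once with the origin inside and once with it outside:

```python
import math
from itertools import product

def flux(origin, n=200):
    """Midpoint Riemann sum of ∮ (r̂·n̂) dA / r² over the cube [-1,1]³."""
    h = 2.0 / n
    total = 0.0
    for axis in range(3):
        for side in (-1.0, 1.0):          # outward normal is ±ê_axis
            for i, j in product(range(n), repeat=2):
                u = -1.0 + (i + 0.5) * h
                v = -1.0 + (j + 0.5) * h
                p = [u, v]
                p.insert(axis, side)      # a point on this face
                r = [p[k] - origin[k] for k in range(3)]
                rmag = math.sqrt(sum(c * c for c in r))
                # (r̂·n̂) dA / r²  =  ±r_axis / |r|³ · h²
                total += side * r[axis] / rmag**3 * h * h
    return total

inside = flux((0.0, 0.0, 0.0))    # origin enclosed by the cube
outside = flux((3.0, 0.0, 0.0))   # origin outside the cube
print(inside, outside)            # ≈ 12.566 (4π) and ≈ 0
```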

Generalization of Gauss's Law beyond the case of a single point charge
For arbitrary charge distributions, it can be shown that:
 * $$\varepsilon_0\oint \vec E\cdot d\vec S = Q_\text{enc}$$,

where,
 * $$Q_\text{enc}= \sum_j q_j \text{ ... or ... } \int_\text{inside}\rho(\vec r') d^3 r'$$,

is the net enclosed charge, which can be a sum over charges or a volume integral (e.g., $$dx'dy'dz'$$) over the charge density. Since mathematically rigorous arguments for this generalization are beyond the scope of most first-year physics courses, this section only outlines the arguments that extend Gauss's law in this fashion.

Multiple point charges
The discussion so far has been restricted to a single point charge, with the added stipulation that the origin of the coordinate system is situated at the location of that point charge. First, we must recognize the implicit assumption that Gauss's law remains valid even if the coordinate system is moved to a different location. This could be accomplished by a change of variables, $$\vec r \rightarrow \vec r - \vec r_0$$, where the constants $$\vec r_0 =[x_0, y_0, z_0]$$ represent the location of the point charge in the original coordinate system. This permits us to use a property called superposition to show that the electric field due to a sum of charges is the sum of the electric fields due to the individual charges:
 * $$\vec E =\frac{1}{4\pi\varepsilon_0} \sum_j {\frac {q_j}{r_j^2}\hat{r_j}}=\sum_j\vec {E_j},$$

where $$\vec {E_j}$$ is the field due to $$q_j$$. We can also appeal to linearity to argue:
 * $$\sum_j\oint \vec E_j \cdot d\vec S=\oint \left(\sum_j\vec E_j \right)\cdot d\vec S = \oint \vec E \cdot d\vec S$$
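A numerical sketch of superposition and enclosed charge (the charge values and positions below are hypothetical, purely for illustration): integrating the superposed Coulomb field over a cube, $$\varepsilon_0\oint\vec E\cdot d\vec S$$ should equal the enclosed charge alone.

```python
import math
from itertools import product

EPS0 = 8.854e-12   # vacuum permittivity, F/m

# Hypothetical charges (coulombs, position): one inside the cube
# [-1,1]³ and one outside it.
charges = [(3e-9, (0.2, -0.1, 0.4)),    # enclosed
           (5e-9, (4.0, 0.0, 0.0))]     # not enclosed

def E(p):
    """Superposed Coulomb field at point p."""
    Ex = Ey = Ez = 0.0
    for q, (cx, cy, cz) in charges:
        rx, ry, rz = p[0] - cx, p[1] - cy, p[2] - cz
        r3 = (rx*rx + ry*ry + rz*rz) ** 1.5
        k = q / (4 * math.pi * EPS0 * r3)
        Ex += k * rx; Ey += k * ry; Ez += k * rz
    return (Ex, Ey, Ez)

n = 120
h = 2.0 / n
flux = 0.0
for axis in range(3):
    for side in (-1.0, 1.0):
        for i, j in product(range(n), repeat=2):
            u = -1.0 + (i + 0.5) * h
            v = -1.0 + (j + 0.5) * h
            p = [u, v]
            p.insert(axis, side)
            flux += side * E(p)[axis] * h * h   # E⃗·n̂ dA

print(EPS0 * flux)   # ≈ 3e-9, the enclosed charge only
```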

Continuous charge density
Some readers might find it interesting that the sum over point charges can also be expressed as an integral over a charge density if we use the three-dimensional Dirac delta function:


 * $$\rho\left(\vec{r}\right)=\sum_j q_j\delta (\vec r-\vec r_j)$$
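As a brief consistency check, integrating this density over the interior of the Gaussian surface recovers $$Q_\text{enc}$$, since each three-dimensional delta function integrates to 1 if $$\vec r_j$$ lies inside the region and to 0 otherwise:

```latex
\int_\text{inside} \rho(\vec r')\, d^3r'
  = \sum_j q_j \int_\text{inside} \delta(\vec r' - \vec r_j)\, d^3r'
  = \sum_{j\,:\,\vec r_j\ \text{inside}} q_j
  = Q_\text{enc}
```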