Dirac's notation

Dirac's, or bra-ket, notation

Ket notation
A ket is just another way to write a column vector. If you have a vector called $$v$$, you can write it as $$\vec{v}$$, as $$[v_i]$$, or as $$|v \rangle$$. This last form is called a ket.

If you transpose a column vector, you obtain a row vector. Dirac's way to write a row vector is a bra: $$\langle v|$$. "Bra" and "ket" are just a pun on "bracket".
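The text is language-agnostic; as a minimal sketch, here is the ket/bra correspondence in Python with NumPy (my choice of tools, not the article's):

```python
import numpy as np

# A ket |v> is a column vector: shape (3, 1) here.
ket_v = np.array([[1.0], [2.0], [3.0]])

# The corresponding bra <v| is its transpose: a row vector of shape (1, 3).
bra_v = ket_v.T
```

The shapes make the column/row distinction explicit, which plain 1-D arrays would hide.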

Operations
Just as you can right-multiply any matrix by a column vector, you can right-multiply it by a ket (which is just another way to write one). As is usual in vector algebra, you omit the multiplication sign ($$\times$$) and simply juxtapose the two factors. So multiplying matrix $$A$$ by ket $$v$$ is written: $$A|v \rangle$$.

The same goes for left-multiplying and bras. Multiplying bra u by matrix B is written: $$\langle u|B$$.

"A bilinear form" is the name given to a matrix intended to be multiplied on both sides by vectors (by a row vector on the left and a column vector on the right). In bra-ket notation, that multiplication is written this way: $$\langle u|A|v \rangle$$.

As the inner product of two vectors is the same as the matrix product of the first vector, transposed, by the second one, we can treat vectors as row or column matrices:

$$u \cdot v = u^t v = \sum_i u_i v_i$$

So the inner product of two vectors is a bilinear form whose matrix is the identity matrix. The identity matrix is, of course, never written, and the inner product of $$u$$ and $$v$$ is then written as $$u^t v$$ or, in bra-ket notation, $$\langle u|v \rangle$$ (note that you write only one "|").
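A quick NumPy check (my sketch, assuming real vectors as in the text) that the inner product really is the bilinear form with the identity matrix:

```python
import numpy as np

ket_u = np.array([[1.0], [2.0], [3.0]])
ket_v = np.array([[4.0], [5.0], [6.0]])

# <u|I|v>: the bilinear form whose matrix is the identity.
inner_full = (ket_u.T @ np.eye(3) @ ket_v)[0, 0]

# <u|v>: the identity omitted, as in the notation.
inner = (ket_u.T @ ket_v)[0, 0]
```

Both expressions give $$1 \cdot 4 + 2 \cdot 5 + 3 \cdot 6 = 32$$.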

Functions as vectors
Functions of one variable can be seen, mathematically, as "infinite" vectors. The analogy is easier to see the other way: vectors can be treated as functions.

If we have, for example, a 5-dimensional vector $$v = [v_i]$$, where $$i$$ ranges from 1 to 5, we can easily write a function $$w(j)$$, where $$j$$ is a discrete variable taking the values 1, 2, 3, 4 or 5, such that $$w(j) = v_j$$.
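The 5-dimensional example is a one-liner in Python (a sketch; the 1-based indexing follows the text, while NumPy itself is 0-based):

```python
import numpy as np

v = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # v_1 .. v_5

def w(j):
    """w(j) = v_j for j in {1, 2, 3, 4, 5} (1-based, as in the text)."""
    return v[j - 1]
```

The vector and the function carry exactly the same information; only the access syntax differs.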

Treating a function over the reals (say, $$f(x)$$) as a vector requires us to accept $$x \in \mathbb{R}$$ as an index (as $$i \in \{1, 2, 3, 4, 5\}$$ was previously). So now, instead of a vector of dimension 5, we have a vector of infinitely many dimensions (as there are infinitely many possible values of $$x$$).

Perhaps it is easier to imagine $$x$$ as belonging only to the integers, ranging from $$-\infty$$ to $$\infty$$. Then $$f$$ is just an infinitely long list of components: $$f = [\dots, f(-1), f(0), f(1), \dots]$$.

Vector algebra is the same
The dot (or inner) product of two vectors consists of multiplying them coordinate by coordinate and then adding up all the products. If your vectors are of continuous dimension (as functions over the reals are), you have to replace the summation with an integration: $$\langle f|g \rangle = \int f(x) g(x) \, dx$$.
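The sum-becomes-integral idea can be checked numerically (a sketch under my own assumptions: functions $$f(x) = x$$ and $$g(x) = 1$$ on $$[0, 1]$$, approximated on a finite grid, where the exact inner product is $$1/2$$):

```python
import numpy as np

# Discretize [0, 1]; the coordinate-by-coordinate sum times dx
# approximates the integral of f(x) * g(x).
x = np.linspace(0.0, 1.0, 10001)
dx = x[1] - x[0]

f = x                   # f(x) = x
g = np.ones_like(x)     # g(x) = 1

inner = np.sum(f * g) * dx   # ~ integral of x dx over [0, 1] = 1/2
```

Refining the grid shrinks the gap between the discrete sum and the true integral.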

Two-variable functions can be thought of as matrices (of infinite and/or continuous dimension if needed).

Matrix-multiplying a two-variable function (a matrix) by a one-variable function (a vector) is done this way: you give the same name to the index of the vector and to the column index of the matrix, you multiply both functions pointwise, and finally you integrate over the common index: $$(Af)(x) = \int A(x, y) f(y) \, dy$$. The result is a function of the matrix's row index (and so a vector, as expected).
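The steps above can be sketched numerically (my own example, with an assumed Gaussian kernel $$A(x, y) = e^{-(x-y)^2}$$ and $$f(y) = \sin(\pi y)$$ sampled on a grid over $$[0, 1]$$):

```python
import numpy as np

# Sample grid; dy plays the role of the integration measure.
y = np.linspace(0.0, 1.0, 1001)
dy = y[1] - y[0]

# The "matrix": a two-variable function sampled on the grid,
# A[i, j] = A(x_i, y_j) = exp(-(x_i - y_j)^2).
A = np.exp(-np.subtract.outer(y, y) ** 2)

# The "vector": a one-variable function sampled on the same grid.
f = np.sin(np.pi * y)

# (A f)(x) = integral of A(x, y) f(y) dy:
# an ordinary matrix product, scaled by dy.
g = (A @ f) * dy
```

The result `g` has the same shape as `f`: a function of the row index $$x$$ alone, i.e. a vector, as the text predicts.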