Robotic Mechanics and Modeling/Inverse Kinematics

Inverse Position Kinematics
The inverse kinematics problem is the opposite of the forward kinematics problem and can be summarized as follows: given the desired position of the end effector, what combinations of joint angles (or prismatic displacements) can be used to achieve this position?



Two types of solutions can be considered: a closed-form solution and a numerical solution. Closed-form or analytical solutions are sets of equations that fully describe the connection between the end-effector position and the joint angles. Numerical solutions are found through the use of numerical algorithms, and can exist even when no closed-form solution is available. There may also be multiple solutions, or no solution at all.

Kinematic Equations for a Planar Three-Link Manipulator
The inverse kinematics problem for this 2D manipulator can be solved algebraically. The solution to the forward kinematics problem is:
 * $$^{ee}_{bs}T = \, ^4_0T(\theta_1, \theta_2, \theta_3) = \begin{bmatrix} c_{123} & -s_{123} & l_1 c_1 + l_2 c_{12} + l_3 c_{123} \\ s_{123} & c_{123} & l_1 s_1 + l_2 s_{12} + l_3 s_{123} \\ 0 & 0 & 1 \end{bmatrix} $$

To find the kinematic equations, we can use the following relationship:

$$ \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = T \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} $$

Thus, the resulting kinematic equations are:

$$ \begin{array}{l} x = l_1 \cos \theta_1 + l_2 \cos(\theta_1 + \theta_2) + l_3 \cos(\theta_1 + \theta_2 + \theta_3) \\ y = l_1 \sin \theta_1 + l_2 \sin(\theta_1 + \theta_2) + l_3 \sin(\theta_1 + \theta_2 + \theta_3) \end{array} $$
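
The kinematic equations above translate directly into code. The following is a minimal sketch (the function name, link lengths, and NumPy usage are illustrative assumptions, not from the text):

```python
import numpy as np

def fk_planar_3link(theta1, theta2, theta3, l1, l2, l3):
    """End-effector position (x, y) of the planar three-link arm,
    computed directly from the kinematic equations above."""
    x = (l1 * np.cos(theta1)
         + l2 * np.cos(theta1 + theta2)
         + l3 * np.cos(theta1 + theta2 + theta3))
    y = (l1 * np.sin(theta1)
         + l2 * np.sin(theta1 + theta2)
         + l3 * np.sin(theta1 + theta2 + theta3))
    return x, y
```

With all joint angles at zero the arm lies stretched along the x-axis, so the end effector sits at $$(l_1 + l_2 + l_3, 0)$$.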

Inverse Kinematic Equations for a Planar Two-Link Manipulator
For simplicity, the inverse problem of the three-link manipulator is first examined as a two-link manipulator by setting the displacement over the distance $$l_3$$ to zero. An appropriate 4x4 homogeneous transform (in 3D), including the orientation of the third body with $$l_3 = 0$$ in the above figure, is the following:



$$ ^3_0 T = \begin{bmatrix} c_{123} & -s_{123} & 0 & l_1 c_1 + l_2 c_{12} \\ s_{123} & c_{123} & 0 & l_1 s_1 + l_2 s_{12} \\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \\ \end{bmatrix} $$

Now assume a given end-effector orientation in the following form:

$$ ^{ee}_{bs} T = \begin{bmatrix} c_{\phi} & -s_{\phi} & 0 & x \\ s_{\phi} & c_{\phi} & 0 & y \\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \\ \end{bmatrix} $$

Equating the two previous expressions results in:

$$ \begin{array}{lll} c_{\phi} & = & c_{123} \\ s_{\phi} & = & s_{123} \\ x & = & l_1 c_1 + l_2 c_{12} \\ y & = & l_1 s_1 + l_2 s_{12} \\ \end{array} $$

As:

$$ \begin{array}{lll} c_{12} & = & c_1c_2 - s_1 s_2 \\ s_{12} & = & c_1 s_2 + s_1 c_2 \\ \end{array} $$, squaring the expressions for $$x$$ and $$y$$ and adding them leads to:

$$ x^2 + y^2 = l_1^2 + l_2^2 + 2 l_1 l_2 c_2 $$

Solving for $$c_2$$ leads to:

$$ c_2 = \frac{x^2 + y^2 - l_1^2 - l_2^2}{2 l_1 l_2} $$, while $$s_2$$ equals:

$$ s_2 = \pm \sqrt{1 - c_2^2} $$, and, finally, $$\theta_2$$:

$$ \theta_2 = \mbox{Atan2}(s_2, c_2) $$

Note: The choice of the sign for $$s_2$$ corresponds with one of the two solutions in the figure above.

The expressions for $$x$$ and $$y$$ may now be solved for $$\theta_1$$. In order to do so, write them like this:

$$ \begin{array}{lll} x & = & k_1 c_1 - k_2 s_1 \\ y & = & k_1 s_1 + k_2 c_1 \\ \end{array} $$ where $$k_1 = l_1 + l_2 c_2$$, and $$k_2 = l_2 s_2$$.

Let:

$$ \begin{array}{lll} r & = & \sqrt{k_1^2 + k_2^2} \\ \gamma & = & \mbox{Atan2}(k_2, k_1) \\ \end{array} $$

Then:

$$ \begin{array}{lll} k_1 & = & r \cos \gamma \\ k_2 & = & r \sin \gamma \\ \end{array} $$

Applying these to the above equations for $$x$$ and $$y$$:

$$ \begin{array}{lll} x/r & = & \cos \gamma \, c_1 - \sin \gamma \, s_1 \\ y/r & = & \cos \gamma \, s_1 + \sin \gamma \, c_1 \\ \end{array} $$, or:

$$ \begin{array}{lll} \cos (\gamma + \theta_1) & = & \frac{x}{r} \\ \sin (\gamma + \theta_1) & = & \frac{y}{r} \\ \end{array} $$

Thus:

$$ \gamma + \theta_1 = \mbox{Atan2}(y,x) $$

Hence:

$$ \theta_1 = \mbox{Atan2}(y,x) - \mbox{Atan2}(k_2,k_1) $$

Note: If $$x = y = 0$$, $$\theta_1$$ actually becomes arbitrary.

$$\theta_3$$ may now be solved from the first two equations for $$s_{\phi}$$ and $$c_{\phi}$$:

$$ \theta_3 = \phi - \theta_1 - \theta_2 = \mbox{Atan2}(s_{\phi},c_{\phi}) - \theta_1 - \theta_2 $$
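
The closed-form solution derived above can be collected into one routine. This is a sketch under the same assumptions as the derivation ($$l_3 = 0$$, so $$(x, y)$$ is the position of the third joint and $$\phi$$ the end-effector orientation); the function name and error handling are illustrative:

```python
import numpy as np

def ik_planar(x, y, phi, l1, l2):
    """Closed-form inverse kinematics for the planar arm with l3 = 0,
    following the derivation above. Returns (theta1, theta2, theta3)
    for the s2 >= 0 branch."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    s2 = np.sqrt(1.0 - c2**2)          # + branch; -sqrt gives the other solution
    theta2 = np.arctan2(s2, c2)
    k1 = l1 + l2 * c2
    k2 = l2 * s2
    theta1 = np.arctan2(y, x) - np.arctan2(k2, k1)
    theta3 = phi - theta1 - theta2
    return theta1, theta2, theta3
```

Choosing `-np.sqrt(...)` for $$s_2$$ yields the second solution mentioned in the note above.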

Inverse Kinematics Related to Jacobian Matrix
The inverse velocity problem seeks the joint rates that provide a specified end-effector twist. This is solved by inverting the Jacobian matrix. It can happen that the robot is in a configuration where the Jacobian does not have an inverse. These are termed singular configurations of the robot.

Inverse Velocity Kinematics
This problem can be solved by inverting the Jacobian: given the (Cartesian) end-effector velocity $$\dot{x}$$, the joint rates follow from $$\dot{\theta} = J^{-1}(\theta) \, \dot{x}$$, provided $$J$$ is square and nonsingular.
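
As a sketch, the position Jacobian of the planar two-link arm can be obtained by differentiating the expressions for $$x$$ and $$y$$ above with respect to $$\theta_1$$ and $$\theta_2$$; the joint rates then follow from a linear solve (all names and numbers here are illustrative assumptions):

```python
import numpy as np

def jacobian_2link(theta1, theta2, l1, l2):
    """Position Jacobian of the planar two-link arm, obtained by
    differentiating x = l1*c1 + l2*c12 and y = l1*s1 + l2*s12."""
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# Joint rates for a desired end-effector velocity (away from singularities):
J = jacobian_2link(0.4, 0.9, 1.0, 0.8)
theta_dot = np.linalg.solve(J, np.array([0.1, -0.05]))  # solves J @ theta_dot = xdot
```

Using `np.linalg.solve` avoids forming $$J^{-1}$$ explicitly, which is both cheaper and numerically better behaved.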

Singularities
If the Jacobian $$J$$ is invertible, the joint velocities can be calculated directly from a given (Cartesian) end-effector velocity. Configurations (combinations of $$\theta_i$$) where the Jacobian is not invertible are called singularities. They can be found by setting the determinant of $$J$$ equal to zero and solving for $$\theta$$. These configurations correspond to the loss of a degree of freedom.
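
For the planar two-link arm, carrying out this determinant calculation by hand gives $$\det J = l_1 l_2 \sin \theta_2$$ (a worked example, not stated in the text), so the singularities lie at $$\theta_2 = 0$$ or $$\pi$$: the arm fully stretched or fully folded.

```python
import numpy as np

def det_J_2link(theta2, l1, l2):
    """det(J) = l1*l2*sin(theta2) for the planar two-link arm:
    zero exactly at theta2 = 0 or pi (arm stretched or folded)."""
    return l1 * l2 * np.sin(theta2)

# Near theta2 = 0 the determinant vanishes and J^{-1} blows up, i.e.
# bounded end-effector velocities would demand unbounded joint rates.
```

At the stretched-out singularity the end effector can no longer move radially, which is the lost degree of freedom mentioned above.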

In vector calculus, the Jacobian matrix of a vector-valued function in several variables is the matrix of all its first-order partial derivatives. When this matrix is square, that is, when the function takes the same number of variables as input as the number of vector components of its output, its determinant is referred to as the Jacobian determinant. Both the matrix and (if applicable) the determinant are often referred to simply as the Jacobian in literature.

Suppose $f : ℝ^{n} → ℝ^{m}$ is a function such that each of its first-order partial derivatives exists on $ℝ^{n}$. This function takes a point $x ∈ ℝ^{n}$ as input and produces the vector $f(x) ∈ ℝ^{m}$ as output. Then the Jacobian matrix of $f$ is defined to be an $m×n$ matrix, denoted by $J$, whose $(i,j)$th entry is $$\mathbf J_{ij} = \frac{\partial f_i}{\partial x_j}$$, or explicitly


 * $$\mathbf J = \begin{bmatrix} \dfrac{\partial \mathbf{f}}{\partial x_1} & \cdots & \dfrac{\partial \mathbf{f}}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n}\\ \vdots & \ddots & \vdots\\ \dfrac{\partial f_m}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{bmatrix}.$$
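
This definition can be checked numerically by approximating each partial derivative with central differences. The helper below is an illustrative sketch, not part of the text:

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Approximate the m-by-n Jacobian J_ij = df_i/dx_j of f at x
    using central differences in each coordinate direction."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * eps)
    return J
```

For $$f(x_1, x_2) = (x_1 x_2, \sin x_1)$$ the exact Jacobian is $$\begin{bmatrix} x_2 & x_1 \\ \cos x_1 & 0 \end{bmatrix}$$, which the finite-difference estimate reproduces to high accuracy.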

This matrix, whose entries are functions of $x$, is also denoted variously by $Df$, $J_{f}$, and $∂(f_{1},...,f_{m})⁄∂(x_{1},...,x_{n})$. (However, some literature defines the Jacobian as the transpose of the matrix given above.)

The Jacobian matrix represents the differential of $f$ at every point where $f$ is differentiable. In detail, if $h$ is a displacement vector represented by a column matrix, the matrix product $J(x) ⋅ h$ is another displacement vector, that is the best linear approximation of the change of $f$ in a neighborhood of $x$, if $f$ is differentiable at $x$. This means that the function that maps $y$ to $f(x) + J(x) ⋅ (y – x)$ is the best linear approximation of $f(y)$ for all points $y$ close to $x$. This linear function is known as the derivative or the differential of $f$ at $x$.

When $m = n$, the Jacobian matrix is square, so its determinant is a well-defined function of $x$, known as the Jacobian determinant of $f$. It carries important information about the local behavior of $f$. In particular, the function $f$ has, locally in the neighborhood of a point $x$, an inverse function that is differentiable if and only if the Jacobian determinant is nonzero at $x$ (see Jacobian conjecture). The Jacobian determinant also appears when changing the variables in multiple integrals (see substitution rule for multiple variables).

When $m = 1$, that is when $f : ℝ^{n} → ℝ$ is a scalar-valued function, the Jacobian matrix reduces to a row vector. This row vector of all first-order partial derivatives of $f$ is the transpose of the gradient of $f$, i.e. $$ \mathbf{J}_{f} = (\nabla f)^{\intercal} $$. Here we are adopting the convention that the gradient vector $$\nabla f$$ is a column vector. Specialising further, when $m = n = 1$, that is when $f$ is a scalar-valued function of a single variable, the Jacobian matrix has a single entry. This entry is the derivative of the function $f$.

These concepts are named after the mathematician Carl Gustav Jacob Jacobi (1804–1851).

Inverse
According to the inverse function theorem, the matrix inverse of the Jacobian matrix of an invertible function is the Jacobian matrix of the inverse function. That is, if the Jacobian of the function $f : ℝ^{n} → ℝ^{n}$ is continuous and nonsingular at the point $p$ in $ℝ^{n}$, then $f$ is invertible when restricted to some neighborhood of $p$ and


 * $$\mathbf J_{\mathbf f^{-1}} \circ \mathbf f = {\mathbf J_{\mathbf f}}^{-1} .$$

Conversely, if the Jacobian determinant is not zero at a point, then the function is locally invertible near this point, that is, there is a neighbourhood of this point in which the function is invertible.
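
This relationship can be verified numerically for a concrete map. The sketch below (an illustrative example, not from the text) uses the polar-to-Cartesian map $$f(r, t) = (r \cos t, r \sin t)$$, whose Jacobian is nonsingular for $$r \neq 0$$:

```python
import numpy as np

def J_f(r, t):
    """Jacobian of f(r, t) = (r*cos(t), r*sin(t))."""
    return np.array([[np.cos(t), -r * np.sin(t)],
                     [np.sin(t),  r * np.cos(t)]])

def J_finv(x, y):
    """Jacobian of the inverse map (x, y) -> (sqrt(x^2 + y^2), atan2(y, x))."""
    r2 = x * x + y * y
    r = np.sqrt(r2)
    return np.array([[ x / r,  y / r],
                     [-y / r2, x / r2]])
```

Evaluating $$\mathbf J_{\mathbf f^{-1}}$$ at $$f(p)$$ and comparing it with $${\mathbf J_{\mathbf f}(p)}^{-1}$$ confirms the theorem at any point with $$r \neq 0$$.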

The (unproved) Jacobian conjecture is related to global invertibility in the case of a polynomial function, that is a function defined by n polynomials in n variables. It asserts that, if the Jacobian determinant is a non-zero constant (or, equivalently, that it does not have any complex zero), then the function is invertible and its inverse is a polynomial function.