=== The Jacobian inverse technique ===
The [[Jacobian matrix and determinant|Jacobian]] inverse technique is a simple yet effective way of implementing inverse kinematics. Let there be <math>m</math> variables that govern the forward-kinematics equation, i.e. the position function. These variables may be joint angles, lengths, or other arbitrary real values. If, for example, the IK system lives in a 3-dimensional space, the position function can be viewed as a mapping <math>p(x): \mathbb{R}^m \rightarrow \mathbb{R}^3</math>. Let <math>p_0 = p(x_0)</math> give the initial position of the system, and

:<math>p_1 = p(x_0 + \Delta x)</math>

be the goal position of the system. The Jacobian inverse technique iteratively computes an estimate of <math>\Delta x</math> that minimizes the error given by <math>||p(x_0 + \Delta x_\text{estimate}) - p_1||</math>.

For small <math>\Delta x</math>-vectors, the series expansion of the position function gives

:<math>p(x_1) \approx p(x_0) + J_p(x_0)\Delta x</math>,

where <math>J_p(x_0)</math> is the (3 × m) [[Jacobian matrix and determinant|Jacobian matrix]] of the position function at <math>x_0</math>.

The (i, k)-th entry of the Jacobian matrix can be approximated numerically:

:<math>\frac{\partial p_i}{\partial x_k} \approx \frac{p_i(x_{0,k} + h) - p_i(x_0)}{h}</math>,

where <math>p_i(x)</math> gives the i-th component of the position function, <math>x_{0,k} + h</math> is simply <math>x_0</math> with a small delta added to its k-th component, and <math>h</math> is a reasonably small positive value.

Taking the [[Moore–Penrose pseudoinverse]] of the Jacobian (computable using a [[singular value decomposition]]) and rearranging terms results in

:<math>\Delta x \approx J_p^+(x_0)\Delta p</math>,

where <math>\Delta p = p(x_0 + \Delta x) - p(x_0)</math>.

Applying the inverse Jacobian method once will result in a very rough estimate of the desired <math>\Delta x</math>-vector.
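As a concrete check of the finite-difference approximation above, the following sketch builds the numerical Jacobian of a two-link planar arm and compares it against the analytic Jacobian of the same arm. The arm itself and its link lengths are illustrative assumptions, not part of the technique:

```python
import numpy as np

L1, L2 = 1.0, 0.7  # assumed link lengths of a planar two-link arm

def p(x):
    """Forward kinematics: joint angles -> end-effector position in R^2."""
    t1, t2 = x
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def numerical_jacobian(p, x0, h=1e-6):
    """Approximate J[i, k] = dp_i/dx_k by forward differences, as in the text."""
    p0 = p(x0)
    J = np.zeros((p0.size, x0.size))
    for k in range(x0.size):
        xk = x0.copy()
        xk[k] += h                      # x_{0,k} + h
        J[:, k] = (p(xk) - p0) / h
    return J

x0 = np.array([0.4, 0.9])
J_num = numerical_jacobian(p, x0)

# Analytic Jacobian of the same arm, for comparison.
t1, t2 = x0
J_ana = np.array([[-L1 * np.sin(t1) - L2 * np.sin(t1 + t2), -L2 * np.sin(t1 + t2)],
                  [ L1 * np.cos(t1) + L2 * np.cos(t1 + t2),  L2 * np.cos(t1 + t2)]])
print(np.max(np.abs(J_num - J_ana)))  # small, on the order of h
```

For a real system the Jacobian would be computed for that system's own position function; only the finite-difference loop carries over unchanged.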
A [[line search]] should be used to scale this <math>\Delta x</math> to an acceptable value. The estimate for <math>\Delta x</math> can be improved iteratively via the following update rule (the [[Newton–Raphson method]]):

:<math>\Delta x_{k+1} = J_p^+(x_k)\Delta p_k</math>

Once some <math>\Delta x</math>-vector has brought the error close enough to zero, the algorithm should terminate. Existing methods based on the [[Hessian matrix]] of the system have been reported to converge to the desired <math>\Delta x</math> values in fewer iterations, though in some cases they demand more computational resources.
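The whole iteration can be sketched in a few lines of NumPy. The two-link planar arm, the fixed step scale standing in for a proper line search, and the tolerance are illustrative assumptions, not part of the method itself:

```python
import numpy as np

L1, L2 = 1.0, 0.7  # assumed link lengths of a planar two-link arm

def p(x):
    """Forward kinematics: joint angles -> end-effector position in R^2."""
    t1, t2 = x
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def jacobian(p, x, h=1e-6):
    """Forward-difference Jacobian, as described in the text."""
    p0 = p(x)
    J = np.zeros((p0.size, x.size))
    for k in range(x.size):
        xk = x.copy()
        xk[k] += h
        J[:, k] = (p(xk) - p0) / h
    return J

def ik_newton(p, x0, p1, alpha=0.5, tol=1e-8, max_iter=200):
    """Iterate x <- x + alpha * J^+(x) (p1 - p(x)) until the error is small.

    alpha is a fixed damping factor standing in for the line search
    mentioned in the text."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        dp = p1 - p(x)                  # remaining position error
        if np.linalg.norm(dp) < tol:
            break
        # Moore-Penrose pseudoinverse (NumPy computes it via SVD).
        x += alpha * np.linalg.pinv(jacobian(p, x)) @ dp
    return x

x0 = np.array([0.3, 0.3])               # initial joint angles
p1 = np.array([0.9, 0.9])               # goal position, reachable for this arm
x = ik_newton(p, x0, p1)
print(np.linalg.norm(p(x) - p1))        # close to zero
```

Note that `np.linalg.pinv` handles the non-square (here 2 × 2, in general 3 × m) and possibly singular Jacobian gracefully, which is exactly why the text recommends the pseudoinverse over a plain matrix inverse.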