<!DOCTYPE html>
<html>
<head>
<title>Ch. 8 - Motion Planning</title>
<meta name="Ch. 8 - Motion Planning" content="text/html; charset=utf-8;" />
<link rel="canonical" href="http://manipulation.csail.mit.edu/trajectories.html" />
<script src="https://hypothes.is/embed.js" async></script>
<script type="text/javascript" src="htmlbook/book.js"></script>
<script src="htmlbook/mathjax-config.js" defer></script>
<script type="text/javascript" id="MathJax-script" defer
src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js">
</script>
<script>window.MathJax || document.write('<script type="text/javascript" src="htmlbook/MathJax/es5/tex-chtml.js" defer><\/script>')</script>
<link rel="stylesheet" href="htmlbook/highlight/styles/default.css">
<script src="htmlbook/highlight/highlight.pack.js"></script> <!-- http://highlightjs.readthedocs.io/en/latest/css-classes-reference.html#language-names-and-aliases -->
<script>hljs.initHighlightingOnLoad();</script>
<link rel="stylesheet" type="text/css" href="htmlbook/book.css" />
</head>
<body onload="loadChapter('manipulation');">
<div data-type="titlepage" pdf="no">
<header>
<h1><a href="index.html" style="text-decoration:none;">Robotic Manipulation</a></h1>
<p data-type="subtitle">Perception, Planning, and Control</p>
<p style="font-size: 18px;"><a href="http://people.csail.mit.edu/russt/">Russ Tedrake</a></p>
<p style="font-size: 14px; text-align: right;">
© Russ Tedrake, 2020-2021<br/>
Last modified <span id="last_modified"></span>.<br/>
<script>
var d = new Date(document.lastModified);
document.getElementById("last_modified").innerHTML = d.getFullYear() + "-" + (d.getMonth()+1) + "-" + d.getDate();</script>
<a href="misc.html">How to cite these notes, use annotations, and give feedback.</a><br/>
</p>
</header>
</div>
<p pdf="no"><b>Note:</b> These are working notes used for <a
href="http://manipulation.csail.mit.edu/Fall2021/">a course being taught
at MIT</a>. They will be updated throughout the Fall 2021 semester. <!-- <a
href="https://www.youtube.com/channel/UChfUOAhz7ynELF-s_1LPpWg">Lecture videos are available on YouTube</a>.--></p>
<table style="width:100%;" pdf="no"><tr style="width:100%">
<td style="width:33%;text-align:left;"><a class="previous_chapter" href=force.html>Previous Chapter</a></td>
<td style="width:33%;text-align:center;"><a href=index.html>Table of contents</a></td>
<td style="width:33%;text-align:right;"><a class="next_chapter" href=rl.html>Next Chapter</a></td>
</tr></table>
<!-- EVERYTHING ABOVE THIS LINE IS OVERWRITTEN BY THE INSTALL SCRIPT -->
<chapter style="counter-reset: chapter 7"><h1>Motion Planning</h1>
<a href="https://deepnote.com/project/Ch-8-Motion-Planning-B2KxZ0AqQ2KXAn1Vnw5zuw/%2Ftrajectories.ipynb" style="float:right; margin-top:-70px;background:white;border:0;" target="trajectories">
<img src="https://deepnote.com/buttons/launch-in-deepnote-white.svg"></a>
<div style="clear:right;"></div>
<p>There are a few more essential skills that we need in our toolbox. In this
chapter, we will explore some of the powerful methods of kinematic trajectory
motion planning.</p>
<div>I'm actually almost proud of making it this far into the notes without
covering this topic yet. Writing a relatively simple script for the pose of
the gripper, like we did in the bin picking chapter, really can solve a lot of
interesting problems. But there are a number of reasons that we might want a
more automated solution:
<ol><li>When the environment becomes more cluttered, it is harder to write
such a simple solution, and we might have to worry about collisions between
the arm and the environment as well as the gripper and the environment.</li>
<li>If we are doing "mobile manipulation" -- our robotic arms are attached to
a mobile base -- then the robot might have to operate in many different environments. Even if the workspace is not geometrically complicated,
it might still be different enough each time we reach that it requires
automated (but possibly still simple) planning.</li><li>If the robot is
operating in a simple known environment all day long, then it probably makes
sense to optimize the trajectories that it is executing; we can often speed up
the manipulation process significantly.</li>
</ol>
</div>
<p>We'll focus on the problem of moving an arm through space in this chapter,
because that is the classical and very important motivation. But I need to
cover this now for another reason, too: it will also soon help us think about
programming a more dexterous hand.</p>
<p>I do need to make one important caveat. Despite having done some work in
this field myself, I actually really dislike the problem formulation of
collision-free motion planning. I think that on the whole, robots are too
afraid of bumping into the world (because things still go wrong when they do).
I don't think humans are solving these complex geometric problems every time
we reach... even when we are reaching in dense clutter (I actually suspect
that we are very bad at it). I would much rather see robots that perform well
even with very coarse / approximate plans for moving through a cluttered
environment, that are not afraid to make incidental contacts, and that can
still accomplish the task when they do!</p>
<section><h1>Inverse Kinematics</h1>
<p>The goal of this chapter is to solve for motion trajectories. But I would argue that if you really understand how to solve inverse kinematics, then you've got most of what you need to plan trajectories.</p>
<p>We know that the <a href="pick.html#kinematics">forward
kinematics</a> give us a (nonlinear) mapping from joint angles to e.g. the
pose of the gripper: $X^G = f_{kin}(q)$. So, naturally, the problem of
inverse kinematics (IK) is about solving for the inverse map, $q =
f^{-1}_{kin}(X^G).$ Indeed, that is perhaps the most common and classical
question studied in inverse kinematics. But I want you to think of inverse
kinematics as a much richer topic than that.</p>
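<p>To make the notation concrete, here is a minimal sketch of evaluating
$f_{kin}$ with Drake's <code>MultibodyPlant</code>; the model path and frame
names below are assumptions that you should adjust to match your own setup:</p>
<pre><code class="language-python">import numpy as np
from pydrake.all import FindResourceOrThrow, MultibodyPlant, Parser

# Build a plant containing just the iiwa.  The model path and frame names are
# assumptions; adjust them to whatever your Drake installation provides.
plant = MultibodyPlant(time_step=0.0)
Parser(plant).AddModelFromFile(FindResourceOrThrow(
    "drake/manipulation/models/iiwa_description/sdf/iiwa14_no_collision.sdf"))
plant.WeldFrames(plant.world_frame(), plant.GetFrameByName("iiwa_link_0"))
plant.Finalize()
context = plant.CreateDefaultContext()

# Forward kinematics X^G = f_kin(q), using the last link frame as a stand-in
# for the gripper frame G.
q = np.zeros(plant.num_positions())
plant.SetPositions(context, q)
X_WG = plant.CalcRelativeTransform(
    context, plant.world_frame(), plant.GetFrameByName("iiwa_link_7"))
print(X_WG.translation())</code></pre>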
<p>For example, when we were <a
href="clutter.html#grasp_candidates">evaluating grasp candidates for bin
picking</a>, we had only a soft preference on the orientation of the hand
relative to some antipodal grasp. In that case, a full 6 DOF pose would
have been an overly constrained specification. And often we have many
constraints on the kinematics: some in joint space (like joint limits) and
others in Cartesian space (like non-penetration constraints). So really,
inverse kinematics is about solving for joint angles in a very rich
landscape of objectives and constraints.</p>
<figure>
<iframe width="560" height="315" src="https://www.youtube.com/embed/m1rv4d_zUCY" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen pdf="no"></iframe>
<p pdf="only"><a href="https://www.youtube.com/embed/m1rv4d_zUCY">Click here to watch the video.</a></p>
<figcaption>We made extensive use of rich inverse kinematics
specifications in our work on humanoid robots. The video above is an
example of the interactive inverse kinematics interface (here to help us
figure out how to fit our big humanoid robot into the little Polaris).
<a href="https://www.youtube.com/watch?v=E_CVq0lWfSc">Here is another
video</a> of the same tool being used for the Valkyrie humanoid, where we
do specify an end-effector pose, but we also add a joint-centering
objective and static stability constraints <elib>Fallon14+Marion16</elib>.
</figcaption>
</figure>
<subsection><h1>From end-effector pose to joint angles</h1>
<p>Given its obvious importance in robotics, you probably won't be
surprised to hear that there is an extensive literature on inverse
kinematics. But you may be surprised at how extensive and complete the
solutions can get. The forward kinematics, $f_{kin}$, is a nonlinear
function in general, but it is a very structured one. In fact, with rare
exceptions (like if your robot has a <a
href="https://www.hindawi.com/journals/mpe/2016/1761968/fig4/">helical
joint</a>, aka screw joint), the equations governing the valid Cartesian
positions of our robots are actually <i>polynomial</i>. "But wait! What
about all of those sines and cosines in my kinematic equations?" you say.
The trigonometric terms come when we want to relate joint angles with
Cartesian coordinates. In $\Re^3$, for two points, $A$ and $B$, on the
same rigid body, the (squared) distance between them, $\|p^A - p^B\|^2,$
is a constant. And a joint is just a polynomial constraint between
positions on adjoining bodies, e.g. that they occupy the same point in
Cartesian space. See <elib>Wampler11</elib> for an excellent
overview.</p>
<todo>example: trig and polynomial kinematics of a two-link arm.</todo>
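<p>As a quick sketch of that example: consider a planar two-link arm with
link lengths $\ell_1, \ell_2$ and joint angles $\theta_1, \theta_2$. The
familiar trigonometric forward kinematics of the end point are $$p_x =
\ell_1 \cos\theta_1 + \ell_2 \cos(\theta_1 + \theta_2), \qquad p_y = \ell_1
\sin\theta_1 + \ell_2 \sin(\theta_1 + \theta_2).$$ If we introduce $c_i =
\cos\theta_i$ and $s_i = \sin\theta_i$ as decision variables and expand the
angle sums, the same kinematics become polynomial: $$p_x = \ell_1 c_1 +
\ell_2 (c_1 c_2 - s_1 s_2), \qquad p_y = \ell_1 s_1 + \ell_2 (s_1 c_2 + c_1
s_2), \qquad c_1^2 + s_1^2 = 1, \quad c_2^2 + s_2^2 = 1.$$ All of the
trigonometry has been absorbed into the two unit-circle constraints.</p>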
<p>Understanding the solutions to polynomial equations is the subject of
algebraic geometry. There is a deep literature on kinematics theory, on
symbolic algorithms, and on numerical algorithms. For even very complex
kinematic topologies, such as <a
href="https://en.wikipedia.org/wiki/Four-bar_linkage">four-bar
linkages</a> and <a
href="https://en.wikipedia.org/wiki/Stewart_platform">Stewart-Gough
platforms</a>, we can count the number of solutions, and/or understand
the continuous manifold of solutions. For instance,
<elib>Wampler11</elib>
describes a substantial toolbox for numerical algebraic geometry (based on
homotopy methods) with impressive results on difficult kinematics
problems.</p>
<p>The algebraic-geometry methods are mostly targeted at offline global
analysis; they are not designed for the fast, real-time inverse
kinematics solutions needed in a control loop. The most popular tool
these days for real-time inverse kinematics on six- or seven-DOF
manipulators is "IKFast", described in Section 4.1
of <elib>Diankov10</elib>, which gained popularity because of its effective
open-source implementation. Rather than focus on completeness, IKFast
uses a number of approximations to provide fast and numerically robust
solutions to the "easy" kinematics problems. It leverages the fact that a
six-DOF pose constraint on a six-DOF manipulator has a "closed-form"
solution with a finite number of joint space configurations that produce
the same end-effector pose, and for seven-DOF manipulators it adds a layer
of sampling in the last degree of freedom on top of the six-DOF
solver.</p>
<todo>add an example of calling (or implementing something equivalent to)
IKFast and/or Bertini. looks like bertini 2 has python bindings (but not
pip) and is GPL3.</todo>
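<p>To get a feel for what a closed-form solution looks like (on a toy
problem, not via IKFast itself), here is the classic law-of-cosines solution
for a planar two-link arm; an analytic six-DOF solver is doing a much more
elaborate version of the same bookkeeping:</p>
<pre><code class="language-python">import numpy as np

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Closed-form IK for a planar two-link arm: returns the (up to two)
    joint-angle solutions that place the end point at (x, y)."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)   # law of cosines
    if abs(c2) > 1:
        return []                                         # target out of reach
    solutions = []
    for s2 in (np.sqrt(1 - c2**2), -np.sqrt(1 - c2**2)):  # elbow "up" and "down"
        theta2 = np.arctan2(s2, c2)
        theta1 = np.arctan2(y, x) - np.arctan2(l2 * s2, l1 + l2 * c2)
        solutions.append((theta1, theta2))
    return solutions

print(two_link_ik(1.2, 0.5))</code></pre>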
<p>These explicit solutions are important to understand because they
provide deep insight into the equations, and because they can be fast
enough to use inside a more sophisticated solution approach. But the
solutions don't provide the rich specification I advocated for above; in
particular, they break down once we have inequality constraints instead
of equality constraints. For those richer specifications, we will turn
to optimization.</p>
</subsection>
<subsection><h1>IK as constrained optimization</h1>
<figure><img style="width:60%" src="data/shelf_ik.png"><figcaption>A
richer inverse kinematics problem: solve for the joint angles, $q$, that
allow the robot to reach into the shelf and grab the object, while
avoiding collisions.</figcaption></figure>
<p>We have <a href="pick.html#diff_ik_w_constraints">already
discussed</a> the idea of solving <i>differential</i> inverse kinematics
as an optimization problem. In that workflow, we started by using the
pseudo-inverse of the kinematic Jacobian, but then graduated to thinking
about the least-squares formulation of the inverse problem. In the more
general least-squares setting, we could add additional costs and
constraints that would protect us from (nearly) singular Jacobians and
could take into account additional constraints from joint limits, joint
velocity limits, etc. We could even add collision avoidance constraints.
Some of these constraints are quite nonlinear / nonconvex functions of
the configuration $q$, but in the differential kinematics setting we were
only seeking to find a small change $\Delta q$ around the nominal
configuration, so it was quite reasonable to make linear/convex
approximations of these nonlinear/nonconvex constraints.
</p>
<p>Now we will consider the full formulation, where we try to solve the
nonlinear / nonconvex optimization directly, without any constraints on
only making a small change to an initial $q$. This is a much harder
problem computationally. Using powerful nonlinear optimization solvers
like SNOPT, we are often able to solve the problems, even at interactive
rates (the example below is quite fun). But there are no guarantees. It
could be that a solution exists even if the solver returns
"infeasible".</p>
<p>Of course, the differential IK problem and the full IK problem are
closely related. In fact, you can think about the differential IK
algorithm as doing one step of (projected) gradient descent or one step
of <a
href="https://en.wikipedia.org/wiki/Sequential_quadratic_programming">SQP</a>,
for the full nonlinear problem.</p>
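<p>To make that connection concrete, here is a small numpy sketch: iterating
the differential-IK (pseudo-inverse) update on the planar two-link arm is
just Newton's method applied to the full IK problem. The target and initial
guess below are arbitrary:</p>
<pre><code class="language-python">import numpy as np

# Forward kinematics and Jacobian of a planar two-link arm (unit link lengths).
fkin = lambda q: np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                           np.sin(q[0]) + np.sin(q[0] + q[1])])
jac = lambda q: np.array([[-np.sin(q[0]) - np.sin(q[0] + q[1]), -np.sin(q[0] + q[1])],
                          [ np.cos(q[0]) + np.cos(q[0] + q[1]),  np.cos(q[0] + q[1])]])

# Iterating the differential-IK update (a Gauss-Newton step, with no step-size
# control) solves the full IK problem, when it converges.
q = np.array([0.5, 0.5])            # initial guess
p_desired = np.array([1.2, 0.5])    # reachable target
for _ in range(20):
    q = q + np.linalg.pinv(jac(q)) @ (p_desired - fkin(q))
print(q, fkin(q))                   # fkin(q) should now be close to p_desired</code></pre>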
<p>Drake provides a nice <a
href="https://drake.mit.edu/doxygen_cxx/classdrake_1_1multibody_1_1_inverse_kinematics.html"></a>InverseKinematics</a>
class that makes it easy to assemble many of the standard
kinematic/multibody constraints into a
<code>MathematicalProgram</code>. Take a minute to look at the
constraints that are offered. You can add constraints on the relative
position and/or orientation on two bodies, or that two bodies are more
than some minimal distance apart (e.g. for non-penetration) or closer
than some distance, and more. This is the way that I want you to think
about the IK problem; it is an inverse problem, but one with a
potentially very rich set of costs and constraints.</p>
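<p>Here is a minimal sketch of that workflow; <code>plant</code> and
<code>plant_context</code> are assumed to come from your own diagram, and the
frame name, target position, and nominal posture are placeholders:</p>
<pre><code class="language-python">import numpy as np
from pydrake.all import InverseKinematics, Solve

# `plant` is a finalized MultibodyPlant (registered with a SceneGraph if you
# want the collision constraint), and `plant_context` is its context from the
# diagram; both are assumed to exist already.
ik = InverseKinematics(plant, plant_context)
q = ik.q()                    # decision variables (the joint angles)
prog = ik.prog()              # the underlying MathematicalProgram

gripper_frame = plant.GetFrameByName("body")      # placeholder frame name
p_target = np.array([0.5, 0.0, 0.4])              # placeholder target position
q_nominal = np.zeros(plant.num_positions())       # placeholder nominal posture

# The point 10cm along the gripper frame's y-axis must land in a small box
# around the target, expressed in the world frame.
ik.AddPositionConstraint(
    frameB=gripper_frame, p_BQ=[0, 0.1, 0],
    frameA=plant.world_frame(),
    p_AQ_lower=p_target - 0.01, p_AQ_upper=p_target + 0.01)

# Keep all collision geometry pairs at least 1cm apart.  (The method name has
# shifted across Drake versions; this is the one from circa 2021.)
ik.AddMinimumDistanceConstraint(0.01)

# Joint-centering cost and an initial guess, then solve.
prog.AddQuadraticErrorCost(np.identity(plant.num_positions()), q_nominal, q)
prog.SetInitialGuess(q, q_nominal)
result = Solve(prog)
if result.is_success():
    q_sol = result.GetSolution(q)</code></pre>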
<example><h1>Interactive IK</h1>
<p>Despite the nonconvexity of the problem and nontrivial computational
cost of evaluating the constraints, we can often solve it at
interactive rates. I've assembled a few examples of this in the
chapter notebook:</p>
<p><a href="https://deepnote.com/project/Ch-8-Motion-Planning-B2KxZ0AqQ2KXAn1Vnw5zuw/%2Ftrajectories.ipynb" style="background:none; border:none;" target="trajectories"> <img src="https://deepnote.com/buttons/launch-in-deepnote-white.svg"></a></p>
<p>In the first version, I've added sliders to let you control the
desired pose of the end-effector. This is the simple version of the IK
problem, amenable to more explicit solutions, but we nevertheless solve
it with our full nonlinear optimization IK engine (and it does include
joint limit constraints). This demo won't look too different from the
very first example in the notes, where you used teleop to command the
robot to pick up the red brick. In fact, differential IK offers a fine
solution to this problem, too.</p>
<p>In the second example, I've tried to highlight the differences
between the nonlinear IK problem and the differential IK problem by
adding an obstacle directly in front of the robot. Because both our
differential IK and IK formulations are able to consume the
collision-avoidance constraints, both solutions will try to prevent you
from crashing the arm into the post. But if you move the target
end-effector position from one side of the post to the other, the full
IK solver can switch over to a new solution with the arm on the other
side of the post, but the differential IK will never be able to make
that leap (it will stay on the first side of the post, refusing to
allow a collision).</p>
<figure>
<img width="40%" src="data/ik_post_1.png"/>
<img width="40%" src="data/ik_post_2.png"/>
<figcaption>As the desired end-effector position moves along
positive $y$, the IK solver is able to find a new solution with the
arm wrapped the other way around the post.</figcaption>
</figure>
</example>
<p>With great power comes great responsibility. The inverse kinematics
toolbox allows you to formulate complex optimizations, but your success
with solving them will depend partially on how thoughtful you are about
choosing your costs and constraints. My basic advice is this: <ol>
<li>Try to keep the objective (costs) simple; I typically only use the
"joint-centering" quadratic cost on $q$. Putting terms that should be
constraints into the cost as penalties leads to lots of cost-function
tuning, which can be a nasty business.</li><li>Write minimal constraints.
You want the set of feasible configurations to be as big as possible.
For instance, if you don't need to fully constrain the orientation of the
gripper, then don't do it.</li></ol> I'll follow up on that second
point using the following example.</p>
<example><h1>Grasp the cylinder.</h1>
<p>Let's use IK to grasp a cylinder (call it a hand rail). Suppose it
doesn't matter where along the cylinder we grasp, nor the orientation
at which we grasp it. Then we should write the IK problem using
only the minimal version of those constraints.</p>
<p>In the notebook, I've coded up one version of this. I've put the
cylinder's pose on the sliders now, so you can move it around the
workspace, and watch how the IK solver decides to position the robot.
In particular, if you move the cylinder in $\pm y$, you'll see that the
robot doesn't try to follow... until the hand gets to the end of the
cylinder. Very nice!</p>
<figure>
<img width="30%" src="data/grasp_cylinder_1.png"/>
<img width="30%" src="data/grasp_cylinder_2.png"/>
<img width="30%" src="data/grasp_cylinder_3.png"/>
</figure>
<p>One could imagine multiple ways to implement that constraint.
Here's how I did it:
<pre><code class="language-python"># Two points fixed in the gripper frame, offset +/- 2cm along the gripper
# z-axis from [0, 0.1, 0], the point midway between the finger pads.
ik.AddPositionConstraint(
    frameB=gripper_frame, p_BQ=[0, 0.1, -0.02],
    frameA=cylinder_frame, p_AQ_lower=[0, 0, -0.5], p_AQ_upper=[0, 0, 0.5])
# Each point must lie on the cylinder's center-line segment (x = y = 0 and
# z in [-0.5, 0.5] in the cylinder frame), leaving both rotation about that
# axis and translation along it free.
ik.AddPositionConstraint(
    frameB=gripper_frame, p_BQ=[0, 0.1, 0.02],
    frameA=cylinder_frame, p_AQ_lower=[0, 0, -0.5], p_AQ_upper=[0, 0, 0.5])</code></pre>
In words, I've defined two points in the gripper frame; in the notation
of the <code>AddPositionConstraint</code> method they are ${}^Bp^{Q}$.
Recall the <a href="pick.html#grasp_frames">gripper frame</a> is such
that $[0, .1, 0]$ is right between the two gripper pads; you should
take a moment to make sure you understand where $[0,.1,-0.02]$ and
$[0,.1,0.02]$ are. Our constraints require that both of those points
should lie exactly on the center line segment of the cylinder. This
was a compact way for me to leave the orientation around the cylinder
axis as unconstrained, and capture the cylinder position constraints
all quite nicely.</p>
<p><a href="https://deepnote.com/project/Ch-8-Motion-Planning-B2KxZ0AqQ2KXAn1Vnw5zuw/%2Ftrajectories.ipynb" style="background:none; border:none;" target="trajectories"> <img src="https://deepnote.com/buttons/launch-in-deepnote-white.svg"></a></p>
</example>
<p>We've provided a rich language of constraints for specifying IK
problems, including many which involve the kinematics of the robot and
the geometry of the robot and the world (e.g., the minimum-distance
constraints). Let's take a moment to appreciate the geometric puzzle
that we are asking the optimizer to solve.</p>
<example><h1>Visualizing the configuration space</h1>
<p>Let's return to the example of the iiwa reaching into the shelf.
This IK problem has two major constraints: 1) we want the center of the
target sphere to be in the center of the gripper, and 2) we want the
arm to avoid collisions with the shelves. In order to plot these
constraints, I've frozen all but three of the iiwa's joints, leaving only
the three that produce motion in the $x$-$z$ plane.</p>
<figure>
<img width="40%" src="data/shelf_ik2.png"/>
<img width="40%" src="data/shelf_ik_cspace_grasp_constraint.png"/>
<figcaption>The image on the right is a visualization of the "grasp
the sphere" constraint in configuration space -- the x,y,z, axes in
the visualizer correspond to the three joint angles of the
planarized iiwa.</figcaption>
</figure>
<p>To visualize the constraints, I've sampled a dense grid in the three
joint angles of the planarized iiwa, assigning each grid element to 1
if the constraint is satisfied or 0 otherwise, then run a marching
cubes algorithm to extract an approximation of the true 3D geometry of
this constraint in the configuration space. The "grasp the sphere"
constraint produces the nice green geometry I've pictured above on the
right; it is clipped by the joint limits. The collision-avoidance
constraint, on the other hand, is quite a bit more complicated. To see
that, you'd better open up this
<a href="data/iiwa_shelves_configuration_space.html">3D
visualization</a> so you can navigate around it yourself. Scary!</p>
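<p>Here is a sketch of that procedure, with a stand-in for the real
constraint evaluation (and scikit-image's marching cubes in place of whatever
implementation you prefer):</p>
<pre><code class="language-python">import numpy as np
from skimage import measure    # pip install scikit-image

# Dense grid over the three remaining joint angles (the +/- 2 radian limits
# here are arbitrary placeholders).
n = 50
q_grid = np.linspace(-2, 2, n)

def constraint_satisfied(q):
    # Stand-in for the real constraint evaluation (which would wrap, e.g.,
    # the grasp or minimum-distance constraint at configuration q).
    return 1.5 >= np.linalg.norm(q)

volume = np.zeros((n, n, n))
for i, q0 in enumerate(q_grid):
    for j, q1 in enumerate(q_grid):
        for k, q2 in enumerate(q_grid):
            volume[i, j, k] = 1.0 if constraint_satisfied([q0, q1, q2]) else 0.0

# Marching cubes extracts the boundary surface of the feasible set as a
# triangle mesh, which can then be sent to the visualizer.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)</code></pre>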
<p><a
href="https://deepnote.com/project/Ch-8-Motion-Planning-B2KxZ0AqQ2KXAn1Vnw5zuw/%2Ftrajectories.ipynb"
style="background:none; border:none;" class="deepnote"
target="trajectories"> <img
src="https://deepnote.com/buttons/launch-in-deepnote-white.svg"></a></p>
</example>
<example><h1>Visualizing the IK optimization problem</h1>
<p><a href="data/shelf_ik_prog_zoom.html">Zoomed in</a>. <a
href="data/shelf_ik_prog.html">The global optimization problem.</a>
Nonlinear optimizers like SNOPT can be pretty amazing sometimes!</p>
<p><a
href="https://deepnote.com/project/Ch-8-Motion-Planning-B2KxZ0AqQ2KXAn1Vnw5zuw/%2Ftrajectories.ipynb"
style="background:none; border:none;" class="deepnote"
target="trajectories"> <img
src="https://deepnote.com/buttons/launch-in-deepnote-white.svg"></a></p>
</example>
<p>Global IK.</p>
<p>When should we use IK vs Differential IK? IK solves a more global
problem, but is not guaranteed to succeed. It is also not guaranteed to
make small changes to $q$ as you make small changes in the
cost/constraints; so you might end up sending large $\Delta q$ commands
to your robot. Use IK when you need to solve the more global problem,
but then produce the actual $q$ trajectories with trajectory
optimization, etc. Or use differential IK if you are able to design
smooth commands in end-effector space and simply track them in joint
space.</p>
</subsection>
<subsection><h1>Grasp planning as IK</h1>
<!-- key idea to get across: solve for "pick up" and "put down" simultaneously. -->
</subsection>
</section>
<!-- maybe section on collision detection / constraints? GJK? octrees? -->
<section><h1>Kinematic trajectory optimization</h1>
<p>In our previous approach, we designed (simple) trajectories in the
end-effector coordinates, then used differential IK to get the joint
velocity commands. Now we will switch to designing the motions directly in
joint coordinates, and just add (forward) kinematics constraints.</p>
<subsection><h1>Trajectory parameterizations</h1>
<p>Piecewise polynomial trajectories</p>
<p>B-spline trajectories</p>
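<p>As a small sketch of the first parameterization, here is a cubic
piecewise polynomial through a few (arbitrary, placeholder) joint-space
waypoints using Drake's <code>PiecewisePolynomial</code>:</p>
<pre><code class="language-python">import numpy as np
from pydrake.all import PiecewisePolynomial

# Joint-space waypoints (one column per waypoint) and the times to hit them;
# the numbers are arbitrary placeholders for a 7-DOF arm.
times = [0.0, 1.0, 2.0, 3.0]
q_waypoints = np.random.rand(7, 4)

# A cubic spline through the waypoints, with continuous accelerations and
# zero velocity at both endpoints.
traj = PiecewisePolynomial.CubicWithContinuousSecondDerivatives(
    times, q_waypoints, np.zeros((7, 1)), np.zeros((7, 1)))

print(traj.value(1.5))                  # joint positions at t = 1.5s
print(traj.derivative(1).value(1.5))    # joint velocities at t = 1.5s</code></pre>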
</subsection>
<subsection><h1>Optimization algorithms</h1>
<p><a
href="https://github.com/RobotLocomotion/drake/issues/10188">B-spline
kinematic trajectory optimization</a>.</p>
<p>CHOMP <elib>Ratliff09a</elib>, STOMP <elib>Kalakrishnan11a</elib>,
KOMO <elib>Toussaint17</elib>. SQP methods vs. Augmented Lagrangian.</p>
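<p>To fix ideas, here is a bare-bones transcription sketch (not an
implementation of any of the methods above): the decision variables are the
joint positions at $N$ knot points, with a finite-difference smoothness cost,
boundary conditions, and joint limits. The placeholder numbers are
arbitrary:</p>
<pre><code class="language-python">import numpy as np
from pydrake.all import MathematicalProgram, Solve

nq, N = 7, 20                          # 7-DOF arm, 20 knot points
q_start = np.zeros(nq)                 # placeholder boundary conditions
q_goal = 0.5 * np.ones(nq)
q_lower, q_upper = -2.0 * np.ones(nq), 2.0 * np.ones(nq)   # placeholder limits

prog = MathematicalProgram()
q = prog.NewContinuousVariables(N, nq, "q")   # one row of joint angles per knot

# Smoothness cost: squared finite differences between successive knots.
for k in range(N - 1):
    prog.AddQuadraticCost((q[k + 1] - q[k]).dot(q[k + 1] - q[k]))

# Boundary conditions and joint limits.
prog.AddLinearEqualityConstraint(np.eye(nq), q_start, q[0])
prog.AddLinearEqualityConstraint(np.eye(nq), q_goal, q[N - 1])
for k in range(N):
    prog.AddBoundingBoxConstraint(q_lower, q_upper, q[k])

# Collision avoidance would be added as a nonlinear constraint per knot point
# (omitted here); that is what turns this easy QP into the hard problem that
# CHOMP, STOMP, KOMO, etc. are designed to attack.

result = Solve(prog)
q_knots = result.GetSolution(q)        # N x nq array of joint positions</code></pre>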
</subsection>
</section>
<section><h1>Sample-based motion planning</h1>
<p>The <a href="https://ompl.kavrakilab.org/">Open Motion Planning
Library</a>. We have our <a
href="https://github.com/RobotLocomotion/drake/issues/14431">own
implementations in Drake</a>
that are optimized for our collision engines.</p>
<subsection><h1>Rapidly-exploring random trees (RRT)</h1>
<example><h1>The basic RRT</h1>
<p><a
href="https://deepnote.com/project/Ch-8-Motion-Planning-B2KxZ0AqQ2KXAn1Vnw5zuw/%2Ftrajectories.ipynb"
style="background:none; border:none;" class="deepnote"
target="trajectories"> <img
src="https://deepnote.com/buttons/launch-in-deepnote-white.svg"></a></p>
</example>
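<p>For reference, here is a bare-bones RRT sketch in plain Python;
<code>collision_free(q_a, q_b)</code> is a placeholder for a collision check
of the straight joint-space segment between two configurations:</p>
<pre><code class="language-python">import numpy as np

def rrt(q_start, q_goal, collision_free, q_min, q_max,
        max_iters=5000, step_size=0.1, goal_bias=0.05):
    """Bare-bones RRT.  `collision_free(q_a, q_b)` must check the straight
    segment between two configurations; returns a path or None."""
    q_start, q_goal = np.asarray(q_start, float), np.asarray(q_goal, float)
    nodes, parents = [q_start], [-1]
    rng = np.random.default_rng()
    for _ in range(max_iters):
        # Sample a random configuration, with a small bias toward the goal.
        q_sample = q_goal if goal_bias > rng.random() else rng.uniform(q_min, q_max)
        # Find the nearest node in the tree and take a step toward the sample.
        near = min(range(len(nodes)),
                   key=lambda i: np.linalg.norm(nodes[i] - q_sample))
        direction = q_sample - nodes[near]
        q_new = nodes[near] + step_size * direction / max(np.linalg.norm(direction), 1e-9)
        if not collision_free(nodes[near], q_new):
            continue
        nodes.append(q_new)
        parents.append(near)
        # If the new node can see the goal, trace back the path and return it.
        if step_size > np.linalg.norm(q_new - q_goal) and collision_free(q_new, q_goal):
            path, i = [q_goal], len(nodes) - 1
            while i >= 0:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None</code></pre>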
<example><h1>The RRT "Bug Trap"</h1>
<p><a
href="https://deepnote.com/project/Ch-8-Motion-Planning-B2KxZ0AqQ2KXAn1Vnw5zuw/%2Ftrajectories.ipynb"
style="background:none; border:none;" class="deepnote"
target="trajectories"> <img
src="https://deepnote.com/buttons/launch-in-deepnote-white.svg"></a></p>
</example>
</subsection>
<subsection><h1>The Probabilistic Roadmap (PRM)</h1>
</subsection>
<subsection><h1>Post-processing</h1>
<p><a
href="https://github.com/RobotLocomotion/drake/issues/11827">anytime
b-spline smoother</a></p>
</subsection>
</section>
<section><h1>Time-optimal path parameterizations</h1>
<!-- time-scaling trajectories (TOPPRA?) -->
</section>
<section><h1>Active areas of research</h1>
<p>Task and motion planning, graphs of convex sets, ...</p>
</section>
<section><h1>Exercises</h1>
<exercise><h1>Door Opening</h1>
<p> For this exercise, you will implement an optimization-based inverse kinematics solver to open a cupboard door. You will work exclusively in <a href="https://deepnote.com/project/81-Door-Opening-ZlPcs741QfuSbgWO0ptEYQ/%2Fdoor_opening.ipynb" target="_blank">this notebook</a>. You will be asked to complete the following steps: </p>
<ol type="a">
<li> Write down the constraints on the IK problem of grabbing a cupboard handle.
</li>
<li> Formalize the IK problem as an instance of optimization. </li>
<li> Implement the optimization problem using MathematicalProgram.</li>
</ol>
</exercise>
<exercise id="rrtExercise"><h1>RRT Motion Planning</h1>
<p> For this exercise, you will implement and analyze the RRT algorithm introduced in class. You will work exclusively in <a href="https://deepnote.com/project/82-RRT-Motion-Planning-Q5JO4yZpSpayP1fpqP2pbw/%2Frrt_planning.ipynb" target="_blank">this notebook</a>. You will be asked to complete the following steps: </p>
<ol type="a">
<li> Implement the RRT algorithm for the Kuka arm.
</li>
<li> Answer questions regarding its properties. </li>
</ol>
</exercise>
<exercise><h1>Improving RRT Path Quality</h1>
<p>Due to the random process by which nodes are generated, the paths output by RRT can often look somewhat jerky (the "RRT dance" is the favorite dance move of many roboticists). There are many strategies to improve the quality of the paths, and in this question we'll explore two. For the sake of this problem, we'll assume path quality refers to path length, i.e., that the goal is to find the shortest possible path, where distance is measured as Euclidean distance in joint space. </p>
<ol type="a">
<li> One strategy to improve path quality is to post-process paths via "shortcutting", which tries to replace existing portions of a path with shorter segments <elib>Geraerts04</elib>. This is often implemented with the following algorithm: 1) randomly select two non-consecutive nodes along the path; 2) try to connect them with the RRT's extend operator; 3) if the resulting path is shorter, replace the existing path with the new one. Steps 1-3 are repeated until a termination condition is met (often a fixed number of iterations or a time limit); a code sketch of this procedure appears at the end of this exercise. For this problem, we can assume that the extend operator is a straight line in joint space. Consider the graph below, where RRT has found a rather jerky path from $q_{start}$ to $q_{goal}$. There is an obstacle (shown in red), and $q_{start}$ and $q_{goal}$ are highlighted in blue (disclaimer: this graph was manually generated to serve as an illustrative example).
<figure>
<img style="width:80%", src="data/shortcutting.png"/>
</figure>
Name one pair of nodes for which the shortcutting algorithm would result in a shorter path (i.e. two nodes along our current solution path for which we could produce a shorter path if we were to directly connect them). You should assume the distance metric is the 2D Euclidean distance.</li><br/>
<li> Shortcutting, as a post-processing technique, reasons over the existing path and enables local "re-wiring" of the graph. It is a heuristic, however, and does not guarantee that the tree will encode the shortest path. To explore this, let's zoom in on one iteration of RRT (as illustrated below), where $q_{sample}$ is the randomly generated configuration, $q_{near}$ is the closest node on the existing tree, and $q_{new}$ is the RRT extension step from $q_{near}$ in the direction of $q_{sample}$. When the standard RRT algorithm (which you implemented in <a href="#rrtExercise">a previous exercise</a>) adds $q_{new}$ to the tree, which node is its parent? If we wanted our tree to encode the shortest path from the starting node, $q_{start}$, to each node in the tree, which node should be the parent of $q_{new}$?</li>
<figure>
<img style="width:60%", src="data/rrtstar_step.png"/>
</figure>
</ol>
This idea of dynamically "rewiring" the tree to discover the minimum-cost path (which for us is the shortest-distance path) is a critical aspect of the asymptotically optimal variant of RRT known as RRT* <elib>Karaman11</elib>. As the number of samples tends towards infinity, RRT* finds the optimal path to the goal! This is unlike "plain" RRT, which is provably suboptimal (the intuition for this proof is that RRT "traps" itself, because it cannot rewire to better paths as it searches).
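<p>For reference, here is a sketch of the shortcutting procedure described in
part (a), assuming the same straight-segment collision checker as the RRT
sketch earlier in the chapter:</p>
<pre><code class="language-python">import numpy as np

def shortcut(path, collision_free, iterations=100):
    """Randomized shortcutting of a joint-space path (a list of configurations).
    `collision_free(q_a, q_b)` checks the straight segment between q_a and q_b."""
    path = [np.asarray(q, float) for q in path]
    rng = np.random.default_rng()
    for _ in range(iterations):
        if len(path) < 3:
            break
        # 1) Pick two non-consecutive nodes along the current path.
        i, j = sorted(rng.choice(len(path), size=2, replace=False))
        if j - i < 2:
            continue
        # 2) Try to connect them with a straight segment in joint space.
        if not collision_free(path[i], path[j]):
            continue
        # 3) Keep the shortcut only if it actually shortens the path.
        old_length = sum(np.linalg.norm(path[k + 1] - path[k]) for k in range(i, j))
        if old_length > np.linalg.norm(path[j] - path[i]):
            path = path[:i + 1] + path[j:]
    return path</code></pre>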
</exercise>
</section>
</chapter>
<!-- EVERYTHING BELOW THIS LINE IS OVERWRITTEN BY THE INSTALL SCRIPT -->
<div id="references"><section><h1>References</h1>
<ol>
<li id=Fallon14>
<span class="author">Maurice Fallon and Scott Kuindersma and Sisir Karumanchi and Matthew Antone and Toby Schneider and Hongkai Dai and Claudia P\'{e}rez D'Arpino and Robin Deits and Matt DiCicco and Dehann Fourie and Twan Koolen and Pat Marion and Michael Posa and Andr\'{e}s Valenzuela and Kuan-Ting Yu and Julie Shah and Karl Iagnemma and Russ Tedrake and Seth Teller</span>,
<span class="title">"An Architecture for Online Affordance-based Perception and Whole-body Planning"</span>,
<span class="publisher">Journal of Field Robotics</span>, vol. 32, no. 2, pp. 229-254, September, <span class="year">2014</span>.
[ <a href="http://groups.csail.mit.edu/robotics-center/public_papers/Fallon14.pdf">link</a> ]
</li><br>
<li id=Marion16>
<span class="author">Pat Marion and Maurice Fallon and Robin Deits and Andr\'{e}s Valenzuela and Claudia P\'{e}rez D'Arpino and Greg Izatt and Lucas Manuelli and Matt Antone and Hongkai Dai and Twan Koolen and John Carter and Scott Kuindersma and Russ Tedrake</span>,
<span class="title">"Director: A User Interface Designed for Robot Operation With Shared Autonomy"</span>,
<span class="publisher">Journal of Field Robotics</span>, vol. 1556-4967, <span class="year">2016</span>.
[ <a href="http://groups.csail.mit.edu/robotics-center/public_papers/Marion16.pdf">link</a> ]
</li><br>
<li id=Wampler11>
<span class="author">Charles W. Wampler and Andrew J. Sommese</span>,
<span class="title">"Numerical algebraic geometry and algebraic kinematics"</span>,
<span class="publisher">Acta Numerica</span>, vol. 20, pp. 469-567, <span class="year">2011</span>.
</li><br>
<li id=Diankov10>
<span class="author">Rosen Diankov</span>,
<span class="title">"Automated Construction of Robotic Manipulation Programs"</span>,
PhD thesis, Carnegie Mellon University, August, <span class="year">2010</span>.
</li><br>
<li id=Ratliff09a>
<span class="author">Nathan Ratliff and Matthew Zucker and J. Andrew (Drew) Bagnell and Siddhartha Srinivasa</span>,
<span class="title">"{CHOMP}: Gradient Optimization Techniques for Efficient Motion Planning"</span>,
<span class="publisher">IEEE International Conference on Robotics and Automation (ICRA)</span> , May, <span class="year">2009</span>.
</li><br>
<li id=Kalakrishnan11a>
<span class="author">Mrinal Kalakrishnan and Sachin Chitta and Evangelos Theodorou and Peter Pastor and Stefan Schaal</span>,
<span class="title">"{STOMP}: Stochastic trajectory optimization for motion planning"</span>,
<span class="publisher">2011 IEEE international conference on robotics and automation</span> , pp. 4569--4574, <span class="year">2011</span>.
</li><br>
<li id=Toussaint17>
<span class="author">Marc Toussaint</span>,
<span class="title">"A tutorial on Newton methods for constrained trajectory optimization and relations to SLAM, Gaussian Process smoothing, optimal control, and probabilistic inference"</span>,
<span class="publisher">Geometric and numerical foundations of movements</span>, pp. 361--392, <span class="year">2017</span>.
</li><br>
<li id=Geraerts04>
<span class="author">R. Geraerts and M. Overmars</span>,
<span class="title">"A comparative study of probabilistic roadmap planners"</span>,
<span class="publisher">Algorithmic Foundations of Robotics V</span>, pp. 43--58, <span class="year">2004</span>.
</li><br>
<li id=Karaman11>
<span class="author">S. Karaman and E. Frazzoli</span>,
<span class="title">"Sampling-based Algorithms for Optimal Motion Planning"</span>,
<span class="publisher">Int. Journal of Robotics Research</span>, vol. 30, pp. 846--894, June, <span class="year">2011</span>.
</li><br>
</ol>
</section><p/>
</div>
<table style="width:100%;" pdf="no"><tr style="width:100%">
<td style="width:33%;text-align:left;"><a class="previous_chapter" href=force.html>Previous Chapter</a></td>
<td style="width:33%;text-align:center;"><a href=index.html>Table of contents</a></td>
<td style="width:33%;text-align:right;"><a class="next_chapter" href=rl.html>Next Chapter</a></td>
</tr></table>
<div id="footer" pdf="no">
<hr>
<table style="width:100%;">
<tr><td><a href="https://accessibility.mit.edu/">Accessibility</a></td><td style="text-align:right">© Russ
Tedrake, 2021</td></tr>
</table>
</div>
</body>
</html>