Survey of Riemannian Motion Policies

Overview

A survey of Riemannian Motion Policies (https://research.nvidia.com/publication/2018-03_riemannian-motion-policies)

Text of each slide
1.

Survey of Riemannian Motion Policies. Kohei Honda (https://kohonda.github.io/), Feb. 2023

2.

What is Riemannian Motion Policies (RMP)?
 RMP is a general, theoretical multi-task motion-planning framework
 Proposed by N. Ratliff (NVIDIA), 2018
 Successfully applied to several robotic problems (manipulator [1][2] / navigation / multi-agent), and supported by Isaac Sim
※ Note that I am not an expert on RMP and some parts are still unclear to me

3.

Motivation  Motivation: Realize more complex (various task-oriented and reactive) motion on dynamic environment  Requirements: • Task design as a combination of simple sub-tasks for user-friendly (because complex tasks are difficult to describe directly) • without solving optimization problem to reduce computational cost Desired task Sub-tasks • End effector reach a target • No collision • Consider physical limitation • , etc. 3

4.

Concept
Sub-tasks: target reaching / collision avoidance / joint limit
Design
Ref: https://www.youtube.com/watch?v=ENURunQ50js

8.

Concept
Execute various tasks by combining sub-task motion policies via the RMP / RMPflow algorithm
Sub-tasks: target reaching / collision avoidance / joint limit
Design → Global policy: $\ddot{q} = \pi(q, \dot{q})$
Ref: https://www.youtube.com/watch?v=ENURunQ50js

9.

Method

10.

Preliminary
(Figure: planar manipulator with joint angles $\theta_1, \dots, \theta_4$ and end-effector target $[x_d, y_d]$)
 Configuration space: $\boldsymbol{q} = [\theta_1, \theta_2, \theta_3, \theta_4]$ (the control input space; observable)
 Task space: $\boldsymbol{x} = [x, y]$ (where tasks are describable; often unobservable)
(1) Formulate tasks in the task space
(2) Execute the policy $\ddot{q} = \pi(q, \dot{q})$ in the configuration space

11.

Preliminary
Given: forward kinematics $\boldsymbol{x} = \Psi(\boldsymbol{q})$ mapping the configuration space to the task space
$\dot{\boldsymbol{x}} = \frac{d\Psi(\boldsymbol{q})}{dt} = \frac{d\Psi(\boldsymbol{q})}{d\boldsymbol{q}}\dot{\boldsymbol{q}} = J(\boldsymbol{q})\dot{\boldsymbol{q}}$, where $J$ is the Jacobian
$\ddot{\boldsymbol{x}} = J\ddot{\boldsymbol{q}} + \dot{J}\dot{\boldsymbol{q}}$
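
As a minimal sketch (not from the slides), the relation $\dot{x} = J(q)\dot{q}$ can be made concrete with a planar arm and a finite-difference Jacobian. Link lengths, gains, and function names here are illustrative assumptions.

```python
# Illustrative sketch: forward kinematics of an assumed 4-link planar arm and a
# finite-difference Jacobian, to make x_dot = J(q) q_dot concrete.
import numpy as np

LINK_LENGTHS = np.array([0.5, 0.4, 0.3, 0.2])  # assumed link lengths

def fk(q: np.ndarray) -> np.ndarray:
    """End-effector position x = Psi(q) for a planar arm (angles accumulate along the chain)."""
    angles = np.cumsum(q)
    return np.array([np.sum(LINK_LENGTHS * np.cos(angles)),
                     np.sum(LINK_LENGTHS * np.sin(angles))])

def jacobian(q: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Numerical Jacobian J(q) = dPsi/dq (2 x 4)."""
    J = np.zeros((2, q.size))
    for i in range(q.size):
        dq = np.zeros_like(q); dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
    return J

q = np.array([0.3, -0.2, 0.5, 0.1])
q_dot = np.array([0.1, 0.0, -0.2, 0.05])
x_dot = jacobian(q) @ q_dot  # velocity pushed forward to the task space
print(x_dot)
```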

12.

Assumption
 Configuration space $\mathcal{C}$: state $\boldsymbol{q}, \dot{\boldsymbol{q}}, \ddot{\boldsymbol{q}} \in \mathbb{R}^m$
 Task space $\mathcal{X}$: state $\boldsymbol{x}, \dot{\boldsymbol{x}}, \ddot{\boldsymbol{x}} \in \mathbb{R}^n$
 Connected by the forward kinematics $\psi$ and its Jacobian $J$: $\boldsymbol{x} = \psi(\boldsymbol{q})$, $\dot{\boldsymbol{x}} = J\dot{\boldsymbol{q}}$, $\ddot{\boldsymbol{x}} = J\ddot{\boldsymbol{q}} + \dot{J}\dot{\boldsymbol{q}}$
Assumption: the two spaces are smooth manifolds connected by a forward kinematics map $\psi$

13.

Definition of RMP
 Configuration space $\mathcal{C}$: state $\boldsymbol{q}, \dot{\boldsymbol{q}}, \ddot{\boldsymbol{q}} \in \mathbb{R}^m$, motion policy $\boldsymbol{f}_q = \boldsymbol{M}_q\ddot{\boldsymbol{q}}$
 Task space $\mathcal{X}$: state $\boldsymbol{x}, \dot{\boldsymbol{x}}, \ddot{\boldsymbol{x}} \in \mathbb{R}^n$, motion policy $\boldsymbol{f}_x = \boldsymbol{M}_x\ddot{\boldsymbol{x}}$
 Connected by $\boldsymbol{x} = \psi(\boldsymbol{q})$, $\dot{\boldsymbol{x}} = J\dot{\boldsymbol{q}}$, $\ddot{\boldsymbol{x}} = J\ddot{\boldsymbol{q}} + \dot{J}\dot{\boldsymbol{q}}$

14.

Definition of RMP
A Riemannian Motion Policy (RMP) is a pair $R = (\boldsymbol{f}, \boldsymbol{M})$ defined on a space:
$R_q = (\boldsymbol{f}_q, \boldsymbol{M}_q)^{\mathcal{C}}$ on the configuration space, $R_x = (\boldsymbol{f}_x, \boldsymbol{M}_x)^{\mathcal{X}}$ on the task space
 Force map $\boldsymbol{f}$: a primitive motion policy for a sub-task
 Riemannian metric $\boldsymbol{M}$: an importance weight (not a real inertia matrix), a symmetric positive semidefinite matrix
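
A minimal sketch (illustrative, not the authors' code) of how a sub-task RMP could be held in code: the pair $(\boldsymbol{f}, \boldsymbol{M})$ together with the map into its task space. The field names are assumptions.

```python
# Illustrative container for an RMP on a task space reached by a map psi with Jacobian J.
from dataclasses import dataclass
from typing import Callable, Tuple
import numpy as np

@dataclass
class RMP:
    psi: Callable[[np.ndarray], np.ndarray]         # forward map into this task space, x = psi(q)
    jacobian: Callable[[np.ndarray], np.ndarray]     # J(q) = d psi / d q
    policy: Callable[[np.ndarray, np.ndarray], Tuple[np.ndarray, np.ndarray]]  # (x, x_dot) -> (f_x, M_x)
```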

15.

Motion generation scheme
(1) Prepare RMPs for the sub-tasks on multiple task spaces $\mathcal{X}_i$:
pre-designed RMPs $R_1 = (\boldsymbol{f}_{x_1}, \boldsymbol{M}_{x_1})^{\mathcal{X}_1}, R_2, \dots, R_{n-1}, R_n$
(The goal is the corresponding policy $R_q = (\boldsymbol{f}_q, \boldsymbol{M}_q)^{\mathcal{C}}$ with $\boldsymbol{f}_q = \boldsymbol{M}_q\ddot{\boldsymbol{q}}$ on the configuration space.)

16.

Motion generation scheme
Final goal:
• Find: the global policy $\ddot{q}$ on the configuration space from the pre-designed RMPs $R_1, \dots, R_n$
• Subject to: the forward kinematics transformation $\boldsymbol{x} = \psi(\boldsymbol{q})$, $\dot{\boldsymbol{x}} = J\dot{\boldsymbol{q}}$, $\ddot{\boldsymbol{x}} = J\ddot{\boldsymbol{q}} + \dot{J}\dot{\boldsymbol{q}}$

17.

Solution: RMP-algebra
Compose the RMPs by the RMP-algebra:
(1) pushforward: propagate the observed state from the configuration space $\mathcal{C}$ to the task spaces $\mathcal{X}_1, \dots, \mathcal{X}_n$
(2) pullback: propagate the task-space RMPs back to $\mathcal{C}$
(3) addition / resolve: combine them into the global policy $\ddot{q} = \pi(q, \dot{q})$

18.

RMP-algebra (1) Pushforward: forward-propagate the observed state
Observe $\boldsymbol{q} = [\theta_1, \dots, \theta_m]$ and $\dot{\boldsymbol{q}}$, then update the states on every task space $\mathcal{X}_i$:
$\mathrm{pushforward}(\boldsymbol{q}, \dot{\boldsymbol{q}})$: $\boldsymbol{x}_i = \psi_i(\boldsymbol{q})$, $\dot{\boldsymbol{x}}_i = J_i\dot{\boldsymbol{q}}$
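
A sketch of the pushforward step under the assumptions above: map the observed configuration-space state onto each task space. `rmps` is a list of the `RMP` objects sketched earlier; these names are illustrative, not an official API.

```python
# Illustrative pushforward: update task-space states from (q, q_dot).
import numpy as np

def pushforward(rmps, q: np.ndarray, q_dot: np.ndarray):
    """Return the task-space states (x_i, x_dot_i) for every sub-task RMP."""
    states = []
    for rmp in rmps:
        x = rmp.psi(q)                    # x_i = psi_i(q)
        x_dot = rmp.jacobian(q) @ q_dot   # x_dot_i = J_i(q) q_dot
        states.append((x, x_dot))
    return states
```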

19.

RMP-algebra (2) Pullback: backward-propagate the RMPs
For each pre-designed RMP $R_i = (\boldsymbol{f}_{x_i}, \boldsymbol{M}_{x_i})^{\mathcal{X}_i}$, pull it back to a motion policy on $\mathcal{C}$:
From $\ddot{\boldsymbol{x}}_i = J_i\ddot{\boldsymbol{q}} + \dot{J}_i\dot{\boldsymbol{q}}$, we get $\ddot{\boldsymbol{q}} = J_i^{\dagger}(\ddot{\boldsymbol{x}}_i - \dot{J}_i\dot{\boldsymbol{q}})$ ($\dagger$: pseudo-inverse)
Multiplying by $J_i^{\top}\boldsymbol{M}_{x_i}J_i$: $(J_i^{\top}\boldsymbol{M}_{x_i}J_i)\ddot{\boldsymbol{q}} = (J_i^{\top}\boldsymbol{M}_{x_i}J_i)J_i^{\dagger}(\ddot{\boldsymbol{x}}_i - \dot{J}_i\dot{\boldsymbol{q}})$
This yields the pulled-back RMP on $\mathcal{C}$: $\boldsymbol{M}_q^i := J_i^{\top}\boldsymbol{M}_{x_i}J_i$, $\ \boldsymbol{f}_q^i = J_i^{\top}(\boldsymbol{f}_{x_i} - \boldsymbol{M}_{x_i}\dot{J}_i\dot{\boldsymbol{q}})$
$\mathrm{pullback}$: find the motion policies $\boldsymbol{f}_q^1, \dots, \boldsymbol{f}_q^n$ on $\mathcal{C}$
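
A sketch of the pullback step under the same assumptions: evaluate each task-space policy and pull $(\boldsymbol{f}_x, \boldsymbol{M}_x)$ back to the configuration space. The curvature term `jdot_qdot` (i.e. $\dot{J}_i\dot{\boldsymbol{q}}$) is passed in here; a way to obtain it is sketched later in the RMP2 slide.

```python
# Illustrative pullback: (f_x_i, M_x_i) on each task space -> (f_q_i, M_q_i) on C.
import numpy as np

def pullback(rmps, q, q_dot, task_states, jdot_qdot_list):
    """Return the pulled-back pairs (f_q_i, M_q_i) for every sub-task RMP."""
    pulled = []
    for rmp, (x, x_dot), jdot_qdot in zip(rmps, task_states, jdot_qdot_list):
        f_x, M_x = rmp.policy(x, x_dot)        # task-space RMP (f_x_i, M_x_i)
        J = rmp.jacobian(q)
        M_q_i = J.T @ M_x @ J                  # M_q^i = J^T M_x J
        f_q_i = J.T @ (f_x - M_x @ jdot_qdot)  # f_q^i = J^T (f_x - M_x Jdot q_dot)
        pulled.append((f_q_i, M_q_i))
    return pulled
```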

20.

RMP-algebra (3) Addition, (4) Resolve
(3) Addition: combine the pulled-back RMPs by a metric-weighted sum:
$\boldsymbol{f}_q = \sum_i \boldsymbol{f}_q^i = \sum_i J_i^{\top}(\boldsymbol{f}_{x_i} - \boldsymbol{M}_{x_i}\dot{J}_i\dot{\boldsymbol{q}})$, $\quad \boldsymbol{M}_q := \sum_i J_i^{\top}\boldsymbol{M}_{x_i}J_i$, so that $\boldsymbol{f}_q = \boldsymbol{M}_q\ddot{\boldsymbol{q}}$
(4) Resolve: obtain the global policy $\ddot{q} = \pi(q, \dot{q})$:
$\ddot{\boldsymbol{q}} = \Big(\sum_i J_i^{\top}\boldsymbol{M}_{x_i}J_i\Big)^{\dagger} \sum_i J_i^{\top}(\boldsymbol{f}_{x_i} - \boldsymbol{M}_{x_i}\dot{J}_i\dot{\boldsymbol{q}}) = \boldsymbol{M}_q^{\dagger}\boldsymbol{f}_q$
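
A sketch of addition and resolve, completing the loop over the earlier pushforward/pullback sketches (illustrative names, not the reference implementation):

```python
# Illustrative addition + resolve: sum pulled-back forces/metrics, then pseudo-invert.
import numpy as np

def resolve(pulled):
    """Global policy q_ddot = (sum_i M_q_i)^+ (sum_i f_q_i)."""
    f_q = sum(f for f, _ in pulled)
    M_q = sum(M for _, M in pulled)
    return np.linalg.pinv(M_q) @ f_q

# One control step, combining the earlier sketches:
#   states = pushforward(rmps, q, q_dot)
#   pulled = pullback(rmps, q, q_dot, states, jdot_qdot_list)
#   q_ddot = resolve(pulled)
```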

21.

Example: Manipulator control
Target-reaching RMP $R_g = (\boldsymbol{f}_g, \boldsymbol{I})$ for end effector $\boldsymbol{x}_e$ and goal $\boldsymbol{x}_g$ [1]:
$\boldsymbol{f}_g(\boldsymbol{x}_e, \dot{\boldsymbol{x}}_e) = \alpha\, \boldsymbol{s}(\boldsymbol{x}_g - \boldsymbol{x}_e) - \beta\, \dot{\boldsymbol{x}}_e$ (reaching-the-goal term + damping term)
$\boldsymbol{s}(\boldsymbol{x}_g - \boldsymbol{x}_e) = \dfrac{\boldsymbol{x}_g - \boldsymbol{x}_e}{\|\boldsymbol{x}_g - \boldsymbol{x}_e\| + c\log(1 + e^{-2c\|\boldsymbol{x}_g - \boldsymbol{x}_e\|})}$ (soft normalization)
Collision-avoidance RMP $R_{ij} = (\boldsymbol{f}_{ij}, \boldsymbol{M}_{ij})$, one per pair of control point $\boldsymbol{x}_i$ and closest obstacle $\boldsymbol{o}_j$ (N in total):
$\boldsymbol{f}_{ij} = \alpha(d_{ij})\dfrac{\boldsymbol{x}_i - \boldsymbol{o}_j}{\|\boldsymbol{x}_i - \boldsymbol{o}_j\|} - \beta(d_{ij})\,\dot{\boldsymbol{x}}_i$ (avoidance term + damping term, with $d_{ij}$ the distance to the obstacle)
$\boldsymbol{M}_{ij} = w(d_{ij})\,\boldsymbol{s}(\ddot{x}_{ij})\,\boldsymbol{s}(\ddot{x}_{ij})^{\top}$, where the weight $w(d_{ij})$ decreases to zero further from the obstacle
Tips
 The damping term is important for stability
 Computation is proportional to the number of RMPs
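
A minimal sketch of the target-reaching RMP above, returning $(\boldsymbol{f}_x, \boldsymbol{M}_x)$ in the form used by the pullback/resolve sketches. The gains `alpha`, `beta`, `c` and the function names are assumptions, not the reference implementation.

```python
# Illustrative target-reaching policy: soft-normalized attraction plus damping, identity metric.
import numpy as np

def soft_normalize(v: np.ndarray, c: float = 10.0) -> np.ndarray:
    """s(v) = v / (||v|| + c*log(1 + exp(-2c||v||))): ~v/||v|| far away, vanishing at the goal."""
    norm = np.linalg.norm(v)
    return v / (norm + c * np.log1p(np.exp(-2.0 * c * norm)))

def target_reaching_policy(x_e, x_e_dot, x_g, alpha=1.0, beta=2.0):
    f = alpha * soft_normalize(x_g - x_e) - beta * x_e_dot  # reaching + damping
    M = np.eye(x_e.shape[0])                                # identity metric
    return f, M
```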

22.

RMP-tree [2]
 The RMP-algebra is a general transformation between any two smooth manifolds, not only between the configuration space and a task space
 The RMP-tree is an efficient data structure for RMPs that shares computational paths between sub-tasks
※ RMPflow (generalized RMP) = RMP-tree + RMP-algebra
(Figure: RMP-algebra applied directly between the sub-task RMPs and the configuration space without an RMP-tree, vs. with an RMP-tree)
(Table from [4] on the RMP-tree used in [6]: N = number of total nodes, L = number of leaf nodes (<= N), B = max branching factor, d = max dimension of a space)
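
A minimal sketch (illustrative, not RMPflow's API) of an RMP-tree node: each child carries the map and Jacobian into its own space, leaves hold sub-task policies, and one recursion performs pushforward on the way down and pullback on the way up, so intermediate maps are computed once.

```python
# Illustrative RMP-tree node with a combined pushforward/pullback recursion.
import numpy as np

class RMPNode:
    def __init__(self, psi=None, jacobian=None, policy=None):
        self.psi, self.jacobian, self.policy = psi, jacobian, policy
        self.children = []

    def add_child(self, child):
        self.children.append(child)
        return child

    def pull(self, x, x_dot):
        """Pushforward this node's state to its children, then pull their RMPs back."""
        if not self.children:                        # leaf: evaluate the sub-task policy
            return self.policy(x, x_dot)
        f_sum, M_sum = 0.0, 0.0
        for c in self.children:
            J = c.jacobian(x)
            f_c, M_c = c.pull(c.psi(x), J @ x_dot)   # pushforward, then recurse
            f_sum = f_sum + J.T @ f_c                # pullback force (curvature term omitted)
            M_sum = M_sum + J.T @ M_c @ J            # pullback metric
        return f_sum, M_sum

# At the root: f, M = root.pull(q, q_dot); q_ddot = np.linalg.pinv(M) @ f, as in resolve.
```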

23.

Intuitive understanding of the RMP algorithm
 Why does RMP/RMPflow work well? [4]
• There is a connection between the RMP-algebra and a sparse linear least-squares problem [7]
• The RMP-tree is an efficient computational structure that exploits this sparsity
• The desired policy $\boldsymbol{a} = \pi(\boldsymbol{q}, \dot{\boldsymbol{q}})$ is the solution of the following least-squares problem (Riemannian metric = importance weight):
$\min_{\{\boldsymbol{a}, \boldsymbol{a}_k\}} \ \frac{1}{2}\sum_{k=1}^{K}\|\boldsymbol{a}_k - \boldsymbol{a}_k^d\|^2_{\boldsymbol{M}_k} \quad \text{s.t.} \ \boldsymbol{a}_k = J_k\boldsymbol{a} + \dot{J}_k\dot{\boldsymbol{q}}$ (forward kinematics constraint)
with solution $\boldsymbol{a} = \boldsymbol{M}^{\dagger}\sum_{k=1}^{K}J_k^{\top}\boldsymbol{M}_k(\boldsymbol{a}_k^d - \dot{J}_k\dot{\boldsymbol{q}})$, where $\boldsymbol{M} = \sum_k J_k^{\top}\boldsymbol{M}_kJ_k$
(Figure: leaf spaces $\mathcal{M}_1, \dots, \mathcal{M}_K$ with desired accelerations $(\boldsymbol{a}_k^d, \boldsymbol{M}_k)$ combined into $(\boldsymbol{a}, \boldsymbol{M})$ on $\mathcal{M}$)
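
A small numeric sketch (illustrative) checking that the resolve formula coincides with the metric-weighted least-squares solution above, for random $J_k$, PSD $\boldsymbol{M}_k$, $\boldsymbol{a}_k^d$, and curvature terms standing in for $\dot{J}_k\dot{\boldsymbol{q}}$:

```python
# Verify: (sum J^T M J)^+ sum J^T M (a_d - Jdot q_dot) equals the weighted lstsq solution.
import numpy as np

rng = np.random.default_rng(0)
m, dims = 4, [2, 3]                       # config-space dim and two task-space dims
Js   = [rng.standard_normal((n, m)) for n in dims]
Ms   = [(lambda A: A @ A.T)(rng.standard_normal((n, n))) for n in dims]  # PSD metrics
a_ds = [rng.standard_normal(n) for n in dims]
curv = [rng.standard_normal(n) for n in dims]   # stands in for Jdot_k @ q_dot

# Resolve formula
M_sum = sum(J.T @ M @ J for J, M in zip(Js, Ms))
f_sum = sum(J.T @ M @ (a_d - c) for J, M, a_d, c in zip(Js, Ms, a_ds, curv))
a_resolve = np.linalg.pinv(M_sum) @ f_sum

# Weighted least squares: stack sqrt(M_k) J_k and sqrt(M_k)(a_k^d - Jdot_k q_dot), then lstsq
Ls = [np.linalg.cholesky(M + 1e-9 * np.eye(M.shape[0])) for M in Ms]
A = np.vstack([L.T @ J for L, J in zip(Ls, Js)])
b = np.concatenate([L.T @ (a_d - c) for L, a_d, c in zip(Ls, a_ds, curv)])
a_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]

print(np.allclose(a_resolve, a_lstsq, atol=1e-6))   # expected: True
```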

24.

Theoretical properties of RMPflow [2]
RMPflow is Lyapunov stable when the sub-task policies belong to the class of Geometric Dynamical Systems (GDS).
A GDS is a dynamical system with a potential and a velocity-dependent metric:
$\boldsymbol{M}(\boldsymbol{x}, \dot{\boldsymbol{x}})\,\ddot{\boldsymbol{x}} + \xi_G(\boldsymbol{x}, \dot{\boldsymbol{x}}) = -\nabla_x\Phi(\boldsymbol{x}) - \boldsymbol{B}(\boldsymbol{x}, \dot{\boldsymbol{x}})\,\dot{\boldsymbol{x}}$
(inertia matrix, curvature term, potential function, damping matrix)
RMP with GDS: the root node gives $(\boldsymbol{a}, \boldsymbol{M})^{\mathcal{C}}$, and the leaf nodes are GDSs, i.e. force maps designed as potential plus damping terms.
A GDS is characterized by a Lyapunov function → a GDS is Lyapunov stable → the policy $\boldsymbol{a}$ is proved to be Lyapunov stable.
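
As a brief sketch of the argument (my reading of [2], not quoted from the slides, and written in the slide's notation for the metric), the Lyapunov candidate is the total energy, and its decrease is governed by the damping matrix $\boldsymbol{B}$:

```latex
% Assumed form of the GDS Lyapunov function, following [2]:
V(\boldsymbol{x}, \dot{\boldsymbol{x}})
  = \tfrac{1}{2}\,\dot{\boldsymbol{x}}^{\top}\boldsymbol{M}(\boldsymbol{x}, \dot{\boldsymbol{x}})\,\dot{\boldsymbol{x}}
  + \Phi(\boldsymbol{x}),
\qquad
\dot{V} = -\,\dot{\boldsymbol{x}}^{\top}\boldsymbol{B}(\boldsymbol{x}, \dot{\boldsymbol{x}})\,\dot{\boldsymbol{x}} \le 0 .
```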

25.

Summary
 What is RMP?
• RMP analytically generates a multi-task policy by combining sub-task policies
• The sub-task policies are designed together with their underlying manifolds
• The RMP-algebra combines them with theoretical guarantees
 Applications / extensions (Appendix)
• Manipulator / visual navigation / multi-agent
• Integration of RMP and machine learning is a recent research topic

26.

[Appendix] Applications / Extensions

27.

Visual Navigation [3]
 Applying RMP to visual navigation of a 4-wheeled robot [demo video]
 (1) Manually design RMPs ("expert RMPs") for point-to-goal navigation (with localization)
 (2) Train a network to infer RMPs from images (without localization)
(Figure: the RMP network is trained to predict the manually designed expert RMPs)
※ The linearized kinodynamics is treated as an RMP

28.

Multi-agent [5]
 Applying RMP to formation control of multi-agent robots
 Centralized and decentralized RMPs
(Figure: centralized RMP-tree vs. decentralized RMP-trees)

29.

RMP2 [4] [presentation video]
 Motivation:
• It is hard to implement the Jacobians $J, \dot{J}$ for all edges of an RMP-tree
• It is hard to design RMPs manually
 Method: combine RMPs using automatic-differentiation libraries (a sketch of the quantities involved follows below)
• The Jacobians $J, \dot{J}$ are computed automatically (with computational-efficiency techniques)
• End-to-end differentiability → learnable RMPs
(Figure: state → forward kinematics → RMP-algebra + auto-diff API (Jacobian and $\dot{J}$ products) → differentiable action)
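
A sketch of the two quantities that must be derived from $\psi$ for every tree edge, here obtained by finite differences purely for illustration; RMP2 instead gets them from an auto-diff library, and the function names are assumptions.

```python
# Illustrative computation of J(q) and the curvature term Jdot(q) q_dot from psi alone.
import numpy as np

def numerical_jacobian(psi, q, eps=1e-6):
    """J(q) = d psi / d q, built column by column."""
    x0 = psi(q)
    J = np.zeros((x0.size, q.size))
    for i in range(q.size):
        dq = np.zeros_like(q); dq[i] = eps
        J[:, i] = (psi(q + dq) - psi(q - dq)) / (2 * eps)
    return J

def curvature_term(psi, q, q_dot, eps=1e-6):
    """Jdot(q) q_dot, via the directional derivative of J(q) along q_dot."""
    J_plus = numerical_jacobian(psi, q + eps * q_dot)
    J_minus = numerical_jacobian(psi, q - eps * q_dot)
    return ((J_plus - J_minus) / (2 * eps)) @ q_dot
```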

30.

Learning RMPs
 [4] learns a specified RMP by reinforcement learning: the target-reaching RMP is learned, while the collision-avoidance and joint-limit RMPs are hand-designed
 RMPfusion [6] learns the Riemannian metrics $\boldsymbol{M}$, because they are hard to tune by hand
 [8] learns RMPs from human demonstrations
• Learns both the motion policy and the Riemannian metric
• The Riemannian metric is kept positive semidefinite via a Cholesky decomposition
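
A minimal sketch of the Cholesky parameterization mentioned above (illustrative, not the papers' code): learn unconstrained entries of a lower-triangular matrix $L$ and build $\boldsymbol{M} = LL^{\top}$, which is positive semidefinite by construction.

```python
# Illustrative PSD metric parameterization via Cholesky factors.
import numpy as np

def metric_from_params(params: np.ndarray, n: int) -> np.ndarray:
    """Map n*(n+1)/2 unconstrained parameters to a PSD matrix M = L L^T."""
    L = np.zeros((n, n))
    L[np.tril_indices(n)] = params
    return L @ L.T

M = metric_from_params(np.array([1.0, 0.3, 0.8, -0.1, 0.2, 0.5]), n=3)
print(np.all(np.linalg.eigvalsh(M) >= -1e-12))  # expected: True (PSD)
```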

31.

References
 [1] N. Ratliff+, "Riemannian Motion Policies", arXiv, 2018.
 [2] C. Cheng+, "RMPflow: A Computational Graph for Automatic Motion Policy Generation", WAFR, 2018.
 [3] X. Meng+, "Neural Autonomous Navigation with Riemannian Motion Policy", ICRA, 2019.
 [4] A. Li+, "RMP2: A Structured Composable Policy Class for Robot Learning", RSS, 2021.
 [5] A. Li+, "Multi-Objective Policy Generation for Multi-Robot Systems Using Riemannian Motion Policies", ISRR, 2022.
 [6] M. Mukadam+, "Riemannian Motion Policy Fusion through Learnable Lyapunov Function Reshaping", CoRL, 2019.
 [7] C. Cheng, "Efficient and Principled Robot Learning: Theory and Algorithms", PhD thesis, Georgia Institute of Technology, 2020.
 [8] M. Rana+, "Learning Reactive Motion Policies in Multiple Task Spaces from Human Demonstrations", PMLR, 2020.