1. HAI Weekly Seminar
2. Previous work
3. Experimental setup
4. Learning generalizable representations
5. Dynamics prediction from self-supervision
6. How is each modality used?
7. Overview of our method
8. Lessons Learned
9. Overview of today's talk
10. Related works
11. Crossmodal Compensation Model (CCM)
12. Training CCM
13. Corrupted sensor detection during deployment
14. CCM Task Success Rates
15. Model-based methods fit physically interpretable parameters
16. Deep learning-based methods can learn from data in the wild
17. Differentiable audio rendering can learn interpretable parameters from data in the wild
18. DiffImpact gets the best of both worlds for impact sounds
19. Physically interpretable parameters are easier to reuse
20. Decomposing an impact sound is an ill-posed problem
21. Modeling rigid object impact forces
22. Parameterizing contact forces
23. Optimize an L1 loss on magnitude spectrograms
24. Analysis by Synthesis Experiment
25. Analysis by Synthesis: Ceramic Mug
26. End-to-End Learning ASMR: Ceramic Plate
27. Robot Source Separation Experiment
28. Steel Fork and Ceramic Mug
29. DiffImpact's Key Takeaways
30. Conclusions
31. Thank you for your attention
Description:
Explore cutting-edge research on robotic manipulation skills in this Stanford University seminar. Delve into the challenges of high-dimensional state and action spaces, as well as sensor and motor control uncertainties. Discover how Assistant Professor Jeannette Bohg investigates representations of raw perceptual data to enhance robot learning and performance, particularly focusing on the integration of touch sensing with other modalities. Learn about crossmodal compensation models, corrupted sensor detection, and the application of deep learning and physically interpretable parameters in robotic manipulation. Examine innovative approaches like differentiable audio rendering and impact sound modeling for improved robot perception and control. Gain insights into end-to-end learning, robot source separation experiments, and key takeaways from the DiffImpact project, all aimed at advancing robustness and generalizability in robotic manipulation.
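
The DiffImpact portion of the talk fits the parameters of a differentiable audio renderer by optimizing an L1 loss on magnitude spectrograms (chapter 23 in the outline above). As a rough illustration of that idea, and not the authors' implementation, the sketch below compares a rendered impact sound against a recording in PyTorch; the STFT window and hop sizes are illustrative assumptions.

    import torch

    def magnitude_spectrogram(audio: torch.Tensor, n_fft: int = 1024,
                              hop_length: int = 256) -> torch.Tensor:
        # Magnitude of the STFT of a mono waveform; output shape (freq, frames).
        window = torch.hann_window(n_fft, device=audio.device)
        spec = torch.stft(audio, n_fft=n_fft, hop_length=hop_length,
                          window=window, return_complex=True)
        return spec.abs()

    def l1_spectrogram_loss(rendered: torch.Tensor,
                            recorded: torch.Tensor) -> torch.Tensor:
        # L1 distance between magnitude spectrograms. The loss is
        # differentiable, so gradients can flow back into the parameters
        # of a differentiable audio renderer.
        return torch.mean(torch.abs(magnitude_spectrogram(rendered)
                                    - magnitude_spectrogram(recorded)))

Because both the renderer and this loss are differentiable, physically interpretable quantities such as contact force parameters (chapter 22) can be fit by gradient descent directly against recorded audio.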

Vision, Touch & Sound for Robustness & Generalizability in Robotic Manipulation

Stanford University