Vibration-minimizing motion retargeting for robotic characters (disneyresearch.com)
155 points by guiambros on Aug 11, 2019 | 29 comments



After seeing the Bartender sample, I'm convinced this is a significant step for small-scale robotic arms. Now an arm doesn't have to be rock solid to be precise enough for simple tasks. For more sophisticated arms, this can increase their speed and extend their capabilities. It could even help exploit the flexibility of the material, for example for throwing or catapulting things. Exciting!


Maybe. One of the issues is that these techniques seem to depend on having a very accurate model of the animatronic and of how its materials react to forces, so that the required damping motions can be baked into the commanded motion.
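
To illustrate what "baked in" means: the classic trick in this space is input shaping, which needs exactly that kind of model. A minimal sketch, with all numbers made up (the paper's actual method is a full trajectory optimization, not this):

    import numpy as np

    # Zero-vibration (ZV) input shaping: convolve the raw command with two
    # impulses whose timing and amplitudes come from an assumed model of the
    # structure's natural frequency and damping. If the model is wrong, the
    # residual vibration comes back.
    wn, zeta = 8.0, 0.03                          # assumed flexible-arm model
    wd = wn * np.sqrt(1 - zeta**2)                # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    A1, A2 = 1 / (1 + K), K / (1 + K)             # impulse amplitudes
    td = np.pi / wd                               # second impulse at half period

    dt = 0.001
    cmd = np.concatenate([np.zeros(100), np.ones(1000)])  # raw step command
    shaper = np.zeros(int(round(td / dt)) + 1)
    shaper[0], shaper[-1] = A1, A2
    shaped = np.convolve(cmd, shaper)[:len(cmd)]  # staircased command that
                                                  # cancels its own ringing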


Wow. I wonder if this would result in more perceivably realistic movement when applied to 3D animation?


Yeah, I think some of the simulated ‘optimized’ control outputs looked more realistic than the intended animation.


Yeah, and not just the real optimized motion: the simulated optimized motion also looked significantly better than the original unsimulated animation.

Makes me think that for ordinary animation they should simulate a physical system and then apply optimized control inputs to approximate the curves the animator provides. It would result in more grounded, realistic-feeling animation.
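
A toy sketch of that idea, assuming a made-up mass-spring-damper "limb" and penalty weights (nothing here is from the paper):

    import numpy as np
    from scipy.optimize import minimize

    # Optimize a control input so a simulated wobbly "limb" tracks the
    # animator's target curve as closely as possible.
    dt, n = 0.02, 100
    t = np.arange(n) * dt
    target = np.where(t > 0.5, 1.0, 0.0)           # animator's step command
    wn, zeta = 12.0, 0.05                          # lightly damped limb model

    def simulate(u):
        x, v, xs = 0.0, 0.0, []
        for ui in u:                               # spring pulls toward command
            acc = wn**2 * (ui - x) - 2 * zeta * wn * v
            v += acc * dt
            x += v * dt
            xs.append(x)
        return np.array(xs)

    def cost(u):                                   # tracking error + smoothness
        return np.sum((simulate(u) - target)**2) + 1e-4 * np.sum(np.diff(u, 2)**2)

    u_opt = minimize(cost, target.copy(), method="L-BFGS-B").x
    print("tracking error:", np.sum((simulate(u_opt) - target)**2))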


That might be because it's no longer too perfect. As far as I know, small variations in timing are added to computer-generated beats to make them sound better. This might be a similar thing?


To make a different audio analogy: When they graph the input vs. the optimized control values, it looks a lot like the ringing artifacts of a low-pass filter. A low-pass filter removes high frequencies from an input signal. Since there are physical limits on how fast limbs can oscillate, maybe that’s part of what makes it look more natural.
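
A quick sketch of that analogy (filter order and cutoff made up):

    import numpy as np
    from scipy.signal import butter, lfilter

    # Low-pass filtering a step command: the output overshoots and rings
    # near the edge, much like the shape of the optimized control curves.
    fs = 500.0                                # Hz, arbitrary sample rate
    b, a = butter(4, 5.0, fs=fs)              # 4th-order low-pass, 5 Hz cutoff
    step = np.concatenate([np.zeros(100), np.ones(400)])
    smoothed = lfilter(b, a, step)
    print(smoothed.max())                     # > 1.0: overshoot from the filter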


I noticed that as well, and it reminded me of the Gibbs phenomenon.

https://en.wikipedia.org/wiki/Gibbs_phenomenon
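
It's easy to see with a truncated Fourier series of a square wave:

    import numpy as np

    # Gibbs phenomenon: the partial Fourier sum of a square wave overshoots
    # near the jump by roughly 9%, no matter how many terms are added.
    t = np.linspace(0, 1, 2000, endpoint=False)
    partial = sum(4 / (np.pi * k) * np.sin(2 * np.pi * k * t)
                  for k in range(1, 200, 2))  # odd harmonics only
    print(partial.max())                      # ~1.09 vs. the true value 1.0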


I suspect it's about reducing peak acceleration, jerk, and snap.
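
For a sampled joint trajectory you could eyeball those with repeated differencing (made-up signal and timestep):

    import numpy as np

    # Estimate acceleration, jerk (3rd derivative), and snap (4th) of a
    # sampled trajectory by repeated numerical differentiation.
    dt = 0.01
    pos = np.sin(np.linspace(0, 2 * np.pi, 200))   # stand-in joint angle
    vel = np.gradient(pos, dt)
    acc = np.gradient(vel, dt)
    jerk = np.gradient(acc, dt)
    snap = np.gradient(jerk, dt)
    print(abs(acc).max(), abs(jerk).max(), abs(snap).max())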


You might be able to get more of a mocap look from keyframe animation, which may or may not be what you want, depending on the production.


Absolutely beautiful work!

As an amateur roboticist/professional dilettante, I've often been tempted to consider a robot as little more than a 'Normal Line in R3' (essentially an end effector's pitch/roll/yaw at x/y/z, in effector states i from 1...n) and motion planning as little more than as-the-crow-flies pathing, with zero consideration for second- or third-order effects.

While this may be true for simple industrial applications with robot arms of infinite structural rigidity... I have learned two things from this post:

[1] Physical qualities of the robot arm (beyond its degrees of freedom) can materially affect path planning.

[2] Sometimes there is more to a robot than an end effector doing work (e.g. the rapping robot's funky dance moves). This raises a question, though: does the entire robot arm effectively become an end effector? Can we consider delighting an audience with cool dance moves 'work' being performed?

Thanks for sharing, and hats off to the Disney team! Great work!


People limping might serve as an example that a 'state-of-the-art, well-trained neural net' can't achieve good motion with less-than-perfect hardware.

So how much of this is actually addressing the wrong problem, that is, targeting the actuation instead of how the robot should be built to achieve the result, i.e. damping oscillations?

Here's the machine-learning stick-figure walking model that most of you probably know. How much of the problem with getting these figures to walk decently is a wrong model, with bad constraints on joint angles or limb lengths? Shouldn't those be evolved iteratively as well to achieve the best possible results?

https://www.kurzweilai.net/images/humanoid-walking-training.... (not trained well yet btw)


> People limping might serve as an example that a 'state-of-the-art, well-trained neural net' can't achieve good motion with less-than-perfect hardware.

I think that limping has more to do with avoiding pain than with damaged 'hardware'. For example, people on large amounts of drugs can push through the pain and injure themselves further.

> So how much of this is actually addressing the wrong problem

I think this paper addresses the right problem. By modeling the robot as a flexible system instead of a rigid one, you can improve performance in many scenarios.

Because there is no such thing as a perfectly rigid material (at least not within the realm of feasibility), this technique would be beneficial even if the robot were designed with extremely rigid parts and perfectly optimized joint angles and limb lengths.

Of course, where it really shines is when applied to a low-cost, low-weight system like the ones demonstrated in the paper. In the world of engineering, keeping things simple, low-cost, and light opens many doors for using cheaper hardware and simplifying the design process.

If every time Disney wanted a new animatronic robot they had to get custom-fabricated joints and limbs, the costs would be exorbitant. If instead they can just reach into their box of standard limbs, slap something together, and let the software fix it, they save money and effort.


> People limping might serve as an example that a 'state-of-the-art, well-trained neural net' can't achieve good motion with less-than-perfect hardware.

Toddlers are a good example of the opposite: they fall down quite a bit even with "good hardware". It seems intuitive that you benefit from having good control of the hardware, even if you can also improve the hardware itself.


I don't consider toddlers well trained in that regard. Or... any other ;)


That's kinda my point :)


> ...targeting the actuation instead of how the robot should be built...

I see this as another tool for getting 'ideal' motion. Sure, build your robot out of better parts, but then use this software to suppress oscillation even further, even in your 'better-designed' hardware.


From the article and video I got the impression that the goal was to give animators control over the robot's motion, while maintaining stability. The animators will not have a good model of the physical robot in their head, and that is what this model is correcting.


I think _Microft's point is that the robot should be able to perform the original motion without oscillating, rather than having to adjust the motion to compensate for oscillation.


Yes, take the human spine, for example: its double-S shape makes it very difficult to induce oscillations and helps absorb shocks. And then there is the 'Dancer' robot (first example, ca. 20s into the video), with a straight 'spine'... wobbling around.


A subtle constraint to optimize is hardware cost.

I assume Disney chose to separate the constraints by going with simple hardware shapes, yielding a predictable cost and an easy construction. A good hardware optimizer would also need to engineer assembly, which seems nontrivial.

The software cost of optimizing motion for existing hardware is more predictable (~19 minutes on an i7-7700 with 32 GB of RAM per second of animation).

That being said, they acknowledge in their paper that they could optimize certain properties of the robot, like rod size or joint position.


I can imagine adding this to the gimbal stabilization of a drone's camera. Even with a 3-axis gimbal, which is already pretty stable, I bet it would make things even better.


You would probably need to use sensing to improve stabilization there. The output of the approach here is just a position trajectory (I'm assuming). They manage to do that because they have a good enough model of the beams and masses. I imagine it's harder to get a comparable model of the disturbances acting on a drone.
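
In other words, a gimbal wants a feedback loop driven by the gyro rather than a precomputed command. A minimal sketch, with made-up gains:

    # Feedback, not feedforward: correct from the gyro measurement each
    # tick, so unmodeled disturbances (wind, frame flex) get rejected.
    def gimbal_rate_step(target_rate, gyro_rate, integ, dt, kp=2.0, ki=0.5):
        err = target_rate - gyro_rate        # rate error from the gyro
        integ += err * dt                    # integral term for steady bias
        motor_cmd = kp * err + ki * integ    # PI control on angular rate
        return motor_cmd, integ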


There's a pretty cool project using ML to help translate control from simulators to the real world. Video of it here, paper in the description: https://www.youtube.com/watch?v=aTDkYFZFWug

This seems like it could also help deliver something similar to what Disney has done (or possibly improve on it).


Fantastic work. Are there any code/libraries online for this, or is my only option to implement the paper myself?


I also wasn't able to find any sample code, but this thread [1] has some additional comments about the techniques used in the paper.

[1] https://www.reddit.com/r/robotics/comments/cjy77r/r_vibratio...


Is this different from the 6th-order, jerk-controlled motion planning used by modern CNC and 3D printer controllers, e.g. TinyG, g2core, and recent Marlin (PR 10337)?


I think it's different in a couple of aspects (see the sketch of the CNC-style approach below for contrast):

- There's a stronger timing constraint on the positioning than in CNC/3D printing

- The arms/body are significantly more wobbly

- Apparently you don't need to model the wobbly response "by hand"
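
What the CNC planners do is roughly this kind of jerk-limited ramp (limits made up): it bounds how hard resonances get excited, but it doesn't model or cancel any specific vibration mode the way the paper does.

    # Jerk-limited ("S-curve") velocity ramp, as in CNC/printer planners:
    # the acceleration itself changes at a bounded rate.
    j_max, a_max, v_target, dt = 50.0, 5.0, 2.0, 1e-3
    v, a, profile = 0.0, 0.0, []
    for _ in range(int(10 / dt)):                # hard cap for the sketch
        if v >= v_target:
            break
        if v_target - v <= a * a / (2 * j_max):  # time to ramp accel down
            a = max(a - j_max * dt, 0.0)
        else:
            a = min(a + j_max * dt, a_max)
        v += a * dt
        profile.append(v)
    # 'profile' is smooth in velocity and acceleration; vibration is only
    # limited indirectly, not optimized against a model of the structure.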


Cool, thanks. :)

Hopefully the algorithms (and code) they're using get open-sourced at some point, so they can be looked over and possibly incorporated into other things.



