Reactive Human-to-Robot Handovers of Arbitrary Objects
This is a bit outside the scope of what I usually cover on DT, but I was obsessed with robot arms in high school, and this new NVIDIA paper by Yang et al. (2020) looks awesome. Their project, Reactive Human-to-Robot Handovers of Arbitrary Objects, does exactly what it says on the tin: it uses computer vision to let a robot arm grasp arbitrary objects held out to it by a person. This is a really difficult problem that's key to building the kinds of robots we see in movies! The researchers posted a 3-minute demo video on YouTube, which is a fun watch.