Here the recorded demonstrations of the 5G Research Hub Munich are displayed. A new video is uploaded whenever a new project milestone is achieved and recorded.
30.09.2022, TUM, Germany
Advanced wireless communication networks provide lower latency and higher transmission rates. Although this enables many new teleoperation applications, the risk of network instability or packet drops remains unavoidable. Real-time manipulator teleoperation requires data transmission without discontinuity. Shared autonomy (SA) is a standard method to mitigate this issue: if data from the remote side becomes unavailable, the controller can continue based on previously learned models. However, due to the spatial gap between the human and robot trajectories, inevitable fluctuations occur, which cause issues in teleoperation applications. This motivates us to propose a new skill refinement strategy that modifies the previously trained skill and mitigates sudden unwanted motions during the control takeover phase. To this end, our approach combines a Hidden Semi-Markov Model (HSMM) and a Linear Quadratic Tracker (LQT) to learn and predict the user's intentions, and then exploits Coherent Point Drift (CPD) to refine the executable trajectory. We test our method both in simulation and in the real world on 2D English letter drawing and 3D robot-assisted feeding scenarios. Our experimental results on the Kinova® Movo platform show that the proposed refinement approach generates a stable trajectory and mitigates the control switching inconsistency.
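To illustrate the tracking component, the following is a minimal finite-horizon Linear Quadratic Tracker for a 2D double-integrator point mass. It is a sketch only: the HSMM intention model from the abstract is replaced by a fixed straight-line reference, and all system matrices, weights, and horizon lengths are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

dt, T = 0.1, 50
# State x = [px, py, vx, vy], control u = [ax, ay] (double integrator).
A = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
B = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)])
Q = np.diag([100.0, 100.0, 1.0, 1.0])   # heavily penalize position error
R = 0.01 * np.eye(2)                    # cheap control effort

# Reference trajectory: straight line from (0,0) to (1,1) at constant speed.
ref = np.zeros((T + 1, 4))
ref[:, 0] = np.linspace(0.0, 1.0, T + 1)
ref[:, 1] = np.linspace(0.0, 1.0, T + 1)
ref[:, 2:] = 1.0 / (T * dt)             # matching reference velocity

# Backward Riccati recursion with an affine term carrying the reference.
P, p = Q.copy(), -Q @ ref[T]
K, k = [None] * T, [None] * T
for t in reversed(range(T)):
    S = R + B.T @ P @ B
    K[t] = np.linalg.solve(S, B.T @ P @ A)
    k[t] = np.linalg.solve(S, B.T @ p)
    Acl = A - B @ K[t]
    p = Acl.T @ p - Q @ ref[t]
    P = Q + A.T @ P @ Acl

# Forward rollout from the origin under the time-varying affine policy.
x = np.zeros(4)
for t in range(T):
    u = -K[t] @ x - k[t]
    x = A @ x + B @ u

print(np.round(x[:2], 2))  # endpoint of the rollout, close to (1, 1)
```

In the paper's setting the reference would instead come from the HSMM's predicted intention, and CPD would warp the resulting trajectory to remove the takeover discontinuity.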
30.09.2022, TUM, Germany
Pouring liquids accurately into containers is one of the most challenging tasks for robots, as they are unaware of the complex fluid dynamics and the behavior of liquids during pouring. It is therefore not possible to formulate a generic pouring policy for real-time applications. In this paper, we propose PourNet, a generalized solution for pouring different liquids into containers. PourNet is a hybrid planner that uses deep reinforcement learning for end-effector planning and nonlinear model predictive control for joint planning. In this work, we introduce a novel simulation environment using Unity3D and NVIDIA-Flex to train our agents. Through an effective choice of the state space, the action space, and the reward functions, we allow a direct sim-to-real transfer of the learned skills without additional training. In simulation, PourNet outperforms the state of the art with an average deviation of 4.9 g for water-like and 9.2 g for honey-like liquids. In the real-world scenario using the Kinova® Movo platform, PourNet achieves an average pouring deviation of 2.3 g for dish soap when using a novel pouring container. The average pouring deviation measured for water was 5.5 g.
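The abstract highlights that the choice of reward function is key to sim-to-real transfer. As a purely illustrative sketch (these weights, terms, and the function itself are assumptions, not PourNet's actual reward), a dense pouring reward might trade off tracking of the target poured mass against spillage:

```python
# Hypothetical reward shaping for a pouring task, in grams.
def pouring_reward(poured_g, target_g, spilled_g,
                   w_track=1.0, w_spill=5.0):
    """Dense reward: negative mass-tracking error minus a spill penalty."""
    return -w_track * abs(target_g - poured_g) - w_spill * spilled_g

print(pouring_reward(95.0, 100.0, 0.0))   # -5.0: 5 g short of the target
print(pouring_reward(100.0, 100.0, 2.0))  # -10.0: on target but 2 g spilled
```

Expressing the reward in physical units (grams) rather than simulator-specific quantities is one way such a design can remain meaningful when the policy is deployed on real hardware.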
12.02.2021, TUM, Germany
Motion control and planning are critical components of manipulator teleoperation. Online (real-time) motion control is challenging for active obstacle avoidance and often results in fluctuating, unsafe motion. Offline motion planning, on the other hand, generates precise and safe trajectories for complex manipulation. In this paper, a real-time nonlinear model predictive control based motion planner (NMPC-MP) is designed for teleoperated manipulation. In contrast to traditional NMPC-based approaches, our model considers a complex environment with dynamic obstacles. Our multi-threaded NMPC-MP allows for real-time planning, including dynamic objects. We evaluate our approach both in a simulated environment and in real-world experiments using the Kinova® Movo platform. The comparison to state-of-the-art approaches (e.g., RRT-Connect, CHOMP, and STOMP) shows a significant improvement in real-time motion planning using NMPC-MP. In real-world tests, the proposed planner was applied to a human-shaped dual-manipulator setup. Our results show that NMPC-MP runs in real time and generates smooth and reliable trajectories. The experiments validate that the planner precisely tracks active goals from the teleoperator while avoiding self-collisions and obstacles.
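The receding-horizon idea behind such a planner can be sketched in a few lines for a 2D point robot with bounded velocity controls. This is a toy sketch under stated assumptions: the kinematics, cost weights, horizon, and the single static circular obstacle are all illustrative, and a generic soft-constraint penalty with an off-the-shelf solver stands in for the paper's multi-threaded NMPC formulation with dynamic obstacles and full manipulator kinematics.

```python
import numpy as np
from scipy.optimize import minimize

dt, H = 0.2, 8
goal = np.array([2.0, 0.0])
obs_c, obs_r = np.array([1.0, 0.1]), 0.4   # obstacle center and radius

def rollout(x0, u_seq):
    """Integrate the simple kinematics x_{t+1} = x_t + u_t * dt."""
    xs, x = [x0], x0
    for u in u_seq.reshape(H, 2):
        x = x + u * dt
        xs.append(x)
    return np.array(xs)

def cost(u_seq, x0):
    xs = rollout(x0, u_seq)
    goal_cost = np.sum((xs - goal) ** 2)        # pull toward the goal
    d = np.linalg.norm(xs - obs_c, axis=1)
    obstacle_cost = np.sum(np.maximum(0.0, obs_r + 0.1 - d) ** 2)
    effort = 0.01 * np.sum(u_seq ** 2)          # control effort
    return goal_cost + 500.0 * obstacle_cost + effort

# Receding-horizon loop: solve, apply only the first control, re-solve.
x = np.array([0.0, 0.0])
u_warm = np.zeros(2 * H)
trail = [x.copy()]
for _ in range(60):
    res = minimize(cost, u_warm, args=(x,), method="L-BFGS-B",
                   bounds=[(-0.5, 0.5)] * (2 * H))
    u_warm = res.x                  # warm-start the next solve
    x = x + res.x[:2] * dt          # execute the first control only
    trail.append(x.copy())
    if np.linalg.norm(x - goal) < 0.05:
        break

print(np.round(x, 2))  # ends near the goal after skirting the obstacle
```

Re-solving at every step with a warm start is what lets such a planner react to moving goals and obstacles; the real NMPC-MP additionally distributes this solve across threads to keep it within the teleoperation control cycle.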
28.11.2020, TUM, Germany
The video below demonstrates the end-to-end connection of our robot MOVO to a 7-DoF haptic interface. In this video, the 5G Research Hub Munich presents the new features of our end-to-end system: autonomous navigation and dynamic path planning, Cloud-RAN and function split, and data plane isolation. Furthermore, the LMT vision system is integrated into the robot, streaming over 5G to a head-mounted display for teleoperation.
18.02.2020, TUM, Germany
The video below demonstrates the end-to-end connection of our new robot MOVO to a 7-DoF haptic interface over our network model. Commands are sent from the Telepresence Station, where the haptic interface is located and operated by the teleoperator, to the Radio Access Network, which communicates with the MOVO robot via a wireless channel using USRPs. The setup diagram of this demonstration is also shown in the video.
18.06.2019, TUM, Germany
The video below shows the very first wireless control of the MAVI robot over our network model. Commands are sent through the Core Network to the Radio Access Network, which communicates with the robot via a wireless channel using USRPs. The video demonstrates that we are able to control the movement of the robot and its gripper with our approach.
19.12.2019, TUM, Germany
Our 5G-enabled robots wish you a Merry Christmas and a Happy New Year!
This work receives funding from the Bavarian Ministry of Economic Affairs, Regional Development and Energy as part of the project 5G Testbed Bayern mit Schwerpunktanwendung "eHealth".