
Google DeepMind’s latest creation is making waves in the table tennis world. Its robotic arm, trained on simulations and real data, effortlessly beats beginners and even wins 55% of its matches against intermediate amateurs. With 13 wins in 29 matches, the AI player adjusts its tactics with lightning speed. Top players have not yet been defeated, but many human opponents already find the machine very hard to beat. The robot still struggles with fast balls and cannot serve, though that is likely only a matter of time: with chess and Go, DeepMind also did not beat the world champion on the first attempt.

With “AlphaPong,” a robot has for the first time played a physical sport at a human level. The system combines an ABB IRB 1100 industrial robotic arm with custom AI software and can hold its own against human players at the amateur level.

Training and data

AlphaPong’s training combined computer simulations with real-world data. The AI first learned in a simulated physics environment seeded with 17,500 real ball trajectories, then refined its skills on data from more than 14,000 rally shots and 3,000 serves. This hybrid training regime lets the robot adapt to different playing styles and skill levels without player-specific training.
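The hybrid idea described above can be sketched in a few lines: pretrain on simulated rollouts seeded by real trajectories, then mix in logged real-world rallies and serves for refinement. This is a minimal illustrative sketch, not DeepMind’s actual pipeline; all function and variable names here are hypothetical.

```python
import random

def simulated_trajectory(rng):
    # Hypothetical stand-in for a physics-engine rollout seeded by a
    # real-world ball trajectory (the article mentions 17,500 of these).
    return [("sim", rng.random()) for _ in range(5)]

def real_trajectory(rng):
    # Stand-in for a logged rally shot or serve from real play.
    return [("real", rng.random()) for _ in range(5)]

def build_training_set(n_sim, n_real, seed=0):
    """Mix simulated rollouts with real rally/serve data, sim-to-real
    style. Illustrative only; not DeepMind's actual API or data."""
    rng = random.Random(seed)
    data = [simulated_trajectory(rng) for _ in range(n_sim)]
    data += [real_trajectory(rng) for _ in range(n_real)]
    rng.shuffle(data)  # interleave both sources for training
    return data

dataset = build_training_set(n_sim=100, n_real=50)
print(len(dataset))  # 150 trajectories
```

The key design point the article hints at is that neither data source alone suffices: simulation provides cheap volume, while real rallies and serves correct for physics the simulator gets wrong.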

In tests, the robot won 13 of its 29 matches (45%), including every match against beginners and 55% of those against intermediate players. Against advanced players, however, it lost every match. The robot still struggles with fast, very high, or very low balls and cannot directly measure the ball’s spin, a limitation against more experienced opponents.