
Prof. Darius Burschka learned it from sailors
Calculating impending collisions of flying drones or of cars in traffic in advance, and thus avoiding them: that is the goal of Darius Burschka. To do this, the professor at the Technical University of Munich (TUM) tracks every point of an image taken by the cameras of a drone in the air or a vehicle on the road. In principle, he proceeds exactly as seafarers have always done with the constant bearing.

The compound eye of a wasp gave Burschka an idea. By swiveling its body horizontally back and forth, the insect probes which objects are close and which are farther away. This is how it builds its mental map while on the move.
Airspace and road traffic: 60 measurements per second for greater safety
Burschka, Co-Head of Perception at the Munich Institute of Robotics and Machine Intelligence (MIRMI) at TUM, uses a similar solution to find out whether drones or cars are in danger of colliding with other objects. His computer system checks the pixels of a camera image 60 times a second and determines the "collision conditions". "We track up to one million pixels of an image in real time," Burschka explains. To calculate this so-called optical flow, he does not need a supercomputer, but "only" a very powerful graphics processor that handles the image processing, a second processor that evaluates the collision paths, and a camera. "We look at the features in the image that are detectable and see how they move across the image," Burschka describes.
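As a rough illustration of what such per-frame feature tracking looks like, the sketch below uses OpenCV's pyramidal Lucas-Kanade optical flow. The camera index, feature count, and loop structure are illustrative assumptions, not details of Burschka's actual system.

```python
import cv2

cap = cv2.VideoCapture(0)               # any camera; index 0 is an assumption
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Detect trackable features ("the features in the image that are detectable")
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=10000,
                              qualityLevel=0.01, minDistance=5)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track every feature from the previous frame into the current one
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.flatten() == 1

    # Per-feature image motion between consecutive frames: the optical flow
    flow = (new_pts - pts)[good]

    # A real system would evaluate collision paths from `flow` here and
    # re-detect features as they are lost; omitted for brevity.
    prev_gray = gray
    pts = new_pts[good].reshape(-1, 1, 2)
```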
Two-dimensional images as a basis: Like the constant bearing in shipping
To calculate the current danger of a collision, the TUM professor only needs two-dimensional images from a single perspective, like the wasp that fixates on individual points and perceives how they change. Or like a sailor who navigates by the constant bearing: by definition, a ship is on a collision course if the bearing does not change, or changes only slightly, as the vessels approach. "A collision is best detected when you pay attention to the objects around you that are not moving," says Burschka. The TUM scientist calculates where and at what distance objects fly past the camera, i.e., "pierce" the observation plane. Traditionally, experts for autonomous driving, for example, use multiple cameras that compute the distances to other objects via vectors at close range. "If the objects are far away from the camera, the 3-D method no longer delivers reliable results," explains Burschka. Then the movement of the individual points between the images is no longer perceptible.
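To make the constant-bearing rule concrete, here is a toy sketch (with made-up tracks and a made-up tolerance, not the author's code): two objects are on a collision course when the bearing between them stays nearly constant while the range shrinks.

```python
import math

def on_collision_course(own_track, other_track, bearing_tol_deg=1.0):
    """own_track / other_track: lists of (x, y) positions at equal time steps."""
    bearings, ranges = [], []
    for (ox, oy), (tx, ty) in zip(own_track, other_track):
        dx, dy = tx - ox, ty - oy
        bearings.append(math.degrees(math.atan2(dy, dx)))  # bearing to the other
        ranges.append(math.hypot(dx, dy))                  # range to the other
    bearing_change = max(bearings) - min(bearings)
    closing = ranges[-1] < ranges[0]
    # Constant (or nearly constant) bearing while closing -> collision course
    return closing and bearing_change < bearing_tol_deg

# Two straight tracks that meet at the origin: the bearing stays at 45
# degrees while the range decreases, so the pair is flagged.
own   = [(0.0, -10.0 + t) for t in range(5)]    # heading north
other = [(10.0 - t, 0.0) for t in range(5)]     # heading west
print(on_collision_course(own, other))          # True
```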
Paradigm shift: Time to Interaction replaces metric state determination
With the new method, objects that are still far away but approaching the observer head-on and very quickly are recognized as more dangerous than others that are currently closer but moving away in the same direction. "This means that prioritization is not based on movement, but on dynamic collision conditions," Burschka says. All the "features" in the image are under observation, and the potentially dangerous ones can be flagged accordingly. "We measure the time to interaction," says Burschka, meaning the time that elapses before a collision occurs. The new method allows the scientists to analyze motion with a single camera, even when the camera is moving as well as the object. "In contrast to metric reconstruction, this approach is much cheaper and more robust," Burschka is convinced. Using time-to-interaction would thus be a paradigm shift for research. The professor wants to use his invention in drones, in connected vehicles, and in service robotics.
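One standard monocular way to estimate such a time to interaction, sketched below, is the classic time-to-contact value tau = s / (ds/dt), computed from the expansion rate of an object's apparent size in the image. This is a textbook formulation under a constant-closing-speed assumption, not necessarily Burschka's exact algorithm; the sizes and frame rate are invented.

```python
def time_to_contact(prev_size_px, curr_size_px, dt):
    """Seconds until contact, assuming constant closing speed.

    prev_size_px, curr_size_px: apparent object size in consecutive frames.
    dt: time between frames (e.g. 1/60 s at 60 measurements per second).
    """
    expansion_rate = (curr_size_px - prev_size_px) / dt  # ds/dt
    if expansion_rate <= 0:
        return float("inf")       # not expanding -> not approaching
    return curr_size_px / expansion_rate

# An object growing from 40 to 41 pixels between two frames at 60 Hz is
# about 0.68 s from contact: this is how a far object closing fast can
# rank as more urgent than a near one that is moving away.
print(round(time_to_contact(40.0, 41.0, 1 / 60), 2))  # 0.68
```

Note that the estimate needs no metric distance at all: only the rate at which a feature grows in the image, which is exactly what a single camera can measure.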