ChatGPT: reasoning machine needs to be regulated


Robotics must be safe

OpenAI CEO Sam Altman during his visit to the Audimax at TUM, together with Reinhard Heckel.
ChatGPT can make programming more efficient, write texts, act as a brainstorming partner or create design proposals. But when it comes to the deployment of generative artificial intelligence in the physical world of robotics, researchers at the Technical University of Munich (TUM) are cautious.

In the view of OpenAI CEO Sam Altman, the generative AI model ChatGPT, which he calls a reasoning machine, marks the start of a paradigm shift comparable to the launch of the iPhone or even the internet. On his visit to TUM at the end of May, he described it as "the most useful, significant and inspiring tool ever created by humanity." Is the same true for robotics?

"Now I often don’t have to write my own code."

Reinhard Heckel is a fan of ChatGPT. The professor of machine learning at TUM uses Copilot, a programming assistant built on OpenAI technology, for his own coding. "Now I often don’t have to write out parts of the code myself. The tool does it automatically," explains Prof. Heckel. Especially with routine tasks, this saves time that he can spend on the "truly exciting" programming problems. In brainstorming sessions with ChatGPT, he looks for good titles for a paper or even lets ChatGPT get started on the writing. "ChatGPT is a good assistant for small and bothersome tasks," says Prof. Heckel. The tool’s image recognition is even good enough for blind people to use it to describe how the world around them looks. The limits are reached, however, when robots are involved. "Cutting an onion, for example, is a complex task for a robot that is still more challenging than dealing with text." Nevertheless, he concludes: "The pluses outweigh the minuses. ChatGPT and tools like Copilot make work easier and more efficient."

Training data for neural networks must be available

Klaus Diepold is also experimenting with generative artificial intelligence. A few years ago, in his bachelor’s course "Computers and creativity", the professor of data processing at TUM assigned students the task of developing ideas on how computers could become creative. Using forerunners of ChatGPT, the students created a greeting card "for Aunt Erna" - with AI-generated graphics and text. "The software wrote a poem and instructed the image tool to add buttercups - Aunt Erna’s favorite flower," recalls Diepold.

However, there were no practical applications in robotics. Today, under the lighthouse project KI.Fabrik at the Robotics and AI Institute MIRMI, doctoral candidate Lucca Sacchetto is working on a design generator based on DALL·E, the image generation model from Altman’s company OpenAI. "There is plenty of knowledge for generative models to use when creating a chair because lots of images are available to train neural networks. But the task of designing a robot gripper is more difficult because of the lack of training data. On top of that, we have requirements for the rigidity of the material and other physical properties," says Diepold.

Robotics: no mistakes allowed

The challenges in classical robotics are complex. A service robot, for example, moves in an environment where people are present. It might hand a glass of water to a nursing home resident or help someone in a rehabilitation setting. "The robot has to be autonomous, smart and agile and cannot do harm," says Diepold. "In environments where safety is an issue, you cannot allow even one mistake to happen."

Regulation is therefore especially important in robotics. At the Munich event, Altman promised that he intends to "regulate his software responsibly." As "a nice linguistic toy", as Diepold calls it, ChatGPT does no visible damage for the moment. But Diepold has doubts about its intelligence. "ChatGPT produces texts that are eloquent, but not intelligent," says the professor. "It’s like a politician giving a speech: it might all sound fine, but that certainly doesn’t make it true."

Alin Albu-Schaeffer, a professor of sensor-based robot systems and intelligent assistance systems at TUM, is emphatic: "AI must interact with the physical world. ’Just’ talking is not enough." The best example of what can go wrong comes from a homework assignment that students generated with ChatGPT. "The text reads quite well, but none of the references in this piece of scientific writing were correct," says Diepold. Whenever knowledge is lacking, ChatGPT compensates by inventing things. In robotics, by contrast, mistakes are not permitted.

"A still immature technology has been unleashed on the world"

Alena Buyx of TUM, the co-director of the high-tech platform munich_i, is one of around 20 professors weighing up opportunities and risks in the generative AI task force at TUM. "A still immature technology has been unleashed on the world, in a sense. So now there is a rush to make recommendations for the legal regulations that lie ahead - from the technical side and for the ethical, social and regulatory aspects," explains the professor of ethics in medicine and healthcare technologies. Along with the risks, however, she is also interested in recognizing the potential upsides and carefully considering where they can be used. In hospitals in the USA, for example, social bots are already being deployed - in controlled and supervised settings - to conduct in-depth patient discharge interviews, taking time to answer all questions. Generative AI is also being used in biotechnology to analyze protein folding in order to develop new molecules for cancer drugs, for example.

Are the developments heading in the right direction? "All of the data needed to train the AI models ultimately come from all of us. They have been collected by the companies and used for technologies that will soon permeate the world we live in," explains Buyx. "Obviously societies have the right to actively shape the use and regulation of the technologies." On his visit to Munich, OpenAI CEO Altman also stressed his determination to regulate the capabilities of his reasoning machine responsibly. There is still a lot of work to do. That is another reason why Altman believes that the opportunities for young people starting out in tech careers are better today than at any time in the past 15 years.