Thanks to artificial intelligence, the AIMOS software is able to recognize bones and organs in three-dimensional grayscale images and segment them, which makes subsequent evaluation considerably easier. Image: Astrid Eckert / TUM
Self-learning algorithms analyze medical imaging data

Imaging techniques enable a detailed look inside an organism. But interpreting the data is time-consuming and requires a great deal of experience. Artificial neural networks open up new possibilities: they need just seconds to interpret whole-body scans of mice and to segment and depict the organs in color instead of in various shades of gray, which considerably simplifies the analysis.

How big is the liver? Does it change when medication is taken? Is the kidney inflamed? Is there a tumor in the brain, and have metastases already developed? To answer such questions, bioscientists and doctors have so far had to screen and interpret a wealth of data. "The analysis of three-dimensional imaging processes is very complicated," explains Oliver Schoppe.
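The article does not describe AIMOS's internals, but the output step it mentions, replacing shades of gray with per-organ colors, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the label IDs, the organ-to-color mapping, and the `colorize_labels` helper are hypothetical, standing in for whatever the network actually predicts.

```python
import numpy as np

# Hypothetical organ-label-to-RGB mapping; the real software's labels
# and palette are not given in the article.
ORGAN_COLORS = {
    0: (0, 0, 0),        # background
    1: (255, 0, 0),      # e.g. liver
    2: (0, 255, 0),      # e.g. kidneys
    3: (0, 0, 255),      # e.g. brain
}

def colorize_labels(labels: np.ndarray) -> np.ndarray:
    """Map an integer label volume (D, H, W) to an RGB volume (D, H, W, 3)."""
    lut = np.zeros((max(ORGAN_COLORS) + 1, 3), dtype=np.uint8)
    for label, rgb in ORGAN_COLORS.items():
        lut[label] = rgb
    return lut[labels]  # NumPy fancy indexing applies the lookup voxel-wise

# Tiny synthetic stand-in for a segmented scan: a 2x2x2 label volume.
labels = np.array([[[0, 1], [1, 2]],
                   [[2, 3], [0, 0]]])
rgb = colorize_labels(labels)
print(rgb.shape)  # (2, 2, 2, 3)
```

Once a scan is segmented this way, questions like "how big is the liver?" reduce to counting voxels with a given label, e.g. `(labels == 1).sum()` times the voxel volume.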