Why we need Neuroethics

NewIn: Marcello Ienca

New findings in the fields of artificial intelligence (AI) and brain research are accelerating each other. The emerging technologies will profoundly change our lives. However, we are at a point where we can shape these developments, says Marcello Ienca. In this episode of "NewIn," the Professor of Ethics of AI and Neuroscience talks about the potential and risks of current developments.

Software such as ChatGPT gives computers human-like abilities. At the same time, there have been recent breakthroughs in neurotechnology - most prominently AI-based implants that allow people to speak or walk again. Is there a connection?

Absolutely. We are currently experiencing a scientific revolution. Advances in AI and neuroscience are spurring each other on. They form a virtuous circle - the opposite of a vicious one.

That sounds very optimistic. Even if you look solely at AI, there are many dangers: Lack of transparency, discrimination, flawed decisions...

Of course, there are dangers. For example, it is conceivable that AI could derive sensitive information, such as a person’s sexual orientation, from neural data. This raises new questions about self-determination. Our brain is no longer a fortress closed off from the digital world: we increasingly have access to the neurological basis of thought processes. As a society, we must consider what we want and where we draw red lines.

And that’s why we need ethics?

Yes, but for me, ethics does not just mean avoiding dangers and risks. It is also about doing good. We do this by developing technologies that can help the hundreds of millions of people with neurological and psychiatric disorders. This works especially well if we incorporate ethical considerations from the outset through human-centered technology development.

Isn’t it already too late for that?

For neurotechnology: no. With AI, we only acted reactively, responding to technologies that already existed. This time, we are acting proactively. For example, in 2019 the OECD (Organisation for Economic Co-operation and Development) established guidelines for the responsible development of neurotechnologies; I contributed to these guidelines myself. Currently, the Council of Europe and UNESCO are also developing principles on this topic.

Are the guidelines also relevant for companies? Elon Musk’s company Neuralink recently announced that it had placed a brain implant in a patient, and in 2023 Apple secured a patent for measuring brain waves with AirPods headphones. That sounds rather worrying to me.

In some cases, the private sector resembles the Wild West. Neuralink, for example, is a company that seems to have no interest in ethics. On the other hand, many other companies have set up ethics councils, and companies were actively involved in developing the OECD guidelines. We must - and can - ensure that the majority of neurotech companies cultivate a culture of responsible innovation.

Does this mean that new laws are not necessary?

We can pass laws regulating which products may be sold in Europe. However, legal compulsion is not the only solution. It is also in companies’ own interest to prevent a scandal like Cambridge Analytica in the coming years; that would have a devastating impact on the entire field.

Apart from working on the guidelines, what are you currently researching yourself?

Many things. For example, as part of an international collaborative project, we are working with people who have brain implants to incorporate their views - for instance on ethical aspects - into the development of future implants. Another example: together with colleagues from the computer science department at TUM, we are working on transparent AI for neurotechnology and on privacy-preserving processing of neural data.