Scientists in a lab urgently discussing consciousness amid holographic displays of brains, AI, and organoids, highlighting ethical risks from advancing neurotech.
AI-generated image

Scientists say defining consciousness is increasingly urgent as AI and neurotechnology advance

Verified

Researchers behind a new review in Frontiers in Science argue that rapid progress in artificial intelligence and brain technologies is outpacing scientific understanding of consciousness, raising the risk of ethical and legal mistakes. They say developing evidence-based tests for detecting awareness—whether in patients, animals or emerging artificial and lab-grown systems—could reshape medicine, welfare debates and technology governance.

The rapid development of artificial intelligence and neurotechnology is intensifying calls from consciousness researchers to clarify what it means to be conscious—and how to detect it.

In a review published in Frontiers in Science, Prof. Axel Cleeremans of Université Libre de Bruxelles, Prof. Liad Mudrik of Tel Aviv University, and Prof. Anil Seth of the University of Sussex argue that advances in these technologies are moving faster than scientific agreement on how consciousness arises. They describe consciousness in broadly familiar terms—as awareness of the world and of oneself—while noting that science still lacks consensus on how subjective experience emerges from physical processes.

The authors point to ongoing competition among major scientific theories of consciousness, including global workspace approaches, higher-order theories, integrated information theory and predictive processing frameworks. They argue that progress depends in part on developing stronger methods to test these ideas, including “adversarial collaborations” in which proponents of rival theories jointly design experiments intended to distinguish between them.

A key goal, the review argues, is the development of evidence-based tests for consciousness that can be applied beyond healthy adult humans. Such tools could affect clinical care by helping clinicians detect covert awareness in some patients who appear unresponsive, and by refining assessments in conditions such as coma, advanced dementia, and anesthesia—areas that can influence treatment planning and end-of-life decisions.

The review also outlines potential implications for mental health research. The authors argue that a better scientific account of subjective experience could help narrow gaps between findings in animal models and the lived experience of human symptoms, with possible relevance for conditions including depression, anxiety and schizophrenia.

Beyond medicine, the authors say improved ways of identifying consciousness could reshape debates over animal welfare and ethical obligations, influencing practices in research, agriculture and conservation if society gains clearer evidence about which animals are sentient.

They also highlight potential legal consequences. The review notes that neuroscience findings about unconscious influences on behavior could pressure legal systems to revisit how they interpret responsibility and concepts such as mens rea, the mental element traditionally required for criminal liability.

In technology, the authors argue that emerging systems—from advanced AI to brain organoids and brain–computer interfaces—raise new questions about whether consciousness could be created, altered, or convincingly simulated, and what moral and regulatory obligations might follow. Cleeremans warned that unintended creation of consciousness would pose “immense ethical challenges and even existential risk.” Seth said that advances in the science of consciousness are likely to reshape how humans understand themselves and their relationship to both AI and the natural world. Mudrik argued that a clearer understanding of consciousness in animals could transform how humans treat them and other emerging biological systems.

To move the field forward, the authors call for more coordinated, collaborative research that combines careful theory testing with greater attention to phenomenology—the qualities of experience itself—alongside functional and neural measures.

They argue that such work is needed not only to advance basic science, but also to prepare society for the medical, ethical and technological consequences of being able to detect—or potentially create—consciousness.

What people are saying

Initial reactions on X primarily involve shares and paraphrases emphasizing the urgency of defining consciousness given advances in AI and neurotechnology. Users highlight ethical risks, the need for scientific tests of awareness, and potential impacts on medicine, law, animal welfare, and rights for machines or lab-grown systems. Sentiment is mostly neutral, with some users underscoring the unsettling moral implications.

Related articles

Illustration of glowing whole-brain neural networks coordinating efficiently, representing a University of Notre Dame study on general intelligence.

Study points to whole-brain network coordination as a key feature of general intelligence

Reported by AI

University of Notre Dame researchers report evidence that general intelligence is associated with how efficiently and flexibly brain networks coordinate across the whole connectome, rather than being localized to a single “smart” region. The findings, published in Nature Communications, are based on neuroimaging and cognitive data from 831 Human Connectome Project participants and an additional 145 adults from the INSIGHT Study.

Members of the Catholic Educational Association of the Philippines said artificial intelligence cannot duplicate the human conscience, while pushing for the responsible integration of AI into the teaching-learning process.


At the India AI Impact Summit, Prime Minister Narendra Modi described artificial intelligence as a turning point in human history that could reset the direction of civilisation. He expressed concern over the form of AI to be handed to future generations and emphasised making it human-centric and responsible. Experts have warned about risks including data privacy, deepfakes, and autonomous weapons.

Researchers at Korea University have developed a dual-output artificial synapse to boost the energy efficiency of multitasking AI systems, the university announced. The device emits both electrical and optical signals simultaneously to enable parallel processing. Tests showed up to 47 percent faster computation and energy use reduced by as much as 32 times compared to conventional GPU hardware.
