A team of researchers, including those from Penn State University, has investigated how blockchain technology can enforce ethical boundaries on artificial intelligence systems. Their work, published in the Journal of Cybersecurity and Privacy, proposes a framework to ensure AI complies with human values. The study highlights blockchain's potential for transparency and accountability in AI decision-making.
In Malvern, Pennsylvania, Penn State Great Valley Assistant Professor of Software Engineering Dusan Ramljak and his collaborators published a paper examining blockchain's role in aligning AI with ethical standards. As AI systems like autonomous trading bots and diagnostic tools increasingly operate with minimal human oversight, the risk of untrustworthy outcomes in areas such as medicine and finance grows. "AI systems are trained to act in certain ways within specific parameters. If the parameters shift... they can act in unpredictable ways that contradict human values — for example, misrepresentation by mishandling sensitive health data or misinformation by recommending the wrong medical treatment," Ramljak explained.
The team, which includes Penn State Abington Assistant Professor Maryam Roshanaei, master's student Jainil Kakka, Vienna University doctoral students Alexander Neulinger and Lukas Sparer, and University of Kragujevac doctoral student Dragutin Ostojić, reviewed over 7,000 studies. From that review, they developed a three-tiered framework for future research: micro-level analysis focusing on technical protocols to secure user data; meso-level analysis of sector-specific use cases in finance and healthcare; and macro-level analysis addressing moral principles, cultural variations, and international legal standards such as the European Union’s Artificial Intelligence Act.
Blockchain, best known for powering cryptocurrency, offers secure, decentralized records with transparent data lineage and tamper-resistant logs. It supports consensus mechanisms in which participants vote on ethical rules that are then enforced automatically through smart contracts, making AI decisions traceable and protecting training data from undetected manipulation. However, challenges remain: high energy consumption, privacy risks created by greater transparency, and the need for frameworks flexible enough to span diverse cultural and legal contexts.
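The paper is a review and the announcement gives no implementation details, but the tamper-resistant logging it describes rests on a familiar idea: chaining each record's hash to the one before it so that any later alteration is detectable. The Python sketch below is purely illustrative; the class and field names (AuditLog, record_decision, inputs_digest) are hypothetical and assume nothing about the team's framework.

```python
# Illustrative sketch only: a hash-chained, append-only audit log for AI decisions.
# All names here are hypothetical and are not taken from the published paper.
import hashlib
import json
import time


class AuditLog:
    """Each entry commits to the previous entry's hash, so silently
    editing an earlier record breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []

    def record_decision(self, model_id, inputs_digest, decision):
        # Store a digest of the inputs rather than raw data, to limit privacy exposure.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = {
            "timestamp": time.time(),
            "model_id": model_id,
            "inputs_digest": inputs_digest,
            "decision": decision,
            "prev_hash": prev_hash,
        }
        entry_hash = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**payload, "hash": entry_hash})
        return entry_hash

    def verify(self):
        """Recompute every hash in order; returns False if any entry was altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            payload = {k: v for k, v in entry.items() if k != "hash"}
            if payload["prev_hash"] != prev_hash:
                return False
            recomputed = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True


log = AuditLog()
log.record_decision("diagnostic-model-v2", "sha256-of-patient-record", "recommend MRI")
print(log.verify())  # True; becomes False if any stored entry is later edited
```

In an actual blockchain deployment, such a log would also be replicated across independent nodes that must reach consensus before an entry is appended; the single-process sketch shows only the hash-chaining half of that story.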
Ramljak emphasized the need for multi-layered frameworks integrating security, governance, and legal constraints. "The convergence of blockchain and AI could give us not only enhanced reliability of technical systems but also new ways of human-centered governance, ensuring accountability at scale," he said.