Virginia Dignum, Umeå University (Sweden)
The last few years have seen enormous growth in the capabilities and applications of Artificial Intelligence (AI). Hardly a day goes by without news about technological advances and the societal impact of the use of AI. Not only are there high expectations of AI's potential to help solve many current problems and to support the well-being of all, but concerns are also growing about the impact of AI on society and human well-being. In response, many principles and guidelines have been proposed for trustworthy, ethical, or responsible AI.
In this talk, I argue that ensuring responsible AI is about more than designing systems whose behaviour is aligned with ethical principles and societal values. It is, above all, about the way we design them, why we design them, and who is involved in designing them. This requires novel theories and methods that put in place the social and technical constructs needed so that the development and use of AI systems is done responsibly and their behaviour can be trusted.
Virginia Dignum is Professor of Social and Ethical Artificial Intelligence at Umeå University, Sweden, and is associated with TU Delft in the Netherlands. She is the scientific director of WASP-HS, the Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society. She is a Fellow of the European Association for Artificial Intelligence (EurAI), a member of the European Commission High-Level Expert Group on Artificial Intelligence, of the World Economic Forum’s Global Artificial Intelligence Council, of the Executive Committee of the IEEE Initiative on Ethically Aligned Design, and a founding member of ALLAI-NL, the Dutch AI Alliance. She is the author of “Responsible Artificial Intelligence: Developing and Using AI in a Responsible Way”, published by Springer in 2019.
She has a PhD in Artificial Intelligence from Utrecht University, and in 2006 she was awarded the prestigious Veni grant by NWO (the Dutch Organisation for Scientific Research). She is a well-known speaker on the social and ethical impacts of Artificial Intelligence, serves on the review boards of all major journals and conferences in AI, and has published over 180 peer-reviewed papers.
Alessio Lomuscio, Imperial College London (UK)
Over the past 15 years, methods for the formal verification of multi-agent systems, such as model checking, have achieved a considerable degree of sophistication, and increasingly complex systems have been analysed.
However, two design paradigms are rapidly emerging in applications of agent-based systems: swarming and machine learning (ML). In swarm-based systems, such as swarm robotics or IoT systems, the number of agents is not known at design time and may vary at run-time. In ML-based agent systems, the agents are not programmed via an agent-based programming language but are learned from data. Traditional verification methods cannot deal with systems with these characteristics.
In this talk I will summarise some of the recent work in our lab towards the verification of agent-based systems in which the number of components is unbounded at design time and the agents are synthesised via ML-based methods.
(The talk is based on joint work with several members of the Verification of Autonomous Systems research group at Imperial College London.)
Alessio Lomuscio (http://www.doc.ic.ac.uk/~alessio) is full Professor in the Department of Computing at Imperial College London (UK), where he leads the Verification of Autonomous Systems Group and serves as Deputy Director for the UKRI Doctoral Training Centre in Safe and Trusted Artificial Intelligence. He is a Fellow of the European Association of Artificial Intelligence and currently holds a Royal Academy of Engineering Chair in Emerging Technologies. He held an EPSRC Leadership Fellowship between 2010 and 2015.
Alessio's research interests concern the realisation of safe artificial intelligence. Since 2000 he has worked on the development of formal methods for the verification of autonomous systems and multi-agent systems. To this end, he has put forward several methods based on model checking and various forms of abstraction to verify AI systems. A particular focus of Alessio's work has been support for rich AI specifications, including those concerning the knowledge and strategic properties of agents in their interactions. The methods and resulting toolkits have found applications in autonomous vehicles, robotics, and swarms. He has recently turned his attention to the verification of systems synthesised via machine learning methods.
He has published over 150 papers in AI conferences (including AAMAS, IJCAI, KR, AAAI, ECAI), verification and formal methods conferences (CAV, SEFM), and international journals (AIJ, JAIR, JAAMAS).
Sascha Ossowski, University Rey Juan Carlos (Spain)
Many challenges in today’s society can be tackled by open distributed software systems. Instilling coordination in such systems is particularly demanding, as usually only part of them can be directly controlled at runtime. In this talk, I will first introduce the agreement technologies (AT) approach to the development of open multiagent systems. I will argue that, by adequately combining models and methods from the AT sandbox, one can achieve an appropriate level of coordination in domains with different degrees of openness, and I will back this claim with various application examples.
Sascha Ossowski is a full professor of computer science and director of the Centre for Intelligent Information Technologies (CETINIA) at University Rey Juan Carlos in Madrid. He received an MSc degree in informatics from the University of Oldenburg (Germany) and a PhD in artificial intelligence from TU Madrid (Spain). His research focuses on models and mechanisms for coordination in all sorts of agent systems and environments. He was co-founder and first chair of the board of directors of the European Association for Multiagent Systems (EURAMAS), and chaired the European COST Action on Agreement Technologies. He is an emeritus member of the board of directors of the International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS) and belongs to the steering committee of the ACM Annual Symposium on Applied Computing (SAC).