“MIND THE GAPS: Deception in real world argumentative dialogue systems”, presented at the 9th Conference of the International Society for the Study of Argumentation (ISSA), University of Amsterdam, NL, on Friday 6th July 2018.
Abstract: With the increasing prevalence of artificially intelligent machines in everyday life, a trend that threatens not only to continue but to accelerate, the need to examine how people interact with these machines intensifies. Whilst the basis for much of the increased interest and utility of AI has been rooted in machine learning and neural network based systems, there are also areas of particular concern for argumentation theorists. For example, regardless of how an AI decision is made internally, should that decision be called into question, then the system should be able to explain itself, and perhaps even defend itself. Furthermore, the system should be able to work with people to improve decisions, should they be found wanting.
This is in line with recent trends stemming from various regulatory and professional bodies, which have independently proposed that artificial intelligence systems be capable of explaining their decisions. This trend is found both at the supranational regulatory level, in recommendations from the European Commission, as well as at the industrial professional level, in British standards for intelligent and autonomous robots.
It would appear that many years of research into formal argumentative dialogue systems may soon result in real-world payoffs. However, thorny questions remain in relation to how our ideal, normative systems of argument and dialogue will fare when exposed to real-world motivations.
Whilst it is often assumed that the truth should, or will, always be told, this can be easier said than done, and even when achievable, can be counterproductive. In this paper we attempt to shed light on some grey areas concerning truth-telling, or the lack thereof, in relation to human dialogical interaction with AI systems. From this investigation, we make recommendations for the design of future, real-world, applied dialectical argumentation systems.
“Towards Argumentative Dialogue as a Humane Interface between People and Intelligent Machines”, presented at the Reasoning, Learning, & eXplainability Workshop (ReaLX 2018), Aberdeen, Scotland, on Wednesday 27th June 2018.
“Monkeypuzzle: Towards Next Generation, Free & Open-Source, Argument Analysis Tools”, presented at the Seventeenth International Workshop on Computational Models of Natural Argument (CMNA17), London, England, on Friday 16th June 2017.
“Automatically Detecting Fallacies in System Safety Arguments”, presented at the Fifteenth International Workshop on Computational Models of Natural Argument (CMNA15), Bertinoro, Italy, on Monday 26th October 2015.
“Argument Mining: Was Ist Das?”, presented at the Fourteenth International Workshop on Computational Models of Natural Argument (CMNA14), hosted by the Jurix conference, Krakow, Poland, on Wednesday 10th December 2014.
“Using Code Generation to Build a Platform for Developing & Testing Dialogue Games”, presented at the Fourteenth International Workshop on Computational Models of Natural Argument (CMNA14), hosted by the Jurix conference, Krakow, Poland, on Wednesday 10th December 2014.
“Applied Argument Mining: Supporting Behaviour Change”, presented at the First SICSA Workshop on Argument Mining (SWAM), hosted by the School of Computing, University of Dundee, on Wednesday 9th July 2014.
“Towards Applied & Reproducible Gamified Interactions”, presented at the First Urban Sustainable, CollaboratIve, and Adaptive MObility Workshop (USCIAMO) workshop, hosted by the 11th International Conference on the Design of Cooperative Systems (COOP 2014), on Monday 27th May 2014.
“Aligning Argumentation Theory with Behaviour Change Mechanisms”, presented at the Second Scottish Argumentation Day, hosted by the School of Computing, University of Dundee, on Friday 19th July 2013.
Abstract: In this short talk I report on preliminary work that aims to effect real change in the context of difficult societal problems. Many such problems stem from the cumulative effects of the individual behaviours of large numbers of people. Digital behaviour change mechanisms are used to support people in forming new habitual behaviours and build on rich psychological models of behaviour dynamics. Argumentation theory has rich models of both argumentation and interaction, as well as extensive collections of stereotypical patterns of real-world argumentation. In this work we begin to align elements of psychological models of behaviour change with models of argumentative interaction. The aim is to increase the motivation of behaviour change targets, enabling them to make informed and justifiable decisions about their behaviours, and to increase the overall effectiveness of behaviour change mechanisms.