Category Archives: AI Ethics

ALTAI – A new assessment tool from the High-Level Expert Group on Artificial Intelligence (HLEG)

ALTAI Spider Graph

The High-Level Expert Group on Artificial Intelligence (HLEG) has produced a new tool for assessing the trustworthiness of AI systems. The Assessment List for Trustworthy Artificial Intelligence (ALTAI) is a tool to assess whether AI systems, at all stages of their development and deployment life cycles, comply with seven requirements of Trustworthy AI.

The HLEG was set up by the European Commission to support the European Strategy on AI. Created in June 2018, the HLEG produces recommendations on the development of EU policy, together with recommendations on the ethical, social and legal issues of Artificial Intelligence. The HLEG comprises over 50 experts drawn from industry, academia and civil society. It also acts as the steering group for the European AI Alliance, a forum that provides feedback to the HLEG and contributes more widely to the debate on AI within Europe.

The ALTAI Tool

Based on seven key requirements, the new ALTAI tool is a semi-automated questionnaire that allows you to assess the trustworthiness of your AI system. It does rely on honest answers to the questions, of course! The seven key requirements are:

    • Human Agency and Oversight.
    • Technical Robustness and Safety.
    • Privacy and Data Governance.
    • Transparency.
    • Diversity, Non-discrimination and Fairness.
    • Societal and Environmental Well-being.
    • Accountability.


Using the system is relatively straightforward. First, create an account and log in to the ALTAI website, then choose 'My ALTAIs'. The system allows you to complete, store and update multiple ALTAI questionnaires. Once you have completed a questionnaire, the system produces a graphical representation of your 'trustworthiness' (the spider graph above), together with a set of specific recommendations based on your answers. Note that the ALTAI website is a prototype of an interactive version of the Assessment List for Trustworthy AI. You should not enter personal information or intellectual property while using the website.
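To illustrate the idea behind the spider graph, here is a minimal sketch of how per-requirement scores map onto the axes of such a chart. This is purely illustrative: the scores below are hypothetical, and ALTAI derives its own chart from your questionnaire answers, not from code like this.

```python
import math

# The seven Trustworthy AI requirements assessed by ALTAI
REQUIREMENTS = [
    "Human Agency and Oversight",
    "Technical Robustness and Safety",
    "Privacy and Data Governance",
    "Transparency",
    "Diversity, Non-discrimination and Fairness",
    "Societal and Environmental Well-being",
    "Accountability",
]

def spider_vertices(scores, radius=1.0):
    """Map per-requirement scores (0..1) to the (x, y) vertices of a
    spider/radar polygon: one axis per requirement, starting at the
    top of the chart and proceeding clockwise."""
    n = len(scores)
    vertices = []
    for i, score in enumerate(scores):
        angle = math.pi / 2 - 2 * math.pi * i / n  # top, clockwise
        r = radius * score
        vertices.append((r * math.cos(angle), r * math.sin(angle)))
    return vertices

# Hypothetical scores, for illustration only.
scores = [0.8, 0.6, 0.9, 0.5, 0.7, 0.6, 0.75]
for name, (x, y) in zip(REQUIREMENTS, spider_vertices(scores)):
    print(f"{name:45s} ({x: .2f}, {y: .2f})")
```

Plotting these vertices (closing the polygon back to the first point) on top of the seven labelled axes gives the familiar spider/radar shape shown above.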

I found the system easy to use, but would have liked to see a graph/tree of how the question boxes are arranged, and a clearer explanation of the red and blue outlines - in short, more transparency from the assessment system itself!

I also have some reservations related to the independence of the person completing the assessment, and the possibility of bias when someone closely involved in an AI development project is tasked with assessing it. This could be improved by using an independent, suitably qualified and competent auditor.

It's very encouraging to see the emergence of these kinds of audit systems specifically targeted at the deployment of AI technologies. Hopefully, as these systems develop, they will align with the international standards currently in development - for example the IEEE Ethically Aligned Design standards, such as P7001 for Transparency of Autonomous Systems.

New Book: Transparency for Robots and Autonomous Systems


After many months of writing, proofreading and waiting for printing, I'm delighted that my book is now available. It's a very practical book, explaining why transparency is so important, followed by the details of experiments with various forms of transparency.

The book is based on my PhD research, but is expanded and extended, including an additional chapter to explain the importance of transparency within the wider context of accountability, responsibility and trust (ART). Here is a short extract from that new chapter:

Transparency as a Driver for Trust
.... I argue that although trust is complex, we can use system transparency to improve the quality of information available to users, which in turn helps to build trust. Further, organisational transparency drives both accountability and responsibility, which also bolster trust. Therefore transparency is an essential ingredient for informed trust. These relationships are illustrated in Figure 2.3.
System Transparency helps users better understand systems as they observe, interact or are otherwise affected by them. This informed understanding of system behaviour in turn helps users make appropriate use of systems.

System Transparency also supports accountability, by providing mechanisms that allow the system itself to offer some kind of ‘account’ for why it behaves as it does. It also provides mechanisms to facilitate traceability....

Organisational Transparency supports and encourages organisational accountability, and helps to develop a culture of responsibility....

Trust is built from a combination of informed system understanding, together with the knowledge that system providers are accountable and behave responsibly. Trust ultimately drives greater product acceptance and use....

In this book I also argue for the creation of transparency standards applicable to Autonomous Intelligent Systems (AIS) of all kinds. Standards will encourage transparency, and regulation may enforce it. This encourages business to develop cultures that embrace transparency in their processes and products.


Wortham, Robert H., Transparency for Robots and Autonomous Systems: Fundamentals, Technologies and Applications, The Institution of Engineering and Technology, 2020 

ISBN-13: 978-1-78561-994-6 (eBook ISBN: 978-1-78561-995-3)

DOI: 10.1049/PBCE130E

Film: The Age of AI

I was recently interviewed for a new film about how AI impacts us today, and what the future may hold. Created by a Birmingham MA student, this film includes interviews with various academics in the field, including myself. It asks the questions everyone wants to ask about intelligent systems and robots: how they might change our …