AI Ethics Hub 4 Asia
The AI Ethics Hub 4 Asia ('The Hub') is a space for developing conversations about the impact of ethics and principled design, looking at the whole anatomy of AI development and big data use.
We hold conversations between researchers, AI practitioners, policy designers and the community. These conversations grow from the recognition that while the push to develop AI and apply big data to all aspects of human decision-making is driven by concerns for economic growth and market sustainability, these technologies must primarily produce social good and recognise human dignity in what they achieve.
Moving towards Responsible AI
Ethics and principled self-regulation are presently the dominant governance languages and policy commitments for AI from government and industry.
If we are to have faith that ethics can ensure trust in new technologies and their responsible use, in situations where the community is often confused about the science and concerned about the risks these technologies pose to fundamental social expectations such as the future of work, then we need to be having conversations about:
- How are ethics and principled design understood by everyone involved in the AI ecosystem?
- How are the responsibilities that ethics requires attributed and distributed by key stakeholders?
- Where in the use of big data and the development of AI do decisions arise which present ethical challenges?
- How are those ethical challenges read and responded to by the people making decisions on our technological futures?
- What can be done to assist decision-makers to identify sites of ethical challenge, and then construct operational and social responses that can generate trust and broaden fair processes and outcomes?
- How can inter-personal and informational relationships be developed across AI ecosystems so that the project to make AI and big data use fair is a shared one?
The Hub offers an educational and audit facility for AI practitioners and their organisations to 'road-test' the reception of ethics and principled design at both personal and project levels. Through The Hub, AI practitioners, technicians and designers can participate in creating a language for, and an understanding of, ethics as a vital regulator in specific operational, market and social contexts. The particular issues of decision-making power and specialist knowledge in the anatomy of AI are worked through using real-life problems, so that participants in the conversation can be confident that, when the applications of their work move out to communities of users, they do so with ethical authorisation.
Research outputs
- "Nose to Glass: Looking in to get beyond". Josephine Seah. Presented at the Navigating the Broader Impacts of AI Research Workshop at NeurIPS 2020, 12 December 2020. Available here. A presentation of this paper is also available here.
- "An Ecosystem Approach to Ethical AI and Data Use: Experimental Reflections". Mark Findlay and Josephine Seah. Presented at the 2020 IEEE / ITU International Conference on Artificial Intelligence for Good, 23 September 2020. Available here or here.
- "Ethics, Rule of Law and Pandemic Responses". Mark Findlay. SMU Centre for AI & Data Governance Research Paper No. 2020/07. July 2020. Available here.
- "Conversations on Ethical AI: Workshopping Methodology". Mark Findlay and Josephine Seah. CAIDG Blog. 5 March 2020. Available here.
Last updated on 15 Jan 2021.