Recently, we were asked whether our research at CAIDG relied on the social sciences. The question prompted us to make clear that, across our ongoing projects, social science research approaches and traditions contextually ground our consideration of AI and data governance as social responsibilities.
From the outset, the work of CAIDG has been directed to reflect the interests of society, communities, industry and commerce, in Singapore and beyond. It is becoming increasingly clear that a singular economic commitment to the promotion of artificial intelligence risks alienating important social interests and perceptions: one need look no further than the biases baked into image recognition systems trained on unrepresentative data, or the recent use of predictive algorithms for exam grading. To develop AI for social good, our interrogation of its design, deployment and use must be grounded in interdisciplinary thinking.
Recent discussions of ‘humans in the loop’ at the AI/social interface reveal just how far AI and big data can drift from considerations of social benefit beyond economic growth alone. When formulating advice on the regulation and governance of AI and big data, CAIDG recognises the crucial importance of social science perspectives, which add a human face to considerations of robustness, resilience and sustainability.
With this in mind, our research on regulation and governance themes requires both contextual and critical understandings of AI and big data. In this emerging field, thinking about governance involves not only legislation but also the norms and behaviours that shape our quality of life. Here is where social science perspectives are essential. Trustworthiness, privacy concerns, the impacts of AI and automation on the future of work, the use of credit scoring algorithms – all these issues are more richly understood through social science research.
CAIDG’s Ethics Hub, for example, has initiated a large-scale investigation into aspects of the AI ecosystem to understand how responsibility is attributed and distributed within and between project teams and organisations. Our project design draws on literature from fields including human-computer interaction (Holstein et al. 2019), archival studies (Jo and Gebru 2019), and organisational studies (Madaio et al. 2020), which enriches our understanding of how norms of responsible innovation can be developed across the industry.
In this exercise we have also held discussions with industry players, drawing attention to the human consequences of downsizing, technologising, and job substitution. We have recently examined the transition of gig work into essential services during the pandemic and the challenge this poses for more employee-friendly regulation. In addressing platform companies’ monetisation of personal data, we have also called for greater transparency to enhance the trust of data subjects. The Centre recently shared with the National Robotics Programme the need to understand trust as a relationship between humans and machines, rather than for regulators and developers to assume that trust can be built simply by anthropomorphising robots. At the moment, in collaboration with our international partners, we are charting how public disquiet is undermining the utility and efficacy of AI-assisted COVID-19 control responses.
Furthermore, as our population ages, we are working with smart city planners to ensure that senior citizens are actively engaged in the policies that will affect their lives. Similarly, as the demand for care robots increases, careful attention must be paid to their design and use. Our research has touched on the ability of these robots to care for humans, the potential for deception through robot morphology and communication, the extent of human reliance on or attachment to robots, and issues of informed consent in carebot use and potential infringements of privacy.
Interdisciplinary research is neither easy nor straightforward. It requires talking across disciplines that have their own ontologies and epistemologies, including disciplines typically further afield from the law, such as computer science and engineering. Terms that we use across disciplines – bias, fairness, transparency, explainability – carry a multitude of definitions and tend to be interrogated differently (Mehrabi et al. 2019) depending on where and why they are debated; the short illustration below shows how even two common statistical notions of fairness can pull in different directions. Recent research into AI regulation has started to engage with this approach (Xiang and Raji 2019), but there is a long road ahead if we as researchers are to ensure that the social sciences are not treated as secondary to market analysis, the computerisation of all disciplines, and AI as a stimulus for economic growth.
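To make this concrete, here is a minimal, purely illustrative sketch (not CAIDG code) using invented toy data. It evaluates two common statistical notions of fairness – demographic parity and equal opportunity – on the same set of predictions, and shows that a model can satisfy one while violating the other.

```python
# Illustrative sketch only: toy data and group labels are invented.
# Demographic parity compares the rate of positive predictions across groups;
# equal opportunity compares true positive rates across groups.
import numpy as np

# Toy data: true outcomes y, model predictions y_hat, binary group label g.
y     = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_hat = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
g     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def positive_rate(pred, mask):
    """Share of positive predictions within a group (demographic parity view)."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """Share of actual positives predicted positive within a group (equal opportunity view)."""
    positives = mask & (true == 1)
    return pred[positives].mean()

for group in (0, 1):
    mask = g == group
    print(f"group {group}: "
          f"positive rate = {positive_rate(y_hat, mask):.2f}, "
          f"TPR = {true_positive_rate(y, y_hat, mask):.2f}")

# Here both groups receive positive predictions at the same rate (demographic
# parity holds), yet their true positive rates differ (equal opportunity is
# violated) -- one concrete reason 'fairness' must be pinned down per context.
```

The point is not the arithmetic but the ambiguity: which of these (or many other) definitions matters depends on the legal, social and commercial context in which the system is deployed.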
It seemed timely to pen these reflections as discussions of AI governance recognise how much public trust, community confidence and concerns for individual liberty and the dignity of personal data are having dramatic impacts on the efficacy and utility of AI-assisted control responses to COVID-19. If programme designers, AI innovators, and policy promoters ever believed that the world of AI was somewhat distant from social concerns, contemporary experience with trust formation and governance accountability has clearly shown otherwise.
References
Holstein, Kenneth, Jennifer Wortman Vaughan, Hal Daumé III, Miro Dudík, and Hanna Wallach. 2019. “Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need?” Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ’19, 1–16. https://doi.org/10.1145/3290605.3300830.
Jo, Eun Seo, and Timnit Gebru. 2019. “Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning.” ArXiv:1912.10389 [Cs], December. https://doi.org/10.1145/3351095.3372829.
Madaio, Michael A., Luke Stark, Jennifer Wortman Vaughan, and Hanna Wallach. 2020. “Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI.” Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems - CHI ’20.
Mehrabi, Ninareh, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2019. “A Survey on Bias and Fairness in Machine Learning.” ArXiv:1908.09635 [Cs], August. http://arxiv.org/abs/1908.09635.
Xiang, Alice, and Inioluwa Deborah Raji. 2019. “On the Legal Compatibility of Fairness Definitions.” ArXiv:1912.00761 [Cs, Stat], November. http://arxiv.org/abs/1912.00761.