11.04.19 || ANU-SMU Collaboration on Law & Technology

Artificial Intelligence and Law:
Commercial Advantage, Administrative Dilemma and Professional Transformation

In April, the SMU Centre for Artificial Intelligence and Data Governance (CAIDG) held a colloquium in collaboration with the Australian National University's (ANU) College of Law on the challenges and opportunities that developing technologies present for the transformation of law. The two-day event opened with presentations from both ANU and SMU scholars, followed on the second day by an industry roundtable that gathered insights from a wider audience of regulatory stakeholders in Singapore.

Highlights and key questions from both days are collected below, and more information about the individual presentations can be found here.


Key Questions and Themes

Over the first day of the colloquium, scholars presented draft papers, research ideas and plans, touching on common analytical and thematic interests and comparative frameworks that could lead to ongoing research collaborations. Attendees and presenters had the opportunity to discuss and provide critical commentary on these works in progress and research agendas, enriching the discussions and projects. The meeting stimulated research in that specific research possibilities and alliances were identified and developed at a researcher-to-researcher level.

From this meeting, the following major themes and questions were identified: 

  • Importance of context – the interface between AI/Big Data and economies and societies is best understood by focusing on situational specifics (e.g., can the creations of AI attract copyright? If AI facilitates alternative dispute resolution, how can trust be maximised?)
  • Need to make regulatory terminology more specific and applied – what is meant by transparency, trust, responsibility and ethics? What are the applications of data and AI?
  • Risk – what are the risks associated with AI decision-making and automated data management? How is risk determined, and what is the influence of perception?
  • Is this a new challenge for regulation and governance? – Do algorithms just magnify pre-existing institutional and process challenges in areas such as the financial sector?
  • What are the ways in which automated determinations can streamline and improve access to justice? What challenges are presented through automation?
  • Can or should algorithms remove discretion from decisions on rights and benefits?
  • What value frameworks are necessary to ensure that developments such as ‘smart contracts’ don’t lose touch with the social purpose of contracting?
  • If AI and automated data management can empower stakeholders in self-regulation, how can a ‘race to the bottom’ be avoided?
  • What is the importance of bias in algorithmic decision-making?
  • If ‘data lakes’ are becoming more consolidated in the hands of massive platform repositories, what can be done to ensure freer access?
  • AI has great potential for tracking provenance and policing fraud – but how?
  • The importance of standardisation and challenges in the translation of standards into action
  • Is human rights language valuable for the regulation of data access?
  • Limits of machine learning in a ‘fractured world’ – unrealistic expectations
  • Transparency versus information overload

Highlights from the Industry Roundtable

SMU hosted a half-day industry roundtable on the second day of the Colloquium. Representatives of industry and the public sector in Singapore were invited to share their insights and build on the themes addressed the day before. Professor Mark Findlay and Associate Professor Jolyon Ford moderated the event. Key discussion points and questions included: 

  • What are the limits to mechanical decision making, how far can it go, and can/should it go further? 
  • Which types of contracts can and cannot be automated? Disputes around contracts will continue to arise, and smart contracts that make disagreements difficult to manage pose complications for their own utility.
  • At the same time, certain domains will adopt such technologies, particularly where the risk remains low: smart contracts may have a big role to play in securing supply chains (e.g., fair trade products). Difficulties arise in higher-risk sectors such as the financial industry and the derivatives market. 
  • It is important to keep the human in the loop. There is a temptation to dream of these technologies being fully autonomous, but it would be more realistic to see their future in terms of human augmentation. What AI might be able to do, rather than taking over jobs, is to highlight discrepancies and red flags to individuals who will have to make the final decision. 
  • If humans are kept in the loop this way, the question then becomes: when and in what situations is it crucial for humans to be in the loop? Furthermore, are we placing larger and more complex cognitive loads onto people who may be ill-equipped/insufficiently trained to handle such data overload? 
  • In turn, the question that regulators will need to address is whether and how regulation can grow with and guide these augmentations. In addition, how might what has typically been a principles-based discussion transform into more original and tailored regulation? Is regulatory sandboxing the way to go?
  • In addressing questions of trust around automated technologies, there is general consensus on the need for transparency and algorithmic accountability. Nonetheless, challenges ahead include grappling with cultural expectations and perceptions (of both use and risk), and with differences in language (between how ‘data’ is understood by software developers and by regulators, which in turn influences how the public perceives it). An emerging problem is that we may increasingly demand transparency around decision-making processes for which we previously did not require it to the same extent or in the same detail.

Last updated on 06 Aug 2019.