
TechLaw.Fest 2020

 

"TechLaw.Fest is a signature Law & Technology event that brings together the international community to debate, deliberate, act and innovate in both the law of technology (policies, regulations, legislation, case law and governance) and the technology of law (infrastructure, business transformation and people development)"

More details are available on the main event page: TechLaw Fest 2020


 

From 28 September to 2 October, thousands of attendees from 80 countries and 150 international speakers from 25 countries met online to discuss law and technology. Members of the Asian Dialogue on AI Governance participated in two roundtables on data rights, data ownership and AI governance. The key takeaways from the sessions are set out below.

 

Roundtable 1 - Rethinking Database Rights and Data Ownership in an AI World

 

Moderator:     Mark Findlay. Director, SMU Centre for AI and Data Governance

Panellists:        Desmond Chew. Senior Associate, Dentons Rodyk & Davidson LLP

                        Smitha Prasad. Director, Centre for Communication Governance – National Law University Delhi

                        Yong Lim. Associate Professor, Seoul National University; SNU AI Policy Initiative; Korea Competition Law Association

                        Antony Cook. Regional Vice President and Chief Legal Counsel, CELA Asia, Microsoft

                        Nydia Remolina Leon. Research Associate, SMU Centre for AI and Data Governance

 

 

The exponential growth in data in the past decade has enabled giant datasets to be compiled and used as the basis for developing ever-more sophisticated AI systems. Those systems are in turn being used to enhance humans’ ability to carry out tasks, or to replace those humans altogether. From self-driving cars and robotic carers, to autonomous weapons and automated financial trading systems, robotic and other data-driven AI systems are increasingly becoming the cornerstones of our economies and our daily lives. Increased automation promises significant societal benefits. Yet as ever more processes are carried out without the involvement of a ‘human actor’, the focus turns to how those autonomous robots and other autonomous systems operate, how they ‘learn’, and the data on which they base their decisions to act. In Singapore, too, questions inevitably arise as to whether existing systems of law, regulation and wider public policy remain ‘fit for purpose’. That is, do they encourage and enable innovation, economic growth and public welfare, while protecting individuals against misuse and harm – whether physical, financial or psychological?

This panel considered the legal issues regarding who controls or has rights over the ‘big data’ databases that underpin many AI technologies, and how best to ensure that those who contribute data to those databases retain an appropriate degree of ‘control’ over and access to that data.

 

Data ownership and data rights

Desmond Chew kicked off the discussion by stating that copyright and patents offer owners of large databases in Singapore only limited protection. For example, the creativity requirement presents challenges for databases: with big data it is particularly difficult to attribute creations to a human author. According to Desmond, the increased use of unstructured data also challenges traditional principles of copyright law. The Law Reform Committee of the Singapore Academy of Law has therefore made two recommendations: (i) there is no need to create a specific database right of the kind seen in the European Union; and (ii) soft law measures should provide further clarity on how big data databases can enjoy copyright protection.

Desmond then raised the importance of addressing the current worldwide debate around data ownership. Should countries adopt an ownership regime or an information intermediaries regime? In Singapore, regulations give individuals protections to access and control their data, but not in the typical sense of a property right. Linking data to the concept of property turns data into a commodity, which will trigger disputes and harm innovation. Moreover, the framework currently applicable in Singapore already confers on data subjects control over, and access to, their data.

Mark Findlay asked Desmond whether viewing data as a marketplace commodity is holding back the discussion of the rights that data subjects should have. For Desmond, what is impeding the release of data’s full potential is the problem of data access: granting organisations access to data would make it possible to exploit its full benefits.

Citizens, however, do not tend to think in terms of rights. But once the conversation turns to rights, attitudes towards data challenges from a data subject’s perspective change.

 

MyData: a South Korean initiative related to data rights and data portability

Yong Lim explained how MyData functions in South Korea as an aggregator of an individual’s financial and credit information, used to provide innovative services to that individual. An individual can request any financial institution to transfer their data to another financial institution or service provider. In other words, the amended law enhanced data portability in South Korea. However, some challenges remain in practice. First, standardisation is a technical challenge: the Financial Services Commission is pushing for standardisation of the method of porting data, but this requires significant effort given the many types of companies that act as data controllers (e-commerce, fintech companies, etc.). Second, there is the problem of defining what data needs to be ported. What counts as personal credit information? How granular should the data be?
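
To make the standardisation challenge concrete, here is a minimal sketch in Python. Everything in it – the institutions, field names and target schema – is hypothetical and invented for illustration; it is not MyData’s actual format. It simply shows why porting breaks down when each data controller exports the “same” credit information under a different layout:

```python
# Hypothetical sketch: why porting standards matter. Two institutions
# expose the "same" credit information under different field names, so
# a common target schema is needed before data can be ported.

# Record as a hypothetical bank might export it
bank_record = {"acct_no": "123-456", "bal_krw": 1_500_000, "holder": "Kim"}

# The "same" data as a hypothetical fintech app might export it
fintech_record = {"accountId": "123-456", "balance": 1500000.0,
                  "currency": "KRW", "owner_name": "Kim"}

def to_common_schema(record: dict) -> dict:
    """Map either source layout onto one agreed (illustrative) schema."""
    if "acct_no" in record:  # bank layout
        return {"account_id": record["acct_no"],
                "balance": float(record["bal_krw"]),
                "currency": "KRW",
                "holder_name": record["holder"]}
    return {"account_id": record["accountId"],  # fintech layout
            "balance": float(record["balance"]),
            "currency": record["currency"],
            "holder_name": record["owner_name"]}

# Both layouts normalise to the same portable record
assert to_common_schema(bank_record) == to_common_schema(fintech_record)
```

Each additional controller layout needs its own mapping, which is one reason a regulator-driven common standard scales better than pairwise agreements between firms.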

Mark Findlay discussed with Yong whether there is, to some extent, a storage and concentration problem. For Yong, data is currently plentiful, and the problem is more about data use than about how big data controllers are – although concentration may be a problem to some extent.

 

The experience of India

Smitha Prasad highlighted an important issue in the discussion of data rights. Currently, the private and public sectors work with large amounts of so-called “anonymised” data, and there is little need to comply with data protection regulations because this type of data is not considered personal data. However, such data can sometimes be connected back to an individual. It is thus very unclear in India what role individual rights play in data that is derived from personal data. To tackle these issues in the Indian regulatory framework, concepts such as “collective privacy” have appeared recently.
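
The re-identification risk behind this point is usually a linkage problem. The sketch below, using entirely invented data and the pandas library, shows how an “anonymised” dataset can be joined to a public one on shared quasi-identifiers (postcode, birth year, sex) to recover identities:

```python
# Sketch of linkage re-identification: an "anonymised" health dataset
# (no names) joined with a public register on shared quasi-identifiers
# re-attaches identities. All data here is invented.
import pandas as pd

anonymised_health = pd.DataFrame({
    "postcode": ["560001", "110002"],
    "birth_year": [1980, 1975],
    "sex": ["F", "M"],
    "diagnosis": ["diabetes", "hypertension"],
})

public_register = pd.DataFrame({
    "name": ["A. Sharma", "R. Gupta"],
    "postcode": ["560001", "110002"],
    "birth_year": [1980, 1975],
    "sex": ["F", "M"],
})

# Joining on quasi-identifiers re-identifies both "anonymous" records
reidentified = anonymised_health.merge(
    public_register, on=["postcode", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```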

Mark Findlay asked whether there is a discussion in India around the restrictions on data use applicable to the state, and about the role of the courts – known for being very active in India – in data protection debates. Smitha shared that this is a timely and controversial topic, since some data breaches in the country involve the state as data controller.

 

Personal and non-personal data

Nydia Remolina mentioned that the rights approach might fall short of adequately protecting data subjects. In practice, a rights regime might not prevent the litigation and disputes that the ownership approach also creates, and this can harm innovation. Instead of centring the discussion on data rights, Nydia suggested improving data access as a tool for innovation and economic growth, while at the same time working on the concept of digital self-determination for data subjects’ protection, not only individually but also collectively. Nydia also called attention to the challenges of using aggregated data that is considered non-personal data under data protection regulations.

Mark Findlay asked about the importance of data portability, and Nydia provided some examples of how collaboration between regulators and the private sector can help to enhance data portability rights. Financial regulators have assumed this facilitating role in some jurisdictions (e.g. Singapore). This can be particularly challenging in contexts where data subjects do not even know what data the data controllers hold.

 

The role of the private sector

Antony Cook emphasised the importance of collaboration in the data space, where the weight of open data and of collaboration between the public and private sectors is evident. Along these lines, Microsoft is focusing on three things. First, driving its initiatives based on a set of principles that reinforce the need to be as open as possible in the data space – particularly when it comes to solving societal problems – while helping organisations generate value, protecting individuals’ privacy, and being useful. Second, participating in collaborations that look at how data can benefit the largest number of parties. Third, helping to create the tools and frameworks that make data usable and address the technical challenges Yong Lim mentioned earlier.

Mark Findlay asked whether the self-regulation approach to some aspects of data protection (e.g. AI governance) is an issue. Antony talked about the importance of finding the right balance between regulation and the need to stimulate innovation; the European Union faced exactly this complex debate in the database rights discussions of 2019. Engaging the private sector in these regulatory discussions is fundamental to balancing regulatory objectives and reaching a thoughtful outcome.

Mark added that regulation can actually contribute to increasing data access; regulation is not necessarily the opposite of innovation.

 

Do data subjects believe they have data rights?

Interestingly, the audience responded yes to this question.

 


 

Roundtable 2 - Applying Ethical Principles for Artificial Intelligence and Autonomous Systems in Regulatory Reform

 

Moderator:     Mark Findlay. Director, SMU Centre for AI and Data Governance

Panellists:        Gilbert Leong. Senior Partner, Dentons Rodyk & Davidson LLP

                        Smitha Prasad. Director, Centre for Communication Governance – National Law University Delhi

                        Yong Lim. Associate Professor, Seoul National University; SNU AI Policy Initiative; Korea Competition Law Association

                        Jolyon Ford. Associate Professor, Australian National University

                        Malavika Jayaram. Assistant Professor and Lee Kong Chian Fellow, Singapore Management University

                        Brian Tang. LITE Lab@HKU, University of Hong Kong

                        Dharma Sadasivan. Associate Director, BR Law Corporation

 

 

This panel considered the core ethical principles on which consensus is emerging, and assessed their implications, and the challenges they may raise, for policy makers formulating any hard or soft law interventions considered necessary.

 

Impact of Digitalisation and AI adoption – the role of the State and the private sector

Digitalisation and the adoption of AI might exacerbate inequalities. The State and regulations need to balance competing interests: the societal benefits of AI against its possible harms to the individual. Another important challenge in this emerging field is the accelerated pace of AI development, which makes it hard for regulators to keep up with the technology.

Additionally, an AI model developed in one country might end up affecting people in another. That is why an important part of the discussion relates to AI ethical principles and the fundamentals that the law should protect, with the objective of generating some consensus. But the State is not the only important player in applying ethical principles; private actors also need to participate in these discussions.

Mark Findlay asked Gilbert about the role of the private sector in AI governance, especially given that AI is largely promoted by the private sector. According to Gilbert, taking part in the AI governance conversation should be part of companies’ social responsibility.

 

Trends in the Governance of AI – from ethics to law?

Jolyon Ford noted that since 2017 we have witnessed a proliferation of AI governance frameworks following the high-level ethical principles approach. However, in some cases regulation may be needed to translate these principles into practice. According to Australian regulators, regulation should be implemented “only if there are clear regulatory gaps and failure of the market to address those gaps”. This is an “after the market” approach: the place of law is just to fill in what the market is not addressing.

Is the rights and responsibilities approach better than the ethics-based approach? For Jolyon Ford, ethical principles need to be combined with regulation to tackle the challenges AI governance poses for societies.

 

The shift from principles to regulations in India

Smitha shared how the discussions on AI governance are evolving in India, especially with the digitalisation of government services. A government think tank in India is currently considering regulations on AI, and a report on the issue recommends regulation based on broad principles. The problem with the report is that the principles are still too high level, and there is no discussion of the actual meaning of some of them. An additional emerging issue is that India is becoming a hub for AI development, so the consequences of AI use might reach beyond Indian borders. According to Smitha, there is still a long way to go in moving from high-level principles to regulations in the context of AI.

Mark Findlay asked Smitha whether she has reservations about the interest of some governments in publishing these high-level principles documents: in some cases, governments are interested in rolling out AI for the benefit of the economy, not necessarily for the benefit of society. Smitha shared the same reservation and thinks it is crucial to keep looking to constitutional rights and to keep working towards regulatory frameworks that allow governments to ensure those principles are actually applied where AI is used.

 

Do we need to tackle certain challenges differently when using AI?

Malavika Jayaram presented a set of thought-provoking questions about issues that all AI governance frameworks aim to tackle at a general level but that, in practice, have not yet been addressed. For instance, with regard to algorithmic bias, are we aiming at a utopian idea of neutrality? How can we design inclusive AI systems? Do these systems need less trust, or more, than a human-to-human relationship? In previous technological paradigms, one particular country got to decide the standards, which were later exported to other countries with different contexts – is there an element of data or AI colonisation? The prevailing view of innovation and growth might be harming inclusiveness in the context of AI. One big step we should take to counteract this effect is to teach AI developers about the issues and ethical challenges of AI use; currently, the discourse is too focused on developing technical skills so that society can get on board with the AI revolution.

 

Criticism of the general AI principles and the South Korean response

Discussions surrounding AI ethics mirror those of the biomedical profession, but the differences between AI and the medical professions are evident: in AI there is no profession as such, and there is a lack of legal and professional mechanisms to distinguish good practice from bad. Based on the South Korean experience, dialogue between the tech community, the business community and government has helped push forward the agenda for meaningful self-regulation and better practices. Even with their limitations, AI ethical frameworks are perhaps the best approach we can choose at this stage of AI development. At some point this might not be enough, so it is important to advocate a process-driven approach to AI governance, with checks and balances incorporated into AI systems and into the companies using AI models.

 

Sectoral approaches to AI governance

Regulators in particular sectors have been active in issuing guidance to the supervised entities operating in those sectors. Financial regulators in the region are advancing on this front, and most follow a high-level, principles-based approach; initiatives such as regulatory sandboxes also allow regulators to test some use cases in practice in a controlled environment. Education has also been part of these sectoral initiatives.

Mark Findlay asked whether we are putting too much responsibility on the client or user instead of focusing more on the developer. Brian Tang explained that requiring a human in the loop should ensure that responsibility is not allocated to users’ shoulders alone.

 

Bridging ethics and programming

Dharma Sadasivan argued that machines are not great ethical agents. Machine learning is a two-stage process that can, from time to time, produce unexplainable bad outcomes, and regulation will not be able to prevent all of them. However, some measures could be implemented to minimise the risk of ethically undesirable or unexplainable outcomes: for instance, mandatory sandboxes to test possible outcomes, or a mandatory licensing regime that facilitates compensation for bad outcomes.
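
To illustrate the two-stage process Dharma refers to, here is a toy Python sketch using scikit-learn and invented numbers (the loan scenario and all values are hypothetical). Stage one fits a model to historical data; stage two applies it to new inputs, where a confident but unsupported output can emerge on data unlike anything seen in training:

```python
# Toy illustration of the two-stage nature of machine learning:
# stage 1 fits a model to historical data; stage 2 applies it to new
# inputs, where confident-but-wrong outputs can arise. Data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stage 1: training on a narrow slice of the world
# (feature: applicant income in $k; label: loan repaid 1 / defaulted 0)
X_train = np.array([[20], [25], [30], [80], [90], [100]])
y_train = np.array([0, 0, 0, 1, 1, 1])
model = LogisticRegression().fit(X_train, y_train)

# Stage 2: inference on an input far outside the training range
x_new = np.array([[1000]])  # an income level the model has never seen
proba = model.predict_proba(x_new)[0, 1]
print(f"P(repaid) = {proba:.3f}")  # near-certain, yet unsupported by data
```

A mandatory sandbox of the kind suggested above would be one place to probe exactly this sort of out-of-range behaviour before deployment.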

 

Last updated on 26 Apr 2021.