Using AI to manage personal data

Artificial intelligence and its role in protecting personal data should feature more prominently in regulations and in practice

Consumers and businesses have benefited from the development of artificial intelligence (AI) technology in recent years.

In the area of data protection, for instance, AI is being used to strengthen personal data rights and interests, improve the accuracy of personal information, and make the collection and flow of personal data more secure.

Using AI, businesses can also comply with data protection laws more cost-effectively, while the authorities can better monitor compliance and trends.

However, there are also risks: criminals can exploit AI in disregard of data protection rights, and an AI system can go “rogue” if there is insufficient human oversight. As such, experts have been calling for a strong regulatory regime and protocols for adequate human supervision to minimise these adverse outcomes.

This debate is timely for Singapore as its data protection regime, the Personal Data Protection Act (PDPA), is likely to undergo major changes in its next revision after an extensive period of review. The Act has been in force for the past five years.

In a recent paper, Associate Professor Warren Chik from the Singapore Management University (SMU) School of Law argues that organisations should consider the future role of AI in the management and protection of personal data, and how the technology can feature more prominently in ‘PDPA 2.0’.

“The PDPA does not have a specific provision on AI, but judging from the public consultation papers that have emerged from the PDPC (Personal Data Protection Commission) in recent years, we can expect AI to be a part of the conversation, and to be included in the proposed provisions relating to data portability, accountability measures and privacy by design,” he said.

“Certainly, AI will increasingly be used to comply with, and to regulate, personal data protection principles as a matter of efficiency, since data protection obligations, especially in bigger organisations, can be intricate and complex.”

Growing urgency

Regulating the use of AI in managing and protecting data has become an increasingly urgent issue, as data is now the most important driver of modern economic change and development.

At the same time, AI has become an integral tool for the management and processing of data, including personal data, as it provides greater accuracy and capability.

“AI has not only replaced the manual work of data processing but has surpassed the abilities of humans. It can also operate 24/7 and perform tasks that are integral to functions requiring no downtime,” said Assoc Prof Chik.

One recent example of AI’s increasing importance in managing data is its use in collecting and collating location and movement data for contact tracing purposes related to the current Covid-19 outbreak.

“Scholars are currently studying how AI, which can function optimally and with a high level of accuracy, can work quickly in such circumstances where time is of the essence. AI may also be used in the future for the delivery of medical and other services remotely, which obviates human contact in times like these where it must be minimised.”

Striking a balance

In light of the changing environment, AI should be harnessed by the public sector to regulate and ensure compliance with personal data regulations, said Assoc Prof Chik.

For example, AI can be used to detect breach of privacy by organisations that fall under the purview of the PDPA.

However, the authorities will have to decide to what extent AI can be left to run independently, and when there should be human oversight or human intervention to reverse the ‘decision-making’ of an AI.

In a recent case in Singapore, a programmer’s knowledge of, and responsibility for, the “decisions” and actions taken by an AI on an automated cryptocurrency trading platform emerged as an issue in determining whether a “mistake” had been made in a trade, arising from a human error in updating the platform’s critical operating system.

Reflecting the complex legal issues that can arise in the use of AI and how it can affect existing legal principles, the Court of Appeal determined in a split decision that the contract was valid and enforceable.

Said Assoc Prof Chik: “Regulators must strike a balance between the policy interest of technological innovation and data protection.”