Giving AI a fair trial

Society must carefully weigh the merits and potential pitfalls of advanced AI that can do the work of junior lawyers, make judicial decisions and influence human behaviour, said experts at SMU’s Future of Law Conference.

Above (from left): Assistant Professor Eliza Mik, Singapore Management University; Mr Koh Chia Ling, Managing Director, OC Queen Street; Ms Jacqueline Poh, Chief Executive, Government Technology Agency of Singapore; Professor Ian Kerr, University of Ottawa.

By Jeremy Chan

SMU Office of Research & Tech Transfer – Laws are a reflection of what society holds dear and seeks to protect, and thus far, they have been applied by humans, to humans. But as artificial intelligence (AI) increasingly makes its way into law firms and courtrooms, the practice of law is quickly evolving, and the jury is still out on how these changes will impact society. 

In science fiction, artificially intelligent overlords playing judge, jury and executioner over human subjects are a recurrent trope. Will such a dystopian future – where lawyers are rendered obsolete and AI rules in place of a human judge – play out in reality? Thought leaders at the interface of technology and the law offered their opinions on such questions at the Future of Law conference, held on 26–27 October 2017 at the Singapore Management University (SMU) School of Law and organised by SMU together with international law firm and digital technology specialists Osborne Clarke, in conjunction with their Singapore firm OC Queen Street.

A matter of margins

The fear of being replaced has been at the heart of debates surrounding AI and jobs, and the legal profession is not immune to technological disruption. Nonetheless, a sense of optimism permeated the ‘AI and the Law’ panel, which consisted of Professor Ian Kerr of the University of Ottawa; Mr Koh Chia Ling, Managing Director of law firm OC Queen Street, Singapore; and Ms Jacqueline Poh, Chief Executive of the Government Technology Agency of Singapore (GovTech). Assistant Professor Eliza Mik of the SMU School of Law moderated the panel.

While some functions typically performed by junior lawyers such as document review and research may be taken over by AI, Mr Koh felt that the human touch in legal matters is still far beyond the capabilities of intelligent machines. “For example, your ability to empathise, listen and come up with a solution – that’s something that AI has not been able to emulate well,” he explained.

Commenting that the question of AI and jobs is often too binary, Ms Poh added: “The real question to be asked – in many professions, not just legal – is about margins. It’s not about ‘will my job exist’, but about ‘how much will people pay me to do this job’.”

Hence, as AI programmes become cheaper – to the point where their cost approaches the salary of a junior associate – firms will have to rethink how young lawyers are deployed in legal practice.

Beware of bias and threats to equality

Anxieties over employment and bottom lines aside, the bigger issue that lawyers will have to grapple with is judicial decision-making by AI. As much as laws seek to uphold what human society values, those values may sometimes be tainted by bias. AI that is trained on such prejudiced precedents may make rulings that are unfair to particular segments of society.

“For example, it could be that past case law, in certain court decisions, in certain countries, was biased against certain races. An AI [trained on past case law] could therefore come to conclusions that might not align well with the direction that society wants to take for the future,” said GovTech’s Ms Poh.

On a similar note, Professor Kerr voiced concern that the predictive power of AI could allow for social sorting, thus amplifying discrimination and threatening equality. Placing people in categories and treating them in certain ways could limit social mobility and undermine due process, he said.

An added challenge is that predictive AI makes decisions about individuals in the background. “We’ll have no ability to know of those decisions being made about us, and no ability to have a say in some of those decisions. That’s [the] due process worry that comes with prediction,” explained Professor Kerr.

Ms Poh also made a powerful point about equality in the age of AI, noting the irony that people “have the tendency to anthropomorphise” non-human machines while at the same time dehumanising other human beings.

The right amount of regulation

As with any emergent technology, the full implications of AI for society are difficult to forecast. The law is thus pulled in two directions when it comes to regulating AI – over-regulation could stymie progress, while under-regulation could make it difficult to deal with unexpected consequences, especially those relating to human health and safety, the integrity of the environment, and fundamental values.

Professor Roger Brownsword of King’s College London, who has published extensively on emerging technology and the law, thinks that the current approach to regulating AI – slotting it into categories that already exist in law – may be inadequate for keeping pace with the technology’s rapid development.

“We’re trying to find a place for it in contract law, in tort law… The court, understandably, will be looking at legal pegs on which you can hang a particular dispute. But that doesn’t necessarily mean you are addressing the real questions raised by these technologies,” he said during his closing keynote speech.

Professor Brownsword instead recommended taking a risk management stance on issues arising from AI – thinking through what could go wrong and what compensation would be appropriate should something indeed go wrong. “This is a way of calming concerns about acceptable risk,” he said.

Avoiding the slippery slope

Ironically, AI may be best positioned to assist humans in risk management. Just as closed-circuit television and DNA profiling nudge people away from criminal activity, AI could be deployed to influence human behaviour such that outcomes fall within acceptable risk boundaries, said Professor Brownsword. For example, AI could be used to tweak traffic light intervals at an accident-prone junction to remove the impetus to jaywalk, or to redirect traffic away from such junctions so that accidents are less likely to occur.

For the technophobic, this already sounds like a slippery slope toward a dystopian future – the slow and subtle erosion of human agency. This is why Professor Brownsword cautioned against extreme forms of technology-dependent risk management: AI so pervasive that it restricts our interests, ambitions and relationships must never be allowed to exist, and authorities everywhere have an obligation to abide by this regulatory red line, he said.

“In George Orwell’s 1984, Big Brother did not do anything to damage the conditions of human existence [itself a regulatory red line], but Big Brother was fatal for human self-development,” he said.

Nonetheless, since the current capabilities of AI are still very narrow in scope, GovTech’s Ms Poh was in favour of a wait-and-see approach to regulation.

“What we need to do is further explore what can be done with machine learning and deep learning to improve productivity and decision making, to get to a point where we can even begin to be able to say what is it that we want to regulate,” she concluded.

In closing, Mr Adrian Bott, partner and international head of the digital business sector at Osborne Clarke, together with SMU’s Professor Mik, summarised the highlights of the two-day conference. Echoing the words of Associate Professor Goh Yihan, dean of the SMU School of Law, Mr Bott called on all conference participants to be brave in embracing technology and the law. He then aptly concluded the conference with a thought-provoking line paraphrasing AI researcher Eliezer Yudkowsky: “The AI does not love you, nor does it hate you – it simply knows you are made of atoms it can use for something else.”


Image credit: Jeremy Chan