Richard Burnham looks at the similarities between the SRA Code of Conduct and any code of conduct that might regulate AI lawyers
With the rise of ‘artificial intelligence’ in the legal profession, you’ve probably asked yourself this question a thousand times: what kind of regulation would a robot solicitor be bound by? Whilst creating a storyline about artificial intelligence for http://www.auragames.co.uk, I started fleshing out the ethical decision-making process the AIs in that universe subscribe to, called the ‘foundation code’, which got me thinking: how would a robot solicitor be regulated?
Ordinary non-robotic solicitors are of course bound by the SRA’s Code of Conduct, an outcomes-focused approach to regulation that requires solicitors to apply ethical principles to their day-to-day decision-making. This code, in theory, protects the public and the rule of law by requiring solicitors to conduct their business in an ethical way. What is interesting about this non-rules-based application of ethics is that it aligns with the principles of creating ethical artificial intelligence.
A set of black-and-white rules leads to complications where the ethics of artificial intelligence is concerned, as Isaac Asimov explored in his short story collection I, Robot through the Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The problem with the system presented by Asimov is that robots could unknowingly break the rules. For example, under the above system a robot could kill a human being, provided it did not know that its actions (or inactions) would lead to the human’s death. Conversely, under these rules a robotic surgeon, something which would be desirable, is impossible: it would simply be unable to harm a human being whilst performing surgery. The same was true of the previous code of conduct that applied to solicitors: it was in theory possible to comply with the code while behaving in an unethical way, which was surely not the SRA’s intention when implementing it.
Any ethical AI code of conduct would therefore likely resemble the code solicitors follow: a robotic solicitor would need to apply ethical principles to its decision-making process rather than follow strict black-and-white rules. It would need to weigh up competing considerations and choose the most ethically appropriate course of action before acting on it.
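The contrast between the two approaches can be sketched in code. Everything below is a toy illustration: the rules, the principles and their weightings are all invented for the sake of the example, and bear no resemblance to any real regulatory system.

```python
# Toy contrast between rules-based and principles-based decision making.
# All rules, principles and scores here are hypothetical illustrations.

def rules_based_permitted(action):
    """A black-and-white rule: forbid any action that causes harm.
    Like Asimov's First Law, this wrongly blocks a surgeon whose
    incision harms the patient in order to save them."""
    return not action["causes_harm"]

def principles_based_permitted(action, weights):
    """Weigh competing ethical principles and permit the action
    only if its overall weighted score comes out positive."""
    score = sum(weights[p] * v for p, v in action["principles"].items())
    return score > 0

# A life-saving operation: harmful in isolation, beneficial overall.
surgery = {
    "causes_harm": True,  # the incision itself is a harm
    "principles": {"harm": -1.0, "benefit": 3.0, "consent": 1.0},
}
weights = {"harm": 1.0, "benefit": 1.0, "consent": 0.5}

print(rules_based_permitted(surgery))                 # False: blocked outright
print(principles_based_permitted(surgery, weights))   # True: permitted on balance
```

The hard rule refuses the surgery outright, while the weighing approach permits it because the benefit outweighs the harm — the same distinction the SRA draws between prescriptive rules and outcomes-focused principles.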
The benefits of this approach are that, assuming the ethical code functioned properly, a robot solicitor would apply ethical decision-making to all areas of its work. Human solicitors can often fail to apply ethical decision-making for numerous reasons, frequently caused by external pressures. It would, however, be crucial that we are able to review the robotic solicitor’s thought process.
The difficulty with this would be defining exactly what ‘ethics’ is, because everyone has their own interpretation of ethics based on their own moral compass! What is unethical for me may be completely reasonable for you, and vice versa. This is why the SRA uses an outcomes-focused code of conduct. Robot lawyers, at least, would not fall prey to such human pitfalls as ego, fear of blame or arrogance.
Of course, all this is based on the assumption that a robot lawyer wouldn’t just become self-aware and go all Terminator 3 on the SRA, but that’s a different kind of blog entirely.