Korum Forum

Keeping you informed on the latest NewLaw thinking and insights

Intelligent Machines: Does AI Really Spell the End of Lawyers?

This article originally appeared in Hong Kong Lawyer, the Official Journal of the Law Society of Hong Kong.

The concept of artificial intelligence (“AI”) is not novel, but in recent years it has rapidly transitioned from science fiction into real life, prompting discussion and debate about how it might affect the fate of humanity.

Professor Stephen Hawking rather noncommittally noted that “AI will be either the best, or the worst thing, ever to happen to humanity – we just don’t know which.”

Presumably, that statement also applies to the legal profession – and one or another version of it is actively being discussed by lawyers, legal consultants and the broader intellectual community.

While the possibility of an approaching doomsday is undoubtedly a consideration, it is equally important to understand the current state of the technology and in particular, its presently available applications and limitations.

So, what is artificial intelligence?

Many people imagine AI to be a robot or at the very least, a computer programme that has human-like cognitive capacity, including an ability to learn, think and make decisions. While that is one aspect of AI, the term itself is generally understood more broadly to include any kind of computational modelling of intelligent behaviour.

Conventionally, it is understood that, just like lawyers at a law firm, AI has its hierarchy:


  1. Junior Associate AI aka Artificial Narrow Intelligence (or Weak AI) is a machine intelligence which is domain specific. It is bound by the principles and relationships of the domain and literally cannot think “outside the box”. That means that the machine is only good at a specific task, such as playing the complex board game “Go” or getting you out of paying for a parking violation, but not both.


  2. Senior Associate AI aka Artificial General Intelligence (or Strong AI) is a machine intelligence which is human-like, meaning it can perform the same cognitive tasks as a human, including generalisation of learning. Although in many circumstances human intelligence does not seem like much at all, it has one critical component which is not yet accessible to machines – using existing skills to solve new problems. If we learn how to use a spoon to eat soup, most of us will probably figure out that the same spoon can be used to eat rice or ice-cream or even scoop out sand from a sandbox into someone’s shoes. That is not so obvious to the machines. Yet.


  3. Partner AI aka Superintelligence is, in the words of Nick Bostrom, “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” What that looks like is unclear, but hopefully, something like the humanoid robots from Isaac Asimov’s novels…

It is important to note that all of today’s advanced AI applications are Weak AI – they are confined to a single domain and require a substantial amount of human oversight. We can thus gain some intuition on the future of AI within the legal industry – for years to come, we will be stuck with the Junior Associate.

It would therefore appear that the important question now is not whether the Junior Associate AI will end the world as we know it, but what this AI can do for us and whether it is worth keeping around.

So, what can an AI system do within the legal domain? Today – just two things, really.

1. Natural language processing (NLP)

Natural language processing, or computational linguistics, is a field of AI that seeks to endow machines with the ability to understand and reproduce human language.

Probably the easiest-to-understand application of NLP is the chatbot. In this case, natural language serves as an interface between the user and the knowledge database behind the chatbot. The user can ask questions in ordinary sentences and the chatbot will return, hopefully, relevant information. It is conceivable that chatbots could be quite useful in the areas of legal research and regulatory compliance. But at the current stage, most chatbots floating around are little more than a glorified FAQ tool.
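To make the “glorified FAQ tool” point concrete, here is a minimal sketch of a chatbot that matches a user’s question to a stored entry by simple word overlap. The questions and answers are hypothetical, and real systems use trained language models rather than this crude matching:

```python
# A toy FAQ chatbot: match the user's question to the stored question
# that shares the most words, then return that entry's answer.
# All entries below are hypothetical examples, not legal advice.

FAQ = {
    "what is the deadline for filing an appeal":
        "In many jurisdictions, a notice of appeal must be filed within a fixed period.",
    "do i need a licence to run a money lending business":
        "Money lending is typically a licensed activity; check the local ordinance.",
}

def answer(question: str) -> str:
    """Return the answer whose stored question shares the most words."""
    q_words = set(question.lower().split())
    best = max(FAQ, key=lambda k: len(q_words & set(k.split())))
    return FAQ[best]
```

Asking `answer("When is the deadline to file an appeal?")` retrieves the first entry. The fragility of such matching – punctuation, synonyms and rephrasing all defeat it – is precisely why statistical NLP is needed for anything beyond an FAQ.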

ROSS is not marketed as a chatbot but is very similar to one. ROSS is an AI lawyer which can answer legal questions asked in natural language and even put together (with some help from a human) a brief memo. Present ROSS with a legal issue, and it (or he?) will produce a list of the most relevant cases within seconds – a task that would take a human lawyer hours and hours of billable work.

Another promising application of NLP is the organisation of large sets of unstructured data, which can make it an indispensable tool in the area of document management.

Kira is an AI system that uses machine learning technology to do precisely that. On the company website, Kira is said to be able to take legal documents as input, sort them, identify specified concepts and clauses, and spot and analyse issues and trends across the documents – capabilities that are particularly useful during due diligence and discovery.
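The core task – finding specified clauses across a pile of documents – can be illustrated with a toy sketch. The document texts and clause patterns below are hypothetical, and systems like Kira rely on trained machine learning models rather than the keyword rules used here:

```python
# Toy clause spotting: map each clause type to the documents containing it.
# Real contract-analysis systems learn clause patterns from labelled data;
# the regular expressions here are a deliberately simple stand-in.
import re

CLAUSE_PATTERNS = {
    "governing_law": re.compile(r"governed by the laws of", re.I),
    "change_of_control": re.compile(r"change of control", re.I),
}

def tag_clauses(documents: dict) -> dict:
    """Return {clause_type: [document names where it appears]}."""
    found = {clause: [] for clause in CLAUSE_PATTERNS}
    for name, text in documents.items():
        for clause, pattern in CLAUSE_PATTERNS.items():
            if pattern.search(text):
                found[clause].append(name)
    return found
```

Run over a set of agreements, this produces an index of which documents contain which clauses – the kind of cross-document view that makes due diligence review faster.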

Both legal research and due diligence are generally very time-consuming tasks and for the most part are done by junior lawyers. So will ROSS, Kira and others in their likeness make lawyers obsolete? Probably not. But if you are paying junior lawyers more than those systems cost, perhaps it’s time to ask for a discount.

2. Modelling and Predictive Analytics

Predictive data analytics and modelling is another important aspect of AI finding application in the legal domain, especially in litigation. Decisions on issues such as the best timing for a settlement offer, or the optimal judge-attorney fit, are examples that can be aided by a data-driven approach.

The main premise of predictive data analytics is that if you deconstruct and analyse a meaningful data set in relation to a certain system through a machine learning algorithm, you will obtain a statistical model that can predict the future behaviour of that system.

A key phrase in the above sentence is “meaningful data”. Data that is not meaningful will not produce an accurate model, which in turn, will not produce accurate analytics.

The first principle of “meaningful data” is that the data set needs to be large enough. This follows from the law of large numbers (ie, the larger the sample size, the more accurate the statistical analysis) and underpins the broader fascination with Big Data.
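The law of large numbers is easy to demonstrate. The sketch below, a hypothetical coin-flip simulation, shows how the sample mean of a fair coin (heads = 1, tails = 0) settles toward its true value of 0.5 as the sample grows:

```python
# Law of large numbers in miniature: the mean of repeated fair coin
# flips converges toward 0.5 as the number of flips increases.
import random

def sample_mean(n: int, seed: int = 42) -> float:
    """Average of n simulated fair coin flips (1 = heads, 0 = tails)."""
    rng = random.Random(seed)
    flips = [rng.randint(0, 1) for _ in range(n)]
    return sum(flips) / n
```

With ten flips the mean can easily land anywhere from 0.2 to 0.8; with a hundred thousand it sits within a fraction of a percent of 0.5. The same logic is why a model trained on a handful of cases predicts far less reliably than one trained on thousands.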

The second principle of “meaningful data” is that the data set needs to be accurate, both in terms of factual accuracy and absence of bias.

It is pretty obvious: if you input inaccurate or biased data, you cannot expect an accurate or unbiased model as output. Garbage in, garbage out – but many current analytical systems don’t seem to take that into account. In recent months, there has been a growing number of reports noting that some AI systems learning from real-world data display unpleasant traits such as racism or gender bias – the all too familiar faults of our society.
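How a model inherits bias from its training data can be shown in miniature. In this hypothetical sketch, a trivially simple “model” learns per-group approval rates from historical records; if the history is skewed against one group, the model’s predictions are skewed in exactly the same way:

```python
# "Garbage in, garbage out" in miniature: a model fitted to biased
# historical outcomes simply reproduces the bias in its predictions.
# The history records below are hypothetical (group label, approved flag).

def train_rates(history: list) -> dict:
    """Learn the per-group approval rate from (group, approved) records."""
    outcomes = {}
    for group, approved in history:
        outcomes.setdefault(group, []).append(approved)
    return {g: sum(v) / len(v) for g, v in outcomes.items()}
```

If group A was historically approved 90% of the time and group B only 10%, the fitted rates are 0.9 and 0.1 – the model has learned the bias, not corrected it. More sophisticated algorithms fail in the same way when the underlying data is skewed.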

With that in mind, enter Lex Machina, a legal analytics platform from LexisNexis. On the company website, it says that Lex Machina “mines litigation data, revealing insights never before available about judges, lawyers, parties, and the subjects of the cases themselves, culled from millions of pages of litigation information.”

Another company providing similar insights is Ravel, which according to their website, “enables lawyers to find what’s important, understand why it’s important, and put that information to use in the most persuasive way possible.”

Can those companies deliver on their promises? Perhaps. While it may be reasonable to assume that their data is relatively accurate, the entire body of legal precedent is, on a relevant scale, quite a small data set, so the insights from those companies, and especially their predictive capacity, should be treated with a degree of caution. There is no doubt, however, that the legal profession will benefit from the systematisation of knowledge those companies are pursuing.

This is probably much more than the average junior associate is capable of doing. However, it remains to be seen whether the insights from services such as Lex Machina and Ravel can compete with the insight and intuition of seasoned practitioners.


As with everything, only time will tell what AI’s true impact will be on the legal profession. But for now, there is no need to brace for the end of the world as we know it.

Anna Kim