Do Bots Scare Away the Attorneys?
In recent times, machines have replaced human beings in predictable tasks. Chores like toll collection, customer query resolution, and even kitchen floor cleaning are now handled by automated devices. It would be fair to say that machines have left human beings to take on highly unpredictable tasks that require creativity, critical problem-solving, and the application of what we call gut feeling, built on real-life experience.
As machines have made a permanent place in our everyday lives, some people have reasoned that Artificial Intelligence (AI) could never replace what they do. We hate to break it to you, but tech companies are raising millions to advance AI-powered solutions for tasks as sensitive as symptom analysis and patient triage.
Leaving aside the cons, we must also look at the pros of using bots in an attorney’s day-to-day work. Filling out monotonous forms and documents, along with secretarial services such as calendaring, appointment booking, and first-level call answering, takes up a great deal of an attorney’s time and energy; these are exactly the tasks bots can absorb, as the sketch below illustrates.
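To make this concrete, here is a minimal, hypothetical sketch in Python of the kind of repetitive form-filling a legal bot might take off an attorney’s plate. The template wording and field names are invented for illustration and are not drawn from any particular product.

```python
# Illustrative sketch only: merging client details into a boilerplate
# engagement letter. The template wording and field names are hypothetical.
from string import Template

ENGAGEMENT_LETTER = Template(
    "Dear $client_name,\n\n"
    "This letter confirms that $firm_name will represent you in the matter "
    "of $matter at an hourly rate of $rate.\n\n"
    "Your first appointment is scheduled for $appointment.\n"
)

def fill_letter(details: dict) -> str:
    """Merge client details into the engagement-letter template."""
    return ENGAGEMENT_LETTER.substitute(details)

print(fill_letter({
    "client_name": "A. Client",
    "firm_name": "Example LLP",
    "matter": "Example v. Example Corp.",
    "rate": "USD 250",
    "appointment": "1 March, 10:00 am",
}))
```

A real legal bot would wrap the same idea in document management, calendaring, and intake workflows, but the core mechanic is the same: structured client data merged into standard documents without an attorney retyping them.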
Vicarious Liability in Cases of Technological Negligence
No legal action can be brought against robots or artificial intelligence, and even if it were possible, an AI has no assets of its own. However, legal frameworks that apply to machines which make no critical decisions and simply perform labour could be extended to AI. For instance, when a machine that takes no decisions of its own and is programmed by a human solely to perform a certain task injures a human worker, the worker can be compensated under the principles of negligence or product liability. A clear examination of the facts and of the apparatus for assigning liability allows blame to be allocated and potentially provides a remedy.
Existing legal concepts and frameworks may well be sufficient for determining liability and the amount of compensation owed. In his 2020 book on artificial intelligence and the law, The Reasonable Robot, Ryan Abbott suggests that if an AI application programmed to make independent decisions along specific lines injures someone, negligence principles, such as the foreseeability of the injury and the contributory negligence of human actors, could be used to allot culpability.
In the book, Abbott argues for a principle of AI legal neutrality. Under this principle, the behaviour of AI systems is evaluated in roughly the same way as the same behaviour committed by a human. That means treating a “crime” committed by an AI system the same as a crime committed by a human. Any human death resulting from AI conduct needs to be investigated to determine whether it was truly an “accident” or the result of intentional or criminally reckless conduct.
The predominant ethical concern arising from the widespread use of AI lies in how such a machine is built and trained to work on a particular subject matter or job. For example, training a facial recognition AI involves teaching it to identify faces by key features and measurements, reading the distances between the nodal points of the face across a huge database of photographs compiled for its training.
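As a rough illustration of that training idea, the sketch below turns a handful of facial landmark coordinates into a feature vector of pairwise distances, the kind of measurement such a system compares across its photo database. The landmark positions here are made up, and a real system would obtain them from a dedicated landmark-detection model rather than hard-coded values.

```python
# Illustrative sketch only: turning facial landmarks ("nodal points") into a
# distance-based feature vector. The coordinates below are invented placeholders.
import numpy as np

def landmark_distances(landmarks: np.ndarray) -> np.ndarray:
    """Compute all pairwise distances between (x, y) landmark points."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]  # shape (n, n, 2)
    dists = np.linalg.norm(diffs, axis=-1)                 # shape (n, n)
    # Keep each pair only once (upper triangle, excluding the diagonal).
    i, j = np.triu_indices(len(landmarks), k=1)
    return dists[i, j]

# Toy example: five made-up landmark positions for one face image.
face = np.array([
    [30, 40],   # left eye
    [70, 40],   # right eye
    [50, 60],   # nose tip
    [38, 80],   # left mouth corner
    [62, 80],   # right mouth corner
])

features = landmark_distances(face)
print(features.round(1))  # one face's distance "signature"
```

Training then amounts to showing the system many such signatures, labelled with the people they belong to, so it learns which distance patterns identify which face.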
Ethical Ambiguity
In addition to questions of accountability for harm caused in a legal setting, attorneys should consider the ethical implications of using artificial intelligence. Many states have already adopted ethics rules that require legal professionals to be competent and diligent in understanding new technologies such as those used in eDiscovery. The same duty likely applies to other technologies such as legal bots. Lawyers are also tasked with supervisory duties, which include monitoring the results and actions produced by legal bots or other AI systems.
Duties relating to privilege and confidentiality may also be compromised, since using so-called legal bots involves handling client confidences. Data breaches can only be prevented if the information fed into a legal bot is carefully reviewed. At the same time, we must keep in mind that the world is moving further towards software-reliant technology and AI-driven solutions. In-house legal offices have a reputation for being reluctant to fully embrace cutting-edge technology that takes the human element out of even a small portion of the work or process management, consultants say. But a few years before the pandemic, companies began funnelling more money to their corporate attorneys and pushing them to operate more efficiently, so legal departments started investing in more software.
Research and advisory firm Gartner anticipates that the adoption of tech innovations will bring transformational changes to in-house law offices. In our February 2022 article, we discussed Gartner’s predictions for the legal industry: legal departments will triple their spending on legal technology by 2025 and will have automated 50% of legal work related to major corporate transactions by 2024. Yet machines are designed to adhere strictly to their programming; they simply lack the capacity for the gut feeling or intuition that human beings use to make logical leaps. As attorneys, humans rely above all on experience and intuition to solve new problems and make decisions. This is something artificial intelligence cannot do right now, but that does not mean it is impossible in the future; who knows how far the technology might advance.
Predictable vs. Intuition-Based Decision-Making
When humans have incomplete knowledge, we make decisions based on intuition, or what in everyday language we call ‘the gut feeling’. Intuition is a much-appreciated human quality, yet we often balk at the idea of a machine using intuition, or less-than-complete data, to solve a problem.
For the time being, there is one basic difference between human intelligence and artificial intelligence: the “why”. AI can learn how to do many tasks better than humans can, but it still lacks the ability to question the “why”, which keeps it from doing the most meaningful legal work. That does not mean, however, that the curiosity to ask “why” can never emerge in future artificial intelligence.
Author: Varun Bhatia, Co-founder of 3NServe
Also read our top-viewed AI legal article: The Role of AI in Legal Research.