Interview Articles

Legal Revolution: The Groundbreaking Role of AI and Mathematics - A Commentary by Maximilian Janisch

Maximilian Janisch sheds light on the growing influence of AI on the legal industry and offers deep insight into the intersection of mathematics, AI, and law. As a mathematician and the youngest doctoral candidate in Switzerland, he discusses the opportunities and concerns surrounding AI in legal practice and emphasizes the importance of ethical and transparent approaches.

Topics: AI models, ChatGPT, LLMs, legal industry, mathematics, Maximilian Janisch, Ph.D. candidate, University of Zurich.
Feel free to comment on LinkedIn.
Reading Time: 4 minutes.

The influence of AI on the profession of lawyers is growing. To shed further light on the discourse surrounding AI and ChatGPT in the legal industry, particularly among lawyers, we interviewed mathematician and researcher Maximilian Janisch.

Mr. Janisch is currently the youngest doctoral candidate in mathematics in Switzerland. He completed his mathematics baccalaureate at the age of 9 and has since delved deeply into the discipline. The ability of AI models to recognize causal relationships is in fact a subfield of his own research at the Zurich Graduate School in Mathematics, a joint program of ETH Zurich and the University of Zurich.

In this interview, Mr. Janisch shares his personal insights on the mathematical models and algorithms used in AI applications in the legal industry, as well as his hopes and concerns regarding the impact of AI on legal practice. He also discusses the responsibility of mathematicians concerning the ethical use of AI, the transparency of AI decisions in the legal domain, and the professional opportunities for lawyers to develop their skills in an increasingly AI-supported environment.

Hello, Mr. Janisch. Thank you for agreeing to this interview. It is evident that AI is increasingly being utilized in the legal industry. What mathematical models and algorithms are being employed in AI to automate or assist legal tasks?

A significant advance has recently been made in the field of "large language models" (LLMs). One of the best-known applications of LLMs is ChatGPT, a program capable of formulating linguistically impressive sentences and paragraphs in response to user input. LLMs are AI models that learn correlations between words from vast amounts of text data and can predict the most probable next word in a sentence.

For example, in a sentence containing the word "table," the word "chair" also occurs frequently, whereas the word "salamander" rarely does. In this way, LLMs can achieve an impressive command of language. However, it has turned out that they are not capable of distinguishing true statements from false ones. Numerous examples exist online in which ChatGPT provides a well-formulated but incorrect response.
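The next-word idea Mr. Janisch describes can be sketched with a bigram model, the simplest possible relative of an LLM: it counts which word follows which in a toy corpus and predicts the most frequent successor. (A real LLM learns far richer patterns from vastly more data; the corpus here is invented for illustration.)

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM is trained on vastly more text.
corpus = [
    "the table stands next to the chair",
    "she put the book on the table",
    "he moved the chair closer to the table",
]

# Count which word follows which (a bigram model: the simplest
# version of "predicting the next probable word").
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

def most_probable_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(most_probable_next("the"))  # → table ("table" follows "the" most often)
```

The model captures correlations ("table" and "chair" co-occur) without any notion of whether a sentence is true, which is exactly the limitation discussed above.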

LLMs can be useful, in my opinion, for quickly analyzing document content and highlighting important sections. They can also be employed to enhance the linguistic formulation of existing arguments.

And in your view, how can AI change the legal industry?

I hope that AI will be able to take over repetitive or tedious tasks in the legal industry. I am thinking in particular of literature research, document management, and the summarization of texts. This could make the industry more efficient and free up more human time for important tasks, such as crafting legal arguments.

Do you see a risk that the use of AI in the legal industry might lead to problematic legal findings or applications based solely on mathematical probabilities?

In principle, I believe it is beneficial to employ mathematical probabilities for legal findings and applications. After all, probability theory is an extremely useful tool that has proven effective at describing the world in fields ranging from financial mathematics to hydrodynamics. Through AI models, the same could hold in the legal domain. However, as mentioned earlier, learning correlations alone is insufficient for understanding causal relationships, which is why ChatGPT also provides many incorrect responses. In my view, it would be very interesting to develop AI models capable of comprehending causal relationships as well. This is an area of research I am exploring, but it will take some time before results useful for everyday practice in the legal industry are achieved.

In my opinion, it would be very interesting to develop AI models that can understand causal relationships. This is a subfield of my research, but it will take some time before we achieve results that are useful for everyday use in the legal industry. - Maximilian Janisch
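The gap between correlation and causation that Mr. Janisch points to can be illustrated with a toy simulation (an invented sketch, not part of his research): a hidden variable z drives both x and y, so the two correlate strongly, yet forcing x to any value leaves y unchanged.

```python
import random

random.seed(0)

# z is a hidden common cause of x and y; x has NO causal effect on y.
def observe(n=10000):
    xs, ys = [], []
    for _ in range(n):
        z = random.gauss(0, 1)
        xs.append(z + random.gauss(0, 0.1))
        ys.append(z + random.gauss(0, 0.1))
    return xs, ys

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

xs, ys = observe()
print(correlation(xs, ys))  # close to 1: x and y are strongly correlated

# An intervention: we set x ourselves. Because x does not cause y,
# the average of y stays near 0 no matter which value we force on x.
def intervene(x_value, n=10000):
    ys = [random.gauss(0, 1) + random.gauss(0, 0.1) for _ in range(n)]
    return sum(ys) / n

print(intervene(100.0))  # near 0: forcing x tells us nothing about y
```

A model trained only on observed (x, y) pairs would learn the strong correlation but could never tell, from that data alone, that changing x does nothing to y.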

What responsibility do mathematicians and experts have regarding the use of AI in the legal industry, especially in terms of ensuring the rule of law, ethics, and justice?

Almost every new technology can also be used for malicious purposes. For instance, nuclear fission physics, which is currently used in nuclear power plants, also gave rise to the atomic bomb. The internet brought not only easy access to information but also facilitated access to "fake news" and misinformation. The same applies to AI: it is a scientific and mathematical tool that can be used both beneficially and maliciously. As a mathematician, I believe it is my responsibility to make the malicious use of the tools I develop difficult, or easily detectable. For example, AI can be used not only to generate fake news but also to detect whether a given text was produced by AI or not.

How and by whom can decisions in the legal domain remain transparent and understandable despite the use of AI, and how can potential biases or distortions in the algorithms be avoided?

Biases and distortions in the models are typically already present in the data on which the models were trained. Addressing this is challenging, since the data must be collected or prepared in a way that minimizes such biases; moreover, what counts as a bias or distortion is to some degree subjective. The interpretability of artificial neural networks is a vast field of research. One of the most straightforwardly interpretable statistical methods is linear regression, which is used in almost all publications in the empirical sciences. Modern neural networks with hundreds of layers, by contrast, are difficult to interpret. Improving this is likewise a task for mathematicians and computer scientists.
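What makes linear regression so interpretable is that its fitted coefficients can be read off directly. A minimal ordinary-least-squares fit on made-up data, computed in closed form:

```python
# Ordinary least squares fit of y = a + b*x on a tiny invented dataset.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# Interpretability: the slope states directly that each unit increase
# in x is associated with roughly b more units of y.
print(round(b, 2), round(a, 2))  # slope ≈ 1.99, intercept ≈ 0.09
```

A deep neural network making the same prediction would offer no such directly readable parameters, which is precisely the interpretability gap mentioned above.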

How can lawyers develop their skills and knowledge to succeed in an increasingly AI-supported environment?

It is entirely possible that training in the use of AI will become part of law studies: already today, people work full-time on "prompt engineering," that is, formulating inputs to natural-language AI models in a way that elicits the most useful responses possible. These and similar competencies can also be useful for the rapid processing of texts in the legal industry.
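As a hypothetical sketch of what prompt engineering means in practice, the function below assembles a structured prompt for a document-review task; the wording, function name, and parameters are invented for illustration rather than taken from any real tool.

```python
# Prompt engineering, illustrated: the same request phrased vaguely
# versus with explicit role, format, and constraints.
vague_prompt = "Summarize this contract."

def build_review_prompt(contract_text, max_bullets=5):
    """Assemble a structured prompt for a language model (illustrative only)."""
    return (
        "You are assisting with a legal document review.\n"
        f"Summarize the contract below in at most {max_bullets} bullet points, "
        "then list any clauses that mention liability or termination.\n\n"
        f"Contract:\n{contract_text}"
    )

print(build_review_prompt("The parties agree that ..."))
```

The structured version constrains the output format and directs attention to specific clause types, which is the kind of competency the answer above refers to.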

Thank you very much for the interview, Mr. Janisch. We wish you continued success in your research and future endeavors.

Weblaw AG

Academy | Publishing | Continuing Education | Software | LegalTech