Legal prompting: Teaching a language model to think like a lawyer

F. Yu, L. Quartey, F. Schilder - arXiv preprint arXiv:2212.01326, 2022 - arxiv.org
Large language models capable of zero- or few-shot prompting have given rise to the new research area of prompt engineering. Recent advances have shown that, for example, Chain-of-Thought (CoT) prompts can significantly improve performance on arithmetic and common-sense tasks. We explore how such approaches fare on legal reasoning tasks, using the COLIEE entailment task based on the Japanese Bar exam to test zero-shot/few-shot and fine-tuning approaches. Our findings show that while CoT prompting and fine-tuning with explanations yield improvements, the best results are produced by prompts derived from specific legal reasoning techniques such as IRAC (Issue, Rule, Application, Conclusion). Based on our experiments, we improve the 2021 best result from 0.7037 accuracy to 0.8148 accuracy and beat the 2022 best system of 0.6789 accuracy with an accuracy of 0.7431.
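
The abstract does not give the exact prompt wording; the following is only a rough Python sketch of what an IRAC-structured prompt for a COLIEE-style entailment question might look like. The statute and hypothesis text and the query_llm stub are illustrative placeholders, not material from the paper.

    # Sketch of an IRAC-structured prompt for a COLIEE-style entailment question.
    # The example texts and query_llm are placeholders, not the authors' prompts.

    IRAC_TEMPLATE = """You are answering a Japanese Bar exam entailment question.

    Statute (premise):
    {statute}

    Hypothesis:
    {hypothesis}

    Reason step by step using IRAC:
    Issue: State the legal question raised by the hypothesis.
    Rule: Identify the relevant rule in the statute.
    Application: Apply the rule to the facts in the hypothesis.
    Conclusion: Answer YES if the statute entails the hypothesis, otherwise NO.
    """

    def build_irac_prompt(statute: str, hypothesis: str) -> str:
        """Fill the IRAC template with one statute/hypothesis pair."""
        return IRAC_TEMPLATE.format(statute=statute.strip(), hypothesis=hypothesis.strip())

    def query_llm(prompt: str) -> str:
        """Placeholder for a call to whichever LLM client is available."""
        raise NotImplementedError("plug in a model client here")

    if __name__ == "__main__":
        prompt = build_irac_prompt(
            statute="A contract concluded by a minor without consent may be rescinded.",
            hypothesis="A contract made by a minor with parental consent can be rescinded.",
        )
        print(prompt)  # inspect the prompt; pass it to query_llm(prompt) to obtain an answer

The model's final Conclusion line would then be mapped to a YES/NO entailment label; in the zero-shot setting the template is used as-is, while a few-shot variant would prepend worked IRAC examples.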