Brief Impact

Description

This deliverable details the experimentation with different parameter-efficient fine-tuning (PEFT) methods applied to DistilBERT for legal case classification. The focus was on comparing fine-tuning techniques like LoRA, DoRA, and QLoRA, exploring various PEFT classes, and implementing explainable AI tools to enhance model interpretability.

Steps Followed

Step 1: PEFT Methodology Overview

LoRA (Low-Rank Adaptation) freezes the pretrained weights and injects small trainable low-rank update matrices into selected layers. DoRA (Weight-Decomposed Low-Rank Adaptation) extends LoRA by decomposing each adapted weight into a magnitude and a direction component and training them separately. QLoRA applies LoRA adapters on top of a 4-bit quantized base model to reduce memory usage during fine-tuning. Below is an image illustrating the differences between these three techniques:

[Image: LoRA vs DoRA vs QLoRA comparison]

Step 2: Fine-Tuning Experiments with LoRA, DoRA, and QLoRA

Using PEFT's configuration classes, LoRA, DoRA, and QLoRA were each applied across different model components. This experimentation allowed us to observe how each method affected performance, generalization, and computational efficiency (see the code snippets at the end of this section).

Below is the image showing the accuracy results over 10 epochs using LoRA:

[Image: LoRA accuracy over 10 epochs]

Below is the image showing the accuracy results over 10 epochs using DoRA:

[Image: DoRA accuracy over 10 epochs]

Below is the image showing the accuracy results over 10 epochs using QLoRA:

[Image: QLoRA accuracy over 10 epochs]

Step 3: Fine-Tuning Experiments with LoRA, DoRA, and QLoRA Using Updated Evaluation Metrics

To evaluate model performance comprehensively, the following metrics were used:

- Accuracy: the proportion of cases classified correctly.
- Precision: the proportion of predicted labels that are correct.
- Recall: the proportion of true labels that are successfully recovered.
- F1-score: the harmonic mean of precision and recall.

These metrics together allowed for a more nuanced evaluation of LoRA, DoRA, and QLoRA, highlighting not only overall accuracy but also the reliability of each model's predictions in terms of specificity and coverage.

Below is the image showing the results over 10 epochs using LoRA with the updated evaluation metrics:

[Image: LoRA results with updated evaluation metrics]

Below is the image showing the results over 10 epochs using DoRA with the updated evaluation metrics:

[Image: DoRA results with updated evaluation metrics]

Below is the image showing the results over 10 epochs using QLoRA with the updated evaluation metrics:

[Image: QLoRA results with updated evaluation metrics]

Step 4: Why LoRA Was Best Suited for Legal Case Prediction

Through detailed experimentation, LoRA emerged as the most effective PEFT technique for legal case prediction based on the following factors:

- Accuracy: LoRA consistently delivered the highest accuracy of the three techniques, peaking at 68.4%.
- Efficiency: training only low-rank update matrices keeps the number of trainable parameters, and therefore memory usage and training time, small.
- Simplicity: unlike QLoRA, LoRA requires no quantization of the base model, avoiding quantization-related accuracy loss and additional setup.

Based on these advantages, LoRA was selected as the primary technique for further fine-tuning and evaluation in legal case classification tasks.

Step 5: Exploring Different Layers Of Embeddings

In this step, I selected specific layers (0, 2, 4, and 6) in DistilBERT to apply LoRA transformations using the layers_to_transform parameter. By focusing only on these layers, the model can achieve a balance of targeted domain adaptation and computational efficiency, as not all layers undergo fine-tuning.

Impact of Configuring layers_to_transform

By restricting LoRA transformations to specific layers:

- The number of trainable parameters drops further, reducing memory usage and training time.
- Adaptation is concentrated in the selected layers, which can be sufficient for targeted domain adaptation.
- The remaining layers keep their original pretrained weights and behavior.

In contrast, the default LoRA setting, which does not specify layers_to_transform, adapts the target modules in every layer and tends to produce higher-quality embeddings, resulting in a more robust model, especially for complex tasks like legal case prediction.

Below is the image showing the results over 10 epochs using LoRA without the selective layers_to_transform configuration (the default setting):

[Image: LoRA results with the default layer configuration]

Below is the image showing the results over 10 epochs using LoRA with layers_to_transform applied to layers 0, 2, 4, and 6:

[Image: LoRA results with layers_to_transform applied]

Step 6: Overview of PEFT Task Types

Parameter-efficient fine-tuning (PEFT) optimizes large language models (LLMs) for a variety of specific tasks. The following task types benefit from PEFT methodologies:

- Sequence classification (SEQ_CLS): assigning a single label to an entire input sequence.
- Token classification (TOKEN_CLS): labeling individual tokens, as in named entity recognition.
- Causal language modeling (CAUSAL_LM): autoregressive text generation.
- Sequence-to-sequence language modeling (SEQ_2_SEQ_LM): tasks such as summarization and translation.
- Question answering (QUESTION_ANS): extracting answer spans from a context passage.
- Feature extraction (FEATURE_EXTRACTION): producing embeddings for downstream use.

After researching all these task types, I found that sequence classification (SEQ_CLS) is best suited for our use case, as it aligns with our goal of accurately categorizing legal cases into predefined classes.

Step 7: Implementing Explainable AI with LoRA

Why Explainable AI (XAI)? To interpret the model's decision-making process, I integrated XAI tools such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations). These tools helped highlight which case details most influenced the model's verdict predictions, enhancing transparency and trustworthiness in legal contexts.

Steps to Implement SHAP for Explainable AI with LoRA:

1. Load the LoRA fine-tuned model and its tokenizer.
2. Wrap them in a Hugging Face text-classification pipeline so SHAP can query predictions.
3. Create a SHAP explainer over the pipeline and compute SHAP values for sample case texts.
4. Visualize the token-level attributions to see which case details drove each verdict prediction.

Below is the image showing the result of integrating explainable AI with the LoRA model:

[Image: SHAP explanation output for the LoRA model]

Code Snippets:

1. Code snippet for using multi-faceted evaluation metrics:

[Image: code for multi-faceted evaluation metrics]
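The snippet above is embedded as an image; a minimal sketch of an equivalent compute_metrics function, assuming scikit-learn and the Hugging Face Trainer interface (the metric set matches Step 3, but the exact notebook code may differ):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Compute accuracy, precision, recall, and F1 for the Trainer."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```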

2. Code snippet for fine-tuning with LoRA:

[Image: code for fine-tuning with LoRA]
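A minimal sketch of the LoRA setup for DistilBERT sequence classification, assuming the peft and transformers libraries; the rank, alpha, dropout, and label count shown are illustrative assumptions rather than the notebook's exact values:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

# Load the base model; the label count here is an assumption.
base_model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# LoRA freezes the base weights and trains low-rank update matrices
# injected into the attention projections.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,         # sequence classification
    r=8,                                # assumed rank of the update matrices
    lora_alpha=16,                      # assumed scaling factor
    lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"],  # DistilBERT attention projections
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # confirms only a small fraction is trainable
```

The resulting model plugs directly into the Trainer alongside the compute_metrics function sketched above.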

3. Code snippet for fine-tuning with DoRA:

[Image: code for fine-tuning with DoRA]
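In peft, DoRA is enabled through the same LoraConfig by setting use_dora=True (supported in recent peft releases); a sketch under the same assumptions as the LoRA snippet:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base_model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # label count is an assumption
)

# use_dora=True decomposes each adapted weight into a magnitude and a
# direction component that are trained separately.
dora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                # assumed rank
    lora_alpha=16,                      # assumed scaling factor
    lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"],  # DistilBERT attention projections
    use_dora=True,
)

model = get_peft_model(base_model, dora_config)
model.print_trainable_parameters()
```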

4. Code snippet for fine-tuning with QLoRA:

[Image: code for fine-tuning with QLoRA]
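QLoRA loads the base model in 4-bit precision and attaches LoRA adapters on top; a sketch assuming the bitsandbytes integration in transformers (the quantization settings are illustrative):

```python
import torch
from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig

# Quantize the frozen base model to 4-bit NF4 to cut memory usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=2,  # label count is an assumption
    quantization_config=bnb_config,
)
base_model = prepare_model_for_kbit_training(base_model)

# The LoRA adapters stay in higher precision and are trained as usual.
qlora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    target_modules=["q_lin", "v_lin"],
)

model = get_peft_model(base_model, qlora_config)
model.print_trainable_parameters()
```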

5. Code snippet for applying embeddings to multi layers with LoRA:

[Image: code for multi-layer LoRA configuration]
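A sketch of restricting LoRA to the layer indices used in Step 5 via the layers_to_transform parameter (the other hyperparameters are assumptions):

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base_model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # label count is an assumption
)

# Restrict the LoRA update to the layer indices from Step 5; all other
# layers keep their frozen pretrained weights.
layered_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    target_modules=["q_lin", "v_lin"],
    layers_to_transform=[0, 2, 4, 6],
)

model = get_peft_model(base_model, layered_config)
model.print_trainable_parameters()  # fewer trainable parameters than full-layer LoRA
```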

6. Code snippet for explainable AI integration:

[Image: code for SHAP integration]
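A sketch of the SHAP integration following the steps listed in Step 7, assuming shap's built-in support for Hugging Face text-classification pipelines; model refers to the LoRA model from the earlier snippets, and the sample text is illustrative:

```python
import shap
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# Wrap the LoRA fine-tuned model in a text-classification pipeline so
# SHAP can query its predictions; top_k=None returns scores for all classes.
clf = pipeline("text-classification", model=model, tokenizer=tokenizer, top_k=None)

# shap.Explainer auto-wraps Hugging Face text pipelines with a text masker.
explainer = shap.Explainer(clf)

# Compute token-level attributions for an illustrative case description.
shap_values = explainer(
    ["The defendant was present at the scene and admitted intent."]
)

# Visualize which tokens pushed the prediction toward each verdict class.
shap.plots.text(shap_values)
```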

Model Evaluation

After experimenting with each PEFT technique, LoRA achieved the highest accuracy, peaking at 68.4%, while DoRA and QLoRA provided computational efficiency gains without significant loss in accuracy. The explainable AI integration revealed that the factual details of each case were the primary drivers of the model's verdict predictions.

Insights Gained

This experimentation reinforced the value of PEFT techniques in optimizing model performance for domain-specific tasks. Explainable AI provided essential insights into the model's reasoning process, proving invaluable for ensuring fair and transparent legal predictions.

Next Steps

Future work includes experimenting with other models to improve accuracy.

Tutorial Reference:

Below are the tutorials I followed to understand the different fine-tuning techniques:

Comparing Fine-Tuning Optimization Techniques: LoRA, QLoRA, DoRA, and QDoRA

What is LoRA? Low-Rank Adaptation for finetuning LLMs EXPLAINED

DoRA LLM Fine-tuning explained : Improved LoRA

LoRA & QLoRA Fine-tuning Explained In-Depth

Code Reference

You can download the complete Jupyter notebooks via the links below:

LoRA vs DoRA vs QLoRA, with accuracy measures:

CS_297_AI_Powered_Legal_Decision_Support_System_Fine_tuning_with_LoRA.ipynb

CS_297_AI_Powered_Legal_Decision_Support_System_Fine_tuning_with_DoRA.ipynb

CS_297_AI_Powered_Legal_Decision_Support_System_Fine_tuning_with_QLoRA.ipynb

LoRA vs DoRA vs QLoRA, with multiple evaluation metrics:

CS_297_AI_Powered_Legal_Decision_Support_System_Fine_tuning_with_LoRA_With_Multiple_eval_metrics.ipynb

CS_297_AI_Powered_Legal_Decision_Support_System_Fine_tuning_with_DoRA_With_Multiple_eval_metrics.ipynb

CS_297_AI_Powered_Legal_Decision_Support_System_Fine_tuning_with_QLoRA_With_Multiple_eval_metrics.ipynb

LoRA with multi layer embedding:

CS_297_AI_Powered_Legal_Decision_Support_System_With_LoRA__More_embeddings.ipynb

LoRA with explainable AI:

CS_297_AI_Powered_Legal_Decision_Support_System_With_LoRA_Explainable_AI.ipynb

Presentation Links:

Presentation PDF available:

Different Fine-Tuning Models - PDF

Explainable AI with LoRA - PDF

Task Types in PEFT - PDF