Welcome to Legal Prompting, I am Nicola Fabiano and this is Episode 6.
In the last episode, we looked at chain of thought and few-shot prompting, two techniques
that let us guide the model through logical steps and show it concrete examples of the
output we want.
Today we apply those techniques to one of the areas where lawyers work the most, contracts.
When I talk about contract analysis with AI, I mean four distinct operations.
The first is the structured review of a single contract.
The second is the comparison between two versions of the same text.
The third is the verification of a clause against a checklist.
The fourth is the analysis of a data processing agreement against Article 28 of the GDPR.
These are different operations and they require different prompts.
Let us start with structured review.
The most common mistake is to ask the model to "review this contract", a request so generic
that it produces a summary, not an analysis.
The prompt must instead define the role, indicate the applicable law, set the point of view
from which we read the text, and list the areas to examine.
For example, limitation of liability clauses, jurisdiction, term and termination, confidentiality,
data processing, force majeure.
For each area we ask the model to quote the text of the clause, identify the issue found,
and propose an alternative wording.
This is chain of thought applied.
The model does not issue a verdict, it walks through a reasoning that we can verify.
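The recipe described above can be sketched in code. This is a minimal illustration, not a fixed template: the function name, the role, the applicable law, and the list of areas are all assumptions that should be replaced with your own professional practice.

```python
# Illustrative list of review areas; adapt to your practice.
REVIEW_AREAS = [
    "limitation of liability",
    "jurisdiction",
    "term and termination",
    "confidentiality",
    "data processing",
    "force majeure",
]

def build_review_prompt(contract_text: str, role: str, law: str, side: str) -> str:
    """Assemble a structured-review prompt: role, applicable law,
    point of view, areas to examine, and for each area the three
    steps (quote the clause, identify the issue, propose wording)."""
    areas = "\n".join(f"- {a}" for a in REVIEW_AREAS)
    return (
        f"You are {role}. The contract is governed by {law}.\n"
        f"Review it from the point of view of {side}.\n"
        f"Examine the following areas:\n{areas}\n"
        "For each area: (1) quote the relevant clause verbatim, "
        "(2) identify the issue found, "
        "(3) propose an alternative wording.\n\n"
        f"CONTRACT:\n{contract_text}"
    )

# Hypothetical usage; the parameters are examples, not recommendations.
prompt = build_review_prompt(
    "<contract text here>",
    "a commercial contracts lawyer",
    "Italian law",
    "the service recipient",
)
```

The point of the structure is that the output mirrors the reasoning steps we asked for, so each quoted clause can be checked against the source text.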
Comparison between versions is the use case where AI delivers the most reliable results.
We provide the two versions, we indicate that we want substantive differences and not purely
formal ones, and we ask that each difference be classified: in whose favor the text has
shifted, and what risk it introduces.
Here, few-shot helps a great deal.
If we show the model one or two examples of how we want the difference analysis formatted,
the output becomes immediately more useful.
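A sketch of what that looks like in practice, with one few-shot example embedded in the prompt. The example difference, the clause number, and the format labels are invented purely to show the technique; substitute a real example from your own files.

```python
# One invented few-shot example showing the desired output format.
FEW_SHOT = """\
Example of the format we want:
Clause: 7.2 (Liability cap)
Change: cap lowered from 12 months' fees to 3 months' fees
Type: substantive
In favor of: the supplier
Risk introduced: recoverable damages drop sharply for the customer
"""

def build_compare_prompt(version_a: str, version_b: str) -> str:
    """Assemble a two-version comparison prompt that asks for
    substantive differences only, formatted like the example."""
    return (
        "Compare the two contract versions below.\n"
        "Report only substantive differences, not purely formal ones.\n"
        "For each difference, use exactly this format.\n\n"
        f"{FEW_SHOT}\n"
        f"VERSION A:\n{version_a}\n\n"
        f"VERSION B:\n{version_b}"
    )
```

One well-chosen example is usually enough; two examples covering differences that favor opposite parties tend to anchor the format more firmly.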
Verification against a checklist is delicate ground.
If the checklist is generic, the output is generic.
If the checklist is specific and truly reflects our professional practice, the output becomes
a working tool.
The point is that the checklist must be built beforehand, not improvised inside the prompt.
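One way to keep the checklist outside the prompt is to maintain it as data and inject it item by item. The items below are placeholders for illustration only; the value comes from a list built and refined over time in your own practice.

```python
# Placeholder checklist items; the real list is built beforehand,
# outside the prompt, and maintained as part of professional practice.
CHECKLIST = [
    "Liability cap is no lower than 12 months' fees",
    "Termination for convenience requires at least 90 days' notice",
    "Confidentiality obligations survive termination",
]

def build_checklist_prompt(contract_text: str) -> str:
    """Inject the pre-built checklist and ask for a verdict
    plus a supporting quotation for every item."""
    items = "\n".join(f"{i}. {item}" for i, item in enumerate(CHECKLIST, 1))
    return (
        "Check the contract below against each checklist item.\n"
        "For each item answer: met / not met / unclear, "
        "and quote the clause that supports your answer.\n\n"
        f"CHECKLIST:\n{items}\n\n"
        f"CONTRACT:\n{contract_text}"
    )
```

Requiring a quotation for every verdict is what makes the output verifiable rather than a list of bare yes/no answers.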
Now to the DPA, the Data Processing Agreement.
Here the reference is precise.
Article 28 of the GDPR lists the mandatory minimum content.
We can build a prompt that asks the model, for each letter of paragraph 3 of Article
28, whether the corresponding provision is present in the DPA, where it can be found,
and whether it is drafted in a compliant manner.
This is an exercise where the model is genuinely useful, because the regulatory reference is
clear and exhaustive.
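A sketch of that letter-by-letter check. The summaries of letters (a) through (h) below are paraphrases for illustration, not the official text of the Regulation; for real work, the prompt should carry the exact wording of Article 28(3).

```python
# Paraphrased summaries of Article 28(3) GDPR, letters (a)-(h).
# For real use, replace with the official text of the Regulation.
ART_28_3 = {
    "a": "processing only on documented instructions from the controller",
    "b": "confidentiality commitment of persons authorised to process",
    "c": "security measures pursuant to Article 32",
    "d": "conditions for engaging sub-processors",
    "e": "assistance with data subject rights requests",
    "f": "assistance with obligations under Articles 32 to 36",
    "g": "deletion or return of personal data at the end of the services",
    "h": "information and audits to demonstrate compliance",
}

def build_dpa_prompt(dpa_text: str) -> str:
    """Ask, letter by letter, whether each mandatory provision
    is present, where it sits, and whether it is compliant."""
    letters = "\n".join(f"({k}) {v}" for k, v in ART_28_3.items())
    return (
        "For each letter of Article 28(3) GDPR listed below, state:\n"
        "1. whether a corresponding provision is present in the DPA,\n"
        "2. where it can be found (clause number), and\n"
        "3. whether it is drafted in a compliant manner.\n\n"
        f"ARTICLE 28(3):\n{letters}\n\n"
        f"DPA:\n{dpa_text}"
    )
```

Because the regulatory list is closed, a missing letter in the output is itself a finding worth checking by hand.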
Three cautions.
First, the model does not negotiate.
It can flag that a clause is unbalanced, but it does not know how much bargaining power
we have, nor what the practice in the sector is.
Second, the model does not know the commercial context.
An exclusivity clause can be normal in one industry and pathological in another.
Third, no output replaces the full reading of the contract by the professional.
AI speeds up the first pass.
It does not sign in our place.
There is also a theme that will run through the upcoming episodes: where the model that
you use to analyze a client's contract is actually running.
Uploading a confidential contract to an uncontrolled cloud service is a compliance choice before
it is a technical one.
We will return to it in depth in the episode on professional secrecy.
In the next episode, we take a step forward, from the review of a single contract to legal
prompting as a structural component of corporate compliance processes: how AI is integrated
into a legal workflow without creating new risks and without diluting responsibilities.
Thank you for listening.
See you in the next episode.