
Enabling legal teams to build automations with natural language

How we can replace rigid boolean logic builders with AI that translates plain text into executable rules.
Legal teams know what they want to automate, but current tools require complex workflow builders that force lawyers to think like engineers.
I explored an agentic interface where instead of manually configuring logic, users define intent in plain English for the LLM to translate into rules they can check and test.

How it works

Imagine a lawyer who wants to triage incoming vendor contracts. She types her policy as she would explain it to a colleague: "If it's a small deal on our standard paper with no changes, auto-approve it".
The system handles the translation layer. It breaks the request down into concrete attributes and asks clarifying questions (what counts as "small"? which template is "standard paper"?).
Once questions are answered, the user sees a "rule card" that shows the interpreted logic in readable English.
The user can calibrate the rule by adjusting smart chips. After confirming changes, a fresh rule card is generated as a new message, and the previous card above is deactivated (its opacity drops).
Before activation, the user can run a simulation on a "smart sample" of historical documents. This provides immediate feedback on how the policy performs in reality, showing exactly which docs pass and which get flagged.
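The flow above can be sketched in code. This is a minimal illustration, not a real implementation: the rule card fields (`max_value`, `template`, `allow_edits`) and the sample documents are hypothetical stand-ins for what the translation layer and smart-sample step would produce.

```python
from dataclasses import dataclass

# Hypothetical rule card produced by the translation layer.
# Field names are illustrative, not a real API.
@dataclass
class RuleCard:
    max_value: float   # "small deal" threshold, clarified with the user
    template: str      # "our standard paper"
    allow_edits: bool  # "no changes"

    def evaluate(self, doc: dict) -> str:
        """Return the decision this rule makes for a single document."""
        if (doc["value"] <= self.max_value
                and doc["template"] == self.template
                and (self.allow_edits or not doc["edited"])):
            return "auto-approve"
        return "flag for review"

# Simulation on a "smart sample" of historical documents.
rule = RuleCard(max_value=50_000, template="standard-msa", allow_edits=False)
sample = [
    {"id": "C-101", "value": 12_000, "template": "standard-msa", "edited": False},
    {"id": "C-102", "value": 80_000, "template": "standard-msa", "edited": False},
    {"id": "C-103", "value": 9_500,  "template": "custom",       "edited": True},
]
results = {d["id"]: rule.evaluate(d) for d in sample}
```

Running the rule against the sample gives the immediate feedback described above: the user sees exactly which documents pass and which get flagged before anything goes live.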

Implementation risks & considerations

Security and permissions

An automation should never be allowed to move confidential files into shared folders or bypass the creator's existing access rights.

Handling low confidence

What happens when the system creates a rule with a low confidence score? We need a human-in-the-loop state where the system pauses and asks the user to review the logic manually before activation.
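A minimal sketch of that gate, assuming the translation step returns a confidence score in the range 0 to 1. The 0.8 threshold is an arbitrary illustration, not a recommendation.

```python
# Illustrative threshold; a real value would be tuned empirically.
CONFIDENCE_THRESHOLD = 0.8

def route_rule(rule: dict, confidence: float) -> str:
    """Decide whether a translated rule can activate or needs manual review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return "ready-to-activate"
    # Pause: surface the interpreted logic and ask the user to confirm it.
    return "needs-human-review"
```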

Scope

We will likely need to limit execution to specific folders, timeframes, or document batches to prevent mass errors.
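One way to frame that limit is a scope guard that every automation checks before acting. The configuration keys here are invented for illustration.

```python
from datetime import date

# Hypothetical scope config: the automation only touches documents that
# sit in an allowed folder and arrived after a cutoff date.
scope = {
    "folders": {"/contracts/vendors"},
    "after": date(2024, 1, 1),
}

def in_scope(doc: dict) -> bool:
    """Return True only if the document falls inside the configured scope."""
    return doc["folder"] in scope["folders"] and doc["received"] >= scope["after"]
```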

Data normalization

Logic like "value under $50k" fails if contracts use different currencies. The system needs a normalization layer to convert financial (and other) values before evaluation.
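A sketch of what that layer might look like. The exchange rates are hard-coded placeholders; a real system would fetch them from a rates service, and an unknown currency should fall back to human review rather than a guess.

```python
# Placeholder rates for illustration only.
RATES_TO_USD = {"USD": 1.0, "EUR": 1.08, "GBP": 1.27}

def normalize_value(amount: float, currency: str) -> float:
    """Convert a monetary amount into USD before rule evaluation."""
    try:
        return amount * RATES_TO_USD[currency]
    except KeyError:
        # No rate available: refuse to evaluate, flag for review instead.
        raise ValueError(f"No exchange rate for {currency}; flag for review")

def under_threshold(amount: float, currency: str, limit_usd: float = 50_000) -> bool:
    """Evaluate 'value under $50k' consistently across currencies."""
    return normalize_value(amount, currency) < limit_usd
```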

Handling nested logic

The system needs to support multi-step dependencies (If A, then B; but if C, then D) without forcing the user back into a node-graph view.
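One representation that avoids a node-graph UI is a small condition tree the engine evaluates recursively, while the interface keeps showing readable English. The node shapes and example policy below are illustrative, not a proposed schema.

```python
def evaluate(node: dict, doc: dict) -> str:
    """Recursively evaluate a nested rule tree against a document dict."""
    if node["type"] == "action":
        return node["value"]
    if node["type"] == "if":
        branch = "then" if node["test"](doc) else "else"
        return evaluate(node[branch], doc)
    raise ValueError(f"Unknown node type: {node['type']}")

# "If it has edits, send to legal review; otherwise, if it's an NDA,
# auto-approve; otherwise, route to the standard queue."
rule_tree = {
    "type": "if",
    "test": lambda d: d["edited"],
    "then": {"type": "action", "value": "legal-review"},
    "else": {
        "type": "if",
        "test": lambda d: d["doc_type"] == "nda",
        "then": {"type": "action", "value": "auto-approve"},
        "else": {"type": "action", "value": "standard-queue"},
    },
}
```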

Regulatory changes

We need a way to alert users when a law changes to prevent them from relying on automations that might no longer be compliant.

Auditing

We should consider a plain-text log for every action explaining which part of the policy triggered a decision.
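A sketch of what one such log entry could look like. The field layout is an assumption; the point is that each decision records, in plain text, which clause of the policy triggered it.

```python
from datetime import datetime, timezone

def audit_line(doc_id: str, decision: str, triggered_by: str) -> str:
    """Produce one human-readable audit entry for an automated decision."""
    ts = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return f"{ts} | {doc_id} | {decision} | triggered by: {triggered_by}"

line = audit_line("C-101", "auto-approve",
                  "value under $50k AND standard paper AND no edits")
```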

Roadmap & ideas

Hybrid editing

Users may need to tweak a single parameter without rewriting the entire prompt. We need a state where manual inputs and AI-generated logic coexist gracefully.

Prompt guidance

Template suggestions can help users structure their intent in a way the model can easily interpret.

Predictive automation

The system could detect repetitive manual actions (e.g. always archiving NDA drafts) and suggest automations to the user.
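The detection itself can be very simple: count recent manual actions and surface a suggestion once the same action repeats often enough. This is a sketch under assumed names; the threshold of 3 is arbitrary.

```python
from collections import Counter

def suggest_automations(history: list, threshold: int = 3) -> list:
    """Suggest an automation for any (action, doc_type) pair seen >= threshold times."""
    counts = Counter(history)
    return [
        f"You often {action} {doc_type} documents. Automate this?"
        for (action, doc_type), n in counts.items()
        if n >= threshold
    ]

# Four archives of NDA drafts, one invoice approval.
history = [("archive", "nda-draft")] * 4 + [("approve", "invoice")]
suggestions = suggest_automations(history)
```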