

Contract Management Pro (CM Pro/CLM)
Workflow-native contracting, then GenAI and agentic patterns only where trust can be designed into the loop.
Role: Design Manager (hands-on)
Company: ServiceNow
Team: Started with 2 designers, scaled to ~4
Partners: Product, Engineering, UX Research (shared), Solution Consulting and Enablement, Implementation partners
At a glance
Problem: Enterprise contracting fails when it behaves like an island. Users context-switch, governance gets brittle, and adoption stalls.
Bet: Don’t build a destination CLM suite. Build a workflow-native contracting wedge that plugs into sales and procurement workflows, meet counsel in Word, then add AI only where trust can be designed into the loop.
What shipped: Workflow-native CM Pro designed to live inside sales and procurement motions, Word-first authoring wedge, AI-assisted review and repository extraction, conversational AI search MVP in the workspace. Admin guided setup explored as a north star direction.
Proof: Over ~12 months: 100+ licensed customers, with dozens live and dozens in implementation across multiple regions.
AI impact: For simple sales NDAs, turnaround dropped ~50% (3–4 days → 1–2). Redlining effort dropped ~80–90% because AI handled first-pass issue spotting and clause-level suggestions, with reviewers approving changes clause-by-clause.
My role: Led design strategy and delivery end-to-end, scaled the team 2 → 4, stayed hands-on in MS Word, AI trust patterns, conversational search, and admin setup north star.

How contracting actually works
Contracts don’t live only in Legal. They touch teams like Sales, Procurement, HR, Finance, and IT, which is why contracting breaks when it behaves like a standalone island.
Most work starts in one of two ways:
Own templates (first-party paper): generate from approved templates and clause libraries.
Their templates (third-party paper): review and negotiate someone else’s contract.
After signing, contracts become operational truth: metadata, renewals, and obligations that drive downstream work. The lifecycle is: request → author/review → sign → repository → manage obligations.

The moment it clicked
There was a point early on where “standalone CLM suite” stopped being a product aspiration and started feeling like a strategic trap.
Not because it wasn’t attractive. Because it wasn’t viable.
With our investment and timelines, going suite-first would have pushed us into feature-chasing, and it would have asked customers to adopt a brand new destination workspace and operating model on day one.
But contracting is tightly woven into how enterprises already work across teams and workflows, especially in Sales and Procurement. When a standalone contracting tool pulls people out of those contexts, adoption quietly leaks back into Word attachments, email threads, and spreadsheet trackers.
So we made a different bet: build a workflow-native wedge that can be adopted incrementally. Commercially, it also created a cleaner land-and-expand motion: start as a capability inside existing relationships, prove value, expand as migration becomes feasible.
What changed for me: over time, the question shifted from “how do we build a full CLM” to “what’s the smallest contracting wedge that feels native, credible, and expandable.”
Where CM Pro plugs in
Workflow-native wasn’t a slogan for us. It was a packaging decision.
We designed CM Pro so contracting could plug into the places work already starts, especially Procurement and Sales Order Management (SOM). That way, contracting could be adopted inside existing motions rather than forcing everyone into “another legal workspace” up front.


What I owned
As Design Manager, I owned experience direction and delivery across the contract lifecycle.
Set the north star (workflow-native, familiar-first, trust-by-design)
Built the operating cadence that kept quality coherent as the team scaled
Partnered tightly with PM and Engineering to turn ambiguity into shippable slices
Stayed hands-on where small decisions compound:
Word experience
AI-assisted review patterns
Repository workflows (metadata + obligations extraction)
Conversational AI search MVP in the workspace
Admin setup north star direction
How I scaled ownership (without experience drift)
As the team scaled, I assigned clear surface ownership (Word, Workspace, Admin, AI) and used cross-surface decision reviews to keep patterns consistent, especially trust gates, approvals, and traceability. We also separated work into “ship now” vs “design debt” vs “north star” so MVP delivery didn’t get tangled with vision work.
[ARTIFACT NOTE: Add one “operating cadence” mini-callout]
Decision log + weekly PM/Eng triage + critique to hold quality bar
Trust gates treated as non-negotiables

How we planned releases and made scope trade-offs explicit.

The recurring loops that kept scope, decisions, and quality stable through the release cycle.

Word was not a feature, it was the deal
One of the clearest signals came from a senior legal counsel (25–30 years in practice). He was direct in a way that left no room for interpretation:
“I have been a lawyer for 25–30 years. MS Word is my default. Even if you build the perfect contracting tool, I will still be in Word for everything else. So unless you meet me there, you are asking me to change the way I work.”
That quote changed the nature of the conversation. We stopped treating Word as an integration checkbox and started treating it as the adoption surface.
So the job wasn’t “add a Word add-in.” The job was: respect counsel’s authoring context while preserving what the platform needs to be credible: traceability, governance, workflow state, and contract truth.
[ARTIFACT NOTE: Word add-in hero screenshot]
Show counsel in Word plus the CM Pro governance/traceability hook

Word is the surface. The platform is the system. Counsel edits in Word while CM Pro preserves workflow state, traceability, and contract truth.
AI, with one line we refused to cross
When GenAI entered the picture, it could easily have turned into “AI everywhere.” But we refused to ship AI that isn’t trustworthy.
This forced discipline. AI could suggest, summarise, and extract, but humans retain authority. Nothing becomes contract truth without explicit confirmation. Users can edit, override, and reject.
1) The unsafe ask we said no to
There was an early push to let AI scan a contract, correct missing clauses and discrepancies, and generate a new “fixed” document version.
We said no.
Even if accuracy is “90%,” the remaining 10% in legal text is where the risk lives. Auto-changing the master copy without reviewer approval creates silent errors, hard-to-trace edits, and painful reversal. It also pushes reviewers into validating a whole new document version, where it’s easier to miss something critical.
So we designed a safer posture: AI compares clause-by-clause against the playbook, flags discrepancies, and suggests fixes, but every change stays at the clause level and requires explicit human approval before it touches the document.
[ARTIFACT NOTE: Trust-by-design model]
Contract input → AI identifies issues → clause-level suggestions → user review/edit → explicit confirm → publish
Red line: Nothing becomes contract truth without human confirmation
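The red line above can be expressed as a small sketch. This is illustrative only, not the shipped data model: names like ClauseSuggestion and apply_if_confirmed are hypothetical, but the shape is the point. Suggestions stay at the clause level, and nothing touches the document without an explicit approval.

```python
from dataclasses import dataclass

@dataclass
class ClauseSuggestion:
    clause_id: str
    issue: str               # discrepancy flagged against the playbook
    proposed_text: str       # AI-suggested replacement language
    status: str = "pending"  # pending -> approved / rejected

@dataclass
class Contract:
    clauses: dict  # clause_id -> current clause text

def apply_if_confirmed(contract: Contract, s: ClauseSuggestion, approved: bool) -> None:
    """The trust gate: a suggestion only changes the document after explicit approval."""
    if approved:
        contract.clauses[s.clause_id] = s.proposed_text
        s.status = "approved"
    else:
        s.status = "rejected"  # document untouched; the rejection stays traceable
```

The design choice this encodes: there is no code path that rewrites the whole document at once, so a reviewer never has to re-validate a silently regenerated version.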

2) Where AI actually reduced real work
We anchored AI in two cost centers where effort is expensive and risk is high.
1) AI-assisted review (Word + workspace)
For simple sales NDAs, turnaround dropped ~50% (3–4 days → 1–2). Redlining effort dropped ~80–90% because AI did first-pass issue spotting and suggested clause-level language, with reviewers approving changes clause-by-clause.
Scope note: based on pilot comparisons and reviewer self-report for simple NDAs, not complex, highly negotiated agreements.
[ARTIFACT NOTE: AI review UI]
Show clause-level suggestions plus explicit approve/confirm gate

2) Repository workflows (metadata + obligations extraction)
Post-sign is where contracts become operational truth. We used AI to extract structured outputs into a reviewable state, routed it through human validation before publish, and kept every field editable with override.
[ARTIFACT NOTE: Repository extraction UI]
Metadata + obligations + review/approve before publish
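The same human-validation posture applied post-sign. As a minimal sketch (assumed names, not the actual schema): every extracted field holds the AI value separately from the human-confirmed value, and publish refuses to run until a reviewer has confirmed or overridden each one.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtractedField:
    name: str
    ai_value: str                            # what the model extracted
    confirmed_value: Optional[str] = None    # set only by a human reviewer

def publish(fields):
    """Only human-confirmed values become record truth; raw AI output never auto-publishes."""
    unreviewed = [f.name for f in fields if f.confirmed_value is None]
    if unreviewed:
        raise ValueError(f"Review required before publish: {unreviewed}")
    return {f.name: f.confirmed_value for f in fields}
```

Because the confirmed value is a separate slot, an override is just a reviewer writing a different value than the AI suggested; the extraction is never silently trusted.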

A very enterprise reality: usage was high enough that some customers asked for limits or credit-style controls, because each AI run costs money. That became part of the product conversation too.
Conversational AI search (MVP): speed first, verification where needed
Once you put contracts into a workspace, a new kind of pain shows up. Not missing data. Hunting for it.
So we built an MVP of conversational AI search inside the contract workspace for high-frequency operational questions. Instead of scanning lists and opening records, users could ask questions like “Which contracts have auto-renewal clauses?” and get an immediate, actionable list.
We made the experience feel agentic without feeling opaque: verification paths like “Show sources,” lightweight processing transparency where needed, and safe fallbacks. When a query was unclear or unsupported, the assistant didn’t guess. It prompted rephrasing and offered suggested questions that mapped to supported knowledge.
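The no-guessing fallback can be sketched as a routing rule. Everything here is hypothetical (the intent names, the classify and search hooks); what matters is the branch: unsupported queries return suggestions instead of a fabricated answer, and supported ones carry their sources.

```python
SUPPORTED_INTENTS = {
    "auto_renewal": "Which contracts have auto-renewal clauses?",
    "expiring_soon": "Which contracts expire in the next 90 days?",
}

def answer(query, classify, search):
    """Answer with verifiable results when supported; otherwise prompt, never guess."""
    intent = classify(query)  # returns a supported intent key, or None
    if intent not in SUPPORTED_INTENTS:
        return {
            "type": "fallback",
            "message": "I couldn't map that to contract data. Try one of these:",
            "suggestions": list(SUPPORTED_INTENTS.values()),
        }
    results = search(intent)
    return {
        "type": "results",
        "items": results,
        "sources": [r["contract_id"] for r in results],  # the "Show sources" path
    }
```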
[ARTIFACT NOTE: Conversational search strip]
Use: list result on workspace page + follow-up on a contract record + “Show sources” credibility moment
Optional inset: fallback prompt
The trade-off we made
Early on, we made a deliberate trade-off: we prioritised the contracting workflows of the people who would make or break adoption immediately: fulfillers, legal counsel, requestors, approvers, reviewers, and signatories. If those experiences felt clunky, no amount of admin polish would save the product.
The cost of that choice showed up later. Admin setup and configuration stayed closer to classic platform list-and-form UX, and too much was left for users to infer. As a design leader, I could see where this would land: once customers moved from “this looks promising” to “we’re implementing,” setup would become the adoption cliff.
Interim reality: we leaned on solution consulting and implementation partners to offload heavy setup work for customers while we shaped a longer-term fix.
The adoption cliff: the setup black box
This is the part many product stories skip, but it is where enterprise adoption wins or loses.
Admin research made it clear the pain was not just UI. The top issues repeatedly surfaced as documentation gaps, setup complexity, and support discoverability.
Combined with unclear documentation, the older classic UI patterns made setup feel like a black box: not just “how do I do this,” but “when do I do this, in which tool, and how do I know it worked.”
One admin said it best:
“I did not know where to start. I kept bouncing between steps, unsure what the right sequence was, and it got confusing fast.”
Template configuration and Word mapping were where this gap became most visible. Users asked for step-by-step guidance and struggled with the process, especially around variable mapping and content controls.
There was also a hidden tax: admins had to set up and maintain content controls per parent template across contract types. That is heavy upfront work, plus ongoing operational overhead as templates evolve.
In the field, admins often fell back to support tickets or implementation partners to unblock them, and setup simply stopped moving until someone pulled them through.
Once we saw the pattern, we stopped treating it like a documentation problem. We treated it like a product experience problem.
[ARTIFACT NOTE: Research proof tile]
One tile: “Top issues: Documentation + Setup + Support” + one supporting quote
Guided setup north star: make the sequence visible
In parallel to partner-supported implementations, we worked on a north star direction for admin setup. This was vision work, not fully productised yet, designed to reduce the black box feeling and make completion predictable.
The direction I drove focused on:
Breaking setup into smaller, ordered tasks with prerequisites made explicit
Turning template configuration and Word mapping into a guided step with examples and checkpoints
Adding validation so admins can tell when mapping is correct before they move on
Making completion criteria visible, not implied
[ARTIFACT NOTE: Guided setup model]
Prereqs → template selection → clause library readiness → Word mapping (guided + examples) → validation → publish
Success looks like: predictable completion, fewer “I’m stuck” moments, less partner dependence for basic setup.
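The guided setup model above can be sketched as a prerequisite-gated checklist. This is a sketch of the north star direction, not shipped product code; the step names and validate hooks are illustrative. The point is that sequence and completion become explicit: a step only surfaces once its prerequisites are done, and only counts as done once its validation passes.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SetupStep:
    name: str
    prerequisites: List[str]
    validate: Callable[[], bool]  # explicit check (e.g. Word mapping) before moving on

def next_available(steps, completed):
    """Surface only the steps whose prerequisites are all complete, keeping the sequence visible."""
    return [s for s in steps
            if s.name not in completed
            and all(p in completed for p in s.prerequisites)]

def complete(step, completed):
    """A step counts as done only when its prerequisites are met and validation passes."""
    if not all(p in completed for p in step.prerequisites):
        return False
    if not step.validate():
        return False
    completed.add(step.name)
    return True
```

This is the opposite of the black box: an admin always knows which step is next, why a step is blocked, and when a step is verifiably finished.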
Win story tile (portfolio-safe): regulated energy provider
A regulated energy enterprise needed to replace a contract solution flagged as a risk and approaching end-of-life, creating a compelling event. They needed a compliant contract lifecycle with visibility and unified intake for contract and purchase requests in a highly regulated environment.
Design angle: we engaged early as thought partners, before an RFP, shifting the conversation away from checklist comparisons and toward workflow outcomes and a credible maturity path. We also aligned closely with the implementation partner on what could be achieved out-of-the-box vs what required custom build.
Result: they signed CM Pro + LSD (plus supporting SKUs) and kicked off implementation workshops shortly after.
[ARTIFACT NOTE: Win story tile]
Problem → Compelling event → Why CM Pro won → What changed
Impact, the way it actually unfolded
This work didn’t land as one big moment. It landed in phases, because enterprise adoption does.
Phase 1: workflow-native positioning made the bet believable and adoption feasible.
Phase 2: Word-first made counsel adoption realistic without a forced behavior change.
Phase 3: trust-gated AI reduced review effort while keeping humans in control.
Phase 4: admin setup surfaced as the adoption cliff and shaped the guided setup direction.
A hard leadership moment
The hardest part was not choosing the bet. It was holding the line once every stakeholder wanted a different version of the product: sales pushing for parity checklists, engineering pushing for feasibility, legal pushing for risk controls, and implementations pushing for “make setup easier yesterday.”
My job was to keep the system coherent. I used the thesis as the decision filter, forced trade-offs into the open, and protected the trust gates even when speed was tempting, because in contracting, the cost of being wrong is higher than the cost of being slower.
What I learned and carry forward
Workflow-native wedges win when suite-first is not viable
Familiar-first (Word) is an adoption and trust wedge, not a convenience feature
AI trust is designed through gates and override, not promised through labels
Fast retrieval needs verification paths in legal workflows
Admin/setup UX becomes the adoption constraint once you ship
Migration is incremental, so the product must support feasible adoption paths
Summary (signature bets + outcomes)
Scope: Led CM Pro design from 0→1 through enterprise commercial adoption and early operational rollout.
Signature bets: workflow-native wedge (Procurement + SOM), Word-first counsel experience, trust-gated AI for review + extraction, conversational search MVP, guided setup north star to reduce admin ambiguity.
Outcomes: In ~12 months: ~120 licensed, ~35+ live, ~55 implementing (multi-region). For simple NDAs: turnaround dropped ~50% (3–4 days → 1–2) and redlining effort dropped ~80–90% with AI-assisted review. A regulated enterprise win validated the thesis: adoption follows trust, rollout realism, and workflow-native packaging.





