
1) Project overview
SDM Plus is a delivery workflow hub used by delivery teams to find the right method, phase, activity, and templates while executing services work. This case study is about rebuilding SDM Plus around three things that determine whether people use guidance under pressure: findability, reliability, and fixability.
Role: UX Designer (high ownership)
Company: Microsoft Services (via LTI)
Product: SDM Plus 2.0 (methodology portal)
Year: 2020
Partners: PM, Engineering, SMEs (content owners)
2) The trust break
SDM Plus was designed to be the place teams went for clarity.
But the first pattern I noticed wasn’t “people can’t find things.”
It was what they did after they couldn’t find things.
They stopped trying.
You could hear it in the fallback behavior: “I’ll ask someone. It’s faster.”
That’s not a navigation complaint. That’s a trust break.
[Media 1: Quote tile]
3) When trust breaks, teams route around the system
In delivery work, people don’t pause because the portal is confusing. The work still moves. Teams route around the system, and inconsistency becomes the default.
The baseline signal was blunt: SDM Plus scored 48.2 on the System Usability Scale (SUS), well below the commonly cited average of 68.
Low UX wasn’t just a perception. It was measurable.
We were working under an MVP timeline, so we could not afford a broad rebuild. Early conversations drifted toward “restructure everything.” My job was to slow that down and earn the right fix with proof.
[Media 2: SUS card]
4) The shape of the pain
Across stakeholder sessions, surveys, and workshops, the issues clustered into three recurring themes:
Findability: too many clicks, unclear paths, friction switching between methods
Reliability: outdated or incorrect content made people second-guess what they found
Fixability: improving content was bottlenecked, so the portal aged faster than it could be corrected
[Media 3: Key observations card]
5) Proof before redesign
“Hard to find” can be caused by bad structure (IA) or by a UI that hides a workable structure.
Most teams jump straight into screens. I didn’t want guesswork to drive months of effort.
So I ran two tests to separate the causes:
Tree testing removed the UI and tested the structure. The IA held up reasonably well.
URUT (unmoderated remote usability testing) tested the live UI experience. People took longer and backtracked more, showing that the interface, not the structure, was where confidence and time were being lost.
That evidence gave us permission to be precise: fix the experience that was hiding the structure, not rebuild everything.
[Media 4: Tree vs URUT comparison panel]
6) How I worked (so this didn’t become a design-by-opinion loop)
I kept the team aligned using a simple rhythm:
turn every complaint into a testable question
decide with evidence first (even lightweight evidence)
design in “pillars” so UI decisions stayed coherent
phase delivery so we shipped confidence-building foundations before add-ons
This kept the work moving without losing rigor.
7) Three moves to restore trust
Navigation aligned to the mental model
People arrived with intent: “I’m doing a certain kind of work. Tell me the right method, phase, and activity.”
The mental model path was clear:
Consulting/Support → Methods → Phases → Activities
Before, switching methods felt like losing your place. After, the path stayed consistent so moving between methods felt like continuing work, not restarting it.
[Media 5: Mental model diagram]
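The consistent path can be sketched as a small data model. This is an illustrative sketch only: the method, phase, and activity names below are invented placeholders, not actual SDM Plus content, and the real portal's data model is not shown in this case study.

```python
# Hypothetical model of the Domain → Method → Phase → Activity hierarchy.
# All names below are illustrative, not real SDM Plus content.
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str

@dataclass
class Phase:
    name: str
    activities: list[Activity] = field(default_factory=list)

@dataclass
class Method:
    name: str
    domain: str  # e.g. "Consulting" or "Support"
    phases: list[Phase] = field(default_factory=list)

def breadcrumb(method: Method, phase: Phase, activity: Activity) -> str:
    """Render the same path shape regardless of which method you are in."""
    return " → ".join([method.domain, method.name, phase.name, activity.name])

kickoff = Activity("Kickoff workshop")
envision = Phase("Envision", [kickoff])
consulting = Method("Example Method", "Consulting", [envision])
print(breadcrumb(consulting, envision, kickoff))
# → Consulting → Example Method → Envision → Kickoff workshop
```

Because every method renders the same path shape, switching methods preserves the user's sense of place instead of resetting it.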
Search built for scanning
Search needed to work like triage: let users scan quickly, narrow confidently, and reach the right entity type without opening ten tabs.
Before, results forced people to hunt and open multiple pages to confirm they were in the right place. After, results were structured to scan and filter by entity type so the right item surfaced faster.
[Media 6: Search redesign redraw]
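The grouping-and-filtering behavior described above can be sketched in a few lines. The result records and field names here are assumptions for illustration; the actual search implementation is not part of this case study.

```python
# Illustrative sketch of entity-typed search results.
# Titles, types, and field names are assumptions, not real portal data.
from collections import defaultdict

results = [
    {"title": "Envision phase overview", "type": "Phase"},
    {"title": "Kickoff deck", "type": "Template"},
    {"title": "Risk review", "type": "Activity"},
    {"title": "Status report", "type": "Template"},
]

def group_by_type(items):
    """Group results so users can scan one entity type at a time."""
    groups = defaultdict(list)
    for item in items:
        groups[item["type"]].append(item)
    return dict(groups)

def filter_by_type(items, entity_type):
    """Narrow results to a single entity type without opening pages to check."""
    return [i for i in items if i["type"] == entity_type]

grouped = group_by_type(results)
templates = filter_by_type(results, "Template")
```

The point of the design is visible in the shape of the output: users see result types before they click, so confirming "am I in the right place?" no longer requires opening the page.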
Trust through controlled contribution
Trust doesn’t come from cleaner UI alone. It comes from content being current and fixable.
Before, a content fix depended on chasing the right person. After, fixes moved through a governed flow: submit → SME review → publish, with versioning and notifications so trust could build over time.
[Media 7: Contribution workflow diagram]
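The governed flow above behaves like a small state machine. The sketch below is a hedged illustration: the state names, transition rules, and notification fields are assumptions drawn from the submit → SME review → publish description, not the actual SDM Plus implementation.

```python
# Hedged sketch of the governed contribution flow.
# States and transitions are assumptions based on the description above.
ALLOWED = {
    "draft": {"submitted"},
    "submitted": {"in_review"},
    "in_review": {"published", "rejected"},
    "rejected": {"draft"},       # rejected fixes go back for rework
    "published": set(),          # terminal for this version
}

class Contribution:
    def __init__(self, title):
        self.title = title
        self.state = "draft"
        self.version = 0
        self.history = []        # simple versioning/audit log
        self.notifications = []  # who gets told about each transition

    def transition(self, new_state, notify=None):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.history.append((self.state, new_state))
        self.state = new_state
        if new_state == "published":
            self.version += 1    # each publish bumps the version
        if notify:
            self.notifications.append(notify)

fix = Contribution("Correct outdated template link")
fix.transition("submitted", notify="sme@example.com")
fix.transition("in_review")
fix.transition("published", notify="submitter@example.com")
```

The design choice worth noting: invalid transitions fail loudly instead of silently, which is what turns "chasing the right person" into a flow people can trust to finish.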
8) Phase 1 delivery plan (MVP first)
With the MVP timeline in mind, we prioritized anything that reduced time-to-content and restored confidence first.
Phase 1 focused on: navigation/layout improvements, method/phase landing pages, templates and job aids as first-class entities, baseline improved search, redundancy cleanup, and collaboration support.
[Media 8: Phase 1 scope card]
9) What we shipped (MVP)
What we shipped first was intentionally unglamorous but high-impact: the parts that made people stop working around the portal.
clearer method + phase entry points (less “where do I start?”)
a more legible path from method → phase → activity
more scannable search results with clearer grouping and filtering
a controlled contribution workflow to keep content current over time
cleanup of redundant or confusing content patterns
10) What I deprioritized (on purpose)
We chose sequencing over scope. These items were valuable, but not the first domino.
deep customization by team or region (kept one backbone first)
advanced reporting dashboards (solved findability + trust first)
heavy automation requiring major backend work (sequenced after MVP)
11) What changed
Taken together, these moves changed the team’s relationship with the portal. We:
avoided a costly “rebuild everything” by proving where the breakdown was (UI vs IA)
moved the portal closer to what it was meant to be: reliable guidance under pressure
made content improvement scalable through governed contribution, not a bottleneck
The goal was simple: when work got hard, SDM Plus should be the first place teams went, not the place they worked around.
12) What I learned
Designing SDM Plus wasn’t about making a portal prettier.
It was about making guidance feel trustworthy at the exact moment people needed it.
