- Dialysis nurses working under time pressure were making critical setup errors that the existing interface made easy to miss and slow to recover from.
- Identified that the root cause wasn't missing features but an information architecture working against the clinical context, with errors clustering at the same decision point regardless of staff experience.
- Designed guided setup flows with explicit, per-parameter confirmation gates borrowed from aviation checklist design.
- Post-pilot results showed ~40% fewer critical errors in controlled usability testing, with average setup time up only 8 seconds.
How did this problem land on our desk?
Placeholder — Add the background: who brought this project in, what the brief said, how the problem evolved as we dug deeper. What was the org context — is this a startup, a hospital system, an internal tool?
Placeholder — Describe the constraints: compliance requirements, existing tech stack, timeline pressures, stakeholder dynamics.
"We didn't tell them to skip the checklist. They just had no time to follow it."
— Nephrologist, Site 2
What does the landscape look like?
Placeholder — Describe the broader healthcare IT and medical device UX market. What products exist in this space? What are incumbents doing well and where are they failing? What does the regulatory landscape require?
Who are we actually designing for?
Placeholder — Break down the user segments in the dialysis unit. Who uses the interface and under what conditions? Senior nurses vs. junior staff? Morning rounds vs. emergency shifts? Define behavioral segments, not just demographic ones.
Senior Dialysis Nurse
Experienced and protocol-aware. Time-pressured. High-stakes, routine tasks. Low tolerance for extra steps.
Junior / Agency Staff
Variable familiarity with machine models. Needs more guidance. Higher error probability during handoffs.
A single interface cannot be optimized for both segments simultaneously. Our primary user for safety-critical flows is the junior staff member; our secondary is the experienced nurse who needs speed.
How did we define "good" before we started designing?
Placeholder — Describe the KPIs, OKRs, or qualitative signals agreed upon before ideation began. Why were these the right metrics? How were they measured?
Critical Setup Error Rate
Target: 40% reduction within 3 months of rollout, measured via machine logs.
Time to Complete Setup
Baseline: 4.2 min avg. Target: maintain ≤ 4.5 min (safety shouldn't cost time).
Nurse Confidence Score
Self-reported confidence in error-catching post-interaction, measured via SUS + interviews.
Alert Fatigue Index
Ensure new warnings don't create noise — alerts dismissed < 15% of triggers.
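To keep these targets auditable rather than aspirational, here is a minimal sketch (in Python) of how the error-rate and alert-fatigue metrics could be computed from machine logs. The log schema and field names are assumptions for illustration, not the pilot's actual instrumentation:

```python
from dataclasses import dataclass

@dataclass
class SetupLog:
    setup_id: str
    had_critical_error: bool   # machine log flagged a critical setup error
    alerts_raised: int         # warnings shown during this setup
    alerts_dismissed: int      # warnings dismissed without corrective action

def critical_error_rate(logs: list[SetupLog]) -> float:
    """Share of setups with at least one critical error."""
    return sum(log.had_critical_error for log in logs) / len(logs) if logs else 0.0

def alert_fatigue_index(logs: list[SetupLog]) -> float:
    """Dismissed alerts as a fraction of alerts raised (target: < 0.15)."""
    raised = sum(log.alerts_raised for log in logs)
    return sum(log.alerts_dismissed for log in logs) / raised if raised else 0.0

def relative_reduction(baseline: float, current: float) -> float:
    """Relative drop vs. baseline (target: >= 0.40 for the error rate)."""
    return (baseline - current) / baseline
```

Under this sketch, the first KPI passes when `relative_reduction(pre_rollout_rate, post_rollout_rate) >= 0.40` across three months of logs.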
Placeholder — So what does defining this early actually change about how you designed? This box should surface your insight, not just restate the metric.
What was in scope — and what did we deliberately leave out?
Placeholder — Describe the prioritization moment. What framework was used? What was deprioritized and why? What pressure did you push back on?
What was broken in the existing experience?
Placeholder — Heuristic evaluation of the current system. Annotated screens with severity ratings (Critical / Major / Minor). Where was the biggest friction? What surprised you?
Placeholder — The audit revealed that ____ were the highest severity issues. This shaped our early hypotheses about where to focus design energy.
What is the rest of the industry doing — and where do they fail the same way?
Placeholder — 3–5 comparators reviewed. What patterns exist across the category? Where is the obvious gap? What did adjacent industries (aviation, surgical tools) do that's worth borrowing?
Who and what are all the actors in this system?
Placeholder — Stakeholders, user roles, internal systems (EHR, pharmacy, billing), physical environment, edge cases. This is the full picture before you design anything.
What does the current state actually look like, end to end?
Placeholder — As-is journey map or process flow. Where are the handoffs? Decision points? Moments of ambiguity? Where does the system assume things it shouldn't?
Placeholder — The process map revealed that ____. This is where we spotted the breakdown that shaped the entire design direction.
What did we hear, see, and uncover in the field?
Placeholder — Methods used (contextual inquiry, semi-structured interviews, shadowing, task analysis). Sample size, sites, how participants were recruited.
Placeholder — Key themes that surfaced. Synthesis method (affinity clustering, thematic analysis, etc.).
"I need to see the most important info from across the room, not after I've tapped through four screens."
— Senior Dialysis Nurse, Site 1
"When alarms go off, the screen should tell me the action, not the problem."
— Dialysis Technician, Site 3
The research crystallized that the problem wasn't missing features — it was information architecture that worked against the clinical context.
How does the designed solution sit inside the full delivery system?
Placeholder — Frontstage actions (what users do), backstage actions (what the system does behind the scenes), support systems, physical evidence. This shows the designed solution, not the current state.
What is the design bet we're making?
Placeholder — The HMW statement. The POV. The guiding principle that shaped every decision that follows.
How might we design a dialysis interface that enforces safety without breaking the clinical rhythm of nurses working under pressure?
Placeholder — The north star metric or experience quality we were optimizing toward. What does "winning" look like from the user's perspective?
What patterns emerged from the data?
Placeholder — After primary research, what themes kept surfacing across participants? What were the unexpected findings? How did you reconcile contradictions in the data?
"Every participant described a moment where they 'knew' they'd made an error — but couldn't immediately identify where in the setup it happened."
— Research synthesis note, Session 4
These themes directly informed our problem reframing: from "feature parity" to "error visibility and forced verification at critical junctures."
What exactly are we offering — and to whom?
Placeholder — Define the core value prop of the redesigned interface. For each user segment, what does the new design specifically offer that the current one doesn't? Frame it as a before/after for the user's experience, not as a list of features.
Speed without sacrifice
Complete setup flows in the same time or less, with passive error-prevention built into the sequence.
Confident first-time completion
Guided flows with explicit confirmation gates — no silent failures, no skipped protocols.
Who had a seat at the table — and what did they each need?
Placeholder — Map the key internal and external stakeholders. What were their competing priorities? Who had veto power? Who was a quiet blocker? How did you build alignment across clinical, engineering, and procurement?
Patient safety above all
Any design that slowed down setup was a non-starter. Error reduction had to come without adding cognitive load.
Legacy system constraints
The machine firmware couldn't be modified. All safety logic had to live in the interface layer only.
Cost ceiling
No hardware changes. Interface upgrade only. Rollout budget capped at training + software deployment costs.
Informal champion
Not a formal decision-maker, but the most influential voice in the room. Her buy-in unlocked trust from the nursing floor.
We ran a constraints-first design sprint in week 3 to surface all hard limits before ideation. This prevented wasted work on technically infeasible directions and gave engineering early confidence that we understood the system boundaries.
How did we go from insight to interface?
Placeholder — Walk through the key design decisions. Not just "here's the final design" — explain the forks in the road. What was the alternative? Why was this the right call?
Placeholder — We chose [pattern X] over [pattern Y] because ____. The tradeoff was ____, which we accepted because ____.
What changed between V1 and what shipped — and why?
Placeholder — Show 2–3 meaningful design pivots. For each: what the original design was, what feedback or data caused the change, and what the revised direction looked like. This is where design thinking becomes visible.
V1 presented all setup parameters on one screen. Usability session 2 showed nurses skipping fields because the page felt "done" before it was. V2 broke setup into a 4-step sequence with explicit completion per step. Error rate in session 3 dropped by half on that screen alone.
V2 used a passive summary before confirmation. Nurses glanced at it but didn't actually read it. Borrowing from aviation checklist design, the final version requires tapping each critical parameter to confirm — zero missed confirmations in final usability sessions. It felt heavier in testing, but the safety data won the argument.
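To make the final pattern concrete, here is a minimal sketch of the forced-confirmation gate. It models only the rule (every critical parameter must be explicitly tapped before setup can complete); the parameter names are hypothetical and this is not the shipped UI code:

```python
from dataclasses import dataclass

@dataclass
class CriticalParameter:
    name: str
    value: str
    confirmed: bool = False    # flipped only by an explicit tap, never by default

class ConfirmationGate:
    def __init__(self, parameters: list[CriticalParameter]):
        self.parameters = parameters

    def confirm(self, name: str) -> None:
        """Record an explicit per-parameter confirmation tap."""
        for param in self.parameters:
            if param.name == name:
                param.confirmed = True
                return
        raise KeyError(f"unknown parameter: {name}")

    def can_complete(self) -> bool:
        """Setup cannot finish while any critical parameter is unconfirmed."""
        return all(param.confirmed for param in self.parameters)

# Hypothetical parameters, for illustration only.
gate = ConfirmationGate([
    CriticalParameter("dialysate_flow", "500 mL/min"),
    CriticalParameter("heparin_dose", "1000 IU"),
])
gate.confirm("dialysate_flow")
assert not gate.can_complete()   # one tap is not enough; no silent pass-through
gate.confirm("heparin_dose")
assert gate.can_complete()
```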
How did we take this from prototype to rollout?
Placeholder — Describe the rollout strategy. Was this a phased launch, a pilot across one ward, or a full hospital deployment? Who were the internal champions? What training or change management was needed?
We piloted at Site 2 (Dialysis Unit B) for 6 weeks before broader rollout. Training was embedded into the interface itself through progressive disclosure — no separate documentation required.
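As one plausible way to model that embedded training, assuming the interface tracks per-step completion counts per user; the tiers and thresholds below are invented for this sketch:

```python
def guidance_level(successful_completions: int) -> str:
    """Map a nurse's track record on a given step to a hint verbosity tier.
    Tiers and thresholds are illustrative; a real pilot would tune them."""
    if successful_completions < 3:
        return "full"      # step-by-step instructions with rationale
    if successful_completions < 10:
        return "compact"   # field labels plus inline validation only
    return "minimal"       # confirmation gate only, no inline hints
```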
What actually changed?
Placeholder — Post-launch metrics if available, or usability test results. Stakeholder reception. What was shipped vs. what was designed. The delta that mattered.
~40% fewer critical errors
Measured in controlled usability testing across 12 participants.
Setup time maintained
Average setup time increased by only 8 seconds — within acceptable bounds.
Nurse confidence ↑
Nurses described feeling "more in control" and "less anxious about missing something."
Phase 2 in scoping
Remote monitoring + shift handoff features are in the next planning cycle.
What would I do differently?
"The interface wasn't the problem. The workflow was. We almost spent 8 weeks solving the wrong thing."
— Post-project debrief note
I assumed senior nurses and junior staff had materially different error patterns. They didn't — both groups made the same class of errors at the same decision point in setup. The difference was that seniors recovered faster. I would have designed a single, smarter verification gate earlier if I'd segmented by behavior rather than by experience level from the start.
Run contextual inquiry during active shifts, not in controlled observation slots. Our recruited sessions were lower-pressure than reality — we surfaced the time-pressure dimension of the problem only in the final two site visits, when we happened to arrive mid-shift. That insight shaped our most important design decision. Earlier exposure would have saved 2 weeks of misframed ideation.
Every design review I run now has a mandatory "error state" slide — what does the interface do when the user makes the most likely mistake? Not an edge case. The most likely mistake. This project made that a non-negotiable in my process.