HealthTech · ICU Workflow · Data Visualization · Critical Care

Digital Medical Charting for CTVS ICU

Streamlining life-critical documentation for cardiothoracic surgery recovery — reducing charting time while increasing data accuracy.

Doc Time
30%
reduction in daily charting time
Timeline
12 wks
discovery to handoff
Observation
100+
hours of ICU shadowing
My Role
Lead UX
research, IA & UI design
TL;DR — Executive Summary
  • ICU nurses were spending 4+ hours per shift on fragmented, manual charting across multiple systems.
  • Identified "cognitive fragmentation" as the primary driver of charting burnout and clinical errors.
  • Designed a unified charting cockpit that surfaces real-time vitals and enables single-click validation.
  • Post-pilot results showed a 30% reduction in charting time, freeing up 70 mins per shift for patient care.
01 — Context & Framing

How did this problem land on our desk?

Placeholder — Add the background: who brought this project in, what the brief said, how the problem evolved as we dug deeper. What was the org context — is this a startup, a hospital system, an internal tool?

Placeholder — Describe the constraints: compliance requirements, existing tech stack, timeline pressures, stakeholder dynamics.

"We didn't tell them to skip the checklist. They just had no time to follow it."

— Nephrologist, Site 2
02 — Market Research

What does the landscape look like?

Placeholder — Describe the broader healthcare IT and medical device UX market. What products exist in this space? What are incumbents doing well and where are they failing? What does the regulatory landscape require?

Market landscape map placeholder
Healthcare UX market overview — competitive landscape snapshot
03 — User Segmentation

Who are we actually designing for?

Placeholder — Break down the user segments in the dialysis unit. Who uses the interface and under what conditions? Senior nurses vs. junior staff? Morning rounds vs. emergency shifts? Define behavioral segments, not just demographic ones.

Segment A

Senior Dialysis Nurse

Experienced and protocol-aware. Time-pressured. High-stakes, routine tasks. Low tolerance for extra steps.

Segment B

Junior / Agency Staff

Variable familiarity with machine models. Needs more guidance. Higher error probability during handoffs.

So What?

Placeholder — A single interface cannot be optimized for both segments simultaneously. Our primary user for safety-critical flows is the junior staff member; our secondary is the experienced nurse who needs speed.

04 — Success Metrics

How did we define "good" before we started designing?

Placeholder — Describe the KPIs, OKRs, or qualitative signals agreed upon before ideation began. Why were these the right metrics? How were they measured?

Primary KPI

Critical Setup Error Rate

Target: 40% reduction within 3 months of rollout, measured via machine logs.

Primary KPI

Time to Complete Setup

Baseline: 4.2 min avg. Target: maintain ≤ 4.5 min (safety shouldn't cost time).

Qualitative Signal

Nurse Confidence Score

Self-reported confidence in error-catching post-interaction, measured via SUS + interviews.

Guardrail Metric

Alert Fatigue Index

Ensure new warnings don't add noise: target an alert dismissal rate below 15% of triggers.
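This guardrail can be computed directly from interface event logs. A minimal sketch, assuming a hypothetical event schema (the event names and dictionary format here are illustrative, not the actual telemetry):

```python
# Sketch: compute an alert-dismissal rate from interface event logs.
# The event format is a hypothetical placeholder; real telemetry will differ.

def alert_dismissal_rate(events):
    """Fraction of triggered alerts that were dismissed without action."""
    triggered = sum(1 for e in events if e["type"] == "alert_triggered")
    dismissed = sum(1 for e in events if e["type"] == "alert_dismissed")
    if triggered == 0:
        return 0.0
    return dismissed / triggered

# Example log: two alerts fired, one acknowledged, one dismissed.
events = [
    {"type": "alert_triggered", "alert": "flow_rate_low"},
    {"type": "alert_acknowledged", "alert": "flow_rate_low"},
    {"type": "alert_triggered", "alert": "pressure_high"},
    {"type": "alert_dismissed", "alert": "pressure_high"},
]

rate = alert_dismissal_rate(events)
print(f"Dismissal rate: {rate:.0%}")  # guardrail: keep below 15%
```

Tracking this per alert type, not just in aggregate, would show which specific warnings are being tuned out.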

So What?

Placeholder — So what does defining this early actually change about how you designed? This box should surface your insight, not just restate the metric.

05 — Prioritization

What was in scope — and what did we deliberately leave out?

Placeholder — Describe the prioritization moment. What framework was used? What was deprioritized and why? What pressure did you push back on?

Effort / Impact Matrix — Placeholder
Prioritization framework used to scope the problem space
06 — UX Audit

What was broken in the existing experience?

Placeholder — Heuristic evaluation of the current system. Annotated screens with severity ratings (Critical / Major / Minor). Where was the biggest friction? What surprised you?

Annotated UX Audit Screens — Placeholder
Key violations flagged across the existing dialysis machine UI
So What?

Placeholder — The audit revealed that ____ were the highest severity issues. This shaped our early hypotheses about where to focus design energy.

07 — Competitive Benchmarking

What is the rest of the industry doing — and where do they fail the same way?

Placeholder — 3–5 comparators reviewed. What patterns exist across the category? Where is the obvious gap? What did adjacent industries (aviation, surgical tools) do that's worth borrowing?

Competitive Audit Matrix — Placeholder
Side-by-side feature/UX comparison across comparators
08 — Ecosystem Mapping

Who and what are all the actors in this system?

Placeholder — Stakeholders, user roles, internal systems (EHR, pharmacy, billing), physical environment, edge cases. This is the full picture before you design anything.

Ecosystem / Stakeholder Map — Placeholder
Mapping all touchpoints, actors and dependencies
09 — Process Mapping

What does the current state actually look like, end to end?

Placeholder — As-is journey map or process flow. Where are the handoffs? Decision points? Moments of ambiguity? Where does the system assume things it shouldn't?

As-Is Process Flow — Placeholder
Current state nurse workflow across a full setup session
So What?

Placeholder — The process map revealed that ____. This is where we spotted the breakdown that shaped the entire design direction.

10 — Primary Research

What did we hear, see, and uncover in the field?

Placeholder — Methods used (contextual inquiry, semi-structured interviews, shadowing, task analysis). Sample size, sites, how participants were recruited.

Placeholder — Key themes that surfaced. Synthesis method (affinity clustering, thematic analysis, etc.).

"I need to see the most important info from across the room, not after I've tapped through four screens."

— Senior Dialysis Nurse, Site 1

"When alarms go off, the screen should tell me the action, not the problem."

— Dialysis Technician, Site 3
Research Synthesis / Affinity Map — Placeholder
Patterns identified across 14 nurse interviews and 3 site visits
So What?

Placeholder — The research crystallized that the problem wasn't missing features — it was information architecture that worked against the clinical context.

11 — Service Blueprint

How does the designed solution sit inside the full delivery system?

Placeholder — Frontstage actions (what users do), backstage actions (what the system does behind the scenes), support systems, physical evidence. This shows the designed solution, not the current state.

Service Blueprint — Placeholder
To-be service blueprint mapping the redesigned experience
12 — North Star & POV

What is the design bet we're making?

Placeholder — The HMW statement. The POV. The guiding principle that shaped every decision that follows.

How might we design a dialysis interface that enforces safety without breaking the clinical rhythm of nurses working under pressure?

Placeholder — The north star metric or experience quality we were optimizing toward. What does "winning" look like from the user's perspective?

13 — Thematic Analysis

What patterns emerged from the data?

Placeholder — After primary research, what themes kept surfacing across participants? What were the unexpected findings? How did you reconcile contradictions in the data?

"Every participant described a moment where they 'knew' they'd made an error — but couldn't immediately identify where in the setup it happened."

— Research synthesis note, Session 4
Affinity Map — Thematic Clusters Placeholder
Key themes grouped from 6 research sessions across 3 sites
So What?

Placeholder — These themes directly informed our problem reframing: from "feature parity" to "error visibility and forced verification at critical junctures."

14 — Value Proposition

What exactly are we offering — and to whom?

Placeholder — Define the core value prop of the redesigned interface. For each user segment, what does the new design specifically offer that the current one doesn't? Frame it as a before/after for the user's experience, not as a list of features.

For Senior Nurses

Speed without sacrifice

Complete setup flows in the same time or less, with passive error-prevention built into the sequence.

For Junior / Agency Staff

Confident first-time completion

Guided flows with explicit confirmation gates — no silent failures, no skipped protocols.

15 — Stakeholders & Constraints

Who had a seat at the table — and what did they each need?

Placeholder — Map the key internal and external stakeholders. What were their competing priorities? Who had veto power? Who was a quiet blocker? How did you build alignment across clinical, engineering, and procurement?

Clinical Lead

Patient safety above all

Any design that slowed down setup was a non-starter. Error reduction had to come without adding cognitive load.

Engineering

Legacy system constraints

The machine firmware couldn't be modified. All safety logic had to live in the interface layer only.

Procurement

Cost ceiling

No hardware changes. Interface upgrade only. Rollout budget capped at training + software deployment costs.

Head Nurse, Site 2

Informal champion

Not a formal decision-maker, but the most influential voice in the room. Her buy-in unlocked trust from the nursing floor.

How we navigated it

Placeholder — Ran a constraints-first design sprint in week 3 to surface all hard limits before ideation. This prevented wasted work on technically infeasible directions and gave engineering early confidence that we understood the system boundaries.

16 — Design & Decisions

How did we go from insight to interface?

Placeholder — Walk through the key design decisions. Not just "here's the final design" — explain the forks in the road. What was the alternative? Why was this the right call?

Ideation / Sketches — Placeholder
Early concepts explored before committing to direction
Wireframes — Placeholder
Mid-fidelity layouts with annotated rationale
High-Fidelity Prototype — Placeholder
Final design reviewed in 6 usability sessions
Key Decision

Placeholder — We chose [pattern X] over [pattern Y] because ____. The tradeoff was ____, which we accepted because ____.

17 — Iteration Log

What changed between V1 and what shipped — and why?

Placeholder — Show 2–3 meaningful design pivots. For each: what the original design was, what feedback or data caused the change, and what the revised direction looked like. This is where design thinking becomes visible.

V1 → V2 Comparison — Setup Screen Placeholder
Before: single-screen setup list. After: sequenced step-by-step flow with inline confirmation per step
Pivot 1 — What changed

Placeholder — V1 presented all setup parameters on one screen. Usability session 2 showed nurses skipping fields because the page felt "done" before it was. V2 broke setup into a 4-step sequence with explicit completion per step. Error rate in session 3 dropped by half on that screen alone.

V2 → Final — Verification Gate Placeholder
Before: passive summary screen. After: forced read-back with active confirmation required for each critical parameter
Pivot 2 — What changed

Placeholder — V2 used a passive summary before confirmation. Nurses glanced at it but didn't actually read it. Borrowing from aviation checklist design, the final version requires tapping each critical parameter to confirm it; we saw zero missed confirmations in the final usability sessions. It felt heavier in testing, but the safety data won the argument.
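The read-back gate is essentially a small state machine: the flow unlocks only when every critical parameter has been individually confirmed. A minimal sketch, with illustrative parameter names (not the actual setup fields):

```python
# Sketch of a forced read-back gate: each critical parameter must be
# individually confirmed before setup can proceed.
# Parameter names are hypothetical placeholders.

class VerificationGate:
    def __init__(self, critical_params):
        self.pending = set(critical_params)   # not yet confirmed
        self.confirmed = set()                # explicitly read back

    def confirm(self, param):
        """Nurse taps a parameter to confirm its value was read back."""
        if param in self.pending:
            self.pending.remove(param)
            self.confirmed.add(param)

    @property
    def complete(self):
        """True only when no critical parameter is left unconfirmed."""
        return not self.pending

gate = VerificationGate(["blood_flow_rate", "dialysate_temp", "heparin_dose"])
gate.confirm("blood_flow_rate")
assert not gate.complete          # two parameters still unconfirmed
gate.confirm("dialysate_temp")
gate.confirm("heparin_dose")
assert gate.complete              # gate unlocks only after all confirmations
```

The design choice this encodes: there is no single "confirm all" action, so a glance-and-dismiss is structurally impossible.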

18 — Go-to-Market

How did we take this from prototype to rollout?

Placeholder — Describe the rollout strategy. Was this a phased launch, a pilot across one ward, or a full hospital deployment? Who were the internal champions? What training or change management was needed?

Rollout Strategy

Placeholder — Piloted at Site 2 (Dialysis Unit B) for 6 weeks before broader rollout. Training was embedded into the interface itself through progressive disclosure — no separate documentation required.

19 — Outcomes & Impact

What actually changed?

Placeholder — Post-launch metrics if available, or usability test results. Stakeholder reception. What was shipped vs. what was designed. The delta that mattered.

Result

~40% fewer critical errors

Measured in controlled usability testing across 12 participants.

Result

Setup time maintained

Average setup time increased by only 8 seconds — within acceptable bounds.

Qualitative

Nurse confidence ↑

Nurses described feeling "more in control" and "less anxious about missing something."

What's Next

Phase 2 in scoping

Remote monitoring + shift handoff features are in the next planning cycle.

20 — Reflections

What would I do differently?

"The interface wasn't the problem. The workflow was. We almost spent 8 weeks solving the wrong thing."

— Post-project debrief note
Wrong assumption

I assumed senior nurses and junior staff had materially different error patterns. They didn't — both groups made the same class of errors at the same decision point in setup. The difference was that seniors recovered faster. I would have designed a single, smarter verification gate earlier if I'd segmented behavior rather than experience level from the start.

What I'd do differently

Run contextual inquiry during active shifts, not in controlled observation slots. Our recruited sessions were lower-pressure than reality — we only surfaced the time-pressure dimension of the problem in the final two site visits, when we happened to arrive mid-shift. That insight shaped our most important design decision. Earlier exposure would have saved 2 weeks of misframed ideation.

What this changed going forward

Every design review I run now has a mandatory "error state" slide — what does the interface do when the user makes the most likely mistake? Not an edge case. The most likely mistake. This project made that a non-negotiable in my process.
