Overview

Project Summary

NEO Scheduling is a mission-critical enterprise application used to coordinate maintenance tasks and workforce allocation across a fleet of nuclear power plants. While the core platform managed the "happy path" of scheduling, high-stakes adjustments—known as Change Requests—remained a manual, paper-based bottleneck that introduced significant operational risk.

The Problem

Before this project, any schedule change requiring high-level authorization (e.g., moving a task to a different work week) triggered a grueling manual process. Schedulers had to physically fill out paper forms with data already existing in our system, scan them, and email them to off-site managers.

  • Inefficiency:

Manually re-entering task names, durations, and IDs was redundant and prone to error.

  • Lack of Guardrails:

Junior schedulers often didn't realize a change required approval, leading to unauthorized (and potentially hazardous) schedule shifts.

  • Visibility Black Hole:

Requests frequently "fell through the cracks" in crowded email inboxes, and leadership had zero data to track recurring scheduling friction points.

The Solution

We designed an integrated, end-to-end digital workflow that automated the Change Request process.

  • Smart Triggers: Built-in logic that automatically flags when a schedule change requires a formal request.

  • Pre-populated Requests: A seamless UI that pulls existing task data into the request form, reducing manual entry by 95%.

  • Universal Notification System: A scalable framework (now a part of the NEO Design System) to alert off-site approvers in real-time.

  • Analytics Dashboard: A high-level view for fleet managers to track request volume, bottlenecks, and long-term trends via Power BI.

Results

  • 90% Reduction in total time to complete a change request.

  • 95% Faster initiation time for schedulers.

  • 50% Faster review time for managers.

  • Zero missed or "lost" change requests.

  • Scalable Asset: Created a notification framework adopted by the entire NEO application ecosystem.


My Role

Lead UX Designer

Other Team Members

  • Associate UX Designer

  • Product Owner

  • Full Stack Development Team

Who We Helped

Directly:

  • Work Week Managers

  • Scheduling Coordinators

  • Site Scheduling Managers

Indirectly:

  • Work Supervisors

  • Plant Managers

Final Designs

Placing a Change Request

Updating a Change Request

Reviewing a Change Request (First-Level Approver)

Reviewing a Change Request (Second-Level Approver)

Deep Dive: Process & Insights

In this section, we walk through the phases of the design process and go into detail about the methods we used to arrive at the solution that was developed into a live product.

Exploring the Problem Space

Even though the basics of the change request process were known and established, we made sure to fully explore the problem space to ensure the quality of our digital solution.

The Fundamentals of Nuclear Scheduling

To understand the design of NEO Scheduling, one must first understand the rigid, safety-critical framework of nuclear maintenance.

Where Scheduling Fits

Scheduling is the final safeguard in a 10-step maintenance process. We focused on the final two stages of this lifecycle: finalizing work weeks and assigning specific workers to shifts. Because these are the last steps before execution, any error here carries directly onto the plant floor.

Hierarchy: Work Orders Vs. Tasks

Within NEO Scheduling, work is organized in a simple two-level hierarchy:

  • Work Order:

A high-level objective or "mini-project" (e.g., repairing a specific pump).

  • Task:

The smallest unit of work (e.g., erecting scaffolding). This is the level where attributes, durations, and personnel are assigned.
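To make this hierarchy concrete, here is a minimal type sketch; the field names are illustrative placeholders rather than the actual NEO data model.

```typescript
// Minimal sketch of the scheduling hierarchy described above.
// Field names are illustrative, not the actual NEO data model.
interface Task {
  id: string;
  name: string;            // e.g., "Erect scaffolding"
  durationHours: number;   // attributes and durations live at the task level
  assignedCrew?: string;   // personnel are assigned per task
}

interface WorkOrder {
  id: string;
  objective: string;       // e.g., "Repair a specific pump"
  tasks: Task[];           // a work order bundles its tasks
}
```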

The Timeline: T-Weeks & The Schedule Freeze

Nuclear scheduling is managed through T-Weeks—a relative countdown to the Week of Execution (T-0).

  • T-0 to T-3 (The Schedule Freeze):

A critical buffer window where no changes should be made to ensure plant safety.

  • T-4 to T-9 (Core Scheduling):

The primary window for Work Week Coordinators to assign shifts and personnel.

  • T-10 to T-16 (Resource Loading):

High-level planning where tasks are bundled and assigned to specific work weeks based on resource availability.
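As a rough illustration of how these windows partition the timeline, here is a minimal sketch. The thresholds mirror the ranges above; the function name and the catch-all label for weeks beyond T-16 are assumptions.

```typescript
// Sketch of the T-week windows described in this case study.
// Thresholds mirror the ranges above; names are illustrative only.
type PlanningWindow =
  | "Schedule Freeze"    // T-0 to T-3
  | "Core Scheduling"    // T-4 to T-9
  | "Resource Loading"   // T-10 to T-16
  | "Long-Range";        // beyond T-16 (assumed; not covered in this case study)

function planningWindow(tWeek: number): PlanningWindow {
  // tWeek is the relative countdown: T-0 is the week of execution.
  if (tWeek <= 3) return "Schedule Freeze";   // no changes without a Change Request
  if (tWeek <= 9) return "Core Scheduling";   // assigning shifts and personnel
  if (tWeek <= 16) return "Resource Loading"; // bundling tasks into work weeks
  return "Long-Range";
}
```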

The Players: User Personas

Three personas interact with the Change Request workflow:

  • Work Week Coordinator (The Requester):

Junior schedulers managing specific disciplines (Electrical, Mechanical) or specific crews.

  • Work Week Manager (Primary Approver):

Oversees the entire work week, focusing on high-level concepts such as resource allocation.

  • Scheduling Management (Second-Level Approver):

High-level fleet managers, other senior nuclear managers, or specialists (Engineering/Weather) who review high-impact changes.

The "Guardrail": What is a Change Request?

A Change Request is a digital safety gate. It prevents unauthorized changes to the schedule that could lead to operational risks. Common triggers include:

  • Scope Freeze Violations:

Moving a task into or out of the T-3 window.

  • Hold Codes:

Attempting to schedule a task while a hold condition, such as a "Material Hold" (missing parts), is active.

  • Specialized Assets:

Moving tasks flagged for Weather Preparedness (e.g., hurricane or blizzard prep), which require expert engineering sign-off.
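These triggers are what the "Smart Triggers" feature automates. A simplified sketch of that guardrail check, using hypothetical field names rather than the production rules engine, might look like this:

```typescript
// Illustrative sketch of the Change Request "guardrail": given a proposed
// schedule change, list the conditions that force a formal request.
// Field names are hypothetical, not the production rules engine.
interface ProposedChange {
  taskId: string;
  fromTWeek: number;            // task's current T-week
  toTWeek: number;              // requested T-week
  activeHoldCodes: string[];    // e.g., ["MATERIAL_HOLD"]
  weatherPreparedness: boolean; // flagged for hurricane or blizzard prep
}

function changeRequestTriggers(change: ProposedChange): string[] {
  const triggers: string[] = [];
  const inFreeze = (t: number) => t <= 3; // T-0 to T-3 is the schedule freeze

  // Scope freeze violation: moving a task into or out of the frozen window.
  if (inFreeze(change.fromTWeek) || inFreeze(change.toTWeek)) {
    triggers.push("Scope Freeze Violation");
  }

  // Hold codes: scheduling a task while a hold condition is active.
  if (change.activeHoldCodes.length > 0) {
    triggers.push(`Hold Code: ${change.activeHoldCodes.join(", ")}`);
  }

  // Specialized assets: weather-prep tasks need expert engineering sign-off.
  if (change.weatherPreparedness) {
    triggers.push("Weather Preparedness (requires second-level approval)");
  }

  // An empty list means no Change Request is needed and the edit saves instantly.
  return triggers;
}
```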

User Research & Discovery

To bridge the gap between a decades-old paper process and a modern digital experience, we employed a two-pronged research strategy focused on empathy and process optimization.

Persona-Specific Workshops & Interviews

We conducted segmented workshops and deep-dive interviews with Work Week Coordinators, Managers, and Fleet Leadership.

  • Co-Design Workshops:

We used whiteboarding sessions to let users visualize their own "ideal" digital touchpoints. This helped us understand where the Change Request felt like a hurdle versus a necessary safeguard.

  • Mental Model Interviews:

Instead of forcing a new workflow, we identified the "prevailing strategies" users already used to navigate the manual system. Our goal was to digitize their existing intuition to ensure high adoption and minimal retraining.

The "Paper Trail" Audit

To truly understand the friction, I role-played the current process by manually filling out the physical paper forms. This "day-in-the-life" exercise highlighted the extreme redundancy: users were hand-copying data that already existed in our digital ecosystem.

Strategic Process Improvement

I led a review of the physical form with high-level Scheduling Management to "trim the fat." By questioning every field, we identified and removed non-critical requirements, such as the Primary vs. Support Work Group approvals.

Why we cut it:

  • Ambiguity:

There was no official system-of-record to distinguish "primary" from "support," causing constant confusion.

  • Timing:

These sign-offs often required specific crew assignments that hadn't happened yet in the T-week timeline.

  • Redundancy:

Management confirmed these checks were already happening in weekly sync meetings, making the form field a bureaucratic bottleneck rather than a safety necessity.

Research Synthesis: Balancing Efficiency with Oversight

The research revealed a significant gap between how different personas viewed the change request process. While everyone agreed the paper system was broken, their "ideal" digital replacement varied based on their specific responsibilities.

The Requester's Mindset: "Make it Invisible"

For Work Week Coordinators, the Change Request is a hurdle to jump so they can get back to their primary job.

  • Top Priority:

Speed and minimal friction.

  • Key Pain Point:

The "Data Tax"—having to manually re-type system data into paper forms.

  • Behavior:

A "fire-and-forget" approach. They rarely track requests proactively and assume the system or an approver will notify them of any issues.

  • Design Requirement:

Maximum automation and pre-populated fields.

The Approver's Mindset: "Make it Accurate"

Work Week Managers feel the weight of responsibility for the schedule's integrity.

  • Top Priority:

Schedule quality and resource impact.

  • Key Pain Point:

Information fragmentation. They currently "manage" requests using a mix of sticky notes, Excel, and overflowing email inboxes, making it impossible to gauge total workload.

  • Behavior:

Before approving, they must see the impact on resource allocation (e.g., "Will moving this task overload our electricians?").

  • Design Requirement:

A centralized management dashboard with side-by-side views of request details and task history.

The Leadership Mindset: "Make it Measurable"

Scheduling Management looks beyond individual tasks to the health of the entire fleet.

  • Top Priority:

Identifying recurring bottlenecks and tracking KPIs.

  • Key Interest:

High-level trends—why are requests happening? Are certain sites struggling more than others?

  • Design Requirement:

A data-rich dashboard (Power BI integration) to visualize volume and root causes over months and years.

These insights converged into four core design requirements:

  • Automation: Auto-flag changes that trigger a request and pre-fill all known task data.

  • Centralization: Replace email/paper with a dedicated "Pending Approvals" queue for managers.

  • Contextual Review: Provide a side-by-side UI so managers can see the request and the resource impact simultaneously.

  • Silent Notifications: A non-intrusive "Inbox" system within the app to alert users without breaking their workflow.

Defining Solutions

User Flows

With the research synthesized, I developed four primary user flows to bridge the gap between site-level schedulers and fleet-level management. By focusing on an action-oriented, screen-agnostic approach, we ensured the logic was sound before any UI was designed.

The Initiation (Requester)

This flow handles the "moment of discovery." When a scheduler attempts a change that violates a system rule (like a scope freeze), the app automatically triggers the Change Request modal. The key logic is that the system determines whether a request is even needed; if not, the change is saved instantly, maintaining high efficiency for "safe" edits.

The Review (First-Level Approver)

This is the most complex flow, containing multiple logic gates. The manager reviews the request reason, resource impact, and scheduling priority, then faces a few key decisions: they can Approve (automatically updating the schedule) or Reject (notifying the requester with a reason). Escalations, where the change request requires a higher level of authority, are flagged automatically by the system but still require a manager's review before being passed along to a higher-level approver.

Second-Level Review (Fleet Management)

For high-impact changes (e.g., weather preparedness), this flow provides a streamlined path for executive-level users. It focuses on final sign-off and ensures that once approved, the change is propagated through the entire NEO ecosystem immediately.
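Taken together, the flows imply a small approval state machine. The sketch below is illustrative only; the states and the escalation flag are assumptions drawn from the flows described above, not the actual NEO workflow engine.

```typescript
// Illustrative approval state machine for a Change Request, based on the
// flows above. States and the escalation flag are assumptions, not the
// actual NEO workflow engine.
type RequestState =
  | "Pending First-Level Review"
  | "Pending Second-Level Review"
  | "Approved"
  | "Rejected";

type ReviewAction = "approve" | "reject";

function review(
  state: RequestState,
  action: ReviewAction,
  requiresSecondLevel: boolean // e.g., weather-preparedness changes
): RequestState {
  switch (state) {
    case "Pending First-Level Review":
      if (action === "reject") return "Rejected"; // requester is notified with a reason
      // Escalation is automatic, but the first-level manager still reviews
      // before the request moves up to fleet management.
      return requiresSecondLevel ? "Pending Second-Level Review" : "Approved";
    case "Pending Second-Level Review":
      // Final sign-off propagates the change through the NEO ecosystem.
      return action === "approve" ? "Approved" : "Rejected";
    default:
      throw new Error(`No review is possible in state: ${state}`);
  }
}
```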

Sketching and Wireframing

The transition from user flows to interface design began with a period of rapid, divergent sketching. Because the Work Week Managers had such diverse mental models for how they wanted to review requests, I used low-stakes paper sketching to explore multiple layout concepts before committing to digital pixels.

The Convergent Process

As the fidelity of the designs increased, the number of features decreased. We used the previously established User Flows as a strict "logic filter":

  • Sketches:

High volume of ideas, focusing on different entry points (dashboards vs. flyouts).

  • Wireframes:

Mapping the successful paths from our user flows into structured layouts.

  • High-Fidelity Prototype:

One unified, "NEO-compliant" interface that satisfied all technical and user requirements.

Sketches

High-Fidelity Prototype

"Value vs. Effort" Filter

To ensure we were building the most impactful tool, we evaluated every experimental feature against a product management framework: the Value/Effort Ratio.

Value/Effort Ratio = Anticipated User Benefit ÷ Development & Design Complexity

Two notable features were cut during this phase:

  • The Draft Calendar:

Originally requested by managers to allow coordinators to build "tentative" weeks. However, user testing with the Coordinators (the actual users) revealed they found it unnecessary and cumbersome. We cut it to save development time.

  • Bulk Approvals:

This sounded like an "efficiency" win, but it failed the "Quality" requirement. Managers couldn't see enough task detail in a bulk view to make a safe decision. Since they would end up checking tasks individually anyway, the "efficiency" gain was a mirage. We prioritized schedule safety over the illusion of speed.

Iterative Design: The Frequent Feedback Loop

Design at this scale requires constant calibration. To ensure NEO Scheduling met the rigorous demands of the plant floor, I integrated a continuous feedback loop that ran in parallel with our development sprints. This wasn't just about "checking in"—it was about active, evidence-based refinement.

Sprint-Based Alignment

We established a standing bi-weekly review session synchronized with our technical sprints. This allowed us to:

  • Demo Early & Often:

Move from "show-and-tell" to "hands-on" interaction as quickly as possible.

  • Persona-Targeted Reviews:

We didn't invite everyone to every meeting. We strategically curated our guest list based on the feature being reviewed—ensuring Work Week Managers gave feedback on oversight tools while Coordinators focused on the request workflow.

  • Bridge the Gap:

When a feature impacted multiple roles, we held joint sessions to observe the "hand-off" between personas, ensuring the transition from a request to an approval was seamless.

Mini-Usability Tests

Beyond standard reviews, we frequently conducted Mini-Usability Tests. We would have a user share their screen and attempt to navigate a wireframe or prototype in real-time.

  • Observing vs. Asking:

These sessions allowed us to catch friction points that users might not think to mention in a conversation.

  • Evidence-Based Design:

This shifted the conversation from "I like this color" to "I can’t find the resource impact data," allowing us to lead with design instinct backed by hard user evidence.

Outcome: De-Risking the Build

By maintaining this loop, we caught logic gaps (like the "Bulk Approvals" issue mentioned earlier) during the wireframing stage rather than the development stage. This saved hundreds of engineering hours and ensured that the final high-fidelity prototype was already "pre-validated" by our primary users.

The Final Designs

We conducted a final round of formal usability testing to catch any remaining updates that would streamline the user experience. The testing went smoothly, and we identified a few refinements to make before the final handoff to engineering.

Usability Testing

To ensure the design was as intuitive for a new user as it was for our frequent collaborators, we tested the prototype with a mix of "fresh-eye" users and those from our recurring feedback loop. While the veteran users were faster, both groups achieved a 100% task completion rate with zero critical errors.

The Testing Protocol

We designed persona-specific scenarios based on the user flows:

  • Coordinators:

Initiating a request via a schedule change, editing a pending request, and identifying CR status within the task flyout.

  • Managers:

Reviewing and responding to requests via the dashboard and notification system.

  • Fleet Leadership:

Performing second-level reviews and analyzing high-level metrics.

Key Findings and Pivots

The "Context Over Task" Realization

Our biggest assumption was that managers would want a direct link to the specific task being changed. Testing proved us wrong. Not a single manager used the task link. Instead, they demanded a link to the entire Work Week. They needed to see how the change rippled through the rest of the schedule—something the individual task view couldn't provide.

Pivot: We replaced the task-specific link with a "Work Week Context" link, providing the macro-view managers needed for high-quality decisions.

Closing the Communication Loop

Initially, we only required notes for rejections. However, managers requested the ability to leave optional notes for approvals to provide "just-for-info" context to the requester (e.g., "Approved, but please monitor the electrical crew's fatigue levels").

Pivot: We updated the approval modal to include an optional text field, improving the professional dialogue between roles.

Increasing "Guardrail" Transparency

Coordinators loved the automation but wanted more explicit signaling when they were entering a "protected" state.

Pivot: We sharpened the UI at the point of entry to more clearly define why a request was triggered and exactly which approval steps would follow.

Harmonizing Jargon

Second-level approvers noted that while the data was correct, the phrasing of certain metrics didn't match the established fleet vocabulary.

Pivot: We performed a copy audit, replacing generic terms with specialized nuclear maintenance jargon to increase trust and system authority.

High-Fidelity Design

Our final, high-fidelity designs were handed off to engineering to develop into new features in the application.

Individual Change Request

The Change Request provides a guardrail against high-risk updates to the nuclear maintenance schedule.

Change Request Detail Screen

The reviewer has access to an extensive details screen that they can use to approve change requests.

Change Requests Dashboard

The Change Requests Dashboard shows in-depth change request metrics and also serves as a management system for pending change requests.

Roles & Permissions

Scheduling Management has control over different requester and approver roles.

Results & Lessons Learned

Results & Business Impact

By transforming a manual, paper-based process into a streamlined digital workflow, we didn't just save time—we increased the safety and reliability of the nuclear maintenance schedule.

  1. The NEO Ecosystem Contribution

The impact of this project extended beyond the Scheduling app. We developed a Universal Notification System that was subsequently adopted as a core component of the NEO Design System, providing a standardized communication framework for all future applications in the ecosystem.

  2. Data-Driven Leadership

Digitization unlocked the ability to funnel real-time data into a custom Power BI Dashboard. For the first time, fleet leadership could track the volume and root causes of change requests over months and years, allowing for long-term process optimization and resource planning.

  3. Key Performance Indicators (KPIs)

The shift from paper to digital resulted in drastic improvements across every measurable metric:

  • Total Cycle Time:

The average time to complete a request dropped to 1/10th of the previous duration.

  • Initiation Speed:

Schedulers now initiate requests in under 5% of the time it previously took to fill out paper forms.

  • Review Efficiency:

Manager review time was cut by 50%.

  • Reliability:

Zero change requests were "lost" or missed after implementation.

  • Outlier Reduction:

Requests taking longer than a week were reduced by 90%.

  4. User Sentiment & Satisfaction

Beyond the hard numbers, we saw a significant lift in how users felt about the platform. Quarterly tracking showed a noticeable boost in both System Usability Scale (SUS) and Net Promoter Scores (NPS) across all three primary personas.

Reflections & Lessons Learned

Designing for a high-consequence industry like nuclear energy taught me that great UX is as much about resilience and logic as it is about aesthetics.

The Power of Design Autonomy

Early in the project, the Product Owner went on extended medical leave. Without a direct line to business requirements, I had to act independently. I pivoted to intensive primary research—extracting the "missing" paper forms directly from users and facilitating my own workshops to define the business logic.

The Lesson: Designers must be comfortable acting as pseudo-Product Managers—making high-stakes decisions, documenting their rationale, and standing by those choices when stakeholders return.

User Flows as a "North Star"

In a complex ecosystem, it is easy to "drop the ball" on a specific user interaction. I used the user flows as a strict compliance checklist throughout the project.

The Lesson: Beyond just a design tool, user flows are a roadmap for test plans and a safeguard against feature creep. If a design didn't align with the "North Star" flow, it was cut.

Value Frequency Over Duration

I learned that three 15-minute "micro-syncs" with users are often more valuable than one 60-minute formal review.

The Lesson: Constant, low-friction touchpoints build trust and allow for course correction before hours are wasted on the wrong path. Users are the final authority on success; the earlier they see a wireframe, the safer the final build will be.

Solving for the "1% Scenarios"

Midway through high-fidelity prototyping, we realized we needed a dedicated "Edge Case Audit." We brainstormed and designed for unlikely but critical scenarios such as: What if an unauthorized user attempts to edit a task while a Change Request is pending? What if two users initiate a request for the same task simultaneously?

The Lesson: Developers will eventually ask these questions. Preempting them with an "Edge Case Matrix" during the design handoff ensures a smoother build and prevents last-minute logic gaps.
