Nuclear AI

Table of Contents

Overview

Project Summary

I led the design of GenAI integration across Planning and Scheduling applications, bridging the gap between technical consultants and business stakeholders to automate data synthesis from fragmented legacy sources. By advocating for a pragmatic MVP focused on generating repair material lists and instructional procedures, I transitioned planners from manual creators to efficient editors. This shift reduced planning time by 50%—an efficiency gain equivalent to adding 3+ FTEs—transforming a complex data-management struggle into the streamlined, AI-driven workflow featured in this Planning application case study.

Results

  • Added the equivalent of 3+ FTEs through GenAI automations

  • Decreased planning time for simple, AI-assisted work orders by 50%

My Role

User Experience Designer

Launch Date

October 2024

Who We Helped

Directly:

  • Nuclear Planners

  • Planning Management

Indirectly:

  • Work Supervisors

  • Work Schedulers

Final Designs

High-Fidelity Designs

AI-Powered Documents Review

Checking AI Sources

Giving Feedback on AI Generations

AI Material Request Details

Selecting AI-Suggested Materials

Annotated Wireframes

Documents Review

Material Request

Deep Dive: Process & Insights

Understanding the Problem

Before I started creating solutions, I made sure that I fully understood the problem and its context. I developed a set of AI principles, reviewed UX research, and did a root cause analysis of past designs. Then, I passed this information on to the rest of the team so that we all would understand the problem we set out to solve.

AI Ethics Principles

At the onset of the project, I led an effort to develop a set of AI ethics principles that we would follow to ensure that the products we built would be good for the users.

Worker Augmentation

AI is designed to enhance human capabilities and decision-making, acting as a powerful tool for productivity rather than a replacement for the workforce.

User Control

Users must maintain ultimate authority over the AI's impact on the material used for planning or scheduling nuclear work.

Source Transparency

Every AI generated output must clearly reference and link to the original data sources it drew upon, enabling verification and contextual understanding.

Easy Correction 

Users must be able to correct or discard any AI-generated output using the same familiar tools they use for manual entry, without extra effort or penalty.

AI Awareness

The user interface must always clearly indicate when the AI is active and what impacts the AI has had, ensuring full situational awareness for the user.

Seamless Integration

Initial AI implementations should be embedded directly into existing, familiar workflows to minimize friction and reduce the learning curve for users.

Which Are the Most Impactful Areas for AI Implementation?

Before committing to a specific solution, I collaborated with our AI engineers and Product Owner to establish a framework for selection. We needed to ensure that we weren't just implementing "AI for AI's sake," but were targeting areas where Generative AI had a legitimate competitive advantage over traditional software logic.

Revisiting Past Research

We reviewed research that we had conducted regarding the Documents Review and Material Request to help us understand how we could solve users' problems with AI. Below are some of the research methods that we had used:

Contextual Inquiry

We watched workers plan a variety of tasks, taking notes and recordings that we could refer back to.

Surveying

We sent surveys to key users to gather information in an efficient manner.

User Interviews

We sat down with users, usually in a 1:1 setting, and discussed pain points in the planning process.

Workshops

We met with groups of workers and conducted whiteboard exercises to understand their mental models.

Then we reviewed our product KPIs and analytics. Here are a few examples of the insights our research methods surfaced:

Quantitative

Material Requests and Documents Reviews account for more than 50% of the time it takes to plan a task.

Quantitative

Planners use multiple methods (more than two, on average) to add materials to a request.

Qualitative

"I have to take all these pieces of these documents that I've pasted here and roll them up into my work plan procedure."

Qualitative

"… if the material for the task is now obsolete, I have to search through the notes and vendor manuals to find an adequate replacement part."

Quantitative

Material Requests and Documents Reviews account for more than 70% of the time it takes to plan a task (on average).

Quantitative

Planners usually add controlled documents to the Documents Review before they create the Work Plan; they likely use these documents as references.

Qualitative

"I like the 'add to DWP' button because I add a lot of documents that the work executor doesn't really need to see."

Qualitative

"Even if the material on a history work order looks the same, you still have to check that it has the right quality level."

Reviewing Past Designs

Before we had the option to use generative AI, we aggressively optimized the existing UI to reduce friction. We successfully streamlined the retrieval of information, but we hit a ceiling when it came to the synthesis of that information.

Adding Materials from Past Work Orders

  • The Solution:

A feature that allowed users to add materials based on past repair work for the same equipment.

  • The Limitation:

While we made finding known items faster, the process was still fundamentally manual. If a Planner encountered a novel repair or a "corner case" where history was thin, they still had to engage in a time-consuming "hunt and peck" strategy to figure out what they needed.

Advanced Materials Search

  • The Solution:

An advanced search for materials that matched users' mental models in both search logic and filtering.

  • The Limitation:

While we made finding unknown items faster, the searching process was still fundamentally manual.

Saved Documents Playlists

  • The Solution:

A "Saved Documents" feature allowing Planners to save frequently used and difficult-to-find documents for quick retrieval.

  • The Limitation:

The feature excelled at managing reference material, but it didn't help with the most cognitively demanding task: authoring the Work Plan.

The Core Challenge: The Synthesis Barrier

Problem Statement:

Despite having optimized tools for finding information, Planners are still burdened with the heavy cognitive load of synthesizing fragmented data and the manual labor of composing complex work plans from scratch.

We realized our goal was not to build a better search bar, but to fundamentally shift the user’s role:

  • From:

A manual researcher hunting for data and typing out documents.

  • To:

A subject matter expert reviewing high-quality drafts generated by the system.

Defining Solutions

Mapping the Logic: User Flows

Before drawing a single interface element, I created detailed user flows to define how the AI would interact with the existing system. We stress-tested these flows against our Easy Correction and User Control principles, specifically targeting the "corner cases" where AI typically fails:

  • Inaccuracy:

If the generated Work Plan is vague, how does the user intervene?

  • Incompleteness:

If the AI misses a critical material, document, or work plan section, how does the manual "safety net" kick in?

  • Feedback Loops:

We designed specific pathways for users to flag poor generations, ensuring that human dissatisfaction became data for model fine-tuning.

Documents Review

Material Request

Divergent Ideation: Sketching

We then moved to hand-sketching to generate volume. My goal was to explore every possible layout configuration—from conservative sidebar integrations to "outlandish" AI-dominant interfaces.

Low-Fidelity Prototyping

We took the sketches that survived our feasibility review and built a functional low-fidelity prototype. This allowed stakeholders to actually click through the "Happy Path" and feel the flow of the AI integration without being distracted by visual design details.

Documents Review

Material Request

Early Validation: Testing Lo-Fi Designs

Test Criteria and Results

Our primary goals were to test recognition and awareness, trust and control, and feedback efficiency. The core interaction patterns (Atom icons, inline integration) tested well. However, a critical friction point emerged around users' control over the AI and their confidence in their own ability to correct it.

  • The Planners' Fear:

Planners were terrified of being "stuck" with a bad generation. They worried that fixing a hallucinated Material Request or Documents Review would take longer than writing one from scratch, negatively impacting their performance metrics.

  • Management Pressure:

Upper Management feared "resistance to change." They wanted to force adoption and were skeptical of giving users any way to opt-out.

The Pivot: The "Manual Override" Compromise

  • The Feature:

We added a toggle switch to the "AI Card," allowing users to completely disable the AI for a specific section (e.g., turn off only the Material Request AI while keeping the Documents Review AI).

  • The Negotiation:

This feature faced initial resistance from leadership, including the Nuclear VP, who feared users would simply turn it off and never look back. I negotiated a strategic compromise to satisfy both sides.
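
As a rough illustration of the compromise (names and structure are hypothetical, not the production system), the per-section Manual Override can be modeled as independent flags, so disabling the AI for one section never affects another:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of per-section AI enablement. Section names and
# defaults are illustrative, not the production implementation.
@dataclass
class AiSectionSettings:
    enabled: dict = field(
        default_factory=lambda: {"material_request": True, "documents_review": True}
    )

    def toggle(self, section: str) -> bool:
        """Flip the AI on/off for one section; other sections are untouched."""
        self.enabled[section] = not self.enabled[section]
        return self.enabled[section]

settings = AiSectionSettings()
settings.toggle("material_request")  # planner opts out of Material Request AI
assert settings.enabled["material_request"] is False
assert settings.enabled["documents_review"] is True  # Documents Review AI stays on
```

Keeping the flags independent is what let the negotiation land: leadership got AI-on-by-default, while planners kept a granular escape hatch per section.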

The Final Designs

High-Fidelity Design: Principles in Action

We audited the interface against our six original AI Ethics Principles. Here is how those abstract principles materialized in the final UI:

Principle 1: AI Awareness

  • The UI Pattern:

We implemented a distinct "AI Card" at the top of every AI-augmented section.

  • Key Detail:

Every specific line item generated by the model (materials, documents) was marked with a unique Atom Icon, ensuring users could scan a list and immediately distinguish between human-entered data and AI suggestions.

Principle 2: Seamless Integration

  • The UI Pattern:

Inline Injection.

  • Key Detail:

AI-generated materials appeared directly in the standard "Material Request" and “Documents” sections. This meant users didn't have to learn a new tool; they just worked in their familiar views, now pre-populated by the AI.

Principles 3 & 4: User Control and Easy Correction

  • The UI Pattern:

The Manual Override & Standard Editing.

  • Key Detail:

Users maintained ultimate authority. If the AI was unhelpful, the Manual Override Toggle allowed them to disable it entirely for that section. For smaller corrections, users could edit AI generations using the exact same tools used for manual entry.

Principle 5: Source Transparency

  • The UI Pattern:

A "View Sources" link on the AI Card.

  • Key Detail:

We didn't ask users to trust a "Black Box." Clicking "View Sources" on the AI Card opened a pop-up modal listing every technical document and past work order the AI referenced. This turned the tool into an engine for discovery, helping planners find documents and materials they might have missed otherwise.

Principle 6: Worker Augmentation

  • The UI Pattern:

The "Reviewer" Workflow.

  • Key Detail:

The design explicitly positioned the Planner as the decision-maker. The AI provided the draft, but the Planner provided the stamp of approval.

Validation: The Efficiency Test

We worked with senior planning SMEs to build a highly realistic prototype, then measured time-on-task across three scenarios to make sure the design would perform well even when the AI model didn't.

Condition 1: The "Happy Path" (AI is Perfect)

  • Scenario:

The AI generation is 100% correct. The user needs to verify.

  • Goal:

Material Request completed in < 1 min; Documents Review completed in < 3 min.

  • Result:

PASSED

Condition 2: The "Human in the Loop" (AI Needs Edits)

  • Scenario:

The AI is mostly right but includes specific errors (e.g., one missing material, one wrong header, one missing procedural step).

  • Goal:

Material Request < 2 min; Documents Review < 5 min.

  • Result:

PASSED

Condition 3: The "Fail State" (AI Hallucination)

  • Scenario:

The AI generation is unusable. The user must recognize the failure and use the "Manual Override" toggle to revert to manual planning.

  • Goal:

Material Request < 1.5 min; Documents Review < 4 min.

  • Result:

FAILED (with a positive twist). Users took nearly twice as long as hypothesized to turn off the AI because they wanted to show that they weren't abusing the manual override feature.

Post-Launch Iteration: The "Survivorship Bias" Problem

A few months after launch, the AI engineering team hit an unexpected plateau. While the model was performing well, its rate of improvement had stalled. We discovered the culprit was Survivorship Bias caused by our greatest trust-building feature: the Manual Override. When planners hit a bad generation, they simply switched the AI off, so the model only ever received feedback on outputs that were already good enough to keep, never on its worst failures. We realized that to break through this plateau, we needed to retire the "Manual Override" for Material Requests. This was a calculated risk: the feature that built our initial trust was now the bottleneck for future quality.

The Solution: From "Manual Override" to "Active Selection"

Trigger and Interaction

  • Old Flow (Pre-Fill):

The AI silently created a request. The user opened it and either fixed it or turned it off with the manual override.

  • New Flow (Suggestion):

When entering the section, the user is immediately presented with a "Selection Modal" containing AI suggestions. All items are selected by default. Then, the user simply unchecks the bad items. This action sends precise, negative reinforcement data to the model for those specific parts, while validating the correct ones.

High-Fidelity Design Update

We removed the global "Off" switch. While users can cancel the modal to perform other tasks, creating a Material Request now requires passing through the AI suggestion phase.

  • Forced Feedback:

If a user deselects every item (indicating a total hallucination), a feedback modal is triggered. This replaces the old "Manual Override" feedback loop with a more integrated data collection point.
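
The selection flow above can be sketched in a few lines (function and field names are assumptions for illustration, not the shipped code): each kept item becomes a positive signal, each unchecked item a negative one, and an all-deselected list flags a total hallucination that triggers the feedback modal.

```python
# Hypothetical sketch: translate the planner's checkbox state in the
# Selection Modal into per-item feedback signals for model fine-tuning.
def selection_feedback(suggested: list, kept: set) -> dict:
    signals = [
        {"item": item, "label": "accepted" if item in kept else "rejected"}
        for item in suggested
    ]
    return {
        "signals": signals,
        # Every suggestion unchecked = total hallucination; this is what
        # would trigger the follow-up feedback modal.
        "total_hallucination": len(kept) == 0,
    }

result = selection_feedback(
    ["gasket", "bolt kit", "obsolete valve"], kept={"gasket", "bolt kit"}
)
assert result["total_hallucination"] is False
assert [s["label"] for s in result["signals"]] == ["accepted", "accepted", "rejected"]
```

The design choice worth noting: because every item starts selected, a planner's minimal action (unchecking) is exactly the signal the model most needs, precise negative reinforcement on specific parts.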

Results & Lessons Learned

Final Results

  1. Immediate ROI: The 50% Efficiency Jump

  • 50% Reduction in Planning Time:

Total planning time for AI-assisted tasks was cut in half.

  • The 3+ FTE Gain:

This efficiency gain was equivalent to adding more than 3 Full-Time Employees (FTEs) to the planning staff without increasing headcount.

  • Financial Impact:

At the salary level of expert planners, this efficiency translates to hundreds of thousands of dollars in annual value returned to the business.

  2. Workforce Agility through Standardization

  • Workforce Mobility:

The organization can now operate as a cohesive unit. During emergency outages or work surges, planners from one site can assist other sites seamlessly, as the underlying work plan formats are no longer fragmented.

  3. Role Evolution: From Writer to Reviewer

This project initiated a fundamental transition for the workforce. We successfully shifted expert workers away from low-value manual tasks (typing work plans) and toward high-value cognitive tasks (reviewing and approving engineering logic). This "Reviewer Role" maximizes the impact of their expertise.

  4. The AI Catalyst

  • Confidence to Scale:

The success of this launch proved that AI could be safely implemented in a nuclear environment, securing investment for AI initiatives in previously hesitant departments.

  • The Path to Complexity:

While this MVP focused on "Simple" work orders, the data loop we established is currently fine-tuning the models to handle "Complex" work order types, promising exponentially greater value in future releases.

Lessons Learned

  1. Validate Assumptions Early

We conducted abbreviated testing on grayscale low-fidelity wireframes due to tight deadlines, but this constraint turned into a superpower. It forced us to validate our base-level assumptions (e.g., "Will they trust AI?") rather than getting bogged down in UI details. Testing early prevented costly rework and proved that low-fidelity validation is not a luxury, but a necessity for velocity.

  2. Leverage Organizational Momentum

Standardizing the Work Plan template had been a "third rail" issue for years—too much friction, too little immediate payoff. However, by attaching this "side quest" to the high-priority AI initiative, I utilized the organizational pressure behind AI to bulldoze through the resistance. When you have a massive strategic mandate behind you, use that momentum to solve long-standing debt that the organization usually ignores.

  3. Problem Definition > Tech Trends

In the age of AI, it is easy to fall into the trap of "A Solution Looking for a Problem." Because we spent time in the discovery phase analyzing past feature development, KPIs, historical user research, and user pain points, we didn't just "add AI." We defined the problem as Manual Synthesis, ensuring the AI was a targeted tool rather than a generic tech upgrade.

  4. Plan for "Bridge Features"

The "Manual Override Toggle" was effectively a disposable feature - something we knew we might eventually phase out. However, designing it was critical for Day 1 success. Don't be afraid to build features that don't scale forever if they are required to get you off the ground today. Optimizing for long-term code purity is useless if you fail to secure initial adoption.

  5. Trust is a Long-Term Asset

Implementing AI in a safety-critical environment requires a massive leap of faith from the users. We secured this buy-in not because the UI was pretty, but because I had spent years building "Trust Capital" with these users on previous projects. Digital transformation is 10% code and 90% relationship management.

  6. Radical Simplicity (Do More with Less)

We deliberately avoided making the interface look "futuristic" or "flashy." By injecting the AI results directly into the existing views (Seamless Integration), we minimized the cognitive load of the transition. The most effective UI changes are often the ones users barely notice. Reducing the "fear factor" of a new tool is often more valuable than highlighting its novelty.