Where Do We Start? Lessons from UPS and Delta

Most executives are asking the same question: “Where do we start with AI?”

Pilots rarely fail because of the technology itself. They fail when organizations jump straight to models and tools without preparing their people, processes, and data to use them effectively. The real starting point is a single, measurable operational problem and the organizational alignment needed to act on AI outputs.

UPS and Delta took very different paths, but both illustrate the same lesson: analytics and AI succeed only when execution is built around a clear operational problem. UPS tackled last-mile delivery inefficiencies, where cutting just one mile per driver saved $50 million annually. Delta focused on maintenance-driven delays that could ripple across the network, sparing thousands of passengers from disruption.

The takeaway is simple: successful pilots don't start with “What can it do?” They start with “What operational problem can we measure and fix?”

Start with the Problem, Not the Algorithm

AI pilots often fail because companies skip the foundational work that makes AI operationally useful. Executives need to start with a clear, quantifiable problem tied to operational impact. Defining the problem first ensures that AI efforts translate into measurable business outcomes.

Case examples:

  • UPS: Faced massive inefficiency in last-mile delivery. ORION’s route optimization identified redundant stops and poorly sequenced routes, enabling drivers to reduce mileage. This lowered fuel use, vehicle wear, and labor costs, saving $50M annually for every mile cut per driver per day, and scaling to hundreds of millions as the system matured.

  • Delta: Maintenance-driven delays caused cascading network disruptions. Predictive analytics identified parts likely to fail before scheduled maintenance, allowing preemptive fixes. By proactively managing risks, Delta cut maintenance-driven cancellations from 5,600 to 55 per year, preventing costly delays and protecting revenue and reputation.
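
The UPS figure can be sanity-checked with back-of-envelope arithmetic. The sketch below uses the 55,000-driver fleet size cited later in this piece; the 260 delivery days per year is an assumption, and the resulting cost per mile is implied by the math, not a published UPS number.

```python
# Back-of-envelope check of the "one mile per driver per day = ~$50M/year" claim.
# Fleet size comes from the UPS example in this article; delivery days per year
# is an assumption, and the cost per mile is derived, not a UPS-published figure.

DRIVERS = 55_000             # UPS package-car drivers (cited in this article)
DELIVERY_DAYS = 260          # assumed working days per driver per year
ANNUAL_SAVINGS = 50_000_000  # claimed savings for one mile saved per driver per day

miles_saved_per_year = DRIVERS * DELIVERY_DAYS            # 14,300,000 miles
implied_cost_per_mile = ANNUAL_SAVINGS / miles_saved_per_year

print(f"Miles saved per year: {miles_saved_per_year:,}")
print(f"Implied fully loaded cost per mile: ${implied_cost_per_mile:.2f}")
```

Roughly $3.50 per mile in fully loaded fuel, maintenance, and labor cost makes the headline number plausible, which is exactly the kind of quick validation a leadership team should run before green-lighting a pilot.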

Operational insight: Start with one operational problem that matters to cost, efficiency, or uptime. AI is only valuable when it addresses a problem that leadership already cares about.

Immediate next steps for you:

  • Identify one operational bottleneck in your organization that is measurable and urgent. Tie it to KPIs or financial impact.

  • Sketch the primary data points and decision steps that contribute to this problem. This will be the foundation for AI application.

Data First: Building the Foundation

AI can't perform without clean, structured, and connected operational data. The success of any pilot depends on a strong foundation: the more complete and accurate your data, the faster and more reliably AI will deliver results. Skipping this step creates pilots that never scale beyond prototypes.

Case examples:

  • UPS: Package Flow Technology (PFT) centralized vehicle, package, and routing data. ORION leveraged this integrated dataset to optimize routes at scale. The foundation allowed predictive modeling of driver behavior and dynamic route adjustments.

  • Delta: Early predictive maintenance programs connected engine telemetry, component histories, and operational schedules. Analytics were validated against historical failures, ensuring reliability before broad rollout.

Operational insight: Data readiness is non-negotiable. Pilots built on unstructured, siloed data are unlikely to scale.

Immediate next steps for you:

  • List the top 3 datasets critical to your operational problem. Identify gaps, inconsistencies, or unconnected sources.

  • Plan a 30–60 day data cleanup and integration initiative before any AI pilot deployment.
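
A gap audit like the one above doesn't need heavy tooling to start. Here is a minimal sketch, with hypothetical dataset and field names, of the two checks that matter most: how many records are missing required fields, and whether records from one system can actually be joined to another.

```python
# Minimal data-readiness audit sketch. Dataset and field names below are
# hypothetical placeholders; substitute your own operational sources.

def audit(records, required_fields):
    """Return the share of records missing each required field."""
    total = len(records)
    return {
        field: sum(1 for r in records if not r.get(field)) / total
        for field in required_fields
    }

def join_coverage(left, right, key):
    """Share of left-side records that find a match in the right dataset."""
    right_keys = {r[key] for r in right if r.get(key)}
    matched = sum(1 for r in left if r.get(key) in right_keys)
    return matched / len(left)

# Toy data standing in for two operational systems.
routes = [
    {"vehicle_id": "V1", "miles": 102},
    {"vehicle_id": "V2", "miles": None},  # gap: mileage never recorded
    {"vehicle_id": "V3", "miles": 98},
]
telematics = [{"vehicle_id": "V1"}, {"vehicle_id": "V3"}]

print(audit(routes, ["vehicle_id", "miles"]))       # miles missing in a third of records
print(join_coverage(routes, telematics, "vehicle_id"))  # only two of three vehicles join
```

Even this crude version surfaces the two failure modes that stall pilots: fields that are silently empty, and records that cannot be connected across systems.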

Most organizations skip this step or treat it as a box to check. That is exactly where pilots get stuck. If you want your foundation to actually support AI (or any operational initiative), aligning systems, processes, and teams first is the smartest move. That's exactly what I help leaders do: building foundations that stick so every pilot you launch has a chance to scale.

Execution Models: Build vs. Partner

Once the foundation is ready, organizations have to choose an execution model: build internally or partner externally. This decision shapes your control over data, speed of deployment, and operational scalability. The wrong model can lead to integration problems, slow adoption, or pilots stuck in limbo.

Case examples:

  • UPS: Proprietary Build. Vertically integrated control allowed ORION to be developed and deployed across a 55,000-driver fleet. Complete ownership enabled tight control over data, testing, and rollout.

  • Delta: Ecosystem Partnership. Operating in a federated environment with third-party systems, Delta co-founded the Digital Alliance and leveraged Airbus’s Skywise platform to scale predictive maintenance insights.

Operational insight: Choose a model aligned with your operational reality. Proprietary builds work when you control end-to-end processes; partnerships are necessary in federated or multi-vendor environments.

Immediate next steps for you:

  • Map process ownership and dependencies for your pilot. Determine if internal control or strategic partnerships will best support scaling.

  • Identify potential integration risks and mitigation strategies before committing.

Metrics That Matter: Translating AI into P&L

AI is meaningless without translating outputs into metrics leadership already tracks. Focusing on operationally relevant KPIs ensures pilots drive measurable business outcomes rather than technical novelty.

Case examples:

  • UPS: ORION’s route optimization reduced redundancy in driver routes, allowing the company to cut 100 million miles driven annually. This directly lowered fuel costs, vehicle wear, and labor inefficiencies, translating to $300–400M in annual operating savings. The key wasn’t just algorithm accuracy—it was connecting optimization outputs to real-world operational levers.

  • Delta: Predictive maintenance via Skywise cut cancellations from 5,600 to 55 by anticipating part failures before they occurred, which reduced network-wide delays. This also generated 8-figure cost savings, not from abstract model performance, but by preventing cascading operational disruptions and unscheduled maintenance costs.

Operational insight: Define success in terms your C-suite understands. Tie AI outcomes directly to cost savings, efficiency improvements, or uptime metrics—not algorithm accuracy.

Immediate next-step exercise:

  • Select 2–3 metrics your pilot can move within six months and link them to business value for executive visibility.
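
The translation step can be made concrete with the Delta numbers above. In the sketch below, the cancellation counts come from this article, but the $10,000 cost per cancellation is a hypothetical placeholder, not a Delta figure; substitute your own fully loaded cost per unit.

```python
# Toy sketch: convert a pilot's metric movement into an executive-facing
# dollar figure. Cancellation counts are from the Delta example in this
# article; the per-cancellation cost is an assumed placeholder.

def metric_to_dollars(baseline, actual, unit_cost):
    """Dollar value of moving an operational metric from baseline to actual."""
    return (baseline - actual) * unit_cost

avoided_value = metric_to_dollars(
    baseline=5_600,    # cancellations before predictive maintenance
    actual=55,         # cancellations after
    unit_cost=10_000,  # assumed cost per cancellation (hypothetical)
)
print(f"${avoided_value:,}")  # an 8-figure number under these assumptions
```

The point is not the precision of the unit cost; it is that the pilot's success metric arrives in the boardroom already denominated in dollars, not model accuracy.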

Design for Humans: Adoption Is Everything

AI only delivers value when people use it effectively. Scaling AI is as much a human challenge as a technical one. Misaligned incentives, unclear responsibilities, or poorly communicated benefits can prevent adoption even when the model works perfectly. Leaders need to embed AI into daily workflows and ensure teams trust the output.

Case examples:

  • UPS: Drivers initially resisted ORION because it replaced long-standing route planning habits. The solution was to redesign KPIs so that managers measured percentage of time drivers followed optimized routes, rather than traditional performance metrics. This alignment between AI recommendations and human incentives drove widespread adoption and consistent operational results.

  • Delta: Engineers participated directly in the creation of predictive maintenance tools. By integrating their expertise into the AI models, the company built trust and avoided clashes with legacy systems. This co-creation ensured adoption across engineering and operations teams and improved the reliability of AI outputs in day-to-day decisions.

Operational insight: AI deployment is a human adoption problem first. Align performance metrics, involve key users in design, and anticipate resistance to ensure the technology is actually used.

Immediate next steps for you:

  • Map the roles and teams affected by your AI pilot and identify where adoption barriers may occur.

  • Adjust KPIs or incentives so that human behavior aligns with the pilot’s goals.

  • Engage key users in validation or co-creation workshops to build trust and operational alignment.

Governance: Integrate, Don’t Isolate

AI governance must be part of existing operational and regulatory frameworks. Creating a standalone AI ethics or oversight function without links to operations rarely improves adoption or compliance. Governance should protect the business while supporting smooth integration into daily operations.

Case examples:

  • Delta: Predictive maintenance programs were fully integrated into the FAA-mandated Safety Management System. This ensured compliance and aligned AI outputs with operational priorities, allowing maintenance teams to act on insights without adding layers of separate oversight.

  • UPS: Governance focused on adoption and employee data privacy, embedding policies into standard change management processes. Drivers and managers followed clear protocols for AI-generated recommendations, minimizing conflict and ensuring consistent application of the technology.

Operational insight: Effective AI governance doesn't exist in isolation. Embed oversight into existing operational frameworks and ensure responsibilities, reporting, and accountability align with business outcomes.

Immediate next steps for you:

  • Identify where AI oversight can be integrated into existing operational, compliance, or safety frameworks.

  • Assign responsibilities and reporting lines that tie directly to operational results and existing decision-making processes.

Patience and Iteration: Overnight Success Takes a Decade

AI pilots are starting points, not finished solutions. Each iteration strengthens data quality, human adoption, and operational performance. Executives should treat the pilot as a baseline for continuous improvement and plan for incremental progress over time.

Case examples:

  • UPS: From the 2003 launch of Package Flow Technology to the full 2017 deployment of ORION, the company scaled the system gradually, validating outputs and building adoption at every stage. Each phase improved route efficiency, reduced mileage, and increased savings, demonstrating the value of patient iteration.

  • Delta: Predictive maintenance programs evolved over sixteen years, from early telemetry analysis to full integration with the Digital Alliance. Incremental improvements increased aircraft availability, reduced delays, and optimized maintenance schedules, showing that sustained investment and repeated refinement were necessary for measurable results.

Operational insight: Expect incremental progress. A pilot sets the foundation, but operational and human systems must be refined continuously to achieve full impact.

Immediate next steps for you:

  • Define quarterly milestones for adoption, data quality, and operational performance.

  • Treat each milestone as an opportunity to refine AI outputs, workflows, and KPIs, ensuring continuous improvement rather than one-off deployment.

Actionable Takeaways: A Playbook for Operators

  1. Start with one operational problem – Quantify a single pain point with real financial or operational impact.

  2. Invest in data first – Clean, structure, and connect your operational data before deploying AI.

  3. Pick your execution model – Build if you control the process, partner if you rely on third parties.

  4. Translate AI into P&L metrics – Track outcomes that matter to leadership, not algorithmic accuracy.

  5. Design for humans – Align incentives and KPIs; involve users in tool development.

  6. Integrate governance – Fold AI oversight into existing operational or regulatory frameworks.

  7. Plan for iteration – Treat the pilot as a starting point and evolve through measured quarterly steps.

Before you invest in models, dashboards, or tools, make sure your operations, data, and teams are aligned to actually use the insights. At Life Aligned Systems, I help organizations get that foundation in shape. Whether you are launching a new venture, re-architecting existing operations, or aligning leadership and KPIs, I build systems that stick. Start here, and every pilot you run has a real chance to scale and deliver measurable impact.
