Mission-critical EU-wide pricing system—evolved from manual spreadsheets to the Standard Operating Procedure adopted across the entire EU Supply Chain by 2023 Q1. Now runs fully autonomously in the cloud, eliminating ~40 labor hours weekly and saving over £1M annually through real-time fill rate validation
EU network expansion created an unsustainable workload—operators manually reviewed reports every 30 minutes, consuming ~40 labor hours weekly across the region, while price decisions made on stale data wasted over £1M annually
Four-phase transformation achieving full EU adoption by 2023 Q1—becoming the core system all EU Supply Chain operations depend on daily
Operators manually reviewed reports every ~30 minutes, filtering thousands of rows to identify stations needing price increases. No live data validation—decisions made on stale information, causing massive overspend.
Built Python/Tkinter interface converting raw reports into organized UI with list boxes by delivery cycle. Highlighted only eligible stations for price increases. Reduced manual review time from ~40 hours/week to ~20 hours/week—proving the concept and gaining early adoption from EU operations teams.
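The filtering logic behind that first interface can be sketched roughly as follows — a minimal, illustrative version of grouping report rows by delivery cycle and keeping only surge-eligible stations. The field names and the 85% threshold are assumptions for illustration, not the production schema or SOP value.

```python
from collections import defaultdict

# Hypothetical row shape: (station_id, delivery_cycle, fill_rate_pct).
# Names and threshold are illustrative, not the production values.
SURGE_ELIGIBLE_BELOW = 85.0

def group_eligible_stations(rows, threshold=SURGE_ELIGIBLE_BELOW):
    """Group stations by delivery cycle, keeping only those whose
    reported fill rate is low enough to warrant a price increase."""
    by_cycle = defaultdict(list)
    for station_id, cycle, fill_rate in rows:
        if fill_rate < threshold:
            by_cycle[cycle].append(station_id)
    return dict(by_cycle)

rows = [
    ("ST-01", "morning", 62.0),
    ("ST-02", "morning", 91.5),   # already filling — not eligible
    ("ST-03", "evening", 70.0),
]
print(group_eligible_stations(rows))
# {'morning': ['ST-01'], 'evening': ['ST-03']}
```

Each cycle's list can then be bound to its own Tkinter list box, so operators see only actionable stations instead of thousands of raw rows.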
CRITICAL MILESTONE: Integrated Selenium automation for live fill-rate validation—this breakthrough cut unnecessary surges by 50%+ and saved over £1M annually. System formally adopted as Standard Operating Procedure across the entire EU Supply Chain. All EU stations now rely on this tool for daily price optimization. Added Slack webhooks for stakeholder notifications and real-time SOP compliance checks.
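The stakeholder notifications use Slack's standard incoming-webhook mechanism: a JSON payload POSTed to a webhook URL. A minimal stdlib-only sketch is below — the message format, station IDs, and rates are illustrative, and the webhook URL is a placeholder.

```python
import json
import urllib.request

def build_surge_notification(station_id, old_rate, new_rate, fill_rate):
    """Build a Slack incoming-webhook payload announcing a surge.
    Wording and fields are illustrative, not the production format."""
    return {
        "text": (
            f"Surge applied at {station_id}: "
            f"£{old_rate:.2f} → £{new_rate:.2f} "
            f"(live fill rate {fill_rate:.0f}%)"
        )
    }

def post_to_slack(webhook_url, payload):
    """POST the payload to a Slack incoming webhook (returns HTTP status)."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # Slack responds 200 on success

payload = build_surge_notification("ST-01", 4.50, 5.25, 58)
```

The same payload builder can serve both success notifications and failure contingencies by varying the message text.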
COMMUNICATOR TOOL ADDED: Once automation went live, SLA misses emerged from other departments manually requesting price increases. Built a dedicated communicator tool allowing teams to send surge requests directly to a processing queue—requests were automatically validated and queued without user involvement, eliminating manual intervention and reducing SLA misses by 10% on average.
FULL CLOUD MIGRATION: Fully autonomous cloud-based system—the mission-critical tool that the entire EU Supply Chain now depends on 24/7. Configurable SOP parameters (buffer thresholds, price caps), automated Selenium-based price adjustments, fail-safe contingency alerts, S3 cloud logging for business intelligence, and a back-end API enabling stations to trigger requests directly.
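The configurable SOP parameters could be modeled as a small typed config object like the one below — parameter names, defaults, and the cap formula are assumptions for illustration; the real values are owned by the Supply Chain team.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SOPConfig:
    """Configurable SOP parameters (illustrative names and defaults)."""
    fill_rate_buffer_pct: float = 85.0   # blocks at/above this are never surged
    max_price_multiplier: float = 1.5    # hard cap on any price increase
    check_interval_minutes: int = 30     # how often the scheduled cloud job runs

    def capped_price(self, base_rate: float, requested_rate: float) -> float:
        """Clamp a requested surge price to the configured cap."""
        return min(requested_rate, base_rate * self.max_price_multiplier)

cfg = SOPConfig()
print(cfg.capped_price(4.00, 7.00))  # 6.0 — clamped at 1.5x base
```

Keeping these thresholds in one frozen config object means the cloud job, the API, and the alerting path all enforce the same SOP limits without code changes when the business tunes them.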
COMMUNICATOR MERGED TO CLOUD: The standalone communicator tool was fully integrated into cloud infrastructure—requests now flow directly to cloud processing queue via API, maintaining the 10% SLA improvement while eliminating the separate desktop application.
Zero human input required—runs on schedule with a complete audit trail while eliminating all ~40 hours/week of manual labor.
Historical GUI from manual/semi-automated phases—now fully cloud-based with no interface required
Note: First version interface, no longer in active use. For demonstration purposes only.
Note: First version interface, no longer in active use. For demonstration purposes only.
Note: First version interface, no longer in active use. For demonstration purposes only.
Real-time Slack webhooks for successful surges and failure contingencies
Note: First version interface, no longer in active use. For demonstration purposes only.
Note: First version interface, no longer in active use. For demonstration purposes only.
Dedicated tool enabling departments to submit surge requests directly to processing queue—reducing SLA misses by 10%
Note: First version interface, no longer in active use. For demonstration purposes only.
Once the Fill Rate Optimizer automation started in 2023 Q1, a new bottleneck emerged: other departments (capacity planning, station operations) still needed to manually request price increases for specific scenarios. These manual requests created SLA misses and delayed critical pricing decisions.
Built a standalone communicator tool allowing departments to send surge requests directly to a processing queue. Requests were automatically validated, queued, and processed without manual intervention from the Fill Rate Optimizer operator—cutting SLA response time and reducing SLA misses by 10% on average.
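The validate-then-enqueue flow can be sketched with a simple in-process queue — the required fields, the 85% fill-rate cutoff, and the request shape are all illustrative assumptions, not the production contract.

```python
import queue

# Illustrative request contract — not the production field names.
REQUIRED_FIELDS = {"station_id", "delivery_cycle", "requested_by"}

def validate_request(request: dict) -> bool:
    """Reject requests missing required fields, or targeting a block
    that is already filling (hypothetical 85% threshold)."""
    if not REQUIRED_FIELDS <= request.keys():
        return False
    return request.get("fill_rate_pct", 0.0) < 85.0

surge_queue = queue.Queue()

def submit(request: dict) -> bool:
    """Validate and enqueue a surge request with no operator involvement."""
    if validate_request(request):
        surge_queue.put(request)
        return True
    return False

submit({"station_id": "ST-07", "delivery_cycle": "evening",
        "requested_by": "capacity-planning", "fill_rate_pct": 61.0})  # queued
submit({"station_id": "ST-08"})  # rejected: missing fields
print(surge_queue.qsize())  # 1
```

In production a durable queue would replace the in-process one, but the design point is the same: validation happens at submission, so the optimizer only ever consumes well-formed, SOP-compliant requests.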
During Phase 3 cloud migration, the communicator tool was fully integrated into the cloud infrastructure. Requests now flow directly via back-end API to the cloud processing queue—maintaining the SLA improvement while eliminating the separate desktop application entirely.
Reports arrived every 30 minutes showing blocks needing coverage. By the time operators reviewed them and raised prices, the data was already stale. We had no visibility into the current state—no live fill rate checks before surging.
Standard Operating Procedures defined threshold fill rates—blocks above these should NOT be surged. But without live data, we were blindly raising prices on blocks that were already filling, wasting massive amounts of money.
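The SOP rule reduces to a single gate evaluated against live data before any price change. A minimal sketch, assuming a hypothetical 85% threshold (the real SOP value is configured by the business):

```python
def should_surge(live_fill_rate_pct: float, threshold_pct: float = 85.0) -> bool:
    """SOP gate: surge only if the block's live fill rate is still
    below the threshold. Threshold value here is illustrative."""
    return live_fill_rate_pct < threshold_pct

# A block already at 92% is filling on its own — surging it wastes money.
print(should_surge(92.0))  # False
print(should_surge(58.0))  # True
```

In the live system this check runs against fill rates scraped from the scheduling platform moments before each adjustment, rather than against the 30-minute-old report.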
Worst of all: when someone raised prices in the morning cycle, those elevated rates stayed applied for ALL cycles throughout the day—even evening blocks that didn't need surging. We were paying premium rates all day based on morning data.
I identified this systemic issue and built a solution that checks live fill rate data from the scheduling platform BEFORE every price adjustment. The tool now:
RESULT: 50%+ reduction in surge requests → ~£1M annual savings
Quantifiable improvements across cost efficiency, labor savings, and operational resilience