A modular safety system designed to move field operations from static declarations to continuous risk awareness — combining self-assessment, real-time sensing, AI-assisted monitoring, and forward-looking scheduling integration.
Fatigue was a recognised safety risk across field operations — particularly for mobile and vehicle-based roles where impairment has immediate physical consequences. The problem was not awareness. It was architecture.
The existing approach was fragmented, reactive, and largely compliance-driven. Staff completed self-assessments at shift start and the process ended there — no continuous monitoring, no early warning, no connection to scheduling data that could flag future risk before it materialised.
Rather than a single tool, the solution was approached as a system of components — some delivered into production, others prototyped or investigated to de-risk future investment.
The design challenge was to build something that could be incrementally delivered while preserving a coherent long-term architecture — so that each component added genuine value on its own and compounded with the others.
Four components spanning the full maturity spectrum, from production-deployed improvements through to exploratory technology investigations. Each was designed to stand alone and to compound with the others as the solution matured.
A fatigue self-assessment tool already existed but was limited in its effectiveness. Rather than replacing it, I reverse-engineered the automated scoring and weighted assessment logic to understand precisely how it worked — and where it didn't.
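The recovered logic amounted to a weighted scoring model. As a minimal sketch only, the snippet below illustrates how such a weighted self-assessment score can work; the question names, weights, and risk threshold here are hypothetical, not the values from the actual tool.

```python
# Hypothetical weighted fatigue self-assessment scoring.
# Question weights and the risk threshold are illustrative only.

WEIGHTS = {
    "hours_slept_last_24h": 3.0,   # short sleep weighted heavily
    "hours_awake": 2.0,
    "prior_shift_length": 1.5,
    "self_reported_alertness": 2.5,
}

RISK_THRESHOLD = 12.0  # illustrative cut-off


def fatigue_score(responses: dict) -> float:
    """Weighted sum of normalised responses (each on a 0-2 risk scale)."""
    return sum(WEIGHTS[q] * responses[q] for q in WEIGHTS)


def assess(responses: dict) -> str:
    return "at-risk" if fatigue_score(responses) >= RISK_THRESHOLD else "fit-for-duty"


print(assess({
    "hours_slept_last_24h": 2.0,   # maximum risk factor: very little sleep
    "hours_awake": 1.5,
    "prior_shift_length": 1.0,
    "self_reported_alertness": 2.0,
}))
```

Understanding exactly where weights like these over- or under-counted real risk was what made the later components possible: the same rules could then be embedded elsewhere rather than locked in one form.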
I contributed to a pilot exploring in-cab retinal and eye-tracking technology designed to detect fatigue indicators in real time — including blink rate, eye closure duration, and gaze patterns — while a vehicle was being operated.
The technology was positioned as an assistive safety layer, not a disciplinary tool. The distinction mattered: field crews needed to trust it before it could be effective. Alerts were designed to encourage a rest stop, not generate an incident report.
I designed and built a ChatGPT-based fatigue assessment proof of concept that embedded fatigue rules directly into the daily work-hour recording process — turning a compliance task into a live safety intervention moment.
Rather than a separate assessment at shift start, fatigue rules were evaluated continuously as hours were entered. Approaching a threshold triggered immediate feedback. Breaching one triggered escalation logic based on the specific policy rule violated.
Retrieved job details using job numbers entered during time recording — ensuring hours were correctly allocated to cost codes and enabling task-specific fatigue rules (driving vs non-driving activities carry different thresholds).
Retrieved upcoming shift assignments from the scheduling system — enabling forward-looking detection of potential fatigue breaches before they occurred, shifting management from reactive to preventative.
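The evaluation loop above can be sketched as follows. This is a simplified illustration under stated assumptions: the task types, hour limits, warning margin, and rule messages are hypothetical placeholders, not the actual policy rules or scheduling API.

```python
# Hypothetical sketch of continuous fatigue-rule evaluation during
# time recording. All thresholds are illustrative, not policy values.
from dataclasses import dataclass

# Task-specific limits: driving carries a tighter threshold than
# non-driving work (assumed hours for illustration).
MAX_HOURS = {"driving": 10.0, "non_driving": 12.0}
WARN_MARGIN = 1.0  # warn when within an hour of a limit


@dataclass
class Entry:
    task_type: str   # "driving" or "non_driving"
    hours: float


def evaluate(entries: list, upcoming_shift_hours: float = 0.0) -> list:
    """Re-evaluate rules as each entry lands; include scheduled future
    hours so a breach can be flagged before it occurs."""
    alerts = []
    totals = {}
    for e in entries:
        totals[e.task_type] = totals.get(e.task_type, 0.0) + e.hours

    for task, worked in totals.items():
        limit = MAX_HOURS[task]
        if worked >= limit:
            alerts.append(f"BREACH: {task} hours {worked} at/over limit {limit}, escalate")
        elif worked >= limit - WARN_MARGIN:
            alerts.append(f"WARNING: {task} approaching limit ({worked}/{limit})")

    # Forward-looking check against the next scheduled shift
    projected = sum(totals.values()) + upcoming_shift_hours
    if projected > max(MAX_HOURS.values()):
        alerts.append(f"FORECAST: projected {projected}h exceeds daily limit, review roster")
    return alerts


alerts = evaluate([Entry("driving", 9.5)], upcoming_shift_hours=4.0)
```

Here a driver who has logged 9.5 hours gets an immediate approaching-limit warning, and the 4-hour shift already on the roster produces a forecast alert before any breach happens, which is the reactive-to-preventative shift described above.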
I investigated the potential use of wearable devices — rings and watches capable of capturing physiological indicators — as a trusted input layer for automated fatigue assessment. The focus was on data that correlated with actual impairment: sleep duration and quality, heart rate variability, and recovery and strain metrics.
The design question was not "can we collect this data?" but "how does trusted physiological data feed into automated assessments in a way that reduces user burden and improves decision quality?"
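One answer explored was blending the physiological signals into a single readiness input for the automated assessment. The sketch below is purely illustrative of that idea: the metric names, normalisation ranges, and blend weights are assumptions, not outputs of the investigation or any vendor's API.

```python
# Illustrative blend of wearable-derived metrics into a 0-1 readiness
# score. Ranges and weights are assumed for the sketch only.

def normalise(value: float, low: float, high: float) -> float:
    """Clamp a raw metric into a 0-1 'good' score."""
    return max(0.0, min(1.0, (value - low) / (high - low)))


def readiness(sleep_hours: float, sleep_quality: float,
              hrv_ms: float, strain: float) -> float:
    """Weighted blend of the indicators named above: sleep duration and
    quality, heart rate variability, and prior-day strain."""
    return (
        0.35 * normalise(sleep_hours, 4.0, 8.0)   # sleep duration
        + 0.25 * sleep_quality                    # already 0-1
        + 0.25 * normalise(hrv_ms, 20.0, 80.0)    # heart rate variability
        + 0.15 * (1.0 - strain)                   # high strain reduces readiness
    )


score = readiness(sleep_hours=5.0, sleep_quality=0.6, hrv_ms=45.0, strain=0.7)
```

A low score could pre-populate or auto-trigger the self-assessment rather than relying on the user to report impairment, which is the burden-reduction and decision-quality goal in the question above.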