Discover & Design: AI Automation System Processing

Process mining, KPI baselines, tooling benchmark, ROI model and a 90‑day plan

8/20/2025 · 3 min read


Executive summary

Discover & Design is the first, high‑leverage phase of any automation program. We turn scattered knowledge into a measurable, production‑ready plan: current‑state maps, KPI baselines, a tooling benchmark aligned to your stack and constraints, a defensible ROI model, and a 90‑day execution plan with clear exit criteria.

1) Objectives & scope

• Establish factual baselines for time, cost, quality and risk.

• Identify and prioritise automation candidates with business owners.

• Select tools (build/buy) with security, data and ops stakeholders.

• Produce an ROI model and a 90‑day plan with week‑by‑week deliverables.

2) Inputs required (week 0)

• Access: sample datasets, test accounts, logs/exports, API keys (sandbox).

• Stakeholders: process owners, operators, security/legal, data engineering.

• Systems: CRM/ERP/ITSM, content stores, data warehouse, identity & permissions.

• Policies: data retention, privacy, PII handling, procurement guardrails.

3) Method overview

Step A — Process mining & interviews → current‑state map, bottlenecks, exception paths.

Step B — KPI baselines → time, error, cost, throughput, exception %, satisfaction.

Step C — Tooling benchmark → shortlist, evaluation, security & cost posture.

Step D — ROI model → drivers, scenarios, payback and sensitivity analysis.

Step E — 90‑day plan → weekly activities, deliverables, owners and exit criteria.

4) Process mining — how we map reality

• Data sources: event logs, tickets, emails, CRM states, doc metadata, manual time studies.

• Artefacts: current‑state swimlanes, exception tree, handoff matrix, pain score heatmap.

• Prioritisation: impact × feasibility × risk; top 3–5 candidate workflows.
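
To keep prioritisation reproducible, we score every candidate on the same 1–5 scales. A minimal sketch in Python (candidate names and scores are illustrative, and we read "risk" as a divisor so riskier workflows rank lower):

```python
# Minimal sketch of the impact × feasibility × risk prioritisation.
# Candidate names and 1–5 scores are illustrative, not client data.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    impact: int       # 1–5: business value of automating the workflow
    feasibility: int  # 1–5: data/API readiness, process stability
    risk: int         # 1–5: compliance and customer exposure (higher = riskier)

    @property
    def score(self) -> float:
        # Risk enters inversely, so a high-risk workflow is deprioritised.
        return self.impact * self.feasibility / self.risk

candidates = [
    Candidate("Invoice triage", impact=5, feasibility=4, risk=2),
    Candidate("Ticket routing", impact=4, feasibility=5, risk=1),
    Candidate("Contract review", impact=5, feasibility=2, risk=4),
]

# The top 3–5 by score become the candidate workflows taken into baselining.
for c in sorted(candidates, key=lambda c: c.score, reverse=True)[:5]:
    print(f"{c.name}: {c.score:.1f}")
```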

5) KPI baselines — measurement that survives production

We define each metric once, so it can be measured with the same definition in the pilot and at scale.

| KPI | Definition | Unit | Baseline method | Target (120 days) |
| --- | --- | --- | --- | --- |
| Cycle time | Start → finish per item | minutes | Logs/time study | −30–50% |
| AHT | Average handling time (human touch) | minutes | Sampled sessions | −25–45% |
| Error rate | Defects per 100 items | % | QA sampling | −40–70% |
| Exceptions | Items needing human review | % | Ticket tags | −20–40% |
| Unit cost | All‑in cost per item | $/item | Finance model | −15–35% |
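
Baseline methods are scripted rather than ad hoc, so the same code re-runs unchanged in the pilot and in production. A minimal sketch of the cycle-time baseline from an event-log export (the file name and the `start_ts`/`end_ts` columns are assumptions about your logs):

```python
# Minimal sketch: cycle-time baseline from an event-log export.
# The file name and columns (start_ts, end_ts) are assumptions about your logs.
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["start_ts", "end_ts"])
cycle_min = (events["end_ts"] - events["start_ts"]).dt.total_seconds() / 60

baseline = {
    "n_items": len(events),
    "median_min": round(cycle_min.median(), 1),
    "p90_min": round(cycle_min.quantile(0.9), 1),  # the tail drives SLAs, not the mean
}
print(baseline)
```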

6) Tooling benchmark — pick the right stack

We compare candidates against your requirements (quality, cost, privacy, operations).

| Category | Candidates | Evaluation criteria | Notes |
| --- | --- | --- | --- |
| Models & inference | Open‑source LLMs; vendor APIs; domain‑specific models | Quality on tasks, latency, cost/1k tokens, privacy posture, on‑prem support | Fallback/routing options; red‑team results |
| Orchestration | Agent frameworks; workflow engines; function routers | Determinism, testability, retries, observability, CI/CD | Feature flags & rollbacks |
| Retrieval & data | Vector DB; warehouse; connectors | Freshness, security, scaling, TCO | PII handling; residency |
| Observability | Tracing & evaluation; cost telemetry | Coverage, ease of integration, SLOs | Dashboards & alerts |
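
Behind the matrix sits a plain weighted score, which keeps trade-offs explicit and auditable. A minimal sketch (the weights, candidate names and 1–5 scores are placeholders, not recommendations):

```python
# Minimal sketch: weighted scoring behind the tooling evaluation matrix.
# Weights and 1–5 scores are placeholders, not recommendations.
weights = {"quality": 0.35, "latency": 0.15, "cost": 0.20, "privacy": 0.20, "ops": 0.10}

candidates = {
    "Vendor API A": {"quality": 5, "latency": 4, "cost": 2, "privacy": 3, "ops": 4},
    "Open-source B": {"quality": 4, "latency": 3, "cost": 5, "privacy": 5, "ops": 3},
}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1

for name, scores in candidates.items():
    total = sum(w * scores[criterion] for criterion, w in weights.items())
    print(f"{name}: {total:.2f} / 5")
```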

7) ROI model — drivers & scenarios

We build a simple, auditable model with baseline, conservative and aggressive scenarios.

| Driver | Baseline value | Assumption / impact | Source |
| --- | --- | --- | --- |
| Volume | 10,000 items / month | Stable ±10% | Logs / forecasts |
| Cycle time | 12 min / item | −35% after 90 days | Time study |
| Error rate | 6.5% | −50% with guardrails | QA sampling |
| Human time share | 7 min / item | −40% HITL → co‑pilot | Observations |
| Unit cost | $1.10 / item | −25% by Q2 | Finance |
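
Each scenario is the same arithmetic applied to different driver values, which is what keeps the model auditable. A minimal sketch built on the human-time driver from the table above (the loaded hourly rate and the one-off build cost are placeholder assumptions):

```python
# Minimal sketch: monthly savings and payback from the driver table above.
# HOURLY_RATE and BUILD_COST are placeholder assumptions, not client figures.
VOLUME = 10_000        # items / month (driver table)
HUMAN_MIN = 7.0        # human minutes / item (driver table)
HOURLY_RATE = 50.0     # $ / hour, fully loaded (assumption)
BUILD_COST = 60_000    # one-off build cost (assumption)

# The baseline scenario matches the −40% HITL → co-pilot assumption above.
scenarios = {"conservative": 0.25, "baseline": 0.40, "aggressive": 0.55}

for name, reduction in scenarios.items():
    saved_hours = VOLUME * HUMAN_MIN * reduction / 60
    monthly_savings = saved_hours * HOURLY_RATE
    print(f"{name}: ${monthly_savings:,.0f}/month, payback {BUILD_COST / monthly_savings:.1f} months")
```

Sensitivity analysis is the same loop run over a grid of reductions and rates.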

8) 90‑day plan — week‑by‑week

| Week | Focus | Key activities | Deliverables | Exit criteria |
| --- | --- | --- | --- | --- |
| 0 | Access & kick‑off | Accounts, data samples, security & privacy review | Access checklist; risk register | All accesses granted |
| 1 | Process mining | Interviews; log analysis; map exceptions & handoffs | Current‑state map; bottlenecks | Top candidates agreed |
| 2 | Design | Target state; test strategy; success criteria | Solution sketch; test plan | Design sign‑off |
| 3–4 | Baseline | Time studies; QA sampling; cost model | Baseline KPI sheet | Stakeholders agree numbers |
| 5 | Benchmark | Prototype critical paths; compare tools | Evaluation matrix; shortlist | Preferred stack approved |
| 6 | Plan | Roadmap; resource plan; risk mitigations | 90‑day plan; RACI; SOW | Go for build sprint |

9) Deliverables checklist

• Current‑state map (swimlanes) and exception tree.

• KPI baseline sheet (methods, samples, caveats).

• Tooling evaluation matrix + security notes.

• ROI model (xlsx) with scenarios & sensitivity.

• 90‑day plan with RACI, risks and exit criteria.

• Draft runbooks: setup, rollout, rollback.

• Compliance pack (DPA/PII handling/audit log schema).
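
For the compliance pack, we pin down the audit-log schema during this phase rather than at build time. A minimal sketch of one record shape (the field names are assumptions to align with your DPA and retention policy):

```python
# Minimal sketch of an audit-log record for the compliance pack.
# Field names are assumptions; align them with your DPA and retention policy.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AuditRecord:
    event_id: str
    timestamp: str      # ISO 8601, UTC
    actor: str          # service account or user id (no PII payloads)
    action: str         # e.g. "classify", "route", "human_override"
    item_ref: str       # opaque reference, never raw customer data
    model_version: str  # makes automated decisions reproducible
    outcome: str

record = AuditRecord(
    event_id="evt-0001",
    timestamp=datetime.now(timezone.utc).isoformat(),
    actor="svc-automation",
    action="route",
    item_ref="item-4821",
    model_version="router-v0.3",
    outcome="queued_for_review",
)
print(json.dumps(asdict(record), indent=2))
```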

10) RACI — who does what

| Workstream | Sponsor | PM | Lead Eng | Data Eng | Security/Legal | Ops |
| --- | --- | --- | --- | --- | --- | --- |
| Process mining | A | R | C | R | C | I |
| KPI baselines | R | C | R | I | I | |
| Tooling benchmark | C | R | R | C | C | I |
| ROI model | C | R | C | C | I | I |
| 90‑day plan | A | R | R | C | C | C |

11) Risks & mitigations

| Risk | Likelihood | Impact | Mitigation | Owner |
| --- | --- | --- | --- | --- |
| Access delays | M | H | Escalation path; fallback datasets | PM |
| Data quality gaps | M | M | Profiling; sampling; cleanse plan | Data Eng |
| Vendor lock‑in | L | M | Open connectors; export paths | Lead Eng |
| Cost sprawl | M | M | Token budgets; caching; routing | Lead Eng |
| Change fatigue | M | H | Quick wins; weekly demos; comms | PM |
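
The cost-sprawl mitigations become mechanical once budgets are explicit. A minimal sketch of a per-workflow token-budget guard (the monthly limit and the call site are placeholders):

```python
# Minimal sketch: per-workflow token budget (cost-sprawl mitigation).
# The monthly limit and token estimate below are placeholder assumptions.
class TokenBudget:
    def __init__(self, monthly_limit: int):
        self.monthly_limit = monthly_limit
        self.used = 0

    def allow(self, estimated_tokens: int) -> bool:
        """Reserve tokens if the monthly budget permits, else reject."""
        if self.used + estimated_tokens > self.monthly_limit:
            return False
        self.used += estimated_tokens
        return True

budget = TokenBudget(monthly_limit=5_000_000)
if budget.allow(estimated_tokens=1_200):
    pass  # call the model here; on rejection, fall back to cache or a cheaper route
```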

12) Governance & cadence

• Weekly steering (30–45') with Sponsor, PM, Security and Ops.

• Demo at weeks 2, 4 and 6; decisions logged; risks reviewed.

• All artefacts versioned; metrics and costs visible in a shared dashboard.

13) Success criteria & sign‑off

• Baselines agreed; tooling stack approved; ROI model validated.

• 90‑day plan signed by stakeholders; risks and mitigations captured.

• Go decision for Build sprint with clear exit criteria.

14) FAQ

Q: How long does Discover & Design take?
A: Typically 4–6 weeks depending on access, with week‑0 prep for security and data.

Q: Can we include on‑prem constraints?
A: Yes. We benchmark on‑prem/open‑source options and model the TCO accordingly.

Q: Will you share templates?
A: Yes — KPI sheets, evaluation matrices, risk registers and runbooks ship with the engagement.

Book a 30’ ROI diagnosis

Email: contact@smartonsteroids.com — we’ll scope the Discover & Design phase, confirm inputs, and start week‑0 prep.


© 2025 Smart On Steroids — AI Automation Studio → Platform