Why Your Fraud Engine Needs 4 Decision Outcomes, Not 3
Most fraud engines use three outcomes. Separating step-up authentication from manual review reduces customer friction, simplifies compliance reporting, and keeps analyst queues focused on cases that need human judgment.
RTD Team
Run-True Decision
The Problem with Three Decision Outcomes
Most fraud decision engines force every transaction into one of three buckets: pass, review, or reject. This creates a hidden problem — the "review" bucket is doing double duty, handling both automated step-up challenges and manual analyst investigations as if they were the same thing.
They are not. Sending a one-time password to a customer's phone is a fundamentally different action from routing a case to your fraud analyst's queue. The first takes seconds and keeps the customer in their purchase flow. The second can take hours or days. When these two actions share a single outcome label, fraud operations teams lose visibility, customers experience unnecessary friction, and compliance reporting becomes ambiguous.
Research across twelve major anti-fraud platforms (six enterprise and six fintech) reveals that only five clearly separate automated challenges from manual review in their decision models. The remaining seven either blur the distinction or merge both into a single escalation tier, leaving operational teams to sort it out downstream.
What the Industry Actually Does
The fraud detection industry overwhelmingly uses a three-tier decision model: approve, decline, and some form of escalation. But the details of that third tier vary dramatically across vendors.
Among the twelve platforms studied, the landscape breaks down into three camps:
- Explicit separation — Five fintech-era platforms clearly distinguish between automated customer challenges (like OTP or 3D Secure) and manual analyst review as separate decision outcomes. These platforms treat each path as a first-class workflow with distinct routing, SLAs, and metrics.
- Partial or implicit separation — Three platforms offer some distinction, typically through alert types or flag-versus-suspend mechanics, but the separation is buried in configuration rather than exposed as distinct API outcomes.
- No separation — Four legacy enterprise platforms merge both actions into a single escalation pathway. Case creation and customer authentication challenges are handled through rules engines rather than the core decision model itself.
The pattern is clear: newer platforms designed in the API-first era tend to make this distinction explicit, while older platforms built around alert-based workflows leave it implicit. This is not a minor implementation detail — it shapes how fraud teams measure performance, how customers experience friction, and how regulators evaluate your compliance posture.
Why the Distinction Matters
Separating step-up authentication from manual review is not an academic exercise. It directly impacts three areas that fraud operations leaders care about most.
Customer experience and conversion. When a fraud engine flags a transaction for "review," what happens next depends entirely on what that review means. If it triggers an automated OTP challenge, the customer stays in the purchase flow — typically resolving in under thirty seconds. If it queues the transaction for manual analyst review, the customer may wait hours. Lumping both into a single bucket means your product team cannot measure the true customer impact of each path, and your conversion analytics become unreliable.
Regulatory compliance. The EU's Payment Services Directive (PSD2) requires strong customer authentication for transactions above certain thresholds — this maps directly to an automated step-up outcome. Meanwhile, GDPR Article 22 grants individuals the right to human intervention in automated decision-making — this maps to a manual review outcome. When your decision engine conflates these into a single "review" action, demonstrating compliance to auditors requires manual log analysis rather than straightforward API-level reporting.
Analyst workload and efficiency. Fraud analysts reviewing fifty cases a day need to know which transactions genuinely require human judgment and which were simply waiting for a customer to complete a step-up authentication. Without clear separation, analyst queues fill with cases that should never have left the automated pipeline, increasing workload and slowing response times for cases that truly need human review.
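The queue-pollution problem above is easy to see in code. The sketch below is illustrative: the field names (`awaiting`, `outcome`) and transaction IDs are assumptions, not any vendor's actual schema. With a merged "review" bucket, tooling must reverse-engineer which cases need an analyst from auxiliary fields; with distinct outcomes, the analyst queue is a direct filter on the decision itself.

```python
# Hypothetical items from a merged "review" bucket. Which cases need an
# analyst is only recoverable from an auxiliary 'awaiting' convention.
merged_queue = [
    {"txn_id": "t1", "action": "review", "awaiting": "otp"},
    {"txn_id": "t2", "action": "review", "awaiting": "analyst"},
    {"txn_id": "t3", "action": "review", "awaiting": "otp"},
]

# Hypothetical items from an engine with distinct outcomes: the analyst
# queue contains only manual-review decisions by construction.
separated_queue = [
    {"txn_id": "t2", "outcome": "manual_review"},
    {"txn_id": "t4", "outcome": "manual_review"},
]


def analyst_cases_from_merged(queue):
    # Fragile: depends on downstream teams knowing the side-channel field.
    return [item for item in queue if item["awaiting"] == "analyst"]


def analyst_cases_from_separated(queue):
    # Direct: the decision outcome *is* the routing signal.
    return [item for item in queue if item["outcome"] == "manual_review"]
```

The second filter never admits a case that was simply waiting on a customer OTP, which is exactly the property that keeps analyst queues clean.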
The Four-Outcome Model
A well-designed fraud decision engine should produce four distinct outcomes, each mapping to a specific operational workflow:
- Allow — Transaction proceeds automatically. Risk score is below the threshold. No additional action required.
- Step-Up — Transaction is paused for automated verification. The customer is prompted for additional authentication (OTP, biometric, 3D Secure). The process is real-time, customer-facing, and typically resolves in seconds.
- Manual Review — Transaction is queued for human analyst investigation. The case enters a structured workflow with priority levels, focus areas, and recommended actions. Resolution may take minutes to days.
- Block — Transaction is automatically declined. Risk score exceeds the rejection threshold. The decision is logged with reason codes for transparency and potential appeal.
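The four outcomes above can be expressed as a small decision function. This is a minimal sketch, not Run-True Decision's actual API: the enum names, thresholds, and the `requires_human_judgment` signal are all illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum


class Outcome(Enum):
    ALLOW = "allow"
    STEP_UP = "step_up"
    MANUAL_REVIEW = "manual_review"
    BLOCK = "block"


@dataclass
class Decision:
    outcome: Outcome
    reason_codes: list[str]


# Illustrative thresholds; real engines tune these per portfolio.
ALLOW_BELOW = 30
BLOCK_AT_OR_ABOVE = 90


def decide(risk_score: int, requires_human_judgment: bool) -> Decision:
    """Map a risk score to one of four distinct outcomes.

    The key design point: the ambiguous middle band splits into two
    different workflows, not one 'review' bucket. Automated step-up
    handles cases a customer can resolve in seconds; manual review is
    reserved for cases that genuinely need an analyst.
    """
    if risk_score >= BLOCK_AT_OR_ABOVE:
        return Decision(Outcome.BLOCK, ["score_above_block_threshold"])
    if risk_score < ALLOW_BELOW:
        return Decision(Outcome.ALLOW, [])
    if requires_human_judgment:
        return Decision(Outcome.MANUAL_REVIEW, ["analyst_signal"])
    return Decision(Outcome.STEP_UP, ["score_in_challenge_band"])
```

Because `STEP_UP` and `MANUAL_REVIEW` are separate enum members rather than one `REVIEW` value, every downstream system, from routing to SLA dashboards to audit logs, can distinguish them without extra lookup tables.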
The critical insight is that step-up and manual review are not different degrees of the same action — they are fundamentally different workflows with different actors, timelines, and customer impacts. Step-up keeps the customer in the loop. Manual review takes the customer out of the loop. Treating them as separate first-class outcomes enables:
- Distinct SLA tracking — Step-up should resolve in seconds; manual review may have a 24-hour SLA. You cannot measure both against the same benchmark.
- Separate conversion metrics — Step-up completion rates (typically 80-90%) differ dramatically from manual review approval rates. Blending them obscures your true fraud-to-friction ratio.
- Cleaner compliance reporting — Auditors can see exactly which decisions were fully automated, which involved automated customer authentication, and which required human judgment.
- Better analyst tooling — Manual review cases can include structured analyst guidance: what to investigate, which signals triggered the escalation, and recommended actions. Step-up cases never need this.
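The separate-metrics point is concrete once outcomes are distinct in the decision log. A hedged sketch, with assumed field names and made-up sample events: each path gets its own rate, computed independently, instead of one blended "review resolution rate".

```python
# Hypothetical decision-log entries; field names are assumptions.
events = [
    {"outcome": "step_up", "completed": True, "seconds_to_resolve": 18},
    {"outcome": "step_up", "completed": True, "seconds_to_resolve": 25},
    {"outcome": "step_up", "completed": False, "seconds_to_resolve": 120},
    {"outcome": "manual_review", "approved": True, "seconds_to_resolve": 5400},
    {"outcome": "manual_review", "approved": False, "seconds_to_resolve": 86000},
]


def step_up_completion_rate(log):
    """Share of step-up challenges the customer completed."""
    step_ups = [e for e in log if e["outcome"] == "step_up"]
    return sum(e["completed"] for e in step_ups) / len(step_ups)


def manual_review_approval_rate(log):
    """Share of analyst-reviewed cases that were approved."""
    reviews = [e for e in log if e["outcome"] == "manual_review"]
    return sum(e["approved"] for e in reviews) / len(reviews)
```

Note that the two metrics measure different actors (customers versus analysts) on different timescales (seconds versus hours), which is precisely why averaging them together produces a number nobody can act on.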
What This Means for Southeast Asian Banks
The four-outcome model is especially relevant for financial institutions operating across Southeast Asia, where the regulatory and operational landscape creates unique pressures on fraud decision systems.
Southeast Asian markets are among the fastest-growing real-time payment ecosystems in the world. Systems like Singapore's PayNow, Thailand's PromptPay, and Indonesia's QRIS process millions of transactions daily with settlement in seconds. In this environment, the difference between a step-up authentication (completing in real-time) and a manual review hold (potentially blocking funds for hours) has outsized impact on both customer satisfaction and merchant trust.
Regulatory frameworks across the region are also evolving rapidly. Singapore's MAS Technology Risk Management Guidelines emphasise both automated controls and human oversight in fraud management. Indonesia's OJK and Thailand's BOT have similar expectations. A four-outcome model maps cleanly to these requirements: automated outcomes (allow, step-up, block) demonstrate systematic risk management, while the manual review pathway shows human oversight capability.
For mid-market banks in the region — those processing tens of thousands to hundreds of thousands of transactions daily — the operational leverage of a four-outcome model is significant. Rather than routing all ambiguous transactions to a small fraud team, the engine can resolve a substantial portion through automated step-up challenges, reserving human analyst time for cases that genuinely require investigation. This is particularly valuable in markets where experienced fraud analysts are scarce and expensive to retain.
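The operational leverage is simple arithmetic. The numbers below are illustrative assumptions (a mid-market volume, an assumed challenge-band rate, and a step-up resolution rate within the 80-90% range cited earlier), not measured figures:

```python
daily_txns = 100_000        # assumed mid-market daily volume
ambiguous_rate = 0.05       # assumed share of transactions in the challenge band
step_up_resolution = 0.85   # assumed; within the 80-90% range cited above

# With a three-outcome model, every ambiguous transaction hits the queue.
ambiguous = int(daily_txns * ambiguous_rate)

# With a four-outcome model, only step-up failures escalate to analysts.
to_analysts = int(ambiguous * (1 - step_up_resolution))
```

Under these assumptions, the analyst queue shrinks from 5,000 cases a day to 750, a difference that matters most where experienced analysts are hardest to hire.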
Run-True Decision is building a fraud decision engine purpose-built for Southeast Asian banks — with four distinct decision outcomes from day one. Talk to us to learn more.
Explore the Platform
See how Run-True Decision handles real-time fraud scoring, on-premise deployment, and regional compliance for Southeast Asian banks.
View Platform Overview